\section{Introduction}
\subsection{Prime Numbers}
\label{Prime Numbers}
\begin{defn} A \textit{prime} number $p$ is any integer $p > 1$ whose divisors are only $1$ and itself. A \textit{composite} number is any integer that is not a prime number or the \textit{unit} number, $1$. \end{defn}
One of the first mathematicians to study the primes was Eratosthenes, to whom is attributed an algorithm for finding all primes less than or equal to a given value $x$. The Sieve of Eratosthenes first marks all multiples of $2$ as composite, then proceeds to multiples of $3$, $5$, $7$, and so on up to $x$.
For example, after all even numbers up to (and including) $30$ have been marked as composite, we have:
$\textbf{2}, 3, \sout{4}, 5, \sout{6}, 7, \sout{8}, 9, \sout{10}, 11, \sout{12}, 13, \sout{14}, 15, \sout{16}, 17, \sout{18}, 19, \sout{20}, 21, \sout{22}, 23, \sout{24}, 25, \sout{26}, 27, \sout{28}, 29, \sout{30}$
Next, we mark composite all multiples of $3$ not already marked:
$2, \textbf{3}, \sout{4}, 5, \sout{6}, 7, \sout{8}, \sout{9}, \sout{10}, 11, \sout{12}, 13, \sout{14}, \sout{15}, \sout{16}, 17, \sout{18}, 19, \sout{20}, \sout{21}, \sout{22}, 23, \sout{24}, 25, \sout{26}, \sout{27}, \sout{28}, 29, \sout{30}$
Next, we continue to multiples of $5$ and proceed as before, continuing until multiples of $29$ (in fact, sieving by the primes up to $\sqrt{30}$, i.e. by $2$, $3$, and $5$, already marks every composite in the list):
$2, 3, \sout{4}, 5, \sout{6}, 7, \sout{8}, \sout{9}, \sout{10}, 11, \sout{12}, 13, \sout{14}, \sout{15}, \sout{16}, 17, \sout{18}, 19, \sout{20}, \sout{21}, \sout{22}, 23, \sout{24}, \sout{25}, \sout{26}, \sout{27}, \sout{28}, \textbf{29}, \sout{30}$
The remaining values form the set $\{2, 3, 5, 7, 11, 13, 17, 19, 23, 29\}$, which are the prime numbers less than or equal to $30$; i.e. the set of numbers less than or equal to $30$ whose only divisors are $1$ and themselves.
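As an informal aside, the procedure above translates directly into a short program; here is a minimal Python sketch of the sieve (not part of the formal development):

```python
def sieve(x):
    """Return all primes <= x via the Sieve of Eratosthenes."""
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= x:          # multiples of primes > sqrt(x) are already marked
        if is_prime[p]:
            for m in range(p * p, x + 1, p):
                is_prime[m] = False   # mark multiples of p as composite
        p += 1
    return [n for n in range(2, x + 1) if is_prime[n]]

print(sieve(30))   # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```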
\pagebreak
\begin{prpn}{(Fundamental Theorem of Arithmetic)} Every integer greater than $1$ has a unique prime factorization. \end{prpn}
In other words, every integer greater than $1$ can be expressed in a unique way as a product of prime powers (formally, an infinite product over all primes in which only finitely many exponents are nonzero):
\begin{equation}
\label{eq: fund arith}
n = 2^{\alpha_1} 3^{\alpha_2} 5^{\alpha_3} 7^{\alpha_4} \cdots = \prod p_i^{\alpha_i}
\end{equation}
where the product runs over all primes $p_i$ (in increasing order), and only finitely many of the exponents $\alpha_i$ are nonzero. For example, we can write $10 = 2^1 \cdot 3^0 \cdot 5^1 \cdot 7^0 \cdot 11^0 \cdots$.
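As an informal aside, the exponents $\alpha_i$ can be found by trial division; a minimal Python sketch (the helper name `prime_factorization` is our own):

```python
from collections import Counter

def prime_factorization(n):
    """Return the nonzero exponents in n = prod p_i^{alpha_i} as {p: alpha}."""
    factors = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:      # divide out each prime factor completely
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:                  # whatever remains is itself prime
        factors[n] += 1
    return dict(factors)

print(prime_factorization(10))    # → {2: 1, 5: 1}
print(prime_factorization(360))   # → {2: 3, 3: 2, 5: 1}
```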
\begin{prpn}{(Euclid's Theorem)} There are infinitely many prime numbers. \end{prpn}
There are many well-known proofs of Euclid's theorem. Euler's proof is as follows:\\
Let $p$ denote prime numbers and $P$ denote the set of all prime numbers. Then,
\begin{equation*}
\label{eq: eulcidsthm}
\prod_{p \in P} \sum_{\alpha \geq 0} \frac{1}{p^\alpha} = \sum_{\alpha \geq 0} \frac{1}{2^\alpha} \cdot \sum_{\alpha \geq 0} \frac{1}{3^\alpha} \cdot \sum_{\alpha \geq 0} \frac{1}{5^\alpha} \cdot \sum_{\alpha \geq 0} \frac{1}{7^\alpha} \cdots = \sum_{\alpha_1 , \alpha_2, \alpha_3, \ldots \geq 0} \frac{1}{2^{\alpha_1} 3^{\alpha_2} 5^{\alpha_3} \cdots}
\end{equation*}
However, by \eqref{eq: fund arith}, we know that every integer can be written uniquely as a product of primes. Thus, we can rewrite our equation as:
\begin{equation}
\label{eq: eulersproof}
\prod_{p \in P} \sum_{\alpha \geq 0} \frac{1}{p^\alpha} = \sum_{\alpha_1 , \alpha_2, \alpha_3, \ldots \geq 0} \frac{1}{2^{\alpha_1} 3^{\alpha_2} 5^{\alpha_3} \cdots} = \sum_{n} \frac{1}{n}
\end{equation}
We then recognize the right hand side of \eqref{eq: eulersproof} as the harmonic series. Because of the divergence of the harmonic series, we know our product must be infinite as well. Since each term of our product is a finite number, there must be an infinite number of terms for the product to be infinite.
Euler also proved a result stronger than the divergence above: the sum of the reciprocals of the primes diverges as well \cite{prime reciprocal divergence}. We will use this fact in a later proof.
\begin{equation}
\label{eq:sum of reciprocals}
\sum_{p \in P} \frac{1}{p} = \infty
\end{equation}
\subsection{Arithmetic Progressions}
\label{Arithmetic Progressions}
The Sieve of Eratosthenes is effective because of the simplicity of identifying multiples of a number. For example, it is easy to identify all numbers of the form $3n$ (which is the set $\{3, 6, 9, 12, 15, 18, \ldots \}$ for $n\geq 1$) as multiples of $3$, and subsequently mark them as composite (with the exception of the first element). However, what happens if we change the starting value of the set, while keeping the distance between elements the same?
\begin{defn} We call a sequence of numbers with constant difference between terms an \textit{arithmetic progression}.\end{defn}
For example, consider all numbers of the form $3n+2$ and $3n+1$, which represent the sets $\{2, 5, 8, 11, 14, 17 \ldots \}$ and $\{1, 4, 7, 10, 13, 16, \ldots \}$ respectively. Both sets of numbers are arithmetic progressions with a difference of $3$.
The reader might then inquire:
\begin{itemize}
\item Between $3n+2$ and $3n+1$, which arithmetic progression contains more primes up to a value $x$? In other words, if we consider the count of primes in each progression as a race, which team is in the lead at a given $x$?
\item Can we extend Euclid's Theorem to primes in arithmetic progressions? In other words, do arithmetic progressions contain infinitely many primes?
\item What is the distribution of primes in these progressions?
\end{itemize}
To answer these questions, we must first introduce a few tools to give our analysis some sophistication.
\subsection{Euclidean Algorithm, Euler's Totient Function, and Modulo}
\label{Euclidean Algorithm}
\begin{defn} An integer $a \not = 0$ \textit{divides} another integer $b$ if there exists another integer $c$, such that $b=ac$. We denote that $a$ divides $b$ with $a \vert b$. \end{defn}
\begin{defn}
Pick two integers $a$ and $b$. An integer $c$ such that $c \vert a$ and $c \vert b$ is said to be a \textit{common divisor} of $a$ and $b$. If there exists another integer $d \geq c$ that also divides $a$ and $b$, we say that $d$ is the \textit{greatest common divisor} of $a$ and $b$. We denote this by $\gcd(a,b)=d$.
\end{defn}
\begin{prpn}
Let $a$ and $b$ be integers. The Euclidean Algorithm allows us to compute the greatest common divisor of $a$ and $b$; i.e. it allows us to find the largest number that divides both $a$ and $b$, leaving no remainder. The algorithm is as follows:
\end{prpn}
\begin{align*}
a & =bq_0+r_0 && \text{for} && 0<r_0<b \\
b & =r_0 q_1+r_1 && \text{for} && 0<r_1<r_0 \\
r_0 & =r_1 q_2+r_2 && \text{for} && 0<r_2<r_1 \\
& && \ldots &&& \\
r_{k-1} & = r_k q_{k+1}+r_{k+1} && \text{for} && 0<r_{k+1}<r_k \\
r_k & =r_{k+1} q_{k+2}+ 0
\end{align*}
Then $\gcd(a, b) = r_{k+1}$. For example, to find $\gcd(6188,4709)$, we apply the Euclidean Algorithm as follows:
\begin{align*}
6188 & = 4709 \cdot 1 + 1479 \\
4709 & = 1479\cdot 3 + 272 \\
1479 & = 272 \cdot 5 + 119 \\
272 & = 119 \cdot 2 + 34 \\
119 & = 34 \cdot 3 + 17 \\
34 & = 17 \cdot 2 \\
17 & = \gcd(6188,4709)
\end{align*}
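As an informal aside, the worked example above can be reproduced mechanically; a minimal Python sketch of the Euclidean Algorithm:

```python
def gcd(a, b):
    """Euclidean Algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b    # the last nonzero remainder is the gcd
    return a

print(gcd(6188, 4709))   # → 17, matching the worked example
```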
\begin{defn}
$a$ and $b$ are said to be \textit{relatively prime}, or \textit{coprime} if $\gcd(a,b) =1$.
\end{defn}
Two distinct prime numbers $p$ and $q$ are always coprime to each other. More generally, an integer $a$ is coprime to a prime number $p$ if and only if $a$ is not a multiple of $p$.
\begin{defn}
\textit{Euler's totient function}, denoted $\phi(n)$, counts the number of \textit{totatives} of $n$, i.e. the number of (positive) integers up to $n$ that are coprime to $n$.
\end{defn}
For example, $\phi(10) = \#\{1,3,7,9\} = 4$. In this example, the numbers $1$, $3$, $7$, and $9$ are the totatives of $10$. For a prime number $p$, $\phi(p) = \#\{1,2,\ldots,p-1\} = p-1$, since every positive integer less than $p$ is coprime to $p$.
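As an informal aside, $\phi$ can be computed by direct counting (fine for small $n$; faster formulas exist via the prime factorization). A minimal Python sketch:

```python
from math import gcd

def phi(n):
    """Euler's totient: count totatives of n (1 <= k <= n with gcd(k, n) = 1)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(10))   # → 4, the totatives being 1, 3, 7, 9
print(phi(7))    # → 6, i.e. p - 1 for a prime p
```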
\begin{defn}
We say that $a$ is \textit{congruent} to $r$ \textit{modulo} $b$ if $b \vert a-r$. We write this relation as $a \equiv r \pmod{b}$.
\end{defn}
In other words, we say that $a \equiv r$ (mod $b$) if $r$ is the remainder when $a$ is divided by $b$. For example, when $9$ is divided by $7$, the remainder is $2$. In other words, $9 \equiv 2$ (mod $7$). This concept allows us to conveniently refer to arithmetic progressions by their congruences modulo $a$. For instance, we can refer to the progression $4n+3$ as the set of all integers congruent to $3$ (mod $4$). Furthermore, we can refer to all primes in the progression $4n+3$ as the set of primes congruent to $3$ (mod $4$).
\begin{corly}
Let $\mathbb{Z}$ denote the set of all integers. The modulo operation allows us to define a quotient ring, $\mathbb{Z}/n\mathbb{Z}$, which is the ring of integers modulo $n$. \end{corly}
For example, the set of all integers modulo $6$ repeats as $\{\ldots, 1,2,3,4,5,0,1,2,3,4,5,\ldots\}$. The unique elements of this set are $\{0,1,2,3,4,5\}$, which form the ring $\mathbb{Z}/6\mathbb{Z}$. We say that an element $u$ in $\mathbb{Z}/n\mathbb{Z}$ is a \textit{unit} in the ring if there exists an element $v$ such that $uv = vu = 1$. We denote the group of units as $(\mathbb{Z}/n\mathbb{Z})^{\times}$.
The group $(\mathbb{Z}/n\mathbb{Z})^{\times}$ has $\phi(n)$ elements, which are the totatives of $n$. For example, for the ring $\mathbb{Z}/6\mathbb{Z}$, the group of units, $(\mathbb{Z}/6\mathbb{Z})^{\times}$ is given by the totatives of 6: \{1,5\}. We notice that $1$ and $5$ are both units in $\mathbb{Z}/6\mathbb{Z}$ since $1\equiv 1 \pmod 6$ and $5\cdot 5 \equiv 1 \pmod 6$.
Thus for a prime number $p$, the group $(\mathbb{Z}/p\mathbb{Z})^{\times}$ has $p-1 = \phi(p)$ elements.
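As an informal aside, the group of units and the inverses within it can be computed by brute force for small $n$; a minimal Python sketch (the helper names `units` and `inverse` are our own):

```python
from math import gcd

def units(n):
    """The group (Z/nZ)^x: residues 1..n-1 that are coprime to n."""
    return [u for u in range(1, n) if gcd(u, n) == 1]

def inverse(u, n):
    """Brute-force multiplicative inverse of u mod n (assumes gcd(u, n) = 1)."""
    return next(v for v in range(1, n) if (u * v) % n == 1)

print(units(6))        # → [1, 5], the totatives of 6
print(inverse(5, 6))   # → 5, since 5 * 5 = 25 ≡ 1 (mod 6)
```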
\subsection{The Prime Number Theorem and Dirichlet's Theorem on Arithmetic Progressions}
\label{Prime Number Theorem}
Let $\pi(x)$ denote the number of primes up to $x$. \\
\begin{prpn}
Gauss's Prime Number Theorem (PNT), which Hadamard and de la Vall\'{e}e Poussin proved independently in 1896, states that $\pi(x)$ is asymptotic to $x/\log(x)$.\footnote{$\log(x)$ here denotes the natural logarithm of $x$; we keep this notation to match our references.}
\end{prpn}
Put another way:
\begin{equation}
\label{eq:PNT}
\lim_{x \to \infty} \frac{\pi(x)}{x/\log(x)} = 1
\end{equation}
Thus for an arbitrarily large value of $x$, one can expect $\pi(x)$ to be close to $x/\log(x)$, with some error term. One might next wonder about approximating the count of primes within an arithmetic progression. One way of intuitively approaching this problem is by viewing the set of all positive integers as a union of arithmetic progressions. For example, if we consider the arithmetic progressions with a difference of $3$ between elements in each set, we have the three progressions:
\begin{align*}
& \{3n+1 &&\text{for} &&n \in \mathbb{N}_0\} &=&&\{1,4,7,10,13,16,\ldots\} \\
& \{3n+2 &&\text{for} &&n \in \mathbb{N}_0\} &=&&\{2,5,8,11,14,17,\ldots\} \\
& \{3n &&\text{for} &&n \in \mathbb{N}_1\} &=&&\{3,6,9,12,15,18,\ldots\}
\end{align*}
Combining these three sets will yield the set of all positive integers. Since each element in the third set is a multiple of $3$, and thus a composite number, we can ignore this set and only consider the first two. We can then expect the primes to be split approximately equally between $3n+1$ and $3n+2$. Similarly, for a difference of $4$ between elements in each set, primes would be split approximately evenly between $4n+1$ and $4n+3$.
Thus applying our intuition to \eqref{eq:PNT}, we arrive at:
\begin{thm}
\label{dirichletsthm}
(Dirichlet's Theorem on Arithmetic Progressions)
If $\gcd(a,b) = 1$, there are infinitely many primes congruent to $b$ modulo $a$. Moreover, for a fixed modulus $a$, the primes are split evenly among the $\phi(a)$ progressions $an+b$ with $\gcd(a,b)=1$; that is, the proportion of primes in each such progression is $\frac{1}{\phi(a)}$.
\end{thm}
Writing $\pi(x; a, b)$ for the number of primes up to $x$ that are congruent to $b$ modulo $a$, the quantitative statement reads:
\begin{equation}
\label{eq:PNTforAPs}
\lim_{x \to \infty} \frac{\pi(x;a,b)}{x/(\phi(a) \cdot \log(x))} = 1
\end{equation}
For example, the progression $5n+1$ contains one-fourth of the primes (since $\phi(5)=4$), and we write:
\begin{equation*}
\lim_{x \to \infty} \frac{\pi(x;5,1)}{x/(\phi(5) \cdot \log(x))} = 1
\end{equation*}
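As an informal numerical check of \eqref{eq:PNTforAPs}, we can compare the count of primes in each of the four progressions modulo $5$ against $x/(\phi(5)\log x)$; a minimal Python sketch:

```python
import math

def primes_up_to(x):
    """Simple Sieve of Eratosthenes."""
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, x + 1, p):
                is_prime[m] = False
    return [n for n in range(2, x + 1) if is_prime[n]]

def pi_ap(x, a, b):
    """pi(x; a, b): the number of primes <= x congruent to b mod a."""
    return sum(1 for p in primes_up_to(x) if p % a == b)

x, a = 10 ** 5, 5
expected = x / (4 * math.log(x))    # phi(5) = 4 progressions share the primes
for b in (1, 2, 3, 4):
    print(b, pi_ap(x, a, b), round(expected))
```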
\pagebreak
The complete proof of Dirichlet's Theorem is quite lengthy; it is presented excellently by Pete L. Clark \cite{dirichlets_theorem} and Austin Tran \cite{dirichlets theorem orthogonality}. Here, we only briefly introduce important concepts from analytic number theory and highlight crucial points of the proof as given by Clark and Tran. For readers not familiar with analytic number theory, this section may be particularly difficult. Nevertheless, we encourage the reader to press on.
\begin{defn}
A \textit{Dirichlet Character} modulo $a$ is a function $\chi$ on the units of $\mathbb{Z}/a\mathbb{Z}$ that has the following properties:
\end{defn}
\begin{itemize}
\item $\chi$ is periodic modulo $a$, i.e. $\chi(b) = \chi(b+a)$ for $b \in \mathbb{N}$.
\item $\chi$ is multiplicative, i.e. $\chi(b) \cdot \chi(c) = \chi(bc)$.
\item $\chi(1) = 1$.
\item $\chi(b) \not = 0$ if and only if $\gcd(a, b) = 1$.
\end{itemize}
We say that a character is \textit{principal} if its value is $1$ for all arguments coprime to its modulus, and $0$ otherwise. We denote the principal character modulo $a$ as $\chi_0$. Note that the principal character still depends on $a$.
\begin{exmp}
Consider the Dirichlet characters modulo 3. We have $\chi(1) = 1$ and
$\chi(3) = 0$ by the properties stated above. Using the multiplicativity and periodicity of $\chi$, we note that $(\chi(2))^2 = \chi(2)\cdot \chi(2) = \chi(4) = \chi(1) = 1$. This implies that $\chi(2) = \pm 1$. If $\chi(2) = 1$, then $\chi = \chi_0$ is the principal character by definition. On the other hand, we use $\chi_1$ to denote the character with $\chi_1(2) = -1$. We note that $\chi_1$ also satisfies all the properties required of a Dirichlet character, but is not principal.
\end{exmp}
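As an informal aside, the two characters modulo $3$ from the example can be checked mechanically; a minimal Python sketch verifying periodicity and multiplicativity:

```python
def chi_0(n):
    """Principal character modulo 3."""
    return 1 if n % 3 != 0 else 0

def chi_1(n):
    """The non-principal character modulo 3: chi_1(1) = 1, chi_1(2) = -1."""
    return {0: 0, 1: 1, 2: -1}[n % 3]

# verify periodicity and multiplicativity for both characters
for chi in (chi_0, chi_1):
    for b in range(1, 10):
        assert chi(b) == chi(b + 3)                 # periodic mod 3
        for c in range(1, 10):
            assert chi(b) * chi(c) == chi(b * c)    # multiplicative
print("both characters mod 3 verified")
```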
\begin{prpn}
Let $X(a)$ denote the set of all Dirichlet Characters modulo $a$. $X(a)$ is a group with multiplication and an identity element given by the principal character $\chi_0$ modulo $a$. In addition, the following orthogonality relation holds (orthogonality of characters):
\end{prpn}
\begin{equation*}
\frac{1}{\phi(a)}\sum_{\chi \in X(a)} \chi(b) = \begin{cases}
1 & \text{if $b \equiv 1$ (mod $a$)}, \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
(A proof of the orthogonality of characters is nicely shown by A. Tran in \cite{dirichlets theorem orthogonality}).
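As an informal aside, the orthogonality relation can be verified numerically. For the prime modulus $5$, the group $(\mathbb{Z}/5\mathbb{Z})^{\times}$ is cyclic with generator $2$, so its $\phi(5) = 4$ characters send $2^k$ to $4^{\text{th}}$ roots of unity; a minimal Python sketch:

```python
import cmath

# (Z/5Z)^x is cyclic, generated by 2: 2^0, 2^1, 2^2, 2^3 = 1, 2, 4, 3 (mod 5)
gen, mod = 2, 5
dlog = {pow(gen, k, mod): k for k in range(4)}   # discrete logarithm base 2

def chi(idx, n):
    """The idx-th Dirichlet character mod 5 (idx = 0 gives the principal one)."""
    if n % mod == 0:
        return 0
    return cmath.exp(2j * cmath.pi * idx * dlog[n % mod] / 4)

# orthogonality: the averaged character sum detects the class of 1 (mod 5)
for b in range(1, 5):
    s = sum(chi(idx, b) for idx in range(4)) / 4
    print(b, complex(round(s.real, 10), round(s.imag, 10)))
```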
\begin{corly}
The values of a character $\chi$ are either $0$ or $\phi(a)^{\text{th}}$ roots of unity.
\end{corly}
Recall that if $\chi(b) \not = 0$, then $\gcd(a, b) = 1$. Since the order of the group is $\phi(a)$, $\chi^{\phi(a)}$ is the principal character, so $\chi(b)^{\phi(a)} = 1$. Thus, $\chi(b) = e^{\frac{2\pi i \nu}{\phi(a)}}$ for some $\nu \in \mathbb{N}$.
\begin{defn}
A \textit{Dirichlet L-series} is a function of the form:
\end{defn}
\begin{equation*}
L(\chi, s) = \sum_{n=1}^{\infty}\frac{\chi(n)}{n^s}
\end{equation*}
where $s$ is a complex variable with Re($s$)$>1$.
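As an informal aside, for the non-principal character modulo $3$ one can watch the partial sums of $L(\chi_1, s)$ at $s = 1$ settle on a nonzero value, as the non-vanishing theorem below requires; the classical closed form $L(\chi_1, 1) = \pi/(3\sqrt{3}) \approx 0.6046$ makes a convenient check. A minimal Python sketch:

```python
import math

def chi_1(n):
    """The non-principal Dirichlet character modulo 3."""
    return {0: 0, 1: 1, 2: -1}[n % 3]

def L_partial(s, terms):
    """Partial sum of the Dirichlet L-series L(chi_1, s)."""
    return sum(chi_1(n) / n ** s for n in range(1, terms + 1))

closed_form = math.pi / (3 * math.sqrt(3))   # classical value of L(chi_1, 1)
print(L_partial(1, 10 ** 5), closed_form)
```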
\begin{prpn}
\label{prpn: euler product}
The Dirichlet L-function can be also expressed as an Euler product as follows (A proof can be found in \cite{euler product reference}):
\end{prpn}
\begin{equation}
\label{eq: L series Euler rep}
L(\chi, s) = \prod_p \left(1 - \frac{\chi(p)}{p^s}\right)^{-1}
\end{equation}
We introduce an intermediate theorem necessary for the proof of theorem \ref{dirichletsthm}:
\begin{thm}
\label{prop: nonvanishing}
\textit{Dirichlet's Non-vanishing Theorem} states that $L(\chi,1) \not = 0$ if $\chi$ is not a principal character.
\end{thm}
Here, we will only highlight crucial sections of the proof of Dirichlet's non-vanishing theorem (as shown by J.P. Serre). A more complete proof of Theorem \ref{prop: nonvanishing} can be found in \cite{jp serre}.
Let $a$ be a fixed integer $\geq 1$. If $p \nmid a$, we denote the image of $p$ in $(\mathbb{Z}/a\mathbb{Z})^\times$ by $\overline{p}$. In addition, we use $f(p)$ to denote the order of $\overline{p}$ in $(\mathbb{Z}/a\mathbb{Z})^\times$; i.e. $f(p)$ is the smallest integer $f$ such that $p^f \equiv 1 \pmod a$. We let $g(p) = \frac{\phi(a)}{f(p)}$. This is the order of the quotient of $(\mathbb{Z}/a\mathbb{Z})^\times$ by the subgroup $(\overline{p})$ generated by $\overline{p}$.
\begin{lemma}
\label{lemma1}
For $p \nmid a$, we have the identity:
\end{lemma}
\begin{equation*}
\prod_{\chi \in X(a)}(1 - \chi(p)T) = (1-T^{f(p)})^{g(p)}
\end{equation*}
For the derivation of lemma \ref{lemma1}, we let $\mu_{f(p)}$ denote the set of $f(p)^{th}$ roots of unity. We then have the identity:
\begin{equation}
\label{eq:nonvanish identity}
\prod_{w \in \mu_{f(p)}}(1-wT) = 1-T^{f(p)}
\end{equation}
For all $w\in\mu_{f(p)}$, there exists $g(p)$ characters $\chi \in X(a)$ such that $\chi(\overline{p}) = w$. This fact, together with \eqref{eq:nonvanish identity}, brings us to lemma \ref{lemma1}.
We now define a function $\zeta_a(s)$ as follows:
\begin{equation*}
\zeta_a(s) = \prod_{\chi \in X(a)}L(\chi, s)
\end{equation*}
We continue by replacing each $L(\chi,s)$ in the product by its product expansion as in \eqref{eq: L series Euler rep}, and then applying lemma \ref{lemma1} with $T=p^{-s}$.
\begin{prpn}
\label{prpn: zeta expansion}
We can then represent the product expansion of $\zeta_a(s)$ as follows:
\end{prpn}
\begin{equation*}
\zeta_a(s) = \prod_{p \nmid a}\dfrac{1}{\left(1-\dfrac{1}{p^{f(p)s}}\right)^{g(p)}}
\end{equation*}
We note that this is a Dirichlet series with positive integral coefficients converging in the half plane $Re(s) > 1$.
We now wish to show (a) that $\zeta_a(s)$ has a simple pole at $s=1$ and (b) that $L(\chi, 1) \not = 0$ for all $\chi \not = \chi_0$. The fact that $L(\chi_0, s)$ has a simple pole at $s=1$ implies the same for $\zeta_a(s)$, provided no other factor vanishes at $s=1$. Thus, showing (b) would imply (a).
Suppose for contradiction that $L(\chi,1) = 0$ for $\chi \not = \chi_0$. Then $\zeta_a(s)$ would be holomorphic at $s=1$, and also for all $s$ with $Re(s)>0$. Since by proposition \ref{prpn: zeta expansion}, $\zeta_a(s)$ is a Dirichlet series with positive coefficients, the series would converge for all $s$ in that domain. However, this cannot be true. We show this by expanding the $p^{th}$ factor of $\zeta_a(s)$ as follows:
\begin{equation*}
\dfrac{1}{(1-p^{-f(p)s})^{g(p)}} = \left(1+p^{-f(p)s} + p^{-2f(p)s} + p^{-3f(p)s}+\ldots\right)^{g(p)}
\end{equation*}
Every coefficient in this expansion is nonnegative. Keeping, for each $k \geq 0$, only the term $p^{-k f(p) g(p) s} = p^{-k \phi(a) s}$ obtained by choosing $p^{-k f(p)s}$ from each of the $g(p)$ factors, we arrive at the lower bound:
\begin{equation*}
1 + \dfrac{1}{p^{\phi(a)s}}+\dfrac{1}{p^{2\phi(a)s}}+\dfrac{1}{p^{3\phi(a)s}} + \ldots
\end{equation*}
Multiplying over all $p \nmid a$, it follows that the Dirichlet series $\zeta_a(s)$ dominates, coefficient by coefficient, the series:
\begin{equation}
\label{eq:divergence at phi a}
\sum_{\substack{n \geq 1 \\ \gcd(a,n)=1}}\dfrac{1}{n^{\phi(a)s}}
\end{equation}
Evaluating equation \eqref{eq:divergence at phi a} at $s=\frac{1}{\phi(a)}$, a point with $Re(s) > 0$, we arrive at the following series, which diverges since it contains $\frac{1}{p}$ for every prime $p \nmid a$ and \eqref{eq:sum of reciprocals} applies. This contradiction completes the proof of Theorem \ref{prop: nonvanishing}:
\begin{equation*}
\sum_{\substack{n \geq 1 \\ \gcd(a,n)=1}}\dfrac{1}{n}.
\end{equation*}
We now proceed with the proof of Dirichlet's Theorem.
\textit{Proof of Theorem \ref{dirichletsthm}}. Let $X(a)$ denote the group of Dirichlet characters modulo $a$. We then fix $a$ and $b$ with $\gcd(a,b)=1$ as in the statement of the theorem. In addition, we let $\Psi$ denote the set of prime numbers $p \equiv b \pmod a$. Our goal is to show that $\Psi$ is an infinite set.
We wish to consider a function similar to the one in \eqref{eq: eulersproof}. We define:
\begin{equation}
\label{eq:congruent primes recip}
P_b(s) := \sum_{p\in \Psi} \frac{1}{p^s}
\end{equation}
In particular, we wish to show that the function $P_b(s)$ approaches $\infty$ as $s$ approaches $1$. This would imply infinitely many elements in $\Psi$. We also define $\theta_b$ to be the characteristic function of the congruence class $b \pmod a$. In other words:
\begin{equation*}
\theta_b(n) = \begin{cases}
1 & \text{if $n \equiv b$ (mod $a$)}, \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
Note that $\theta_b$ is periodic modulo $a$ and is $0$ when $\gcd(n,a)>1$.
Using this characteristic function, we wish to express $P_b(s)$ as a sum over all primes:
\begin{equation*}
P_b(s) = \sum_{p\in P} \frac{\theta_b(p)}{p^s}
\end{equation*}
\begin{lemma}
\label{lemma: theta}
For all $n \in \mathbb{Z}$, we have:
\end{lemma}
$$\theta_b(n) = \frac{1}{\phi(a)}\sum_{\chi \in X(a)}\chi(b^{-1})\chi(n)$$
\textit{Proof of Lemma \ref{lemma: theta}}. Using the multiplicative property of the Dirichlet character:
$$\theta_b(n) = \frac{1}{\phi(a)}\left(\sum_{\chi \in X(a)}\chi(b^{-1}n)\right)$$
By our orthogonality relation, the summation term equals $\phi(a)$ if $b^{-1}n \equiv 1 \pmod a$ (i.e. if $n\equiv b \pmod a$) and zero otherwise. The result is exactly $\theta_b(n)$.
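As an informal aside, Lemma \ref{lemma: theta} can be verified numerically for a small modulus. Note that $\chi(b^{-1})$ is the complex conjugate of $\chi(b)$, since the nonzero values of $\chi$ lie on the unit circle; a minimal Python sketch for $a = 5$:

```python
import cmath

# characters mod 5 via the cyclic structure of (Z/5Z)^x, generated by 2
gen, mod = 2, 5
dlog = {pow(gen, k, mod): k for k in range(4)}   # discrete logarithm base 2

def chi(idx, n):
    """The idx-th Dirichlet character mod 5."""
    if n % mod == 0:
        return 0
    return cmath.exp(2j * cmath.pi * idx * dlog[n % mod] / 4)

def theta(b, n):
    """Characteristic function of n = b (mod 5), built from characters.

    chi evaluated at the inverse of b equals the conjugate of chi(b).
    """
    return sum(chi(idx, b).conjugate() * chi(idx, n) for idx in range(4)) / 4

for b in range(1, 5):
    for n in range(1, 20):
        expected = 1 if n % 5 == b else 0
        assert abs(theta(b, n) - expected) < 1e-9
print("lemma verified for a = 5")
```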
Applying Lemma \ref{lemma: theta} to \eqref{eq:congruent primes recip}, we arrive at:
\begin{equation}
\label{eq:dirichletproof2}
P_b(s) = \sum_{\chi \in X(a)}\frac{\chi(b^{-1})}{\phi(a)}\sum_p\frac{\chi(p)}{p^s}
\end{equation}
We recognize the second summation term as reminiscent of the Dirichlet series we defined earlier. We will come back to this equation later.
Consider the convergent Taylor series expansion of $\log(1-z)$ for $\vert z \vert < 1$:
\begin{equation}
\label{eq:complex log}
\log(1-z) = - \sum_{n=1}^{\infty}\frac{z^n}{n}
\end{equation}
In addition, consider the Euler product representation of our Dirichlet series in \eqref{eq: L series Euler rep}. Applying logarithms, we get:
\begin{equation}
\label{eq: log of euler prod of L series}
\log(L(\chi, s)) = - \sum_p \log\left(1-\frac{\chi(p)}{p^s}\right)
\end{equation}
Combining \eqref{eq:complex log} and \eqref{eq: log of euler prod of L series}, we have:
\begin{equation}
\label{eq: log euler with complex log}
\log(L(\chi, s)) = \sum_p\sum_n \frac{1}{n}\left(\frac{\chi(p)}{p^s}\right)^n
\end{equation}
The right side of \eqref{eq: log euler with complex log} is absolutely convergent for Re($s$) $> 1$, and is therefore an analytic function on that half plane. We now denote the right hand side of \eqref{eq: log euler with complex log} as $\mathit{l}(\chi, s)$.
\begin{lemma}
In the half plane with Re($s$)$>1$, $e^{\mathit{l}(\chi,s)} = L(\chi, s)$.
\end{lemma}
The proof of Lemma 2 is shown in \cite{dirichlets theorem orthogonality}.
We now split $\mathit{l}(\chi, s)$ into two parts. The first part will be for the sums when $n=1$, and the second part will be for the sums when $n>1$. We denote these as $\mathit{I}(\chi, s)$ and $\mathit{R}(\chi, s)$ respectively. Symbolically,
$$ \mathit{l}(\chi, s) = \mathit{I}(\chi,s) + \mathit{R}(\chi, s)$$ $$\mathit{I}(\chi,s) = \sum_p \frac{\chi(p)}{p^s}, \qquad \mathit{R}(\chi, s) = \sum_{n \geq 2}\sum_{p}\frac{\chi(p)^n}{np^{ns}}$$
We now note that we can write $P_b(s)$ from \eqref{eq:dirichletproof2} as:
\begin{equation}
\label{eq: split equation applied to l series log}
P_b(s) = \sum_{\chi \in X(a)} \frac{\chi(b^{-1})}{\phi(a)} \mathit{I}(\chi, s)
\end{equation}
\begin{lemma} $\mathit{R}(\chi, s)$ is bounded at $s=1$ (recall that we wish to show that $P_b(s) \rightarrow \infty$ as $s \rightarrow 1$). \end{lemma}
This can be shown by comparing $\mathit{R}(\chi, s)$ to the well-known Basel problem:
\begin{equation*}
\vert \mathit{R}(\chi, 1) \vert \leq \sum_{n\geq 2} \sum_{p} \frac{1}{np^n} \leq \sum_{p} \sum_{n \geq 2} \frac{1}{p^n} \leq 2 \sum_n \frac{1}{n^2} = \frac{\pi^2}{3}
\end{equation*}
Since $\mathit{R}(\chi, 1)$ is bounded, we may ignore it; it does not affect whether $P_b(s)$ diverges as $s \rightarrow 1$.
We now wish to split the summation from \eqref{eq: split equation applied to l series log} into a term for the principal character and a sum over non-principal characters. Recall that the principal character satisfies $\chi_0(n) = 1$ for $\gcd(n,a) =1$, and $0$ otherwise.
\begin{equation*}
P_b(s) = \sum_{\chi \in X(a)}\frac{\chi(b^{-1})}{\phi(a)}\mathit{I} (\chi, s)
\end{equation*}
\begin{equation*}
= \frac{\chi_0(b^{-1})}{\phi(a)}\mathit{I}(\chi_0, s) + \sum_{\chi \not = \chi_0}\frac{\chi(b^{-1})}{\phi(a)}\mathit{I} (\chi, s)
\end{equation*}
\begin{equation}
\label{eq: almost done with proof}
P_b(s) = \frac{1}{\phi(a)}\sum_{p \nmid a} \frac{1}{p^s} + \sum_{\chi \not = \chi_0}\frac{\chi(b^{-1})}{\phi(a)}\mathit{I}(\chi, s)
\end{equation}
We know that $a$ has only finitely many prime divisors. This fact, together with equation \eqref{eq:sum of reciprocals}, tells us that the first term in \eqref{eq: almost done with proof} is unbounded as $s \rightarrow 1$. All that remains is to show that the second summation in \eqref{eq: almost done with proof} stays bounded as $s \rightarrow 1$; then $P_b(s)$ itself diverges, giving infinitely many primes in the congruence class $b \pmod a$, as claimed in Theorem \ref{dirichletsthm}. To do this, we must use Dirichlet's non-vanishing theorem (Theorem \ref{prop: nonvanishing}). Recall that $L(\chi, 1) \not = 0$ if $\chi$ is not a principal character. Thus:
\begin{equation*}
\label{eq: almost last eq in proof of dirichlets theorem}
0 \not = L(\chi, 1) = \lim_{s\rightarrow 1}L(\chi, s) = \lim_{s \rightarrow 1}e^{\mathit{l}(\chi, s)}
\end{equation*}
Since $L(\chi, s)$ tends to a finite, nonzero limit as $s \rightarrow 1$, and since logarithms of an analytic function differ only by multiples of $2\pi i$, $\mathit{l}(\chi, s) = \log L(\chi, s)$ remains bounded as $s \rightarrow 1$. As a result, the contribution to $P_b(s)$ from non-principal Dirichlet characters remains bounded, while the contribution from the principal character is unbounded. $P_b(s)$ itself is then unbounded as $s \rightarrow 1$. In conclusion, we have:
\begin{equation*}
\sum_{p \in \Psi}\frac{1}{p^s} = \lim_{s \rightarrow 1}P_b(s) = \infty
\end{equation*}
Thus, there must be infinitely many elements in $\Psi$, i.e. there are infinitely many primes congruent to $b$ modulo $a$ for $\gcd(a, b) = 1$.
\subsection{Chebyshev's Bias, Quadratic Residue, and the Legendre Symbol}
\label{Chebyshev's Bias}
As shown quite thoroughly by A. Granville and G. Martin in their paper \textit{Prime Number Races} \cite{prime_number_races}, when we ``race'' progressions, some progressions hold the lead for an overwhelming majority of the time. For example, in the mod $4$ race of $4n+1$ against $4n+3$, the bias is as much as $99.59\%$ in favor of the $4n+3$ team!
This bias, first observed by Chebyshev in 1853, is attributed to primes in the $4n+1$ progression being \textit{quadratic residues} modulo $4$. As noted by Terry Tao \cite{consecutive_biases}:
\begin{displayquote}
...Chebyshev bias asserts, roughly speaking, that a randomly selected prime $p$ of a large magnitude $x$ will typically (though not always) be slightly more likely to be a quadratic non-residue modulo $q$ than a quadratic residue, but the bias is small (the difference in probabilities is only about $\mathit{O}(\frac{1}{\sqrt{x}})$ for typical choices of $x$)
\end{displayquote}
\begin{defn}
Let $p$ be an odd prime number\footnote{This restriction ensures that the Legendre symbol introduced below is well defined.}. We say that a number $a$ is a \textit{quadratic residue} (QR) modulo $p$ if there exists an element $x$ in the set of totatives of $p$ such that $x^2 \equiv a$ (mod $p$).
\end{defn}
(Note: $p$ does not necessarily need to be prime for the definition of quadratic residues. However, as we will see later, the modulus must be prime for our Legendre symbol model to work. Thus, we restrict our study to only prime moduli).
For example, let us consider the set of totatives of $7$, which is the set \{1,2,3,4,5,6\}:
\begin{table}[h]
\centering
\caption{Quadratic Residues (mod 7)}
\label{QRtable}
\begin{tabular}{|c|c|c|c|l}
\cline{1-4}
$x$ & $x^2$ & $x^2$ (mod 7) & Conclusion & \\ \cline{1-4}
1 & 1 & 1 & 1 is a QR (mod 7) & \\ \cline{1-4}
2 & 4 & 4 & 4 is a QR (mod 7) & \\ \cline{1-4}
3 & 9 & 2 & 2 is a QR (mod 7) & \\ \cline{1-4}
4 & 16 & 2 & 2 is a QR (mod 7) & \\ \cline{1-4}
5 & 25 & 4 & 4 is a QR (mod 7) & \\ \cline{1-4}
6 & 36 & 1 & 1 is a QR (mod 7) & \\ \cline{1-4}
\end{tabular}
\end{table}
In this example, $1$ is a quadratic residue since both $1^2$ and $6^2$ are congruent to $1$ (mod 7). In addition, $4$ is a quadratic residue since $2^2$ and $5^2$ are congruent to $4$ (mod 7), and $2$ is a quadratic residue since $3^2$ and $4^2$ are congruent to $2$ (mod 7). Note the symmetry of quadratic residues when ordered by $x$.
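As an informal aside, Table \ref{QRtable} can be reproduced by squaring every totative; a minimal Python sketch:

```python
def quadratic_residues(p):
    """The set {x^2 mod p : x a totative of p}, sorted."""
    return sorted({x * x % p for x in range(1, p)})

print(quadratic_residues(7))   # → [1, 2, 4], as in the table
```

For an odd prime $p$ there are exactly $(p-1)/2$ quadratic residues, reflecting the symmetry $x^2 \equiv (p-x)^2 \pmod p$ noted above.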
We now might like a convenient notation to quantify the notion of quadratic residues.
\pagebreak
\begin{defn}
The Legendre symbol separates an integer $a$ into three classes, depending on its residue modulo an odd prime $p$.
\end{defn}
\begin{equation*}
\left(\frac{a}{p}\right) = \begin{cases}
1 & \text{if $a$ is a quadratic residue (mod $p$)}, \\
-1 & \text{if $a$ is a nonquadratic residue (mod $p$)}, \\
0 & \text{if $a \equiv 0$ (mod $p$)}
\end{cases}
\end{equation*}
Note: the Legendre symbol is defined only for $p$ an odd prime number. If $a$ is a prime number $\not = p$, the Legendre symbol will never be $0$ (since two distinct prime numbers are coprime). We know by Theorem \ref{dirichletsthm} that, as $a$ ranges over the primes, its residues modulo $p$ are equally distributed among the congruence classes $\{1,2,3, \ldots, p-1\}$.
Continuing with our definition, we introduce several properties of the Legendre symbol:
\begin{itemize}
\item The Legendre symbol is periodic on its top argument modulo $p$. In other words, if $a \equiv b \pmod p$, then
\begin{equation*}
\left(\frac{a}{p}\right) = \left(\frac{b}{p}\right)
\end{equation*}
\item The Legendre symbol is multiplicative on its top argument, i.e.
\begin{equation*}
\left(\frac{a}{p}\right)\left(\frac{b}{p}\right) = \left(\frac{ab}{p}\right)
\end{equation*}
\item The product of two squares is a square. The product of two nonsquares is a square. The product of a square and a nonsquare is a nonsquare. This can be expressed as follows:
\begin{align*}
&\text{Two squares}:&1 \cdot 1 &= 1 \\
&\text{Two nonsquares}: &-1 \cdot -1 &= 1 \\
&\text{Square and nonsquare}:& 1 \cdot -1 &= -1
\end{align*}
\item The Legendre symbol can also be defined equivalently using Euler's criterion as:
$$\left(\frac{a}{p}\right) \equiv \mathlarger{a}^{(p-1)/2} \pmod p$$
\end{itemize}
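As an informal aside, Euler's criterion gives an immediate way to compute the Legendre symbol, since Python's built-in `pow` performs modular exponentiation; a minimal sketch:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)      # a^((p-1)/2) mod p is 0, 1, or p-1
    return r - p if r == p - 1 else r    # map p-1 back to -1

print([legendre(a, 7) for a in range(1, 7)])   # → [1, 1, -1, 1, -1, -1]
```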
\begin{prpn}
\label{prpn: quadratic reciprocity}
(Law of Quadratic Reciprocity) For $p$ and $q$ odd prime numbers:
\end{prpn}
\begin{equation*}
\left(\frac{q}{p}\right) = \mathlarger{(-1)}^{\frac{p-1}{2}\frac{q-1}{2}} \left(\frac{p}{q}\right)
\end{equation*}
The Law of Quadratic Reciprocity \cite{QR stein} has several supplements for different values of $a$. Here, we only introduce the first two supplements without proof. For $x$ in the set of totatives of $p$:
\begin{enumerate}
\item $x^2 \equiv -1 \pmod p$ is solvable if and only if $p \equiv 1 \pmod 4$.
\item $x^2 \equiv 2 \pmod p$ is solvable if and only if $p \equiv \pm 1 \pmod 8$.
\end{enumerate}
These supplements can be expressed equivalently as follows:
\begin{enumerate}
\item \begin{equation*}
\left(\frac{-1}{p}\right) = \mathlarger{(-1)}^{\frac{p-1}{2}} =
\begin{cases}
1 & \text{if $p \equiv 1$ (mod $4$)}, \\
-1 & \text{if $p \equiv 3$ (mod $4$)}
\end{cases}
\end{equation*}
\item \begin{equation*}
\left(\frac{2}{p}\right) = \mathlarger{(-1)}^{\frac{p^2-1}{8}} =
\begin{cases}
1 & \text{if $p \equiv 1$ or $7$ (mod $8$)}, \\
-1 & \text{if $p \equiv 3$ or $5$ (mod $8$)}
\end{cases}
\end{equation*}
\end{enumerate}
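As an informal aside, both supplements can be checked against a list of small odd primes; a minimal Python sketch (reusing the Euler-criterion computation of the Legendre symbol):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return r - p if r == p - 1 else r

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
for p in odd_primes:
    # first supplement: (-1/p) = 1 iff p = 1 (mod 4)
    assert legendre(-1, p) == (1 if p % 4 == 1 else -1)
    # second supplement: (2/p) = 1 iff p = ±1 (mod 8)
    assert legendre(2, p) == (1 if p % 8 in (1, 7) else -1)
print("both supplements hold for all primes checked")
```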
\pagebreak
Continuing with our example for $a$ in $(\mathbb{Z}/7\mathbb{Z})^{\times}$, we have:
\begin{table}[h]
\centering
\caption{Legendre Symbols (mod 7)}
\label{LStable}
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{1-7}
$a$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \cline{1-7}
$\left(\frac{a}{7}\right)$ & $1$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ \\ \cline{1-7}
\end{tabular}
\end{table}
\begin{prpn}
In general, Chebyshev's bias suggests that, in a race between $\alpha n + \beta_1$ and $\alpha n+\beta_2$, the progression in which $\beta_i$ is a nonquadratic residue (mod $\alpha$) will likely contain more primes up to $x$.
\end{prpn}
For instance, when racing $1$ (mod $3$) against $2$ (mod $3$), we observe that $2$ (mod $3$) almost always has more primes up to $x$. Indeed, $1$ is a quadratic residue (mod $3$), and $2$ is a nonquadratic residue (mod $3$).
\begin{table}[h]
\centering
\caption{Count of Primes in the mod 3 Race}
\label{mod3race}
\begin{tabular}{|c|c|c|l}
\cline{1-3}
$x$ & Primes in $3n+1$ up to $x$ & Primes in $3n+2$ up to $x$ \\ \cline{1-3}
$10^1$ & 1 & 2 & \\ \cline{1-3}
$10^2$ & 11 & 13 & \\ \cline{1-3}
$10^3$ & 80 & 87 & \\ \cline{1-3}
$10^4$ & 611 & 617 & \\ \cline{1-3}
$10^5$ & 4784 & 4807 & \\ \cline{1-3}
$10^6$ & 39231 & 39266 & \\ \cline{1-3}
\end{tabular}
\end{table}
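Two rows of this table can be recomputed directly with a small sieve in plain Python (rather than Sage); note that the prime $2$ itself falls into the $3n+2$ class, as in the table's first row.

```python
# Count primes in the two mod 3 progressions up to x.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def race_counts(x):
    ps = primes_up_to(x)
    ones = sum(1 for p in ps if p % 3 == 1)   # primes of the form 3n+1
    twos = sum(1 for p in ps if p % 3 == 2)   # primes of the form 3n+2
    return ones, twos

assert race_counts(10 ** 2) == (11, 13)
assert race_counts(10 ** 4) == (611, 617)
```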
Despite the apparent domination by the $2$ (mod $3$) team, a theorem from J.E. Littlewood (1914) asserts that there are infinitely many values of $x$ for which the $1$ (mod $3$) team is in the lead (of course, this theorem applies to races in other moduli as well). In fact, the first value for which this occurs is at $608,981,813,029$ (discovered by Bays and Hudson in 1976).
In 1962, Knapowski and Tur\'{a}n conjectured that if we randomly pick an arbitrarily large value of $x$, then there will ``almost certainly'' be more primes of the form $3n+2$ than $3n+1$ up to $x$. However, the Knapowski-Tur\'{a}n conjecture was later disproved by Kaczorowski and by Sarnak, each working independently. In fact, if we let $\nu$ denote the number of values of $x (\leq X)$ for which there are more primes of the form $3n+2$, the proportion $\frac{\nu}{X}$ does not tend to any limit as $X \rightarrow \infty$, but instead fluctuates. This raises the question: what happens if we go out far enough? Will the race be unbiased if we set $X$ sufficiently far from $0$? That is, is Chebyshev's bias only apparent for ``small'' values of $X$?
In 1994, while working with the mod $4$ race, Rubinstein and Sarnak introduced the logarithmic measure to find the percentage of time a certain team is in the lead \cite{chebyshevs bias}. Instead of counting $1$ for each $x (\leq X)$ where there are more primes of the form $4n+3$ than of the form $4n+1$, Rubinstein and Sarnak count $\frac{1}{x}$. If every $x$ were counted, the sum would be approximately $\ln X$; normalizing by $\ln X$ therefore gives the approximate proportion of time the $4n+3$ team is in the lead:
\begin{equation*}
1 = \frac{\ln X}{\ln X} >
\left(\frac{1}{\ln X} \cdot \sum_{x \leq X} \frac{1}{x}\right) \rightarrow 0.9959 \ldots
\end{equation*}
where $x$ in the summation is only over values where there are more primes of the form $4n+3$ than of the form $4n+1$.
For the mod $3$ race, we have:
\begin{equation*}
\left(\frac{1}{\ln X} \cdot \sum_{x \leq X} \frac{1}{x}\right) \rightarrow 0.9990 \ldots
\end{equation*}
Using the logarithmic measure, we see that the $3n+2$ team is in the lead 99.9\% of the time!
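The logarithmic measure is straightforward to compute directly. Below is a plain-Python (rather than Sage) sketch for the mod $3$ race at the modest height $X = 10^5$; at such small heights the measure already exceeds $0.9$, though it has not yet reached the limiting $0.999\ldots$

```python
import math

def log_measure_mod3(X):
    # sum 1/x over those x <= X at which the 3n+2 team is strictly ahead,
    # then normalize by ln X
    sieve = [True] * (X + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(X ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    ones = twos = 0
    total = 0.0
    for x in range(2, X + 1):
        if sieve[x]:
            if x % 3 == 1:
                ones += 1
            elif x % 3 == 2:
                twos += 1
        if twos > ones:
            total += 1.0 / x
    return total / math.log(X)

m = log_measure_mod3(10 ** 5)
assert 0.9 < m < 1.0
```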
\subsection{The Gaussian Primes}
\begin{defn}
A \textit{Gaussian integer} is a complex number whose real and imaginary parts are both integers. The Gaussian integers form an integral domain, which we denote with $\mathbb{Z}[i]$.
\end{defn}
In other words, for $i^2 = -1$, we have:
\begin{equation*}\mathbb{Z}[i] = \{a+bi | a,b \in \mathbb{Z}\}.\end{equation*}
The units of $\mathbb{Z}[i]$ are $\pm 1$ and $\pm i$. In addition, we say that two elements $\mu$ and $\nu$ are \textit{associated} if $\mu=u\nu$ for some unit $u$ in $\mathbb{Z}[i]$. Because of the four units, Gaussian primes (along with their complex conjugates) have an eightfold symmetry in the complex plane (figure \ref{gaussianprimes}). For convenience, we often write ``primes'' in place of ``primes unique up to associated elements.''
\begin{defn}
We say that an element in $\mathbb{Z}[i]$ is a \textit{Gaussian prime} if it is irreducible, i.e. if its only divisors are itself and a unit in $\mathbb{Z}[i]$.
\end{defn}
One might initially believe that the primes in $\mathbb{Z}$ are also irreducible elements in $\mathbb{Z}[i]$. However, this is not always the case. In fact, there is a surprising connection between primes in mod $4$ arithmetic progressions in $\mathbb{Z}$ and the Gaussian primes. To understand this connection, we must first introduce the concept of the norm.
\begin{defn}
The \textit{norm} function takes a Gaussian integer $a+bi$ and maps it to a nonnegative integer. We denote the norm of a Gaussian integer as $N(a+bi) = (a+bi)(\overline{a+bi}) = (a+bi)(a-bi) = a^2 + b^2$. In other words, the norm function multiplies a Gaussian integer by its complex conjugate. One can geometrically understand the norm as the squared distance from the origin.
\end{defn}
The norm function is multiplicative; i.e. for $\alpha, \beta \in \mathbb{Z}[i]$ and $\gamma = \alpha \cdot \beta$,
\begin{equation*}N(\gamma) = N(\alpha \beta) = N(\alpha) N(\beta)\end{equation*}
We also note that the norm of any unit is $1$. For example, if $\alpha = i = 0 + 1i$, then $N(\alpha) = 0^2 + 1^2 = 1$. In addition, we note that if an integer can be written as a sum of two squares, then it factors in $\mathbb{Z}[i]$ into two elements of smaller norm. For example, $5=2^2 + 1^2 = (2+i)\cdot (2-i)= (2+i)\cdot (\overline{2+i}) = N(2+i)$. Thus, if a prime $p$ (in $\mathbb{Z}$) can be written as a sum of two squares, we know it is not a prime element in $\mathbb{Z}[i]$.
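Multiplicativity is easy to confirm numerically. The sketch below, in plain Python (rather than Sage), represents $a+bi$ as the pair $(a, b)$; the two helper functions are names of our own.

```python
def norm(z):
    # N(a + bi) = a^2 + b^2
    a, b = z
    return a * a + b * b

def gmul(z, w):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

assert norm((0, 1)) == 1                      # units have norm 1
assert norm((2, 1)) == 5                      # 5 = N(2 + i)
assert gmul((2, 1), (2, -1)) == (5, 0)        # (2+i)(2-i) = 5
z, w = (2, 3), (4, -7)
assert norm(gmul(z, w)) == norm(z) * norm(w)  # multiplicativity
```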
\begin{prpn}
\label{prpn: odd primes congruent to 1 mod 4}
If an odd prime $p$ is a sum of squares, it is congruent to $1 \pmod 4$ and not a prime element in $\mathbb{Z}[i]$.
\end{prpn}
Suppose $p = a^2 + b^2$. Since $p$ is odd, exactly one of $a$ or $b$ must be odd, and the other even. For the proof, we let $a$ be odd. Let $a = 2m+1$ and let $b=2n$. Then we have:
\begin{align*}
p & = a^2 + b^2 \\
&= (2m+1)^2 + (2n)^2 \\
&= 4m^2 + 4m +1 + 4n^2 \\
p & \equiv 1 \pmod 4
\end{align*}
Conversely (by Fermat's theorem on sums of two squares), every prime $p \equiv 1 \pmod 4$ can be written as a sum of two squares, so such a $p$ is the \textit{norm} of two primes in $\mathbb{Z}[i]$. For example, $p = 13 \equiv 1 \pmod 4$ and $13 = N(\pi_1) = N(\pi_2)$, where $\pi_1 = 2 +3i$ and $\pi_2 = 3 +2i$. We note that $\pi_2 = i \cdot \overline{\pi_1}$. (Here, we also note that counting primes in one quadrant is the same as counting primes unique up to associated elements).
\begin{prpn}
If an odd prime $p$ is congruent to $3 \pmod 4$, then $p$ is a prime element in $\mathbb{Z}[i]$.
\end{prpn}
For the proof, suppose for contradiction that we can factor $p$ into $(a+bi)\cdot(c+di)$. Using the multiplicative property of the norm function, we have:
\begin{align*}
N(p) &= N(a+bi)\cdot N(c+di) \\
p^2 & = (a^2 +b^2) \cdot (c^2 + d^2)
\end{align*}
Since $p$ is prime, $p^2$ can only be either $1 \cdot p^2$ or $p \cdot p$. Since we do not want a unit as a factor, we let $(a^2 + b^2) = p$ and $(c^2 + d^2) = p$. However, by proposition \ref{prpn: odd primes congruent to 1 mod 4}, we know that a solution would imply that $p$ is a sum of squares; i.e. $p \equiv 1 \pmod 4$. Thus, $p \equiv 3 \pmod 4$ cannot be factorized; i.e. $p$ is a Gaussian prime.
We now have enough information to classify a Gaussian prime into one of three general cases. Let $u$ be a unit in $\mathbb{Z}[i]$. Then:
\begin{itemize}
\item{\makebox[2cm]{$u(1+i)$\hfill} Since $p = 2 = N(1+i)$}
\item{\makebox[2cm]{$u(a+bi)$\hfill} $a^2 + b^2 = p \equiv 1 \pmod 4$}
\item{\makebox[2cm]{$u(p)$\hfill} $p \equiv 3 \pmod 4$}
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=9cm]{10610_tex.png}
\end{center}
\caption{Plot of Gaussian primes with norm $\leq 103^2$}
\label{gaussianprimes}
\end{figure}
Thus, we can see that primes in $\mathbb{Z}$ congruent to $1$ (modulo $4$), the quadratic residue class, are not primes in $\mathbb{Z}[i]$. Instead, they represent the norms of two separate Gaussian primes. We can use this to derive an equation for the exact count of Gaussian primes (unique up to associated elements) within a certain norm. Let $\pi_G(x)$ represent the count of Gaussian primes of norm up to $x$; then:
\begin{equation*}
\pi_G(x) = 2\pi(x;4,1) + \pi(\sqrt{x};4,3) + 1
\end{equation*}
The extra count is to include the Gaussian prime at $1+i$, which has norm $2$.
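The counting formula can be cross-checked against a direct enumeration, taking one representative per associate class ($a > 0$, $b \geq 0$). The sketch below is plain Python (rather than Sage), with helper names of our own.

```python
def is_prime(n):
    # trial division; adequate for the small ranges used here
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def pi_G_direct(x):
    # enumerate Gaussian primes of norm <= x with a > 0, b >= 0
    count = 0
    a = 1
    while a * a <= x:
        b = 0
        while a * a + b * b <= x:
            if b == 0:
                if is_prime(a) and a % 4 == 3:
                    count += 1
            elif is_prime(a * a + b * b):
                count += 1
            b += 1
        a += 1
    return count

def pi_G_formula(x):
    # 2*pi(x; 4, 1) + pi(sqrt(x); 4, 3) + 1
    ones = sum(1 for p in range(2, x + 1) if is_prime(p) and p % 4 == 1)
    threes = sum(1 for p in range(2, int(x ** 0.5) + 1)
                 if is_prime(p) and p % 4 == 3)
    return 2 * ones + threes + 1

for x in (10, 100, 1000, 10000):
    assert pi_G_direct(x) == pi_G_formula(x)
```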
In addition, we can extend our prime number theorem in the rational integers \eqref{eq:PNT} to a prime number theorem in the Gaussian integers by a modification of Dirichlet's Theorem \eqref{eq:PNTforAPs}. Moreover, we note the infinitude of Gaussian primes by their intimate connection with Dirichlet's Theorem for primes in mod $4$ progressions.
\begin{equation}
\label{eq: dirichlets gaussian thm}
\pi_G(x) \approx \frac{2x}{\phi(4)\log(x)} + \frac{\sqrt{x}}{\phi(4)\log(\sqrt{x})}
\end{equation}
The first term approximates the count of primes congruent to $1$ (mod $4$), each of which is the norm of two primes in $\mathbb{Z}[i]$. The second term approximates the count of primes congruent to $3$ (mod $4$), which have norm $p^2$ for $p \equiv 3 \pmod 4$. More precisely, we have:
\begin{equation*}
\lim_{x \rightarrow \infty} \dfrac{\pi_G(x)}{\frac{2x}{\phi(4)\log(x)} + \frac{\sqrt{x}}{\phi(4)\log(\sqrt{x})}} = 1
\end{equation*}
\pagebreak
The following code can be used in Sage to generate plots of Gaussian primes within a specified norm\footnote{We also created a video animation of Gaussian prime plots with norms from $10^1$ to $10^7$: \href{https://youtu.be/jRBCmXGlVJU}{https://youtu.be/jRBCmXGlVJU}}:
\begin{verbatim}
def gi_of_norm(max_norm):
    Gaussian_primes = {}
    Gaussian_integers = {}
    Gaussian_integers[0] = [(0,0)]
    for x in range(1, ceil(sqrt(max_norm))):
        for y in range(0, ceil(sqrt(max_norm - x^2))):
            N = x^2 + y^2
            if N in Gaussian_integers:
                Gaussian_integers[N].append((x,y))
            else:
                Gaussian_integers[N] = [(x,y)]
            # x + 0i is prime iff x is a rational prime with x = 3 (mod 4);
            # otherwise x + yi is prime iff its norm N is a rational prime
            if y == 0 and is_prime(x) and x % 4 == 3:
                have_prime = True
            elif y > 0 and is_prime(N):
                have_prime = True
            else:
                have_prime = False
            if have_prime:
                if N in Gaussian_primes:
                    Gaussian_primes[N].append((x,y))
                else:
                    Gaussian_primes[N] = [(x,y)]
    return Gaussian_primes, Gaussian_integers

def all_gaussian_primes_up_to_norm(N):
    gips = gi_of_norm(N)[0]
    return flatten([uniq([(x,y), (-y,x), (y,-x), (-x,-y)]) for x,y in
                    flatten(gips.values(), max_level=1)], max_level=1)

N = 10609 + 1  ### Declare norm here (in place of 10609)
P = scatter_plot(all_gaussian_primes_up_to_norm(N), markersize=RR(1000)/(N/50))
P.show(aspect_ratio=1, figsize=13, svg=False, axes=False)
\end{verbatim}
\section{Findings in the Rational Primes}
\subsection{Bias in the Legendre Symbols of Primes Modulo Another Prime}
\label{Legendre Symbol Races}
One phenomenon we wished to study in detail was Chebyshev's bias, specifically with regard to a randomly selected prime being more likely to be a nonquadratic residue modulo some other prime. We approached this by first attempting to model the bias as a ``random'' walk using Legendre symbol values as steps.
Let $q$ and $p$ be two randomly selected prime numbers. Then, according to Chebyshev's bias, $\left(\frac{q}{p}\right)$ has a slightly less than half probability of being a quadratic residue (i.e. returning a $1$). If we fix $p$ and let $q$ iterate through all primes, we get a sequence of $1$s and $(-1)$s (with the exception of $q=p$, in which case we have $0$). If the sequence behaved like a random walk, its partial sums should not wander far beyond $y=\pm\sqrt{t}$, where $t$ denotes the index of the prime number $q$. Indeed, this is the case for all observed values of $p$ up to the final value of $q$ (we tested for primes $p < 1000$ and for $q$ iterating over primes $<10,000,000$). However, there is a noticeable bias in the summation. Most of the time, the sum of the Legendre symbol values is negative, supporting the claim that there are slightly more nonquadratic residues.
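The walk is simple to reproduce. The sketch below is plain Python (rather than Sage) for $p = 97$, with $q$ running over a much shorter range ($< 10^5$) than in our experiments.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def legendre(a, p):
    # Euler's criterion
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 97
steps = [legendre(q, p) for q in primes_up_to(10 ** 5) if q != p]

# partial sums of the walk
walk, s = [], 0
for step in steps:
    s += step
    walk.append(s)

# the proportion of +1 steps hovers just below one half
assert abs(steps.count(1) / len(steps) - 0.5) < 0.05
```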
\pagebreak
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{pQ97_p1000000_all.png}
\end{center}
\caption{Legendre Symbol walk for $p=97$}
\label{walk97all}
\end{figure}
We wished to model the average behavior of our Legendre symbol walks. To do this, we recorded the ratio of quadratic residues in each of our walks for fixed $p$ as we increased the range of primes over which $q$ iterates. For example, when $p = 97$ and $q$ iterates over all primes less than $1000$, the ratio is $0.4698795$. When we allow $q$ to iterate over all primes less than $10,000,000$, the ratio of quadratic residues increases to $0.4997826$. We then plotted the average ratio for $167$ values of $p$ (the odd primes $3 \leq p < 1000$). In addition, we plotted the within-$p$ standard deviation of our ratio for each range of $q$ iterated. Since a randomly chosen prime is slightly more likely to be a nonquadratic residue modulo another prime, the average ratio seems to converge to $0.50$ from below as we increase the $q$-range.
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{ratio_1s_q1000_p1000000.png}
\end{center}
\captionsetup{justification=centering,margin=2cm}
\caption{Plot of the Average Ratios. Horizontal axis denotes $\log(x)$, where $x$ is the range over which $q$ iterates. Vertical bars represent $1$ standard deviation}
\label{AvgRatioAll}
\end{figure}
We repeated our experiments with $\left(\frac{p}{q}\right)$ for $p$ fixed and $q$ varying and arrived at similar results. For $p \equiv 1 \pmod 4$, we know from quadratic reciprocity that $\left(\frac{p}{q}\right) = \left(\frac{q}{p}\right)$, so the contribution is the same (see theorem 10 in \cite{quadratic reciprocity clark}). For $p \equiv 3 \pmod 4$, $\left(\frac{p}{q}\right) \not = \left(\frac{q}{p}\right)$. However, Chebyshev's bias still exists (i.e. there are slightly fewer +1s than -1s). As a result, the average behavior is similar.
\subsection{Bias in the Legendre Symbols of consecutive Primes}
\label{bias in consecutive primes}
Our next experiment in the rational primes was to examine the ratio of consecutive quadratic or nonquadratic residues for primes $q$ modulo a fixed prime $p$; i.e. we wished to model the behavior of the ratio of $(1,1)$s or $(-1,-1)$s.
Since the probability of $q \pmod p$ being a quadratic residue is very slightly less than $0.5$, we should expect our average ratio to converge to $\left(\frac{1}{2}\right)^{n-1}$ from below, where $n$ denotes the length of the consecutive chain. For example, for the ratio of three consecutive quadratic or nonquadratic residues, we expect to obtain approximately $(\frac{1}{2})^3 + (\frac{1}{2})^3 = (\frac{1}{2})^{3-1}$. (The first term represents the probability of $3$ consecutive quadratic residues, and the second the probability of $3$ consecutive nonquadratic residues.) However, in a very recent paper (March 2016), R. Lemke Oliver and K. Soundararajan \cite{unexpected_bias} note that there is a much stronger bias in the residues of consecutive primes than expected. We set out to model this (stronger) bias with our Legendre symbol walk.
We repeated our average ratio experiment as in section \ref{Legendre Symbol Races}. However, we instead searched for $2$, $3$, and $4$ consecutive residues having the same sign. We notice that the average ratios converge to their expected values quite slowly, supporting R. Lemke Oliver and K. Soundararajan's recent discovery.
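The run statistics can be sketched as follows in plain Python (rather than Sage), again for $p = 97$ with $q$ below $10^5$; at this short range the empirical run ratios land within about $0.1$ of the expected $(1/2)^{n-1}$.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 97
steps = [legendre(q, p) for q in primes_up_to(10 ** 5) if q != p]

def run_ratio(seq, n):
    # fraction of length-n windows whose entries all agree
    windows = len(seq) - n + 1
    hits = sum(1 for i in range(windows) if len(set(seq[i:i + n])) == 1)
    return hits / windows

for n in (2, 3, 4):
    assert abs(run_ratio(steps, n) - 0.5 ** (n - 1)) < 0.1
```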
\begin{figure}[htp]
\centering
\includegraphics[width=.33\textwidth]{ratio_1100s_q1000_p1000000.png}\hfill
\includegraphics[width=.33\textwidth]{ratio_111000s_q1000_p1000000.png}\hfill
\includegraphics[width=.33\textwidth]{ratio_11110000s_q1000_p1000000.png}
\caption{From Left to Right: Two Consecutive, Three Consecutive, Four Consecutive}
\label{fig:AvgConsecutives}
\end{figure}
\subsection{Bias in the Legendre Symbols Modulo Primes in the Mod 4 Races}
\label{mod4 races}
We repeated our Legendre symbol walk with fixed $p$, but for $q$ varying only over primes congruent to $1$ (mod $4$), and again with primes congruent to $3$ (mod $4$). We observed Chebyshev's bias in both cases (on average). However, when $q$ varied over primes congruent to $1$ (mod $4$), we noticed a much stronger bias. For example, if we consider the walks for $p$=97, the walk for $q \equiv 1 \pmod 4$ seems to lie mostly below the $t$-axis. On the other hand, the walk for $q \equiv 3 \pmod 4$ seems to lie mostly above the $t$-axis.
\begin{figure}[htp]
\centering
\includegraphics[width=.33\textwidth]{pQ97_p1000000_all.png}\hfill
\includegraphics[width=.33\textwidth]{pQ97_p1000000_quadratic.png}
\includegraphics[width=.33\textwidth]{pQ97_p1000000_nonquadratic.png}\hfill
\captionsetup{justification=centering,margin=2cm}
\caption{From Left to Right: iterating over all $q$, $q \equiv 1$ (mod $4$), $q \equiv 3$ (mod $4$). Iteration range for $q$ in all plots is $10,000,000$}
\label{fig:97walks}
\end{figure}
We wished to check if this pattern exists on average. For $q\equiv 1$ (mod $4$), the average converges to $0.50$ more slowly than the average for $q\equiv 3$ (mod $4$). It seems that only allowing $q$ to iterate over primes with nonquadratic residue (mod $4$) removes, or at least diminishes, some part of Chebyshev's bias. We noticed a similar, but less distinct (see section \ref{bias in consecutive primes} and \cite{unexpected_bias}), pattern while testing for consecutive residues being the same.
\begin{figure}[htp]
\centering
\includegraphics[width=.5\textwidth]{ratio_1s_q1000_p1000000_quad.png}\hfill
\includegraphics[width=.5\textwidth]{ratio_1s_q1000_p1000000_nonquad.png}
\captionsetup{justification=centering,margin=2cm}
\caption{Average ratios of quadratic residues for $\left(\frac{q}{p}\right)$ \\ Left: $q \equiv 1$ (mod $4$). \\ Right: $q \equiv 3$ (mod $4$)}
\label{fig:QuadNonquadratios}
\end{figure}
The following simple code can be used in Sage to generate a plot for Legendre symbol walks of $\left(\frac{q}{p}\right)$:
\begin{verbatim}
#declares maximum q-iteration range
maxN=10^7
#P must be an odd prime for legendre_symbol(q,P) to be defined
P = 97
primes = prime_range(3, maxN)
pm4={1:[], 3:[]}
pm4[1] = [q for q in primes if q % 4 == 1]
pm4[3] = [q for q in primes if q % 4 == 3]
#replace "3" with "1" to model walk with quadratic residues (mod 4)
lqP = [legendre_symbol(q, P) for q in pm4[3]]
print "Legendre symbol walk for P={} and q iterating over primes less than {}".format(P,maxN)
sum_lqP = TimeSeries(lqP).sums()
#replace "3" with "1" to model walk with quadratic residues (mod 4)
sum_lqP.plot()+plot([sqrt(x),-sqrt(x)],(x,0,len(pm4[3])))
\end{verbatim}
\pagebreak
\section{Findings in the Gaussian Primes}
Chebyshev's bias in the rational primes has been well-documented. However, there has been comparatively less experimental research on such a bias in the Gaussian primes. In this section, we extend our model of Legendre symbol walks to the Gaussian primes to see if a similar bias occurs. To do this, we must first introduce a way to map a Gaussian integer to its residue in the rational integers modulo a Gaussian prime.
\begin{prpn}
\label{prpn: isomorphism}
The map that sends a Gaussian integer $a+bi$ to a residue $r$ (mod $\pi$), where $\pi = \alpha + \beta i$, is an isomorphism of rings between $\mathbb{Z}[i]/\pi\mathbb{Z}[i]$ and $\mathbb{Z}/p\mathbb{Z}$, where $p = N(\pi)$. In particular, if $\pi$ is an irreducible element in $\mathbb{Z}[i]$, then the residue class ring $\mathbb{Z}[i]/\pi\mathbb{Z}[i]$ is a finite field with $N(\pi)$ elements.
\end{prpn}
A rigorous proof of proposition \ref{prpn: isomorphism} can be found in \cite{quadratic_reciprocity} as Theorem 12.
We first give a ``soft'' proof as motivation for calculating a residue before showing a more rigorous argument. For two distinct primes $p$ and $q$, the Euclidean algorithm shows that $\gcd(p,q) = 1$. This fact allows us to easily calculate the residue of $q \pmod p$. Let $p$ and $q$ be prime numbers with $q>p$, and let $n$ and $r$ be integers:
\begin{align*}
q &= pn + r \\
q-r &= pn \\
q-r &\equiv 0 \pmod p \\
r &\equiv q \pmod p
\end{align*}
where $r$ is an element of $(\mathbb{Z}/p\mathbb{Z})^\times$; i.e. $r$ is an element of the set of totatives of $p$.
We can extend this algorithm to the Gaussian primes. Let $a+bi$ and $\pi = \alpha + \beta i$ denote Gaussian primes with $N(a+bi) > N(\alpha+\beta i) = N(\pi)$. We can then write:
\begin{align*}
a+bi &= \pi(\phi+i\psi) + r \\
a+bi &= (\alpha+\beta i)(\phi+i\psi) + r \\
a+bi &= \alpha \phi + \alpha i \psi + \beta i \phi - \beta \psi + r
\end{align*}
We then group the real and imaginary terms:
\begin{align*}
a &= \alpha \phi - \beta \psi + r \\
b &= \alpha \psi + \beta \phi
\end{align*}
Use the imaginary component to solve for $\psi$, then solve for $a$:
\begin{align*}
\psi &= \frac{b - \beta \phi}{\alpha} \\
a &= \alpha \phi - \beta\left(\frac{b-\beta \phi}{\alpha}\right) + r\\
a &= \alpha \phi - \frac{b\beta}{\alpha} + \frac{\beta^2 \phi}{\alpha} + r
\end{align*}
Rearrange, multiply both sides by $\alpha$, and solve for $r$:
\begin{alignat*}{3}
a + \frac{b\beta}{\alpha} - r &= \alpha \phi + \frac{\beta^2 \phi}{\alpha} \\
a\alpha + b\beta - r\alpha &= \phi(\alpha^2 + \beta^2) \\
a\alpha + b\beta -r\alpha &\equiv 0 & \pmod{\alpha^2 + \beta^2} \\
a\alpha + b\beta &\equiv r\alpha & \pmod{\alpha^2 + \beta^2}\\
r &\equiv a + \alpha^{-1}b\beta & \pmod{\alpha^2 + \beta^2} \numberthis \label{eq: residue}
\end{alignat*}
where $r$ is an element from $(\mathbb{Z}/(\alpha^2 + \beta^2)\mathbb{Z})^\times = (\mathbb{Z}/N(\pi)\mathbb{Z})^\times = (\mathbb{Z}/p\mathbb{Z})^\times$ since $\alpha^2 + \beta^2 = N(\pi) = p$.
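Equation \eqref{eq: residue} can be verified numerically: the computed $r$ should leave $(a+bi) - r$ divisible by $\pi$ in $\mathbb{Z}[i]$. Below is a plain-Python (rather than Sage) sketch for $\pi = 3+2i$ (so $p = 13$), with the inverse of $\alpha$ modulo $p$ obtained from Fermat's little theorem; the helper names are our own.

```python
def residue(a, b, alpha, beta):
    # r = a + alpha^{-1} * b * beta (mod p), where p = N(pi)
    p = alpha * alpha + beta * beta
    alpha_inv = pow(alpha, p - 2, p)  # Fermat inverse, p prime
    return (a + alpha_inv * b * beta) % p

def divides(alpha, beta, a, b):
    # pi = alpha + beta*i divides a + b*i iff N(pi) divides both
    # components of (a + b*i) * conjugate(pi)
    p = alpha * alpha + beta * beta
    return (a * alpha + b * beta) % p == 0 and (b * alpha - a * beta) % p == 0

alpha, beta = 3, 2  # pi = 3 + 2i, N(pi) = 13
for a in range(-6, 7):
    for b in range(-6, 7):
        r = residue(a, b, alpha, beta)
        assert divides(alpha, beta, a - r, b)  # pi | (a + bi) - r
```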
\pagebreak
The idea is to use this residue to calculate the value of a \textit{Gaussian Legendre symbol} $\left[\frac{a+bi}{\pi}\right]$ with the hope of observing a bias as in the rationals. First, we must lay the groundwork by introducing several concepts. (A comprehensive reference by Nancy Buck regarding Gaussian Legendre symbols, which includes the full proofs for the following propositions, can be found in \cite{quadratic_reciprocity}. Since many of the proofs are quite lengthy, we will only highlight sections relevant for our model).
\begin{defn}
For $k, l, \pi \in \mathbb{Z}[i]$, let $\pi$ be a Gaussian prime $\not = u(1+i)$ and such that $k$ and $l$ are not divisible by $\pi$. The \textit{Gaussian Legendre symbol} has the following properties:
\end{defn}
\begin{itemize}
\item $\left[\dfrac{k}{\pi}\right] = \left[\dfrac{l}{\pi}\right]$ for $k \equiv l \pmod \pi$
\item $\left[\dfrac{k}{\pi}\right] \cdot \left[\dfrac{l}{\pi}\right] = \left[\dfrac{kl}{\pi}\right]$
\end{itemize}
For $p=N(\pi)$, the second point can be equivalently expressed as:
\begin{equation*}
k^{\frac{p-1}{2}}l^{\frac{p-1}{2}} = (kl)^{\frac{p-1}{2}} \equiv \left[\dfrac{kl}{\pi}\right] \pmod \pi
\end{equation*}
In addition, we have an analog of Euler's criterion in the Gaussian Legendre symbols:
\begin{equation*}
\left[\dfrac{k}{\pi}\right] \equiv k^{(p-1)/2} \pmod \pi
\end{equation*}
\begin{thm}
\label{thm: GLS}
Every Gaussian Legendre symbol can be expressed in terms of a Legendre symbol in the rational integers.
\end{thm}
In particular, we have the following two equations for $\left[\dfrac{k}{\pi}\right]$. Let $k = a+bi$, $\pi = \alpha + \beta i$, and $N(\pi) = p$. Then:
\begin{alignat}{3}
\label{eq:no imaginary part}
\left[\dfrac{a+bi}{\alpha}\right] &= \left(\dfrac{a^2 + b^2}{\alpha}\right); \quad && \pi \equiv 3 & \pmod 4\\
\label{eq:with imaginary part}
\left[\frac{a+bi}{\alpha+\beta i}\right] &= \left(\frac{a\alpha + b\beta}{p}\right); \quad && N(\pi) \equiv 1 & \pmod 4
\end{alignat}
Recall that if $\pi$ is a prime element in $\mathbb{Z}[i]$, a zero imaginary part implies that $\pi = \alpha \equiv 3 \pmod 4$. For the proof of equation \eqref{eq:no imaginary part}, we must show that there exists an element $x \in \mathbb{Z}[i]$ such that $x^2 \equiv a + bi \pmod \alpha$ has a solution. We set $x = \phi + \psi i$ so that $\phi^2 - \psi^2 +2\phi\psi i \equiv a + bi \pmod \alpha$. Then we have the following two congruences by grouping real and imaginary terms:
\begin{alignat*}{3}
\phi^2 - \psi^2 &\equiv a \pmod \alpha \\
2\phi\psi &\equiv b \pmod \alpha
\end{alignat*}
We then square each congruence and add them together to get:
\begin{equation*}
\phi^4 + 2\phi^2\psi^2 + \psi^4 = (\phi^2 + \psi^2)^2 \equiv a^2+b^2 \pmod \alpha
\end{equation*}
It then suffices to check that there exist $\phi, \psi \in \mathbb{Z}$ such that both congruences have simultaneous solutions for the cases $a \not \equiv 0 \pmod \alpha$ and $a \equiv 0 \pmod \alpha$ (shown in \cite{quadratic_reciprocity}). Doing so shows that $\left[\dfrac{a+bi}{\alpha}\right]=1$ if and only if $\left(\dfrac{a^2+b^2}{\alpha}\right) = 1$. In other words, we arrive at equation \eqref{eq:no imaginary part}: $\left[\dfrac{a+bi}{\alpha}\right]=\left(\dfrac{a^2+b^2}{\alpha}\right)$.
We now wish to consider the more interesting case when $N(\pi) \equiv 1 \pmod 4$; i.e. when $\pi = \alpha+\beta i$ for $\alpha,\beta \in \mathbb{Z}\backslash\{0\}$ and $\pi \not = (1+i)$. Let $\alpha$ be odd and $\beta$ be even. Let $k = a+bi$ with $a,b \in \mathbb{Z}$ and $\gcd(\pi, k)=1$. As above, we wish to determine if $x^2 \equiv a+bi \pmod \pi$ has a solution for $x \in \mathbb{Z}[i]$.
Recall that $p = N(\pi)$ is a prime congruent to $1 \pmod 4$. By proposition \ref{prpn: isomorphism}, we know the set of congruence class representatives modulo $\pi$ is $\{0, 1, 2, \ldots, p-1\}$. This allows us to only consider $x \in \mathbb{Z}$ when determining if $x^2 \equiv a+bi \pmod \pi$ has a solution.
We start by writing our congruence as an equivalence. The congruence $x^2 \equiv a+bi \pmod \pi$ is solvable if and only if there exists $x, \phi, \psi \in \mathbb{Z}$ such that:
\begin{align*}
x^2 - a - bi &= (\phi + \psi i )(\alpha+\beta i) \\
x^2 - a - bi &= \phi\alpha + \phi\beta i + \alpha \psi i - \beta \psi
\end{align*}
We then group the real and imaginary terms into separate equations:
\begin{align*}
x^2 - a &= \phi \alpha - \beta \psi \\
-b & = \phi\beta + \alpha \psi
\end{align*}
Then we multiply the real part by $\alpha$ and the imaginary part by $\beta$ and add:
\begin{align*}
x^2\alpha - a\alpha &= \phi\alpha^2-\beta\psi\alpha \\
-b\beta &= \phi\beta^2 + \alpha\beta\psi \\
x^2\alpha - a\alpha-b\beta &=\phi\alpha^2+\phi\beta^2 \\
x^2\alpha - a\alpha-b\beta &=p\phi \\
x^2\alpha &= p\phi + a\alpha+b\beta
\end{align*}
Converting back to a congruence statement modulo $p$, we arrive at the following result:
\begin{equation*}
x^2\alpha \equiv a\alpha +b\beta \pmod p
\end{equation*}
\begin{equation}
\left(\dfrac{x^2 \alpha}{p}\right) = \left(\dfrac{a\alpha + b\beta}{p}\right) = \left(\dfrac{\alpha}{p}\right)\left(\dfrac{a+\alpha^{-1}b\beta}{p}\right) = \left(\dfrac{\alpha}{p}\right)\left(\dfrac{r}{p}\right)
\end{equation}
All that remains is to show that $\left(\dfrac{\alpha}{p}\right) = 1$. To do this, we use the law of quadratic reciprocity as described in proposition \ref{prpn: quadratic reciprocity}:
\begin{equation*}
\left(\dfrac{\alpha}{p}\right) = \mathlarger{(-1)}^{(\alpha-1)(p-1)/4}\left(\dfrac{p}{\alpha}\right)
\end{equation*}
Since $p \equiv 1 \pmod 4$, $p-1 \equiv 0 \pmod 4$. Thus, $\left(\dfrac{\alpha}{p}\right) = \left(\dfrac{p}{\alpha}\right)$. In addition, recall that $p = \alpha^2 + \beta^2$, so $p \equiv \beta^2 \pmod \alpha$. Thus, we can write $\left(\dfrac{\alpha}{p}\right) = \left(\dfrac{p}{\alpha}\right) = \left(\dfrac{\beta^2}{\alpha}\right)= \left(\dfrac{\beta}{\alpha}\right)\left(\dfrac{\beta}{\alpha}\right)$. It is then clear that regardless of the value of $\left(\dfrac{\beta}{\alpha}\right)$, we have $\left(\dfrac{\alpha}{p}\right) =1$.
In conclusion, we arrive at equation \eqref{eq:with imaginary part}:
\begin{equation*}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left(\dfrac{a\alpha + b\beta}{p}\right) = \left(\dfrac{r}{p}\right)
\end{equation*}
\textbf{The Experiment.}
While implementing our random walk model on Sage, we decided to fix $\pi = \alpha+\beta i$ and let $a+bi$ iterate over Gaussian primes in the first quadrant sorted by increasing norm. In the case of $a+bi = a \equiv 3 \pmod 4$, the sorting is obvious. However, when $N(a+bi) = q \equiv 1 \pmod 4$, there are exactly two (distinct) Gaussian primes with norm $q$ (we have $a+bi$ and $b+ai = i(\overline{a+bi})$, where $a^2 + b^2 = q$). When this is the case, we sort by the size of the real component. (For example, when $q = 17 = N(1+4i) = N(4+i)$, we find the residue of $1+4i \pmod \pi$ first and then proceed to find the residue of $4+i \pmod \pi$).
\pagebreak
When viewed individually, the resulting plots resemble the Legendre symbol walks in section \ref{Legendre Symbol Races}. However, we observe an interesting phenomenon when comparing walks that have the same $p = N(\pi_1) = N(\pi_2)$ where $\pi_1$ and $\pi_2$ are fixed with $a+bi$ iterating. We noticed for some $p$, the plots for $\pi_1$ and $\pi_2$ have strong positive correlation. For other $p$, the plots for $\pi_1$ and $\pi_2$ have strong negative correlation.
\begin{figure}[htp]
\centering
\includegraphics[width=.5\textwidth]{Norm97_4_9i_1000.png}\hfill
\includegraphics[width=.5\textwidth]{Norm97_9_4i_1000.png}
\captionsetup{justification=centering,margin=2cm}
\caption{Gaussian Legendre symbol walks for $p=97$ \\ (strong positive correlation)\\ Left: $\left[\dfrac{a+bi}{4+9i}\right]$. \quad Right: $\left[\dfrac{a+bi}{9+4i}\right]$}
\label{fig:97gaussianwalks}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=.5\textwidth]{Norm29_2_5i_1000.png}\hfill
\includegraphics[width=.5\textwidth]{Norm29_5_2i_1000.png}
\captionsetup{justification=centering,margin=2cm}
\caption{Gaussian Legendre symbol walks for $p=29$ \\ (strong negative correlation) \\ Left: $\left[\dfrac{a+bi}{2+5i}\right]$. \quad Right: $\left[\dfrac{a+bi}{5+2i}\right]$}
\label{fig:29gaussianwalks}
\end{figure}
Before we attempt to (partially) explain this phenomenon, we must first introduce additional theory.
\begin{thm}
The following $3$ properties hold for the Gaussian Legendre symbol:
\end{thm}
\begin{align}
\label{eq: thm4 1}
\left[\dfrac{i}{\alpha+\beta i}\right] &= \left(-1\right)^{\frac{p-1}{4}} \\
\label{eq: thm4 2}
\left[\dfrac{1+i}{\alpha+\beta i}\right] &= \left(-1\right)^{\frac{(\alpha+\beta)^2-1}{8}} \\
\label{eq: thm4 3}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] & =
\left[\dfrac{\alpha+\beta i}{a+bi}\right]
\end{align}
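The first two properties can be verified numerically for several first-quadrant primes $\pi = \alpha+\beta i$ (with $\alpha$ odd and $\beta$ even), using the residue map from equation \eqref{eq: residue} together with the rational Legendre symbol. The sketch below is plain Python rather than Sage.

```python
def legendre(a, p):
    # Euler's criterion in the rational integers
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

for alpha, beta in [(1, 2), (3, 2), (5, 2), (5, 4), (7, 2), (9, 4)]:
    p = alpha ** 2 + beta ** 2            # 5, 13, 29, 41, 53, 97
    alpha_inv = pow(alpha, p - 2, p)
    r_i = (alpha_inv * beta) % p          # residue of i   (a=0, b=1)
    r_1i = (1 + alpha_inv * beta) % p     # residue of 1+i (a=1, b=1)
    assert legendre(r_i, p) == (-1) ** ((p - 1) // 4)
    assert legendre(r_1i, p) == (-1) ** (((alpha + beta) ** 2 - 1) // 8)
```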
\pagebreak
The proof of equation \eqref{eq: thm4 1} is as follows:
From Euler's criterion in the Gaussian Legendre symbols, we know that $i^{(p-1)/2} \equiv \left[\dfrac{i}{\alpha+\beta i}\right] \pmod{\alpha+\beta i}$. We note that $i^{(p-1)/2}$ can be rewritten as follows:
\begin{equation*}
\mathlarger{i}^{\frac{p-1}{2}} = \mathlarger{i}^{2 \cdot \frac{p-1}{4}} = \mathlarger{(-1)}^{\frac{p-1}{4}}.
\end{equation*}
Thus, we have the congruence:
\begin{equation*}
\mathlarger{(-1)}^{\frac{p-1}{4}} \equiv \left[\dfrac{i}{\alpha+\beta i}\right] \pmod{\alpha+\beta i}
\end{equation*}
For a proof by contradiction, suppose that the two sides are not congruent. Since each side is $\pm 1$, this would mean $-1 \equiv 1 \pmod{\alpha+\beta i}$. Converting the congruence to an equivalence, we get:
\begin{equation*}
-2 = (\alpha+\beta i)(\phi+\psi i)
\end{equation*}
We then take norms of both sides and simplify:
\begin{align*}
N(-2) &= N(\alpha + \beta i)N(\phi+\psi i) \\
4 &= p \cdot N(\phi + \psi i)
\end{align*}
This implies that $p \vert 4$, which cannot be true since $p \equiv 1 \pmod 4$. Therefore, we arrive at equation \eqref{eq: thm4 1}:
\begin{equation*}
\left[\dfrac{i}{\alpha+\beta i}\right] = \left(-1\right)^{\frac{p-1}{4}}.
\end{equation*}
For the proof of equation \eqref{eq: thm4 2}, we must consider two cases: when $\beta=0$ and when $\beta\not=0$.
Case 1: let $\beta=0$, so $p = \alpha^2$ and $\alpha \equiv 3 \pmod 4$. Recall our relations between the Gaussian Legendre symbols and the Legendre symbols in the rational integers as shown in theorem \ref{thm: GLS}. From equation \eqref{eq:no imaginary part}, we have:
\begin{equation*}
\left[\dfrac{1+i}{\alpha}\right] = \left(\dfrac{1+1}{\alpha}\right) = \left(\dfrac{2}{\alpha}\right)
\end{equation*}
Recall our second supplement of quadratic reciprocity in the rational integers. We can then express this as:
\begin{equation*}
\left(\dfrac{2}{\alpha}\right) = \mathlarger{(-1)}^{\frac{\alpha^2 -1}{8}} = \mathlarger{(-1)}^{\frac{(\alpha+\beta)^2 -1}{8}}
\end{equation*}
Case 2: Let $\beta \not = 0$, so $p = \alpha^2 + \beta^2$ and $p \equiv 1 \pmod 4$. By equation \eqref{eq:with imaginary part}, we have:
\begin{equation*}
\left[\dfrac{1+i}{\alpha+\beta i}\right] = \left(\dfrac{\alpha+\beta}{p}\right).
\end{equation*}
Since our model only uses prime elements in the first quadrant, we assume that $\vert \alpha+\beta \vert > 1$ (the full proof without this assumption can be found in \cite{quadratic_reciprocity}). We continue by using the law of quadratic reciprocity:
\begin{equation*}
\left(\dfrac{\alpha+\beta}{p}\right) = \mathlarger{(-1)}^{\left(\frac{p-1}{2}\right)\left(\frac{\alpha+\beta-1}{2}\right)}\left(\dfrac{p}{\alpha+\beta}\right)
\end{equation*}
Since $p \equiv 1 \pmod 4$, the factor $\frac{p-1}{2}$ is even, so the exponent $\left(\frac{p-1}{2}\right)\left(\frac{\alpha+\beta -1}{2}\right)$ is even. Hence $\left(\dfrac{\alpha+\beta}{p}\right) = \left(\dfrac{p}{\alpha+\beta}\right)$.
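Since $\alpha+\beta$ need not be prime, $\left(\dfrac{p}{\alpha+\beta}\right)$ is really a Jacobi symbol; the flip used above still holds for Jacobi symbols when $p \equiv 1 \pmod 4$. A quick numerical sanity check (our own sketch, using a textbook binary Jacobi-symbol routine; the test pairs are arbitrary choices):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0 (standard binary algorithm)."""
    assert n > 0 and n % 2 == 1
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):       # second supplement handles the factor 2
                t = -t
        a, n = n, a                   # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

# For p ≡ 1 (mod 4) the reciprocity sign is +1, so the symbol flips
# freely: (x/p) = (p/x) for any odd x > 1 coprime to p.
for p, x in [(5, 3), (13, 5), (17, 5), (29, 7), (37, 9), (41, 15)]:
    assert p % 4 == 1 and x % 2 == 1
    assert jacobi(x, p) == jacobi(p, x)
print("reciprocity flip verified")
```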
\pagebreak
Next, we multiply $p$ by $2$ and apply a clever series of manipulations. We note that:
\begin{align*}
2p &= 2(\alpha^2 + \beta^2) \\
&= (\alpha^2 + 2\alpha\beta + \beta^2) + (\alpha^2 - 2\alpha\beta + \beta^2) \\
&= (\alpha+\beta)^2 + (\alpha-\beta)^2 \\
0 &= (\alpha+\beta)^2 + (\alpha-\beta)^2 -2p \\
-(\alpha + \beta)^2 & = (\alpha-\beta)^2 -2p \\
0 & \equiv (\alpha-\beta)^2 - 2p \pmod{\alpha+\beta} \\
2p & \equiv (\alpha-\beta)^2 \pmod{\alpha+\beta}
\end{align*}
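The identity driving this manipulation, $2p = (\alpha+\beta)^2 + (\alpha-\beta)^2$, is easy to confirm numerically (our own illustration; `two_squares` is a hypothetical brute-force helper):

```python
# Numerical check: for p = alpha^2 + beta^2,
#   2p = (alpha+beta)^2 + (alpha-beta)^2,
# hence 2p ≡ (alpha-beta)^2 (mod alpha+beta).

def two_squares(p):
    """Return (alpha, beta) with alpha^2 + beta^2 = p (for p ≡ 1 mod 4)."""
    for a in range(1, int(p**0.5) + 1):
        b2 = p - a * a
        b = int(b2**0.5)
        if b * b == b2:
            return a, b
    raise ValueError("no decomposition")

for p in [5, 13, 17, 29, 37, 41, 53]:
    alpha, beta = two_squares(p)
    assert 2 * p == (alpha + beta) ** 2 + (alpha - beta) ** 2
    assert (2 * p - (alpha - beta) ** 2) % (alpha + beta) == 0
print("2p ≡ (alpha-beta)^2 (mod alpha+beta) verified")
```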
Let $x = \alpha - \beta$. Then $x$ is a solution to the congruence $x^2 \equiv 2p \pmod{\alpha+\beta}$. Then we have:
\begin{alignat*}{3}
\left(\dfrac{x^2}{\alpha+\beta}\right) &= \left(\dfrac{x}{\alpha+\beta}\right)\left(\dfrac{x}{\alpha+\beta}\right) &&= 1 \\
\left(\dfrac{x^2}{\alpha+\beta}\right) &= \left(\dfrac{2p}{\alpha+\beta}\right) &&= 1 \\
\left(\dfrac{2p}{\alpha+\beta}\right) &= \left(\dfrac{2}{\alpha+\beta}\right)\left(\dfrac{p}{\alpha+\beta}\right) &&= 1
\end{alignat*}
This implies that $\left(\dfrac{2}{\alpha+\beta}\right) = \left(\dfrac{p}{\alpha+\beta}\right) = \left(\dfrac{\alpha+\beta}{p}\right) = \left[\dfrac{1+i}{\alpha+\beta i}\right]$. Using the second supplement to quadratic reciprocity, we have:
\begin{equation*}
\left[\dfrac{1+i}{\alpha+\beta i}\right] = \left(\dfrac{2}{\alpha+\beta}\right) = \mathlarger{(-1)}^{\frac{(\alpha+\beta)^2-1}{8}}.
\end{equation*}
For the proof of equation \eqref{eq: thm4 3}, we must consider three cases:
\begin{enumerate}
\item $b = \beta = 0$
\item $b = 0$ and $\beta \not = 0$
\item $b \not = 0$ and $\beta \not =0$.
\end{enumerate}
Case 1: Let $b = \beta = 0$. Then by equation \eqref{eq:no imaginary part}:
\begin{align*}
\left[\dfrac{a}{\alpha}\right] &= \left(\dfrac{a^2}{\alpha}\right) = 1\\
\left[\dfrac{\alpha}{a}\right] &= \left(\dfrac{\alpha^2}{a}\right) = 1
\end{align*}
It is then clear that $\left[\dfrac{a}{\alpha}\right] = \left[\dfrac{\alpha}{a}\right] = 1$.
Case 2: Assume $b = 0$ and $\beta \not = 0$. Then:
\begin{equation*}
\left[\dfrac{a}{\alpha+\beta i}\right] = \left(\dfrac{a\alpha}{p}\right) = \left(\dfrac{a}{p}\right)\left(\dfrac{\alpha}{p}\right) = \left(\dfrac{a}{p}\right)
\end{equation*}
(Recall that we have already shown in theorem \ref{thm: GLS} that $\left(\dfrac{\alpha}{p}\right)=1$.) Then we have:
\begin{equation*}
\left[\dfrac{\alpha+\beta i}{a}\right] = \left(\dfrac{\alpha^2+\beta^2}{a}\right) = \left(\dfrac{p}{a}\right)
\end{equation*}
From quadratic reciprocity, we know that $\left(\dfrac{a}{p}\right) = \mathlarger{(-1)}^{\frac{(p-1)(a-1)}{4}} \left(\dfrac{p}{a}\right)$.
Since $p\equiv 1 \pmod 4$, we then see that $\left(\dfrac{a}{p}\right) = \left(\dfrac{p}{a}\right)$. Thus, we have:
\begin{equation*}
\left[\dfrac{a}{\alpha+\beta i}\right] = \left[\dfrac{\alpha+\beta i}{a}\right].
\end{equation*}
Case 3: Assume both $b$ and $\beta$ are nonzero. Since $a+bi$ and $\alpha+\beta i$ are distinct odd Gaussian primes, we have:
\begin{align*}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] &= \left(\dfrac{a\alpha+b\beta}{p}\right) \\
\left[\dfrac{\alpha+\beta i}{a+ bi}\right] &= \left(\dfrac{a\alpha+b\beta}{q}\right)
\end{align*}
where $p = \alpha^2 + \beta^2$ and $q=a^2 + b^2$. Since we are working in the first quadrant, we assume that $a\alpha+b\beta >1$. We then wish to perform another manipulation (the idea is similar to the proof of equation \eqref{eq: thm4 2}). In particular, we wish to show that a certain congruence is solvable (mod $a\alpha+b\beta$). We note that:
\begin{alignat*}{3}
(a\alpha+b\beta)^2 + (a\beta-b\alpha)^2 & = a^2\alpha^2 +2ab\alpha\beta+b^2\beta^2+a^2\beta^2-2ab\alpha\beta+b^2\alpha^2 \\
&= a^2\alpha^2 +b^2\beta^2 +a^2\beta^2+ b^2\alpha^2 \\
&=(\alpha^2+\beta^2)(a^2+b^2) \\
(a\alpha+b\beta)^2 + (a\beta-b\alpha)^2 &= pq \\
(a\alpha+b\beta)^2 &= pq -(a\beta-b\alpha)^2 \\
0 &\equiv pq -(a\beta-b\alpha)^2 \pmod{a\alpha+b\beta}\\
pq &\equiv (a\beta-b\alpha)^2 \pmod{a\alpha+b\beta}
\end{alignat*}
We then set $a\beta-b\alpha = x$. Thus we have the congruence:
\begin{equation*}
pq \equiv x^2 \pmod{a\alpha+b\beta}.
\end{equation*}
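The underlying identity, $(a\alpha+b\beta)^2 + (a\beta-b\alpha)^2 = pq$, is the Brahmagupta-Fibonacci identity, and the resulting congruence can be confirmed numerically (our own illustration; `two_squares` is a hypothetical brute-force helper):

```python
# Numerical check: (a*alpha + b*beta)^2 + (a*beta - b*alpha)^2 = pq,
# hence pq ≡ (a*beta - b*alpha)^2 (mod a*alpha + b*beta).
import itertools

def two_squares(p):
    """Return (x, y) with x^2 + y^2 = p (for p ≡ 1 mod 4)."""
    for x in range(1, int(p**0.5) + 1):
        y2 = p - x * x
        y = int(y2**0.5)
        if y * y == y2:
            return x, y
    raise ValueError("no decomposition")

for p, q in itertools.combinations([5, 13, 17, 29, 37, 41], 2):
    alpha, beta = two_squares(p)
    a, b = two_squares(q)
    lhs = (a * alpha + b * beta) ** 2 + (a * beta - b * alpha) ** 2
    assert lhs == p * q
    assert (p * q - (a * beta - b * alpha) ** 2) % (a * alpha + b * beta) == 0
print("pq ≡ (a*beta - b*alpha)^2 (mod a*alpha + b*beta) verified")
```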
To finish the proof, we show:
\begin{align*}
\left(\dfrac{x^2}{a\alpha+b\beta}\right) &= \left(\dfrac{x}{a\alpha+b\beta}\right)\left(\dfrac{x}{a\alpha+b\beta}\right) = 1 \\
\left(\dfrac{x^2}{a\alpha+b\beta}\right) &= \left(\dfrac{pq}{a\alpha+b\beta}\right) = \left(\dfrac{p}{a\alpha+b\beta}\right)\left(\dfrac{q}{a\alpha+b\beta}\right) = 1
\end{align*}
which implies that $\left(\dfrac{p}{a\alpha+b\beta}\right) = \left(\dfrac{q}{a\alpha+b\beta}\right)$. Since we know that $p$ and $q$ are primes in $\mathbb{Z}$ that are congruent to $1 \pmod 4$, by quadratic reciprocity, we can equivalently write this as: $\left(\dfrac{a\alpha+b\beta}{p}\right) = \left(\dfrac{a\alpha+b\beta}{q}\right)$. By applying equation \eqref{eq:with imaginary part} of theorem \ref{thm: GLS}, we then see that $\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{\alpha+\beta i}{a+bi}\right]$.
\pagebreak
We now attempt to explain the strong ($\pm$) correlations we observed between Gaussian Legendre symbol walks with $\pi_1$ and $\pi_2$ fixed, where $\pi_2 = i\overline{\pi_1}$, as $a+bi$ iterates over Gaussian primes in the first quadrant.
We first wish to establish a relationship between $\left[\dfrac{a+bi}{\alpha+\beta i}\right]$ and $\left[\dfrac{b+ai}{\alpha+\beta i}\right]$. This will allow us to find their combined contribution. (Recall the iteration order is one of $\left[\dfrac{a+bi}{\alpha+\beta i}\right] \rightarrow \left[\dfrac{b+ai}{\alpha+\beta i}\right]$ or $\left[\dfrac{b+ai}{\alpha+\beta i}\right] \rightarrow \left[\dfrac{a+bi}{\alpha+\beta i}\right]$, based on the size of the real part).
To find the conditions such that $\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{b+ai}{\alpha+\beta i}\right]$ we set:
\begin{align*}
1 &= \left[\dfrac{a+bi}{\alpha+\beta i}\right]\cdot \left[\dfrac{b+ai}{\alpha+\beta i}\right] \\
&= \left[\dfrac{ab + a^2i +b^2i-ab}{\alpha+\beta i}\right] \\
&= \left[\dfrac{i}{\alpha+\beta i}\right]\cdot \left[\dfrac{a^2+b^2}{\alpha+\beta i}\right] \\
&=\mathlarger{(-1)}^{(p-1)/4}\left[\dfrac{q}{\alpha+\beta i}\right] \\
&= \mathlarger{(-1)}^{(p-1)/4}\left(\dfrac{q}{p}\right) \left(\dfrac{\alpha}{p}\right) \\
&= \mathlarger{(-1)}^{(p-1)/4}\left(\dfrac{q}{p}\right) \numberthis \label{eq: ab-ba relation}
\end{align*}
Thus, $\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{b+ai}{\alpha+\beta i}\right]$ if $\frac{p-1}{4}$ is even and $\left(\dfrac{q}{p}\right) = 1$, or if $\frac{p-1}{4}$ is odd and $\left(\dfrac{q}{p}\right) = -1$. The conditions for the equivalence of $\left[\dfrac{a+bi}{\beta+\alpha i}\right] = \left[\dfrac{b+ai}{\beta+\alpha i}\right]$ are similar.
Case 1: Let $\pi_1=\alpha+\beta i$ and $\pi_2 = \beta+ \alpha i$, where $N(\pi_1) = N(\pi_2) = p$. Let $\frac{p-1}{4}$ be an even integer. Suppose $\left(\dfrac{q}{p}\right) = 1$. Then by our equivalence relations, we have:
\begin{equation*}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{b+ai}{\alpha+\beta i}\right] \quad \text{and} \quad \left[\dfrac{a+bi}{\beta+\alpha i}\right] = \left[\dfrac{b+ai}{\beta+\alpha i}\right]
\end{equation*}
Thus, whether the iteration order is $\left[\dfrac{a+bi}{\alpha+\beta i}\right] \rightarrow \left[\dfrac{b+ai}{\alpha+\beta i}\right]$ or $\left[\dfrac{b+ai}{\alpha+\beta i}\right] \rightarrow \left[\dfrac{a+bi}{\alpha+\beta i}\right]$, the combined contribution is one of $\pm2$. The same is true with $\left[\dfrac{a+bi}{\beta + \alpha i}\right] \rightarrow \left[\dfrac{b+ai}{\beta + \alpha i}\right]$ or $\left[\dfrac{b+ai}{\beta + \alpha i}\right] \rightarrow \left[\dfrac{a+bi}{\beta+\alpha i}\right]$.
We now consider the case when $\frac{p-1}{4}$ is still an even integer, but $\left(\frac{q}{p}\right) = -1$. Then by our equivalence relations, we have:
\begin{equation*}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] \not = \left[\dfrac{b+ai}{\alpha+\beta i}\right] \quad \text{and} \quad \left[\dfrac{a+bi}{\beta + \alpha i}\right] \not = \left[\dfrac{b+ai}{\beta+\alpha i}\right]
\end{equation*}
Thus, for any norm-sorted iteration order, the combined contribution will be 0.
Case 2: Now we let $\frac{p-1}{4}$ be an odd integer. Suppose $\left(\dfrac{q}{p}\right) = 1$. From our equivalence relations, we know that:
\begin{equation*}
\left[\dfrac{a+bi}{\alpha+\beta i}\right] \not = \left[\dfrac{b+ai}{\alpha+\beta i}\right] \quad \text{and} \quad \left[\dfrac{a+bi}{\beta + \alpha i}\right] \not = \left[\dfrac{b+ai}{\beta+\alpha i}\right]
\end{equation*}
Then for any norm-sorted iteration order, the combined contribution from $a+bi$ and $b+ai$ will be zero for both the walks of $\pi_1$ and $\pi_2$.
Now we consider the case when $\frac{p-1}{4}$ is still odd, but $\left(\frac{q}{p}\right) = -1$. In this case, $\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{b+ai}{\alpha+\beta i}\right]$. Thus, for any norm-sorted iteration order, the combined contribution will be one of $\pm 2$.
If we can establish the conditions for equivalence between $\left[\dfrac{a+bi}{\alpha+\beta i}\right]$ and $\left[\dfrac{a+bi}{\beta+\alpha i}\right]$, we will be able to fully explain the strong positive and negative correlations observed. (Note: it still remains to show what happens when $a+bi$ iterates over Gaussian primes of the form $a+bi$ with $b=0$ and $a\equiv 3 \pmod 4$. However, since prime elements of this form are much more sparse by equation \eqref{eq: dirichlets gaussian thm}, we can ignore them for the purposes of our explanation). Unfortunately, we found it quite difficult to rigorously prove the equivalence conditions (in particular, because the Legendre (more precisely, Jacobi) symbol $\left(\dfrac{p}{\beta}\right)$ is not defined for $\beta$ an even integer), so we leave it as a conjecture.
\begin{conj}
The equivalence between $\left[\dfrac{a+bi}{\alpha+\beta i}\right]$ and $\left[\dfrac{a+bi}{\beta+\alpha i}\right]$ depends only on the value of the Legendre symbol $\left(\dfrac{q}{p}\right)$. In particular, $\left[\dfrac{a+bi}{\alpha+\beta i}\right] = \left[\dfrac{a+bi}{\beta+\alpha i}\right]$ if $\left(\dfrac{q}{p}\right) = 1$, and $\left[\dfrac{a+bi}{\alpha+\beta i}\right] \not= \left[\dfrac{a+bi}{\beta+\alpha i}\right]$ if $\left(\dfrac{q}{p}\right) \not= 1$.
\end{conj}
We will use the following shorthand notation for clarity and convenience:
\begin{align*}
\pi_{1a} &= \left[\dfrac{a+bi}{\alpha+\beta i}\right] \quad
\pi_{1b} = \left[\dfrac{b+ai}{\alpha+\beta i}\right] \\
\pi_{2a} &= \left[\dfrac{a+bi}{\beta+\alpha i}\right] \quad
\pi_{2b} = \left[\dfrac{b+ai}{\beta+\alpha i}\right] \\
\pi_1 &= \pi_{1a} + \pi_{1b} \quad \quad \pi_2 = \pi_{2a} + \pi_{2b}
\end{align*}
To summarize, we have shown (or, in the case of equations \eqref{pi1a2a} and \eqref{pi1b2b}, conjectured) the following relations:
\begin{align}
\label{pi1a1b}
\pi_{1a} \pi_{1b} = \mathlarger{(-1)}^{(p-1)/4}\left(\frac{q}{p}\right) \\
\label{pi2a2b}
\pi_{2a} \pi_{2b} = \mathlarger{(-1)}^{(p-1)/4}\left(\frac{q}{p}\right) \\
\label{pi1a2a}
\pi_{1a} \pi_{2a} = \left(\frac{q}{p}\right) \\
\label{pi1b2b}
\pi_{1b} \pi_{2b} = \left(\frac{q}{p}\right)
\end{align}
We can now explain the strong ($\pm$) correlations between plots for $\pi_1$ and $\pi_2$ fixed.
Consider the case when $\frac{p-1}{4}$ is even and $\left(\dfrac{q}{p}\right) = 1$. If $\pi_{1a} = 1$ (resp. $-1$), then by equation \eqref{pi1a1b}, $\pi_{1b}=1$ (resp. $-1$). Using equation \eqref{pi1a2a}, $\pi_{2a}=1$ (resp. $-1$), and by equation \eqref{pi2a2b}, $\pi_{2b} = 1$ (resp. $-1$). Thus, when $\frac{p-1}{4}$ is even and $\left(\dfrac{q}{p}\right) = 1$, the walks for $\pi_{1}$ and $\pi_{2}$ move exactly together with combined contribution one of $\pm 2$. Consider the case when $\frac{p-1}{4}$ is even and $\left(\dfrac{q}{p}\right) = -1$. If $\pi_{1a} = 1$ (resp. $-1$), then by equation \eqref{pi1a1b}, $\pi_{1b}=-1$ (resp. $1$). Using equation \eqref{pi1a2a}, $\pi_{2a}=-1$ (resp. $1$), and by equation \eqref{pi2a2b}, $\pi_{2b} = 1$ (resp. $-1$). Then $\pi_1$ and $\pi_2$ do not move together, but the combined contribution for that particular $q$ is $0$, so there is little movement and the correlation remains close to $+1$.
\pagebreak
Consider the case when $\frac{p-1}{4}$ is odd and $\left(\dfrac{q}{p}\right) = 1$. If $\pi_{1a} = 1$ (resp. $-1$), then by equation \eqref{pi1a1b}, $\pi_{1b}=-1$ (resp. $1$). Using equation \eqref{pi1a2a}, $\pi_{2a}=1$ (resp. $-1$), and by equation \eqref{pi2a2b}, $\pi_{2b} = -1$ (resp. $1$). Thus, when $\frac{p-1}{4}$ is odd and $\left(\dfrac{q}{p}\right) = 1$, the walks move together, but with a combined contribution of $0$ for that particular $q$. Consider the case when $\frac{p-1}{4}$ is odd and $\left(\dfrac{q}{p}\right) = -1$. If $\pi_{1a} = 1$ (resp. $-1$), then by equation \eqref{pi1a1b}, $\pi_{1b}=1$ (resp. $-1$). Using equation \eqref{pi1a2a}, $\pi_{2a}=-1$ (resp. $1$), and by equation \eqref{pi2a2b}, $\pi_{2b} = -1$ (resp. $1$). Then $\pi_1$ and $\pi_2$ move exactly \textit{opposite} to each other, causing the correlation to remain close to $-1$.
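Taking the conjectured relations at face value, this four-case analysis is pure sign bookkeeping and can be checked mechanically from relations \eqref{pi1a1b}--\eqref{pi1b2b}. A sketch (our own illustration; `eps` stands for $(-1)^{(p-1)/4}$ and `chi` for $\left(\frac{q}{p}\right)$):

```python
# Sign bookkeeping for the four cases, given the relations
# pi1a*pi1b = eps*chi, pi2a*pi2b = eps*chi, pi1a*pi2a = chi.
import itertools

for eps, chi, s in itertools.product([1, -1], repeat=3):
    pi1a = s
    pi1b = eps * chi * s          # from pi1a * pi1b = eps * chi
    pi2a = chi * s                # from pi1a * pi2a = chi
    pi2b = eps * s                # from pi2a * pi2b = eps * chi
    assert pi1b * pi2b == chi     # consistent with the fourth relation
    walk1, walk2 = pi1a + pi1b, pi2a + pi2b
    if eps == 1 and chi == 1:
        assert walk1 == walk2 and abs(walk1) == 2   # move exactly together
    elif eps == 1 and chi == -1:
        assert walk1 == walk2 == 0                  # no net movement
    elif eps == -1 and chi == 1:
        assert walk1 == walk2 == 0                  # no net movement
    else:
        assert walk1 == -walk2 and abs(walk1) == 2  # move exactly opposite
print("all four cases consistent with the observed correlations")
```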
\section{Conclusions}
If one performs a Legendre symbol race in the rational primes, the sorting is obvious. However, if one extends the model to the Gaussian primes, the sorting is less clear. In this project, we only used one sorting order (by norm and then by size of real part). In addition, we only considered primes in the first quadrant. Perhaps future projects can model Gaussian Legendre symbol walks with different sorting orders, iterating over different combinations of quadrants, and up to greater norm values. Moreover, we mostly ignored the contribution of Gaussian primes of the form $a\equiv 3 \pmod4$ since they are much less numerous. Although it was not rigorously discussed, it seems that primes of this form contribute to a bias toward nonquadratic residues when comparing plots with odd $\frac{p-1}{4}$ (i.e. the plots with negative correlation). It would be interesting to quantify their effect on the correlation between the plots of $\pi_1$ and $\pi_2$. In addition, we noted in section \ref{mod4 races} that a Legendre symbol walk over rational primes $\equiv 3 \pmod 4$ seems to reduce some of Chebyshev's bias. It would be interesting to see an explanation for this phenomenon as well (perhaps there is an interesting connection to the Gaussian primes). We hope that we outlined enough theory for an inquisitive reader to begin asking their own questions about the fascinating Gaussian primes.
\section*{Acknowledgments}
I would like to extend a special thank you to Dr. Stephan Ehlen for his guidance and teaching throughout the past year and through the duration of this project.
I would also like to thank Dr. Henri Darmon of McGill University, le Centre de recherches mathématiques, and l'Institut des sciences mathématiques for providing me with funding and the opportunity to research this topic.
In addition, I would like to thank Dr. Yara Elias and Dr. Kenneth Ragan for their excellent teaching, and for helping me secure this research project.
\pagebreak
\section{Introduction}
The spin-$1/2$ Heisenberg antiferromagnet on the square lattice has had a long association with high-temperature superconductivity \cite{and}\cite{afm2}, but there has been an almost equally long-standing controversy surrounding the topological nature of certain of its ground states \cite{p1029_1}-\cite{p7215_1}. This question has become more pressing recently due to the discovery, via STM and neutron scattering data, that spin impurities in the underdoped cuprates apparently induce a direct second order transition between the N\'eel ordered state and a condensed Bose phase \cite{exp1}\cite{exp2}, fuelling further interest in mechanisms of the quantum criticality of edge states \cite{132}\cite{133}. In contrast, the behaviour of the one dimensional analogue of this system is largely settled. Many quantum spin chain models can be solved exactly, and there is a well-known direct mapping between chains at criticality and the $SU(2)$ WZW model \cite{chain}, which leads to a distinction between gapped and gapless groundstates for integer and half-integer spin chains. What makes the 2d system inherently more involved is that whilst the Hilbert space dimension is an extensive quantity the Hopf invariant is also zero \cite{p937_1}, which leads to the emergence of two competing spin wave theories for the ground state (or more properly a chiral wave theory coupled via a Chern-Simons term in the continuum limit \cite{10}). Since the hard-core bosons of the Jordan-Wigner approach will then necessarily carry fermionic statistics in the long-distance (IR) regime of this system (in addition to the short-distance (UV) regime \cite{142}\cite{147}) this has motivated various ideas of Bose condensation via skyrmions for the ground state of this system in the quantum regime \cite{200}\cite{p7215_1}\cite{149}.
In this article we consider the anisotropic lattice model defined via,
\begin{equation}
H = J \sum_{i,\,j=1}^{L} \left( \, {\bm S}^{\,{\bm r}}_{i} \cdot {\bm S}^{ \, {\bm r}}_{j} + (\Delta -1)\,{\bm S}^{z}_{i} \,{\bm S}^{z}_{j} \, \right)
\end{equation}
where $i$ and $j$ are spin site indices, $J$ is the nearest-neighbour spin interaction coupling, $\Delta$ is the XXZ spin anisotropy parameter, and ${\bm S}$ is a spin-$1/2$ operator represented by the usual Pauli matrices with spin components $ \bm{r} = (x,y,z) $. At $\Delta=1$ the zero temperature ground state is N\'eel ordered \cite{and}. By considering the convergence of the support of spin correlators as a function of $L$ it has been identified in \cite{11}\cite{138} that there is also long-range order (LRO) for anisotropy parameter values of $0<\Delta <0.13$ and $\Delta > 1.78$ (in the easy-axis and easy-plane limits). At $\Delta \sim 0$ a second order transition is expected as a function of temperature, but this is then expected to become first order when quantum fluctuations suppress this LRO \cite{13}, which has been confirmed by numerics in \cite{8}\cite{14}.
Attempts to treat quantum fluctuations in the spin-$1/2$ Heisenberg antiferromagnet on the square lattice in the vicinity of $\Delta=1$ have focused on treating the N\'eel order parameter as the basis of a low-energy effective field theory, via the $(2+1)$-dimensional $\sigma$ model \cite{afm2}\cite{p1029_1}. In \cite{17}\cite{GetPDF5} a fugacity expansion on the dual is used to identify that a dangerously irrelevant monopole contribution will lead to a second order (quantum) transition from the N\'eel to the Valence Bond Solid phases \cite{and}.
The focus of this article is the universality of the transitions in the quantum regime of the spin-$1/2$ Heisenberg antiferromagnet on the square lattice. However, two factors make the direct numerical verification of the quantum transitions in \cite{17}\cite{GetPDF5} difficult. Firstly, it is unclear what the relevant order parameters should be \cite{133}; secondly, although lattices serve as effective short-range regulators, lattice IR cutoffs arise from statistical averaging, which makes the lattice Lorentz symmetry (of gauge theories) approximate \cite{wilson}. Although there are no explicit gauge fields involved in the construction of the spin-$1/2$ Heisenberg antiferromagnet on the square lattice in (1), because the Hopf invariant is zero and the Hilbert space dimension is extensive \cite{p937_1}, it is important that an exact $U(1)$ Wick rotation exists between Euclidean and Imaginary time at the boundary of the system. This symmetry is described as an "emergent" gauge field when the system is both Lorentz and scale invariant \cite{17}\cite{GetPDF5}. Therefore, because the Lorentz symmetry is approximate on the lattice, it is more difficult to construct numerics which have the correct Hopf invariant. For example, it is not possible to guarantee that a global minimum can be reached in quantum spin systems using DMRG via the standard approach of minimising the following expectation \cite{c},
\begin{equation}
\left\langle \psi | H | \psi \right\rangle: \, | \psi \rangle \in (\mathbb{C}^{2})^{\otimes L^{2}}
\end{equation}
Quantifying the scaling behaviour of entanglement entropy is one means of directly establishing the universality of quantum transitions (at least for quantum chains). This is because scale invariance also implies conformal invariance in $(1+1)$ dimensions. The quantum transition of spin-$1/2$ chains is characterised universally by the central charge $c$, where $c=1$ for free bosons and $c=1/2$ for free fermions. This has been verified by a number of numerical studies \cite{st3-dm}\cite{st1-dm}\cite{dm2} (although similar problems arise with ensuring the Lorentz invariance of numerical lattice methods \cite{c}). This universal scaling is quantified via the Von Neumann entropy \cite{wil},
\begin{equation}
S = \frac{(c + \overline{c})\chi}{6} \log \left( \frac{\xi}{a} \right) + \mathcal{O}(e^{-L/\xi})
\end{equation}
where the Von Neumann entropy $S$ describes the bipartite entanglement between two subsystems, $c$ and $\overline{c}$ are the holomorphic and antiholomorphic central charges of the corresponding conformal field theory at the boundary between the subsystems, $\xi$ is the correlation length (of the finite temperature system), $a$ is the lattice UV cutoff, and $\chi$ is the Euler characteristic which categorises the topology of the boundary between subsystems \cite{122}\cite{123}\cite{euler}.
Unfortunately, there is no generalisation of the above $c$-theorem for quantum chains \cite{Z} to higher dimensions, since the Witt algebra associated with the primary (chiral) fields (of which the central extension is the Virasoro algebra) does not map directly onto the conformal anomaly in higher dimensions. The above area law scaling is therefore not universal in higher dimensions. On dimensional grounds, the coefficient of the area law scaling for the spin-$1/2$ Heisenberg antiferromagnet on the square lattice goes as $a^{1-d}$ and is not universal, but there should also be a universal area law term for the 2d system proportional to $\xi^{1-d}$ \cite{119}\cite{122}\cite{st2}.
Since we are faced with approximate Lorentz symmetries in standard numerical lattice approaches \cite{wilson}, and the non-universal area law scaling of the entanglement entropy of the spin-$1/2$ Heisenberg antiferromagnet on the square lattice \cite{119}\cite{122}\cite{st2}, we take a new approach to identifying the quantum transitions of this system (which we have also recently applied to the quantum spin chain in \cite{me7}). For this, we construct an exact polynomial expansion for the partition function from Wick rotating the transfer matrix elements of a standard nonperturbative (Quantum Monte Carlo) ensemble. This enables us to construct two exact nonorthogonal polynomial representations of the primary chiral field separated via a nonperturbative Chern-Simons term (which corresponds to the non-analytic portion of the support of this polynomial) \cite{p937_1}, by identifying the quotient of this polynomial expansion. Taking the infinite volume limit of the support of this formalism corresponds to a $\zeta$-function renormalization of the partition function, as we show, and the (central) extension of the polynomial can be determined from this limit. In this article, we generate standard ensembles at three points in the quantum regime of the anisotropic spin-$1/2$ Heisenberg antiferromagnet on the square lattice ($\Delta=1.01$, $\Delta=1.78$ and $\Delta=2$ at $\beta=100$), and determine the scaling of the Von Neumann entropy (identified from the density of the polynomial zeroes) associated with the mapping on to its $(1+1)$ primary chiral field at criticality.
\section{Partition Function Zeroes of Quantum Critical Points}
In order to understand the homology of the treatment of the spin-$1/2$ antiferromagnet on the square lattice that is presented in \cite{p937_1} it is instructive to write the lattice partition function in the following Wick-rotated form \cite{me1},
\begin{equation}
\mathcal{Z} = \int\!\int \mathcal{D}\theta_{1}\mathcal{D}\theta_{2} \,\,\, {\rm{exp}}\!\left[\,\int_{0}^{\beta} d\tau H \,\, \,\right] \equiv \int\! \int \mathcal{D}\theta_{1}\mathcal{D}\theta_{2} \,\,\, {\rm{exp}}\!\left[\,\int_{0}^{\beta} d\tau\,\, A_{\bm{n}}\,i\phi_{1} + B_{\bm{n}}\,i\phi_{2} -V_{\bm{n}} \,\, \right]
\end{equation}
\begin{equation}
A_{\bm{n}} \equiv \sum_{s,\,s',\,s''}^{T \otimes \Theta_{1} \otimes \Theta_{2}} \,\, \sum_{\sigma \in Z_{2} } \lambda_{s,s',s'',\sigma}(\bm{n})
\frac{\langle \bm{n}\oplus \bm{1}_{s\sigma} \oplus \bm{1}_{s'\sigma} \oplus \bm{1}_{s''\sigma}| \theta_{1} \rangle}{\langle \bm{n}| \theta_{1} \rangle}\,
\end{equation}
\begin{equation}
B_{\bm{n}} \equiv \sum_{s,\,s',\,s''}^{T \otimes \Theta_{1} \otimes \Theta_{2} } \,\, \sum_{\sigma \in Z_{2}} \lambda_{s,s',s'',\sigma}(\bm{n})
\frac{\langle \bm{n}\oplus \bm{1}_{s\sigma} \oplus \bm{1}_{s'\sigma} \oplus \bm{1}_{s''\sigma}| \theta_{2} \rangle}{\langle \bm{n}| \theta_{2} \rangle}\,
\end{equation}
\begin{equation}
V_{\bm{n}} \equiv \sum_{s,\,s',\,s''}^{T \otimes \Theta_{1} \otimes \Theta_{2} } \,\, \sum_{\sigma \in Z_{2}}\lambda_{s,s',s'',\sigma}(\bm{n}) \frac{\langle
\bm{n}\oplus \bm{1}_{s\sigma} \oplus \bm{1}_{s'\sigma} \oplus \bm{1}_{s''\sigma}| \bm{n} \rangle} {\langle \bm{n}| \bm{n} \rangle}\,
\end{equation}
where $A_{\bm{n}}$, $B_{\bm{n}}$ and $V_{\bm{n}}$ are defined as the compact and noncompact portions of the spin operators in (1) determined through the nonperturbative matrix elements $\lambda(\bm{n})$, and $s$ is the Euclidean-time lattice site index. To define the indices $s'$ and $s''$ a state is taken from $|\psi \rangle \in (\mathbb{C}^{2})^{\otimes L^{2}}$ and the order of these tensor products is interchanged, which is valid on the system boundary \cite{p937_1}. Locally, however, this creates a mismatch between $\theta_{i}$ and $\phi_{i}$ ($i=1,2$); hence polynomial representations of $A_{\bm{n}}$ and $B_{\bm{n}}$ will not, in general, be orthogonal, and are orthogonal only in the special case where $\theta_{i} = \phi_{i}$ and the system is both Lorentz and scale invariant.
A meaningful statistical ensemble for spin-$1/2$ antiferromagnet on the square lattice can be obtained numerically using the continuous-time Quantum Monte Carlo method \cite{qmc} by generating a Markov chain from importance sampling using the following local transfer matrices \cite{t1},
\begin{equation}
\label{transfer}
{\cal{Z}} = \prod_{i=1,j=1,t=1}^{L,L,T} \left( \begin{array}{cccc}
p_{\{i,j\},\{i+1,j\}} & p_{\{i,j\},\{i,j+1\}} \\
p_{\{i+1,j\},\{i+1,j+1\}} & p_{\{i,j+1\},\{i+1,j+1\}} \end{array} \right)
\end{equation}
\begin{equation}
\label{p1}
p_{\{i,j\},\{i+1,j\}} = \left( \begin{array}{cccc}
{\rm {exp}}(-\frac{\Delta\tau J_z}{2}) & 0 & 0 & 0\\
0 & {\rm {cosh}}(\frac{\Delta\tau J_{xy}}{2}) & {\rm {sinh}}(\frac{\Delta\tau J_{xy}}{2}) & 0 \\
0 & {\rm {sinh}}(\frac{\Delta\tau J_{xy}}{2}) & {\rm {cosh}}(\frac{\Delta\tau J_{xy}}{2}) & 0 \\
0 & 0 & 0 & {\rm {exp}}(-\frac{\Delta\tau J_z}{2})
\end{array} \right)
\end{equation}
\begin{equation}
\label{p2}
p_{\{i,j\},\{i+1,j+1\}} = \left( \begin{array}{cccc}
{\rm {exp}}(-\frac{\Delta\tau J_z}{2}) & 0 & 0 & {\rm {cosh}}(\frac{\Delta\tau J_{xy}}{2}) \\
0 & 0 & {\rm {sinh}}(\frac{\Delta\tau J_{xy}}{2}) & 0 \\
0 & {\rm {sinh}}(\frac{\Delta\tau J_{xy}}{2}) & 0 & 0 \\
{\rm {cosh}}(\frac{\Delta\tau J_{xy}}{2}) & 0 & 0 & {\rm {exp}}(-\frac{\Delta\tau J_z}{2})
\end{array} \right)
\end{equation}
where $p_{\{i,j\},\{i+1,j+1\}} \leftrightarrow p_{\{i,j\},\{i+1,j\}}$ and $p_{\{i+1,j\},\{i+1,j+1\}} \leftrightarrow p_{\{i,j\},\{i+1,j+1\}}$ for $J_z \leftrightarrow J_{xy}$, $J_{z}/J_{xy} = \Delta$, and $\Delta\tau$ is the Euclidean-time lattice spacing. Discrete steps in Euclidean-time are exchanged in the continuous-time method with discrete spin flips, hence the Markov chain that is generated is ergodic in probability space (Imaginary time) but not in Euclidean-time \cite{phys}. The nonperturbative matrix elements realised in this Markov chain can be identified at each step $t$, and up to a normalising factor of $\beta J$ (the inverse temperature and bipartite lattice interaction coupling) these matrix elements are defined via,
\begin{equation}
\label{newtransfer}
p_{\{i,j\},\{i+1,j\}}^{(t)} = \left( \begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \lambda_{t,i,j} & 1 - \lambda_{t,i,j} & 0 \\
0 & 1 -\lambda_{t,i,j} & \lambda_{t,i,j} & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\end{equation}
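As a small consistency check of the local transfer matrix in \eqref{p1} (our own illustration, not part of the original method; `dtau`, `Jz` and `Jxy` are assumed input parameters), the block structure gives $\det p = e^{-\Delta\tau J_z}$, since the inner $2\times 2$ block has determinant $\cosh^2 - \sinh^2 = 1$, and the $\Delta\tau \to 0$ limit recovers the identity:

```python
import numpy as np

def local_transfer(dtau, Jz, Jxy):
    """4x4 local transfer matrix of the XXZ model (cosh/sinh block form)."""
    c = np.cosh(dtau * Jxy / 2)
    s = np.sinh(dtau * Jxy / 2)
    e = np.exp(-dtau * Jz / 2)
    return np.array([[e, 0, 0, 0],
                     [0, c, s, 0],
                     [0, s, c, 0],
                     [0, 0, 0, e]])

# dtau -> 0 gives the identity, and cosh^2 - sinh^2 = 1 fixes the determinant.
assert np.allclose(local_transfer(0.0, 1.0, 1.0), np.eye(4))
P = local_transfer(0.1, 1.0, 1.0)
assert np.isclose(np.linalg.det(P), np.exp(-0.1 * 1.0))
```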
The Wick-rotation of these elements is then defined by compactifying the Euclidean-time boundary conditions of the
lattice partition function from $(0,\beta] \rightarrow \theta = [-\pi,\pi]$, via the following definition of the trace of the transfer matrix $P$ (defined as the transfer matrix for the whole of the $L^{2}$ volume),
\begin{equation}
{\rm Tr} P = \sum_{k=1}^{L^{2}} ( 1 -{\lambda_k}^2 )( 1 -(\Delta\lambda_k)^2 ), \quad \mathcal{Z} = \int \mathcal{D}\theta \,\,\, P
\end{equation}
where $k$ is the lattice site index on $\Theta_{1}\otimes \Theta_{2}$, and $\Delta$ is the spin anisotropy parameter. This formalism relates the matrix elements in (11) to those in (4).
The assumption for this Wick rotation prescription is that each local plaquette on $i\otimes j$ can be smoothly deformed into $\mathbb{C}^{2}$ via the spin anisotropy parameter $\Delta$. This is not true, since any two states in the system are not orthogonal in general. Therefore, we recover an exact expansion in $\beta J$ from these operators that is only analytic up to $\phi_{1}\otimes \phi_{2}$ (rather than $\theta_{1}\otimes \theta_{2}$). However, this allows us to probe the criticality of the primary chiral field of the nonperturbatively realised system that corresponds to this singularity.
In the replica method \cite{122}, the Von Neumann entanglement entropy of an $n$-fold Riemann sheet (such as $\mathbb{R}^{2}$ for $n=L^{2}$) is defined by identifying the analytic continuation properties of the lattice partition function with respect to Euclidean-time,
\begin{equation}
S = - \lim_{n\rightarrow 1} \frac{\partial}{ \partial n } \left( \frac{\mathcal{Z}_{n}}{(\mathcal{Z})^{n}}\right)
\end{equation}
where $\mathcal{Z}_{n}$ is the lattice partition function for a subsystem consisting of $n$ disjoint unions of $\mathcal{Z}$. We go slightly further than the usual replica argument for defining the entanglement entropy, in our approach, by constructing a polynomial ring for $\mathbb{C}^{2}$. This removes the assumption that the partition function has to be factorisable in the above denominator \cite{300}. This is particularly important for the spin-$1/2$ antiferromagnet on the square lattice, because general states in this system are defined by two non-orthogonal polynomials, hence, the entanglement entropy defined in the above limit is degenerate (even though the branch points are resolved in $\mathbb{R}^{2}$). The relevant quantity for determing the entanglement entropy in our formalism is the logarithm of ${\rm {det}} P$. An analogous expression to the replica limit is then obtained by analytically continuing the following function from large $s$ to $s=0$,
\begin{equation}
\sum_{k=1}^{L^{2}} \Lambda_{k}^{-s} \, {\rm{ln}}\, \Lambda_{k}
\end{equation}
where $\Lambda_{k} = ( 1 -{\lambda_k}^2 )( 1 -(\Delta\lambda_k)^2 )$. This amounts to the renormalization of the singularities of the support of the free energy of the partition function via a $\zeta$-function prescription, where,
\begin{equation}
\zeta_{P}(s) = \sum_{k=1}^{L^{2}} \Lambda_{k}^{-s} = \frac{1}{\Gamma(s)} \int_{0}^{\infty} \mathcal{D}\theta \,\, {\rm{Tr}} \, (e^{-\theta P}) \, \theta^{s-1}
\end{equation}
However, although the logarithm of $\det P$ (free energy) is completely defined by the first derivative of this $\zeta$-function at $s=0$, the corresponding free energy minimum is not unique, since any number of the higher moments of $\zeta_{P}(s)$ can also be nonzero. Hence, to uniquely determine the entanglement entropy, a further prescription must be given for resolving the choices of branch in $k$ for all of these higher moments at a different point on the fundamental strip, namely at $s=1$,
\begin{equation}
\left. \frac{ d^{k} \zeta_{P}(s) }{ ds^{k} } \right|_{s=1} = \int_{0}^{\infty} \mathcal{D} \theta \,\, \Lambda(\theta) \,\, {\rm {ln}}^{k}(\theta) , \quad k = 0 \,\, ... \,\, L^{2}
\end{equation}
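The statement that the free energy is fixed by the first derivative of $\zeta_{P}(s)$ at $s=0$ follows from $\frac{d}{ds}\Lambda_{k}^{-s}\,|_{s=0} = -{\rm{ln}}\,\Lambda_{k}$, and can be checked numerically on a toy spectrum (the values of $\Lambda_{k}$ below are illustrative assumptions, not ensemble data):

```python
import numpy as np

# illustrative toy spectrum (assumed values)
Lam = np.array([0.5, 1.2, 2.0, 3.7])

def zeta(s):
    # zeta_P(s) = sum_k Lambda_k^{-s}
    return np.sum(Lam ** (-s))

# central finite difference for zeta'(0);
# analytically zeta'(0) = -sum_k ln(Lambda_k) = -ln det P
h = 1e-6
dzeta0 = (zeta(h) - zeta(-h)) / (2 * h)
assert np.isclose(dzeta0, -np.sum(np.log(Lam)), atol=1e-6)
```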
This prescription is implemented by constructing a polynomial in $\theta$ consisting of $k$ terms; the zeroes of this polynomial ring therefore correspond to elements of the quotient $\mathbb{C}^{2}$ (rather than $\mathbb{R}^{2}$). Hence, the meromorphic convergence of the zeroes of this polynomial uniquely specifies the resolution of the branches in $k$, via,
\begin{equation}
\label{entropy}
{\rm{ln}}\, {\rm{det}} P = -\int^{\infty}_{0} \Lambda(\theta) \,\, {\rm{ln}}(\theta) \,\, d\theta \quad
\rightarrow \quad S = - \int_{0}^{\infty} \mathcal{D} \theta\,\, \Lambda(\theta) \ln \Lambda(\theta) \,\,
\end{equation}
which is the familiar Von Neumann form of the entanglement entropy.
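As a sanity check on the replica limit, note that for a normalised spectrum $p_{i}$ the ratio $\mathcal{Z}_{n}/(\mathcal{Z})^{n}$ plays the role of ${\rm Tr}\,\rho^{n}$, and the $n\rightarrow 1$ derivative recovers $-\sum_{i} p_{i}\,{\rm{ln}}\, p_{i}$. A minimal numerical illustration (with assumed toy eigenvalues):

```python
import numpy as np

# toy density-matrix spectrum (assumed values, normalised to sum to 1)
p = np.array([0.5, 0.3, 0.2])

def tr_rho_n(n):
    # plays the role of Z_n / Z^n in the replica method
    return np.sum(p ** n)

# -d/dn Tr rho^n at n = 1 equals the Von Neumann entropy -sum_i p_i ln p_i
h = 1e-6
S_replica = -(tr_rho_n(1 + h) - tr_rho_n(1 - h)) / (2 * h)
assert np.isclose(S_replica, -np.sum(p * np.log(p)), atol=1e-6)
```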
Practically, the support of the logarithm of $ {\rm{det}} P$ can be investigated via the scaling of the zeroes density of the eigenvalue problem ${\rm{det}}(P - (\beta J)^{2} ) =0$, since the divergences of the logarithm of $ {\rm{det}} P$ occur where this polynomial ring has its zeroes. The zeroes are obtained numerically by finding the roots of the characteristic polynomial equation for this eigenvalue problem, where the characteristic polynomial coefficients are obtained from powers of ${\rm Tr} P$ by using Newton's identities \cite{me7}\cite{me55},
\begin{equation}
\label{poly}
{\cal{Z}} = \sum_{k=0}^{L^{2}} \, \langle c_{k} \rangle \, (\beta J)^{2k}, \quad k\, c_{k} + {\rm Tr} P^{k} + \sum_{n=1}^{k-1} c_{n} {\rm Tr} P^{k-n} = 0, \quad k=1 \, ... \, L^{2}
\end{equation}
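The coefficient recursion in the expression above is a generic application of Newton's identities, and can be sketched for an arbitrary matrix $P$ (random toy input below, not the numerical transfer matrix) and checked against a direct computation of the characteristic polynomial:

```python
import numpy as np

def char_poly_coeffs(P):
    """Coefficients c_k of det(x I - P) = x^N + c_1 x^{N-1} + ... + c_N,
    built from the power sums Tr P^k via Newton's identities:
    k c_k + Tr P^k + sum_{n=1}^{k-1} c_n Tr P^{k-n} = 0, with c_0 = 1."""
    N = P.shape[0]
    Pk, p = np.eye(N), []
    for _ in range(N):
        Pk = Pk @ P
        p.append(np.trace(Pk))          # p[k-1] = Tr P^k
    c = [1.0]
    for k in range(1, N + 1):
        s = p[k - 1] + sum(c[n] * p[k - n - 1] for n in range(1, k))
        c.append(-s / k)
    return np.array(c)

# check against numpy's characteristic polynomial for a random symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = (A + A.T) / 2
assert np.allclose(char_poly_coeffs(P), np.poly(P))
```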
This polynomial zeroes approach is similar in spirit to Lee and Yang's determination of the zeroes of the Ising partition function \cite{l+y2}, although the singularities in our formulation correspond to elements of the quotient $\mathbb{C}^{2}$ (rather than $Z_{2}$). In both cases the polynomial zeroes are constrained to lie on the unit circle in the complex plane of the polynomial expansion parameter $X$. This is because the quotient of the polynomial representations of each model and the $Z_{2}$ symmetry of their spin operators form a polynomial ring, $\mathbb{C} \equiv \mathbb{R}[X]/(X^{2} +1)$. For the Ising model this constraint is an identity, and all the zeroes lie on the unit circle, but for our formulation this constraint only applies at criticality, since $PGL_{2}(\mathbb{C})\supseteq PSU_{2} \cong SO(3)$. Motivated by this, we apply a simple difference relation to define the zeroes density $\Lambda(\beta J)$ of our lattice systems along the unit circle \cite{lambda1},
\begin{equation}
\label{den}
\Lambda(\phi) \equiv \Lambda \left( \frac{\phi_{k+1}+\phi_{k}}{2} \right) =
\frac{1}{L^{2}(\phi_{k+1}-\phi_k)}
\end{equation}
where $k$ is the sequential index assigned to the zeros along the locus and $\phi$ is the angle subtended from the real-$(\beta J)^2$ axis to a given zero. The scaling of the zeroes density describes the renormalization group scale transformations of the lattice system, hence,
\begin{equation}
\label{asymp1}
\Lambda(\beta,J,\Delta,L) = L^{\tilde{c}} \Lambda(\beta L, J L, \Delta)
\end{equation}
where $\tilde{c}$ is the scaling exponent in the vicinity of a second order fixed point. Therefore,
\begin{equation}
\label{asymp}
\lim_{J,\beta \rightarrow 0} \Lambda(J) = J^{\tilde{c}} ( 1 - J\, ... \,) \quad \Rightarrow \quad {\rm{ln}} \Lambda(\phi) = \tilde{c}\, {\rm{ln}}\, \phi + \, ...
\end{equation}
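The difference estimator (19) and the extraction of $\tilde{c}$ from the small-$\phi$ power-law behaviour $\Lambda(J) \sim J^{\tilde{c}}$ in (21) can be prototyped on synthetic zeroes (an assumed power-law density, not the Monte Carlo data):

```python
import numpy as np

# synthetic zeroes with an assumed power-law density rho(phi) = C phi^c along
# the unit circle, placed by inverting the cumulative count N(phi) = C phi^(c+1)/(c+1)
c_true, C, L = 0.5, 4000.0, 32
k = np.arange(1, 2001)
phi = ((c_true + 1) * k / C) ** (1.0 / (c_true + 1))

# midpoint difference estimator for the zeroes density, as in (19)
mid = 0.5 * (phi[1:] + phi[:-1])
Lam = 1.0 / (L**2 * np.diff(phi))

# log-log fit: ln Lambda = c ln(phi) + const recovers the exponent
slope, _ = np.polyfit(np.log(mid), np.log(Lam), 1)
assert abs(slope - c_true) < 0.02
```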
\section{numerical results}
We have generated lattice ensembles of the spin-$1/2$ antiferromagnet on the square lattice at several different values of length $L=\{12,16,20,24,32,40\}$, at the inverse temperature $\beta=100$, at strong coupling (where $J=1$) and at three different fixed values of the XXZ anisotropy parameter $\Delta=\{1.01, 1.78, 2.0\}$ using the continuous-time Quantum Monte Carlo method \cite{qmc}. We have constructed the characteristic polynomial coefficients, defined in (12), for a polynomial ring whose singularities form the quotient $\mathbb{C}^{2}$ using the numerical transfer matrix entries generated for the Markov process of each ensemble, following (18). We have applied standard rootfinding techniques to this polynomial to find its zeroes, and have constructed the simple difference relation for the zeroes density along the unit circle in the complex expansion parameter plane defined in (19). This zeroes density relation equivalently defines the Von Neumann entanglement entropy of the spin-$1/2$ antiferromagnet on the square lattice in (17), for the entanglement defined between the set of $L^{2}$ disjoint subsystems of the square lattice (that are analytic in the expansion parameter $\beta J$) and the rest of the ensemble. Each of these subsystems is therefore defined nonperturbatively by the mismatch between $\phi_{1}\otimes\phi_{2}$ and $\theta_{1}\otimes\theta_{2}$ in (4). This approach allows us to probe the entanglement entropy scaling of the primary chiral field associated with $\mathbb{C}^{2}$ directly, even though a general state of the spin-$1/2$ antiferromagnet on the square lattice system is defined by two non-orthogonal polynomials \cite{p937_1}.
Whilst all of the zeroes we have evaluated in the complex plane automatically live on the surface traced out by $\phi_{1}\otimes\phi_{2}$ (from the definition of the polynomial in (12)) the real test of this numerical method is whether the zeroes density (entanglement entropy) scales in any meaningful way along this surface. As with the Ising model \cite{l+y2}, the zeroes polynomial in (12) defines the extension of the expansion parameter $X (=\beta J)$, and this extension becomes the central extension when the lattice ensembles we have generated are both Lorentz and scale invariant. Hence, it is only if the zeroes demonstrate some symmetry by lying on a well defined locus in the complex plane that the extension of the system can be quantified via the scaling of the zeroes density.
In Figure 1 the polynomial zeroes are plotted in the complex-$(\beta J)^2$ plane for three lattice ensembles generated at $\Delta=\{1.01, 1.78, 2.0 \}$.
For the ensemble generated at $\Delta=1.01$ in Figure 1 all of the zeroes lie on the unit circle, hence the entanglement entropy scaling can be quantified via the zeroes density along this locus. However, whilst for $\Delta=1.78$ and $\Delta=2.0$ a subset of the zeroes lie on the unit circle, a further subset of the zeroes lie on (or to the exterior of) a line at a fixed angle $\phi'$ from the real-$(\beta J)^2$ axis. There is therefore a mismatch between $\phi_{1}\otimes\phi_{2}$ and $\theta_{1}\otimes\theta_{2}$ for these ensembles. The reason for this is that in (12) the polynomial is assumed to be analytic in $\Delta$ and $J$, but it is only when the system is at criticality (Lorentz and scale invariant) that the scaling is critical in both variables. Hence, further from the N\'eel point, the scaling in $\Delta$ becomes subcritical and the critical point develops at $\phi =\phi'$.
In Figure 2 we directly test the scaling hypothesis for $\phi_{1}\otimes\phi_{2}$. We rescale the pseudo-critical value $\phi_{\phi'}$ by subtracting from it the value of the zero closest to the real-$(\beta J)^{2}$ axis, $\phi_{0}$, and plot this difference $(\phi_{\phi'} - \phi_{0})$ against $1/L^{2}$. There should be no evidence of any meaningful scaling if the mismatch between $\phi_{1}\otimes\phi_{2}$ and $\theta_{1}\otimes\theta_{2}$ is a purely numerical artifact. However, the result in Figure 2 indicates a clear linear confining flux between these two branch points, with a string tension of $\sigma=68(4)$ for $\Delta=1.78$, and $\sigma =78(7)$ for $\Delta=2.0$, in lattice units. The results in Figure 1 and Figure 2 strongly suggest that it is meaningful to model the general states of the spin-$1/2$ antiferromagnet on the square lattice system either by two non-orthogonal polynomials \cite{p937_1}, or equivalently by a chiral wave theory coupled via a nonperturbative Chern-Simons term in the continuum limit \cite{10}, since the polynomial zeroes we have measured do have a well defined extension, which links the UV to the IR via statistical transmutation \cite{200}\cite{p7215_1}\cite{149}.
\begin{figure}
\epsfxsize=3.5 in\centerline{\epsffile{z101.eps}}
\epsfxsize=3.5 in\centerline{\epsffile{z178.eps}}
\epsfxsize=3.5 in\centerline{\epsffile{z200.eps}}
\caption{The polynomial zeros of the spin-1/2 antiferromagnet on the square lattice plotted in the complex-$K^{2}$ plane (where $K = \beta J$) for three different fixed values of the XXZ anisotropy parameter $\Delta=\{1.01, 1.78, 2.0\}$. The lattice size of all three systems is $L^2 = 32\times 32$, and the inverse temperature is kept fixed at $\beta=100$. A subset of the zeros lie on a locus, corresponding to the unit circle in the complex-$K^{2}$ plane, for all three ensembles. }
\end{figure}
\begin{figure}
\epsfxsize=4.5 in\centerline{\epsffile{string.eps}}
\caption{Pseudo-critical scaling of the difference in the critical values of the zeroes, $\phi_0$ and $\phi_{\phi'}$, as a function of the inverse lattice size squared, $1/L^2$. }
\end{figure}
On the basis of dimensional analysis \cite{122}\cite{st2}, the general picture of entanglement entropy scaling in $(2+1)$ dimensions is of a leading area law term with a prefactor of $a^{1-d}$ (where $a$ is the lattice UV cutoff), which is therefore not universal, and a subleading universal area law term proportional to $\xi^{1-d}$ (where $\xi$ is the correlation length) of the form of (3). This general picture is modified according to whether the system is gapped or gapless \cite{118}. In \cite{euler}, conformally invariant systems at criticality have been found to satisfy this general picture. However, for the spinless gapped systems considered in \cite{112}, whilst the leading contribution is as above, the subleading behaviour differs between the quantum critical and non-critical regimes, where it either scales as in (3) or is a negative constant. In \cite{116}\cite{117}, it is argued that the general form of the scaling for systems with finite Fermi surfaces is a leading term proportional to the product of the area between the subsystems and a logarithmic correction: a rescaling of the general picture on to the finite Fermi surface. Our expectation for the numerics from these results is therefore threefold, dependent on whether the lattice ensembles are gapped \cite{112}, gapless and conformally invariant \cite{euler}, or gapless with finite Fermi surfaces \cite{116}\cite{117}.
In Figure 3 the logarithm of the zeroes density in (19) is plotted as a function of the angle subtended along the unit circle in the complex-$(\beta J)^2$ plane, for an ensemble generated at $\Delta=2.0$. The scaling of the zeroes density can be fitted via two separate curves: the density of the zeroes sharply increases at the points $\phi\sim 0$ and $\phi\sim\phi'$, and there is also a flat $\phi$-independent plateau. The scaling exponent values for fits to the second order scaling ansatz in (21) at $\phi\sim 0$ and $\phi\sim\phi'$ are tabulated in Table 1, along with the value of the intercept of the flat plateau, ${\rm{ln}}\Delta\tau_{IR}$.
The fitted exponent values are different for ensembles generated at $\Delta=1.78$ (where $\tilde{c}\sim 1$) and $\Delta=2.0$ (where $\tilde{c}\sim 1/9$), whilst for $\Delta=1.01$ the measured scaling is purely first order.
Although there is variation in the exponent values within the (jackknife) error estimates, it is important to note that the lattice units used to define $\mathbb{C}^{2}$ in the analysis are not held fixed. The comparison of finite size effects across volumes is therefore best made through ${\rm{ln}}\Delta\tau_{IR}$ rather than $L^{2}$. If the scaling exponent values are compared directly with comparable DMRG measurements, the values show a similar level of consistency as a function of lattice volume \cite{st3-dm}\cite{st1-dm}.
For ensembles generated in the vicinity of the N\'eel point, at $\Delta=1.01$, the $\mathbb{C}^{2}$ subsystem is analytically continued on to the boundary of the system $\theta_{1}\otimes\theta_{2}$. The flat plateau in the zeroes density indicates that the entanglement entropy scales linearly in the boundary area ($\theta_{1}\otimes\theta_{2}$) with a non-universal prefactor. There is no multiplicative logarithmic correction, and the scaling is therefore of the general form expected for conformally invariant critical points \cite{euler}\cite{112}. Any subleading universal correction is lost in the analysis, because the analytic continuation is not defined beyond the system boundary.
This scaling picture differs for the ensembles generated at $\Delta=1.78$ and $\Delta=2.0$, where the $\mathbb{C}^{2}$ subsystem is analytically continued on to the boundary of the subsystem $\phi_{1}\otimes\phi_{2}$, and not on to the boundary of the system $\theta_{1}\otimes\theta_{2}$. For ensembles generated in the vicinity of the first order transition between the Ising phase and quantum phase above the N\'eel point, at $\Delta=1.78$ \cite{11}\cite{138}, the flat plateau in the zeroes density indicates that the entanglement entropy scales linearly in the boundary area of the subsystem ($\phi_{1}\otimes\phi_{2}$) with a non-universal prefactor and also logarithmically with a universal prefactor $\tilde{c}$. However, the $\phi_{1}\otimes\phi_{2}$ subsystem itself also scales linearly on to $\theta_{1}\otimes\theta_{2}$ from the operator definition in (4) (this corresponds to the global mismatch between $\phi_{1}\otimes\phi_{2}$ and $\theta_{1}\otimes\theta_{2}$). Hence the scaling follows the general form for gapless systems with a finite Fermi surface in \cite{116}\cite{117}, of area law times a logarithmic correction.
Even though the mismatch between $\phi_{1}\otimes\phi_{2}$ and $\theta_{1}\otimes\theta_{2}$ is a non-universal factor, by constructing exact operators for $\mathbb{C}^{2}$ we are able to extract the central charge and Euler characteristic for the finite Fermi surfaces of these ensembles. At $\Delta=1.78$ the ensembles lie in the vicinity of the first order transition between the Ising phase and quantum phase above the N\'eel point, whereas at $\Delta=2.0$ the ensembles are in the Ising regime \cite{11}\cite{138}.
In both cases the central charge for the primary chiral field in $\mathbb{C}^{2}$ is $c=\overline{c}=1/2$, but the Euler characteristics differ because in the latter case the $Z_{2}$ symmetry is broken. The Euler characteristics for the surfaces of the ensembles are defined by counting the number of vertices of the hypercubes in (8), subtracting the number of edges and adding the number of faces. At $\Delta=1.78$ the Euler characteristic is given by $12/3 -8/2 +6 = 6$, whereas at $\Delta=2.0$ the Euler characteristic is given by $8/3 -8/2 +2 = 2/3$. From (3), the universal prefactors of the scaling of the boundary $\phi_{1}\otimes\phi_{2}$ should therefore be $\tilde{c}\sim 1$ and $\tilde{c}\sim 1/9$, which are consistent with the values in Table 1.
\begin{figure}
\epsfxsize=4.5 in\centerline{\epsffile{lambda4.eps}}
\caption{The logarithm of the zeroes density ${\rm{ln}} \, \Lambda(\phi)$ $(={\rm{ln}} \, \lambda(i\phi))$ of the spin-1/2 antiferromagnet on the square lattice plotted as a function of the angle $\phi$ subtended from the real axis of the complex-$K^{2}$ plane to a given zero along the unit circle. The lattice size of this ensemble is $L^2 = 40\times 40$, the inverse temperature is $\beta=100$, and the XXZ anisotropy parameter is $\Delta=2.0$. }
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|l||r|r|r|r|r|}
\hline
$L$ & $\Delta$ & $\tilde{c}_{\,\,\phi \, \sim \, 0\,\,}$ & $\tilde{c}_{\,\,\phi \,\sim\, \phi'\,\,}$ & ${\rm{ln}}\Delta\tau_{IR}$ \\
\hline
\hline
12 & 1.78 & 1.0332(0.1903) & 1.0781(0.0182) &-2.3983(0.0090)\\
\hline
16 & 1.78 & 1.0177(0.0226) & 1.0160(0.0256) &-2.4309(0.0025)\\
\hline
20 & 1.78 & 1.0097(0.0189) & 1.0074(0.0648) &-2.4291(0.0021)\\
\hline
24 & 1.78 & 1.0073(0.0250) & 0.9999(0.2222) &-2.4236(0.0017) \\
\hline
32 & 1.78 & 1.0101(0.0065) & 0.9997(0.0167) &-2.4245(0.0012) \\
\hline
40 & 1.78 & 1.0066(0.0480) & 1.0000(0.0119) &-2.4052(0.0003) \\
\hline
\hline
$L$ & $\Delta$ & $\tilde{c}_{\phi \sim 0}$ & $\tilde{c}_{\phi \sim \phi'}$ & ${\rm{ln}}\Delta\tau_{IR}$ \\
\hline
\hline
12 & 2.00 & 0.1102(0.0952) & 0.1092(0.0526) & -2.5679(0.0132) \\
\hline
16 & 2.00 & 0.1502(0.0505) & 0.1547(0.0045) & -2.3807(0.0022)\\
\hline
20 & 2.00 & 0.1504(0.0471) & 0.1305(0.0034) & -2.4685(0.0017) \\
\hline
24 & 2.00 & 0.1401(0.0057) & 0.1076(0.0058) & -2.4863(0.0010)\\
\hline
32 & 2.00 & 0.1337(0.0067) & 0.1077(0.0065) & -2.4690(0.0002)\\
\hline
40 & 2.00 & 0.1226(0.0476) & 0.1172(0.0313) & -3.5141(0.0038) \\
\hline
\end{tabular}
\end{center}
\caption{Dependence of the scaling exponent $\tilde{c}$ (determined from the fit of the logarithm of the zeros density to (\ref{asymp})) on the lattice system length $L$ and anisotropy $\Delta$, at a fixed inverse temperature of $\beta=100$. In the final column we give the non-analytic first order contribution to scaling, ${\rm{ln}}\Delta\tau_{IR}$, corresponding to the intercept of the flat plateau in Figure 3.}
\end{table}
\section{summary}
We have presented a new procedure in this article, which uses the convergence properties of an exact expansion of the lattice partition function of the continuous-time Quantum Monte Carlo method to identify the critical scaling of the spin-1/2 antiferromagnet on the square lattice for ensembles we have generated at fixed XXZ anisotropy values of $\Delta=1.01, \Delta=1.78$ and $\Delta=2.0$. This procedure is closely related to recent exact diagonalization studies of the scaling of the entanglement entropy of several quantum chain systems, and shows a similar level of accuracy \cite{st1-dm}\cite{st3-dm}\cite{st2}. The entanglement entropy in our approach is found by calculating the density of zeroes of a polynomial ring, and we have used this ring to represent the Chern-Simons phase difference between the two non-orthogonal polynomial representations that define the ground state of the spin-1/2 antiferromagnet on the square lattice in \cite{p937_1}. We have shown that the meromorphic convergence of the density of these zeroes arises from treating the replica method for entanglement entropy \cite{122} via a prescription for $\zeta$-function renormalization. We have compared our results with three different scaling pictures for the entanglement entropy of two dimensional fermionic quantum systems, which are either gapped \cite{112}, gapless and conformally invariant \cite{euler}, or gapless with finite Fermi surfaces \cite{116}\cite{117}. Our approach allows us to identify the universality of the scaling associated with the primary chiral field of the spin-1/2 antiferromagnet on the square lattice, even though the Fermi surfaces of our ensembles may be finite and the general form of the scaling of the entanglement entropy in this case comes with a non-universal prefactor \cite{116}\cite{117}.
This scheme is therefore useful for directly investigating the entanglement entropy of spin impurities in the underdoped cuprates that can be treated via the spin-1/2 antiferromagnet on the square lattice \cite{exp1}\cite{exp2}. Nothing in our construction forces the zeroes density we have obtained from numerics to scale in a meaningful way. However, in each of the ensembles we have generated, the polynomial zeroes show good evidence of an analytic continuation symmetry, which validates the idea presented in \cite{200}\cite{p7215_1}\cite{149} that there is an intrinsic Chern-Simons term in the quantum regime of the spin-1/2 antiferromagnet on the square lattice that leads to the statistical transmutation of fermionic statistics from the UV into the IR. In the vicinity of the N\'eel point at $\Delta=1.01$, we have found that the scaling follows the form for gapless and conformally invariant critical points in 2d quantum systems \cite{euler}. This supports the general idea presented in deconfined quantum criticality \cite{17}\cite{GetPDF5}, that the N\'eel point can be smoothly deformed by quantum fluctuations to give a second order fixed point somewhere above $\Delta = 1$. For our ensembles in the Ising phase at $\Delta=1.78$ and $\Delta=2.0$ we have found that the scaling follows the form for gapless systems with finite Fermi surfaces in 2d quantum systems \cite{116}\cite{117}. The Euler characteristics we have identified for the prefactor of the scaling associated with the primary chiral field of these ensembles are consistent with the topology of the hypercubes we have used to construct the lattice operators in these two regions, which lie at and above the boundary of the Ising phase \cite{11}\cite{138}.
\section{Introduction}
Our primary interest in this paper is the topology of the Fatou set for
holomorphic endomorphisms of $\mathbb{CP}^k$ (written as $\ensuremath{\field{P}^k}$ in the remainder
of the paper). We develop a type of linking number that in many cases allows
one to conclude that a given loop in the Fatou set is homologically
non-trivial. One motivation is to find a generalization of the fundamental
dichotomy for polynomial (or rational) maps of the Riemann sphere: the Julia
set is either connected, or has infinitely many connected components. Further,
this type of result paves the way to an exploration of a potentially rich
algebraic structure to the dynamics on the Fatou set.
Given a holomorphic endomorphism $f:\ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$, the {\em Fatou set}
$U(f)$ is the maximal open set on which the iterates $\{f^n\}$ form a normal
family. The {\em Julia set} $J(f)$ is the complement, $J(f) = \ensuremath{\field{P}^k} \setminus
U(f)$. The standard theory \cite{FS2,HP2,Ueda} gives a convenient
description of these sets in terms of the {\em Green's current} $T$. Specifically,
$T$ is a dynamically defined closed positive $(1,1)$ current with the property
that $J(f) = {\rm supp}(T)$. We provide relevant background about the Green's
current in Section \ref{SEC:GREENS_CURRENT}. Throughout this paper we assume
the degree of $f$ is at least two (i.e. that the components of a lift of $f$ to
$\CC^{k+1}$, with no common factors, have degree at least two).
Motivated by this description of the Fatou set, in Section \ref{SEC:LINKING}
we define a linking number $lk(\gamma,S)$ between a closed loop $\gamma \subset
\ensuremath{\field{P}^k} \setminus {\rm supp} \ S$ and a closed positive $(1,1)$ current $S$. In
Proposition \ref{PROP:LINKING_DEPENDS_ON_HOMOLOGY} we will show that it depends
only on the homology class of $\gamma$, and that it defines a homomorphism
\begin{eqnarray*}
lk(\cdot,S): H_1(\ensuremath{\field{P}^k} \setminus {\rm supp} \ S) \rightarrow \RR/\ZZ.
\end{eqnarray*}
In particular, a non-trivial linking number in $\RR/\ZZ$ proves that the
homology class of $\gamma$ is non-trivial.
The techniques are based on a somewhat similar theory in \cite{ROE_NEWTON}.
This linking number can also be restricted to loops within any open $\Omega
\subset \ensuremath{\field{P}^k} \setminus {\rm supp} S$, giving a homomorphism $lk(\cdot,S): H_1(\Omega)
\rightarrow \RR/\ZZ.$ If $\Omega$ is the basin of attraction for an attracting
periodic point of a holomorphic endomorphism $f:\ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ and $S$ is
the Green's current, we will show in Proposition \ref{PROP:RATIONAL_LINKING}
that the image of this homomorphism is contained in $\QQ/\ZZ$. This provides a
natural setting to show that, under certain hypotheses, the Fatou set $U(f)$
has infinitely generated first homology:
\begin{thm}\label{THM:GENERAL_TECHNIQUE}
Suppose that $f:\ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ is a holomorphic endomorphism and $\Omega
\subset U(f)$ is a union of basins of attraction of attracting periodic points
for $f$. If there are $c \in H_1(\Omega)$ with linking number $lk(c,T) \neq 0$
arbitrarily close to $0$ in $\QQ/\ZZ$, then $H_1(\Omega)$ is infinitely
generated.
\end{thm}
\noindent
(We prove Theorem \ref{THM:GENERAL_TECHNIQUE} in Section \ref{SEC:LINKING}.)
Note that the hypotheses of Theorem \ref{THM:GENERAL_TECHNIQUE} are satisfied
if there are piecewise smooth loops $\gamma \subset \Omega$ with $lk(\gamma,T)
\neq 0$ arbitrarily close to $0$ in $\QQ/\ZZ$. In our applications, we often
find a loop $\gamma_0$ with nontrivial linking number, and then take an
appropriate sequence of iterated preimages $\gamma_n$ under $f^n$ so that
$lk(\gamma_n,T) \rightarrow 0$ in $\QQ/\ZZ$.
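The mechanism behind Theorem \ref{THM:GENERAL_TECHNIQUE} rests on the fact that a finitely generated subgroup of $\QQ/\ZZ$ is finite, and so cannot contain nonzero elements arbitrarily close to $0$. A small computational illustration (the linking values below are hypothetical):

```python
from fractions import Fraction

def subgroup(gens):
    """Enumerate the subgroup of Q/Z generated by gens (Fractions mod 1).
    The result is finite: every element has denominator dividing the lcm
    of the generators' denominators."""
    elems, frontier = {Fraction(0)}, {Fraction(0)}
    while frontier:
        new = {(x + g) % 1 for x in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems

# hypothetical linking numbers lk(., T) of two generating homology classes
G = subgroup([Fraction(1, 2), Fraction(1, 4)])
assert G == {Fraction(n, 4) for n in range(4)}
# a loop with linking number 1/8 (e.g. an iterated preimage) cannot lie in
# the span of the previous generators, so H_1 needs a new generator
assert Fraction(1, 8) not in G
```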
\vspace{0.1in}
In order to apply this theory to specific examples, one needs a detailed
knowledge of the geometry of the Green's Current $T$. In the second half of the paper
we consider two situations in which it can be readily applied to provide examples
of endomorphism $f$ of $\ensuremath{\field{P}^2}$ having Fatou set $U(f)$ with infinitely
generated homology.
The first situation is for polynomial endomorphisms of $\ensuremath{\field{P}^2}$, that is,
holomorphic maps of $\ensuremath{\field{P}^2}$ that are obtained as the extension of a polynomial map
$f(z,w) = (p(z,w),q(z,w))$ on $\CC^2$.
Such mappings (and their generalizations to $\ensuremath{\field{P}^k}$) were studied in \cite{BEDFORD_JONSSON}.
Given a polynomial endomorphism $f:\ensuremath{\field{P}^2} \rightarrow \ensuremath{\field{P}^2}$, the line at
infinity, denoted by $\Pi$, is totally invariant and superattracting.
Therefore the restriction of $T$ to $\Pi$ can be understood using the
dynamics of the resulting rational map $f_{|\Pi}$ and its Julia set $J_\Pi$.
In Section \ref{SEC:ENDO} we prove the following theorem.
\begin{thm}\label{THM:JPI_DISCONN}
Suppose that $f$ is a polynomial endomorphism of $\ensuremath{\field{P}^2}$ with restriction $f_{|\Pi}$ to the line at infinity $\Pi$. If $f_{|\Pi}$ is hyperbolic and $J_\Pi$ is disconnected, then the Fatou set $U(f)$ has
infinitely generated first homology.
\end{thm}
\noindent
This theorem provides many examples of polynomial endomorphisms $f$ of $\ensuremath{\field{P}^2}$ with interesting homology of $U(f)$.
We present one concrete family in Example \ref{EXAMPLE:RABBITS}.
We then consider the special family of polynomial endomorphisms known as
polynomial skew products. While Theorem \ref{THM:JPI_DISCONN} applies to certain polynomial
skew products, we develop additional sufficient criteria for $U(f)$ to have
interesting homology.
A polynomial skew product is a polynomial endomorphism having the form $f(z,w) = (p(z),
q(z,w))$, where $p$ and $q$ are polynomials. We assume that ${\rm deg}(p) =
{\rm deg} (q) = d$ and $p(z) = z^d + O(z^{d-1})$ and $q(z) = w^d
+O_z(w^{d-1})$, where we have normalized leading coefficients. Since $f$
preserves the family of vertical lines $\{z \} \times \CC$, one can analyze $f$
via the collection of one variable fiber maps $q_z(w) = q(z,w)$, for each $z
\in \CC$. In particular, one can define fiber-wise filled Julia sets $K_z$ and
Julia sets $J_z :=\partial K_z$ with the property that $w \in \CC \setminus K_z$ if and only if
the orbit of $(z,w)$ escapes vertically to a superattracting fixed point $[0:1:0]$ at infinity.
For this reason, polynomial skew products provide an
accessible generalization of one variable dynamics to two variables and have
been previously studied by many authors, including Jonsson in \cite{JON_SKEW}
and DeMarco, together with the first author of this paper, in \cite{S}.
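As a concrete illustration of the vertical escape criterion, the fiber-wise filled Julia sets can be probed by iteration. The map below is a hypothetical toy skew product, chosen only so that the invariant fiber $z=0$ reduces to $w \mapsto w^{2}$, for which $K_{0}$ is the closed unit disk:

```python
# toy skew product f(z, w) = (z^2, w^2 + a*z)  (hypothetical example)
def escapes_vertically(z, w, a=0.5, max_iter=200, R=1e6):
    """True iff the orbit of (z, w) escapes vertically, i.e. w lies
    outside the fiber-wise filled Julia set K_z."""
    for _ in range(max_iter):
        z, w = z * z, w * w + a * z
        if abs(w) > R:
            return True
    return False

# on the invariant fiber z = 0 the fiber map is w -> w^2, so K_0 is |w| <= 1
assert not escapes_vertically(0.0, 0.5)
assert escapes_vertically(0.0, 1.5)
```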
In Section~\ref{SEC:SKEW} we provide the basic background on
polynomial skew products and prove:
\begin{thm}\label{THM:MAIN}
Suppose $f(z,w) = (p(z),q(z,w))$ is a polynomial skew product.
\begin{itemize}
\item If $J_{z_0}$ is disconnected for any $z_0 \in J_p$, then
$W^s([0:1:0])$ has infinitely generated first homology.
\item Otherwise, $W^s([0:1:0])$ is homeomorphic to an open ball.
\end{itemize}
\end{thm}
\noindent
The first statement is obtained by using Theorem \ref{THM:GENERAL_TECHNIQUE}, while the second is obtained using Morse Theory.
For any endomorphism there is also the measure of maximal
entropy $\mu = T \wedge T$. Thus another candidate for the name ``Julia set'' is
$J_2 := \text{supp}(\mu)$. The Julia set that is defined as the
complement of the Fatou set is sometimes denoted by $J_1$, to distinguish it from $J_2$.
The condition from Theorem \ref{THM:MAIN} that for some $z_0 \in \CC$, $J_{z_0}$ is
disconnected might seem somewhat unnatural. A seemingly more natural condition
might be that $J_2$ is disconnected, since for polynomial skew products it is
known (see \cite{JON_SKEW}) that $J_2 = \overline{\bigcup_{z \in J_p} J_z}$.
However,
in Example \ref{EXAMPLE:J1DISCONNJ2CONN} we present certain polynomial skew products with
$J_2$ connected, but with the Fatou set having infinitely generated first
homology. (These examples are obtained by applying
Theorem \ref{THM:MAIN} to examples from \cite{JON_SKEW} and \cite{S}.)
In fact, some of these examples persist over an open set within a one-variable holomorphic family of polynomial skew products.
Therefore,
for polynomial skew products,
connectivity of the fiber Julia sets $J_z$ is at least as important as the connectivity of $J_2$ to understanding the homology of the
Fatou set.
In Section \ref{SEC:QUADRATIC_FAMILY} we provide an example of a family of
polynomial skew products $f_a$ depending on a single complex parameter $a$ with
the following property: if $a$ is in the Mandelbrot set $\mathcal{M}$, then the Fatou
set $U(f_a)$ is homeomorphic to the union of three open balls, while if $a$ is outside of $\mathcal{M}$
then $H_1(U(f_a))$ is infinitely generated.
Since neither of the sufficient conditions from Theorems \ref{THM:JPI_DISCONN}
and \ref{THM:MAIN} extend naturally to general endomorphisms of $\ensuremath{\field{P}^k}$, it remains a
mystery what an appropriate condition is for an endomorphism to have a non-simply
connected Fatou set. We conclude Section \ref{SEC:FURTHER_APPS}, and this
paper, with a discussion of a few potential further applications of the
techniques of this paper to holomorphic endomorphisms of $\ensuremath{\field{P}^k}$.
\subsection*{Acknowledgments}
We thank John H. Hubbard for bringing us and some central ideas together at the
start of this project. We have benefited greatly from discussions with many
people, including Eric Bedford, Greg Buzzard, Laura DeMarco, Mattias Jonsson, Sarah Koch,
Lex Oversteegen, Rodrigo Perez, Han Peters, Enrique Pujals and Nessim Sibony.
The second author thanks Mikhail Lyubich and Ilia Binder for their mathematical
guidance and financial support while he was a postdoctoral fellow.
We thank the anonymous referee for many helpful comments, particularly those
encouraging us to prove stronger statements in Theorems \ref{THM:MAIN} and
\ref{THM:QUADRATIC_FAMILY}.
\section{The Green's current $T$}
\label{SEC:GREENS_CURRENT}
We provide a brief reminder of the properties of the Green's
current that will be needed later in this paper. We refer the reader who would
like to see more details to \cite{FS2,HP2,Ueda}. While the following construction works
more generally for generic (algebraically stable) rational maps having points of
indeterminacy, we restrict our attention to globally holomorphic maps of
$\ensuremath{\field{P}^k}$.
Suppose that $f:\ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ is holomorphic and that the Jacobian of
$f$ does not identically vanish on $\ensuremath{\field{P}^k}$. Then $f$ lifts to a polynomial map
$F:\CC^{k+1} \rightarrow \CC^{k+1}$ each of whose coordinates is a homogeneous
polynomial of degree $d$ and so that the coordinates do not have a common
factor. It is a theorem that
\begin{eqnarray}\label{EQN:GREEN1}
G(z) = \lim_{n\rightarrow \infty} \frac{1}{d^n} \log ||F^n(z)||
\end{eqnarray}
\noindent
converges to a plurisubharmonic\footnote{We will often use the abbreviation PSH in place of plurisubharmonic and we use the convention that PSH functions
cannot be identically equal to $-\infty$.}
function $G:\CC^{k+1} \rightarrow [-\infty,\infty)$
called the {\em Green's function associated to $f$}. Since $f$ is globally
well-defined on $\ensuremath{\field{P}^k}$ we have that $F^{-1}(0) = \{0\}$. It has been established
that $G$ is H\"older continuous and locally bounded on $\CC^{k+1} \setminus
\{0\}$.
If $\pi:\CC^{k+1} \setminus \{0\} \rightarrow \ensuremath{\field{P}^k}$ is the canonical
projection, there is a unique positive closed $(1,1)$ current $T$ on $\ensuremath{\field{P}^k}$
satisfying $\pi^{*} T = \frac{1}{2\pi} dd^c G$. (This normalization is not uniform--many authors do not divide by $2\pi$.) More explicitly, consider any open set $V
\subset \ensuremath{\field{P}^k}$ that is ``small enough'' so that a holomorphic section $\sigma : V
\rightarrow \CC^{k+1}$ of $\pi$ exists. Then, on $V$ we have that $T$ is given
by $T = \frac{1}{2\pi} dd^c (G \circ \sigma)$.
Choosing appropriate open sets covering $\ensuremath{\field{P}^k}$ and sections of $\pi$ on each of them, the result extends to all of $\ensuremath{\field{P}^k}$ producing a
single closed positive $(1,1)$ current on $\ensuremath{\field{P}^k}$ independent of the choice of
open sets and sections used. See \cite[Appendix A.4]{SI1}.
By construction, the Green's current satisfies the invariance $f^*T = d\cdot T$. (See Section \ref{SUBSECTION:INVARIANCE_AND_RESTRICTION} for the definition of the pull-back $f^*T$.)
Recall that the Fatou set $U(f)$ is the maximal open set in $\ensuremath{\field{P}^k}$ where the
family of iterates $\{f^n\}$ form a normal family and that the Julia set of $f$
is given by $J(f) = \ensuremath{\field{P}^k} \setminus U(f)$. A major motivation for studying the Green's current is the following.
\begin{thm}
Let $f:\ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ be a holomorphic endomorphism and let $T$ be the Green's current corresponding to $f$. Then, $J(f) = {\rm supp} \ T$.
\end{thm}
\noindent
See, for example, \cite[Proposition 4.5]{FS2} or \cite[Theorem 2.2]{Ueda}.
\begin{rmk}\label{RMK:AFFINE_GREENS_FUNCTION}
If $f$ is a polynomial endomorphism, another form of Green's function, given by
\begin{eqnarray}\label{EQN:GREEN2}
G_{\rm affine}(z) = \lim \frac{1}{d^n} \log_+ ||f^n(z)||
\end{eqnarray}
\noindent
is often considered in the literature.
(Here $\log_+ = \max\{\log,0\}$.) The result is again a PSH function $G_{\rm affine}:\CC^k \rightarrow [0,\infty)$.
We can relate $G_{\rm affine}$ to $G$ in the following way. Consider the open
set $V=\CC^k \subset \ensuremath{\field{P}^k}$. Using the section
$\sigma(z_1,\ldots,z_k) = (z_1,\ldots,z_k,1)$, we find $G_{\rm
affine}(z_1,\ldots,z_k) = G \circ \sigma(z_1,\ldots,z_k)$ because $||F^n \circ
\sigma||$ only differs from $||f^n||$ by a bounded amount for each iterate
$n$.
Therefore, if $f$ is a polynomial endomorphism of $\ensuremath{\field{P}^k}$, one can compute $T$ on $\CC^k$ using the formula $T = \frac{1}{2\pi} dd^c G_{\rm affine}$.
\end{rmk}
\begin{rmk}
Note that formulae (\ref{EQN:GREEN1}) and (\ref{EQN:GREEN2}) are independent of the norm $\|\cdot \|$ that is used since any two norms are equivalent up to a multiplicative constant.
\end{rmk}
\begin{rmk}\label{RMK:GREEN_1D}
When $k = 1$, the resulting Green's current is precisely the measure of maximal entropy $\mu_f$ whose support is the Julia set $J(f) \subset \Po$. If $f$ is a polynomial, then $\mu_f$ also coincides with the harmonic measure on $K(f)$, taken with respect to the point at infinity.
\end{rmk}
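For a quick sanity check of the one-dimensional picture (our illustration), formula (\ref{EQN:GREEN2}) can be evaluated directly for $f(z) = z^2$, where $G_{\rm affine}(z) = \log_+|z|$ in closed form and $\mu_f$ is normalized arc length on the unit circle.

```python
import math

# Affine Green's function of f(z) = z^2: a few squarings already give an
# accurate value, since f^n(z) = z^(2^n).
def g_affine(z, iters=6):
    for _ in range(iters):
        z = z * z
    a = abs(z)
    return max(math.log(a), 0.0) / 2 ** iters if a > 0 else 0.0

print(g_affine(2 + 0j))    # ≈ log 2: outside the unit disc, G = log|z|
print(g_affine(0.5 + 0j))  # 0.0: inside the filled Julia set
```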
\section{Linking with a closed positive $(1,1)$ current in $\ensuremath{\field{P}^k}$.}
\label{SEC:LINKING}
Suppose that $S$ is an (appropriately normalized) closed positive $(1,1)$
current on $\ensuremath{\field{P}^k}$ and $\gamma \subset \ensuremath{\field{P}^k} \setminus {\rm supp}(S)$ is a piecewise
smooth closed loop. We will define a linking number $lk(\gamma,S) \in \RR /
\ZZ$, depending only on the homology class $[\gamma] \in H_1(\ensuremath{\field{P}^k} \setminus
{\rm supp}(S))$.
\subsection{Classical linking numbers in $\mathbb{S}^3$}
\label{SUBSEC:CLASSICAL_DEF}
Classically one considers the linking number of two oriented loops $c$ and $d$
in $\mathbb{S}^3$. The linking number $lk(c,d) \in \mathbb{Z}$ is found by
taking any oriented surface $\Gamma$ with oriented boundary $c$ and defining
$lk(c,d)$ to be the signed intersection number of $\Gamma$ with $d$ as in
Figure \ref{LINK}. For this and many equivalent definitions of linking number
in $\mathbb{S}^3$ see \cite[pp. 132-133]{ROLF}, \cite[pp. 229-239]{BO_TU}, and \cite[Problems
13 and 14]{MILNOR_TOP}.
\begin{figure}[!ht]\label{LINK}
\begin{center}
\begin{picture}(0,0)%
\epsfig{file=general_linking.ps}%
\end{picture}%
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1577,1103)(1519,-966)
\put(2690,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\Gamma$}%
}}}}
\put(2912,-170){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$c$}%
}}}}
\put(1724, 53){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$d$}%
}}}}
\end{picture}%
\end{center}
\caption{Here $lk(c,d) = +2$.}
\end{figure}
To see that this linking number is well-defined, notice that assigning $lk(c,d) = [\Gamma]
\cdot [d]$, where $\cdot$ indicates the intersection product on
$H_*(\mathbb{S}^3,c)$, coincides with the classical definition.
(For background on the intersection product on homology, see \cite[pages 366-372]{BREDON}.)
If $\Gamma'$
is any other 2-chain
with $\partial \Gamma ' = c$ then $\partial (\Gamma -
\Gamma') = [c]- [c] = 0$ and $(\Gamma - \Gamma')$ represents a homology class in
$H_2(\mathbb{S}^3)$. Since $H_2(\mathbb{S}^3) = 0$, $[\Gamma - \Gamma'] = 0$
forcing $[\Gamma - \Gamma'] \cdot [d] = 0$. Therefore: $[\Gamma] \cdot [d] =
[\Gamma'] \cdot [d]$, so that $lk(c,d)$ is well defined.
\vspace{.1in}
\subsection{Generalization}
Given any closed positive $(1,1)$ current $S$ on $\ensuremath{\field{P}^k}$ and any piecewise smooth two chain $\sigma$ in $\ensuremath{\field{P}^k}$ with $\partial \sigma$ disjoint from ${\rm supp} \ S$,
we can define
\begin{eqnarray*}
\left<
\sigma,S \right> = \int_\sigma \eta_S
\end{eqnarray*}
\noindent
where $\eta_S$ is a smooth approximation of $S$ within its cohomology class in
$\ensuremath{\field{P}^k} \setminus \partial \sigma$; see \cite[pages 382-385]{GH}. The resulting number
$\left< \sigma,S \right>$ will depend only on the cohomology class of $S$ and
the homology class of $\sigma$ within $H_2(\ensuremath{\field{P}^k},\partial \sigma)$. (Note that if $S$ is already a smooth form, one need not require
that $\partial \sigma$ be disjoint from ${\rm supp} \ S$.)
Notice that $H_2(\ensuremath{\field{P}^k})$ is generated by the class of any complex projective line
$L \subset \ensuremath{\field{P}^k}$. Since $S$ is non-trivial, $\left<L,S\right> \neq 0$, so that
after an appropriate rescaling we can assume that $\left<L,S\right> =1$. In
the remainder of the section we assume this normalization. (It is satisfied by
the Green's Current from Section \ref{SEC:GREENS_CURRENT}.)
What made the linking numbers in $\mathbb{S}^3$ well-defined, independent of
the choice of $\Gamma$, is that $H_2(\mathbb{S}^3) = 0$. One cannot
make the immediately analogous definition that $lk(\gamma,S) = \left<
\Gamma,S \right>$ in $\ensuremath{\field{P}^k}$, since $H_2(\ensuremath{\field{P}^k}) \neq 0$ implies that $\left< \Gamma,S
\right>$ can depend on the choice of $\Gamma$. For example, given $\Gamma$
with $\partial \Gamma = \gamma$ then $\partial \Gamma' = \gamma$ for $\Gamma' =
\Gamma + L$, however $\left< \Gamma',S \right> - \left< \Gamma,S \right> =
\left<L,S \right> = 1 \neq 0$.
There is a simple modification: Given any $\Gamma$ and $\Gamma'$ both having
boundary $\gamma$, $[\Gamma' - \Gamma] \in H_2(\ensuremath{\field{P}^k})$ so that $[\Gamma' -
\Gamma] \sim m\cdot [L]$ for some $m \in \ZZ$. Since $S$ is normalized, this
gives that $\left< \Gamma',S \right> = \left< \Gamma,S \right> \ (\text{mod} \
1)$.
\begin{defn}\label{DEFN:LK}
Let $S$ be a normalized closed positive $(1,1)$ current on $\ensuremath{\field{P}^k}$ and let $\gamma$
be a piecewise smooth closed curve in $\ensuremath{\field{P}^k} \setminus {\rm supp}(S)$.
We define the {\em linking number}
$lk(\gamma,S)$ by
\begin{eqnarray*}
lk(\gamma,S) := \left< \Gamma,S \right> \ (\text{mod} \ 1)
\end{eqnarray*}
\noindent
where $\Gamma$ is any piecewise smooth two chain with $\partial \Gamma =
\gamma$.
\end{defn}
Unlike linking numbers between closed loops in $\mathbb{S}^3$, it is often the
case that $\left< \Gamma,S \right> \not \in \ZZ$, resulting in non-zero
linking numbers $(\text{mod} \ 1)$. See Subsection \ref{SUBSEC:EXAMPLE_LK} for an explicit example.
\begin{prop}\label{PROP:LINKING_DEPENDS_ON_HOMOLOGY}
If $\gamma_1$ and $\gamma_2$ are homologous in $H_1(\ensuremath{\field{P}^k} \setminus {\rm supp} \ S)$, then
$lk(\gamma_1,S) = lk(\gamma_2,S)$.
\end{prop}
\begin{proof}
Let $\Gamma$ be any piecewise smooth two chain contained in $\ensuremath{\field{P}^k}
\setminus {\rm supp} \ S$ with $\partial \Gamma = \gamma_1 - \gamma_2$. Then, since
$\ensuremath{\field{P}^k} \setminus {\rm supp} \ S$ is open and $\Gamma$ is a compact subset,
$\Gamma$ is bounded away from the support of $S$. Consequently, for any smooth
approximation $\eta_S$ of $S$ supported in a sufficiently small neighborhood of ${\rm supp} \ S$,
we have $lk(\gamma_1,S) - lk(\gamma_2,S) = \int_\Gamma \eta_S = 0$.
\end{proof}
\begin{cor}\label{COR:NONZERO_LINK}If $\gamma \subset \ensuremath{\field{P}^k} \setminus {\rm supp} \ S$ with $lk(\gamma,S) \neq 0$, then $\gamma$ is a homologically non-trivial
loop in $\ensuremath{\field{P}^k} \setminus {\rm supp} \ S$.
\end{cor}
Since $lk(\gamma,S)$ depends only on the homology class of $\gamma$ and the pairing $\left<\cdot,S\right>$ is linear in the space
of chains $\sigma$ (having $\partial \sigma$ disjoint from ${\rm supp} \ S$), the linking number descends to a homomorphism:
\begin{eqnarray*}
lk(\cdot,S): H_1(\ensuremath{\field{P}^k} \setminus {\rm supp} \ S) \rightarrow \RR/\ZZ.
\end{eqnarray*}
\noindent
Similarly $lk(\cdot,S) :H_1(\Omega) \rightarrow \RR/\ZZ$ for any open $\Omega \subset \ensuremath{\field{P}^k} \setminus {\rm supp} \ S$.
\begin{rmk}{\bf (Topological versus Geometric linking numbers.)}\label{RMK:GEOMETRIC_VS_TOPOLOICAL_LK}
The classical linking number, and also
Definition \ref{DEFN:LK}, depend only on the homology class of the loop $\gamma$ (in
the complement of some other loop or of the support of some current, respectively).
A linking number depending on the geometry of
$\gamma$ is given by
\begin{eqnarray*}\label{EQN:ALT_LINK_DEF}
\widehat{lk}(\gamma,S) := \left<\Gamma,S - \Omega \right> \in \mathbb{R},
\end{eqnarray*}
\noindent
where $\partial \Gamma = \gamma$ and $\Omega$ is (a normalization of) the K\"ahler form defining the
Fubini-Study metric on $\ensuremath{\field{P}^k}$. Given any $\Gamma$ and $\Gamma'$ both having boundary
$\gamma$ we have that $\left<\Gamma-\Gamma',S - \Omega \right> = 0$, since $S$ and $\Omega$
are cohomologous. (In the language of \cite[p. 132]{ROE_NEWTON}, we say that
$S - \Omega$ is in the ``linking kernel of $\ensuremath{\field{P}^k}$''.)
Because ${\rm supp} \ \Omega = \ensuremath{\field{P}^k}$, the statement of Proposition
\ref{PROP:LINKING_DEPENDS_ON_HOMOLOGY} does not apply.
Rather, $\widehat{lk}(\gamma,S)$ depends on the geometry of $\gamma \subset
\ensuremath{\field{P}^k} \setminus {\rm supp} \ S$. In fact, similar linking numbers were used in \cite{HL1,HL2} to
determine if a given real-analytic $\gamma$ has the appropriate geometry to be the boundary of a positive holomorphic
$1$-chain (with bounded mass).
\end{rmk}
\begin{rmk}\label{RMK:GENERAL_MANIFOLDS}{\bf (Other manifolds.)}
Suppose that $M$ is some other compact complex manifold with $H_2(M)$ of rank
$r$, generated by $\sigma_1,\ldots,\sigma_r$. If
$\left<\sigma_1,S\right>,\ldots,\left<\sigma_r,S\right>$ are rationally
related, then $S$ can be appropriately rescaled so that Definition
\ref{DEFN:LK} provides a well-defined linking number between any piecewise
smooth closed curve $\gamma \subset M \setminus {\rm supp} \ S$ and $S$. If $H_2(M)$ has
rank $r > 1$, this provides a rather restrictive cohomological condition on
$S$. (It is similar to the restriction of being in the ``linking kernel'' described in \cite{ROE_NEWTON}.)
\end{rmk}
\subsection{Invariance and restriction properties of $\langle \cdot , \cdot \rangle$}
\label{SUBSECTION:INVARIANCE_AND_RESTRICTION}
Suppose that $\Omega, \Lambda$ are open subsets of $\CC^j$ and $\CC^k$, and $f:
\Omega \rightarrow \Lambda$ is a (possibly ramified) analytic mapping. Let $S$
be a closed positive $(1,1)$ current given on $\Lambda$ by $S = dd^c u$ for
some PSH function $u$. If $f(\Omega)$ is not contained in the polar locus of
$u$, then the {\em pull-back of $S$ under} $f$ is defined by pulling back the
potential: $f^*(S) := dd^c (u \circ f)$. Since $u \circ f$ is not identically
equal to $-\infty$, it is also a PSH function, and $f^*(S)$ is a well-defined
closed positive $(1,1)$ current.
Suppose that $M$ and $N$ are complex manifolds and that $S$ is a closed
positive $(1,1)$ current on $N$. If $f:M \rightarrow N$ is a holomorphic map
with $f(M)$ not entirely contained in the polar locus of $S$, then the
pull-back $f^*S$ can be defined by taking local charts and local potentials for
$S$. See \cite[Appendix A.7]{SI1} and \cite[p.
330-331]{HP2} for further details.
\begin{prop}\label{PROP:PAIRING_INV}
Suppose that $S$ is a closed positive $(1,1)$ current on $N$ and $f:M
\rightarrow N$, with $f(M)$ not contained in the polar locus of $S$.
If $\sigma$ is a piecewise smooth two chain in $M$ with $\partial \sigma$
disjoint from ${\rm supp} \ f^* S$, then
$\left< f_* \sigma, S \right> = \left< \sigma, f^*S \right>$.
\end{prop}
\begin{proof}
Since $f(M)$ is not contained in the polar locus of $S$, $f^*S$ is well-defined.
Since $\partial \sigma$ is disjoint from ${\rm supp} \ f^* S$, $\partial f(\sigma)$ is disjoint from ${\rm supp} \ S$.
Let $\eta_S$ be a smooth approximation of $S$ in the same cohomology class as $S$
and having support disjoint from $\partial f(\sigma)$.
Then, $\left< f_* \sigma, S \right> = \int_{f_* \sigma} \eta_S = \int_\sigma f^* \eta_S
= \left< \sigma, f^*S \right>$, since $f^* \eta_S$ is a smooth approximation of $f^*S$.
\end{proof}
In the case that $M$ is an analytic submanifold of $N$ not entirely contained
in the polar locus of $S$, the restriction of $S$ to $M$ is defined by
$S\vert_M := \iota^*S$, where $\iota:M \rightarrow N$ is the inclusion. When
computing linking numbers, we will often choose $\Gamma$ within some
one-complex dimensional curve $M$ in $N$, with $M$ not contained in the polar
locus of $S$. In that case $S|_M$ is a positive measure on $M$ and we can use
the following:
\begin{cor}\label{COR:RESTRICT_THEN_INTEGRATE}
Let $S$ be a positive closed $(1,1)$ current on $N$ and $M$ be an analytic
curve in $N$ that is not entirely contained in the polar locus of $S$. If
$\Gamma$ is a piecewise smooth two chain in $M$ with $\iota(\partial \Gamma)$
disjoint from ${\rm supp} \ S$, then
\begin{eqnarray}
\left<\iota(\Gamma),S \right> =\int_\Gamma S \vert_M.
\end{eqnarray}
\end{cor}
\begin{proof}
Proposition \ref{PROP:PAIRING_INV} gives $\left< \iota(\Gamma),S \right> \equiv \left<\iota_* \Gamma,S\right> = \left<\Gamma,\iota^* S\right> = \left< \Gamma, S
\vert_M \right>.$ Any positive $(1,1)$ current on $M$ is a positive
measure. Thus, $\int_\Gamma S \vert_M$ is defined, and coincides with the
result obtained by first choosing a smooth approximation to $S \vert_M$.
Thus $\left< \Gamma, S \vert_M \right> = \int_\Gamma S|_M$.
\end{proof}
\noindent
In the remainder of the paper, we will not typically distinguish between $\Gamma$ and $\iota(\Gamma)$.
\subsection{Linking with the Green's Current}
\label{SUBSEC:EXAMPLE_LK}
We conclude the section with some observations specific to the Green's current
$T$, including the proof of Theorem \ref{THM:GENERAL_TECHNIQUE}, as well as an
example illustrating the definitions given above. It is worth noting that the
Green's current has empty polar locus, since $G$ is locally bounded on
$\CC^{k+1} \setminus \{0\}$, so that the hypotheses of Proposition
\ref{PROP:PAIRING_INV} and Corollary \ref{COR:RESTRICT_THEN_INTEGRATE} are easy
to check.
\begin{prop}\label{PROP:RATIONAL_LINKING}
Suppose that $f: \ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ is a holomorphic endomorphism, $W^s(\zeta) \subset U(f)$ is the basin of attraction of some attracting
periodic cycle $\zeta$, and $T$ is the Green's current of $f$. Then
\begin{eqnarray*}
lk(\cdot,T) :H_1(W^s(\zeta)) \rightarrow \ZZ[1/d]/\ZZ \subset \QQ/\ZZ.
\end{eqnarray*}
\end{prop}
\begin{proof}
Suppose that $\zeta$ is of period $N$. Then, the basin of attraction
$W^s(\zeta)$ contains a union of small open balls $B_0,\ldots,B_{N-1}$ centered
at the points $\zeta,\ldots,f^{N-1}(\zeta)$ of the orbit of $\zeta$. Since
$H_1(W^s(\zeta))$ is generated by the classes of piecewise smooth loops, it is
sufficient to consider a single such loop $\gamma$. Since $\gamma$ is a
compact subset of $W^s(\zeta)$, there is some $n$ so that $f^n(\gamma)$ is
contained in $\cup B_i$, giving that $f^n(\gamma)$ has trivial homology class
in $H_1(W^s(\zeta))$. In particular, $lk(f^n(\gamma),T) = 0 \ (\text{mod} \
1)$, so that for any $\Gamma$ with $\partial \Gamma = \gamma$ we have
$\left<f^n(\Gamma),T \right> = m$ for some integer $m$.
Recall that $f^* T = d T$, where $d$ is the algebraic degree of $f$.
Proposition \ref{PROP:PAIRING_INV} gives that $m = \left<f^n(\Gamma),T \right> =
\left<\Gamma,(f^*)^n T\right> = d^n \left<\Gamma,T\right>$. In particular,
$lk(\gamma,T) \equiv m/d^n \ (\text{mod} \ 1)$.
\end{proof}
Using Proposition \ref{PROP:RATIONAL_LINKING}, Theorem \ref{THM:GENERAL_TECHNIQUE}
presents a general strategy for showing that $H_1(U(f))$ is infinitely generated.
\vspace{0.1in}
\begin{proof}[Proof of Theorem \ref{THM:GENERAL_TECHNIQUE}:]
Since $\Omega$ is a union of basins of attraction for attracting periodic points of $f$,
Proposition \ref{PROP:RATIONAL_LINKING} gives that $lk(\cdot,T): H_1(\Omega) \rightarrow \QQ/\ZZ$.
There are homology classes $c \in H_1(\Omega)$ with $lk(c,T) \neq 0$ arbitrarily close to
zero, so, since $lk(\cdot,T)$ is a homomorphism, the image of $lk(\cdot,T): H_1(\Omega) \rightarrow \QQ/\ZZ$ is dense
in $\QQ/\ZZ$. Because any dense subgroup of $\QQ/\ZZ$ is infinitely generated,
the image of $lk(\cdot,T)$ is infinitely generated, hence $H_1(\Omega)$ is,
as well.
\end{proof}
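The final step uses the fact that any dense subgroup of $\QQ/\ZZ$ is infinitely generated: a finitely generated subgroup of $\QQ/\ZZ$ is finite, since every element's denominator divides the least common multiple of the generators' denominators. The following sketch (our illustration) computes such subgroups explicitly.

```python
from fractions import Fraction

def subgroup_mod1(gens):
    """Subgroup of Q/Z generated by finitely many rationals (mod 1). The
    closure computation terminates because every element's denominator divides
    the lcm of the generators' denominators, so the subgroup is finite."""
    gens = [Fraction(g) % 1 for g in gens]
    elems = {Fraction(0)}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for e in list(elems):
                s = (e + g) % 1
                if s not in elems:
                    elems.add(s)
                    changed = True
    return elems

H = subgroup_mod1([Fraction(1, 2), Fraction(1, 3)])
print(len(H))               # 6: the subgroup is (1/6)Z / Z
print(Fraction(1, 12) in H) # False: elements with other denominators are missed
```

In particular, no finite generating set can produce non-zero elements arbitrarily close to $0$, which is exactly what the hypothesis of Theorem \ref{THM:GENERAL_TECHNIQUE} supplies.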
\begin{example}\label{EXAMPLE:DEFN_LINK}
Consider the polynomial skew product $(z,w) \mapsto (z^2,w^2+0.3z)$, for which
the Fatou set consists of the union of basins of attraction for three
super-attracting fixed points: $[0:1:0]$, $[0:0:1]$, and $[1:0:0]$. In Figure
\ref{FIG_LINK_IN_LINE} we show a computer generated image of the intersection
of $W^s([0:1:0])$ (lighter grey) and $W^s([0:0:1])$ (dark grey) with the
vertical line $z=z_0=0.99999$. In terms of the fiber-wise Julia sets that were
mentioned in the introduction, $K_{z_0}$ is precisely the closure of the dark
grey region and $J_{z_0}$ is its boundary.
We will see in Proposition \ref{PROP:HAMONIC_MEASURE_ON_VERTICALS} that $T
\vert_{z=z_0}$ is precisely the harmonic measure on $K_{z_0}$. Using this
knowledge, and supposing that the computer image is accurate, we illustrate
how the above definitions can be used to show that the smooth loop $\gamma$ shown in
the figure represents a non-trivial homology class in $H_1(W^s([0:1:0]))$.
Suppose that we use the two chain $\Gamma_1$ that is depicted in the figure to
compute $lk(\gamma,T)$. The harmonic measure on $K_{z_0}$ is supported on $J_{z_0}$
and equally distributed among the four symmetric pieces,
with the total measure of $K_{z_0}$ equal to $1$. Therefore (using Corollary
\ref{COR:RESTRICT_THEN_INTEGRATE}) we see that $lk(\gamma,T) = \int_{\Gamma_1} T
\vert_{z=z_0} = \frac{1}{4} \ (\text{mod} \ 1)$, because $\Gamma_1$ covers
exactly $1$ of these $4$ pieces of $K_{z_0}.$
If instead we use $\Gamma_2$, the disc ``outside of $\gamma$'' within the projective
line $z=z_0$ with the orientation chosen so that $\partial \Gamma_2 = \gamma$ as
depicted, then $lk(\gamma,T) = \int_{\Gamma_2} T \vert_{z=z_0} = -\frac{3}{4} \
(\text{mod} \ 1)$ (because $\Gamma_2$ covers $3$ of the $4$ symmetric pieces of
$K_{z_0},$ but with the opposite orientation than that of $\Gamma_1$).
However, $-\frac{3}{4} \ (\text{mod} \ 1) = \frac{1}{4} \ (\text{mod} \ 1)$, so
we see that the computed linking number does come out the same.
Since $lk(\gamma,T) \neq 0 \ (\text{mod} \ 1)$, Corollary \ref{COR:NONZERO_LINK}
gives that it is impossible to have any $2$-chain $\Lambda$ within
$W^s([0:1:0])$ (even outside of the vertical line $z=z_0$) so that $\partial
\Lambda = \gamma$. Thus $[\gamma] \neq 0$ in $H_1(W^s([0:1:0]))$.
\begin{figure}
\begin{picture}(0,0)%
\epsfig{file=link.ps}%
\end{picture}%
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(4607,4607)(1201,-4968)
\put(1801,-2536){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\Gamma_2$}%
}}}}
\put(2026,-3811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\gamma$}%
}}}}
\put(2546,-3912){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\Gamma_1$}%
}}}}
\end{picture}%
\caption{\label{FIG_LINK_IN_LINE} Both choices $\Gamma_1$ (inside of $\gamma$) and $\Gamma_2$ (outside of $\gamma$) yield the same $lk(\gamma,T)$.}
\end{figure}
\end{example}
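The two computations in Example \ref{EXAMPLE:DEFN_LINK} agree modulo $1$; the elementary arithmetic check (our illustration) is:

```python
from fractions import Fraction

# Gamma_1 covers 1 of the 4 symmetric pieces of J_{z_0} (harmonic measure 1/4
# each); Gamma_2 covers the other 3 pieces with the opposite orientation.
lk_via_Gamma1 = Fraction(1, 4) % 1
lk_via_Gamma2 = Fraction(-3, 4) % 1
print(lk_via_Gamma1 == lk_via_Gamma2)  # True: both equal 1/4 (mod 1)
```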
\section{Application to Polynomial Endomorphisms of $\ensuremath{\field{P}^2}$}
\label{SEC:ENDO}
Having developed the linking numbers in Section \ref{SEC:LINKING}, Theorem \ref{THM:JPI_DISCONN} will be a consequence of the following
well-known result:
\begin{thm}\label{THM:J_DISCONN_RS}\cite[Thm. 5.7.1]{BEAR}
Let $g:\Po \rightarrow \Po$ be a rational map. Then, if
$J(g)$ is disconnected, it contains uncountably many components, and each point
of $J(g)$ is an accumulation point of infinitely many distinct components of
$J(g)$.
\end{thm}
Let us begin by studying the Fatou set of one-dimensional maps:
\begin{prop}\label{PROP:1VAR_FATOU}
If $g:\Po \rightarrow \Po$ is a hyperbolic rational map with disconnected Julia set $J(g)$, then the Fatou set $U(g)$ has infinitely generated first homology.
\end{prop}
\begin{rmk}\label{EG:ONE_VARIABLE_EXAMPLES}
When reading the proof of
Proposition \ref{PROP:1VAR_FATOU}, it is helpful to keep in mind two examples.
The first is the polynomial $r(z) = z^3-0.48z+(0.706260+0.502896i)$ for
which one of the critical points escapes to infinity, while the other is in the
basin of attraction for a cycle of period $3$. The result is a filled Julia
set with infinitely many non-trivial connected components, each of which is
homeomorphic to Douady's rabbit. (See \cite{MILNOR}.)
The second example is the family of maps $f(z) = z^n + \lambda/z^h$
considered in \cite{MCMULLEN}. For suitable $n, h$, and $\lambda$ the Julia set
is a Cantor set of nested simple closed curves.
\end{rmk}
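The claimed behavior of the first example is easy to check numerically (our illustration; a numerical check, not a proof): the critical points of $r(z) = z^3 - 0.48z + c$ are $\pm 0.4$, the roots of $r'(z) = 3z^2 - 0.48$, and iteration shows one escaping while the other stays bounded.

```python
# Critical orbits of r(z) = z^3 - 0.48 z + (0.706260 + 0.502896i). The text
# asserts one critical point escapes to infinity while the other is attracted
# to a cycle of period 3 (hence its orbit stays bounded).
c = 0.706260 + 0.502896j
r = lambda z: z ** 3 - 0.48 * z + c

def escapes(z, radius=100.0, iters=200):
    for _ in range(iters):
        if abs(z) > radius:    # check before iterating to avoid overflow
            return True
        z = r(z)
    return abs(z) > radius

print(escapes(-0.4))  # True: this critical point escapes
print(escapes(0.4))   # False: attracted to the period-3 cycle
```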
\begin{proof}[Proof of Proposition \ref{PROP:1VAR_FATOU}:]
Since $g$ is hyperbolic, $U(g)$ consists of the basins of attraction of
finitely many attracting periodic points. Therefore, according to Theorem
\ref{THM:GENERAL_TECHNIQUE}, it is sufficient to find elements of $H_1(U(g))$
having non-zero linking numbers with $T=\mu_g$ that are arbitrarily close to $0$.
Theorem \ref{THM:J_DISCONN_RS} will allow us to find a sequence of piecewise
smooth two chains $\Gamma_1, \Gamma_2,\ldots$ so that $0 <
\left<\Gamma_{n-1},\mu_g\right> < \left<\Gamma_{n},\mu_g\right> < 1$ and
$\partial \Gamma_n \subset U(g)$, as follows.
For each $n$, $\Gamma_n$ will be a union of disjoint positively-oriented closed
discs in $\Po$, each counted with weight one. Since $J(g)$ is disconnected, we
can find a piecewise smooth oriented loop $\gamma_1 \subset U(g)$ that
separates $J(g)$. Let $\Gamma_1$ be the positively-oriented disc in $\Po$
having $\gamma_1$ as its oriented boundary. Since $\mu_g$ is normalized and
$\gamma_1$ separates $J(g) = {\rm supp}(\mu_g)$, we have $0 <
\left<\Gamma_1,\mu_g\right> < 1$. Now suppose that
$\Gamma_1,\ldots,\Gamma_{n-1}$ have been chosen. Since
$\left<\Gamma_{n-1},\mu_g\right> < 1$, we have $J(g) \cap (\Po \setminus
\Gamma_{n-1}) \neq \emptyset$. Then, according to Theorem
\ref{THM:J_DISCONN_RS}, there is more than one component of $J(g) \cap (\Po
\setminus \Gamma_{n-1})$, so we can choose an oriented loop $\gamma_n \subset
U(g) \cap (\Po \setminus \Gamma_{n-1})$ so that at least one component of $J(g)
\cap (\Po \setminus \Gamma_{n-1})$ is on each side of $\gamma_n$. Then, we let
$\Gamma_n$ be the union of oriented discs in $\Po$ consisting of the points
inside of $\gamma_n$ and any discs from $\Gamma_{n-1}$ that are not inside of
$\gamma_n$.
Considering the homology class $[\partial \Gamma_n - \partial \Gamma_{n-1}] \in H_1(U(g))$ we have that
\begin{eqnarray*}
lk([\partial \Gamma_n - \partial \Gamma_{n-1}],\mu_g) = \left<\Gamma_{n},\mu_g\right> - \left<\Gamma_{n-1},\mu_g\right> \ (\text{mod} \ 1)
\end{eqnarray*}
\noindent
is non-zero for each $n$. However, since the terms of
\begin{eqnarray*}
\sum_n \left( \left<\Gamma_{n},\mu_g\right> - \left<\Gamma_{n-1},\mu_g\right> \right)
\end{eqnarray*}
\noindent
are positive and the sum is bounded by $1$, we have that $lk([\partial \Gamma_n - \partial
\Gamma_{n-1}],\mu_g) \rightarrow 0$ in $\QQ/\ZZ$. Theorem
\ref{THM:GENERAL_TECHNIQUE} then gives that $H_1(U(g))$ is infinitely
generated.
\end{proof}
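The mechanism of this proof can be mimicked in a toy computation (our illustration, assuming a Cantor Julia set on which $\mu_g$ gives each of the $2^n$ level-$n$ pieces mass $2^{-n}$): taking $\Gamma_n$ to cover all but one level-$n$ piece gives pairings $1 - 2^{-n}$, whose successive differences are non-zero but tend to $0$ in $\QQ/\ZZ$.

```python
from fractions import Fraction

# Pairings <Gamma_n, mu_g> = 1 - 2^{-n} from the toy model described above;
# the differences are the linking numbers lk(dGamma_n - dGamma_{n-1}, mu_g).
pairing = lambda n: 1 - Fraction(1, 2 ** n)
diffs = [(pairing(n) - pairing(n - 1)) % 1 for n in range(1, 12)]
print(diffs[:3])   # [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]
print(all(d != 0 for d in diffs))  # True: non-zero, yet tending to 0
```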
Let $f:\ensuremath{\field{P}^2} \rightarrow \ensuremath{\field{P}^2}$ be a polynomial endomorphism given
in projective coordinates by
\begin{eqnarray}\label{EQN:LIFT_F}
f([Z:W:T]) = [P(Z,W,T):Q(Z,W,T):T^d].
\end{eqnarray}
\noindent
Since $f:\ensuremath{\field{P}^2} \rightarrow \ensuremath{\field{P}^2}$ is assumed globally holomorphic, $P(Z,W,T), Q(Z,W,T),$ and $T^d$ have no common zeros other than $(0,0,0)$.
The (projective) line at infinity $\Pi:=\{T=0\}$ is uniformly
super-attracting and the restriction $f_\Pi$
is given in homogeneous coordinates by
\begin{eqnarray}\label{EQN:LIFT_F_PI}
f_\Pi([Z:W]) = [P_0(Z,W):Q_0(Z,W)],
\end{eqnarray}
\noindent
where $P_0 := P(Z,W,0)$ and $Q_0 :=Q(Z,W,0)$.
Let $U(f)$ be the Fatou set for $f$ and $U({f_\Pi})$ the Fatou set for $f_\Pi$. The former is an open set in $\ensuremath{\field{P}^2}$, while
the latter is an open set in the line at infinity $\Pi$.
\begin{lem}\label{LEM:FAT_PI}If $f_\Pi$ is hyperbolic then
$U({f_\Pi}) \subset U(f)$.
\end{lem}
\begin{proof}
Since $f_\Pi$ is hyperbolic, $U({f_\Pi})$ is the union of the basins of
attraction $W^s_\Pi(\zeta_i)$ of a finite number of attracting periodic points
$\zeta_1,\ldots,\zeta_k$. The line at infinity $\Pi$ is transversally
superattracting, so each $\zeta_i$ is superattracting in the
transverse direction to $\Pi$ and (at least) geometrically attracting within
$\Pi$. Let $W^s(\zeta_i) \subset \ensuremath{\field{P}^2}$ be the basin of attraction for $\zeta_i$ under $f$. Then, $W^s_\Pi(\zeta_i) \subset W^s(\zeta_i)$,
giving $U({f_\Pi}) \subset U(f)$.
\end{proof}
Let $T$ be the Green's current for $f$ and let $\mu_\Pi$ be the measure of maximal entropy for the restriction $f_{|\Pi}$.
\begin{lem}\label{LEM:PI_RESTRICTION}
The restriction $T_{| \Pi}$ coincides with $\mu_\Pi$.
\end{lem}
\begin{proof}
Consider the lift $F_\Pi:\CC^2 \rightarrow \CC^2$ of the rational map $f_\Pi :\Po \rightarrow \Po$. As observed in Remark \ref{RMK:GREEN_1D},
\begin{eqnarray*}
G_\Pi(Z,W) = \lim \frac{1}{d^n} \log ||F_\Pi^n(Z,W)||
\end{eqnarray*}
is the potential for $\mu_\Pi$, meaning that $\pi^* \mu_\Pi = \frac{1}{2\pi} dd^c G_\Pi$.
The restriction $T_{| \Pi}$ is obtained by restricting the potential $G$ to $\pi^{-1}(\Pi) = \{(Z,W,0) \in \CC^3\}$. Specifically,
it is defined by $\pi^* (T_{|\Pi}) = \frac{1}{2\pi} dd^c(G(Z,W,0))$. Therefore, it suffices to show that $G(Z,W,0) = G_\Pi(Z,W)$.
However, this follows directly from the fact that $F(Z,W,0) = F_\Pi(Z,W)$. (Here $F$ is the lift of
$f$ to $\CC^3$, given by (\ref{EQN:LIFT_F}) considered in the affine coordinates $(Z,W,T)$.)
\end{proof}
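For a concrete check of Lemma \ref{LEM:PI_RESTRICTION} (our illustration, using the skew product $f(z,w) = (z^2, w^2+0.3z)$ from Example \ref{EXAMPLE:DEFN_LINK}): the lift is $F(Z,W,T) = (Z^2,\, W^2+0.3ZT,\, T^2)$, so $F(Z,W,0) = (F_\Pi(Z,W), 0)$ with $F_\Pi(Z,W) = (Z^2, W^2)$, and the two Green's functions agree on $\{T = 0\}$.

```python
import math

def green(F, d, z, iters=40):
    """d^{-n} log ||F^n(z)|| via renormalization (sup norm), as in Section 2."""
    norm = lambda v: max(abs(c) for c in v)
    g, w = math.log(norm(z)), tuple(c / norm(z) for c in z)
    for j in range(iters):
        Fw = F(w)
        n = norm(Fw)
        g += math.log(n) / d ** (j + 1)
        w = tuple(c / n for c in Fw)
    return g

F3 = lambda v: (v[0] ** 2, v[1] ** 2 + 0.3 * v[0] * v[2], v[2] ** 2)  # lift of f
F2 = lambda v: (v[0] ** 2, v[1] ** 2)                                 # lift of f_Pi
Z, W = 1.7 + 0.3j, 0.9 - 1.1j
print(abs(green(F3, 2, (Z, W, 0j)) - green(F2, 2, (Z, W))))  # 0.0
```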
\vspace{0.1in}
\begin{proof}[Proof of Theorem \ref{THM:JPI_DISCONN}]
As in the proof of Proposition \ref{PROP:1VAR_FATOU}, we can find a sequence of
$1$-cycles $c_n$ in $U(f_\Pi)$ having linking numbers with $\mu_\Pi$
arbitrarily close to $0$ in $\QQ/\ZZ$. Since $f_{|\Pi}$ is hyperbolic, Lemma
\ref{LEM:FAT_PI} gives that each $c_n$ is in the union of basins of attraction
for finitely many attracting periodic points of $f$. In particular,
$lk(c_i,T)$ is well-defined for each $n$. Lemma
\ref{LEM:PI_RESTRICTION} gives that $T_{|\Pi} = \mu_\Pi$, so that
$lk(c_n,T)$ (considering $c_n$ in $\ensuremath{\field{P}^2}$) coincides with $lk(c_n,\mu_\Pi)$
(considering $c_n$ in the projective line $\Pi$). Therefore, $lk(c_n,T) \neq 0$
and $lk(c_n,T) \rightarrow 0$ in $\QQ/\ZZ$. Theorem
\ref{THM:GENERAL_TECHNIQUE} gives that the union of these basins has infinitely
generated first homology, and hence $U(f)$ does as well.
\end{proof}
\vspace{0.1in}
\begin{example}\label{EXAMPLE:RABBITS}
We embed the polynomial dynamics of $r(z)$ from Remark
\ref{EG:ONE_VARIABLE_EXAMPLES} as the dynamics on the line at infinity $\Pi$
for a polynomial endomorphism of $\ensuremath{\field{P}^2}$. Let $R(Z,W) =
Z^3-0.48ZW^2+(0.706260+0.502896i)W^3$ be the homogeneous form of $r$, and let
$P(Z,W,T)$ and $Q(Z,W,T)$ be any homogeneous polynomials of degree $2$. Then
\begin{eqnarray*}
f([Z:W:T]) = [R(Z,W)+T\cdot P(Z,W,T):W^3+T \cdot Q(Z,W,T):T^3]
\end{eqnarray*}
\noindent
is a polynomial endomorphism with $f_\Pi = r$. In this case, Theorem \ref{THM:JPI_DISCONN} gives that the basin of attraction of $[1:0:0]$ for $f$ has infinitely generated first homology.
\end{example}
\begin{rmk}
Suppose that $f: \ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$ is a holomorphic endomorphism having an
invariant projective line $\Pi$. Lemma \ref{LEM:PI_RESTRICTION} can be
extended to give that $T_{|\Pi} = \mu_\Pi$, where $\mu_\Pi$ is the measure of
maximal entropy for the one-dimensional map $f_{|\Pi}$. If $\Pi$ is at least
geometrically attracting transversally, $f_{|\Pi}$ is hyperbolic, and
$J(f_{|\Pi})$ is disconnected, then essentially the same proof as that of
Theorem \ref{THM:JPI_DISCONN} gives that the Fatou set $U(f)$ has infinitely
generated first homology.
Using this observation, one can inductively create sequences of polynomial
endomorphisms $f_k: \ensuremath{\field{P}^k} \rightarrow \ensuremath{\field{P}^k}$, for every $k$, each having Fatou
set with infinitely generated first homology. One begins with a hyperbolic
polynomial endomorphism $f_1$ of the Riemann sphere $\mathbb{P}^1$ having
disconnected Julia set. Then, for each $k$, one can let $f_{k}:\ensuremath{\field{P}^k}
\rightarrow \ensuremath{\field{P}^k}$ be any polynomial endomorphism whose dynamics on the
hypersurface $\mathbb{P}^{k-1}$ at infinity is given by $f_{k-1}$. (When $k=2$,
the construction of $f_2:\ensuremath{\field{P}^2} \rightarrow \ensuremath{\field{P}^2}$ is similar to that from Example
\ref{EXAMPLE:RABBITS}.) The resulting maps each have a totally-invariant
projective line $\Pi$ that is transversally superattracting with ${f_k}_{|\Pi}
= f_1$ hyperbolic with disconnected Julia set. Thus, the Fatou set $U(f_k)$
has infinitely generated first homology.
\end{rmk}
\section{Application to polynomial skew products}
\label{SEC:SKEW}
A {\em polynomial skew product} is a polynomial endomorphism of the form
\begin{eqnarray*}
f(z,w) = (p(z),q(z,w))
\end{eqnarray*}
\noindent with $p$ and $q$ polynomials of degree $d$, where $p(z) = z^d +
O(z^{d-1})$ and $q(z,w) = w^d +O_z(w^{d-1})$. (See Jonsson \cite{JON_SKEW}.)
Theorem \ref{THM:JPI_DISCONN} can be applied to many polynomial skew products
$f$ to show that $H_1(U(f))$ is infinitely generated; for example, $f(z,w)
= (z^2,w^2+10z^2)$, which has $J_{\Pi}$ a Cantor set. Next we will find
alternative sufficient conditions under which a polynomial skew product has
Fatou set with infinitely generated first homology, proving Theorem
\ref{THM:MAIN}. This theorem will apply to many maps to which Theorem
\ref{THM:JPI_DISCONN} does not apply; for example, $f(z,w) = (z^2,w^2-3z)$, for
which $J_{\Pi}$ is equal to the unit circle.
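To illustrate the claim about $J_\Pi$ in these two examples: the action of $f(z,w) = (z^2,w^2+10z^2)$ on the line at infinity is $u \mapsto u^2+10$ in the coordinate $u = w/z$, while for $f(z,w) = (z^2,w^2-3z)$ it is $u \mapsto u^2$. The following numerical sketch (our illustration, not part of the argument) checks the standard one-variable dichotomy: the Julia set of $u \mapsto u^2+c$ is a Cantor set exactly when the critical orbit escapes to infinity.

```python
def critical_orbit_escapes(c, n_max=100, escape_radius=2.0):
    """For q_c(u) = u^2 + c, the Julia set is a Cantor set iff the
    critical orbit 0 -> c -> c^2 + c -> ... tends to infinity."""
    u = 0j
    for _ in range(n_max):
        u = u * u + c
        if abs(u) > escape_radius:
            return True   # escaped: Julia set is a Cantor set
    return False          # stayed bounded: Julia set is connected

# f(z,w) = (z^2, w^2 + 10 z^2) acts on the line at infinity as u -> u^2 + 10
print(critical_orbit_escapes(10))   # True: J_Pi is a Cantor set
# f(z,w) = (z^2, w^2 - 3z) acts at infinity as u -> u^2, i.e. c = 0
print(critical_orbit_escapes(0))    # False: J_Pi (the unit circle) is connected
```

The escape radius $2$ suffices for every $c$: if $|c| > 2$ the orbit escapes at the first step (and the Julia set is indeed a Cantor set), while for $|c| \leq 2$ any point of modulus greater than $2$ escapes.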
\vspace{0.1in}
\subsection{Preliminary background on polynomial skew products}
\label{SUBSEC:BACKGROUND_SKEW}
The Green's
current for any polynomial endomorphism can be computed in the affine coordinates on $\CC^2$ as $T :=
\frac{1}{2\pi} dd^{c} G_{\rm affine}$, where $G_{\rm affine}$ is the (affine)
Green's function defined in Remark \ref{RMK:AFFINE_GREENS_FUNCTION}. The
``base map'' $p(z)$ has a Julia set $J_p \subset \mathbb{C}$ and, similarly, a
Green's function $G_p(z):= \lim_{n \to \infty} \frac{1}{d^n} \log_+
|p^n(z)|$. Furthermore, one can define a fiber-wise Green's
function\footnote{For the purist: the Green's functions $G_p$ and $G_z$ should
also have the subscript ``affine'', but it is dropped here for ease of
notation. See Section \ref{SEC:GREENS_CURRENT} for the distinction.} by:
\begin{eqnarray*}
G_z(w) := G_{\rm affine}(z,w) - G_p(z).
\end{eqnarray*}
\noindent
For each fixed $z$, $G_z(w)$ is a subharmonic function of $w$, and one defines
the fiber-wise filled Julia set $K_z := \{w \,:\, G_z(w) = 0\}$ and the fiber-wise Julia set $J_z := \partial K_z$.
The extension of $f$ to $\ensuremath{\field{P}^2}$ is given by
\begin{eqnarray} \label{EQN:SKEW_HOMOG}
f([Z:W:T]) = [P(Z,T):Q(Z,W,T):T^d],
\end{eqnarray} \noindent
where $P(Z,T)$ and $Q(Z,W,T)$ are the homogeneous versions of $p$ and $q$.
The point\\ $[0:1:0]$ that is ``vertically at infinity'' with respect to the affine coordinates $(z,w)$ is a totally-invariant super-attracting fixed point and $(z,w) \in W^s([0:1:0])$ if and only if
$w \in \CC \setminus K_z$.
Suppose that $(z,w) \in W^s([0:1:0])$ and $(z_n,w_n) := f^n(z,w)$. Then,
\begin{eqnarray}
G_{\rm affine}(z,w) &=& \lim \frac{1}{d^n} \log_+ \|f^n(z,w)\|_\infty = \lim \frac{1}{d^n} \log_+ |w_n| \,\, \mbox{and} \\
G_z(w) &=& G_{\rm affine}(z,w) - G_p(z) = \lim \frac{1}{d^n} \log_+ |w_n| - \lim \frac{1}{d^n} \log_+|z_n|. \label{EQN:VERTICAL_GREEN}
\end{eqnarray}
\noindent
since $|w_n| > |z_n|$ for all $n$ sufficiently large.
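As a concrete illustration (ours; the example map and all numerical parameters are arbitrary choices), the limits above can be approximated by iterating until the orbit leaves a large disc and then truncating:

```python
import math

def skew(z, w):
    # illustrative skew product f(z, w) = (z^2, w^2 + 10 z^2), degree d = 2
    return z * z, w * w + 10 * z * z

def green_affine(z, w, d=2, n_max=60, escape=1e8):
    # G_affine(z, w) ~ (1/d^n) log ||f^n(z, w)||_inf at the first escape time
    for n in range(n_max):
        r = max(abs(z), abs(w))
        if r > escape:
            return math.log(r) / d**n
        z, w = skew(z, w)
    return 0.0  # orbit stayed bounded to depth n_max: G_affine ~ 0

def green_base(z, d=2, n_max=60, escape=1e8):
    # G_p(z) ~ (1/d^n) log |p^n(z)| for the base map p(z) = z^2
    for n in range(n_max):
        if abs(z) > escape:
            return math.log(abs(z)) / d**n
        z = z * z
    return 0.0

def green_fiber(z, w):
    # G_z(w) = G_affine(z, w) - G_p(z); it vanishes exactly on K_z
    return green_affine(z, w) - green_base(z)

# over z = 0 the fiber map is w -> w^2, so G_0(w) = log_+ |w|
print(round(green_fiber(0, 3), 6))   # ~ log 3 ~ 1.098612
print(green_fiber(0, 0.5))           # 0.0 (w = 0.5 lies in K_0)
```

The truncation at the first escape time is accurate here because the correction terms decay like $1/d^n$ times a quantity that is tiny once the orbit is far from the origin.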
As mentioned in Section \ref{SUBSECTION:INVARIANCE_AND_RESTRICTION}, we can
restrict the current $T$ to any analytic curve obtaining a measure on that
curve. Of particular interest for skew products is the restriction $\mu_{z_0}$
of $T$ to a vertical line $\{z_0\} \times \mathbb{P}$. The following appears
as Jonsson \cite{JON_SKEW} Proposition 2.1 (i); we repeat it here for
completeness:
\begin{prop} \label{PROP:HAMONIC_MEASURE_ON_VERTICALS}
The restriction $T _{| {z=z_0}}$ of the Green's current $T$ to a vertical line
$(\{z_0\} \times \mathbb{P})$ coincides with the harmonic measure $\mu_{z_0}$ on $K_{z_0}$.
\end{prop}
\begin{proof}
Notice that
\begin{eqnarray*} T_{| {z=z_0}} &=& \frac{1}{2\pi} dd^c G_{{\rm affine}|{z=z_0}} =
\frac{1}{2\pi} dd^c G_{\rm affine}(z_0,w)\\ &=& \frac{1}{2\pi} dd^c \left(G_{\rm affine}(z_0,w) -
G_p(z_0)\right) = \frac{1}{2\pi} dd^c G_{z_0}(w). \end{eqnarray*}
\noindent
According to \cite[Thm 2.1]{JON_SKEW}, $G_{z_0}$ is the Green's function for
$K_{z_0}$ with pole at infinity. We have thus obtained that $T_{| {z=z_0}}$ is exactly
the harmonic measure $\mu_{z_0}$ on $K_{z_0}$.
\end{proof}
\subsection{Topology of the basin of attraction $W^s([0:1:0])$}
\label{SEC:MAIN_RESULT}
\begin{prop}\label{W_PATH_CONNECTED} If $\zeta$ is a totally-invariant (super)-attracting
fixed point for a holomorphic $f:\mathbb{CP}^k \rightarrow \mathbb{CP}^k$, then $W^s(\zeta)$ is path-connected.
\end{prop}
\noindent
A nearly identical statement is proven for $\mathbb{CP}^2$ in Theorem 1.5.9 from
\cite{HP}. We refer the reader to their proof, which carries over essentially unchanged to
$\mathbb{CP}^k$. In particular, for any skew product
$W^s([0:1:0])$ is path connected.
\vspace{0.1in}
Although $G_z(w)$ is subharmonic in $w$ for any fixed $z$, it does not form a
PSH function of both $z$ and $w$. Consider the points $(z,w) \in W^s([0:1:0])$
for which $z \in J_p$. At these points $G_{\rm affine}$ is pluriharmonic, i.e. $dd^c
G_{\rm affine} = 0$, but $G_p(z)$ is not pluriharmonic, i.e. $dd^c G_p(z) > 0$.
Therefore, at these points $dd^c G_z(w) < 0$, so $G_z(w)$ is not PSH.
\begin{lem}\label{LEM:MINUS_G_Z_PSH}
The function $-G_z(w)$ is PSH at all points $(z,w) \in W^s([0:1:0]) \cap \CC^2$ and it
extends to a PSH function on all of $W^s([0:1:0])$. The resulting function is
pluriharmonic on $W^s([0:1:0])$ except at points for which $Z/T \in J_p$.
\end{lem}
\begin{proof}
Since $-G_z(w) = G_p(z) - G_{\rm affine}(z,w)$, with $G_{\rm affine}(z,w)$
pluriharmonic in $W^s([0:1:0])$ and $G_p(z)$ PSH everywhere, the result is PSH
in $W^s([0:1:0]) \cap \CC^2$.
Jonsson proves in \cite[Lemma 6.3]{JON_SKEW} that $G_z(w)$ extends as a PSH
function in a suitable neighborhood of $\Pi \setminus \{[0:1:0]\}$ and his proof
immediately gives that the result is pluriharmonic in a (possibly smaller)
neighborhood within $W^s([0:1:0])$ of $\Pi \setminus \{[0:1:0]\}$. Therefore, $-G_z(w)$
is also pluriharmonic in the same neighborhood.
Thus, $-G_z(w)$ extends to a PSH function on $W^s([0:1:0]) \setminus \{[0:1:0]\}$ and, assigning
$-\infty$ to $[0:1:0]$, gives the desired extension to all of $W^s([0:1:0])$.
The result will be pluriharmonic except at $[0:1:0]$ and at the points in $W^s([0:1:0]) \cap \CC^2$
where $dd^c(-G_z(w)) > 0$, that is the points where $Z/T \in J_p$.
\end{proof}
\vspace{0.1in}
\begin{proof}[Proof of Theorem \ref{THM:MAIN}:]
We first suppose that $J_{z_0}$ is disconnected for some $z_0 \in J_p$.
Let $z_1,z_2,\ldots$ be any sequence of iterated preimages of $z_0$ so that $p^n(z_n) = z_0$.
Consider the vertical line $\{z_0\} \times \CC$. Since $J_{z_0}$ is
disconnected, so is $K_{z_0}$, and we can choose two disjoint
positively-oriented piecewise smooth loops $\eta_1, \eta_2 \subset \{z_0\}
\times \left(\CC \setminus K_{z_0}\right)$ each enclosing a proper subset of
$K_{z_0}$.
Perturbing $\eta_1, \eta_2$ within $\{z_0\} \times (\CC \setminus K_{z_0})$, if
necessary, we can suppose that none of the $d-1$ critical values of
$f|_{\{z_1\}\times \CC}: \{z_1\} \times \CC \rightarrow \{z_0\} \times \CC$
(counted with multiplicity) are on $\eta_1$ or $\eta_2$. Since the regions
enclosed by $\eta_1$ and $\eta_2$ are disjoint, at least one of them contains
at most $d-2$ of these critical values. Let $\gamma_0$ be this curve.
Since $\gamma_0 \subset \{z_0\} \times (\CC \setminus K_{z_0})$, $\gamma_0
\subset W^s([0:1:0])$. Because $\gamma_0$ is compact, it is bounded away from
${\rm supp}(T)$, and the linking number $lk(\gamma_0,T)$ is a well defined invariant
of the homology class $[\gamma_0]$ within $H_1(W^s([0:1:0]))$.
We let $\Gamma_0$ be the closed disc in $\left(\{z_0\} \times \mathbb{C}\right)$ having $\gamma_0$ as its oriented boundary. Since $\Gamma_0$
contains some proper subset of $K_{z_0}$ (and hence of $J_{z_0}$) with ${\rm supp}(\mu_{z_0}) = J_{z_0}$, we have that
\begin{eqnarray*}
0 < \left<\Gamma_0,T\right> = \int_{\Gamma_0} \mu_{z_0} < 1.
\end{eqnarray*}
\noindent
Therefore, $lk(\gamma_0,T) = \left<\Gamma_0,T\right>(\text{mod} \ 1) \neq 0 \
(\text{mod} \ 1)$, giving that $[\gamma_0]$ is non-trivial.
\vspace{0.1in}
Consider the preimages $D_1,\ldots,D_j$ of $\Gamma_0$ under $f|_{\{z_1\} \times
\CC} : \{z_1\} \times \CC \rightarrow \{z_0\} \times \CC$. Since at most $d-2$
critical values of the degree $d$ ramified cover $f|_{\{z_1\} \times \cup D_i}$ are
contained in $\Gamma_0$, it is a consequence of the Riemann-Hurwitz Theorem
that the Euler characteristic of $\cup D_i$ is greater than or equal to $2$.
Because each $D_i$ is a domain in $\CC$, at least two
components $D_1$ and $D_2$ are discs.
The total degree of $f|_{\{z_1\}\times \CC}: \cup D_i \rightarrow \Gamma_0$ is
$d$, so $f|_{\{z_1\} \times \CC}: D_i \rightarrow \Gamma_0$ is a ramified covering
of degree $k_i \leq d-1$ for each $i$. Proposition \ref{PROP:PAIRING_INV}
and the basic invariance $f^*T = d\cdot T$ for the Green's current give that
\begin{eqnarray}\label{EQN:SMALLER_PAIRING}
\left<D_i,T\right> = \frac{1}{d}\left<D_i,f^*T\right> = \frac{1}{d} \left<f_* D_i,T\right> = \frac{1}{d} \left<k_i \Gamma_0,T\right> \leq \frac{d-1}{d}\left<\Gamma_0,T\right>
\end{eqnarray}
for each $i$.
As before, we can perturb the boundaries of $D_1$ and $D_2$ within $\{z_1\}
\times (\CC \setminus K_{z_1})$ so that none of the critical values of
$f|_{\{z_2\} \times \CC}$ lie on their boundaries and so that $D_1$ and $D_2$
remain disjoint. (This will not affect the pairings given by
(\ref{EQN:SMALLER_PAIRING}).) At least one of the discs $D_1, D_2$ contains
at most $d-2$ critical values of $f|_{\{z_2\} \times \CC}$. We let
$\Gamma_1$ be that disc and $\gamma_1 = \partial \Gamma_1$. Then
\begin{eqnarray*}
0 < \left<\Gamma_1,T\right> \leq \frac{d-1}{d} \left<\Gamma_0,T\right> \leq \frac{d-1}{d}.
\end{eqnarray*}
Continuing in the same way, we can find a sequence of discs
$\Gamma_0,\Gamma_1,\ldots$ so that
\begin{itemize}
\item $\Gamma_n \subset \{z_n\} \times \CC$,
\item $\gamma_n = \partial \Gamma_n \subset W^s([0:1:0])$,
\item $\Gamma_n$ contains at most $d-2$ critical values of $f|_{\{z_{n+1}\}\times \CC}$ (counted with multiplicity), and
\item $\left<\Gamma_n,T\right> \leq \frac{d-1}{d} \left<\Gamma_{n-1},T\right>$.
\end{itemize}
\noindent
Consequently,
\begin{eqnarray*}
0 < \left<\Gamma_n,T\right> \leq \left(\frac{d-1}{d}\right)^n,
\end{eqnarray*}
\noindent
giving that $lk(\gamma_n,T) \rightarrow 0$ in $\QQ/\ZZ$. Therefore, Theorem
\ref{THM:GENERAL_TECHNIQUE} gives that $H_1(W^s([0:1:0]))$ is infinitely
generated.
\vspace{0.1in}
We will now show that if $J_z$ is connected for every $z \in J_p$, then
$W^s([0:1:0])$ is homeomorphic to an open ball.
Consider the local coordinates $z' = Z/W$, $t' = T/W$, chosen so that $(z',t')
= (0,0)$ corresponds to $[0:1:0]$. In these coordinates
\begin{eqnarray*}
f(z',t') = \left(\frac{P(z',t')}{Q(z',1,t')},\frac{t'^d}{Q(z',1,t')}\right),
\end{eqnarray*}
\noindent
where $P$ and $Q$ are the homogeneous versions of $p$ and $q$ appearing in Equation (\ref{EQN:SKEW_HOMOG}).
The assumptions that $q(z,w) = w^d + O_z(w^{d-1})$ and $p(z) = z^d+O(z^{d-1})$ imply that
we have the expansion
\begin{eqnarray*}
f(z',t') = (P(z',t'),t'^d) + g(z',t')
\end{eqnarray*}
\noindent
with $(P(z',t'),t'^d)$ non-degenerate of degree $d$ and
$g(z',t')$ consisting of terms of degree $d+1$ and larger.
Therefore, we can
construct a
potential function\footnote{The potential function $h$ is sometimes also called the Green's function of the point $(0,0)$.} for the superattracting point $(0,0)$:
\begin{align}\label{EQN:GREEN_FOR_POINT}
h(z',t') := \lim_{n \rightarrow \infty} \frac{1}{d^n} \log \|f^n(z',t')\|_\infty.
\end{align}
\noindent
The result is a continuous pluri-subharmonic function \cite{HP} with logarithmic
singularity at $(z',t') = (0,0)$ having the property that $(z',t') \in
W^s([0:1:0])\setminus \{[0:1:0]\}$ if and only if $h(z',t') < 0$. In
particular,
\begin{eqnarray*}
h: W^s([0:1:0]) \setminus \{[0:1:0]\} \longrightarrow (-\infty,0)
\end{eqnarray*}
\noindent
is proper.
If we let $(z'_n,t'_n) = f^n(z',t')$, then Equation (\ref{EQN:GREEN_FOR_POINT}) simplifies to
\begin{align}
h(z',t') =
\begin{cases}
\lim \frac{1}{d^n} \log |t'_n| & \text{if $z'/t' \in K_p$ and}\\
\lim \frac{1}{d^n} \log |z'_n| & \text{if $z'/t' \not \in K_p$.}
\end{cases}
\end{align}
\noindent
since $z'_{n+1}/t'_{n+1} = p(z'_n/t'_n)$.
Equation (\ref{EQN:VERTICAL_GREEN}) gives that in the original affine coordinates $(z,w)$ we have
\begin{align}\label{EQN:EQUALITY_FOR_POTENTIAL}
h(z,w) =
\begin{cases}
\lim \frac{1}{d^n} \log |t'_n| = - \lim \frac{1}{d^n} \log |w_n| = -G_z(w) & \text{if $z \in K_p$, and}\\
\lim \frac{1}{d^n} \log |z'_n| = G_p(z) - \lim \frac{1}{d^n} \log |w_n| = - G_z(w) & \text{if $z \not \in K_p$},
\end{cases}
\end{align}
which is harmonic on the intersection of any vertical line $\{z\} \times \CC$
with $W^s([0:1:0])$ and pluriharmonic except when $z \in J_p$; see Lemma \ref{LEM:MINUS_G_Z_PSH}. A similar
calculation shows
that $h$ coincides with the extension of $-G_z(w)$ described in Lemma \ref{LEM:MINUS_G_Z_PSH}
and that the restriction of $h$ to $\Pi$ is $-G_\Pi$. (Here, $G_\Pi$ is the Green's function
for the action $f_\Pi$ of $f$ on the line at infinity.)
Therefore, $h(z',t')$ is pluriharmonic on $W^s([0:1:0]) \setminus \{(z',t') \,:\,z'/t' \in
J_p\}$ and the restriction of $h(z',t')$ to any line through $(0,0)$ is
harmonic on $W^s([0:1:0]) \setminus \{[0:1:0]\}$, with a logarithmic singularity at $(0,0)$.
Since $J_{z_0}$ is connected for every $z_0 \in J_p$, Proposition 6.3 from
\cite{JON_SKEW} gives that $J_z$ is connected for every $z \in \CC$ and also
$J_\Pi$ is connected, or, equivalently, that $G_z(w)$ (for any $z$) and
$G_\Pi$ have no (escaping) critical points. Therefore, the
restriction of $h$ to any complex line through $(0,0)$ has no critical points in
$W^s([0:1:0])$.
The sublevel set $W_a := h^{-1}([-\infty,a))$ is open for any $a \in
(-\infty,0)$
since $h: W^s([0:1:0]) \setminus \{[0:1:0]\} \rightarrow
(-\infty,0)$ is continuous with $h(z',t') \rightarrow -\infty$ if and only if $(z',t') \rightarrow (0,0)$.
Equation 2.2 from \cite{JON_SKEW} implies that
\begin{eqnarray*}
h(z',t') &=& \log|t'| + G_p\left(\frac{z'}{t'}\right) + \eta(z',t') \,\, \mbox{if $t' \neq 0$, and}\\
h(z',t') &=& \log|z'| + G^\#_p\left(\frac{t'}{z'}\right) + \eta(z',t') \,\, \mbox{if $z' \neq 0$,}
\end{eqnarray*}
\noindent
with $\eta(z',t')$ becoming arbitrarily small for $(z',t')$ sufficiently small
and $G^\#_p(x)$ obtained by extending $G_p(1/x) - \log(1/x)$ continuously
through $x=0$. Therefore, for $a$ sufficiently negative, the intersection of
$W_a$ with any complex line through $(0,0)$ will be convex. In particular, $W_a$ is a
star-convex open subset of $\CC^2$, implying that it is homeomorphic to an open
ball. (See \cite[Theorem 11.3.6.1]{BERGER}.)
We define a new function
$\widetilde{h}$ which agrees with $h$ except in the interior of $W_a$, where we
make a $C^\infty$ modification (assigning values less than $a$) in order to remove
the logarithmic singularity at $[0:1:0]$.
We will use $\widetilde{h}$ as a Morse function to show that $W_b :=
h^{-1}([-\infty,b))$ is diffeomorphic to $W_a$ for any $b \in (a,0)$.
The classical technique from Theorem 3.1 of \cite{MILNOR_MORSE} would use the
normalization of $-\nabla \widetilde{h}$ to generate a flow whose time $(b-a)$ map
gives the desired diffeomorphism. This will not work in our situation, since
$\widetilde{h}$ is not differentiable at points for which $z'/t' \in J_p$.
However, essentially the same proof works if we replace $-\nabla
\widetilde h$ with any $C^1$ vector field $V$ on $W^s([0:1:0])$ having no
singularities in $\widetilde{h}^{-1}([a,b])$ and along which
$\widetilde h$ is decreasing. Note that, as in \cite{MILNOR_MORSE},
we need that $\widetilde{h}^{-1}([a,b])$ is compact, which follows
from $h$ being proper.
Let $V$ be the vector field parallel to each line through $(z',t') =
(0,0)$, obtained within each line as minus the gradient of the restriction of
$\widetilde{h}$ to that line. The restriction of $\widetilde{h}$ to each
complex line through $(0,0)$ has no critical points in
$\widetilde{h}^{-1}([a,b])$, so it is decreasing along $V$. Since $h$ is pluriharmonic for points with $z'/t' \not \in J_p$, it follows
immediately that $V$ is smooth there.
To see that $V$ is smooth in a
neighborhood of points where $z'/t' \in J_p$, notice that
\begin{eqnarray*}
\nabla_w G_z(w) = \nabla_w \left( G(z,w) - G_p(z) \right) = \nabla_w G(z,w),
\end{eqnarray*}
with $G(z,w)$ pluriharmonic on $W^s([0:1:0]) \cap \CC^2$.
Therefore, for any $b \in (a,0)$, $W_b$ is homeomorphic to $W_a$ and thus to an
open ball. One can then make a relatively standard construction, using these
homeomorphisms for $b$ increasing to $0$, in order to show that $W^s([0:1:0]) =
\cup_{b < 0} W_b$ is homeomorphic to an open ball.
\end{proof}
\section{Further applications}
\label{SEC:FURTHER_APPS}
In this final section we discuss a few examples of maps to which we have applied the results of this paper, and then a few types of maps which we feel would be fruitful to study further with similar techniques.
\subsection{Relationship between connectivity of $J_2$ and the topology of the Fatou set for polynomial skew products
}
\label{SEC:RELATIONSHIP_TO_MU}
For polynomial skew products, $J_2 = {\rm supp}(\mu) = {\rm supp}(T \wedge T) = \overline{\bigcup_{z \in J_p} J_z}$, which by
\cite{JON_SKEW} is also the closure of the set of repelling periodic points. Here we examine to what extent connectivity of $J_2$ affects the homology of the Fatou set $U$.
The following example shows that there are many polynomial skew products $f$ with $J_2$ connected for which
$H_1(U(f))$ is non-trivial (in fact, infinitely generated).
\begin{example} \label{EXAMPLE:J1DISCONNJ2CONN}
Consider $f(z,w) = (z^2-2,w^2+2(2-z))$ which has $J_2$ connected and
has $J_z$ disconnected over $z=-2 \in J_p$, as shown in
\cite[Example 9.7]{JON_SKEW}. Theorem \ref{THM:MAIN} immediately applies, giving that $H_1(U(f))$ is
infinitely generated.
\vspace{0.05in}
In fact, examples of this phenomenon can appear ``stably'' within a one parameter family.
Let $p_n(z) = z^2 + c_n$ be the unique quadratic polynomial with
periodic critical point of least period $n$ and $c_n$ real. Then,
\cite[Theorem 6.1]{S} yields that for $n$ sufficiently large,
\begin{eqnarray*}
f_n(z,w) = (p_n(z),w^2+2(2-z))
\end{eqnarray*}
is Axiom A with $J_z$ disconnected for most $z\in J_{p_n}$ and with $J_2$
connected. Suppose that $f_n$ is embedded within any holomorphic one-parameter
family $f_{n,\lambda}$ of polynomial skew products. Then, Theorems 4.1 and 4.2
from \cite{S} (see also, \cite[Thm C]{JON_MOTION}) give that all maps $f_{n,\lambda}$ within the same hyperbolic
component as $f_n$ also have $J_2$ connected, but $J_z$ disconnected over most
$z$ in $J_{p_{n,\lambda}}$. (Here, $p_{n,\lambda}$ is the first component of
$f_{n,\lambda}$.) An immediate application of Theorem \ref{THM:MAIN} yields
that $H_1(U(f_{n,\lambda}))$ is infinitely generated for all $f_{n,\lambda}$
within this hyperbolic component.
\end{example}
Next we consider the possibility of $J_2$ being disconnected, but $f$ not satisfying the hypotheses of our Theorem~\ref{THM:MAIN}.
\begin{question}
Is there a polynomial skew product $f$ with $J_2$ disconnected, but all $J_z$'s
connected for all $z \in \CC$, such that $H_1(U(f))$ is trivial? More
generally, is there any endomorphism of $\ensuremath{\field{P}^2}$ with $J_2$ disconnected, but with
all Fatou components having trivial homology?
\end{question}
By \cite[Proposition 6.6]{JON_SKEW}, in order for $f$ to satisfy the hypotheses of this question, $J_p$ would have to be disconnected. However, a simple product like $(z,w) \mapsto (z^2-100, w^2)$ does not suffice; note that for this map the basin of attraction of $[1:0:0]$, hence the Fatou set, has nontrivial homology. Not many examples of non-product polynomial skew products are understood, and the current list of understood examples contains no maps satisfying the hypotheses of this question.
\subsection{A quadratic family of polynomial skew products}
\label{SEC:QUADRATIC_FAMILY}
We now consider the family of examples $f_a(z,w) = (z^2,w^2+az)$, which are
skew products over $p(z) = z^2$.
The geometry and dynamics in $J_p \times \CC$ were explored in \cite{S}. For example, there it is established that:
\begin{enumerate}
\item \cite[Theorem 5.1]{S}: $f_a$ is Axiom A if and only if $g_a(w) := w^2+a$ is hyperbolic; and
\item \cite[Lemma 5.5]{S}: $J_2$ can be described geometrically in the following manner: $J_{e^{it}}$ is a rotation of angle $t/2$ of $J_{\{z = 1\}}$. That is, start with $J(g_a)$ in the fiber $J_{\{z = 1\}}$, then as the base point $z=e^{it}$ moves around the unit circle $J_p = S^1$, the corresponding $J_z$'s are rotations of $J(g_a)$ of angle $t/2$, hence the $J_z$'s complete a half turn as $z$ moves once around the base circle.
\end{enumerate}
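The rotation structure in item (2) can be checked numerically. The sketch below (our illustration; the parameter values are arbitrary) compares the fiber-wise escape rate over $z = e^{it}$ at the rotated point $e^{it/2}w$ with the one over $z = 1$ at $w$; for base points on the unit circle $G_p$ vanishes, so $G_z(w) = \lim \frac{1}{2^n}\log|w_n|$ for escaping $w$.

```python
import math

def green_fiber(a, z, w, n_max=60, escape=1e8):
    # For f_a(z, w) = (z^2, w^2 + a z) with |z| = 1 we have G_p(z) = 0,
    # so G_z(w) ~ (1/2^n) log |w_n| at the first escape time.
    for n in range(n_max):
        if abs(w) > escape:
            return math.log(abs(w)) / 2**n
        z, w = z * z, w * w + a * z
    return 0.0

a, t, w = 0.1, 1.7, 2.0  # arbitrary illustrative values
z = complex(math.cos(t), math.sin(t))          # base point e^{it}
rot = complex(math.cos(t / 2), math.sin(t / 2))  # rotation by t/2

lhs = green_fiber(a, z, rot * w)   # over z = e^{it}, at the rotated point
rhs = green_fiber(a, 1.0, w)       # over z = 1
print(abs(lhs - rhs) < 1e-9)       # True: consistent with J_{e^{it}} = e^{it/2} J_1
```

Indeed, one checks by induction that the two orbits satisfy $|w_n^{(e^{it})}| = |w_n^{(1)}|$ exactly, since the phase $e^{i2^{n-1}t}$ factors out of each iterate.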
Due to the structure of $J_2$, the difference between $f_a$ and the product
$h_a(z,w) = (z^2, w^2+a)$ is one ``twist'' in $J_2$. In \cite{S} it is shown
that $f_a$ and $h_a$ are in the same hyperbolic component if and only if $a$ is
in the main cardioid of the Mandelbrot set, $\mathcal{M}$.
Note that the extension of $f_a$ to $\ensuremath{\field{P}^2}$, given by $f_a([Z:W:T]) =
[Z^2:W^2+aZT:T^2]$, is symmetric under the involution $\mathcal{S}([Z:W:T]) = [T:W:Z]$.
\begin{thm}\label{THM:QUADRATIC_FAMILY}
The Fatou set of $f_a$ is the union of the basins of attraction of three
superattracting fixed points: $[0:0:1], [0:1:0]$, and $[1:0:0]$, each of which
is path-connected.
Moreover:
\begin{itemize}
\item If $a \not \in \mathcal{M}$, then $W^s([0:1:0])$ has infinitely generated first homology.
\item If $a \in \mathcal{M}$, then each of the three basins of attraction
$W^s([0:1:0]), W^s([0:0:1])$ and $W^s([1:0:0])$ is homeomorphic to an open
ball.
\end{itemize}
\end{thm}
\begin{proof}
For any $a$, the fiberwise Julia set $J_0$ over $z=0$ is the unit circle $\{|w| = 1\}$, since the fiber map over $z=0$ is $w \mapsto w^2$.
Proposition 4.2 from \cite{ROE_NEWTON} can be modified to show that there is a
local super-stable manifold $W^s_{\rm loc}(J_0)$ that is obtained as the image of a
holomorphic motion of $J_0$ that is parameterized over $\DD_\epsilon = \{|z| <
\epsilon\}$, for $\epsilon > 0$ sufficiently small. The motion of $(0,w) \in
J_0$ is precisely the connected component of local super-stable manifold of
$(0,w)$ that contains $(0,w)$, which we will call the {\em superstable leaf of
$w$} and denote by $W^s_{\rm loc}(w)$. By construction, $f_a$ will map the
superstable leaf of $(0,w)$ into the superstable leaf of $(0,w^2) = f_a(0,w)$.
Moreover, the proof of Proposition 4.4 from \cite{ROE_NEWTON} can also be
adapted to show that $W^s_{\rm loc}(J_0)$ is the zero locus
of a pluri-harmonic (hence real-analytic) function.
Pulling back $W^s_{\rm loc}(J_0)$ under iterates of $f_a$, we obtain a global
separatrix $W^s(J_0)$ over the entire unit disc $\DD = \{|z| < 1\}$. Note that
$W^s(J_0)$ may not be a manifold, since ramification may occur at points where
it intersects the critical locus of $f_a$. For $|z| < 1$, $J_z$ is the
intersection of $W^s(J_0)$ with $\{z\} \times \CC$, and $K_z$ is the
intersection of $W^s([0:0:1])\cup W^s(J_0)$ with $\{z\} \times \CC$.
Thus, any point $(z,w)$ with $|z| < 1$ is in $W^s([0:0:1]) \cup W^s(J_0) \cup W^s([0:1:0])$.
Under the symmetry $\mathcal{S}$, each of the above statements about the super-stable
manifold of $J_0$ corresponds immediately to a statement about the unit circle
$J_\Pi = \{|Z/W| = 1\}$ in the line at infinity $\Pi = \{T=0\}$. Moreover,
any point in $\ensuremath{\field{P}^2}$ with $|T| < |Z|$ is in $W^s([1:0:0]) \cup W^s(J_\Pi) \cup
W^s([0:1:0])$.
Therefore, the Fatou set of $f_a$ is the union of basins of attraction for
three superattracting fixed points $[1:0:0]$, $[0:1:0]$, and $[0:0:1]$.
Since each of these fixed points is totally invariant, Proposition \ref{W_PATH_CONNECTED}
gives that each of their basins of attraction is path connected.
\vspace{0.05in}
The vertical Julia set $J_1$ over the fixed fiber $z=1$ is precisely the Julia
set of $w \mapsto w^2+a$, which is connected if and only if $a \in
\mathcal{M}$. In particular, if $a \not \in \mathcal{M}$, it follows from
Theorem \ref{THM:MAIN} that $W^s([0:1:0])$ has infinitely generated first
homology.
\vspace{0.05in}
If $a \in \mathcal{M}$, then, for each $z \in J_p$, $J_z$ is a rotation of the
connected set $J_1$ and Theorem \ref{THM:MAIN}
gives that $W^s([0:1:0])$ is homeomorphic to an open ball. We will
now use Slodkowski's Theorem on holomorphic motions \cite{SLOD} (see also
\cite[Section 5.2]{HUBB}) to show that $W^s([0:0:1])$ and $W^s([1:0:0]) =
\mathcal{S}(W^s([0:0:1]))$ are homeomorphic to the open bidisc.
We will extend (in the parameter $z$) the holomorphic motion whose image is
$W^s_{\rm loc}(J_0)$ to a holomorphic motion of $J_0$ parameterized by $z \in
\DD$, having the entire separatrix $W^s(J_0)$ as its image. Then, by
Slodkowski's Theorem, this holomorphic motion extends (in the fiber $w$) from
$J_0$ to a holomorphic motion of the entire Riemann sphere $\mathbb{P}^1$ that
is also parameterized by $z \in \DD$. Consequently, $W^s([0:0:1])$ will be the
image of a holomorphic motion of the open disc $\{z=0, |w| < 1\}$,
parameterized by $z \in \DD$.
\vspace{0.05in}
Since $a \in \mathcal{M}$, it also follows from \cite[Proposition
6.4]{JON_SKEW} that for each $z \in \CC$ the fiber-wise critical points
\begin{eqnarray*}
C_z := \{w \in \CC \,:\,q_z'(w) = 0\}
\end{eqnarray*}
\noindent
are in $K_z$. We now check that they are disjoint from
$W^s(J_0)$.
The union of these fiber-wise critical points is just the horizontal line $w=0$
that stays on one side of $W^s(J_0)$, possibly touching it at many points. Note,
however, that the two sets are disjoint over $z=0$. Consider the point $z_0$ (with $|z_0|
< 1$) of smallest modulus where $w=0$ and $W^s(J_0)$ touch. Then, there is a
neighborhood $U$ of $z_0$ in $\CC^2$ in which $W^s(J_0)$ is given by the
zero set of a PSH function $\Psi$. Changing the sign of $\Psi$ (if necessary)
we can assume that $\Psi \leq 0$ for points in $K_z \cap U$. The restriction
$\psi(z) =\Psi|_{w=0}$ is a non-positive harmonic function in a neighborhood of
$z_0$ having $\psi(z_0) = 0$, but $\psi(z) < 0$ for $z$ with $|z|< |z_0|$.
This violates the maximum principle. Therefore, the fiber-wise critical points $C_z$
are disjoint from $W^s(J_0)$ for every $z$.
\vspace{0.05in}
Suppose that $\mathcal{D} \subset W^s(J_0)$ is the graph of a
holomorphic function $\nu(z)$ defined on $\{|z| < r\}$, for some $0 < r < 1$.
Then, since $W^s(J_0)$ is disjoint from the horizontal critical locus $w = 0$,
the Implicit Function Theorem gives that $f_a^{-1}(\mathcal{D})$ is
the union of two discs through the pre-images of $\nu(0)$, each given as the graph
of a holomorphic function over $\{|z| < \sqrt{r}\}$.
Let $(0,w) \in J_0$ with preimages $(0,w_1)$ and $(0,w_2)$. Since
$f_a(W^s_{\rm loc}(w_{1,2})) \subset W^s_{\rm loc}(w)$, the two discs from
$f_a^{-1}(W^s_{\rm loc}(w))$ form extensions of $W^s_{\rm loc}(w_{1})$ and
$W^s_{\rm loc}(w_{2})$, as graphs of holomorphic functions over $\{|z| <
\sqrt{\epsilon}\}$.
Therefore, by taking the preimages under $f_a$, the family of local stable
discs can be extended, each as the graph of a holomorphic function over $|z| <
\sqrt{\epsilon}$. Applied iteratively, we can extend them as the graphs of
holomorphic functions over discs $|z| < r$ for any $r < 1$. In the limit we
obtain global stable curves $W^s(w_0)$ through every $w_0 \in J_0$, each of
which is the graph of a holomorphic function of $z \in \DD$. Since the global
stable curves of distinct points in $J_0$ are disjoint, their union gives
$W^s(J_0)$ as the image of a holomorphic motion of $J_0$ parameterized by $z
\in \DD$. \end{proof}
\subsection{Postcritically Finite Holomorphic Endomorphisms}
Apart from the open questions raised above, this paper has been
about endomorphisms with complicated Fatou topology. At the opposite extreme,
the Fatou topology may also be trivial in many cases. We suspect that one
simple case in which the Fatou topology is trivial is when the map is
postcritically finite (PCF).
\begin{question}
Does the Fatou set of a postcritically finite holomorphic endomorphism of $\ensuremath{\field{P}^2}$ always have trivial homology?
\end{question}
A starting point for investigation into this question could be to attempt to establish it for the postcritically finite examples constructed by Sarah Koch \cite{KochFRphd, KochUSphd}. Heuristic evidence supports that the homology is trivial for Koch's maps. Her construction provides a class of PCF endomorphisms, containing an infinite number of maps, including the previously studied examples of \cite{FSpcf} and \cite{CrassPCF}.
\subsection{Other holomorphic endomorphisms of $\ensuremath{\field{P}^k}$}
As we have demonstrated in Sections~\ref{SEC:ENDO} and~\ref{SEC:SKEW},
given some information about the geometry of the support of $T$, we can apply
the techniques of Section~\ref{SEC:LINKING} to study the Fatou set of a
holomorphic endomorphism of $\ensuremath{\field{P}^2}$. We would like to be able to apply these
techniques to other holomorphic endomorphisms of $\ensuremath{\field{P}^k}$. However, specific examples of
holomorphic endomorphisms that are amenable to analytic study are notoriously
difficult to generate.
One family of endomorphisms that seems a potentially vast area of study is that of the H\'{e}non-like endomorphisms. Introduced by Hubbard and Papadopol in \cite{HP2}, and studied a bit further by Forn{\ae}ss and Sibony in \cite{FSexamples}, these are holomorphic endomorphisms arising from a certain perturbation of the H\'{e}non diffeomorphisms. The H\'{e}non diffeomorphisms have been deeply studied (e.g., by Bedford, Lyubich, and Smillie \cite{BLS1,BS4}, Bedford and Smillie \cite{BS6, BS9}, Hubbard and Oberste-Vorth \cite{HO1, HO2}, and Forn{\ae}ss and Sibony \cite{FSHenon}). A natural question that is thus far quite wide open is: how does the dynamics of a H\'{e}non diffeomorphism relate to the dynamics of the perturbed H\'{e}non endomorphism? Computer evidence suggests that the dynamics of H\'{e}non-like endomorphisms is rich and varied.
Specifically concerning the topology of the Fatou set, the main result of
\cite{BS6} is that connectivity of the Julia set is determined by connectivity
of a slice Julia set in a certain unstable manifold. We ask whether this
result would have implications for the related H\'{e}non endomorphism, which would
allow us to use linking numbers to establish some analog of
Theorem~\ref{THM:MAIN} for H\'{e}non endomorphisms.
\bibliographystyle{plain}
2109.14396
\section{Introduction}
Stories are central to human culture and communication. However, it seems that stories are easier said than generated. Despite incredible recent progress in natural language processing, the generation of longer texts is still a challenge \cite{van2019narrative, rashkin2020plotmachines}. \citet{ostermann2019mcscript2} present a machine comprehension corpus for the end-to-end evaluation of script knowledge, with 50\% of the questions in the corpus requiring script knowledge for the correct answer. The authors demonstrate that though the task is not challenging to humans, existing machine comprehension models fail to perform well on the data, even if they make use of a commonsense knowledge base.
Partially, this challenge could be attributed to the lack of adequate memory models. Longer texts demand better memory mechanisms, and possible ways to construct such mechanisms have been discussed in the literature for the last 25 years. Long short-term memory networks \cite{hochreiter1997long}, Neural Turing Machines \cite{graves2014neural}, memory networks \cite{weston2014memory}, and many other architectures try to tackle this problem. Attempts to introduce some form of memory in transformers, such as \cite{guo2019star} or \cite{burtsev2020memory}, could be regarded as the next steps in this long line of work.
There are some interesting recent attempts to generate long texts in a form that makes such longer texts feasible for a human reader. For example, \citet{agafonova2020paranoid} generate a diary of a neural network. Yet the generation of a narrative is still challenging. For a detailed review of earlier approaches to narrative generation, we refer the reader to \cite{kybartas2016survey}. Even modern models for narrative generation rely heavily on some form of expert knowledge or some type of hierarchical structure of the narrative. For example, \citet{fan2019strategies} first generate the predicate-argument structure of the text, then generate a surface realization of that structure, and finally replace the entity placeholders with context-sensitive names and references. \citet{fan2019strategies,ammanabrolu2020story} propose hierarchical generation frameworks that first plan a storyline and then generate a story based on it, together with techniques for preprocessing textual story data into event sequences. \citet{xu2018skeleton} develop a model that first generates the most critical phrases, called a skeleton, and then expands the skeleton into a complete and fluent sentence. Similarly, \citet{martin2018dungeons} provide a mid-level of abstraction between words and sentences to minimize event sparsity and present a technique for automated story generation whereby the problem is decomposed into the generation of successive events and the generation of natural language sentences from events. Finally, \citet{brahman2020cue} develop an approach where the user provides the model with such mid-level sentence abstractions in the form of cue phrases during the generation process.
However, we should take into consideration that modern Natural Language Processing (NLP) is fundamentally an experimental discipline, so the lack of dedicated data could be another bottleneck for the development of narrative generation. This paper tries to address this problem.
Unfortunately, the majority of available narrative datasets deal with some constrained form of a short plot that is usually called a {\em scenario}. These scenarios are centered around common activities, e.g., going grocery shopping or taking a shower. The narrative datasets available in the literature are also extremely small and could not be used with the most advanced modern NLP models. \citet{regneri2010learning} collect 493 event sequence descriptions for 22 behavior scenarios. \citet{modi2016inscript} present the InScript dataset that consists of 1,000 stories centered around 10 different scenarios. \citet{wanzare2019detecting} provide 200 scenarios and attempt to identify all references to them in a collection of narrative texts. \citet{mostafazadeh2016corpus} present a corpus of 50k five-sentence commonsense stories. Finally, there is the MPST dataset that contains 14K movie plot synopses \cite{kar2018mpst}, and WikiPlots\footnote{https://github.com/markriedl/WikiPlots}, which contains 112,936 story plots extracted from English-language Wikipedia. Recently, \citet{malyshevaDYP} provided a dataset of TV series along with an instrument for narrative arc analysis. These datasets are useful, yet, like the vast majority of narrative datasets, they are only available in English.
This paper provides a large multi-language dataset of stories in natural language. The stories have a cross-language index, and every story and character is cross-linked if it occurs in different languages. Additionally, the texts have tags such as a genre or a topic. This is the first story dataset of such magnitude that we know of. We hope that a large dataset of long storylines could be used for various aspects of narrative research as well as to facilitate experiments with end-to-end narrative generation.
\section{Data}
StoryDB is motivated by several interesting experiments that used WikiPlots — one of the larger English datasets of narratives available for all-purpose narrative research, which we have mentioned earlier. Seeing the various applications that the WikiPlots dataset has found in the NLP community, we believe that StoryDB will be even more useful due to its multiple languages, advanced filtering that guarantees a higher quality of the obtained data, and genre tagging. To improve reproducibility and keep StoryDB usable as Wikipedia is further updated, we publish the data as well as the code for the filtering pipeline\footnote{https://drive.google.com/drive/folders/1RCWk7pyvIpubtsf-f2pIsfqTkvtV80Yv}. The stories that form StoryDB are extracted from any Wikipedia article that contains a sub-header containing the word "plot" (e.g., "Plot", "Plot Summary", etc.) in the corresponding language.
\subsection{Dataset structure}
The dataset consists of several index files and includes a directory \verb"plots". Every file in the directory has a similar structure. The first two letters of the filename stand for the ISO 639-1\footnote{https://en.wikipedia.org/wiki/ISO\_639-1} code of the language of the texts presented in the file. For example, \verb"hy_plots.tsv" contains 4 861 plots in the Armenian language. The file \verb"simple_plots.tsv" contains stories in Simple English. Every entry in a plots file has a similar structure and includes the following fields:
\begin{itemize}
\item ID — the unique number of a plot that is the same across every language in the dataset;
\item Lang — the language of this particular entry;
\item Link — a link to the Wikipedia page containing the plot;
\item Title — the title of the story;
\item Text — the text of the story;
\item Categories — the categories that Wikipedia assigns to this story.
\end{itemize}
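The field layout above can be read with standard TSV tooling; the following sketch parses a plots file, assuming tab-separated columns with a header row matching the field names listed (the sample row is invented for illustration).

```python
import csv
import io

# Hypothetical sample of a *_plots.tsv file; the column names follow the
# field list above, and the row content is invented for illustration.
SAMPLE_TSV = (
    "ID\tLang\tLink\tTitle\tText\tCategories\n"
    "42\thy\thttps://hy.wikipedia.org/wiki/Example\tExample\tA plot summary.\tnovels;fiction\n"
)

def read_plots(fileobj):
    """Yield one dict per story entry in a plots file."""
    reader = csv.DictReader(fileobj, delimiter="\t")
    for row in reader:
        yield row

entries = list(read_plots(io.StringIO(SAMPLE_TSV)))
print(entries[0]["Lang"])  # hy
```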
One can navigate across the plot files using StoryDB's index file \verb"plot_matrix.tsv". The rows of the file stand for languages. If a given plot is available in a given language, then the title of this plot appears in the corresponding cell of \verb"plot_matrix.tsv". For example, if "Wee Free Men" is available in Simple English, it can be found by its title in the corresponding \verb"simple_plots.tsv". StoryDB also includes \verb"plot_rake.tsv", which contains keywords extracted with the RAKE algorithm \cite{rose2010automatic} for every story.
Finally, the files \verb"ID_lang_tag.tsv" and \verb"ID_tag_average.tsv" include information about the tags that correspond to a given story. We discuss the tagging procedure in detail later.
\subsection{Preprocessing}
Our motivation is to provide a dataset of storylines for various languages, including low-resource ones. Roughly speaking, we want to be sure that every story that ends up in StoryDB is a legitimate storyline description in the corresponding natural language. Thus, we are more interested in the precision of the dataset than in its recall. To guarantee a higher quality of the obtained stories, we implemented several heuristic filters that we briefly describe here.
English Wikipedia is an order of magnitude bigger than any other Wikipedia both in terms of users and in terms of admins\footnote{https://meta.wikimedia.org/wiki/List\_of\_Wikipedias}. This makes the English list of storylines the most extensive one. We regard it as the least noisy one and use it as a reference source for the filtering procedure. We exclude every page that includes a plot yet has no plot section in English Wikipedia for the same entry.
If Wikipedia in language X has a page with title A and this page is also available in language Y under title B, we list such a pair of stories as \verb"[language_X," \verb"title_A," \verb"language_Y," \verb"title_B]". Every entry in this list is an edge in a graph of stories. Every vertex in this graph has a corresponding name \verb"language, title". While connected stories from different languages usually contain similar storylines, two stories listed under different titles in the same language might differ significantly. Say, two stories in language X, \verb"[language_X," \verb"title_A]" and \verb"[language_X," \verb"title_B]", are both linked to one story in another language Y, \verb"[language_Y," \verb"title_C]". To avoid such ambiguities, we exclude connected components that contain more than one entry in the same language. The obtained list of stories ends up in the resulting matrix of stories used to navigate the dataset. We experimented with various filtering procedures and found this combination to produce a sufficiently rich dataset with a minimal number of duplicates.
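The ambiguity filter described above can be sketched with a simple union-find over (language, title) vertices: components holding two titles in the same language are discarded. The toy edges below are invented for illustration and are not from the actual pipeline.

```python
from collections import defaultdict

def filter_components(edges):
    """Keep only connected components with at most one title per language."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)

    components = defaultdict(set)
    for node in parent:
        components[find(node)].add(node)

    kept = []
    for comp in components.values():
        langs = [lang for lang, _ in comp]
        if len(langs) == len(set(langs)):  # at most one title per language
            kept.append(comp)
    return kept

edges = [
    (("en", "A"), ("de", "B")),  # clean cross-language pair -> kept
    (("en", "C"), ("fr", "D")),
    (("en", "E"), ("fr", "D")),  # two English titles link to fr/D -> dropped
]
good = filter_components(edges)
print(len(good))  # 1
```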
StoryDB is also equipped with a catalog of characters. If a character that has an individual Wikipedia page is mentioned in a story, its description in the original language is saved into the corresponding tsv-file alongside the ID of the story and the language of the description.
\subsection{Tagging}
We annotate the resulting stories using meta-information on categories from the Wiki API\footnote{https://www.mediawiki.org/w/api.php?action=help\&modules=query\%2Bcategories}. For every plot, we list all translated categories marked in every language in which this plot is available. Then we search these category lists for substrings that include tags from a manually created list of tags\footnote{The list of tags is published with the dataset.}. This allows us to provide language-specific tags for every language, which are listed in \verb"ID_lang_tag.tsv". For example, the Czech version of Black Night has the tags \verb"action;" \verb"crime;" \verb"drama;" \verb"superhero;" \verb"comics;" and \verb"thriller", while the same story in Persian has no tag \verb"comics", but has the additional tags \verb"neo-noir;" \verb"psychological;" \verb"epic;" and \verb"screenplays;".
The file \verb"ID_tag_average.tsv" includes the scores of the tags available for every story. The scores are calculated as follows: we count the number of times that a given tag is associated with a given story. Then we divide this number by the total number of languages in which the story is represented. The obtained space of tags could be useful for narrative exploration. Every story becomes a vector with every coordinate on the interval $[0,1]$. Figure \ref{fig:tags} shows a t-SNE visualisation of this space \cite{van2008visualizing} alongside the centroids of the more distinctive tags.
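The scoring rule above (occurrences of a tag divided by the number of language versions) can be sketched as follows; the per-language tag sets are invented for illustration.

```python
def tag_scores(lang_tags):
    """lang_tags: dict mapping language -> set of tags for one story.

    Returns each tag's score: (# language versions carrying the tag)
    divided by (# languages in which the story appears).
    """
    n_langs = len(lang_tags)
    counts = {}
    for tags in lang_tags.values():
        for tag in tags:
            counts[tag] = counts.get(tag, 0) + 1
    return {tag: c / n_langs for tag, c in counts.items()}

# Hypothetical story available in three languages.
story = {
    "cs": {"action", "crime", "drama"},
    "fa": {"action", "drama", "neo-noir"},
    "en": {"action", "drama"},
}
scores = tag_scores(story)
print(scores["action"])  # 1.0
```

Every coordinate of the resulting vector lies in $[0,1]$, matching the tag space described above.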
\begin{figure}[t!]\centering
\includegraphics[scale=0.2]{tags.png}
\caption{t-SNE visualisation of the plots in StoryDB clustered according to their tags. The figure shows the centroids of the tags with the highest variance across the dataset.}
\label{fig:tags}
\end{figure}
\subsection{StoryDB}
Figure \ref{fig:size} shows the relative size of the dataset in every language presented in StoryDB. English heavily dominates, followed by Italian, French, Russian, and German.
\begin{figure*}[h!]\centering
\includegraphics[scale=0.2]{plots_500.png}
\caption{Number of stories in every language that has more than five hundred entries in StoryDB.}
\label{fig:size}
\end{figure*}
There are more than 20 languages with three thousand or more stories available, including Finnish, Hungarian, and Persian. Table \ref{tab:DB} summarises some parameters of the obtained dataset.
\begin{table}[h!]
\centering
\begin{tabular}{lr}
\multicolumn{2}{c}{Story DB} \\ \hline
Number of languages & 42 \\
\begin{tabular}[c]{@{}l@{}}Median \# of stories in a language\end{tabular} & 2 772 \\
\begin{tabular}[c]{@{}l@{}}Maximum \# of stories in a language\end{tabular} & 63 756 \\
\begin{tabular}[c]{@{}l@{}}Minimum \# of stories in a language\end{tabular} & 568
\end{tabular}
\caption{Some resulting parameters of the StoryDB.}
\label{tab:DB}
\end{table}
\section{Evaluation}
We have used three modern transformer-based architectures for the evaluation:
\begin{itemize}
\item mBERT\footnote{https://huggingface.co/bert-base-multilingual-cased} \cite{devlin2018bert} — a multi-language version of BERT;
\item mDistilBERT\footnote{https://huggingface.co/distilbert-base-multilingual-cased} \cite{Sanh2019DistilBERTAD} — a distilled version of multi-language BERT;
\item XLM-Roberta\footnote{https://huggingface.co/xlm-roberta-base} \cite{conneau2020unsupervised} — a model that is two times larger than BERT in terms of the number of parameters.
\end{itemize}
These models are the most widely used multi-language models to date. The results of the experiments are publicly available at Weights and Biases\footnote{https://wandb.ai/altsoph/storydb\_eval.task1 \\ https://wandb.ai/altsoph/storydb\_eval.task2 \\ https://wandb.ai/altsoph/storydb\_eval.task3}, see \cite{wandb}. The evaluation was performed on the ten largest languages in StoryDB, namely: English — 'en', French — 'fr', Italian — 'it', Russian — 'ru', German — 'de', Dutch — 'nl', Ukrainian — 'uk', Portuguese — 'pt', Polish — 'pl', and Spanish — 'es'.
We evaluated three tasks:
\begin{itemize}
\item Task A. Multilabel classification for tags on a multilanguage corpus of plots;
\item Task B. Multilabel classification for tags in cross-lingual learning;
\item Task C. Multilabel classification for tags in cross-lingual learning with a corpus of overlapping plots that occur in every language.
\end{itemize}
Let us now describe every task in detail.
\subsection{Task A}
We have sampled the ten most frequent tags from StoryDB (the tag 'film' was the most frequent yet was excluded as somewhat redundant). These tags were: 'drama', 'comedy', 'television', 'fiction', 'series', 'action', 'thriller', 'black-and-white', 'science fiction', and 'horror'. These ten tags form a vector, where every dimension corresponds to one particular tag: '1' encodes the presence of the tag and '0' stands for its absence.
For every language out of the top ten in StoryDB, we have sampled 2000 plots such that every plot has at least one tag from the list of the ten most popular tags. In Task A the plots were sampled randomly for every language, so there is some overlap between languages: on average, 2\% of the plots in one language reoccur in another one. It is important to note that the set of tags for a given plot might differ across languages and one plot could have several tags simultaneously. Thus, multilabel classification is a natural evaluation task under these circumstances.
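The tag-to-vector encoding can be sketched as follows, using the ten tags listed above in a fixed order; the example plot's tags are invented.

```python
# Fixed ordering of the ten tags described above; each plot becomes a
# 0/1 vector marking which tags it carries.
TAGS = ["drama", "comedy", "television", "fiction", "series",
        "action", "thriller", "black-and-white", "science fiction", "horror"]

def encode(plot_tags):
    """Map a set of tag strings to a binary label vector over TAGS."""
    return [1 if tag in plot_tags else 0 for tag in TAGS]

vec = encode({"drama", "horror"})
print(vec)  # [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
```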
Since the dataset is not balanced with respect to tags, we used the binary cross-entropy loss\footnote{https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html} over the vector of tags. Table \ref{tab:task_a} and Table \ref{tab:task_a_detail} sum up the results of the three models on the multilanguage dataset of plots. Further details across languages and tags are available online\footnote{https://wandb.ai/altsoph/storydb\_eval.task1}.
\begin{table}[]
\begin{tabular}{lll}
& \begin{tabular}[c]{@{}l@{}}Hamming\\ Score\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multilabel\\ Accuracy\end{tabular} \\
\hline
mDistilBERT & 0.47 & 0.31 \\
mBERT & 0.50 & 0.33 \\
XLM-RoBERTa & 0.50 & 0.33
\end{tabular}
\caption{Task A. Hamming score and multilabel accuracy for the vector of predicted tags on a validation set. Training data consists of sixteen thousand plots in ten languages, with one tenth of the dataset in every language.}
\label{tab:task_a}
\end{table}
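The two multilabel metrics reported in the tables have several definitions in the literature; the sketch below uses common ones, which we assume here: the Hamming score as the per-sample Jaccard index (intersection over union of predicted and true label sets) averaged over samples, and multilabel accuracy as the exact-match ratio. The toy label vectors are invented.

```python
def hamming_score(y_true, y_pred):
    """Average per-sample Jaccard index over binary label vectors."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        t_set = {i for i, v in enumerate(t) if v}
        p_set = {i for i, v in enumerate(p) if v}
        union = t_set | p_set
        total += len(t_set & p_set) / len(union) if union else 1.0
    return total / len(y_true)

def multilabel_accuracy(y_true, y_pred):
    """Fraction of samples whose label vector is predicted exactly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0]]
print(hamming_score(y_true, y_pred))        # (0.5 + 1.0) / 2 = 0.75
print(multilabel_accuracy(y_true, y_pred))  # 0.5
```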
\begin{table}[]
\begin{tabular}{llll}
& mD-BERT & mBERT & XLM-R \\
\hline
Comedy & 0.69 & 0.67 & 0.69 \\
Action & 0.67 & 0.70 & 0.67 \\
Fiction & 0.78 & 0.80 & 0.81 \\
Thriller & 0.67 & 0.63 & 0.64 \\
Horror & 0.70 & 0.76 & 0.75 \\
Drama & 0.73 & 0.74 & 0.74 \\
Series & 0.77 & 0.78 & 0.78 \\
Television & 0.74 & 0.76 & 0.76 \\
\begin{tabular}[c]{@{}l@{}}Science\\ Fiction\end{tabular} & 0.78 & 0.80 & 0.81 \\
\begin{tabular}[c]{@{}l@{}}Black and \\ White\end{tabular} & 0.68 & 0.65 & 0.62
\end{tabular}
\caption{Task A. AUC-ROC for binary tag classifiers on a validation set. Training data consists of sixteen thousand plots in ten languages, with one tenth of the dataset in every language.}
\label{tab:task_a_detail}
\end{table}
\subsection{Task B}
Now let us consider a similar setup, yet train every model on one language in StoryDB and test its accuracy on another language. The parameters of the training datasets and labels are the same as in Task A above, but every model is trained on one dataset and is then tested on the other languages. Table \ref{tab:task_b} shows the performance of mBERT; mDistilBERT and XLM-RoBERTa demonstrate similar behavior. The detailed results can be found online\footnote{https://wandb.ai/altsoph/storydb\_eval.task2}.
\begin{table*}[]
\centering
\begin{tabular}{l|llllllllll}
& en & de & nl & fr & it & es & pt & ru & uk & pl \\
\hline
en & 0.36 & 0.16 & 0.14 & 0.16 & 0.10 & 0.17 & 0.15 & 0.13 & 0.12 & 0.15 \\
de & 0.15 & 0.40 & 0.16 & 0.18 & 0.12 & 0.16 & 0.20 & 0.19 & 0.18 & 0.21 \\
nl & 0.20 & 0.33 & 0.41 & 0.20 & 0.22 & 0.32 & 0.29 & 0.31 & 0.25 & 0.30 \\
fr & 0.16 & 0.20 & 0.16 & 0.51 & 0.13 & 0.18 & 0.18 & 0.16 & 0.14 & 0.19 \\
it & 0.19 & 0.30 & 0.24 & 0.21 & 0.21 & 0.26 & 0.28 & 0.27 & 0.24 & 0.30 \\
es & 0.23 & 0.24 & 0.20 & 0.22 & 0.18 & 0.45 & 0.27 & 0.22 & 0.22 & 0.23 \\
pt & 0.15 & 0.21 & 0.17 & 0.22 & 0.10 & 0.19 & 0.44 & 0.19 & 0.14 & 0.23 \\
ru & 0.12 & 0.21 & 0.16 & 0.13 & 0.12 & 0.20 & 0.22 & 0.45 & 0.16 & 0.20 \\
uk & 0.10 & 0.20 & 0.16 & 0.14 & 0.09 & 0.19 & 0.19 & 0.23 & 0.25 & 0.20 \\
pl & 0.19 & 0.27 & 0.19 & 0.21 & 0.11 & 0.20 & 0.24 & 0.24 & 0.20 & 0.48
\end{tabular}
\caption{Task B. Multilabel accuracy for the vector of predicted tags by mBERT. Training data consists of one thousand six hundred plots in one language. Every row shows the validation accuracy of a model trained on the corresponding language and validated on the plots in the language of the corresponding column.}
\label{tab:task_b}
\end{table*}
Table \ref{tab:task_b} demonstrates that if we train the model on one language and validate it on another, the quality of the multilabel tag classification drops. This drop varies across languages and tends to be smaller for languages that belong to the same language family.
\subsection{Task C}
The last validation is similar to Task B, yet now we sample plots that overlap in every language. This limits us to 1500 plots in six languages, which we split into train and test sets. Now every plot occurs in every language. Table \ref{tab:task_c} shows that the model manages to recover certain tags in one language after pre-training on another. Table \ref{tab:task_c} shows the performance of XLM-RoBERTa; mDistilBERT and mBERT demonstrate similar behavior. The performance of the models tends to be better on overlapping plots compared to Task B. The detailed results can be found online\footnote{https://wandb.ai/altsoph/storydb\_eval.task3}.
\begin{table}[]
\begin{tabular}{l|llllll}
& en & de & nl & fr & it & es \\
\hline
en & 0.29 & 0.16 & 0.12 & 0.12 & 0.10 & 0.08 \\
de & 0.27 & 0.31 & 0.18 & 0.19 & 0.15 & 0.11 \\
nl & 0.34 & 0.30 & 0.32 & 0.17 & 0.13 & 0.17 \\
fr & 0.22 & 0.20 & 0.16 & 0.27 & 0.15 & 0.10 \\
it & 0.34 & 0.29 & 0.23 & 0.25 & 0.25 & 0.16 \\
ru & 0.25 & 0.21 & 0.18 & 0.18 & 0.18 & 0.13
\end{tabular}
\caption{Task C. Multilabel accuracy for the vector of predicted tags by XLM-RoBERTa across the dataset of plots that overlap in every language. Training data consists of one thousand two hundred plots in one language. Every row shows the validation accuracy of a model trained on the corresponding language and validated on the plots in the language of the corresponding column.}
\label{tab:task_c}
\end{table}
The multilabel accuracy for tag prediction declines further, yet this can be attributed neither to specific lexical properties of a particular language nor to any form of plot overlap across languages.
This series of evaluation tasks demonstrates two crucial properties of StoryDB:
\begin{itemize}
\item StoryDB could be used to work with narrative structures at the most abstract cross-lingual level;
\item StoryDB allows controlling for various cross-lingual similarities of plots during ablation experiments with models of narrative.
\end{itemize}
\section{Discussion}
We believe that a broad multilanguage dataset of narratives can facilitate several areas of narrative research.
\begin{itemize}
\item Cross-cultural research of narrative structure. StoryDB makes it possible to compare the structure of narratives in various languages. Since StoryDB includes every story in its original language and is equipped with a universal system of tags, it is a natural source for such cross-cultural research.
\item Classification of narratives. StoryDB includes an extensive number of narratives in various languages alongside their genre tags. This allows one to develop new methods for narrative classification as well as to extensively test existing ones; see, for example, \cite{reiter2014nlp}.
\item Quantitative research of narrative structure. \citet{y2007employing} represent a story as a cluster of emotional links and tensions between characters that progress over story time. StoryDB includes descriptions of the plots alongside the key characters. Such information could be insightful for a deeper quantitative understanding of narrative as a by-product of character interaction.
\item Summarization of narratives. Parallel corpora in different languages contain similar descriptions of a narrative that could vary in terms of detail and length. That makes StoryDB a useful resource for potential narrative summarization research such as \cite{barros2019natsum}.
\item End-to-end narrative generation. StoryDB is the first dataset of narratives that we know of that contains narrative descriptions in various natural languages.
\end{itemize}
\section{Conclusion}
This paper presents StoryDB — a broad multi-language dataset of narratives. We describe the construction of the dataset, provide the code for the whole pipeline, list the parameters of the resulting dataset, and briefly discuss several areas of natural language processing research, where StoryDB could be useful for the community.
We hope that StoryDB will be broadened as more plot descriptions are added in various languages. These considerations make StoryDB a flexible resource that will remain relevant for the NLP community as the subfield of quantitative narrative research moves forward.
\bibliographystyle{acl_natbib}
|
1011.1754
|
\section{Introduction}
Let $G = (V, E)$ be a graph with finite vertex set $V$ and edge set $E \subseteq \binom{V}{2}$. Let $A\colon V \times V \to \mathbb{R}$ be a symmetric matrix whose rows and columns are indexed by the vertex set of~$G$, and $r$ be a positive integer. The \emph{graphical Grothendieck problem with rank-$r$ constraint} is the following optimization problem:
\begin{equation*}
\sdp_r(G,A) = \max\biggl\{\,\sum_{\{u,v\} \in E} A(u,v) f(u) \cdot f(v) \; : \; f\colon V \to S^{r-1}\,\biggr\},
\end{equation*}
where $S^{r-1} = \{\,x \in \mathbb{R}^r : x \cdot x = 1\,\}$ is the $(r-1)$-dimensional unit sphere. The \emph{rank-$r$ Grothendieck constant of the graph $G$} is the smallest constant $K(r,G)$ so that for all symmetric matrices $A\colon V \times V \to \mathbb{R}$ the following inequality holds:
\begin{equation}
\label{Grothendieck's inequality}
\sdp_{\infty}(G,A) \leq K(r,G) \sdp_r(G,A).
\end{equation}
Here $S^{\infty}$ denotes the unit sphere of the Hilbert space $l^2(\mathbb{R})$ of square summable sequences, which contains $\mathbb{R}^n$ as the subspace of the first $n$ components. It is easy to see that $K(r,G) \geq 1$.
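For intuition, $\sdp_1(G,A)$ of a small graph can be computed by brute force, since $S^{0} = \{-1,+1\}$ makes the search space finite. The sketch below uses a toy triangle graph with weights chosen for illustration; it is not part of the paper's methods.

```python
from itertools import product

def sdp1(vertices, edges, A):
    """Brute-force SDP_1(G, A): maximize sum over edges of
    A(u,v) * f(u) * f(v) with f mapping vertices to {-1, +1}."""
    best = float("-inf")
    for signs in product((-1, 1), repeat=len(vertices)):
        f = dict(zip(vertices, signs))
        val = sum(A[(u, v)] * f[u] * f[v] for u, v in edges)
        best = max(best, val)
    return best

# Toy instance: a triangle with one negative edge weight.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]
A = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): -1.0}
print(sdp1(V, E, A))  # 1.0
```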
In this paper, we prove new upper bounds for~$K(r,G)$.
\subsection{Some history}
Inequality \eqref{Grothendieck's inequality} is called a \emph{Grothendieck inequality} because it first appeared in the work \cite{Grothendieck} of Grothendieck on the metric theory of tensor products. More precisely, Grothendieck considered the case~$r = 1$ for $2$-chromatic (bipartite) graphs, although in quite a different language. (A \emph{$k$-chromatic graph} is a graph whose chromatic number is $k$, i.e., one can color its vertices with $k$ colors so that adjacent vertices get different colors, but~$k-1$ colors do not suffice for this.) Grothendieck proved that in this case $K(1,G)$ is upper bounded by a constant that is independent of the size of $G$.
Later, Lindenstrauss and Pe{\l}czy{\'n}ski \cite{LindenstraussPelczynski} reformulated Grothendieck's inequality for bipartite graphs in a way that is very close to the formulation we gave above. The graphical Grothendieck problem with rank-$1$ constraint was introduced by Alon, Makarychev, Makarychev, and Naor \cite{AlonMakarychevMakarychevNaor}. Haagerup \cite{Haagerup} considered the complex case of Grothendieck's inequality;
his upper bound is also valid for the real case~$r = 2$. The higher rank case for bipartite graphs was introduced by Bri\"et, Buhrman, and Toner~\cite{BrietBuhrmanToner}.
\subsection{Computational perspective}
There has been a recent surge of interest in Grothendieck inequalities by the computer science community. The problem $\sdp_r(G,A)$ is a semidefinite maximization problem with rank-$r$ constraint:
\begin{equation*}
\begin{split}
\sdp_r(G,A) = \max\biggl\{\,\sum_{\{u,v\}\in E} A(u,v)X(u,v) \;\; : \;\; & X \in \mathbb{R}^{V \times V}_{\succeq 0},\\[-1em]
& X(u,u) = 1 \text{ for all $u \in V$,}\\
& \rank X \leq r\,\biggr\},
\end{split}
\end{equation*}
where~$\mathbb{R}^{V \times V}_{\succeq 0}$ is the set of matrices~$X\colon V \times V \to \mathbb{R}$ that are positive semidefinite.
On the one hand, $\sdp_r(G,A)$ is generally a difficult computational problem. For instance, if $r=1$ and~$G$ is the complete bipartite graph $K_{n,n}$ on $2n$ nodes, and if~$A$ is the Laplacian matrix of a graph $G'$ on $n$ nodes, then computing $\sdp_1(K_{n,n},A)$ is equivalent to computing the weight of a maximum cut of~$G'$. The maximum cut problem (MAX CUT) is one of Karp's 21 $\mathrm{NP}$-complete problems. On the other hand, if we relax the rank-$r$ constraint, then we deal with $\sdp_{\infty}(G,A)$, which is an easy computational problem: Obviously, one has $\sdp_{\infty}(G,A) = \sdp_{|V|}(G,A)$ and computing $\sdp_{|V|}(G,A)$ amounts to solving a semidefinite programming problem (see e.g.\ Vandenberghe, Boyd \cite{VandenbergheBoyd}). Therefore one may approximate it to any fixed precision in polynomial time by using the ellipsoid method or interior point algorithms.
In many cases the optimal constant $K(r,G)$ is not known and so one is interested in finding upper bounds for~$K(r,G)$. Usually, proving an upper bound amounts to giving a randomized polynomial-time approximation algorithm for $\sdp_r(G,A)$. In the case of the MAX CUT problem, Goemans and Williamson~\cite{GoemansWilliamson} pioneered an approach based on randomized rounding: One rounds an optimal solution of~$\sdp_\infty(G, A)$ to a feasible solution of~$\sdp_r(G, A)$. The expected value of the rounded solution is then related to that of the original solution, and this gives an upper bound for~$K(r, G)$. Using this basic idea, Goemans and Williamson~\cite{GoemansWilliamson} showed that for all symmetric matrices $A\colon V \times V \to \mathbb{R}$ which have the properties $A(u,v) \leq 0$ for $u$ distinct from $v$ and $\sum_{u \in V} A(u,v) = 0$ for all $v \in V$, we have
\begin{equation*}
\sdp_\infty(K_{n,n},A) \leq (0.878\dots)^{-1} \sdp_1(K_{n,n},A).
\end{equation*}
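The rounding step described above can be sketched as follows: given unit vectors from a high-rank solution, a random hyperplane through the origin assigns each vector a sign according to which side it falls on. The input vectors below are toy values, not an actual SDP optimum.

```python
import math
import random

random.seed(0)  # for reproducibility of the sketch

def round_to_signs(vectors):
    """Random-hyperplane rounding: project each unit vector onto a random
    Gaussian direction and keep only the sign of the projection."""
    dim = len(next(iter(vectors.values())))
    g = [random.gauss(0.0, 1.0) for _ in range(dim)]  # hyperplane normal
    return {u: (1 if sum(gi * xi for gi, xi in zip(g, x)) >= 0 else -1)
            for u, x in vectors.items()}

# Two toy unit vectors in the plane, an angle of 2 radians apart.
vecs = {
    "u": [1.0, 0.0],
    "v": [math.cos(2.0), math.sin(2.0)],
}
signs = round_to_signs(vecs)
print(signs)  # each vertex mapped to -1 or +1
```

For two vectors at angle $\theta$, the probability of being cut by the hyperplane is $\theta/\pi$, which is the source of the $0.878\dots$ ratio.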
\medskip
\subsection{Applications and references}
Grothendieck's inequality is a fundamental inequality in the theory of Banach spaces. Many books on the geometry of Banach spaces contain a substantial treatment of the result. We refer for instance to the books by Pisier~\cite{Pisier}, Jameson~\cite{Jameson}, and Garling~\cite{Garling}.
During the last years, especially after Alon and Naor \cite{AlonNaor} pointed out the connection between the inequality and approximation algorithms using semidefinite programs, Grothendieck's inequality has also become a unifying and fundamental tool outside of functional analysis.
It has applications in optimization (Nesterov \cite{Nesterov}, Nemirovski, Roos, Terlaky \cite{NemirovskiRoosTerlaky}, Megretski \cite{Megretski}), extremal combinatorics (Alon, Naor \cite{AlonNaor}), system theory (Ben-Tal, Nemirovski \cite{BenTalNemirovski}), machine learning (Charikar, Wirth \cite{CharikarWirth}, Khot, Naor \cite{KhotNaor, KhotNaor2}), communication complexity (Linial, Shraibman \cite{LinialSchraibman}), quantum information theory (Tsirel'son \cite{Tsirelson}, Regev, Toner \cite{RegevToner}), and computational complexity (Khot, O'Donnell \cite{KhotODonnell}, Arora, Berger, Kindler, Safra, Hazan \cite{AroraBergerKindlerHazanSafra}, Khot and Naor~\cite{KhotNaor3}, Raghavendra, Steurer \cite{RaghavendraSteurer}).
The references above mainly deal with the combinatorial rank $r = 1$ case, when $S^0 = \{-1,+1\}$. For applications in quantum information (Bri\"et, Buhrman, Toner \cite{BrietBuhrmanToner}) and in statistical mechanics (mentioned in Alon, Makarychev, Makarychev, Naor \cite{AlonMakarychevMakarychevNaor}, Kindler, Naor, Schechtman \cite{KindlerNaorSchechtman}) the more geometrical case when $r > 1$ is of interest --- this case is the subject of this paper.
Before we present our results we consider the application to statistical mechanics: The \emph{$n$-vector model}, introduced by Stanley \cite{Stanley}, describes the interaction of particles in a spin glass with ferromagnetic and antiferromagnetic interactions. The case $n = 1$ corresponds to the Ising model, the case $n = 2$ to the XY model, the case~$n = 3$ to the Heisenberg model, and the case $n = \infty$ to the Berlin-Kac spherical model.
Let $G = (V, E)$ be the interaction graph where the vertices are particles and where edges indicate which particles interact.
The potential function $A\colon V \times V \to \mathbb{R}$ is $0$ if $u$ and $v$ are not adjacent, it is positive if there is ferromagnetic interaction between $u$ and $v$, and it is negative if there is antiferromagnetic interaction. The particles possess a vector-valued spin $f\colon V \to S^{n-1}$. In the absence of an external field, the total energy of the system is given by the \emph{Hamiltonian}
\begin{equation*}
H(f) = -\sum_{\{u,v\} \in E} A(u,v) f(u) \cdot f(v).
\end{equation*}
The ground state of this model is a configuration of spins $f\colon V \to S^{n-1}$ which minimizes the total energy. Finding the ground state is the same as solving~$\sdp_n(G,A)$. Typically, the interaction graph has small chromatic number, e.g.\ the most common case is when $G$ is a finite subgraph of the integer lattice $\mathbb{Z}^n$ where the vertices are the lattice points and where two vertices are connected if their Euclidean distance is one. These graphs are bipartite since they can be partitioned into even and odd vertices, corresponding to the parity of the sum of the coordinates.
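The $n = 1$ (Ising) case of the model above admits a direct brute-force sketch: spins take values in $S^{0} = \{-1,+1\}$ and the ground state minimizes the Hamiltonian $H(f)$. The $2\times 2$ grid and couplings below are a toy instance chosen for illustration.

```python
from itertools import product

def hamiltonian(f, edges, A):
    """H(f) = - sum over edges of A(u,v) * f(u) * f(v)."""
    return -sum(A[e] * f[e[0]] * f[e[1]] for e in edges)

# Toy 2x2 grid with uniform ferromagnetic couplings A(u,v) = 1.
V = [(0, 0), (0, 1), (1, 0), (1, 1)]
E = [((0, 0), (0, 1)), ((0, 0), (1, 0)),
     ((0, 1), (1, 1)), ((1, 0), (1, 1))]
A = {e: 1.0 for e in E}

# Exhaustive search over the 2^|V| spin configurations.
ground = min((dict(zip(V, s)) for s in product((-1, 1), repeat=len(V))),
             key=lambda f: hamiltonian(f, E, A))
print(hamiltonian(ground, E, A))  # -4.0 (all spins aligned)
```

As expected for ferromagnetic couplings, the ground state aligns all spins, which is the same as solving $\sdp_1$ of this interaction graph up to a sign.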
We briefly describe the relation to quantum information theory.
In an influential paper, Einstein, Podolsky, and Rosen~\cite{Einstein:1935} pointed out an anomaly of quantum mechanics that allows spatially separated parties to establish peculiar correlations by each performing measurements on a private quantum system: {\em entanglement}.
Later, Bell~\cite{Bell:1964} proved that local measurements on a pair of spatially separated, entangled quantum systems, can give rise to joint probability distributions of measurement outcomes that violate certain inequalities (now called Bell inequalities), satisfied by any classical distribution.
Experimental results of Aspect, Grangier, and Roger~\cite{Aspect:1981} give strong evidence that nature indeed allows distant physical systems to be correlated in such non-classical ways.
{\em XOR games}, first formalized by Cleve, H\o yer, Toner, and Watrous~\cite{Cleve:2004}, constitute the simplest model in which entanglement can be studied quantitatively. In an XOR game, two players, Alice and Bob, receive questions $u$ and $v$ (resp.) that are picked by a referee according to some probability distribution $\pi(u,v)$ known to everybody in advance. Without sharing their questions, the players have to answer the referee with bits $a$ and $b$ (resp.), and win the game if and only if the exclusive-OR of their answers $a\oplus b$ equals the value of a Boolean function $g(u,v)$; the function $g$ is also known in advance to all three parties.
In a quantum-mechanical setting, the players determine their answers by performing measurements on their shares of a pair of entangled quantum systems.
A {\em state} of a pair of $d$-dimensional quantum systems is a trace-$1$ positive semidefinite operator $\rho\in \mathbb{C}^{d^2\times d^2}_{\succeq 0}$.
The systems are {\em entangled} if $\rho$ is not a convex combination of tensor products of $d$-by-$d$ positive semidefinite matrices.
For each question $u$, Alice has a two-outcome measurement defined by a pair of $d$-by-$d$ positive semidefinite matrices $\{A_u^0,A_u^1\}$ that satisfies $A_u^0 + A_u^1 = I$, where $I$ is the identity matrix.
Bob has a similar pair $\{B_v^0,B_v^1\}$ for each question~$v$.
When the players perform their measurements, the probability that they obtain bits $a$ and $b$ is given by $\Tr(A_u^a\otimes B_v^b\rho)$.
The case $d = 1$ corresponds to a classical setting. In this case, the maximum winning probability equals $\big(1 + \sdp_1(G,A)\big)/2$, where
$G$ is the complete bipartite graph with Alice and Bob's questions on opposite sides of the partition, and $A(u,v) = (-1)^{g(u,v)}\pi(u,v)/2$ for pairs $\{u,v\}\in E$ and $A(u,v)=0$ everywhere else.
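For illustration, the classical ($d = 1$) value of a small XOR game can be computed by brute force, since an optimal classical strategy can always be taken deterministic. The sketch below (function name ours, not part of the formal development) does this for the well-known CHSH game, where $\pi$ is uniform and $g(u,v) = u \wedge v$; the classical value is $3/4$.

```python
from itertools import product

def classical_value(pi, g, nU, nV):
    """Maximum classical winning probability of an XOR game, by brute
    force over deterministic strategies.  `pi` maps question pairs
    (u, v) to probabilities and `g` maps them to the target bit."""
    best = 0.0
    for a in product([0, 1], repeat=nU):        # Alice's answer function
        for b in product([0, 1], repeat=nV):    # Bob's answer function
            win = sum(p for (u, v), p in pi.items()
                      if a[u] ^ b[v] == g[(u, v)])
            best = max(best, win)
    return best

# CHSH game: uniform question distribution, g(u, v) = u AND v.
pi = {(u, v): 0.25 for u in (0, 1) for v in (0, 1)}
g = {(u, v): u & v for u in (0, 1) for v in (0, 1)}
```

No deterministic strategy wins all four question pairs, since summing the four constraints $a(u) \oplus b(v) = g(u,v)$ gives $0 = 1$ modulo $2$; three out of four is the best possible classically.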
Tsirel'son~\cite{Tsirelson} related the maximum winning probability $\omega^*_d(\pi,g)$ of the game $(\pi,g)$, when the players are restricted to measurements on $d$-dimensional quantum systems, to the quantity $\sdp_r(G,A)$.
In particular, he proved that
\begin{equation*}
\frac{1 + \sdp_{\lfloor\log d\rfloor}(G,A)}{2} \leq \omega_d^*(\pi,g) \leq \frac{1+ \sdp_{2d}(G,A)}{2}.
\end{equation*}
The quantity~$\sdp_r(G,A)$ thus gives bounds on the maximum winning probability of XOR games when players are limited in the amount of entanglement they are allowed to use.
The rank-$r$ Grothendieck constant $K(r,G)$ of the bipartite graph~$G$ described above gives a quantitative bound on the advantage that unbounded entanglement gives over finite entanglement in XOR games.
\medskip
\subsection{Our results and methods}
\label{sec:our methods}
The purpose of this paper is to prove explicit upper bounds for $K(r,G)$. We are especially interested in the case of small~$r$ and graphs with small chromatic number, although our methods are not restricted to this. The proof of the following theorem gives a randomized polynomial-time approximation algorithm for approximating ground states in the Heisenberg model in the lattice $\mathbb{Z}^3$ with approximation ratio $0.78\ldots = (1.28\ldots)^{-1}$. This result can be regarded as one of the main contributions of this paper.
\newpage
\begin{theorem}
\label{thm:main}
For $r = 1, \ldots, 10$, and for a bipartite or a tripartite graph~$G$, the rank-$r$ Grothendieck constant is at most:
\medskip
\begin{center}
\begin{tabular}{ccc}
\noalign{\hrule\vskip2pt}
\quad$r$\quad\hbox{} & \quad{\sl bipartite $G$}\quad \hbox{} & \quad {\sl tripartite $G$} \quad\hbox{}\\[2pt]
\noalign{\hrule\vskip2pt}
$1$ & $1.782213\dots$ & $3.264251\dots$\\
$2$ & $1.404909\dots$ & $2.621596\dots$\\
$3$ & $1.280812\dots$ & $2.412700\dots$\\
$4$ & $1.216786\dots$ & $2.309224\dots$\\
$5$ & $1.177179\dots$ & $2.247399\dots$\\
$6$ & $1.150060\dots$ & $2.206258\dots$\\
$7$ & $1.130249\dots$ & $2.176891\dots$\\
$8$ & $1.115110\dots$ & $2.154868\dots$\\
$9$ & $1.103150\dots$ & $2.137736\dots$\\
$10$ & $1.093456\dots$ & $2.124024\dots$\\[2pt]
\hline
\end{tabular}
\end{center}
\end{theorem}
\smallskip
The bound for the original Grothendieck constant $K(1,G)$ for bipartite~$G$ is due to Krivine~\cite{Krivine}.
For more than thirty years this was the best known upper bound, and it was conjectured by many to be optimal.
However, shortly after our work appeared in preprint form, Braverman, Makarychev, Makarychev, and Naor~\cite{Braverman:2011} showed that Krivine's bound can be slightly improved.
The best known lower bound is $1.676956\dots$ due to Davie~\cite{Davie} and Reeds~\cite{Reeds} (see also Khot and O'Donnell~\cite{KhotODonnell}). The bound for~$K(2,G)$ is due to Haagerup \cite{Haagerup}.
When the graph~$G$ has large chromatic number, then the result of Alon, Ma\-ka\-ry\-chev, Makarychev, and Naor~\cite{AlonMakarychevMakarychevNaor} gives the best known bounds for $K(1,G)$: They prove a logarithmic dependence on the chromatic number of the graph (actually on the theta number of the complement of $G$, cf. Section~\ref{constructing section}) whereas our methods only give a linear dependence. Although our main focus is on small chromatic numbers, for completeness we extend the results of~\cite{AlonMakarychevMakarychevNaor} for large chromatic numbers to $r\geq 2$ in Section~\ref{sec:highchrom}. In a previous paper~\cite{Briet:MTNS2010} we proved that $K(r,K_{n,n}) = 1 +\Theta(1/r)$.
For the proof of Theorem~\ref{thm:main} we use the framework of Krivine and Haagerup which we explain below. Our main technical contributions are a matrix version of Grothendieck's identity (Lemma~\ref{grothendieck identity}) and a method to construct new unit vectors which can also deal with nonbipartite graphs (Lemma~\ref{lem:gen-embedding}).
The strategy of Haagerup and Krivine is based on the following embedding lemma:
\begin{lemma}
\label{lem:embed-vague}
Let~$G = (V, E)$ be a graph and choose~$Z = (Z_{ij}) \in \mathbb{R}^{r \times |V|}$ at random so that each entry is distributed independently according to the normal distribution with mean~$0$ and variance~$1$, that is,~$Z_{ij} \sim N(0, 1)$.
Given~$f\colon V \to S^{|V|-1}$, there is a function~$g\colon V \to S^{|V|-1}$ such that whenever~$u$ and~$v$ are adjacent in~$G$, then
\[
\mathbb{E}\biggl[\frac{Z g(u)}{\|Z g(u)\|} \cdot \frac{Z g(v)}{\|Z g(v)\|}\biggr] = \beta(r, G) f(u) \cdot f(v)
\]
for some constant~$\beta(r, G)$ depending only on~$r$ and~$G$.
\end{lemma}
In the statement above we are vague regarding the constant~$\beta(r, G)$. We give the precise statement of the lemma in Section~\ref{constructing section} (cf.~Lemma~\ref{lem:gen-embedding} there); the precise statement is not relevant to our discussion at this point.
Now, the strategy of Haagerup and Krivine amounts to analyzing the following four-step procedure that yields a randomized polynomial-time approximation algorithm for $\sdp_r(G,A)$:
\medskip
\noindent
{\bf Algorithm~A.}\enspace Takes as input a finite graph~$G = (V, E)$ with at least one edge and a symmetric matrix~$A\colon V \times V \to \mathbb{R}$, and returns a feasible solution~$h\colon V \to S^{r-1}$ of~$\sdp_r(G, A)$.
\begin{enumerate}
\item Solve $\sdp_{\infty}(G, A)$, obtaining an optimal solution $f\colon V \to S^{|V|-1}$.
\item Use $f$ to construct $g\colon V \to S^{|V|-1}$ according to Lemma~\ref{lem:embed-vague}.
\item Choose $Z = (Z_{ij}) \in \mathbb{R}^{r \times |V|}$ at random so that every matrix entry $Z_{ij}$ is distributed independently according to the standard normal distribution with mean $0$ and variance $1$, that is,~$Z_{ij} \sim N(0,1)$.
\item Define $h\colon V \to S^{r-1}$ by setting $h(u) = Zg(u)/\|Zg(u)\|$.
\end{enumerate}
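Steps~(3) and~(4) are straightforward to implement. The following sketch (function name ours; it takes the map $g$ from step~(2) as input instead of computing steps (1)--(2), which require an SDP solver) performs the Gaussian projection and normalization:

```python
import math, random

def round_to_rank_r(g, r):
    """Steps (3) and (4) of Algorithm A: pick a random Gaussian matrix
    Z in R^{r x n}, project each vector g(u) in R^n to Zg(u) and
    normalize, yielding unit vectors h(u) in S^{r-1}.
    `g` maps vertices to unit vectors given as lists of floats."""
    n = len(next(iter(g.values())))
    # Step (3): i.i.d. N(0, 1) entries.
    Z = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(r)]
    # Step (4): h(u) = Zg(u) / ||Zg(u)||.
    h = {}
    for u, x in g.items():
        y = [sum(Z[i][j] * x[j] for j in range(n)) for i in range(r)]
        norm = math.sqrt(sum(c * c for c in y))
        h[u] = [c / norm for c in y]
    return h
```

With probability one $\|Zg(u)\| \neq 0$, so the normalization is well-defined.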
\medskip
To analyze this procedure, we compute the expected value of the feasible solution~$h$. Using Lemma~\ref{lem:embed-vague} we obtain
\begin{equation}
\label{ineqchain}
\begin{split}
\sdp_r(G,A) & \geq \mathbb{E}\biggl[\sum_{\{u,v\} \in E} A(u,v) h(u) \cdot h(v)\biggr]\\
& = \sum_{\{u,v\} \in E} A(u,v) \mathbb{E}[h(u) \cdot h(v)]\\
& = \beta(r,G) \sum_{\{u,v\} \in E} A(u,v) f(u) \cdot f(v)\\
& = \beta(r,G) \sdp_{\infty}(G,A),
\end{split}
\end{equation}
and so we have $K(r,G) \leq \beta(r,G)^{-1}$.
If we were to skip step~{(2)} and apply step~{(4)} to $f$ directly, then the expectation~$\mathbb{E}[h(u)\cdot h(v)]$ would be a non-linear function of $f(u)\cdot f(v)$, which would make it difficult to assess the quality of the feasible solution $h$. The purpose of step~{(2)} is to linearize this expectation, which allows us to estimate the quality of~$h$ in terms of a linear function of $\sdp_r(G,A)$.
The constant~$\beta(r, G)$ in Lemma~\ref{lem:embed-vague} is defined in terms of the Taylor expansion of the inverse of the function~$E_r\colon [-1, 1] \to [-1, 1]$ given by
\[
E_r(x \cdot y) = \mathbb{E}\biggl[\frac{Zx}{\|Zx\|} \cdot \frac{Zy}{\|Zy\|}\biggr],
\]
where~$x$, $y \in S^\infty$ and~$Z = (Z_{ij}) \in \mathbb{R}^{r \times \infty}$ is chosen so that its entries are independently distributed according to the normal distribution with mean~$0$ and variance~$1$. The function~$E_r$ is well-defined since the expectation above is invariant under orthogonal transformations.
The Taylor expansion of~$E_r$ is computed in Section~\ref{grothendieck identity section}. The Taylor expansion of~$E_r^{-1}$ is treated in Section~\ref{convergence section}, where we basically follow Haagerup~\cite{Haagerup}. A precise version of Lemma~\ref{lem:embed-vague} is stated and proved in Section~\ref{constructing section}, following Krivine~\cite{Krivine}.
Finally, in Section~\ref{refined section} we show that one can refine this analysis and can (strictly) improve the upper bound if one takes the dimension of the matrix $A\colon V \times V \to \mathbb{R}$ into account. In particular, we compare the problems $\sdp_q$ and $\sdp_r$ for $q \geq r$. Earlier, Avidor and Zwick~\cite{AvidorZwick} considered the problem of bounding the ratio $\sdp_q(G,A)/\sdp_1(G,A)$ for $q = 2,3$ and $A$ the Laplacian matrix of a graph.
\section{A matrix version of Grothendieck's identity}
\label{grothendieck identity section}
In the analysis of many approximation algorithms that use semidefinite programming the following identity plays a central role:
Let $u$, $v$ be unit (column) vectors in $\mathbb{R}^n$ and let $Z \in \mathbb{R}^{1 \times n}$ be a random (row) vector whose entries are distributed independently according to the standard normal distribution with mean $0$ and variance~$1$. Then,
\begin{equation*}
\mathbb{E}[\sign(Zu)\sign(Zv)] = \mathbb{E}\biggl[\frac{Zu}{\|Zu\|} \cdot \frac{Zv}{\|Zv\|}\biggr] = \frac{2}{\pi}\arcsin(u \cdot v).
\end{equation*}
For instance, the celebrated algorithm of Goemans and Williamson \cite{GoemansWilliamson} for approximating the MAX CUT problem is based on this. The identity is called \emph{Grothendieck's identity} since it appeared for the first time in Grothendieck's work on the metric theory of tensor products \cite[Proposition 4, p. 63]{Grothendieck} (see also Diestel, Fourie, and Swart~\cite{DiestelFourieSwart}).
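Grothendieck's identity is also easy to check numerically. The following Monte Carlo sketch (illustration only, not used anywhere in the paper) estimates the left-hand side for two unit vectors in the plane with inner product $t = 0.6$ and compares it with $\frac{2}{\pi}\arcsin t$:

```python
import math, random

random.seed(0)
t = 0.6
u = (1.0, 0.0)
v = (t, math.sqrt(1.0 - t * t))   # unit vectors with u . v = t

N = 200_000
total = 0
for _ in range(N):
    z = (random.gauss(0, 1), random.gauss(0, 1))   # Z in R^{1 x 2}
    su = 1 if z[0] * u[0] + z[1] * u[1] >= 0 else -1
    sv = 1 if z[0] * v[0] + z[1] * v[1] >= 0 else -1
    total += su * sv

estimate = total / N
exact = (2 / math.pi) * math.asin(t)   # Grothendieck's identity
```

The Monte Carlo error is of order $1/\sqrt{N} \approx 0.002$ here, so the estimate agrees with the exact value to about two decimal places.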
In this section we extend Grothendieck's identity from vectors to matrices by replacing the arcsine function by a hypergeometric function.
\begin{lemma}
\label{grothendieck identity}
Let $u$, $v$ be unit vectors in $\mathbb{R}^n$ and let $Z \in \mathbb{R}^{r \times n}$ be a random matrix whose entries are distributed independently according to the standard normal distribution with mean $0$ and variance $1$. Then,
\begin{equation*}
\mathbb{E}\biggl[\frac{Zu}{\|Zu\|} \cdot \frac{Zv}{\|Zv\|}\biggr]= \frac{2}{r}\left(\frac{\Gamma((r+1)/2)}{\Gamma(r/2)}\right)^2
(u \cdot v)\, {}_{2}F_{1} \left(\!\!\begin{array}{cc} 1/2, 1/2\\ r/2+1\end{array}\!; (u \cdot v)^2 \right).
\end{equation*}
Here,
\begin{equation*}
\begin{split}
&(u \cdot v)\, {}_{2}F_{1} \left(\!\!\begin{array}{cc} 1/2, 1/2\\ r/2+1\end{array}\!; (u \cdot v)^2 \right)\\
&\qquad{}= \sum_{k = 0}^{\infty} \frac{(1\cdot 3 \cdots (2k-1))^2}{(2 \cdot 4 \cdots 2k)((r+2)\cdot(r+4)\cdots (r+2k))} (u \cdot v)^{2k+1}.
\end{split}
\end{equation*}
\end{lemma}
Before proving the lemma we review special cases known in the literature. If $r = 1$, then we recover Grothendieck's original identity:
\begin{eqnarray*}
\mathbb{E}[\sign(Zu)\sign(Zv)]
& = & \frac{2}{\pi} \arcsin (u \cdot v)\\
& = & \frac{2}{\pi} \left( u \cdot v + \left(\frac{1}{2}\right)\frac{(u \cdot v)^3}{3} + \left(\frac{1\cdot 3}{2 \cdot 4}\right) \frac{(u \cdot v)^5}{5} + \cdots \right).
\end{eqnarray*}
The case $r=2$ is due to Haagerup \cite{Haagerup}:
\begin{eqnarray*}
\mathbb{E}\left[\frac{Zu}{\|Zu\|} \cdot \frac{Zv}{\|Zv\|}\right]
& = & \frac{1}{u \cdot v}\left(E(u \cdot v)-(1-(u \cdot v)^2)K(u \cdot v)\right)\\
& = & \frac{\pi}{4}
\left(
u \cdot v + \left(\frac{1}{2}\right)^2 \frac{(u \cdot v)^3}{2}
+ \left(\frac{1 \cdot 3}{2 \cdot 4}\right)^2 \frac{(u \cdot v) ^5}{3} + \cdots\right),
\end{eqnarray*}
where $K$ and $E$ are the complete elliptic integrals of the first and second kind. Note that on page 201 of Haagerup~\cite{Haagerup} the factor $\pi/2$ should be $\pi/4$. Bri\"et, Oliveira, and Vallentin~\cite{BrietOliveiraVallentin} computed the first coefficient $\frac{2}{r}\bigl(\Gamma((r+1)/2)/\Gamma(r/2)\bigr)^2$ of the Taylor series of the expectation for every $r$.
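The series in Lemma~\ref{grothendieck identity} is convenient to evaluate numerically via the term recurrence $c_{k+1}/c_k = (2k+1)^2/\bigl((2k+2)(r+2k+2)\bigr)$, where $c_0 = 1$. The sketch below sums it this way and, as a sanity check, recovers $\frac{2}{\pi}\arcsin$ for $r = 1$ and the first coefficient $\pi/4$ for $r = 2$:

```python
import math

def E_r(t, r, terms=80):
    """Sums the Taylor series of the matrix Grothendieck identity:
    E_r(t) = (2/r) * (Gamma((r+1)/2) / Gamma(r/2))**2 * sum_k c_k t^(2k+1),
    with c_0 = 1 and c_{k+1}/c_k = (2k+1)^2 / ((2k+2)(r+2k+2))."""
    pref = (2.0 / r) * (math.gamma((r + 1) / 2) / math.gamma(r / 2)) ** 2
    total, c = 0.0, 1.0
    for k in range(terms):
        total += c * t ** (2 * k + 1)
        c *= (2 * k + 1) ** 2 / ((2 * k + 2) * (r + 2 * k + 2))
    return pref * total
```

For $|t| < 1$ the term ratio tends to $t^2$, so a few dozen terms already give machine precision away from $t = \pm 1$.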
The following elegant proof of Grothendieck's identity has become a classic: We have $\sign(Zu)\sign(Zv) = 1$ if and only if the vectors $u$ and $v$ lie on the same side of the hyperplane orthogonal to the vector $Z \in \mathbb{R}^{1 \times n}$. Now we project this $n$-dimensional situation to the plane spanned by $u$ and $v$. Then the projected random hyperplane becomes a random line. This random line is distributed according to the uniform probability measure on the unit circle because $Z$ is normally distributed. Now one obtains the final result by measuring intervals on the unit circle: The probability that $u$ and $v$ lie on the same side of the line is $1 - \arccos(u \cdot v)/\pi$.
However, we do not have such a picture proof for our matrix version. Our proof is based on the rotational invariance of the normal distribution and integration with respect to spherical coordinates together with some identities for hypergeometric functions. A similar calculation was done by K\"onig and Tomczak-Jaegermann~\cite{Koenig}. It would be interesting to find a more geometrical proof of the lemma.
For computing the first coefficient of the Taylor series in \cite{BrietOliveiraVallentin} we took a slightly different route: We integrated using the Wishart distribution of $2 \times 2$-matrices.
\begin{proof}[Proof of Lemma \ref{grothendieck identity}]
Let $Z_i \in \mathbb{R}^n$ be the $i$-th row of the matrix $Z$, for $i = 1, \ldots, r$. We define vectors
\begin{equation*}
x =
\begin{pmatrix}
Z_1 \cdot u\\
Z_2 \cdot u\\
\vdots\\
Z_r \cdot u
\end{pmatrix}\qquad\text{and}\qquad
y =
\begin{pmatrix}
Z_1 \cdot v\\
Z_2 \cdot v\\
\vdots\\
Z_r \cdot v
\end{pmatrix}
\end{equation*}
so that we have $x\cdot y = (Zu)\cdot (Zv)$.
Since the probability distribution of the vectors $Z_i$ is invariant under orthogonal transformations we may assume that $u = (1, 0, \ldots, 0)$ and $v = (t, \sqrt{1-t^2}, 0, \ldots, 0)$ and so the pair $(x,y) \in \mathbb{R}^r \times \mathbb{R}^r$ is distributed according to the probability density function~(see e.g.\ Muirhead~\cite[p.~10, eq.~(7)]{Muirhead})
\begin{equation*}
(2\pi \sqrt{1-t^2})^{-r}\exp\left(-\frac{x \cdot x - 2t x \cdot y + y \cdot y}{2(1-t^2)}\right).
\end{equation*}
Hence,
\begin{equation*}
\begin{split}
&\mathbb{E}\left[\frac{x}{\|x\|} \cdot \frac{y}{\|y\|}\right]\\
&\qquad{}= (2\pi \sqrt{1-t^2})^{-r} \int_{\mathbb{R}^r} \int_{\mathbb{R}^r} \frac{x}{\|x\|} \cdot \frac{y}{\|y\|} \exp\left(-\frac{x \cdot x - 2t x \cdot y + y \cdot y}{2(1-t^2)}\right)\, dx dy.
\end{split}
\end{equation*}
By using spherical coordinates $x = \alpha \xi$, $y = \beta \eta$, where $\alpha,\beta \in [0,\infty)$ and $\xi, \eta \in S^{r-1}$, we rewrite the above integral as
\begin{equation*}
\vcenter{\halign{\hbox to\hsize{#\hfil}\cr
\quad $\displaystyle\int_0^{\infty} \int_0^{\infty} (\alpha\beta)^{r-1} \exp\left(-\frac{\alpha^2+\beta^2}{2(1-t^2)}\right)\int_{S^{r-1}} \int_{S^{r-1}} \xi \cdot \eta \exp\left(\frac{\alpha\beta t\xi \cdot \eta}{1-t^2}\right)$\hfill\cr\noalign{\vskip1pt}
\hfill $d\omega(\xi) d\omega(\eta) d\alpha d\beta.$\quad\cr
}}
\end{equation*}
If $r = 1$, we get for the inner double integral
\begin{equation*}
\begin{split}
& \int_{S^{0}} \int_{S^{0}} \xi \cdot \eta \exp\left(\frac{\alpha\beta t\xi \cdot \eta}{1-t^2}\right)\, d\omega(\xi) d\omega(\eta)\\
&\qquad{}= 4 \sinh\left(\frac{\alpha\beta t}{1-t^2}\right)\\
&\qquad{}= 4 \frac{\alpha\beta t}{1-t^2} {}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}}\\ 3/2\end{array}\!; \left(\frac{\alpha\beta t}{2(1-t^2)}\right)^2 \right).
\end{split}
\end{equation*}
Now we consider the case $r \geq 2$. Since the inner double integral over the spheres depends only on the inner product $p = \xi \cdot \eta$, it can be rewritten as
\begin{equation*}
\omega(S^{r-2}) \omega(S^{r-1})\int_{-1}^1 p \exp\left(\frac{\alpha\beta tp}{1-t^2}\right) (1-p^2)^{(r-3)/2}\, dp,
\end{equation*}
where
\begin{equation*}
\omega(S^{r-2}) \omega(S^{r-1}) = \frac{4 \pi^{r-1/2}}{\Gamma(r/2)\Gamma((r-1)/2)}.
\end{equation*}
Integration by parts yields
\begin{equation*}
\begin{split}
& \int_{-1}^1 p(1-p^2)^{(r-3)/2} \exp\left(\frac{\alpha\beta tp}{1-t^2}\right)\, dp\\
&\qquad{}= \frac{\alpha\beta t}{(r-1)(1-t^2)} \int_{-1}^1 (1-p^2)^{(r-1)/2} \exp\left(\frac{\alpha\beta tp}{1-t^2}\right) dp.
\end{split}
\end{equation*}
The last integral can be rewritten using the modified Bessel function of the first kind (cf.~Andrews, Askey, Roy \cite[p.~235, Exercise 9]{AndrewsAskeyRoy})
\begin{equation*}
\begin{split}
& \int_{-1}^1 (1-p^2)^{(r-1)/2} \exp\left(\frac{\alpha\beta tp}{1-t^2}\right)\, dp\\
&\qquad{}= \Gamma((r+1)/2) \sqrt{\pi} \left(\frac{2(1-t^2)}{\alpha\beta t}\right)^{r/2} I_{r/2}\left(\frac{\alpha\beta t}{1-t^2}\right).
\end{split}
\end{equation*}
One can write $I_{r/2}$ as a hypergeometric function (cf.~Andrews, Askey, and Roy~\cite[(4.12.2)]{AndrewsAskeyRoy})
\begin{equation*}
I_{r/2}(x) = (x/2)^{r/2} \sum_{k=0}^{\infty} \frac{(x/2)^{2k}}{k!\Gamma(r/2+k+1)} =
\frac{(x/2)^{r/2}}{\Gamma((r+2)/2)} {}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}}\\ (r+2)/2\end{array}\!; \left(\frac{x}{2}\right)^2 \right).
\end{equation*}
Putting things together, we get
\begin{equation*}
\begin{split}
& \omega(S^{r-2}) \omega(S^{r-1})\int_{-1}^1 p \exp\left(\frac{\alpha\beta tp}{1-t^2}\right) (1-p^2)^{(r-3)/2}\, dp\\
&\qquad{}= \frac{4\pi^r}{\Gamma(r/2)^2 r}
\frac{\alpha\beta t}{1-t^2}
{}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}}\\ (r+2)/2\end{array}\!; \left(\frac{\alpha\beta t}{2(1-t^2)}\right)^2 \right).
\end{split}
\end{equation*}
Notice that the last formula also holds for $r = 1$, so we can continue without case distinction.
Now we evaluate the outer double integral
\begin{equation*}
\int_0^{\infty}\int_0^{\infty} (\alpha\beta)^r \exp\left(-\frac{\alpha^2+\beta^2}{2(1-t^2)}\right) {}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}} \\ (r+2)/2\end{array}\!; \left(\frac{\alpha\beta t}{2(1-t^2)}\right)^2 \right)\, d\alpha d\beta.
\end{equation*}
Here the inner integral equals
\begin{equation*}
\int_0^{\infty} \alpha^r \exp\left(-\frac{\alpha^2}{2(1-t^2)}\right) {}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}} \\ (r+2)/2\end{array}\!; \left(\frac{\alpha\beta t}{2(1-t^2)}\right)^2 \right)\, d\alpha,
\end{equation*}
and doing the substitution $\gamma = \alpha^2 / (2(1-t^2))$ gives
\begin{equation*}
2^{(r-1)/2}(1-t^2)^{(r+1)/2}\int_0^{\infty} \gamma^{(r-1)/2} \exp(-\gamma)\, {}_{0}F_{1} \left(\!\!\begin{array}{cc} \overline{\phantom{xxx}} \\ (r+2)/2\end{array}\!; \frac{\gamma(\beta t)^2}{2(1-t^2)} \right)\, d\gamma,
\end{equation*}
which by the Bateman Manuscript Project \cite[p.~337, eq.~(11)]{ErdelyiMagnusOberhettinerTricomi} equals
\begin{equation*}
2^{(r-1)/2}(1-t^2)^{(r+1)/2} \Gamma((r+1)/2) {}_{1}F_{1} \left(\!\!\begin{array}{cc} (r+1)/2 \\ (r+2)/2\end{array}\!; \frac{(\beta t)^2}{2(1-t^2)} \right).
\end{equation*}
Now we treat the remaining outer integral in a similar way, using \cite[p.~219, eq.~(17)]{ErdelyiMagnusOberhettinerTricomi}, and get that
\begin{equation*}
\begin{split}
&\int_0^{\infty} \beta^r \exp\left(-\frac{\beta^2}{2(1-t^2)}\right) {}_{1}F_{1} \left(\!\!\begin{array}{cc} (r+1)/2 \\ (r+2)/2\end{array}\!; \frac{(\beta t)^2}{2(1-t^2)} \right)\, d\beta\\
&\qquad{}=2^{(r-1)/2}(1-t^2)^{(r+1)/2} \Gamma((r+1)/2) {}_{2}F_{1} \left(\!\!\begin{array}{cc}(r+1)/2, (r+1)/2 \\ (r+2)/2\end{array}\!; t^2 \right).
\end{split}
\end{equation*}
By applying Euler's transformation (cf.~Andrews, Askey, and Roy~\cite[(2.2.7)]{AndrewsAskeyRoy})
\begin{equation*}
{}_{2}F_{1} \left(\!\!\begin{array}{cc}(r+1)/2, (r+1)/2 \\ (r+2)/2\end{array}\!; t^2 \right) = (1-t^2)^{-r/2} {}_{2}F_{1} \left(\!\!\begin{array}{cc}1/2, 1/2 \\ (r+2)/2\end{array}\!; t^2 \right)
\end{equation*}
and after collecting the remaining factors we arrive at the result.
\end{proof}
\section{Convergence radius}
\label{convergence section}
To construct, in the second step of the algorithm, the new vectors that are used to linearize the expectation, we will make use of the Taylor series expansion of the inverse of~$E_r$. Locally around zero we can expand the function $E_r^{-1}$ as
\begin{equation*}
E_r^{-1}(t) = \sum_{k = 0}^{\infty} b_{2k+1} t^{2k+1},
\end{equation*}
but in the proof of Lemma~\ref{lem:gen-embedding} it will be essential that this expansion be valid for all $t \in [-1,1]$.
In the case $r = 1$ we have $E_1^{-1}(t) = \sin(\pi t/2)$, whose Taylor series has infinite radius of convergence. The case $r = 2$ was treated by Haagerup; it requires substantial technical work, which we now sketch briefly. Using tools from complex analysis, he shows that $|b_{k}| \leq C/k^2$ for some constant $C$ independent of $k$. Using Cauchy's integral formula and after some simplifications \cite[p.~208]{Haagerup} one can express $b_k$ as
\begin{equation*}
b_k = \frac{2}{\pi k} \int_1^{\alpha} \Im(E_2(z)^{-k})\, dz + \frac{2}{\pi k}\Im\left(\int_{C'_{\alpha}} E_2(z)^{-k}\, dz\right),
\end{equation*}
where $C'_{\alpha}$ is the quarter circle $\{\,\alpha e^{i\theta} : \theta \in [0,\pi/2]\,\}$.
For an appropriate choice of $\alpha$ the first integral is in absolute value bounded above by $C/k$ and the second integral is in absolute value exponentially small in~$k$. We refer to the original paper for the details. One key point in the arguments is the following integral representation of $E_2$ giving an analytic continuation of $E_2$ on the complex plane slit along the half line $(1, \infty)$:
\begin{equation*}
E_2(z) = \int_0^{\pi/2} \sin\theta \arcsin(z \sin\theta)\, d\theta.
\end{equation*}
Here, the term $\arcsin(z \sin \theta)$ gives the main contribution in the estimates.
Now we derive a similar representation of $E_r$; using it in Haagerup's analysis, with obvious changes, shows that $|b_{k}| \leq C/k^2$ also holds for $r > 2$, for some constant $C$ independent of $k$.
\begin{lemma}
For $r \geq 2$ we have
\begin{equation*}
E_r(z) = \frac{2(r-1)\Gamma((r+1)/2)}{\Gamma(1/2)\Gamma(r/2)} \int_0^{\pi/2} \cos^{r-2} \theta \sin\theta \arcsin(z \sin \theta)\, d\theta.
\end{equation*}
\end{lemma}
\begin{proof}
Using Euler's integral representation of the hypergeometric function (cf.~Andrews, Askey, and Roy~\cite[Theorem 2.2.1]{AndrewsAskeyRoy}) we can rewrite~$E_r$ as
\begin{equation*}
E_r(z) = \frac{\Gamma((r+1)/2)}{\Gamma(1/2)\Gamma(r/2)}\int_0^1 \frac{(1-t)^{(r-1)/2}z}{\sqrt{t(1-z^2t)}}\, dt,
\end{equation*}
which is valid in the complex plane slit along the half line $(1,\infty)$. Using the substitution $t = \sin^2 \theta$ we get
\begin{equation*}
E_r(z) = 2 \frac{\Gamma((r+1)/2)}{\Gamma(1/2)\Gamma(r/2)} \int_0^{\pi/2} \frac{\cos^r \theta z}{\sqrt{1 - z^2 \sin^2 \theta}}\, d\theta.
\end{equation*}
Now integration by parts and the identity
\begin{equation*}
\frac{d}{d\theta} \arcsin(z \sin \theta) = \frac{z \cos\theta}{\sqrt{1-z^2 \sin^2 \theta}}
\end{equation*}
gives the result.
\end{proof}
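As a numerical sanity check (not part of the proof), the integral representation of the lemma can be compared against the Taylor series of $E_r$ from Lemma~\ref{grothendieck identity}. The sketch below evaluates the integral with the midpoint rule:

```python
import math

def E_r_series(t, r, terms=80):
    """Taylor series of E_r from the matrix Grothendieck identity."""
    pref = (2.0 / r) * (math.gamma((r + 1) / 2) / math.gamma(r / 2)) ** 2
    total, c = 0.0, 1.0
    for k in range(terms):
        total += c * t ** (2 * k + 1)
        c *= (2 * k + 1) ** 2 / ((2 * k + 2) * (r + 2 * k + 2))
    return pref * total

def E_r_integral(t, r, n=20_000):
    """Integral representation of the lemma (valid for r >= 2),
    evaluated with the midpoint rule on [0, pi/2]."""
    pref = (2 * (r - 1) * math.gamma((r + 1) / 2)
            / (math.gamma(0.5) * math.gamma(r / 2)))
    h = (math.pi / 2) / n
    s = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s += math.cos(th) ** (r - 2) * math.sin(th) * math.asin(t * math.sin(th))
    return pref * s * h
```

For $r = 2$ the prefactor equals $1$ and the integrand reduces to $\sin\theta\arcsin(z\sin\theta)$, matching Haagerup's representation of $E_2$ quoted above.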
\section{Constructing new vectors}
\label{constructing section}
In this section we use the Taylor expansion of the inverse of the function~$E_r$ to give a precise statement and proof of Lemma~\ref{lem:embed-vague}; this is done in Lemma~\ref{lem:gen-embedding}. For this we follow Krivine~\cite{Krivine}, who proved the statement of the lemma in the case of bipartite graphs. We comment on how his ideas are related to our construction, which can also deal with nonbipartite graphs, after we prove the lemma.
For the nonbipartite case we need to use the theta number, which is a graph parameter introduced by Lov\'asz~\cite{Lovasz}. Let~$G = (V, E)$ be a graph. The \emph{theta number} of the complement of~$G$, denoted by~$\vartheta(\overline{G})$, is the optimal value of the following semidefinite program:
\begin{equation}
\label{opt:theta-gbar}
\begin{split}
\vartheta(\overline{G}) = \min\Big\{\,\lambda : \;\; & Z \in \mathbb{R}^{V \times V}_{\succeq 0},\\
&Z(u, u) = \lambda - 1 \;\;\text{for~$u \in V$},\\
&Z(u, v) = -1 \;\; \text{for~$\{u,v\} \in E$}\,\Big\}.
\end{split}
\end{equation}
It is known that the theta number of the complement of~$G$ provides a lower bound for the chromatic number of~$G$. This can be easily seen as follows. Any proper $k$-coloring of $G$ defines a mapping of~$V$ to the vertices of a $(k-1)$-dimensional regular simplex whose vertices lie in a sphere of radius $\sqrt{k-1}$: Vertices in the graph having the same color are sent to the same vertex in the regular simplex and vertices of different colors are sent to different vertices in the regular simplex. The Gram matrix of these vectors gives a feasible solution of~\eqref{opt:theta-gbar}.
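The construction just described is easy to make explicit. The following sketch (function name ours) maps a proper $k$-coloring to the Gram matrix of the simplex vectors and thereby to a feasible solution of~\eqref{opt:theta-gbar} with $\lambda = k$:

```python
import math

def coloring_to_feasible_Z(coloring, k):
    """Sends vertex u to the simplex vertex w_{coloring[u]}, where
    w_c = sqrt(k) e_c - (1/sqrt(k)) (1, ..., 1) in R^k, so that
    w_c . w_c = k - 1 and w_c . w_{c'} = -1 for c != c'.  The Gram
    matrix Z of these vectors is then feasible for the theta SDP with
    lambda = k (Z is positive semidefinite, being a Gram matrix)."""
    def w(c):
        return [math.sqrt(k) * (1.0 if i == c else 0.0) - 1.0 / math.sqrt(k)
                for i in range(k)]
    vecs = {u: w(c) for u, c in coloring.items()}
    return {(u, v): sum(a * b for a, b in zip(vecs[u], vecs[v]))
            for u in coloring for v in coloring}

# A triangle with its proper 3-coloring: Z(u, u) = 2 and Z(u, v) = -1
# on edges, a feasible solution with lambda = 3.
Z = coloring_to_feasible_Z({0: 0, 1: 1, 2: 2}, 3)
```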
\begin{lemma}
\label{lem:gen-embedding}
Let~$G = (V, E)$ be a graph with at least one edge. Given~$f\colon V
\to S^{|V|-1}$, there is a function~$g\colon V \to S^{|V|-1}$ such
that whenever $u$ and $v$ are adjacent, then
\begin{equation*}
E_r\big(g(u) \cdot g(v)\big) = \beta(r,G) f(u) \cdot f(v).
\end{equation*}
The constant $\beta(r,G)$ is defined as the solution of the equation
\begin{equation*}
\sum_{k=0}^{\infty} |b_{2k+1}| \beta(r,G)^{2k+1} = \frac{1}{\vartheta(\overline{G}) - 1},
\end{equation*}
where
\begin{equation*}
E_r^{-1}(t) = \sum_{k=0}^{\infty} b_{2k+1} t^{2k+1}.
\end{equation*}
\end{lemma}
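For $r = 1$ the left-hand side of the defining equation sums to $\sinh(\pi\beta/2)$ (cf.\ the second remark following the proof of the lemma), so $\beta(1,G)$ can be found by bisection. The sketch below reproduces the $r = 1$ entries of the table in Theorem~\ref{thm:main}, taking $\vartheta(\overline{G}) = 2$ for bipartite graphs and, via the chromatic number as in the third remark below, the value $3$ for tripartite graphs:

```python
import math

def beta_r1(theta):
    """Solves sinh(pi * beta / 2) = 1 / (theta - 1) for beta by
    bisection; for r = 1 the series sum_k |b_{2k+1}| beta^{2k+1}
    equals sinh(pi * beta / 2)."""
    target = 1.0 / (theta - 1.0)
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.sinh(math.pi * mid / 2) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The resulting upper bounds $\beta(1,G)^{-1}$ are $\pi/(2\,\mathrm{arcsinh}(1)) = 1.782213\dots$ and $\pi/(2\,\mathrm{arcsinh}(1/2)) = 3.264251\dots$, respectively.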
With this lemma, we can give a proof of Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We combine Lemma~\ref{lem:gen-embedding} with the analysis of Algorithm~A from Section~\ref{sec:our methods}. To compute the table in the theorem, we use the formula
\begin{equation}\label{eq:rankgroth-mfinverse}
b_k = \frac{1}{k!a_1^k} \left[\frac{d^{k-1}}{dt^{k-1}}\left(1 + \frac{a_2}{a_1}t + \cdots + \frac{a_k}{a_1}t^{k-1}\right)^{-k}\right]_{t = 0},
\end{equation}
where $a_i$ are the Taylor coefficients of $E_r$ (cf.~Morse and Feshbach~\cite[(4.5.13)]{MorseFeshbach}).
\end{proof}
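The coefficients $b_k$ can also be obtained by straightforward term-by-term series reversion instead of the closed formula~\eqref{eq:rankgroth-mfinverse}. A self-contained sketch for $r = 1$, where $E_1^{-1}(t) = \sin(\pi t/2)$ provides an exact check:

```python
import math

def poly_mul(p, q, N):
    """Product of two power series, truncated at degree N."""
    out = [0.0] * (N + 1)
    for i, pi_ in enumerate(p):
        if pi_:
            for j, qj in enumerate(q):
                if i + j <= N:
                    out[i + j] += pi_ * qj
    return out

def revert(a, N):
    """Coefficients b of the inverse series: f(g(t)) = t + O(t^{N+1}),
    where f(t) = sum_k a[k] t^k with a[0] = 0 and a[1] != 0."""
    b = [0.0] * (N + 1)
    b[1] = 1.0 / a[1]
    for n in range(2, N + 1):
        power = b[:]                       # g^1; b[n] is still 0 here
        s = 0.0
        for j in range(2, n + 1):
            power = poly_mul(power, b, N)  # g^j
            s += a[j] * power[n]
        b[n] = -s / a[1]                   # forces [t^n] f(g(t)) = 0
    return b

# Taylor coefficients of E_1(t) = (2/pi) * arcsin(t) up to degree N.
N = 7
a = [0.0] * (N + 1)
e = 1.0                                    # e_0 = 1; arcsin coefficient
for k in range(N // 2 + 1):
    if 2 * k + 1 <= N:
        a[2 * k + 1] = (2.0 / math.pi) * e
    e *= (2 * k + 1) ** 2 / ((2 * k + 2) * (2 * k + 3))

b = revert(a, N)   # should match the coefficients of sin(pi * t / 2)
```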
Now we give a proof of the lemma.
\begin{proof}[Proof of Lemma~\ref{lem:gen-embedding}]
We construct the vectors $g(u) \in S^{|V|-1}$ by constructing vectors $R(u)$ in an infinite-dimensional Hilbert space whose inner product matrix coincides with the one of the $g(u)$. We do this in three steps.
In the first step, set $H = \mathbb{R}^{|V|}$ and consider the Hilbert space
\begin{equation*}
\mathcal{H} = \bigoplus_{k=0}^\infty H^{\otimes (2k+1)}.
\end{equation*}
For a unit vector~$x \in H$, consider the vectors~$S(x)$, $T(x) \in \mathcal{H}$ given componentwise by
\begin{equation*}
S(x)_k = \sqrt{|b_{2k+1}| \beta(r,G)^{2k+1}} x^{\otimes (2k+1)}
\end{equation*}
and
\begin{equation*}
T(x)_k = \sign(b_{2k+1}) \sqrt{|b_{2k+1}| \beta(r,G)^{2k+1}} x^{\otimes (2k+1)}.
\end{equation*}
Then for vectors $x, y \in S^{|V|-1}$ we have
\begin{equation*}
S(x) \cdot T(y) = E_r^{-1}(\beta(r,G) x \cdot y)
\end{equation*}
and moreover
\begin{equation*}
S(x) \cdot S(x) = T(x) \cdot T(x) = \sum_{k=0}^{\infty} |b_{2k+1}| \beta(r,G)^{2k+1} = \frac{1}{\vartheta(\overline{G}) - 1}.
\end{equation*}
In the second step, let~$\lambda = \vartheta(\overline{G})$ and let $Z$ be an optimal solution of~\eqref{opt:theta-gbar}. We have~$\lambda \geq 2$ since $G$ has at least one edge. Denoting by $J$ the all-ones matrix, set
\begin{equation*}
A = \frac{(\lambda - 1) (J+Z)}{2\lambda}\qquad
\text{and}\qquad
B = \frac{(\lambda - 1) J - Z}{2\lambda},
\end{equation*}
\end{equation*}
and consider the matrix
\begin{equation*}
U = \begin{pmatrix}
A&B\\
B&A
\end{pmatrix}.
\end{equation*}
By applying a Hadamard transformation
\begin{equation*}
\frac{1}{\sqrt{2}}
\begin{pmatrix}
I & I\\
I & -I
\end{pmatrix}
U
\frac{1}{\sqrt{2}}
\begin{pmatrix}
I & I\\
I & -I
\end{pmatrix}
=
\begin{pmatrix}
A+B & 0\\
0 & A - B
\end{pmatrix}
\end{equation*}
one sees that $U$ is positive semidefinite: indeed, $A - B = Z/2$ and $A + B = \bigl(2(\lambda-1)J + (\lambda - 2)Z\bigr)/(2\lambda)$ are nonnegative combinations of the positive semidefinite matrices $J$ and $Z$ (recall $\lambda \geq 2$), hence positive semidefinite. Define $s\colon V \to \mathbb{R}^{2|V|}$ and $t\colon V \to \mathbb{R}^{2|V|}$ so that
\begin{equation*}
s(u) \cdot s(v) = t(u) \cdot t(v) = A(u,v)\qquad\text{and}\qquad s(u) \cdot t(v) = B(u,v).
\end{equation*}
The matrix $U$ is the Gram matrix of the vectors $\big(s(u)\big)_{u\in V}$ and $\big(t(v)\big)_{v\in V}$.
It follows that these maps have the following properties:
\begin{enumerate}
\item $s(u) \cdot t(u) = 0$ for all~$u \in V$,
\item $s(u) \cdot s(u) = t(u) \cdot t(u) = (\vartheta(\overline{G}) - 1)/2$ for all~$u \in V$,
\item $s(u) \cdot s(v) = t(u) \cdot t(v) = 0$ whenever~$\{u,v\} \in E$,
\item $s(u) \cdot t(v) = s(v) \cdot t(u) = 1/2$ whenever~$\{u,v\} \in E$.
\end{enumerate}
In the third step we combine the previous two. We define the vectors
\begin{equation*}
R(u) = s(u) \otimes S(f(u)) + t(u) \otimes T(f(u)).
\end{equation*}
For adjacent vertices $u, v \in V$ we have
\begin{equation*}
R(u) \cdot R(v) = E_r^{-1}(\beta(r,G)f(u) \cdot f(v)),
\end{equation*}
and moreover the $R(u)$ are unit vectors: since $s(u) \cdot t(u) = 0$, we get $R(u) \cdot R(u) = s(u) \cdot s(u)\, S(f(u)) \cdot S(f(u)) + t(u) \cdot t(u)\, T(f(u)) \cdot T(f(u)) = 1$ by properties (1) and (2). Hence, one can use the Cholesky decomposition of~$(R(u) \cdot R(v)) \in \mathbb{R}^{V \times V}_{\succeq 0}$ to define the desired function $g\colon V \to S^{|V|-1}$.
\end{proof}
We conclude this section with a few remarks on the lemma and its proof:
\begin{enumerate}
\item To approximate the Gram matrix $(R(u) \cdot R(v))$ it is enough to compute the series expansion of $E^{-1}_r$ and the matrix $U$ to the desired precision. The latter is found by solving a semidefinite program.
\item
Krivine proved the statement of the lemma in the case $r = 1$ and for bipartite graphs~$G$; then $\vartheta(\overline{G}) = 2$ holds. Here, one only needs the first step of the proof. Also, $\beta(1,G)$ can be computed analytically. We have $E_1^{-1}(t) = \sin(\pi t/2)$ and
\begin{equation*}
\sum_{k=0}^{\infty} \left|(-1)^{k} \frac{(\pi/2)^{2k+1}}{(2k+1)!}\right| t^{2k+1} = \sinh(\pi t/2).
\end{equation*}
Hence, $\beta(1,G) = 2 \arcsinh(1)/\pi = 2 \ln(1+\sqrt{2})/\pi$.
\item
In the second step one can also work with any feasible solution of the semidefinite program~\eqref{opt:theta-gbar}. For instance one can replace $\vartheta(\overline{G})$ in the lemma by the chromatic number~$\chi(G)$ albeit getting a potentially weaker bound.
\item
Alon, Makarychev, Makarychev, and Naor \cite{AlonMakarychevMakarychevNaor} also gave an upper bound for $K(1,G)$ using the theta number of the complement of~$G$. They prove that
\begin{equation*}
K(1,G) \leq O(\log\vartheta(\overline{G})),
\end{equation*}
which is much better than our result in the case of large $\vartheta(\overline{G})$. However, our bound is favourable when $\vartheta(\overline{G})$ is small.
In Section~\ref{sec:highchrom} we generalize the methods of Alon, Makarychev, Makarychev, and Naor~\cite{AlonMakarychevMakarychevNaor} to obtain better upper bounds on~$K(r,G)$ for $r\geq 2$ and large $\vartheta(\overline{G})$.
\item
Finally, notice that in the first step it was essential that the Taylor expansion of $E_r^{-1}$ has convergence radius of at least one.
\end{enumerate}
\section{A refined, dimension-dependent analysis}
\label{refined section}
So far we only compared the two problems $\sdp_{\infty}$ and $\sdp_r$. One can perform a refined, dimension-dependent analysis by comparing $\sdp_q$ and $\sdp_r$ when $q \geq r$.
Let~$K(q \mapsto r, G)$, where~$q \geq r$, be the least number such that
\begin{equation*}
\sdp_q(G,A) \leq K(q \mapsto r, G) \sdp_r(G,A)
\end{equation*}
for all~$A\colon V \times V \to \mathbb{R}$. In this section we give an upper bound for~$K(q \mapsto r, G)$ that depends on~$q$ and~$r$. For fixed~$r$, this upper bound will become smaller as~$q$ comes closer to~$r$. Krivine \cite{Krivine} gave such a refined, dimension-dependent analysis in the bipartite case; he showed that
\[
K(2 \mapsto 1, K_{n,n}) = \sqrt{2},\quad K(3 \mapsto 1, K_{n,n}) \leq 1.517, \quad\text{and}\quad K(4 \mapsto 1, K_{n,n}) \leq \pi/2.
\]
Avidor and Zwick~\cite{AvidorZwick} analyzed the cases $r=1$ and $q\in\{2,3\}$ for bipartite~$G=K_{n,n}$ and $A$ the Laplacian of a graph~$G'$ on~$n$ nodes.
Our upper bound comes from the following lemma:
\begin{lemma}
\label{lem:sphere-embedding}
Let~$G = (V, E)$ be a graph with at least one edge. Given~$f\colon V
\to S^{q-1}$, there is a function~$g\colon V \to S^{|V|-1}$ such
that whenever $u$ and $v$ are adjacent, then
\begin{equation*}
E_r(g(u) \cdot g(v)) = \beta(q \mapsto r,G) f(u) \cdot f(v),
\end{equation*}
where~$0 < \beta(q \mapsto r, G) \leq 1$ is such that~$\beta(q \mapsto r, G) > \beta(q + 1 \mapsto r, G)$ and~$\beta(q \mapsto r, G) > \beta(r,G)$ for all~$q \geq 2$.
\end{lemma}
\noindent The proof of the lemma will also give a procedure to compute $\beta(q \mapsto r, G)$ explicitly.
This immediately gives the following theorem:
\begin{theorem}
Let~$G = (V, E)$ be a graph with at least one edge and let~$q \geq r \geq 1$ be integers. Then~$K(q \mapsto r, G) \leq \beta(q \mapsto r, G)^{-1}$.
\end{theorem}
\begin{proof}
Combine Lemma~\ref{lem:sphere-embedding} with Algorithm~A from Section~\ref{sec:our methods}.
\end{proof}
The proof of the lemma uses some basic facts from harmonic
analysis, which we now summarize. For measurable functions~$f$,
$g\colon [-1, 1] \to \mathbb{R}$ we consider the inner product
\begin{equation}
\label{eq:inner-p}
\langle f, g \rangle_n = \int_{-1}^1 f(t) g(t) (1 - t^2)^{(n-3)/2}\,
dt.
\end{equation}
We say that a continuous function~$f\colon [-1, 1] \to \mathbb{R}$ is of
\textit{positive type for~$S^{n-1}$} if for any choice~$x_1$,
\dots,~$x_N$ of points in~$S^{n-1}$ we have that the
matrix~$\bigl(f(x_i \cdot x_j)\bigr)_{i, j = 1}^N$ is positive
semidefinite. If two continuous functions~$f$, $g\colon [-1, 1] \to
\mathbb{R}$ are of positive type for~$S^{n-1}$, then~$\langle f, g \rangle_n
\geq 0$.
Schoenberg~\cite{Schoenberg} characterized the continuous functions
of positive type in terms of Gegenbauer polynomials. We denote
by~$P_k^n$ the Gegenbauer polynomial of degree~$k$ and
parameter~$(n-2)/2$ which is normalized so that~$P_k^n(1) = 1$. Notice
that this normalization is not the one commonly found in the
literature.
The Gegenbauer polynomials~$P_0^n$, $P_1^n$, $P_2^n$, \dots\ are
pairwise orthogonal with respect to the inner
product~\eqref{eq:inner-p}, and they form a complete orthogonal system
for the space~$L^2([-1, 1])$, equipped with the inner
product~\eqref{eq:inner-p}.
Schoenberg's characterization of the functions of positive type is as
follows: A function~$f\colon [-1, 1] \to \mathbb{R}$ is continuous and of positive type
for~$S^{n-1}$ if and only if
\begin{equation}
\label{eq:pos-decomp}
f(t) = \sum_{k=0}^\infty a_k P_k^n(t)
\end{equation}
for some nonnegative numbers~$a_0$, $a_1$, $a_2$, \dots\ such
that~$\sum_{k=0}^\infty a_k$ converges, in which case the series
in~\eqref{eq:pos-decomp} converges absolutely and uniformly in~$[-1,
1]$.
A continuous function~$f\colon [-1, 1] \to \mathbb{R}$ can also be of positive
type for spheres of every dimension. Schoenberg~\cite{Schoenberg}
also characterized these functions. They are the ones that can be
decomposed as
\begin{equation*}
f(t) = \sum_{k=0}^\infty a_k t^k
\end{equation*}
for some nonnegative numbers~$a_0$, $a_1$, $a_2$, \dots\ such
that~$\sum_{k=0}^\infty a_k$ converges.
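Both characterizations are easy to probe numerically: for points on the sphere, the kernel matrix built from a single $P_k^n$, or from any power series with nonnegative coefficients, must be positive semidefinite. A small Python sketch for $S^2$ (where $P_k^3$ is the Legendre polynomial, which already satisfies the normalization $P_k^3(1)=1$), intended only as an illustration:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Random points on S^2; for n = 3 the polynomial P_k^n is the
# Legendre polynomial, normalized so that P_k(1) = 1.
X = rng.normal(size=(40, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Gram = X @ X.T  # inner products x_i . x_j, all in [-1, 1]

# First characterization: each P_k^3 is of positive type for S^2.
for k in range(6):
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    K = legendre.legval(Gram, coeffs)  # entrywise P_k^3(x_i . x_j)
    assert np.linalg.eigvalsh(K).min() > -1e-8

# Second characterization: a power series with nonnegative coefficients
# (here exp) is of positive type for spheres of every dimension.
assert np.linalg.eigvalsh(np.exp(Gram)).min() > -1e-8
```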
A polynomial in~$\mathbb{R}[x_1, \ldots, x_n]$ is \textit{harmonic} if it is
homogeneous and vanishes under the Laplace operator~$\Delta
= \partial^2 / \partial x_1^2 + \cdots + \partial^2 / \partial
x_n^2$. Harmonic polynomials restricted to the unit sphere $\sphere{n}$ are related to Gegenbauer polynomials by
the \textit{addition formula} (see e.g.\ Andrews, Askey, and
Roy~\cite[Theorem 9.6.3]{AndrewsAskeyRoy}): Let~$H_k$ be the space of degree~$k$ harmonic polynomials on~$n$
variables. Any orthonormal basis of~$H_k$ can be scaled so as to give
a basis~$e_{k, 1}$, \dots,~$e_{k, h_k}$ of~$H_k$ for which the
following holds: For every~$u$, $v \in S^{n-1}$ we have that
\begin{equation*}
P_k^n(u \cdot v) = \sum_{i=1}^{h_k} e_{k,i}(u) e_{k,i}(v).
\end{equation*}
With this we have all that we need to prove the lemma. To simplify the notation and make the argument more transparent, we only consider the bipartite case; the nonbipartite case can be handled in exactly the same way as in the proof of Lemma~\ref{lem:gen-embedding}.
\begin{proof}[Proof of Lemma~\ref{lem:sphere-embedding}]
As before, we construct the function $g:V\to\sphere{|V|}$ from functions $S$ and $T$ that satisfy $S(x)\cdot T(y) = E^{-1}_r(\beta x\cdot y)$ for some real number $\beta$.
Fix~$0 < \beta \leq 1$ and consider the expansion
\begin{equation*}
E_r^{-1}(\beta t) = \sum_{k=0}^\infty g_k^q(\beta) P_k^q(t),
\end{equation*}
which converges in the~$L^2$ sense.
\begin{claim}
The series~$\sum_{k=0}^\infty
|g_k^q(\beta)|$ converges for every~$0 < \beta \leq 1$, and hence the
above expansion converges absolutely and uniformly for~$t \in [-1,
1]$.
\end{claim}
\begin{claimproof}
To prove the claim we use the comparison test. To this end, consider the expansion~$E_r^{-1}(t) = \sum_{k=0}^\infty b_k t^k$ and recall that $\sum_{k=0}^\infty |b_k|$ converges. We may define the function~$\overline{E}_r^{-1}(t) = \sum_{k=0}^\infty |b_k| t^k$, which is then of positive type for every sphere. So by Schoenberg's theorem we can write
\begin{equation*}
\overline{E}_r^{-1}(t) = \sum_{k=0}^\infty \overline{g}_k^q P_k^q(t)
\end{equation*}
for nonnegative numbers~$\overline{g}_k^q$ such that~$\sum_{k=0}^\infty \overline{g}_k^q$ converges. Now notice that
\begin{equation*}
g_k^q(\beta) = \|P_k^q\|^{-2}_q \langle E_r^{-1}(\beta t), P_k^q
\rangle_q = \|P_k^q\|^{-2}_q \sum_{l=0}^\infty b_l \beta^l \langle
t^l, P_k^q\rangle_q,
\end{equation*}
where~$\|P_k^q\|_q = \langle P_k^q, P_k^q \rangle_q^{1/2}$. Above,
since~$t^l$ is a function of positive type for every sphere, we
have that~$\langle t^l, P_k^q \rangle_q \geq 0$. But we also have that
\begin{equation*}
\overline{g}_k^q = \|P_k^q\|^{-2}_q \langle \overline{E}_r^{-1}, P_k^q
\rangle_q = \|P_k^q\|^{-2}_q \sum_{l=0}^\infty |b_l| \langle
t^l, P_k^q\rangle_q,
\end{equation*}
and we see that~$|g_k^q(\beta)| \leq \overline{g}_k^q$ for all~$k \geq 0$
and~$0 < \beta \leq 1$. This proves the claim.
\end{claimproof}
From the proof of the claim it is also clear that
\begin{equation}
\label{eq:sum-beta}
\sum_{k=0}^\infty |g_k^q(\beta)|
\end{equation}
is a continuous function of~$\beta$.
Now, let~$\beta(q \mapsto r, G)$ be the maximum number~$\beta \in (0, 1]$ such that
\begin{equation*}
\sum_{k=0}^\infty |g_k^q(\beta)| = 1.
\end{equation*}
By the intermediate value theorem, such a number exists because~\eqref{eq:sum-beta} is continuous as a
function of~$\beta$, being equal to~$0$ when~$\beta = 0$ and at
least~$E_r^{-1}(1) = 1$ when~$\beta = 1$.
Consider the Hilbert space
\begin{equation*}
\mathcal{H} = \bigoplus_{k=0}^\infty \mathbb{R}^{h_k},
\end{equation*}
equipped with the Euclidean inner product and where $h_k$ is the dimension of $H_k$, the space of harmonic polynomials of degree~$k$ on~$q$ variables. For a vector~$x \in S^{q-1}$,
consider the vectors~$S(x)$ and~$T(x) \in \mathcal{H}$ given componentwise by
\begin{equation*}
\begin{split}
S(x)_k&= \sqrt{|g_k^q(\beta(q \mapsto r, G))|} (e_{k, 1}(x), \ldots, e_{k,
h_k}(x))\qquad\text{and}\\
T(x)_k&= \sign(g_k^q(\beta(q \mapsto r, G))) \sqrt{|g_k^q(\beta(q \mapsto r, G))|} (e_{k, 1}(x), \ldots, e_{k,
h_k}(x)).
\end{split}
\end{equation*}
By the addition formula we have that
\begin{equation*}
S(f(u)) \cdot T(f(v)) = E_r^{-1}(\beta(q \mapsto r, G) f(u) \cdot f(v)).
\end{equation*}
Moreover, we also have that
\begin{equation*}
\|S(f(u))\|^2 = \|T(f(v))\|^2 = \sum_{k=0}^\infty |g_k^q(\beta(q \mapsto r, G))| = 1,
\end{equation*}
and so from the Gram matrix of the vectors~$S(f(u))$ and~$T(f(v))$ we may obtain the function $g\colon V \to S^{|V|-1}$ as we wanted.
Now we show that~$\beta(q \mapsto r, G) > \beta(q + 1 \mapsto r, G)$ for all~$q \geq 2$. To this
end, consider the function
\begin{equation*}
F_\beta(t) = \sum_{k=0}^\infty |g_k^{q+1}(\beta)| P_k^{q+1}(t).
\end{equation*}
Since the series~$\sum_{k=0}^\infty |g_k^{q+1}(\beta)|$ converges,
from Schoenberg's theorem we see that~$F_\beta$ is a continuous function
of positive type for the sphere~$S^q$. Notice moreover that, by
definition,~$\beta(q+1 \mapsto r, G)$ is the maximum number~$\beta\in(0, 1]$ such
that~$F_\beta(1) = 1$.
Since~$F_\beta$ is of positive type for~$S^q$, it is also of positive type
for~$S^{q-1}$, and then we may write
\begin{equation*}
F_\beta(t) = \sum_{k=0}^\infty a_k(\beta) P_k^q(t),
\end{equation*}
and we have~$\sum_{k=0}^\infty a_k(\beta) = 1$ since $P_k^q(1) = 1$ for all~$k$. We also have
the expression
\begin{equation}
\label{eq:ak-exp}
a_k(\beta) = \|P_k^q\|^{-2}_q \langle F_\beta, P_k^q\rangle_q
= \|P_k^q\|^{-2}_q \sum_{l=0}^\infty |g_l^{q+1}(\beta)|
\langle P_l^{q+1}, P_k^q\rangle_q.
\end{equation}
Notice that, since both~$P_l^{q+1}$ and~$P_k^q$ are of positive type
for~$S^{q-1}$, we have that $\langle P_l^{q+1}, P_k^q\rangle_q \geq 0$
for all~$l$ and~$k$.
Now, from the expansion
\[
E_r^{-1}(\beta t) = \sum_{k=0}^\infty g_k^{q+1}(\beta) P_k^{q+1}(t)
\]
we see that
\begin{equation}
\label{eq:small-exp}
g_k^q(\beta) = \|P_k^q\|^{-2}_q \langle E_r^{-1}(\beta t),
P_k^q\rangle_q = \|P_k^q\|^{-2}_q \sum_{l=0}^\infty g_l^{q+1}(\beta)
\langle P_l^{q+1}, P_k^q\rangle_q.
\end{equation}
The function $E^{-1}_r$ is not of positive type because the coefficient $b_3$ of its Taylor expansion is always negative (this can easily be checked using Eq.~\eqref{eq:rankgroth-mfinverse}), and so
some of the~$g_l^{q+1}(\beta)$ must be negative. This, together
with~\eqref{eq:ak-exp} and~\eqref{eq:small-exp}, implies
that~$|g_k^q(\beta)| < a_k(\beta)$ for all~$0 < \beta \leq 1$. So we
must have that
\begin{equation*}
\sum_{k=0}^\infty |g_k^q(\beta(q+1 \mapsto r, G))| < \sum_{k=0}^\infty a_k(\beta(q+1 \mapsto
r, G)) = 1,
\end{equation*}
and we see that~$\beta(q \mapsto r, G) > \beta(q+1 \mapsto r, G)$. In a similar way, one may show that~$\beta(q \mapsto r, G) >
\beta(r, G)$.
\end{proof}
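The proof is constructive: expand $E_r^{-1}(\beta t)$ in Gegenbauer polynomials and locate, by bisection, the largest $\beta$ with $\sum_k |g_k^q(\beta)| = 1$. The Python sketch below illustrates this for $r = 1$, where (as the $\sinh(\pi t/2)$ computation of $\beta(1,G)$ earlier indicates) $E_1^{-1}(t) = \sin(\pi t/2)$. The truncation level and the monotonicity of the coefficient sum in $\beta$ are assumptions of the sketch, not part of the proof:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer

def P(k, q, t):
    """Gegenbauer polynomial of degree k and parameter (q - 2)/2,
    normalized so that P(1) = 1 (the convention used above)."""
    a = (q - 2.0) / 2.0
    return eval_gegenbauer(k, a, t) / eval_gegenbauer(k, a, 1.0)

def inner(f, g, q):
    """Inner product <f, g>_q with weight (1 - t^2)^((q - 3)/2)."""
    return quad(lambda t: f(t) * g(t) * (1.0 - t * t) ** ((q - 3.0) / 2.0),
                -1.0, 1.0)[0]

def abs_coeff_sum(E_inv, beta, q, kmax=12):
    """Truncated sum over k of |g_k^q(beta)| for t -> E_inv(beta * t)."""
    total = 0.0
    for k in range(kmax + 1):
        pk = lambda t, k=k: P(k, q, t)
        gk = inner(lambda t: E_inv(beta * t), pk, q) / inner(pk, pk, q)
        total += abs(gk)
    return total

def beta_q_to_r(E_inv, q, tol=1e-4):
    """Largest beta in (0, 1] with sum_k |g_k^q(beta)| = 1, found by
    bisection; assumes the sum is increasing in beta (true here)."""
    if abs_coeff_sum(E_inv, 1.0, q) <= 1.0:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs_coeff_sum(E_inv, mid, q) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# r = 1: E_1^{-1}(t) = sin(pi t / 2)
E1_inv = lambda t: np.sin(np.pi * t / 2.0)
beta3 = beta_q_to_r(E1_inv, 3)  # beta(3 -> 1, G)
beta5 = beta_q_to_r(E1_inv, 5)  # beta(5 -> 1, G), strictly smaller
```

Consistent with the lemma, both values lie strictly between $\beta(1,G) = 2\arcsinh(1)/\pi \approx 0.561$ and $1$, and they decrease as $q$ grows.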
\section{Better bounds for large chromatic numbers}
\label{sec:highchrom}
For graphs with large chromatic number, or more precisely with large $\vartheta(\overline G)$, our bounds on $K(r,G)$ proved above can be improved using the techniques of Alon, Makarychev, Makarychev, and Naor~\cite{AlonMakarychevMakarychevNaor}. In this section, we show how their bounds on $K(1,G)$ generalize to higher values of~$r$.
\begin{theorem}\label{thm:rankgroth-largechrom}
Given a graph $G = (V,E)$ and an integer $1\leq r\leq \log\vartheta(\overline G)$, we have
\begin{equation*}
K(r,G) \leq O\left(\frac{\log\vartheta(\overline G)}{r}\right).\end{equation*}
\end{theorem}
\begin{proof}
It suffices to show that for any matrix $A:V\times V\to\mathbb{R}$, we have
\begin{equation*}
\sdp_r(G,A)\geq \Omega\left(\frac{r}{\log\vartheta(\overline G)}\right)\sdp_\infty(G,A).
\end{equation*}
Fix a matrix $A:V\times V\to\mathbb{R}$. Let $f:V\to\sphere{|V|}$ be optimal for~$\sdp_{\infty}(G,A)$, so that
\begin{equation*}
\sum_{\{u,v\}\in E} A(u,v) f(u)\cdot f(v) = \sdp_{\infty}(G,A).
\end{equation*}
Let $\lambda = \vartheta(\overline G)$, and $\widetilde Z:V\times V\to\mathbb{R}$ be an optimal solution of~\eqref{opt:theta-gbar}. Let~$J$ be the $2|V| \times 2|V|$ all-ones matrix and $I$ the $2 \times 2$ identity matrix. Since the matrix $(I\otimes \widetilde Z + J)/\lambda$ is positive semidefinite, we obtain from its Gram decomposition functions $s,t:V\to\mathbb{R}^{2|V|}$ that satisfy
\begin{enumerate}
\item $s(u)\cdot s(u) = t(u)\cdot t(u) = 1$ for all $u\in V$.
\item $s(u)\cdot t(v) = 1/\lambda$ for all $u,v\in V$.
\item $s(u)\cdot s(v) = t(u)\cdot t(v) = 0$ for all $\{u,v\}\in E$.
\end{enumerate}
\medskip
Let $\mathcal H$ be the Hilbert space of vector-valued functions $h:\mathbb{R}^{r\times |V|}\to \mathbb{R}^r$ with inner product
\begin{equation*}
(g,h) = \mathbb E[g(Z)\cdot h(Z)],
\end{equation*}
where the expectation is taken over random $r \times |V|$ matrices $Z$ whose entries are i.i.d.~$N(0,1/r)$ random variables.
Let $R\geq 2$ be some real number to be set later.
Define for every $u\in V$ the function $g_u\in \mathcal H$ by
\begin{equation*}
g_u(Z) =\left\{\begin{array}{ll}
\frac{Zf(u)}{R} & \text{if } \|Zf(u)\| \leq R\\[.2cm]
\frac{Zf(u)}{\|Zf(u)\|} & \text{otherwise.}
\end{array}
\right.
\end{equation*}
Notice that for every matrix $Z\in \mathbb{R}^{r\times |V|}$, the vector $g_u(Z)\in\mathbb{R}^r$ has Euclidean norm at most 1. It follows by linearity of expectation that
\begin{equation*}
\sdp_r(G,A) \geq \mathbb E\bigg[\sum_{\{u,v\}\in E} A(u,v)\, g_u(Z)\cdot g_v(Z)\bigg] = \sum_{\{u,v\}\in E} A(u,v) (g_u,g_v).
\end{equation*}
We proceed by lower bounding the right-hand side of the above inequality.
Based on the definition of $g_u$ we define two functions $h_u^0,h_u^1\in \mathcal H$ by
\begin{equation*}
h_u^0(Z) = \frac{Z f(u)}{R} + g_u(Z) \qquad\text{and}\qquad h_u^1(Z) = \frac{Z f(u)}{R} - g_u(Z).
\end{equation*}
For every $u\in V$, define the function $H_u\in \mathbb{R}^{2|V|}\otimes \mathcal H$ by
\begin{equation*}
H_u = \frac{1}{4} s(u)\otimes h_u^0 + 2\lambda\, t(u)\otimes h_u^1.
\end{equation*}
We expand the inner products $(g_u,g_v)$ in terms of $f(u)\cdot f(v)$ and $\langle H_u,H_v\rangle$.
\begin{claim}
For every $\{u,v\}\in E$ we have
\begin{eqnarray*}
(g_u,g_v) &=& \frac{1}{R^2} f(u)\cdot f(v) - \langle H_u,H_v\rangle.
\end{eqnarray*}
\end{claim}
\begin{claimproof}
Simply expanding the inner product $\langle H_u,H_v\rangle$ gives
\[
\begin{split}
\langle H_u,H_v\rangle &= \frac{s(u)\cdot s(v)}{16} (h_u^0,h_v^0)\, +\, 4\lambda^2\big(t(u)\cdot t(v)\big)\,(h_u^1,h_v^1) \\[.2cm]
&\qquad{} + \frac{\lambda}{2}\Big[ \big(s(u)\cdot t(v)\big)\, (h_u^0,h_v^1)\, +\, \big(t(u)\cdot s(v)\big)\, (h_u^1,h_v^0)\Big].
\end{split}
\]
It follows from property 3 of $s$ and $t$ that the above terms involving $s(u)\cdot s(v)$ and $t(u)\cdot t(v)$ vanish. By property 2, the remaining terms reduce to
\[
\begin{split}
\frac{1}{2}\Big((h_u^0,h_v^1) + (h_u^1,h_v^0)\Big) &= \frac{1}{2}\mathbb E\left[\left(\frac{Zf(u)}{R} + g_u(Z)\right)\cdot \left(\frac{Zf(v)}{R} - g_v(Z)\right)\right]\\
&\qquad{} + \frac{1}{2}\mathbb E\left[\left(\frac{Zf(u)}{R} - g_u(Z)\right)\cdot \left(\frac{Zf(v)}{R} + g_v(Z)\right)\right].
\end{split}
\]
Expanding the first expectation gives
\begin{equation*}
\frac{1}{R^2}\mathbb E[f(u)^{\sf T}Z^{\sf T}Zf(v)] - (g_u,g_v)-
\mathbb E\left[\frac{Zf(u)}{R}\cdot g_v(Z)\right] +
\mathbb E\left[ g_u(Z)\cdot \frac{Zf(v)}{R}\right]
\end{equation*}
and expanding the second gives
\begin{equation*}
\frac{1}{R^2}\mathbb E[f(u)^{\sf T}Z^{\sf T}Zf(v)] - (g_u,g_v)+
\mathbb E\left[\frac{Zf(u)}{R}\cdot g_v(Z)\right] -
\mathbb E\left[ g_u(Z)\cdot \frac{Zf(v)}{R}\right].
\end{equation*}
Adding these two expressions shows that the last two terms cancel. Since $\mathbb E[Z^{\sf T}Z] = I$, what remains equals
\begin{equation*}
\frac{1}{R^2} f(u)\cdot f(v) - (g_u,g_v),
\end{equation*}
which proves the claim.
\end{claimproof}
From the above claim it follows that
\[
\begin{split}
\sum_{\{u,v\}\in E}A(u,v) (g_u,g_v)
&= \frac{1}{R^2}\sdp_{\infty}(G,A) - \sum_{\{u,v\}\in E}A(u,v) \langle H_u,H_v\rangle\\
&\geq \left(\frac{1}{R^2} - \max_{u\in V}\| H_u\|^2\right) \sdp_\infty(G,A),
\end{split}
\]
where $\|H_u\|^2 = \langle H_u,H_u\rangle$.
By the triangle inequality, we have for every $u\in V$,
\begin{equation*}
\|H_u\|^2 \leq \left(\frac{1}{4} \|h^0_u\| + 2 \lambda \|h^1_u\|\right)^2
\leq
\frac{1}{R^2}\left(\frac{1}{2} + 2\lambda R\, \mathbb E\left[\Big\|\frac{ Z f(u)}{R} - g_u(Z)\Big\|\right]\right)^2.
\end{equation*}
By the definition of $g_u$, the vectors $Zf(u)/R$ and $g_u(Z)$ are parallel; moreover, they are equal if $\|Zf(u)\|\leq R$. Since $f(u)$ is a unit vector, the $r$ entries of the random vector $Zf(u)$ are i.i.d.~$N(0,1/r)$ random variables. Hence,
\[
\begin{split}
\mathbb E\left[\Big\|\frac{ Z f(u)}{R} - g_u(Z)\Big\|\right] &= \int_{\mathbb{R}^r}\ind[\|x\|\geq R]\Big(\frac{\|x\|}{R} - 1\Big)\Big(\frac{r}{2\pi}\Big)^{r/2} e^{-r\|x\|^2/2}dx\\
&= \int_R^{\infty} \int_{\sphere{r}}\rho^{r-1}\Big(\frac{\rho}{R} - 1\Big)\Big(\frac{r}{2\pi}\Big)^{r/2} e^{-r\rho^2/2}d\rho d\tilde\omega_r(\xi)\\
&\leq \frac{r^{r/2}}{R\Gamma\big(\frac{r}{2}\big)}\int_R^{\infty} \rho^r e^{-r\rho^2/2}d\rho,
\end{split}
\]
where $\tilde\omega_r$ is the unique rotationally invariant measure on $\sphere{r}$, normalized such that $\tilde\omega_r(\sphere{r}) = r^{r/2}/\Gamma(r/2)$.
Using a substitution of variables, we get
\begin{equation*}
\int_R^{\infty} \rho^r e^{-r\rho^2/2}d\rho = \frac{1}{2}\Big(\frac{2}{r}\Big)^{(r+1)/2} \Gamma\Big(\frac{r+1}{2},\frac{rR^2}{2}\Big),
\end{equation*}
where $\Gamma(a,x)$ is the upper incomplete Gamma function~\cite[Eq.~(4.4.5)]{AndrewsAskeyRoy}.
Collecting the terms from above then gives the bound
\begin{equation}\label{eq:chrom5}
\sdp_r(G,A)\geq \frac{1}{R^2}\left(1 -\left(\frac{1}{2} + \lambda\frac{2^{(r+1)/2}}{\sqrt{r}\Gamma\big(\frac{r}{2}\big)}\Gamma\Big(\frac{r+1}{2}, \frac{rR^2}{2}\Big)\right)^2\right)\sdp_{\infty}(G,A).
\end{equation}
The bound in the theorem follows by setting $R$ as small as possible such that the parenthesized factor in~\eqref{eq:chrom5} is a positive constant.
By Stirling's formula, there is a constant $C_1>0$ such that $\Gamma(x) \geq C_1 e^{-x}x^{x-1/2}$ holds (see for example Abramowitz and Stegun~\cite[Eq.~(6.1.37)]{Abramowitz:1964}). Hence, for some constants $c,C>0$, we have
\begin{equation}\label{eq:chrom2}
\frac{2^{(r+1)/2}}{\sqrt{r}\Gamma\big(\frac{r}{2}\big)} \leq C\left(\frac{c}{r}\right)^{r/2}.
\end{equation}
The power series of the incomplete gamma function~\cite[Eq.~(6.5.32)]{Abramowitz:1964} gives that if $a\leq x$, then $\Gamma(a,x) \leq C_2x^ae^{-x}$ for some constant $C_2>0$.
As $R \geq 2$, for some constants $d,D>0$, we have
\begin{equation}\label{eq:chrom3}
\Gamma\left(\frac{r+1}{2}, \frac{rR^2}{2}\right) \leq D \sqrt{r} \left(\frac{r}{d^{R^2}}\right)^{r/2}.
\end{equation}
Putting together estimates~\eqref{eq:chrom2} and~\eqref{eq:chrom3} gives
\begin{eqnarray*}
\lambda\frac{2^{(r+1)/2}}{\sqrt{r}\Gamma\big(\frac{r}{2}\big)} \Gamma\left(\frac{r+1}{2}, \frac{rR^2}{2}\right) &\leq & CD \sqrt{r}\lambda\left(\frac{c}{d^{R^2}}\right)^{r/2}.
\end{eqnarray*}
Since $r\leq\log\lambda$ there is some constant $C'$ such that for $R^2 = C'\big(\log \lambda\big)/r$, the above value is less than $1/4$. It follows that for this value of $R$, Inequality~\eqref{eq:chrom5} is nontrivial and we get the result.
\end{proof}
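The last step of the proof, choosing $R^2$ proportional to $(\log\lambda)/r$ so that the factor in~\eqref{eq:chrom5} becomes a positive constant, can be illustrated numerically. In the hedged sketch below, the values of $\lambda$, $r$, and $R$ are arbitrary illustrative choices, and $\Gamma(a,x)$ is evaluated via SciPy's regularized upper incomplete gamma function:

```python
import numpy as np
from scipy.special import gamma, gammaincc

def chrom5_factor(r, R, lam):
    """The factor multiplying sdp_infinity on the right-hand side of the
    final bound; Gamma(a, x) = gamma(a) * gammaincc(a, x) is the upper
    incomplete gamma function."""
    upper = gamma((r + 1) / 2.0) * gammaincc((r + 1) / 2.0, r * R**2 / 2.0)
    paren = 0.5 + lam * 2.0 ** ((r + 1) / 2.0) / (np.sqrt(r) * gamma(r / 2.0)) * upper
    return (1.0 - paren**2) / R**2

# Illustrative values: lam = vartheta of the complement = 1000, r = 2.
lam, r = 1000.0, 2
assert chrom5_factor(r, 2.0, lam) < 0   # R too small: the bound is trivial
assert chrom5_factor(r, 3.8, lam) > 0   # R^2 of order (log lam)/r suffices
```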
\section*{Acknowledgements}
The third author thanks Assaf Naor for helpful comments, and the Institute for Pure \& Applied Mathematics at UCLA for its hospitality and support during the program ``Modern Trends in Optimization and Its Application'', September 17--December 17, 2010.
\section{Introduction}
\label{sec:intro}
The most massive stars probably end their lives with a supernova
explosion or a quiet core collapse, becoming stellar-mass black holes.
The mass distribution of such black holes can provide important clues
to the end stages of evolution of these stars. In addition, the mass
distribution of stellar-mass black holes is an important input in
calculations of rates of gravitational wave emission events from
coalescing neutron star-black hole and black hole-black hole binaries
in the LIGO gravitational wave observatory \citep{Abadie2010}.
Observations of X-ray binaries in both the optical and X-ray bands can
provide a measurement of the mass of the compact object in these
systems. The current sample of stellar mass black holes with
dynamically measured masses includes 15 systems with low-mass, Roche
lobe overflowing donors and 5 wind-fed systems with high-mass
donors. Hence, sophisticated statistical analyses of the black hole
mass distribution in these systems are possible.
The first study of the mass distribution of stellar-mass black holes,
in \citet{Bailyn1998}, examined a sample of seven low-mass X-ray
binaries thought to contain a black hole, concluding in a Bayesian
analysis that the mass function was strongly peaked around seven solar
masses%
\footnote{A similar analysis of the neutron star mass distribution can
be found in \citet{Finn1994}.}. %
\citet{Bailyn1998} found evidence of a ``gap'' between the least
massive black hole and a ``safe'' upper limit for neutron star masses
of 3 $M_\odot$ (e.g.\ \citet{Kalogera1996}). Such a gap is puzzling in
light of theoretical studies that predict a continuous distribution of
compact object supernova remnant masses with a smooth transition from
neutron stars to black holes \citep{Fryer2001}. (We note that
\citet{Fryer2001} considered binary evolution effects only
heuristically and put forward some possible explanations for the gap
from \citet{Bailyn1998} both in the context of selection effects or in
connection to the energetics of supernova explosions.)
Towards the end of our analysis work, we became aware of a more recent
study \citep{Ozel2010}, also in a Bayesian framework, analyzing the
low-mass X-ray binary sample. Our results are largely consistent with
those obtained by \citet{Ozel2010}, who examined 16 low-mass X-ray
binary systems containing black holes and found a strongly peaked
distribution at $7.8 \pm 1.2 \, M_\odot$. They used two models for the
mass function: a Gaussian and a decaying exponential with a minimum
``turn-on'' mass (motivated by the analytical model of the black-hole
mass function in \citet{Fryer2001}). We note that \citet{Ozel2010} do
not provide confidence limits for the minimum black hole mass, instead
discussing only the model parameters at the peak of their posterior
distributions. They also do not perform any model selection analysis;
thus, they give the distribution of parameters within each of their
models, but cannot say which model is more likely to correspond to the
true distribution of black hole masses. Nevertheless, it appears that
their analysis confirms the existence of a mass gap. \citet{Ozel2010}
discuss possible selection effects that could lead to the appearance
of a mass gap, but conclude these effects could not produce the
observed gap, which they therefore claim is a real property of the
black hole mass distribution.
We use a Bayesian Markov-Chain Monte Carlo (MCMC) analysis to
quantitatively assess a wide range of models for the black hole mass
function for both samples. We include both parametric models, such
as a Gaussian, and non-parametric models where the mass function is
represented by histograms with various numbers of bins. (Our set of
models includes those of \citet{Ozel2010} and \citet{Bailyn1998}.)
After computing posterior distributions for the model parameters, we
use model selection techniques (including a new technique for
efficient reversible-jump MCMC \citep{Farr2010}) to compare the
evidence for the various models from both samples.
We define the ``minimum black hole mass'' to be the 1\% quantile,
$M_{1\%}$, in the black hole mass distribution (see Section
\ref{sec:minimum-mass}). In qualitative agreement with
\citet{Ozel2010} and \citet{Bailyn1998}, we find strong evidence for a
mass gap among the best models for both samples. Our analysis gives
distributions for $M_{1\%}$ implied by the data in the context of each
of our models for the black hole mass distribution. In the context of
the best model for the low-mass systems (a power-law), the
distribution for $M_{1\%}$ gives $M_{1\%} > 4.3$ $M_\odot$ with 90\%
confidence; in the context of the best model for the combined sample
of low- and high-mass systems, the distribution of $M_{1\%}$ has
$M_{1\%} > 4.5$ $M_\odot$ with 90\% confidence. Further, in the context
of models with lower evidence, most also have a mass gap, with 90\%
confidence bounds on $M_{1\%}$ significantly above a ``safe'' maximum
neutron star mass of 3 $M_\odot$ \citep{Kalogera1996}.
We find that, for the low-mass X-ray binary sample, the theoretical
model from \citet{Fryer2001}---a decaying exponential---is strongly
disfavored by our model selection. We find that the low-mass systems
are best described by a power law, followed closely by a Gaussian
(which is the second model considered by \citet{Ozel2010}). On the
other hand, we find that the theoretical model from \citet{Fryer2001}
is the preferred model for the combined sample of low- and high-mass
X-ray binaries. A model with two separate Gaussian peaks also has
relatively high evidence for the combined sample of systems. The
difference in best-fitting model indicates that the low-mass subsample
is not consistent with being drawn from the distribution of the
combined population.
The structure of this paper is as follows. In Section
\ref{sec:systems} we discuss the 15 systems that comprise the low-mass
X-ray binary black hole sample and the 5 additional high-mass,
wind-fed systems that make up the combined sample. In Section
\ref{sec:models} we discuss the Bayesian techniques we use to analyze
the black hole mass distribution, the techniques we use for model
selection, and the parametric and non-parametric models we will use
for the black hole mass distribution. In Section \ref{sec:results} we
discuss the results of our analysis and model selection. In Section
\ref{sec:minimum-mass} we discuss the distribution of the minimum
black hole mass implied by the analysis of Section \ref{sec:results}.
In Section \ref{sec:conclusion} we summarize our results and comment
on the significance of the observed mass gap in the context of
theoretical models. Appendix \ref{sec:mcmc} describes MCMC techniques
in some detail. Appendix \ref{sec:reversible-jump-mcmc} explains our
novel algorithm for efficiently performing the reversible jump MCMC
computations used in the model comparisons of Section
\ref{sec:results} (but see also \citet{Farr2010}).
\section{Systems}
\label{sec:systems}
The 20 X-ray binary systems on which this study is based are listed in
Table \ref{tab:sources}. We separate the systems into 15 low-mass
systems in which the central black hole appears to be fed by
Roche lobe overflow from the secondary, and 5 high-mass systems in
which the black hole is fed via winds (these systems all have a
secondary that appears to be more massive than the black hole). The
low- and high-mass systems undoubtedly have different evolutionary
tracks, and therefore it is reasonable that they would have different
black-hole mass distributions. We will first analyze the 15 low-mass
systems alone (Section \ref{sec:results-low-mass}), and then the
combined sample of 20 systems (Section \ref{sec:high-mass}).
In each of these systems, spectroscopic measurements of the secondary
star provide an orbital period for the system and a semi-amplitude for
the secondary's velocity curve. These measurements can be combined
into the mass function,
\begin{equation}
\label{eq:mass-function}
f(M) = \frac{P K^3}{2\pi G} = \frac{M \sin^3 i}{\left( 1 + q \right)^2},
\end{equation}
where $P$ is the orbital period, $K$ is the secondary's velocity
semi-amplitude, $M$ is the black hole mass, $i$ is the inclination of
the system, and $q \equiv M_2 / M$ is the mass ratio of the system.
The mass function defines a lower limit on the mass: $f(M) < M$. To
accurately determine the mass of the black hole, the inclination $i$
and mass ratio $q$ must be measured. Ideally, this can be
accomplished by fitting ellipsoidal light curves and studying the
rotational broadening of spectral lines from the secondary, but even
in the most studied case (see, e.g., \citet{Cantrell2010} on A0620)
this procedure is complicated. In particular, contributions from an
accretion disk and hot spots in the disk can significantly distort the
measured inclination and mass ratios. For some systems (e.g.\ GS 1354
\citep{Casares2009}) strong variability completely prevents
determination of the inclination from the lightcurve; in these cases
an upper limit on the inclination often comes from the observed lack
of eclipses in the lightcurve. In general, accurately determining $q$
and $i$ requires a careful system-by-system analysis.
For the purposes of this paper, we adopt the following simplified
approach to the estimation of the black hole mass from the observed
data. When an observable is well-constrained, we assume that the true
value is normally distributed about the measured value with a standard
deviation equal to the quoted observational error. This is the case
for the mass function in all the systems we use, and for many systems'
mass ratios and inclinations. When a large range is quoted in the
literature for an observable, we take the true value to be distributed
uniformly (for the mass ratio) or isotropically (for the inclination)
within the quoted range. Table \ref{tab:sources} gives the assumed
distribution for the observables in the 20 systems we use. We do not
attempt to deal with the systematic biases in the observational
determination of $f$, $q$, and $i$ in any realistic way; we are
currently investigating more realistic treatments of the errors
(including observational biases that can shift the peak of the true
mass distribution away from the ``best-fit'' mass in the
observations). This treatment will appear in future work.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|l|}
\hline
Source & $f$ ($M_\odot$) & $q$ & $i$ (degrees) & References \\
\hline \hline
GRS 1915 & $N(9.5, 3.0)$ & $N(0.0857, 0.0284)$ & $N(70, 2)$ &
\citet{Greiner2001} \\
XTE J1118 & $N(6.44, 0.08)$ & $N(0.0264, 0.004)$ & $N(68, 2)$ &
\citet{Gelino2008} \\ & & & & \citet{Harlaftis2005} \\
XTE J1650 & $N(2.73, 0.56)$ & $U(0, 0.5)$ & $I(50, 80)$ &
\cite{Orosz2004} \\
GRS 1009 & $N(3.17, 0.12)$ & $N(0.137, 0.015)$ & $I(37, 80)$ &
\cite{Filippenko1999} \\
A0620 & $N(2.76, 0.036)$ & $N(0.06, 0.004)$ & $N(50.98, 0.87)$ &
\citet{Cantrell2010} \\ & & & & \citet{Neilsen2008} \\
GRO J0422 & $N(1.13, 0.09)$ & $U(0.076, 0.31)$ & $N(45, 2)$ &
\citet{Gelino2003} \\
Nova Mus 1991 & $N(3.01, 0.15)$ & $N(0.128, 0.04)$ & $N(54,1.5)$
& \cite{Gelino2001} \\
GRO J1655 & $N(2.73,0.09)$ & $N(0.3663, 0.04025)$ & $N(70.2,
1.9)$ & \citet{Greene2001} \\
4U 1543 & $N(0.25, 0.01)$ & $U(0.25, 0.31)$ & $N(20.7,1.5)$ &
\citet{Orosz2003} \\
XTE J1550 & $N(7.73,0.4)$ & $U(0,0.04)$ & $N(74.7, 3.8)$ &
\citet{Orosz2010} \\
V4641 Sgr & $N(3.13,0.13)$ & $U(0.42,0.45)$ & $N(75,2)$ &
\citet{Orosz2003} \\
GS 2023 & $N(6.08, 0.06)$ & $U(0.056,0.063)$ & $I(66, 70)$ &
\citet{Charles2006} \\
& & & & \citet{Khargharia2010} \\
GS 1354 & $N(5.73, 0.29)$ & $N(0.12,0.04)$ & $I(50, 80)$ &
\citet{Casares2009} \\
Nova Oph 77 & $N(4.86,0.13)$ & $U(0, 0.053)$ & $I(60, 80)$ &
\citet{Charles2006} \\
GS 2000 & $N(5.01, 0.12)$ & $U(0.035, 0.053)$ & $I(43, 74)$ &
\citet{Charles2006} \\
\hline \hline
Cyg X1 & $N(0.251, 0.007)$ & $N(2.778, 0.386)$ & $I(23, 38)$ &
\citet{Gies2003} \\
M33 X7 & $N(0.46, 0.08)$ & $N(4.47, 0.61)$ & $N(74.6, 1)$ &
\citet{Orosz2007} \\
NGC 300 X1 & $N(2.6, 0.3)$ & $U(1.05, 1.65)$ & $I(60, 75)$ &
\citet{Crowther2010} \\
LMC X1 & $N(0.148, 0.004)$ & $N(2.91, 0.49)$ & $N(36.38, 2.02)$
& \citet{Orosz2009} \\
IC 10 X1 & $N(7.64, 1.26)$ & $U(0.7, 1.7)$ & $I(75, 90)$ &
\citet{Prestwich2007} \\
& & & & \citet{Silverman2008} \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:sources} The source parameters for the 20 X-ray binaries used in this work. The first 15 systems have
low-mass secondaries that feed the black hole via Roche lobe
overflow; the last five systems have high-mass secondaries ($q
\gtrsim 1$) that feed the black hole via winds. In each line, $f$
is the mass function for
the compact object, $q$ is the mass ratio $M_2/M$, and $i$ is the
inclination of the system to the line of sight. We indicate the
distribution used for the true parameters when computing the
probability distributions for the masses of these systems:
$N(\mu,\sigma)$ implies a Gaussian with mean $\mu$ and standard
deviation $\sigma$, $U(a,b)$ is a uniform distribution between $a$ and
$b$, and $I(\alpha,\beta)$ is an isotropic distribution between the
angles $\alpha$ and $\beta$.}
\end{table}
From these assumptions, we can generate probability distributions for
the true mass of the black hole given the observations and errors via
the Monte Carlo method: drawing samples of $f$, $q$, and $i$ from the
assumed distributions and computing the mass implied by Equation
\eqref{eq:mass-function} gives samples of $M$ from the distribution
induced by the relationship in Equation \eqref{eq:mass-function}.
Mass distributions generated in this way for the systems used in this
work are shown in Figure \ref{fig:low-masses} and Figure
\ref{fig:high-masses}. Systems for which $i$ is poorly constrained
have broad ``tails'' on their mass distributions. These mass
distributions constitute the ``observational data'' we will use in the
remainder of this paper.
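The Monte Carlo draw just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the code used in this work; it assumes the standard mass-function relation $M = f (1+q)^2 / \sin^3 i$ and uses the GRO J1655 entries from Table \ref{tab:sources}:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Draw observational parameters for GRO J1655 from the Table 1 distributions:
f = rng.normal(2.73, 0.09, n)              # mass function [M_sun]
q = rng.normal(0.3663, 0.04025, n)         # mass ratio M_2 / M
i = np.radians(rng.normal(70.2, 1.9, n))   # inclination [rad]

# Invert f = M sin^3(i) / (1 + q)^2 for the black hole mass M:
M = f * (1.0 + q) ** 2 / np.sin(i) ** 3

# A histogram of M is the induced mass distribution for this system.
print(np.percentile(M, [10, 50, 90]))
```

The resulting samples of $M$ are exactly the kind of per-system mass samples used throughout the analysis below.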
\begin{figure}
\begin{center}
\plotone{all-masses}
\end{center}
\caption{\label{fig:low-masses} The individual mass distributions
implied by Equation \eqref{eq:mass-function} and the assumed
distributions on observational parameters $f$, $q$, and $i$ given
in Table \ref{tab:sources} for the low-mass sources. The
significant asymmetry and long tails in many of these
distributions are the result of the non-linear relationship
(Equation \eqref{eq:mass-function}) between $M$, $f$, $q$, and
$i$.}
\end{figure}
\begin{figure}
\begin{center}
\plotone{high-masses}
\end{center}
\caption{\label{fig:high-masses} Mass distributions for the
wind-fed, high-mass systems computed from the distributions on
observed data in Table \ref{tab:sources} using Equation
\eqref{eq:mass-function}. (Similar to Figure
\ref{fig:low-masses}.) The asymmetry and long tails in these
distributions are the result of the non-linear relationship
between $M$, $f$, $q$, and $i$.}
\end{figure}
\section{Statistical Analysis}
\label{sec:models}
In this section we describe the statistical analysis we will apply to
various models for the underlying mass distribution from which the
low-mass sample and the combined sample of X-ray binary systems in
Table \ref{tab:sources} were drawn. The results of our analysis are
presented in Section \ref{sec:results}.
\subsection{Bayesian Inference}
The end result of our statistical analysis will be the probability
distribution for the parameters of each model implied by the data from
Section \ref{sec:systems} in combination with our prior assumptions
about the probability distribution for the parameters. Bayes' rule
relates these quantities. For a model with parameters $\vec{\theta}$ in
the presence of data $d$, Bayes' rule states
\begin{equation}
\label{eq:Bayes-rule}
p(\vec{\theta} | d) = \frac{p(d | \vec{\theta}) p(\vec{\theta})}{p(d)}.
\end{equation}
Here, $p(\vec{\theta}|d)$, called the posterior probability distribution
function, is the probability distribution for the parameters $\vec{\theta}$
implied by the data $d$; $p(d|\vec{\theta})$, called the likelihood, is the
probability of observing data $d$ given that the model parameters are
$\vec{\theta}$; $p(\vec{\theta})$, called the prior, reflects our estimate of the
probability of the various model parameters in the absence of any
data; and $p(d)$, called the evidence, is an overall normalizing
constant ensuring that
\begin{equation}
  \int d\vec{\theta}\, p(\vec{\theta}|d) = 1,
\end{equation}
whence
\begin{equation}
\label{eq:evidence-def}
p(d) = \int d\vec{\theta}\, p(d|\vec{\theta}) p(\vec{\theta}).
\end{equation}
In our context, the data are the mass distributions given in Section
\ref{sec:systems}: $d = \{ p_i(M)| i = 1, 2, \ldots, 20 \}$. We
assume that the measurements in Section \ref{sec:systems} are
independent, so the complete likelihood is given by a product of the
likelihoods for the individual measurements. For a model with
parameters $\vec{\theta}$ that predicts a mass distribution $p(M|\vec{\theta})$
for black holes, we have
\begin{equation}
\label{eq:likelihood-def}
p(d|\vec{\theta}) = \prod_i \int dM\, p_i(M) p(M|\vec{\theta}).
\end{equation}
That is, the likelihood of an observation is the average over the
individual mass distribution implied by the observation, $p_i(M)$, of
the probability for a black hole of that mass to exist according to
the model of the mass distribution, $p(M | \vec{\theta})$. We approximate
the integrals as averages of $p(M|\vec{\theta})$ over the Monte Carlo mass
samples drawn from the distributions in Table \ref{tab:sources} (also
see Figures \ref{fig:low-masses} and \ref{fig:high-masses}):
\begin{equation}
p(d|\vec{\theta}) \approx \prod_i \frac{1}{N_i} \sum_{j = 1}^{N_i} p(M_{ij} | \vec{\theta}),
\end{equation}
where $M_{ij}$ is the $j$th sample (out of a total $N_i$) from the
$i$th individual mass distribution.
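To make the Monte Carlo approximation above concrete, here is a minimal Python sketch. The per-system mass samples and the single-Gaussian model density used here are synthetic stand-ins, not our actual data or fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the per-system Monte Carlo mass samples M_ij:
mass_samples = [rng.normal(7.0 + 0.2 * k, 1.0, 5000) for k in range(15)]

def gaussian_model(M, mu, sigma):
    """A hypothetical model density p(M|theta): a single Gaussian."""
    return np.exp(-0.5 * ((M - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def log_likelihood(mu, sigma):
    # log p(d|theta) = sum_i log [ (1/N_i) sum_j p(M_ij | theta) ]
    return sum(np.log(np.mean(gaussian_model(M_i, mu, sigma)))
               for M_i in mass_samples)

# Parameters near the centers of the synthetic samples should be preferred:
print(log_likelihood(8.4, 1.7), log_likelihood(3.0, 1.0))
```

Each factor averages the model density over one system's mass samples, so a model that places little probability where a system's mass samples lie is penalized multiplicatively.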
Our calculation of the likelihood of each observation does not include
any attempt to account for selection effects in the observations. We
simply assume (almost certainly incorrectly) that any black hole drawn
from the underlying mass distribution is equally likely to be
observed. The results of \citet{Ozel2010} suggest that selection
effects are unlikely to significantly bias our analysis.
For a mass distribution with several parameters, $p(\vec{\theta} | d)$
lives in a multi-dimensional space. Previous works
\citep{Ozel2010,Bailyn1998} have considered models with only two
parameters; for such models evaluating $p(\vec{\theta}|d)$ on a grid may be
a reliable method. Many of our models for the underlying mass
distribution have three or more parameters. Exploring the entirety of
these parameter spaces with a grid rapidly becomes prohibitive as the
number of parameters increases. A more efficient way to explore the
distribution $p(\vec{\theta} | d)$ is to use a Markov Chain Monte Carlo
(MCMC) method (see Appendix \ref{sec:mcmc}). MCMC methods produce a
chain (sequence) of parameter samples, $\{ \vec{\theta}_i \, | \, i = 1,
\ldots \}$, such that a particular parameter sample, $\vec{\theta}$,
appears in the sequence with a frequency proportional to its posterior
probability, $p(\vec{\theta}|d)$. In this way, regions of parameter space
where $p(\vec{\theta}|d)$ is large are sampled densely while regions where
$p(\vec{\theta}|d)$ is small are effectively ignored.
Once we have a chain of samples from $p(\vec{\theta}|d)$, the distribution
for any quantity of interest can be computed by evaluating it on each
sample in the chain and forming a histogram of these values. For
example, to compute the one-dimensional distribution for a single
parameter obtained by integrating over all other dimensions in
parameter space, called the ``marginalized'' distribution, one plots
the histogram of the values of that parameter appearing in the chain.
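This marginalization-by-histogram step can be illustrated with a synthetic two-parameter chain (the chain below is drawn from an invented correlated Gaussian, not from our MCMC output):

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy two-parameter "chain" of posterior samples (mu, sigma):
chain = rng.multivariate_normal([7.3, 1.25],
                                [[0.2, 0.05], [0.05, 0.1]], 20_000)

# Marginalizing over sigma is just histogramming the mu column:
counts, edges = np.histogram(chain[:, 0], bins=50, density=True)

# With density=True the marginal integrates to one by construction:
print(np.sum(counts * np.diff(edges)))
```

No explicit integral over the other parameters is ever performed; ignoring the other columns of the chain performs the marginalization automatically.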
\subsection{Priors}
\label{sec:priors}
An important part of any Bayesian analysis is the priors placed on the
parameters of the model. The choice of priors can bias the results of
the analysis through both the shape and the range of prior support in
parameter space. The prior should reflect the ``best guess'' for the
distribution of parameters before examining any of the data. In the
absence of any information about the distribution of parameters, it is
best to choose a broad, uninformative prior so that the posterior is
biased as little as possible.
A prior that is independent of parameters, $\vec{\theta}$, in some region,
called ``flat,'' results in a posterior that is proportional to the
likelihood (see Equation \eqref{eq:Bayes-rule}). A flat prior does
not change the shape of the posterior. However, the choice of a flat
prior is parameterization-dependent: a change of parameter from
$\vec{\theta}$ to $\vec{\theta}' = \vec{f}(\vec{\theta})$ can change a flat
distribution into one with non-trivial structure. In this work, we
choose priors that are flat when the parameters are measured in
physical units. In particular, for the log-normal model (Section
\ref{sec:log-normal}) the natural parameters for the distribution are
the mean, $\langle \log M \rangle$, and standard deviation,
$\sigma_{\log M}$, in $\log M$, but we choose priors that are flat in
$\langle M \rangle$ and $\sigma_M$.
The range of prior support can also affect the results of a Bayesian
analysis. Because priors are normalized, prior support over a larger
region of parameter space results in a smaller prior probability at
each point. Such ``wide'' priors are implicitly claiming that any
particular sub-region of parameter space is less likely than it would
be under a prior of the same shape but smaller support volume. This
difference is important in model selection (Section
\ref{sec:model-selection}): when comparing two models with the same
likelihood, one with wide priors will seem less probable than one with
narrower priors. Of course, priors should be wide enough to encompass
all regions of parameter space that have significant likelihood. To
make the model comparison in Section \ref{sec:model-selection} fair,
we choose prior support in parameter space so that the allowed
parameter values for each model give distributions for which nearly
all the probability lies in the range $0 \leq M \leq 40 M_\odot$.
\subsection{Parametric Models for the Black Hole Mass Distribution}
\label{sec:parametric-models}
Here we discuss the various parametric models of the underlying black
hole mass distribution considered in this paper.
\subsubsection{Power-Law Models}
\label{sec:power-law}
Many astrophysical distributions are power laws. Let us assume that
the BH mass distribution is given by
\begin{equation}
\label{eq:power-law-dist}
p(M|\vec{\theta}) = p(M|\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha \}) =
\begin{cases}
      A M^\alpha & M_{\textnormal{min}} \leq M \leq M_{\textnormal{max}} \\
0 & \textnormal{otherwise}
\end{cases}.
\end{equation}
The normalizing constant $A$ is
\begin{equation}
A = \frac{1+\alpha}{M_{\textnormal{max}}^{1+\alpha} - M_{\textnormal{min}}^{1+\alpha}}.
\end{equation}
We choose uniform priors on $M_{\textnormal{min}}$ and $M_{\textnormal{max}} \geq M_{\textnormal{min}}$ between 0 and
$40 M_\odot$, and uniform priors on the exponent $\alpha$ in a broad
range between $-15$ and $13$:
\begin{equation}
p(\vec{\theta}) = p(\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\}) =
\begin{cases}
2 \frac{1}{40^2}\frac{1}{28} & 0 \leq M_{\textnormal{min}} \leq M_{\textnormal{max}}
\leq 40, \quad -15 \leq \alpha \leq 13 \\
0 & \textnormal{otherwise}
\end{cases}.
\end{equation}
Our MCMC analysis output is a list of $\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\}$
values distributed according to the posterior
\begin{equation}
p(\vec{\theta}|d) = p(\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\}|d) \propto p(d|\{M_{\textnormal{min}},
M_{\textnormal{max}}, \alpha\}) p(\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\}),
\end{equation}
with the likelihood $p(d|\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\})$ defined in
Equation \eqref{eq:likelihood-def}.
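As an illustrative sketch (not part of the analysis pipeline), the power-law density and its normalization can be coded directly from Equation \eqref{eq:power-law-dist}; the $\alpha = -1$ limiting case, where the normalization becomes logarithmic, is handled separately, and the numerical parameter values below are hypothetical:

```python
import numpy as np

def power_law_pdf(M, M_min, M_max, alpha):
    """Power-law mass distribution with hard cutoffs at M_min and M_max."""
    M = np.asarray(M, dtype=float)
    if alpha == -1.0:
        # Limiting case: the normalization A -> 1 / log(M_max / M_min).
        A = 1.0 / np.log(M_max / M_min)
    else:
        A = (1.0 + alpha) / (M_max ** (1.0 + alpha) - M_min ** (1.0 + alpha))
    p = np.zeros_like(M)
    inside = (M >= M_min) & (M <= M_max)
    p[inside] = A * M[inside] ** alpha
    return p

# Check the normalization for a steeply decaying example:
M = np.linspace(0.0, 40.0, 400_001)
print(np.sum(power_law_pdf(M, 6.0, 23.0, -6.4)) * (M[1] - M[0]))  # ~1
```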
\subsubsection{Decaying Exponential}
\label{sec:exponential}
\citet{Fryer2001} studied the relation between progenitor and remnant
mass in simulations of supernova explosions. Combining this with the
mass function for supernova progenitors, they suggested that the
black-hole mass distribution may be well-represented by a decaying
exponential with a minimum mass:
\begin{equation}
\label{eq:exp-def}
p(M|\vec{\theta}) = p(M|\{M_{\textnormal{min}}, M_0\}) =
\begin{cases}
\frac{e^{\frac{M_{\textnormal{min}}}{M_0}}}{M_0} \exp \left[ - \frac{M}{M_0}
\right] & M \geq M_{\textnormal{min}} \\
0 & \textnormal{otherwise}
\end{cases}.
\end{equation}
We choose a prior for this model where $M_{\textnormal{min}}$ is uniform between 0
and $40 M_\odot$. For each $M_{\textnormal{min}}$, we choose $M_0$ uniformly within a
range ensuring that $40M_\odot$ is at least two scale masses above the
cutoff: $40M_\odot \geq M_{\textnormal{min}} + 2 M_0$. This ensures that the majority
of the mass probability lies in the range $0 \leq M \leq 40M_\odot$.
The resulting prior is
\begin{equation}
p(\vec{\theta}) = p(\{M_{\textnormal{min}}, M_0\}) =
\begin{cases}
\frac{4}{40^2} & 0 \leq M_{\textnormal{min}} \leq 40, \quad 0 < M_0, \quad M_{\textnormal{min}}
+ 2 M_0 \leq 40, \\
0 & \textnormal{otherwise}
  \end{cases}.
\end{equation}
\subsubsection{Gaussian and Two-Gaussian Models}
\label{sec:gaussian}
The mass distributions in Figure \ref{fig:low-masses} all peak in a
relatively narrow range near $\sim 10 M_\odot$. The prototypical
single-peaked probability distribution is a Gaussian:
\begin{equation}
\label{eq:gaussian-def}
p(M|\vec{\theta}) = p(M|\{\mu, \sigma\}) = \frac{1}{\sigma \sqrt{2\pi}}
\exp\left[ - \left(\frac{M - \mu}{\sqrt{2} \sigma} \right)^2 \right].
\end{equation}
We use a prior on the mean mass, $\mu$, and the standard deviation,
$\sigma$, that ensures that the majority of the mass distribution lies
below $40 M_\odot$:
\begin{equation}
\label{eq:gaussian-prior-def}
p(\{\mu,\sigma\}) =
\begin{cases}
\frac{8}{40^2} & 0 \leq \mu \leq 40, \quad \sigma \geq 0, \quad
\mu + 2\sigma \leq 40 \\
0 & \textnormal{otherwise}
\end{cases},
\end{equation}
where both $\mu$ and $\sigma$ are measured in solar masses.
Though we do not expect to find a second peak in the low-mass
distribution, we may find evidence of one when exploring the combined
low- and high-mass samples. To look for a second peak in the
black-hole mass distribution, we use a two-Gaussian model:
\begin{multline}
\label{eq:two-gaussian-def}
p(M|\vec{\theta}) = p(M|\{\mu_1, \mu_2, \sigma_1, \sigma_2, \alpha\}) = \\
\frac{\alpha}{\sigma_1 \sqrt{2\pi}} \exp\left[ - \left( \frac{M -
\mu_1}{\sqrt{2}\sigma_1} \right)^2 \right] + \frac{1-\alpha}{\sigma_2 \sqrt{2\pi}} \exp\left[ - \left( \frac{M -
\mu_2}{\sqrt{2}\sigma_2} \right)^2 \right].
\end{multline}
The probability is a linear combination of two Gaussians with weights
$\alpha$ and $1-\alpha$. We restrict $\mu_1 < \mu_2$ and also impose
combined conditions on $\mu_i$ and $\sigma_i$ that ensure that most of
the mass probability lies below $40 M_\odot$ with the prior
\begin{equation}
p(\{\mu_1, \mu_2, \sigma_1, \sigma_2, \alpha\}) =
\begin{cases}
2 p(\{\mu_1, \sigma_1\}) p(\{\mu_2, \sigma_2\}) & \mu_1 \leq
\mu_2, \quad 0 \leq \alpha \leq 1 \\
0 & \textnormal{otherwise}
\end{cases},
\end{equation}
where the single-Gaussian prior, $p(\{\mu_i, \sigma_i\})$, is defined
in Equation \eqref{eq:gaussian-prior-def}.
\subsubsection{Log Normal}
\label{sec:log-normal}
Many of the mass distributions for the systems in Figure
\ref{fig:low-masses} rise rapidly to a peak and then fall off more
slowly in a longer tail toward high masses. So far, none of the
parameterized distributions we have discussed have this property. In
this section, we consider a log-normal model for the underlying mass
distribution; the log-normal distribution has a rise to a peak with a
slower falloff in a long tail.
The log-normal distribution gives $\log M$ a Gaussian distribution
with mean $\mu$ and standard deviation $\sigma$:
\begin{equation}
\label{eq:log-normal-def}
p(M|\vec{\theta}) = p(M|\{\mu, \sigma \}) = \frac{1}{
\sqrt{2\pi} M \sigma} \exp\left[ -\frac{\left(\log M - \mu\right)^2}{2 \sigma^2} \right].
\end{equation}
The parameters $\mu$ and $\sigma$ are dimensionless; the mean mass
$\langle M \rangle$ and mass standard deviation $\sigma_M$ are related
to $\mu$ and $\sigma$ by
\begin{eqnarray}
\label{eq:avg-M}
\langle M \rangle & = & \exp\left( \mu + \frac{1}{2} \sigma^2
\right) \\
\label{eq:sigma-M}
\sigma_M & = & \langle M \rangle \sqrt{\exp\left( \sigma^2 \right) - 1}.
\end{eqnarray}
For a fair comparison with the other models, we impose a prior that is
flat in $\langle M \rangle$ and $\sigma_M$. To ensure that most of
the probability in this model occurs for masses below $40 M_\odot$, we
require $\langle M \rangle + 2 \sigma_M \leq 40$, resulting in a
prior
\begin{equation}
p(\vec{\theta}) = p(\{\mu,\sigma\}) =
\begin{cases}
\frac{4}{40^2} \left| \frac{\partial \left(\langle M \rangle,
\sigma_M \right)}{\partial \left( \mu, \sigma \right)}
\right| & \sigma > 0, \quad \langle M \rangle + 2 \sigma_M \leq 40
\\
0 & \textnormal{otherwise}
\end{cases},
\end{equation}
where
\begin{equation}
\left| \frac{\partial \left(\langle M \rangle,
\sigma_M \right)}{\partial \left( \mu, \sigma \right)}
\right| = \frac{\exp\left( 2 \left( \mu + \sigma^2 \right)\right)
\sigma}{\sqrt{\exp\left( \sigma^2 \right) - 1}}
\end{equation}
is the determinant of the Jacobian of the map in Equations
\eqref{eq:avg-M} and \eqref{eq:sigma-M}.
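As a consistency check on this Jacobian (not part of the analysis itself), one can compare the analytic determinant to a finite-difference estimate of the map in Equations \eqref{eq:avg-M} and \eqref{eq:sigma-M}; the sample point $(\mu, \sigma) = (2.0, 0.17)$, corresponding to $\langle M \rangle \approx 7.5\,M_\odot$, is chosen arbitrarily:

```python
import numpy as np

def to_physical(mu, sigma):
    """Map (mu, sigma) of the log-normal to (<M>, sigma_M)."""
    mean_M = np.exp(mu + 0.5 * sigma ** 2)
    sigma_M = mean_M * np.sqrt(np.exp(sigma ** 2) - 1.0)
    return mean_M, sigma_M

def jacobian_analytic(mu, sigma):
    return (np.exp(2.0 * (mu + sigma ** 2)) * sigma
            / np.sqrt(np.exp(sigma ** 2) - 1.0))

def jacobian_numeric(mu, sigma, h=1e-6):
    # Central finite differences for the 2x2 Jacobian determinant.
    dmu = (np.array(to_physical(mu + h, sigma))
           - np.array(to_physical(mu - h, sigma))) / (2 * h)
    dsig = (np.array(to_physical(mu, sigma + h))
            - np.array(to_physical(mu, sigma - h))) / (2 * h)
    return dmu[0] * dsig[1] - dmu[1] * dsig[0]

print(jacobian_analytic(2.0, 0.17), jacobian_numeric(2.0, 0.17))
```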
\subsection{Non-Parametric Models for the Black Hole Mass Distribution}
\label{sec:non-parametric-models}
The previous subsection discussed models for the underlying black hole
mass distribution that assumed particular parameterized shapes for the
distribution. In this subsection, we will discuss models that do not
assume a priori a shape for the black hole mass distribution. The
fundamental non-parametric distribution in this section is a
histogram with some number of bins, $N_{\textnormal{bin}}$. Such a distribution is
piecewise-constant in $M$.
One choice for representing such a histogram would be to fix the bin
locations, and allow the heights to vary. With this approach, one
should be careful not to ``split'' features of the mass distribution
across more than one bin in order to avoid diluting the sensitivity to
such features; similarly, one should avoid including more than ``one''
feature in each bin. The locations of the bins, then, are crucial.
An alternative representation of histogram mass distributions avoids
this difficulty.
We choose to represent a histogram mass distribution with $N_{\textnormal{bin}}$ bins
by allocating a fixed probability, $1/N_{\textnormal{bin}}$, to each bin. The lower
and upper bounds for each bin are allowed to vary; when these are
close to each other (i.e.\ the bin is narrow), the distribution will
have a large value, and conversely when the bounds are far from each
other. We assume that the non-zero region of the distribution is
contiguous, so we can represent the boundaries of the bins as a
non-decreasing array of masses, $w_0 \leq w_1 \leq \ldots \leq
w_{N_{\textnormal{bin}}}$, with $w_0$ the minimum and $w_{N_{\textnormal{bin}}}$ the maximum mass
for which the distribution has support. This gives the distribution
\begin{equation}
\label{eq:hist-def}
p(M|\theta) = p(M|\{w_0, \ldots, w_{N_{\textnormal{bin}}}\}) =
\begin{cases}
0 & M < w_0 \textnormal{ or } w_{N_{\textnormal{bin}}} \leq M \\
\frac{1}{N_{\textnormal{bin}}} \frac{1}{w_{i+1} - w_i} & w_i \leq M < w_{i+1}
\end{cases}.
\end{equation}
For priors on the histogram model with $N_{\textnormal{bin}}$ bins, we assume that
the bin boundaries are uniformly distributed between 0 and $40 M_\odot$
subject only to the constraint that the boundaries are non-decreasing
from $w_0$ to $w_{N_{\textnormal{bin}}}$:
\begin{equation}
p(\{w_0, \ldots, w_{N_{\textnormal{bin}}}\}) =
\begin{cases}
\frac{\left(N_{\textnormal{bin}}+1\right)!}{40^{N_{\textnormal{bin}}+1}} & 0 \leq w_0 \leq w_1
\leq \ldots \leq w_{N_{\textnormal{bin}}} \leq 40 \\
0 & \textnormal{otherwise}
\end{cases}.
\end{equation}
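A minimal sketch of evaluating Equation \eqref{eq:hist-def} for a three-bin model follows; the bin boundaries below are illustrative choices, not fitted values:

```python
import numpy as np

def histogram_pdf(M, w):
    """Variable-bin histogram: equal probability 1/N_bin per bin, with
    non-decreasing boundaries w_0 <= w_1 <= ... <= w_Nbin."""
    w = np.asarray(w, dtype=float)
    n_bin = len(w) - 1
    M = np.asarray(M, dtype=float)
    # searchsorted with side='right' gives the bin index i with w_i <= M < w_{i+1}:
    idx = np.searchsorted(w, M, side='right') - 1
    inside = (idx >= 0) & (idx < n_bin)
    p = np.zeros_like(M)
    widths = np.diff(w)
    p[inside] = 1.0 / (n_bin * widths[idx[inside]])
    return p

w = [6.0, 6.9, 8.0, 12.0]  # hypothetical three-bin boundaries [M_sun]
M = np.linspace(0.0, 40.0, 400_001)
print(np.sum(histogram_pdf(M, w)) * (M[1] - M[0]))  # ~1
```

Because each bin carries probability $1/N_{\textnormal{bin}}$ by construction, narrowing a bin automatically raises its density, so no separate height parameters are needed.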
We consider histograms with up to five bins in this work. We will see
that the evidence for the histogram models (see Sections
\ref{sec:model-selection}, \ref{sec:low-mass-model-selection}, and
\ref{sec:high-mass-model-selection}) from both the low-mass and
combined datasets is decreasing as the number of bins reaches five,
indicating that increasing the number of bins beyond five would not
sufficiently improve the fit to the mass distribution to compensate
for the extra parameter-space volume implied by the additional
parameters.
\subsection{Bayesian Model Selection}
\label{sec:model-selection}
In Sections \ref{sec:parametric-models} and
\ref{sec:non-parametric-models}, we discussed a series of models for
the underlying black hole mass distribution. Our MCMC analysis will
provide the posterior distribution of the parameters within each
model, but does not tell us which models are more likely to correspond
to the actual distribution. This model selection problem is the topic
of this section.
Consider a set of models, $\{M_i| i = 1, \ldots\}$, each with
corresponding parameters $\vec{\theta}_i$. Re-writing Equation
\eqref{eq:Bayes-rule} to be explicit about the assumption of a
particular model, we have
\begin{equation}
p(\vec{\theta}_i | d, M_i) = \frac{p(d|\vec{\theta}_i, M_i) p(\vec{\theta}_i | M_i)}{p(d|M_i)}.
\end{equation}
This gives the posterior probability of the parameters $\vec{\theta}_i$ in
the context of model $M_i$. But, the model itself can be regarded as
a discrete parameter in a larger ``super-model'' that encompasses all
the $M_i$. The parameters for the super-model are $\{M_i,
\vec{\theta}_i\}$: a choice of model and the corresponding parameter values
within that model. Each point in the super-model parameter space is a
statement that, e.g., ``the underlying mass distribution is a
Gaussian, with parameters $\mu$ and $\sigma$,'' or ``the underlying
mass distribution is a triple-bin histogram with parameters $w_1$,
$w_2$, $w_3$, and $w_4$,'' or .... The posterior probability of the
super-model parameters is given by Bayes' rule:
\begin{equation}
\label{eq:bayes-explicit-model}
p(\vec{\theta}_i, M_i|d) = \frac{p(d|\vec{\theta}_i, M_i) p(\vec{\theta}_i |M_i) p(M_i)}{p(d)},
\end{equation}
where we have introduced the model prior $p(M_i)$, which represents
our estimate on the probability that model $M_i$ is correct in the
absence of the data $d$. The normalizing evidence is now
\begin{equation}
\label{eq:multi-model-evidence-def}
p(d) = \sum_i \int d\vec{\theta}_i\, p(d|\vec{\theta}_i, M_i) p(\vec{\theta}_i |M_i) p(M_i) = \sum_i
p(d|M_i) p(M_i),
\end{equation}
writing the single-model evidence from Equation
\eqref{eq:evidence-def} as $p(d|M_i)$ to be explicit about the
dependence on the choice of model.
To compare the various models $M_i$, we are interested in the
marginalized posterior probability of $M_i$:
\begin{equation}
\label{eq:model-posterior-def}
p(M_i|d) \equiv \int d\vec{\theta}_i\, p(\vec{\theta}_i, M_i|d).
\end{equation}
This is the integral of the posterior over the entire parameter space
of model $M_i$. The marginalized posterior probability of model $M_i$
can be re-written in terms of the single-model evidence, $p(d|M_i)$
(see Equations \eqref{eq:bayes-explicit-model} and
\eqref{eq:evidence-def}):
\begin{equation}
\label{eq:model-evidence-def}
p(M_i|d) = \int d\vec{\theta}_i\, p(\vec{\theta}_i, M_i|d) = \frac{p(M_i)}{p(d)} \int d\vec{\theta}_i
p(d|\vec{\theta}_i,M_i) p(\vec{\theta}_i|M_i) = \frac{p(d|M_i) p(M_i)}{p(d)}.
\end{equation}
Here and throughout, we assume that any of the models in Section
\ref{sec:models} are equally likely a priori, so the model priors are
equal:
\begin{equation}
p(M_i) = \frac{1}{N_{\textnormal{model}}}.
\end{equation}
A powerful technique%
\footnote{We also attempted to compute $p(M_i|d)$ using two other
methods: the well-known harmonic-mean estimator and the direct
integration methods described in \citet{Weinberg2010}. The harmonic
mean is known to be very sensitive to outlying points in the MCMC in
general, and we found this to be true in our specific application.
The statistical properties of the direct integration algorithm from
\citet{Weinberg2010} are less certain, but we found that it was
quite noisy in our application compared to the reversible-jump MCMC.
Due to the statistical noise in the other two methods, we use the
results from our reversible jump MCMC analysis for model
selection.} %
for computing $p(M_i|d)$ is the reversible-jump MCMC
\citep{Green1995}. Reversible jump MCMC, discussed in more detail in
Appendix \ref{sec:reversible-jump-mcmc}, is a standard MCMC analysis
conducted in the super-model. The result of a reversible jump MCMC is
a chain of samples, $\{ M_i, \vec{\theta}_i\, | \, i = 1, \ldots \}$, from the
super-model parameter space. The integral in Equation
\eqref{eq:model-evidence-def} can be estimated by counting the number
of times that a given model $M_i$ appears in the reversible jump MCMC
chain:
\begin{equation}
p(M_i|d) = \int d\vec{\theta}_i p(M_i, \vec{\theta}_i|d) \approx \frac{N_i}{N},
\end{equation}
where $N_i$ is the number of MCMC samples that have discrete parameter
$M_i$, and $N$ is the total number of samples in the MCMC.
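The counting step can be sketched as follows; the model labels and visit counts are invented for illustration and do not reflect our actual reversible-jump results:

```python
from collections import Counter

# A toy reversible-jump chain: each sample records the model it visited.
chain_models = (["gaussian"] * 5400 + ["two_gaussian"] * 2100 +
                ["log_normal"] * 1700 + ["power_law"] * 800)

counts = Counter(chain_models)
N = len(chain_models)

# p(M_i|d) is estimated by the visit fraction N_i / N:
posterior = {model: n / N for model, n in counts.items()}
print(posterior)
```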
Naively implemented reversible jump MCMCs can be very inefficient when
the posteriors for a model or models are strongly peaked. In this
circumstance, a proposed MCMC jump into one of the peaked models is
unlikely to land on the peak by chance; since it is rare to propose a
jump into the important regions of parameter space of the peaked model
in a naive reversible jump MCMC, the output chain must be very long to
ensure that all models have been compared fairly. We describe a new
algorithm in Appendix \ref{sec:reversible-jump-mcmc} that produces
very efficient jump proposals for a reversible jump MCMC by exploiting
the information about the model posteriors we have from the
single-model MCMC samples. (See also \citet{Farr2010}.) With this
algorithm, reasonable chain lengths can fairly compare all the models
under consideration. We have used this algorithm to perform 10-way
reversible jump MCMCs to calculate the relative evidence for both the
parametric and non-parametric models in this study. These results
appear in Section \ref{sec:results}.
\section{Results}
\label{sec:results}
In this section we discuss the results of our MCMC analysis of the
posterior distributions of parameters for the models in Sections
\ref{sec:parametric-models} and \ref{sec:non-parametric-models}. We
also discuss model selection results. The results in Section
\ref{sec:results-low-mass} apply to the low-mass sample of
systems, while those of Section \ref{sec:high-mass} apply to the
combined sample of systems.
\subsection{Low-Mass Systems}
\label{sec:results-low-mass}
Table \ref{tab:low-mass-parametric} gives quantiles of the
marginalized parameter distributions of the parametric models implied
by the low-mass data. Table \ref{tab:low-mass-non-parametric} gives
the quantiles of the histogram bin boundaries in the non-parametric
analysis implied by the low-mass data.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Model & Parameter & 5\% & 15\% & 50\% & 85\% & 95\% \\
\hline \hline
Power Law (Equation \eqref{eq:power-law-dist}) & $M_{\textnormal{min}}$ &
1.2786 & 4.1831 & 6.1001 & 6.5011 & 6.6250 \\
\hline
& $M_{\textnormal{max}}$ & 8.5578 & 8.9214 & 23.3274 & 36.0002 & 38.8113 \\
\hline
& $\alpha$ & -12.4191 & -10.1894 & -6.3861 & 2.8476 & 5.6954 \\
\hline \hline
Exponential (Equation \eqref{eq:exp-def}) & $M_{\textnormal{min}}$ &
5.0185 & 5.4439 & 6.0313 & 6.3785 & 6.5316 \\
\hline
& $M_0$ & 0.7796 & 0.9971 & 1.5516 & 2.4635 & 3.2518 \\
\hline \hline
Gaussian (Equation \eqref{eq:gaussian-def}) & $\mu$ &
6.6349 & 6.9130 & 7.3475 & 7.7845 & 8.0798 \\
\hline
& $\sigma$ & 0.7478 & 0.9050 & 1.2500 & 1.7335 & 2.1134 \\
\hline \hline
Two Gaussian (Equation \eqref{eq:two-gaussian-def}) & $\mu_1$ &
5.4506 & 6.3877 & 7.1514 & 7.6728 & 7.9803 \\
\hline
& $\mu_2$ & 7.2355 & 7.7387 & 12.3986 & 25.2456 & 31.4216 \\
\hline
& $\sigma_1$ & 0.3758 & 0.7626 & 1.2104 & 1.7981 & 2.3065 \\
\hline
& $\sigma_2$ & 0.2048 & 0.6421 & 1.9182 & 5.2757 & 7.2625 \\
\hline
& $\alpha$ & 0.0983 & 0.3526 & 0.8871 & 0.9792 & 0.9936 \\
\hline \hline
Log Normal (Equation \eqref{eq:log-normal-def}) & $\langle M \rangle$ &
6.7619 & 7.0122 & 7.4336 & 7.9159 & 8.2942 \\
\hline
& $\sigma_M$ & 0.7292 & 0.8920 & 1.2704 & 1.8695 & 2.4069 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:low-mass-parametric} Quantiles of the
marginalized distribution for each of the parameters in the models
discussed in Section \ref{sec:parametric-models} implied by the low-mass data. We indicate
the 5\%, 15\%, 50\% (median), 85\%, and 95\% quantiles. The
marginalized distribution can be misleading when there are strong
correlations between variables. For example, while the
marginalized distributions for the power law parameters are quite
broad, the distribution of mass distributions implied by the power
law MCMC samples is similar to the other models. This occurs in
spite of the broad marginalized distributions because of the
correlations between the slope and limits of the power law
discussed in Section \ref{sec:power-law}.}
\end{table}
Recall that each MCMC sample in our analysis gives the parameters for
a model of the black hole mass distribution. The chain of samples of
parameters for a particular model gives us a distribution of black
hole mass distributions. Figure \ref{fig:dists} gives a sense of the
shape and range of the distributions of black hole mass distributions
that result from our MCMC analysis. In Figure \ref{fig:dists} we plot
the median, 10\% and 90\% values of the black hole mass distributions
that result from the MCMC chains. Because the choice of parameters
that gives, for example, the median distribution value at one mass
need not give the median distribution at another mass, these curves do
not necessarily look like the underlying model for the mass
distribution. For the same reason, they are not necessarily
normalized.
\begin{figure}
\begin{center}
\plotone{dist}
\end{center}
\caption{\label{fig:dists} The median (solid line), 10\% (lower
dashed line) and 90\% (upper dashed line) values of the black hole
mass distribution, $p(M|\theta)$, at various masses implied by the
posterior $p(\theta|d)$ for the models discussed in Sections
\ref{sec:parametric-models} and \ref{sec:non-parametric-models}.
These distributions use only the 15 low-mass observations in Table
\ref{tab:sources} (the combined sample is analyzed in Section
\ref{sec:high-mass}). Note that these ``distributions of
distributions'' are not necessarily normalized, and need not be
shaped like the underlying model distributions.}
\end{figure}
\subsubsection{Power Law}
In Figure \ref{fig:power-law}, we display a histogram of the
resulting samples in each of the parameters $M_{\textnormal{min}}$, $M_{\textnormal{max}}$, and
$\alpha$ for the power law model (see Equation
\eqref{eq:power-law-dist}); this represents the one-dimensional
``marginalized'' distribution
\begin{equation}
\label{eq:alpha-pdf}
p(\alpha|d) = \int dM_{\textnormal{min}}\, dM_{\textnormal{max}}\, p(\{M_{\textnormal{min}}, M_{\textnormal{max}}, \alpha\}|d),
\end{equation}
and similarly for $M_{\textnormal{min}}$ and $M_{\textnormal{max}}$.
\begin{figure}
\begin{center}
\plotone{power-law}
\end{center}
\caption{\label{fig:power-law} Histograms of the marginalized
distribution for the three parameters $M_{\textnormal{min}}$ (top, left), $M_{\textnormal{max}}$
(top, right), and $\alpha$ (bottom) from the power-law model. The
marginalized distribution for $\alpha$ is broad, with $-11.8 <
\alpha < 6.8$ enclosing 90\% of the probability. We have
$p(\alpha < 0) = 0.6$; the median value is $\alpha = -3.35$. The
broad distribution for $\alpha$ (and the other parameters) is due
to correlations between the parameters discussed in the main text;
see Figure \ref{fig:power-law-2D}.}
\end{figure}
The marginalized distribution for $\alpha$ is broad, with
\begin{equation}
-11.8 < \alpha < 6.8
\end{equation}
enclosing 90\% of the probability (excluding 5\% on each side). We
have $p(\alpha < 0) = 0.6$. The median value is $\alpha = -3.35$.
The broadness of the marginalized distribution for $\alpha$ comes from
the need to match the relatively narrow range in mass of the
low-mass systems. When $\alpha$ is negative, the resulting mass
distribution slopes down; $M_{\textnormal{min}}$ is constrained to be near the lowest
mass of the observed black holes, while $M_{\textnormal{max}}$ is essentially
irrelevant. Conversely, when $\alpha$ is positive and the mass
distribution slopes up, $M_{\textnormal{max}}$ must be close to the largest mass
observed, while $M_{\textnormal{min}}$ is essentially irrelevant. Figure
\ref{fig:power-law-2D} illustrates this effect, showing the
correlations between $\alpha$ and $M_{\textnormal{min}}$ and $\alpha$ and $M_{\textnormal{max}}$.
When we include the high-mass systems in the analysis, the long tail
will eliminate this effect by bringing both $M_{\textnormal{min}}$ and $M_{\textnormal{max}}$ into
play for all values of $\alpha$.
\begin{figure}
\begin{center}
\plotone{power-law-2D}
\end{center}
\caption{\label{fig:power-law-2D} MCMC samples in the $M_{\textnormal{min}},
\alpha$ (top) and $M_{\textnormal{max}}, \alpha$ (bottom) planes for the
power-law model discussed in Section \ref{sec:power-law}. The
correlations between $\alpha$ and the power law bounds discussed
in the text are apparent: when $\alpha$ is positive, the mass
distribution slopes upward and $M_{\textnormal{max}}$ is constrained to be near
the maximum observed mass while $M_{\textnormal{min}}$ is unconstrained. When
$\alpha$ is negative, the mass distribution slopes down and
$M_{\textnormal{min}}$ is constrained to be near the lowest mass observed, while
$M_{\textnormal{max}}$ is unconstrained. }
\end{figure}
\subsubsection{Decaying Exponential}
Figure \ref{fig:exp-marginal} displays the marginalized posterior
distribution for the scale mass of the exponential, $M_0$, and the
cutoff mass, $M_{\textnormal{min}}$ (see Equation \eqref{eq:exp-def}). The median
scale mass is $M_0 = 1.55$, and $0.78 \leq M_0 \leq 3.25$ with 90\%
confidence. This model was one of those considered by
\citet{Ozel2010}, whose results ($M_0 \sim 1.5$ and $M_{\textnormal{min}} \sim 6.5$)
are broadly consistent with ours. Figure \ref{fig:exp-2D} displays
the MCMC samples in the $M_{\textnormal{min}}$, $M_0$ plane for this model. There is
a small correlation between smaller $M_{\textnormal{min}}$ and larger $M_0$, which is
driven by the need to widen the distribution to encompass the peak of
the mass measurements in Figure \ref{fig:low-masses} when the minimum
mass is smaller.
\begin{figure}
\begin{center}
\plotone{exp-cutoff}
\end{center}
\caption{\label{fig:exp-marginal} The distribution of scale masses,
$M_0$ (dashed histogram), and minimum masses, $M_{\textnormal{min}}$ (solid
histogram), both measured in units of a solar mass for the
exponential underlying mass distribution defined in Equation
\eqref{eq:exp-def}. The median scale mass is $M_0 = 1.55$, and
$0.78 \leq M_0 \leq 3.25$ with 90\% confidence.}
\end{figure}
\begin{figure}
\begin{center}
\plotone{exp-cutoff-2d}
\end{center}
\caption{\label{fig:exp-2D} The MCMC samples in the $M_{\textnormal{min}}$, $M_0$
plane for the decaying exponential underlying mass distribution
model. The slight correlation between smaller $M_{\textnormal{min}}$ and larger
$M_0$ is driven by the need to widen the mass distribution to
encompass the peak of the measurements in Figure
\ref{fig:low-masses} when the minimum mass decreases.}
\end{figure}
\subsubsection{Gaussian}
Figure \ref{fig:gaussian} shows the resulting marginalized
distributions for the parameters $\mu$ and $\sigma$. We constrain the
peak of the Gaussian to $6.63 \leq \mu \leq 8.08$ with 90\%
confidence. This model also appeared in \citet{Ozel2010}; they found
$\mu \sim 7.8$ and $\sigma \sim 1.2$, consistent with our results
here.
\begin{figure}
\begin{center}
\plotone{gaussian}
\end{center}
\caption{\label{fig:gaussian} Marginalized posterior distributions
for the mean, $\mu$ (solid histogram), and standard deviation,
$\sigma$ (dashed histogram), both in solar masses for the Gaussian
underlying mass distribution defined in Equation
\eqref{eq:gaussian-def}. The peak of the Gaussian, $\mu$, is
constrained to $6.63 \leq \mu \leq 8.08$ with 90\% confidence.}
\end{figure}
\subsubsection{Two Gaussian}
Figure \ref{fig:two-gaussian} shows the marginalized distributions for
the two-Gaussian model parameters from our MCMC runs. We find $\alpha
> 0.8$ with 62\% probability, clearly favoring the Gaussian with
smaller mean. The distributions for $\mu_1$ and $\sigma_1$ are
similar to those of the single Gaussian displayed in Figure
\ref{fig:gaussian}, indicating that this Gaussian is centered around
the peaks of the low-mass distributions. The second Gaussian's
parameter distributions are much broader. The second Gaussian appears
to be sampling the tail of the mass samples. In spite of the extra
degrees of freedom in this model, we find that it is strongly
disfavored relative to the single-Gaussian model for this dataset:
$p(\textnormal{Gaussian}|d) / p(\textnormal{Two Gaussian}|d) \simeq
4.7$ (see Sections \ref{sec:model-selection} and
\ref{sec:low-mass-model-selection} for discussion).
\begin{figure}
\begin{center}
\plotone{two-gaussian}
\end{center}
\caption{\label{fig:two-gaussian} The marginal distributions for the
five parameters of the two-Gaussian model. The top panel is
$\mu_1$ (solid histogram) and $\sigma_1$ (dashed histogram), the
middle panel is $\mu_2$ (solid histogram) and $\sigma_2$ (dashed
histogram), and the bottom panel is $\alpha$. We have $\alpha >
0.8$ with 62\% probability, favoring the first of the two
Gaussians. The distributions for $\mu_1$ and $\sigma_1$ are
similar to those of the single Gaussian model displayed in Figure
\ref{fig:gaussian}; the second Gaussian's parameter distributions
are much broader (recall that we constrain $\mu_2 > \mu_1$). The
second Gaussian is attempting to fit the tail of the mass samples.
The extra degrees of freedom in the distribution from the second
Gaussian do not provide enough extra fitting power to compensate
for the increase in parameter space, however: the two-Gaussian
model is disfavored relative to the single Gaussian by a factor of
$4.7$ on this dataset (see Sections \ref{sec:model-selection} and
\ref{sec:low-mass-model-selection} for discussion).}
\end{figure}
\subsubsection{Log Normal}
The marginal distributions for $\langle M \rangle$ and $\sigma_M$
appear in Figure \ref{fig:log-normal}. The distributions are similar
to those for $\mu$ and $\sigma$ from the Gaussian model in Section
\ref{sec:gaussian}.
\begin{figure}
\begin{center}
\plotone{log-normal}
\end{center}
\caption{\label{fig:log-normal} Marginalized distributions of the
mean mass, $\langle M \rangle$ (solid histogram), and standard
deviation of the mass, $\sigma_M$ (dashed histogram), for the
log-normal model in Section \ref{sec:log-normal}. The
distributions are similar to the distributions of $\mu$ and
$\sigma$ in the Gaussian model of Section \ref{sec:gaussian}.}
\end{figure}
\subsubsection{Histogram Models}
The median values of the histogram mass distributions that result from
the MCMC samples of the posterior distribution for the $w_i$
parameters for one-, two-, three-, four-, and five-bin histogram
models are shown in Figure \ref{fig:dists}. Table
\ref{tab:low-mass-non-parametric} gives quantiles of the marginalized
bin boundary distributions for the histogram models.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Bins & Boundary & 5\% & 15\% & 50\% & 85\% & 95\% \\
\hline \hline
1 & $w_0$ & 3.94488 & 4.55603 & 5.43333 & 6.02557 & 6.29749 \\
\hline
& $w_1$ & 8.50844 & 8.69262 & 9.11784 & 9.83477 & 10.5128 \\
\hline \hline
2 & $w_0$ & 3.3426 & 4.2047 & 5.39132 & 6.18413 & 6.47553 \\
\hline
& $w_1$ & 6.41972 & 6.72605 & 7.43421 & 8.2489 & 8.52885 \\
\hline
& $w_2$ & 8.46161 & 8.65077 & 9.12694 & 10.1113 & 11.2595 \\
\hline \hline
3 & $w_0$ & 2.18176 & 3.54345 & 5.16094 & 6.16473 & 6.44697 \\
\hline
& $w_1$ & 5.68876 & 6.14223 & 6.68829 & 7.38725 & 8.04235 \\
\hline
& $w_2$ & 6.8297 & 7.22718 & 8.1451 & 8.7512 & 9.27296 \\
\hline
& $w_3$ & 8.44307 & 8.67362 & 9.25718 & 12.1688 & 21.92 \\
\hline \hline
4 & $w_0$ & 1.32131 & 2.7934 & 4.66156 & 5.78459 & 6.17946 \\
\hline
& $w_1$ & 5.20112 & 5.77331 & 6.42501 & 6.98427 & 7.44584 \\
\hline
& $w_2$ & 6.41805 & 6.73535 & 7.43826 & 8.32958 & 8.64212 \\
\hline
& $w_3$ & 7.40302 & 7.95608 & 8.58976 & 9.33897 & 10.3992 \\
\hline
& $w_4$ & 8.56724 & 8.8059 & 10.2451 & 24.3573 & 34.2423 \\
\hline \hline
5 & $w_0$ & 0.9392 & 2.28789 & 4.33389 & 5.7012 & 6.21166 \\
\hline
& $w_1$ & 4.69778 & 5.44302 & 6.26575 & 6.76407 & 7.14427 \\
\hline
& $w_2$ & 6.1388 & 6.47155 & 7.00606 & 7.97325 & 8.38259 \\
\hline
& $w_3$ & 6.82058 & 7.28677 & 8.22514 & 8.81555 & 9.41012 \\
\hline
& $w_4$ & 8.02335 & 8.36993 & 8.94879 & 11.3206 & 17.3349 \\
\hline
& $w_5$ & 8.7112 & 9.25208 & 16.2059 & 31.897 & 37.2738 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:low-mass-non-parametric} The 5\%, 15\%, 50\%
(median), 85\%, and 95\% quantiles for the bin boundaries in the
one- through five-bin histogram models discussed in Section
\ref{sec:non-parametric-models}.}
\end{table}
As the number of bins increases, the models are better able to capture
features of the mass distribution, but we find that the one-bin
histogram is the most probable of the histogram models for the
low-mass data (see Section \ref{sec:low-mass-model-selection} for
discussion). This occurs because the extra fitting power does not
sufficiently improve the fit to compensate for the vastly larger
parameter space of the models with more bins.
\subsubsection{Model Selection for the Low-Mass Sample}
\label{sec:low-mass-model-selection}
We have performed a suite of 500 independent reversible-jump MCMCs
jumping between all the models (both parametric and non-parametric)
described in Section \ref{sec:models} using the single-model MCMC
samples to construct an efficient jump proposal for each model as
described above (see Appendix \ref{sec:reversible-jump-mcmc}). The
counts in each model are consistent across the MCMCs in the
suite; Figure \ref{fig:rj} displays the average probability for each
model across the suite, along with the 1-$\sigma$ errors on the
average inferred from the standard deviation of the model counts
across the suite. Table \ref{tab:rj} gives the numerical values of
the average probability for each model across the suite of MCMCs.
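The suite-averaging described above can be sketched as follows. The visit counts and per-model probabilities below are invented for illustration, not the values underlying Table \ref{tab:rj}:

```python
import numpy as np

rng = np.random.default_rng(1)
models = ["PL", "E", "G", "TG", "H1", "H2", "H3", "H4", "H5"]

# Hypothetical visit counts: each row is one reversible-jump chain,
# each column the number of steps that chain spent in a model.
n_chains, n_steps = 500, 10_000
p_true = np.array([0.33, 0.09, 0.29, 0.07, 0.06, 0.02, 0.10, 0.02, 0.02])
counts = rng.multinomial(n_steps, p_true, size=n_chains)

# Within a chain, the fraction of steps spent in a model estimates its
# posterior probability; the mean and standard error across the suite
# give the points and 1-sigma bars of a plot like Figure fig:rj.
probs = counts / n_steps
mean_p = probs.mean(axis=0)
err_p = probs.std(axis=0, ddof=1) / np.sqrt(n_chains)
for name, p, e in zip(models, mean_p, err_p):
    print(f"{name}: {p:.4f} +/- {e:.4f}")
```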
\begin{figure}
\begin{center}
\plotone{rj}
\end{center}
\caption{\label{fig:rj} The relative probability of the models
discussed in Section \ref{sec:models} as computed using the
reversible-jump MCMC with the efficient jump proposal algorithm
described in Appendix \ref{sec:reversible-jump-mcmc}. (See also
Table \ref{tab:rj}.) In increasing order along the $x$-axis, the
models are the power-law of Section \ref{sec:power-law} (PL), the
decaying exponential of Section \ref{sec:exponential} (E), the
single Gaussian of Section \ref{sec:gaussian} (G), the double
Gaussian of Section \ref{sec:gaussian} (TG), and the one-, two-,
three-, four-, and five-bin histogram models of Section
\ref{sec:non-parametric-models} (H1, H2, H3, H4, H5,
respectively). The average of 500 independent reversible-jump
MCMCs is plotted, along with the 1-$\sigma$ error on the average
inferred from the standard deviation of the probability from the
individual MCMCs. As discussed in the text, the power-law and
Gaussian models are the most favored.}
\end{figure}
The most favored model is the power law from Section
\ref{sec:power-law}, followed by the Gaussian model from Section
\ref{sec:gaussian}. Interestingly, the theoretical curve from
\citet{Fryer2001} (the exponential model of Section
\ref{sec:exponential}) places fourth in the ranking of evidence.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|}
\hline
Model & Relative Evidence \\
\hline \hline
Power Law (Section \ref{sec:power-law}) & 0.331488 \\
\hline
Gaussian (Section \ref{sec:gaussian}) & 0.288129 \\
\hline
Log Normal (Section \ref{sec:log-normal}) & 0.138435 \\
\hline
Exponential (Section \ref{sec:exponential}) & 0.0916218 \\
\hline
Two Gaussian (Section \ref{sec:gaussian}) & 0.0662577 \\
\hline
Histogram (1 Bin, Section \ref{sec:non-parametric-models}) &
0.0641941 \\
\hline
Histogram (2 Bin, Section \ref{sec:non-parametric-models}) &
0.015184 \\
\hline
Histogram (3 Bin, Section \ref{sec:non-parametric-models}) &
0.00332933 \\
\hline
Histogram (4 Bin, Section \ref{sec:non-parametric-models}) &
0.000999976 \\
\hline
Histogram (5 Bin, Section \ref{sec:non-parametric-models}) &
0.0003614 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:rj} Relative probabilities of the various models
from Section \ref{sec:models} implied by the low-mass data. (See also Figure \ref{fig:rj}.) These probabilities have been
computed from reversible-jump
MCMC samples using the efficient jump proposal algorithm in Appendix \ref{sec:reversible-jump-mcmc}.}
\end{table}
Though the model probabilities presented in this section have small
statistical error, they are subject to large ``systematic'' error.
The source of this error is both the particular choice of model prior
(uniform across models) and the choice of priors on the parameters
within each model used for this work. For example, the
theoretically-preferred exponential model (Section
\ref{sec:exponential}) is only a factor of $\sim 3$ away from the
power law model (Section \ref{sec:power-law}), which does not have
theoretical support. Is such support worth a factor of three in the
model prior? Alternatively, we may say we know (in advance of any mass
measurements) that black holes must exist with mass $\lesssim
10M_\odot$; then we could, for example, impose a prior on the minimum
mass in the exponential model ($M_{\textnormal{min}}$) that is uniform between $0$
and $10 M_\odot$, which would reduce the prior volume available for the
model by a factor of 4 without significantly reducing the posterior
support for the model. This has the same effect as increasing the
model prior by a factor of 4, which would move this model from fourth
to first place. Of course, we would then have to modify the prior
support for the other models to take into account the restriction that
there must be black holes with $M \lesssim 10 M_\odot$.
\citet{Linder2008} discuss these issues in the context of cosmological
model selection, concluding with a warning against over-reliance on
model selection probabilities.
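As a schematic illustration of why shrinking the prior volume rescales the evidence: for a model $\mathcal{M}$ with likelihood $\mathcal{L}(\theta) = p(d|\theta)$ and a prior uniform over a volume $V$ of parameter space,
\begin{equation*}
p(d|\mathcal{M}) = \int d\theta \, p(d|\theta)\, p(\theta) = \frac{1}{V} \int_V d\theta\, \mathcal{L}(\theta),
\end{equation*}
so reducing $V$ by a factor of four, while retaining essentially all of the region where $\mathcal{L}$ is non-negligible, multiplies the evidence (and hence the model's relative probability under equal model priors) by nearly four.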
Nevertheless, we believe that our model comparison is reasonably fair
(see the discussion of priors in Section \ref{sec:priors}). It seems
safe to conclude that ``single-peaked'' models (the power-law and
Gaussian) are preferred over ``extended'' models (the exponential or
log-normal), or those with ``structure'' (the many-bin histograms or
two-Gaussian model). Previous studies have also supported the
``single, narrow peak'' mass distribution \citep{Bailyn1998,Ozel2010}.
In this light, the poor performance of the single-bin histogram is
surprising.
\subsection{Combined Sample}
\label{sec:high-mass}
This section repeats the analysis of the models from Section
\ref{sec:models}, but including the high-mass, wind-fed systems from
Table \ref{tab:sources} (see also Figure \ref{fig:high-masses}) in the
sample. Figure \ref{fig:high-mass-dists} displays bounds on the value
of the underlying mass distribution for the various models in Section
\ref{sec:models} applied to this data set; compare to Figure
\ref{fig:dists}. The inclusion of the high-mass, wind-fed systems
tends to widen the distribution toward the high-mass end and, in
models that allow it, produce a second, high-mass peak in addition
to the one in Figure \ref{fig:dists}.
\begin{figure}
\begin{center}
\plotone{dist-high}
\end{center}
\caption{\label{fig:high-mass-dists} The median (solid line), 10\%
(lower dashed line), and 90\% (upper dashed line) values of the
black hole mass distribution, $p(M|\theta)$, at various masses
implied by the posterior $p(\theta|d)$ for the models discussed in
Sections \ref{sec:parametric-models} and
\ref{sec:non-parametric-models}. These distributions use the
combined sample of 20 observations in Table \ref{tab:sources},
including the high-mass, wind-fed systems. Note that these
``distributions of distributions'' are not necessarily normalized,
and need not be ``shaped'' like the underlying model
distributions. Compare to Figure \ref{fig:dists}, which includes
only the low-mass systems in the analysis. Including the
high-mass systems tends to widen the distribution toward the
high-mass end and, in models that allow it, produce a second,
high-mass peak in addition to the one in Figure
\ref{fig:dists}. }
\end{figure}
\subsubsection{Power Law}
Figure \ref{fig:power-law-high} presents the marginalized
distribution for the three power-law parameters $M_{\textnormal{min}}$, $M_{\textnormal{max}}$, and
$\alpha$ (Section \ref{sec:power-law}) from an analysis including the
high-mass systems. The distribution for $M_{\textnormal{max}}$ is quite broad
because the best fit power laws slope downward ($\alpha < 0$), making
this parameter less relevant. The range $-5.05 \leq \alpha \leq
-1.77$ encloses 90\% of the probability; the median value of $\alpha$
is $-3.23$. The presence of the high-mass samples in the analysis
produces a distinctive tail, eliminating the correlations discussed in
Section \ref{sec:power-law} and displayed in Figure
\ref{fig:power-law-2D} for the low-mass subset of the observations.
\begin{figure}
\begin{center}
\plotone{power-law-high}
\end{center}
\caption{\label{fig:power-law-high} Histograms of the marginalized
distribution for the three parameters $M_{\textnormal{min}}$ (top, left), $M_{\textnormal{max}}$
(top, right), and $\alpha$ (bottom) from the power-law model
including the high-mass samples in the MCMC. The distribution for
$M_{\textnormal{max}}$ is quite broad because the best fit power laws slope
downward ($\alpha < 0$), making this parameter less relevant. The
range $-5.05 \leq \alpha \leq -1.77$ encloses 90\% of the
probability; the median value of $\alpha$ is $-3.23$. The presence
of the high-mass samples in the analysis produces a distinctive
tail, eliminating the correlations discussed in Section
\ref{sec:power-law} and displayed in Figure \ref{fig:power-law-2D}
for the low-mass subset of the observations. }
\end{figure}
\subsubsection{Decaying Exponential}
Figure \ref{fig:exp-cutoff-high} displays the marginalized
distributions for the exponential parameters $M_{\textnormal{min}}$ and $M_0$
(Section \ref{sec:exponential}) from an analysis including the
high-mass systems. The distribution for the scale mass, $M_0$, has
moved to higher masses relative to Figure \ref{fig:exp-marginal} to
fit the tail of the mass distribution; the distribution for $M_{\textnormal{min}}$ is
less affected, though it has broadened somewhat toward low masses.
\begin{figure}
\begin{center}
\plotone{exp-cutoff-high}
\end{center}
\caption{\label{fig:exp-cutoff-high} The marginalized distributions
for the exponential parameters $M_{\textnormal{min}}$ (top) and $M_0$ (bottom)
defined in Section \ref{sec:exponential} from an analysis
including the high-mass systems. The distribution for the scale
mass, $M_0$, has moved to higher masses relative to Figure
\ref{fig:exp-marginal} to fit the tail of the mass distribution;
we now have $2.8292 \leq M_0 \leq 7.9298$ with 90\% confidence,
with median 4.7003. The distribution for $M_{\textnormal{min}}$ is less
affected, though it has broadened somewhat toward low masses.}
\end{figure}
\subsubsection{Gaussian}
Figure \ref{fig:gaussian-high} displays the marginalized distributions
for the Gaussian parameters (Section \ref{sec:gaussian}) when the
high-mass objects are included in the mass distribution. The mean
mass, $\mu$, and the mass standard deviation, $\sigma$, are both
increased relative to Figure \ref{fig:gaussian} to account for the
broader distribution and high-mass tail.
\begin{figure}
\begin{center}
\plotone{gaussian-high}
\end{center}
\caption{\label{fig:gaussian-high} The marginalized distributions
for the Gaussian parameters when the high-mass objects are
included in the mass distribution. The mean mass, $\mu$ (solid
histogram), and the mass standard deviation, $\sigma$ (dashed
histogram), are both increased relative to Figure
\ref{fig:gaussian} to account for the broader distribution and
high-mass tail. The peak of the underlying mass distribution
lies in the range $7.8660 \leq \mu \leq 10.9836$ with 90\%
confidence; the median value is 9.2012.}
\end{figure}
\subsubsection{Two Gaussian}
The analysis of the two-Gaussian model shows the largest change when
the high-mass samples are included. Figure
\ref{fig:two-gaussian-high} shows the marginalized distributions for
the two-Gaussian parameters (Section \ref{sec:gaussian}) when the
high-mass samples are included in the analysis. In stark contrast
to Figure \ref{fig:two-gaussian}, there are two well-defined,
separated peaks; the low-mass peak reproduces the results from the
low-mass samples, while the high-mass peak ($13.5534 \leq \mu_2 \leq
27.9481$ with 90\% confidence; median 20.3839) matches the new
high-mass samples. The peak in $\alpha$ near 0.8 is consistent with
approximately 4/5 of the total probability being concentrated in the 15
low-mass samples.
\begin{figure}
\begin{center}
\plotone{two-gaussian-high}
\end{center}
\caption{\label{fig:two-gaussian-high} The marginalized
distributions for the two-Gaussian parameters (Section
\ref{sec:gaussian}) when the high-mass samples are included in the
analysis. The means ($\mu_1$ and $\mu_2$) are represented by the
solid histograms; the standard deviations ($\sigma_1$ and
$\sigma_2$) are represented by the dashed histograms. In stark
contrast to Figure \ref{fig:two-gaussian}, there are two
well-defined, separated peaks; the low-mass peak reproduces the
results from the low-mass samples, while the high-mass peak
($13.5534 \leq \mu_2 \leq 27.9481$ with 90\% confidence; median
20.3839) matches the new high-mass samples. The peak in $\alpha$
near 0.8 is consistent with approximately 15 out of 20 samples
belonging to the low-mass peak.}
\end{figure}
\subsubsection{Log Normal}
The marginalized distributions for the log-normal parameters (Section
\ref{sec:log-normal}) when the high-mass samples are included in the
analysis are displayed in Figure \ref{fig:log-normal-high}. The
changes when the high-mass samples are included (compare to Figure
\ref{fig:log-normal}) are similar to the changes in the Gaussian
distribution: the mean mass moves to higher masses, and the
distribution broadens. Because the log-normal distribution is
inherently asymmetric, with a high-mass tail, it does not need to
widen as much as the Gaussian distribution did.
\begin{figure}
\begin{center}
\plotone{log-normal-high}
\end{center}
\caption{\label{fig:log-normal-high} The marginalized distributions
for the log-normal parameters (Section \ref{sec:log-normal};
$\langle M \rangle$ solid, $\sigma_M$ dashed) when the high-mass
samples are included in the analysis. The changes when the
high-mass samples are included (compare to Figure
\ref{fig:log-normal}) are similar to the changes in the Gaussian
distribution: the mean mass moves to higher masses, and the
distribution broadens.}
\end{figure}
The confidence limits on the parameters for the parametric models of
the underlying mass distribution are displayed in Table
\ref{tab:high-mass-parametric} (compare to Table
\ref{tab:low-mass-parametric}).
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Model & Parameter & 5\% & 15\% & 50\% & 85\% & 95\% \\
\hline \hline
Power Law (Equation \eqref{eq:power-law-dist}) & $M_{\textnormal{min}}$ &
4.87141 & 5.29031 & 5.85019 & 6.26118 & 6.45674 \\
\hline
& $M_{\textnormal{max}}$ & 19.1097 & 23.4242 & 31.5726 & 37.7519 & 39.3369 \\
\hline
& $\alpha$ & -5.04879 & -4.30368 & -3.23404 & -2.31365 & -1.77137 \\
\hline \hline
Exponential (Equation \eqref{eq:exp-def})& $M_{\textnormal{min}}$ &
4.0865 & 4.60236 & 5.32683 & 5.94097 & 6.22952 \\
\hline
& $M_0$ & 2.82924 & 3.41139 & 4.70034 & 6.52214 & 7.92979 \\
\hline \hline
Gaussian (Equation \eqref{eq:gaussian-def}) & $\mu$ &
7.86599 & 8.33118 & 9.20116 & 10.2493 & 10.9836 \\
\hline
& $\sigma$ & 2.23643 & 2.58899 & 3.33545 & 4.17886 & 4.67881 \\
\hline \hline
Two Gaussian (Equation \eqref{eq:two-gaussian-def}) & $\mu_1$ &
6.741 & 7.02724 & 7.48174 & 8.0139 & 8.46626 \\
\hline
& $\mu_2$ & 13.5534 & 16.202 & 20.3839 & 24.9259 & 27.9481 \\
\hline
& $\sigma_1$ & 0.742824 & 0.913941 & 1.31244 & 1.94862 & 2.50238 \\
\hline
& $\sigma_2$ & 0.511159 & 1.5025 & 4.39824 & 7.04612 & 8.25905 \\
\hline
& $\alpha$ & 0.575692 & 0.670978 & 0.798227 & 0.891522 & 0.932143 \\
\hline \hline
Log Normal (Equation \eqref{eq:log-normal-def}) & $\langle M \rangle$ &
8.00086 & 8.51192 & 9.6264 & 11.1851 & 12.3986 \\
\hline
& $\sigma_M$ & 2.19262 & 2.8137 & 4.16742 & 6.25101 & 8.11839 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:high-mass-parametric} Quantiles of the
marginalized distribution for each of the parameters in the models
discussed in Section \ref{sec:parametric-models} when the
high-mass samples are included in the analysis (compare to Table
\ref{tab:low-mass-parametric}). We indicate the 5\%, 15\%, 50\%
(median), 85\%, and 95\% quantiles.}
\end{table}
\subsubsection{Histogram Models}
The non-parametric (histogram; see Section
\ref{sec:non-parametric-models}) models also show evidence of a long
tail from the inclusion of the high-mass samples. Table
\ref{tab:high-mass-non-parametric} displays confidence limits on the
histogram parameters for the analysis including the high-mass
systems; compare to Table \ref{tab:low-mass-non-parametric}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Bins & Boundary & 5\% & 15\% & 50\% & 85\% & 95\% \\
\hline \hline
\hline
1 & $w_0$ & 2.22294 & 3.12695 & 4.2456 & 5.15132 & 5.58265 \\
\hline
& $w_1$ & 15.93 & 16.2535 & 17.7836 & 20.5449 & 22.5836 \\
\hline \hline
2 & $w_0$ & 3.87202 & 4.49983 & 5.41234 & 6.08334 & 6.35933 \\
\hline
& $w_1$ & 7.22163 & 8.25079 & 8.93669 & 9.71551 & 10.4287 \\
\hline
& $w_2$ & 18.4762 & 19.9798 & 24.941 & 32.5972 & 36.8615 \\
\hline \hline
3 & $w_0$ & 3.39289 & 4.24509 & 5.41694 & 6.15087 & 6.42822 \\
\hline
& $w_1$ & 6.41849 & 6.71984 & 7.47263 & 8.2942 & 8.61785 \\
\hline
& $w_2$ & 8.41449 & 8.64664 & 9.17056 & 10.4075 & 12.2718 \\
\hline
& $w_3$ & 18.5705 & 21.0481 & 27.1494 & 34.7753 & 38.0652 \\
\hline \hline
4 & $w_0$ & 2.42094 & 3.69875 & 5.2596 & 6.25449 & 6.54316 \\
\hline
& $w_1$ & 5.83725 & 6.2836 & 6.84987 & 7.8033 & 8.27706 \\
\hline
& $w_2$ & 6.94919 & 7.43628 & 8.38531 & 9.13401 & 9.91845 \\
\hline
& $w_3$ & 8.50371 & 8.75188 & 9.86694 & 17.1848 & 22.1086 \\
\hline
& $w_4$ & 18.5823 & 21.4628 & 28.367 & 35.8118 & 38.5278 \\
\hline \hline
5 & $w_0$ & 1.73691 & 3.19184 & 4.89769 & 5.9547 & 6.35522 \\
\hline
& $w_1$ & 5.46124 & 5.95881 & 6.59431 & 7.26795 & 7.91821 \\
\hline
& $w_2$ & 6.63468 & 6.9804 & 7.93239 & 8.60918 & 9.06926 \\
\hline
& $w_3$ & 7.89654 & 8.35634 & 8.91766 & 10.6568 & 13.9644 \\
\hline
& $w_4$ & 8.74064 & 9.42672 & 15.8004 & 22.7101 & 27.6399 \\
\hline
& $w_5$ & 20.0202 & 22.9065 & 29.6307 & 36.6606 & 38.8573 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:high-mass-non-parametric} The 5\%, 15\%, 50\%
(median), 85\%, and 95\% quantiles for the bin boundaries in the
one- through five-bin histogram models discussed in Section
\ref{sec:non-parametric-models} in an
analysis including the high-mass, wind-fed systems.
The tails evident in Figure \ref{fig:high-mass-dists} are apparent
here as well; compare to Table \ref{tab:low-mass-non-parametric}.}
\end{table}
\subsubsection{Model Selection for the Combined Sample}
\label{sec:high-mass-model-selection}
Repeating the model selection analysis discussed in Section
\ref{sec:low-mass-model-selection} for the sample including the
high-mass systems, we find that the model probabilities have changed
with the inclusion of the extra five systems. As before, we assume
for this analysis that the model priors are equal.
Reversible jump MCMC calculations of the model probabilities are
displayed in Figure \ref{fig:high-rj-evidence}; compare Figure
\ref{fig:rj}. The relative model probabilities are given in Table
\ref{tab:rj-high}. The exponential model is the most favored model
for the combined sample, with the two-Gaussian model the second-most
favored. The ranking of models differs significantly from the
low-mass samples. The improvement of the exponential model relative
to the low-mass analysis is encouraging for theoretical calculations
that attempt to model the entire population of X-ray binaries with
this mass model. Note also that the increased structure of the mass
distribution favors histogram models with three bins over those with
fewer bins.
\begin{figure}
\begin{center}
\plotone{rj-high}
\end{center}
\caption{\label{fig:high-rj-evidence} The relative probability of
the models discussed in Section \ref{sec:models} as computed using
the reversible-jump MCMC with the efficient jump proposal
algorithm described in Appendix \ref{sec:reversible-jump-mcmc},
applied to all 20 systems in Table \ref{tab:sources} (i.e.\
including the high-mass systems). (See also Table
\ref{tab:rj-high}.) In increasing order along the $x$-axis, the
models are the power-law of Section \ref{sec:power-law} (PL), the
decaying exponential of Section \ref{sec:exponential} (E), the
single Gaussian of Section \ref{sec:gaussian} (G), the double
Gaussian of Section \ref{sec:gaussian} (TG), and the one-, two-,
three-, four-, and five-bin histogram models of Section
\ref{sec:non-parametric-models} (H1, H2, H3, H4, H5,
respectively). The average of 500 independent reversible-jump
MCMCs is plotted, along with the 1-$\sigma$ error on the average
inferred from the standard deviation of the probability from the
individual MCMCs. Compare to Figure \ref{fig:rj}.}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|r|}
\hline
Model & Relative Evidence \\
\hline \hline
Exponential (Section \ref{sec:exponential}) & 0.346944 \\
\hline
Two Gaussian (Section \ref{sec:gaussian}) & 0.304923 \\
\hline
Power Law (Section \ref{sec:power-law}) & 0.120313 \\
\hline
Log Normal (Section \ref{sec:log-normal}) & 0.102536 \\
\hline
Histogram (3 Bin, Section \ref{sec:non-parametric-models}) &
0.0473464 \\
\hline
Histogram (4 Bin, Section \ref{sec:non-parametric-models}) &
0.0282086 \\
\hline
Histogram (2 Bin, Section \ref{sec:non-parametric-models}) &
0.0210994 \\
\hline
Histogram (5 Bin, Section \ref{sec:non-parametric-models}) &
0.0179703 \\
\hline
Gaussian (Section \ref{sec:gaussian}) & 0.00901719 \\
\hline
Histogram (1 Bin, Section \ref{sec:non-parametric-models}) &
0.00164214 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:rj-high} Relative probabilities of the various models
from Section \ref{sec:models} implied by the combined sample of systems. (See also Figure \ref{fig:high-rj-evidence}.) These probabilities have been
computed from reversible-jump MCMC samples using the efficient jump proposal algorithm in Appendix \ref{sec:reversible-jump-mcmc}.}
\end{table}
\section{The Minimum Black Hole Mass}
\label{sec:minimum-mass}
It is interesting to use our models for the underlying black hole mass
distribution in X-ray binaries to place constraints on the minimum
black hole mass implied by the present sample. \citet{Bailyn1998}
addressed this question in the context of a ``mass gap'' between the
most massive neutron stars and the least massive black holes. The
more recent study of \citet{Ozel2010} also looked for a mass gap using
a subset of the models and systems presented here. Both works found
that the minimum black hole mass is significantly above the maximum
neutron star mass \citep{Kalogera1996} of $\sim 3 M_\odot$ (though
\citet{Ozel2010} only state their evidence for a gap in terms of the
maximum-posterior parameters and not the full extent of their
distributions).
The distributions of the minimum black hole mass from the analysis of
the low-mass samples are displayed in Figure \ref{fig:min-mass}. The
minimum black hole mass is defined as the 1\% mass quantile,
$M_{1\%}$, of the black-hole mass distribution (i.e.\ the mass lying
below 99\% of the mass distribution). (A quantile-based definition is
necessary in the case of those distributions that do not have a hard
cutoff mass; even for those that do, like the power-law model, it can
be useful to define a ``soft'' cutoff in the event that the lower mass
hard cutoff becomes an irrelevant parameter as discussed in Section
\ref{sec:power-law}.) For each mass distribution parameter sample
from our MCMC, we can calculate the distribution's minimum black hole
mass; the collection of these minimum black hole masses approximates
the distribution of minimum black hole masses implied by the data in
the context of that distribution. Figure \ref{fig:min-mass} plots
histograms of the minimum black hole mass samples.
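Concretely, the per-sample computation can be sketched as follows. This is a minimal illustration, assuming a power-law mass distribution $p(M)\propto M^{\alpha}$ on $[M_{\mathrm{min}},M_{\mathrm{max}}]$ with an analytic inverse CDF; the parameter ranges below are hypothetical stand-ins, not our actual posterior samples:

```python
import random

def powerlaw_m1pc(mmin, mmax, alpha, q=0.01):
    """q-quantile of the normalized power law p(M) ~ M**alpha on [mmin, mmax].

    Inverting the CDF gives M_q = (mmin**a + q*(mmax**a - mmin**a))**(1/a)
    with a = alpha + 1 (assumed nonzero).
    """
    a = alpha + 1.0
    return (mmin ** a + q * (mmax ** a - mmin ** a)) ** (1.0 / a)

# Hypothetical draws standing in for MCMC samples of (M_min, M_max, alpha).
rng = random.Random(0)
m1pc = sorted(
    powerlaw_m1pc(rng.uniform(4.5, 6.5),    # lower cutoff
                  rng.uniform(9.0, 12.0),   # upper cutoff
                  rng.uniform(-7.0, -5.0))  # power-law index
    for _ in range(1000)
)

# The 90% confidence bound on the minimum black hole mass is the
# 10% quantile of the collected M_1% samples.
lower_bound = m1pc[len(m1pc) // 10]
```

The histogram of \texttt{m1pc} is the analogue of the distributions plotted for each model, and \texttt{lower\_bound} plays the role of the 90\% confidence bound marked by the vertical lines.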
\begin{figure}
\begin{center}
\plotone{mmin}
\end{center}
\caption{\label{fig:min-mass} The distributions for the minimum
black hole mass, $M_{1\%}$, calculated from the MCMC samples for
the models in Section \ref{sec:models} applied to the low-mass
systems. For the most favored models, the power-law and Gaussian,
the 90\% confidence limits on the minimum black hole mass are 4.3
$M_\odot$ and 2.9 $M_\odot$, respectively. In all plots, we indicate
the 90\% confidence bound (i.e.\ the 10\% quantile) on the minimum
black hole mass with a vertical line.}
\end{figure}
We find that the best-fit model for the low-mass systems (the
power-law) has $M_{1\%} > 4.3$ $M_\odot$ in 90\% of the MCMC samples
(i.e.\ at 90\% confidence). This is significantly above the maximum
theoretically-allowed neutron star mass, $\sim 3 M_\odot$
\citep[e.g.][]{Kalogera1996}. Hence we conclude that the low-mass systems
show strong evidence of a mass gap.
The distribution of minimum black hole masses for the analysis of the
combined sample (i.e.\ including the high-mass systems) is shown in
Figure \ref{fig:high-min-mass}. For the most favored model, the
exponential, we find that $M_{1\%} > 4.5$ $M_\odot$ with 90\%
confidence. We therefore conclude that there is strong evidence for a
mass gap in the combined sample as well.
\begin{figure}
\begin{center}
\plotone{mmin-high}
\end{center}
\caption{\label{fig:high-min-mass} The distributions for the minimum
black hole mass, $M_{1\%}$, calculated from the MCMC samples for
the models in Section \ref{sec:models} using the combined sample
of systems. For the two most favored models, the exponential and
two-Gaussian, the 90\% confidence limits on the minimum black hole
mass are 4.5 $M_\odot$ and 2.3 $M_\odot$, respectively. For every
model, we indicate the 90\% confidence bound on the minimum black
hole mass with a vertical line.}
\end{figure}
Table \ref{tab:mmin-quants} gives the 10\%, 50\% (median), and 90\%
quantiles for the minimum black hole mass implied by the low-mass
sample; Table \ref{tab:mmin-quants-high} gives the same, but for the
combined sample of systems.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|c|r|}
\hline
Model & 10\% & 50\% & 90\% \\
\hline \hline
Power Law (Section \ref{sec:power-law}) & 4.3 & 6.1 & 6.6 \\
\hline
Gaussian (Section \ref{sec:gaussian}) & 2.9 & 4.4 & 5.5 \\
\hline
Log Normal (Section \ref{sec:log-normal}) & 3.9 & 4.9 & 5.8 \\
\hline
Exponential (Section \ref{sec:exponential}) & 5.3 & 6.0 & 6.5 \\
\hline
Two Gaussian (Section \ref{sec:gaussian}) & 2.4 & 4.2 & 5.5 \\
\hline
Histogram (1 Bin, Section \ref{sec:non-parametric-models}) & 4.4 & 5.5 & 6.2 \\
\hline
Histogram (2 Bin, Section \ref{sec:non-parametric-models}) & 4.0 & 5.4 & 6.3 \\
\hline
Histogram (3 Bin, Section \ref{sec:non-parametric-models}) & 3.2 & 5.2 & 6.3 \\
\hline
Histogram (4 Bin, Section \ref{sec:non-parametric-models}) & 2.4 & 4.7 & 6.0 \\
\hline
Histogram (5 Bin, Section \ref{sec:non-parametric-models}) & 1.9 & 4.4 & 6.0 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:mmin-quants} The 10\%, 50\% (median), and 90\% quantiles for the minimum black hole mass (in units of $M_\odot$) implied by the low-mass sample in the context of the various models for the black hole mass distribution. The models are listed in order of preference from model selection (Section \ref{sec:low-mass-model-selection}, Figure \ref{fig:rj}, and Table \ref{tab:rj}).}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|c|r|}
\hline
Model & 10\% & 50\% & 90\% \\
\hline \hline
Exponential (Section \ref{sec:exponential}) & 4.5 & 5.4 & 6.1 \\
\hline
Two Gaussian (Section \ref{sec:gaussian}) & 2.3 & 4.3 & 5.5 \\
\hline
Power Law (Section \ref{sec:power-law}) & 5.1 & 5.9 & 6.4 \\
\hline
Histogram (3 Bin, Section \ref{sec:non-parametric-models}) & 4.0 & 5.5 & 6.3 \\
\hline
Histogram (4 Bin, Section \ref{sec:non-parametric-models}) & 3.4 & 5.3 & 6.4 \\
\hline
Histogram (2 Bin, Section \ref{sec:non-parametric-models}) & 4.4 & 5.5 & 6.2 \\
\hline
Histogram (5 Bin, Section \ref{sec:non-parametric-models}) & 2.8 & 5.0 & 6.2 \\
\hline
Gaussian (Section \ref{sec:gaussian}) & -0.64 & 1.4 & 3.4 \\
\hline
Histogram (1 Bin, Section \ref{sec:non-parametric-models}) & 2.9 & 4.4 & 5.5 \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:mmin-quants-high} The 10\%, 50\% (median), and 90\% quantiles for the distribution of minimum black hole masses (in units of $M_\odot$) implied by the combined sample in the context of the various models for the black hole mass distribution. The models are listed in order of preference from model selection (Section \ref{sec:high-mass-model-selection}, Figure \ref{fig:high-rj-evidence}, and Table \ref{tab:rj-high}). }
\end{table}
\section{Summary and Conclusion}
\label{sec:conclusion}
We have presented a Bayesian analysis of the mass distribution of
stellar-mass black holes in X-ray binary systems. We considered
separately a sample of 15 low-mass, Roche lobe-filling systems and a
sample of 20 systems containing the 15 low-mass systems and five
high-mass, wind-fed X-ray binaries. We used MCMC methods to sample
the posterior distributions of the parameters implied by the data for
five parametric models and five non-parametric (histogram) models for
the mass distribution. For both sets of samples, we used reversible
jump MCMCs (exploiting a new algorithm for efficient jump proposals in
such calculations) to perform model selection on the suite of models.
The consideration of a broad range of models and the model-selection
analysis, along with consideration of the full posterior distribution
on the minimum black hole mass, significantly expand earlier
statistical analyses of black hole mass measurements
\citep{Bailyn1998,Ozel2010}.
For the low-mass systems, we found the limits on model parameters in
Tables \ref{tab:low-mass-parametric} and
\ref{tab:low-mass-non-parametric}. The relative model probabilities
from the model selection are given in Table \ref{tab:rj}. The most
favored model for the low-mass systems is a power law. The equivalent
limits on the model parameters for the combined systems are given in
Tables \ref{tab:high-mass-parametric} and
\ref{tab:high-mass-non-parametric}. Unlike the low-mass systems, the
most favored model for the combined sample is the exponential model.
This difference indicates that the low-mass subsample is not
consistent with being drawn from the distribution of the combined
population.
We found strong evidence for a mass gap between the most massive
neutron stars and the least massive black holes. For the low-mass
systems, the most favored power-law model gives a black hole mass
distribution whose 1\% quantile lies above $4.3 M_\odot$ with 90\%
confidence. For the combined sample of systems, the most favored,
exponential model gives a black hole mass distribution whose 1\%
quantile lies above $4.5 M_\odot$ with 90\% confidence. Although the
study methodology was different, the existence of a mass gap was
pointed out first by \citet{Bailyn1998} and most recently by
\citet{Ozel2010} (who did not consider a power law model, and applied
both Gaussian and exponential models to the low-mass systems, where
the exponential is strongly disfavored compared to our power-law
model).
Theoretical expectations for the black hole mass distribution have
been examined in \citet{Fryer2001}. They considered results of
supernova explosion and fallback simulations \citep{Fryer1999} applied
to single star populations; they also included a heuristic treatment
of the possible effects of binary evolution on the black hole mass
distribution. It is interesting that we find the most-favored model
for the combined sample to be an exponential, as discussed by
\citet{Fryer2001}. On the other hand, we find the most-favored model
for the low-mass sample to be a power law, with the exponential model
strongly disfavored for this sample. In agreement with
\citet{Bailyn1998} and \citet{Ozel2010}, we too conclude that both the
low-mass and combined samples require the presence of a gap between 3
and 4--4.5 $M_\odot$.
\citet{Fryer1999} discussed two possible causes of such a gap: (1) a
step-like dependence of supernova energy on progenitor mass or (2)
selection biases. Current simulations of core collapse in massive
stars may shed light on the dependence of supernova energy on
progenitor mass. Selection biases can occur because the X-ray
binaries with very low-mass black holes are more likely to be
persistently Roche-lobe overflowing, preventing dynamical mass
measurements. \citet{Ozel2010} conclude that the presence of such
biases is not enough to account for the gap, arguing that the number
(26) of observed persistent X-ray sources not known to be neutron
stars is insufficient to populate the 2--5 $M_\odot$ region of any black
hole mass distribution that rises toward low masses. Population
synthesis models incorporating sophisticated treatment of binary
evolution and transient behavior (e.g.\ \citet{Fragos2008,Fragos2009})
could help shed light on this possibility.
\acknowledgements
WMF, NS, and VK are supported by NSF grants CAREER AST-0449558 and
AST–0908930. AC, LK, and CB are supported by NSF grant
NSF/AST-0707627. IM acknowledges support from the NSF AAPF under
award AST-0901985. Calculations for this work were performed on the
Northwestern Fugu cluster, which was partially funded by NSF MRI grant
PHY-0619274. We thank Jonathan Gair for helpful discussions.
1612.06173
\section*{}\small
\textbf{Abstract.} In this paper we generalize, to a certain class of Stein manifolds, the Bernstein-Walsh-Siciak theorem, which relates the possible holomorphic continuation of a function $f$ defined on a compact set $K$ in ${\mathbb C}^N$ to the rate of the best uniform approximation of $f$ on $K$ by polynomials. We also generalize Winiarski's theorem, which relates the growth rate of an entire function $f$ on $\mathbb{C}^N$ to its best uniform approximation by polynomials on a compact set.
\normalsize
\section{Introduction}
The famous Runge-Oka-Weil theorem can be phrased in the following way:
\begin{center}
If $K\subset {\mathbb C}^N$ is compact and polynomially convex, then\\ $\lim_{n\to\infty}d_K(f,\mathcal{P}_n)=0$ for every $f\in \mathcal{O}(K)$.
\end{center}
Here $\mathcal{P}_n$ is the set of polynomials in ${\mathbb C}^N$ of degree less than or equal to $n$ and $d_K(f,\mathcal{P}_n):=\inf\{\|f-p\|_K:\; p\in \mathcal{P}_n \}$ is the best uniform approximation of $f$ on $K$ by polynomials in $\mathcal{P}_n$. In \cite{Sic:1962} Siciak proves a precise quantitative version of the Runge-Oka-Weil theorem. More specifically, under certain regularity conditions on $K$, he proves that
\begin{align}
\limsup_{n\to\infty} \left(d_K(f,\mathcal{P}_n)\right)^{1/n}\leq L^{-1}\label{abcdefg}
\end{align}
if and only if $f$ extends as a holomorphic function to $\{z\in {\mathbb C}^N;\; V_K(z)<\log(L) \}$. Here $V_K$ denotes the Siciak-Zakharyuta extremal function defined as the supremum of all entire plurisubharmonic functions $u$ of minimal growth with $u|_K\leq 0$ (see for example \cite{Sic:1981} or \cite{Kli:1991}). Taking Siciak's theorem into consideration, we say that a function $f$ admits to rapid approximation by polynomials on $K$ if (\ref{abcdefg}) is true for some $L>1$. Goncar \cite{Gon:1972,Gon:1974,Gon:1975} and Cirka \cite{Cir:1976,Cir:1976:2} have proven theorems in a similar spirit, regarding rapid approximation by rational functions on ${\mathbb C}^N$ (defined in an analogous way).
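For orientation, consider the simplest case (a standard example, included for illustration and not taken from the works cited above): $N=1$ and $K$ the closed unit disc.

```latex
For $K=\{z\in{\mathbb C};\; |z|\leq 1\}$ one has $V_K(z)=\log^+|z|$, so
\begin{align*}
\{z\in{\mathbb C};\; V_K(z)<\log(L)\}=\{|z|<L\},
\end{align*}
and (\ref{abcdefg}) reduces to the classical Bernstein-Walsh theorem:
$\limsup_{n\to\infty}\left(d_K(f,\mathcal{P}_n)\right)^{1/n}\leq L^{-1}$
if and only if $f$ extends holomorphically to the disc of radius $L$.
```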
Following Stoll \cite{Stoll:1977} we say that a manifold $X$ of complex dimension $N$ is {\it $S$-parabolic} if it possesses a plurisubharmonic (psh) exhaustion function $\tau$ which is maximal outside a compact subset $S$ of $X$. Such an exhaustion function is called a {\it special} exhaustion function. The function $\tau$ being maximal outside $S$ is equivalent to $(i\partial \bar{\partial}\tau)^N=0$ on $X\setminus S$. We say that an entire function $f$ on $X$ is a $\tau$-polynomial if there are constants $t,C\geq 0$ such that
\begin{align*}
\log|f(z)|\leq t\tau^+(z)+C,\qquad z\in X.
\end{align*}
In their work \cite{Ayt:2015,Ayt:2011,Ayt:2014}, Aytuna and Sadullaev consider the Fréchet-space $\mathcal{O}(X)$ of holomorphic functions on $X$. They construct an example of an $S$-parabolic manifold where the $\tau$-polynomials are not dense in $\mathcal{O}(X)$. Zeriahi \cite{Zer:1991,Zer:1996} introduces analogues of classical pluripotential theory to $S$-parabolic Stein manifolds. He generalizes the theorem of Siciak and a theorem of Winiarski \cite{Win:1970} to algebraic varieties.
In this paper we consider a Stein manifold $X$ with a psh exhaustion function $\psi$ such that the $(1,1)$-form $\frac{i}{2}\partial \bar{\partial}e^\psi$ satisfies certain curvature properties (which we discuss in detail in Section 3). When the curvature properties in question are satisfied, we prove a theorem on rapid approximation by $\psi$-polynomials on compact subsets of $X$. We also prove a generalization of Winiarski's theorem to such manifolds. Our main results are stated in Section 3 and proven in Section 5. In Section 4 we look at a few examples of functions $\psi$ satisfying the aforementioned curvature properties.
\textbf{Acknowledgments.} The author would like to thank his doctoral advisor, Ragnar Sigurðsson, for reviewing this paper and giving helpful comments. This project was funded by The Doctoral Grants of The University of Iceland Research Fund and by The Icelandic Center for Research (Rannís) grant no. 152572-052.
\section{Preliminaries}
Let $X$ be a Stein manifold and $\psi:X\to {\mathbb R}$ be a plurisubharmonic exhaustion function. The notion of a $\psi$-polynomial on $X$ was introduced by Zeriahi in \cite{Zer:1991}.
\begin{definition}
We say that a function $f\in \mathcal{O}(X)$ is a $\psi$-polynomial if there exist constants $t$ and $C$ such that
\begin{align}
\log|f(z)|\leq t\psi^+(z)+C,\qquad z\in X,\label{poly}
\end{align}
where $\psi^+(z):=\max\{0,\psi(z) \}$ is the positive part of $\psi$. We denote by $\mathcal{P}^\psi$ the space of $\psi$-polynomials on $X$ and for a fixed $t>0$ we denote by $\mathcal{P}^\psi_{t}$ the set of $\psi$-polynomials on $X$ satisfying inequality (\ref{poly}) for some constant $C$. If $f$ is a $\psi$-polynomial on $X$ then the $\psi$-degree of $f$ is
\begin{align*}
\deg_\psi(f):=\inf\{t>0:\; f\in\mathcal{P}^\psi_{t} \}.
\end{align*}
\end{definition}
Note that if $X={\mathbb C}^N$ and $\psi(z)=\log\|z\|$ then the notion of a $\psi$-polynomial coincides with the classical notion of a polynomial. The polynomial spaces $\mathcal{P}^\tau_{t}$ are of particular interest when the function $\tau$ is a special exhaustion function, because then they are finite-dimensional. More specifically we have:
\begin{proposition}[\cite{Zer:1991}, Th\'eor\`eme 4.8]\label{dimensional}
If $\tau$ is a special exhaustion function on the $N$-dimensional manifold $X$, there exists a constant $M$ such that
\begin{align*}
\dim \mathcal{P}^\tau_{n}\leq{{N +nM}\choose{N }},\qquad n\in {\mathbb N}.
\end{align*}
\end{proposition}
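Returning to the classical setting for orientation (a standard computation, included here for illustration): with $X={\mathbb C}^N$ and $\psi=\log\|z\|$, the $\psi$-degree agrees with the usual degree.

```latex
If $p(z)=\sum_{|\alpha|\leq d}c_\alpha z^\alpha$ has degree $d$, then
$|z^\alpha|\leq\max(1,\|z\|)^{d}$ for $|\alpha|\leq d$, so
\begin{align*}
\log|p(z)|\leq d\,\psi^+(z)+\log\Big(\sum_{|\alpha|\leq d}|c_\alpha|\Big),
\qquad z\in{\mathbb C}^N,
\end{align*}
i.e.\ $p\in\mathcal{P}^\psi_{d}$. Conversely, inequality (\ref{poly}) with
exponent $t$ forces the Taylor coefficients of order greater than $t$ to
vanish (by the Cauchy estimates), so $\deg_\psi(p)=\deg(p)$.
```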
In \cite{Zer:1991,Zer:1996} Zeriahi considers the case when $X$ is an affine algebraic variety and proves theorems similar to the theorems of Oka-Weil and Siciak. In \cite{Ayt:2011,Ayt:2014,Ayt:2015} Aytuna and Sadullaev consider the polynomial space $\mathcal{P}^\tau$ when $\tau$ is a special exhaustion function. They construct an example where the polynomial space $\mathcal{P}^\tau$ consists only of the constant functions, and another one where $\mathcal{P}^\tau$ is not trivial, but still not dense in the Fréchet-space of holomorphic functions $\mathcal{O}(X)$. As a corollary to the main results of this paper we find sufficient conditions for $\mathcal{P}^\tau$ to be dense in $\mathcal{O}(X)$.
As an analogue to the classical Lelong class $\mathcal{L}$ on ${\mathbb C}^N$ we define the $\psi$-Lelong class on $X$ to be the set
\begin{align*}
\mathcal{L}_{\psi}:=\{u\in \operatorname{PSH}(X);\; \exists C\geq 0\;\text{ such that }\; u\leq \psi^++C\;\text{ on }\; X \}
\end{align*}
where $\psi$ is any psh exhaustion on $X$ and we define
\begin{align*}
\mathcal{L}^+_{\psi}:=\{u\in \mathcal{L}_{\psi};\; \exists C\geq 0\;\text{ such that }\; \psi\leq u^++C\;\text{ on }\; X \}.
\end{align*}
If $\psi$ is a special exhaustion function then $\mathcal{L}_{\psi}$ is an abstract Lelong class in the sense of \cite{Zer:2000}. This means that for any compact non-pluripolar set $K\subset X$ the extremal function
\begin{align*}
V_{K,\psi}(z):=\sup\{v(z);\; v\in \mathcal{L}_{\psi},\;\; v|_K\leq 0 \},\qquad z\in X
\end{align*}
is well defined, i.e.\ we have $V_{K,\psi}<\infty$. Indeed, the upper semi-continuous regularization $V^*_{K,\psi}$ is a member of $\mathcal{L}_{\psi}$. We also define the function
\begin{align*}
\Phi_{K,\psi}(z)=\sup\{|f(z)|^{1/t}; f\in \mathcal{P}^\psi_{t},\; \|f\|_K\leq 1,\; t>0 \}.
\end{align*}
In the case when $X={\mathbb C}^N$ and $\psi(z)=\log\|z\|$ the function $\Phi_K:=\Phi_{K,\psi}$ was originally introduced by Siciak \cite{Sic:1962} in order to extend classical results of approximation and interpolation to holomorphic functions of several complex variables. Later, Zakharyuta \cite{Zak:1976} defined the extremal function $V_K:=V_{K,\psi}$ with $X={\mathbb C}^N$ and $\psi=\log\|z\|$. It is well known that $\log\Phi_{K}=V_{K}$ for every compact $K\subset {\mathbb C}^N$ (see for example \cite[Theorem 5.1.7]{Kli:1991}), but on a more general manifold such an equality might not be true, even if we assume $\psi$ to be a special exhaustion function. Indeed, as mentioned before, there exists an example of a special exhaustion $\psi$ on a manifold $X$ such that $\mathcal{P}^\psi$ consists only of the constants \cite{Ayt:2015}, in which case we have $\log\Phi_{K,\psi}\equiv 0$. In general we have $\log\Phi_{K,\psi}\leq V_{K,\psi}$.
\section{Results}
In this section we present the main results of this paper. All results are proven in Section 5. First we must introduce some notation. Recall that if $\omega$ is a Kähler-form on $X$ with coefficients $\omega_{j,\overline{k}}$ with respect to a given coordinate system, then the Ricci curvature of $\omega$ is given by
\begin{align*}
\operatorname{Ricci}(\omega)=-i\partial \bar{\partial}\log({\operatorname{Det}}(\omega_{j,\overline{k}})).
\end{align*}
For any $z\in X$, $r>0$ we denote by $B(z,r,\omega)$ the geodesic ball with center $z$ and radius $r$ with respect to the metric $\omega$.
\begin{definition}\label{skil}
Let $\psi$ be a psh exhaustion function on the $N$-dimensional manifold $X$ and assume that the $(1,1)$-form $\frac{i}{2}\partial\bar{\partial}e^\psi$ is smooth and strictly positive outside a compact subset $S$ of $X$.
\begin{enumerate}[(i)]
\item Let $\theta\in \operatorname{PSH}(X)$. We say that $\theta$ is a Ricci compensator for $\psi$ if it is continuous, strictly psh in a neighborhood of $S$,
\begin{align*}
|\theta|\leq A\psi^++B\qquad \text{on }\; X
\end{align*}
for some constants $A,B>0$ and
\begin{align*}
\tfrac{i}{2}\partial\bar{\partial}\theta+\operatorname{Ricci}\left(\tfrac{i}{2}\partial\bar{\partial}e^\psi\right)\geq 0,\qquad \text{on }\; X\setminus S.
\end{align*}
If there exists a Ricci compensator for $\psi$, then we say that $\psi$ is Ricci compensable.
\medskip
\item We say that $\psi$ induces an integral estimate for holomorphic functions if for every $\delta>0$ there are constants $A,B$ such that for every $z\in X\setminus S$ and every function $F\in \mathcal{O}(\overline{B}(z,\delta,\frac{i}{2}\partial\bar{\partial}e^\psi))$
\begin{align*}
|F(z)|^2\leq e^{A\psi^+(z)+B}\int_{B(z,\delta,\tfrac{i}{2}\partial \bar{\partial}e^\psi)\setminus S}|F|^2\left(\tfrac{i}{2}\partial\bar{\partial}e^\psi\right)^N.
\end{align*}
\end{enumerate}
\end{definition}
Our first main result is the following.
\begin{theorem}\label{adal0}
Let $\psi$ be a psh exhaustion function on $X$ which is Ricci compensable and induces an integral estimate for holomorphic functions. Let $K\subset X$ be compact and $\varphi\in \mathcal{L}_\psi^+$ be a continuous function satisfying $\varphi|_K\leq 0$. Then for every $L\in ]1,\infty[$ and every function $f$ holomorphic on $K_L:=\{z\in X;\; \varphi(z)<\log(L) \}$ we have
\begin{align*}
\limsup_{t\to\infty}\left(d_K(f,\mathcal{P}^\psi_{t})\right)^{1/t}\leq L^{-1}.
\end{align*}
\end{theorem}
Notice that if the function $\psi$ is a member of $\mathcal{L}_\tau$ for some special exhaustion function $\tau$ then the polynomial spaces $\mathcal{P}^\psi_{t}$ have finite dimension (this follows directly from Proposition \ref{dimensional}). This means that Theorem \ref{adal0} is, in some sense, the strongest in this case. We should note, though, that for a given special exhaustion function $\tau$ it is not always possible to find $\psi\in \mathcal{L}_\tau$ satisfying the properties of Definition \ref{skil} (for instance if $\mathcal{P}^\tau$ consists only of constants). We then need larger polynomial spaces if we want to apply Theorem \ref{adal0}.
If $\psi\in \mathcal{L}_\tau$ for a special exhaustion function $\tau$, then the extremal function $V_{K,\psi}$ is well defined. If $V_{K,\psi}$ happens to be continuous as well, then we can take $\varphi$ to be equal to $V_{K,\psi}$ in Theorem \ref{adal0}. In this case the converse of Theorem \ref{adal0} is true as well:
\begin{proposition}\label{prop33}
If $\psi\in \mathcal{L}_\tau$ for some special exhaustion function $\tau$ and $f:K\to {\mathbb C}$ is any function s.t.\
\begin{align}
\limsup_{t\to\infty}\left(d_K(f,\mathcal{P}^\psi_{t})\right)^{1/t}\leq L^{-1}\label{laks}
\end{align}
for some $L>1$, then $f$ is the restriction to $K$ of a function holomorphic on the set $K_L=\{z\in X;\; V_{K,\psi}(z)<\log(L) \}$.
\end{proposition}
Observe that if inequality (\ref{laks}) holds for every $L>1$, then $f$ is the restriction to $K$ of an entire function, also denoted by $f$. If $f$ is of finite order $\varrho$ and of finite type $\sigma$ with respect to $\varrho$, then we have a more precise estimate of $d_K(f,\mathcal{P}^\psi_t)^{1/t}$. Namely, we prove a generalization of a theorem of Winiarski \cite{Win:1970}, whose original version treats the special case $X={\mathbb C}^N$ and $\psi=\log\|z\|$.
\begin{theorem}\label{adal00}
Assume $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. Further assume that for every $r>0$ there exist constants $A,B$ such that
\begin{align}\label{volgrowth}
\int_{\{\psi(z)<\log(L) \}}\big(\tfrac{i}{2}\partial\bar{\partial} e^{\psi}\big)^N\leq e^{AL^r+B},\qquad L>1.
\end{align}
Let $K\subset X$ be compact and $\varphi\in \mathcal{L}_\psi^+$ be a continuous function on $X$ satisfying $\varphi|_K\leq 0$. Then for any entire function $f$ on $X$ satisfying the growth estimates
\begin{align}
\label{jafna1}
\limsup_{r\to\infty}\frac{\log^+\log\|f\|_{\{\varphi\leq \log(r)\}}}{\log(r)}\leq\varrho\;\;\; \text{and}\;\;\;
\limsup_{r\to\infty}\frac{\log\|f\|_{\{\varphi\leq \log(r)\}}}{r^{\varrho}}\leq\sigma,
\end{align}
for some $\varrho>0$, $\sigma\geq0$, we have
\begin{align}
\label{jafna3}
\limsup_{t\to\infty}t^{1/\varrho}(d_K(f,\mathcal{P}^\psi_{t}))^{1/t}\leq (e\sigma\varrho)^{1/\varrho}.
\end{align}
If $\psi\in \mathcal{L}_{\tau}$ for some special exhaustion function $\tau$ and we take $\varphi:=V_{K,\psi}$ then the converse holds as well, i.e.\ if $f$ is a function on $K$ and inequality (\ref{jafna3}) holds, then $f$ extends to an entire function on $X$ and inequalities $(\ref{jafna1})$ are true with $\varphi$ replaced by $V_{K,\psi}$.
\end{theorem}
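A classical sanity check of the rate (\ref{jafna3}) (a standard example, not taken from the sources above): take $X={\mathbb C}$, $\psi=\log|z|$, $K=\{|z|\leq 1\}$ and $f(z)=e^z$, which has order $\varrho=1$ and type $\sigma=1$.

```latex
Since the $(n+1)$-st Taylor coefficient of $e^z-p$ equals $1/(n+1)!$ for
every $p\in\mathcal{P}_n$, the Cauchy estimates give the lower bound below,
while truncating the Taylor series gives the upper bound:
\begin{align*}
\frac{1}{(n+1)!}\leq d_K(e^z,\mathcal{P}_n)\leq \frac{2}{(n+1)!}.
\end{align*}
By Stirling's formula $((n+1)!)^{-1/n}\sim e/n$, so
$n\left(d_K(e^z,\mathcal{P}_n)\right)^{1/n}\to e=(e\sigma\varrho)^{1/\varrho}$,
in agreement with (\ref{jafna3}) for $\varrho=\sigma=1$.
```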
Theorems \ref{adal0} and \ref{adal00} are based on a third main result, in which we give an estimate for $d_K(f,\mathcal{P}^\psi_{t})$ for fixed $t$.
\begin{theorem}\label{adal3}
Let $\psi$ be a psh exhaustion on $X$ such that $\frac{i}{2}\partial \bar{\partial}e^\psi$ is smooth and strictly positive outside a compact set $S$ and assume $\psi$ induces an integral estimate for holomorphic functions. Let $K\subset X$ be compact and $\varphi$ be a continuous psh function on $X$ satisfying:
\begin{enumerate}[(i)]
\item There is a constant $t_0>0$ such that $t_0 i\partial \bar{\partial}\varphi+\operatorname{Ricci}\left(\frac{i}{2}\partial \bar{\partial} e^\psi\right)\geq 0$ on $X\setminus S$ and $i\partial\bar{\partial}\varphi>0$ on $S$,
\item $\varphi\in \mathcal{L}_\psi$,
\item $\varphi|_K\leq 0$.
\end{enumerate}
Let $L>1$, let $f$ be a function holomorphic on the set $\{z\in X;\; \varphi(z)<\log(L) \}$ and let $\epsilon\in ]0,(L-1)/2[$. Then there are constants $l,T_0$, depending on neither $f$ nor $L$, and a constant $M$ not depending on $f$, such that for any $t\geq T_0$ we have
\begin{align}\label{tyu}
d_K(f,\mathcal{P}^\psi_{t})\leq M\|f\|_{\{\varphi\leq \log(L-\epsilon/2)\}}\left(\frac{1+\epsilon}{L-\epsilon}\right)^{t-l}.
\end{align}
If $L$ is large enough we can write $M=M_0\|\bar{\partial}\chi\|_{L^2(X\setminus S)}$
where $M_0$ is a constant depending on neither $f$ nor $L$, and $\chi:X\to{\mathbb R}$ is any $C^\infty$ cutoff function with $\chi=1$ on $\{\varphi<\log(L-\epsilon) \}$ and $\chi=0$ on $\{\varphi>\log(L-\epsilon/2)\}$. Here $\|\bar{\partial}\chi\|_{L^2(X\setminus S)}$ denotes the $L^2$ norm of $\bar{\partial}\chi$ on $X\setminus S$ with respect to the measure $\left(\frac{i}{2}\partial \bar{\partial}e^\psi\right)^N$ and the natural norm on $\Lambda^{0,1}T^*_X$ induced by the metric $\frac{i}{2}\partial \bar{\partial}e^\psi$.
\end{theorem}
\section{Examples}
We start this section by proving two propositions. We then apply them to construct a few examples of psh exhaustion functions $\psi$ on Stein manifolds which are Ricci compensable and induce an integral estimate for holomorphic functions.
\begin{proposition}\label{prop25}
Let $\psi$ be a psh exhaustion on $X$ such that $\frac{i}{2}\partial \bar{\partial}e^\psi$ is smooth and strictly positive outside a compact set $S$. If there exist functions $\epsilon,M:X\to {\mathbb R}_+$ and constants $A,B>0$ satisfying
\begin{align*}
\epsilon(z)\geq e^{-(A\psi^+(z)+B)}\;\;\text{and}\;\; M(z)\leq e^{A\psi^+(z)+B},\qquad z\in X,
\end{align*}
such that for every $z\in X\setminus S$, there is a coordinate patch $\xi: B(\zeta,\epsilon(z))\to X$ with $\xi(\zeta)=z$, $\xi^*\left(\frac{i}{2}\partial \bar{\partial}e^\psi\right)\leq M(z)\omega_0$ and $\omega_0^N\leq M(z)\left(\xi^*\frac{i}{2}\partial \bar{\partial}e^\psi\right)^N$ on $B(\zeta,\epsilon(z))$, then $\psi$ induces an integral estimate for holomorphic functions.
\end{proposition}
\begin{proof}
Let $\delta>0$ and write $r(z)=\delta_1 \min\{\epsilon(z),(M(z))^{-1} \}$ where $\delta_1=\min\{1,\delta\}$. We may assume $\zeta=0$. Since $\xi^*(\frac{i}{2}\partial \bar{\partial}e^\psi)\leq M(z)\omega_0$ on $B(0,\epsilon(z))$ we have
\begin{align*}
\xi(B(0,r(z)))\subset B(z,\delta_1,\tfrac{i}{2}\partial \bar{\partial}e^\psi)\subset B(z,\delta,\tfrac{i}{2}\partial \bar{\partial}e^\psi).
\end{align*}
Let $F$ be a function holomorphic in a neighborhood of the ball $\overline{B}(z,\delta,\frac{i}{2}\partial \bar{\partial}e^\psi)$ and denote by $v_{2N}$ the volume of the unit ball of dimension $2N$. Then by the sub-mean-value inequality on ${\mathbb C}^N$ we have
\begin{align*}
|F(z)|^2&=|F\circ \xi(0)|^2\leq \frac{1}{v_{2N}(r(z))^{2N}}\int_{B(0,r(z))} |F\circ \xi|^2\omega_0^N\\
&\leq \frac{\max\{\epsilon^{-1}(z),M(z) \}^{2N}}{v_{2N}\delta_1^{2N}}\int_{B(0,r(z))} |F\circ \xi|^2M(z)\left(\xi^*\tfrac{i}{2}\partial \bar{\partial}e^\psi\right)^N\\
&\leq \frac{\max\{\epsilon^{-1}(z),M(z) \}^{2N}M(z)}{v_{2N}\delta_1^{2N}}\int_{B(z,\delta,\tfrac{i}{2}\partial \bar{\partial}e^\psi)}|F|^2\left(\tfrac{i}{2}\partial \bar{\partial}e^\psi\right)^N.
\end{align*}
By the assumptions on the growth rates of $\epsilon$ and $M$ we get the result.
\end{proof}
\begin{proposition}\label{prop35}
Assume $\psi$ is an exhaustion function of the form
\begin{align*}
\psi(z)=\log(|g_1(z)|^2+...+|g_m(z)|^2),\qquad z\in X,
\end{align*}
for some holomorphic functions $g_1,...,g_m\in \mathcal{O}(X)$.
\begin{enumerate}[(i)]
\item The Ricci curvature of $\frac{i}{2}\partial \bar{\partial}e^{\psi}$ is given by
\begin{align*}
\operatorname{Ricci}\big(\tfrac{i}{2}\partial \bar{\partial}e^{\psi}\big)= -i\partial \bar{\partial}\log\left( \sum_{1\leq j_1<...< j_N\leq m}|{\operatorname{Det}} \left(\operatorname{Jac}(g_{j_1},...,g_{j_N})\right)|^2\right).
\end{align*}
where the sum is taken over every subcollection $\{j_1<...<j_N\}\subset \{1,...,m\}$ of size $N$ and $\operatorname{Jac}(g_{j_1},...,g_{j_N})$ is the Jacobian of $g_{j_1},...,g_{j_N}$ with respect to any local coordinate system.
\medskip
\item If there exist constants $A,B>0$ such that for every $z\in X$ there exists a subcollection $\{g_{j_1},...,g_{j_N}\}\subset \{g_1,...,g_m\}$ and a neighborhood $V$ of $z$ such that $(g_{j_1},...,g_{j_N})$ maps $V$ bijectively to an open ball in ${\mathbb C}^N$ of radius $\epsilon(z)\geq e^{-(A\psi^+(z)+B)}$ and such that for every $g_k\in \{g_1,...,g_m\}\setminus \{g_{j_1},...,g_{j_N}\}$ we have
\begin{align}
i\partial \bar{\partial}|g_k|^2\leq e^{A\psi^++B}i\partial\bar{\partial}\sum_{s=1}^{N}|g_{j_s}|^2\qquad \text{on}\;\; V,\label{wwww}
\end{align}
then $\psi$ induces an integral estimate for holomorphic functions.
\end{enumerate}
\end{proposition}
\begin{proof}
$(i)$ Let $(z_1,...,z_N)$ be some local coordinate chart. We have
\begin{align*}
i\partial\bar{\partial}e^{\psi}=i\sum_{j,k,r}\frac{\partial g_r}{\partial z_j}\overline{\frac{\partial g_r}{\partial z_k}}dz_j\wedge d\bar{z}_k
\end{align*}
and therefore, by the Cauchy-Binet formula, we have
\begin{align*}
(i\partial \bar{\partial}e^{\psi})^N&={\operatorname{Det}}\left(\sum_{r}\frac{\partial g_r}{\partial z_j}\overline{\frac{\partial g_r}{\partial z_k}} \right)_{j,k}dV\\
&=\left(\sum_{1\leq j_1<...< j_N\leq m}|{\operatorname{Det}} \left(\operatorname{Jac}(g_{j_1},...,g_{j_N})\right)|^2\right)dV.
\end{align*}
The result now follows from the definition of the Ricci curvature.
$(ii)$ This is just a special case of Proposition \ref{prop25} where the coordinate patch $\xi$ is defined as the inverse of the map $z\to(g_{j_1}(z),...,g_{j_N}(z))$.
\end{proof}
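As a quick sanity check of part $(i)$ (a trivial special case, included for illustration):

```latex
For $X={\mathbb C}^N$, $m=N$ and $g_j(z)=z_j$ we have $e^\psi=\|z\|^2$, so
$\frac{i}{2}\partial\bar{\partial}e^\psi$ is the standard Kähler form on
${\mathbb C}^N$. The only Jacobian in the sum is
$\operatorname{Jac}(g_1,...,g_N)=\operatorname{Id}$, and the formula yields
\begin{align*}
\operatorname{Ricci}\big(\tfrac{i}{2}\partial\bar{\partial}e^\psi\big)
=-i\partial\bar{\partial}\log(1)=0,
\end{align*}
as expected for the flat metric. In particular, any $\theta$ as in
Definition \ref{skil}, e.g.\ $\theta(z)=\log(1+\|z\|^2)$, is then a Ricci
compensator, so $\psi$ is Ricci compensable.
```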
Now we apply Propositions \ref{prop25} and \ref{prop35} to construct a few examples where our main results can be applied.
\textbf{\large{Polynomials in ${\mathbb C}^N$}.} Let $X={\mathbb C}^N$ and let $g_1,...,g_m$ be polynomials on ${\mathbb C}^N$ s.t.\
\begin{align*}
\psi(z):=\log(|g_1(z)|^2+...+|g_m(z)|^2),\qquad z\in {\mathbb C}^N
\end{align*}
is an exhaustion function. Further assume that the Jacobian of the map $z\to (g_1(z),...,g_m(z))$ has full rank on ${\mathbb C}^N$. Then $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. We do not prove this here since this is a special case of our next example.
\textbf{\large{Affine algebraic manifolds}.} Let $X\subset {\mathbb C}^M$ be a non-singular algebraic manifold of dimension $N$ and let $g_1,...,g_m$ be polynomials on $X$ (i.e.\ each $g_j$ is the restriction of a polynomial on ${\mathbb C}^M$ to $X$). Assume that the function
\begin{align*}
\psi(z)=\log(|g_1(z)|^2+...+|g_m(z)|^2),\qquad z\in X
\end{align*}
is an exhaustion function, and further assume that the Jacobian of the map $X\to {\mathbb C}^m$, $z\to (g_1(z),...,g_m(z))$ has full rank on $X$. Then $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. By Rudin \cite{Rud:1968}, after a linear change of variables, we can assume that $X$ is a subset of
\begin{align*}
\{z=(z_1,...,z_N,z_{N+1},...,z_M)=(z',z'')\in {\mathbb C}^M;\; \|z''\|\leq A(1+\|z'\|)^{B} \}
\end{align*}
for some positive constants $A,B$. This implies that the function $\tau:=\log\|z'\|$ is a special exhaustion function on $X$. Since $\psi\in \mathcal{L}_{C\tau}$ for $C>0$ large enough, we see that the polynomial spaces $\mathcal{P}^\psi_{t}$ have finite dimension.
We now prove that $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. Our method is based on Demailly's calculations from the proof of \cite[Proposition 10.1]{Dem:1985}. Indeed, we generalize this result by calculating the Ricci curvature of $\frac{i}{2}\partial \bar{\partial}e^\psi$.
Let $P_1,...,P_r$ be generators of the ideal $I(X)$ of polynomials in ${\mathbb C}^M$ vanishing on $X$ and let $s=M-N$ be the codimension of $X$. For each $K=\{k_1<...<k_s\}\subset \{1,...,r\}$ and each $L=\{l_1<...<l_N \}\subset \{1,...,m \}$ denote by $J_{K,L}$ the determinant of the Jacobian of the functions $g_{l_1},...,g_{l_N},P_{k_1},...,P_{k_s}$ on ${\mathbb C}^M$. Further write
\begin{align*}
U_K:=X\cap \{z\in {\mathbb C}^M:\; dP_{k_1}\wedge...\wedge dP_{k_s}(z)\not=0 \}.
\end{align*}
The sets $U_K$ form an open cover of $X$ since $X$ is non-singular. Denote by $(z_1,...,z_M)$ the standard coordinates on ${\mathbb C}^M$, write $T_0:=\{N+1,N+2,...,M \}$ and let $w\in U_K$ be fixed. Without loss of generality we can assume that $X$ can be parameterized in the variables $(z_1,...,z_N)$ in a neighborhood of $w$. That means that in a neighborhood of $w$ in ${\mathbb C}^M$ we have
\begin{align*}
|D_{K,T_{0}}|^2:=\left|{\operatorname{Det}}\left(\tfrac{\partial P_{k}}{\partial z_t } \right)_{k\in K, t\in T_0 } \right|^2\not=0
\end{align*}
and therefore
\begin{align}
&dg_{l_1}\wedge d\bar{g}_{l_1}\wedge...\wedge dg_{l_N}\wedge d\bar{g}_{l_N}\wedge dP_{k_1}\wedge d\overline{P}_{k_1}\wedge...\wedge dP_{k_s}\wedge d\overline{P}_{k_s}\nonumber\\
=&|J_{K,L}|^2dz_1\wedge d\bar{z}_1\wedge...\wedge dz_{M}\wedge d\bar{z}_M\label{xxxx}\\
=&\frac{|J_{K,L}|^2}{|D_{K,T_0}|^2}dz_1\wedge d\bar{z}_1\wedge...\wedge dz_N\wedge d\bar{z}_N\wedge dP_{k_1}\wedge d\overline{P}_{k_1}\wedge...\wedge dP_{k_s}\wedge d\overline{P}_{k_s}.\nonumber
\end{align}
Since the gradients $\nabla P_k$ are orthogonal to the tangent space of $X$ we see that when we restrict the forms from equation $(\ref{xxxx})$ to the submanifold $X$ we have
\begin{align*}
dg_{l_1}\wedge d\bar{g}_{l_1}\wedge...\wedge dg_{l_N}\wedge d\bar{g}_{l_N}=\frac{|J_{K,L}|^2}{|D_{K,T_0}|^2}dz_1\wedge d\bar{z}_1\wedge...\wedge dz_N\wedge d\bar{z}_N.
\end{align*}
Now by applying Proposition \ref{prop35} $(i)$, and by noticing that the function $\log|D_{K,T_0}|^2$ is pluriharmonic in a neighborhood of $w$, we see that
\begin{align}
\operatorname{Ricci}\left(\tfrac{i}{2}\partial\bar{\partial}e^{\psi}\right)=-i\partial\bar{\partial}\log \sum_{|L|=N}|J_{K,L}|^2,\qquad \text{on}\;\; U_K.\label{ricci1}
\end{align}
Now let $K_0$ be fixed. For any $K\not= K_0$ the function
\begin{align*}
a_{K,K_0}:=\log \sum_{|L|=N}|J_{K,L}|^2-\log \sum_{|L|=N}|J_{K_0,L}|^2
\end{align*}
is pluriharmonic on $U_K\cap U_{K_0}$ since it is the difference of two local potentials of the Ricci curvature. Moreover, this function is locally bounded from above on $U_{K_0}$ and since $U_{K_0}\setminus U_K$ is an analytic subset of $U_{K_0}$ the function $a_{K,K_0}$ is psh on $U_{K_0}$. This is true for all $K$ so the function
\begin{align}
\log \sum_{|K|=s}e^{a_{K,K_0}}=\log \sum_{|K|=s,|L|=N}|J_{K,L}|^2-\log \sum_{|L|=N}|J_{K_0,L}|^2\label{ricci2}
\end{align}
is plurisubharmonic on $U_{K_0}$. Now define the function
\begin{align*}
\theta=\log \sum_{|K|=s,|L|=N} |J_{K,L}|^2,\qquad \text{on}\;\; X.
\end{align*}
By equations (\ref{ricci1}) and (\ref{ricci2}) we see that $i\partial\bar{\partial}\theta+\operatorname{Ricci}(\frac{i}{2}\partial\bar{\partial}e^{\psi})\geq 0$ on $X$. Moreover, since the Jacobian of $g_1,...,g_m$ has full rank on $X$, the functions $|J_{K,L}|^2$ never vanish at the same time on $X$, i.e.\ we have $\theta>-\infty$ on $X$. By a simple application of Hilbert's Nullstellensatz we can see that there exist constants $A$ and $B$ such that $|\theta|\leq A\psi^++B$ on $X$ and therefore $\theta$ is a Ricci compensator for $\psi$.
By applying Hilbert's Nullstellensatz again we can find constants $A_1,B_1$ such that for each $z\in X$ we can find $K=\{k_1,...,k_s \}$ and $L=\{l_1,...,l_N \}$ such that
\begin{align*}
|J_{K,L}(z)|^2\geq e^{-(A_1\psi^+(z)+B_1)}.
\end{align*}
Since the derivatives of $|J_{K,L}|^2$ have polynomial growth it is simple to show that $(g_{l_1},...,g_{l_N},P_{k_1},...,P_{k_s})$ maps a neighborhood of $z$ in ${\mathbb C}^M$ bijectively to an open ball in ${\mathbb C}^M$ of radius $\epsilon(z)\geq e^{-(A_2\psi^+(z)+B_2)}$ where $A_2,B_2$ are constants independent of $z$. Since the functions $P_{k_1},...,P_{k_s}$ vanish on $X$ we see that $(g_{l_1},...,g_{l_N})$ maps a neighborhood of $z$ in $X$ to an open ball in ${\mathbb C}^N$ of radius $\epsilon(z)$. Since the functions $g_{j}$ are polynomials it is easy to see that inequality $(\ref{wwww})$ is satisfied for some $A,B$.
\textbf{\large{The complex torus}.} Let $X={\mathbb C}^N/{\mathbb Z}^N$ be the complex torus and
\begin{align*}
\psi(z)=\|\operatorname{Im}(z)\|=\|y\|,\qquad z=x+iy\in X.
\end{align*}
Then $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. The function $\psi$ is itself a special exhaustion function so the polynomial spaces $\mathcal{P}^\psi_{t}$ have finite dimension. Indeed the functions
\begin{align}
\xi_a(z):=e^{2\pi i\langle z,a\rangle},\qquad a\in {\mathbb Z}^N,\;\; \|a\|\leq \tfrac{t}{2\pi},\;\; z\in X,\label{fourier}
\end{align}
form a basis for $\mathcal{P}^\psi_{t}$ for every $t$. In this case we can apply our main theorems to prove classical results from Fourier analysis.
We now prove these statements. It is simple to show that $(i\partial \bar{\partial}\|y\|)^N=0$ if $y\not=0$ so $\psi$ is a special exhaustion function. Now suppose $p\in \mathcal{P}_t^\psi$ for some $t$. Since $p$ is periodic it is of the form $p(z)=\sum_{a\in {\mathbb Z}^N} c_a e^{2\pi i \langle z,a\rangle}$ for some constants $c_a$. By the Paley-Wiener theorem we see that $p$ is the Fourier transform of a distribution with support in $B(0,t)\subset {\mathbb R}^N$. Therefore we have $c_a=0$ if $\|a\|>\frac{t}{2\pi}$ and the functions from (\ref{fourier}) form a basis for $\mathcal{P}_t^\psi$.
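As an informal numerical sanity check (our own sketch, assuming that membership in $\mathcal{P}^\psi_t$ amounts to a growth bound of the form $|p|\leq Ce^{t\psi}$), the claim that $\xi_a\in\mathcal{P}^\psi_t$ for $2\pi\|a\|\leq t$ reduces to the pointwise bound $|\xi_a(z)|=e^{-2\pi\langle y,a\rangle}\leq e^{t\|y\|}$, which can be tested on random samples:

```python
import cmath
import math
import random

def xi(z, a):
    # xi_a(z) = exp(2*pi*i*<z, a>) for z in C^N and a in Z^N
    inner = sum(zj * aj for zj, aj in zip(z, a))
    return cmath.exp(2j * math.pi * inner)

random.seed(0)
N, t = 3, 20.0
checked = 0
for _ in range(1000):
    a = [random.randint(-2, 2) for _ in range(N)]
    if 2 * math.pi * math.sqrt(sum(aj * aj for aj in a)) > t:
        continue  # only lattice points with 2*pi*||a|| <= t qualify
    y = [random.uniform(-3.0, 3.0) for _ in range(N)]
    z = [complex(random.uniform(0.0, 1.0), yj) for yj in y]
    psi = math.sqrt(sum(yj * yj for yj in y))  # psi(z) = ||Im z||
    assert abs(xi(z, a)) <= math.exp(t * psi) + 1e-9
    checked += 1
print(f"verified |xi_a| <= exp(t*psi) at {checked} sample points")
```

The bound is exact, so the tolerance only guards against floating-point rounding; the choice $t=20$ and the sampling box are arbitrary.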
If $\|y\|\geq 1$, then the largest eigenvalue of the metric
\begin{align*}
\frac{i}{2}\partial \bar{\partial}e^{\psi}=\frac{ie^\psi}{8}\sum_{1\leq j,k\leq N}\left(\frac{\delta_{j,k}}{\|y\|}+\frac{y_jy_k(\|y\|-1)}{\|y\|^3} \right) dz_j\wedge d\bar{z}_k
\end{align*}
is $\lambda_1=e^\psi/4$ and corresponds to the eigenvector $y$. Therefore we can apply Proposition \ref{prop25} with $\epsilon(z)= \frac{1}{2}$ (we can map the torus $X$ to a strip in ${\mathbb C}^N$ of width one centered at $z$) and $M(z)=\frac{1}{4}e^{\psi(z)+\frac{1}{2}}$ so $\psi$ induces an integral estimate for holomorphic functions. The metric $\frac{i}{2}\partial \bar{\partial}e^{\psi}$ has one more eigenvalue $\lambda_2=\frac{e^\psi}{4\|y\|}$ which corresponds to the $(N-1)$-dimensional eigenspace of vectors perpendicular to $y$. Therefore we have
\begin{align*}
\operatorname{Ricci}\left(\tfrac{i}{2}\partial\bar{\partial}e^{\psi} \right)=-i\partial \bar{\partial}\log (\lambda_1\lambda_2^{N-1})=-Ni\partial \bar{\partial}\psi+(N-1)i\partial \bar{\partial}\log \psi.
\end{align*}
As a Ricci compensator we can take the function
\begin{align*}
\theta=v+\begin{cases}
N\psi-(N-1)\log \psi,\qquad \text{if}\; \psi\geq 2\\
2N-(N-1)\log 2,\qquad \text{if}\; \psi<2
\end{cases}
\end{align*}
where $v\in \mathcal{L}_{\psi}$ is any function which is strictly psh on $\{\psi<2\}$. It is a simple exercise to show that $\theta$ is indeed psh.
\textbf{The complement of a graph of a holomorphic function.} Let $f:{\mathbb C}^{N-1}\to {\mathbb C}$ be a holomorphic function in $(N-1)$ variables and define
\begin{align*}
F(z):=z_N-f(z'),\qquad z=(z',z_N)\in {\mathbb C}^N.
\end{align*}
Let $X={\mathbb C}^N\setminus \{F=0 \}$ and
\begin{align*}
\psi(z):=\log\left(\|z'\|^2+|F(z)|^2+|F^{-1}(z)|^2\right),\qquad z\in X.
\end{align*}
The function $\psi$ is Ricci compensable and induces an integral estimate for holomorphic functions. In \cite[Theorem 4.1]{Ayt:2014} we see that $\tau:=\log(\|z'\|+|F(z)-1|^2)-\log|F(z)|$ is a special exhaustion function. It is easy to see that $\psi\in\mathcal{L}_{C\tau}$ for $C>0$ large enough and therefore the polynomial spaces $\mathcal{P}^\psi_{t}$ have finite dimension.
To prove these statements we first observe that applying Proposition \ref{prop35} $(i)$ gives
\begin{align*}
\operatorname{Ricci}\left(\tfrac{i}{2}\partial\bar{\partial}e^{\psi} \right)=-i\partial\bar{\partial}\log(1+|F|^{-2})
\end{align*}
so we can take $\theta=\log(1+|F|^{-2})$ as a Ricci compensator. The map $z\to (z',F^{-1}(z))$ is a bijection from $X$ to ${\mathbb C}^{N-1}\times{\mathbb C}^*$ so we can take $\{z',F^{-1} \}$ as the subcollection mentioned in Proposition \ref{prop35} $(ii)$ and $\epsilon(z)=|F(z)|^{-1}$. We just have to check that (\ref{wwww}) from Proposition \ref{prop35} is satisfied. Indeed we have
\begin{align*}
i\partial \bar{\partial}|F|^2= |F|^{4}i\partial \bar{\partial}|F|^{-2}\qquad \text{on}\;\; X
\end{align*}
so $\psi$ induces an integral estimate for holomorphic functions.
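For concreteness (an instantiation of our own choosing), take $N=2$ and $f(z_1)=e^{z_1}$; then

```latex
F(z)=z_2-e^{z_1},\qquad X={\mathbb C}^2\setminus\{z_2=e^{z_1}\},\qquad
\psi(z)=\log\Bigl(|z_1|^2+|z_2-e^{z_1}|^2+\frac{1}{|z_2-e^{z_1}|^{2}}\Bigr).
```

The last term forces $\psi\to\infty$ as $z$ approaches the removed graph, so the sublevel sets of $\psi$ stay away from $\{F=0\}$.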
\section{Proofs}
In this section we prove all results from Section 3. Recall that $X$ is always a Stein manifold and $\psi$ is a psh exhaustion function on $X$. We start with auxiliary propositions.
\begin{proposition}\label{key}
Assume that $\frac{i}{2}\partial \bar{\partial}e^\psi$ is smooth and strictly positive outside a compact subset $S$ of $X$. Then there exists a Kähler form $\omega$ on $X$ such that $\omega=\frac{i}{2}\partial \bar{\partial}e^\psi$ outside a compact subset of $X$. Moreover, the metric $\omega$ is complete.
\end{proposition}
\begin{proof}
By adding a constant to $\psi$ we can assume that $\psi|_S\leq -1$. We define
\[\omega:=\tfrac{i}{2}\partial \bar{\partial}\left(e^{\Gamma\circ \psi}+\epsilon \chi u\right) \]
where $\Gamma:{\mathbb R}\to {\mathbb R}$ is a smooth, increasing and convex function satisfying $\Gamma(x)=-1/2$ for $x\leq -1$ and $\Gamma(x)=x$ for $x>0$, $u$ is any strictly psh function defined in a neighborhood of $\{\psi\leq 0 \}$, $\chi:X\to [0,1]$ is a smooth function with $\chi=1$ on $\{\psi\leq -1/2\}$ and $\chi=0$ on $\{\psi\geq -1/4 \}$ and $\epsilon$ is a small constant. We clearly have $\omega=\frac{i}{2}\partial \bar{\partial}e^\psi$ on $\{\psi>0\}$ and $\omega$ is strictly positive on $\{\psi\leq -1/2 \}\cup\{\psi\geq -1/4 \}$. The form $\frac{i}{2}\partial\bar{\partial}e^{\Gamma\circ \psi}$ is strictly positive on $\{-1/2\leq \psi\leq -1/4 \}$ so by choosing $\epsilon$ small enough we can make $\omega$ strictly positive on $X$.
On $X\setminus\{\psi\leq 0\}$ we have $\omega=\tfrac{i}{2}e^{\psi}\left( \partial \psi\wedge \bar{\partial}\psi+\partial\bar{\partial}\psi\right)\geq \tfrac{i}{2}\partial \psi\wedge \bar{\partial}\psi$ and therefore $|d\psi|_{\omega}=|\partial\psi+\bar{\partial}\psi|_{\omega}\leq 2|\partial\psi|_{\omega}\leq 2|\partial \psi|_{\partial \psi\wedge \bar{\partial}\psi}= 2.$
By \cite[Lemma VIII 2.4]{Dem:2012} the metric is complete.
\end{proof}
For the rest of the section $\omega$ is a Kähler form as described in Proposition \ref{key}. By adding a constant to $\psi$ we can always assume that $S=\{\psi\leq 0\}$ and $\omega=\frac{i}{2}\partial \bar{\partial}e^{\psi}$ on $X\setminus S$. If $v$ is a tangent vector in $X$ we denote by $\omega(v)$ the length of $v$ with respect to the metric $\omega$. Whenever we work in local coordinates $(z_1,...,z_N)$ we write
\begin{align*}
\nabla g=\left(\frac{\partial g}{\partial z_1},...,\frac{\partial g}{\partial z_N}\right)\qquad \text{and}\qquad \langle v,w\rangle=\sum_{j=1}^{N}v_jw_j
\end{align*}
for every function $g\in C^1$ and tangent vectors $v,w$.
\begin{lemma}\label{l52}
If $B(z,r,\omega)\cap S=\emptyset$ then $|e^{\psi(\zeta)/2}-e^{\psi(z)/2}|\leq \frac{r}{2}$ for every $\zeta\in B(z,r,\omega)$.
\end{lemma}
\begin{proof}
For every tangent vector $v$ we have
\begin{align}
\omega(v)=\frac{i}{2}e^{\psi}(\partial\bar{\partial}\psi+\partial\psi\wedge\bar{\partial}\psi)(v)\geq e^{\psi}|\langle \nabla\psi,v\rangle|^2\label{a}
\end{align}
Let $\gamma:[0,1]\to X$ be a geodesic from $z$ to $\zeta$ of length $r$. By the fundamental theorem of calculus, and by (\ref{a}) we have
\begin{align*}
e^{\psi(\zeta)/2}-e^{\psi(z)/2}=\int_{0}^{1}\frac{\partial}{\partial t}&e^{(\psi\circ\gamma)/2}dt\\
&\leq\frac{1}{2}\int_{0}^{1}e^{(\psi\circ\gamma)/2}|\langle \nabla \psi,\gamma'\rangle|dt\leq \frac{1}{2}\int_{\gamma} \omega^{\frac{1}{2}}=\frac{r}{2}.
\end{align*}
\end{proof}
The following theorem is a special case of a famous result by Skoda \cite{Skoda:1977}.
\begin{theorem}
\label{hormander}
Let $\varphi$ be a continuous psh function on $X$ satisfying $\frac{i}{2}\partial\bar{\partial}\varphi+\operatorname{Ricci}(\omega)\geq 0$ on $X$. Then, for any $(0,q)$-form $f$ with $C^\infty$ (resp. $L^2_{\operatorname{loc}}$) coefficients satisfying $\bar{\partial}f=0$ and $\int_{X}|f|_\omega^2e^{-2\varphi}\omega^N<\infty$ there is a $(0,q-1)$-form $u$ with $C^\infty$ (resp. $L^2_{\operatorname{loc}}$) coefficients such that $\bar{\partial}u=f$ and
\begin{align*}
\int_{X}|u|_{\omega}^2(e^{\psi}+1)^{-2}e^{-2\varphi}\omega^N\leq \frac{1}{2N}\int_X s|f|_{\omega}^2e^{-2\varphi}\omega^N,
\end{align*}
where $s$ is a non-negative function on $X$ which equals $1$ on $X\setminus S$.
\end{theorem}
\begin{proof}
First assume that $\varphi$ is $C^{\infty}$. Consider the line bundle $E:=X\times {\mathbb C}$ over $X$ with the trivial projection. On the fibers of $E$ we define the Hermitian product
\begin{align*}
\langle \zeta_1,\zeta_2\rangle_z :=(1+e^{\psi(z)})^{-2}\zeta_1\bar{\zeta}_2
\end{align*}
and denote by $|\cdot|^2_E=(1+e^{\psi(z)})^{-2}|\cdot|^2$ the corresponding norm. Denote by $i\Theta(E)=2i\partial \bar{\partial}\log(1+e^{\psi})$ the Chern curvature tensor on $E$ with respect to this metric (see for example comment (12.6) Chapter V in \cite{Dem:2012}). On $X\setminus S$ we have
\begin{align*}
i(1+e^\psi)^2\Theta(E)=4\omega+2e^{2\psi}i\partial\bar{\partial}\psi\geq4\omega
\end{align*}
so, by assumption on $\varphi$, we have
\begin{align}\label{eigingildi}
i\Theta(E)+\operatorname{Ricci}(\omega)+i\partial\bar{\partial}\varphi\geq\frac{4\omega}{(1+e^\psi)^2}
\end{align}
on $X\setminus S$. Therefore the sum of the eigenvalues of the $(1,1)$-form on the left hand side of (\ref{eigingildi}) with respect to the metric $\omega$ is larger than or equal to $4N(1+e^\psi)^{-2}$.
Now, if we consider $f$ as an $E$-valued $(0,q)$-form, by \cite[Theorem VIII 6.5]{Dem:2012} we can find a $(0,q-1)$-form $u$ such that $\bar{\partial}u=f$ and
\begin{align*}
\int_{X}|u|_E^2e^{-2\varphi}\omega^N\leq \frac{1}{4N}\int_X s(1+e^\psi)^2|f|_E^2e^{-2\varphi}\omega^N,
\end{align*}
where $s$ is a positive function equal to $1$ on $X\setminus S$. If we replace $|\cdot|_E^2$ by $(1+e^{\psi(z)})^{-2}|\cdot|^2$ the result follows.
If $\varphi$ is not $C^{\infty}$ we get the result by finding a decreasing sequence $(\varphi_n)_{n\in {\mathbb N}}$ of $C^{\infty}$ psh functions such that $\varphi_n\searrow \varphi$ in the $L^1_{\text{loc}}$ topology and by taking the limit. Since $X$ is Stein, such a sequence exists.
\end{proof}
\begin{proof}[Proof of Theorem \ref{adal3}.]
By assumption $(i)$ and compactness of $S$ we can find a constant $T_0$ such that $T_0\frac{i}{2}\partial\bar{\partial}\varphi+\operatorname{Ricci}(\omega)\geq 0$. Let $\chi:X\to {\mathbb R} $ be a $C^\infty$ cutoff function with $\chi=1$ on $\{\varphi<\log(L-\epsilon) \}$ and $\chi=0$ on $\{ \varphi>\log(L-\epsilon/2)\}$. For any $t>T_0$ we can apply Theorem \ref{hormander} to find a function $u_t$ solving the $\overline{\partial}$-equation $\bar{\partial}u_t=\bar{\partial}(f\chi)=f\bar{\partial}\chi$ on $X$ and satisfying
\begin{align}
\int_{X}|u_t|^2(e^{\psi}+1)^{-2}e^{-2t\varphi}\omega^N\leq\frac{1}{2N}\int_Xs|f\bar{\partial}\chi|_{\omega}^2e^{-2t\varphi}\omega^N.\label{j}
\end{align}
Now we define the function $p_t:=\chi f-u_t$. It is clear that $p_t$ is an entire function.
Let $z\in X$ be such that $B(z,1,\omega)$ does not intersect $S$ or the support of $\chi$. Then $u_t=p_t$ on $B(z,1,\omega)$.
Since $\psi$ induces an integral estimate for holomorphic functions (Def \ref{skil} (ii)) we can find constants $A$ and $B$ (not depending on $z$ or $t$) such that
\begin{align*}
|p_t(z)|^2&\leq e^{A\psi^+(z)+B}\int_{B(z,1,\omega)}|u_t|^2\omega^N\\
&\leq e^{A\psi^+(z)+B}\sup_{\zeta\in B(z,1,\omega)}\left(e^{2t\varphi(\zeta)}(e^{\psi(\zeta)}+1)^2\right)\int_{X}|u_t|^2(e^{\psi}+1)^{-2}e^{-2t\varphi}\omega^N.
\end{align*}
Since $\varphi\in \mathcal{L}_\psi$ we can now use inequality (\ref{j}) and Lemma \ref{l52} to show that $p_t\in \mathcal{P}^\psi_{l+t}$ where $l=\frac{A}{2}+1$.
Let $r>0$ be small enough such that $B(z,r,\omega)\subset\subset \{\varphi<\log(1+\epsilon)\}\subset \{\chi=1\}$ for all $z\in K$. Then $u_t\in\mathcal{O}(\overline{B}(z,r,\omega))$ for $z\in K$. Since $\psi$ induces an integral estimate for holomorphic functions, and by the compactness of the sets $S$ and $K$ we can find a positive constant $C$ such that for every $z\in K$ we have
\begin{align}
|f(z)-p_t(z)|^2&=|u_t(z)|^2\leq C\left(\sup_{\zeta\in B(z,r,\omega)}1+e^{\psi(\zeta)}\right)^{-2}\int_{B(z,r,\omega)}|u_t|^2\omega^N\nonumber\\
&\leq C\left(\sup_{\zeta\in B(z,r,\omega)}e^{2t\varphi(\zeta)}\right)\int_{X}|u_t|^2(e^{\psi}+1)^{-2}e^{-2t\varphi}\omega^N.\label{qqqqq}
\end{align}
By the definition of $r$ we can estimate $\sup_{\zeta\in B(z,r,\omega)}e^{2t\varphi(\zeta)}$ with $(1+\epsilon)^{2t}$ and recall that we have $\log(L-\epsilon/2)>\varphi>\log(L-\epsilon)$ on the support of $\bar{\partial}\chi$. By inequalities (\ref{j}) and (\ref{qqqqq}) we therefore have
\begin{align*}
|f(z)-p_t(z)|^2&\leq C(1+\epsilon)^{2t}\int_{X}s|f\bar{\partial}\chi|_{\omega}^2e^{-2t\varphi}\omega^N\\
&\leq C\|s\|_X\left(\frac{1+\epsilon}{L-\epsilon}\right)^{2t}\|f\|^2_{\{\varphi<\log(L-\epsilon/2) \}}\|\bar{\partial}\chi\|^2_{L^2(X)},\qquad z\in K
\end{align*}
where $\|s\|_X$ is the sup-norm of $s$ on $X$ and $\|\bar{\partial}\chi\|^2_{L^2(X)}$ is the $L^2$ norm of $\bar{\partial}\chi$ with respect to the metric $\omega$ and measure $\omega^N$. We have already seen that $p_t\in \mathcal{P}^\psi_{l+t}$ and by replacing $t$ with $t-l$ we have the result with $M=C\|s\|_X\|\bar{\partial}\chi\|_{L^2(X)}$. If $L$ is large enough the support of $\bar{\partial}\chi$ does not intersect $S$ and $\|\bar{\partial}\chi\|_{L^2(X)}=\|\bar{\partial}\chi\|_{L^2(X\setminus S)}$.
\end{proof}
\begin{lemma}\label{vvvv}
Let $L,\epsilon>0$, $\varphi\in \mathcal{L}_\psi^+$ be continuous and $\theta$ be a continuous function satisfying the growth condition
\begin{align*}
|\theta(z)|\leq A\psi^+(z)+B,\qquad z\in X
\end{align*}
for some constants $A,B$. Write $\tilde{\varphi}_t:=(1-t^{-1})\varphi+t^{-1}\theta$. Then there exists $T$ such that for $t>T$ we have
\begin{align}
\{z\in X;\; \tilde{\varphi}_t(z)<L-\epsilon \}\subset \{z\in X;\;\varphi(z)<L \},\label{p}
\end{align}
and the function $\tilde{\varphi}_{t}$ is an exhaustion function.
\end{lemma}
\begin{proof}
It is trivial to check that
\begin{align*}
\{z\in X;\; \theta(z)-\varphi(z)\geq 0 \}\cap \{z\in X;\; \tilde{\varphi}_t(z)<L-\epsilon \}\subset \{z\in X;\;\varphi(z)<L \}
\end{align*}
for every $t>0$. Therefore we only need to show that
\begin{align*}
\{z\in X;\; \theta(z)-\varphi(z)<0 \}\cap \{z\in X;\; \tilde{\varphi}_t(z)<L-\epsilon \}\subset \{z\in X;\;\varphi(z)<L \}
\end{align*}
for large enough $t$.
Since $\varphi\in \mathcal{L}_\psi^+$, the growth condition on $\theta$ gives positive constants $C_1$ and $C_2$ such that
\begin{align*}
|\theta(z)|\leq C_1\varphi^+(z)+C_2,\qquad z\in X.
\end{align*}
If $t>2$ and $z$ is such that $\varphi(z)>1$, then we have
\begin{align}
\tilde{\varphi}_t(z)=(1-t^{-1})\varphi(z)+&t^{-1}\theta(z)>\frac{1}{2}\left(\varphi(z)-\frac{2}{t}|\theta(z)|\right)\nonumber\\
&\geq \frac{\varphi(z)}{2}\left(1-\frac{2C_1}{t}\right)-\frac{C_2}{t}.\label{v}
\end{align}
Since $\varphi$ is an exhaustion, inequality (\ref{v}) implies that $\tilde{\varphi}_t$ is also an exhaustion for $t\geq T_0:=\max\{2C_1+1,2\}$. In particular the set $\{\tilde{\varphi}_{T_0}\leq L-\epsilon \}$ is compact so the continuous function $\theta-\varphi$ has a lower bound $-M<0$ on it. We define $T:=\max\{T_0,\frac{M}{\epsilon} \}$. Now let $t>T$ and $z\in \{\theta-\varphi<0 \}\cap \{\tilde{\varphi}_t<L-\epsilon\}$. Since $\theta(z)-\varphi(z)<0$ and $t>T_0$ we have $\tilde{\varphi}_{T_0}(z)<\tilde{\varphi}_{t}(z)\leq L-\epsilon$ so $\theta(z)-\varphi(z)\geq -M$ and
\begin{align*}
\varphi(z)=\tilde{\varphi}_t(z)-t^{-1}(\theta(z)-\varphi(z))<L-\epsilon-\frac{\epsilon}{M}(-M)=L.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{adal0}]
Let $\epsilon\in]0,(L-1)/2[$ and let $\theta$ be a Ricci compensator for $\psi$. By adding a constant to $\theta$ we can assume $\theta<0$ on $K$. Now apply Lemma \ref{vvvv} to find $T>0$ such that $\{\tilde{\varphi}_T<\log(L-\epsilon) \}\subset \{\varphi<\log(L)\}$ where $\tilde{\varphi}_T:=(1-T^{-1})\varphi+T^{-1}\theta$. We can now apply Theorem \ref{adal3} with $\varphi$ replaced by $\tilde{\varphi}_T$ and with $L$ replaced by $L-\epsilon$. For $t$ large enough we have
\begin{align*}
d_K(f,\mathcal{P}^\psi_{t})\leq M\|f\|_{\{\tilde{\varphi}_T\leq \log(L-3\epsilon/2)\}}\left(\frac{1+\epsilon}{L-2\epsilon}\right)^{t-l}
\end{align*}
and
\begin{align*}
\limsup_{t\to\infty}(d_K(f,\mathcal{P}^\psi_{t}))^{1/t}\leq \limsup_{t\to\infty}\left(\frac{1+\epsilon}{L-2\epsilon}\right)^{\frac{t-l}{t}}=\frac{1+\epsilon}{L-2\epsilon}.
\end{align*}
Since $\epsilon>0$ was arbitrary the result follows.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop33}]
By assumption there is a sequence $(p_n)_{n\in{\mathbb N}}$ of functions on $X$ such that $p_n\in \mathcal{P}^\psi_{n}$ for all $n\in {\mathbb N}$ and $\|p_n-f\|_K\leq (L-\epsilon(n))^{-n}$ where $\epsilon:{\mathbb N}\to {\mathbb R}_+$ is a decreasing function satisfying $\lim_{n\to\infty}\epsilon(n)=0$. We claim that the sum $p_1+\sum_{n=1}^{\infty}(p_{n+1}-p_n)$ is uniformly convergent on compact subsets of $\{V_{K,\psi}<\log(L) \}$. Indeed for $l<L$ we have
\begin{align*}
&\|p_n-p_{n-1}\|_{\{V_{K,\psi}\leq \log(l) \}}\leq\|p_n-p_{n-1}\|_{K}\, \|e^{V_{K,\psi}}\|^n_{\{V_{K,\psi}\leq \log(l) \}}\\
&\leq \left(\|p_n-f\|_{K}+\|f-p_{n-1}\|_{K}\right)l^n\leq 2l\left(\frac{l}{L-\epsilon(n-1)}\right)^{n-1}.
\end{align*}
Since $l<L$ and $\epsilon(n)$ converges to $0$ the series converges. It is obviously equal to $f$ on $K$.
\end{proof}
\begin{lemma}\label{summulemma}
For $a,\varrho>0$ we have
\begin{align*}
\sum_{n=1}^{\infty}\left(\frac{a}{n}\right)^{n/\varrho}\leq 1+2^{\varrho}ae^{\frac{a}{e\varrho}}.
\end{align*}
\end{lemma}
\begin{proof}
Since $(a/n)^{n/\varrho}<2^{-n}$ for $n>\lfloor 2^{\varrho}a\rfloor$ we have $\sum_{n=\lfloor2^{\varrho}a\rfloor+1}^{\infty}\left(\frac{a}{n}\right)^{n/\varrho}\leq 1$. The function $x\to (a/x)^{x/\varrho}$ is maximized when $x=a/e$. Therefore
\begin{align*}
\sum_{n=1}^{\lfloor 2^{\varrho}a\rfloor}\left(\frac{a}{n}\right)^{n/\varrho}\leq \lfloor 2^{\varrho}a\rfloor \left(\frac{a}{a/e}\right)^{\frac{a/e}{\varrho}}\leq 2^{\varrho}ae^{\frac{a}{e\varrho}}.
\end{align*}
\end{proof}
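The bound in Lemma \ref{summulemma} is crude but explicit. As an informal numerical check of ours (not part of the argument), one can compare a truncated sum with the right-hand side on a small grid of parameters:

```python
import math

def lhs(a, rho, terms=5000):
    # Truncation of sum_{n>=1} (a/n)^(n/rho); the neglected tail is
    # dominated by a geometric series once a/n < 1/2.
    return sum((a / n) ** (n / rho) for n in range(1, terms + 1))

def rhs(a, rho):
    # Right-hand side 1 + 2^rho * a * exp(a/(e*rho)) of the lemma.
    return 1.0 + 2.0 ** rho * a * math.exp(a / (math.e * rho))

for a in (0.5, 1.0, 3.0, 10.0):
    for rho in (0.5, 1.0, 2.0):
        assert lhs(a, rho) <= rhs(a, rho), (a, rho)
print("lemma bound holds on the sampled (a, rho) grid")
```

The sample grid is arbitrary; the lemma of course asserts the inequality for all $a,\varrho>0$.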
\begin{lemma}\label{kilemma}
Let $\varphi\in\mathcal{L}^+_{\psi}$ and $L_0>1$ be large enough such that $\{\psi< 1\}\subset \{\varphi<\log(L_0) \}$. Then for any constants $L_1,L_2$ with $L_2>L_1>L_0$ there exists a function $\chi\in C^{\infty}(X)$ with $\chi=1$ on $\{\varphi<\log(L_1)\}$, $\chi=0$ on $\{\varphi>\log(L_2) \}$ and $\|\bar{\partial}\chi\|^2_{L^2(X)}\leq \frac{M_1L_2^2}{L_2-L_1}\int_{\{\psi\leq\log(L_2)+M_2 \}}\omega^N$ where $M_1$ and $M_2$ are constants independent of $L_1$ and $L_2$.
\end{lemma}
\begin{proof}
Let $\chi_0\in C^{\infty}({\mathbb R})$ be such that $\chi_0(x)=1$ if $x\leq 1$ and $\chi_0(x)=0$ if $x\geq L_2/L_1$. We can choose $\chi_0$ so that $\|\chi_0'\|_{\mathbb R}\leq 2(L_2/L_1-1)^{-1}$. Now define
\begin{align*}
\chi(z)=\chi_0\left(e^{\varphi(z)}/L_1 \right),\qquad z\in X.
\end{align*}
We clearly have $\chi=1$ on $\{\varphi<\log(L_1)\}$ and $\chi=0$ on $\{\varphi>\log(L_2) \}$ so we only have to prove the estimate for $\|\bar{\partial}\chi\|^2_{L^2(X)}$. First notice that
\begin{align}
|\bar{\partial}\chi|^2_{\omega}\omega^N&\leq \frac{\|\chi_0'\|_{{\mathbb R}}}{L_1}|\bar{\partial}e^\varphi|_{\omega}^2\omega^N\leq \frac{2}{L_2-L_1}|\bar{\partial}e^\varphi|_{\omega}^2\omega^N\nonumber\\
&=\frac{2i}{L_2-L_1}\partial e^{\varphi}\wedge \bar{\partial}e^{\varphi}\wedge \omega^{N-1}\leq\frac{i}{L_2-L_1}(\partial\bar{\partial}e^{2\varphi})\wedge\omega^{N-1}\label{h}.
\end{align}
By assumption there exists a constant $C$ such that $\varphi^+-C\leq \psi^+\leq \varphi^++C$ on $X$. Let $\Gamma_0\in C^{\infty}({\mathbb R})$ be such that $\Gamma_0(x)=1$ if $1\leq x\leq \log(L_2)+C$ and $\Gamma_0=0$ if $x\leq 0$ or $x\geq \log(L_2)+C+1$. The function $\Gamma_0$ can be chosen such that $\max\{\|\Gamma_0'\|_{{\mathbb R}},\|\Gamma_0''\|_{{\mathbb R}}\}\leq 4$. Now define $\Gamma:=\Gamma_0\circ \psi$. Then $\Gamma$ equals $1$ on the support of the $(0,1)$-form $\bar{\partial}\chi$ and the support of $\Gamma$ is a subset of
\begin{align*}
F:=\{0\leq \psi(z)\leq \log(L_2)+C+1 \}\subset X.
\end{align*}
Recall that we have $\omega=\frac{i}{2}\partial \bar{\partial}e^\psi$ on $F$. By (\ref{h}) we have
\begin{align*}
\|\bar{\partial}\chi\|^2_{L^2(X)}&=\int_{X}\Gamma |\bar{\partial}\chi|^2_{\omega}\omega^N\leq \frac{1}{L_2-L_1}\int_{F}\Gamma (i\partial\bar{\partial}e^{2\varphi})\wedge\omega^{N-1}\\
&=\frac{1}{L_2-L_1}\int_{F}e^{2\varphi}(i\partial\bar{\partial}\Gamma)\wedge\omega^{N-1}\\
&\leq\frac{1}{L_2-L_1}\int_{F}4e^{2(\psi+C)}i(\partial \psi\wedge \bar{\partial}\psi+\partial\bar{\partial}\psi)\wedge\omega^{N-1}\\
&\leq \frac{8L^2_2e^{2C+1}}{L_2-L_1}\int_{F}e^{-\psi}\omega^N\leq \frac{8L^2_2e^{2C+1}}{L_2-L_1}\int_{\{\psi\leq \log(L_2)+C+1 \}}\omega^N.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{adal00}]
Assume inequalities (\ref{jafna1}) are true and let $\epsilon>0$. We first consider the case when $\varphi$ satisfies $(i)$ from Theorem \ref{adal3}. By assumption there is a constant $C$ such that
\begin{align}
\|f\|_{\{\varphi\leq \log(r)\}}\leq C\exp((\sigma+\epsilon)r^{\varrho}),\qquad r\geq 1.\label{arg}
\end{align}
Since $f$ is entire, inequality (\ref{tyu}) from Theorem \ref{adal3} is true for all $L>1$ and all large $t$. In particular if we take $L=L(t):=(t/(\varrho\sigma ))^{\varrho^{-1}}+\epsilon/2$ then by (\ref{tyu}) and (\ref{arg}) we have
\begin{align}
d_K(f,\mathcal{P}^\psi_{t})\leq M_0C\exp\left(\frac{(\sigma+\epsilon)t}{\varrho\sigma}\right)\|\bar{\partial}\chi_t\|_{L^2(X)}\left(\frac{1+\epsilon}{L(t)-\epsilon}\right)^{t-l}\label{fff}
\end{align}
for every $t$ large enough. Here $\chi_t$ is a cut-off function with $\chi_t=1$ on $\{\varphi<\log(L(t)-\epsilon) \}$ and $\chi_t=0$ on $\{\varphi>\log(L(t)-\epsilon/2) \}$. By Lemma \ref{kilemma} and assumption (\ref{volgrowth}) (taking $r=\varrho/2$) there are constants $M_1,M_2,A,B$ such that
\begin{align}
\|\bar{\partial}\chi_t\|^2_{L^2(X)}&\leq \frac{M_1(L(t)-\epsilon/2)^2}{\epsilon/2}\int_{\{\psi\leq \log(L(t)-\epsilon/2)+M_2\}}\omega^N\nonumber\\
&\leq \frac{M_1(L(t)-\epsilon/2)^2}{\epsilon/2}\exp\left(Ae^{\frac{M_2\varrho}{2}}(L(t)-\epsilon/2)^{\frac{\varrho}{2}}+B \right)\nonumber\\
&=\frac{2M_1}{\epsilon}\left( \frac{t}{\varrho\sigma}\right)^{2\varrho^{-1}}\exp\left(Ae^{\frac{M_2\varrho}{2}}\left(\frac{t}{\varrho\sigma} \right)^{\frac{1}{2}}+B \right).\label{ggg}
\end{align}
Now, combining (\ref{fff}) and (\ref{ggg}), we get
\begin{align}
\limsup_{t\to\infty}t(d_K(f,\mathcal{P}^\psi_{t}))^{\varrho/t}\leq \exp\left(\frac{\sigma+\epsilon}{\sigma}\right)(1+\epsilon)^{\varrho}\varrho\sigma.\label{iii}
\end{align}
Since (\ref{iii}) is true for all $\epsilon>0$ the result follows.
Now consider the case when $\varphi$ does not satisfy $(i)$ from Theorem \ref{adal3}. Let $\theta$ be a Ricci compensator for $\psi$ satisfying $\theta|_K\leq 0$, and let $T$ be large enough such that $\tilde{\varphi}_{T}$ (as defined in Lemma \ref{vvvv}) is an exhaustion function. Then clearly the function $\varphi_{\epsilon}:=(1-\epsilon)\varphi+\epsilon\tilde{\varphi}_{T}$ satisfies $(i)-(iii)$ from Theorem \ref{adal3}. Moreover, since $\tilde{\varphi}_{T}$ is an exhaustion, we have
\begin{align}
\{\varphi_\epsilon\leq \log(r) \}\subset \{(1-\epsilon)\varphi\leq \log(r) \},\nonumber\\ \text{and}\qquad \|f\|_{\{\varphi_\epsilon\leq \log(r)\}}\leq \|f\|_{\{\varphi\leq \log(r^{1/(1-\epsilon)}) \}}\label{pppp}
\end{align}
for $r>1$ large enough. By assumption (\ref{jafna1}) and by (\ref{pppp}) we have
\begin{align*}
\limsup_{r\to\infty}\frac{\log^+\log\|f\|_{\{\varphi_\epsilon\leq \log(r)\}}}{\log(r)}\leq\limsup_{r\to\infty}\frac{\log^+\log\|f\|_{\{\varphi\leq \log(r^{1/(1-\epsilon)})\}}}{(1-\epsilon)\log(r^{1/(1-\epsilon)})}\leq \frac{\varrho}{1-\epsilon}
\end{align*}
and
\begin{align*}
\limsup_{r\to\infty}\frac{\log\|f\|_{\{\varphi_\epsilon\leq \log(r)\}}}{r^{\varrho/(1-\epsilon)}}\leq \limsup_{r\to\infty}\frac{\log\|f\|_{\{\varphi\leq \log(r^{1/(1-\epsilon)})\}}}{(r^{1/(1-\epsilon)})^{\varrho}}\leq \sigma.
\end{align*}
We can now apply our previous conclusion with $\varphi$ replaced by $\varphi_{\epsilon}$ and $\varrho$ replaced by $\varrho/(1-\epsilon)$ and we have
\begin{align*}
\limsup_{t\to\infty}t(d_K(f,\mathcal{P}^\psi_{t}))^{\varrho/((1-\epsilon)t)}\leq e\frac{\varrho}{1-\epsilon}\sigma.
\end{align*}
Since this is true for every $\epsilon>0$ the result follows.
To prove the converse, assume inequality (\ref{jafna3}) is true with $\varphi$ replaced by $V_{K,\psi}$. Then for $\epsilon>0$ we can find $M$ such that for any $n\geq M$ there is a function $p_n\in \mathcal{P}^\psi_{n}$ such that
\begin{align}
\|f-p_n\|_K\leq\left(\frac{e\varrho\sigma(1+\epsilon)}{n}\right)^{n/\varrho}.\label{qqqq}
\end{align}
Consider the function $G:=p_M+\sum_{n=M}^{\infty}(p_{n+1}-p_n)$. Clearly we have $G=f$ on $K$. Moreover, by (\ref{qqqq}) and Lemma \ref{summulemma}, we have
\begin{align*}
|G|&\leq|p_M|+ \sum_{n=M}^{\infty}|p_{n+1}-p_n|\leq \|p_M\|_Ke^{MV_{K,\psi}}+\sum_{n=M}^{\infty}\|p_{n+1}-p_n\|_Ke^{(n+1)V_{K,\psi}}\\
&\leq \|p_M\|_Ke^{MV_{K,\psi}}+\sum_{n=M}^{\infty}\left(\|p_{n+1}-f\|_K+\|p_n-f\|_K\right)e^{(n+1)V_{K,\psi}}\\
&\leq \|p_M\|_Ke^{MV_{K,\psi}}+2e^{V_{K,\psi}}\sum_{n=M}^{\infty}\left(\frac{e\varrho\sigma(1+\epsilon)e^{\varrho V_{K,\psi}}}{n}\right)^{n/\varrho}\\
&\leq \|p_M\|_Ke^{M V_{K,\psi}}+2e^{V_{K,\psi}}\left(1+2^{\varrho}e\varrho\sigma(1+\epsilon)\exp\left(\varrho V_{K,\psi}+\sigma(1+\epsilon)e^{\varrho V_{K,\psi}} \right)\right).
\end{align*}
In particular we have
\begin{align*}
\|G\|_{\{V_{K,\psi}\leq \log(r)\}}\leq \|p_M\|_Kr^{M}+2r(1+2^{\varrho}e\varrho\sigma(1+\epsilon)r^{\varrho}e^{\sigma(1+\epsilon)r^{\varrho}})
\end{align*}
and now it is easy to see that
\begin{align*}
\limsup_{r\to\infty}\frac{\log\|G\|_{\{V_{K,\psi}\leq \log(r)\}}}{r^{\varrho}}\leq \sigma(1+\epsilon)
\end{align*}
and the second inequality of (\ref{jafna1}) follows by letting $\epsilon\to 0$. By simple calculus we can show that the first inequality of (\ref{jafna1}) follows from the second one.
\end{proof}
\bibliographystyle{plain}
2110.02873
\section{Introduction}
Humans have a natural desire for self-expression, yet the fear of total exposure to the outside world remains deep in our minds. Given this ambivalence, a simple alteration of one's appearance may satisfy both impulses. People apply all kinds of decorations and filters to their selfies before posting them on the internet. As the use of social media in the entertainment industry keeps growing, style filters have become a basic function of many applications.
Recently, Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} have shown the potential for automatic image-to-image translation. Differing from the original GAN, state-of-the-art approaches (e.g., CycleGAN \cite{zhu2017unpaired}, Pix2PixHD \cite{wang2018high}, StyleGAN \cite{karras2019style}, DRIT++ \cite{lee2020drit++}) implement a discriminator that distinguishes generated images from images of the desired domain. Meanwhile, the generator is forced to output images with features similar to that domain. Although these methods produce great results, high-level semantic information in a picture is still poorly handled in the unsupervised setting. The intuitive solution to this problem is to pre-train semantic segmentation masks and inject them into the generator layers. ContrastGAN \cite{liang2017generative} uses manual object-mask annotations to disentangle the foreground and background of an existing picture. However, even a well-prepared segmentation model can cause the network to lose generalization \cite{alotaibi2020deep}, because hand-labeled data are limited in scale and availability. Thus, recent years have seen much research based on the self-attention of GANs \cite{chen2018attention, tang2019attentiongan,zhang2019self, emami2020spa}. This technique allows the generator and discriminator to extract a latent attention mask from the input image according to the learned data distribution. In image-to-image translation practice, the attention mechanism is embedded into the GAN architecture by branching the convolutional blocks of the image encoder/decoder. A spatial attention map (implemented by dot product) is used to separate the foreground object from background noise, thus excluding the background from translation. These methods largely improve image quality when dealing with translation between real-world objects such as human faces.
However, spatial attention is not sufficient to capture low-level features such as artistic style and texture, because the style change between the two image domains is a global mapping. In this work, we focus on translation between Marvel-styled comic images and human faces, which involves both high-level semantic manipulation and low-level feature engineering.
In order to address these issues, we propose a novel Spectral Domain Attention-Guided Generative Adversarial Network (SDA-GAN) for unsupervised image-to-image translation. As shown in Fig.\ref{sda}, by combining spectral attention with spatial attention, the model better translates low-level features while maintaining object consistency. A two-scheme framework is applied in this work. First, we explore the most straightforward method of injecting spectral information into the network architecture: we apply the Fast Fourier Transform directly to the content layer of the generator, and then combine the contents with the spectral and spatial attention via the inverse Fourier transform. Inspired by \cite{tang2019attentiongan}, the attention is divided into multiple foreground and background parts for each case. The generator produces a transform map that includes both object content and global style. As a proof of concept, we assume the network will eventually learn to distinguish the object and the background after fine-tuning and optimization. The final output can thus be divided into two components: the foreground (generated by the foreground attention and the content/style mask) and the background (generated by the background attention and the original image). By a weighted addition of the two counterparts, we obtain the mapping from the source domain to the target domain. This transformation includes both spatial modification and global texture, style, illumination, etc. Because semantic operations are introduced, uninteresting areas of the image can be ignored during translation. To reduce the computational cost introduced by complex-number operations, we divide the spectral information into two parts: phase and amplitude. The phase is shared across all channels, while the generator layers only have to produce amplitude content and attention. This pre-separation of phase and amplitude helps stabilize the network and prevents numerical errors.
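The phase/amplitude pre-separation described above can be sketched in a few lines of NumPy; the function names are purely illustrative and not taken from the authors' implementation:

```python
import numpy as np

def split_phase_amplitude(img):
    """Split a 2-D single-channel image into its FFT amplitude and phase."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def recombine(amplitude, phase):
    """Rebuild the image from amplitude and phase; the imaginary residue
    after the inverse FFT is only floating-point noise for real inputs."""
    spec = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
amp, ph = split_phase_amplitude(img)
rec = recombine(amp, ph)
print(np.allclose(img, rec))  # the decomposition is lossless up to round-off
```

Because the decomposition is exact, a generator that predicts only amplitudes (reusing the input's phase) never has to manipulate complex numbers directly, which is the stabilization argument made in the text.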
Furthermore, we construct a more complicated system in which the spectral contents are generated independently by their own branch. Experimentally, we found that the cycle architecture \cite{zhu2017unpaired} is vital to the training process, as it largely reduces the possibility of mode collapse and preserves global consistency.
The major contribution of this work is summarized as follows:
\begin{itemize}
\item To the best of our knowledge, this is the first work to implement spectral attention in image translation.
\item We propose a novel spectral attention-guided generator that is robust on both high-level and low-level features. This architecture utilizes spectral and spatial information, which allows the network to perform local translation and global style reformatting on objects while ignoring the background.
\item We demonstrate that the use of the Fourier transform in convolutional networks introduces performance benefits while only slightly impairing computational efficiency.
\item We introduce an adaptive image fuser that integrates translated contents from the spatial and spectral domains.
\end{itemize}
\section{Related Work}
\textbf{Generative Adversarial Networks} coined in 2014 by Goodfellow et al.\cite{goodfellow2014generative} have achieved significant results on various computer vision subfields, including super-resolution \cite{dahl2017pixel,bulat2018learn}, image-to-image translation \cite{wang2018high,karras2019style,lee2020drit++}, text-to-image generation \cite{reed2016generative}, and semantic segmentation \cite{cherian2019sem,yao2018exploring}.
A GAN is composed of two adversarial networks, the generator G and the discriminator D, which are trained simultaneously using the adversarial loss. G tries to generate sufficiently high-quality fake images, and D is trained to distinguish them from real pictures. The ultimate goal of training a GAN is to reach a Nash equilibrium, since any improvement of one part of the network causes a loss for its counterpart. In this case, we hope that the generator produces images with high confidence both for the discriminator and for human eyes. The original GAN derives images from random vectors, while conditional GANs (cGANs) utilize external information such as annotations, points of interest, semantic masks, and the desired image domain. In this work, we adopt the adversarial loss to learn the mapping from the source domain to the target domain, so that the translated image appears as genuine as the ground truth from the perspective of the discriminator.
\textbf{Image-to-Image translation}
Using convolutional neural networks (CNNs), GANs can learn and perform a translation of images between two distinct domains (e.g., horses to zebras, summer to winter, crying to laughing). Recent years have seen a variety of applications with such architectures \cite{isola2017image,wang2018high,reed2016generative}. For a paired dataset, each image in domain A has a corresponding image in domain B, and translation between such domains has been widely explored. As a classic example, Pix2Pix \cite{isola2017image} learns a mapping using a conditional network with an L1 loss to stabilize convergence and reduce blurring. The later Pix2PixHD \cite{wang2018high} increased the resolution limit to $2048\times1024$ by implementing a pyramid of generators that produce images from coarse to fine quality. Pix2PixHD enables dynamic editing of image semantics with labels or segmentation masks in a short processing time.
However, the limited availability of paired training samples hinders the application of paired image-to-image translation. To overcome this limitation, several methods have been proposed for unpaired image-to-image translation. CycleGAN \cite{zhu2017unpaired} is the ancestor of all cycle-consistency-based GANs. It preserves the key features throughout the image processing stage by adding a cycle-consistency loss and an identity loss to the adversarial loss. Variants of CycleGAN include Augmented CycleGAN \cite{almahairi2018augmented}, DiscoGAN \cite{kim2017learning}, DualGAN \cite{yi2017dualgan}, and AttentionGAN \cite{tang2019attentiongan}. CycleGAN builds a symmetric architecture involving two generators and two discriminators and trains them recurrently. We also inherit this cycle-based architecture in our network; a detailed illustration is given in Section 3. Besides cycle-based networks, autoencoder-based GANs \cite{liu2016coupled, liu2020gmm} and disentangled representation methods \cite{huang2018multimodal,lee2018diverse,lee2020drit++} have also been widely investigated.
\textbf{Attention-Guided GANs}
Imitating the human vision system, the attention mechanism has been widely applied in many fields of computer vision, including visual explanation \cite{fukui2019attention}, semantic segmentation \cite{chen2016attention}, and image and video captioning \cite{yan2019stat}. Attention stimulates the network to focus on the region of interest, thus improving the performance of all the above applications.
Recently, the combination of attention and GANs has been extensively studied. These approaches can be divided into two categories. The first is GAN-assisted attention generation. In \cite{chen2016attention,yao2018exploring,han2018spine}, GANs are employed to complete segmentation tasks: given paired images and semantic masks, the GANs are trained to build the mapping between the two domains. The second category is attention-guided GANs. In ContrastGAN, proposed in 2018, the authors used a mask-conditional generative model to disentangle the object and background. The network was trained on datasets with extra segmentation masks, which largely limited its fields of application. To address this issue, SAGAN \cite{zhang2019self} was proposed to combine the attention mechanism and GANs to deal with unlabeled data. The self-attention network is pre-trained on a segmentation dataset and then serves as a teacher network that inserts attention masks into the generator and discriminator. The system can be trained end-to-end and can process new data without extra information.
However, even a well-prepared segmentation model can cause the network to lose generalization \cite{alotaibi2020deep}, because hand-labeled data are hard to obtain. Thus, much research based on the integrated self-attention of GANs \cite{chen2018attention, tang2019attentiongan, emami2020spa} has been conducted. This technique allows the generator to extract a latent attention mask from the input image according to the learned data distribution. Instead of using a labeled segmentation mask, the network gradually learns the attention during the generation-discrimination iterations.
By fusing the attention mask and the generated content, the output image contains translations only in the desired features, leaving the background or shared features unchanged. This avoids image artifacts compared with the original GAN, which has no attention information. Popular attention models take advantage of the spatial distribution of the regions of interest. In this work, we extend attention to the spectral domain. By efficiently applying the Fourier transform and the inverse Fourier transform within the network architecture, we demonstrate that spectral-domain attention can improve generator performance. The enhanced awareness of the desired translation could further be applied in other image representation fields.
\section{SDA-GAN}
We first formulate the image-to-image translation problem as learning a mapping $G_X$ from a source image $x\in X$ to a target image $y\in Y$. The discriminator $D_Y$ evaluates generated images and real samples to predict a confidence score for each. To preserve cycle consistency, a backward translation network with generator $G_Y$ and discriminator $D_X$ is built for the translation from $Y$ to $X$. The adversarial loss $L_{adv}$ indicates the accuracy of the classification. Based on this loss function, the two parts of the GAN are trained to play a mini-max game. The generator's goal is to fool the discriminator with fake images, so its objective is to maximize $L_{adv}$; the discriminator's goal is to distinguish precisely whether an image is real or generated, so its objective is to minimize $L_{adv}$. At the very beginning of training, $G_X$ can only produce noise-like images that are easily identified by $D_Y$.
After sufficient iterations, the optimal situation is that $G_X$ produces images that look real, and $D_Y$ has 50\% accuracy in identifying real and fake images. From the perspective of optimization, a saddle point is reached; this implies the instability of GANs, as no global optimum exists. Instability is a major issue in training GANs, although previous studies have proven mathematically that, by modifying and constraining the loss function and weight distribution, a local equilibrium is achievable \cite{kodali2017convergence}. In this work, two configurations of spectral engineering are constructed and tested.
\subsection{Spectral Attention Guided Generator}
In the proposed network, we use $A_X$ and $A_Y$ to denote the spatial attention, and $S_X$ and $S_Y$ the spectral attention, of the $Y$-$X$ and $X$-$Y$ mappings. $C_X$ and $C_Y$ are the generated contents, sorted channel-wise. The objective mappings can then be represented as $G(x)=G(A_y, S_y, C_y)$ and $H(y)=H(A_x,S_x,C_x)$. As shown in Fig.\ref{sda}, the input image $x$ is first encoded into a latent code $E_x$ by convolutional blocks. Then a three-branch network ($G_A$, $G_Y$, $G_S$) generates, respectively, the spatial attention $A_y$, the translation contents $C_y$, and the spectral attention $S_y$ from the latent code $E_x$. Further, the 2-D Fourier transforms of the generated contents, $\mathcal{F}(C_x)$ and $\mathcal{F}(C_y)$, are calculated and stored.
In the first configuration, described in Fig.\ref{sda}(a), since the shared encoder compresses the shape of $x$, both the content and attention decoders can be constructed and trained efficiently. To achieve good performance on complex objects, we apply multiple layers of attention extractors $A_y \in \mathbb{R} ^ {H \times W \times n}$ and $S_y \in \mathbb{R} ^ {H \times W \times n}$, where $H$ and $W$ are the height and width of the feature map, to finely split the region of interest into a sequence of attention masks. A softmax activation is applied as the top layer of $G_A$ and $G_S$; this modification ensures that the attention across channels is a probability distribution. The corresponding translation content has dimension $C_y \in \mathbb{R} ^ {H \times W \times n \times 3}$, since the generated image contains RGB channels.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig/sda.png}
\caption{\textbf{(a)} Diagram of basic architecture in SDA-GAN. \textbf{(b)} Diagram of an alternative architecture in SDA-GAN}
\label{sda}
\end{figure*}
As for the spectral features, the translation contents (amplitudes) are directly extracted from the output of the spatial content decoder, represented as the amplitude of the 2-D FFT: $abs(\mathcal{F}(C_x))$. For each channel, the amplitude is combined with the shared phase from the input image. Finally, a dot-product operation is performed on all paired attention masks and contents.
To better manipulate the distribution of the generated image, an additional convolutional block is employed as an 'image fuser'. The two branches of generated contents are the input of the image fuser, where spatial and spectral features are processed and integrated to produce the final output. This implementation gives the network the ability to adaptively fine-tune the color theme of the generated images. The forward process is presented below:
\begin{equation}
G(x) = N_y(\sum\limits_{i = 1}^n {\left( {\lambda _{Ay}^iC_y^i*A_y^i + \lambda _{Sy}^i({\mathcal{F}^{ - 1}}(S_y^i*\mathcal{F}(C_y^i))} \right)} )
\end{equation}
where $N_y$ denotes the image fuser. For $i \ne n$, $A_y^i$ and $S_y^i$ are foreground attentions; $A_y^n$ and $S_y^n$ are the background attentions. All satisfy the following relationships:
\begin{equation}
\sum\limits_{i = 1}^n {A_y^i} = 1,\qquad \sum\limits_{i = 1}^n {S_y^i} = 1
\end{equation}
This indicates that the attention at each pixel has $n-1$ degrees of freedom. $C_y^i$ is the translation content of each layer, where $C_y^n=x$ is the identity mapping, multiplied by the background attention.
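The per-pixel normalization constraint on the attention masks is exactly what a channel-wise softmax enforces. A minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

# n attention maps of size H x W; softmax over the channel axis makes the
# per-pixel attentions a probability distribution, as the constraint requires.
H, W, n = 4, 4, 5
logits = np.random.default_rng(1).normal(size=(H, W, n))
A = softmax(logits, axis=-1)
print(np.allclose(A.sum(axis=-1), 1.0))  # masks sum to 1 at every pixel
```

Since the $n$ masks sum to one everywhere, only $n-1$ of them are free, matching the degrees-of-freedom count in the text.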
Following the same procedure, the output of mapping $ H(y)=H(A_x,S_x,C_x)$ can be derived by:
\begin{equation}
H(y) = {N_x}(\sum\limits_{i = 1}^n {\left( {\lambda _{Ax}^iC_x^i*A_x^i + \lambda _{Sx}^i({\mathcal{F}^{ - 1}}(S_x^i*\mathcal{F}(C_x^i))} \right)} )
\end{equation}
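The forward process above can be sketched in NumPy as follows. This is an illustrative skeleton under simplifying assumptions (the image fuser is omitted, the $\lambda$ weights are set to 1, and single-channel contents are used), not the authors' implementation:

```python
import numpy as np

def forward(contents, A, S):
    """Sum over n layers of: spatial term C_i * A_i plus spectral term
    ifft2(S_i * fft2(C_i)); lambda weights fixed to 1, fuser omitted."""
    out = np.zeros(contents.shape[1:])
    for C_i, A_i, S_i in zip(contents, A, S):
        spatial = C_i * A_i
        spectral = np.real(np.fft.ifft2(S_i * np.fft.fft2(C_i)))
        out += spatial + spectral
    return out

rng = np.random.default_rng(2)
n, H, W = 3, 8, 8
C = rng.random((n, H, W))
A = np.full((n, H, W), 1.0 / n)   # uniform spatial attention over the n masks
S = np.ones((n, H, W))            # all-pass spectral attention
y = forward(C, A, S)
# with S = 1 the spectral branch reproduces each C_i exactly, so
# y = sum_i (C_i / n) + sum_i C_i
print(np.allclose(y, C.sum(axis=0) / n + C.sum(axis=0)))
```

The all-pass spectral mask is a sanity check: a learned $S_y^i$ would instead reweight individual frequency bins before the inverse transform.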
In the second configuration, shown in Fig.\ref{sda}(b), an alternative branch of spectral content is implemented. This structure avoids performing the FFT on the spatial content layer by using an independent spectral decoder to generate the amplitude of each channel, $\tilde{C}_x$ and $\tilde{C}_y$. Finally, the generated amplitudes, together with the background amplitude (from the input), are combined with the shared phase. Experimentally, the two structures do not differ much in performance; however, training is more stable and less likely to produce 'nan' values with the independent-amplitude structure. The forward process is presented below:
\begin{equation}
G(x) = {N_y}(\sum\limits_{i = 1}^n {\left( {\lambda _{Ay}^iC_y^i*A_y^i + \lambda _{Sy}^i({\mathcal{F}^{ - 1}}(S_y^i*{{\tilde C}^i_y})} \right)} )
\end{equation}
\subsection{Spectral Attention Guided Discriminator}
While the generator output remains the same for both structures mentioned above, the discriminator constructs a different loss function for each configuration. As in cGANs, the adversarial loss is calculated as follows:
\begin{equation}
\begin{aligned} \mathcal{L}_{adv}\left(G, D_{Y}\right) =-\mathbb{E}_{y \sim p_{\text {data }}(y)}\left[\log D_{Y}(y)\right] \\ -\mathbb{E}_{x \sim p_{\text {data }}(x)}\left[\log \left(1-D_{Y}(G(x))\right)\right] \end{aligned}
\end{equation}
The update direction of the weights can be calculated by the back-propagation algorithm, as the generator intends to maximize the loss while the discriminator minimizes it. Replacing $G$ with $H$ and $D_Y$ with $D_X$ gives the adversarial loss of the mapping $H(y)=H(A_x,S_x,C_x)$ for the first configuration, and $H(y)=H(A_x,S_x,C_x,\tilde{C}_x)$ for the second configuration.
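The adversarial loss and the discriminator's descent direction can be illustrated on a 1-D toy problem with a logistic discriminator $D(x)=\sigma(wx+b)$; all numbers below are toy assumptions, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(w, b, real, fake):
    """Adversarial loss from the discriminator's side:
    -E[log D(y)] - E[log(1 - D(G(x)))]."""
    eps = 1e-12
    return (-np.mean(np.log(sigmoid(w * real + b) + eps))
            - np.mean(np.log(1.0 - sigmoid(w * fake + b) + eps)))

rng = np.random.default_rng(0)
real = rng.normal(3.0, 1.0, 256)   # "real" samples
fake = rng.normal(0.0, 1.0, 256)   # generator output, G held fixed here

w, b, lr = 0.0, 0.0, 0.01
loss0 = d_loss(w, b, real, fake)
for _ in range(100):
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    grad_b = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * grad_w                # discriminator minimizes the loss
    b -= lr * grad_b
loss1 = d_loss(w, b, real, fake)
print(loss1 < loss0)  # gradient descent reduces the discriminator loss
```

With $G$ fixed, this is convex logistic regression, so the loss decreases monotonically for a small step size; the generator's update would ascend the same objective.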
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig/res.png}
\caption{Image translation results of \textbf{(a)} Natural objects and \textbf{(b)} artistic styles}
\label{res}
\end{figure*}
\begin{table*}
\caption{Performance evaluation of SDA-GAN and baseline}
\centering
\begin{tabular}{@{}lllllllll@{}}
\toprule
& \multicolumn{2}{c}{face2comic} & \multicolumn{2}{c}{horse2zebra} & \multicolumn{2}{c}{apple2orange} & \multicolumn{2}{c}{monet2photo} \\ \midrule
& \multicolumn{1}{c}{SDA} & \multicolumn{1}{c}{Baseline} & \multicolumn{1}{c}{SDA} & \multicolumn{1}{c}{Baseline} & \multicolumn{1}{c}{SDA} & \multicolumn{1}{c}{Baseline} & \multicolumn{1}{c}{SDA} & \multicolumn{1}{c}{Baseline} \\
$IS_A$ & \textbf{3.051} & 2.802 & \textbf{3.214} & 1.763 & \textbf{3.799} & 2.681 & 3.118 & \textbf{3.256} \\
$IS_B$ & \textbf{2.928} & 2.913 & \textbf{3.140} & 2.998 & \textbf{4.254} & 2.962 & \textbf{3.018} & 2.892 \\
$FID_A$ & \textbf{53.5} & 142.8 & \textbf{206.9} & 208.7 & \textbf{168.3} & 171.2 & \textbf{166.8} & 190.4 \\
$FID_B$ & 103.1 & \textbf{101.2} & 172.5 & \textbf{171.0} & \textbf{174.3} & 174.7 & \textbf{188.2} & 243.2 \\ \bottomrule
\end{tabular}
\end{table*}
\subsection{Cycle-consistency loss}
In order to preserve translation consistency, we apply two independent mapping networks $G:X\rightarrow Y$ and $H:Y\rightarrow X$. A full cycle in an iteration step can be described as follows:
First, the generator $G$ transforms an input image $x$ from domain $X$ to domain $Y$. The generated result $G(x)$, together with a sampled image $y$ from domain $Y$, is used to calculate the adversarial loss. Then $G(x)$ is processed by the other generator $H$, which is responsible for the translation from $Y$ to $X$. The output $H(G(x))$ is used to calculate the cycle-consistency loss against the input $x$. The distance ($L_1$, $L_2$, or cross-entropy) between $H(G(x))$ and $x$ measures the information lost after passing $x$ through the two generators in sequence. The equation below formulates the cycle-consistency loss using the $L_1$ distance \citep{arjovsky2017wasserstein}:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text {cycle }}(G, H) &=\mathbb{E}_{x \sim p_{\text {data }}(x)}\left[\|H(G(x))-x\|_{1}\right] \\
&+\mathbb{E}_{y \sim p_{\text {data }}(y)}\left[\|G(H(y))-y\|_{1}\right]
\end{aligned}
\end{equation}
The identity loss evaluates consistency when the network is fed an image from its output domain, for example $G(y)$. Intuitively, this output should remain unchanged if the network is well trained. Hence, the distance between $G(y)$ and $y$, as well as between $H(x)$ and $x$, is measured and added to the total loss. The identity loss is formulated as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text {id}}(G, H) &=\mathbb{E}_{x \sim p_{\text {data }}(x)}\left[\|H(x)-x\|_{1}\right] \\
&+\mathbb{E}_{y \sim p_{\text {data }}(y)}\left[\|G(y)-y\|_{1}\right]
\end{aligned}
\end{equation}
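Both losses reduce to $L_1$ distances between images. A minimal NumPy sketch, in which $G$ and $H$ are toy invertible placeholders rather than trained networks:

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) distance between two images."""
    return np.mean(np.abs(a - b))

# Toy mappings chosen so that the cycle H(G(x)) = x is exact.
G = lambda x: 2.0 * x
H = lambda y: 0.5 * y

rng = np.random.default_rng(3)
x = rng.random((4, 4))
y = rng.random((4, 4))

cycle_loss = l1(H(G(x)), x) + l1(G(H(y)), y)     # zero: perfect reconstruction
identity_loss = l1(H(x), x) + l1(G(y), y)        # nonzero: H and G are not identities
print(round(cycle_loss, 6))  # 0.0
```

The example shows that a perfect cycle does not imply a small identity loss: the two terms constrain the generators in different ways.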
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth]{fig/spe.png}
\caption{Spectral filling with SDA-GAN }
\label{spe}
\end{figure*}
\section{Experimental analysis}
In order to illustrate the functionality of spectral attention, we choose AttentionGAN \cite{tang2019attentiongan} as the baseline network. We trained the proposed SDA-GAN of Fig.\ref{sda}(b) and the baseline on a variety of datasets including natural objects, comic-styled images, and Monet paintings. The networks share the same set of hyperparameters and were both trained on a single V100 GPU for 200 epochs. Part of the testing results are shown in Fig.\ref{res}.
Fig.\ref{res}(a) compares the translation results on the horse2zebra and apple2orange datasets. The images generated by the baseline contain extreme points, while the images produced by our approach are smoother and more natural. This difference is mainly due to the image fuser. In the baseline model, the final output is a simple summation of branches, with attention as the weight of each content image. Instead, our model applies a multi-layered convolutional network to perform the final adjustment of the output. Since the kernels process all-channel feature maps in one convolution, attention and content layers from different domains can interact. Extreme values wrongly produced by one layer can then be eliminated by more confident information from other layers. This interaction between attention and contents, we argue, is a more efficient way to utilize such features. Fig.\ref{res}(b) shows the style-transfer results on the face2comic and monet2photo datasets. The baseline and SDA-GAN are both cycle-structured, so we performed the forward translation A-B and the backward translation B-A on all four datasets. The models are evaluated by the Inception Score \cite{salimans2016improved} and the FID metric \cite{heusel2017gans}. A detailed comparison is given in Table 1.
The table indicates that SDA-GAN achieves a better IS than the baseline network under the same training regime. We can therefore conclude that our model generates more balanced classes of images and is less prone to mode collapse. The FID metric, unlike IS, measures the distance between the generated images and the target domain. From the listed evaluations, the FID of both models on real-object translation (columns 2 and 3) is similar. In style transfer tasks, however, such as photo to Monet painting, the FID is largely reduced by our model with spectral attention. This is consistent with our hypothesis that both object-scale, high-level features and style-based, low-level features can be generated with spectral engineering. SDA-GAN shifts the texture and color theme into the distribution of the target domain.
We also noticed that the FID of human face images reconstructed from comics achieves a large improvement (from 142.8 to 53.5). Following the concept of spectral engineering, we analysed the 2-D FFT of a generated image, illustrated in Fig.\ref{spe}. Before translation, the comic picture has only two main frequency components, along the x-axis and y-axis, indicating the dominance of horizontal and vertical lines in the source image. The translation introduces a more dispersed spectral distribution, i.e., from the perspective of signal analysis, more high-frequency components in the output picture. This phenomenon leads to the more 'realistic' profile of the reconstructed face image.
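This spectral observation can be reproduced on synthetic data: an image built from purely horizontal/vertical structure concentrates its 2-D FFT energy on the frequency axes, while added diagonal texture spreads energy off-axis. The sketch below is illustrative and does not use the paper's data:

```python
import numpy as np

def log_magnitude_spectrum(img):
    """Centered log-magnitude 2-D spectrum, as used to inspect translations."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spec))

N = 32
xx, yy = np.meshgrid(np.arange(N), np.arange(N))
# 'comic-like': vertical stripes -> spectral energy on the kx axis only
stripes = np.sin(2 * np.pi * xx * 4 / N)
# added diagonal texture -> energy appears off the axes
textured = stripes + 0.5 * np.sin(2 * np.pi * (xx + yy) * 4 / N)

s1 = log_magnitude_spectrum(stripes)
s2 = log_magnitude_spectrum(textured)
c = N // 2
# energy away from the central frequency axes
off_axis = lambda s: s.sum() - s[c, :].sum() - s[:, c].sum()
print(off_axis(s2) > off_axis(s1))  # texture adds off-axis spectral content
```

The same off-axis measure applied to generated faces would quantify how much the translation disperses the originally axis-dominated comic spectrum.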
\section{Conclusion}
This work introduced spectral engineering into image translation. Starting from a comparison of the spectra of real face pictures and comic images, we formulated the high-level and low-level translation by separating the phase and amplitude components.
Since the Fast Fourier Transform (FFT) is based on a linear combination of input signals, it is reasonable to assume that a convolutional network can naturally function as a space-frequency domain converter. Our approach guided and accelerated the convergence of the spectral-related branches toward this potential function. Moreover, little computational cost is introduced by this approach, since the derivative of the Fourier transform can be represented by the inverse Fourier transform, which is of the same level of complexity.
Our proposed model was trained in limited time with limited resources. For future study, we could increase the number of content layers in the decoder, which would introduce more freedom in attending to different semantic components. Another possible improvement is to add spectral information to the discrimination process; in other words, to 'hard code' the spectral layer into the discriminator network. This modification would account for spectral distribution and spectral consistency, and might increase the output quality.
\newpage
\bibliographystyle{unsrtnat}
|
2004.04621
|
\section{Introduction}
\label{sec1}
Soluble surfactants play a fundamental role in many microfluidic applications \citep{A16}. For instance, it is well-known that surfactants can stabilize both foams and emulsions due to Marangoni convection effects \citep{EBW91,CT04,DL08}. The surface viscosity of surfactant monolayers is also believed to play a significant role in such stabilization. In fact, the drainage time during the coalescence of two bubbles/droplets can considerably increase due to the monolayer viscosity \citep{OJ19}. However, there are serious doubts about whether small-molecule surfactants commonly used in microfluidic applications exhibit measurable surface viscosities. For instance, \citet{ZNMLDMTS14} reported that the surface shear viscosity of Sodium Dodecyl Sulfate (SDS) was below the sensitivity limit of their experimental technique ($\sim 10^{-8}$ Pa\,s\,m). This raises doubts about the role played by surface shear rheology in the stability of foams and emulsions treated with soluble surfactants.
The disparity among the reported values of the shear and dilatational viscosities of both soluble and insoluble surfactants reflects the difficulty of measuring such properties. The lack of precise information about these values, as well as the mathematical complexity of calculating the surface viscous stresses, explains why most experimental and theoretical works in microfluidics do not take those stresses into account. However, one can reasonably expect surface viscosity to considerably affect the dynamics of interfaces at sufficiently small spatiotemporal scales, even for nearly-inviscid surfactants \citep{PMHVV17}. A paradigmatic example is the pinch-off of an interface covered with surfactant \citep{PMHVV17}, where both the surface-to-volume ratio and the surface velocity can diverge at times and distances sufficiently close to this singularity.
In the pinching of a Newtonian liquid free surface, the system spontaneously approaches a finite-time singularity, which offers a unique opportunity to observe the behavior of fluids at arbitrarily small length and time scales. This property and its universal character (insensitivity to both initial and boundary conditions) make this problem an ideal candidate to test our knowledge of fundamental aspects of fluid dynamics. Both theoretical \citep{PSS90,E93,P95,LS16,KWTB18} and experimental \citep{CMP09,VMHF14,CCTSHHLB15,PMHVV17} studies of free surface pinch-off have traditionally considered the dependence of the free surface minimum radius, $R_{\mathrm{min}}$, on the time to pinching, $\tau$, as an indicator of the relevant forces arising next to the pinching spatiotemporal coordinate. For small viscous effects, the thinning of the liquid thread passes through an inertio-capillary regime characterized by the power law
\begin{equation}
\label{ebbb}
R_{\mathrm{min}}=A \left(\frac{\sigma}{\rho}\right)^{1/3} \tau^{2/3},
\end{equation}
where $\sigma$ and $\rho$ are the liquid surface tension and density, respectively \citep{KM83,E93}. The dimensionless prefactor $A$ can exhibit a complex, nonmonotonic behavior over many orders of magnitude in $\tau$. In fact, its asymptotic value $A\simeq 0.717$ is never reached because there are very long-lived transients, and then viscous effects take over \citep{DHHVRKEB18}.
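As a numerical illustration of this power law (the fluid properties below are standard values for water at room temperature, assumed for the example rather than taken from the paper):

```python
import numpy as np

def r_min(tau, sigma=0.072, rho=1000.0, A=0.717):
    """Inertio-capillary thinning law R_min = A (sigma/rho)^(1/3) tau^(2/3),
    SI units; sigma, rho for water, A the asymptotic prefactor from the text."""
    return A * (sigma / rho) ** (1.0 / 3.0) * tau ** (2.0 / 3.0)

taus = np.array([1e-6, 1e-5, 1e-4])   # seconds before pinch-off
radii = r_min(taus)
# each decade in tau rescales R_min by 10^(2/3) ~ 4.64
print(np.allclose(radii[1:] / radii[:-1], 10 ** (2 / 3)))
```

The self-similar exponent 2/3 means the minimum radius shrinks by a factor of about 4.64 per decade of remaining time, independently of the prefactor.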
The addition of surfactant confers a certain degree of complexity on Newtonian liquids, which may lead to unexpected behaviors during the pinch-off of their free surfaces. For instance, Marangoni stress can produce microthread cascades during the breakup of interfaces loaded with surfactants \citep{MB06}. It is still a subject of debate whether surfactants are convected away from the pinching region; in that case, the system would follow the self-similar dynamics of clean interfaces at times sufficiently close to the breakup \citep{TL02,CMP02,LSFB04,LFB06,LB07,CMP09,RABK09,CMCP11a,PMHVV17}. The persistence of a surfactant monolayer in the pinching of an interface potentially entails the appearance of several effects. The first, and probably most obvious, is the so-called solutocapillarity, i.e., the local reduction of the surface tension due to the presence of surface-active molecules \citep{RABK09,SPADBK12}. The other effect that has been accounted for is the Marangoni stress induced by the surface tension gradient due to the uneven distribution of surfactant along the free surface \citep{SL90b,AB99,TL02,CMP02,MB06,HSYLBP08,JGS06,LFB06,DSXCS06,HM16b,KWTB18}. However, some other effects might be considered in the vicinity of the pinching region as well. Among them, the shear and dilatational surface viscosities have already been shown to considerably affect the breakup of pendant drops covered with insoluble (viscous) surfactants \citep{PMHVV17}.
SDS is one of the most commonly used surfactants in microfluidic experiments. The adsorption/desorption times of SDS are several orders of magnitude larger than the characteristic time of the breakup of free surfaces enclosing low-viscosity liquids. This allows one to regard SDS as an insoluble surfactant, which considerably simplifies the problem. Under the insolubility condition, bulk diffusion and adsorption/desorption processes can be ruled out. Due to its small molecular size, the SDS monolayer is assumed to exhibit a Newtonian behavior \citep{S60}. In addition, the sphere-to-rod transition of SDS micelles (and its associated viscoelastic behavior) does not take place unless some specific salt is added to the solution \citep{AKC03}. Therefore, viscoelastic effects are not expected to come up even for concentrations larger than the cmc.
Surface viscosities of small surfactant molecules, such as SDS, are believed not to affect the breakup of a pendant drop owing to their small values. However, as mentioned above, the surface-to-volume ratio diverges in the vicinity of the pinching region, and, therefore, surface viscous effects can eventually dominate both inertia and viscous dissipation in the bulk of that region. In addition, the surface tension is bounded between the values corresponding to the clean free surface and the maximum packing limit, while the surface velocity can diverge at the pinch-off singularity. This suggests that surface viscous stresses (which are proportional to the surface velocity gradient) can become comparable with, or even greater than, the Marangoni stress (which is proportional to the surface tension gradient) in the pinching region for times sufficiently close to the breakup. One can thus hypothesize that surface viscous stresses eventually have a measurable influence on the evolution of the free surface even for surfactants with very low surface viscosities. This work aims to test this hypothesis. The comparison between numerical simulations and experimental data will allow us to determine upper bounds for both the shear and dilatational viscosities of SDS.
\section{Theoretical model}
\label{sec2}
Consider a liquid drop of density $\rho$ and viscosity $\mu$ hanging on a vertical capillary (needle) of radius $R_0$ due to the action of the (equilibrium) surface tension $\sigma_0$ (Fig.\ \ref{sketch}). In this section, all the variables are made dimensionless with the needle radius $R_0$, the inertio-capillary time $t_0=(\rho R_0^3/\sigma_0)^{1/2}$, the inertio-capillary velocity $v_0=R_0/t_0$, and the capillary pressure $\sigma_0/R_0$. The velocity ${\bf v}({\bf r},t)$ and reduced pressure $p({\bf r},t)$ fields are calculated from the continuity and Navier-Stokes equations
\begin{equation}
{\boldsymbol \nabla}\cdot {\bf v}=0,
\end{equation}
\begin{equation}
\frac{\partial {\bf v}}{\partial t}+{\bf v}\cdot {\boldsymbol \nabla}{\bf v}=-{\boldsymbol \nabla}p+{\boldsymbol \nabla}\cdot {\bf T},
\end{equation}
respectively, where ${\bf T}=\text{Oh}[{\boldsymbol \nabla}{\bf v}+({\boldsymbol \nabla}{\bf v})^T]$ is the viscous stress tensor, and $\text{Oh}=\mu(\rho\sigma_0 R_0)^{-1/2}$ is the volumetric Ohnesorge number. These equations are integrated over the liquid domain of (dimensionless) volume $V$ considering the no-slip boundary condition at the solid surface, the anchorage condition at the needle edge, and the kinematic compatibility condition at the free surface.
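To make the definition above concrete, the volumetric Ohnesorge number can be evaluated with a few lines of code. The property values used below are illustrative water-like assumptions, not the exact parameters of the experiments:

```python
import numpy as np

def ohnesorge(mu, rho, sigma0, R0):
    """Volumetric Ohnesorge number Oh = mu*(rho*sigma0*R0)**(-1/2)."""
    return mu/np.sqrt(rho*sigma0*R0)

# Illustrative, assumed water-like values (not the paper's exact parameters):
# mu = 1e-3 Pa s, rho = 1000 kg/m^3, sigma0 = 0.072 N/m, R0 = 115 um
Oh = ohnesorge(1.0e-3, 1000.0, 0.072, 115.0e-6)  # of the order of 1e-2
```

A value of Oh of the order of $10^{-2}$ confirms that the drop belongs to the low-viscosity regime on the scale of the needle radius.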
Neglecting the dynamic effects of the surrounding gas, the balance of normal stresses at the free surface yields \cite{LH98}
\begin{equation}
-p+B\, z+{\bf n}\cdot {\bf T}\cdot {\bf n}=[\widehat{\sigma}+(\text{Oh}_2^S-\text{Oh}_1^S)\boldsymbol{\nabla}^S\cdot\mathbf{v}^S]\kappa+2\text{Oh}_1^S[\kappa_1(\boldsymbol{\nabla}^S\mathbf{v}^S)_{11}+\kappa_2(\boldsymbol{\nabla}^S\mathbf{v}^S)_{22}]
,\label{NormalStress1}
\end{equation}
where $B=\rho g R_0^2/\sigma_0$ is the Bond number, $g$ the gravitational acceleration, ${\bf n}$ the unit outward normal vector, $\widehat{\sigma}\equiv\sigma/\sigma_0$ is the ratio of the local value $\sigma$ of the surface tension to its equilibrium value $\sigma_0$, $\text{Oh}_{1,2}^S=\mu_{1,2}^S(\rho\sigma_0 R_0^3)^{-1/2}$ are the superficial Ohnesorge numbers defined in terms of the surface shear and dilatational viscosities $\mu_1^S$ and $\mu_2^S$, respectively, $\bnabla^S$ the tangential intrinsic gradient along the free surface, $\mathbf{v}^S(z,t)$ the (two-dimensional) tangential velocity to the free surface, $\kappa=\kappa_1+\kappa_2$ (twice) the mean curvature of the free surface, $\kappa_1$ and $\kappa_2$ the curvatures along the meridians and parallels in the inward normal direction, respectively, and $(\boldsymbol{\nabla}^S\mathbf{v}^S)_{11}$ and $(\boldsymbol{\nabla}^S\mathbf{v}^S)_{22}$ the diagonal elements of $\boldsymbol{\nabla}^S\mathbf{v}^S$ along the meridians and the parallels, respectively.
In addition, the balance of tangential stresses leads to
\begin{equation}
{\bf t}\cdot {\bf T}\cdot {\bf n}={\bf t}\cdot \btau^S,
\end{equation}
where ${\bf t}$ is the unit vector tangential to the free surface meridians, and
\begin{eqnarray}
\btau^S=\bnabla^S\widehat{\sigma}+\bnabla^S\cdot\{ \text{Oh}_1^S[\bnabla^S\mathbf{v}^S+(\bnabla^S\mathbf{v}^S)^\top]\}+\bnabla^S[(\text{Oh}_2^S-\text{Oh}_1^S)\bnabla^S\cdot\mathbf{v}^S],
\label{stress}
\end{eqnarray}
is the surface stress vector.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.15\textwidth}{!}{\includegraphics{fig1.pdf}}}
\caption{Image of a pendant drop in the experiments right before its breakup.}
\label{sketch}
\end{figure}
The surface viscosities are expected to depend on the surfactant surface concentration. For the sake of simplicity, we assume the linear relationships $\mu_{1,2}^S=\mu_{1,2}^{S*}\widehat{\Gamma}/\widehat{\Gamma}_{\textin{cmc}}$, where $\mu_{1,2}^{S*}$ are the surface viscosities at the cmc. In addition, $\widehat{\Gamma}\equiv \Gamma/\Gamma_0$ and $\widehat{\Gamma}_{\textin{cmc}}\equiv\Gamma_{\textin{cmc}}/\Gamma_0$, where $\Gamma$ and $\Gamma_{\textin{cmc}}$ are the surfactant surface concentration and its value at the cmc, respectively, both in terms of the equilibrium value $\Gamma_0$. Therefore,
\begin{equation}
\text{Oh}_{1,2}^S=\text{Oh}_{1,2}^{S*} \frac{\widehat{\Gamma}}{\widehat{\Gamma}_{\textin{cmc}}},
\end{equation}
where $\text{Oh}_{1,2}^{S*}=\mu^{S*}_{1,2}(\rho\sigma_0 R_0^3)^{-1/2}$ are the superficial Ohnesorge numbers at the cmc.
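The corresponding calculation for the superficial Ohnesorge numbers, including the assumed linear dependence on the surfactant concentration, can be sketched as follows (the numerical inputs are illustrative assumptions):

```python
import numpy as np

def surface_ohnesorge(mu_s, rho, sigma0, R0, gamma_ratio=1.0):
    """Superficial Ohnesorge number Oh^S = mu^S*(rho*sigma0*R0^3)**(-1/2),
    scaled by Gamma_hat/Gamma_hat_cmc (the assumed linear law)."""
    return mu_s/np.sqrt(rho*sigma0*R0**3)*gamma_ratio

# Illustrative, assumed inputs: mu^S = 1e-8 Pa s m, water-like bulk values
OhS = surface_ohnesorge(1.0e-8, 1000.0, 0.072, 115.0e-6)  # of the order of 1e-3
```

The resulting value, of the order of $10^{-3}$, is much smaller than the volumetric Ohnesorge number, anticipating the conclusion drawn below from Table \ref{tab3}.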
To calculate the surfactant surface concentration, we take into account that the droplet breakup time is much smaller than the characteristic adsorption-desorption times (see Sec.\ \ref{sec3}), and, therefore, surfactant solubility can be neglected over the breakup process. In this case, one must consider the equation governing the surfactant transport on the free surface:
\begin{equation}
\label{conser}
\frac{\partial \widehat{\Gamma}}{\partial t}+{\bnabla}^S\cdot (\widehat{\Gamma}{\bf v})=\frac{1}{\text{Pe}^S}\, {\bnabla}^{S2}\widehat{\Gamma},
\end{equation}
where $\text{Pe}^S=R_0^2/(t_0 {\cal D}^S)$ and ${\cal D}^S$ are the surface Peclet number and the surface diffusion coefficient, respectively. The equation of state $\widehat{\sigma}(\widehat{\Gamma})$ is obtained from experimental data as explained below.
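The structure of Eq.\ (\ref{conser}) can be illustrated with a minimal one-dimensional analogue on a fixed periodic grid (a toy sketch only; the actual problem is solved on the deforming free surface). Writing the scheme in conservative (flux) form preserves the total amount of surfactant, which mirrors the insolubility assumption:

```python
import numpy as np

def advance_surfactant(G, v, dz, dt, Pe):
    """One explicit step of dG/dt + d(G*v)/dz = (1/Pe)*d2G/dz2
    on a periodic grid, written in conservative (flux) form."""
    Gp, vp = np.roll(G, -1), np.roll(v, -1)
    # advective + diffusive flux through the face between nodes i and i+1
    F = 0.5*(G*v + Gp*vp) - (Gp - G)/(Pe*dz)
    return G - dt/dz*(F - np.roll(F, 1))

# Toy run with an assumed sinusoidal surface velocity: surfactant accumulates
# where the flow converges, while the total amount is conserved.
N = 64
z = np.arange(N)/N
v = 0.1*np.sin(2*np.pi*z)
G = np.ones(N)
for _ in range(200):
    G = advance_surfactant(G, v, 1.0/N, 1.0e-3, 100.0)
```

Because the update is a telescoping difference of face fluxes on a periodic domain, `G.sum()` is conserved to round-off, whatever velocity field is imposed.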
The above theoretical model is numerically solved by mapping the time-dependent liquid region onto a fixed numerical domain through a coordinate transformation. The hydrodynamic equations are spatially discretized with the Chebyshev spectral collocation technique, and an implicit time advancement is performed using second-order backward finite differences \cite{HM16a}. To deal with the free surface overturning taking place right before the droplet breakup, a quasi-elliptic transformation \citep{DT03} was applied to generate the mesh. To trigger the pendant drop breakup process, a very small force was applied to a stable shape with a volume just below the critical one. This perturbation was expected to affect neither the pendant drop dynamics close to the free-surface pinch-off nor the formation of the satellite droplet. The time-dependent mapping of the physical domain does not allow the algorithm to proceed beyond the free surface pinch-off, and therefore the evolution of the satellite droplet cannot be analyzed.
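The Chebyshev spectral collocation technique relies on differentiation matrices acting on function values at the Gauss--Lobatto nodes. The following is a minimal, self-contained construction of that matrix, following the standard recipe; it is only a sketch of the discretization ingredient, not the actual solver, which also involves the coordinate mapping and the implicit time advancement:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x
    on [-1, 1]; D @ f(x) approximates f'(x) with spectral accuracy."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi*np.arange(N + 1)/N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0])*(-1.0)**np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0/c)/(dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # negative-sum trick for the diagonal
    return D, x

# Differentiation is exact for polynomials of degree <= N
D, x = cheb(8)
```

For instance, `D @ x**3` reproduces $3x^2$ at the nodes to machine precision, which is the property that makes a handful of collocation points sufficient for smooth solutions.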
\section{Experimental method}
\label{sec3}
In the experimental setup (Fig.\ \ref{setup}), a cylindrical feeding capillary (A) $R_0=115$ $\mu$m in outer radius was placed vertically. To analyze the role of the capillary size, we also conducted experiments with $R_0=205$ $\mu$m. A pendant droplet was formed by injecting the liquid at a constant flow rate with a syringe pump (Harvard Apparatus PHD 4400) connected to a stepping motor. We used a high-precision orientation system and a translation stage to ensure the correct position and alignment of the feeding capillary. Digital images of the drop were taken using an ultra-high-speed video camera ({\sc Kirana}-5M) (B) equipped with optical lenses (an Optem HR 50X magnification zoom-objective and a NAVITAR 12X set of lenses) (C). As explained below, the images were acquired either at $5\times 10^6$ fps with a magnification of 101.7 nm/pixel or at $5\times 10^5$ fps with a magnification of 156 nm/pixel. The camera could be displaced both horizontally and vertically using a triaxial translation stage (D) with one of its horizontal axes (axis $x$) motorized (THORLABS Z825B) and controlled by the computer, which allowed us to set the droplet-to-camera distance with an error smaller than 29 nm. The drop was illuminated with a laser (SI-LUX 640, {\sc Specialised Imaging}) (E) synchronized with the camera, which reduced the effective exposure time down to 100 ns. The camera was triggered by an optical trigger (SI-OT3, {\sc Specialised Imaging}) (F) equipped with optical lenses (G) and illuminated with cold white backlight (H). All these elements were mounted on an optical table with a pneumatic anti-vibration isolation system (I) to damp the vibrations coming from the building.
\begin{figure}[h]
\begin{tabular}{lr}
\vcenteredhbox{\resizebox{0.45\textwidth}{!}{\includegraphics{fig2a.pdf}}}&
\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig2b.pdf}}}
\end{tabular}
\caption{(Left) Experimental setup: feeding capillary (A), ultra-high speed video camera (B), optical lenses (C), triaxial translation stage (D), laser (E), optical trigger (F), optical lenses (G), white backlight (H), and anti-vibration isolation system (I). (Right) Spatio-temporal hypervolume analyzed in the experiment: image width $w=94$ $\mu$m, height $h=78$ $\mu$m, depth of field $d=0.48$ $\mu$m and time $\Delta t=36$ $\mu$s
elapsed during the experiment.}
\label{setup}
\end{figure}
In the experiment, a pendant droplet hanging on the feeding capillary was inflated by injecting the liquid at 1 ml/h. The triple contact line was anchored to the outer edge of the capillary. The drop reached its maximum volume stability limit after around 20 s. We analyzed images of the quasi-static process with the Theoretical Image Fitting Analysis (TIFA) method \citep{CBMN04} to verify that the surface tension right before the droplet breakup was the same (within the experimental uncertainty) as that measured at equilibrium. In this way, one can ensure that the surfactant surface concentration corresponded to the prescribed volumetric concentration at equilibrium. This conclusion can be anticipated from the fact that the characteristic surfactant adsorption time is much smaller than the droplet inflation time.
When the maximum volume stability limit was reached, the droplet broke up spontaneously. We recorded 180 images at $5\times 10^6$ fps of the final stage of the breakup process within a spatial window of $94\times 78$ $\mu$m. This experiment was repeated several times to assess the degree of reproducibility of the experimental results. The flow rate at which the pendant droplet was inflated was reduced down to 0.1 ml/h to verify that this parameter did not affect the final stage of the breakup process. In addition, 180 images of a spatial window of $144\times 120$ $\mu$m were taken at $5\times 10^5$ fps to describe the process on a larger scale.
We selected SDS in deionized water (DIW) because it is a solution widely used in experiments and very well characterized. The dependence of the (equilibrium) surface tension on the surface surfactant concentration $\Gamma$ has been determined from direct measurements (Fig.\ \ref{tas}) \citep{TMS70}. We use the fit
\begin{equation}
\label{fit}
\sigma_0=10^3\frac{-17.94\,\Gamma+60.76}{\Gamma^2-240.9\,\Gamma+841.8}
\end{equation}
to those experimental data in our simulations. In this equation, $\sigma_0$ and $\Gamma$ are measured in mN/m and $\mu$mol/m$^2$, respectively. It should be noted that there is no theoretical justification for the above equation of state. It simply represents an accurate approximation for the numerical simulations. Other equations may be equally valid for our purposes.
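The fit can be checked numerically; note, in particular, that at $\Gamma=0$ it returns approximately 72.2 mN/m, i.e., it recovers the surface tension of clean water:

```python
def sigma0_fit(Gamma):
    """Fitted equilibrium surface tension (mN/m) as a function of the
    surface concentration Gamma (umol/m^2) for SDS in DIW."""
    return 1.0e3*(-17.94*Gamma + 60.76)/(Gamma**2 - 240.9*Gamma + 841.8)

clean = sigma0_fit(0.0)    # ~72.2 mN/m: the clean-water surface tension
at_cmc = sigma0_fit(3.19)  # evaluated at the cmc surface concentration
```

This limiting behavior is a useful sanity check of the fit, even though, as stated above, the functional form itself has no theoretical justification.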
\begin{figure}[h]
\begin{tabular}{lr}
\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig3.pdf}}}
\end{tabular}
\caption{Experimental values of the (equilibrium) surface tension $\sigma_0$ versus the surface surfactant concentration $\Gamma$ for SDS in DIW (symbols) \citep{TMS70}. The line corresponds to the fit (\ref{fit}) to those values.}
\label{tas}
\end{figure}
Table \ref{tab2} shows some physical properties of SDS in DIW. The shear $\mu_1^{S*}$ and dilatational $\mu_2^{S*}$ surface viscosities of aqueous solutions of SDS at the cmc have been widely measured with different methods over the last decades. \citet{ZNMLDMTS14} reported the surface shear viscosity to be below $10^{-8}$ Pa\,s\,m (the sensitivity limit of their technique). Other authors have measured values up to five orders of magnitude higher than that upper bound \citep{MK12,LD03}.
\begin{table*}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$\mu_1^{S*}$ (Pa\,s\,m) \citep{ZNMLDMTS14} & $\mu_2^{S*}$ (Pa\,s\,m) \citep{MK12} & ${\cal D}^S$ (m$^2$/s) \citep{MK12} & $t_a$ (ms) \citep{RABK09} & $t_d$ (ms) \citep{MK12} & $\Gamma_{\textin{cmc}}$ ($\mu$mol m$^{-2}$) & $N_{\textin{agg}}$ \citep{CDA08} & $R_{\textin{mic}}$ (nm) \citep{CDA08}\\
\hline
$<10^{-8}$ & $10^{-7}$--$10^{-9}$ & $8\times 10^{-10}$ & 100 & 169.5 & 3.19 & 61 & 1.72\\
\hline
\end{tabular}
\caption{Physical properties of SDS in DIW: superficial viscosities $\mu_{1,2}^{S*}$, surfactant surface diffusivity ${\cal D}^S$, adsorption $t_a$ and desorption $t_d$ times, surface concentration at the cmc $\Gamma_{\textin{cmc}}$, aggregation number $N_{\textin{agg}}$, and micelle radius $R_{\textin{mic}}$.}
\label{tab2}
\end{table*}
Table \ref{tab3} shows the values of the superficial Ohnesorge numbers, Boussinesq numbers $\text{Bq}_{1,2}=\mu_{1,2}^S/(\mu \ell_c)$, and surface Peclet number. The superficial Ohnesorge numbers are much smaller than the volumetric one, $\text{Oh}\simeq 0.02$, which indicates that the superficial viscosities play no significant role at the scale given by the feeding capillary radius $R_0$. The Boussinesq numbers are defined in terms of the characteristic length $\ell_c\equiv 1$ $\mu$m of the pinching region (see Sec.\ \ref{sec4}). Due to the smallness of this length, superficial viscous stresses may become comparable with the bulk ones, and, therefore, may produce a measurable effect on that scale. The value of the Peclet number indicates that surfactant surface diffusion is negligible at the beginning of the droplet breakup. The Peclet number defined in terms of $\ell_c$ and the corresponding capillary time $(\rho\ell_c^3/\sigma_0)^{1/2}$ takes values of the order of $10^3$--$10^4$. Therefore, one can expect surface diffusion to play a secondary role on that scale too.
\begin{table*}
\begin{tabular}{|c|c|c|c|c|}
\hline
Oh$_1^S$ & Oh$_2^S$ & $\text{Bq}_1$ & $\text{Bq}_2$ & $\text{Pe}^S$\\
\hline
$<9.35\times 10^{-4}$ & $9.35\times 10^{-3}$--$9.35\times 10^{-5}$ & $<1.41$ & 14.1--0.14 & $7.73\times 10^{4}$\\
\hline
\end{tabular}
\caption{Dimensionless numbers calculated from the physical properties of SDS in DIW (Table \ref{tab2}): interfacial Ohnesorge numbers Oh$_{1,2}^S$, Boussinesq numbers $\text{Bq}_{1,2}$, and surface Peclet number $\text{Pe}^S$.}
\label{tab3}
\end{table*}
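These dimensionless groups are straightforward to evaluate. The sketch below uses illustrative, assumed inputs, and is not a substitute for the values reported in Table \ref{tab3}:

```python
def boussinesq(mu_s, mu, lc):
    """Boussinesq number Bq = mu^S/(mu*lc): surface viscous stresses
    relative to bulk viscous stresses at the length scale lc."""
    return mu_s/(mu*lc)

def surface_peclet(R0, t0, Ds):
    """Surface Peclet number Pe^S = R0^2/(t0*D^S)."""
    return R0**2/(t0*Ds)

# Illustrative, assumed inputs: R0 = 115 um, t0 ~ 2.1e-4 s, D^S = 8e-10 m^2/s
pe = surface_peclet(115.0e-6, 2.1e-4, 8.0e-10)  # of the order of 1e4-1e5
```

Note that Bq grows as the observation scale $\ell_c$ shrinks, which is the formal statement of the surface-to-volume argument made in the Introduction.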
\section{Results}
\label{sec4}
Figure \ref{imagesW} shows images of the pinch-off of a drop of DIW, DIW+SDS 0.8cmc, and DIW+SDS 2cmc. A microthread forms next to the pinching point when the surfactant is added. The breakup of that microthread produces a tiny subsatellite droplet 1--2 $\mu$m in diameter. This droplet is significantly smaller than that observed in previous experiments with 5-cSt silicone oil in the absence of surfactant, which seems to confirm that the silicone oil subsatellite droplet was formed by viscoelastic effects \citep{RPVHM19}.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.45\textwidth}{!}{\includegraphics{fig4.pdf}}}
\caption{(From top to bottom) Pinch-off of a drop of DIW, DIW+SDS 0.8cmc, and DIW+SDS 2cmc. The labels indicate the time to the pinching with an error of $\pm$100 ns. The arrows point to the subsatellite droplets.}
\label{imagesW}
\end{figure}
Figure \ref{universal} shows the free surface minimum radius, $R_{\textin{min}}$, as a function of the time to the pinching, $\tau$, for experiments conducted with two feeding capillary radii. The agreement among the results obtained for the same liquid shows both the high reproducibility of the experiments and the universal character (independence of $R_0$) of $R_{\textin{min}}(\tau)$ over the analyzed time interval. In fact, the differences between the results obtained with $R_0=115$ and $205$ $\mu$m are smaller than the effect attributed to the surface viscosities, as will be described below. The results for DIW follow the scaling law (\ref{ebbb}) with $A\simeq 0.55$.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig5.pdf}}}
\caption{$R_{\textin{min}}(\tau)$ for the breakup of a pendant drop of DIW and DIW+SDS 0.8cmc. The black and blue symbols are the experimental data for DIW and DIW+SDS 0.8cmc, respectively. The different symbols correspond to experiments visualized with different magnifications and recording speeds. The open and solid symbols correspond to experiments conducted with a cylindrical feeding capillary $R_0=115$ and $205$ $\mu$m in radius, respectively. The solid line is the power law (\ref{ebbb}) with $A\simeq 0.55$.}
\label{universal}
\end{figure}
As can be seen in Figs.\ \ref{W08} and \ref{W2}, there is a remarkable agreement between the experiments and numerical simulations for the pure DIW case for times to the pinching as small as $\sim 300$ ns, which constitutes a stringent validation of both experiments and simulations. When SDS is dissolved in water, it forms a monolayer that substantially alters the pinch-off dynamics. The function $R_{\textin{min}}(\tau)$ takes smaller values than in the pure DIW case over the entire process due to the reduction of the surface tension. More interestingly, if only solutocapillarity and Marangoni convection are considered in the numerical simulations (blue solid lines), there is a measurable deviation with respect to the experimental results for $R_{\textin{min}}(\tau)\lesssim 5$ $\mu$m. Specifically, the free surface in the experiment evolves towards its pinching more slowly than in the numerical simulation. We added surface viscous stresses to the simulation to reproduce the entire range of experimental data. To this end, we set to zero one of the surface viscosities and modulated the other. In this way, one can establish upper bounds of both the shear $\mu_1^{S*}$ and dilatational $\mu_2^{S*}$ viscosity.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig6a.pdf}}}\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig6b.pdf}}}
\caption{$R_{\textin{min}}(\tau)$ for the breakup of a pendant drop of DIW and DIW+SDS 0.8cmc. The black and blue symbols are the experimental data for DIW and DIW+SDS 0.8cmc, respectively. The different symbols correspond to experiments visualized with different magnifications. The black solid line and magenta dashed line correspond to the simulation and the power law $R_{\textin{min}}(\tau)\sim \tau^{2/3}$ for DIW, respectively. (Left) The colored solid lines correspond to simulations of DIW+SDS 0.8cmc for $\mu_2^{S*}=0$ and $\mu_1^{S*}=0$ (blue), $5 \times 10^{-10}$ (red), and $3.5 \times 10^{-9}$ Pa\,s\,m (cyan). (Right) The colored solid lines correspond to simulations of DIW+SDS 0.8cmc for $\mu_1^{S*}=0$ and $\mu_2^{S*}=0$ (blue), $3.5 \times 10^{-9}$ (red), $10^{-8}$ (cyan), and $10^{-7}$ Pa\,s\,m (green).}
\label{W08}
\end{figure}
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig7a.pdf}}}\vcenteredhbox{\resizebox{0.3\textwidth}{!}{\includegraphics{fig7b.pdf}}}
\caption{$R_{\textin{min}}(\tau)$ for the breakup of a pendant drop of DIW and DIW+SDS 2cmc. The black and blue symbols are the experimental data for DIW and DIW+SDS 2cmc, respectively. The different symbols correspond to experiments visualized with different magnifications. The black solid line and magenta dashed line correspond to the simulation and the power law $R_{\textin{min}}(\tau)\sim \tau^{2/3}$ for DIW, respectively. (Left) The colored solid lines correspond to simulations of DIW+SDS 2cmc for $\mu_2^{S*}=0$ and $\mu_1^{S*}=0$ (blue), $5 \times 10^{-10}$ (red), and $3.5 \times 10^{-9}$ Pa\,s\,m (cyan). (Right) The colored solid lines correspond to simulations of DIW+SDS 2cmc for $\mu_1^{S*}=0$ and $\mu_2^{S*}=0$ (blue), $3.5 \times 10^{-9}$ (red), $10^{-8}$ (cyan), and $10^{-7}$ Pa\,s\,m (green).}
\label{W2}
\end{figure}
The experimental results can be reproduced for $\mu_1^{S*}=5 \times 10^{-10}$ Pa\,s\,m and $\mu_2^{S*}=0$ (see Figs.\ \ref{W08}-left and \ref{W2}-left). This upper bound is consistent with the results obtained by \citet{ZNMLDMTS14}, who concluded that the surface shear viscosity of SDS in DIW must take values below $10^{-8}$ Pa\,s\,m (the sensitivity limit of their technique). The experimental results can also be reproduced for $\mu_1^{S*}=0$ and $\mu_2^{S*}=3.5 \times 10^{-9}$ Pa\,s\,m (Figs.\ \ref{W08}-right and \ref{W2}-right). There are significant deviations when other values of $\mu_2^{S*}$ found in the literature are considered \citep{MK12}. The optimum value of the shear viscosity is one order of magnitude smaller than that of the dilatational viscosity, which suggests that shear viscous stresses have a greater effect on the pinching than dilatational ones for the same value of the corresponding surface viscosities. In fact, when the surface shear viscosity takes the value of the dilatational viscosity ($\mu_1^{S*}=3.5 \times 10^{-9}$ Pa\,s\,m, $\mu_2^{S*}=0$) the numerical curve (cyan solid line in Figs.\ \ref{W08}-left and \ref{W2}-left) significantly deviates from the experimental one. The relative importance of the shear and dilatational viscosities can be explained in terms of the equivalence between the corresponding terms in the 1D approximation discussed below. The agreement achieved for DIW+SDS 2cmc is slightly worse than that obtained for DIW+SDS 0.8cmc probably because the experimental surface tension values are less accurate for concentrations larger than the cmc (see Fig.\ \ref{tas}).
Equation (\ref{stress}) shows the competition between the Marangoni stress, $\text{M}\equiv {\bf t}\cdot\bnabla^S\widehat{\sigma}$, and the tangential projection of the surface viscous stress,
\begin{eqnarray}
\text{SV}\equiv {\bf t}\cdot\left[\bnabla^S\cdot\{ \text{Oh}_1^S[\bnabla^S\mathbf{v}^S+(\bnabla^S\mathbf{v}^S)^\top]\}-\bnabla^S(\text{Oh}_1^S\bnabla^S\cdot\mathbf{v}^S)\right]\quad \text{and} \quad \text{DV}\equiv {\bf t}\cdot\left[\bnabla^S(\text{Oh}_2^S\bnabla^S\cdot\mathbf{v}^S)\right],
\label{svis}
\end{eqnarray}
where SV and DV are the (dimensionless) contributions associated with the shear and dilatational surface viscosities, respectively. Figures \ref{distributionshear} and \ref{distributiondilatational} show the axial distribution of the tangential stresses, surfactant surface concentration, and free surface radius at a given instant of the droplet evolution. In Fig.\ \ref{distributionshear}, we compare the solution for $\mu_1^{S*}=\mu_2^{S*}=0$ with that for $\mu_2^{S*}=0$ and the optimum value of the shear surface viscosity determined from Fig.\ \ref{W08}-left, $\mu_1^{S*}=5 \times 10^{-10}$ Pa\,s\,m. The same comparison is presented in Fig.\ \ref{distributiondilatational} but for $\mu_1^{S*}=0$ and the optimum value of the dilatational surface viscosity determined from Fig.\ \ref{W08}-right, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m. The instants were selected so that $R_{\textin{min}}$ took approximately the same value in the simulations with and without surface viscosities.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig8a.pdf}}}\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig8b.pdf}}}
\caption{Axial distribution of the Marangoni stress (M) and tangential shear viscous stress (SV) (a), surfactant surface concentration (b), and free surface radius (c) for DIW+SDS 0.8cmc. The solid lines are the results for $\{\mu_1^{S*}=5 \times 10^{-10}$ Pa\,s\,m, $\mu_2^{S*}=0\}$, while the dotted lines correspond to $\mu_1^{S*}=\mu_2^{S*}=0$ (in the left-hand graphs, $R_{\textin{min}}=0.9836$ $\mu$m for $\mu_1^{S*}=\mu_2^{S*}=0$).}
\label{distributionshear}
\end{figure}
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig9a.pdf}}}\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig9b.pdf}}}
\caption{Axial distribution of the Marangoni stress (M) and tangential dilatational viscous stress (DV) (a), surfactant surface concentration (b), and free surface radius (c) for DIW+SDS 0.8cmc. The solid lines are the results for $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$, while the dotted lines correspond to $\mu_1^{S*}=\mu_2^{S*}=0$ (in the right-hand graphs, $R_{\textin{min}}=0.32$ $\mu$m for $\mu_1^{S*}=\mu_2^{S*}=0$).}
\label{distributiondilatational}
\end{figure}
Consider the solution for $\{\mu_1^{S*}=5 \times 10^{-10}$ Pa\,s\,m, $\mu_2^{S*}=0\}$ (Fig.\ \ref{distributionshear}). For $R_{\textin{min}}=0.9439$ $\mu$m, the shear viscous stress is much smaller than the Marangoni stress over the entire free surface. As the minimum radius decreases, the relative importance of the shear viscosity increases. In fact, the maximum value of the shear viscous stress becomes comparable to that of the Marangoni stress for $R_{\textin{min}}=0.32$ $\mu$m. Small differences in the surfactant distribution arise for $R_{\textin{min}}\lesssim 0.32$ $\mu$m. The presence of shear viscosity slightly reduces the magnitude of the Marangoni stress.
As mentioned in the Introduction, there is still a certain controversy about whether surfactants are convected away from the pinching region \citep{TL02,CMP02,LSFB04,LFB06,LB07,CMP09,RABK09,CMCP11a,PMHVV17}. Our results show that, when Marangoni and surface viscosity stresses are taken into account, the surfactant is not swept away from the thread neck in the time interval analyzed ($\widehat{\Gamma}\gtrsim 0.8$ in this region). These stresses operate in different ways but cooperate to keep the surfactant in the vicinity of the pinching point. Marangoni stress tries to restore the initial uniform surfactant concentration, while surface viscosity opposes the variation of the surface velocity and, therefore, the extensional flow responsible for the surfactant depletion that would occur in the absence of Marangoni and viscous stresses. While the gradient of surfactant concentration remains bounded in the pinching region, the gradient of surface velocity continues to increase there (Fig.\ \ref{maximum}). This may explain why surface viscous stresses grow faster than Marangoni stress over the time interval analyzed.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig10.pdf}}}
\caption{Maximum values of the surfactant gradient, max($\boldsymbol{\nabla}^S\widehat{\Gamma}$) (solid symbols), and the surface velocity gradient, max($\boldsymbol{\nabla}^S\cdot{\bf v}^S$) (open symbols), for $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$.}
\label{maximum}
\end{figure}
Interestingly, the free surface shape for $\mu_1^{S*}=\mu_2^{S*}=0$ is practically the same as that with the adjusted value of $\mu_1^{S*}$. This indicates that surface viscosity simply delays the time evolution of that shape. In fact, the values of the minimum radius obtained with and without surface viscosity significantly differ from each other when they are calculated at the same time to the pinching. For instance, $R_{\textin{min}}=0.32$ and 0.58 $\mu$m at $\tau\simeq 0.36$ $\mu$s for $\{\mu_1^{S*}=5 \times 10^{-10}$ Pa\,s\,m, $\mu_2^{S*}=0\}$ and $\mu_1^{S*}=\mu_2^{S*}=0$, respectively. However, the free surface shapes are practically the same if they are compared when the same value $R_{\textin{min}}=0.32$ $\mu$m of the minimum radius is reached. We can conclude that the surface viscosities of the SDS monolayer hardly alter the satellite droplet diameter and the amount of surfactant trapped in it. In this sense, solutocapillarity and Marangoni convection are the major factors associated with the surfactant \citep{KWTB18}.
Similar conclusions can be drawn from the numerical simulation conducted for $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$ (Fig.\ \ref{distributiondilatational}). In this case, the dilatational viscous stress exhibits a noticeable maximum near the free surface neck. The full width at half maximum, $\Delta z$, measured in terms of the minimum radius, $R_{\textin{min}}$, sharply increases as the droplet approaches its breakup (Fig.\ \ref{dz}), which indicates that the importance of the dilatational viscous stress increases with time.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.36\textwidth}{!}{\includegraphics{fig11.pdf}}}
\caption{Full width at half maximum, $\Delta z$, of the dilatational viscous stress as a function of the minimum radius $R_{\textin{min}}$ for DIW+SDS 0.8cmc with $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$.}
\label{dz}
\end{figure}
Figure \ref{vs} shows the velocity ${\bf v}^S=v^S {\bf t}$ along the free surface as the droplet approaches its breakup for the case $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$. As can be observed, the maximum of $v^S(z)$ exhibits a non-monotonic behavior with respect to the time to the pinching, and is located at the free surface neck. The difference between the maximum and minimum values of $v^S(z)$ increases with time, and so does the average dilatational stress in the pinching region. The overturning of the free surface is observed for $R_{\textin{min}}\lesssim 0.3$ $\mu$m. For this reason, $v^S(z)$ becomes a multivalued function on the right side of the free surface neck.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.4\textwidth}{!}{\includegraphics{fig12.pdf}}}
\caption{Surface velocity $v^S(z)$ (a) and free surface radius $R(z)$ (b) for DIW+SDS 0.8cmc with $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$. The dashed vertical lines indicate the position of the free surface neck.}
\label{vs}
\end{figure}
We now study how the scaling of the minimum radius depends on the surface viscosities. In general, we have $R_{\textin{min}}=f(\tau,\mu_{1,2}^{S*})$. Assume that we can write this equation in the form $R_{\textin{min}}=R_s H(\tau/\tau_s)$, where $R_s$ and $\tau_s$ are the length and time scales associated with the surface viscosities, respectively. We suppose that these scales depend on the viscosities as
\begin{equation}
\label{scaling}
R_s=A (\mu_{1,2}^{S*})^{\alpha}, \quad\tau_s=B (\mu_{1,2}^{S*})^{\beta}.
\end{equation}
The cross-over function $H(\xi)$ behaves as $H(\xi)\sim\xi^{2/3}$ for $\xi\gg 1$ (inviscid limit) and $H(\xi)\sim\xi^{\gamma}$ for $\xi\ll 1$ (viscous regime), with a crossover at $\xi\sim 1$. Therefore, $R_{\textin{min}}=A B^{-2/3} (\mu_{1,2}^{S*})^{\alpha-2\beta/3}\tau^{2/3}$ in the inviscid limit. Since $R_{\textin{min}}\sim \tau^{2/3}$ must be independent of the surface viscosities in that limit, we conclude that $\alpha=2\beta/3$.
The value of the exponents can be guessed from the balance of forces. Both Marangoni and surface viscous stresses delay the free surface pinch-off (Figs.\ \ref{W08} and \ref{W2}), acting against the driving capillary force. For sufficiently small values of $R_{\textin{min}}$, the effect of surface viscous stresses becomes comparable with, and even larger than, that caused by the Marangoni stress (Figs.\ \ref{distributionshear} and \ref{distributiondilatational}). The value of $R_{\textin{min}}$ below which this occurs decreases as the surface viscosities decrease. For instance, Marangoni and surface viscous stresses produce similar effects for $R_{\textin{min}}\lesssim 2$ $\mu$m and $R_{\textin{min}}\lesssim 0.15$ $\mu$m in the cases $\{\mu_1^{S*}=0$, $\mu_2^{S*}=10^{-7}$ Pa\,s\,m$\}$ and $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$, respectively. Therefore, we expect surface viscous stresses to be commensurate with the driving capillary pressure in the pinch-off region for those intervals of $R_{\textin{min}}$. In fact, the interfacial Ohnesorge numbers $\text{Oh}_{1,2}^{S*}$ defined in terms of $R_{\textin{min}}$ take values of at least order unity in those intervals.
The balance between the capillary pressure and the surface viscous stresses in Eq.\ (\ref{NormalStress1}) yields
$\sigma_0/R_s\sim \mu_{1,2}^{S*}/(R_s\tau_s)$, where we have taken into account that the variation of surface velocity scales as $(R_s/\tau_s)/R_s$ due to the continuity equation. The above balance allows us to conclude that $\beta=1$, and therefore $\alpha=2/3$. According to our analysis,
\begin{equation}
\frac{R_{\textin{min}}}{(\mu_{1,2}^{S*})^{2/3}}\sim \left(\frac{\tau}{\mu_{1,2}^{S*}}\right)^{\gamma}
\end{equation}
in the viscous regime.
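The exponent algebra above can be checked with a short numerical sketch (a toy check, not part of the simulations; the prefactors $A$ and $B$ are set to unity for illustration). Imposing $\alpha=2\beta/3$ with $\beta=1$ makes the inviscid branch $R_s(\tau/\tau_s)^{2/3}$ reduce to $\tau^{2/3}$, independent of the surface viscosity:

```python
def R_min_inviscid(tau, mu, A=1.0, B=1.0, beta=1.0):
    """Inviscid branch of R_min = R_s * H(tau/tau_s) with H(xi) = xi^(2/3),
    R_s = A * mu^alpha, tau_s = B * mu^beta, and alpha = 2*beta/3."""
    alpha = 2.0 * beta / 3.0
    R_s = A * mu ** alpha
    tau_s = B * mu ** beta
    return R_s * (tau / tau_s) ** (2.0 / 3.0)

# with alpha = 2*beta/3 the viscosity cancels: R_min ~ tau^(2/3) for any mu
tau = 0.01
for mu in (1e-9, 1e-7, 1e-5):
    assert abs(R_min_inviscid(tau, mu) - tau ** (2.0 / 3.0)) < 1e-12
```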
In the 1D (slenderness) approximation \citep{E97}, the axial forces per unit volume due to the shear and dilatational surface viscosities are $(9\mu_1^S R w_z)_z/2R^{2}$ and $(\mu_2^S R w_z)_z/2R^{2}$ \citep{MS18}, respectively, where $w$ is the $z$-component of the velocity and the subscript $z$ indicates the derivative with respect to the coordinate $z$. As can be seen, the terms corresponding to the shear and dilatational viscosities differ only by a factor of 9. Therefore, the asymptotic behavior of $R_{\textin{min}}(\tau)$ for $\{\mu_1^{S*}=a$, $\mu_2^{S*}=0\}$ ($a$ is an arbitrary constant) is expected to be the same as that for $\{\mu_1^{S*}=0$, $\mu_2^{S*}=9a\}$. As will be seen below, this allows us to group the simulation results for $\mu_1^{S*}\neq 0$ and $\mu_2^{S*}\neq 0$.
Using the equivalence $9\mu_1^S\leftrightarrow \mu_2^S$, we find the values of the exponents $\beta$ and $\gamma$ leading to the collapse of all the numerical data for $R_{\textin{min}}\to 0$. Following the optimization method described by \citet{MG20}, the best collapse is obtained for $\beta=1.1$ and $\gamma=1.4$. Figure \ref{ssr} shows the results scaled with the exponents $\beta=1$ and $\alpha=2/3$ calculated in the previous analysis. As explained above, we have grouped the results for nonzero shear and dilatational viscosities using the factor of 9 suggested by the 1D model. The simulations show the transition from the inertio-capillary regime $R_{\textin{min}}\sim \tau^{2/3}$ to the asymptotic behavior given by the power law with exponent $\gamma=3/2$.
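The expected data collapse can be illustrated with a toy crossover function possessing the two asymptotic behaviors, $H(\xi)\sim\xi^{3/2}$ for $\xi\ll 1$ and $H(\xi)\sim\xi^{2/3}$ for $\xi\gg 1$. The specific form of $H$ below is an illustrative choice, not the function realized by the simulations:

```python
import math

def H_model(xi):
    """Toy crossover: H ~ xi^(3/2) for xi << 1, H ~ xi^(2/3) for xi >> 1."""
    return xi ** 1.5 / (1.0 + xi ** (5.0 / 6.0))

def r_min(tau, mu):
    # R_min = R_s * H(tau/tau_s) with R_s = mu^(2/3), tau_s = mu (A = B = 1)
    return mu ** (2.0 / 3.0) * H_model(tau / mu)

def log_slope(xi):
    """Local logarithmic slope of H, d(ln H)/d(ln xi), by finite differences."""
    eps = 1e-6
    return (math.log(H_model(xi * (1 + eps))) - math.log(H_model(xi))) \
        / math.log(1 + eps)

# the scaled curves R_min/mu^(2/3) vs tau/mu coincide for different viscosities
for xi in (1e-4, 1e-2, 1.0, 1e2, 1e4):
    y1 = r_min(xi * 1e-9, 1e-9) / (1e-9) ** (2.0 / 3.0)
    y2 = r_min(xi * 1e-7, 1e-7) / (1e-7) ** (2.0 / 3.0)
    assert abs(y1 - y2) / y1 < 1e-9

# asymptotic exponents: gamma = 3/2 (viscous) and 2/3 (inertio-capillary)
assert abs(log_slope(1e-8) - 1.5) < 0.01
assert abs(log_slope(1e8) - 2.0 / 3.0) < 0.01
```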
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.425\textwidth}{!}{\includegraphics{fig13.pdf}}}
\caption{Dimensionless minimum radius $R_{\textin{min}}/R_0$ as a function of the dimensionless time to breakup, $\tau/t_0$, for the breakup of a pendant drop of DIW+SDS 0.8 cmc. The labels indicate the values of the nonzero shear/dilatational viscosity in each case.}
\label{ssr}
\end{figure}
The axial distributions of the capillary pressure and the dilatational viscous stress are shown in Fig.\ \ref{stresses} for the cases $\{\mu_1^{S*}=0$, $\mu_2^{S*}=10^{-7}$ Pa\,s\,m$\}$ and $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$. As can be observed, the dilatational viscous stress becomes comparable to the driving capillary pressure for $R_{\textin{min}}\lesssim 2$ $\mu$m and $R_{\textin{min}}\lesssim 0.15$ $\mu$m in the cases $\mu_2^{S*}=10^{-7}$ Pa\,s\,m and $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m, respectively. This explains the good agreement between the numerical simulations and the scaling proposed above for the minimum radius.
\begin{figure}[h]
\vcenteredhbox{\resizebox{0.35\textwidth}{!}{\includegraphics{fig14a.pdf}}}\vcenteredhbox{\resizebox{0.36\textwidth}{!}{\includegraphics{fig14b.pdf}}}
\caption{Axial distribution of the capillary stress $\text{Pc}=\hat{\sigma}\kappa$ (blue lines) and normal dilatational viscous stress $\widehat{\text{DV}}=\text{Oh}_2^S(\boldsymbol{\nabla}^S\cdot\mathbf{v}^S) \kappa$ (red lines) for DIW+SDS 0.8 cmc at three instants, indicated by the value of $R_{\textin{min}}$. The left-hand and right-hand graphs correspond to $\{\mu_1^{S*}=0$, $\mu_2^{S*}=10^{-7}$ Pa\,s\,m$\}$ and $\{\mu_1^{S*}=0$, $\mu_2^{S*}=3.5\times 10^{-9}$ Pa\,s\,m$\}$, respectively.}
\label{stresses}
\end{figure}
\section{Conclusions}
\label{sec5}
We studied both numerically and experimentally the breakup of a pendant water droplet loaded with SDS. We measured a delay of the droplet breakup with respect to that predicted when only solutocapillarity and Marangoni stress are accounted for. This delay is attributed to the role played by the surface viscosities. When Marangoni and surface viscous stresses are accounted for, surface convection does not sweep the surfactant away from the thread neck, at least over the time interval analyzed. The results show that surface viscous stresses have little influence on both the surfactant distribution along the free surface and the free surface position. Therefore, the size of the satellite droplet and the amount of surfactant accumulated in it are hardly affected by the surface viscosities. These results differ from those obtained for a much more viscous surfactant \citep{PMHVV17}. As the free surface approaches breakup, the inertio-capillary regime gives way to one in which surface viscous stresses become commensurate with the driving capillary pressure. We have proposed a scaling law to account for the effect of the surface viscosities on $R_{\textin{min}}(\tau)$ in this last regime.
The pinching of an interface is a singular phenomenon that allows us to test theoretical models under extreme conditions. The vanishing spatiotemporal scales reached by the system as the interface approaches its breakup unveil physical effects hidden in phenomena occurring on much larger scales; this work is a case in point. Surface viscous stresses become relevant in the vicinity of the pinching region long before thermal fluctuations become significant \citep{ML00,E02}, even for practically inviscid surfactants such as SDS. In this sense, the surfactant-laden pendant droplet can be seen as a very sensitive surfactometer for determining the values of the surface viscosities, which constitutes a difficult problem \citep{ELS16}. A series of experiments for different surfactant concentrations and needle radii may lead to accurate measurements of $\mu_1^{S}(\Gamma)$ and $\mu_2^{S}(\Gamma)$ characterizing the behavior of low-viscosity surfactants.
\vspace{1cm}
Partial support from the Ministerio de Econom\'{\i}a y Competitividad and the Junta de Extremadura (Spain) through Grant Nos. DPI2016-78887 and GR18175 is gratefully acknowledged.
\section{Repulsive Gravity and Cosmic Acceleration}
\IndexEntry{darkenergyIntro}
In the first modern cosmological model, Einstein\cite{de_einstein17}
\footnote{}{Chapter 25 from
the Particle Data Group Review of Particle Physics, 2013 partial update for
the 2014 edition. Other chapters referred to in this review can be
found at {\tt http://pdg.lbl.gov}.}
modified his field equation of General Relativity (GR), introducing a
``cosmological term'' that enabled a solution with time-independent,
spatially homogeneous matter density $\rho_{\rm m}$ and constant positive
space curvature.
Although Einstein did not frame it this way, one can view the
``cosmological constant'' $\Lambda$ as representing a constant energy
density of the vacuum\cite{de_zeldovich68},
whose repulsive gravitational effect balances the attractive gravity
of matter and thereby allows a static solution.
After the development of dynamic cosmological
models\cite{de_friedmann}\cite{de_lemaitre} and the discovery
of cosmic expansion\cite{de_hubble}, the cosmological term appeared
unnecessary, and Einstein and de Sitter\cite{de_EdS} advocated
adopting an expanding, homogeneous and isotropic, spatially flat,
matter-dominated universe as the default cosmology until observations
dictated otherwise. Such a model has matter density equal to the
critical density, $\Omega_{\rm m} \equiv \rho_{\rm m}/\rho_{\rm c} = 1$, and negligible
contribution from other energy components\cite{de_pdgbbc}.
By the mid-1990s, Big Bang cosmology was convincingly established, but
the Einstein-de Sitter model was showing numerous
cracks, under the combined onslaught of data from the cosmic microwave
background (CMB), large scale galaxy clustering, and direct estimates
of the matter density, the expansion rate ($H_0$), and the age
of the Universe.
Introducing a cosmological constant offered a potential resolution
of many of these tensions.
In the late 1990s, supernova surveys by two
independent teams provided direct evidence for accelerating
cosmic expansion\cite{de_riess98}\cite{de_perlmutter99}, establishing
the cosmological constant model
(with $\Omega_{\rm m} \approx 0.3$, $\Omega_\Lambda \approx 0.7$)
as the preferred alternative to the $\Omega_{\rm m}=1$ scenario.
Shortly thereafter, CMB evidence for a spatially flat
universe\cite{de_debernardis00}\cite{de_hanany00}, and thus for
$\Omega_{\rm tot} \approx 1$, cemented the case for
cosmic acceleration by firmly eliminating the free-expansion
alternative with $\Omega_{\rm m} \ll 1$ and $\Omega_\Lambda = 0$.
Today, the accelerating universe is well established by multiple
lines of independent evidence from a tight web of precise
cosmological measurements.
As discussed in the Big Bang Cosmology article of this {\it Review}
(Sec.~\use{Chap.bigbangrpp}), the scale factor $R(t)$ of a homogeneous
and isotropic universe governed by GR grows at an accelerating rate
if the pressure $p < -{1\over 3}\rho$. A cosmological constant has
$\rho_\Lambda=\,{\rm const.}$ and pressure $p_\Lambda = -\rho_\Lambda$
(see Eq.~\use{Chap.bigbangrpp}.10), so it will drive acceleration
if it dominates the total energy density. However, acceleration
could arise from a more general form of ``dark energy'' that has
negative pressure, typically specified in terms of the
equation-of-state-parameter $w = p/\rho$ ($=-1$ for a cosmological
constant). Furthermore, the conclusion that acceleration requires
a new energy component beyond matter and radiation relies
on the assumption that GR is the correct description of gravity on
cosmological scales. The title of this article follows the common but inexact
usage of ``dark energy'' as a catch-all term for the origin of
cosmic acceleration, regardless of whether it arises from a new
form of energy or a modification of GR.
Our account here draws on the much longer review of cosmic
acceleration by Ref.\cite{de_weinberg13}, which provides background
explanation and extensive literature references for most of the
points in this article, but is less up to date in its description
of current empirical constraints.
\pagecheck{0.333333\vsize}
Below we will use the abbreviation $\Lambda$CDM to refer to a model
with cold dark matter, a cosmological constant, inflationary
initial conditions, and standard radiation and neutrino content.
We will use ``flat $\Lambda$CDM'' to further specify a flat universe
with $\Omega_{\rm tot}=1$. We will use $w$CDM to denote a model with the
same assumptions (including flatness) but a free, constant value of $w$.
\section{Theories of Cosmic Acceleration}
\IndexEntry{darkenergyTheory}
\subsection{Dark Energy or Modified Gravity?}
A cosmological constant is the mathematically simplest, and
perhaps the physically simplest, theoretical explanation for
the accelerating universe. The problem is explaining its unnaturally
small magnitude, as discussed in Sec.~\use{Chap.bigbangrpp}.4.7
of this {\it Review}.
An alternative (which still requires finding a way to make the
cosmological constant zero or at least negligibly small) is that the
accelerating cosmic expansion is driven by a new form of
energy such as a scalar field\cite{de_ratra88} with potential $V(\phi)$.
The energy density and pressure of the field $\phi({\bf x})$
take the same forms as for inflationary scalar fields, given
in Eq.~(\use{Chap.bigbangrpp}.52) of the Big Bang Cosmology article.
In the limit that
${1\over 2}\dot{\phi}^2 \ll |V(\phi)|$, the scalar field acts
like a cosmological constant, with $p_\phi \approx - \rho_\phi$.
In this scenario, today's cosmic acceleration is closely
akin to the epoch of inflation, but with radically different energy
and timescale.
More generally, the value of $w = p_\phi/\rho_\phi$ in scalar field
models evolves with time
in a way that depends on $V(\phi)$ and on the initial conditions
$(\phi_i,\dot{\phi}_i)$; some forms of $V(\phi)$ have attractor
solutions in which the late-time behavior is insensitive to initial
values. Many forms of time evolution are possible, including ones
where $w$ is approximately constant and broad classes where $w$
``freezes'' towards or ``thaws'' away from $w = -1$, with the transition
occurring when the field comes to dominate the total energy budget.
If $\rho_\phi$ is even approximately constant, then it becomes
dynamically insignificant at high redshift, because the matter density
scales as $\rho_{\rm m} \propto (1+z)^3$. ``Early dark energy'' models are
ones in which $\rho_\phi$ is a small but not negligible fraction
(\hbox{\it e.g.}, a few percent) of the total energy throughout the matter and
radiation dominated eras, tracking the dominant component before
itself coming to dominate at low redshift.
Instead of introducing a new energy component, one can attempt to
modify gravity in a way that leads to accelerated
expansion\cite{de_jain10}.
One option is to replace the
Ricci scalar ${\cal R}$ with a function ${\cal R}+f({\cal R})$
in the gravitational action\cite{de_carroll04}.
Other changes can be more radical, such as introducing extra dimensions
and allowing gravitons to ``leak'' off the brane that represents
the observable universe (the ``DGP'' model\cite{de_dvali00}).
The DGP example has
inspired a more general class of ``galileon'' and massive gravity models.
Constructing viable modified gravity models is challenging, in part
because it is easy to introduce theoretical inconsistencies
(such as ``ghost'' fields with negative kinetic energy)
but above all because GR is a theory with many high-precision empirical
successes on solar system scales\cite{de_will06}.
Modified gravity models typically invoke screening mechanisms
that force model predictions to approach those of GR in regions of
high density or strong gravitational potential.
Screening offers potentially distinctive signatures,
as the strength of gravity (\ie, the effective value of $G_{\rm N}$)
can vary by order unity in environments with different gravitational
potentials.
More generally, one can search for signatures of modified gravity by
comparing the history of cosmic structure growth to the history
of cosmic expansion. Within GR, these two are linked by a consistency
relation, as described below (\Eq{ede:growth}).
Modifying gravity can change the predicted rate of structure
growth, and it can make the growth rate dependent on scale or
environment. In some circumstances, modifying gravity alters
the combinations of potentials responsible for gravitational lensing
and the dynamics of non-relativistic
tracers (such as galaxies or stars)
in different ways (see Sec.~\use{Chap.bigbangrpp}.4.7
in this {\it Review}), leading to
order unity mismatches between the masses of objects inferred from
lensing and those inferred from dynamics in unscreened environments.
At present there are no fully realized and empirically viable modified
gravity theories that explain the observed level of cosmic acceleration.
The constraints on $f({\cal R})$ models now force
them so close to GR that they cannot produce acceleration without
introducing a separate dark energy component\cite{de_deathofchameleon}.
The DGP model is empirically ruled out by several tests,
including the expansion history, the integrated Sachs-Wolfe effect,
and redshift-space distortion measurements of the structure growth
rate\cite{de_deathofdgp}.
The elimination of these models should be considered an important
success of the program to empirically test theories of cosmic acceleration.
However, it is worth recalling that
there was no fully realized gravitational explanation for
the precession of Mercury's orbit prior to the completion of GR in
1915, and the fact that no complete and viable modified gravity
theory exists today does not mean that one will not arise in the future.
In the meantime, we can continue empirical investigations that can
tighten restrictions on such theories
or perhaps point towards the gravitational sector
as the origin of accelerating expansion.
\subsection{Expansion History and Growth of Structure}
The main line of empirical attack on dark energy is to measure the
history of cosmic expansion and the history of matter clustering
with the greatest achievable precision over a wide range of redshift.
Within GR, the expansion rate $H(z)$ is governed
by the Friedmann equation (see the articles on Big Bang Cosmology
and Cosmological Parameters---Secs.~\use{Chap.bigbangrpp}
and~\use{Chap.hubblerpp} in this {\it Review}).
For dark energy with an equation of state $w(z)$,
the cosmological constant contribution to the expansion, $\Omega_{\Lambda}$,
is replaced by a redshift-dependent contribution with the evolution
of the dark energy density following from Eq.~(\use{Chap.bigbangrpp}.10),
$$
\Omega_{\rm DE}{\rho_{\rm DE}(z) \over \rho_{\rm DE}(z=0)} = \Omega_{\rm DE} \exp \left[ 3\int_0^z [1+w(z')]
{dz' \over 1+z'} \right] ~=~ \Omega_{\rm DE}(1+z)^{3(1+w)},
\EQN{ede:rhode}
$$
where the second equality holds for constant $w$.
If $\Omega_{\rm m}$, $\Omega_{\rm r}$, and the present value of
$\Omega_{\rm tot}$ are known,
then measuring $H(z)$ pins down $w(z)$.
(Note that $\Omega_{\rm DE}$ is the same quantity denoted $\Omega_{\rm v}$
in Sec.~\use{Chap.bigbangrpp}, but we have adopted the DE subscript
to avoid implying that dark energy is necessarily a vacuum effect.)
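The consistency of the two forms in the equation above can be verified numerically; the following sketch (an illustration, not part of any analysis pipeline) evaluates the general integral with a simple trapezoidal rule and recovers the constant-$w$ closed form:

```python
import math

def rho_de_ratio(z, w_of_z, n=20000):
    """rho_DE(z)/rho_DE(0) = exp[3 * int_0^z (1 + w(z'))/(1 + z') dz'],
    evaluated with a trapezoidal rule on n subintervals."""
    zs = [z * i / n for i in range(n + 1)]
    f = [3.0 * (1.0 + w_of_z(zp)) / (1.0 + zp) for zp in zs]
    integral = sum((f[i] + f[i + 1]) * (zs[i + 1] - zs[i]) / 2.0
                   for i in range(n))
    return math.exp(integral)

# constant w: the integral closes to (1+z)^{3(1+w)}
w, z = -0.9, 1.0
assert abs(rho_de_ratio(z, lambda zp: w) - (1.0 + z) ** (3.0 * (1.0 + w))) < 1e-6

# a cosmological constant (w = -1) keeps the density constant
assert abs(rho_de_ratio(2.0, lambda zp: -1.0) - 1.0) < 1e-12
```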
While some observations can probe $H(z)$ directly, others measure
the distance-redshift relation. The basic relations between angular diameter
distance or luminosity distance and $H(z)$ are given in
Ch.~\use{Chap.bigbangrpp} ---and
these are generally unaltered in time-dependent dark energy or
modified gravity models. For convenience, in later sections,
we will sometimes refer to the comoving angular distance,
$D_{\rm A,c}(z) = (1+z) D_{\rm A}(z)$.
In GR-based linear perturbation theory, the density contrast
$\delta({\bf x},t) \equiv \rho({\bf x},t)/\bar\rho(t) - 1$ of
pressureless matter grows in proportion to the linear growth
function $G(t)$ (not to be confused with the gravitational constant $G_{\rm N}$),
which follows the differential equation
$$
\ddot G + 2H(z) \dot G - {3\over 2}\Omega_{\rm m} H_0^2 (1+z)^3 G =0\ .
\EQN{ede:growth}
$$
To a good approximation, the logarithmic derivative of $G(z)$ is
$$
f(z) \equiv -{d\ln G \over d\ln (1+z)} \approx
\left[\Omega_{\rm m} (1+z)^3 {H_0^2 \over H^2(z)} \right]^{\gamma}\ ,
\EQN{ede:fdef}
$$
where $\gamma \approx 0.55$ for relevant values of cosmological
parameters\cite{de_linder05}.
In an $\Omega_{\rm m}=1$ universe, $G(z) \propto (1+z)^{-1}$,
but growth slows when $\Omega_{\rm m}$ drops significantly below unity.
One can integrate
\Eq{ede:fdef} to get an approximate integral relation
between $G(z)$ and $H(z)$, but the full (numerical) solution to
\Eq{ede:growth} should be used for precision calculations.
Even in the non-linear regime, the amplitude of clustering is determined
mainly by $G(z)$, so observations of non-linear structure can be
used to infer the linear $G(z)$, provided one has good theoretical modeling
to relate the two.
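As a cross-check of \Eq{ede:fdef}, one can integrate \Eq{ede:growth} directly. The following minimal sketch assumes flat $\Lambda$CDM with $\Omega_{\rm m}=0.3$ (the parameter value, step count, and starting redshift are illustrative choices); it rewrites the growth equation in $x=\ln a$ and compares the resulting $f$ at $z=0$ with the $\gamma\approx 0.55$ approximation:

```python
import math

OM = 0.3                      # flat LCDM: Omega_Lambda = 1 - OM (assumed value)

def E2(a):                    # H^2 / H0^2
    return OM / a**3 + (1.0 - OM)

def om_a(a):                  # Omega_m(a) = Omega_m (1+z)^3 H0^2 / H^2
    return (OM / a**3) / E2(a)

def dlnH_dlna(a):
    return -1.5 * (OM / a**3) / E2(a)

def rhs(x, y):
    """Growth ODE in x = ln a:  G'' + (2 + dlnH/dlna) G' - (3/2) Omega_m(a) G = 0."""
    a = math.exp(x)
    G, Gp = y
    return (Gp, -(2.0 + dlnH_dlna(a)) * Gp + 1.5 * om_a(a) * G)

# RK4 integration from deep matter domination, where G ~ a (so G' = G)
x, n = math.log(1e-3), 4000
h = -x / n
y = (1.0, 1.0)
for _ in range(n):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
    k3 = rhs(x + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
    k4 = rhs(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
    y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
         y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    x += h

f_exact = y[1] / y[0]         # f = dlnG/dlna at z = 0
f_approx = om_a(1.0) ** 0.55  # the gamma ~ 0.55 approximation, Eq. (ede:fdef)
assert abs(f_exact - f_approx) < 0.02
```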
In modified gravity models the growth rate of gravitational clustering
may differ from the GR prediction. A general
strategy to test modified gravity, therefore, is to measure both the
expansion history and the growth history to see whether they yield
consistent results for $H(z)$ or $w(z)$.
\subsection{Parameters}
Constraining a general history of $w(z)$ is nearly impossible, because
the dark energy density, which affects $H(z)$, is given by an integral
over $w(z)$, and distances and the growth factor involve a further
integration over functions of $H(z)$. Oscillations in $w(z)$ over a
range $\Delta z/(1+z) \ll 1$ are therefore extremely difficult to constrain.
It has become conventional to phrase constraints or projected constraints
on $w(z)$ in terms of a linear evolution model,
$$
w(a) = w_0 + w_a(1-a) = w_{\rm p} + w_a(a_{\rm p}-a),
\EQN{ede:w0wa}
$$
where $a \equiv (1+z)^{-1}$, $w_0$ is the value of $w$ at $z=0$, and $w_{\rm p}$
is the value of $w$ at a ``pivot'' redshift $z_{\rm p} \equiv a_{\rm p}^{-1} - 1$,
where it is best constrained by a given set of experiments.
For typical data combinations, $z_{\rm p} \approx 0.5$.
This simple parameterization can provide a good approximation to
the predictions of many physically motivated models
for observables measured with percent-level precision.
A widely used ``Figure of Merit'' (FoM) for dark energy
experiments\cite{de_detf}
is the projected combination of errors $[\sigma(w_{\rm p})\sigma(w_a)]^{-1}$.
Ambitious future experiments with $0.1$--$0.3\%$ precision on
observables can constrain richer descriptions of $w(z)$, which can
be characterized by principal components.
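The equivalence of the two forms in \Eq{ede:w0wa}, and the FoM defined above, can be illustrated with a short numerical sketch (all parameter values below are hypothetical):

```python
def w_of_a(a, w0, wa):
    """First form of Eq. (ede:w0wa): w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

w0, wa = -0.95, 0.3          # hypothetical dark energy parameters
zp = 0.5                     # typical pivot redshift for current data combinations
ap = 1.0 / (1.0 + zp)
wp = w_of_a(ap, w0, wa)      # w at the pivot: w_p = w0 + wa * (1 - a_p)

# the two forms of the parameterization agree at every scale factor
for a in (0.25, 0.5, 0.8, 1.0):
    assert abs(w_of_a(a, w0, wa) - (wp + wa * (ap - a))) < 1e-12

# Figure of Merit from (hypothetical) pivot-frame errors
sigma_wp, sigma_wa = 0.03, 0.3
fom = 1.0 / (sigma_wp * sigma_wa)
assert abs(fom - 1.0 / 0.009) < 1e-9
```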
There has been less convergence on a standard parameterization for
describing modified gravity theories. Deviations from the GR-predicted
growth rate can be described by a deviation $\Delta\gamma$ in the
index of \Eq{ede:fdef}, together with an overall multiplicative
offset relative to the $G(z)$ expected from extrapolating the
CMB-measured fluctuation amplitude to low redshift.
However, these two parameters may not accurately capture the growth
predictions of all physically interesting models. Another
important parameter to constrain is the ratio of the gravitational
potentials governing space curvature and the acceleration of
non-relativistic test particles. The possible phenomenology of
modified gravity models is rich, which enables many consistency
tests but complicates the task of constructing parameterized descriptions.
The more general set of cosmological parameters is discussed elsewhere
in this {\it Review} (Sec.~\use{Chap.hubblerpp}),
but here we highlight a few that are
particularly important to the dark energy discussion:
\par\hang\textindent{$\bullet$} The dimensionless Hubble parameter
$h \equiv H_0/100\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ determines the present day value of
the critical density and the overall scaling of distances inferred
from redshifts.
\par\hang\textindent{$\bullet$} $\Omega_{\rm m}$ and $\Omega_{\rm tot}$
affect the expansion history and the distance-redshift relation.
\par\hang\textindent{$\bullet$} The sound horizon
$r_{\rm s} = \int_0^{t_{\rm rec}} c_{\rm s}(t) dt/a(t)$, the comoving distance
that pressure waves can propagate between $t=0$ and recombination,
determines the physical scale of the acoustic peaks in the CMB
and the baryon acoustic oscillation (BAO) feature in low redshift
matter clustering\cite{de_rs}.
\par\hang\textindent{$\bullet$} The amplitude of matter fluctuations, conventionally
represented by the quantity $\sigma_8(z)$, scales the overall
amplitude of growth measures such as weak lensing or redshift-space
distortions (discussed in the next section).
\noindent
Specifically, $\sigma_8(z)$ refers to the rms fluctuation of the
matter overdensity $\rho/\bar{\rho}$ in spheres of radius
$8\,h^{-1}{\rm Mpc}$, computed from the linear theory matter power spectrum
at redshift $z$, and $\sigma_8$ on its own refers to the value
at $z = 0$ (just like our convention for $\Omega_{\rm m}$).
While discussions of dark energy are frequently phrased in terms of
values and errors on quantities like $w_{\rm p}$, $w_a$, $\Delta\gamma$,
and $\Omega_{\rm tot}$, parameter precision is the means to an end,
not an end in itself. The underlying goal of empirical studies
of cosmic acceleration is to address
two physically profound questions:
\par\hang\textindent{$1.$} Does acceleration arise from a breakdown of GR
on cosmological scales or from a new energy component that exerts
repulsive gravity within GR?
\par\hang\textindent{$2.$} If acceleration is caused by a new energy component,
is its energy density constant in space and time, as expected for
a fundamental vacuum energy, or does it show variations that
indicate a dynamical field?
\noindent
Substantial progress towards answering these questions,
in particular any definitive rejection of the cosmological constant
``null hypothesis,'' would be a major breakthrough in cosmology and
fundamental physics.
\section{Observational Probes}
\labelsection{darkenrgy:ObservProbes}
We briefly summarize the observational probes that play the
greatest role in current constraints on dark energy.
Further discussion and references can be found in other articles of
this {\it Review}, in particular
Secs.~\use{Chap.hubblerpp} (Cosmological Parameters)
and~\use{Chap.microwaverpp} (The Cosmic Microwave Background),
and in Ref.\cite{de_weinberg13}.
\noindent
{\it Cosmic Microwave Background Anisotropies:}
Although CMB anisotropies provide limited information about
dark energy on their own, CMB constraints on the geometry,
matter content, and radiation content of the Universe play a
critical role in dark energy studies when combined with low redshift
probes. In particular, CMB data supply measurements of
$\theta_{\rm s} = r_{\rm s}/D_{\rm A,c}(z_{\rm rec})$,
the angular size of the sound
horizon at recombination, from the angular location of the acoustic peaks,
measurements
of $\Omega_{\rm m} h^2$ and $\Omega_{\rm b} h^2$
from the heights of the peaks,
and normalization of the amplitude of matter fluctuations
at $z_{\rm rec}$ from the amplitude of the CMB fluctuations themselves.
Planck data yield a 0.4\% determination of $r_{\rm s}$, which scales
as $(\Omega_{\rm m} h^2)^{-0.25}$ for cosmologies with standard
matter and radiation content.
The uncertainty in the matter fluctuation amplitude is 3\%,
dominated by uncertainty in the electron scattering optical
depth $\tau$, and it should drop substantially with future
analyses of Planck polarization maps.
Secondary anisotropies, including the
Integrated Sachs-Wolfe effect,
the Sunyaev-Zel'dovich (SZ,\cite{de_sunyaev70}) effect, and
gravitational lensing of primary anisotropies, provide
additional information about dark energy by constraining
low-redshift structure growth.
\noindent
{\it Type Ia Supernovae:}
Type Ia supernovae, produced by the thermonuclear explosions of
white dwarfs, exhibit 10--15\% scatter in peak luminosity after
correction for light curve duration (the time to rise and fall)
and color (which is a diagnostic of dust extinction).
Since the peak luminosity is not known {\it a priori},
supernova surveys constrain ratios of luminosity distances
at different redshifts. If one is comparing a high redshift sample
to a local calibrator sample measured with much higher precision
(and distances inferred from Hubble's law),
then one essentially measures
the luminosity distance in $\,h^{-1}{\rm Mpc}$, constraining the combination
$h D_{\rm L}(z)$.
With distance precision of 5--8\% per well-observed supernova,
a sample of $\sim 100$ SNe is
sufficient to achieve sub-percent statistical precision.
The 1--2\% systematic uncertainties in current samples are dominated
by uncertainties associated with photometric calibration and
dust extinction corrections. Another potential systematic is redshift
evolution of the supernova population itself, which can be tested
by analyzing subsamples grouped by spectral properties or host galaxy
properties to confirm that they yield consistent results.
\noindent
{\it Baryon Acoustic Oscillations (BAO):}
Pressure waves that propagate in the pre-recombination photo-baryon fluid
imprint a characteristic scale in the clustering of matter and
galaxies, which appears in the galaxy correlation function as
a localized peak at the sound horizon scale $r_{\rm s}$, or
in the power spectrum as a series of oscillations.
Since observed galaxy coordinates consist of angles and redshifts,
measuring this ``standard ruler'' scale in a galaxy
redshift survey
determines the angular diameter distance $D_{\rm A}(z)$
and the expansion rate $H(z)$, which convert
coordinate separations to comoving distances.
Errors on the two quantities are correlated, and in existing
galaxy surveys the best determined combination is
approximately $D_V(z) = [cz D_{\rm A,c}^2(z)/H(z)]^{1/3}.$
As an approximate rule of thumb, a survey that
fully samples structures at redshift $z$ over a comoving volume $V$,
and is therefore limited by cosmic variance rather than shot noise,
measures $D_{\rm A,c}(z)$ with a fractional error of
$0.005(V/10\,{\rm Gpc}^3)^{-1/2}$ and $H(z)$ with a fractional
error $1.6-1.8$ times higher.
BAO can also be measured in the Lyman-$\alpha$ forest of intergalactic
hydrogen absorption towards background quasars, where the best measured
parameter combination is more heavily weighted towards $H(z)$
because of strong redshift-space distortions that enhance
clustering along the line of sight.
BAO distance measurements complement SN distance measurements
by providing absolute rather than relative distances (with precise
calibration of $r_{\rm s}$ from the CMB) and by achieving greater
precision at high redshift thanks to the increasing comoving volume
available. Theoretical modeling suggests that BAO measurements
from even the largest
feasible redshift surveys will be limited by statistical rather
than systematic uncertainties.
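The volume scaling quoted above translates into a simple rule-of-thumb calculator (a sketch; the $0.005$ coefficient, the $V^{-1/2}$ scaling, and the $1.6$--$1.8$ degradation factor for $H(z)$ are taken directly from the text, with $1.7$ chosen as an illustrative midpoint):

```python
def bao_frac_err_DA(V_gpc3):
    """Rule-of-thumb fractional error on D_A,c(z) for a cosmic-variance-limited
    survey of comoving volume V (in Gpc^3)."""
    return 0.005 * (V_gpc3 / 10.0) ** -0.5

def bao_frac_err_H(V_gpc3, factor=1.7):
    # H(z) is measured ~1.6-1.8x worse; 1.7 is an illustrative midpoint
    return factor * bao_frac_err_DA(V_gpc3)

assert abs(bao_frac_err_DA(10.0) - 0.005) < 1e-12   # reference volume
assert abs(bao_frac_err_DA(2.5) - 0.010) < 1e-12    # quarter the volume, 2x error
assert abs(bao_frac_err_H(10.0) - 0.0085) < 1e-12
```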
\noindent
{\it Weak Gravitational Lensing:} Gravitational light bending by a clustered
distribution of matter shears the shapes of higher redshift background
galaxies in a spatially coherent manner, producing a correlated pattern
of apparent ellipticities.
By studying the weak lensing signal for source galaxies binned by
photometric redshift (estimated from broad-band colors),
one can probe the history of structure growth.
For a specified expansion history, the predicted signal scales approximately
as $\sigma_8\Omega_{\rm m}^{\alpha}$, with $\alpha \approx 0.3$--$0.5$.
The predicted signal
also depends on the distance-redshift relation, so weak lensing becomes
more powerful in concert with SN or BAO measurements that
can pin this relation down independently.
The most challenging systematics are shape measurement
biases, biases in the distribution of photometric
redshifts, and intrinsic alignments of galaxy orientations
that could contaminate the lensing-induced signal.
Predicting the large-scale weak lensing signal is straightforward
in principle, but exploiting small-scale measurements also requires
modeling the effects of complex physical processes such as star formation and
feedback on the matter power spectrum.
\noindent
{\it Clusters of Galaxies:}
Like weak lensing, the abundance of massive dark matter halos
probes structure growth by constraining $\sigma_8\Omega_{\rm m}^\alpha$, where
$\alpha \approx 0.3$--$0.5$. These halos can be identified as
dense concentrations of galaxies or through the signatures of
hot ($10^7$--$10^8\,$K) gas in X-ray emission or SZ
distortion of the CMB. The critical challenge in cluster
cosmology is calibrating the relation $P(M_{\rm halo}|O)$ between the
halo mass as predicted from theory
and the observable $O$ used for cluster identification.
Measuring the stacked weak lensing signal from clusters has emerged as
a promising approach to achieve percent-level
accuracy in calibration of the mean relation, which is
required for clusters to remain competitive with other growth probes.
\noindent
{\it Redshift-Space Distortions (RSD) and the Alcock-Paczynski (AP) Effect:}
Redshift-space distortions of galaxy clustering, induced
by peculiar motions, probe structure growth by
constraining the parameter combination $f(z)\sigma_8(z)$,
where $f(z)$ is the growth rate defined by \Eq{ede:fdef}
\cite{de_kaiser87}\cite{de_percival09}.
Uncertainties in theoretical modeling of non-linear
gravitational evolution and the non-linear bias between the galaxy and matter
distributions currently limit application of
the method to large scales (comoving separations $r \ga 10\,h^{-1}{\rm Mpc}$ or wavenumbers
$k \la 0.2 h\,{\rm Mpc}^{-1}$).
A second source of anisotropy arises if one adopts the wrong cosmological
metric to convert angles and redshifts into comoving separations,
a phenomenon known as the Alcock-Paczynski effect\cite{de_alcock79}.
Demanding isotropy of clustering at redshift $z$
constrains the parameter combination $H(z)D_{\rm A}(z)$.
The main challenge for the AP method is correcting for the
anisotropy induced by peculiar velocity RSD.
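For flat $\Lambda$CDM in GR, the growth rate $f(z)$ entering $f(z)\sigma_8(z)$ is well approximated by $f(z) \approx [\Omega_{\rm m}(z)]^{0.55}$. The following sketch evaluates that approximation; the fiducial $\Omega_{\rm m} = 0.315$ and the function names are illustrative assumptions, not part of the cited analyses.

```python
import math

# Sketch: GR growth rate approximation f(z) ~ [Omega_m(z)]^0.55
# for a flat LCDM background (fiducial Omega_m = 0.315 assumed).
def growth_rate(z, omega_m0=0.315):
    e2 = omega_m0 * (1 + z) ** 3 + (1.0 - omega_m0)   # (H/H0)^2, flat LCDM
    omega_m_z = omega_m0 * (1 + z) ** 3 / e2          # matter fraction at z
    return omega_m_z ** 0.55

f057 = growth_rate(0.57)   # growth rate near the BOSS effective redshift
```

Multiplying by a model $\sigma_8(z)$ then gives the combination constrained by the RSD measurements discussed below.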
\noindent
{\it Direct Determination of $H_0$:} The value of $H_0$ sets the
current value of the critical density $\rho_{\rm c} = 3H_0^2/8\pi G_{\rm N}$,
and combination with CMB measurements provides a long lever arm
for constraining the evolution of dark energy.
The challenge in direct $H_0$ measurements is establishing
distances to galaxies that are far enough away that their
peculiar velocities are small compared to the expansion velocity
$v = H_0 d$. This can be done by building a ladder of distance
indicators tied to stellar parallax on its lowest rung, or by using
gravitational lens time delays or geometrical measurements of
maser data to circumvent this ladder.
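The role of $H_0$ in setting the critical density can be made concrete with a short numerical sketch (constants are rounded CODATA-style values; the $h = 0.673$ input anticipates the Planck+WP best fit quoted below):

```python
import math

# Sketch: critical density rho_c = 3 H0^2 / (8 pi G_N).
G_N = 6.674e-11           # Newton's constant, m^3 kg^-1 s^-2 (rounded)
MPC_IN_M = 3.0857e22      # meters per megaparsec (rounded)

def critical_density(h):
    H0 = h * 100.0 * 1.0e3 / MPC_IN_M              # km/s/Mpc -> s^-1
    return 3.0 * H0 ** 2 / (8.0 * math.pi * G_N)   # kg m^-3

rho_c = critical_density(0.673)   # of order 1e-26 kg/m^3
```

Note the $h^2$ scaling: doubling $H_0$ quadruples $\rho_{\rm c}$.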
\section{Current Constraints on Expansion, Growth, and Dark Energy}
The last decade has seen dramatic progress in measurements of
the cosmic expansion history and structure growth, leading to
much tighter constraints on the parameters of dark energy models.
CMB data from the WMAP and Planck satellites and from
higher resolution ground-based
experiments have provided an exquisitely
detailed picture of structure at the recombination epoch and
the first CMB-based measures of low redshift structure through
lensing and SZ cluster counts.
Cosmological supernova samples
have increased in size from tens to many hundreds, with continuous
coverage from $z = 0$ to $z \approx 1.4$, alongside major
improvements in data quality, analysis methods, and detailed
understanding of local populations. BAO measurements have
advanced from the first detections to 2\% precision at multiple
redshifts, with increasingly sophisticated methods for testing
systematics, fitting models, and evaluating statistical errors.
Constraints on low redshift structure from galaxy clusters have
become more robust, with improved X-ray and SZ data and weak
lensing mass calibrations, and they have been joined by the
first precise structure constraints from cosmic shear weak lensing,
galaxy-galaxy lensing, and redshift-space distortions.
The precision of direct $H_0$
measurements has sharpened from the $\sim 10\%$ error of the
HST Key Project \cite{de_freedman01}
to $3$--$4\%$ in some recent analyses.
\@figure\midinsert{distances}
\RPPfigure{pdg_darkenergy_distances.eps,width=3.0in}{center}
{The distance-redshift relation measured from Type Ia SNe and BAO
compared to the predictions (gray curve) of a flat
$\Lambda$CDM
model with
the best-fit parameters inferred from Planck+WP CMB data.
Circles show binned luminosity distances from the Union2.1 SN
sample, multiplied by $(1+z)^{-1}$ to convert to comoving angular
diameter distance. Squares show BAO distance measurements,
converted to $D_{\rm A,c}(z)$ for the Planck+WP cosmology and sound horizon,
from the references given in the text. The lower panel plots
residuals from the Planck+WP $\Lambda$CDM prediction, with dashed
curves that show the effect of changing $w$ by $\pm 0.1$ while
all other parameters are held fixed.
Note that the SN data points
can be shifted up or down by a constant factor to account
for freedom in the peak luminosity, while
the BAO points are calibrated to 0.4\% precision by the
sound horizon scale computed from Planck+WP data.}
\endfigure
As an illustration of current measurements of the cosmic expansion
history, Figure~\use{Fg.distances} compares distance-redshift measurements
from SN and BAO data to the predictions for a flat universe with
a cosmological constant. SN cosmology relies on compilation
analyses that try to bring data from different surveys probing
distinct redshift ranges to a common scale. The most influential
current compilations are SNLS3\cite{de_sullivan11},
which
combines data from the 3-year Supernova Legacy Survey sample and
the 1st-year SDSS-II Supernova Survey sample with local calibrators and
high-redshift SNe from HST surveys, and Union2.1\cite{de_suzuki11},
which has a broader selection of data, including some but not
all of the sources in SNLS3.
Here we have
used binned distance measurements from Union2.1, but we
caution that the different sample selections and analysis methodologies
lead to systematic differences comparable to the statistical
uncertainties, and it is not obvious which compilation,
if either, should be preferred.
Because the peak luminosity of a fiducial SN Ia is an unknown
free parameter, the SN distance measurements could all be shifted up and
down by a constant multiplicative factor; cosmological information resides in
the relative distances as a function of redshift.
The four BAO data points are taken from analyses of the
6dFGS survey\cite{de_6df},
SDSS-II\cite{de_padmanabhan12},
BOSS\cite{de_anderson12},
and WiggleZ\cite{de_blake11}.
For the BAO measurements we have adopted the sound horizon scale
$r_{\rm s} = 147.49$ Mpc from Planck CMB data, whose 0.4\% uncertainty
is small compared to the current BAO measurement
errors\cite{de_bao_correction}.
We have converted both SN luminosity distances and BAO $D_V$
distances to an equivalent comoving angular diameter distance.
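The conversions behind the figure can be sketched as follows for a flat $\Lambda$CDM background: $D_{\rm A,c}(z) = D_L(z)/(1+z)$ for the SNe, and $D_V(z) = [D_{\rm A,c}^2\, cz/H(z)]^{1/3}$ for the BAO points. This is a hedged illustration; the parameter defaults and the simple trapezoidal integration are assumptions of the sketch.

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def e_of_z(z, om=0.315):
    # dimensionless Hubble rate H(z)/H0 for flat LCDM
    return math.sqrt(om * (1 + z) ** 3 + 1.0 - om)

def comoving_distance(z, h=0.673, om=0.315, n=2000):
    # trapezoidal integral of c dz' / H(z'), in Mpc
    dz = z / n
    s = 0.5 * (1.0 / e_of_z(0.0, om) + 1.0 / e_of_z(z, om))
    for i in range(1, n):
        s += 1.0 / e_of_z(i * dz, om)
    return C_KMS / (100.0 * h) * s * dz

def luminosity_distance(z, **kw):
    # D_L = (1+z) * D_C in a flat universe
    return (1 + z) * comoving_distance(z, **kw)

def d_v(z, h=0.673, om=0.315):
    # BAO volume-averaged distance D_V = [D_C^2 * c z / H(z)]^(1/3)
    dc = comoving_distance(z, h=h, om=om)
    hz = 100.0 * h * e_of_z(z, om)
    return (dc ** 2 * C_KMS * z / hz) ** (1.0 / 3.0)
```

Dividing $D_L$ by $(1+z)$ recovers the comoving angular diameter distance exactly in flat space, which is why the SN and BAO points can be placed on a common axis.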
The plotted
cosmological model has $\Omega_{\rm m} = 0.315$ and $h = 0.673$, the
best-fit values\cite{de_posterior}
from Planck+WP CMB data assuming $w = -1$ and
$\Omega_{\rm tot} = 1$.
Specifically, here and below we use parameter
values and MCMC chains from the ``Planck + WP'' analysis
of Ref.\cite{de_planckXVI},
which combines the Planck temperature
power spectrum with low multipole polarization measurements
from WMAP\cite{de_bennett12}.
In contrast to the Cosmological Parameters article of
this {\it Review}, we do not use the CMB data set that includes
higher resolution ground-based results because the corresponding chains are
not available for all of the cases we wish to examine,
but differences in cases where they are available are small.
The SN, BAO, and CMB data
sets, probing a wide range of redshifts with radically
different techniques, are mutually consistent with the predictions
of a flat
$\Lambda$CDM cosmology.
We have not included the $z=2.5$ BAO measurement from the BOSS
Lyman-$\alpha$ forest\cite{de_lyabao} on this plot, but it is also
consistent with this fiducial model.
Other curves in the lower panel of Figure~\use{Fg.distances}
show the effect of changing $w$ by $\pm 0.1$ with
all other parameters held fixed. However, such a single-parameter comparison
does not capture the impact of parameter degeneracies or the ability
of complementary data sets to break them, and if one instead forces a match
to CMB data by changing $h$ and $\Omega_{\rm m}$ when changing $w$
then the predicted BAO distances diverge at $z=0$
rather than converging there.
\epsfysize=43mm
\epsffile{figures/pdg_darkenergy_omegam_omegal.eps}
\epsfysize=43mm
\epsffile{figures/pdg_darkenergy_omegam_w.eps}
\epsfysize=43mm
\epsffile{figures/pdg_darkenergy_w05_wa.eps}
\@figure\midinsert{omegamomegal}
\vskip -30pt
\FigureCaption{Constraints on the present matter fraction $\Omega_{\rm m}$
and dark energy model parameters. Dark and light shaded regions indicate
68.3\% and 95.4\% confidence levels, respectively.
``CMB'' is Planck+WP, ``BAO'' is the combination of
SDSS-II, BOSS, and 6dFGS, and ``SN'' is Union2.
(a) The present dark energy fraction $\Omega_{\Lambda}$
vs.\ $\Omega_{\rm m}$, assuming a $\Lambda$CDM model.
CMB data, especially when combined with BAO constraints, strongly favor
a flat universe (diagonal dashed line).
(b) The dark energy equation of state $w$ vs.\ $\Omega_{\rm m}$,
assuming a constant value of $w$.
The dashed contours show the 68.3\% and 95.4\% CL regions
for the combination of WMAP9 and BAO data.
Curves on the left vertical axis show the probability distributions
for $w$ (normalized arbitrarily),
after marginalizing over $\Omega_{\rm m}$, for the CMB+BAO and
CMB+BAO+SN combinations (yellow and black, respectively),
using Planck+WP CMB data, and for the WMAP9+BAO combination (dashed black).
(c) Constraints on the two parameters of the dark energy model
with a time-dependent equation of state given by \Eq{ede:w0wa}:
$w(z=0.5)$ and $w_a = -dw/da$.
}
\endfigure
Figure~\use{Fg.omegamomegal}a plots joint constraints on
$\Omega_{\rm m}$ and $\Omega_\Lambda$ in a $\Lambda$CDM cosmological model,
assuming $w = -1$ but not requiring spatial flatness.
The SN constraints are computed from the Union2 sample,
and the CMB, CMB+BAO, and CMB+BAO+SN constraints are
taken from MCMC chains provided by the Planck Collaboration\cite{de_planckXVI}.
We do not examine BAO constraints separately from CMB, because the
constraining power of BAO relies heavily on the CMB calibration of $r_{\rm s}$.
The SN data or CMB data on their own are sufficient to reject an
$\Omega_\Lambda=0$ universe, but individually they allow a wide
range of $\Omega_{\rm m}$ and significant non-zero curvature.
The CMB+BAO combination zeroes in on a tightly constrained
region with $\Omega_{\rm m} = 0.309 \pm 0.011$ and
$\Omega_{\rm tot} = 1.000 \pm 0.0033$.
Combining SN with CMB would lead to a consistent constraint with
around $3$--$4\times$ larger errors.
Adding the SN data to the CMB+BAO combination makes only a small
difference to the constraints in this restricted model space.
Figure~\use{Fg.omegamomegal}b plots constraints in the
$\Omega_{\rm m}-w$ space, where we now consider models with constant $w(z)$
and (in contrast to panel a) assume spatial flatness.
CMB data alone allow a wide range of $w$, but combination with BAO
narrows the allowed range sharply. The preferred region is
consistent with the orthogonal SN constraint, and the
combination of the three data sets yields smaller uncertainties.
The black curve on the left axis shows the posterior p.d.f.\ for $w$
after marginalizing (with a flat prior) over $\Omega_{\rm m}$; we find
$w = -1.10 \pm 0.08$ at 68.3\% CL and $-1.10\pm 0.15$ at 95.4\% CL.
The dashed contours and dashed marginal curve show the impact
of substituting WMAP9 data
for Planck+WP in the CMB+BAO combination. The two constraints
are compatible, but the shift from WMAP to Planck+WP has reduced the
uncertainty in $w$ and pulled the best-fit value lower.
Figure~\use{Fg.omegamomegal}c considers a model space with
time varying $w$, evolving according to the linear parameterization
$w(a) = w_0 + w_a(1-a)$, again assuming flat space.
Instead of $w_0$ we show constraints on $w(z=0.5)$,
approximately the pivot redshift where $w$ is best determined and
covariance with $w_a$ is minimized. This plot shows that even
the combination of current CMB, BAO, and SN data places only weak constraints
on time evolution of the equation of state, still allowing order unity
changes in $w$ between $z=1$ and $z=0$ ($\Delta a = 0.5$).
The value of $w(z=0.5)$, on
the other hand, is reasonably well constrained, with errors only
slightly larger than those for the constant-$w$ model of panel b.
Errors on $w_0 = w(z=0.5)-0.333w_a$
are much larger and are strongly correlated with the $w_a$ errors.
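The $0.333$ factor follows directly from the parameterization of \Eq{ede:w0wa}: at $z = 0.5$, $a = 2/3$ and $1 - a = 1/3$. A minimal check:

```python
def w_of_z(z, w0, wa):
    # CPL parameterization w(a) = w0 + wa * (1 - a), with a = 1/(1+z)
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

# At the approximate pivot z = 0.5: w(0.5) = w0 + wa/3,
# i.e. w0 = w(z=0.5) - 0.333 wa, the relation quoted in the text.
w_pivot = w_of_z(0.5, -1.0, 0.3)
```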
While the CMB, BAO, and SN data sets considered here are mutually
consistent with a flat $\Lambda$CDM model, tensions arise when other
cosmological measurements enter the mix. Blue and yellow contours
in Figure~\use{Fg.omegamh0}a show CMB and CMB+BAO constraints
in the $\Omega_{\rm m}-H_0$ plane, assuming $w=-1$ and $\Omega_{\rm tot}=1$.
Red horizontal bars represent the direct estimate
$H_0 = 73.8\pm 2.4\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ from Ref.\cite{de_riess11},
who use SN Ia distances to galaxies in the Hubble flow with the Ia
luminosity scale calibrated by
HST observations of Cepheids in nearby SN host galaxies.
Another recent estimate by Ref.\cite{de_freedman12},
which employs 3.6$\,\mu$m Cepheid observations to recalibrate
the HST Key Project distance ladder and reduce its uncertainties,
yields a similar central value and estimated error,
$H_0 = 74.3\pm 2.1\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$.
Figure~\use{Fg.omegamh0}a indicates a roughly $2\sigma$ tension
between these direct measurements and the CMB+BAO predictions.
The tension was already present with WMAP CMB data, as
shown in Figure~\use{Fg.omegamh0}b, but it has become stronger
with Planck+WP, because of smaller CMB+BAO errors and a shift of central
values to slightly higher $\Omega_{\rm m}$ and lower $H_0$. In models with
free, constant $w$ (still assuming $\Omega_{\rm tot}=1$), the tension can
be lifted by going to $w < -1$ and lower $\Omega_{\rm m}$, as illustrated
in Figure~\use{Fg.omegamh0}c.
CMB data determine $\Omega_{\rm m} h^2$ with high precision from the
heights of the acoustic peaks, essentially independent of $w$.
Within the flat $\Lambda$CDM framework, the well determined distance
to the last scattering surface pins down a specific combination
of $(\Omega_{\rm m},h)$, but with free $w$ one can obtain the same distance
from other combinations along the $\Omega_{\rm m} h^2$ degeneracy axis.
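This degeneracy can be illustrated with a simplified numerical sketch (radiation is neglected, flatness is assumed, and the fiducial $\Omega_{\rm m} h^2$ and $z_* \approx 1090$ are illustrative inputs): holding $\Omega_{\rm m} h^2$ fixed, one solves for the $h$ that reproduces the fiducial comoving distance to last scattering for each $w$.

```python
import math

# Simplified sketch: with omega_m h^2 fixed by the acoustic peak
# heights, each w admits an h reproducing the fiducial distance
# to last scattering (radiation neglected, flat space assumed).
OMH2 = 0.1426             # illustrative Omega_m h^2 = 0.315 * 0.673^2
C_KMS = 299792.458        # speed of light, km/s
Z_STAR = 1090.0           # approximate last-scattering redshift

def dist_to_cmb(h, w, n=4000):
    om = OMH2 / h ** 2
    a_star = 1.0 / (1.0 + Z_STAR)
    da = (1.0 - a_star) / n
    total = 0.0
    for i in range(n):                      # midpoint rule in scale factor
        a = a_star + (i + 0.5) * da
        e = math.sqrt(om / a ** 3 + (1.0 - om) * a ** (-3.0 * (1.0 + w)))
        total += da / (a ** 2 * e)
    return C_KMS / (100.0 * h) * total      # comoving distance, Mpc

def h_matching_cmb(w, target, lo=0.4, hi=1.2):
    # bisection; the distance decreases monotonically with h here
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if dist_to_cmb(mid, w) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = dist_to_cmb(0.673, -1.0)
h_phantom = h_matching_cmb(-1.2, target)    # w < -1 pairs with higher h
```

The sketch reproduces the behavior in the text: matching the same CMB distance with $w = -1.2$ pushes $h$ above the $\Lambda$CDM value, the direction that relieves the $H_0$ tension.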
\@figure\midinsert{omegamh0}
\RPPfigure{pdg_darkenergy_omegam_h0.eps,width=5in}{center}
{Constraints on the present matter fraction $\Omega_{\rm m}$
and the Hubble constant $H_0$ from various combinations of data,
assuming flat $\Lambda$CDM (left and middle panels) or a constant dark energy
equation of state $w$ (right panel).
Dark and light shaded regions indicate
68.3\% and 95.4\% confidence levels, respectively.
The right panel also shows 100 Monte Carlo samples from the
CMB+BAO constraints with the value of $w$ indicated by the colors
of the dots.
``CMB'' is Planck+WP in the outer panels and WMAP9 in the
middle panel, ``BAO'' is the combination of
SDSS-II, BOSS, and 6dFGS, and ``$H_0$ (HST)'' is the HST constraint
from Ref.\cite{de_riess11}.}
\endfigure
One should not immediately conclude from Figure~\use{Fg.omegamh0} that
$w \neq -1$, but this comparison
highlights the importance of fully understanding
(and reducing) systematic uncertainties in direct $H_0$ measurements.
If errors were reduced and the central value remained close to that
plotted in Figure~\use{Fg.omegamh0},
then the implications would be striking. Other recent $H_0$
determinations exhibit less tension with CMB+BAO, because of lower central
values and/or larger errors\cite{de_humphreys13}\cite{de_courtois12},
including the values of $H_0 = 69 \pm 7\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ and
$68 \pm 9 \,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ from Refs.\cite{de_reid13}\cite{de_kuo13},
who circumvent
the traditional distance ladder by using maser distances
to galaxies in the Hubble flow. Gravitational lens time delays
offer another alternative to the traditional distance ladder,
and their precision could become competitive over the next
few years, with increasing sample sizes
and better constrained lens models.
\@figure\midinsert{omegamsigma8}
\RPPfigure{pdg_darkenergy_omegam_sigma8.eps,width=3.5in}{center}
{Constraints on the present matter fraction $\Omega_{\rm m}$
and the present matter fluctuation amplitude $\sigma_8$.
Dark and light shaded regions indicate
68.3\% and 95.4\% confidence levels, respectively.
The upper left panel compares CMB+BAO constraints (using the
same data sets as in Fig. \use{Fg.omegamomegal}) for $\Lambda$CDM
with and without CMB lensing, and for a constant $w$ model
(including CMB lensing). The other three panels compare
flat $\Lambda$CDM constraints between various dark energy probes,
including weak lensing (upper right panel) and clusters (lower panels).
}
\endfigure
The amplitude of CMB anisotropies is proportional to the amplitude of
density fluctuations present at recombination, and by assuming GR and a
specified dark energy model one can extrapolate the growth of structure
forward to the present day to predict $\sigma_8$.
As discussed in \Sec{darkenrgy:ObservProbes},
probes of low redshift structure typically
constrain the combination
$\sigma_8\Omega_{\rm m}^\alpha$ with $\alpha \approx 0.3$--$0.5$.
Figure \use{Fg.omegamsigma8} displays constraints in the $\sigma_8-\Omega_{\rm m}$
plane from CMB+BAO data and from weak lensing and cluster
surveys\cite{de_sigom}.
Planck data themselves reveal a CMB lensing signature that constrains
low redshift matter clustering and suggests a fluctuation amplitude
somewhat lower than the extrapolated value for flat $\Lambda$CDM.
However, including the CMB lensing signal only slightly alters
the Planck+WP confidence interval for $\Lambda$CDM (purple vs. yellow
contours in Fig.~\use{Fg.omegamsigma8}a).
Allowing free $w$ (gray contours)
expands this interval, primarily in the direction of lower $\Omega_{\rm m}$
and higher $\sigma_8$ (with $w < -1$).
The red contours in Figure~\use{Fg.omegamsigma8}b plot the constraint
$\sigma_8(\Omega_{\rm m}/0.27)^{0.46} = 0.774_{-0.041}^{+0.032}$ inferred from
tomographic cosmic shear measurements in the CFHTLens survey\cite{de_heymans13}.
An independent analysis of galaxy-galaxy lensing and galaxy clustering
in the SDSS yields a similar result\cite{de_mandelbaum13},
$\sigma_8(\Omega_{\rm m}/0.27)^{0.57} = 0.77 \pm 0.05$.
Note that $\sigma_8$ and $\Omega_{\rm m}$ refer to $z=0$ values; the
weak lensing samples and the cluster samples discussed below
are not at zero redshift, but the values of $\sigma_8$ are
effectively extrapolated to $z=0$ for a fiducial cosmology.
(Within current parameter bounds, the uncertainty in extrapolating
growth from $z = 0.5$ to $z = 0$ is $1$--$2\%$, small compared to the
observational uncertainties.)
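The size of that extrapolation uncertainty can be checked with the standard GR growth-factor integral for flat $\Lambda$CDM, $D(a) \propto E(a)\int_0^a da'/[a'E(a')]^3$. This is an illustrative sketch; the $\Omega_{\rm m}$ range and the step count are assumptions.

```python
import math

# Sketch: unnormalized linear growth factor for flat LCDM in GR,
# D(a) ~ E(a) * Int_0^a da' / [a' E(a')]^3, used to gauge how much
# the z=0.5 -> z=0 extrapolation depends on Omega_m.
def e_of_a(a, om):
    return math.sqrt(om / a ** 3 + 1.0 - om)

def growth(a, om, n=2000):
    da = a / n
    s = 0.0
    for i in range(n):            # midpoint rule; integrand -> 0 as a' -> 0
        ai = (i + 0.5) * da
        s += da / (ai * e_of_a(ai, om)) ** 3
    return e_of_a(a, om) * s

def growth_ratio(om, z=0.5):
    # D(z=0) / D(z=0.5): the extrapolation factor applied to sigma_8
    return growth(1.0, om) / growth(1.0 / (1.0 + z), om)

r_lo, r_hi = growth_ratio(0.27), growth_ratio(0.32)
spread = abs(r_hi / r_lo - 1.0)   # fractional spread across Omega_m bounds
```

Across $\Omega_{\rm m} \approx 0.27$--$0.32$ the extrapolation factor varies at the percent level, consistent with the $1$--$2\%$ quoted above.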
There is approximately $2\sigma$ tension between the $\sigma_8-\Omega_{\rm m}$
combination predicted by Planck+WP CMB+BAO for $\Lambda$CDM and the
lower value implied by the weak lensing measurements. This tension
was weaker for WMAP+BAO data (dotted contour) because of the larger
error and slightly lower best-fit parameter values.
Additional contours in Figures~\use{Fg.omegamsigma8}c and d show
$\sigma_8-\Omega_{\rm m}$ constraints inferred from three representative
cluster analyses\cite{de_clusters}:
$\sigma_8(\Omega_{\rm m}/0.27)^{0.47} = 0.784 \pm 0.027$
(CPPP), $\sigma_8(\Omega_{\rm m}/0.27)^{0.41} = 0.806 \pm 0.032$ (MaxBCG),
and $\sigma_8(\Omega_{\rm m}/0.27)^{0.32} = 0.782\pm 0.010$ (PlanckSZ).
The basic mass calibration comes from X-ray
data in CPPP, from weak lensing data in MaxBCG, and from SZ
data in PlanckSZ. Because the PlanckSZ constraint itself incorporates
BAO data, we have replaced the CMB+BAO contour with a CMB-only
contour in panel d.
The $\sigma_8\Omega_{\rm m}^\alpha$ constraints from recent cluster analyses
are not in perfect agreement, and the examples shown here are far
from exhaustive. Nonetheless, on balance the cluster analyses,
like the weak lensing analyses, favor lower $\sigma_8\Omega_{\rm m}^\alpha$
than the value extrapolated forward from Planck+WP assuming flat $\Lambda$CDM.
Redshift-space distortion analyses also tend to favor lower
$\sigma_8\Omega_{\rm m}^\alpha$, though statistical errors are still
fairly large. For example, Ref.\cite{de_reid12}
find $f(z)\sigma_8(z) = 0.415 \pm 0.034$ from SDSS-III BOSS
galaxies at $z = 0.57$, while the best-fit Planck+WP+BAO
flat $\Lambda$CDM model predicts $f(z)\sigma_8(z) = 0.478 \pm 0.008$ at this redshift.
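Treating the two quoted errors as independent and Gaussian (a simplifying assumption), the implied discrepancy works out as follows:

```python
# Quantifying the RSD comparison quoted above (values from the text).
meas, sig_meas = 0.415, 0.034   # BOSS f*sigma8 measurement at z = 0.57
pred, sig_pred = 0.478, 0.008   # Planck+WP+BAO flat-LCDM prediction

# combine errors in quadrature, assuming independence
tension = abs(pred - meas) / (sig_meas ** 2 + sig_pred ** 2) ** 0.5
```

i.e. a discrepancy of roughly $1.8\sigma$, in line with the $\sim 2\sigma$ tensions discussed for the other low redshift probes.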
With somewhat more aggressive modeling assumptions, Ref.\cite{de_blake11b}
infer $f(z)\sigma_8(z)$ from the WiggleZ survey
at $z = 0.22,$ 0.41, 0.60, and 0.78, with $\approx 10\%$ errors in
the three highest redshift bins (and 17\% at $z=0.22$), finding excellent
agreement with a flat $\Lambda$CDM model that has $\Omega_{\rm m} = 0.27$
and $\sigma_8=0.8$, and thus with the structure measurements
plotted in Figure~\use{Fg.omegamsigma8}.
Going from $\Lambda$CDM to $w$CDM does not readily resolve this tension,
because the CMB degeneracy direction with free $w$ is roughly parallel
to the $\sigma_8\Omega_{\rm m}^\alpha$ tracks from low redshift structure
(though the tracks themselves could shift or widen for $w \neq -1$).
Each of the low redshift probes has significant systematic uncertainties
that may not be fully represented in the quoted observational errors,
and the tensions are only about $2\sigma$ in the first place,
so they may be resolved by larger samples, better data,
and better modeling.
However, it is notable that all of the discrepancies are in the same direction.
On the CMB side, the tensions would be reduced
if the value of $\Omega_{\rm m}$ or the optical depth $\tau$ (and thus
the predicted $\sigma_8$) has been systematically overestimated.
The most exciting but speculative possibility is that these tensions
reflect a deviation from GR-predicted structure growth, pointing
towards a gravitational explanation of cosmic acceleration. Other possible
physical resolutions could come from dark energy models with
significant time evolution, from a massive neutrino component
that suppresses low redshift structure growth, or from decaying
dark matter that reduces $\Omega_{\rm m}$ at low $z$.
\@table\midinsert{constraints}
\Caption
Constraints on selected parameters from various combinations of
CMB, BAO, and SN data, given as mean values $\pm$ 68.3\% CL limits
(and $\pm$ 95.4\% CL limits for WMAP9-$w$CDM).
``Planck+WP'' combines the Planck temperature power spectrum with WMAP
large scale polarization.
``BAO'' combines the measurements of SDSS-II, BOSS, and 6dFGS.
``SN'' refers to the Union2.1 compilation.
The upper (lower) half of the table assumes a $\Lambda$CDM
(flat $w$CDM) cosmological model.
\endCaption
\centerline{\vbox{
\tabskip=0pt\baselineskip 22pt
\halign {\vrule height 0.8\strutskip depth 0.3\strutskip width 0pt#&\tabskip=1em plus 1em minus 0.5em%
\lefttab{#}&
\centertab{#}&
\centertab{#}&
\centertab{#}\tabskip=0pt\cr
\tableheaddoublerul
&& & Data combination & \cr
& Parameter & Planck+WP+BAO & Planck+WP+BAO+SN & WMAP9+BAO \cr
\tableheadsinglerul
& {\bf $\Lambda$CDM} &&&\cr
& $\Omega_{\rm m}$
& $0.309_{-0.011}^{+0.010}$
& $0.307_{-0.010}^{+0.011}$
& $0.295_{-0.012}^{+0.012}$ \cr
& $\Omega_{\rm tot}$
& $1.000_{-0.0033}^{+0.0033}$
& $1.000_{-0.0033}^{+0.0032}$
& $1.003_{-0.004}^{+0.004}$ \cr
& $h$
& $0.678_{-0.010}^{+0.011}$
& $0.679_{-0.011}^{+0.010}$
& $0.681_{-0.011}^{+0.011}$\cr
& $\sigma_8 (\Omega_{\rm m}/0.27)^{0.4}$
& $0.871_{-0.021}^{+0.020}$
& $0.869_{-0.021}^{+0.020}$
& $0.836_{-0.033}^{+0.033}$\cr
\noalign{\medskip\hrule\smallskip}
& {\bf $w$CDM (flat)} &&&\cr
& $\Omega_{\rm m}$
& $0.287_{-0.021}^{+0.021}$
& $0.294_{-0.014}^{+0.014}$
& $0.299_{-0.019}^{+0.022}\left(_{-0.042}^{+0.045}\right)$ \cr
& $w$
& $-1.13_{-0.11}^{+0.13}$
& $-1.10_{-0.07}^{+0.08}$
& $-0.98_{-0.12}^{+0.16}\left(_{-0.29}^{+0.33}\right)$ \cr
& $h$
& $0.708_{-0.030}^{+0.026}$
& $0.699_{-0.018}^{+0.017}$
& $0.681_{-0.032}^{+0.025}\left(_{-0.066}^{+0.060}\right)$ \cr
& $\sigma_8 (\Omega_{\rm m}/0.27)^{0.4}$
& $0.888_{-0.025}^{+0.025}$
& $0.885_{-0.023}^{+0.023}$
& $0.84_{-0.05}^{+0.05}\left(_{-0.09}^{+0.09}\right)$ \cr
\tablefootdoublerul
}}}
\egroup\par\vskip 3pt
Table 1.1 summarizes key results from
Figures~\use{Fg.omegamomegal}--\use{Fg.omegamsigma8},
with marginalized constraints on $\Omega_{\rm m}$, $\Omega_{\rm tot}$,
$w$, $h$, and $\sigma_8(\Omega_{\rm m}/0.27)^{0.4}$ for the
Planck+WP+BAO, Planck+WP+BAO+SN, and WMAP9+BAO combinations.
We list 68.3\% errors, and also 95.4\% errors for WMAP9+BAO
constraints on $w$CDM; in all other cases, the 95.4\% errors are very
close to double the 68.3\% errors.
For $\Lambda$CDM the Planck+WP combinations give $\Omega_{\rm tot} = 1.000$
with an error of 0.3\% and they predict, approximately,
$h = 0.68 \pm 0.01$ and $\sigma_8(\Omega_{\rm m}/0.27)^{0.4} = 0.87 \pm 0.02$.
Note that the $\Omega_{\rm m}$ and $h$ constraints are not identical
to those in Table~\use{Chap.hubblerpp}.1 of the Cosmological Parameters
article of this {\it Review} because those values assume
spatial flatness.
For $w$CDM, where flatness {\it is} assumed, the Planck+WP+BAO+SN combination
yields $w = -1.10^{+0.08}_{-0.07}$, consistent with a cosmological
constant at 1.2$\sigma$. With free $w$ the best-fit $h$ increases and its
error roughly doubles, but the error in $\sigma_8(\Omega_{\rm m}/0.27)^{0.4}$
grows only slightly, and its best-fit value moves a bit further
away from the lower amplitudes suggested by measurements of low
redshift structure.
\section{Summary and Outlook}
The preceding figures and table focus on model
parameter constraints, but as a description of the observational
situation it is most useful to characterize the precision, redshift range,
and systematic uncertainties of the basic expansion and growth measurements.
At present, supernova surveys constrain distance ratios at
the $1$--$2\%$ level in redshift bins of width $\Delta z=0.1$
over the range $0 < z < 0.6$, with
larger but still interesting error bars out to $z \approx 1.2$.
These measurements are currently limited by systematics
tied to photometric calibration, extinction, and reddening,
and possible evolution of the SN population.
BAO surveys have measured the absolute distance scale
(calibrated to the sound horizon $r_{\rm s}$) to 4.5\% at $z = 0.11$,
2\% at $z = 0.35$ and $z = 0.57$, 6\% at $z = 0.73$, and 3\% at $z = 2.5$.
Multiple studies have used clusters of galaxies or weak lensing
cosmic shear or galaxy-galaxy lensing to measure a parameter
combination $\sigma_8\Omega_{\rm m}^\alpha$ with $\alpha \approx 0.3$--$0.5$.
The estimated errors of these studies, including both statistical
contributions and identified systematic uncertainties, are about 5\%.
RSD measurements constrain the combination $f(z)\sigma_8(z)$, with
recent determinations spanning the redshift range $0 < z < 0.9$
with typical estimated errors of about $10\%$.
These errors are dominated by statistics, but
shrinking them further will require improvements in
modeling non-linear effects on small scales.
Direct distance-ladder
estimates of $H_0$ now span a small range (using overlapping
data but distinct treatments of key steps), with individual
studies quoting uncertainties of $3$--$5\%$, with similar statistical
and systematic contributions.
Planck data and higher resolution ground-based experiments
now measure CMB anisotropy with exquisite precision.
A flat $\Lambda$CDM model with standard radiation and neutrino content
can fit the CMB data and the BAO and SN distance measurements to within
their estimated uncertainties. However, the Planck+WP+BAO
parameters for this model
are in approximately $2\sigma$ tension with some of
the direct $H_0$ measurements and most of the cluster and weak lensing
analyses, disagreeing by about 10\% in each case.
Similar tensions are present when using WMAP
data in place of Planck+WP data, but they are less evident
because the WMAP errors are larger and the best-fit $\Omega_{\rm m}$ value is lower.
Moving from $\Lambda$CDM to $w$CDM can relieve the tension with $H_0$,
but only by going to $w < -1$ (which would be more physically startling
than $w > -1$), and this change on its own does not produce better
agreement with the structure growth data.
It is not clear whether current tensions should be taken as a sign
of new physics or as a sign that at least
some of the experiments are underestimating their systematic uncertainties.
Factor-of-two reductions in error bars, if convincing, could lead to
exciting physical implications, or to a resolution of the existing
mild discrepancies. Moving forward, the community will have to balance
the requirement of strong evidence for interesting claims
(such as $w \neq -1$ or deviations from GR)
against the danger of confirmation bias, \ie, discounting observations
or error estimates when they do not overlap simple theoretical expectations.
There are many ongoing projects that should lead to improvement in
observational constraints in the near-term and over the next two
decades\cite{de_facilities}.
Final analyses of Planck temperature and polarization maps will
significantly tighten the CMB constraints, including an important
reduction of the uncertainty in the matter fluctuation amplitude that
will sharpen tests based on structure growth.
Final data from the SDSS-III BOSS survey, finishing in 2014, will reduce BAO
errors by a factor of two at $z = 0.3$, 0.6, and 2.5. Its SDSS-IV
successor eBOSS will yield the first BAO measurements in the redshift
range $1 < z < 2$ and improved precision at lower and higher redshifts.
The HETDEX project will measure BAO with Lyman-$\alpha$ emission
line galaxies at $z = 2$--$3$.
The same galaxy surveys carried out for BAO also provide
data for RSD measurements of structure growth and AP measurements
of cosmic geometry, and with improved theoretical modeling there is
potential for large precision gains over current constraints from
these methods.
The Dark Energy Survey (DES), which started operations in August 2013
and will run through 2018, will provide a sample of several thousand
Type Ia SNe,
enabling smaller statistical errors and division of the sample into subsets
for cross-checking evolutionary effects and other systematics.
DES imaging will be similar in depth but 50 times larger in area
than CFHTLens, providing a much more powerful weak lensing data set
and weak lensing mass calibration of enormous samples of galaxy
clusters (tens of thousands). Weak lensing surveys from the newly
commissioned Hyper Suprime-Cam on the Subaru telescope will be
smaller in area but deeper, with a comparable number of lensed galaxies.
Reducing weak lensing systematics below the small statistical errors
of these samples will be a major challenge, but one with a large payoff
in precision measurements of structure growth. Uncertainties in direct
determinations of $H_0$ should be reduced by further observations with
HST and, in the longer run, by Cepheid parallaxes from the GAIA
mission, by the ability of the James Webb Space Telescope to
discover Cepheids in more distant SN Ia calibrator galaxies, and by
independent estimates from larger samples of maser galaxies and
gravitational lensing time delays.
A still more ambitious period begins late in this decade and continues
through the 2020s, with experiments that include the Dark Energy
Spectroscopic Instrument (DESI), the Subaru Prime Focus Spectrograph (PFS),
the Large Synoptic Survey Telescope (LSST), and the space missions
Euclid and WFIRST (Wide Field Infrared Survey Telescope).
DESI and PFS both aim for major improvements in the precision of
BAO, RSD, and other measurements of galaxy clustering in the redshift
range $0.8 < z < 2$, where large comoving volume allows much smaller
cosmic variance errors than low redshift surveys like BOSS.
LSST will be the ultimate ground-based optical weak lensing experiment,
measuring several billion galaxy shapes over 20,000 deg$^2$ of the
southern hemisphere sky, and it will detect and monitor many thousands of SNe
per year. Euclid and WFIRST also have weak lensing as a primary
science goal, taking advantage of the high angular resolution and
extremely stable image quality achievable from space. Both missions
plan large spectroscopic galaxy surveys, which will provide better
sampling at high redshifts than DESI or PFS because of the lower
infrared sky background above the atmosphere. WFIRST is also designed
to carry out what should be the ultimate supernova cosmology
experiment, with deep, high resolution, near-IR observations and
the stable calibration achievable with a space platform.
Performance forecasts necessarily become more uncertain the further ahead
we look, but collectively these experiments are likely to achieve
improvements of 1--2 orders of magnitude over the precision of current
expansion and growth measurements, while simultaneously extending
their redshift range, improving control of systematics, and enabling
much tighter cross-checks of results from entirely independent methods.
The critical clue to the origin of cosmic acceleration could also come
from a surprising direction, such as laboratory or solar system tests
that challenge GR, time variation of fundamental ``constants,'' or anomalous
behavior of gravity in some astronomical environments.
Experimental advances along these multiple axes could confirm today's
relatively simple, but frustratingly incomplete, ``standard model''
of cosmology, or they could force yet another radical revision in
our understanding of energy, or gravity, or the spacetime structure
of the Universe.
\smallskip
\noindent{\bf References:}
\parindent=20pt
\ListReferences
\beginDBonly
\smallskip
\refline\parindent=20pt
\par\hang\textindent{}For all references, see the full {\it Review.}
\endDBonly
% \chapter{<title>} causes a page break, prints a
\def\chapter#1{%
\global\advance\chapternum by \@ne
\global\sectionnum=\z@
\global\def\@sectID{}%
\global\def\S@sectID{}%
\edef\lab@l{\ChapterStyle{\the\chapternum}}%
\ifshowchaptID
\global\edef\@chaptID{\lab@l.}%
\r@set
\else\edef\@chaptID{}\fi
\everychapter
\begingroup
\def\begingroup\unSpecial\@label##1{}%
\xdef\ChapterTitle{#1}%
\def\n{}\def\nl{}\def\mib{}%
\setHeadline{#1}%
\emsg{Chapter \@chaptID\space #1}%
\def\@quote{\string\@quote\relax}%
\addTOC{0}{\NX\TOCcID{\lab@l.}#1}{\folio}%
\endgroup %
\@Mark{#1}%
\s@ction
\afterchapter}
\def\everychapter{\relax}
\def\afterchapter{\relax}
\def\ChapterStyle#1{#1}
\def\setChapterID#1{\edef\@chaptID{#1.}}
\def\r@set{%
\global\subsectionnum=\z@
\global\subsubsectionnum=\z@
\ifx\eqnum\undefined\relax
\else\global\eqnum=\z@\fi
\ifx\theoremnum\undefined\relax
\else
\global\theoremnum=\z@
\global\lemmanum=\z@ %
\global\corollarynum=\z@ %
\global\definitionnum=\z@ %
\global\fignum=\z@ %
\ifRomanTables\relax %
\else\global\tabnum=\z@\fi
\fi}
\long\def\s@ction{%
\checkquote
\checkenv
\nobreak\smallbreak
\vskip 0pt}
\def\@Mark#1{%
\begingroup
\def\begingroup\unSpecial\@label##1{}%
\def\goodbreak{}%
\def\mib{}\def\n{}%
\mark{#1\NX\else\lab@l}%
\endgroup}%
\def\@noMark#1{\relax}
\def\setHeadline#1{\@setHeadline#1\n\endlist}
\def\@setHeadline#1\n#2\endlist{%
\def\@arg{#2}\ifx\@arg\empty
\global\edef\hfill{#1}%
\else
\global\edef\hfill{#1\dots}%
\fi
}
\def\twelvepoint\boldface{\twelvepoint\boldface}
\def\section#1{%
\vskip\sectionskip
\goodbreak\pagecheck\sectionminspace
\global\advance\sectionnum by \@ne
\edef\lab@l{\@chaptID\SectionStyle{\the\sectionnum}}%
\ifshowsectID
\global\edef\@sectID{\SectionStyle{\the\sectionnum}.}%
\global\edef\@fullID{\lab@l.\space\space}%
\global\subsectionnum=\z@
\global\subsubsectionnum=\z@
\else\gdef\@fullID{}\fi
\everysection
\ifx\twelvepoint\bf\undefined\def\twelvepoint\bf{\bf}\fi
\vbox
{\raggedright\twelvepoint\boldface
\setbox0=\hbox{\noindent\twelvepoint\boldface\@fullID
\hangindent=\wd0 \hangafter=1
\noindent\@fullID
{#1}}}\relax
\begingroup
\def\begingroup\unSpecial\@label##1{}%
\global\edef\SectionTitle{#1}%
\def\n{}\def\nl{}\def\mib{}%
\ifnum\chapternum=0\setHeadline{#1}\fi
\emsg{Section \@fullID #1}%
\def\@quote{\string\@quote\relax}%
\addTOC{1}{\NX\TOCsID{\lab@l.}#1}{\folio}%
\endgroup
\s@ction
\aftersection}
\def\everysection{\relax}
\def\aftersection{\relax}
\def\setSectionID#1{\edef\@sectID{#1.}}
\def\SectionStyle#1{#1}
\def\pagecheck#1{%
\dimen@=\pagegoal
\advance\dimen@ by -\pagetotal
\ifdim\dimen@>0pt
\ifdim\dimen@< #1\relax
\vfil\break \fi\fi}
\def\Resetsection{%
\global\advance\sectionnum by \@ne
\global\edef\lab@l{\@chaptID\SectionStyle{\the\sectionnum}}%
\ifshowsectID
\global\edef\@sectID{\SectionStyle{\the\sectionnum}.}%
\global\edef\@fullID{\lab@l.\space\space}%
\global\subsectionnum=\z@
\global\subsubsectionnum=\z@
\else\gdef\@fullID{}\fi
\everysection
\aftersection}
\def\tenpoint\boldface{\tenpoint\boldface}
\def\subsection#1{%
\ifnum\subsectionnum=0
\par
\else
\vskip\subsectionskip
\fi
\goodbreak\pagecheck\sectionminspace
\global\advance\subsectionnum by \@ne
\subsubsectionnum=\z@
\edef\lab@l{\@chaptID\@sectID\SubsectionStyle{\the\subsectionnum}}%
\ifshowsectID
\global\edef\@fullID{\lab@l.\space\space}%
\else\gdef\@fullID{}\fi
\everysubsection
\begingroup
\def\begingroup\unSpecial\@label##1{}%
\global\edef\SubsectionTitle{#1}%
\def\n{}\def\nl{}\def\mib{}%
\emsg{\@fullID #1}%
\def\@quote{\string\@quote\relax}%
\addTOC{2}{\NX\TOCsID{\lab@l.}#1}{\folio}%
\endgroup
\s@ction
{\raggedright\twelvepoint\bf
\setbox0=\hbox{\noindent\tenpoint\boldface\@fullID
\hangindent=\wd0 \hangafter=1
\noindent\tenpoint\boldface\@fullID
\tenpoint\boldface\bfit
{#1}\hbox{\copy\colonbox}\relax
\nobreak
\aftersubsection\nobreak}
\def\everysubsection{\relax}
\def\aftersubsection{\relax}
\def\SubsectionStyle#1{#1}
\subsectionskip=\smallskipamount
\def\tenpoint\it{\tenpoint\it}
\def\subsubsection#1{%
\ifnum\subsectionnum=0
\par
\else
\vskip\subsectionskip
\fi
\goodbreak\pagecheck\sectionminspace
\global\advance\subsubsectionnum by \@ne
\edef\lab@l{\@chaptID\@sectID\SectionStyle{\the\subsectionnum}
\SectionStyle{\the\subsubsectionnum}}%
\ifshowsectID
\global\edef\@fullID{\lab@l.\space\space}%
\else\gdef\@fullID{}\fi
\everysubsubsection
\begingroup
\def\begingroup\unSpecial\@label##1{}%
\global\edef\SubsectionTitle{#1}%
\def\n{}\def\nl{}\def\mib{}%
\emsg{\@fullID #1}%
\def\@quote{\string\@quote\relax}%
\addTOC{3}{\NX\TOCsID{\lab@l.}#1}{\folio}%
\endgroup
\s@ction
{\raggedright\twelvepoint\bf
\setbox0=\hbox{\noindent\tenpoint\boldface\@fullID
\hangindent=\wd0 \hangafter=1
\tenpoint\boldface\noindent\@fullID
\tenpoint\it
#1\hbox{:}\relax
\aftersubsection}
\def\everysubsubsection{\relax}
\def\aftersubsubsection{\relax}
\def\SubsubsectionStyle#1{#1}
\newbox\@capbox
\newcount\@caplines
\def\CaptionName{}
\def\@ID{}
\def\caption#1{%
\def\lab@l{\@ID}%
\global\setbox\@capbox=\vbox\bgroup
\def\@inCaption{T}%
\normalbaselines
\dimen@=20\parindent
\ifdim\colwidth>\dimen@\narrower\fi
\noindent{\bf \CaptionName~\@ID:\space}%
#1\relax
\vskip0pt
\global\@caplines=\prevgraf
\egroup
\ifnum\@ne=\@caplines
\global\setbox\@capbox=\vbox\bgroup
{\bf \CaptionName~\@ID:\space}%
#1\hfil\egroup
\fi %
\def\@inCaption{F}%
\if N\@whereCap\def\@whereCap{B}\fi
\if T\@whereCap
\centerline{\box\@capbox}%
\vglue 3pt
\fi %
}
\def\@inCaption{F}%
\long\def\Caption#1\emsg{> \NX\endCaption called before \NX\Caption.}{\caption{#1}}
\def\emsg{> \NX\endCaption called before \NX\Caption.}{\emsg{> \NX\emsg{> \NX\endCaption called before \NX\Caption.} called before \NX\Caption.}}
\def\emsg{> try using \NX\caption{ text... }}{\emsg{> try using \NX\caption{ text... }}}
\def\EQNOparse#1;#2;#3\endlist{%
\if ?#3?\relax
\global\advance\eqnum by\@ne
\edef\tnum{\@chaptID\the\eqnum}%
\Eqtag{#1}{\tnum}%
\@EQNOdisplay{#1}%
\else\stripblanks #2\endlist
\edef\p@rt{\tok}%
\if a\p@rt\relax
\global\advance\eqnum by\@ne\fi
\edef\tnum{\@chaptID\the\eqnum}%
\Eqtag{#1}{\tnum}%
\edef\tnum{\@chaptID\the\eqnum\p@rt}
\Eqtag{#1;\p@rt}{\tnum}%
\@EQNOdisplay{#1;#2}%
\fi %
\global\let\?=\tnum
\relax}
\def\LabelParsewo#1;#2;#3\endlist{%
\if ?#3?\relax
\global\advance\@count by\@ne
\xdef\@ID{\@chaptID\the\@count}%
\tag{\@prefix#1}{\@ID}%
\else
\stripblanks #2\endlist
\edef\p@rt{\tok}%
\if a\p@rt\relax
\global\advance\@count by\@ne\fi
\xdef\@ID{\@chaptID\the\@count}%
\tag{\@prefix#1}{\@ID}%
\xdef\@ID{\@chaptID\the\@count\p@rt}%
\tag{\@prefix#1;\p@rt}{\@ID}%
\fi
}
\def\@ID{}
\def\@figure#1#2{%
\vskip 0pt
\begingroup
\let\@count=\fignum
\def\@prefix{Fg.}%
\if ?#2?\relax \def\@ID{}%
\else\LabelParsewo #2;;\endlist\fi
\def\CaptionName{Figure}%
\ifFigsLast
\emsg{\CaptionName\space\@ID. {#2} [storing in \jobname.fg]}%
\@fgwrite{\@comment> \CaptionName\space\@ID.\space{#2}}%
\@fgwrite{\NX\@FigureItem{\CaptionName}{\@ID}{\NX#1}}%
\newlinechar=`\^^M
\obeylines
\let\@next=\@copyfig
\else
#1\relax
\setbox\@capbox\vbox to 0pt{}%
\def\@whereCap{N}%
\emsg{\CaptionName\ \@ID.\ {#2}}%
\let\emsg{> \string\endfigure before \string\figure!}=\@endfigure
\let\endFigure=\@endfigure
\let\ENDFIGURE=\@endfigure
\let\@next=\@findcap
\fi
\@next}
\def\@table#1#2{%
\vskip 0pt
\begingroup %
\def\CaptionName{Table}%
\def\@prefix{Tb.}%
\let\@count=\tabnum
\if ?#2?\relax \def\@ID{}%
\else %
\ifRomanTables
\global\advance\@count by\@ne
\edef\@ID{\uppercase\expandafter
{\romannumeral\the\@count}}%
\tag{\@prefix#2}{\@ID}%
\else %
\LabelParsewo #2;;\endlist\fi
\fi %
\ifTabsLast
\emsg{\CaptionName\space\@ID. {#2} [storing in \jobname.tb]}%
\@tbwrite{\@comment> \CaptionName\space\@ID.\space{#2}}%
\@tbwrite{\NX\@FigureItem{\CaptionName}{\@ID}{\NX#1}}%
\newlinechar=`\^^M
\obeylines
\let\@next=\@copytab
\else
#1\relax
\setbox\@capbox\vbox to 0pt{}%
\def\@whereCap{N}%
\emsg{\CaptionName\ \@ID.\ {#2}}%
\let\egroup\par\vskip 3pt=\@endfigure
\let\endTable=\@endfigure
\let\ENDTABLE=\@endfigure
\let\@next=\@findcap
\fi %
\@next}
\def\beginRPPonly{\ifnum\BigBookOrDataBooklet=1 \relax}
\def\beginDBonly{\ifnum\BigBookOrDataBooklet=2 \relax}
\let\endDBonly\fi
\let\endRPPonly\fi
\def{\ifnum\BigBookOrDataBooklet=1 rpp\else db\fi}
\ifnum\BigBookOrDataBooklet=1
\def\AUXinit{%
\ifauxswitch
\immediate\openout\auxfileout=\jobname.aux
\else
\gdef\auxout##1##2{}%
\fi
\gdef\AUXinit{\relax}}
\else
\def\AUXinit{%
\ifauxswitch
\immediate\openout\auxfileout=\jobname.aux
\else
\gdef\auxout##1##2{}%
\fi
\gdef\AUXinit{\relax}}
\fi
\def\auxout#1#2{\AUXinit
\immediate\write\auxfileout{%
\NX\expandafter\NX\gdef
\NX\csname #1\NX\endcsname{#2}}%
}
\ifnum\BigBookOrDataBooklet=1
\def\ReadAUX{%
\openin\auxfilein=\jobname.aux
\ifeof\auxfilein\closein\auxfilein
\else\closein\auxfilein
\begingroup
\unSpecial %
\input \jobname.aux \relax
\endgroup
\fi}
\else
\def\ReadAUX{%
\openin\auxfilein=\jobname.aux
\ifeof\auxfilein\closein\auxfilein
\else\closein\auxfilein
\begingroup
\unSpecial %
\input \jobname.aux \relax
\endgroup
\fi}
\fi
\ReadAUX
\catcode`@=12
\global\font\elevenbf=cmbx10 scaled \magstephalf
\newbox\HEADFIRST
\newbox\HEADSECOND
\newbox\HEADhbox
\newbox\HEADvbox
\newbox\RUNHEADhbox
\newtoks\RUNHEADtok
\newcount\onemorechapter
\newdimen\titlelinewidth
\newdimen\movehead
\movehead=0pt
\titlelinewidth=.5pt
\def\tenpoint\it{\twelvepoint\boldface\bfit}
\def\setbox\RUNHEADhbox\hbox{\hss}{\setbox\RUNHEADhbox\hbox{\hss}}
\setbox\RUNHEADhbox\hbox{\hss}
\def\nochapternumberrunninghead#1%
{\setbox\RUNHEADhbox\hbox{\tenpoint\it %
#1}
\WWWhead{\string\wwwtitle{#1}}%
}
\def\runninghead#1{\setbox\RUNHEADhbox\hbox{\tenpoint\it%
\the\chapternum.~#1}%
\WWWhead{\string\wwwtitle{#1}}%
\RUNHEADtok={#1}}
\def\doublerunninghead#1#2{%
\onemorechapter=\chapternum\relax
\advance\onemorechapter by 1\relax
\setbox\RUNHEADhbox\hbox{\tenpoint\it%
\the\chapternum.~#1, \the\onemorechapter.~#2}}
\def\heading#1{\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\boldhead\the\chapternum.~#1}
\centerline{\copy\HEADFIRST}\vskip .1in}
\def\smallerheading#1{\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\elevenbf\the\chapternum.~#1}
\centerline{\copy\HEADFIRST}\vskip .1in}
\def\centerline{\copy\HEADFIRST}\vskip .1in{\relax}
\def\notitleheading#1{%
\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\boldhead\the\chapternum.~#1}}
\def\doubleheading#1#2{\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\centerline{\boldhead\hfill\the\chapternum.~#1\hfill}\vskip .1in%
\centerline{\boldhead\hfill #2\hfill}\vskip .2in}
\def\nochapterdoubleheading#1#2{%
\centerline{\boldhead\hfill #1\hfill}\vskip .1in%
\centerline{\boldhead\hfill #2\hfill}\vskip .2in}
\def\nochapterdbheading#1{%
\centerline{\boldhead\hfill #1\hfill}\vskip .1in}%
\def\nochapterheading#1{%
\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\boldhead\the\chapternum.~#1}
}
\def\nochapternumberheading#1{%
\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\boldhead~#1}
}
\def\nochapterheadingnochapternumber{%
\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\hss}
}
\def\multiheading#1#2{%
\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\setbox\HEADFIRST=\hbox{\boldhead\the\chapternum.~#1}
\setbox\HEADSECOND=\hbox{\boldhead #2}}
\headline={\ifnum\pageno=\Firstpage\ifodd\pageno\firstheadodd\else\firstheadeven\fi\else\ifodd\pageno\contheadodd\else\contheadeven\fi\fi}
\def\ifodd\pageno\firstheadodd\else\firstheadeven\fi{\ifodd\pageno\firstheadodd\else\firstheadeven\fi}
\def\ifodd\pageno\contheadodd\else\contheadeven\fi{\ifodd\pageno\contheadodd\else\contheadeven\fi}
\def\firstheadeven{%
\setbox\HEADvbox=\vtop to 1.15in{%
\vglue .2in%
\hbox to \fullhsize{%
\boldhead {\elevenssbf\Folio}\quad\copy\RUNHEADhbox\hss}%
\vskip .1in%
\hrule depth 0pt height \titlelinewidth
\vskip .25in%
\hbox to \fullhsize{\boldhead\hss\copy\HEADFIRST\hss}%
\hbox to \fullhsize{\vrule height 18pt width 0pt%
\boldhead\hss\copy\HEADSECOND\hss}%
\vss%
}%
\setbox\HEADhbox=\hbox{\raise.85in\copy\HEADvbox}%
\dp\HEADhbox=0pt\ht\HEADhbox=0pt\copy\HEADhbox%
}
\def\firstheadodd{%
\message{THIS IS FIRSTPAGE}%
\setbox\HEADvbox=\vtop to 1.15in{%
\vglue .2in%
\hbox to \fullhsize{%
\hss\copy\RUNHEADhbox\boldhead\quad{\elevenssbf\Folio}}%
\vskip .1in%
\hrule depth 0pt height \titlelinewidth
\vskip .25in%
\hbox to \fullhsize{\boldhead\hss\copy\HEADFIRST\hss}%
\hbox to \fullhsize{\vrule height 18pt width 0pt%
\boldhead\hss\copy\HEADSECOND\hss}%
\vss%
}%
\setbox\HEADhbox=\hbox{\raise.85in\copy\HEADvbox}%
\dp\HEADhbox=0pt\ht\HEADhbox=0pt\copy\HEADhbox%
}
\def\contheadeven{%
\setbox\HEADvbox=\vtop to .85in{%
\vglue .2in%
\hbox to \fullhsize{%
\boldhead {\elevenssbf\Folio}\quad\copy\RUNHEADhbox\hss}%
\vskip .1in%
\hrule depth 0pt height \titlelinewidth
\vss%
}%
\setbox\HEADhbox=\hbox{\raise.55in\copy\HEADvbox}%
\dp\HEADhbox=0pt\ht\HEADhbox=0pt\copy\HEADhbox%
}
\def\contheadodd{%
\setbox\HEADvbox=\vtop to .85in{%
\vglue .2in%
\hbox to \fullhsize{%
\hss\copy\RUNHEADhbox\boldhead\quad{\elevenssbf\Folio}}%
\vskip .1in%
\hrule depth 0pt height \titlelinewidth
\vss%
}%
\setbox\HEADhbox=\hbox{\raise.55in\copy\HEADvbox}%
\dp\HEADhbox=0pt\ht\HEADhbox=0pt\copy\HEADhbox%
}
\def\pagenumberonly{\setbox\RUNHEADhbox\hbox{\hss}%
\setbox\HEADFIRST\hbox{\hss}%
\titlelinewidth=0pt}
\input rotate
\let\RMPpageno=\pageno
\def\scaleit#1#2{\rotdimen=\ht#1\advance\rotdimen by \dp#1%
\hbox to \rotdimen{\hskip\ht#1\vbox to \wd#1{\rotstart{#2 #2 scale}%
\box#1\vss}\hss}\rotfinish}
\def\WhoDidIt#1{%
\noindent #1\smallskip}
\hyphenation{%
brems-strah-lung
Dan-ko-wych
Fuku-gi-ta
Gav-il-let
Gla-show
mono-pole
mono-poles
Sad-ler
}
\let\HANG=\hangindent\itemindent
\let\Item=\par\hang\textindent
\let\Itemitem=\itemitem
\newdimen\itemindent
\itemindent=20pt
\def\hangindent\itemindent{\hangindent\parindent}
\def\hangindent\itemindent{\hangindent\itemindent}
\def\par\hang\textindent{\par\hangindent\itemindent\textindent}
\def\textindent#1{\indent\llap{#1\enspace}\ignorespaces}
\def\textindent#1{\bgroup\parindent=\itemindent\indent%
\llap{#1\enspace}\egroup\ignorespaces}
\def\itemitem{\par\bgroup\parindent=\itemindent\indent\egroup
\hangindent2\itemindent \textindent}
\EnvLeftskip=\itemindent
\EnvRightskip=0pt
\long\def\poormanbold#1%
{%
\leavevmode\hbox%
{%
\hbox to 0pt{#1\hss}\raise.3pt%
\hbox to .3pt{#1\hss}%
\hbox to 0pt{#1\hss}\raise.3pt%
\hbox {#1\hss}%
\hss%
}%
}
\def{\parfillskip=0pt\par}\vfill\eject\noindent\ignorespaces{{\parfillskip=0pt\par}\vfill\eject\noindent\ignorespaces}
\def\vfill\eject\ignorespaces{\vfill\eject\ignorespaces}
\long\def\XsecFigures#1#2#3#4#5#6#7%
{%
\ifnum\IncludeXsecFigures = 0 %
\vfill%
\Page#1%
\centerline{\figbox{\twelvepoint\bf #2 FIGURE}{7.75in}{4.8in}}%
\vfill%
\Page#4%
\centerline{\figbox{\twelvepoint\bf #5 FIGURE}{7.75in}{4.8in}}%
\vfill%
\else%
\vbox%
{%
\Page#1%
\hbox to \hsize%
{%
\vtop to 4in%
{%
\hsize = 0in%
\special%
{%
insert rpp$figures:#3.ps,%
top=13.4in,left=0.0in,%
magnification=1300,%
string="/translate{pop pop}def"%
}%
\vss%
}%
\hss%
}%
\vglue.5in%
\Page#4%
\hbox to \hsize%
{%
\vtop to 6in%
{%
\hsize = 0in%
\special%
{%
insert rpp$figures:#6.ps,%
top=13.4in,left=0.0in,%
magnification=1300,%
string="/translate{pop pop}def"%
}%
\vss%
}%
\hss%
}%
\vss%
}%
\fi%
\vfill%
#7%
\lastpagenumber%
}
\gdef\refline{\parskip=2pt\medskip\hrule width 2in\vskip 3pt%
\tolerance=10000\pretolerance=10000}
\gdef\databookrefstar{{^\ast}}
\gdef\refdstar{\parindent=20pt\vskip2pt\par\hang\textindent{\hss$^{\ast\ast}$}}
\gdef\refdagger{\parindent=20pt\vskip2pt\par\hang\textindent{\hss$^\dagger$}}
\gdef\refstar{\parindent=20pt\vskip2pt\par\hang\textindent{\hss$^\ast$}}
\gdef\refddagger{\parindent=20pt\vskip2pt\par\hang\textindent{\hss$^\ddagger$}}
\def\catcode`\^^M=10
\catcode`\^^M=10
\def\refskip{\vskip0pt plus 2pt
\refindent=20pt
\leftskip=20pt
}
\def\Refskip{\vskip0pt plus 2pt}
\def\noalign{\vglue2pt\hrule\vskip3pt\hrule\smallskip}{\noalign{\vglue2pt\hrule\vskip3pt\hrule\smallskip}}
\def\noalign{\medskip\hrule\smallskip}{\noalign{\medskip\hrule\smallskip}}
\def\noalign{\vskip 3pt\hrule\vskip3pt\hrule\smallskip}{\noalign{\vskip 3pt\hrule\vskip3pt\hrule\smallskip}}
\def\noalign{\hrule}{\noalign{\hrule}}
\def\noalign{\hrule height .95pt}{\noalign{\hrule height .95pt}}
\def\noalign{\vskip 2pt}\noalign{\hrule}\noalign{\vskip 2pt}{\noalign{\vskip 2pt}\noalign{\hrule}\noalign{\vskip 2pt}}
\def\cr\tablerule{\cr\noalign{\hrule}}
\def\pmalign#1#2{$\llap{$#1$}\pm\rlap{$#2$}$}
\def\dashalign#1#2{\llap{#1}\hbox{--}\rlap{#2}}
\def\decalign#1#2{\llap{#1}.\rlap{#2}}
\let\da=\decalign
\def\phantom{1}\relax{\phantom{1}\relax}
\def\phantom{11}\relax{\phantom{11}\relax}
\def\centertab#1{\hfil#1\hfil}
\def\righttab#1{\hfil#1}
\def\lefttab#1{#1\hfil}
\let\ct=\centertab
\let\rt=\righttab
\let\lt=\lefttab
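% Usage sketch (illustrative, not part of the original file): \lt, \ct, and
% \rt set column alignment inside \halign templates, while \pmalign and \da
% align entries on the \pm sign and on the decimal point respectively.
% Example (commented out so the file still loads cleanly):
%   \halign{\lt{#}\quad&\rt{#}\cr
%     mass&\pmalign{1.23}{0.04}\cr
%     ratio&\da{0}{57}\cr}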
\def\vrule height 12pt depth 5pt width 0pt{\vrule height 12pt depth 5pt width 0pt}
\def\vrule height 10pt depth 4pt width 0pt{\vrule height 10pt depth 4pt width 0pt}
\def\vrule height 10pt depth 4pt width 0pt{\vrule height 10pt depth 4pt width 0pt}
\def\vrule height 11pt depth 4pt width 0pt{\vrule height 11pt depth 4pt width 0pt}
\def\vrule height 13pt depth 5pt width 0pt{\vrule height 13pt depth 5pt width 0pt}
\def\vrule height 11.25pt depth 2.5pt width 0pt{\vrule height 11.25pt depth 2.5pt width 0pt}
\def\vrule height 0pt depth 0pt width 0pt{\vrule height 0pt depth 0pt width 0pt}
\def\omit\vrule height 0pt depth 4pt width 0pt{\omit\vrule height 0pt depth 4pt width 0pt}
\def\afterlboxstrut{%
\noalign{\vskip-4pt}\omit\vrule height 0pt depth 4pt width 0pt}
\def\vrule height 13pt depth 4pt width 0pt{\vrule height 13pt depth 4pt width 0pt}
\def\vrule height 10pt depth 6pt width 0pt{\vrule height 10pt depth 6pt width 0pt}
\def\vrule height 0pt depth 6pt width 0pt{\vrule height 0pt depth 6pt width 0pt}
\def\vrule height 4pt depth 6pt width 0pt{\vrule height 4pt depth 6pt width 0pt}
\def65536{65536}
\newdimen\PSOutputWidth
\newdimen\PSInputWidth
\newdimen\PSOffsetX
\newdimen\PSOffsetY
\def\PSOrigin{%
0 0 moveto 10 0 lineto stroke 0 0 moveto 0 10 lineto stroke}
\def\PSScale{%
\number\PSOutputWidth \space \number\PSInputWidth \space div \space
dup scale}
\def\PSOffset{%
\number\PSOffsetX \space 65536 \space div %
\number\PSOffsetY \space 65536 \space div \space translate}
\def\PSTransform{%
\PSScale \space \PSOffset}
\def\UGSTransform{%
\PSScale \space \PSOffset \space 27 600 translate -90 rotate}
\def\UGSLandInput#1{%
\PSOffsetX=0in
\PSOffsetY=0in
\PSOutputWidth=\hsize
\PSInputWidth=11in
\special{#1 \PSOrigin \space \UGSTransform}}
\def\UGSInput#1{\PSOutputWidth=\hsize%
\PSOffsetX=-125pt
\PSOffsetY=0in
\PSInputWidth=8.125in
\special{#1 \PSOrigin \space \UGSTransform}}
\def\AIInput#1{\PSOutputWidth=\hsize%
\PSOffsetX= 0in
\PSOffsetY= 0in
\PSInputWidth=8.5in
\special{#1 \PSOrigin \space \PSTransform}}
\def\mbox#1{{\ifmmode#1\else$#1$\fi}}
\def\overline{\overline}
\def\overrightarrow{\overrightarrow}
\def\Widevec#1{\vbox to 0pt{\vss\hbox{$\overrightarrow #1$}}}
\def\Widebar#1{\vbox to 0pt{\vss\hbox{$\overline #1$}}}
\def\Widetilde#1{\vbox to 0pt{\vss\hbox{$\widetilde #1$}}}
\def\Widehat#1{\vbox to 0pt{\vss\hbox{$\widehat #1$}}}
\def\mbox{t\Widebar t}{\mbox{t\Widebar t}}
\def\mbox{u\Widebar u}{\mbox{u\Widebar u}}
\def\mbox{c\Widebar c}{\mbox{c\Widebar c}}
\def\mbox{q\Widebar q}{\mbox{q\Widebar q}}
\def\mbox{g\Widebar g}{\mbox{g\Widebar g}}
\def\mbox{s\Widebar s}{\mbox{s\Widebar s}}
\def\mbox{p\Widebar p}{\mbox{p\Widebar p}}
\def\mbox{d\Widebar d}{\mbox{d\Widebar d}}
\def\mbox{b\Widebar b}{\mbox{b\Widebar b}}
\def\mbox{\Widebar K}{\mbox{\Widebar K}}
\def\mbox{\Widebar D}{\mbox{\Widebar D}}
\def\mbox{\Widebar B}{\mbox{\Widebar B}}
\def\mbox{\Widebar A}{\mbox{\Widebar A}}
\def\mbox{\Widebar A}{\mbox{\Widebar A}}
\def\mbox{\Widebar x}{\mbox{\Widebar x}}
\def\mbox{\Widebar z}{\mbox{\Widebar z}}
\def\mbox{\Widebar f}{\mbox{\Widebar f}}
\def\mbox{\Widebar e}{\mbox{\Widebar e}}
\def\mbox{\Widebar c}{\mbox{\Widebar c}}
\def\mbox{\Widebar t}{\mbox{\Widebar t}}
\def\mbox{\Widebar s}{\mbox{\Widebar s}}
\def\mbox{\Widebar u}{\mbox{\Widebar u}}
\def\mbox{\Widebar r}{\mbox{\Widebar r}}
\def\mbox{\Widebar d}{\mbox{\Widebar d}}
\def\mbox{\Widebar b}{\mbox{\Widebar b}}
\def\mbox{\Widebar q}{\mbox{\Widebar q}}
\def\mbox{\Widebar g}{\mbox{\Widebar g}}
\def\mbox{\Widebar g}{\mbox{\Widebar g}}
\def\mbox{\Widebar a}{\mbox{\Widebar a}}
\def\mbox{\Widevec p}{\mbox{\Widevec p}}
\def\mbox{\Widevec r}{\mbox{\Widevec r}}
\def\mbox{\Widevec J}{\mbox{\Widevec J}}
\def\mbox{\Widebar p}{\mbox{\Widebar p}}
\def\mbox{\Widebar n}{\mbox{\Widebar n}}
\def\widehat\alpha{\widehat\alpha}
\def\mbox{\Widebar \Lambda}{\mbox{\Widebar \Lambda}}
\def\mbox{\Widebar \Omega}{\mbox{\Widebar \Omega}}
\def\mbox{\Widebar \mu}{\mbox{\Widebar \mu}}
\def\mbox{{\Widevec \beta}}{\mbox{{\Widevec \beta}}}
\def\mbox{\Widebar \Xi}{\mbox{\Widebar \Xi}}
\def\mbox{\Widebar \xi}{\mbox{\Widebar \xi}}
\def\mbox{\Widebar \Gamma}{\mbox{\Widebar \Gamma}}
\def\mbox{\Widebar \theta}{\mbox{\Widebar \theta}}
\def\mbox{\Widebar \nu}{\mbox{\Widebar \nu}}
\def\mbox{\Widebar \ell}{\mbox{\Widebar \ell}}
\def\mbox{\Widebar \alpha}{\mbox{\Widebar \alpha}}
\def\mbox{\Widebar \ell}{\mbox{\Widebar \ell}}
\def\mbox{\Widebar \psi}{\mbox{\Widebar \psi}}
\def\frac#1#2{{\displaystyle{#1 \over #2}}}
\def\textfrac#1#2{{\textstyle{#1 \over #2^{\vrule height 5pt depth 0pt width 0pt}}}}
\def\vrule height 5pt depth 0pt width 0pt{\vrule height 5pt depth 0pt width 0pt}
\def\ifmmode{\hbox{ GeV }}\else{GeV}\fi{\ifmmode{\hbox{ GeV }}\else{GeV}\fi}
\def\ifmmode{\hbox{ MeV }}\else{MeV}\fi{\ifmmode{\hbox{ MeV }}\else{MeV}\fi}
\def\ifmmode{\hbox{ keV }}\else{keV}\fi{\ifmmode{\hbox{ keV }}\else{keV}\fi}
\def\ifmmode{\hbox{ eV }}\else{eV}\fi{\ifmmode{\hbox{ eV }}\else{eV}\fi}
\def\ifmmode{{\rm GeV}/c}\else{GeV/$c$}\fi{\ifmmode{{\rm GeV}/c}\else{GeV/$c$}\fi}
\def\ifmmode{({\rm TeV}/c)^{-1}}\else{(TeV/$c)^{-1}$}\fi{\ifmmode{({\rm TeV}/c)^{-1}}\else{(TeV/$c)^{-1}$}\fi}
\def\ifmmode{{\rm TeV}/c}\else{TeV/$c$}\fi{\ifmmode{{\rm TeV}/c}\else{TeV/$c$}\fi}
\def\cm{{\rm cm}}
\def\ifmmode{\mu{\rm m}}\else{$\mu$m}\fi{\ifmmode{\mu{\rm m}}\else{$\mu$m}\fi}
\def\ifmmode{\mu{\rm s}}\else{$\mu$s}\fi{\ifmmode{\mu{\rm s}}\else{$\mu$s}\fi}
\def\lum{\ifmmode{{\rm cm}^{-2}{\rm s}^{-1}}%
\else{cm$^{-2}$s$^{-1}$}\fi}%
\def\lstd{\ifmmode{10^{33}\,{\rm cm}^{-2}{\rm s}^{-1}}%
\else{$10^{33}\,$cm$^{-2}$s$^{-1}$}\fi}%
\def\hilstd{\ifmmode{10^{34}\,{\rm cm}^{-2}{\rm s}^{-1}}%
\else{$10^{34}\,$cm$^{-2}$s$^{-1}$}\fi}%
\def\VEV#1{\left\langle #1\right\rangle}
\def\,\hbox{\msxmten\char'162}\,{\,\hbox{\msxmten\char'162}\,}
\def\,\hbox{\tensy\char'015}\,{\,\hbox{\tensy\char'015}\,}
\def\,\hbox{\msxmten\char'046}\,{\,\hbox{\msxmten\char'046}\,}
\def\,\hbox{\msxmten\char'056}\,{\,\hbox{\msxmten\char'056}\,}
\def\,\hbox{\msxmten\char'046}\,{\,\hbox{\msxmten\char'046}\,}
\def\,\hbox{\msxmten\char'056}\,{\,\hbox{\msxmten\char'056}\,}
\def\ttt#1{\times 10^{#1}}
\def$^{-1}${$^{-1}$}
\def$^{-2}${$^{-2}$}
\def$^{-3}${$^{-3}$}
\def\ifmmode{|\eta|}\else{$|\eta|$}\fi{\ifmmode{|\eta|}\else{$|\eta|$}\fi}
\def\abs#1{\left| #1\right|}
\def\ifmmode{p_\perp}\else{$p_\perp$}\fi{\ifmmode{p_\perp}\else{$p_\perp$}\fi}
\def\deg{\ifmmode{^\circ}\else{$^\circ$}\fi}%
\def\ifmmode{\hbox{missing-}E_t}\else{$\hbox{missing-}E_t$}\fi{\ifmmode{/\mkern-11mu E_t}\else{${/\mkern-11mu E_t}$}\fi}
\def\ifmmode{\hbox{missing-}E_t}\else{$\hbox{missing-}E_t$}\fi{\ifmmode{\hbox{missing-}E_t}\else{$\hbox{missing-}E_t$}\fi}
\let\etmiss\ifmmode{\hbox{missing-}E_t}\else{$\hbox{missing-}E_t$}\fi
\def \ln\,{ \ln\,}
\def\to{\rightarrow}
\def\tenrm *{\tenrm *}
\def\Twelvebf *{\Twelvebf *}
\def\comma{\ ,\ }
\def\semi{\ ;\ }
\def\period{\ .\ }
\newdimen\Linewidth \Linewidth=0.001in
\newdimen\boxsideindent \boxsideindent=0.5in
\newdimen\halfboxsideindent \halfboxsideindent=0.25in
\newdimen\boxheightindent \boxheightindent=0.05in
\newdimen\figboxwidth \figboxwidth=4.25in
\newdimen\figboxheight \figboxheight=4.25in
\long\def\boxit#1#2#3%
{%
\vbox%
{%
\hrule height #3%
\hbox%
{%
\vrule width #3%
\vbox%
{%
\kern #2%
\hbox%
{%
\kern #2%
\vbox{\hsize=\wd#1\noindent\copy#1}%
\kern #2%
}%
\kern #2%
}%
\vrule width #3%
}%
\hrule height #3%
}%
}
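% Usage sketch (illustrative, not part of the original file): \boxit frames
% the contents of a box register, with inner margin #2 and rule thickness #3,
% as \boxA and \boxplain below do for single characters.  Example
% (commented out so the file still loads cleanly):
%   \setbox0=\vbox{\hsize=2in \noindent Some framed text.}
%   \leavevmode\boxit{0}{3pt}{0.4pt}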
\def\boxA{%
\setbox0=\hbox{A}\boxit{0}{1pt}{.5pt}%
}
\def\boxB{%
\setbox0=\hbox{B}\boxit{0}{1pt}{.5pt}%
}
\def\boxplain{%
\setbox0=\hbox{\phantom{\vrule height .5em width .5em}}\boxit{0}{1pt}{.5pt}%
}
\def\leavevmode\lower 2pt\hbox{\boxA}{\leavevmode\lower 2pt\hbox{\boxA}}
\def\leavevmode\lower 2pt\hbox{\boxB}{\leavevmode\lower 2pt\hbox{\boxB}}
\def\leavevmode\lower 2pt\hbox{\boxplain}{\leavevmode\lower 2pt\hbox{\boxplain}}
\def\Boxit#1#2#3%
{%
\vtop%
{%
\hrule height \Linewidth%
\hbox%
{%
\vrule width \Linewidth%
\vbox%
{%
\kern #3%
\hbox%
{%
\kern #2%
\vbox{\hbox to 0in{\hss\copy#1\hss}}%
\kern #2%
}%
\kern #3%
}%
\vrule width \Linewidth%
}%
\hrule height \Linewidth%
}%
}
\def\figbox#1#2#3%
{\setbox0=\hbox{#1}\dp0=0pt\ht0=0pt\figboxwidth=#2\relax\figboxheight=#3\relax%
\divide\figboxwidth by 2\relax%
\divide\figboxheight by 2\relax%
\Boxit{0}{\figboxwidth}{\figboxheight}
}%
\def\Figbox#1#2#3%
{%
\halfboxsideindent=\boxsideindent\divide\halfboxsideindent by 2\relax%
\hglue\halfboxsideindent%
\setbox0=\hbox{#1}\figboxwidth=#2\relax\figboxheight=#3\relax%
\divide\figboxwidth by 2\relax%
\divide\figboxheight by 2\relax%
\advance\figboxwidth by -\boxsideindent\relax%
\advance\figboxheight by -\boxheightindent\relax%
\Boxit{0}{\figboxwidth}{\figboxheight}}%
\def\figcaption#1#2{%
\bgroup\Tenpoint\par\noindent\narrower FIG.~#1. #2 \smallskip\egroup}
\def\figinsert#1#2{%
\ifdraft{\vrule height #1 depth 0pt width 0.5pt}%
\vbox to 40pt{\hbox to 0pt{\qquad\qquad#2 \hss}\vss}%
\vbox to -40pt{\hbox to 0pt{\qquad\qquad\hss}\vss}%
\else{\vrule height #1 depth 0pt width 0pt}%
\noindent\AIInput{disk$physics00:[deg.loi.physfigs]#2.ps}\fi}
\def\figsize#1#2{%
\ifdraft{\vrule height #1 depth 0pt width 0.5pt}%
\vbox to 40pt{\hbox to 0pt{\qquad#2 \hss}\vss}%
\vbox to -40pt{\hbox to 0pt{\qquad\hss}\vss}%
\else{\vrule height #1 depth 0pt width 0pt}\fi}
\def1.9{1.9}
\ifx\undefined\psfig\else\endinput\fi
\let\LaTeXAtSign=\@
\let\@=\relax
\edef\psfigRestoreAt{\catcode`\@=\number\catcode`@\relax}
\catcode`\@=11\relax
\newwrite\@unused
\def\ps@typeout#1{{\let\protect\string\immediate\write\@unused{#1}}}
\ps@typeout{psfig/tex 1.9}
\def./{./}
\def\psfigurepath#1{\edef./{#1}}
\def\@nnil{\@nil}
\def\@empty{}
\def\@psdonoop#1\@@#2#3{}
\def\@psdo#1:=#2\do#3{\edef\@psdotmp{#2}\ifx\@psdotmp\@empty \else
\expandafter\@psdoloop#2,\@nil,\@nil\@@#1{#3}\fi}
\def\@psdoloop#1,#2,#3\@@#4#5{\def#4{#1}\ifx #4\@nnil \else
#5\def#4{#2}\ifx #4\@nnil \else#5\@ipsdoloop #3\@@#4{#5}\fi\fi}
\def\@ipsdoloop#1,#2\@@#3#4{\def#3{#1}\ifx #3\@nnil
\let\@nextwhile=\@psdonoop \else
#4\relax\let\@nextwhile=\@ipsdoloop\fi\@nextwhile#2\@@#3{#4}}
\def\@tpsdo#1:=#2\do#3{\xdef\@psdotmp{#2}\ifx\@psdotmp\@empty \else
\@tpsdoloop#2\@nil\@nil\@@#1{#3}\fi}
\def\@tpsdoloop#1#2\@@#3#4{\def#3{#1}\ifx #3\@nnil
\let\@nextwhile=\@psdonoop \else
#4\relax\let\@nextwhile=\@tpsdoloop\fi\@nextwhile#2\@@#3{#4}}
\ifx\undefined\fbox
\newdimen\fboxrule
\newdimen\fboxsep
\newdimen\ps@tempdima
\newbox\ps@tempboxa
\fboxsep = 3pt
\fboxrule = .4pt
\long\def\fbox#1{\leavevmode\setbox\ps@tempboxa\hbox{#1}\ps@tempdima\fboxrule
\advance\ps@tempdima \fboxsep \advance\ps@tempdima \dp\ps@tempboxa
\hbox{\lower \ps@tempdima\hbox
{\vbox{\hrule height \fboxrule
\hbox{\vrule width \fboxrule \hskip\fboxsep
\vbox{\vskip\fboxsep \box\ps@tempboxa\vskip\fboxsep}\hskip
\fboxsep\vrule width \fboxrule}
\hrule height \fboxrule}}}}
\fi
\newread\ps@stream
\newif\ifnot@eof
\newif\if@noisy
\newif\if@atend
\newif\if@psfile
{\catcode`\%=12\global\gdef\epsf@start
\def\epsf@PS{PS}
\def\epsf@getbb#1{%
\openin\ps@stream=#1
\ifeof\ps@stream\ps@typeout{Error, File #1 not found}\else
{\not@eoftrue \chardef\other=12
\def\do##1{\catcode`##1=\other}\dospecials \catcode`\ =10
\loop
\if@psfile
\read\ps@stream to \epsf@fileline
\else{
\obeyspaces
\read\ps@stream to \epsf@tmp\global\let\epsf@fileline\epsf@tmp}
\fi
\ifeof\ps@stream\not@eoffalse\else
\if@psfile\else
\expandafter\epsf@test\epsf@fileline:. \\%
\fi
\expandafter\epsf@aux\epsf@fileline:. \\%
\fi
\ifnot@eof\repeat
}\closein\ps@stream\fi}%
\long\def\epsf@test#1#2#3:#4\\{\def\epsf@testit{#1#2}
\ifx\epsf@testit\epsf@start\else
\ps@typeout{Warning! File does not start with `\epsf@start'. It may not be a PostScript file.}
\fi
\@psfiletrue}
{\catcode`\%=12\global\let\epsf@percent
\long\def\epsf@aux#1#2:#3\\{\ifx#1\epsf@percent
\def\epsf@testit{#2}\ifx\epsf@testit\epsf@bblit
\@atendfalse
\epsf@atend #3 . \\%
\if@atend
\if@verbose{
\ps@typeout{psfig: found `(atend)'; continuing search}
}\fi
\else
\epsf@grab #3 . . . \\%
\not@eoffalse
\global\no@bbfalse
\fi
\fi\fi}%
\def\epsf@grab #1 #2 #3 #4 #5\\{%
\global\def\epsf@llx{#1}\ifx\epsf@llx\empty
\epsf@grab #2 #3 #4 #5 .\\\else
\global\def\epsf@lly{#2}%
\global\def\epsf@urx{#3}\global\def\epsf@ury{#4}\fi}%
\def\epsf@atendlit{(atend)}
\def\epsf@atend #1 #2 #3\\{%
\def\epsf@tmp{#1}\ifx\epsf@tmp\empty
\epsf@atend #2 #3 .\\\else
\ifx\epsf@tmp\epsf@atendlit\@atendtrue\fi\fi}
\chardef\psletter = 11
\chardef\other = 12
\newif \ifdebug
\newif\ifc@mpute
\c@mputetrue
\let\then = \relax
\def\r@dian{pt }
\let\r@dians = \r@dian
\let\dimensionless@nit = \r@dian
\let\dimensionless@nits = \dimensionless@nit
\def\internal@nit{sp }
\let\internal@nits = \internal@nit
\newif\ifstillc@nverging
\def \Mess@ge #1{\ifdebug \then \message {#1} \fi}
{
\catcode `\@ = \psletter
\gdef \nodimen {\expandafter \n@dimen \the \dimen}
\gdef \term #1 #2 #3%
{\edef \t@ {\the #1}%
\edef \t@@ {\expandafter \n@dimen \the #2\r@dian}%
\t@rm {\t@} {\t@@} {#3}%
}
\gdef \t@rm #1 #2 #3%
{{%
\count 0 = 0
\dimen 0 = 1 \dimensionless@nit
\dimen 2 = #2\relax
\Mess@ge {Calculating term #1 of \nodimen 2}%
\loop
\ifnum \count 0 < #1
\then \advance \count 0 by 1
\Mess@ge {Iteration \the \count 0 \space}%
\Multiply \dimen 0 by {\dimen 2}%
\Mess@ge {After multiplication, term = \nodimen 0}%
\Divide \dimen 0 by {\count 0}%
\Mess@ge {After division, term = \nodimen 0}%
\repeat
\Mess@ge {Final value for term #1 of
\nodimen 2 \space is \nodimen 0}%
\xdef \Term {#3 = \nodimen 0 \r@dians}%
\aftergroup \Term
}}
\catcode `\p = \other
\catcode `\t = \other
\gdef \n@dimen #1pt{#1}
}
\def \Divide #1by #2{\divide #1 by #2}
\def \Multiply #1by #2%
{{%
\count 0 = #1\relax
\count 2 = #2\relax
\count 4 = 65536
\Mess@ge {Before scaling, count 0 = \the \count 0 \space and
count 2 = \the \count 2}%
\ifnum \count 0 > 32767
\then \divide \count 0 by 4
\divide \count 4 by 4
\else \ifnum \count 0 < -32767
\then \divide \count 0 by 4
\divide \count 4 by 4
\else
\fi
\fi
\ifnum \count 2 > 32767
\then \divide \count 2 by 4
\divide \count 4 by 4
\else \ifnum \count 2 < -32767
\then \divide \count 2 by 4
\divide \count 4 by 4
\else
\fi
\fi
\multiply \count 0 by \count 2
\divide \count 0 by \count 4
\xdef \product {#1 = \the \count 0 \internal@nits}%
\aftergroup \product
}}
\def\r@duce{\ifdim\dimen0 > 90\r@dian \then
\multiply\dimen0 by -1
\advance\dimen0 by 180\r@dian
\r@duce
\else \ifdim\dimen0 < -90\r@dian \then
\advance\dimen0 by 360\r@dian
\r@duce
\fi
\fi}
\def\Sine#1%
{{%
\dimen 0 = #1 \r@dian
\r@duce
\ifdim\dimen0 = -90\r@dian \then
\dimen4 = -1\r@dian
\c@mputefalse
\fi
\ifdim\dimen0 = 90\r@dian \then
\dimen4 = 1\r@dian
\c@mputefalse
\fi
\ifdim\dimen0 = 0\r@dian \then
\dimen4 = 0\r@dian
\c@mputefalse
\fi
\ifc@mpute \then
\divide\dimen0 by 180
\dimen0=3.141592654\dimen0
\dimen 2 = 3.1415926535897963\r@dian
\divide\dimen 2 by 2
\Mess@ge {Sin: calculating Sin of \nodimen 0}%
\count 0 = 1
\dimen 2 = 1 \r@dian
\dimen 4 = 0 \r@dian
\loop
\ifnum \dimen 2 = 0
\then \stillc@nvergingfalse
\else \stillc@nvergingtrue
\fi
\ifstillc@nverging
\then \term {\count 0} {\dimen 0} {\dimen 2}%
\advance \count 0 by 2
\count 2 = \count 0
\divide \count 2 by 2
\ifodd \count 2
\then \advance \dimen 4 by \dimen 2
\else \advance \dimen 4 by -\dimen 2
\fi
\repeat
\fi
\xdef \sine {\nodimen 4}%
}}
\def\Cosine#1{\ifx\sine\UnDefined\edef\Savesine{\relax}\else
\edef\Savesine{\sine}\fi
{\dimen0=#1\r@dian\advance\dimen0 by 90\r@dian
\Sine{\nodimen 0}
\xdef\cosine{\sine}
\xdef\sine{\Savesine}}}
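% Illustrative usage: \Sine{30} leaves sin(30 degrees), evaluated by the
% Maclaurin series above, in the macro \sine; \Cosine{30} sets \cosine
% via the identity cos(x) = sin(x + 90 degrees) while preserving \sine.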
\def\psdraft{
\def\@psdraft{0}
}
\def\psfull{
\def\@psdraft{100}
}
\psfull
\newif\if@scalefirst
\def\psscalefirst{\@scalefirsttrue}
\def\psnoscalefirst{\@scalefirstfalse}
\@scalefirstfalse
\newif\if@draftbox
\def\psnodraftbox{
\@draftboxfalse
}
\def\psdraftbox{
\@draftboxtrue
}
\@draftboxtrue
\newif\if@prologfile
\newif\if@postlogfile
\newif\if@noisy
\def\pssilent{
\@noisyfalse
}
\def\psnoisy{
\@noisytrue
}
\psnoisy
\newif\if@bbllx
\newif\if@bblly
\newif\if@bburx
\newif\if@bbury
\newif\if@height
\newif\if@width
\newif\if@rheight
\newif\if@rwidth
\newif\if@angle
\newif\if@clip
\newif\if@verbose
\def\@p@@sclip#1{\@cliptrue}
\def\@p@@sfigure#1{\def\@p@sfile{null}\def\@p@sbbfile{null}
\openin1=#1.bb
\ifeof1\closein1
\openin1=./#1.bb
\ifeof1\closein1
\openin1=#1
\ifeof1\closein1%
\openin1=./#1
\ifeof1
\ps@typeout{Error, File #1 not found}
\if@bbllx\if@bblly
\if@bburx\if@bbury
\def\@p@sfile{#1}%
\def\@p@sbbfile{#1}%
\fi\fi\fi\fi
\else\closein1
\def\@p@sfile{./#1}%
\def\@p@sbbfile{./#1}%
\fi%
\else\closein1%
\def\@p@sfile{#1}
\def\@p@sbbfile{#1}
\fi
\else
\def\@p@sfile{./#1}
\def\@p@sbbfile{./#1.bb}
\fi
\else
\def\@p@sfile{#1}
\def\@p@sbbfile{#1.bb}
\fi}
\def\@p@@sfile#1{\@p@@sfigure{#1}}
\def\@p@@sbbllx#1{
\@bbllxtrue
\dimen100=#1
\edef\@p@sbbllx{\number\dimen100}
}
\def\@p@@sbblly#1{
\@bbllytrue
\dimen100=#1
\edef\@p@sbblly{\number\dimen100}
}
\def\@p@@sbburx#1{
\@bburxtrue
\dimen100=#1
\edef\@p@sbburx{\number\dimen100}
}
\def\@p@@sbbury#1{
\@bburytrue
\dimen100=#1
\edef\@p@sbbury{\number\dimen100}
}
\def\@p@@sheight#1{
\@heighttrue
\dimen100=#1
\edef\@p@sheight{\number\dimen100}
}
\def\@p@@swidth#1{
\@widthtrue
\dimen100=#1
\edef\@p@swidth{\number\dimen100}
}
\def\@p@@srheight#1{
\@rheighttrue
\dimen100=#1
\edef\@p@srheight{\number\dimen100}
}
\def\@p@@srwidth#1{
\@rwidthtrue
\dimen100=#1
\edef\@p@srwidth{\number\dimen100}
}
\def\@p@@sangle#1{
\@angletrue
\edef\@p@sangle{#1}
}
\def\@p@@ssilent#1{
\@verbosefalse
}
\def\@p@@sprolog#1{\@prologfiletrue\def\@prologfileval{#1}}
\def\@p@@spostlog#1{\@postlogfiletrue\def\@postlogfileval{#1}}
\def\@cs@name#1{\csname #1\endcsname}
\def\@setparms#1=#2,{\@cs@name{@p@@s#1}{#2}}
\def\ps@init@parms{
\@bbllxfalse \@bbllyfalse
\@bburxfalse \@bburyfalse
\@heightfalse \@widthfalse
\@rheightfalse \@rwidthfalse
\def\@p@sbbllx{}\def\@p@sbblly{}
\def\@p@sbburx{}\def\@p@sbbury{}
\def\@p@sheight{}\def\@p@swidth{}
\def\@p@srheight{}\def\@p@srwidth{}
\def\@p@sangle{0}
\def\@p@sfile{} \def\@p@sbbfile{}
\def\@p@scost{10}
\def\@sc{}
\@prologfilefalse
\@postlogfilefalse
\@clipfalse
\if@noisy
\@verbosetrue
\else
\@verbosefalse
\fi
}
\def\parse@ps@parms#1{
\@psdo\@psfiga:=#1\do
{\expandafter\@setparms\@psfiga,}}
\newif\ifno@bb
\def\bb@missing{
\if@verbose{
\ps@typeout{psfig: searching \@p@sbbfile \space for bounding box}
}\fi
\no@bbtrue
\epsf@getbb{\@p@sbbfile}
\ifno@bb \else \bb@cull\epsf@llx\epsf@lly\epsf@urx\epsf@ury\fi
}
\def\bb@cull#1#2#3#4{
\dimen100=#1 bp\edef\@p@sbbllx{\number\dimen100}
\dimen100=#2 bp\edef\@p@sbblly{\number\dimen100}
\dimen100=#3 bp\edef\@p@sbburx{\number\dimen100}
\dimen100=#4 bp\edef\@p@sbbury{\number\dimen100}
\no@bbfalse
}
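% Illustrative note: \bb@cull stores each bounding-box coordinate as an
% integer number of scaled points, converting from PostScript big points
% (1bp is approximately 65782sp) via the \dimen100 = ... bp assignments.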
\newdimen\p@intvaluex
\newdimen\p@intvaluey
\def\rotate@#1#2{{\dimen0=#1 sp\dimen1=#2 sp
\global\p@intvaluex=\cosine\dimen0
\dimen3=\sine\dimen1
\global\advance\p@intvaluex by -\dimen3
\global\p@intvaluey=\sine\dimen0
\dimen3=\cosine\dimen1
\global\advance\p@intvaluey by \dimen3
}}
\def\compute@bb{
\no@bbfalse
\if@bbllx \else \no@bbtrue \fi
\if@bblly \else \no@bbtrue \fi
\if@bburx \else \no@bbtrue \fi
\if@bbury \else \no@bbtrue \fi
\ifno@bb \bb@missing \fi
\ifno@bb \ps@typeout{FATAL ERROR: no bb supplied or found}
\no-bb-error
\fi
%
\count203=\@p@sbburx
\count204=\@p@sbbury
\advance\count203 by -\@p@sbbllx
\advance\count204 by -\@p@sbblly
\edef\ps@bbw{\number\count203}
\edef\ps@bbh{\number\count204}
\if@angle
\Sine{\@p@sangle}\Cosine{\@p@sangle}
{\dimen100=\maxdimen\xdef\r@p@sbbllx{\number\dimen100}
\xdef\r@p@sbblly{\number\dimen100}
\xdef\r@p@sbburx{-\number\dimen100}
\xdef\r@p@sbbury{-\number\dimen100}}
\def\minmaxtest{
\ifnum\number\p@intvaluex<\r@p@sbbllx
\xdef\r@p@sbbllx{\number\p@intvaluex}\fi
\ifnum\number\p@intvaluex>\r@p@sbburx
\xdef\r@p@sbburx{\number\p@intvaluex}\fi
\ifnum\number\p@intvaluey<\r@p@sbblly
\xdef\r@p@sbblly{\number\p@intvaluey}\fi
\ifnum\number\p@intvaluey>\r@p@sbbury
\xdef\r@p@sbbury{\number\p@intvaluey}\fi
}
\rotate@{\@p@sbbllx}{\@p@sbblly}
\minmaxtest
\rotate@{\@p@sbbllx}{\@p@sbbury}
\minmaxtest
\rotate@{\@p@sbburx}{\@p@sbblly}
\minmaxtest
\rotate@{\@p@sbburx}{\@p@sbbury}
\minmaxtest
\edef\@p@sbbllx{\r@p@sbbllx}\edef\@p@sbblly{\r@p@sbblly}
\edef\@p@sbburx{\r@p@sbburx}\edef\@p@sbbury{\r@p@sbbury}
\fi
\count203=\@p@sbburx
\count204=\@p@sbbury
\advance\count203 by -\@p@sbbllx
\advance\count204 by -\@p@sbblly
\edef\@bbw{\number\count203}
\edef\@bbh{\number\count204}
}
\def\in@hundreds#1#2#3{\count240=#2 \count241=#3
\count100=\count240
\divide\count100 by \count241
\count101=\count100
\multiply\count101 by \count241
\advance\count240 by -\count101
\multiply\count240 by 10
\count101=\count240
\divide\count101 by \count241
\count102=\count101
\multiply\count102 by \count241
\advance\count240 by -\count102
\multiply\count240 by 10
\count102=\count240
\divide\count102 by \count241
\count200=#1\count205=0
\count201=\count200
\multiply\count201 by \count100
\advance\count205 by \count201
\count201=\count200
\divide\count201 by 10
\multiply\count201 by \count101
\advance\count205 by \count201
%
\count201=\count200
\divide\count201 by 100
\multiply\count201 by \count102
\advance\count205 by \count201
%
\edef\@result{\number\count205}
}
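% Illustrative note: \in@hundreds{a}{b}{c} leaves a*(b/c) in \@result,
% carrying the quotient b/c to three decimal digits (integer part in
% \count100, tenths in \count101, hundredths in \count102); it is used
% below to scale a width or height by the bounding-box aspect ratio.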
\def\compute@wfromh{
\in@hundreds{\@p@sheight}{\@bbw}{\@bbh}
\edef\@p@swidth{\@result}
}
\def\compute@hfromw{
\in@hundreds{\@p@swidth}{\@bbh}{\@bbw}
\edef\@p@sheight{\@result}
}
\def\compute@handw{
\if@height
\if@width
\else
\compute@wfromh
\fi
\else
\if@width
\compute@hfromw
\else
\edef\@p@sheight{\@bbh}
\edef\@p@swidth{\@bbw}
\fi
\fi
}
\def\compute@resv{
\if@rheight \else \edef\@p@srheight{\@p@sheight} \fi
\if@rwidth \else \edef\@p@srwidth{\@p@swidth} \fi
}
\def\compute@sizes{
\compute@bb
\if@scalefirst\if@angle
\if@width
\in@hundreds{\@p@swidth}{\@bbw}{\ps@bbw}
\edef\@p@swidth{\@result}
\fi
\if@height
\in@hundreds{\@p@sheight}{\@bbh}{\ps@bbh}
\edef\@p@sheight{\@result}
\fi
\fi\fi
\compute@handw
\compute@resv}
\def\psfig#1{\vbox {
%
\ps@init@parms
\parse@ps@parms{#1}
\compute@sizes
%
\ifnum\@p@scost<\@psdraft{
%
\special{ps::[begin] \@p@swidth \space \@p@sheight \space
\@p@sbbllx \space \@p@sbblly \space
\@p@sbburx \space \@p@sbbury \space
startTexFig \space }
\if@angle
\special {ps:: \@p@sangle \space rotate \space}
\fi
\if@clip{
\if@verbose{
\ps@typeout{(clip)}
}\fi
\special{ps:: doclip \space }
}\fi
\if@prologfile
\special{ps: plotfile \@prologfileval \space } \fi
\if@verbose{
\ps@typeout{psfig: including \@p@sfile \space }
}\fi
\special{ps: plotfile \@p@sfile \space }
\if@postlogfile
\special{ps: plotfile \@postlogfileval \space } \fi
\special{ps::[end] endTexFig \space }
\vbox to \@p@srheight sp{
\hbox to \@p@srwidth sp{
\hss
}
\vss
}
}\else{
\if@draftbox{
\hbox{\frame{\vbox to \@p@srheight sp{
\vss
\hbox to \@p@srwidth sp{ \hss \@p@sfile \hss }
\vss
}}}
}\else{
\vbox to \@p@srheight sp{
\vss
\hbox to \@p@srwidth sp{\hss}
\vss
}
}\fi
}\fi
}}
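% Illustrative call (keys dispatched through \parse@ps@parms above;
% "file.ps" is a placeholder name):
%   \psfig{figure=file.ps,height=2in,clip=t}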
\psfigRestoreAt
\let\@=\LaTeXAtSign
\def\ColliderTableInsert#1%
{{%
\parindent = 0pt \leftskip = 0pt \rightskip = 0pt%
\vskip .4in%
\nobreak%
\vskip -.5in%
\leavevmode%
\centerline{\psfig{figure=#1,clip=t}}%
\nobreak%
\vglue .1in%
\nobreak%
\vskip -.3in%
\nobreak%
}}
\global\def\FigureInsert#1#2%
{{%
\def\CompareStrings##1##2%
{%
TT\fi%
\edef\StringOne{##1}%
\edef\StringTwo{##2}%
\ifx\StringOne\StringTwo%
}%
\parindent = 0pt \leftskip = 0pt \rightskip = 0pt%
\vskip .4in%
\leavevmode%
\if\CompareStrings{#2}{left}%
\leftline{\psfig{figure=figures/#1,clip=t}}%
\else\if\CompareStrings{#2}{center}%
\centerline{\psfig{figure=figures/#1,clip=t}}%
\else\if\CompareStrings{#2}{right}%
\rightline{\psfig{figure=figures/#1,clip=t}}%
\fi\fi\fi%
\nobreak%
\vglue .1in%
\nobreak%
}}
\global\def\FigureInsertScaled#1#2#3%
{{%
\def\CompareStrings##1##2%
{%
TT\fi%
\edef\StringOne{##1}%
\edef\StringTwo{##2}%
\ifx\StringOne\StringTwo%
}%
\parindent = 0pt \leftskip = 0pt \rightskip = 0pt%
\vskip .4in%
\leavevmode%
\if\CompareStrings{#2}{left}%
\leftline{\psfig{figure=figures/#1,height=#3,clip=t}}%
\else\if\CompareStrings{#2}{center}%
\centerline{\psfig{figure=figures/#1,height=#3,clip=t}}%
\else\if\CompareStrings{#2}{right}%
\rightline{\psfig{figure=figures/#1,height=#3,clip=t}}%
\fi\fi\fi%
\nobreak%
\vglue .1in%
\nobreak%
}}
\def\insertpsfigure#1#2#3#4{
\hbox to \hsize
{
\vbox to #1
{
\hsize = 0in
\special
{
insert rpp$figures:#2
top=#3,left=#4
}
\vss
}
\hss
}
}
\def\insertpsfiguremag#1#2#3#4#5{
\hbox to \hsize
{
\vbox to #1
{
\hsize = 0in
\special
{
insert rpp$figures:#2
top=#3,left=#4
magnification=#5%
}
\vss
}
\hss
}
}
\newdimen\beforefigureheight
\newdimen\afterfigureheight
\beforefigureheight=-.5in
\afterfigureheight=-.3in
\def\RPPfigure#1#2#3{
\vskip \beforefigureheight
\FigureInsert{#1}{#2}
\vskip \afterfigureheight
\FigureCaption{#3}
\WWWfigure{#1}
}
\def\RPPfigurescaled#1#2#3#4{
\vskip \beforefigureheight
\FigureInsertScaled{#1}{#2}{#3}
\vskip \afterfigureheight
\FigureCaption{#4}
\WWWfigure{#1}
}
\def\RPPtextfigure#1#2#3{
\vskip \beforefigureheight
\FigureInsert{#1}{#2}
\vskip \afterfigureheight
\FigureCaption{#3}
\WWWtextfigure{#1}
}
\newcount\Firstpage
\newif\ifpageindexopen \newread\pageindexread \newwrite\pageindexwrite
\ifx\WHATEVERIWANT\undefined
\else
\def{}
\fi
\newcount\lastpage \lastpage=0\relax
\def\Page#1{%
\write\pageindexwrite{\string\xdef\string#1{\the\pageno}}%
}
\newtoks\FigureCaptiontok
\global\def\FigureCaption#1{
\Caption
#1
\endCaption
\global\FigureCaptiontok={#1}%
}
\newtoks\ABlanktok
\ABlanktok={ }
\newif\ifwwwfigureopen \newread\wwwfigureread \newwrite\wwwfigurewrite
\ifx\WHATEVERIWANT\undefined
\else
\def{}
\fi
\global\def\WWWfigure#1{%
\immediate\write\wwwfigurewrite{%
\string\figurename{\string#1}%
}
\immediate\write\wwwfigurewrite{%
\string\figurenumber{\the\chapternum.\the\fignum}%
}
\immediate\write\wwwfigurewrite{%
\string\figurecaption{\the\FigureCaptiontok}%
}
\immediate\write\wwwfigurewrite{%
\the\ABlanktok
}
}%
\global\def\WWWhead#1{%
\immediate\write\wwwfigurewrite{%
#1%
}%
}
\newtoks\widthofcolumntoks
\widthofcolumntoks={\widthofcolumn=}
\global\def\WWWwidthofcolumn#1{%
\immediate\write\wwwfigurewrite{%
\the\widthofcolumntoks#1%
}%
}
\global\def\WWWtextfigure#1{%
\immediate\write\wwwfigurewrite{%
\string\figurename{\string#1}%
}
\immediate\write\wwwfigurewrite{%
\string\figurenumber{\the\fignum}%
}
\immediate\write\wwwfigurewrite{%
\string\figurecaption{\the\FigureCaptiontok}%
}
\immediate\write\wwwfigurewrite{%
\the\ABlanktok
}
}%
\def { }
\def\IndexEntry#1%
{%
\write\pageindexwrite%
{%
\string\expandafter%
\string\def\string\csname \noexpand#1%
\string\endcsname\expandafter{\the\pageno}%
}%
}
\def\lastpagenumber{%
\write\pageindexwrite{\string\def\string\lastpage{\the\pageno}}%
}
\def\bumpuppagenumber{%
\pageno=\lastpage \advance\pageno by 1 \Firstpage=\pageno}
\def\pageno=\lastpage \Firstpage=\pageno{\pageno=\lastpage \Firstpage=\pageno}
\let\indexpage=\IndexEntry
\def\relax{\ifodd\pageno\hoffset=.8in\else\hoffset=.3in\fi}%
\def\relax{\ifodd\pageno\hoffset=0in\else\hoffset=0in\fi}%
\def\relax{\ifodd\pageno\hoffset=.1in\else\hoffset=.1in\fi}%
\def\relax{\ifodd\pageno\hoffset=.08in\else\hoffset=.08in\fi}%
\def\relax{\ifodd\pageno\hoffset=.12in\else\hoffset=.12in\fi}%
\def\relax{\relax}
\newdimen\Fullpagewidth \Fullpagewidth=8.75in
\newdimen\Halfpagewidth \Halfpagewidth=4.25in
\newdimen\fullhsize
\newcount\columnbreak
\newdimen\VerticalFudge
\VerticalFudge =-.32in
\fullhsize=\Fullpagewidth \hsize=\Halfpagewidth
\def\fullline{\hbox to\fullhsize}
\let\knuthmakeheadline=\makeheadline
\let\knuthmakefootline=\makefootline
\def\dbmakeheadline{\vbox to 0pt{\vskip-22.5pt
\line{\vbox to10pt{}\the\headline}\vss}\nointerlineskip}
\def\dbmakefootline{\baselineskip=24pt \line{\the\footline}}
\let\lr=L \newbox\leftcolumn
\def\ScalingPostScript#1{\special{ps: #1 #1 scale}}
\def\dbonecolumn{\hsize=4.25in%
\output={%
\relax%
\shipout\vbox{%
\parindent = 0pt
\leftskip = 0pt
\nointerlineskip
\ScalingPostScript{1.0}
\nointerlineskip
\makeheadline
\pagebody
\baselineskip=24pt \hbox to\fullhsize{\the\footline}
\vglue \VerticalFudge
\nointerlineskip\SetOverPageBox{}\copy\OverPageBox
\advancepageno}
\ifnum\outputpenalty>-20000 \else\dosupereject\fi
}
}
\def\onecolumn{\hsize=8.75in%
\output={%
\relax%
\shipout\vbox{%
\parindent = 0pt
\leftskip = 0pt
\nointerlineskip
\ScalingPostScript{1.0}
\nointerlineskip
\makeheadline
\pagebody
\baselineskip=24pt \hbox to\fullhsize{\the\footline}
\vglue \VerticalFudge
\nointerlineskip\SetOverPageBox{}\copy\OverPageBox
\advancepageno}
\ifnum\outputpenalty>-20000 \else\dosupereject\fi
}
}
\def\twocol{\output={%
\if L\lr
\global\setbox\leftcolumn=\leftline{\pagebody} \global\let\lr=R
\else \doubleformat \global\let\lr=L\fi
\ifnum\outputpenalty>-20000 \else\dosupereject\fi}}
\def\doubleformat{\shipout\vbox{%
\parindent = 0pt
\leftskip = 0pt
\nointerlineskip
\ScalingPostScript{1.0}
\nointerlineskip
\makeheadline%
\hbox to\fullhsize{\box\leftcolumn\hfil\leftline{\pagebody}}
\baselineskip=24pt \hbox to\fullhsize{\the\footline}%
\vglue \VerticalFudge
\nointerlineskip\SetOverPageBox{}\copy\OverPageBox
\advancepageno}}
\def\columnbox{\leftline{\pagebody}}
\def\makeheadline{\vbox to 0pt{\vskip-22.5pt
\hbox to\fullhsize{\vbox to10pt{}\the\headline}\vss}\nointerlineskip}
\def\makefootline{\baselineskip=24pt \hbox to\fullhsize{\the\footline}}
\columnbreak=0
\ifnum\columnbreak=1
\def\CB{\vfill\eject}\fi
\ifnum\columnbreak=0
\def\CB{\relax}\fi
\def{\parfillskip=0pt\par}\vfill\eject\noindent\ignorespaces{%
{\parfillskip=0pt\par}\vfill\eject\noindent\ignorespaces}
\def\vfill\eject\ignorespaces{\vfill\eject\ignorespaces}
\gdef\breakrefitem{\hangafter=0\hangindent=\refindent}
\def\midline{\vskip .25in \noindent
\setbox1=\hbox to 8.75in{%
\hss\vrule width 5.6in height 1.9pt\hss}\wd1=0pt\box1
\vskip .25in \bigskip\bigskip \noindent
\setbox2=\hbox to 8.75in{\hss QUARK MODEL SECTION GOES HERE\hss}\wd2=0pt\box2}
\def\@refitem#1{%
\paroreject \hangafter=0 \hangindent=\refindent \Textindent{#1.}}
\def\refitem#1{%
\paroreject \hangafter=0 \hangindent=\refindent \Textindent{#1.}}
\def\smallersubfont{%
\textfont0=\eightrm \scriptfont0=\sevenrm \scriptscriptfont0=\sevenrm
\textfont1=\eighti \scriptfont1=\seveni \scriptscriptfont1=\seveni
\textfont2=\eightsy \scriptfont2=\sevensy \scriptscriptfont2=\sevensy
\textfont3=\eightex \scriptfont3=\sevenex \scriptscriptfont3=\sevenex}
\def\biggersubfont{%
%
\textfont0=\tenrm \scriptfont0=\eightrm \scriptscriptfont0=\sevenrm
\textfont1=\teni \scriptfont1=\eighti \scriptscriptfont1=\seveni
\textfont2=\tensy \scriptfont2=\eightsy \scriptscriptfont2=\sevensy
\textfont3=\tenex \scriptfont3=\eightex \scriptscriptfont3=\sevenex}
\raggedright
\newskip\doublecolskip
\global\doublecolskip=3.333333pt plus3.333333pt minus1.00006pt
\global\spaceskip=\doublecolskip
\parindent=12pt
\tenpoint\singlespace
\def\ninepointvspace{
\normalbaselineskip=9pt
\setbox\strutbox=\hbox{\vrule height7pt depth2pt width0pt}%
\normalbaselines}
\newdimen\strutskip
\def\strut {\vrule height 0.7\strutskip
depth 0.3\strutskip
width 0pt}%
\def\setstrut {%
\strutskip = \baselineskip
}
\setstrut
\def\Folio{\ifnum\pageno<0 \romannumeral-\pageno
\else \number\pageno \fi }
\def\RPPonly#1{\beginRPPonly #1\endRPPonly}
\def\DBonly#1{\beginDBonly #1\endDBonly}
\def\nocropmarks{%
\footline={\hss\sevenrm\today\quad\TimeOfDay\hss}
}
\def\blackbox{\overfullrule=5pt}
\def\noblackbox{\overfullrule=0pt}
\blackbox
\def\penalty-100\relax{\penalty-100\relax}
\def\fn#1{{}^{#1}}
\def\CompareStrings#1#2%
{%
TT\fi%
\edef\StringOne{#1}%
\edef\StringTwo{#2}%
\ifx\StringOne\StringTwo%
}%
\defParticle Physics Booklet{Physical Review D}
\defParticle Physics Booklet{RPP}
\defone{one}
\BigBookOrDataBooklet = 1
\WhichSection=7
\if\CompareStrings{Particle Physics Booklet}{RPP}
\def4.25in{8.75in}
\else\if\CompareStrings{Particle Physics Booklet}{Particle Physics Booklet}
\def4.25in{4.25in}
\fi\fi
\if\CompareStrings{one}{two}
\def60{85}
\fi
\if\CompareStrings{one}{one}
\def60{original}
\fi
\if\CompareStrings{Particle Physics Booklet}{Particle Physics Booklet}
\def60{60}
\fi
\defNo{No}
\defNo{No}
\defConsecutive{Consecutive}
\newdimen\StartImageHsize
\newdimen\StartImageVsize
\newdimen\StartStockHsize
\newdimen\StartStockVsize
\newdimen\FinalImageHsize
\newdimen\FinalImageVsize
\newdimen\FinalStockHsize
\newdimen\FinalStockVsize
\StartImageHsize = 4.25in
\if\CompareStrings{Particle Physics Booklet}{Physical Review D}
\FinalImageHsize = 7.05in
\FinalImageVsize = 10.05in
\FinalStockHsize = 8.25in
\FinalStockVsize = 11.25in
\if\CompareStrings{4.25in}{9.60in}
\if\CompareStrings{60}{85}
\def1.0{.863970588}
\else\if\CompareStrings{60}{letter}
\def1.0{.734375000}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{.911458333}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 13.54255319in
\StartStockHsize = 11.11702128in
\StartStockVsize = 15.15957448in
\StartImageVsize = 13.68510638in
\StartStockHsize = 11.23404255in
\StartStockVsize = 15.31914894in
\else\if\CompareStrings{4.25in}{8.75in}
\if\CompareStrings{60}{85}
\def1.0{.947899160}
\else\if\CompareStrings{60}{letter}
\def1.0{.805714286}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{1.0}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 12.47340425in
\StartStockHsize = 10.2393617in
\StartStockVsize = 13.96276595in
\fi\fi
\else\if\CompareStrings{Particle Physics Booklet}{Physics Letters B}
\FinalImageHsize = 6.60in
\FinalImageVsize = 9.50in
\FinalStockHsize = 7.50in
\FinalStockVsize = 10.30in
\if\CompareStrings{4.25in}{9.60in}
\if\CompareStrings{60}{85}
\def1.0{.808823529}
\else\if\CompareStrings{60}{letter}
\def1.0{.687500000}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{.911458333}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 13.8181818in
\StartStockHsize = 10.9090909in
\StartStockVsize = 14.9818181in
\else\if\CompareStrings{4.25in}{8.75in}
\if\CompareStrings{60}{85}
\def1.0{.887394958}
\else\if\CompareStrings{60}{letter}
\def1.0{.754285714}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{1.0}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 12.594696in
\StartStockHsize = 9.943181in
\StartStockVsize = 13.655303in
\fi\fi
\else\if\CompareStrings{Particle Physics Booklet}{Particle Physics Booklet}
\FinalImageHsize = 2.60in
\FinalImageVsize = 4.70in
\FinalStockHsize = 3.00in
\FinalStockVsize = 5.00in
\if\CompareStrings{4.25in}{4.50in}
\if\CompareStrings{60}{60}
\def1.0{.962962962}
\else\if\CompareStrings{60}{letter}
\def1.0{.633333333333}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{1.27}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 8.134615393in
\StartStockHsize = 5.192307698in
\StartStockVsize = 8.653846154in
\else\if\CompareStrings{4.25in}{4.25in}
\if\CompareStrings{60}{60}
\def1.0{1.019607843}
\else\if\CompareStrings{60}{letter}
\def1.0{.726495726}
\else\if\CompareStrings{60}{WWW-odd}
\def1.0{1.344705882}
\else\if\CompareStrings{60}{original}
\def1.0{1.0}
\fi\fi\fi\fi
\StartImageVsize = 7.682692308in
\StartStockHsize = 4.903846154in
\StartStockVsize = 8.173076923in
\fi\fi
\fi\fi\fi
\vsize = \StartImageVsize
\newdimen \NeededHsize
\newdimen \NeededVsize
\newdimen \CropMarkAddition
\if\CompareStrings{No}{Yes}
\CropMarkAddition = 2in
\NeededHsize = \StartStockHsize
\NeededVsize = \StartStockVsize
\advance \NeededHsize by \CropMarkAddition
\advance \NeededVsize by \CropMarkAddition
\NeededHsize = 1.0\NeededHsize
\NeededVsize = 1.0\NeededVsize
\else
\NeededHsize = 1.0\StartImageHsize
\NeededVsize = 1.0\StartImageVsize
\fi
\newdimen \PaperSizeWidth
\newdimen \PaperSizeHeight
\defletter{ledger}
\PaperSizeWidth = 11in
\PaperSizeHeight = 17in
\ifdim \NeededHsize < 8.50in
\ifdim \NeededVsize < 11.00in
\defletter{letter}
\PaperSizeWidth = 8.5in
\PaperSizeHeight = 11.0in
\fi\fi
\if\CompareStrings{No}{Yes}
\hoffset = .625in
\dimen1 = \PaperSizeWidth
\advance \dimen1 by -\NeededHsize
\divide \dimen1 by 2
\advance \hoffset by \dimen1
\voffset = .625in
\dimen1 = \PaperSizeHeight
\advance \dimen1 by -\NeededVsize
\divide \dimen1 by 2
\advance \voffset by \dimen1
\else
\hoffset = -.5in
\voffset = -.5in
\fi
\newcount\BleederPointer
\BleederPointer=7
\newbox\OverPageBox
\def\SetOverPageBox#1%
{%
\setbox\OverPageBox = \vbox%
{{%
\if\CompareStrings{No}{Yes}%
\BleederTab%
{\BleederPointer}%
{10}%
{\StartImageHsize}%
{\StartImageVsize}%
{0.2in}%
\fi%
\nointerlineskip%
\if\CompareStrings{No}{Yes}%
{%
\if\CompareStrings{60}{100}%
\def\temp{}%
\else%
\def\temp{Particle Physics Booklet}%
\fi%
\CropMarks%
{\temp}%
{60}%
{\StartStockHsize}%
{\StartStockVsize}%
{\StartImageHsize}%
{\StartImageVsize}%
}%
\fi%
}}%
\dimen0 = \ht\OverPageBox%
\advance\dimen0 by \dp\OverPageBox%
\ht\OverPageBox = \dimen0%
\dp\OverPageBox = 0pt%
}
\def\anp#1,#2(#3){{\rm Adv.\ Nucl.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\aip#1,#2(#3){{\rm Am.\ Inst.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\aj#1,#2(#3){{\rm Astrophys.\ J.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ajs#1,#2(#3){{\rm Astrophys.\ J.\ Supp.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ajl#1,#2(#3){{\rm Astrophys.\ J.\ Lett.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ajp#1,#2(#3){{\rm Am.\ J.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\apny#1,#2(#3){{\rm Ann.\ Phys.\ (NY)\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\apnyB#1,#2(#3){{\rm Ann.\ Phys.\ (NY)\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\apD#1,#2(#3){{\rm Ann.\ Phys.\ }{\bf D#1}, {\rm#2} {\rm(#3)}}
\def\ap#1,#2(#3){{\rm Ann.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ass#1,#2(#3){{\rm Ap.\ Space Sci.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\astropp#1,#2(#3)%
{{\rm Astropart.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\aap#1,#2(#3)%
{{\rm Astron.\ \& Astrophys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\araa#1,#2(#3)%
{{\rm Ann.\ Rev.\ Astron.\ Astrophys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\arnps#1,#2(#3)%
{{\rm Ann.\ Rev.\ Nucl.\ and Part.\ Sci.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\arns#1,#2(#3)%
{{\rm Ann.\ Rev.\ Nucl.\ Sci.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cqg#1,#2(#3){{\rm Class.\ Quantum Grav.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cpc#1,#2(#3){{\rm Comp.\ Phys.\ Comm.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cjp#1,#2(#3){{\rm Can.\ J.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cmp#1,#2(#3){{\rm Commun.\ Math.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cnpp#1,#2(#3)%
{{\rm Comm.\ Nucl.\ Part.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\cnppA#1,#2(#3)%
{{\rm Comm.\ Nucl.\ Part.\ Phys.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\el#1,#2(#3){{\rm Europhys.\ Lett.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\epjC#1,#2(#3){{\rm Eur.\ Phys.\ J.\ }{\bf C#1}, {\rm#2} {\rm(#3)}}
\def\grg#1,#2(#3){{\rm Gen.\ Rel.\ Grav.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\hpa#1,#2(#3){{\rm Helv.\ Phys.\ Acta }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ieeetNS#1,#2(#3)%
{{\rm IEEE Trans.\ }{\bf NS#1}, {\rm#2} {\rm(#3)}}
\def\IEEE #1,#2(#3)%
{{\rm IEEE }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ijar#1,#2(#3)%
{{\rm Int.\ J.\ of Applied Rad.\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\ijari#1,#2(#3)%
{{\rm Int.\ J.\ of Applied Rad.\ and Isotopes\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\jcp#1,#2(#3){{\rm J.\ Chem.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jmo#1,#2(#3){{\rm J.\ Mod.\ Opt.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jgr#1,#2(#3){{\rm J.\ Geophys.\ Res.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jetp#1,#2(#3){{\rm Sov.\ Phys.\ JETP\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jetpl#1,#2(#3)%
{{\rm Sov.\ Phys.\ JETP Lett.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jpA#1,#2(#3){{\rm J.\ Phys.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\jpG#1,#2(#3){{\rm J.\ Phys.\ }{\bf G#1}, {\rm#2} {\rm(#3)}}
\def\jpamg#1,#2(#3)%
{{\rm J.\ Phys.\ A: Math.\ and Gen.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\jpcrd#1,#2(#3)%
{{\rm J.\ Phys.\ Chem.\ Ref.\ Data\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\jpsj#1,#2(#3){{\rm J.\ Phys.\ Soc.\ Jpn.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\lnc#1,#2(#3){{\rm Lett.\ Nuovo Cimento\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\nature#1,#2(#3){{\rm Nature} {\bf #1}, {\rm#2} {\rm(#3)}}
\def\nc#1,#2(#3){{\rm Nuovo Cimento} {\bf #1}, {\rm#2} {\rm(#3)}}
\def\nim#1,#2(#3)%
{{\rm Nucl.\ Instrum.\ Methods\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\nimA#1,#2(#3)%
{{\rm Nucl.\ Instrum.\ Methods\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\nimB#1,#2(#3)%
{{\rm Nucl.\ Instrum.\ Methods\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\np#1,#2(#3){{\rm Nucl.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\npps#1,#2(#3){{\rm Nucl.\ Phys.\ (Proc.\ Supp.) }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\mnras#1,#2(#3){{\rm MNRAS\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\medp#1,#2(#3){{\rm Med.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\mplA#1,#2(#3){{\rm Mod.\ Phys.\ Lett.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\npA#1,#2(#3){{\rm Nucl.\ Phys.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\npB#1,#2(#3){{\rm Nucl.\ Phys.\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\npBps#1,#2(#3){{\rm Nucl.\ Phys.\ (Proc.\ Supp.) }{\bf B#1},
{\rm#2} {\rm(#3)}}
\def\pasp#1,#2(#3){{\rm Pub.\ Astron.\ Soc.\ Pac.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\pl#1,#2(#3){{\rm Phys.\ Lett.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\pra#1,#2(#3){{\rm Phys.\ Rev.\ A }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\fp#1,#2(#3){{\rm Fortsch.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ijmpA#1,#2(#3)%
{{\rm Int.\ J.\ Mod.\ Phys.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\ijmpE#1,#2(#3)%
{{\rm Int.\ J.\ Mod.\ Phys.\ }{\bf E#1}, {\rm#2} {\rm(#3)}}
\def\plA#1,#2(#3){{\rm Phys.\ Lett.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\plB#1,#2(#3){{\rm Phys.\ Lett.\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\pnasus#1,#2(#3)%
{{\it Proc.\ Natl.\ Acad.\ Sci.\ \rm (US)}{B#1}, {\rm#2} {\rm(#3)}}
\def\ppsA#1,#2(#3){{\rm Proc.\ Phys.\ Soc.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\ppsB#1,#2(#3){{\rm Proc.\ Phys.\ Soc.\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\pr#1,#2(#3){{\rm Phys.\ Rev.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\prA#1,#2(#3){{\rm Phys.\ Rev.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\prB#1,#2(#3){{\rm Phys.\ Rev.\ }{\bf B#1}, {\rm#2} {\rm(#3)}}
\def\prC#1,#2(#3){{\rm Phys.\ Rev.\ }{\bf C#1}, {\rm#2} {\rm(#3)}}
\def\prD#1,#2(#3){{\rm Phys.\ Rev.\ }{\bf D#1}, {\rm#2} {\rm(#3)}}
\def\prept#1,#2(#3){{\rm Phys.\ Reports\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\prslA#1,#2(#3)%
{{\rm Proc.\ Royal Soc.\ London }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\prl#1,#2(#3){{\rm Phys.\ Rev.\ Lett.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ps#1,#2(#3){{\rm Phys.\ Scripta\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ptp#1,#2(#3){{\rm Prog.\ Theor.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ppnp#1,#2(#3)%
{{\rm Prog.\ in Part.\ Nucl.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\ptps#1,#2(#3)%
{{\rm Prog.\ Theor.\ Phys.\ Supp.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\pw#1,#2(#3){{\rm Part.\ World\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\pzetf#1,#2(#3)%
{{\rm Pisma Zh.\ Eksp.\ Teor.\ Fiz.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\rgss#1,#2(#3){{\rm Revs.\ Geophysics \& Space Sci.\ }{\bf #1},
{\rm#2} {\rm(#3)}}
\def\rmp#1,#2(#3){{\rm Rev.\ Mod.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\rnc#1,#2(#3){{\rm Riv.\ Nuovo Cimento\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\rpp#1,#2(#3)%
{{\rm Rept.\ on Prog.\ in Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\science#1,#2(#3){{\rm Science\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\sjnp#1,#2(#3)%
{{\rm Sov.\ J.\ Nucl.\ Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\panp#1,#2(#3)%
{{\rm Phys.\ Atom.\ Nucl.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\spu#1,#2(#3){{\rm Sov.\ Phys.\ Usp.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\surveyHEP#1,#2(#3)%
{{\rm Surv.\ High Energy Physics\ } {\bf #1}, {\rm#2} {\rm(#3)}}
\def\yf#1,#2(#3){{\rm Yad.\ Fiz.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\zetf#1,#2(#3)%
{{\rm Zh.\ Eksp.\ Teor.\ Fiz.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\zp#1,#2(#3){{\rm Z.~Phys.\ }{\bf #1}, {\rm#2} {\rm(#3)}}
\def\zpA#1,#2(#3){{\rm Z.~Phys.\ }{\bf A#1}, {\rm#2} {\rm(#3)}}
\def\zpC#1,#2(#3){{\rm Z.~Phys.\ }{\bf C#1}, {\rm#2} {\rm(#3)}}
\def\ExpTechHEP{{\it Experimental Techniques in High Energy
Physics}\rm, T.~Ferbel (ed.) (Addison-Wesley, Menlo Park, CA, 1987)}
\def\MethExpPhys#1#2{{\it Methods
of Experimental Physics}\rm, L.C.L.~Yuan and
C.-S.~Wu, editors, Academic Press, 1961, Vol.~#1, p.~#2}
\def\MethTheorPhys#1{{\it Methods of Theoretical Physics}, McGraw-Hill,
New York, 1953, p.~#1}
\def\xsecReacHEP{%
{\it Total Cross Sections for Reactions of High Energy Particles},
Landolt-B\"ornstein, New Series Vol.~{\bf I/12~a} and {\bf I/12~b},
ed.~H.~Schopper (1988)}
\def\xsecReacHEPgray{%
{\it Total Cross Sections for Reactions of High Energy Particles},
Landolt-B\"ornstein, New Series Vol.~{\bf I/12~a} and {\bf I/12~b},
ed.~H.~Schopper (1988).
Gray curve shows Regge fit from \Tbl{hadronic96}
}
\def\xsecHadronicCaption{%
\noindent%
Hadronic total and elastic cross sections vs. laboratory beam momentum
and total center-of-mass energy.
Data courtesy A.~Baldini,
V.~Flaminio, W.G.~Moorhead, and D.R.O.~Morrison, CERN;
and COMPAS Group, IHEP, Serpukhov, Russia.
See \xsecReacHEP.\par}
\def\xsecHadronicCaptiongray{%
\noindent%
Hadronic total and elastic cross sections vs. laboratory beam momentum
and total center-of-mass energy.
Data courtesy A.~Baldini,
V.~Flaminio, W.G.~Moorhead, and D.R.O.~Morrison, CERN;
and COMPAS Group, IHEP, Serpukhov, Russia.
See \xsecReacHEPgray.
\par}
\def\LeptonPhotonseventyseven#1{%
{\it Proceedings of the 1977 International
Symposium on Lepton and Photon Interactions at High Energies}
(DESY, Hamburg, 1977), p.~#1}
\def\LeptonPhotoneightyseven#1{%
{\it Proceedings of the 1987 International Symposium on
Lepton and Photon Interactions at High Energies}, Hamburg,
July 27--31, 1987, edited by
W.~Bartel and R.~R\"uckl (North Holland, Amsterdam, 1988), p.~#1}
\def\HighSensitivityBeauty#1{%
{\it Proceedings of the Workshop on High Sensitivity Beauty Physics
at Fermilab}, Fermilab, November 11--14, 1987, edited by A.J.~Slaughter,
N.~Lockyer, and M.~Schmidt (Fermilab, Batavia, IL, 1988), p.~#1}
\def\ScotlandHEP#1#2{%
{\it Proceedings of the XXVII International
Conference on High-Energy Physics},
Glasgow, Scotland, July 20--27, 1994, edited by P.J. Bussey
and I.G. Knowles
(Institute of Physics, Bristol, 1995), Vol.~#1, p.~#2}
\def\SDCCalorimetryeightynine#1{%
{\it Proceedings of the Workshop on Calorimetry for the
Supercollider},
Tuscaloosa, AL, March 13--17, 1989, edited by R.~Donaldson and
M.G.D.~Gilchriese (World Scientific, Teaneck, NJ, 1989), p.~#1}
\def\Snowmasseightyeight#1{%
{\it Proceedings of the 1988 Summer Study on High Energy
Physics in the 1990's},
Snowmass, CO, June 27 -- July 15, 1990, edited by F.J.~Gilman and S.~Jensen,
(World Scientific, Teaneck, NJ, 1989) p.~#1}
\def\Snowmasseightyeightnopage{%
{\it Proceedings of the 1988 Summer Study on High Energy
Physics in the 1990's},
Snowmass, CO, June 27 -- July 15, 1990, edited by F.J.~Gilman and S.~Jensen,
(World Scientific, Teaneck, NJ, 1989)}
\def\Ringbergeightynine#1{%
{\it Proceedings of the Workshop on Electroweak Radiative Corrections
for $e^+ e^-$ Collisions},
Ringberg, Germany, April 3--7, 1989, edited by J.H.~Kuhn,
(Springer-Verlag, Berlin, Germany, 1989) p.~#1}
\def\EurophysicsHEPeightyseven#1{%
{\it Proceedings of the International Europhysics Conference on
High Energy Physics},
Uppsala, Sweden, June 25 -- July 1, 1987, edited by O.~Botner,
(European Physical Society, Petit-Lancy, Switzerland, 1987) p.~#1}
\def\WarsawEPPeightyseven#1{%
{\it Proceedings of the 10$\,^{th}$ Warsaw Symposium on Elementary Particle
Physics},
Kazimierz, Poland, May 25--30, 1987, edited by Z.~Ajduk,
(Warsaw Univ., Warsaw, Poland, 1987) p.~#1}
\def\BerkeleyHEPeightysix#1{%
{\it Proceedings of the 23$^{rd}$ International Conference on
High Energy Physics},
Berkeley, CA, July 16--23, 1986, edited by S.C.~Loken,
(World Scientific, Singapore, 1987) p.~#1}
\def\ExpAreaseightyseven#1{%
{\it Proceedings of the Workshop on Experiments, Detectors, and Experimental
Areas for the Supercollider},
Berkeley, CA, July 7--17, 1987,
edited by R.~Donaldson and
M.G.D.~Gilchriese (World Scientific, Singapore, 1988), p.~#1}
\def\CosmicRayseventyone#1#2{%
{\it Proceedings of the International Conference on Cosmic
Rays},
Hobart, Australia, August 16--25, 1971,
Vol.~{\bf#1}, p.~#2}
\lefteqnsidedimen=22pt
\lefteqnside=\lefteqnsidedimen
\newdimen\Textpagelength \Textpagelength=11.6in
\newdimen\Textplusheadpagelength \Textplusheadpagelength=12.0in
\Fullpagewidth=8.75in
\Halfpagewidth=4.25in
\newbox\indexGreek
\newbox\indexOmit
\newbox\wwwfootcitation
\newbox\indexfootline
\def\IsThisTheFirstpage{\ifnum\pageno=\Firstpage%
\global\advance\vsize by .3in
\else\relax\fi
}
\sectionskip=\bigskipamount
\ifnum\WhichSection=7\relax
\gdef\runningdate{\bgroup\sevenrm\today\quad\TimeOfDay\egroup}
\else
\gdef\runningdate{\relax}
\fi
\advance\voffset by .8in
\ifnum\WhichSection=7\relax
{\newlinechar=`\|%
\def\obeyspaces{\catcode`\ =\active}%
{\obeyspaces\global\let =\space}
\obeyspaces%
\message{2 8 1/2 by 11 paper (DRAFT MODE)|}
\message{HI THERE -- THIS is 7}}
\footline{\IsThisTheFirstpage}
\hsize=4.25in\vsize=7.3in
\advance\vsize by 1.5in\advance\hoffset by .7in
\advance\voffset by -.4in
\let\twocol\relax
\let\makeheadline=\dbmakeheadline
\let\makefootline=\dbmakefootline
\def\centerline{\copy\HEADFIRST}\vskip .1in{\centerline{\copy\HEADFIRST}\vskip .1in}
\headline={\ifodd\pageno\hfil\copy\RUNHEADhbox\quad\elevenssbf \Folio%
\else\elevenssbf\Folio\quad\copy\RUNHEADhbox\hfill\fi}
\footline={\hfill\runningdate\hfill}
\setbox\wwwfootcitation=\vtop {%
\vglue .1in%
\hbox to 6in{%
\hss{\sevenrm CITATION: D.E. Groom {\sevenit et al.},
European Physical Journal {\sevenbf C15}, 1 (2000)}\hss}
\vglue .005in%
\hbox to 6in{%
\hss{\sevenrm
available on
the PDG WWW pages (URL: {\ninett http://pdg.lbl.gov/})
\qquad\runningdate}\hss}
\vss%
\vss}%
\gdef\firstfoot{\centerline{\hss\copy\wwwfootcitation\hss}}
\gdef\restoffoot{\centerline{\hss\runningdate\hss}}
\footline={\restoffoot}
\Linewidth=.00003pt
\Linewidth=0pt
\parskip=\smallskipamount
\sectionminspace=1.2in
\tenpoint
\BleederPointer=7
\else
\fi
\sectionskip=\bigskipamount
\ifnum\WhichSection=1\relax
{\newlinechar=`\|%
\def\obeyspaces{\catcode`\ =\active}%
{\obeyspaces\global\let =\space}
\obeyspaces%
\message{1 11x17 paper|}
\message{HI THERE -- THIS is 1}}
\footline{\IsThisTheFirstpage}
\fullhsize=\Fullpagewidth\hsize=\Halfpagewidth
\vsize=\Textpagelength
\Linewidth=.00003pt
\Linewidth=0pt
\parskip=\smallskipamount
\sectionminspace=1.2in
\tenpoint
\BleederPointer=7
\else
\fi
\ifnum\WhichSection=2\relax
\VerticalFudge =-.32in
\VerticalFudge =-.23in
\hsize=4.25in\vsize=7.3in
\let\boldhead=\boldheaddb
\dbonecolumn
\let\makeheadline=\dbmakeheadline
\let\makefootline=\dbmakefootline
\def\centerline{\copy\HEADFIRST}\vskip .1in{\centerline{\copy\HEADFIRST}\vskip .1in}
\headline={\ifodd\pageno\hfil\copy\RUNHEADhbox\quad\elevenssbf\Folio%
\else\elevenssbf\Folio\quad\copy\RUNHEADhbox\hfill\fi}
\footline{}
\tenpoint
\Linewidth=.00003pt
\Linewidth=0pt
\parskip=1pt plus 1pt
\sectionminspace=1in
\global\sectionskip=\smallskipamount
\tenpoint
\abovedisplayskip=\medskipamount
\belowdisplayskip=\medskipamount
\def\tenpoint\it{\tenpoint\it}
\advance\voffset by .8in
\else
\fi
\superrefsfalse
\refReset\relax
\global\eqnum=0\relax
\refindent=20pt
\newdimen\reftopglue
\reftopglue=0pt
\ATunlock
\newif\ifEjectHere \EjectHerefalse
\ifEjectHere
\message{HERE I AM}
\fi
\def\@GetRefText#1#2{%
\ifnum\CiteType=4\else
\ifnullname
\p@nctwrite{; }%
\@refwrite{\@comment ... Reference text for
"#1" defined on page \number\pageno.}%
\@refwrite{\@refbreak}%
\else
\ifnum\refnum>1\p@nctwrite{. }\fi
\ifEjectHere
\@refwrite{\string\vfill\string\eject\string\vglue\string\reftopglue}%
\EjectHerefalse
\fi
\@refwrite{\@comment}%
\@refwrite{\@comment (\the\refnum) Reference text for
"#1" defined on page \number\pageno.}%
\@refwrite{\string\@refitem{\the\refnum}{#1}}%
\fi
\fi
\begingroup
\def\endreference{\NX\endreference}%
\def\reference{\NX\reference}\def\ref{\NX\ref}%
\seeCR\newlinechar=`\^^M
\@copyref#2}
\ATlock
\newread\inputauxin
\def\inputaux#1{%
\openin\inputauxin=#1
\ifeof\inputauxin\closein\inputauxin
\else\closein\inputauxin
\begingroup
\def\@tag##1##2{\endgroup
\edef\@@temp{##2}%
\testtag{##1}\XA\xdef\csname\tok\endcsname{\@@temp}}%
\unSpecial\ATunlock
\input #1 \relax
\endgroup
\fi}
\def\trippleheading#1#2#3{\chapter{#1}\begingroup\unSpecial\@label{Chap.\jobname}%
\centerline{\boldhead\hfill\the\chapternum.~#1\hfill}\vskip .1in%
\centerline{\boldhead\hfill #2\hfill}\vskip .1in%
\centerline{\boldhead\hfill #3\hfill}\vskip .2in}
\def{}
\def{}
\def{}
\def{}
\def\Phi{\Phi}
\begingroup\unSpecial\@newlabel{Chap.collidersrpp}{{26}{1}}
\begingroup\unSpecial\@newlabel{Chap.bigbangnucrpp}{{20}{220}}
\begingroup\unSpecial\@newlabel{Chap.hubblerpp}{{21}{224}}
\begingroup\unSpecial\@newlabel{Chap.microwaverpp}{{23}{238}}
\begingroup\unSpecial\@newlabel{Chap.bigbangrpp}{{19}{1}}
\begingroup\unSpecial\@newlabel{Sec.MuonEnergy}{{27.6}{25}}
\begingroup\unSpecial\@newlabel{Chap.strucfunrpp}{{16}{1}}
\begingroup\unSpecial\@newlabel{Chap.stanmodelrpp}{{10}{1}}
\begingroup\unSpecial\@newlabel{Chap.qcdrpp}{{9}{1}}
\begingroup\unSpecial\@newlabel{Tb.AverageHadronMult}{{40.1}{3}}
\begingroup\unSpecial\@newlabel{Chap.crosssecrpp}{{39}{1}}
\begingroup\unSpecial\@newlabel{Eq.cumul }{{31.6}{2}}
\begingroup\unSpecial\@newlabel{Tb.probone}{{31.1}{10}}
\begingroup\unSpecial\@newlabel{Sec.Normal}{{31.4.3}{6}}
\begingroup\unSpecial\@newlabel{Chap.su3rpp}{{36}{1}}
\begingroup\unSpecial\@newlabel{Sec.epluseminusannihilation}{{39.2}{1}}
\begingroup\unSpecial\@newlabel{Chap.fragrpp}{{17}{1}}
\def{}
\def{}
\def{}
\def{}
\def{}
\def{}
\section{Introduction} \label{section:introduction}
Over the past two decades, discontinuous Galerkin (DG) finite element methods have emerged as an effective and popular choice for the numerical solution of a wide range of partial differential equations. This is mainly stimulated by their high degree of locality, their extreme flexibility with respect to $hp$-adaptive mesh refinement, and their natural ability to accommodate high-order discretizations for hyperbolic problems in a locally conservative manner without excessive numerical stabilization. As it stands, there exists a vast amount of literature on the \emph{a~priori} error analysis of DG methods for linear problems; we refer to the recent book of Di~Pietro \& Ern~\cite{PietroErn2011} for a comprehensive overview of the most prominent results. For nonlinear problems, however, there are still relatively few results available; we mention the works of Houston \emph{et al.}~\cite{HoustonEtAl2005}, Ortner \& S\"{u}li~\cite{OrtnerSuli2007}, Gudi \& Pani~\cite{GudiPani2007}, Gudi \emph{et al.}~\cite{GudiEtAl2008a, GudiEtAl2008b}, Dolej\v{s}\'{\i} \emph{et al.}~\cite{DolejsiEtAl2005}, Dolej\v{s}\'{\i}~\cite{Dolejsi2008}, Bustinza \& Gatica~\cite{BustinzaGatica2004}, and Bi \& Lin~\cite{BiLin2012}. It is fair to say that the extension of DG methods from linear to nonlinear problems is non-obvious in many cases, particularly with respect to the proper formulation of the element boundary terms, and that the analysis turns out to be more challenging.
In this article, we present and analyze a family of interior penalty DG methods for the numerical solution of the following class of quasilinear elliptic boundary value problems. Let~$\Omega$ be an open bounded domain in~$\mathbb{R}^d$, $d \geq 2$, with Lipschitz boundary $\partial \Omega = \Gamma_\rmD \cup \Gamma_\rmN$, where $\Gamma_\rmD \neq \emptyset$ and $\Gamma_\rmN = \partial \Omega \setminus \Gamma_\rmD$. Denoting by~$\bn \colon \Gamma_\rmN \to \mathbb{R}^d$ the unit outward normal to~$\Gamma_\rmN$, our model problem of interest is stated as follows: find $u \colon \closure{\Omega} \to \mathbb{R}$ such that
\begin{subequations} \label{eq:model_problem}
\begin{alignat}{2}
- \nabla \cdot \left( \bA(\bx, \nabla{u}) \nabla{u} \right) = \ & f & \quad & \text{in} \ \Omega , \label{eq:model_problem_pde}
\\
u = \ & g_\rmD & \qquad & \text{on} \ \Gamma_\rmD , \label{eq:model_problem_bcD}
\\
\bA(\bx, \nabla{u}) \nabla{u} \cdot \bn = \ & g_\rmN & \quad & \text{on} \ \Gamma_\rmN , \label{eq:model_problem_bcN}
\end{alignat}
\end{subequations}
where $\bA \in [L^\infty(\closure{\Omega} \times \mathbb{R}^d)]^{d, d}$, $f \in L^2(\Omega)$, $g_\rmD \in H^{1/2}(\Gamma_\rmD)$ and $g_\rmN \in L^2(\Gamma_\rmN)$. In what follows, we assume that, for~$\bx \in \closure{\Omega}$ and~$\bv \in \mathbb{R}^d$, the nonlinear map $\bv \mapsto \bA(\bx, \bv) \bv$ is \emph{Lipschitz continuous} and \emph{strongly monotone}, as phrased by the following statement.
\begin{assumption} \label{asm:A}
There exist constants $C_\bA \geq M_\bA > 0$ such that, for all~$\bx \in \closure{\Omega}$ and all~$\bv_1, \bv_2 \in \mathbb{R}^d$,
\begin{gather}
\abs{ \bA(\bx, \bv_1) \bv_1 - \bA(\bx, \bv_2) \bv_2} \leq C_\bA \, \abs{\bv_1 - \bv_2} , \label{eq:asm_A1}
\\[3pt]
( \bA(\bx, \bv_1) \bv_1 - \bA(\bx, \bv_2) \bv_2 ) \cdot (\bv_1 - \bv_2) \geq M_\bA \, \abs{\bv_1 - \bv_2}^2 . \label{eq:asm_A2}
\end{gather}
\end{assumption}
Subject to the above assumption, one can show that problem~\eqref{eq:model_problem} admits a unique weak solution~$u \in H^1(\Omega)$. In passing, we note that problems of the type~\eqref{eq:model_problem} satisfying Assumption~\ref{asm:A} arise in several applications. A classic example is mean curvature flow, for which $\bA(\bx, \nabla{u}) = (1 + \abs{\nabla{u}}^2 )^{-1/2} \, \mathbf{I}$ with~$\mathbf{I}$ the $d \times d$~identity matrix; this has applications in image processing and interface modeling in two-fluid flows, among others. Another example is the modeling of non-Newtonian fluids. For the sake of notational simplicity, we henceforth suppress the dependence of $\bA(\bx, \bv)$ on~$\bx$ and simply write $\bA(\bv)$ instead.
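As an illustrative aside (our addition, not part of the paper), Assumption~\ref{asm:A} can be checked numerically for the mean curvature example: the Jacobian of $\bv \mapsto (1 + \abs{\bv}^2)^{-1/2} \bv$ has eigenvalues between $(1 + \abs{\bv}^2)^{-3/2}$ and $(1 + \abs{\bv}^2)^{-1/2}$, so the assumption holds on any ball $\abs{\bv} \leq R$ with $C_\bA = 1$ and $M_\bA = (1 + R^2)^{-3/2}$. The following Python sketch (all names ours) samples random gradient pairs and verifies both inequalities:

```python
import math, random

def F(v):
    """Mean-curvature map F(v) = (1 + |v|^2)^(-1/2) v, i.e. A(v) v."""
    s = 1.0 / math.sqrt(1.0 + sum(c * c for c in v))
    return [s * c for c in v]

def check_assumption_A(R=2.0, trials=2000, seed=1):
    """Check |F(v1) - F(v2)| <= C_A |v1 - v2| with C_A = 1, and
    (F(v1) - F(v2)) . (v1 - v2) >= M_A |v1 - v2|^2 with
    M_A = (1 + R^2)^(-3/2), for gradients in the ball |v| <= R (d = 2)."""
    rng = random.Random(seed)
    C_A, M_A = 1.0, (1.0 + R * R) ** -1.5
    box = R / math.sqrt(2.0)          # [-box, box]^2 lies inside |v| <= R
    for _ in range(trials):
        v1 = [rng.uniform(-box, box) for _ in range(2)]
        v2 = [rng.uniform(-box, box) for _ in range(2)]
        d = [a - b for a, b in zip(v1, v2)]
        dF = [a - b for a, b in zip(F(v1), F(v2))]
        nd2 = sum(c * c for c in d)
        ndF = math.sqrt(sum(c * c for c in dF))
        assert ndF <= C_A * math.sqrt(nd2) + 1e-12          # Lipschitz
        assert sum(a * b for a, b in zip(dF, d)) >= M_A * nd2 - 1e-12  # monotone
    return True
```

Note that the constants degenerate as $R \to \infty$: globally, the mean curvature map is monotone but not strongly monotone.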
The development of DG methods for problems of the type~\eqref{eq:model_problem} has also been pursued by several other researchers. In \cite{BustinzaGatica2004}, an $h$-version local DG method is developed and analyzed exhibiting optimal error estimates in the broken $H^1(\Omega)$-norm and $L^2(\Omega)$-norm. The development and analysis of $hp$-version interior penalty DG methods is initiated by Houston \emph{et al.}~\cite{HoustonEtAl2005}. Quasi-optimal error estimates are presented for the error in the broken $H^1(\Omega)$-norm, which are optimal in the mesh size $h$ and mildly suboptimal in the polynomial degree $p$, by half an order in $p$. Estimates for the error in the $L^2(\Omega)$-norm are not presented, but numerical experiments reveal the convergence in the $L^2(\Omega)$-norm to be suboptimal. This suboptimality is caused by so-called dual inconsistency of the method due to a particular formulation of the element boundary terms. Difficulties with respect to the proper formulation of the element boundary terms have motivated other researchers to consider the development of \emph{incomplete} interior penalty DG methods; cf. \cite{OrtnerSuli2007, Dolejsi2008, BiLin2012}. In \cite{GudiEtAl2008b}, a family of interior penalty DG methods is presented and analyzed with a particular choice of the element boundary terms, for which quasi-optimal $hp$-error estimates are derived in both the broken $H^1(\Omega)$-norm and $L^2(\Omega)$-norm.
The purpose of this article is to present and analyze a new family of interior penalty $hp$-DG methods for the numerical solution of~\eqref{eq:model_problem} with quasi-optimal $hp$-error estimates in both the broken $H^1(\Omega)$-norm and $L^2(\Omega)$-norm. As in \cite{HoustonEtAl2005} and \cite{GudiEtAl2008b}, our family of methods depends on the parameter $\theta \in [-1, 1]$. In the linear setting of $\bA(\cdot) = \mathbf{I}$ with $\mathbf{I}$ the $d \times d$ identity matrix and for particular choices of $\theta$, the proposed DG formulation reduces to various well-known interior penalty methods; notable examples include the symmetric and nonsymmetric interior penalty methods of, respectively, Arnold~\cite{Arnold1982} and Rivi\`{e}re \emph{et al.}~\cite{RiviereEtAl1999}. Subject to Assumption~\ref{asm:A}, we prove that the proposed DG formulation is well-posed provided the discontinuity penalization parameter is chosen sufficiently large. Moreover, \emph{a priori} error estimates are presented for the error in the broken $H^1(\Omega)$-norm, displaying precisely the same $h$-optimal and $p$-suboptimal convergence rates as obtained for the interior penalty approximation of linear elliptic problems; cf. \cite{HoustonEtAl2002}. \emph{A priori} estimates for linear functionals of the error and the error in the $L^2(\Omega)$-norm are also derived and shown to be $h$-optimal when $\theta = -1$. The analysis is completed under fairly weak conditions on the $hp$-finite element space allowing for non-affine and curved elements with multilevel hanging nodes and non-uniform polynomial degree.
The remainder of this article is organized as follows. Section~\ref{section:preliminaries} establishes notation, definitions and some auxiliary results. In Section~\ref{section:dgfem}, we introduce the interior penalty $hp$-DG approximation of \eqref{eq:model_problem} and prove several fundamental properties including a well-posedness result. Section~\ref{section:error_analysis} is concerned with the error analysis. Finally, in Section~\ref{section:numerical_experiments} some numerical experiments are presented to illustrate the theoretical results. The appendix is devoted to some auxiliary results regarding the well-posedness of nonlinear variational problems.
\section{Preliminaries} \label{section:preliminaries}
For~$h > 0$, let~$\mT_h$ be a subdivision of~$\Omega$ into disjoint open element domains~$K$ such that $\closure{\Omega} = \cup_{K \in \mT_h} \closure{K}$. Here, $h = \max_{K \in \mT_h} h_K$, where $h_K = \diam(K)$. Each~$K \in \mT_h$ is the image of a fixed reference domain~$\hat{K}$ under a bijective mapping~$T_K \colon \hat{K} \to K$ (that is, $K = T_K(\hat{K})$ for all~$K \in \mT_h$), where~$\hat{K}$ is either the open unit simplex or the open unit hypercube in~$\mathbb{R}^d$. For~$K \in \mT_h$, we denote by~$\bn_K$ the unit outward normal with respect to~$\partial K$. Furthermore, for any pair of neighboring elements~$K, K' \in \mT_h$, we refer to the nonempty $(d-1)$-dimensional interior of~$\partial K \cap \partial K'$ as an interior face of~$\mT_h$. Likewise, for any~$K \in \mT_h$, a boundary face lying on~$\Gamma_\rmD$ (resp.~$\Gamma_\rmN$) is the nonempty $(d-1)$-dimensional interior of~$\partial K \cap \Gamma_\rmD$ (resp.~$\partial K \cap \Gamma_\rmN$). The interior faces and the boundary faces lying on~$\Gamma_\rmD$ and~$\Gamma_\rmN$ are collected in the sets~$\mF_{h, 0}$, $\mF_{h, \rmD}$ and~$\mF_{h, \rmN}$, respectively, and we define $\mF_h := \mF_{h, 0} \cup \mF_{h, \rmD} \cup \mF_{h, \rmN}$. In addition, we let $\mF_{h, 0, \rmD} := \mF_{h, 0} \cup \mF_{h, \rmD}$, and, for each~$K \in \mT_h$, we denote by~$\mF_{h, K}$ the set of faces lying on~$\partial K$; i.e., $\mF_{h, K} := \{ F \in \mF_h \, : \, F \subset \partial K \}$. The union of all interior faces is denoted by~$\Gamma_{h, 0}$ (i.e., $\Gamma_{h, 0} := \cup_{F \in \mF_{h, 0}} F$), and analogously we let~$\Gamma_{h, \rmD}$ and~$\Gamma_{h, \rmN}$ represent the union of faces lying on~$\Gamma_\rmD$ and~$\Gamma_\rmN$. We also define $\Gamma_{h, 0, \rmD} := \Gamma_{h, 0} \cup \Gamma_{h, \rmD}$.
To characterize functions on~$\mT_h$ that are possibly discontinuous across inter-element boundaries, we introduce the broken Sobolev space
\begin{equation*}
H^s(\Omega, \mT_h) := \{ v \in L^2(\Omega) : \left. v \right|_K \in H^s(K) , \ \forall K \in \mT_h \} ,
\end{equation*}
where $0 < s \leq \infty$. Here, $H^s(K)$ denotes the standard Sobolev-Slobodeckij space of order~$s$ for the domain~$K \in \mT_h$. The space~$H^s(\Omega, \mT_h)$ is equipped with the broken norm and semi-norm
\begin{equation*}
\norm{v}_{H^s(\Omega, \mT_h)} := \left( \sum_{K \in \mT_h} \norm{v}_{H^s(K)}^2 \right)^{1/2} ,
\quad
\seminorm{v}_{H^s(\Omega, \mT_h)} := \left( \sum_{K \in \mT_h} \seminorm{v}_{H^s(K)}^2 \right)^{1/2} ,
\end{equation*}
where $\norm{\cdot}_{H^s(K)}$ and~$\seminorm{\cdot}_{H^s(K)}$ denote the standard Sobolev-Slobodeckij norm and semi-norm, respectively.
Next, we define jump and average operators for scalar- and vector-valued functions. Let~$K, K' \in \mT_h$ be two adjacent element domains sharing an interior face~$F \in \mathcal{F}_{h, 0}$. Given a scalar-valued function~$v \in H^1(\Omega, \mT_h)$, we define the jump and average of~$v$ at~$F$ by
\begin{equation*}
\left. \jump{v} \right|_F := \left. v \right|_{K} \bn_{K} + \left. v \right|_{K'} \bn_{K'} ,
\qquad
\left. \avg{v} \right|_F := (\left. v \right|_{K} + \left. v \right|_{K'}) / 2 .
\end{equation*}
Analogously, for a vector-valued function~$\bq \in [H^1(\Omega, \mT_h)]^d$, we set
\begin{equation*}
\left. \jump{\bq} \right|_F := \left. \bq \right|_{K} \cdot \bn_{K} + \left. \bq \right|_{K'} \cdot \bn_{K'} ,
\qquad
\left. \avg{\bq} \right|_F := (\left. \bq \right|_{K} + \left. \bq \right|_{K'}) / 2 .
\end{equation*}
If~$F \in \mF_{h, \rmD}$ or~$F \in \mF_{h, \rmN}$, we moreover define $\left. \jump{v} \right|_F := \left. v \right|_K \bn_K$, $\left. \avg{v} \right|_F := \left. v \right|_K$ and~$\left. \avg{\bq} \right|_F := \left. \bq \right|_K$, where~$K \in \mT_h$ such that~$F \subset \partial K$; the quantity~$\left. \jump{\bq} \right|_F$ is not required for~$F \in \mF_{h, \rmD} \cup \mF_{h, \rmN}$ and is thus left undefined.
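For orientation (our addition, in the one-dimensional setting where $\bn_K = +1$ for the left element and $\bn_{K'} = -1$), the jump and average operators satisfy the elementary splitting identity $v_K q_K - v_{K'} q_{K'} = \avg{q} \jump{v} + \jump{q} \avg{v}$ at an interior face, a relation routinely used when recasting element boundary terms in DG analysis. A quick numerical check:

```python
def jump(aK, aKp):
    """1-D jump across an interior face: n_K = +1 (left), n_K' = -1 (right)."""
    return aK - aKp

def avg(aK, aKp):
    """Average of the two traces at the face."""
    return 0.5 * (aK + aKp)

def face_identity_residual(vK, vKp, qK, qKp):
    """Residual of  vK*qK*nK + vKp*qKp*nK'  =  {q}[v] + [q]{v}  (should be 0)."""
    lhs = vK * qK * (+1.0) + vKp * qKp * (-1.0)
    rhs = avg(qK, qKp) * jump(vK, vKp) + jump(qK, qKp) * avg(vK, vKp)
    return lhs - rhs
```

Expanding the right-hand side by hand confirms the identity algebraically; the residual vanishes up to rounding for any trace values.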
Given a nonnegative integer~$k$, let~$\hat{P}_k(\hat{K})$ denote the space of polynomials of total degree up to~$k$ with support on the reference domain~$\hat{K}$. Also, let~$\hat{Q}_k(\hat{K})$ denote the space of tensor-product polynomials of degree up to~$k$ in each coordinate direction of~$\hat{K}$. We define~$\hat{S}_k(\hat{K}) = \hat{P}_k(\hat{K})$ when~$\hat{K}$ is the unit $d$-simplex, and~$\hat{S}_k(\hat{K}) = \hat{Q}_k(\hat{K})$ when~$\hat{K}$ is the unit $d$-hypercube. In addition, let~$S_k(K) = \{ v : v \, \circ \, T_K \in \hat{S}_k(\hat{K}) \}$. Then, assigning to each~$K \in \mT_h$ an integer~$p_K \geq 1$ to represent the local polynomial degree, we introduce the $hp$-finite element space
\begin{equation*}
V_{h, p} = \{ v \in H^1(\Omega, \mT_h) : \left. v \right|_K \in S_{p_K}(K) , \ \forall K \in \mT_h \} ,
\end{equation*}
where~$p = \min_{K \in \mT_h} p_K$.
In the analysis that follows, we make some structural assumptions on the subdivision~$\mT_h$ and the distribution of the local polynomial degrees~$\{ p_K \}_{K \in \mT_h}$.
\begin{assumption} \label{asm:B} \
\begin{enumerate}
\item[(i)] For each~$K \in \mT_h$ and some integer~$r_K \geq 2$, the map~$T_K \colon \hat{K} \to K$ is a $C^{r_K}$-diffeomorphism satisfying $\seminorm{T_K}_{[W_\infty^{s}(\hat{K})]^{d, d}} \leq \beta_1 \, h_K^s$ and $\seminorm{T_K^{-1}}_{[W_\infty^s(K)]^{d, d}} \leq \beta_1 \, h_K^{-s}$ for~$s \in [0, r_K]$, with constant~$\beta_1$ independent of~$h_K$.
\item[(ii)] The subdivision~$\mT_h$ is \emph{uniformly graded}; i.e., there exists a constant $\beta_2 > 0$ such that, for all pairs of neighboring elements~$K, K' \in \mT_h$ sharing a face~$F \in \mF_{h, 0}$, there holds $\beta_2^{-1} \leq h_K / h_{K'} \leq \beta_2$.
\item[(iii)] The polynomial degrees~$\{ p_K \}_{K \in \mT_h}$ have \emph{bounded local variation}; i.e., there exists a constant~$\beta_3 > 0$ such that, for all pairs of neighboring elements~$K, K' \in \mT_h$ sharing a face~$F \in \mF_{h, 0}$, there holds $\beta_3^{-1} \leq p_K / p_{K'} \leq \beta_3$.
\end{enumerate}
\end{assumption}
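Conditions (ii) and (iii) are easy to verify mechanically for a given mesh and degree distribution. The helper below (our naming, not from the paper) returns the smallest admissible constants $\beta_2$ and $\beta_3$ from the element sizes and polynomial degrees of each neighboring pair:

```python
def grading_constants(neighbor_pairs):
    """neighbor_pairs: iterable of ((h_K, p_K), (h_K2, p_K2)), one entry per
    interior face shared by elements K and K'.  Returns the smallest
    (beta2, beta3) satisfying Assumption B(ii)-(iii) for this mesh."""
    beta2 = beta3 = 1.0
    for (h1, p1), (h2, p2) in neighbor_pairs:
        beta2 = max(beta2, h1 / h2, h2 / h1)   # uniform grading constant
        beta3 = max(beta3, p1 / p2, p2 / p1)   # bounded local variation
    return beta2, beta3
```

For instance, a single local bisection next to an unrefined element yields $\beta_2 = 2$, and raising the degree by one between neighbors of degree $1$ and $2$ yields $\beta_3 = 2$.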
Note that we allow for fairly general subdivisions composed of possibly non-affine and curved elements with multilevel hanging nodes. The only requirement is that each~$K \in \mT_h$ is nondegenerate and sufficiently ``close'' to some affine image of the reference domain~$\hat{K}$ (cf. Assumption~\ref{asm:B}(i); see also, for example, \cite{CiarletRaviart1972}), and that the number of hanging nodes per element face is bounded for all~$K \in \mT_h$ (cf. Assumption~\ref{asm:B}(ii)). We remark that, if~$\mT_h$ is composed of affine images of simplices and/or multilinear images of hypercubes, then Assumption~\ref{asm:B}(i) reduces to a standard shape regularity condition.
We end this section with some auxiliary results that are needed for the subsequent analysis. Here, and in the sequel, we denote by~$C$ and~$C_i$~($i = 1, 2, \dots$) generic constants, possibly different on each occurrence, which are independent of~$h$ and~$p$. In addition, we write~$C \equiv C( \lambda_1 , \dots, \lambda_N)$ to indicate the dependence of the constant~$C$ on the parameters~$\lambda_1, \dots, \lambda_N$. We state without proof the following trace inequality; the proof is analogous to that of Lemma~1.49 in~\cite{PietroErn2011}.
\begin{lemma}[Multiplicative trace inequality] \label{lem:trace_inequality}
Let $K \in \mT_h$ and $F \in \mF_{h, K}$. Then, for any $v \in H^{s + 1}(K)$, $0 \leq s \leq r_K - 1$, there exists a constant~$C \equiv C(d, \beta_1)$ such that
\begin{equation} \label{eq:trace_inequality}
\norm{v}_{H^s(F)}^2
\leq
C \left( h_K^{-1} \norm{v}_{H^s(K)}^2 + \norm{v}_{H^s(K)} \, \norm{v}_{H^{s+1}(K)} \right) .
\end{equation}
\end{lemma}
For future reference, we also state the following $hp$-type inverse estimates; cf.~\cite[Lemma~3]{Quarteroni1984}.
\begin{lemma}[Inverse estimates] \label{lem:inv_estimates}
Let~$K \in \mT_h$ and $F \in \mF_{h, K}$, and denote by $\meas[d]{K}$ and $\meas[d-1]{F}$ the corresponding Hausdorff measures of dimension $d$ and $d-1$, respectively. Then, for any $v \in S_{p_K}(K)$, there exists a constant~$C \equiv C(d, \beta_1)$ such that:
\begin{enumerate}
\item[(i)] for~$0 \leq s \leq r_K - 1$,
\begin{equation}
\norm{v}_{H^{s + 1}(K)}
\leq
C \, p_K \, h_K^{-1/2} \, \norm{v}_{H^s(K)} \, ; \label{eq:inv_estimate_1}
\end{equation}
\item[(ii)] for~$0 \leq s \leq r_K$,
\begin{align}
\norm{v}_{W_\infty^s(K)}
\leq \ &
C \, p_K \, \meas[d]{K}^{-1/2} \, \norm{v}_{H^s(K)} \, , \label{eq:inv_estimate_2a}
\\[6pt]
\norm{v}_{W_\infty^s(F)}
\leq \ &
C \, p_K \, \meas[d-1]{F}^{-1/2} \, \norm{v}_{H^s(F)} \, . \label{eq:inv_estimate_2b}
\end{align}
\end{enumerate}
\end{lemma}
Using the trace inequality~\eqref{eq:trace_inequality} and the inverse estimate~\eqref{eq:inv_estimate_1}, and taking into consideration Assumption~\ref{asm:B}, we prove the following result.
\begin{lemma} \label{lem:inv_trace_inequality}
Let
\begin{equation}
\mu_F :=
\begin{cases}
\frac{1}{2} ( \meas[d]{K} + \meas[d]{K'} ) \, / \, \meas[d-1]{F} & \text{for} \ F \in \mF_{h, 0},
\\[3pt]
\meas[d]{K} \, / \, \meas[d-1]{F} & \text{for} \ F \in \mF_{h, \rmD} \cup \mF_{h, \rmN} ,
\end{cases} \label{eq:mu_F}
\end{equation}
and
\begin{equation}
p_F :=
\begin{cases}
\frac{1}{2}(p_K + p_{K'}) & \text{for} \ F \in \mF_{h, 0} ,
\\[3pt]
p_K & \text{for} \ F \in \mF_{h, \rmD} \cup \mF_{h, \rmN} ,
\end{cases} \label{eq:p_F}
\end{equation}
where~$K, K' \in \mT_h$ (resp. $K \in \mT_h$) are the element domains adjacent to the face~$F \in \mF_{h, 0}$ (resp. $F \in \mF_{h, \rmD} \cup \mF_{h, \rmN}$). There exists a constant~$C \equiv C(d, \beta_1, \beta_2, \beta_3)$ such that, for all~$v \in V_{h, p}$,
\begin{equation}
\sum_{F \in \mF_h} \frac{\mu_F}{p_F^2} \int_F \avg{\abs{\nabla{v}}}^2 \ud s
\leq
C \sum_{K \in \mT_h} \int_K \abs{\nabla{v}}^2 \ud x . \label{eq:inv_trace_inequality}
\end{equation}
\end{lemma}
\begin{proof}
Let~$K \in \mT_h$ and~$F \in \mF_{h, K}$. From Assumption~\ref{asm:B}(i) and~\ref{asm:B}(ii) it follows that there exists a constant~$C_1 \equiv C_1(d, \beta_1, \beta_2)$ such that~$\mu_F \leq C_1 \, h_K$. Moreover, Assumption~\ref{asm:B}(iii) implies that~$p_F^2 \geq C_2 \, p_K^2$ for some positive constant~$C_2 \equiv C_2(\beta_3)$. Hence, by Young's inequality, we deduce that
\begin{equation*}
\sum_{F \in \mF_h} \frac{\mu_F}{p_F^2} \int_F \avg{\abs{\nabla{v}}}^2 \ud s
\leq
\frac{C_1}{C_2} \sum_{K \in \mT_h} \frac{h_K}{p_K^2} \sum_{F \in \mF_{h, K}} \int_F \bigabs{\left.(\nabla{v})\right|_K}^2 \ud s .
\end{equation*}
On account of Assumption~\ref{asm:B}(ii) we have that $\mathrm{card}(\mF_{h, K}) \leq C_3$ for some positive integer~$C_3 \equiv C_3(d, \beta_2)$. Using the trace inequality~\eqref{eq:trace_inequality} with constant $C_4 \equiv C_4(d, \beta_1)$, we then obtain:
\begin{equation*}
\sum_{F \in \mF_h} \frac{\mu_F}{p_F^2} \int_F \avg{\abs{\nabla{v}}}^2 \ud s
\leq
C_3 \, C_4 \, \frac{C_1}{C_2} \sum_{K \in \mT_h} \frac{h_K}{p_K^2} \left( h_K^{-1} \norm{v}_{H^1(K)}^2 + \norm{v}_{H^1(K)} \norm{v}_{H^2(K)} \right) .
\end{equation*}
The proof is concluded by applying the inverse estimate~\eqref{eq:inv_estimate_1}.
\end{proof}
\section{Discontinuous Galerkin method} \label{section:dgfem}
Let us consider the sum space~$V(h, p) := V_{h, p} + H^s(\Omega)$, $s > 3/2$. For~$w, v \in V(h, p)$, we introduce the semilinear form
\begin{equation}
N(w; v)
=
\sum_{K \in \mT_h} \int_K \bA(\nabla{w}) \nabla{w} \cdot \nabla{v} \ud x
+
B_0(w; v)
+
B_\rmD(w; v) , \label{eq:N}
\end{equation}
and the linear form
\begin{equation}
L(v)
=
\sum_{K \in \mT_h} \int_K f v \ud x
+
\int_{\Gamma_{h, \rmN}} g_\rmN \, v \ud s . \label{eq:L}
\end{equation}
Here,
\begin{align*}
B_0(w; v)
= &
- \int_{\Gamma_{h, 0}} \avg{\bA(\nabla{w} - \sigma \jump{w}) \nabla{w}} \cdot \jump{v} \ud s
\\ &
+
\theta \int_{\Gamma_{h, 0}} \avg{\bA^\rmT(\nabla{w} - \sigma \jump{w}) \nabla{v} } \cdot \jump{w} \ud s
\\ &
+
\int_{\Gamma_{h, 0}} \sigma \avg{\bA(\nabla{w} - \sigma \jump{w})} \, \jump{w} \cdot \jump{v} \ud s
\\ &
+
\theta \int_{\Gamma_{h, 0}} \sigma^{-1} \avg{( \bA(\nabla{w}) - \bA(\nabla{w} - \sigma \jump{w}) ) \nabla{w} \cdot \nabla{v}} \ud s
\end{align*}
and
\begin{align*}
B_\rmD(w; v)
= &
- \int_{\Gamma_{h, \rmD}} \bA(\nabla{w} - \sigma \bn (w - g_\rmD)) \nabla{w} \cdot \bn v \ud s
\\ &
+
\theta \int_{\Gamma_{h, \rmD}} \bA^\rmT(\nabla{w} - \sigma \bn (w - g_\rmD)) \nabla{v} \cdot \bn (w - g_\rmD) \ud s
\\ &
+
\int_{\Gamma_{h, \rmD}} \bA(\nabla{w} - \sigma \bn (w - g_\rmD)) \bn v \cdot \bn (w - g_\rmD) \ud s
\\ &
+
\theta \int_{\Gamma_{h, \rmD}} \sigma^{-1} ( \bA(\nabla{w}) - \bA(\nabla{w} - \sigma \bn (w - g_\rmD)) ) \nabla{w} \cdot \nabla{v} \ud s ,
\end{align*}
where~$\bA^\rmT(\cdot)$ denotes the transpose of~$\bA(\cdot)$, $\theta$ is a fixed constant in~$[-1, 1]$, and~$\sigma$ is a piecewise constant function on~$\Gamma_{h, 0, \rmD}$, defined by
\begin{equation*}
\left. \sigma \right|_F = \alpha \, \frac{ p_F^2 }{ \mu_F } , \qquad F \in \mF_{h, 0, \rmD} .
\end{equation*}
Here, $\mu_F$ and~$p_F$ are defined as in~\eqref{eq:mu_F} and~\eqref{eq:p_F}, and $\alpha$ is the so-called \emph{interior penalty parameter}, which is a positive constant independent of~$h$ and~$p$. As usual, we require that~$\alpha$ is sufficiently large. Anticipating the result of Theorem~\ref{thm:wellposedness_dgfem}, we state that $\alpha > \alpha_0 = 2 \, C \, (1 + \lambda_\theta \, C_\bA / M_\bA)^2$ will suffice, where~$\lambda_\theta = 1 + \abs{1 + \theta}$ and $C$ is the constant from Lemma~\ref{lem:inv_trace_inequality}.
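The penalty function is straightforward to evaluate from the face data in \eqref{eq:mu_F} and \eqref{eq:p_F}. The following sketch (our code; the argument names are ours) passes the primed quantities only for interior faces:

```python
def mu_F(meas_K, meas_F, meas_Kp=None):
    """mu_F: boundary face if meas_Kp is None, interior face otherwise.
    meas_K, meas_Kp are d-dimensional element measures; meas_F is the
    (d-1)-dimensional face measure."""
    if meas_Kp is None:
        return meas_K / meas_F
    return 0.5 * (meas_K + meas_Kp) / meas_F

def p_F(p_K, p_Kp=None):
    """Face polynomial degree: p_K on boundary faces, the mean on interior ones."""
    return float(p_K) if p_Kp is None else 0.5 * (p_K + p_Kp)

def sigma_F(alpha, p_K, meas_K, meas_F, p_Kp=None, meas_Kp=None):
    """Penalty value sigma|_F = alpha * p_F^2 / mu_F."""
    return alpha * p_F(p_K, p_Kp) ** 2 / mu_F(meas_K, meas_F, meas_Kp)
```

On a uniform one-dimensional mesh of size $h$ with degree $p$, this reduces to the familiar scaling $\sigma = \alpha \, p^2 / h$.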
The interior penalty $hp$-DG approximation of~\eqref{eq:model_problem} is now stated as follows: find~$u_{h, p} \in V_{h, p}$ such that
\begin{equation}
N(u_{h, p}; v) = L(v) \qquad \forall \, v \in V_{h, p} . \label{eq:dgfem}
\end{equation}
We note that, in the linear case of~$\bA(\cdot) = \mathbf{I}$, with~$\mathbf{I}$ the $d \times d$ identity matrix, and for particular choices of the parameters~$\theta$ and~$\alpha$, the DG formulation~\eqref{eq:dgfem} reduces to various well-known DG methods. Notable examples include the \emph{symmetric} interior penalty method for~$\theta = -1$ and~$\alpha > \alpha_0 > 0$ (cf.~\cite{Arnold1982}), and the \emph{nonsymmetric} interior penalty method for~$\theta = 1$ and~$\alpha > 0$ (cf.~\cite{RiviereEtAl1999}).
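To make the linear reduction concrete, here is a minimal self-contained sketch (ours, not the authors' implementation) of the symmetric case $\theta = -1$, $\bA = \mathbf{I}$ in one dimension: piecewise-linear DG elements on a uniform mesh of $(0, 1)$ for $-u'' = f$ with homogeneous Dirichlet data and penalty $\sigma = \alpha \, p_F^2 / \mu_F = \alpha / h$:

```python
import math

def solve_sipg_1d(N, f, alpha=10.0):
    """Symmetric interior penalty DG (theta = -1, A = I, p = 1) for
    -u'' = f on (0,1), u(0) = u(1) = 0, uniform mesh of N elements.
    Illustrative sketch only; returns the 2N elementwise nodal dofs."""
    h = 1.0 / N
    sig = alpha / h                        # sigma = alpha * p_F^2 / mu_F
    n = 2 * N                              # two linear dofs per element
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    gp = 1.0 / math.sqrt(3.0)              # 2-point Gauss rule on [-1, 1]

    for e in range(N):                     # volume terms: u'v' and f v
        xl, i = e * h, 2 * e
        for a in range(2):
            for c in range(2):
                A[i + a][i + c] += (1.0 if a == c else -1.0) / h
        for q in (-gp, gp):
            x = xl + 0.5 * h * (1.0 + q)
            for a, phi in enumerate(((xl + h - x) / h, (x - xl) / h)):
                rhs[i + a] += 0.5 * h * f(x) * phi

    def add_face(dofs, jmp, avgd):         # -{u'}[v] - {v'}[u] + sigma [u][v]
        for a in range(len(dofs)):
            for c in range(len(dofs)):
                A[dofs[a]][dofs[c]] += (-avgd[c] * jmp[a] - avgd[a] * jmp[c]
                                        + sig * jmp[c] * jmp[a])

    for e in range(N - 1):                 # interior faces
        add_face([2 * e, 2 * e + 1, 2 * e + 2, 2 * e + 3],
                 [0.0, 1.0, -1.0, 0.0],
                 [-0.5 / h, 0.5 / h, -0.5 / h, 0.5 / h])
    add_face([0, 1], [-1.0, 0.0], [-1.0 / h, 1.0 / h])         # x = 0, n = -1
    add_face([n - 2, n - 1], [0.0, 1.0], [-1.0 / h, 1.0 / h])  # x = 1, n = +1

    for k in range(n):                     # Gaussian elimination, partial pivoting
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        rhs[k], rhs[piv] = rhs[piv], rhs[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            rhs[r] -= m * rhs[k]
    u = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][c] * u[c] for c in range(k + 1, n))
        u[k] = (rhs[k] - s) / A[k][k]
    return u

def broken_l2_error(u, N, exact):
    """Broken L2 error of the DG solution against a given exact function."""
    h, gp, err2 = 1.0 / N, 1.0 / math.sqrt(3.0), 0.0
    for e in range(N):
        xl = e * h
        for q in (-gp, gp):
            x = xl + 0.5 * h * (1.0 + q)
            uh = u[2 * e] * (xl + h - x) / h + u[2 * e + 1] * (x - xl) / h
            err2 += 0.5 * h * (uh - exact(x)) ** 2
    return math.sqrt(err2)
```

For a smooth exact solution one expects the broken $L^2$ error to decrease at second order in $h$ for piecewise-linear elements, in line with the $L^2(\Omega)$-norm estimates discussed above for $\theta = -1$.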
Under suitable regularity conditions, one can show that~\eqref{eq:dgfem} is a consistent approximation of~\eqref{eq:model_problem}.
\begin{lemma}[Galerkin orthogonality] \label{lem:galerkin_orthogonality}
Assume that~\eqref{eq:model_problem} has a strong solution~$u \in H^s(\Omega) \cap C^0(\Omega)$, $s > 3/2$. Then,
\begin{equation} \label{eq:galerkin_orthogonality}
N(u; v) - N(u_{h, p}; v) = 0 \qquad \forall v \in V_{h, p} .
\end{equation}
\end{lemma}
\begin{proof}
Since~$u \in C^0(\Omega)$, we have that~$\left. \jump{u} \right|_F = 0$ strongly for all~$F \in \mF_{h, 0}$. Moreover, since~$u$ satisfies~\eqref{eq:model_problem_pde} almost everywhere, we have that~$\nabla \cdot \bA(\nabla{u}) \nabla{u} \in L^2(\Omega)$. From~\cite[Lemma~1.24]{PietroErn2011}, it then follows that $\left. \jump{\bA(\nabla{u}) \nabla{u}} \right|_F = 0$ almost everywhere for all~$F \in \mF_{h, 0}$. Therefore, upon integration by parts, we find that~$N(u; v) = L(v)$ for all~$v \in V_{h, p}$, from which we infer the stated result.
\end{proof}
For the analysis of the $hp$-DG approximation~\eqref{eq:dgfem}, we introduce the norms
\begin{gather*}
\dgnorm{v}^2
:=
\sum_{K \in \mT_h} \int_K \abs{\nabla{v}}^2 \ud x
+
\int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{v}}^2 \ud s , \qquad v \in V(h, p) ,
\\
\dgnorm{v}_+^2
:=
\dgnorm{v}^2
+
\int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{v}}}^2 \ud s , \qquad v \in V(h, p) .
\end{gather*}
We note that these norms are equivalent on~$V_{h, p}$ for any~$\alpha > 0$. Indeed, by Lemma~\ref{lem:inv_trace_inequality} there exists a constant~$C$ such that
\begin{equation}
\dgnorm{v}^2 \leq \dgnorm{v}_+^2 \leq (1 + C \alpha^{-1}) \, \dgnorm{v}^2 \qquad \forall v \in V_{h, p} . \label{eq:norm_equivalence}
\end{equation}
Next, let $X(\Gamma_{h, 0, \rmD}) = \Pi_{K \in \mT_h} L^2(\partial K \cap \Gamma_{h, 0, \rmD})$ and define the trace operator $\widehat{\nabla}_\sigma \colon V(h, p) \to [X(\Gamma_{h, 0, \rmD})]^d$ such that, for~$K \in \mT_h$ and~$F \in \mF_{h, K}$,
\begin{equation*}
\left. ( \widehat{\nabla}_\sigma \, w ) \right|_K
=
\begin{cases}
\left. \left( \left. (\nabla{w}) \right|_K \right) \right|_{F} - \sigma \jump{w} & \text{if} \ F \in \mF_{h, 0} ,
\\[3pt]
\left. \left( \left. (\nabla{w}) \right|_K \right) \right|_{F} - \sigma \bn (w|_K - g_\rmD) & \text{if} \ F \in \mF_{h, \rmD} .
\end{cases}
\end{equation*}
By the fact that~$\avg{\jump{\cdot}} = \jump{\cdot}$, we have the following useful identity:
\begin{equation} \label{eq:N_alt}
\begin{aligned}
N(w; v)
= &
\sum_{K \in \mT_h} \int_K \bA(\nabla{w}) \nabla{w} \cdot \nabla{v} \ud x
\\ &
-
\int_{\Gamma_{h, 0, \rmD}} \avg{ \bA(\widehat{\nabla}_\sigma \, w) \, \widehat{\nabla}_\sigma \, w \cdot (\theta \sigma^{-1} \nabla{v} + \jump{v}) } \ud s
\\ &
+
\theta \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\bA(\nabla{w}) \nabla{w} \cdot \nabla{v}} \ud s .
\end{aligned}
\end{equation}
Rewriting the semilinear form~$N$ according to~\eqref{eq:N_alt} and using Assumption~\ref{asm:A}, we are able to prove the following two lemmata.
\begin{lemma}[Lipschitz continuity] \label{lem:lipschitz_continuity}
There exists a constant~$C_N \equiv C_N(\theta, C_\bA)$ such that
\begin{equation}
N(w_1; v) - N(w_2; v) \leq C_N \, \dgnorm{w_1 - w_2}_+ \, \dgnorm{v}_+ \qquad \forall w_1, w_2, v \in V(h, p) . \label{eq:lipschitz_continuity}
\end{equation}
\end{lemma}
\begin{proof}
Starting from~\eqref{eq:N_alt} and using that~$\abs{\avg{\bq_1 \cdot \bq_2}} \leq \avg{\abs{\bq_1} \ \abs{\bq_2}} \leq 2 \avg{\abs{\bq_1}} \, \avg{\abs{\bq_2}}$ for all~$\bq_1, \bq_2 \in [H^1(\Omega, \mT_h)]^d$, we have that
\begin{align*}
& N(w_1; v) - N(w_2; v)
\leq
\sum_{K \in \mT_h} \int_K \abs{\bA(\nabla{w_1}) \nabla{w_1} - \bA(\nabla{w_2}) \nabla{w_2}} \ \abs{\nabla{v}} \ud x
\\ & \quad
+
\int_{\Gamma_{h, 0, \rmD}} \avg{ \abs{\bA(\widehat{\nabla}_\sigma \, w_1) \, \widehat{\nabla}_\sigma \, w_1 - \bA(\widehat{\nabla}_\sigma \, w_2) \, \widehat{\nabla}_\sigma \, w_2} } \left( 2 \abs{\theta} \sigma^{-1} \avg{\abs{\nabla{v}}} + \abs{\jump{v}} \right) \ud s
\\ & \quad
+
2 \, \abs{\theta} \, \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\bA(\nabla{w_1}) \nabla{w_1} - \bA(\nabla{w_2}) \nabla{w_2}}} \avg{\abs{\nabla{v}}} \ud s .
\end{align*}
We use the Lipschitz condition~\eqref{eq:asm_A1} from~Assumption~\ref{asm:A} to bound each of these terms, yielding
\begin{align*}
& N(w_1; v) - N(w_2; v)
\leq
C_\bA \sum_{K \in \mT_h} \int_K \abs{\nabla{w_1}- \nabla{w_2}} \ \abs{\nabla{v}} \ud x
\\ & \quad
+
2 \abs{\theta} \, C_\bA \int_{\Gamma_{h, 0, \rmD}} \abs{\jump{w_1 - w_2}} \, \avg{\abs{\nabla{v}}} \ud s
+
C_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w_1 - w_2}} \ \abs{\jump{v}} \ud s
\\ & \quad
+
4 \abs{\theta} \, C_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{w_1} - \nabla{w_2}}} \, \avg{\abs{\nabla{v}}} \ud s .
\end{align*}
Upon application of the Cauchy-Schwarz inequality, we arrive at~\eqref{eq:lipschitz_continuity} with $C_N = (2 + 4 \abs{\theta}) \, C_\bA$.
\end{proof}
\begin{lemma}[Strong monotonicity] \label{lem:strong_monotonicity}
Let~$\theta \in [-1, 1]$ and select $\alpha > \alpha_0 = 2 \, C ( 1 + \lambda_\theta \, C_\bA / M_\bA )^2$, where $\lambda_\theta = 1 + \abs{1 + \theta}$ and~$C$ is the constant from Lemma~\ref{lem:inv_trace_inequality}. There exists a positive constant~$M_N \equiv M_N(M_\bA, \alpha_0 / \alpha)$ such that
\begin{equation}
N(w_1; w_1 - w_2) - N(w_2; w_1 - w_2) \geq M_N \, \dgnorm{w_1 - w_2}^2 \qquad \forall w_1, w_2 \in V_{h, p} . \label{eq:strong_monotonicity}
\end{equation}
\end{lemma}
\begin{proof}
Let us write $w = w_1 - w_2$. Starting from~\eqref{eq:N_alt}, we have that
\begin{equation} \label{eq:N_alt_2}
N(w_1; w_1 - w_2) - N(w_2; w_1 - w_2)
=
T_1 + T_2 + T_3 + T_4 ,
\end{equation}
where
\begin{align*}
T_1 = & \sum_{K \in \mT_h} \int_K ( \bA(\nabla{w_1}) \nabla{w_1} - \bA(\nabla{w_2}) \nabla{w_2} ) \cdot \nabla{w} \ud x ,
\\
T_2 = & \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ ( \bA(\widehat{\nabla}_\sigma w_1) \widehat{\nabla}_\sigma w_1 - \bA(\widehat{\nabla}_\sigma w_2) \widehat{\nabla}_\sigma w_2 ) \cdot \widehat{\nabla}_\sigma w } \ud s ,
\\
T_3 = & - (1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ ( \bA(\widehat{\nabla}_\sigma w_1) \widehat{\nabla}_\sigma w_1 - \bA(\widehat{\nabla}_\sigma w_2) \widehat{\nabla}_\sigma w_2 ) \cdot \nabla{w} } \ud s ,
\\
T_4 = & \ \theta \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ ( \bA(\nabla{w_1}) \nabla{w_1} - \bA(\nabla{w_2}) \nabla{w_2} ) \cdot \nabla{w} } \ud s .
\end{align*}
Using the monotonicity condition~\eqref{eq:asm_A2} from Assumption~\ref{asm:A}, it immediately follows that
\begin{equation*}
T_1 \geq M_\bA \sum_{K \in \mT_h} \int_K \abs{\nabla{w}}^2 \ud x .
\end{equation*}
Analogously, for~$T_2$, we find that
\begin{align*}
T_2
\geq \ &
M_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ \abs{\widehat{\nabla}_\sigma w}^2 } \ud s
\\ = \ &
M_\bA \int_{\Gamma_{h, 0, \rmD}} \left( \sigma^{-1} \avg{\abs{\nabla{w}}^2} - 2 \avg{\nabla{w}} \cdot \jump{w} + \sigma \abs{\jump{w}}^2 \right) \ud s
\\ \geq \ &
- 2 M_\bA \int_{\Gamma_{h, 0, \rmD}} \avg{\abs{\nabla{w}}} \ \abs{\jump{w}} \ud s
+
M_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w}}^2 \ud s .
\end{align*}
The first term on the right-hand side can be further bounded by using Young's inequality $2 a b \leq \epsilon^{-1} a^2 + \epsilon b^2$, with $a = \sigma^{-1/2} \avg{\abs{\nabla{w}}}$, $b = \sigma^{1/2} \abs{\jump{w}}$ and~$\epsilon > 0$. Subsequently applying Lemma~\ref{lem:inv_trace_inequality}, we obtain
\begin{align*}
T_2
\geq &
- M_\bA \, \epsilon^{-1} \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{w}}}^2 \ud s
+
M_\bA (1 - \epsilon) \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w}}^2 \ud s
\\ \geq &
- M_\bA \, \epsilon^{-1} \, C \, \alpha^{-1} \sum_{K \in \mT_h} \int_K \abs{\nabla{w}}^2 \ud x
+
M_\bA (1 - \epsilon) \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w}}^2 \ud s ,
\end{align*}
where~$C$ is the constant from Lemma~\ref{lem:inv_trace_inequality}. For~$T_3$, using the Lipschitz condition~\eqref{eq:asm_A1} from Assumption~\ref{asm:A} together with the fact that~$\avg{\abs{\cdot}^2} \leq 2 \avg{\abs{\cdot}}^2$, and proceeding similarly as for~$T_2$, we have that
\begin{align*}
T_3
\geq &
- \abs{1 + \theta} \, C_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{ \widehat{\nabla}_\sigma w} \ \abs{\nabla{w}}} \ud s
\\ \geq &
- \abs{1 + \theta} \, C_\bA \int_{\Gamma_{h, 0, \rmD}} ( 2 \sigma^{-1} \avg{\abs{\nabla{w}}}^2 + \abs{\jump{w}} \ \avg{\abs{\nabla{w}}} ) \ud s
\\ \geq &
- \abs{1 + \theta} \, C_\bA \left( (2 + \epsilon^{-1}) \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{w}}}^2 \ud s
+
\epsilon \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w}}^2 \ud s \right)
\\ \geq &
- \abs{1 + \theta} \, C_\bA \left( (2 + \epsilon^{-1}) \, C \, \alpha^{-1} \sum_{K \in \mT_h} \int_K \abs{\nabla{w}}^2 \ud x
+
\epsilon \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w}}^2 \ud s \right)
\end{align*}
for any~$\epsilon > 0$. Finally, for~$T_4$, using the Lipschitz condition~\eqref{eq:asm_A1} together with the fact that~$\abs{\theta} \leq 1$, and subsequently applying Lemma~\ref{lem:inv_trace_inequality}, we obtain
\begin{align*}
T_4
\geq &
- \abs{\theta} C_\bA \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{w}}}^2 \ud s
\\ \geq &
- C_\bA \, C \, \alpha^{-1} \sum_{K \in \mT_h} \int_K \abs{\nabla{w}}^2 \ud x .
\end{align*}
Substituting the above bounds for~$T_1$ to~$T_4$ back into~\eqref{eq:N_alt_2} and recalling that~$C_\bA \geq M_\bA > 0$, we deduce that
\begin{align*}
\hspace{20pt} & \hspace{-20pt} N(w_1; w_1 - w_2) - N(w_2; w_1 - w_2)
\\
\geq & \left( M_\bA - (2 + \epsilon^{-1}) \, \lambda_\theta \, C_\bA \, C \, \alpha^{-1} \right) \sum_{K \in \mT_h} \int_K \abs{\nabla{w_1} - \nabla{w_2}}^2 \ud x
\\
& + \left( M_\bA - \lambda_\theta \, C_\bA \, \epsilon \right) \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w_1 - w_2}}^2 \ud s ,
\end{align*}
where~$\lambda_\theta = 1 + \abs{1 + \theta}$. Upon selecting $\epsilon = M_\bA / (2 \, \lambda_\theta \, C_\bA)$, we arrive at
\begin{align*}
N(w_1; w_1 - w_2) - N(w_2; w_1 - w_2)
\geq \ &
M_\bA \left( 1 - \frac{\alpha_0}{\alpha} \right)
\sum_{K \in \mT_h} \int_K \abs{\nabla{w_1} - \nabla{w_2}}^2 \ud x
\\ &
+
\frac{1}{2} M_\bA
\int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{w_1 - w_2}}^2 \ud s ,
\end{align*}
where~$\alpha_0 = 2 \, C \left( 1 + \lambda_\theta \, C_\bA / M_\bA \right)^2$. Hence, we have proved~\eqref{eq:strong_monotonicity} with $M_N = M_\bA \, \min ( \frac{1}{2}, \, 1 - \alpha_0 / \alpha )$. We conclude by noting that~$M_N > 0$ whenever~$\alpha > \alpha_0$.
\end{proof}
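To indicate how the penalization threshold depends on~$\theta$, note that $\lambda_\theta = 1 + \abs{1 + \theta}$ ranges from~$1$ at~$\theta = -1$ to~$3$ at~$\theta = 1$, so that
\begin{equation*}
\alpha_0 = 2 \, C \left( 1 + \frac{C_\bA}{M_\bA} \right)^{\!2} \quad \text{for } \theta = -1 ,
\qquad
\alpha_0 = 2 \, C \left( 1 + \frac{3 \, C_\bA}{M_\bA} \right)^{\!2} \quad \text{for } \theta = 1 ;
\end{equation*}
in particular, the condition on~$\alpha$ is least restrictive for~$\theta = -1$.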
With the aid of Lemma~\ref{lem:lipschitz_continuity} and Lemma~\ref{lem:strong_monotonicity}, we are now in a position to prove that the DG approximation~\eqref{eq:dgfem} admits a unique solution~$u_{h, p} \in V_{h, p}$. Necessary and sufficient conditions for existence and uniqueness are provided by Theorem~\ref{thm:nonlinear_infsup} in the Appendix. The following result is an immediate consequence.
\begin{theorem}[Existence and uniqueness] \label{thm:wellposedness_dgfem}
Let~$\theta \in [-1, 1]$ and~$\alpha > \alpha_0 = 2 \, C \, ( 1 + \lambda_\theta \, C_\bA / M_\bA )^2$, where $\lambda_\theta = 1 + \abs{1 + \theta}$ and~$C$ is the constant from Lemma~\ref{lem:inv_trace_inequality}. Then, the DG approximation~\eqref{eq:dgfem} has a unique solution~$u_{h, p} \in V_{h, p}$.
\end{theorem}
\section{\emph{A priori} error analysis} \label{section:error_analysis}
We begin by introducing the following $hp$-approximation results.
\begin{lemma} \label{lem:approximation_estimate}
Let $K \in \mT_h$ be such that $K = T_K(\hat{K})$, where $\hat{K}$ is either the unit $d$-simplex or the unit $d$-hypercube, and $T_K$ is a $C^{r_K}$-diffeomorphism in compliance with Assumption~\ref{asm:B}(i). For~$s_K \geq 0$, let $v \in H^{s_K}(K)$ and define $t_K = \min(r_K, s_K)$. Then, for $p_K = 1, 2, \dots$, there exists a mapping $\pi_K \colon H^{s_K}(K) \to S_{p_K}(K)$ and a constant~$C$, independent of~$h_K$, $p_K$ and~$v$, such that:
\begin{enumerate}
\item[(i)] for~$0 \leq k \leq t_K$,
\begin{equation*}
\norm{v - \pi_K(v)}_{H^k(K)}
\leq
C \, \frac{h_K^{\mu_K - k}}{p_K^{t_K - k}} \, \norm{v}_{H^{t_K}(K)} \, ;
\end{equation*}
\item[(ii)] for~$0 \leq k + 1/2 < t_K$, and for~$F \in \mF_{h, K}$,
\begin{equation*}
\norm{v - \pi_K(v)}_{H^k(F)}
\leq
C \, \frac{h_K^{\mu_K - k - 1/2}}{p_K^{t_K - k - 1/2}} \, \norm{v}_{H^{t_K}(K)} \, .
\end{equation*}
\end{enumerate}
Here, $\mu_K = \min(p_K + 1, r_K, s_K)$.
\end{lemma}
\begin{proof}
We refer to the proof of Lemma~4.5 in~\cite{BabuskaSuri1987} for the case that~$K$ is an affine image of the unit triangle or unit quadrilateral. The generalization to non-affine triangles and quadrilaterals follows \emph{mutatis mutandis} by proceeding similarly as in the proof of Theorem~1 of~\cite{CiarletRaviart1972} while making use of~\cite[Lemma~4.1]{BabuskaSuri1987}, and subsequently exploiting Assumption~\ref{asm:B}(i). The argument for simplices and hypercubes of dimension $d > 2$ is completely analogous.
\end{proof}
\begin{corollary} \label{crl:approximation_estimate_dgnorm}
For $s > 3/2$, let~$\Pi_{h, p} \colon H^s(\Omega, \mT_h) \to V_{h, p}$ such that $\left. \Pi_{h, p}(\cdot) \right|_K = \pi_K(\cdot)$ for~$K \in \mT_h$, where~$\pi_K$ is the mapping from Lemma~\ref{lem:approximation_estimate}. Moreover, let~$v \in H^s(\Omega, \mT_h)$ with~$\left. v \right|_K \in H^{s_K}(K)$, $s_K \geq s$, $K \in \mT_h$, and select $\alpha > 0$. There exists a constant~$C$ such that
\begin{equation*}
\dgnorm{v - \Pi_{h, p}(v)}_+
\leq
C \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \, \norm{v}_{H^{t_K}(K)}^2 \right)^{1/2} ,
\end{equation*}
where~$t_K = \min(r_K, s_K)$ and~$\mu_K = \min(p_K + 1, r_K, s_K)$.
\end{corollary}
\begin{proof}
Consider $K \in \mT_h$ and~$F \in \mF_{h, K}$. From Assumption~\ref{asm:B} it follows that there exist positive constants~$C_1 \equiv C_1(d, \beta_1, \beta_2)$ and~$C_2 \equiv C_2(d, \beta_3)$ such that $C_1^{-1} h_K \leq \mu_F \leq C_1 \, h_K$ and $C_2^{-1} p_K^2 \leq p_F^2 \leq C_2 \, p_K^2$. Hence,
\begin{equation*}
\alpha \, C_3^{-1} \, \frac{p_K^2}{h_K} \leq \left. \sigma \right|_F \leq \alpha \, C_3 \, \frac{p_K^2}{h_K} ,
\end{equation*}
where $C_3 = C_2 / C_1$. Accordingly, by Young's inequality, we have that, for~$\eta = v - \Pi_{h, p}(v)$,
\begin{align*}
\dgnorm{\eta}_+^2
= \ &
\sum_{K \in \mT_h} \int_K \abs{\nabla{\eta}}^2 \ud x
+
\int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\jump{\eta}}^2 \ud s
+
\int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{\abs{\nabla{\eta}}}^2 \ud s
\\ \leq \ &
\sum_{K \in \mT_h} \left( \norm{\eta}_{H^1(K)}^2
+
\sum_{F \in \mF_{h, K}} \!\! \left( 2 \alpha C_3 \frac{p_K^2}{h_K} \norm{\eta}_{L^2(F)}^2
+
\alpha^{-1} C_3 \frac{h_K}{p_K^2} \norm{\eta}_{H^1(F)}^2 \right) \right) .
\end{align*}
Here, in view of the approximation estimates from Lemma~\ref{lem:approximation_estimate},
\begin{align*}
\norm{\eta}_{H^1(K)}^2 \leq & \ C \, \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 2}} \, \norm{v}_{H^{t_K}(K)}^2 ,
\\
\norm{\eta}_{L^2(F)}^2 \leq & \ C \, \frac{h_K^{2 \mu_K - 1}}{p_K^{2 t_K - 1}} \, \norm{v}_{H^{t_K}(K)}^2 ,
\\
\norm{\eta}_{H^1(F)}^2 \leq & \ C \frac{h_K^{2 \mu_K - 3}}{p_K^{2 t_K - 3}} \norm{v}_{H^{t_K}(K)}^2 .
\end{align*}
Hence,
\begin{equation*}
\dgnorm{\eta}_+^2
\leq
C \sum_{K \in \mT_h} \left( \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 2}}
+
\alpha \, C_3 \, C_4 \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}}
+
\alpha^{-1} C_3 \, C_4 \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 1}} \right) \norm{v}_{H^{t_K}(K)}^2 ,
\end{equation*}
where $C_4 = \max_{K \in \mT_h} \left( \card(\mF_{h, K}) \right)$. Since $p_K \geq 1$ for all~$K \in \mT_h$, each of the three terms in parentheses is bounded by a constant multiple of $h_K^{2 \mu_K - 2} / p_K^{2 t_K - 3}$, whence the stated estimate follows.
\end{proof}
Using the $hp$-approximation estimate from Corollary~\ref{crl:approximation_estimate_dgnorm}, we prove the following \emph{a priori} error bound.
\begin{theorem} \label{thm:dgnorm_error_estimate}
Let~$u$ denote the solution to~\eqref{eq:model_problem} and suppose that~$u \in H^s(\Omega) \cap C^0(\Omega)$, $s > 3/2$, with $\left. u \right|_K \in H^{s_K}(K)$, $s_K \geq s$, $K \in \mT_h$. Furthermore, let $\theta \in [-1, 1]$ and $\alpha > \alpha_0$, with~$\alpha_0$ as in Lemma~\ref{lem:strong_monotonicity}. Then, denoting by~$u_{h, p} \in V_{h, p}$ the solution to~\eqref{eq:dgfem}, there exists a constant~$C$ such that
\begin{equation} \label{eq:dgnorm_error_estimate}
\dgnorm{u - u_{h, p}}_+ \leq C \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \, \norm{u}_{H^{t_K}(K)}^2 \right)^{1/2} ,
\end{equation}
where~$t_K = \min(r_K, s_K)$ and~$\mu_K = \min(p_K + 1, r_K, s_K)$.
\end{theorem}
\begin{proof}
Denote by~$\Pi_{h, p} \colon H^s(\Omega, \mT_h) \to V_{h, p}$ the mapping from Corollary~\ref{crl:approximation_estimate_dgnorm}, and let us write~$u - u_{h, p} = \eta + \xi$, where~$\eta = u - \Pi_{h, p}(u)$ and~$\xi = \Pi_{h, p}(u) - u_{h, p}$. Using Lemma~\ref{lem:strong_monotonicity}, the Galerkin-orthogonality property~\eqref{eq:galerkin_orthogonality} and Lemma~\ref{lem:lipschitz_continuity}, we have that
\begin{align*}
M_N \, \dgnorm{\xi}^2
\leq \ &
N(\Pi_{h, p}(u); \xi) - N(u_{h, p}; \xi)
\\ \leq \ &
N(\Pi_{h, p}(u); \xi) - N(u; \xi)
\\ \leq \ &
C_N \, \dgnorm{\eta}_+ \, \dgnorm{\xi}_+ .
\end{align*}
Since~$\xi \in V_{h, p}$, we note from~\eqref{eq:norm_equivalence} that there exists a constant~$C$ such that~$\dgnorm{\xi}_+^2 \leq C \, \dgnorm{\xi}^2$. Hence,
\begin{equation*}
\dgnorm{\xi}_+ \leq C \, \frac{C_N}{M_N} \, \dgnorm{\eta}_+ ,
\end{equation*}
and therefore, by the triangle inequality,
\begin{equation*}
\dgnorm{u - u_{h, p}}_+
\leq
\dgnorm{\eta}_+ + \dgnorm{\xi}_+
\leq \left( 1 + C \, \frac{C_N}{M_N} \right) \dgnorm{\eta}_+ .
\end{equation*}
The estimate~\eqref{eq:dgnorm_error_estimate} then follows by applying Corollary~\ref{crl:approximation_estimate_dgnorm}.
\end{proof}
We remark that the error estimate obtained in Theorem~\ref{thm:dgnorm_error_estimate} displays the same quasi-optimality as the error estimates obtained for interior penalty DG approximations of linear elliptic problems; cf., for example, \cite[Theorem~4.5]{HoustonEtAl2002}. That is, provided that $r_K \geq s_K \geq p_K + 1$ for all~$K \in \mT_h$, the estimate~\eqref{eq:dgnorm_error_estimate} is optimal in~$h$ and suboptimal in~$p$ by half an order. Here, the condition that~$r_K \geq s_K$ for all~$K \in \mT_h$ reflects the dependence of the estimates on the regularity of the mappings~$\{ T_K \}_{K \in \mT_h}$, and stresses the importance of proper mesh design, especially when curved elements are used; cf.~\cite{CiarletRaviart1972}.
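To make the preceding remark concrete, suppose the mesh is quasi-uniform of size~$h$ with uniform polynomial degree~$p_K = p$, and that $r_K \geq s_K = s \geq p + 1$ for all~$K \in \mT_h$; then $\mu_K = p + 1$, $t_K = s$, and the bound~\eqref{eq:dgnorm_error_estimate} reduces to
\begin{equation*}
\dgnorm{u - u_{h, p}}_+
\leq
C \, \frac{h^{p}}{p^{s - 3/2}} \, \norm{u}_{H^{s}(\Omega)} ,
\end{equation*}
which exhibits the optimal rate in~$h$, while the optimal rate~$p^{-(s-1)}$ in~$p$ is missed by the factor~$p^{1/2}$.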
Next, let~$\psi \in L^2(\Omega)$ and consider the linear functional $J_\psi(w) = (\psi, w)_\Omega$, where $w \in V(h, p)$ and $(\cdot, \cdot)_\Omega$ denotes the $L^2(\Omega)$~inner product. We shall now be concerned with obtaining a bound for the error $J_\psi(u) - J_\psi(u_{h, p})$. The analysis is based on a duality argument and relies on Fr\'{e}chet differentiability of the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ with respect to~$\bv$.
Accordingly, if the limit exists, let us denote by
\begin{equation} \label{eq:da}
\ba'(\bq; \bw) := \lim_{t \to 0} \frac{\bA(\bq + t \bw) (\bq + t \bw) - \bA(\bq) \bq}{t} , \qquad \bq, \bw \in \mathbb{R}^d ,
\end{equation}
the derivative of the map $\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ at~$\bq$ in the direction~$\bw$. Thanks to Assumption~\ref{asm:A} we are able to make the following claim.
\begin{lemma} \label{lem:frechet_differentiability}
Let $\bA$ satisfy the Lipschitz condition~\eqref{eq:asm_A1} of Assumption~\ref{asm:A}. Then, the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is Fr\'{e}chet differentiable almost everywhere. That is, for almost every~$\bq \in \mathbb{R}^d$, we have that:
\begin{enumerate}
\item[(i)] the limit~\eqref{eq:da} exists for all~$\bw \in \mathbb{R}^d$;
\item[(ii)] the mapping~$\bw \mapsto \ba'(\bq; \bw) \colon \mathbb{R}^d \to \mathbb{R}^d$ is linear and continuous;
\item[(iii)] $\ba'(\bq; \bw) = \bA(\bq + \bw) (\bq + \bw) - \bA(\bq) \bq + o(\abs{\bw})$ as $\bw \to \mathbf{0}$ in $\mathbb{R}^d$.
\end{enumerate}
\end{lemma}
\begin{proof}
The lemma is an immediate consequence of Rademacher's Theorem; see, for example, \cite[Section~3.1.2]{EvansGariepy1992}.
\end{proof}
For simplicity of presentation, and without loss of generality, we henceforth assume that the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is \emph{everywhere} Fr\'{e}chet differentiable in~$\mathbb{R}^d$, and we refer to Remark~\ref{rem:regularization} below for further discussion. Then, for $\bq \in \mathbb{R}^d$, let $\bA^\ast(\bq) \in \mathbb{R}^{d, d}$ such that~$\bA^\ast(\bq) \bv \cdot \bw = \ba'(\bq; \bw) \cdot \bv$ for all~$\bv, \bw \in \mathbb{R}^d$. Given~$\psi \in L^2(\Omega)$, we introduce the dual problem: find~$z \colon \Omega \to \mathbb{R}$ such that
\begin{subequations} \label{eq:dual_problem}
\begin{alignat}{2}
- \nabla{} \cdot \left( \bA^\ast(\nabla{u}) \nabla{z} \right) = \ & \psi & \qquad & \text{in} \ \Omega , \label{eq:dual_problem_pde}
\\
z = \ & 0 & \qquad & \text{on} \ \Gamma_\rmD , \label{eq:dual_problem_bcD}
\\
\bA^\ast(\nabla{u}) \nabla{z} \cdot \bn = \ & 0 & \qquad & \text{on} \ \Gamma_\rmN . \label{eq:dual_problem_bcN}
\end{alignat}
\end{subequations}
Using Assumption~\ref{asm:A}, it is easy to verify that~$\abs{\bA^\ast(\bq) \bv} \leq C_\bA \abs{\bv}$ and $\bA^\ast(\bq) \bv \cdot \bv \geq M_\bA \abs{\bv}^2$ for all~$\bq, \bv \in \mathbb{R}^d$, where~$C_\bA$ and~$M_\bA$ are the constants from~\eqref{eq:asm_A1} and~\eqref{eq:asm_A2}.
Hence, by the Lax-Milgram theorem we deduce that~\eqref{eq:dual_problem} has a unique weak solution~$z \in H^1(\Omega)$. In what follows, we shall assume slightly stronger regularity by supposing that there exists a strong solution $z \in H^2(\Omega)$ satisfying
\begin{equation} \label{eq:dual_regularity}
\norm{z}_{H^2(\Omega)} \leq C \norm{\psi}_{L^2(\Omega)} .
\end{equation}
From~\cite[Theorem~8.12]{GilbargTrudinger1983}, we note that this is satisfied if $\partial \Omega$ is of class~$C^2$ with~$\Gamma_\rmN = \emptyset$, and if~$\bA^\ast(\nabla{u}) \in [C^{0, 1}(\closure{\Omega})]^{d, d}$.
With the aid of the dual problem~\eqref{eq:dual_problem} we are able to derive the following \emph{a priori} bound for the error~$J_\psi(u) - J_\psi(u_{h, p})$.
\begin{theorem} \label{thm:functional_error_estimate}
Consider the same premises as in Theorem~\ref{thm:dgnorm_error_estimate}. Furthermore, assume that the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is everywhere Fr\'{e}chet differentiable in $\mathbb{R}^d$, and given $\psi \in L^2(\Omega)$, suppose that the dual problem~\eqref{eq:dual_problem} has a strong solution~$z \in H^2(\Omega)$ with $\left. z \right|_K \in H^{\ell_K}(K)$, $\ell_K \geq 2$, $K \in \mT_h$. Then, there exists a constant~$C$ such that
\begin{align}
J_\psi(u) - J_\psi(u_{h, p})
\leq &
\ C \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \right)^{\!\!1/2}
\nonumber \\
& \times \! \left(
\left( \sum_{K \in \mT_h} \frac{h_K^{2 \lambda_K - 2}}{p_K^{2 m_K - 3}} \norm{z}_{H^{m_K}(K)}^2 \right)^{\!\!1/2}
+
\frac{1 + \theta}{\sqrt{\alpha}} \, \norm{z}_{H^2(\Omega)} \! \right) + R , \label{eq:functional_error_estimate}
\end{align}
where $t_K = \min(r_K, s_K)$, $m_K = \min(r_K, \ell_K)$, $\mu_K = \min(p_K + 1, r_K, s_K)$, $\lambda_K = \min(p_K + 1, r_K, \ell_K)$, and where $R = o( \dgnorm{u - u_{h, p}}_+ ) \, \norm{z}_{H^2(\Omega)}$. Moreover, if the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is twice continuously differentiable everywhere in~$\mathbb{R}^d$, then there exists a constant~$C$ such that
\begin{equation}
R \leq C \max_{K \in \mT_h} \left( \frac{p_K^{3/2}}{h_K^{d/2}} \right) \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \! \right) \norm{z}_{H^2(\Omega)} . \label{eq:functional_error_estimate_Rbound}
\end{equation}
\end{theorem}
Before we embark on the proof of Theorem~\ref{thm:functional_error_estimate}, we first introduce an auxiliary result. By our assumption that the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is everywhere Fr\'{e}chet differentiable in~$\mathbb{R}^d$, we have that the map $y \mapsto N(y; v) \colon V(h, p) \to \mathbb{R}$ is everywhere Fr\'{e}chet differentiable in~$V(h, p)$. Accordingly, for any~$v \in V(h, p)$, let~$N'(q; w, v)$ denote the derivative of the map $y \mapsto N(y; v) \colon V(h, p) \to \mathbb{R}$ at~$q$ in the direction~$w$, given by
\begin{equation*}
N'(q; w, v) = \lim_{t \to 0} \frac{N(q + t w; v) - N(q; v)}{t} , \qquad q, w, v \in V(h, p) .
\end{equation*}
\begin{lemma} \label{lem:dual_consistency}
Let~$u \in H^s(\Omega) \cap C^0(\Omega)$, $s > 3/2$, denote the solution of~\eqref{eq:model_problem}, and suppose that the dual problem~\eqref{eq:dual_problem} has a strong solution~$z \in H^2(\Omega)$. Then,
\begin{equation*}
J_\psi(w)
=
N'(u; w, z)
-
(1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \ba'(\nabla{u}; \jump{w}) \cdot \nabla{z} \ud s \qquad \forall w \in V(h, p) .
\end{equation*}
\end{lemma}
\begin{proof}
Since~$z \in H^2(\Omega)$, we have that~$\left. \jump{z} \right|_F = \mathbf{0}$ for all~$F \in \mF_{h, 0}$. Accordingly, evaluating~$N'(u; w, z)$ for any~$w \in V(h, p)$, we find that
\begin{equation}
N'(u; w, z)
=
\sum_{K \in \mT_h} \int_K \ba'(\nabla{u}; \nabla{w}) \cdot \nabla{z} \ud x
+
\theta \int_{\Gamma_{h, 0, \rmD}} \! \avg{\ba'(\nabla{u}; \jump{w}) \cdot \nabla{z}} \ud s . \label{eq:dN_uwz}
\end{equation}
Using the dual problem~\eqref{eq:dual_problem} and integrating by parts, we also find that, for all~$w \in V(h, p)$,
\begin{equation} \label{eq:Jpsi_w}
\begin{aligned}
J_\psi(w)
= \ &
- \sum_{K \in \mT_h} \int_K w \, \nabla \cdot \left( \bA^\ast(\nabla{u}) \nabla{z} \right) \ud x
\\ = \ &
\sum_{K \in \mT_h} \left( \int_K \bA^\ast(\nabla{u}) \nabla{z} \cdot \nabla{w} \ud x - \int_{\partial K} \bA^\ast(\nabla{u}) \nabla{z} \cdot \bn_K w \ud s \right)
\\ = \ &
\sum_{K \in \mT_h} \int_K \bA^\ast(\nabla{u}) \nabla{z} \cdot \nabla{w} \ud x
-
\int_{\Gamma_{h, 0}} \jump{\bA^\ast(\nabla{u}) \nabla{z}} \avg{w} \ud s
\\ & -
\int_{\Gamma_{h, 0, \rmD}} \avg{\bA^\ast(\nabla{u}) \nabla{z}} \cdot \jump{w} \ud s .
\end{aligned}
\end{equation}
By \cite[Lemma~1.24]{PietroErn2011}, it follows that $\left. \jump{\bA^\ast(\nabla{u}) \nabla{z}} \right|_F = 0$ weakly for all~$F \in \mF_{h, 0}$. Thence, comparing~\eqref{eq:dN_uwz} and~\eqref{eq:Jpsi_w} while noting that $\bA^\ast(\nabla{u}) \nabla{z} \cdot \bw = \ba'(\nabla{u}; \bw) \cdot \nabla{z}$ for all~$\bw \in \mathbb{R}^d$, we obtain the stated result.
\end{proof}
With the aid of Lemma~\ref{lem:dual_consistency}, we now present a proof of Theorem~\ref{thm:functional_error_estimate}.
\begin{proof}[Proof of Theorem~\ref{thm:functional_error_estimate}]
Denote by~$\Pi_{h, p} \colon H^s(\Omega, \mT_h) \to V_{h, p}$, $s > 3/2$, the mapping from Corollary~\ref{crl:approximation_estimate_dgnorm}, and let us write $e = u - u_{h, p}$. Lemma~\ref{lem:dual_consistency} implies that
\begin{align}
J_\psi(u) - J_\psi(u_{h, p})
= \ &
N'(u; e, z)
-
(1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \ba'(\nabla{u}; \jump{e}) \cdot \nabla{z} \ud s \nonumber
\\ = \ &
N'(u; e, z - \Pi_{h, p}(z))
-
(1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \ba'(\nabla{u}; \jump{e}) \cdot \nabla{z} \ud s \nonumber
\\ &
+
N'(u; e, \Pi_{h, p}(z)) . \label{eq:dual_error_representation}
\end{align}
Considering the first term in~\eqref{eq:dual_error_representation}, we deduce by Lemma~\ref{lem:lipschitz_continuity} that
\begin{align*}
N'(u; e, z - \Pi_{h, p}(z))
= \ &
\lim_{t \to 0} \frac{ N(u + t e; z - \Pi_{h, p}(z)) - N(u; z - \Pi_{h, p}(z))}{t}
\\ \leq \ &
\sup_{t > 0} \frac{ N(u + t e; z - \Pi_{h, p}(z)) - N(u; z - \Pi_{h, p}(z)) }{ t }
\\ \leq \ &
C_N \, \dgnorm{e}_+ \, \dgnorm{z - \Pi_{h, p}(z)}_+ ,
\end{align*}
where~$C_N$ is the constant from Lemma~\ref{lem:lipschitz_continuity}. Using the error estimate of Theorem~\ref{thm:dgnorm_error_estimate} and the approximation estimate of Corollary~\ref{crl:approximation_estimate_dgnorm}, we then obtain:
\begin{align*}
& N'(u; e, z - \Pi_{h, p}(z))
\\ & \quad \leq
C \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \right)^{\!1/2}
\left( \sum_{K \in \mT_h} \frac{h_K^{2 \lambda_K - 2}}{p_K^{2 m_K - 3}} \norm{z}_{H^{m_K}(K)}^2 \right)^{\!1/2} .
\end{align*}
Next, applying the Cauchy-Schwarz inequality to the second term in~\eqref{eq:dual_error_representation}, we have that
\begin{align*}
& (1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \ba'(\nabla{u}; \jump{u - u_{h, p}}) \cdot \nabla{z} \ud s
\\ & \qquad \leq
(1 + \theta)
\left( \int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\ba'(\nabla{u}; \jump{u - u_{h, p}})}^2 \ud s \right)^{1/2}
\left( \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \abs{\nabla{z}}^2 \ud s \right)^{1/2} .
\end{align*}
Using that $\abs{\ba'(\bq; \bw)} \leq C_\bA \abs{\bw}$ for all~$\bq, \bw \in \mathbb{R}^d$ and subsequently applying Theorem~\ref{thm:dgnorm_error_estimate}, we find:
\begin{equation*}
\int_{\Gamma_{h, 0, \rmD}} \sigma \abs{\ba'(\nabla{u}; \jump{e})}^2 \ud s
\, \leq \,
C_\bA^2 \, \dgnorm{e}^2
\, \leq \,
C \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \right) .
\end{equation*}
Moreover, arguing similarly as in the proof of Lemma~\ref{lem:inv_trace_inequality} and subsequently applying the trace inequality from Lemma~\ref{lem:trace_inequality}, we deduce that
\begin{align}
\int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \abs{\nabla{z}}^2 \ud s
\leq \ &
C \alpha^{-1} \sum_{K \in \mT_h} \frac{h_K}{p_K^2} \int_{\partial K} \abs{\nabla{z}}^2 \ud s \nonumber
\\ \leq \ &
C \alpha^{-1} \sum_{K \in \mT_h} \frac{h_K}{p_K^2} \left( h_K^{-1} \norm{z}_{H^1(K)}^2 + \norm{z}_{H^1(K)} \, \norm{z}_{H^2(K)} \right) \nonumber
\\ \leq \ &
C \alpha^{-1} \, \norm{z}_{H^2(\Omega)}^2 . \label{eq:dual_trace_inequality}
\end{align}
Hence, we obtain:
\begin{equation*}
(1 + \theta) \int_{\Gamma_{h, 0, \rmD}} \! \ba'(\nabla{u}; \jump{e}) \cdot \nabla{z} \ud s
\leq
C \, \frac{1 + \theta}{\sqrt{\alpha}} \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \! \right)^{\!1/2} \norm{z}_{H^2(\Omega)} .
\end{equation*}
Substituting the above bounds back into~\eqref{eq:dual_error_representation}, we arrive at the stated estimate~\eqref{eq:functional_error_estimate} with~$R = N'(u; e, \Pi_{h, p}(z))$.
We claim that $R = o( \dgnorm{e}_+ ) \, \norm{z}_{H^2(\Omega)}$. Fr\'{e}chet differentiability of the map $y \mapsto N(y; v) \colon V(h, p) \to \mathbb{R}$ everywhere in~$V(h, p)$ implies that
\begin{equation*}
N'(q; w, v) = N(q + w; v) - N(q; v) + o( \dgnorm{w}_+ ) \, \dgnorm{v}_+ \qquad \text{as $\dgnorm{w}_+ \to 0$} ,
\end{equation*}
for all~$q, w, v \in V(h, p)$. Hence, by the Galerkin-orthogonality property of Lemma~\ref{lem:galerkin_orthogonality}, we obtain that
\begin{align*}
R = N'(u; e, \Pi_{h, p}(z))
= \ &
N(u; \Pi_{h, p}(z)) - N(u_{h, p}; \Pi_{h, p}(z))
+
o( \dgnorm{e}_+ ) \ \dgnorm{\Pi_{h, p}(z)}_+
\\ = \ &
o( \dgnorm{e}_+ ) \ \dgnorm{\Pi_{h, p}(z)}_+
\end{align*}
as $\dgnorm{e}_+ \to 0$. Here, in view of~\eqref{eq:dual_trace_inequality}, we have that~$\dgnorm{z}_+ \leq C \norm{z}_{H^2(\Omega)}$, so that, by the triangle inequality and Corollary~\ref{crl:approximation_estimate_dgnorm},
\begin{equation}
\dgnorm{\Pi_{h, p}(z)}_+ \leq C \norm{z}_{H^2(\Omega)} . \label{eq:dgnormPiz}
\end{equation}
Therefore, we find that $R = o( \dgnorm{e}_+ ) \, \norm{z}_{H^2(\Omega)}$, as claimed.
It remains to prove the estimate~\eqref{eq:functional_error_estimate_Rbound} subject to the condition that the map $\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ is twice continuously differentiable everywhere in~$\mathbb{R}^d$. Accordingly, let
\begin{equation*}
\ba''(\bq; \bw_1, \bw_2) := \lim_{t \to 0} \frac{\ba'(\bq + t \bw_2; \bw_1) - \ba'(\bq; \bw_1)}{t} , \qquad \bq, \bw_1, \bw_2 \in \mathbb{R}^d ,
\end{equation*}
denote the second-order derivative of the map~$\bv \mapsto \bA(\bv) \bv \colon \mathbb{R}^d \to \mathbb{R}^d$ at~$\bq \in \mathbb{R}^d$ in the direction~$(\bw_1, \bw_2) \in \mathbb{R}^d \times \mathbb{R}^d$, and let there be a constant~$C'_\bA$ such that $\abs{\ba''(\bq; \bw_1, \bw_2)} \leq C'_\bA \, \abs{\bw_1} \, \abs{\bw_2}$ for all $\bq, \bw_1, \bw_2 \in \mathbb{R}^d$. By Taylor's Theorem, we have that
\begin{equation}
\bA(\bv_1) \bv_1 - \bA(\bv_2) \bv_2 = \ba'(\bv_2; \bv_1 - \bv_2) + \br(\bv_2, \bv_1 - \bv_2) , \qquad \forall \bv_1, \bv_2 \in \mathbb{R}^d , \label{eq:taylor_expansion}
\end{equation}
with the integral remainder
\begin{equation*}
\br(\bv_2, \bv_1 - \bv_2)
=
\int_0^1 \ba''( \bv_2 + t (\bv_1 - \bv_2); \bv_1 - \bv_2, \bv_1 - \bv_2) (1 - t) \ud t ,
\end{equation*}
satisfying $\abs{\br(\bv_2, \bv_1 - \bv_2)} \leq C'_\bA \abs{\bv_1 - \bv_2}^2$. Now, recall that $R = N'(u; e, \Pi_{h, p}(z))$. Using the Galerkin-orthogonality property of Lemma~\ref{lem:galerkin_orthogonality} and the Taylor expansion~\eqref{eq:taylor_expansion}, we deduce that
\begin{align*}
R
= \ &
N'(u; e, \Pi_{h, p}(z))
-
N(u; \Pi_{h, p}(z))
+
N(u_{h, p}; \Pi_{h, p}(z))
\\ = \ &
- \sum_{K \in \mT_h} \int_K \br(\nabla{u}, \nabla{e}) \cdot \nabla{(\Pi_{h, p}(z))} \ud x
\\ &
+ \int_{\Gamma_{h, 0, \rmD}} \avg{ \br(\nabla{u}, \widehat{\nabla}_\sigma \, e) \cdot (\theta \sigma^{-1} \nabla{(\Pi_{h, p}(z))} + \jump{\Pi_{h, p}(z)}) } \ud s
\\ &
- \theta \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ \br(\nabla{u}, \nabla{e}) \cdot \nabla({\Pi_{h, p}(z)}) } \ud s
\\ \leq \ &
C'_\bA \sum_{K \in \mT_h} \int_K \abs{\nabla{e}}^2 \ \abs{\nabla{(\Pi_{h, p}(z))}} \ud x
\\ &
+ C'_\bA \int_{\Gamma_{h, 0, \rmD}} \avg{ \abs{\widehat{\nabla}_\sigma \, e}^2 \, \left( \abs{\theta} \, \sigma^{-1} \abs{\nabla(\Pi_{h, p}(z))} + \abs{\jump{\Pi_{h, p}(z)}} \right) } \ud s
\\ &
+ C'_\bA \, \abs{\theta} \int_{\Gamma_{h, 0, \rmD}} \sigma^{-1} \avg{ \abs{\nabla{e}}^2 \ \abs{\nabla{(\Pi_{h, p}(z))}} } \ud s .
\end{align*}
By Young's inequality and the fact that~$\abs{\avg{\bq_1 \cdot \bq_2}} \leq \avg{\abs{\bq_1} \ \abs{\bq_2}} \leq 2 \avg{\abs{\bq_1}} \, \avg{\abs{\bq_2}}$ for all~$\bq_1, \bq_2 \in [H^1(\Omega, \mT_h)]^d$, we then obtain:
\begin{align}
R
\leq \ &
C'_\bA \sum_{K \in \mT_h} \int_K \abs{\nabla{e}}^2 \ \abs{\nabla{(\Pi_{h, p}(z))}} \ud x \nonumber
\\ &
+ 12 \abs{\theta} \, C'_\bA \int_{\Gamma_{h, 0, \rmD}} \left( \sigma^{-1} \avg{\abs{\nabla{e}}}^2 + \sigma \abs{\jump{e}}^2 \right) \, \avg{\abs{\nabla{(\Pi_{h, p}(z))}}} \ud s \nonumber
\\ &
+ 4 \, C'_\bA \int_{\Gamma_{h, 0, \rmD}} \left( \avg{\abs{\nabla{e}}}^2 + \sigma^2 \abs{\jump{e}}^2 \right) \, \abs{\jump{\Pi_{h, p}(z)}} \ud s \nonumber
\\ \leq \ &
(4 + 12 \abs{\theta}) \ C'_\bA \ \dgnorm{e}_+^2 \ \dgnorm{\Pi_{h, p}(z)}_\star , \label{eq:Rbound_star}
\end{align}
where
\begin{align}
\dgnorm{\Pi_{h, p}(z)}_\star
= \ &
\max_{K \in \mT_h} \norm{\Pi_{h, p}(z)}_{W^1_\infty(K)}
+
\max_{F \in \mF_{h, 0, \rmD}} \bignorm{\avg{\abs{\nabla{(\Pi_{h, p}(z))}}}}_{L^\infty(F)}
\nonumber \\ &
+
\max_{F \in \mF_{h, 0, \rmD}} \sigma \, \bignorm{ \abs{\jump{\Pi_{h, p}(z)}} }_{L^\infty(F)} . \label{eq:norm_Piz_star}
\end{align}
An upper bound for $\dgnorm{e}_+$ is provided by Theorem~\ref{thm:dgnorm_error_estimate}. To prove~\eqref{eq:functional_error_estimate_Rbound}, it thus remains to show that $\dgnorm{\Pi_{h, p}(z)}_\star \leq C \max_{K \in \mT_h} \Big( p_K^{3/2} \, h_K^{-d/2} \Big) \norm{z}_{H^2(\Omega)}$.
To this end, let us note that, in view of Lemma~\ref{lem:approximation_estimate} and the triangle inequality, there exists a constant~$C$ such that $\norm{\Pi_{h, p}(z)}_{H^2(K)} \leq C \norm{z}_{H^2(K)}$. Thence, exploiting the inverse estimate~\eqref{eq:inv_estimate_2a}, we have that
\begin{align*}
\max_{K \in \mT_h} \norm{\Pi_{h, p}(z)}_{W^1_\infty(K)}
\leq & \
C \max_{K \in \mT_h} \left( \frac{p_K}{h_K^{d / 2}} \, \norm{\Pi_{h, p}(z)}_{H^1(K)} \right)
\\ \leq & \
C \max_{K \in \mT_h} \left( \frac{p_K}{h_K^{d / 2}} \right) \norm{z}_{H^2(\Omega)} .
\end{align*}
For the second term in~\eqref{eq:norm_Piz_star}, we apply the inverse estimate~\eqref{eq:inv_estimate_2b} to obtain
\begin{align*}
& \max_{F \in \mF_{h, 0, \rmD}} \bignorm{\avg{\abs{\nabla{(\Pi_{h, p}(z))}}}}_{L^\infty(F)}
\\ & \qquad \leq
\max_{K \in \mT_h} \left( \max_{F \in \mF_{h, K}} \bignorm{ \left. (\Pi_{h, p}(z)) \right|_K }_{W^1_\infty(F)} \right)
\\ & \qquad \leq
C \max_{K \in \mT_h} \left( \max_{F \in \mF_{h, K}} \frac{p_K}{(\meas[d-1]{F})^{1/2}} \, \bignorm{ \left. (\Pi_{h, p}(z)) \right|_K }_{H^1(F)} \right) .
\end{align*}
On account of Assumption~\ref{asm:B}, there exists a constant~$C \equiv C(d, \beta_1, \beta_2)$ such that~$\meas[d-1]{F} \geq C \, h_K^{d-1}$ for all~$F \in \mF_{h, K}$, $K \in \mT_h$. Applying the trace inequality~\eqref{eq:trace_inequality}, we then find that
\begin{align*}
& \max_{F \in \mF_{h, 0, \rmD}} \bignorm{\avg{\abs{\nabla{(\Pi_{h, p}(z))}}}}_{L^\infty(F)}
\\ & \qquad \leq
C \max_{K \in \mT_h} \frac{p_K}{h_K^{d/2}} \left( \norm{\Pi_{h, p}(z)}_{H^1(K)}^2 + h_K \, \norm{\Pi_{h, p}(z)}_{H^1(K)} \, \norm{\Pi_{h, p}(z)}_{H^2(K)} \right)^{1/2}
\\ & \qquad \leq
C \max_{K \in \mT_h} \left( \frac{p_K}{h_K^{d/2}} \right) \norm{z}_{H^2(\Omega)} .
\end{align*}
Finally, considering the third term in~\eqref{eq:norm_Piz_star}, we deduce that, by Assumption~\ref{asm:B} and the inverse estimate~\eqref{eq:inv_estimate_2b},
\begin{align*}
& \max_{F \in \mF_{h, 0, \rmD}} \sigma \, \bignorm{ \abs{\jump{\Pi_{h, p}(z)}} }_{L^\infty(F)}
\\ & \qquad = \
\max_{K \in \mT_h} \left( \max_{F \in \mF_{h, K} \cap \mF_{h, 0, \rmD}} \sigma \bignorm{ \abs{\jump{\Pi_{h, p}(z)}} }_{L^\infty(F)} \right)
\\ & \qquad \leq \
C \max_{K \in \mT_h} \left( \frac{p_K^3}{h_K^{(d+1)/2}} \max_{F \in \mF_{h, K} \cap \mF_{h, 0, \rmD}} \bignorm{ \abs{\jump{\Pi_{h, p}(z)}} }_{L^2(F)} \right) .
\end{align*}
By the fact that $z \in H^1(\Omega)$ with~$z = 0$ on~$\Gamma_\rmD$, we have that~$\bignorm{\abs{\jump{\Pi_{h, p}(z)}}}_{L^2(F)} = \bignorm{\abs{\jump{z - \Pi_{h, p}(z)}}}_{L^2(F)}$ for all~$F \in \mF_{h, 0, \rmD}$. Applying Lemma~\ref{lem:approximation_estimate}, we then obtain:
\begin{align*}
\max_{F \in \mF_{h, 0, \rmD}} \sigma \, \bignorm{ \abs{\jump{\Pi_{h, p}(z)}} }_{L^\infty(F)}
\leq \ &
C \max_{K \in \mT_h} \left( \frac{p_K^3}{h_K^{(d+1)/2}} \max_{F \in \mF_{h, K}} \bignorm{z - (\Pi_{h, p}(z)) |_K }_{L^2(F)} \right)
\\ \leq \ &
C \max_{K \in \mT_h} \left( \frac{p_K^{3/2}}{h_K^{(d-2)/2}} \right) \norm{z}_{H^2(\Omega)} .
\end{align*}
Substituting the above inequalities back into~\eqref{eq:norm_Piz_star}, we thus find that $\dgnorm{\Pi_{h, p}(z)}_\star \leq C \max_{K \in \mT_h}\left( p_K^{3/2} \, h_K^{-d/2} \right) \norm{z}_{H^2(\Omega)}$, which, by~\eqref{eq:Rbound_star}, brings us to the stated result~\eqref{eq:functional_error_estimate_Rbound}.
\end{proof}
As a corollary to Theorem~\ref{thm:functional_error_estimate}, we obtain the following estimate for the error in the $L^2(\Omega)$-norm.
\begin{corollary} \label{crl:l2norm_error_estimate}
Consider the same premises as in Theorem~\ref{thm:functional_error_estimate} and assume that the dual regularity estimate~\eqref{eq:dual_regularity} holds. Then, there exists a constant~$C$ such that
\begin{equation}
\norm{u - u_{h, p}}_{L^2(\Omega)}
\leq C \left( \frac{h}{p^{1/2}} + \frac{1+\theta}{\sqrt{\alpha}} \right) \, \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \! \right)^{\!1/2}
+ R , \label{eq:l2norm_error_estimate}
\end{equation}
where $t_K = \min(r_K, s_K)$, $\mu_K = \min(p_K + 1, r_K, s_K)$ and $R = o( \dgnorm{u - u_{h, p}}_+ )$. Moreover, if the map~$\bv \mapsto \bA(\bv) \colon \mathbb{R}^d \to \mathbb{R}^d$ is twice continuously differentiable everywhere in~$\mathbb{R}^d$, then there exists a constant~$C$ such that
\begin{equation}
R \leq C \max_{K \in \mT_h} \left( \frac{p_K^{3/2}}{h_K^{d/2}} \right) \, \left( \sum_{K \in \mT_h} \frac{h_K^{2 \mu_K - 2}}{p_K^{2 t_K - 3}} \norm{u}_{H^{t_K}(K)}^2 \! \right) . \label{eq:l2norm_error_estimate_Rbound}
\end{equation}
\end{corollary}
\begin{proof}
The result follows immediately from Theorem~\ref{thm:functional_error_estimate} by selecting~$\psi = u - u_{h, p}$ and subsequently applying the regularity estimate~\eqref{eq:dual_regularity}.
\end{proof}
Let us briefly discuss the error estimates presented in Theorem~\ref{thm:functional_error_estimate} and Corollary~\ref{crl:l2norm_error_estimate}. For $h / p$ sufficiently small, we observe that
\begin{equation*}
J_\psi(u) - J_\psi(u_{h, p})
\leq
C \left( \frac{h^{\mu + \lambda - 2}}{p^{t + m - 3}} + \frac{1 + \theta}{\sqrt{\alpha}} \frac{h^{\mu-1}}{p^{t - 3/2}} \right) \norm{u}_{H^t(\Omega)} \, \norm{z}_{H^m(\Omega)}
\end{equation*}
and
\begin{equation*}
\norm{u - u_{h, p}}_{L^2(\Omega)}
\leq
C \left( \frac{h^{\mu}}{p^{t - 1}} + \frac{1 + \theta}{\sqrt{\alpha}} \frac{h^{\mu-1}}{p^{t - 3/2}} \right) \norm{u}_{H^t(\Omega)} ,
\end{equation*}
where~$t = \min_{K \in \mT_h}(t_K)$, $m = \min_{K \in \mT_h}(m_K)$, $\mu = \min_{K \in \mT_h}(\mu_K)$ and $\lambda = \min_{K \in \mT_h}(\lambda_K)$. Accordingly, when~$\theta = -1$, both estimates are optimal in~$h$ and suboptimal by one order in~$p$. When~$\theta \neq -1$, on the other hand, the estimates are suboptimal in both~$h$ and~$p$, by factors of~$h^{\lambda - 1} / p^{m - 3/2}$ and~$h / p^{1/2}$, respectively. This suboptimality can be attributed to a lack of dual consistency; see Lemma~\ref{lem:dual_consistency}. We note that, for $h / p$ sufficiently small, the above estimates are identical to those obtained for interior penalty DG approximations of linear elliptic problems; cf.~\cite[Theorem~4.4]{HarrimanEtAl2003}.
\begin{remark} \label{rem:regularization}
For the proof of Theorem~\ref{thm:functional_error_estimate} and Corollary~\ref{crl:l2norm_error_estimate} we assumed that the map~$\bv \mapsto \bA(\bv) \colon \mathbb{R}^d \to \mathbb{R}^d$ is Fr\'{e}chet differentiable everywhere in~$\mathbb{R}^d$. This was done in order to ensure that the dual problem~\eqref{eq:dual_problem} is well defined. It is envisaged that, with some additional effort, this assumption can be avoided, for instance, by reformulating the dual problem based on a regularization of the map $\bv \mapsto \bA(\bv) \colon \mathbb{R}^d \to \mathbb{R}^d$, for example, by using the techniques in~\cite{LasryLions1986}.
\end{remark}
\section{Numerical experiments} \label{section:numerical_experiments}
We present some numerical examples to verify the theoretical error estimates presented in Section~\ref{section:error_analysis}. For simplicity, we restrict the presentation to 2D problems and consider uniformly refined meshes composed of affine quadrilaterals with uniform values of the polynomial degree~$\{ p_K \}_{K \in \mT_h}$. Throughout this section, the interior penalty parameter is fixed at~$\alpha = 10$. The nonlinear equations arising in the DG approximation are solved using an exact Newton method with a tolerance of $10^{-10}$. High-order numerical quadrature is used to integrate the terms appearing in the assembly of the associated algebraic system of equations, as well as to evaluate the error of the DG solution in various norms.
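The Newton solve described above can be sketched in a few lines. The following is a minimal illustration, not the actual solver used for the experiments: a small decoupled nonlinear system with the gradient-type coefficient of Example 1 stands in for the assembled DG equations, and only the iteration logic (exact Jacobian, tolerance $10^{-10}$) mirrors the text.

```python
import numpy as np

def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Exact Newton iteration: solve J(u) du = -F(u) until ||F(u)|| < tol."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            return u
        u = u + np.linalg.solve(jacobian(u), -F)
    raise RuntimeError("Newton iteration did not converge")

# Toy stand-in for the assembled nonlinear DG equations: a decoupled
# system F(u) = a(u) u - 1 with a(u) = 2 + 1/(1 + |u|).
def residual(u):
    return (2.0 + 1.0 / (1.0 + np.abs(u))) * u - 1.0

def jacobian(u):
    a = 2.0 + 1.0 / (1.0 + np.abs(u))
    da = -np.sign(u) / (1.0 + np.abs(u)) ** 2   # a'(u)
    return np.diag(a + da * u)

u = newton(residual, jacobian, np.zeros(3))
print(np.linalg.norm(residual(u)) < 1e-10)  # True
```

For the gently nonlinear coefficients considered below, such an iteration converges in a handful of steps from a zero initial guess.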
\subsection{Example 1}
For the first numerical example, we consider the problem of Example 1 in \cite{BustinzaGatica2004}; cf. also Example 1 in \cite{HoustonEtAl2005}. Accordingly, let $\Omega = (-1, 1)^2$ with $\Gamma_\rmD = [-1, 1] \times \{ -1 \} \cup \{ 1 \} \times [-1, 1]$ and $\Gamma_\rmN = [-1, 1] \times \{ 1 \} \cup \{ -1 \} \times [-1, 1]$, and let $\bA(\bx, \nabla{u}) = \left( 2 + (1 + \abs{\nabla{u}})^{-1} \right) \mathbf{I}$, where~$\mathbf{I}$ is the~$2 \times 2$ identity matrix. The data $f$, $g_\rmD$ and $g_\rmN$ are chosen such that the solution is given by the smooth function $u(\bx) = \cos(\pi x_1 / 2) \, \cos(\pi x_2 / 2)$. We note that $\bA$ satisfies Assumption~\ref{asm:A} with~$C_\bA = 3$ and $M_\bA = 2$.
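For reference, data such as $f$ can be generated symbolically from the manufactured solution. The snippet below is a sketch of this procedure (it is not taken from \cite{BustinzaGatica2004}), assuming the strong form $f = -\nabla \cdot (\bA(\bx, \nabla u) \nabla u)$: it forms the flux of Example 1 and cross-checks the symbolic divergence against a finite difference at an arbitrary interior point.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.cos(sp.pi * x1 / 2) * sp.cos(sp.pi * x2 / 2)    # exact solution
grad_u = sp.Matrix([sp.diff(u, x1), sp.diff(u, x2)])
a = 2 + 1 / (1 + sp.sqrt(grad_u.dot(grad_u)))          # Example 1 coefficient
flux = a * grad_u                                      # A(x, grad u) grad u
f = -(sp.diff(flux[0], x1) + sp.diff(flux[1], x2))     # f = -div(flux)

# cross-check the symbolic source against a central finite difference
F1 = sp.lambdify((x1, x2), flux[0])
F2 = sp.lambdify((x1, x2), flux[1])
fn = sp.lambdify((x1, x2), f)
h, (y1, y2) = 1e-6, (0.3, -0.4)
div_fd = (F1(y1 + h, y2) - F1(y1 - h, y2)
          + F2(y1, y2 + h) - F2(y1, y2 - h)) / (2 * h)
print(abs(fn(y1, y2) + div_fd) < 1e-6)  # True
```

The boundary data $g_\rmD$ and $g_\rmN$ would be obtained analogously by restricting $u$ and the normal flux to the respective boundary parts.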
\begin{figure}[!b]
\psfrag{xlabel}[t][c]{$1/h$}
\psfrag{ylabel}[b][c]{$\dgnorm{u - u_{h, p}}$}
\psfrag{p = 1}[l][c]{\footnotesize \hspace{-8pt} $ p = 1$}
\psfrag{p = 2}[l][c]{\footnotesize \hspace{-8pt} $ p = 2$}
\psfrag{p = 3}[l][c]{\footnotesize \hspace{-8pt} $ p = 3$}
\psfrag{p = 4}[l][c]{\footnotesize \hspace{-8pt} $ p = 4$}
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex1_h_eDG_SIPG.eps}
\hfill
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex1_h_eDG_NIPG.eps}
\caption{Example 1. Convergence of $\dgnorm{u - u_{h, p}}$ with $h$-refinement for $p = 1$, $2$, $3$ and $4$. Left: $\theta = -1$. Right: $\theta = 1$.}
\label{fig:ex1_h_eDG}
\psfrag{xlabel}[t][c]{$1/h$}
\psfrag{ylabel}[b][c]{$\norm{u - u_{h, p}}_{L^2(\Omega)}$}
\psfrag{p = 1}[l][c]{\footnotesize \hspace{-8pt} $ p = 1$}
\psfrag{p = 2}[l][c]{\footnotesize \hspace{-8pt} $ p = 2$}
\psfrag{p = 3}[l][c]{\footnotesize \hspace{-8pt} $ p = 3$}
\psfrag{p = 4}[l][c]{\footnotesize \hspace{-8pt} $ p = 4$}
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex1_h_eL2_SIPG.eps}
\hfill
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex1_h_eL2_NIPG.eps}
\caption{Example 1. Convergence of $\norm{u - u_{h, p}}_{L^2(\Omega)}$ with $h$-refinement for $p = 1$, $2$, $3$ and $4$. Left: $\theta = -1$. Right: $\theta = 1$.}
\label{fig:ex1_h_eL2}
\end{figure}
We investigate the convergence of the DG approximation~\eqref{eq:dgfem} on a sequence of successively refined meshes for different polynomial degrees. We consider two choices of the parameter $\theta$, viz. $\theta = -1$ and~$\theta = 1$. Figure~\ref{fig:ex1_h_eDG} presents the convergence of the DG-norm of the error with $h$-refinement for $p = 1$, $2$, $3$ and $4$. We observe that $\dgnorm{u - u_{h, p}}$ converges to zero, for each fixed value of~$p$, at a rate $\mathcal{O}(h^p)$ as $h \to 0$. We note that these results are in perfect agreement with the theoretical error estimate presented in Theorem~\ref{thm:dgnorm_error_estimate}, and that the computed errors are virtually indistinguishable between the two choices of the parameter~$\theta$. In Figure~\ref{fig:ex1_h_eL2}, we show the convergence of the $L^2(\Omega)$-norm of the error with $h$-refinement for $p = 1$, $2$, $3$ and $4$. Here, significant differences are observed between the two choices of~$\theta$. For~$\theta = -1$, optimal convergence rates are obtained for all values of~$p$; i.e., $\norm{u - u_{h, p}}_{L^2(\Omega)} = \mathcal{O}(h^{p+1})$ as $h \to 0$ for each fixed value of~$p$. For~$\theta = 1$ on the other hand, we see that $\norm{u - u_{h, p}}_{L^2(\Omega)}$ behaves like $\mathcal{O}(h^{p+1})$ as $h \to 0$ for odd values of $p$, and like $\mathcal{O}(h^p)$ as $h \to 0$ for even values of $p$. This suboptimal convergence behavior for~$\theta = 1$ is attributable to a lack of dual consistency; cf. Lemma~\ref{lem:dual_consistency}. The obtained convergence rates for $\norm{u - u_{h, p}}_{L^2(\Omega)}$ are in agreement with the theoretical error estimates presented in Corollary~\ref{crl:l2norm_error_estimate}. Comparing the current results to those reported for the same example in \cite{HoustonEtAl2005}, we note that the presented DG method with $\theta = -1$ shows improved convergence behavior with respect to the error in the $L^2(\Omega)$-norm.
\subsection{Example 2}
In the second example, we consider a problem with a non-smooth solution. Let $\Omega = (-1, 1)^2$ with $\Gamma_\rmD = \partial \Omega$, and $\bA(\bx, \nabla{u}) = ( 1 + \mathrm{e}^{-\abs{\nabla{u}}^2} ) \mathbf{I}$, where~$\mathbf{I}$ denotes again the~$2 \times 2$ identity matrix. It is easy to verify that Assumption~\ref{asm:A} is satisfied with~$C_\bA = 1$ and $M_\bA = 1 - \sqrt{2/\mathrm{e}}$. The data $f$ and $g_\rmD$ are chosen such that the solution is given by $u(\bx) = \abs{\bx}^3$. We note that the solution features a singularity at the point $(0, 0)$, and that $u \in H^{4 - \epsilon}(\Omega)$ for arbitrarily small $\epsilon > 0$.
We investigate the convergence behavior with $p$-refinement for the two meshes displayed in Figure~\ref{fig:ex2_meshes}. In Tables~\ref{table:ex2_mesh_a} and~\ref{table:ex2_mesh_b}, we show the convergence of the DG-norm of the error and the $L^2(\Omega)$-norm for~$p = 1$, $2$, \dots, $24$, and $\theta = -1$, grouped in odd and even values of $p$. For mesh (a), we observe that $\dgnorm{u - u_{h, p}}$ converges at a rate of almost $\mathcal{O}(p^{-6})$ as $p \to \infty$, and that $\norm{u - u_{h, p}}_{L^2(\Omega)}$ converges at a rate of approximately $\mathcal{O}(p^{-15/2})$. Comparing with the theoretical error estimates of Theorem~\ref{thm:dgnorm_error_estimate} and Corollary~\ref{crl:l2norm_error_estimate}, we note that these convergence rates are more than twice the predicted rate. Indeed, since $u \in H^{4 - \epsilon}$ for any $\epsilon > 0$, the expected convergence rates are $\mathcal{O}(p^{-5/2+\epsilon})$ for $\dgnorm{u - u_{h, p}}$ and $\mathcal{O}(p^{-3+\epsilon})$ for $\norm{u - u_{h, p}}_{L^2(\Omega)}$. This order-doubling convergence behavior is attributable to the fact that the singularity in $u$ at the point $(0, 0)$ coincides with a vertex of mesh (a). In the presence of such corner singularities, it is possible to establish \emph{a priori} error estimates that reflect this order-doubling phenomenon by using approximation results in terms of weighted Sobolev norms; cf., for example, \cite[Remark 3.8]{HoustonEtAl2002}. For mesh (b), on the other hand, the singularity in $u$ lies in the interior of an element rather than at a vertex. Here, we see that the $p$-convergence rates approach the theoretical convergence rates predicted by Theorem~\ref{thm:dgnorm_error_estimate} and Corollary~\ref{crl:l2norm_error_estimate}. Indeed, it is found that $\dgnorm{u - u_{h, p}}$ and $\norm{u - u_{h, p}}_{L^2(\Omega)}$ both behave like $\mathcal{O}(p^{-3})$ as $p \to \infty$. 
For $\dgnorm{u - u_{h, p}}$, this constitutes a slight improvement of the theoretical convergence rate, by half an order in $p$, while for $\norm{u - u_{h, p}}_{L^2(\Omega)}$ the convergence rate is in perfect agreement. We end this example by stating that the results for $\theta = 1$ are almost identical.
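The bracketed rates in the tables can be reproduced directly from the printed errors. As a sketch, assuming the rates are computed as $\log(e_{p'}/e_p)/\log(p/p')$ between consecutive rows:

```python
import math

# ||u - u_hp||_DG for mesh (a) and odd p, read off the first table
p_vals = [1, 3, 5, 7]
e_dg   = [3.11e+00, 4.09e-02, 2.17e-03, 2.84e-04]

def p_rate(p_prev, p_next, e_prev, e_next):
    """Algebraic rate r in e = O(p^(-r)) between two refinement levels."""
    return math.log(e_prev / e_next) / math.log(p_next / p_prev)

rates = [p_rate(p_vals[i], p_vals[i + 1], e_dg[i], e_dg[i + 1])
         for i in range(len(p_vals) - 1)]
print([round(r, 2) for r in rates])
# matches the table's (3.94), (5.75), (6.05) up to rounding of the
# printed three-digit errors
```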
\begin{figure}[h]
\psfrag{p0}[c][r]{\footnotesize $(-1, -1)$}
\psfrag{p1}[c][l]{\footnotesize $( 1, -1)$}
\psfrag{p2}[b][l]{\footnotesize $( 1, 1)$}
\psfrag{p3}[b][r]{\footnotesize $(-1, 1)$}
\centering
\begin{minipage}[h]{0.3\textwidth} \centering
\includegraphics[width=1.0\textwidth]{ex2_mesh_a.eps} \\
{\footnotesize (a)}
\end{minipage}
\hspace{1cm}
\begin{minipage}[h]{0.3\textwidth} \centering
\includegraphics[width=1.0\textwidth]{ex2_mesh_b.eps} \\
{\footnotesize (b)}
\end{minipage}
\caption{The two meshes considered for Example 2.}
\label{fig:ex2_meshes}
\end{figure}
\begin{table}[htb]
\setlength{\tabcolsep}{5pt}
\centering
\begin{minipage}[h]{0.45\textwidth} \centering \small
\begin{tabular}{llclc}
\hline
$p$ & \multicolumn{2}{l}{$\dgnorm{u - u_{h, p}}$} & \multicolumn{2}{l}{$\norm{u - u_{h, p}}_{L^2(\Omega)}$} \\
\hline
1 & 3.11E+00 & \textemdash & 4.46E-01 & \textemdash \\
3 & 4.09E-02 & (3.94) & 3.30E-03 & (4.47) \\
5 & 2.17E-03 & (5.75) & 1.51E-04 & (6.03) \\
7 & 2.84E-04 & (6.05) & 1.35E-05 & (7.18) \\
9 & 6.38E-05 & (5.94) & 1.97E-06 & (7.66) \\
11 & 1.96E-05 & (5.89) & 4.51E-07 & (7.34) \\
13 & 7.32E-06 & (5.88) & 1.31E-07 & (7.43) \\
15 & 3.16E-06 & (5.88) & 4.50E-08 & (7.44) \\
17 & 1.51E-06 & (5.88) & 1.76E-08 & (7.50) \\
19 & 7.86E-07 & (5.88) & 7.64E-09 & (7.51) \\
21 & 4.36E-07 & (5.88) & 3.59E-09 & (7.55) \\
23 & 2.55E-07 & (5.89) & 1.80E-09 & (7.57) \\
\hline
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[h]{0.45\textwidth} \centering \small
\begin{tabular}{llclc}
\hline
$p$ & \multicolumn{2}{l}{$\dgnorm{u - u_{h, p}}$} & \multicolumn{2}{l}{$\norm{u - u_{h, p}}_{L^2(\Omega)}$} \\
\hline
2 & 5.74E-01 & \textemdash & 8.60E-02 & \textemdash \\
4 & 8.72E-03 & (6.04) & 6.74E-04 & (7.00) \\
6 & 7.12E-04 & (6.18) & 3.71E-05 & (7.15) \\
8 & 1.28E-04 & (5.96) & 5.00E-06 & (6.97) \\
10 & 3.43E-05 & (5.91) & 9.28E-07 & (7.55) \\
12 & 1.17E-05 & (5.89) & 2.37E-07 & (7.49) \\
14 & 4.73E-06 & (5.88) & 7.52E-08 & (7.45) \\
16 & 2.16E-06 & (5.88) & 2.78E-08 & (7.46) \\
18 & 1.08E-06 & (5.88) & 1.15E-08 & (7.49) \\
20 & 5.81E-07 & (5.88) & 5.19E-09 & (7.54) \\
22 & 3.32E-07 & (5.88) & 2.53E-09 & (7.56) \\
24 & 1.99E-07 & (5.89) & 1.30E-09 & (7.59) \\
\hline
\end{tabular}
\end{minipage}
\vspace{6pt}
\caption{Example 2. Convergence of $\dgnorm{u - u_{h, p}}$ and $\norm{u - u_{h, p}}_{L^2(\Omega)}$ with $p$-refinement for mesh (a) and $\theta = - 1$. The results are grouped in odd and even values of $p$. The quantities in brackets indicate the $p$-convergence rates.} \label{table:ex2_mesh_a}
\vspace{24pt}
\begin{minipage}[h]{0.45\textwidth} \centering \small
\begin{tabular}{llclc}
\hline
$p$ & \multicolumn{2}{l}{$\dgnorm{u - u_{h, p}}$} & \multicolumn{2}{l}{$\norm{u - u_{h, p}}_{L^2(\Omega)}$} \\
\hline
1 & 2.07E+00 & \textemdash & 2.31E-01 & \textemdash \\
3 & 3.39E-02 & (3.74) & 2.22E-03 & (4.23) \\
5 & 3.42E-03 & (4.49) & 1.67E-04 & (5.07) \\
7 & 1.03E-03 & (3.55) & 4.39E-05 & (3.96) \\
9 & 4.49E-04 & (3.32) & 1.81E-05 & (3.53) \\
11 & 2.35E-04 & (3.23) & 9.29E-06 & (3.32) \\
13 & 1.38E-04 & (3.17) & 5.44E-06 & (3.20) \\
15 & 8.82E-05 & (3.14) & 3.48E-06 & (3.13) \\
17 & 5.97E-05 & (3.11) & 2.36E-06 & (3.09) \\
19 & 4.23E-05 & (3.10) & 1.68E-06 & (3.06) \\
21 & 3.11E-05 & (3.08) & 1.24E-06 & (3.04) \\
23 & 2.35E-05 & (3.08) & 9.42E-07 & (3.02) \\
\hline
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[h]{0.45\textwidth} \centering \small
\begin{tabular}{llclc}
\hline
$p$ & \multicolumn{2}{l}{$\dgnorm{u - u_{h, p}}$} & \multicolumn{2}{l}{$\norm{u - u_{h, p}}_{L^2(\Omega)}$} \\
\hline
2 & 2.21E-01 & \textemdash & 2.12E-02 & \textemdash \\
4 & 3.63E-03 & (5.93) & 4.62E-04 & (5.52) \\
6 & 1.09E-03 & (2.96) & 1.50E-04 & (2.78) \\
8 & 4.75E-04 & (2.90) & 6.64E-05 & (2.83) \\
10 & 2.49E-04 & (2.89) & 3.50E-05 & (2.86) \\
12 & 1.47E-04 & (2.90) & 2.07E-05 & (2.89) \\
14 & 9.38E-05 & (2.91) & 1.32E-05 & (2.90) \\
16 & 6.36E-05 & (2.92) & 8.97E-06 & (2.91) \\
18 & 4.51E-05 & (2.92) & 6.36E-06 & (2.92) \\
20 & 3.31E-05 & (2.93) & 4.67E-06 & (2.93) \\
22 & 2.50E-05 & (2.93) & 3.53E-06 & (2.94) \\
24 & 1.94E-05 & (2.94) & 2.73E-06 & (2.94) \\
\hline
\end{tabular}
\end{minipage}
\vspace{6pt}
\caption{Example 2. Convergence of $\dgnorm{u - u_{h, p}}$ and $\norm{u - u_{h, p}}_{L^2(\Omega)}$ with $p$-refinement for mesh (b) and $\theta = - 1$. The results are grouped in odd and even values of $p$. The quantities in brackets indicate the $p$-convergence rates.}
\label{table:ex2_mesh_b}
\end{table}
\subsection{Example 3}
In the third and final example, we consider a case not fully covered by our theory. We consider the solution of the $p(\bx)$-Laplace equation with $\bA(\bx, \nabla{u}) = \abs{\nabla{u}}^{p(\bx)-2} \mathbf{I}$, where $p(\bx) = 4 - \abs{\bx}^2$. Note that $\bA$ does not comply with Assumption~\ref{asm:A} for $\abs{\bx} < 1$. The problem is posed on the $L$-shaped domain $\Omega = (-1, 1)^2 \setminus [0, 1) \times (-1, 0]$ with $\Gamma_\rmN = [-1, 1] \times \{ 1 \} \cup \{ -1 \} \times [-1, 1]$ and $\Gamma_\rmD = \partial \Omega \setminus \Gamma_\rmN$. The data $f$, $g_\rmD$ and $g_\rmN$ are chosen such that the solution is given by the smooth function $u(\bx) = x_1 \, \mathrm{e}^{x_1 \, x_2}$.
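If Assumption~\ref{asm:A} includes a uniform Lipschitz-type bound on the flux, as the constant $C_\bA$ quoted in Examples 1 and 2 suggests, then the $p(\bx)$-Laplacian flux $\abs{\bv}^{p(\bx)-2}\bv$ cannot satisfy it wherever $p(\bx) > 2$, since the flux then grows superlinearly in $\abs{\bv}$. A minimal numerical illustration of this reading, evaluated at $\bx = 0$ where $p(\bx) = 4$:

```python
import numpy as np

def flux_mag(t, p_x):
    """Magnitude of the p(x)-Laplacian flux |v|^(p(x)-2) v at |v| = t."""
    return t ** (p_x - 1)

p0 = 4.0                      # p(x) at x = 0, since p(x) = 4 - |x|^2
t = np.array([1.0, 10.0, 100.0])
print(flux_mag(t, p0) / t)
# the secant slopes 1, 100, 10000 grow without bound, so no uniform
# Lipschitz constant exists for the flux near x = 0
```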
In Figure~\ref{fig:ex3_h_eDG}, we show the convergence of the DG-norm of the error with $h$-refinement for $p = 1$, $2$, $3$, $4$ and $\theta = -1$, $1$. As in Example 1, we observe that $\dgnorm{u - u_{h, p}}$ converges to zero, for each fixed value of~$p$, at a rate $\mathcal{O}(h^p)$ as $h \to 0$. Note that this is in perfect agreement with the theoretical error estimate presented in Theorem~\ref{thm:dgnorm_error_estimate}, even though the underlying Assumption~\ref{asm:A} is not met. Also note that the results are virtually indistinguishable between the two choices of the parameter $\theta$. In Figure~\ref{fig:ex3_h_eL2}, we present the convergence of the $L^2(\Omega)$-norm of the error with $h$-refinement for $p = 1$, $2$, $3$, $4$ and $\theta = -1$, $1$. Here, as in Example 1, significant differences are observed between the two values of~$\theta$. For~$\theta = -1$, optimal convergence rates are obtained for all values of~$p$; i.e., $\norm{u - u_{h, p}}_{L^2(\Omega)} = \mathcal{O}(h^{p+1})$ as $h \to 0$ for each fixed value of~$p$. For~$\theta = 1$ on the other hand, we see that $\norm{u - u_{h, p}}_{L^2(\Omega)}$ behaves like $\mathcal{O}(h^{p+1})$ as $h \to 0$ for odd values of $p$, and like $\mathcal{O}(h^p)$ as $h \to 0$ for even values of $p$. The convergence behavior for $\norm{u - u_{h, p}}_{L^2(\Omega)}$ is very similar to that seen in Example 1 and agrees well with the theoretical error estimates of Corollary~\ref{crl:l2norm_error_estimate}.
\begin{figure}[h]
\psfrag{xlabel}[t][c]{$1/h$}
\psfrag{ylabel}[b][c]{$\dgnorm{u - u_{h, p}}$}
\psfrag{p = 1}[l][c]{\footnotesize \hspace{-8pt} $ p = 1$}
\psfrag{p = 2}[l][c]{\footnotesize \hspace{-8pt} $ p = 2$}
\psfrag{p = 3}[l][c]{\footnotesize \hspace{-8pt} $ p = 3$}
\psfrag{p = 4}[l][c]{\footnotesize \hspace{-8pt} $ p = 4$}
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex3_h_eDG_SIPG.eps}
\hfill
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex3_h_eDG_NIPG.eps}
\caption{Example 3. Convergence of $\dgnorm{u - u_{h, p}}$ with $h$-refinement for $p = 1$, $2$, $3$ and $4$. Left: $\theta = -1$. Right: $\theta = 1$.}
\label{fig:ex3_h_eDG}
\psfrag{xlabel}[t][c]{$1/h$}
\psfrag{ylabel}[b][c]{$\norm{u - u_{h, p}}_{L^2(\Omega)}$}
\psfrag{p = 1}[l][c]{\footnotesize \hspace{-8pt} $ p = 1$}
\psfrag{p = 2}[l][c]{\footnotesize \hspace{-8pt} $ p = 2$}
\psfrag{p = 3}[l][c]{\footnotesize \hspace{-8pt} $ p = 3$}
\psfrag{p = 4}[l][c]{\footnotesize \hspace{-8pt} $ p = 4$}
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex3_h_eL2_SIPG.eps}
\hfill
\includegraphics[width=0.45\textwidth, bb = 105 227 500 564, clip=true]{ex3_h_eL2_NIPG.eps}
\caption{Example 3. Convergence of $\norm{u - u_{h, p}}_{L^2(\Omega)}$ with $h$-refinement for $p = 1$, $2$, $3$ and $4$. Left: $\theta = -1$. Right: $\theta = 1$.}
\label{fig:ex3_h_eL2}
\end{figure}
\section*{Acknowledgement}
The work presented in this paper was completed while the author was a Ph.D. student at the Delft University of Technology working under the supervision of Dr.~S.~J.~Hulshoff, for whose guidance and support the author is most grateful.
\section{Introduction and motivation}
In recent years there has been great interest in the field of cavity
quantum electrodynamics. In particular, systems with strong coupling (SC) between
single quantum dots (QDs) and high-quality microcavities have been studied
for different reasons, such as to gain insight into various quantum-optics
effects \cite{rei,yos,press,reit,dal,optic,mich,ima}, like quantum
decoherence and entanglement, and for possible applications in quantum information
processing \cite{rei,yos,press,reit}. For example, some of these systems were
proposed as single-photon sources \cite{press,mich} for the realization of all-optical
quantum computing \cite{ima}. The SC regime takes place when the
coupling between a single quantum emitter and a cavity mode is strong
compared to their decay rates. In this case, the emitter and cavity
coherently exchange energy back and forth, leading to Rabi oscillations. The
SC between a single (In,Ga)As QD and micropillar cavity modes \cite{rei} has
become apparent in photoluminescence data, which displayed anticrossings
between the QD-exciton and cavity-mode dispersion relations \cite{rei,yos,reit}.
While usually temperature was used to tune the energy of the
excitonic transition to that of the cavity mode, it was shown recently that
a magnetic field can also be used as a tuning parameter~\cite{reit}.
Some of the experimental works \cite{rei,press,reit} analyze their data in
terms of a $2 \times 2$ matrix that mixes the exciton and the cavity mode. To
introduce lifetime effects, complex energies are used to represent the
energies of the uncoupled system and, therefore, the matrix is non-Hermitian.
Similar expressions for the Rabi splitting were obtained using a master
equation within a phenomenological framework \cite{car,andre}, but to our
knowledge, a microscopic description of the system which includes the finite
lifetimes of the exciton and the cavity mode is still lacking.
In this paper, we extend the Hamiltonian which describes the coupling of the
exciton and cavity modes \cite{andre} to include the broadening of both
excitations due to mixing with a continuum of bosonic excitations. The
problem can be solved rigorously for weak excitation. We compare our results
for the photoluminescence intensity with recent experiments~\cite{rei,reit}.
\section{Model}
\label{Model}
The core of the model contains the cavity photon mode, the excitonic degrees
of freedom represented by a spin $1/2$, and the coupling between them \cite%
{andre}. We include the coupling of the cavity mode with a continuum of
radiative modes which gives rise to the broadening of the cavity mode (the
most important one) \cite{leon1,leon2,bruc}. We also couple the excitonic
mode with a continuum of bosonic excitations, leading to a broadening of the
excitonic energy. The Hamiltonian is
\begin{eqnarray}
H &=&E_{x}S_{z}+E_{c}a^{\dagger }a+(VS^{-}a^{\dagger }+\text{H.c.}%
)+\sum_{r}\epsilon _{r}a_{r}^{\dagger }a_{r}
+\sum_{r}(V_{r}a_{r}^{\dagger }a+\text{H.c.})+\sum_{\nu }\epsilon _{\nu
}b_{\nu }^{\dagger }b_{\nu } \nonumber \\
&&+\sum_{\nu }(V_{\nu }b_{\nu }^{\dagger }S^{-}+\text{H.c.}). \label{mod}
\end{eqnarray}%
where $a^{\dagger }$ is the creation operator of the cavity mode, $%
S_{z},S^{+},S^{-}$ are spin operators for the two-level system of the QD,
with ground $(|\downarrow \rangle )$ and excited $(|\uparrow \rangle )$
states, which represent zero and one exciton, respectively; $a_{r}^{\dagger }$
creates the radiative mode $r$ which couples to the cavity mode and
similarly $b_{\nu }^{\dagger }$ creates a bosonic excitation $\nu $ coupled
to the exciton. For simplicity, the subscripts indicating the polarization
of the modes are dropped.
The first three terms of the Hamiltonian describe the strong coupling
between the cavity mode and the exciton \cite{andre}. The fourth and fifth
terms describe a continuum of radiative modes and its coupling to the cavity
mode. The following two terms have a similar effect for the exciton mode.
The model is similar to the one previously used by us to describe Raman
experiments in microcavities with quantum wells inside them \cite%
{leon1,leon2,bruc}. The main difference is that in the previous case, the
problem has a two dimensional translational symmetry leading to delocalized
excitons, and for each wave vector the probability of occupation of the
excitonic state is very small. This allows to treat the excitons as bosonic
excitations with a high degree of accuracy \cite{leon1}. This is not possible
in the present case and the model becomes highly non trivial. In spite of
this, some exact results can be derived, using the fact that the total
number of excitations
\begin{equation}
N_{e}=S_{z}+1/2+a^{\dagger }a+\sum_{r}a_{r}^{\dagger }a_{r}+\sum_{\nu
}b_{\nu }^{\dagger }b_{\nu } \label{ne}
\end{equation}
is conserved. For example, clearly the ground state is the only state with $%
N_{e}=0$.
In the following, we denote by $|n,S_{z}\rangle$ the states of the system
with $n$ cavity photons, exciton state $S_{z}$, and no bosons described by
$a_{r}$ or $b_{\nu}$. If one of the latter is occupied, we denote the
corresponding state by $|n,S_{z};r\rangle$ or $|n,S_{z};\nu\rangle$. The
Hilbert subspace with $N_{e}=1$ can be treated analytically. This subspace
consists of the state with no cavity photons and one exciton,
$|0,\uparrow \rangle$, the state with one cavity photon and no exciton,
$|1,\downarrow \rangle$, and two continua of bosonic excitations,
$|0,\downarrow ;r\rangle$ and $|0,\downarrow ;\nu \rangle$. The problem in
this subspace takes a form similar to that of an impurity interacting with a
continuum, i.e., the resonant level model.
The photoluminescence measurements in micropillar cavities \cite{rei,yos,reit}
suggest that the only subspace relevant for the experiments is that with $%
N_{e}=1$. To see this, let us neglect for the moment the modes $r$ and $\nu $
responsible for the broadening of the cavity mode and the exciton. The
resulting model which contains the three first terms of Eq. (\ref{mod}) can
be solved exactly \cite{andre}. For each subspace of definite $N_{e}$, the
model consists of two branches which have an anticrossing as a function of
the detuning $\Delta =E_{x}-E_{c}$. The value of the Rabi splitting is $2V%
\sqrt{N_{e}}$. Experimentally only one anticrossing is reported. This
indicates that the probability of exciting two modes is low and that the
theoretical results for $N_{e}=1$ are enough to describe the experiments.
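Neglecting the continua, the $N_{e}=1$ block of the first three terms of Eq. (\ref{mod}) reduces, up to a constant energy shift, to a $2 \times 2$ matrix in the basis $\{|0,\uparrow \rangle, |1,\downarrow \rangle\}$. The following sketch (arbitrary parameter values) checks that its eigenvalue splitting is $\sqrt{\Delta^{2}+4V^{2}}$, i.e. $2V$ at zero detuning:

```python
import numpy as np

def splitting(Ex, Ec, V):
    """Eigenvalue splitting of the N_e = 1 block [[Ex, V], [V, Ec]]."""
    lam = np.linalg.eigvalsh(np.array([[Ex, V], [V, Ec]]))
    return lam[1] - lam[0]

V = 0.05  # exciton-cavity coupling, arbitrary units
print(np.isclose(splitting(1.0, 1.0, V), 2 * V))    # True: zero detuning
Delta = 0.2
print(np.isclose(splitting(1.0 + Delta, 1.0, V),
                 np.hypot(Delta, 2 * V)))            # True: general detuning
```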
\section{Green's functions}
Motivated by the above discussion, we assume that the observed
photoluminescence is proportional to the density of cavity photons in the
subspace with $N_{e}=1$. In other words, the observed intensity is
proportional to the density
\begin{equation}
\rho _{11}(\omega )=-\frac{1}{\pi }\mathrm{Im}G_{11}(\omega ), \label{rho}
\end{equation}%
where $G_{11}(\omega )$ is a many-body Green's function obtained from the
diagonal matrix element of the state $|1,\downarrow \rangle $ (for brevity
we denote $|1\rangle \equiv |1,\downarrow \rangle, |0\rangle \equiv
|0,\uparrow \rangle, |r\rangle \equiv |0,\downarrow;r \rangle,|\nu\rangle
\equiv |0,\downarrow;\nu \rangle$) of the operator given by the inverse of
the Hamiltonian in the subspace $N_{e}=1:$
\begin{equation}
G_{11}(\omega )=\langle 1|G(\omega )|1\rangle ;\text{ }G(\omega )=(\omega
+i0^{+}-H)^{-1}. \label{g11}
\end{equation}%
The matrix elements of the operator $G(\omega )$ can be evaluated from the
equation $G(\omega -H)=I$, where $I$ is the identity matrix. Taking the
element $\left\lbrace 11 \right\rbrace $ of this equation one obtains
\begin{equation}
G_{11}(\omega )\left( \omega -E_{c}\right) -VG_{10}(\omega
)-\sum_{r}V_{r}G_{1r}(\omega )=1. \label{ecg1}
\end{equation}%
Proceeding in a similar way for the matrix elements $\left\lbrace 10 \right\rbrace$ and $\left\lbrace 1r \right\rbrace$,
and for the new Green's functions that appear, the system of equations for the $G_{ij}$ can be closed.
In particular, one obtains the relations
\begin{eqnarray}
G_{10}(\omega ) &=&\frac{V}{\omega -E_{x}-S_{x}}\,G_{11}, \nonumber \\
G_{1r}(\omega ) &=&\frac{V_{r}}{\omega -\epsilon _{r}}\,G_{11}, \label{ecg2}
\end{eqnarray}%
which, substituted into Eq. (\ref{ecg1}), yield a closed expression for the Green's function $G_{11}$:
\begin{equation}
G_{11}(\omega )=\frac{1}{\omega -E_{c}-S_{c}-\frac{V^{2}}{\omega -E_{x}-S_{x}%
}}, \label{green}
\end{equation}%
where
\begin{eqnarray}
S_{c}(\omega ) &=&\sum_{r}\frac{V_{r}^{2}}{\omega +i0^{+}-\epsilon _{r}}\ ,
\nonumber \\
S_{x}(\omega ) &=&\sum_{\nu }\frac{V_{\nu }^{2}}{\omega +i0^{+}-\epsilon
_{\nu }}. \label{sums}
\end{eqnarray}%
In this paper, for simplicity, we assume (as is usual in related problems)
constant couplings $V_{r}$, $V_{\nu }$ and constant densities of the modes $r$ and $\nu $ near
the point of zero detuning $\Delta =E_{x}-E_{c}$. Then, except for some real
shifts that can be incorporated in $E_{x}$ and $E_{c}$ \cite{leon1}, the
above sums reduce to imaginary constants that we take as parameters
\begin{equation}
S_{c}(\omega )=-i\delta _{c}\ ,\qquad S_{x}(\omega )=-i\delta _{x}\ .
\label{delta}
\end{equation}
Note that interchanging the subscripts $x$ and $c$, Eqs. (\ref{green}) and (
\ref{delta}) give the density of the excitonic excitation $|0,\uparrow
\rangle $.
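As a numerical sanity check of Eqs. (\ref{rho}), (\ref{green}) and (\ref{delta}), the following sketch (ours, not part of the original derivation) evaluates $\rho _{11}(\omega )$ on a frequency grid; the parameter values are the illustrative ones quoted later for Fig. \ref{lor}. The density should be positive and carry total spectral weight close to one.

```python
import numpy as np

def g11(w, Ec, Ex, V, dc, dx):
    # Eq. (green) with the constant self-energies of Eq. (delta):
    # S_c = -i*dc, S_x = -i*dx.
    return 1.0 / (w - Ec + 1j * dc - V**2 / (w - Ex + 1j * dx))

def rho11(w, **p):
    # Eq. (rho): cavity-photon spectral density.
    return -g11(w, **p).imag / np.pi

# Illustrative parameters (meV), as quoted for the strong-coupling case.
p = dict(Ec=0.0, Ex=0.0, V=0.075, dc=0.09, dx=0.036)
w = np.linspace(-200.0, 200.0, 2_000_001)
r = rho11(w, **p)

# Total spectral weight (Riemann sum); equals 1 up to the truncated
# Lorentzian tails outside the frequency window.
weight = float(np.sum(r) * (w[1] - w[0]))
```

The same expression with the subscripts $x$ and $c$ interchanged gives the excitonic density, as noted above.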
\section{Analysis and discussion}
\label{Analisys and discussion}
\subsection{The density as sum of two asymmetric Lorentzians}
With a straightforward manipulation, Eq. (\ref{green}) can be written as a sum
of two fractions, with denominators linear in $\omega $. Replacing in Eq. (%
\ref{rho}), the density $\rho _{11}$ can be separated as
\begin{eqnarray}
\rho _{11}(\omega ) &=&\rho _{11}^{1}(\omega )+\rho _{11}^{2}(\omega ),
\nonumber \\
\rho _{11}^{1}(\omega ) &=&\frac{1}{\pi }\frac{\delta _{1}a_{1}+d(\omega
-\omega _{1})}{(\omega -\omega _{1})^{2}+\delta _{1}^{2}}, \nonumber \\
\rho _{11}^{2}(\omega ) &=&\frac{1}{\pi }\frac{\delta _{2}a_{2}-d(\omega
-\omega _{2})}{(\omega -\omega _{2})^{2}+\delta _{2}^{2}}, \label{dens}
\end{eqnarray}%
where
\begin{eqnarray}
a_{1(2)} &=&\frac{1}{2}\pm \frac{x\Delta -y\delta }{4(x^{2}+y^{2})}
\nonumber \\
\omega _{1(2)} &=&c\pm x \nonumber \\
\delta _{2(1)} &=&c^{\prime }\pm y \nonumber \\
d &=&\frac{y\Delta -x\delta }{4(x^{2}+y^{2})} \label{dens2}
\end{eqnarray}%
with
\begin{eqnarray}
x &=&\frac{1}{2}\sqrt{A+\frac{1}{2}\sqrt{B}} \nonumber \\
y &=&\frac{\Delta \delta }{4x} \nonumber \\
c &=&\frac{E_{c}+E_{x}}{2} \nonumber \\
c^{\prime } &=&\frac{\delta _{c}+\delta _{x}}{2} \label{xy}
\end{eqnarray}%
and
\begin{eqnarray}
A &=&\frac{\Delta ^{2}-\delta ^{2}+4V^{2}}{2} \nonumber \\
B &=&(\Delta ^{2}+\delta ^{2})^{2}+8V^{2}(\Delta ^{2}-\delta ^{2})+16V^{4}
\nonumber \\
\delta &=&\delta _{c}-\delta _{x}. \label{ab}
\end{eqnarray}
The result can be interpreted as the sum of two asymmetric Lorentzians, with
opposite asymmetries controlled by $d$. Neglecting the effect of $d$, the
position, amplitude and width of the Lorentzians are given by $\omega _{i}$, $%
a_{i}$ and $\delta _{i}$ respectively, with $\omega _{1}<\omega _{2}$ except
at zero detuning $\Delta =E_{x}-E_{c}=0$ when $2V\leq \delta $ (in which case $x=0$
and the two positions coincide). The results for $%
\omega _{i}$ and $\delta _{i}$ agree with the real and imaginary parts of
the complex roots of a $2\times 2$ matrix, given by Eq. (1) of Press et al.
\cite{press}. The above results provide a microscopic justification for this
expression.
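The decomposition can also be verified numerically without tracking the sign conventions of Eqs. (\ref{dens2})-(\ref{ab}): writing Eq. (\ref{green}) as a ratio of polynomials, the two complex zeros of the quadratic denominator give the pole positions (real parts $\omega _{i}$, imaginary parts $-\delta _{i}$), and the residues encode $a_{i}$ and $d$. A sketch with illustrative parameter values (ours):

```python
import numpy as np

# Illustrative parameters (meV); not a fit to any experiment.
Ec, Ex, V, dc, dx = 0.0, 0.1, 0.075, 0.09, 0.036

# G11 = (w - Ex + i*dx) / [(w - Ec + i*dc)(w - Ex + i*dx) - V^2],
# so the denominator is the monic quadratic w^2 + b*w + c:
b = -(Ec - 1j * dc) - (Ex - 1j * dx)
c = (Ec - 1j * dc) * (Ex - 1j * dx) - V**2
w1, w2 = np.roots([1.0, b, c])           # the two complex poles

# Residues of G11 at the poles (numerator evaluated at each root).
r1 = (w1 - Ex + 1j * dx) / (w1 - w2)
r2 = (w2 - Ex + 1j * dx) / (w2 - w1)

w = np.linspace(-1.0, 1.0, 20001)
g_direct = (w - Ex + 1j * dx) / ((w - Ec + 1j * dc) * (w - Ex + 1j * dx) - V**2)
g_pf = r1 / (w - w1) + r2 / (w - w2)     # sum of two (asymmetric) Lorentzian terms
err = float(np.max(np.abs(g_direct - g_pf)))
```

Taking $-\mathrm{Im}/\pi$ of each partial fraction reproduces the two asymmetric Lorentzians of Eq. (\ref{dens}): the real part of each residue gives the amplitude and the imaginary part the asymmetric piece.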
It is easy to see from Eqs. (\ref{dens})-(\ref{ab}) that in the limit $%
\delta _{c}\longrightarrow \delta _{x}$, $d\longrightarrow 0$, and therefore
the density is given by the sum of two Lorentzians separated by $2x$, with $x=%
\sqrt{(\Delta/2)^{2}+V^{2}}$ and common width $\delta _{c}=\delta _{x}$.
In Fig. \ref{lor}, we show the resulting parameters of the two peaks for the
experiment of Ref. \onlinecite{rei}. There is qualitative agreement with Fig. 4
of Ref. \onlinecite{rei}.
\begin{figure}[tbp]
\includegraphics[width=10.5cm]{fign1.eps}
\caption{Peak energies, linewidths and intensities as a function of detuning for
the parameters of Ref. \onlinecite{rei}: $\delta_c=0.09$ meV, $\delta_x=0.036$ meV, $V=0.075$ meV.
The dashed line corresponds to the maxima of $\rho_{11}(\omega)$.}
\label{lor}
\end{figure}
In Fig. \ref{wea}, we show the position of the peaks $\omega _{i}$ for a case in
which $V$ was reduced to $V=0.022$ meV, so that there is no splitting of the
energies for zero detuning.
\begin{figure}[tbp]
\includegraphics[width=8.5cm]{fignw.eps}
\caption{Peak energies for the case of weak coupling. Parameters are
$\delta_c=0.09$ meV, $\delta_x=0.036$ meV, $V=0.022$ meV.}
\label{wea}
\end{figure}
\subsection{Position of the maxima}
When $V\gg \delta _{c},\delta _{x}$, the spectral density $\rho _{11}(\omega
)$ shows two maxima for all values of the detuning $\Delta $. The position
of the maxima are given by some real roots of a polynomial of fifth degree
obtained from the condition $\partial \rho _{11}(\omega )/\partial \omega =0$%
. As $V$ decreases, the two maxima merge into one for zero detuning, at a
critical value $V_{c}$.
In Fig. \ref{vc} we represent $V_{c}$ as a function of one of the
broadenings $\delta _{c}$ or $\delta _{x}$, keeping the other one constant.
We can see that as $\delta _{x}$ increases, $V_{c}$ also increases. However, the
trend is the opposite as $\delta _{c}$ increases.
\begin{figure}[tbp]
\includegraphics[width=8.5cm]{vc.eps}
\caption{Critical coupling as a function of (a) the linewidth of the cavity mode
$\protect\delta _{c}$, and (b) the linewidth of the exciton mode $\protect\delta _{x}$.%
}
\label{vc}
\end{figure}
In Fig. \ref{de1} we plot the difference $\Delta E$ between the two maxima
as a function of $V$ for the case $\delta _{x}=\delta _{c}$. The
experimentally measured Rabi splitting might be identified with $\Delta E$,
but another possibility is to relate the Rabi splitting to $\omega
_{2}-\omega _{1}=2x$ [see Eqs. (\ref{dens})-(\ref{ab})]. We believe that the
latter approach is more physical if the experimental line can be fitted by Eq. (%
\ref{dens}).
We can see from the figure that $\Delta E$ behaves as a square root of $\Delta V=V-V_{c}$
for small $\Delta V$, while for large $V$, $\Delta E$ approaches $2V$, as expected.
\begin{figure}[tbp]
\includegraphics[width=8.5cm]{delta1.eps}
\caption{Separation between the maxima of $\rho_{11}(\omega)$
as a function of the coupling constant $V$ for
$\protect\delta _{x}=\protect\delta _{c}$ and $E_{x}=E_{c}$.}
\label{de1}
\end{figure}
\subsection{Spectral density for different detunings}
The spectral density (assumed proportional to the photoluminescence
intensity) for different detunings $\Delta =E_{x}-E_{c}$ is presented in
Fig. \ref{inte} for parameters that correspond to the particular
experimental work of Reithmaier \textit{et al.}\cite{rei}, in which the
detuning was controlled by the temperature. The anticrossing is clearly
visible and the variation of the intensity profile is similar to the
observed one. For negative detuning, the peak at lowest energy is more
exciton-like and therefore has lower intensity than the other one, which
has a greater admixture with the cavity mode. The situation is reversed for
positive detuning.
\begin{figure}[tbp]
\includegraphics[width=9.0cm]{int.eps}
\caption{Photoluminescence spectra for several values of the detuning
$\Delta =E_{x}-E_{c}$. The parameters used are
the same as in Fig. \ref{lor} \cite{rei}.}
\label{inte}
\end{figure}
In Ref. \onlinecite{reit}, the detuning was controlled by the application of a
magnetic field. In Fig. \ref{intep} we show the corresponding theoretical
curve, with the same qualitative trends as before. The coupling $V=0.046$
meV was adjusted in such a way that the difference between the maxima in $%
\rho _{11}(\omega )$ corresponds to the reported Rabi splitting of $0.0956$ meV.
The half widths at half maximum of the cavity and exciton modes are $\delta
_{c}=0.06$ meV and $\delta _{x}=0.01$ meV, respectively (half of the
reported full widths at half maximum).
\begin{figure}[tbp]
\includegraphics[width=9.5cm]{intp.eps}
\caption{(a) Photoluminescence spectra for several values of the detuning.
(b) Energy dispersion of the two emission modes;
the uncoupled modes are indicated by dotted lines.
Parameters are $\delta_c=0.06$ meV, $\delta_x=0.01$ meV, $V=0.046$ meV.}
\label{intep}
\end{figure}
\section{Summary}
\label{Conclusions}
We have studied a microscopic model that couples a cavity mode with an
exciton, and includes coupling to two continua of bosonic excitations, which
give rise to a homogeneous broadening of both modes. Although the model is
not exactly solvable, we treat exactly the low-energy spectrum and provide
expressions for the low-energy part of the spectral density of the cavity
mode and the exciton. The former agrees with measured photoluminescence
spectra for several detunings.
Our approach provides a microscopic justification for simple
phenomenological expressions for the position and widths of the two mixed
modes, between the cavity mode and the exciton, when both modes have a
homogeneous broadening.
\section*{Acknowledgments}
We thank CONICET from Argentina for financial support. This work was
partially supported by PIP 11220080101821 of CONICET, and PICT 2006/483 and
PICT R1776 of the ANPCyT.
|
1401.0006
|
\section{Introduction}
The derivative expansion is a useful tool at low-energy scales and is widely used in effective theories.
Chiral perturbation theory is one of the most successful examples of a low-energy effective theory in hadron physics~\cite{weinberg, Weinberg1979327, Ecker:1994gg}.
From the modern point of view, hydrodynamics is also a low-energy effective theory; the leading-order hydrodynamic equations are the Euler equations, and the first-order hydrodynamic equations are the Navier-Stokes equations.
Causality is an important concept in physics.
For relativistic systems, the propagation of any information cannot exceed the speed of light (relativistic causality).
However, it seems that a low-energy effective theory in medium is incompatible with relativistic causality.
For example, in the first-order relativistic hydrodynamics, the shear and heat flows violate causality because it has the form of a diffusion equation~\cite{Israel:1976tn,Israel:1979wp, Pu:2009fj, Koide:2011tj, Denicol:2008ha, Buchel:2009tt, Baier:2007ix}.
One might regard acausality in an effective theory as
a problem concerning the range of validity~\cite{geroch, Lindblom19961, Kostadt:2000ty, *Kostadt:2001rr, Van:2007pw}.
The low-energy effective theory based on the derivative expansion has an ultraviolet cutoff that separates microscopic and macroscopic degrees of freedom.
One may think that the violation of causality is not a problem as long as the violation is much smaller than the cutoff scale.
However, such a small violation is amplified by a Lorentz boost, and it can exceed the UV cutoff scale~\cite{Hiscock:1985zz,Denicol:2008ha}.
One might also expect that acausality in the diffusion equation is caused by the non-Lorentz covariance of the equation.
However, the Lorentz covariance does not ensure causality.
In fact, the relativistic hydrodynamic equations in the first order of the derivative expansion violate causality even though the equations are covariant~\cite{Israel:1976tn,Israel:1979wp, Pu:2009fj, Koide:2011tj, Denicol:2008ha, Buchel:2009tt,Baier:2007ix}.
For a concrete example, let us consider the Lorentz covariant diffusion equation,
\begin{align}
\biggl[ u^\mu \partial_\mu + \Gamma (\eta^{\mu \nu}-u^\mu u^\nu )\partial_\mu \partial_\nu \biggr]n(x^\mu)=0,
\label{eq:covariantdiffusion}
\end{align}
where $u^\mu$ is a time-like vector satisfying $u^2=1$, $n(x^\mu)$ a scalar density, $\Gamma$ the diffusion constant, $\eta^{\mu \nu} = \mathrm{diag} (1, -1, -1, -1) $ the Minkowski metric.
If we choose $u^\mu = (1, \bf{0})$, Eq.~(\ref{eq:covariantdiffusion}) becomes the ordinary diffusion equation, $\bigl( \partial_0 - \Gamma \partial_i^2 \bigr) n(x^\mu) =0$.
The retarded Green function of Eq.~(\ref{eq:covariantdiffusion}) for constant $u^\mu$ becomes
\begin{align}
G_R(x^\mu)=\theta(x_t) \biggl( \frac{1}{4\pi \Gamma x_t} \biggr)^{3/2}\exp\biggl[ \frac{x_s^2}{4\Gamma x_t }\biggr] ,
\end{align}
where $\theta(x_t)$ is the step function, and $x_t \equiv u_\mu x^\mu$ and $x_s^\mu \equiv (\eta^{\mu \nu} - u^\mu u^\nu) x_\nu$ are
the time- and space-like components of $x^\mu$, respectively.
We can see that the retarded Green function does not vanish in the space-like region, $x^2=x_t^2 + x_s^2 < 0$.
Therefore, the Lorentz covariance of the equation is not sufficient to ensure causality.
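The point can be made quantitative in the rest frame $u^{\mu}=(1,\mathbf{0})$, where the kernel reduces to the ordinary three-dimensional heat kernel. A short numerical illustration (the value of $\Gamma$ is arbitrary):

```python
import math

def heat_kernel(t, r, Gamma=0.1):
    # Retarded Green function of (d_t - Gamma * nabla^2) in three dimensions:
    # theta(t) * (4*pi*Gamma*t)^(-3/2) * exp(-r^2 / (4*Gamma*t)).
    if t <= 0.0:
        return 0.0
    return (4.0 * math.pi * Gamma * t) ** -1.5 * math.exp(-r * r / (4.0 * Gamma * t))

# In units c = 1, the point (t, r) = (1, 2) is space-like (r > t),
# yet the diffusion kernel assigns it a nonzero signal.
leak = heat_kernel(1.0, 2.0)
```

The leakage is exponentially small but strictly positive at arbitrarily large space-like separation, which is precisely the failure of $G_R$ to vanish for $x^2<0$.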
It is argued that acausality comes from the difference between the order of time- and space-like derivatives in the equation of motion~\cite{Israel:1976tn, Israel:1979wp, Pu:2009fj, Denicol:2008ha}.
In the diffusion equation Eq.~(\ref{eq:covariantdiffusion}), the time-like derivative is the first order while the space-like one is the second order.
This different order is cured by introducing $\tau ( u^\mu \partial_\mu )^2$ to the equation:
\begin{align}
\biggl[ \tau ( u^\mu \partial_\mu )^2 +u^\mu \partial_\mu
+ \Gamma (\eta^{\mu \nu}-u^\mu u^\nu)\partial_\mu\partial_\nu \biggr]n(x^\mu)=0,
\end{align}
where $\tau$ is a parameter corresponding to the relaxation time.
This equation has the form of the telegraphic equation, so that the propagation is restricted to the region $v x_t > x_s $, where $v=\sqrt{\Gamma/\tau}$ and $x_s = \sqrt{-x_s^2}$.
If $\Gamma<\tau$, causality is satisfied because the velocity is smaller than the speed of light, $v<1$~\cite{Pu:2009fj, Denicol:2008ha}.
However, if $\Gamma>\tau$, the propagation speed exceeds the speed of light.
Furthermore, if $\Gamma=0$, causality is not violated even though the orders of the time and space derivatives are different.
Therefore, the equal order of space and time derivatives in the equation of motion does not by itself guarantee that the Green function is causal.
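The contrast between the two equations is visible directly in their dispersion relations. In the rest frame, the telegraphic equation gives $\tau\omega^{2}+i\omega-\Gamma k^{2}=0$, so $\omega(k)\rightarrow \pm\sqrt{\Gamma/\tau}\,k-i/(2\tau)$ at large $k$, whereas the diffusion pole $\omega=-i\Gamma k^{2}$ has $|\mathrm{Im}\,\omega/k|\rightarrow\infty$. A numerical check (illustrative parameter values, ours):

```python
import numpy as np

Gamma, tau = 0.5, 1.0        # Gamma < tau, so v = sqrt(Gamma/tau) < 1
k = 1.0e6                    # probe the large-wavenumber limit

# Telegraphic equation: tau*w^2 + i*w - Gamma*k^2 = 0.
w_tel = np.roots([tau, 1j, -Gamma * k**2])
re_ratio = max(abs(w.real) / k for w in w_tel)   # -> sqrt(Gamma/tau)
im_ratio = max(abs(w.imag) / k for w in w_tel)   # -> 0

# Diffusion equation: w = -i*Gamma*k^2.
im_ratio_diff = Gamma * k**2 / k                 # |Im w| / k grows without bound
```

Both limits of the telegraphic poles stay bounded (a front velocity $\sqrt{\Gamma/\tau}<1$ and finite damping per unit $k$), while the diffusion pole fails the second limit; these are exactly the two conditions derived in this paper.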
We note that the group velocity at finite wavenumber is not a good quantity for discussing causality.
The group velocity is given as
\begin{align}
v_g (k) \equiv \frac{\partial\, \mathrm{Re} \,\omega(k)}{\partial k},
\end{align}
where $ k = |\bm{k}|$ is the momentum, and $\omega(k)$ is the pole of the retarded Green function.
It is known that the retarded Green function can vanish in the space-like region even if $v_g (k)$ is faster than the speed of light at some momentum.
For $\omega (k)\propto k$ at $k \rightarrow \infty$, it is shown that causality is satisfied
if the front velocity $v_f \equiv \lim_{k \rightarrow \infty} v_g (k)$ is slower than the speed of light~\cite{Brillouin, milonni2004fast, Chaio, Pu:2009fj, Koide:2011tj}.
This was first pointed out by Sommerfeld and Brillouin in the context of light-pulse propagation in a medium~\cite{Brillouin}.
They considered the propagation of a discontinuous wave front to define the information propagation, which is described by the front velocity.
The group velocity describes the propagation of the peak of the pulse, but does not determine the information propagation~\cite{milonni2004fast, Chaio}.
We note that this condition for the front velocity does not cover acausality for the diffusion mode, whose pole is
\begin{align}
\omega_{\text{diff}} (k) &\propto -i k^2. \label{eq:diffusionmode}
\end{align}
We see that causality is violated although the front velocity of the diffusion mode vanishes, i.e., is smaller than the speed of light.
What ensures causality in general?
In quantum field theory, relativistic causality is ensured by
\begin{equation}
[\phi(x), \phi(y)] = 0 \;\; \text{for} \;\; (x-y)^2 <0,
\end{equation}
where $\phi(x)$ is a Heisenberg operator and $[\,\cdot\, , \,\cdot\,]$ denotes the commutator.
The retarded propagator also vanishes in the space-like region:
\begin{align}
G_R(x^\mu-y^\mu) &\equiv \theta( x^0 - y^0 )\langle [\phi(x), \phi(y)] \rangle, \nonumber \\
&=0 \;\; \text{for} \;\; (x-y)^2<0. \label{eq:causality}
\end{align}
The low-energy poles of the propagator correspond to those of the Green function in the low-energy effective theory.
Therefore, Eq.~(\ref{eq:causality}) should be satisfied even in the low-energy Green function if the theory respects special relativity.
The purpose of this paper is to derive the conditions ensuring relativistic causality in an effective theory based on the derivative expansion.
We will consider the retarded Green function in a scalar theory at tree level, i.e., thermal and quantum fluctuation will not be taken into account.
In this case, the retarded Green function in the derivative expansion is generally written as a rational function in the momentum space:
\begin{equation}
G_R(\omega, k ) = \frac{Q(\omega, k)}{P(\omega, k)}, \label{eq:Gpoly}
\end{equation}
where $P(\omega, k) $ and $Q(\omega , k)$ are polynomials in $\omega $ and $k$:
\begin{align}
P(\omega , k) = p_n (k) \omega^n + p_{n-1}(k)\omega^{n-1} + ... + p_1(k) \omega +p_0(k), \\
Q(\omega, k) = q_m (k)\omega^m + q_{m-1}(k)\omega^{m-1} + ... + q_1(k) \omega +q_0(k).
\end{align}
Here, $n>m$, and $p_j (k)$ and $q_j(k)$ are polynomials in $k$.
We assume isotropy, so that the Green function depends on $\bm{k}$ only through $k$.
We will discuss relativistic causality based on Eqs.~(\ref{eq:causality}) and (\ref{eq:Gpoly}).
In the following sections, we will derive the general conditions ensuring that the retarded Green function vanishes in the space-like region,
which are given by
\begin{align}
\lim_{k \to \infty}\biggl| \mathrm{Re}\,\frac{\omega(k)}{k} \biggr| &< 1,\label{eq:recondition}\\
\lim_{k \to \infty} \biggl|\mathrm{Im}\,\frac{\omega(k)}{k} \biggr| &< \infty , \label{eq:imcondition}
\end{align}
and that $p_n (k)$ must not depend on $k$.
These conditions ensure causality in effective theories based on the derivative expansion, and are our main results in this paper.
We note that the first condition, Eq.~(\ref{eq:recondition}), is nothing but the condition that the front velocity is smaller than the speed of light because, if $\omega (k)\propto k$ at large $k$, we have
\begin{align}
\lim_{k\to\infty} \left| \frac{{\mathrm{Re}\,} \omega (k) }{k}\right| = \lim_{k\to\infty}\left|\frac{\partial\, { \mathrm{Re}\,} \omega (k)}{\partial k} \right|= v_f <1.
\end{align}
This paper is organized as follows. In Sec.~\ref{sec:Derivation}, we derive the general conditions to ensure causality in an effective theory based on the derivative expansion using the retarded Green function.
In Secs.~\ref{sec:ContributionFromBranchCuts} and \ref{sec:ContributionFromPole}, we evaluate the contributions from branch cuts and poles to the Green function, respectively. Section~\ref{Sec:Summary} is devoted to summary.
\section{Derivation of the general conditions ensuring causality} \label{sec:Derivation}
As mentioned in the previous section, relativistic causality implies that the propagator vanishes in the space-like region, i.e.,
$G_R(x_\mu)=0$ for $x^2<0$. In this section, we show that this holds if Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}) are satisfied.
Let us start with the retarded Green function in momentum space.
In an effective theory based on the derivative expansion, the retarded Green function is given as the rational function Eq.~(\ref{eq:Gpoly}).
In order to discuss causality, we employ the partial-fraction decomposition for the retarded Green function,
\begin{equation}
G_R (\omega, k ) = \sum_{i, j} \frac{f_{i, j} (k)}{\big(\omega -\omega_j(k)\big)^i },
\end{equation}
where $\omega_j(k)$ denote the position of poles on the complex $\omega$-plane.
Since we consider the retarded Green function, the poles are located on the lower half-plane, i.e., $\mathrm{Im}\,\omega_j(k)<0$.
We consider the case in which all the poles are of first order.
The higher-order poles can be treated as first-order poles by infinitesimally splitting the pole positions.
For example, a second-order pole is rewritten as
\begin{align}
\frac{1}{(\omega-\omega_j(k))^2}
=\lim_{\epsilon \rightarrow 0} \frac{1}{2\epsilon}
\biggl( \frac{1}{\omega-\omega_j(k)-\epsilon}- \frac{1}{\omega-\omega_j(k)+\epsilon }\biggr). \label{eq:second-order-pole}
\end{align}
Therefore, the following argument is valid if the $\epsilon\to 0$ limit can be smoothly taken after calculations.
In this case, the retarded Green function is written as
\begin{equation}
G_R (\omega, k ) = \sum_{j} \frac{f_{j} (k)}{\omega -\omega_j(k) }, \label{eq:Gpf}
\end{equation}
with
\begin{equation}
f_j(k) =\lim_{\omega \to \omega_j(k)}\biggl[ \big(\omega - \omega_j(k)\big)\frac{Q(\omega,k)}{P(\omega, k)} \biggr]. \label{eq:fjk}
\end{equation}
The retarded Green function in coordinate space is given by the Fourier transformation of $G_R(\omega, k)$:
\begin{equation}
G_R (x^\mu) = \int \frac{d \omega}{2\pi}\frac{d^3k}{(2\pi)^3} G_R(\omega, k) e^{-i \omega t + i \bm{k} \cdot \bm{x}}.
\end{equation}
This integral becomes zero for $t<0$ because the pole is located on the lower half-plane. For $t>0$, the retarded Green function becomes
\begin{align}
G_R (x^\mu) &= \int \frac{d \omega}{2\pi}\frac{d^3k}{(2\pi)^3}
\sum_j \frac{f_j (k)}{\omega -\omega_j(k) } e^{-i \omega t + i \bm{k} \cdot \bm{x}}, \notag\\
&= i \sum_j \int\frac{d^3k}{(2\pi)^3} f_j (k) e^{-i \omega_j(k) t + i \bm{k} \cdot \bm{x}}, \notag\\
&=\frac{1}{4\pi^2 r} \sum_j \int_{-\infty}^{\infty}dk k f_j(k) \exp \biggl[ ik \biggl( r -\frac{\omega_j(k)}{k}t \biggr)\biggr],
\label{eq:G}
\end{align}
where $r\equiv |\bm{x}|$.
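The last step in Eq.~(\ref{eq:G}) uses the standard angular reduction of an isotropic three-dimensional Fourier integral; for completeness (and assuming, as is implicit above, that $f_j(k)e^{-i\omega_j(k)t}$ is extended evenly to negative $k$),
\begin{align*}
\int \frac{d^3k}{(2\pi)^3}\, F(k)\, e^{i\bm{k}\cdot\bm{x}}
  &= \frac{1}{(2\pi)^2}\int_0^\infty dk\, k^2 F(k) \int_{-1}^{1} d(\cos\theta)\, e^{ikr\cos\theta} \\
  &= \frac{1}{2\pi^2 r}\int_0^\infty dk\, k\, F(k)\,\sin(kr)
   = \frac{1}{4\pi^2 r}\,\frac{1}{i}\int_{-\infty}^{\infty} dk\, k\, F(k)\, e^{ikr},
\end{align*}
the last equality holding for even $F$; combined with the factor $i$ in the second line of Eq.~(\ref{eq:G}), this yields the quoted prefactor $1/(4\pi^2 r)$.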
Now, we discuss the conditions ensuring that the retarded Green function vanishes in the space-like region, $ r - t >0$.
To evaluate the integral with respect to $k$, we consider the complex integral on the complex $k$-plane.
If the integrand does not have any poles and branch cuts,
we can evaluate the integral along the contour in Fig.~\ref{fig:contour}a.
The retarded Green function is equal to the contribution from $C_{\infty}$,
which vanishes if $\omega_j(k)$ satisfies Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}).
If this is not the case, $ r -(\omega_j(k) / k) t $ changes sign at some point on $C_{\infty}$,
and the contribution from $C_{\infty}$ does not generally vanish.
Therefore, Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}) must be satisfied to ensure causality.
In general, $f_j(k)$ may contain cuts and poles on the complex $k$-plane.
In the following subsections,
we evaluate the contributions from these poles and cuts to the integral, and show that they cancel out
if $p_n (k)$ does not depend on $k$.
Furthermore, we show that Eq.~(\ref{eq:G}) does not vanish in the space-like region if $p_n (k)$ depends on $k$.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\hsize]{contour1.eps}&
\includegraphics[width=0.45\hsize]{contour2.eps}\\
(a)& (b)
\label{fig:contour}
\end{tabular}
\caption{The contours for evaluating the integral Eq.~(\ref{eq:G}) without branch cuts (a) and
with a branch cut (b).}
\end{figure}
\subsection{Contributions from branch cuts}\label{sec:ContributionFromBranchCuts}
Here, we evaluate the contributions from branch cuts.
First, let us factorize the denominator of the propagator in Eq.~(\ref{eq:Gpoly}) into the $\omega$-independent part and the rest,
\begin{equation}
P(\omega, k) =p_n (k) \prod_j^n (\omega -\omega_j(k) ), \label{eq:denominator}
\end{equation}
where $\omega_j(k)$ may have branch cuts on the complex $k$-plane, while $p_n (k)$ is analytic.
We consider the simple situation in which $\omega_j(k)$ has only one branch cut, on the upper half-plane, as in Fig.~\ref{fig:contour}b.
The generalization to the case in which $\omega_j(k)$ has multiple branch cuts is straightforward.
We suppose that $\omega_{j_1}$ has a cut. If we pass across the cut, we have the discontinuity,
\begin{equation}
\lim_{\epsilon\to0}\omega_{j_1} (s+ \epsilon ) \neq \lim_{\epsilon\to0}\omega_{j_1} (s - \epsilon),
\end{equation}
where $s$ is a point on the cut.
In contrast, $P(\omega, k) $ does not have the discontinuity on the cut,
\begin{equation}
\lim_{\epsilon\to0}P(\omega, s+\epsilon) =\lim_{\epsilon\to0} P(\omega, s-\epsilon),
\end{equation}
because $P(\omega,k)$ is a polynomial in $k$ and hence analytic on the complex $k$-plane.
This implies that $\omega_{j_1}$ has the pair $\omega_{j_2}$ such that
\begin{equation}
\lim_{\epsilon\to0}\omega_{j_1} (s+ \epsilon ) = \lim_{\epsilon\to0}\omega_{j_2} (s - \epsilon), \label{eq:pair}
\end{equation}
for $j_1 \neq j_2$.
Next, let us evaluate the integral Eq.~(\ref{eq:G}), which can be performed by the integral along the contour in Fig.~\ref{fig:contour}b.
Noting that the residue $f_{j_{1,2}} (k)$ has the same branch cut as that of $\omega_{j_{1,2}}$ [See Eq.~(\ref{eq:fjk})],
we have
\begin{align}
G_R(x^\mu) &= \frac{1}{4\pi^2 r} \sum_j \int_{C_1+C_2} dk k f_j (k) e^{-i \omega_j(k)t+ i k r}, \notag \\
&= \frac{1}{4\pi^2 r} \int_{C_1+C_2} dk k\bigg[ f_{j_1} (k) e^{-i \omega_{j_1}(k)t+ i k r}
+f_{j_2} (k) e^{-i \omega_{j_2}(k)t+ i k r}\biggr]. \label{eq:cut12}
\end{align}
Here, we assumed that Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}) are satisfied, and dropped the contribution from $C_\infty$.
From Eqs. (\ref{eq:fjk}) and (\ref{eq:pair}), we obtain the following relations:
\begin{align}
\int_{C_2} dk\, k f_{j_1} (k) e^{-i \omega_{j_1}(k)t+ i k r} &= - \int_{C_1} dk\, k f_{j_2} (k) e^{-i \omega_{j_2}(k)t+ i k r}, \\
\int_{C_2} dk\, k f_{j_2} (k) e^{-i \omega_{j_2}(k)t+ i k r} &= - \int_{C_1} dk\, k f_{j_1} (k) e^{-i \omega_{j_1}(k)t+ i k r}.
\end{align}
Then, the first and second terms in the last line of Eq.~(\ref{eq:cut12}) cancel out.
Therefore, the conditions, Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}), do not change even if branch cuts exist.
\subsection{Contributions from poles}\label{sec:ContributionFromPole}
Next, we discuss the poles of $f_j(k)$.
From Eq.~(\ref{eq:fjk}), $f_j (k)$ can be written as
\begin{align}
f_j(k) = \frac{Q(\omega_j(k),k)}{p_n (k) \prod_{i \neq j} (\omega_j (k)- \omega_i (k))}.
\end{align}
The poles on the complex $k$-plane are given as the solutions of the following equations:
\begin{align}
p_n(k) &= 0, \label{eq:pnkpole} \\
\quad \omega_j (k)- \omega_i (k) &=0. \label{eq:omegapole}
\end{align}
We will show that the poles from Eq.~(\ref{eq:omegapole}) do not contribute to Eq.~(\ref{eq:G}),
whereas the contribution of those from Eq.~(\ref{eq:pnkpole}) does not vanish and violates causality.
Thus, $p_n(k)$, which is the coefficient of the highest-order time derivative, must not depend on $k$ for causality.
First, we show that contribution of poles from Eq.~(\ref{eq:omegapole}) cancels out.
We here set $p_n(k) \rightarrow p_n$.
We suppose that a pair, $\omega_{j_1}(k) $ and $ \omega_{j_2}(k)$, satisfies $\omega_{j_1} (k)- \omega_{j_2} (k) =0$ at $k=k_c$ on the upper half-plane, and that this zero is of first order.
We also assume that $\omega_{i} (k)- \omega_{j} (k) \neq 0$ if $i,j\neq j_1$, $j_2$.
Thus, we can write the difference of $\omega_{j_1}(k)$ and $\omega_{j_2}(k)$ as
\begin{equation}
\omega_{j_1}(k) - \omega_{j_2}(k) = (k-k_c) g(k),
\end{equation}
where $g(k)$ is an analytic function and nonvanishing at $k=k_c$.
The generalization to a higher-order case is straightforward because we can treat higher-order poles as first-order poles by the similar procedure in Eq.~(\ref{eq:second-order-pole}).
Then, $f_{j_1}$ and $f_{j_2}$ can be written as
\begin{align}
f_{j_1}(k) = \frac{Q(\omega_{j_1}(k), k)}{p_n \prod_i( \omega_{j_1}(k) - \omega_i(k) )}
= \frac{1}{k-k_c} \cdot \frac{Q(\omega_{j_1}(k), k)}{p_n g(k) \prod_{i\neq j_1}( \omega_{j_1}(k) - \omega_i(k) )},\\
f_{j_2}(k) = \frac{Q(\omega_{j_2}(k), k)}{p_n \prod_i( \omega_{j_2}(k) - \omega_i(k) )}
= \frac{-1}{k-k_c} \cdot \frac{Q(\omega_{j_2}(k), k)}{p_n g(k) \prod_{i\neq j_2}( \omega_{j_2}(k) - \omega_i(k) )}.
\end{align}
If we introduce the function
\begin{equation}
F(\omega_j(k), k) \equiv \frac{Q(\omega_{j}(k), k)}{p_n g(k) \prod_{i\neq j}( \omega_{j}(k) - \omega_i(k) )},
\end{equation}
we can write $f_{j_1}(k)$ and $f_{j_2} (k)$ as the form,
\begin{align}
f_{j_1}(k) = \frac{1}{k-k_c} F(\omega_{j_1}(k), k),
\qquad f_{j_2}(k) = \frac{-1}{k-k_c} F(\omega_{j_2}(k), k).
\end{align}
At $k=k_c$, $F(\omega_{j_1}(k_c),k_c)=F(\omega_{j_2}(k_c),k_c)$, i.e., the residues of $ f_{j_1}(k)$ and $ f_{j_2}(k)$ have opposite signs: $\mathrm{Res}\, f_{j_1}(k_c)= - \mathrm{Res}\, f_{j_2}(k_c)$.
From this fact, Eq.~(\ref{eq:G}) in the space-like region turns out to be
\begin{align}
G_R( x^\mu ) &= \frac{1}{4\pi^2 r} \int^{\infty}_{-\infty} dk \frac{k}{k-k_c}
\biggl[ F(\omega_{j_1}(k),k) e^{-i\omega_{j_1}(k) t+ikr} - F(\omega_{j_2}(k), k)e^{-i\omega_{j_2}(k) t+ikr}\biggr]\notag \\
&= \frac{ik_c}{2\pi r}\biggl[ F(\omega_{j_1}(k_c),k_c) e^{-i\omega_{j_1}(k_c) t+i k_c r}
- F(\omega_{j_2}(k_c), k_c)e^{-i\omega_{j_2}(k_c )t+i k_c r}\biggr] \notag\\
&=0.
\end{align}
Therefore, the contributions from the poles of Eq.~(\ref{eq:omegapole}) cancel in $G_R( x^\mu ) $.
Next, we consider the contribution from Eq.~(\ref{eq:pnkpole}).
We suppose that Eq.~(\ref{eq:omegapole}) does not have any solutions, and that $p_n (k) $ is given as
\begin{align}
p_n (k) = k^2+k_c^2,
\end{align}
where $k_c$ is a real positive constant.
In this case, the retarded Green function in the space-like region becomes
\begin{align}
G_R(x^\mu ) & = \frac{1}{4\pi^2 r} \sum_j \int^{\infty}_{-\infty} dk \frac{k}{k^2+k^2_c} g_j (k) \exp \biggl[ ikr -i \omega_j(k) t\biggr],
\nonumber \\
& = \frac{i}{4\pi r}\sum_{j} g_j (i k_c) \exp \biggl[ -k_c r -i \omega_j (i k_c) t \biggr], \label{eq:acausal}
\end{align}
where we have supposed that Eqs.~(\ref{eq:recondition}) and (\ref{eq:imcondition}) are satisfied, and introduced
\begin{align}
g_j (k) \equiv \frac{Q(\omega_j(k), k)}{\prod_{i \neq j} (\omega_j (k) - \omega_i(k))}.
\end{align}
We can see that Eq.~(\ref{eq:acausal}) does not vanish and causality is violated.
We note that $p_n (k)$ necessarily yields poles in the complex $k$-plane if it depends on $k$,
because it is a polynomial in $k$.
Therefore, $p_n (k)$, which is the coefficient of the highest-order time derivative, must not include $k$.
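The contour integral behind Eq.~(\ref{eq:acausal}) can be checked by quadrature in the simplest setting $g_j(k)=1$, $\omega_j=0$, where closing the contour in the upper half-plane picks up the pole at $k=ik_c$ and gives $\int_{-\infty}^{\infty}dk\,k\,e^{ikr}/(k^2+k_c^2)=i\pi e^{-k_c r}$. A numerical sketch (ours; the slowly decaying $1/k$ piece is subtracted analytically using $\int_0^\infty \sin(kr)/k\,dk=\pi/2$):

```python
import numpy as np

kc, r = 1.0, 2.0

# Split k/(k^2+kc^2) = 1/k - kc^2 / (k*(k^2+kc^2)); the first piece gives
# int_0^inf sin(k r)/k dk = pi/2 exactly, while the second decays like
# 1/k^3 and is integrated numerically.
k = np.linspace(1e-8, 200.0, 2_000_001)
f = np.sin(k * r) * kc**2 / (k * (k**2 + kc**2))
rest = float(np.sum(f) * (k[1] - k[0]))

# int_{-inf}^{inf} dk k e^{ikr}/(k^2+kc^2) = 2i * int_0^inf dk k sin(kr)/(k^2+kc^2)
integral = 2j * (np.pi / 2.0 - rest)

residue_result = 1j * np.pi * np.exp(-kc * r)   # from the pole at k = i*kc
```

The nonvanishing value $\propto e^{-k_c r}$ at arbitrarily large space-like $r$ is the acausal tail discussed above.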
\section{Summary and outlook}\label{Sec:Summary}
We studied relativistic causality in an effective theory based on the derivative expansion.
We first discussed that the Lorentz covariance and the equal order of space and time derivatives in the equation of motion do not ensure causality.
In general, the Green function in the derivative expansion can be written as a rational function in momentum space, Eq.~(\ref{eq:Gpoly}).
Using this Green function, we derived the general conditions ensuring causality, i.e., the Green function vanishes in the space-like region.
The conditions are given by Eqs.~(\ref{eq:recondition}), (\ref{eq:imcondition})
and that the coefficient of the highest-order time derivative does not include the space derivative.
Our condition Eq.~(\ref{eq:recondition}) is consistent with the known condition on the front velocity.
Furthermore, we obtained a condition for the imaginary parts of the poles, Eq.~(\ref{eq:imcondition}).
The diffusion mode Eq.~(\ref{eq:diffusionmode}) satisfies Eq.~(\ref{eq:recondition}), but violates causality because it fails Eq.~(\ref{eq:imcondition}).
In this paper, we considered the Green function of a scalar field. Our results will not change for other fields such as vector fields,
because the analyticity of the Green function is independent of spin or helicity.
It is a remaining task to study the effect of the thermal and quantum fluctuations which modify the Green function.
In this case, the Green function has branch cuts and may not be expressible as the simple rational function Eq.~(\ref{eq:Gpoly}).
It has been argued that the derivative expansion causes unphysical instabilities in the context of relativistic hydrodynamics~\cite{Hiscock:1985zz, Li:2010fr, Pu:2009fj, Denicol:2008ha}. It seems that relativistic hydrodynamic equations satisfying the causality conditions are stable~\cite{Denicol:2008ha}.
However, it is not clear what condition ensures the stability in an effective theory based on the derivative expansion,
which is beyond the scope of this paper, and we leave it for future work.
\acknowledgements
We acknowledge various discussions at RIKEN Open House 2013.
This work was partially supported by JSPS KAKENHI Grants Numbers 24740184, 23340067,
and by RIKEN iTHES Project.
|
1401.0354
|
\section{Introduction}
\label{sec:intro}
The Abelian sandpile model and its close variants were introduced
independently several times in different contexts, with motivation
coming from statistical physics, probability and combinatorics.
However, we are going to delay a detailed discussion of where the
model comes from to Section \ref{sec:motivation}, and start with its
definition and some of its basic properties in Section \ref{sec:model}.
There are a number of reasons why this seems to be a good choice:
\begin{enumerate}
\item The basic model is very simple to define, and some of its
fundamental properties can be established without any serious
prerequisites. It is hoped that the model will have sufficient
appeal on its own without motivation in advance.
\item We do not want to assume prior familiarity with statistical
physics models such as percolation, the Ising model, etc. However,
since the connection with critical phenomena is very important,
it has to be explained, and it will be easier to do so when the
basic model can be used as illustration. We have attempted to organize
Section \ref{sec:motivation} in such a way that a reader unfamiliar
with statistical physics has quick access to some important concepts.
\item As part of the motivation, we will also be ready to state some of
the main open questions.
\end{enumerate}
There are a number of excellent surveys already available on
sandpile models. Our focus is similar to that of Redig's
notes \cite{Redig}, in that we cover rigorous results
roughly at the level of beginning PhD students.
On the other hand, we have incorporated some topics not covered in
\cite{Redig} and some results that are more recent. For example, we
discuss connections to the Tutte polynomial, the rotor-router walk
and a large part of Priezzhev's computation of height probabilities
in 2D. Dhar's extensive survey \cite{Dhar06}, written from the point of view
of theoretical physics, will be an invaluable guide
to anyone wanting to learn about the model.
An aspect of the theory that does not seem to receive much attention in
the physics literature, though, is the precise arguments involving the
limit of infinite graphs. Here we explain how this can be done based
on one-endedness of components of the wired uniform spanning
forest \cite{JW12}. The connection to the rotor-router model
is due to \cite{HLMPPW}, which extends many of the basic results to
directed graphs. For simplicity, we restricted attention to undirected
graphs.
The outline of the paper is as follows. Section \ref{sec:model}
introduces the sandpile Markov chain, recurrent configurations,
the sandpile group and Dhar's formula for the average number of
topplings. In Section \ref{sec:motivation} we give a
brief introduction to critical phenomena using the percolation model
as example. Self-organized criticality is first illustrated with the
forest-fire model built on the Erd\H{o}s-R\'{e}nyi random graph, which
is perhaps the most intuitive example of the concept.
Then we discuss self-organized criticality in the Abelian sandpile
model in terms of critical exponents. Section \ref{sec:connections}
starts with the burning
bijection of Majumdar and Dhar and the connection to uniform spanning
trees. Following this we present the connections to the rotor-router
model and the Tutte polynomial. Section \ref{sec:determinantal}
is devoted to exactly computable results, and starts with Majumdar and
Dhar's method. The scaling limit of the height $0$ field is discussed.
Section \ref{ssec:height-123} is devoted to an exposition of the
computation of height probabilities in 2D due to Priezzhev, and
is followed by further 2D results in Section \ref{ssec:corr-123}.
Section \ref{sec:measures} is devoted to questions on infinite graphs
and highlights the role that properties of the wired uniform spanning
forest play in infinite volume limits. Finally,
Section \ref{sec:infinite-conf} discusses certain questions of
stabilizability of infinite configurations.
\section{The Abelian sandpile model / chip-firing game on a finite graph}
\label{sec:model}
\subsection{Definition of the model}
\label{ssec:model-def}
Let $G = (V \cup \{ s \}, E)$ be a finite, connected multigraph
(i.e.~we allow multiple edges between vertices). The distinguished vertex
$s$ is called the \textbf{sink}. We exclude loop-edges for
simplicity (their presence would involve only trivial modifications).
We write $\deg_G(x)$ for the degree of the vertex $x$ in the graph $G$,
and we write $x \sim y$ to denote that vertices $x$ and $y$ are
connected by at least one edge.
\begin{example}
\label{example:wired-graph}
Let $V \subset \Z^d$ be finite. Identify all vertices in $V^c = \Z^d \setminus V$
into a single vertex that becomes the sink $s$. Then remove all loop-edges
at $s$. This is called the \textbf{wired graph} induced by $V$.
Instead of $\Z^d$, we can start from any locally finite, infinite,
connected graph.
\end{example}
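To make the example concrete, the following sketch builds, for a finite $V \subset \Z^2$, the table of vertex degrees and internal edge counts of the wired graph; this is exactly the matrix $\Delta'$ that appears in the toppling rule below. The function name \texttt{wired\_laplacian} and the dict-of-dicts encoding are ad hoc choices of ours, not from the text.

```python
# Sketch: the wired graph induced by a finite V in Z^2.
# Every edge leaving V is redirected to the sink s (loops at s are dropped).
from itertools import product

def wired_laplacian(V):
    """Return the V x V matrix with deg_G(x) on the diagonal and
    -a_{xy} off the diagonal, encoded as a dict of dicts."""
    L = {x: {y: 0 for y in V} for x in V}
    for (i, j) in V:
        for nb in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
            L[(i, j)][(i, j)] += 1   # each lattice edge contributes to deg_G(x)
            if nb in V:              # edges to V^c are absorbed by the sink
                L[(i, j)][nb] -= 1
    return L

V = set(product(range(2), range(2)))  # a 2x2 block of Z^2
L = wired_laplacian(V)
```

For the $2 \times 2$ block, each vertex has degree $4$ with two neighbours inside $V$ and two edges merged into the sink, so each row of $\Delta'$ sums to $2$.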
A \textbf{sandpile} is a collection
of indistinguishable particles (chips, sand grains, etc.) on the
vertices in $V$. A sandpile is hence specified by a map
$\eta : V \to \{ 0, 1, 2, \dots \}$. We say that $\eta$ is
\textbf{stable at $x \in V$}, if $\eta(x) < \deg_G(x)$, and
we say that it is \textbf{stable}, if it is stable at all
$x \in V$.
We now introduce a dynamics that stabilizes any unstable sandpile.
If $\eta$ is unstable at some $x \in V$ (i.e.~$\eta(x) \ge \deg_G(x)$),
$x$ is \textbf{allowed to topple} which means that $x$ sends one
particle along each edge incident to it. (In the combinatorics literature
it is common to say \textbf{the vertex $x$ fires} by sending chips
to its neighbours.) On toppling the vertex $x$, the particles are
re-distributed as follows:
\eqnsplst
{ \eta(x) &\ \longrightarrow\ \eta(x) - \deg_G(x) \\
\eta(y) &\ \longrightarrow\ \eta(y) + a_{xy}, \quad y \in V,\, y \not= x, }
where $a_{xy} = \text{number of edges between $x$ and $y$}$.
Regarding $\eta$ as a row vector, this can be concisely written as
\eqn{e:topple}
{ \eta \longrightarrow \eta - \Delta'_{x,\cdot}, }
where
\eqnsplst
{ \Delta'_{xy}
&= \begin{cases}
\deg_G(x) & \text{if $x = y \in V$;}\\
-a_{xy} & \text{if $x \not= y$, $x, y \in V$;}
\end{cases} \\
\Delta'_{x,\cdot}
&= \text{row $x$ of $\Delta'$}. }
In other words, if $\Delta = \text{graph Laplacian of $G$}$
then $\Delta' = \text{restriction of $\Delta$ to $V \times V$}$.
Particles arriving at the sink are lost, that is, we do not
keep track of them. Observe that requiring $\eta(x) \ge \deg_G(x)$
before toppling ensures that we still have a sandpile
after toppling (i.e.~the number of particles at $x$ is still
non-negative after toppling). We also say in this case that toppling $x$ is
\textbf{legal}.
Toppling a vertex may create further unstable
vertices.
\begin{definition}
Given a sandpile $\xi$, we define its \textbf{stabilization}
\eqnst
{ \xi^\circ \in \Omega_G
:= \{ \text{stable sandpiles} \}
= \prod_{x \in V} \{ 0, 1, \dots, \deg_G(x) - 1 \}, }
by carrying out all possible legal topplings, in any order, until
a stable sandpile is reached.
\end{definition}
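The definition translates directly into code. The minimal sketch below (the name \texttt{stabilize} and the list/dict encoding are ours) topples unstable vertices, in the order in which the scan finds them, until a stable sandpile is reached.

```python
def stabilize(eta, deg, a):
    """Carry out legal topplings until a stable sandpile is reached.
    eta[x]  -- number of particles at vertex x
    deg[x]  -- deg_G(x), counting edges to the sink
    a[x][y] -- number of edges between x and y within V
    Returns the stable configuration as a tuple."""
    eta = list(eta)
    while True:
        for x in range(len(eta)):
            if eta[x] >= deg[x]:
                eta[x] -= deg[x]          # x loses deg_G(x) particles;
                for y, axy in a[x].items():
                    eta[y] += axy         # neighbours in V gain a_{xy} each,
                break                     # the remaining particles go to s
        else:
            return tuple(eta)

# Path graph 1 -- 2 -- sink: two vertices of degree 2, one internal edge.
deg = [2, 2]
a = {0: {1: 1}, 1: {0: 1}}
```

For example, starting from $(3,3)$ on this path, each vertex topples twice and the stabilization is $(1,1)$.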
\begin{theorem}
\label{thm:stabilization}
\cite{Dhar90}
The map $\xi \mapsto \xi^\circ$ is well-defined.
\end{theorem}
\begin{proof}
We need to show:
\begin{itemize}
\item[(a)] Only finitely many topplings can occur, regardless
of how we choose to topple vertices.
\item[(b)] The final stable configuration is independent of the
sequence of topplings chosen.
\end{itemize}
In order to see (a), observe that if $x \sim s$ then $x$ can
topple only finitely many times (the system loses particles
to $s$ on each toppling of $x$). It follows by induction that
for all $k \ge 1$, if $x \sim x_{k-1} \sim \dots \sim x_1 \sim s$,
then $x$ can topple only finitely many times. Since $G$
is connected, we are done.
We now prove (b) in two steps.
(i) \emph{``Topplings commute''}. If $x, y \in V$, $x \not= y$ and
$\eta$ is unstable at both $x$ and $y$, then writing $T_x$ to denote
the effect of toppling $x$ we claim that
\eqn{e:topplings-commute}
{ T_y T_x \eta
= T_x T_y \eta. }
Observe that in either order, both topplings are legal.
Then the claim is immediate from observing that both sides equal
$\eta - \Delta'_{x,\cdot} - \Delta'_{y,\cdot}$.
(ii) Suppose now that
\eqn{e:1st}
{ x_1, x_2, \dots, x_k }
and
\eqn{e:2nd}
{ y_1, y_2, \dots, y_\ell }
are two sequences of vertices that are both possible stabilizing sequences
of $\eta$. That is, when carried out in order from left to right,
in both sequences each toppling is legal, and the final results
are stable configurations. If $\eta$ is already stable, then
$k = \ell = 0$ and there is nothing to prove.
Otherwise, we have $k, \ell \ge 1$ and $\eta(x_1) \ge \deg_G(x_1)$.
Therefore, $x_1$ must occur somewhere in the second sequence, otherwise
the second sequence would never reduce the number of particles at $x_1$.
Let $x_1 = y_i$, $1 \le i \le \ell$, and suppose that $i$ is the smallest
such index. By part (i), the toppling of $y_i = x_1$ can be moved
to the front of the second stabilizing sequence. Precisely, we have
\eqnsplst
{ T_{x_1} T_{y_{i-1}} \dots T_{y_1} \eta
&= T_{y_{i-1}} T_{x_1} T_{y_{i-2}} \dots T_{y_1} \eta \\
&= T_{y_{i-1}} T_{y_{i-2}} T_{x_1} \dots T_{y_1} \eta \\
&\ \ \vdots \\
&= T_{y_{i-1}} T_{y_{i-2}} \dots T_{y_1} T_{x_1} \eta. }
It follows that the sequence
\eqn{e:2nd'}
{ x_1, y_1, \dots, y_{i-1}, y_{i+1}, \dots, y_\ell }
also stabilizes $\eta$. We now remove $x_1$ from the beginning
of the sequences \eqref{e:1st} and \eqref{e:2nd'} and repeat the
argument for $T_{x_1} \eta$. Iterating gives that $k = \ell$ and the
multisets $[x_1,\dots,x_k]$ and $[y_1,\dots,y_\ell]$ are permutations
of each other. That is, each vertex topples the same number of times in
the two stabilizing sequences, and hence they reach the same final
configuration. This completes the proof that the
stabilization $\xi \mapsto \xi^\circ$ is well-defined.
\end{proof}
\begin{remark}
Sometimes, especially in the physics literature, a stable sandpile is
defined as having possible values $1, \dots, \deg_G(x)$ at $x$, and
a toppling of $x$ is allowed when $\eta(x) > \deg_G(x)$.
It is easy to see that this merely amounts to a trivial shift
of coordinates, and defines the same model.
\end{remark}
\medbreak
{\bf Motivating remarks.} The sandpile dynamics can be
viewed as a toy model of avalanche-type phenomena. Adding a single particle
to the pile and stabilizing can induce a complex sequence of topplings,
called an ``avalanche''. However, the model is \emph{not intended} as a
realistic model of sand. In order to model sand grains moving down a slope,
a more suitable condition for toppling could be that the \emph{discrete gradient}
exceeds some fixed critical value $d_c > 0$. It is easy to see that in such
models topplings do not commute. In fact, if $y_1 \sim x \sim y_2$ and
$\eta(x) - \eta(y_1) = d_c = \eta(x) - \eta(y_2)$, the model must
specify whether a particle from $x$ moves to
$y_1$ or to $y_2$. We will see later that commutativity in the
Abelian sandpile has many nice consequences, which make
it more amenable to study. The point is that the Abelian model
already possesses important qualitative features of avalanche-like
phenomena and as we will see, has very nontrivial behaviour.
We will return to this in Section \ref{sec:motivation}.
\medbreak
\begin{exercise} \emph{(Asymmetric sandpile model)}
Let $G = (V \cup \{ s \}, E)$ be a \emph{directed} graph with a distinguished vertex $s$.
Find appropriate definitions of ``stable'' and ``toppling''.
Find a condition on $G$ that ensures that stabilization is well-defined.
\end{exercise}
\begin{exercise} \emph{(Least action principle)}
Check that the argument of Theorem \ref{thm:stabilization}(b) gives the
following stronger statement.
Suppose that $x_1, x_2, \dots, x_k$ is a stabilizing
sequence for $\eta$ consisting of legal topplings. Suppose that
$y_1, y_2, \dots, y_\ell$ is any other sequence of possibly illegal
topplings, such that carrying them out results in a stable configuration.
Then each vertex is toppled at least as many times in the $y$-sequence
as in the $x$-sequence. In other words, with legal topplings,
each vertex does the minimum amount of required ``work'' to
stabilize the configuration. See \cite{FLP10} for more on
this ``least action principle''.
\end{exercise}
\begin{definition}
The \textbf{addition operators} are the maps on sandpiles
defined by adding one particle at $x$ and stabilizing.
More formally, $E_x \eta = ( \eta + \one_x )^\circ$, where
$\one_x$ is the row vector with $1$ in position $x$ and $0$
elsewhere. The sequence of topplings carried out in stabilizing
$\eta + \one_x$ is called the \textbf{avalanche} induced by
the addition.
\end{definition}
\begin{lemma}
\label{lem:Abelian}
\cite{Dhar90}
We have $E_x E_y = E_y E_x$ for all $x, y \in V$.
\end{lemma}
\begin{proof}
We have
\eqn{e:ExEy}
{ E_x E_y \eta
= ( ( \eta + \one_y )^\circ + \one_x )^\circ }
and
\eqn{e:EyEx}
{ E_y E_x \eta
= ( ( \eta + \one_x )^\circ + \one_y )^\circ. }
We show that both expressions equal
\eqn{e:both}
{ ( \eta + \one_x + \one_y )^\circ. }
To see this, start with the configuration $\eta + \one_x + \one_y$,
and carry out topplings as in the stabilization of $\eta + \one_y$.
The extra particle present at $x$ does not affect the legality
of any of the topplings. Hence with the extra particle at $x$ present,
we arrive at the configuration $( \eta + \one_y )^\circ + \one_x$.
Now carry out any further topplings that are possible, arriving at
the right hand side of \eqref{e:ExEy}. Due to Theorem \ref{thm:stabilization},
the final configuration also equals \eqref{e:both}. Equality of
\eqref{e:both} and \eqref{e:EyEx} is seen similarly.
\end{proof}
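On a small example the lemma can be verified exhaustively. The sketch below (with an ad hoc \texttt{stabilize} implementing the toppling rule, as in the definition of the model) checks $E_x E_y = E_y E_x$ for every stable sandpile on the path $1$--$2$--sink.

```python
def stabilize(eta, deg, a):
    """Topple legally until stable; see the definition of the model."""
    eta = list(eta)
    while True:
        for x in range(len(eta)):
            if eta[x] >= deg[x]:
                eta[x] -= deg[x]
                for y, axy in a[x].items():
                    eta[y] += axy
                break
        else:
            return tuple(eta)

deg = [2, 2]                       # path 1 -- 2 -- sink
a = {0: {1: 1}, 1: {0: 1}}

def E(x, eta):
    """Addition operator: add one particle at x and stabilize."""
    bumped = list(eta)
    bumped[x] += 1
    return stabilize(bumped, deg, a)

stable_configs = [(i, j) for i in range(2) for j in range(2)]
commutes = all(E(0, E(1, eta)) == E(1, E(0, eta)) for eta in stable_configs)
```

Here `commutes` is `True`; for instance both $E_0 E_1 (0,0)$ and $E_1 E_0 (0,0)$ equal $(1,1)$.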
So far the dynamics has been deterministic. We now add randomness
and define the \textbf{sandpile Markov chain} as follows. We take as
state space the set $\Omega_G$ of stable sandpiles. Fix a positive
probability distribution $p$ on $V$, i.e.~$\sum_{x \in V} p(x) = 1$
and $p(x) > 0$ for all $x \in V$. Given the current state
$\eta \in \Omega_G$, pick a random vertex $X \in V$ according to $p$,
add a particle there, and stabilize to obtain the next state of the
Markov chain. That is, the Markov chain makes the transition
\eqnst
{ \eta \ \longrightarrow\ E_X \eta = ( \eta + \one_X )^\circ. }
If the Markov chain has initial state $\eta_0$, we can write
the time evolution, using Theorem \ref{thm:stabilization}, as
\eqnst
{ \eta_n
= \left( \eta_0 + \sum_{i=1}^n \one_{X_i} \right)^\circ
= E_{X_n} \dots E_{X_1} \eta_0, }
where $X_1, X_2, \dots$ are i.i.d.~random variables distributed
according to $p$.
We denote by $\eta^\max$ the sandpile defined by
$\eta^\max(x) = \deg_G(x) - 1$, $x \in V$. For sandpiles $\eta, \xi$,
we write $\eta \ge \xi$, if $\eta(x) \ge \xi(x)$ for all $x \in V$.
Recall the following standard terminology for general Markov chains.
For two states $\eta, \xi$ of a Markov chain we say that
\emph{$\xi$ can be reached from $\eta$}, if there exists $n \ge 0$ such that
$\mathbf{p}^n(\eta,\xi) > 0$, where $\mathbf{p}^n$ is the $n$-step
transition probability. We say that \emph{$\eta$ and $\xi$ communicate},
if they can be reached from each other. This is an equivalence relation,
and the equivalence classes are the \emph{communicating classes}.
A state $\eta$ is \emph{recurrent}, if starting from $\eta$ the chain returns
to $\eta$ with probability $1$, and it is \emph{transient} otherwise.
Recurrence and transience are \emph{class properties}, that is, if
one state in a class is recurrent then all states are.
\begin{theorem}
\label{thm:recurrent}
\cites{Dhar90,HLMPPW}
Consider the sandpile Markov chain on any finite connected multigraph
$G = (V \cup \{ s \}, E)$ (satisfying $p(x) > 0$ for all $x \in V$).\\
(i) There is a single recurrent class.\\
(ii) The following are equivalent for $\eta \in \Omega_G$:
\begin{itemize}
\item[(a)] $\eta$ is recurrent;
\item[(b)] there exists a sandpile $\xi \ge \eta^\max$ such that
$\xi^\circ = \eta$;
\item[(c)] for any sandpile $\sigma$, it is possible to reach $\eta$ from
$\sigma$ by adding particles and toppling vertices, i.e.~there exists a
sandpile $\zeta$ such that $\eta = (\sigma + \zeta)^\circ$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) The configuration $\eta^\max$ is reachable by the Markov chain from
any $\zeta \in \Omega_G$ (by addition of particles). Hence $\eta^\max$ is
recurrent, and the recurrent class containing it is the only recurrent class.
(ii) (a) $\Longrightarrow$ (b). If $\eta$ is recurrent, it is reachable
from $\eta^\max$, i.e.~there exist $k \ge 0$ and $x_1, \dots, x_k \in V$ such
that
\eqnst
{ \eta
= E_{x_k} \dots E_{x_1} \eta^\max
= \left( \eta^\max + \sum_{i = 1}^k \one_{x_i} \right)^\circ. }
Hence we can take $\xi$ to be the configuration inside the parentheses
on the right hand side.
(b) $\Longrightarrow$ (a). If $\xi^\circ = \eta$, $\xi \ge \eta^\max$,
we can write $\xi = \eta^\max + \sum_{i=1}^k \one_{x_i}$ with
some $x_1, \dots, x_k \in V$, so that $\eta = E_{x_k} \dots E_{x_1} \eta^\max$.
This shows that $\eta$ is reachable from $\eta^\max$, and hence
recurrent.
(c) $\Longrightarrow$ (b). This is obvious by taking $\sigma = \eta^\max$.
(b) $\Longrightarrow$ (c).
Let $\xi \ge \eta^\max$ be such that
$\xi^\circ = \eta$. Take $\zeta := \xi - \sigma^\circ \ge \eta^\max - \sigma^\circ \ge 0$.
Then since $\xi - \sigma^\circ \ge 0$, starting from
$\sigma + \zeta = \sigma + \xi - \sigma^\circ$ we can legally topple
a sequence of vertices that stabilizes $\sigma$, and arrive at the
configuration $\sigma^\circ + \xi - \sigma^\circ = \xi$.
Now we can legally topple a sequence of vertices that stabilizes $\xi$,
and arrive at the configuration $\eta$.
This shows that $\eta$ has property (c).
\end{proof}
\begin{definition}
We denote by $\cR_G$ the set of recurrent sandpiles.
\end{definition}
\subsection{The sandpile group / critical group}
\label{ssec:sandpile-group}
Let $G = (V \cup \{ s \}, E)$ be a finite connected multigraph.
We now define the sandpile group of $G$.
Consider $\Z^V$ as an Abelian group. The integer row span
$\Z^V \Delta'_G$ of the matrix $\Delta'_G$ forms a subgroup of
$\Z^V$. For $\xi, \zeta \in \Z^V$, let us write $\xi \sim \zeta$
if $\xi - \zeta \in \Z^V \Delta'_G$. This is an equivalence
relation, and we write $[\xi]$ to denote the equivalence
class containing $\xi$. The equivalence classes form
an Abelian group, the factor group
\eqnst
{ K_G
:= \Z^V / \Z^V \Delta'_G. }
The group $K_G$ is called the \textbf{sandpile group} of $G$
(sometimes called the \textbf{critical group} in the
combinatorics literature). Any toppling corresponds to
subtracting a row of $\Delta'_G$ from the configuration
(recall \eqref{e:topple}). Therefore, during
stabilization a configuration is replaced by an equivalent one.
Hence we can expect that the group $K_G$ plays a role in
understanding the sandpile Markov chain. This is made
precise by the following theorem.
\begin{theorem}
\label{thm:group}
\cite{Dhar90}
(i) Every equivalence class in $K_G$ contains precisely one recurrent
sandpile in $\cR_G$. In particular,
\eqnst
{ | \cR_G |
= | K_G |
= \det(\Delta'_G). }
(ii) Consequently, the following operation
$\oplus : \cR_G \times \cR_G \to \cR_G$ turns $\cR_G$ into an Abelian
group isomorphic to $K_G$:
\eqnst
{ \eta \oplus \xi
:= (\eta + \xi)^\circ. }
\end{theorem}
The proof of (i) we give is due to \cite{HLMPPW}.
We will need the following lemma of \cite{HLMPPW} that provides a
configuration with a special property (later it will become clear that
this is a representative of the identity of $K_G$).
\begin{lemma}
\label{lem:id-like}
\cite{HLMPPW}
Let $\eps := \delta - \delta^\circ$, where $\delta$ is defined
by $\delta(x) = \deg_G(x)$, $x \in V$. If $\eta$ is recurrent, then
$(\eta + \eps)^\circ = \eta$.
\end{lemma}
\begin{proof}
By Theorem \ref{thm:recurrent}, if $\eta$ is recurrent, it is possible to
add particles to $\delta$ and stabilize to get $\eta$. That is, there
exists $\zeta \ge 0$ such that $\eta = (\delta + \zeta)^\circ$. Consider the
configuration
\eqnst
{ \gamma
= (\zeta + \delta) + \eps
= \delta + \zeta + \delta - \delta^\circ. }
Since $\eps \ge 0$, we can start from $\gamma$ and legally topple a sequence
of vertices that stabilizes $\zeta + \delta$, arriving at the
configuration $\eta + \eps$. Stabilizing further gives $(\eta + \eps)^\circ$.
On the other hand, since $\delta - \delta^\circ \ge 0$,
we can start from $\gamma$, and legally topple a sequence of vertices
that stabilizes $\delta$, arriving at the configuration
$\delta^\circ + \zeta + \delta - \delta^\circ = \zeta + \delta$.
Stabilizing further we obtain $\eta$. Comparing the two stabilizing
sequences, Theorem \ref{thm:stabilization}(b) yields
$\eta = (\eta + \eps)^\circ$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:group}.]
(i) We first observe that every equivalence class contains some
representative $\xi$ with $\xi \ge \eta^\max$. Then
$\xi^\circ =: \eta \in \cR_G$ by Theorem \ref{thm:recurrent}(ii), and
$\eta \in [\xi]$. It follows that $\cR_G$ intersects each
equivalence class.
It remains to show that the intersection of $\cR_G$ with any equivalence
class contains at most one element. To see this, suppose that
$\eta_1 \sim \eta_2$, $\eta_1, \eta_2 \in \cR_G$, and we show $\eta_1 = \eta_2$.
Since $\eta_1 \sim \eta_2$, there exist $c_x \in \Z$, $x \in V$, such that
\eqnst
{ \eta_1
= \eta_2 + \sum_{x \in V} c_x \Delta'_{x,\cdot}. }
Let $V_- := \{ x \in V : c_x < 0 \}$ and $V_+ := \{ x \in V : c_x > 0 \}$, and define
\eqnst
{ \eta
:= \eta_1 + \sum_{x \in V_-} (-c_x) \Delta'_{x,\cdot}
= \eta_2 + \sum_{x \in V_+} c_x \Delta'_{x,\cdot}. }
Take $k$ large enough such that the configuration $\eta'$ defined by
$\eta' = \eta + k \eps$ satisfies $\eta'(x) \ge |c_x| \deg_G(x)$ for all $x \in V$.
This is possible, since each entry of $\eps$ is at least $1$.
Starting from $\eta'$, we may legally topple $(-c_x)$-times
each vertex $x \in V_-$, arriving at the configuration
$\eta_1 + k \eps$. This further stabilizes to $\eta_1$, by Lemma \ref{lem:id-like}.
Similarly, we can legally topple $c_x$-times each vertex
$x \in V_+$, arriving at the configuration $\eta_2 + k \eps$.
This further stabilizes to $\eta_2$, again by Lemma \ref{lem:id-like}.
Comparing the two stabilizations, Theorem \ref{thm:stabilization}(b)
yields $\eta_1 = \eta_2$.
The number of elements of $K_G$ is the index of the subgroup $\Z^V \Delta'_G$.
It is easy to see that this equals the determinant of $\Delta'_G$.
(ii) It is not difficult to show (see Exercise \ref{ex:oplus}) that
if $\eta, \xi \in \cR_G$, we have $(\eta + \xi)^\circ \in \cR_G$.
Hence $\oplus$ indeed maps $\cR_G \times \cR_G$ into $\cR_G$.
We also have
\eqnst
{ [\eta \oplus \xi]
= [ (\eta + \xi)^\circ ]
= [ \eta + \xi ]
= [ \eta ] + [ \xi ]. }
This shows that $\oplus$ indeed corresponds to the group operation in $K_G$.
\end{proof}
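The count in part (i) can be confirmed by brute force on a tiny example. The sketch below computes the recurrent sandpiles of the path $1$--$2$--sink as the configurations reachable from $\eta^\max$ (Theorem \ref{thm:recurrent}) and compares their number with $\det(\Delta'_G)$; the search code and helper names are ours.

```python
def stabilize(eta, deg, a):
    """Topple legally until stable; see the definition of the model."""
    eta = list(eta)
    while True:
        for x in range(len(eta)):
            if eta[x] >= deg[x]:
                eta[x] -= deg[x]
                for y, axy in a[x].items():
                    eta[y] += axy
                break
        else:
            return tuple(eta)

deg = [2, 2]                # path 1 -- 2 -- sink, Delta' = [[2, -1], [-1, 2]]
a = {0: {1: 1}, 1: {0: 1}}
det_lap = 2 * 2 - (-1) * (-1)        # det Delta' = 3

# recurrent sandpiles = configurations reachable from eta_max
eta_max = (1, 1)
recurrent, frontier = {eta_max}, [eta_max]
while frontier:
    eta = frontier.pop()
    for x in range(2):
        bumped = list(eta)
        bumped[x] += 1
        nxt = stabilize(bumped, deg, a)
        if nxt not in recurrent:
            recurrent.add(nxt)
            frontier.append(nxt)
```

One finds $\cR_G = \{(1,1),(1,0),(0,1)\}$: the all-zero sandpile is stable but not recurrent, and $|\cR_G| = \det(\Delta'_G) = 3$.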
\begin{exercise}
\label{ex:oplus}
Show that if $\eta, \xi \in \cR_G$, then $(\eta + \xi)^\circ \in \cR_G$.
(\emph{Hint:} One way to see this is the criterion of
Theorem \ref{thm:recurrent}(ii)(b).)
\end{exercise}
The identity element of $K_G$, that is, the unique recurrent
configuration $\eta_0 \in \cR_G$ such that $[\eta_0] = [0]$,
displays highly non-trivial features. On large rectangular regions
in $\Z^2$, the identity element displays both regular and fractal
patterns; see the pictures in \cites{BR02,LP10}.
Le Borgne and Rossin \cite{BR02} prove some rigorous results
for rectangular regions.
\begin{exercise}
\label{ex:sink-nbr}
Show that if there is a unique $x \in V$ such that $x \sim s$, then
$E_x$ restricted to $\cR_G$ is the identity.
(\emph{Hint:} Show that ${\bf 1}_x$ equals the sum of the rows
of $\Delta'_G$. This is a special case of Dhar's ``multiplication
by identity test''; see \cite{Dhar90}.)
\end{exercise}
\begin{exercise}
\label{ex:smaller-gen}
Suppose that $V \subset \Z^d$, and $G$ is the wired graph induced by $V$.
Show that if $p(x) > 0$ for all $x \in V$ such that $x \sim s$, then
the sandpile Markov chain is irreducible. (That is, in this case the
condition imposed on $p$ in Theorem \ref{thm:recurrent} can be substantially
relaxed.)
\end{exercise}
\subsection{The stationary distribution and Dhar's formula}
\label{ssec:stationary}
Once the sandpile Markov chain reaches the set $\cR_G$ of
recurrent sandpiles, it never leaves it. Let us write
$\nu_G$ for the stationary distribution, which by
Theorem \ref{thm:recurrent}(i) is unique, and is concentrated
on $\cR_G$. In view of Theorem \ref{thm:group},
the restriction of the Markov chain to $\cR_G$ can be identified
with a random walk on the finite group $K_G$. A transition from
the state $\eta \in \cR_G$ to $(\eta + \mathbf{1}_x)^\circ$, $x \in V$,
is identified with adding to $[\eta] \in K_G$ the
group element $[\mathbf{1}_x]$. As the next easy exercise shows, this
implies that the stationary distribution of the sandpile Markov chain is
\emph{uniform} on $\cR_G$.
\begin{exercise}
\label{ex:rw-group}
Let $K$ be a finite group, and let $(X_n)_{n \ge 0}$ be an
irreducible random walk on $K$, that is a Markov chain with transition
matrix $P(h,gh) = \mu(g)$, where $\sum_{g \in K} \mu(g) = 1$, and
the support of $\mu$ generates $K$. Show that the stationary
distribution of $(X_n)_{n \ge 0}$ is uniform on $K$.
\end{exercise}
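For the path $1$--$2$--sink with $p$ uniform on $V$, the conclusion of the exercise can also be verified by writing out the transition matrix on $\cR_G$ exactly. A sketch with exact rational arithmetic (all names ad hoc):

```python
from fractions import Fraction

def stabilize(eta, deg, a):
    """Topple legally until stable; see the definition of the model."""
    eta = list(eta)
    while True:
        for x in range(len(eta)):
            if eta[x] >= deg[x]:
                eta[x] -= deg[x]
                for y, axy in a[x].items():
                    eta[y] += axy
                break
        else:
            return tuple(eta)

deg = [2, 2]                       # path 1 -- 2 -- sink
a = {0: {1: 1}, 1: {0: 1}}
R = [(1, 1), (1, 0), (0, 1)]       # its recurrent sandpiles
p = Fraction(1, 2)                 # uniform addition probability on V

# transition matrix of the sandpile Markov chain restricted to R
P = {eta: {xi: Fraction(0) for xi in R} for eta in R}
for eta in R:
    for x in range(2):
        bumped = list(eta)
        bumped[x] += 1
        P[eta][stabilize(bumped, deg, a)] += p

# stationarity of the uniform measure: sum_eta nu(eta) P(eta, xi) = nu(xi)
nu = {eta: Fraction(1, 3) for eta in R}
incoming = {xi: sum(nu[eta] * P[eta][xi] for eta in R) for xi in R}
```

Each row of $P$ is a probability distribution, and the uniform measure on $\cR_G$ is invariant, in line with Exercise \ref{ex:rw-group}.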
The following exercise is essentially a triviality. However, we prefer
to state it explicitly for two reasons: (i) it will play an important role in
Theorem \ref{thm:Dhar's-formula} below; (ii) its version for infinite
graphs is far from trivial.
\begin{exercise}
\label{ex:inv-addition}
Check that the addition operators $E_x : \cR_G \to \cR_G$, $x \in V$,
leave the measure $\nu_G$ invariant.
\end{exercise}
The following theorem due to Dhar \cite{Dhar90} gives a formula
for the average number of topplings induced by adding a single particle,
under stationarity.
For a sandpile $\eta$ and $x, y \in V$, let us write $n(x,y;\eta)$ for
the number of topplings occurring at $y$ during the stabilization
of $\eta + \one_x$.
\begin{theorem}
\label{thm:Dhar's-formula}
Let $G = (V \cup \{ s \},E)$ be a finite connected multigraph.
We have
\eqn{e:Dhar's-formula}
{ \E_{\nu_G} [ n(x,y;\cdot) ]
= (\Delta'_G)^{-1}_{xy}, \quad x, y \in V. }
\end{theorem}
\begin{proof}
From the definition of stabilization we have the relation:
\eqnst
{ (\eta + \one_x)^\circ(y)
= \eta(y) + \one_x(y) - \sum_{z \in V} n(x,z;\eta) (\Delta'_G)_{zy}. }
Now average both sides with respect to the stationary distribution
$\nu_G$. We get
\eqnst
{ \E_{\nu_G} [ (\eta + \one_x)^\circ(y) ]
= \E_{\nu_G} [ \eta(y) ] + \one_x(y)
- \sum_{z \in V} \E_{\nu_G} [ n(x,z;\eta) ] (\Delta'_G)_{zy}. }
Due to Exercise \ref{ex:inv-addition}, the left hand side equals
the first term on the right hand side, which gives
\eqnst
{ \sum_{z \in V} \E_{\nu_G} [ n(x,z;\eta) ] (\Delta'_G)_{zy}
= \one_x(y). }
Since this holds for all $x, y \in V$, we get \eqref{e:Dhar's-formula}.
\end{proof}
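Since the stationary measure is uniform on $\cR_G$, formula \eqref{e:Dhar's-formula} can be checked exactly on a small graph: average the toppling counts over all recurrent configurations and compare with the inverse matrix. A sketch with exact arithmetic, helper names ours:

```python
from fractions import Fraction

def stabilize_count(eta, deg, a):
    """Stabilize and also record n[y] = number of topplings at y."""
    eta = list(eta)
    n = [0] * len(eta)
    while True:
        for x in range(len(eta)):
            if eta[x] >= deg[x]:
                eta[x] -= deg[x]
                n[x] += 1
                for y, axy in a[x].items():
                    eta[y] += axy
                break
        else:
            return tuple(eta), n

deg = [2, 2]                # path 1 -- 2 -- sink, Delta' = [[2, -1], [-1, 2]]
a = {0: {1: 1}, 1: {0: 1}}
R = [(1, 1), (1, 0), (0, 1)]       # its recurrent sandpiles

# E_nu[n(x, y; .)]: average toppling counts over the uniform measure on R
avg = [[Fraction(0)] * 2 for _ in range(2)]
for eta in R:
    for x in range(2):
        bumped = list(eta)
        bumped[x] += 1
        _, n = stabilize_count(bumped, deg, a)
        for y in range(2):
            avg[x][y] += Fraction(n[y], len(R))

# (Delta')^{-1} = (1/3) [[2, 1], [1, 2]]
inv = [[Fraction(2, 3), Fraction(1, 3)],
       [Fraction(1, 3), Fraction(2, 3)]]
```

The two matrices agree entry by entry, as Theorem \ref{thm:Dhar's-formula} asserts.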
The above theorem is extremely useful in estimating topplings in an
avalanche. However, it only gives information about the first moment
of the toppling numbers.
\begin{open}
Find a useful expression for the second moment of the toppling numbers.
\end{open}
\section{Motivation from statistical physics}
\label{sec:motivation}
In the physics literature, the sandpile model appears in connection
with the notion of ``self-organized criticality'' (SOC) \cites{Dhar90,Dhar06}.
In order to explain what SOC is, we first clarify the meaning
of ``criticality'' in the example of percolation in Section \ref{ssec:percolation}.
In Section \ref{ssec:SOC} the notion of SOC is illustrated via an example
closely related to percolation. In Section \ref{ssec:SOC-sandpile}, we discuss
SOC in the sandpile model and state some open problems.
In Section \ref{sec:connections} various connections to other models
will be presented as well.
\subsection{Percolation --- an example of a critical phenomenon}
\label{ssec:percolation}
A simple-to-define but deep example of criticality is provided by
\textbf{bond percolation} on the $d$-dimensional integer lattice $\Z^d$.
Let $0 < p < 1$. Declare each edge of $\Z^d$ (also called a bond)
\textbf{occupied} with probability $p$ and \textbf{vacant} with
probability $1-p$, independently. Percolation theory studies the
geometry of the connected components (called \textbf{clusters})
of the random subgraph of $\Z^d$ induced by the occupied bonds.
We are going to write $\Pr_p$ for the underlying probability measure
when the parameter value is $p$.
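On a finite box the model is easy to simulate. The sketch below (a union-find clustering; the function name, parameters and box geometry are our choices, not from the text) samples the occupied bonds on an $n \times n$ piece of $\Z^2$ and returns the cluster sizes.

```python
import random

def cluster_sizes(n, p, seed=0):
    """Bond percolation on an n x n box of Z^2: declare each bond occupied
    with probability p, and return the sorted sizes of the occupied clusters,
    computed with union-find."""
    rng = random.Random(seed)
    parent = list(range(n * n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(n):
        for y in range(n):
            if x + 1 < n and rng.random() < p:   # horizontal bond occupied
                union(x * n + y, (x + 1) * n + y)
            if y + 1 < n and rng.random() < p:   # vertical bond occupied
                union(x * n + y, x * n + y + 1)

    sizes = {}
    for i in range(n * n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```

At $p = 0$ every vertex is its own cluster, and at $p = 1$ the whole box is a single cluster; the interesting regime is of course in between.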
Let $\cC$ denote the connected occupied component containing the origin.
A fundamental result
in percolation theory is the following theorem due to Broadbent and
Hammersley \cites{BH57,Hamm57a,Hamm59}.
\begin{theorem}
\label{thm:phase-transition}
Let $d \ge 2$. There exists a critical probability $0 < p_c = p_c(d) < 1$
such that
\eqnsplst
{ \Pr_p [ |\cC| < \infty ] = 1, \quad \text{ if $p < p_c$;}\\
\Pr_p [ |\cC| = \infty ] > 0, \quad \text{ if $p > p_c$.} }
\end{theorem}
It is easy to see using translation invariance that in the
case $p < p_c$ there is no infinite cluster
anywhere in the lattice $\Pr_p$-a.s. It can also be shown that in the case
$p > p_c$ there exists a \emph{unique} infinite cluster \emph{somewhere}
in the lattice; see \cite{Grimmett}.
Note the qualitative similarity with extinction/survival for a
branching process depending on whether the mean offspring is
less than or greater than $1$. One says that a \textbf{phase transition} occurs
at the \textbf{critical value} $p = p_c$ as the parameter $p$ is
increased. The critical value separates the \textbf{subcritical phase}
$p < p_c$ where all clusters are finite a.s., and
the \textbf{supercritical phase} $p > p_c$ where there exists an
infinite cluster a.s.
Percolation at the critical point $p = p_c$ has features that set it
apart from the sub- and supercritical phases.
For example, it is known that for percolation with $p \not= p_c$, the probabilities
$\Pr_p [ |\cC| = k ]$ decay fast with $k$. The results \cite{Grimmett}*{Theorem 6.78},
\cite{Grimmett}*{Theorem 8.65} say that when $d \ge 2$, we have
\eqnsplst
{ \Pr_p \Big[ |\cC| \ge k \Big]
&\le C_1(p) \exp ( - c_1(p) k ) \quad \text{ when $p < p_c$;}\\
\Pr_p \Big[ k \le |\cC| < \infty \Big]
&\le C_2(p) \exp \big( - c_2(p) k^{(d-1)/d} \big) \quad \text{ when $p > p_c$;}\\
\Pr_p \Big[ x\in \cC,\, |\cC| < \infty \Big]
&\le \exp \big( - c_3(p) |x| \big) \quad \text{ when $p \not= p_c$.} }
While these results are already not easy to establish, the case of $p = p_c$ is
even more challenging. For example, it is a major conjecture
that $\Pr_{p_c} [ |\cC| < \infty ] = 1$ in all dimensions $d \ge 2$.
So far, this has only been established in the planar case $d = 2$ \cites{Harris,Kesten80}
and when $d$ is sufficiently large ($d \ge 19$) \cites{BA91,HS90}.
A more detailed conjecture is that in all dimensions $d \ge 2$, the behaviour at and
close to $p = p_c$ is characterized by power laws. For example, it
is expected that there exist \textbf{critical exponents}
$\beta, \delta, \rho, \gamma, \eta \ge 0$, depending on the dimension $d$,
such that
\eqnspl{e:exponents}
{ \Pr_p [ |\cC| = \infty ]
&= (p - p_c)^{\beta + o(1)} \quad \text{ as $p \downarrow p_c$;} \\
\Pr_{p_c} [ |\cC| \ge k ]
&= k^{-1/\delta + o(1)} \quad \text{ as $k \to \infty$;} \\
\Pr_{p_c} [ \mathrm{rad}(\cC) \ge n ]
&= n^{-1/\rho + o(1)} \quad \text{ as $n \to \infty$;} \\
\E_p [ |\cC|; |\cC| < \infty ]
&= |p - p_c|^{-\gamma + o(1)} \quad \text{ as $p \to p_c$;} \\
\Pr_{p_c} [ x \in \cC ]
&= \frac{1}{|x|^{d - 2 + \eta + o(1)}} \quad \text{ as $|x| \to \infty$;} }
where $\mathrm{rad}(\cC) = \sup \{ |x| : x \in \cC \}$.
A further conjecture of \textbf{universality} states that the values
of the exponents are not sensitive to the structure of the lattice.
In particular, they would not change if the cubic lattice is replaced
by any other $d$-dimensional periodic lattice.
Most progress on the conjectures \eqref{e:exponents} has been made in
$d = 2$ and in high dimensions. In the planar case,
one replaces bond percolation on $\Z^2$ by so-called \textbf{site percolation}
on the triangular lattice. Here the \emph{vertices} of the triangular lattice
are declared occupied/vacant with probabilities $p$ and $1-p$,
and the nearest neighbour occupied connected components are considered.
The combined result of the papers \cites{Kesten87,Smirnov01,LSW02,SW01}
is that for site percolation on the triangular grid we have
$\beta = 5/36$, $\delta = 91/5$, $\rho = 48/5$, $\gamma = 43/18$, $\eta = 5/24$.
In sufficiently high dimensions ($d \ge 19$) it has been established
that $\beta = 1, \delta = 2$, $\rho = 1/2$, $\gamma = 1$, $\eta = 0$;
\cites{BA91,HS90,HS00a,HS00b,HvdHS03,KN11}. It is conjectured that
these are the values of the exponents for all $d > 6$.
(The exponents have been established for all $d > 6$ in a modified model
where long bonds are allowed; see the above references.)
\subsection{Self-organized criticality}
\label{ssec:SOC}
At first it seems that the intriguing properties of critical percolation
are very sensitive to the fact that we are at the critical point: $p = p_c$
exactly, or (in large finite systems) $p \approx p_c$. However,
there are interesting examples of \emph{dynamically evolving} models
where criticality (i.e.~power law behaviour of various distributions)
occurs in a rather robust fashion.
An example can be built on top of the Erd{\H{o}}s-R{\'e}nyi random graph
model \cite{Bbook}, that can be viewed as percolation on the complete graph
with $n$ vertices. It will be useful to consider the random graph in a
dynamical fashion. At time $t=0$ we have $n$ vertices and no edges.
Edges become occupied, independently, at rate $1/n$. This choice of rates
ensures that locally around each vertex $O(1)$ edges appear per unit time.
Write
\eqnst
{ v^n_k(t)
= \text{proportion of vertices in clusters of size $k$ at time $t$}. }
In the limit $n \to \infty$ there is a phase transition.
A giant component containing a positive fraction of all vertices
emerges at the critical time $t_c = 1$. One way to formalize
this statement is that $v^n_k(t)$ converges in probability
to a deterministic limit $v_k(t)$, as $n \to \infty$, where the
limit satisfies:
\eqnst
{ \theta(t)
:= 1 - \sum_{k \ge 1} v_k(t)
\begin{cases}
= 0 & \text{if $t \le 1$;} \\
> 0 & \text{if $t > 1$.}
\end{cases} }
Compare with Theorem \ref{thm:phase-transition}. At the critical time
$t_c = 1$, we have the power law $v_k(t_c) \sim c k^{-3/2}$,
as $k \to \infty$.
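The phase transition is easy to observe numerically. The following Python sketch (the function names and parameters are ours, purely for illustration) samples the state of the dynamical random graph at time $t$, using the fact that at time $t$ each edge is occupied independently with probability $1 - e^{-t/n}$, and returns the proportion of vertices in the largest cluster.

```python
import math
import random

class DSU:
    """Union-find structure for tracking connected clusters."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster_fraction(n, t, seed):
    """Proportion of vertices in the largest cluster at time t:
    each of the n(n-1)/2 edges is occupied independently with
    probability 1 - exp(-t/n)."""
    rng = random.Random(seed)
    p = 1.0 - math.exp(-t / n)
    dsu = DSU(n)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                dsu.union(i, j)
    return max(dsu.size[dsu.find(i)] for i in range(n)) / n
```

For $t < 1$ the returned fraction tends to $0$ as $n \to \infty$, while for $t > 1$ it approaches $\theta(t) > 0$.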
\medbreak
\emph{Forest-fire model.}
We now modify the dynamics in a way that prevents the giant component
from emerging, and keeps the system ``at criticality''. Return to the
finite $n$ model, and let $\lambda(n)$ be a function satisfying
$1/n \ll \lambda(n) \ll 1$.
Suppose that ``lightning'' hits each vertex, independently, at rate
$\lambda(n)$. When a vertex is hit by lightning, the cluster containing
it disintegrates into individual vertices, that is, all edges within
the cluster become vacant instantaneously. Heuristically, this mechanism
should prevent clusters from reaching size of order $n$, since then it is
extremely likely that they are hit by lightning ($n \lambda(n) \gg 1$).
On the other hand, our assumption $\lambda(n) \ll 1$ implies that small
clusters are not likely to be hit by lightning, so they will be growing
more-or-less as in the Erd{\H{o}}s-R{\'e}nyi model. The heuristic suggests
that after the critical time the system remains critical forever.
R{\'a}th and T{\'o}th \cite{RT09} proved that this is indeed the case.
\begin{theorem} \cite{RT09}
Suppose that $1/n \ll \lambda(n) \ll 1$. Let $\bar{v}^n_k(t)$ be the
proportion of vertices in clusters of size $k$ at time $t$ in the
forest-fire evolution.\\
(i) There is a deterministic limit in probability
$\bar{v}_k(t) = \lim_{n \to \infty} \bar{v}^n_k(t)$.\\
(ii) If $t \le t_c$, $\bar{v}_k(t) = v_k(t)$.\\
(iii) If $t \ge t_c$ we have $\sum_{\ell \ge k} \bar{v}_\ell(t) \asymp k^{-1/2}$.
\end{theorem}
\begin{remark}
Analogous statements hold for general initial conditions satisfying
a moment condition; see \cite{RT09}.
\end{remark}
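The forest-fire heuristic can also be illustrated by a crude discrete-time simulation; the sketch below is ours (the time discretization and the parameter choices are a caricature at small $n$, not part of the model in \cite{RT09}). Per step of length $\mathrm{d}t$, each absent edge appears with probability $\mathrm{d}t/n$, each vertex is struck with probability $\lambda\,\mathrm{d}t$, and a strike clears all edges of the struck vertex's cluster.

```python
import random

def forest_fire_largest(n, lam, t_max, dt, seed):
    """Crude discrete-time forest-fire sketch.  Returns the size of
    the largest cluster at time t_max."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]

    def component(v):
        comp, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        return comp

    for _ in range(int(t_max / dt)):
        for i in range(n):                    # edge arrivals
            for j in range(i + 1, n):
                if j not in adj[i] and rng.random() < dt / n:
                    adj[i].add(j)
                    adj[j].add(i)
        if lam > 0:
            for v in range(n):                # lightning strikes
                if rng.random() < lam * dt:
                    for u in component(v):    # burn the whole cluster
                        adj[u].clear()
    return max(len(component(v)) for v in range(n))
```

With, say, $n = 100$ and $\lambda = 0.3$ (mimicking $1/n \ll \lambda \ll 1$ at this small size), the largest cluster at time $t = 3$ stays well below the giant component that forms when $\lambda = 0$.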
The phenomenon that a state characterized by power laws is reached and
then maintained by the dynamics is called \textbf{self-organized criticality}.
The term was introduced by Bak, Tang and Wiesenfeld \cite{BTW88}, who suggested
that this mechanism would be present in many physical systems for which
power laws had been observed empirically. Examples include: energy release in
earthquakes, avalanche sizes in rice- and sandpiles, areas of forest-fires
and many others; see the book \cite{Jensen}. Bak, Tang and Wiesenfeld used the sandpile
dynamics to illustrate their idea via numerical simulations \cite{BTW88}.
After Dhar \cite{Dhar90} discovered the Abelian property and established the
fundamental properties discussed in Section \ref{sec:model}, the Abelian sandpile
became the primary theoretical example of SOC \cite{Dhar06}.
\subsection{Self-organized criticality in the Abelian sandpile model}
\label{ssec:SOC-sandpile}
The following heuristic suggests that on a large graph, avalanches in
the stationary sandpile Markov chain will occur on all scales up to the
size of the system. Start from an empty pile. Initially, when not many
particles have been added yet, avalanches will be small. As more particles
get added, the typical size of avalanches grows. The only limit to this growth
is that particles are lost to the sink. Hence when stationarity is reached,
we can expect to see avalanches on all scales up to the size of the system.
Numerical simulations \cites{Dhar06,Manna90} of the model on subsets of
$\Z^d$ suggest that the above heuristic is correct, and various
avalanche characteristics have power law distributions, up to a cut-off
that grows with the system size. In this section we state some conjectures
that quantify this in terms of critical exponents. In our discussions,
we consider the model on the wired graph $G_n$ constructed from a finite box
$V_n = [-n,n]^d \cap \Z^d$, as in Example \ref{example:wired-graph}.
First we briefly comment on the case $d = 1$. Here one can explicitly
compute the set of recurrent configurations and the sandpile group;
see Exercise \ref{ex:1D}. The sandpile group of $G_n$ is isomorphic to
$\Z_{2n+2}$; in particular, the number of recurrent configurations grows only
linearly in $n$. This is in contrast with $d \ge 2$, where the number of
recurrent configurations grows exponentially in $n^d$.
There is significantly ``less randomness'' in $d = 1$ than
in $d \ge 2$. Avalanches can be computed explicitly in $d = 1$,
and it is found that with probability approaching $1$ all avalanches
reach the sink. On the other hand, for $d \ge 2$ most avalanches
do not reach the sink. Below we restrict our attention to
$d \ge 2$, and refer to \cites{Dhar06,MRSV00} for more details on
the one-dimensional case.
We write $\nu_n$ for the stationary distribution of the sandpile
Markov chain on $G_n$. Given $z \in V_n$, we define
the \textbf{avalanche cluster at $z$} as the set of
vertices that are toppled when we add a particle at $z$
to the sandpile $\xi$:
\eqn{e:Av}
{ \Av_{z,V_n}
= \Av_{z,V_n}(\xi)
:= \{ x \in V_n : n(z,x;\xi) > 0 \}. }
We also define the \textbf{size} of the avalanche as
the number of topplings, with multiplicity:
\eqn{e:S}
{ S_{z,V_n}
= S_{z,V_n}(\xi)
:= \sum_{x \in V_n} n(z,x;\xi). }
The \textbf{radius} of the avalanche is:
\eqn{e:R}
{ \mathrm{rad}(\Av_{z,V_n}(\xi))
:= \max \{ |x - z| : x \in \Av_{z,V_n}(\xi) \}. }
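For concreteness, here is a short Python sketch (for $d = 2$; the driving scheme and function names are our own choices) that simulates the sandpile on $V_n$ and records these avalanche observables. For simplicity the radius is measured in the $\ell^\infty$ distance.

```python
import random

def stabilize(height):
    """Stabilize a 2d sandpile with wired boundary: sites are the
    keys of `height`; particles sent to missing neighbours are lost
    to the sink.  Returns the toppling counts n(z, x; .) as a dict."""
    topplings = {}
    unstable = [x for x in height if height[x] >= 4]
    while unstable:
        x = unstable.pop()
        if height[x] < 4:          # stale entry
            continue
        height[x] -= 4             # topple x once
        topplings[x] = topplings.get(x, 0) + 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if y in height:        # neighbours outside the box -> sink
                height[y] += 1
                if height[y] >= 4:
                    unstable.append(y)
        if height[x] >= 4:
            unstable.append(x)
    return topplings

def avalanche_stats(n, additions, seed):
    """Drive the sandpile on [-n,n]^2 towards stationarity, then add
    one particle at the origin and report (|Av|, S, radius)."""
    rng = random.Random(seed)
    sites = [(i, j) for i in range(-n, n + 1) for j in range(-n, n + 1)]
    height = {x: 0 for x in sites}
    for _ in range(additions):
        height[rng.choice(sites)] += 1
        stabilize(height)
    height[(0, 0)] += 1
    top = stabilize(height)
    size = sum(top.values())                            # S_{o,V_n}
    radius = max((max(abs(i), abs(j)) for (i, j) in top), default=0)
    return len(top), size, radius                       # |Av|, S, rad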
\subsubsection{Two easy critical exponents}
\label{sssec:easy}
We start with computing two easy exponents.
Recall Dhar's formula from Section \ref{ssec:stationary}.
Observe that
\eqnst
{ \frac{1}{2d} (\Delta'_{V_n})_{zx}
= I_{zx} - \p^1_n(z,x), }
where $\p^1_n(z,x)$ is the transition matrix of simple random
walk on $V_n$ stopped on the first exit from $V_n$, and
$I$ is the identity matrix. Denote
\eqnsplst
{ \p^k_n(z,x)
&:= \text{$k$-step transition probability from $z$ to $x$}; \\
G_n(z,x)
&:= \Big( \Delta'_{V_n} \Big)^{-1}_{zx}, \quad z, x \in V_n. }
The matrix $G_n$ has a well-known interpretation in terms of the
simple random walk.
\begin{exercise}
\label{ex:Green}
(i) Show that
\eqnst
{ 2d\, G_n(z,x)
= \sum_{k=0}^\infty \p^k_n(z,x)
= \E^z [ \text{number of visits to $x$ before exiting $V_n$} ]. }
(ii) Show that if $z \in [-n/2,n/2]^d \cap \Z^d$,
we have
\eqnst
{ \sum_{x \in V_n} G_n(z,x)
\asymp n^2, \quad n \ge 1. }
\end{exercise}
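A quick numerical check of Exercise \ref{ex:Green}(ii) can be based on the identity $\sum_x 2d\, G_n(z,x) = \E^z[\tau]$, where $\tau$ is the exit time from $V_n$: the expected exit time solves $u = 1 + \p^1_n u$ (with $u = 0$ off the box), which the sketch below (ours, for $d = 2$) solves by simple fixed-point iteration.

```python
def mean_exit_time(n, iters):
    """Expected exit time E^o[tau] of SRW from V_n = [-n,n]^2,
    via fixed-point iteration for u = 1 + P u, u = 0 off the box.
    Then sum_x G_n(o,x) = E^o[tau] / (2d), with 2d = 4 here."""
    u = {(i, j): 0.0 for i in range(-n, n + 1) for j in range(-n, n + 1)}
    for _ in range(iters):
        u = {(i, j): 1.0 + (u.get((i + 1, j), 0.0) + u.get((i - 1, j), 0.0)
                            + u.get((i, j + 1), 0.0) + u.get((i, j - 1), 0.0)) / 4.0
             for (i, j) in u}
    return u[(0, 0)]
```

Doubling $n$ roughly quadruples the value, consistent with $\sum_{x} G_n(o,x) \asymp n^2$.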
Dhar's formula in the present setting says that
\eqn{e:}
{ \E_{\nu_n} [ n(z,x;\cdot) ]
= G_n(z,x). }
Exercise \ref{ex:Green}(ii) gives that
\eqn{e:n^2}
{ \E_{\nu_n} [ S_{o,V_n} ]
\asymp n^2, }
where we write $o$ to denote the origin in $\Z^d$.
In particular, in the stationary sandpile, the expected size of an
avalanche started at $o$ diverges as $n \to \infty$. Compare this with
the divergence of the expected cluster size for critical percolation.
Let $d \ge 3$. Then
\eqnst
{ \lim_{n \to \infty} G_n(z,x)
= G(z,x)
= (2d)^{-1} \E^z [ \text{number of visits to $x$} ]
< \infty }
exists. The function $2d \, G(z,x)$ is called the \textbf{Green function}
of the random walk. It is known that for all $d \ge 3$ the Green function
is asymptotic to $a_d |x-z|^{2-d}$ as $|x-z| \to \infty$; see \cite{LLbook}*{Theorem 4.3.1}.
Hence
\eqn{e:mean-toppling}
{ \lim_{n \to \infty} \E_{\nu_n} [ n(o,x; \cdot) ]
= G(o,x)
\sim \frac{(2d)^{-1} a_d}{|x|^{d-2}} \quad \text{ as $|x| \to \infty$.} }
In order to neatly formulate asymptotic results as $n \to \infty$,
the following theorem will be useful.
\begin{theorem}\cite{AJ04}*{Theorem 1}
Let $d \ge 2$. There is a measure $\nu$ on the space
$\{ 0, 1, \dots, 2d - 1 \}^{\Z^d}$ such that $\nu_n \Rightarrow \nu$
in the sense of weak convergence.
\end{theorem}
Now \eqref{e:mean-toppling} can be rephrased in the simpler form:
for all $d \ge 3$ we have
\eqn{e:mean-toppling-nu}
{ \E_{\nu} [ n(o,x; \cdot) ]
\sim \frac{(2d)^{-1} a_d}{|x|^{d-2}} \quad \text{ as $|x| \to \infty$.} }
Here the precise meaning of the random variable $n(o,x; \cdot)$ is as
follows. Draw a sample configuration from the limiting measure $\nu$.
Add a particle at $o$, and attempt to stabilize by toppling all unstable
sites simultaneously, whenever there are such. Then $n(o,x; \cdot)$ is
the number of induced topplings at $x$. We write
$\Av_z = \{ x \in \Z^d : n(z,x; \cdot) > 0 \}$ and
$S_z = \sum_{x \in \Z^d} n(z,x; \cdot)$.
\subsubsection{Further critical exponents}
\label{sssec:further}
The relation \eqref{e:mean-toppling-nu} gives the average
number of topplings at $x$ induced by adding a particle at $o$.
We now state a theorem and a conjecture concerning the \emph{probability}
that $x$ topples, if we add a particle at $o$.
\begin{theorem}\cite{JRS13}
\label{thm:prob-toppling-nu}
For all $d \ge 5$ there are constants $c = c(d), C = C(d) > 0$ such that
\eqn{e:prob-toppling-nu}
{ \frac{c}{|x|^{d-2}}
\le \nu [ x \in \Av_o ]
\le \frac{C}{|x|^{d-2}}, \quad x \not= 0. }
\end{theorem}
Note that the upper bound follows easily from Dhar's formula \eqref{e:mean-toppling-nu}
and Markov's inequality, so the real content of the theorem is the lower bound.
\begin{conjecture}
For $2 \le d \le 4$ there exists $\eta = \eta(d) \ge 0$ such that
\eqnst
{ \nu [ x \in \Av_o ]
= \frac{1}{|x|^{d-2 + \eta + o(1)}} \quad \text{ as $|x| \to \infty$.} }
\end{conjecture}
We have intentionally written the exponent in the form $d - 2 + \eta$,
in order to highlight the comparison with \eqref{e:prob-toppling-nu}
and \eqref{e:exponents}.
The second conjecture we state concerns the number of topplings
in an avalanche. This can be measured by the total number
of topplings, with or without multiplicity.
\begin{conjecture}
For all $d \ge 2$ there exist exponents
$\tau = \tau(d),\, \tau' = \tau'(d) \ge 0$ depending on $d$ such that
\eqnsplst
{ \nu [ S_o \ge k ]
&= k^{1-\tau + o(1)}, \quad \text{ as $k \to \infty$;} \\
\nu [ |\Av_o| \ge k ]
&= k^{1-\tau' + o(1)}, \quad \text{ as $k \to \infty$.} }
\end{conjecture}
Since $\E_\nu S_o = \infty$, we must have $\tau \le 2$, if it
exists. It is also plausible that $\E_\nu |\Av_o| = \infty$,
so we should also have $\tau' \le 2$, if it exists.
Manna \cite{Manna90} presents numerical evidence suggesting
that $\tau = \tau'$. Theorem \ref{thm:prob-toppling-nu} gives
rigorous support to this conjecture when $d \ge 5$. Indeed,
\eqref{e:prob-toppling-nu} shows that
\eqnst
{ \E_\nu [ n(o,x;\cdot) \,|\, n(o,x;\cdot) \ge 1 ]
= O(1), }
that is, the average number of topplings at $x$, conditional on
the event that $x$ topples, is $O(1)$.
There are heuristic arguments suggesting that
$\tau = \tau' = 3/2$ when $d > 4$; see \cite{Pr00}.
Finally, we state a conjecture regarding the radius of an avalanche.
\begin{conjecture}
For all $d \ge 2$ there exists $\alpha = \alpha(d) \ge 0$ such that
\eqnst
{ \nu [ \mathrm{rad}(\Av_o) > r ]
= r^{-\alpha+o(1)}, \quad \text{ as $r \to \infty$.} }
\end{conjecture}
When $d > 4$, it is conjectured that $\alpha = 2$; see \cite{Pr00}.
It is a well-known heuristic in statistical physics that the behaviour
of a lattice model in sufficiently high dimensions can be approximated
by its behaviour on an infinite $k$-regular tree with $k \ge 3$,
called the \textbf{Bethe lattice}.
In general, there exists a \textbf{critical dimension} $d_c$ such that for $d > d_c$
the values of the critical exponents do not depend on $d$ and take on
the same values as on the $k$-regular tree. The conjectured critical
dimension for the sandpile model is $d_c = 4$ (for the percolation
model of Section \ref{ssec:percolation} it is conjectured to be $d_c = 6$).
Rigorous support to the idea that $d_c = 4$ for the Abelian
sandpile model is provided by Theorem \ref{thm:Pemantle}, showing
that there is a clear change in behaviour at dimension $4$ for the
closely related wired spanning forest measure.
When $\Z^d$ is replaced by a $k$-regular tree, the distribution of the
random set $\Av_o$ has been computed explicitly by
Dhar and Majumdar \cite{DM90} using combinatorial methods.
In particular, they obtain $\tau = 3/2$. It is natural to define the
Euclidean distance between vertices $z, x$ of the tree as the square
root of their graph distance. With respect to this distance
Dhar and Majumdar \cite{DM90} also obtain $\alpha = 2$.
\section{Connections to other models}
\label{sec:connections}
One of the appeals of the Abelian sandpile is its close relationship with
other probability models on graphs. In this section we present
connections to: spanning trees; the rotor-router walk;
the random cluster model and the Tutte polynomial.
\subsection{The burning bijection}
\label{ssec:burning}
As in Section \ref{sec:model}, let $G = (V \cup \{ s \}, E)$ be
a finite connected multigraph. The \textbf{burning algorithm} introduced
by Dhar \cite{Dhar90} provides an efficient way to check whether a
given stable sandpile is recurrent. This gives a combinatorial
characterization of recurrent sandpiles. The algorithm leads
to the \textbf{burning bijection} between recurrent sandpiles on $V$
and spanning trees of $G$. This bijection is due to Majumdar and Dhar \cite{MD92}.
\begin{definition}
Let $\eta \in \Omega_G$, and let $\es \not= F \subset V$. We say that
$\eta$ is \textbf{ample} for $F$, if there exists $x \in F$ such that
$\eta(x) \ge \deg_F(x)$ (the degree of $x$ in the subgraph induced
by the set of vertices $F$).
\end{definition}
\begin{lemma}
\label{lem:ample}
If $\eta \in \cR_G$, then $\eta$ is ample for all $\es \not= F \subset V$.
\end{lemma}
\begin{proof}
Due to Theorem \ref{thm:recurrent}(ii)(c), there exists $\xi$ such that
$\xi^\circ = \eta$ and $\xi(x) \ge \deg_G(x)$ for all $x \in F$.
Fix a stabilizing sequence for $\xi$. Observe that each vertex in $F$
must topple in this stabilizing sequence. Let $x$ be the first vertex
among the vertices in $F$ that finishes toppling.
After $x$ finishes toppling, it receives particles from each of its
neighbours in $F$ (as each of these neighbours will still topple).
The number of particles received is altogether
$\sum_{y \in F,\, y \not= x} a_{yx} = \deg_F(x)$.
Hence $\eta(x) \ge \deg_F(x)$, as required.
\end{proof}
\medbreak
\textbf{The burning algorithm.} The input of the algorithm is a
stable sandpile $\eta \in \Omega_G$.
At time $t = 0$, we declare $s$ ``burnt''. We set $B_0 = \{ s \}$,
the set of vertices burnt at time $0$, and set $U_0 = V$, the set of
vertices unburnt at time $0$.
At time $t = 1$, we declare burnt all $x \in U_0$ such that
$\eta(x) \ge \deg_{U_0}(x)$. That is, we set
\eqnsplst
{ B_1
&= \{ x \in U_0 : \eta(x) \ge \deg_{U_0}(x) \} \\
U_1
&= U_0 \setminus B_1. }
At a generic time $t \ge 1$, we declare burnt all vertices
in $x \in U_{t-1}$ such that $\eta(x) \ge \deg_{U_{t-1}}(x)$.
That is, we set
\eqnsplst
{ B_t
&= \{ x \in U_{t-1} : \eta(x) \ge \deg_{U_{t-1}}(x) \} \\
U_t
&= U_{t-1} \setminus B_t. }
\medbreak
We must eventually have $U_T = U_{T+1} = \dots$ for some $1 \le T < \infty$.
If $U_T \not= \es$, then the relation $U_T = U_{T+1}$ (equivalently:
$B_T = \es$) shows that $\eta$ is not ample for $U_T$.
Hence Lemma \ref{lem:ample} shows that $\eta$ is not recurrent
in this case. We will see shortly the converse, that is, when $U_T = \es$
we can conclude that $\eta$ is recurrent.
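The algorithm is straightforward to implement. The following Python sketch (the graph representation and names are ours) runs the burning algorithm on a multigraph given by an adjacency dictionary, assuming no self-loops, and reports whether all vertices burn, together with the burning times $t(x)$.

```python
def burning_test(adj, sink, eta):
    """Dhar's burning algorithm.  `adj[v]` maps each neighbour of v
    to the edge multiplicity; `eta` is a stable sandpile on the
    non-sink vertices.  Returns (is_recurrent, burning times t(x))."""
    unburnt = set(adj) - {sink}
    time = {sink: 0}
    t = 0
    while True:
        t += 1
        # x burns at step t iff eta(x) >= deg_{U_{t-1}}(x)
        newly = {x for x in unburnt
                 if eta[x] >= sum(m for w, m in adj[x].items()
                                  if w in unburnt)}
        if not newly:
            break
        for x in newly:
            time[x] = t
        unburnt -= newly
    return not unburnt, time
```

For example, on the wired graph built from $V = \{1, 2, 3\} \subset \Z$, a configuration with a single empty site burns completely, while one with two empty sites does not (compare Exercise \ref{ex:1D}).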
\medbreak
\textbf{The burning bijection.} Denote $\cT_G = \{ \text{spanning trees of $G$} \}$.
We use the burning algorithm to define a map $\varphi : \cR_G \to \cT_G$.
For every $x \in V$ fix an ordering $\prec_x$ of the edges incident
with $x$. It will be useful to think of these edges as being oriented from
$x$ towards the corresponding neighbours. Let $\eta \in \cR_G$.
The spanning tree $\varphi(\eta)$ will be defined by assigning to each
$x \in V$ an edge incident with $x$, and oriented outwards from $x$.
The construction will guarantee that the edges form a spanning tree
oriented towards $s$.
We just saw that running the burning algorithm on $\eta$ burns
all vertices. Therefore, given any vertex $x \in V$, there is a
unique $t = t(x) \ge 1$ such that $x \in B_t$. Let
\eqnsplst
{ F_x
&= \{ f : \tail(f) = x,\, \head(f) \in B_{t-1} \} \\
m_x
&= \left| \Big\{ f : \tail(f) = x,\,
\head(f) \in \bigcup_{r \le t-1} B_r \Big\} \right|. }
Here $| \cdot |$ denotes the number of elements of a set.
Observe the following properties:\\
(i) We have $F_x \not= \es$. This is because a vertex becomes burnable
in the algorithm precisely because a sufficient number of its neighbours
have burnt to satisfy the inequality $\eta(x) \ge \deg_{U_{t-1}}(x)$. Hence
there is always at least one neighbour of $x$ that burned in the previous
step.\\
(ii) We have $\deg_G(x) - m_x \le \eta(x) < \deg_G(x) - m_x + | F_x |$.
The first inequality holds because the left hand side is $\deg_{U_{t-1}}(x)$.
The second inequality holds because if it were violated, then $x$ would
have burnt at a time $\le t-1$.\\
Supposing $\eta(x) = \deg_G(x) - m_x + i$, with $0 \le i < | F_x |$, and
$F_x = \{ f_0 \prec_x f_1 \prec_x \dots \prec_x f_{| F_x | -1} \}$,
we set $e_x = f_i$. Now put $\varphi(\eta) := \tau := \{ e_x : x \in V \}$.
Since $\head(e_x) \in B_{t(x)-1}$ for each $x$, the collection $\tau$
does not contain cycles. Therefore, it is a spanning tree of $G$
(oriented towards $s$). Now forget the orientation of the edges
to obtain an unoriented spanning tree. (Note: there is no loss of
information in doing so, since the orientation is uniquely recovered
by following paths leading to $s$).
\medbreak
\begin{exercise}
\label{ex:injective}
Show that $\varphi$ is injective. \emph{Hint:} If $\eta_1 \not= \eta_2$,
there is a first time $t$ when ``different things happen'' in the
constructions of $\varphi(\eta_1)$ and $\varphi(\eta_2)$. Check that at this
time some $e_x$ is assigned differently for the two configurations.
\end{exercise}
A well-known result in combinatorics is the Matrix-Tree Theorem \cite{Bbook},
that states that $|\cT_G| = \det ( \Delta'_G )$. We saw in Theorem \ref{thm:group}(i)
that also $|\cR_G| = \det ( \Delta'_G )$. Therefore Exercise \ref{ex:injective}
implies the following corollary.
\begin{corollary}
The mapping $\varphi : \cR_G \to \cT_G$ is a bijection.
\end{corollary}
\begin{exercise}
\label{ex:comb}
Deduce the following combinatorial characterization of recurrent
states:
\eqnst
{ \parbox{12cm}{a sandpile $\eta$ is recurrent if and only if
it is ample for any $\es \not= F \subset V$.} }
\emph{Hint:} The spanning tree $\varphi(\eta)$ is well-defined, and
injectivity still holds, whenever $\eta$ ``passes'' the burning
algorithm, that is, when all vertices burn.
\end{exercise}
\begin{exercise}
\label{ex:1D}
Show that if $G_n = (V_n \cup \{ s \}, E_n)$ is the wired graph
constructed from $V_n = \{ 1, \dots, n \} \subset \Z$, then
$\cR_{G_n}$ consists of those sandpiles for which there is
at most one vertex with no particles (in particular, $|\cR_{G_n}| = n+1$).
Show that the sandpile group of $G_n$ is isomorphic to $\Z_{n+1}$,
and it is generated by $E_1$.
\end{exercise}
\begin{exercise}
\label{ex:inverse}
Specify explicitly the inverse map $\varphi^{-1} : \tau \mapsto \eta$.
\emph{Hint:} $x \in B_t$ if and only if $\dist_\tau(s,x) = t$,
where $\dist_\tau$ is the graph-distance in the tree $\tau$.
\end{exercise}
Recall that the stationary measure $\nu_G$ is
the uniform distribution on recurrent sandpiles. The bijection
$\varphi$ maps this measure to the uniform distribution
on the set of spanning trees of $G$. This is called the
\textbf{uniform spanning tree} measure, denoted $\mathsf{UST}_G$.
See \cite{LPbook} and \cite{BLPS} for the rich theory of
uniform spanning trees.
What makes the burning bijection a very useful tool is that
there is a simple (and indeed very efficient) algorithm due
to Wilson \cite{W96} to generate a uniformly random element
of $\cT_G$, that is, a sample from $\UST_G$. Mapping this random
tree back via the map $\varphi^{-1} : \cT_G \to \cR_G$,
one can analyze the measure $\nu_G$. In order to describe
Wilson's algorithm, we need the procedure of loop-erasure.
Given a path $\pi = [w_0, w_1, \dots, w_k]$ in $G$, we define
the \textbf{loop-erasure} $\LE(\pi) = [v_0, \dots, v_\ell]$
of $\pi$ by chronologically erasing loops from $\pi$, as
they are created. That is, we follow the steps of $\pi$ until
the first time $t$, if any, when $w_t \in \{ w_0, \dots, w_{t-1} \}$.
Suppose $w_t = w_i$. We remove the loop
$[w_i, w_{i+1}, \dots, w_t = w_i]$ from $\pi$, and continue
tracing $\pi$. The process stops when there are no more loops
to remove, yielding a self-avoiding path denoted $\LE(\pi)$.
If $\pi$ is obtained from a random walk process on $G$, its
loop-erasure is called the loop-erased random walk (LERW)
\cite{LLbook}.
\medbreak
\textbf{Wilson's algorithm.} Fix a vertex $r$ of $G$ (for example,
in the sandpile context $r = s$ turns out to be a natural choice),
and let $v_1, v_2, \dots, v_K$ be an arbitrary enumeration of the
remaining vertices of $G$. Let $\tau_0 = \{ r \}$. Start a simple
random walk on $G$ at the vertex $v_1$, and stop it when $r$ is first
hit. We attach to $\tau_0$ the loop-erasure of the path from $v_1$ to
$r$, and call the resulting path $\tau_1$. Now we start a second simple
random walk from $v_2$, stop it when it hits $\tau_1$, and attach the
loop-erasure to $\tau_1$. This gives a tree $\tau_2$. When we have
visited all the vertices, we have a spanning tree $\tau_K$ of $G$.
Wilson's theorem \cite{W96} shows that this tree is uniformly distributed
over all spanning trees of $G$.
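Both the loop-erasure and Wilson's algorithm are short to implement. The following Python sketch (the data representation is ours) returns a uniform spanning tree as a map sending each vertex to its parent in the tree oriented towards the root.

```python
import random

def loop_erase(path):
    """Chronological loop-erasure of a finite path."""
    out = []
    for v in path:
        if v in out:
            del out[out.index(v) + 1:]   # erase the loop just closed
        else:
            out.append(v)
    return out

def wilson(adj, root, seed):
    """Wilson's algorithm: `adj[v]` lists the neighbours of v.
    Returns a uniform spanning tree as a dict vertex -> parent,
    oriented towards `root` (whose parent is None)."""
    rng = random.Random(seed)
    parent = {root: None}
    for start in adj:
        if start in parent:
            continue
        path = [start]                   # walk until the tree is hit
        while path[-1] not in parent:
            path.append(rng.choice(adj[path[-1]]))
        branch = loop_erase(path)
        for a, b in zip(branch, branch[1:]):
            parent[a] = b                # attach the loop-erased branch
    return parent
```

Composing this with $\varphi^{-1}$ gives a sampler for the stationary measure $\nu_G$.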
\medbreak
The LERW and Wilson's algorithm are also very useful when we pass to
infinite graphs (see \cite{BLPS}). In $\Z^d$, $d \ge 3$, the
loop-erasure of an infinite simple random walk path is still
well-defined, because the path visits any vertex only finitely
often, due to transience. In $\Z^2$
the definition of the infinite LERW is not as straightforward.
One possible definition is to take a LERW from the origin to the
boundary of a ball of radius $n$, and take the weak limit of these
paths as $n \to \infty$ \cite{LLbook}.
\begin{exercise}
Give a direct proof (without appealing to the Matrix-Tree Theorem)
that $\varphi : \cR_G \to \cT_G$ is a bijection.
\emph{Hint:} See Exercises \ref{ex:injective} and \ref{ex:inverse},
the hint for Exercise \ref{ex:comb} and \cite{HLMPPW}*{Lemma 4.2}.
\end{exercise}
\subsection{The rotor-router model}
\label{ssec:rotor-router}
The rotor-router model, invented by Jim Propp, is a
deterministic analogue of random walk \cite{HLMPPW}. It has also
been discovered independently in the physics literature, where it
is called the Eulerian walkers model \cite{PDDK96}. In the sandpile
model, each vertex $x$ has to ``wait'' until it has collected
$\deg(x)$ chips before it can send them to its neighbours.
The rotor-router mechanism allows us to send chips one-by-one.
The principal reference for this section is \cite{HLMPPW}.
The natural setting for the rotor-router walk is directed graphs.
However, since the emphasis in this section is
the connection to Abelian sandpiles, we will restrict to connected
graphs of the form $G = (V \cup \{ s \}, E)$ as in Section \ref{sec:model},
and regard each edge of $E$ as being present with both orientations.
For each $x \in V$, fix a cyclic
ordering of the edges incident with $x$, and orient these edges
outward from $x$. If $e$ is one of these edges ($\tail(e) = x$),
then we denote by $e^+$ the next edge in the cyclic ordering.
\begin{definition}
A \textbf{rotor configuration} is a choice of edges
\eqnst
{ \rho
= ( \rho(x) : x \in V ), }
such that $\rho(x) \in E$ and $\tail(\rho(x)) = x$ for each $x \in V$.
We think of $\rho(x)$ as the state of a rotor placed at the
vertex $x$. A \textbf{single-chip-and-rotor state} is a pair
$(\rho, w)$, where $w \in V$. We think of $w$ as the location
of a chip placed on the graph. The \textbf{rotor-router operation}
advances the rotor at the current position $w$ according to the
cyclic ordering, and then moves the chip following the
new direction of the rotor. That is, we assign to $(\rho, w)$
the new state $(\rho^+, w^+)$, where
\begin{eqnarray*}
\rho^+(y)
&=& \begin{cases}
(\rho(w))^+ & \text{if $y = w$;} \\
\rho(y) & \text{if $y \not= w$;}
\end{cases} \\
w^+
&=& \head(\rho^+(w)).
\end{eqnarray*}
Iterating the rotor-router operation gives the \textbf{rotor-router walk}.
When the chip arrives at the sink, we stop the walk, and remove the chip.
\end{definition}
\begin{lemma}
Starting from any single-chip-and-rotor state $(\rho,w)$, the rotor-router
walk eventually arrives at the sink.
\end{lemma}
\begin{proof}
If $x \sim s$, then after at most $\deg_G(x)$ visits to $x$, the walk will
arrive at $s$. Inducting along a path to $s$, we obtain that any vertex
can only be visited finitely many times before the walk arrives at the sink.
\end{proof}
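A single-chip rotor-router walk is only a few lines of code. In the sketch below (the representation is ours), the cyclic ordering at $x$ is a list of the heads of its outgoing edges, and the rotor at $x$ is an index into that list.

```python
def rotor_walk(neighbours, rotor, w, sink):
    """Single-chip rotor-router walk: `neighbours[x]` is the cyclic
    ordering at x (heads of the outgoing edges), `rotor[x]` an index
    into that list.  Advances the rotors in place and returns the
    number of steps until the chip reaches the sink."""
    steps = 0
    while w != sink:
        rotor[w] = (rotor[w] + 1) % len(neighbours[w])  # advance the rotor
        w = neighbours[w][rotor[w]]                     # then follow it
        steps += 1
    return steps
```

Since the rotors are updated in place, repeated calls simulate successive chips, which is exactly the setting of the chip addition operator below.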
\begin{definition}
A \textbf{chip-and-rotor state} is a pair $t = (\rho, \eta)$, where
$\rho$ is a rotor configuration and $\eta$ is a chip configuration
(sandpile) on $V$.
If $\eta(x) \ge 1$, we say that \textbf{$x$ is active}. In this case,
by \textbf{firing $x$} we mean letting a single chip at $x$ take one
rotor-router step. The state $t$ is \textbf{stable} if there is no active
vertex (all chips have moved to the sink).
\end{definition}
The next lemma shows that this model has an Abelian property.
\begin{lemma}
\label{lem:rotor-stabilize}
Starting from any chip-and-rotor state $(\rho, \eta)$, we reach the same
stable state eventually (that is when all chips arrived at the sink),
regardless of what rotor-router steps we choose.
\end{lemma}
\begin{proof}
This can be proved using similar ideas as Theorem \ref{thm:stabilization}.
\end{proof}
\begin{definition}
We denote by $\eta(\rho)$ the result of adding chips to the rotor configuration
$\rho$ according to $\eta$ and stabilizing. The \textbf{chip addition operator}
$E_x$ is defined on a rotor configuration $\rho$ as the result of
adding a single chip at $x$ and stabilizing, that is, $E_x \rho := \mathbf{1}_x(\rho)$.
\end{definition}
\begin{definition}
A rotor configuration $\rho$ is \textbf{acyclic}, if the edges $(\rho(x) : x \in V)$
form a spanning tree of $G$ (oriented towards $s$, necessarily).
\end{definition}
\begin{lemma}
\label{lem:permute}
For any chip configuration $\eta$, the map $\rho \mapsto \eta(\rho)$
permutes the collection of acyclic rotor configurations.
\end{lemma}
Perhaps the cleanest way to prove this is to consider unicycles on
strongly connected graphs; see \cite{HLMPPW}. For our specific
setting, the next two exercises sketch a proof.
\begin{exercise}
Show that in $\eta(\rho)$, each rotor is either in its original
position $\rho(x)$, or it points in the direction of the last chip
emitted from $x$. Conclude: if $\rho$ is acyclic, so is $\eta(\rho)$.
\end{exercise}
\begin{exercise}
Show that $\rho' \mapsto E_x(\rho')$, as a map from the collection of
acyclic rotor configurations to itself, is surjective for any $x \in V$.
The following steps can be used: Given $\rho$, add an oriented edge
$(s,x)$ to $\rho$.\\
(i) There is an oriented cycle starting at $s$, let $(y_1,s)$ be its
last edge. Place a chip at $y_1$, and move back the rotor at $y_1$
by one step. \\
(ii) Now there is an oriented cycle starting at $y_1$, let
$(y_2,y_1)$ be its last edge. Move the chip back to $y_2$, and
move back the rotor at $y_2$ by one step. \\
(iii) Show that eventually the chip arrives at $x$ and if the rotor
configuration at that time is $\rho'$, then we have $E_x \rho' = \rho$.
\end{exercise}
\begin{theorem} \ \\
(i) The map $(\rho, [\eta]) \mapsto \eta(\rho)$ defines an action of the
sandpile group on acyclic rotor configurations.\\
(ii) The action is transitive, that is, for any acyclic $\rho, \rho'$
there exists $\eta$ such that $\eta(\rho) = \rho'$.\\
(iii) The action is free, that is, if $\eta(\rho) = \rho$ for some
acyclic $\rho$ then $[\eta] = [0]$.
\end{theorem}
\begin{proof}
(i) From Lemma \ref{lem:rotor-stabilize} it is clear that
$\eta_2( \eta_1 ( \rho ) ) = (\eta_1 + \eta_2) (\rho)$. Suppose
$\eta_1 \sim \eta_2$. We show that $\eta_1 (\rho) = \eta_2 (\rho)$.
If $\eta(x) \ge \deg_G(x)$, we can advance $\deg_G(x)$ chips at $x$,
one along each edge incident with $x$, and leave the rotor at $x$
unchanged. It follows from this that $\eta(\rho) = \eta^\circ (\rho)$
for any chip configuration $\eta$. Let $I \in \cR_G$ be the sandpile
corresponding to the identity (i.e.~$[I] = [0]$). Then we have
\eqnst
{ I ( I (\rho) )
= (I + I) (\rho)
= (I + I)^\circ (\rho)
= I (\rho) }
for all acyclic rotor configurations $\rho$. Due to
Lemma \ref{lem:permute}, $\{ I(\rho) : \text{$\rho$ acyclic} \}
= \{ \rho : \text{$\rho$ acyclic} \}$, and it follows that
$I(\rho) = \rho$ for all acyclic $\rho$. Now we have
$(\eta_1 + I)^\circ = (\eta_2 + I)^\circ$, and
\eqnst
{ \eta_i (\rho)
= I ( \eta_i (\rho) )
= (I + \eta_i) (\rho)
= (I + \eta_i)^\circ (\rho), \quad i = 1, 2. }
This implies the claim.
(ii) Given $\rho$, $\rho'$, let $0 \le \alpha(x) < \deg_G(x)$ be
the number of turns the rotor at $x$ has to make from position
$\rho(x)$ to $\rho'(x)$. Adding chips to $\rho$ according to $\alpha$
and letting each chip take a single step we obtain a
chip-and-rotor state of the form $(\rho', \beta)$.
Choose a chip configuration $\sigma$ such that
$[\sigma] = [-\beta]$ (the inverse of $\beta$ in the sandpile
group). Let $\eta = \alpha + \sigma$. Then we have
\eqnst
{ \eta(\rho)
= (\sigma + \alpha)(\rho)
= (\sigma + \beta) (\rho')
= \rho'. }
This proves transitivity of the action.
(iii) Suppose $\eta$ is a chip configuration, $\rho$ is acyclic, and
$\eta(\rho) = \rho$. This means that adding chips according to $\eta$
the rotor at $x$ makes a non-negative integer $c_x$ number of full turns
during stabilization. Since all chips arrive at the sink, $\eta(x)$
equals the number of chips emitted from $x$ minus the number of chips
received at $x$, for each $x \in V$. Therefore:
\eqnst
{ \eta(x)
= \deg_G(x) c_x - \sum_{y \in V} a_{yx} c_y
= \sum_{y \in V} c_y \Delta'_{yx}, \quad x \in V. }
This shows that $[\eta] = [0]$.
\end{proof}
\begin{remark}
The above proof does not rely on the Matrix-Tree Theorem, and
in fact provides a new proof of it; see \cite{HLMPPW}*{Corollary 3.18}.
\end{remark}
\begin{remark}
Regarding acyclic rotor configurations as spanning trees of $G$,
the action of $K_G$ allows one to view the sandpile Markov chain
as a dynamics on trees. This dynamics on trees seems more transparent
and explicit than the one obtained using the burning bijection.
\end{remark}
\begin{open}
Is there a meaningful link between avalanches in the Abelian sandpile
and either the rotor-router dynamics on spanning trees or the dynamics
induced by the burning bijection?
\end{open}
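As a toy illustration of how single-chip rotor walks permute acyclic rotor configurations (cf.~Lemma \ref{lem:permute}), the following Python sketch routes one chip to the sink on the wired triangle (vertices $a$, $b$ and sink $s$). The tiny graph and the ``rotate, then move'' single-step convention are assumptions of this sketch, not notation from the text.

```python
# Route one chip by rotor walk on the wired triangle and check that
# acyclic rotor configurations are mapped bijectively to acyclic ones.
from itertools import product

sink = 's'
out = {'a': ['b', sink], 'b': ['a', sink]}   # cyclic order of out-edges

def is_acyclic(rho):
    # acyclic iff following the rotors from every vertex reaches the sink
    for v in out:
        seen, x = set(), v
        while x != sink:
            if x in seen:
                return False
            seen.add(x)
            x = rho[x]
    return True

def route_chip(rho, start):
    # one chip performs a rotor walk: rotate the rotor, then follow it
    rho = dict(rho)
    x = start
    while x != sink:
        order = out[x]
        rho[x] = order[(order.index(rho[x]) + 1) % len(order)]
        x = rho[x]
    return rho

configs = [dict(zip(out, c)) for c in product(*out.values())]
acyclic = [r for r in configs if is_acyclic(r)]
images = [route_chip(r, 'a') for r in acyclic]
assert all(is_acyclic(r) for r in images)            # acyclicity preserved
assert len({tuple(sorted(r.items())) for r in images}) == len(acyclic)
```

Here $3$ of the $4$ rotor configurations are acyclic, and routing a chip from $a$ permutes them.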
\subsection{The random cluster model / Tutte polynomial}
\label{ssec:Tutte}
The uniform spanning tree measure $\mathsf{UST}_G$ is a limiting
case of the so-called random cluster measure. The random cluster
model is a generalization of percolation. The relationship between
sandpiles and the random cluster measure leads to
a formula for the generating function of recurrent sandpiles
enumerated by their total number of particles. In this section
again $G = (V \cup \{ s \}, E)$ is a finite connected multigraph.
\begin{definition}
If $\eta \in \cR_G$, the \textbf{mass of $\eta$} is defined
as $m(\eta) = \sum_{x \in V} \eta(x)$. We put
$N_m = | \{ \eta \in \cR_G : m(\eta) = m \} |$, and
let $\cN(y) = \sum_{m} N_m y^m$ be the generating function
according to mass.
\end{definition}
\begin{exercise}
Show that for all $\eta \in \cR_G$ we have:
\eqnst
{ |E| - \deg_G(s)
\le m(\eta)
\le 2 |E| - \deg_G(s) - |V|. }
Show that the lower bound is achieved for any $\eta \in \cR_G$
that is \emph{minimal} in the sense that $\eta - \mathbf{1}_x \not\in \cR_G$
for all $x \in V$. \emph{Hint for the lower bound:} Use the burning algorithm.
\end{exercise}
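The bounds in the exercise can be checked by brute force on a small example. The sketch below takes the triangle $K_3$ with one vertex playing the role of the sink $s$, and uses the burning algorithm (discussed earlier in these notes) as a recurrence test; on this graph both bounds are attained.

```python
# Enumerate recurrent sandpiles on the triangle (vertex 0 = sink s) via
# Dhar's burning test, and check the mass bounds of the exercise.
from itertools import product

edges = [(0, 1), (0, 2), (1, 2)]
V = [1, 2]                                   # non-sink vertices
deg = {v: sum(1 for e in edges if v in e) for v in range(3)}

def is_recurrent(eta):
    # burning test: eta is recurrent iff every vertex eventually burns
    burnt = {0}
    changed = True
    while changed:
        changed = False
        for x in V:
            if x not in burnt:
                to_burnt = sum(1 for (a, b) in edges
                               if (a == x and b in burnt) or (b == x and a in burnt))
                if eta[x] >= deg[x] - to_burnt:
                    burnt.add(x)
                    changed = True
    return len(burnt) == 3

stable = (dict(zip(V, vals)) for vals in product(range(deg[1]), range(deg[2])))
recurrent = [eta for eta in stable if is_recurrent(eta)]
masses = sorted(sum(eta.values()) for eta in recurrent)
E, ds, nV = len(edges), deg[0], len(V)
# lower bound |E| - deg(s) = 1, upper bound 2|E| - deg(s) - |V| = 2
assert all(E - ds <= m <= 2 * E - ds - nV for m in masses)
assert masses[0] == E - ds and masses[-1] == 2 * E - ds - nV
```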
The \textbf{random cluster model} on $G$ has two parameters:
$0 < p < 1$ and $q > 0$. It is specified by a probability measure
$\Pr_{p,q}$ on the space $\{ E' : E' \subset E \}$, that is
given by:
\eqnst
{ \Pr_{p,q} [ E' ]
= \frac{1}{Z_{p,q}} p^{|E'|} (1 - p)^{|E|-|E'|} q^{k(E')}, }
where $k(E')$ denotes the number of connected clusters in the
edge-configuration $E'$, and $Z_{p,q}$ is a normalizing factor
to make $\Pr_{p,q}$ a probability measure. Observe that when
$q = 1$, we get the percolation model.
\begin{exercise}[See \cite{Grbook2}*{Theorem 1.23}]
As $p \to 0$ and $q/p \to 0$ we have $\Pr_{p,q} \to \mathsf{UST}_G$.
\end{exercise}
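The exercise can be illustrated numerically: on the triangle, with $p$ small and $q \ll p$, the random cluster weights concentrate on the three spanning trees, each receiving probability close to $1/3$. The particular values $p = 10^{-2}$, $q = 10^{-6}$ below are arbitrary choices for illustration.

```python
# Random cluster weights on the triangle: for p small and q << p the
# measure is close to uniform on the 3 spanning trees.
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2)]
n_vertices = 3

def n_components(edge_set):
    # union-find count of connected clusters (isolated vertices included)
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (a, b) in edge_set:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in range(n_vertices)})

p, q = 1e-2, 1e-6
weights = {}
for k in range(len(edges) + 1):
    for sub in combinations(edges, k):
        weights[sub] = p**k * (1 - p)**(len(edges) - k) * q**n_components(sub)
Z = sum(weights.values())
trees = [sub for sub in weights if len(sub) == 2 and n_components(sub) == 1]
probs = [weights[t] / Z for t in trees]
assert len(trees) == 3
assert all(abs(pr - 1/3) < 0.02 for pr in probs)
```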
Abbreviating $v = p/(1-p)$, we have
\eqnsplst
{ Z_{p,q}
= \sum_{E' \subset E} p^{|E'|} (1 - p)^{|E|-|E'|} q^{k(E')}
= (1 - p)^{|E|} \sum_{E' \subset E} v^{|E'|} q^{k(E')}
=: (1 - p)^{|E|} Z'_{v,q}. }
Letting $q \downarrow 0$, the dominant terms are the ones
with $k(E') = 1$, that is, the ones where $E'$ connects all the vertices.
The number of edges in such a graph is at least $|V|$
(with equality for spanning trees). Hence we can write:
\eqnst
{ Z'_{v,q}
= q v^{|V|} H(v) + O(q^2), \quad \text{ as $q \downarrow 0$,} }
where $H(v)$ is a polynomial. Note that $H(0)$ equals the
number of spanning trees of $G$.
The following theorem is due to Merino L{\'o}pez \cite{ML97}.
\begin{theorem}
\label{thm:Merino}
We have
\eqnst
{ \cN(y)
= y^{|E|-\deg_G(s)} H(y-1). }
\end{theorem}
See \cite{Dhar06} for a proof that follows the ideas of the
burning algorithm to associate recurrent configurations to
connected subgraphs $E'$.
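Theorem \ref{thm:Merino} can be verified by brute force on the triangle with a sink. In the sketch below, $\cN(y)$ is computed by enumerating recurrent configurations via the burning test, and $H$ by enumerating connected spanning subgraphs; here $H(0) = 3$ is the number of spanning trees, as noted above.

```python
# Verify N(y) = y^{|E|-deg(s)} H(y-1) on the triangle (vertex 0 = sink s):
# here N(y) = 2y + y^2 and H(v) = 3 + v.
from itertools import product, combinations
from math import comb

edges = [(0, 1), (0, 2), (1, 2)]
V = [1, 2]
deg = {v: sum(1 for e in edges if v in e) for v in range(3)}

def is_recurrent(eta):
    # Dhar's burning test
    burnt = {0}
    changed = True
    while changed:
        changed = False
        for x in V:
            if x not in burnt:
                to_burnt = sum(1 for (a, b) in edges
                               if (a == x and b in burnt) or (b == x and a in burnt))
                if eta[x] >= deg[x] - to_burnt:
                    burnt.add(x)
                    changed = True
    return len(burnt) == 3

def n_comp(sub):
    parent = list(range(3))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in sub:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in range(3)})

# left hand side: coefficients of N(y) from recurrent sandpiles
N = [0] * (2 * len(edges) + 1)
for vals in product(range(deg[1]), range(deg[2])):
    if is_recurrent(dict(zip(V, vals))):
        N[sum(vals)] += 1
# H(v): connected spanning subgraphs E', coefficient of v^{|E'| - |V|}
H = [0] * (len(edges) + 1)
for k in range(len(V), len(edges) + 1):
    for sub in combinations(edges, k):
        if n_comp(sub) == 1:
            H[k - len(V)] += 1
# right hand side: expand y^{|E|-deg(s)} H(y-1) by the binomial theorem
rhs = [0] * len(N)
for j, hj in enumerate(H):
    for i in range(j + 1):
        rhs[len(edges) - deg[0] + i] += hj * comb(j, i) * (-1) ** (j - i)
assert N == rhs
```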
One has $H(y-1) = T(1,y;G)$, where $T(x,y;G)$ is the so-called
Tutte polynomial of $G$, a well-known graph invariant in combinatorics
\cite{Bbook}. More generally, $Z_{p,q}$ can be expressed
in terms of the Tutte polynomial; see \cite{Grbook2}*{Section 3.6}.
\begin{exercise}
According to Theorem \ref{thm:Merino}, the number of
recurrent sandpiles that have the minimal number of particles
is $H(-1) = T(1,0;G)$. It is known \cite{Bbook}
that $T(1,0;G)$ counts the number of \emph{acyclic orientations}
of $G$ with a unique sink at a fixed vertex of $G$. Taking the
unique sink to be at $s$, use the burning algorithm to
construct an explicit bijection between
\emph{minimal sandpiles} and \emph{acyclic orientations}
of $G$ with unique sink at $s$.
\end{exercise}
\section{Determinantal formulas and exact computations}
\label{sec:determinantal}
In this section we will see that certain sandpile probabilities
can be expressed in terms of determinants, and in some
cases these can be evaluated explicitly. The fundamental fact behind
this is that all finite-dimensional marginals of the uniform spanning tree
admit a determinantal formula.
\subsection{The Transfer-Current Theorem}
Let $G$ be a finite connected (unoriented) graph. Write $T_G$ for a random
spanning tree of $G$ chosen uniformly. The following
theorem is due to Burton and Pemantle.
\begin{theorem}[\textbf{Transfer Current Theorem} \cite{BP93}]
\label{thm:TCT}
There exists a matrix $Y_G$ such that for any $k \ge 1$ and
distinct edges $e_1, \dots, e_k$ of $G$ we have
\eqn{e:transfer-current}
{ \Pr [ e_1, \dots, e_k \in T_G ]
= \det ( Y_G(e_i, e_j) )_{i,j=1}^k. }
\end{theorem}
The simplest definition of the \textbf{transfer-current matrix} $Y_G$,
is in terms of random walk. (See \cite{LPbook} for a definition of $Y_G$
in terms of electrical networks.) Given \emph{oriented} edges $e,f$ of $G$,
consider the simple random walk on $G$ started at $\tail(e)$ and stopped
when it first hits $\head(e)$. Let $J^e(f)$ be the expected net usage of $f$
by the walk, i.e.~the expected number of times $f$ was used
minus the expected number of times the reversal of $f$ was used.
Then $Y_G(e,f) = J^e(f)$. Note that this requires us to choose an orientation
for each edge appearing on the right hand side of the Transfer Current Theorem,
whereas on the left hand side the edges are unoriented. It is part of the
statement of the theorem that the right hand side is independent of what
orientations are chosen.\footnote{This can also be checked directly
using the time reversal of the simple random walk on $G$.}
Due to the structure present in \eqref{e:transfer-current}, the random
collection of edges $T_G$ is called a \textbf{determinantal process}
with \textbf{kernel} $Y_G$.
There is an extension of \eqref{e:transfer-current} to all
cylinder events, also due to \cite{BP93}. A simple case of it that we will
use later is:
\eqnst
{ \Pr [ e_1, \dots, e_k \not\in T_G ]
= \det ( K_G ( e_i, e_j ) )_{i,j=1}^k, }
where $K_G = I_G - Y_G$, with $I_G$ the identity matrix.
See the survey \cite{HKPV06} for more information on determinantal processes.
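On small graphs the transfer-current matrix can be computed from the electrical interpretation: $Y_G(e,f)$ is the current through $f$ when a unit current is sent from $\tail(e)$ to $\head(e)$. The sketch below does this for the triangle, where each edge lies in $2$ of the $3$ spanning trees, so $\Pr[e \in T_G] = 2/3$ and $\Pr[e_1, e_2 \in T_G] = 1/3$ for distinct edges.

```python
# Transfer-current matrix of the triangle (unit conductances) via voltages.
def solve(A, b):
    """Tiny Gauss-Jordan elimination, adequate for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

edges = [(0, 1), (1, 2), (2, 0)]          # oriented edges of the triangle

def transfer_current(e, f):
    # voltages with vertex 2 grounded; Laplacian of the triangle
    # restricted to vertices {0, 1}
    L = [[2.0, -1.0], [-1.0, 2.0]]
    b = [0.0, 0.0]
    for v, s in ((e[0], 1.0), (e[1], -1.0)):
        if v < 2:
            b[v] += s
    volt = solve(L, b) + [0.0]
    return volt[f[0]] - volt[f[1]]        # current through f

Y = [[transfer_current(e, f) for f in edges] for e in edges]
assert abs(Y[0][0] - 2 / 3) < 1e-9                     # Pr[e in T]
minor = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]
assert abs(minor - 1 / 3) < 1e-9                       # Pr[e1, e2 in T]
```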
\subsection{The height $0$ probability}
\label{ssec:height-0}
No simple formula like \eqref{e:transfer-current} is known for the
finite-dimensional marginals of the sandpile measure $\nu_G$. However,
there is a method due to Majumdar and Dhar \cite{MD91} for the
computation of the probabilities of \emph{minimal configurations}.
The simplest example is computing the probability that $\eta(o) = 0$,
that we now explain.
Consider $V_n = [-n,n]^2 \cap \Z^2$, and let $G_n$ be the wired graph
obtained from $V_n$. We write $\nu_n = \nu_{G_n}$ for short.
We will obtain a formula for $\nu_n [ \eta(o) = 0 ]$ that makes it
possible to compute its limit as $n \to \infty$.
Let $j_1, j_2, j_3$ denote the south, west, north neighbours of the origin,
respectively. Let $G'_n$ be the graph obtained from $G_n$
by deleting the edges $\{ o, j_i \}$, $i = 1, 2, 3$.
Given $\eta \in \cR_{G_n}$, let
\eqnst
{ \eta'(y)
:= \begin{cases}
\eta(y) - 1 & \text{if $y = j_1, j_2, j_3$;} \\
\eta(y) & \text{otherwise.}
\end{cases} }
\begin{exercise}
\label{ex:reduce}
Show that
\eqnst
{ \eta \in \cR_{G_n},\, \eta(o) = 0 \qquad\qquad \text{ if and only if } \qquad\qquad
\eta' \in \cR_{G'_n}. }
\emph{Hint:} Use the burning algorithm.
\end{exercise}
We write $\Delta'_{G'_n}$ in the form $\Delta'_{G'_n} = \Delta'_{G_n} + B$.
Note that the matrix $B$ has nonzero entries only in the rows and columns
corresponding to $\{ o, j_1, j_2, j_3 \}$, and these are:
\eqn{e:Bmatrix}
{ \begin{blockarray}{ccccc}
o & j_1 & j_2 & j_3 & \\
\begin{block}{(cccc)c}
-3 & 1 & 1 & 1 & o \\
1 & -1 & 0 & 0 & j_1 \\
1 & 0 & -1 & 0 & j_2 \\
1 & 0 & 0 & -1 & j_3 \\
\end{block}
\end{blockarray} }
The above allows us to write
\eqnspl{e:calculate}
{ \nu_n [ \eta(o) = 0 ]
&= \frac{| \{ \eta \in \cR_{G_n} : \eta(o) = 0 \} |}{|\cR_{G_n}|}
\stackrel{\text{Ex.~\ref{ex:reduce}}}{=}
\frac{|\cR_{G'_n}|}{|\cR_{G_n}|}
= \frac{\det ( \Delta'_{G_n} + B )}{\det ( \Delta'_{G_n} )} \\
&= \det ( I + B (\Delta'_{G_n})^{-1} ). }
Due to the fact that $B$ is $0$ apart from the entries shown in \eqref{e:Bmatrix},
the determinant on the right hand side of \eqref{e:calculate}
reduces to a $4 \times 4$ determinant. Recall that
$(\Delta'_{G_n})^{-1}_{xy} = G_n(x,y)$. Since
the random walk is recurrent in two dimensions,
$\lim_{n \to \infty} G_n(x,y) = \infty$.
Hence in order to take the limit $n \to \infty$, we need to
rely on cancellations.
The \textbf{random walk potential kernel} is defined as
\eqnst
{ a(x)
= \lim_{N \to \infty} \sum_{k=0}^{N} \left[ \p^k(o,o) - \p^k(o,x) \right], }
where $\p^k(z,x)$ is the $k$-step transition probability of
simple random walk on $\Z^2$. See \cite{LLbook}*{Section 4.4.1}
for a proof that the limit exists and for further background.
Note that $a(o) = 0$. It holds that
$(\frac{1}{4}\Delta a) (x) = - \mathbf{1}_o(x)$ (see \cite{LLbook}*{Proposition 4.4.2});
in particular, $a$ is a discrete harmonic function in
$\Z^2 \setminus \{ o \}$. In what follows it will be convenient to work
with the rescaled kernel $A(x) := \frac{1}{4} a(x)$.
The potential kernel is related to $G_n$ by the following
well-known lemma.
\begin{lemma}
\label{lem:pot-kern}
For all $x \in \Z^2$, we have
\eqnst
{ A(x)
= \lim_{n \to \infty} [ G_n(o,o) - G_n(o,x) ]. }
\end{lemma}
Since we are going to prove a stronger version of this statement
in Lemma \ref{lem:estimates}, Eqn.~\eqref{e:Gn(z,o)}, we omit the proof.
The values of $A(x)$ can be computed recursively from symmetry considerations
and the facts that:\\
(i) $\frac{1}{4} \Delta A(x) = - \frac{1}{4} \mathbf{1}_o(x)$;\\
(ii) $A((n,n)) = \frac{1}{\pi} \left[ 1 + \frac{1}{3} + \dots + \frac{1}{2n-1} \right]$;\\
see for example \cite{LPbook} or \cite{Spbook}*{Section 15}.
In particular,
\eqn{e:pot-kern}
{ A(o) = 0,\
A(j_1) = \frac{1}{4},\
A(j_1+j_2) = \frac{1}{\pi},\
A(j_1-j_3) = 1 - \frac{2}{\pi},\
A(j_1 + j_2 - j_3) = \frac{-1}{4} + \frac{2}{\pi}. }
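The values in \eqref{e:pot-kern} can also be checked numerically via Lemma \ref{lem:pot-kern}, approximating $A(x)$ by $G_n(o,o) - G_n(o,x)$ on a small box. The box size $n = 6$, the pure-Python solver and the coarse tolerance below are choices made to keep the sketch dependency-free; the agreement improves as $n$ grows.

```python
# Approximate A(x) = lim [G_n(o,o) - G_n(o,x)] on the box [-n,n]^2 with
# wired boundary, and compare with the closed-form values.
from math import pi

def solve(A, b):
    # Gauss-Jordan elimination, adequate for this moderate dense system
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][m] / M[i][i] for i in range(m)]

n = 6
pts = [(i, j) for i in range(-n, n + 1) for j in range(-n, n + 1)]
idx = {p: k for k, p in enumerate(pts)}
L = [[0.0] * len(pts) for _ in pts]
for (i, j), a in idx.items():
    L[a][a] = 4.0                       # wired: missing neighbours lead to s
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (i + di, j + dj) in idx:
            L[a][idx[(i + di, j + dj)]] = -1.0
b = [0.0] * len(pts)
b[idx[(0, 0)]] = 1.0
g = solve(L, b)                         # g[x] = G_n(o, x)

def A_approx(x):
    return g[idx[(0, 0)]] - g[idx[x]]

exact = {(0, -1): 0.25, (-1, -1): 1 / pi,
         (0, -2): 1 - 2 / pi, (-1, -2): -0.25 + 2 / pi}
for x, val in exact.items():
    assert abs(A_approx(x) - val) < 0.05   # coarse check at this box size
```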
Let us return to the limit of the determinant in \eqref{e:calculate}.
Since the row sums of $B$ are $0$, the computation can be recast in terms of $A$.
For example, the $o,o$ entry of $I + B (\Delta'_{G_n})^{-1}$ equals
\eqnst
{ 1 - 3 G_n(o,o) + G_n(o,j_1) + G_n(o,j_2) + G_n(o,j_3)
\stackrel{n \to \infty}{\longrightarrow} 1 - 3 A(j_1)
= \frac{1}{4}. }
Straightforward calculations using the values \eqref{e:pot-kern} and symmetry yield:
\eqn{e:p(0)}
{ p(0)
:= \lim_{n \to \infty} \nu_n [ \eta(o) = 0 ]
= \nu [ \eta(o) = 0 ]
= \frac{2}{\pi^2} - \frac{4}{\pi^3}. }
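As a consistency check of \eqref{e:calculate} and \eqref{e:p(0)}: since the row sums of $B$ vanish, in the limit each entry of $B (\Delta'_{G_n})^{-1}$ can be computed from the values \eqref{e:pot-kern}, extended by lattice symmetry, and the resulting $4 \times 4$ determinant reproduces $2/\pi^2 - 4/\pi^3$ exactly (up to floating point error).

```python
# Limit of det(I + B (Delta'_n)^{-1}) computed from exact potential kernel
# values: since the row sums of B vanish, G_n(u,w) may be replaced by -A(w-u).
from math import pi

sites = [(0, 0), (0, -1), (-1, 0), (0, 1)]          # o, j1, j2, j3
B = [[-3, 1, 1, 1],
     [1, -1, 0, 0],
     [1, 0, -1, 0],
     [1, 0, 0, -1]]

# potential kernel values from the text, extended by lattice symmetry
base = {(0, 0): 0.0, (0, 1): 0.25, (1, 1): 1 / pi,
        (0, 2): 1 - 2 / pi, (1, 2): -0.25 + 2 / pi}
A = {}
for (i, j), v in base.items():
    for si in (1, -1):
        for sj in (1, -1):
            A[(si * i, sj * j)] = v
            A[(sj * j, si * i)] = v

def det(m):
    # cofactor expansion, fine for a 4x4 matrix
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def entry(v, w):
    dw = sites[w]
    s = sum(B[v][u] * A[(dw[0] - sites[u][0], dw[1] - sites[u][1])]
            for u in range(4))
    return (1.0 if v == w else 0.0) - s

p0 = det([[entry(v, w) for w in range(4)] for v in range(4)])
assert abs(p0 - (2 / pi ** 2 - 4 / pi ** 3)) < 1e-12
```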
When $d \ge 3$, a similar argument can be applied. Now we have
$\lim_{n \to \infty} G_n(x,y) = G(x,y)$, so
$\nu [ \eta(o) = 0 ]$ is expressed
in terms of the Green function $G(x,y)$.
\subsection{The height $0$-$0$ correlation}
\label{ssec:corr-0-0}
The idea of Majumdar and Dhar presented in the previous section also gives
a formula for the covariance between the events $\{ \eta(o) = 0 \}$ and
$\{ \eta(y) = 0 \}$; see \cite{MD91}. Consider first
two dimensions. This time we modify the
graph both near $o$ and $y$, by removing the edges leading
from $o$ to $j_1$, $j_2$, $j_3$, and the edges leading from
$y$ to neighbours $j'_1$, $j'_2$, $j'_3$.
Then similarly to the previous section,
$\nu_n [ \eta(o) = 0,\, \eta(y) = 0 ]$ can be
written as an $8 \times 8$ determinant, that arises from $4$
blocks of size $4 \times 4$. Since the row sums of $B$ are $0$,
row and column operations can be used to reduce the size of the blocks
to $3 \times 3$. The result of this can be written in the
form (see for example \cite{Durre}):
\eqn{e:det-3x3}
{ \nu_n [ \eta(o) = 0,\, \eta(y) = 0 ]
= \det \left( I_{v = w} - K_n(v,w) \right)_{v,w \in \{o,y\}}, }
where $I_{v=w}$ is the $3 \times 3$ identity matrix when $v = w$ and the
$3 \times 3$ null matrix when $v \not= w$. The $3 \times 3$ matrix
$K_n(v,w)$ is given by:
\eqn{e:K_n}
{ K_n(v,w)
= \left( \partial^{(1)}_{e} \partial^{(2)}_{f} G_n(v,w) \right)_{e, f}, }
where for any function $h : \Z^2 \to \R$ and vector $e \in \Z^2$
we define:
\eqnsplst
{ \partial^{(1)}_e h(v,w)
&= h(v+e,w) - h(v,w), \\
\partial^{(2)}_f h(v,w)
&= h(v,w+f) - h(v,w). }
In the formula \eqref{e:K_n}, the vectors $e$ and $f$ range over the
unit vectors: $(0,-1)$, $(-1,0)$ and $(0,1)$ (these are the vectors
pointing from $o$ to $j_1$, $j_2$ and $j_3$).
Letting $n \to \infty$, we obtain an expression
\eqn{e:det-3x3-asymp}
{ \nu [ \eta(o) = 0,\, \eta(y) = 0 ]
= \det \left( I_{v = w} - K(v,w) \right)_{v,w \in \{o,y\}}, }
with
\eqn{e:K}
{ K(v,w)
= - \left( \partial^{(1)}_{e} \partial^{(2)}_{f} A(w-v) \right)_{e, f}, }
the sign coming from $G_n(v,w) = G_n(o,o) - A(w-v) + o(1)$ as $n \to \infty$.
Here $\det(I - K(o,o)) = \det(I - K(y,y)) = p(0)$, from the previous
section. In order to understand how correlations decay as $|y| \to \infty$,
let us examine the order of magnitude of the entries of $K(o,y)$.
It is well known (see \cite{LLbook}*{Theorem 4.4.4}) that there
exists a constant $c_0$ such that
\eqn{e:pot-kern-asymp}
{ A(y)
= \frac{1}{2 \pi} \log|y| + c_0 + O\left( |y|^{-2} \right), \quad
\text{as $|y| \to \infty$.} }
This shows that the entries of $K(o,y)$ (and those of $K(y,o)$)
are $O(|y|^{-2})$, and hence
\eqn{e:corr-asymp}
{ \nu [ \eta(o) = 0,\, \eta(y) = 0 ] - \nu [ \eta(o) = 0 ] \nu [ \eta(y) = 0 ]
= O \left( |y|^{-4} \right), \quad \text{as $|y| \to \infty$.} }
One can compute the constant in this asymptotics using more precise
information on the error term in \eqref{e:pot-kern-asymp}.
Indeed, regarding $y$ as a complex number, the error term
in \eqref{e:pot-kern-asymp} is of the form
\eqnst
{ \frac{\mathfrak{Re}\, y^4}{|y|^6} + O \left( |y|^{-4} \right); }
see \cite{FU96} or \cite{KS04}.
Therefore, after taking second differences, the error term
of \eqref{e:pot-kern-asymp} does not contribute to the $|y|^{-4}$
term in \eqref{e:corr-asymp}.
This yields the result obtained by Majumdar and Dhar \cite{MD91}:
\eqn{e:cov-0-0}
{ \nu [ \eta(o) = 0,\, \eta(y) = 0 ] - \nu [ \eta(o) = 0 ] \nu [ \eta(y) = 0 ]
\sim - \frac{p(0)^2}{2 |y|^4}, \quad \text{as $|y| \to \infty$.} }
In dimensions $d \ge 3$ a similar computation can be carried out
showing that the covariance between two $0$'s decays as
$-c |y|^{-2d}$, as $|y| \to \infty$, with $c = c(d) > 0$.
\subsection{Scaling limit of the height $0$ field}
\label{ssec:scaling-0}
The second differences of discrete Green functions considered in the
previous section converge, under rescaling, to partial derivatives
of continuous Green functions. This allows one to obtain formulas for the
scaling limit of the covariance functions between heights $0$.
The result is especially interesting in two dimensions, as there the
continuous Green function is conformally invariant, which implies
that the covariance functions transform in a nice way under
conformal maps. Although this fact seems to be well known to physicists
(see for example \cite{IP98}*{Section 3.3} and \cite{JPR06}),
we are not aware of a mathematically precise formulation of it in the
physics literature. We state below a theorem of D\"{u}rre \cite{Durre}
that provides such a formulation.
Let $U \subset \mathbb{C}$ be a bounded connected domain with smooth
boundary. Let $U_\eps = (U/\eps) \cap \Z^2$, and for $v \in U$
let $v_\eps \in U_\eps$ be such that $|v/\eps - v_\eps| \le 2$.
Denote $h_\eps (v) = \mathbf{1}_{\eta(v) = 0}$, which is a random field,
indexed by $v \in U_\eps$, under the measure $\nu_{U_\eps}$.
\begin{theorem}[\cite{Durre}*{Theorem 1}]
Let $V \subset U$ be a finite set of points in the interior of $U$. Then as
$\eps \to 0$, the rescaled joint moments
\eqnst
{ \eps^{-2 |V|} \E_{\nu_{U_\eps}} \left[ \prod_{v \in V}
\left[ h_\eps(v_\eps) - \E_{\nu_{U_\eps}} h_\eps(v_\eps) \right] \right] }
have a finite limit $E_U(v : v \in V)$, which is conformally covariant
with scale dimension $2$.
\end{theorem}
Here \textbf{conformally covariant} means that if $f: U \to U'$ is a conformal map,
then
\eqnst
{ E_{U}(v : v \in V)
= E_{U'}(f(v) : v \in V) \cdot \prod_{v \in V} \left| f'(v) \right|^2, }
and the exponent $2$ is the \textbf{scale dimension}.
When $V = \{ v, w \}$, the limit is:
\eqnst
{ E_U(v,w)
= -c \left[ \left( \partial^{(1)}_x \partial^{(2)}_x g_U \right)^2
+ \left( \partial^{(1)}_y \partial^{(2)}_y g_U \right)^2
+ \left( \partial^{(1)}_x \partial^{(2)}_y g_U \right)^2
+ \left( \partial^{(1)}_y \partial^{(2)}_x g_U \right)^2 \right], }
where $g_U(v,w) = g_U((x_1,y_1),(x_2,y_2))$ is the continuous Green function in $U$
for the Laplacian with Dirichlet boundary conditions.
Summability of the covariance function $-c/|y|^4$ of \eqref{e:cov-0-0}
suggests that if the random field $h_\eps$ is integrated against
smooth test functions then we get a Gaussian limit. This is indeed the case.
\begin{theorem}[\cite{Durre}*{Theorem 3}]
There is a constant $\mathcal{V} > 0$ such that the following holds.
Let $f_i \in C_0^\infty(U)$, $1 \le i \le n$. Then the random variables
\eqnst
{ f_i \diamond h_\eps
:= \frac{\eps}{\sqrt{\mathcal{V}}} \sum_{v \in U_\eps}
f_i(\eps v) \left( h_\eps(v) - \E_{\nu_{U_\eps}} h_\eps(v) \right) }
converge in distribution to a multivariate normal random variable with
covariance
\eqnst
{ C_{ij}
= \int_U f_i(x,y) f_j(x,y) \, dx \, dy, \quad i, j = 1, \dots, n. }
\end{theorem}
\subsection{The probabilities of heights $1$, $2$, $3$ in two dimensions}
\label{ssec:height-123}
The probabilities of heights different from $0$ are, in general, more difficult to compute.
In the case of an infinite regular tree all height probabilities can be computed
using combinatorial methods; see \cite{DM90}. However, on Euclidean lattices
of dimension at least $2$, exact results are only known when $d = 2$.
The goal of this section is to sketch the main ideas of Priezzhev \cite{Pr94}
that yield the probabilities of heights $1$, $2$, $3$ on $\Z^2$.
This section is quite long and technical in many parts, so the reader might
want to skip some of the proofs on first reading.
\subsubsection{Background}
Let us denote
\eqnst
{ p(i)
:= \nu [ \eta(o) = i ], \quad i = 0, 1, \ldots, 2d-1. }
In the case $d = 2$, Priezzhev \cite{Pr94} gave exact formulas for $p(1), p(2), p(3)$.
He was able to express them in terms of explicit rational polynomials in $1/\pi$
and two multiple integrals. Grassberger evaluated the integrals numerically
(see \cite{Dhar06}*{Section 9.3.1}), and observed that,
mysteriously, the \emph{average height} $\zeta = \sum_{i=0}^3 i\, p(i)$
appears to equal the simple rational number $17/8$.
Jeng, Piroux and Ruelle \cite{JPR06} extended Priezzhev's ideas, and
in particular, were able to express the $p(i)$'s in terms of a single integral.
They noticed that numerical evaluation of the unknown integral gave $1/2 \pm 10^{-12}$,
and conjectured that this integral is exactly equal to $1/2$.
Combining this conjecture with Priezzhev's work,
they obtained the remarkable formulas:
\eqnspl{e:p(i)'s}
{ p(0)
&= \frac{2}{\pi^2} - \frac{4}{\pi^3} \\
p(1)
&= \frac{1}{4} - \frac{1}{2 \pi} - \frac{3}{\pi^2} + \frac{12}{\pi^3} \\
p(2)
&= \frac{3}{8} + \frac{1}{\pi} - \frac{12}{\pi^3} \\
p(3)
&= 1 - p(0) - p(1) - p(2)
= \frac{3}{8} - \frac{1}{2 \pi} + \frac{1}{\pi^2} + \frac{4}{\pi^3}. }
These values indeed yield $\zeta = 17/8$ as the average height.
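The formulas \eqref{e:p(i)'s} can be checked with exact rational arithmetic, treating each $p(i)$ as a polynomial in $1/\pi$:

```python
# Each p(i) is encoded by its coefficients in (1, 1/pi, 1/pi^2, 1/pi^3).
from fractions import Fraction as F

p = [
    (F(0), F(0), F(2), F(-4)),             # p(0)
    (F(1, 4), F(-1, 2), F(-3), F(12)),     # p(1)
    (F(3, 8), F(1), F(0), F(-12)),         # p(2)
    (F(3, 8), F(-1, 2), F(1), F(4)),       # p(3)
]
total = tuple(sum(pk[k] for pk in p) for k in range(4))
assert total == (F(1), F(0), F(0), F(0))   # the probabilities sum to 1
zeta = tuple(sum(i * p[i][k] for i in range(4)) for k in range(4))
assert zeta == (F(17, 8), F(0), F(0), F(0))  # average height 17/8
# and zeta = d + (xi - 1)/2 with d = 2 gives the looping constant xi = 5/4
assert F(17, 8) == 2 + (F(5, 4) - 1) / 2
```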
Poghosyan and Priezzhev \cite{PP11} observed that the average height
can be rephrased in terms of the LERW. Let
\eqnst
{ \xi
= \E [ \text{number of neighbours of $o$ visited by the infinite LERW} ]. }
Then the statement $\zeta = 17/8$ is equivalent to $\xi = 5/4$ (this
equivalence will be explained below). Levine and Peres \cite{LP13}
called $\xi$ the \textbf{looping constant of $\Z^2$}, and proved
further relations between $\xi$, the number of spanning unicycles
and the Tutte polynomial. The relations they prove hold
in all dimensions $d \ge 2$.
Kenyon and Wilson \cite{KW11}, and independently,
Poghosyan, Priezzhev and Ruelle \cite{PPR11} gave different proofs that
$\xi = 5/4$, which in turn gives a rigorous confirmation that the
aforementioned integral is exactly $1/2$ and proves the values \eqref{e:p(i)'s}.
Kenyon and Wilson in fact develop a general method for
calculating the probability that the infinite LERW in two dimensions
passes through any given vertex or any given oriented edge of $\Z^2$.
The proof of Poghosyan, Priezzhev and Ruelle proceeds via a
connection to monomer-dimer coverings. They reduce the
statement $\xi = 5/4$ to calculating the probabilities of
some local events in the monomer-dimer model that can be expressed in
terms of finite determinants akin to the calculations in
Section \ref{ssec:height-0}.
\subsubsection{The looping constant}
\label{sssec:looping}
Let us see the connection between the average height and the looping
constant. It will be convenient at this point
to introduce a slightly different
version of the burning bijection. In what follows, let
$G_n = (V_n \cup \{ s \}, E_n)$ be the
wired graph obtained from $V_n = [-n,n]^d \cap \Z^d$, $d \ge 2$.
\medbreak
\textbf{Burning bijection anchored at the origin.}
The burning process will consist of two phases.
\emph{Phase I.} We burn all vertices we can \emph{without} burning
the origin. That is, we define $B^{(I)}_0 = \{ s \}$, $U^{(I)}_0 = V_n$, and
for $t \ge 1$ we set:
\eqnsplst
{ B^{(I)}_{t}
&:= \left\{ x \in U^{(I)}_{t-1} \setminus \{ o \} :
\eta(x) \ge \deg_{U^{(I)}_{t-1}}(x) \right\} \\
U^{(I)}_t
&:= U^{(I)}_{t-1} \setminus B^{(I)}_t. }
At some finite time no more vertices can be burnt, that is, $B^{(I)}_{J} = \es$
for some $1 \le J < \infty$.
\emph{Phase II.} Burn all the remaining vertices in the usual way.
That is, we start with $B^{(II)}_0 = \cup_{j \ge 0} B^{(I)}_j$,
$U^{(II)}_0 = \cap_{j \ge 0} U^{(I)}_j$, and for $t \ge 1$ set
\eqnsplst
{ B^{(II)}_{t}
&:= \left\{ x \in U^{(II)}_{t-1} :
\eta(x) \ge \deg_{U^{(II)}_{t-1}}(x) \right\} \\
U^{(II)}_t
&:= U^{(II)}_{t-1} \setminus B^{(II)}_t. }
It is not difficult to see that for all $\eta \in \cR_G$,
all vertices burn eventually.
Now build a bijection $\varphi_o : \cR_G \to \cT_G$ based on the
above burning rule, similarly to what we did in Section \ref{ssec:burning}.
That is, if $x \in B^{(I)}_t$ for some $t \ge 1$, draw an oriented edge from $x$
to one of its neighbours in $B^{(I)}_{t-1}$. If $x \in B^{(II)}_t$ for
some $t \ge 1$, draw an edge from $x$ to one of its neighbours
in $B^{(II)}_{t-1}$. In both cases, break ties as in the usual bijection,
if necessary.
\medbreak
The following two claims are not difficult to verify, and are left as
exercises. See also \cite{JW12}.
\begin{exercise}
\label{ex:desc}
Put
\eqnst
{ W_n
= W_n (\eta)
= \{ \text{vertices that did not burn in Phase I} \}
= U^{(II)}_0. }
Then $W_n = \{ \text{descendants of $o$ in $\tau = \varphi_o(\eta)$} \}$.
Here a vertex $w$ is called a descendant of the vertex $v$, if $v$
lies on the unique path between $w$ and $s$ in the tree $\tau$.
\end{exercise}
\begin{exercise}
\label{ex:cond-height}
Under the measure $\nu_n$ and conditional on the event $\deg_{W_n} (o) = i$,
the random variable $\eta(o)$ is uniformly distributed on $\{ i, i+1, \dots, 2d-1 \}$.
\emph{Hint:} Condition further on the entire set $W_n$, and consider
the possible values of $\eta(o)$ in relation to the outgoing edge
from $o$ in $\varphi_o(\eta)$.
\end{exercise}
The $p(i)$'s can be rephrased in terms of the quantities
\eqnst
{ q(i)
:= \lim_{n \to \infty} \nu_n [ \deg_{W_n}(o) = i ], \quad i = 0, 1, \dots, 2d - 1. }
Existence of the limit follows, for example, from results presented in
Section \ref{sec:measures}. Due to Exercise \ref{ex:cond-height}, we have
\eqn{e:p-q}
{ p(i)
= \sum_{j=0}^i \frac{1}{2d-j} q(j). }
Linearity of expectation and Wilson's
algorithm yield
\eqnsplst
{ \sum_{i=0}^{2d-1} i q(i)
&= \lim_{n \to \infty} \sum_{x \sim o}
\nu_n [ x \in W_n ] \\
&= \lim_{n \to \infty} \sum_{x \sim o}
\Pr [ \text{LERW from $x$ to $s$ in $G_n$ visits $o$} ]. }
By the definition of the infinite LERW and translation invariance
the right hand side equals
\eqnsplst
{ &\sum_{x \sim o} \Pr [ \text{infinite LERW started from $x$ visits $o$} ] \\
&\qquad\quad = \sum_{x \sim o} \Pr [
\text{infinite LERW started from $o$ visits $-x$} ] \\
&\qquad\quad = \xi. }
The relation \eqref{e:p-q} now yields $\zeta = d + \frac{\xi - 1}{2}$.
\medbreak
We are now ready to present the main ideas of Priezzhev's computation
of $p(1)$ and $p(2)$. (We have seen in Section \ref{ssec:height-0} that
$p(0) = \frac{1}{4} q(0) = \frac{2}{\pi^2} - \frac{4}{\pi^3}$.)
For the remainder of Section \ref{ssec:height-123}, we restrict to $d = 2$.
One of our concerns will be to supply explicit error bounds that allow
one to pass to the limit $n \to \infty$ in the computations. These are
not provided in the physics literature, and we believe that such
estimates may be useful in further work on related questions, as well as
to a reader who is not yet familiar with the details of the physics
arguments. Care is also needed because Priezzhev's integrals
\cite{Pr94}*{Eqn.~(6)} include logarithmically divergent singularities,
and do not exist as Lebesgue integrals; see our
Remark \ref{rem:not-integrable}. Implicit in Priezzhev's formula is a
regularization that allows the divergent singularities to cancel.
We provide a suitable regularization in
Propositions \ref{prop:limn} and \ref{prop:summation}.
\subsubsection{Decomposition of $q(1)$ into three terms}
Due to \eqref{e:p-q}, it is enough to find $q(1)$ and $q(2)$. We restrict
to the computation of $q(1)$, as the computations for $q(2)$ follow similar
ideas; see \cites{Pr94,JPR06}.
Let $q_n(1) = \nu_n [ \deg_{W_n}(o) = 1 ]$. We will work in the (large)
finite graph $G_n$. Let $j_1, j_2, j_3, j_4$ be the south, west, north, east
neighbours of the origin $o$, respectively.\footnote{We have tried to keep
the notation consistent with that of \cite{Pr94} as much as possible.}
Due to symmetry, we have
\eqn{e:x1-fixed}
{ q_n(1)
= 4\, \nu_n [ \deg_{W_n}(o) = 1,\, j_1 \in W_n,\, j_2, j_3, j_4 \not\in W_n ]. }
It will be useful to regard spanning trees of $G_n$ as being oriented towards $s$.
Then specifying a spanning tree is equivalent to specifying an acyclic rotor
configuration $\rho$ on $V_n$, and we are required to count certain
acyclic rotor configurations.\footnote{\emph{Note:} Here we are using
the bijection anchored at $o$ introduced in Section \ref{sssec:looping},
and \emph{not} the sandpile group action on acyclic rotors of
Section \ref{ssec:rotor-router}.}
The event on the right hand side of \eqref{e:x1-fixed}
is equivalent to the event that the rotor at $j_1$ is pointing to $o$, and
there is no directed path from any of $j_2, j_3, j_4$ to $j_1$.
Using the idea of Exercise \ref{ex:cond-height}, we can fix the rotor at $o$
to be pointing to $j_2$, say, and introduce a factor $3$. That is:
\eqnst
{ q_n(1)
= \frac{12}{\det(\Delta'_n)} \left| \left\{ \rho:
\parbox{10cm}{$\rho$ acyclic,
$\rho(j_1) = [j_1,o]$, $\rho(o) = [o,j_2]$, $\head(\rho(j_3)) \not= o$,
$\head(\rho(j_4)) \not= o$, no oriented path from $j_3$ and $j_4$ to $j_1$}
\right\} \right|. }
Due to planarity, it is in fact enough to require that there be no
oriented path from $j_4$ to $j_1$. This is because if $j_3$ had such
a path, so would $j_4$, due to the fact that $j_2$ has an oriented
path to the sink. Hence
\eqn{e:acyclic-formula}
{ q_n(1)
= \frac{12}{\det(\Delta'_n)} \left| \left\{ \rho:
\parbox{10cm}{$\rho$ acyclic,
$\rho(j_1) = [j_1,o]$, $\rho(o) = [o,j_2]$, $\head(\rho(j_3)) \not= o$,
$\head(\rho(j_4)) \not= o$,
no oriented path from $j_4$ to $j_1$} \right\} \right|. }
The non-local constraint that there be no oriented path from
$j_4$ to $j_1$ amounts to requiring that if a \emph{second}
rotor were introduced at $o$, pointing to $j_4$, then the
resulting configuration would still be acyclic. For short
let $e = [o,j_2]$, $f = [o,j_4]$, $h = [j_1,o]$, and put
\eqnst
{ \cT_0
= \left\{ \rho_0 : \parbox{10cm}{$\rho_0$ an acyclic rotor
configuration on $V_n \setminus \{ o \}$,
$\rho_0(j_1) = h$, $\head(\rho_0(j_2)) \not= o$,
$\head(\rho_0(j_3)) \not= o$, $\head(\rho_0(j_4)) \not= o$} \right\}. }
Put
\eqnsplst
{ \cT_e
&:= \{ \rho_0 \in \cT_0 : \text{$\rho_0 \cup \{ e \}$ is acyclic} \} \\
\cT_f
&:= \{ \rho_0 \in \cT_0 : \text{$\rho_0 \cup \{ f \}$ is acyclic} \}. }
Then $|\cT_e \cap \cT_f|$ counts the number of elements of the set on the
right hand side of \eqref{e:acyclic-formula}, which we write as:
\eqn{e:decomp-AcB}
{ |\cT_e \cap \cT_f|
= |\cT_e| - |\cT_f^c| + |\cT_e^c \cap \cT_f^c|. }
\subsubsection{An extension of the Matrix-tree theorem}
In order to get formulas for the three terms in \eqref{e:decomp-AcB},
we are going to use the theorem below that states variations
on the Matrix-Tree Theorem for directed graphs \cite{Bbook}*{Theorem II.14}.
Let $G = (V \cup \{ s \}, E)$ be a \emph{directed} graph,
with $-\Delta_{xy} = a_{xy}$ the number of directed edges from $x$ to $y$, and
$\Delta_{xx} = \outdeg(x)$. We assume that $\Delta_{sx} = 0$ for
all $x \in V$. In this section only, we call an oriented
cycle of $G$ an \emph{oriented loop} (to distinguish it from permutation
cycles in the proof below). In order to state the theorem,
we need some notation. Let $N_0$ denote the number of rotor configurations
on $V$ with no oriented loop (acyclic rotor configurations). Given a directed
edge $h$, let $N_1(h)$ denote the number of rotor configurations that contain
precisely one oriented loop, with the edge $h$ contained in this loop. Let
\eqnst
{ \widetilde{\Delta}_{xy}^h
= \begin{cases}
\Delta_{xy} & \text{if $[x,y] \not= h$;} \\
- \omega & \text{if $[x,y] = h$;}
\end{cases} }
where $\omega$ is a real parameter.
Similarly, given oriented edges $f_1, f_2, f_3$ of $G$,
let $N_i(f_1,f_2,f_3)$, $i = 1, 2, 3$,
denote the number of rotor configurations that contain precisely
$i$ oriented loops, in such a way that each loop contains at least
one of $f_1, f_2, f_3$, and each of $f_1, f_2, f_3$ is contained
in at least one of the loops. Let
\eqnst
{ \widetilde{\Delta}_{xy}^{f_1,f_2,f_3}
= \begin{cases}
\Delta_{xy} & \text{if $[x,y] \not= f_1, f_2, f_3$;} \\
- \omega & \text{if $[x,y] \in \{ f_1, f_2, f_3 \}$}.
\end{cases} }
As before, let $\Delta'$ denote the matrix obtained from $\Delta$ by restricting
the indices to $V \times V$.
\begin{theorem}
\label{thm:MT-Pr}
(Priezzhev \cite{Pr94})
We have:
\eqnsplst
{ \det(\Delta')
&= N_0 \\
\lim_{\omega \to \infty} \frac{1}{\omega}
\det( (\widetilde{\Delta}^h)' )
&= - N_1(h) \\
\lim_{\omega \to \infty} \frac{1}{\omega^3}
\det( (\widetilde{\Delta}^{f_1,f_2,f_3})' )
&= - N_1(f_1,f_2,f_3) + N_2(f_1,f_2,f_3) - N_3(f_1,f_2,f_3). }
\end{theorem}
\begin{proof}
Expand $\det(\Delta')$ as a sum over permutations of $V$, and
for each permutation in the sum, consider its decomposition into
cyclic permutations. Define the \emph{weight} of a permutation cycle
$(x_1,\dots,x_k)$ of length $k \ge 2$ to be $\prod_{i=1}^k a_{x_i,x_{i+1}}$,
and the weight of a ``trivial'' permutation cycle $(x)$ of length $1$
to be $\Delta_{xx}$. Hence we have:
\eqnst
{ \det(\Delta')
= \sum_{\text{permutations}} (-1)^{\# \text{non-trivial perm.~cycles}}
\prod_{\text{perm.~cycles}} \weight(\text{perm.~cycle}). }
Note that a non-trivial permutation cycle of $k$ edges, $k \ge 2$, brings a sign
$(-1)^k$ due to $k$ factors of $-a_{x_i,x_{i+1}}$, and therefore the factor
$(-1)^{\# \text{non-trivial perm.~cycles}}$ ensures the correct sign
for the signature of the permutation. The weight of a non-trivial cycle
counts the number of oriented loops traversing its vertices in the given cyclic order.
Let us group terms according to the number of non-trivial loops
and write $\Gamma$ for the set of all oriented loops in $G$. This yields:
\eqnst
{ \det(\Delta')
= \prod_{x \in V} \outdeg(x)
- \sum_{\gamma_1 \in \Gamma} \ \prod_{x \in V \setminus \gamma_1} \outdeg(x)
+ \sum_{\substack{\gamma_1, \gamma_2 \in \Gamma \\ \gamma_1 \not= \gamma_2}}
\ \prod_{x \in V \setminus (\gamma_1 \cup \gamma_2)} \outdeg(x)
- \dots. }
The first term counts the number of all rotor configurations on $V$. The
summand in the second term is the number of all rotor configurations that
contain the oriented loop $\gamma_1$. The summand in the third term counts the
number of rotor configurations that contain both loops $\gamma_1$ and $\gamma_2$;
and so on. It follows from the inclusion-exclusion principle that the alternating
sum counts precisely the number of rotor configurations with no loops.
This proves the first statement.
Consider now the same expansion for the modified matrix $(\widetilde{\Delta}^h)'$.
Due to the factor $\frac{1}{\omega}$, the only terms that remain are the ones
where one of the oriented loops contains the edge $h$. Note that for each term
there is at most one such loop. Grouping terms according to what this loop is:
\eqnsplst
{ \lim_{\omega \to \infty} \frac{1}{\omega} \det( (\widetilde{\Delta}^h)' )
&= - \sum_{\substack{\gamma_1 \in \Gamma : \\ h \in \gamma_1}}
\Bigg[ \prod_{x \in V \setminus \gamma_1} \outdeg(x)
- \sum_{\substack{\gamma_2 \in \Gamma : \\ \gamma_2 \not= \gamma_1}}
\ \prod_{x \in V \setminus (\gamma_1 \cup \gamma_2)} \outdeg(x) \\
&\qquad\qquad\quad + \sum_{\substack{\gamma_2 \not= \gamma_3 \in \Gamma : \\ \gamma_2, \gamma_3 \not= \gamma_1}}
\ \prod_{x \in V \setminus (\gamma_1 \cup \gamma_2 \cup \gamma_3)} \outdeg(x)
- \dots \Bigg]. }
For each fixed $\gamma_1 \ni h$, the expression inside the square brackets
is an inclusion-exclusion formula for the number of rotor configurations
that contain $\gamma_1$ but no other oriented loop. Hence the
second statement follows.
The third statement can be proved similarly to the second. This time the only
terms that remain are the ones where there is a set of one, two or three loops
that together contain $f_1, f_2, f_3$. Grouping terms according to what
these loops are, we get a sum of inclusion-exclusion formulas yielding the terms
$(-1)^i N_i(f_1,f_2,f_3)$, $i = 1, 2, 3$.
\end{proof}
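The first statement is a rotor-configuration form of the matrix-tree theorem, and it can be checked by brute force on small examples. The following sketch (ours, not part of the argument; the graph, variable names and the helper \texttt{det\_int} are our own choices) verifies $\det(\Delta') = N_0$ on the cycle $o$--$a$--$b$--$c$--$o$ with sink $o$, where both sides equal $4$:

```python
from itertools import product

def det_int(M):
    # Laplace expansion along the first row; exact for integer matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# The cycle o-a-b-c-o with sink o; out[x] lists the heads of edges out of x.
out = {'a': ['o', 'b'], 'b': ['a', 'c'], 'c': ['b', 'o']}
V = sorted(out)

# Reduced Laplacian Delta' restricted to V x V.
Delta = [[len(out[x]) if x == y else -out[x].count(y) for y in V] for x in V]

def acyclic(rho):
    # A rotor configuration has no oriented loop iff following the rotors
    # from every vertex reaches the sink o.
    for x in V:
        seen, cur = set(), x
        while cur != 'o':
            if cur in seen:
                return False
            seen.add(cur)
            cur = rho[cur]
    return True

N0 = sum(1 for choice in product(*(out[x] for x in V))
         if acyclic(dict(zip(V, choice))))
print(det_int(Delta), N0)  # 4 4
```

Here $N_0 = 4$ matches the well-known count of spanning trees of the $4$-cycle, each oriented toward the sink.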
\subsubsection{The term $|\cT_e|$}
Let us return to the three terms in \eqref{e:decomp-AcB}. The first term
$|\cT_e|$ only involves local restrictions: certain rotor directions are
forced or forbidden. Let us denote by $j''_1, j''_2, j''_4$ the south,
west, east neighbours of $j_1$, respectively. The rotor $[o,j_2] = e$ can be
forced by deleting the oriented edges $[o,j_1]$, $[o,j_3]$, $[o,j_4]$
from the graph. The rotor $[j_1,o] = h$ can be forced by deleting the oriented
edges $[j_1,j''_1]$, $[j_1,j''_2]$, $[j_1,j''_4]$ from the graph. The
requirements $\head(\rho_0(j_3)) \not= o$ and $\head(\rho_0(j_4)) \not= o$
can be achieved by deleting the oriented edges $[j_3,o]$ and $[j_4,o]$
from the graph (and note that the requirement $\head(\rho_0(j_2)) \not= o$
becomes superfluous due to acyclicity). It follows that we can apply the first
statement of Theorem \ref{thm:MT-Pr} to the matrix
$(\Delta^{(1)}_n)' = \Delta'_n + \delta^{(1)}$,
where the only nonzero entries of $\delta^{(1)}$ are:
\eqnst
{ \delta^{(1)}
= \begin{blockarray}{cccccccc}
o & j_1 & j_3 & j_4 & j''_1 & j''_2 & j''_4 & \\
\begin{block}{(ccccccc)c}
-3 & 1 & 1 & 1 & 0 & 0 & 0 & o \\
0 & -3 & 0 & 0 & 1 & 1 & 1 & j_1 \\
1 & 0 & -1 & 0 & 0 & 0 & 0 & j_3 \\
1 & 0 & 0 & -1 & 0 & 0 & 0 & j_4 \\
\end{block}
\end{blockarray} }
Using explicit values of the potential kernel (see~\eqref{e:pot-kern}),
we get
\eqnst
{ \lim_{n \to \infty} \frac{12 |\cT_e|}{\det(\Delta'_n)}
= \lim_{n \to \infty} 12\, \det ( I + \delta^{(1)} G_n )
= \frac{6}{\pi} - \frac{30}{\pi^2} + \frac{48}{\pi^3}. }
\subsubsection{The term $|\cT_f^c|$}
The second term $|\cT_f^c|$ in \eqref{e:decomp-AcB}
involves the non-local restriction that
$f$ is contained in a loop. Necessarily, this loop
ends with the edge $h$. We are going to force the loop by
giving $h$ the weight $-\omega$, and deleting the oriented
edges $[o,j_1], [o,j_2], [o,j_3]$. We also delete
$[j_2,o]$, $[j_3,o]$. (Note that this time the
requirement $\head(\rho_0(j_4)) \not= o$ is superfluous.)
Since $\omega \to \infty$, the rest of row $j_1$ of
the matrix is immaterial.
Hence we apply the second statement of
Theorem \ref{thm:MT-Pr} to the matrix
$(\Delta^{(2)}_n)' = (\widetilde{\Delta}^f)' = \Delta'_n + \delta^{(2)}$,
where now
\eqnst
{ \delta^{(2)}
= \begin{blockarray}{ccccc}
o & j_1 & j_2 & j_3 & \\
\begin{block}{(cccc)c}
-3 & 1 & 1 & 1 & o \\
-\omega & 0 & 0 & 0 & j_1 \\
1 & 0 & -1 & 0 & j_2 \\
1 & 0 & 0 & -1 & j_3 \\
\end{block}
\end{blockarray} }
Since this time row $j_1$ of $\delta^{(2)}$ does not sum to $0$, a divergent
term of order $\log n$ arises. (This reflects the fact that the
number of configurations containing a cycle is much larger than
the number of acyclic ones.) Since we are evaluating a probability,
the divergence will have to be cancelled by a term we get for
$|\cT_e^c \cap \cT_f^c|$. In order to deal with the divergent
terms, we need the following lemma.
\begin{lemma}
For $K \ge 1$ and $z, w \in \Z^2$ with $|z|, |w| \le K$, as
$n \to \infty$, we have
\eqn{e:G_n(z,w)}
{ G_n(z,w)
= G_n(o,o) - A(w-z) + O_K \left( \frac{\log n}{n} \right), }
where the constant implied by $O_K$ depends on $K$.
\end{lemma}
We do not prove this lemma separately, as it is subsumed by
Lemma \ref{lem:estimates} below.
Replacing each term $G_n(z,w)$ in the matrix $I + \delta^{(2)} G_n$ by the
expression on the right hand side of \eqref{e:G_n(z,w)} and
taking into account that $G_n(o,o) = O(\log n)$ (see Lemma \ref{lem:estimates})
we get:
\eqnspl{e:cTf}
{ \frac{-12 |\cT_f^c|}{\det(\Delta'_n)}
&= 12\, \lim_{\omega \to \infty} \frac{1}{\omega} \det \left( I + \delta^{(2)} G_n \right) \\
&= \frac{3}{\pi^2} - G_n(o,o)\, 12\, \left( \frac{2}{\pi^2} - \frac{4}{\pi^3} \right)
+ O \left( \frac{\log^2 n}{n} \right). }
\subsubsection{Priezzhev's ``bridge trick'' for the term $|\cT_e^c \cap \cT_f^c|$}
\label{sssec:bridge}
We are left to calculate the term $|\cT_e^c \cap \cT_f^c|$.
Let $\rho_0 \in \cT_e^c \cap \cT_f^c$, and let $\rho$ stand for
the set of edges $\rho = \rho_0 \cup \{ e \} \cup \{ f \}$.
There is a unique vertex $i_1 \in V_n \setminus \{ o \}$
such that $\rho$ contains three oriented paths:\\
(i) a path from $o$ to $i_1$ starting with $e$;\\
(ii) a path from $o$ to $i_1$ starting with $f$;\\
(iii) a path from $i_1$ to $o$ ending with $h$.\\
Moreover, the three paths are vertex-disjoint apart from
the vertices $o$ and $i_1$. We will call $i_1$ the \emph{meeting point}.
We are going to count configurations in $\cT_e^c \cap \cT_f^c$
separately for each fixed value $i_1$ of the meeting point.
The idea is to add three ``bridge'' edges between $j_1, j_2, j_4$ and
three neighbours of $i_1$, and force the bridge edges to be in loops,
via the third statement of Theorem \ref{thm:MT-Pr}.
Then the existence of the loops gives the required paths,
apart from possible flips in orientation of some of the paths.
We would need to sum over all possible locations of $i_1$
and all possible choices of three neighbours of $i_1$.
It turns out that symmetry considerations allow one to reduce
the amount of calculations, and to count only two types of
configurations according to the pattern of edges near $i_1$.
In order to define these patterns, let $w_1, w_3, w_4$ be the
south, north, east neighbours of $i_1$, respectively.
Let $G^{L,i_1}_n$ be the graph obtained from $G_n$ by
removing $[j_3,o]$ and adding three ``bridges'':
\eqnst
{ f^L_1 = [j_1,i_1], \qquad
f^L_2 = [j_2,w_4], \qquad
f^L_3 = [j_4,w_3]. }
The symbol $L$ indicates that the vertices $w_3, i_1, w_4$
form an $L$-shape. Let $G^{\Gamma,i_1}_n$ be the graph obtained
from $G_n$ by removing $[j_3,o]$ and adding the three bridges:
\eqnst
{ f^\Gamma_1 = [j_1,i_1], \qquad
f^\Gamma_2 = [j_2,w_1], \qquad
f^\Gamma_3 = [j_4,w_4]. }
The symbol $\Gamma$ indicates the $\Gamma$-shape formed by the
vertices $w_1, i_1, w_4$.
If $i_1 = (k,l)$, we denote $i_1' = (-k,l)$.
The computation is based on the following lemma.
\begin{lemma}
\label{lem:N_i's}
There is a finite (explicit) set $P \subset \Z^2$, such that
whenever $i_1, i_1' \in V_{n-1} \setminus P$, the following holds.\\
(i) We have $N_2(f^L_1, f^L_2, f^L_3; z) = 0 =
N_2(f^\Gamma_1, f^\Gamma_2, f^{\Gamma}_3 ; z)$ for $z = i_1, i_1'$.\\
(ii) We have
\eqnsplst
{ &\sum_{z \in \{ i_1, i_1' \}} \ \sum_{i = 1, 3}
\left[ N_i(f^L_1, f^L_2, f^L_3 ; z)
+ N_i(f^\Gamma_1, f^\Gamma_2, f^\Gamma_3 ; z) \right] \\
&\qquad\qquad\qquad = |\cT_e^c \cap \cT_f^c
\cap \{ \text{meeting point equals $i_1$ or $i_1'$} \}|. }
\end{lemma}
\begin{proof}
(i) This follows from planarity, as can be checked case-by-case.
(This is the point in the computation where $d = 2$ is used in a
crucial way.)
(ii) Consider first a configuration counted in
$N_3(f^L_1, f^L_2, f^L_3; i_1)$. Remove the bridges.
There are three vertex-disjoint oriented paths
$i_1 \to o$, $w_4 \to j_2$ and $w_3 \to j_4$.
Reverse the orientations of the paths arriving at
$j_2$ and $j_4$, respectively. This leaves a rotor
configuration with no rotor specified at $o$, $w_3$
and $w_4$. Adding the rotors $[w_3,i_1], [w_4,i_1]$ we get
a configuration in $\cT_e^c \cap \cT_f^c$ such that
the meeting point is $i_1$. The operations we performed are
one-to-one between $N_3(f^L_1, f^L_2, f^L_3; i_1)$ and the
configurations in the image.
We can perform similar steps
for $N_1(f^L_1, f^L_2, f^L_3; i_1)$: remove the bridges,
and reverse the orientation of the paths arriving
at $j_2$ and $j_4$. The paths can occur in two distinct
ways. One is obtained when we started with
$i_1 \to j_2$, $w_4 \to j_4$, $w_3 \to o$,
in which case, after we reversed orientations,
there is no rotor specified at $i_1$ and
$w_4$. We set these rotors as $[i_1,w_3]$ and $[w_4,i_1]$,
yielding a configuration in $\cT_e^c \cap \cT_f^c$.
The other possibility is that we started with
$i_1 \to j_4$, $w_3 \to j_2$, $w_4 \to o$, in which case
rotors will be missing at $i_1$ and $w_3$. We set these
to be $[i_1,w_4]$ and $[w_3,i_1]$ to get a configuration
in $\cT_e^c \cap \cT_f^c$.
Observe that the configurations constructed from $N_1$
are distinct from the ones arising from $N_3$.
Let us add now the configurations arising from
$N_3(f^\Gamma_1, f^\Gamma_2, f^\Gamma_3; i_1)$ and
$N_1(f^\Gamma_1, f^\Gamma_2, f^\Gamma_3; i_1)$. There are four
possibilities for the pattern of three edges incident
with $i_1$ involved in the three paths. Configurations
where the three edges only contain either the $L$- or the
$\Gamma$-shape have been counted exactly once.
Configurations where the three edges contain both
the $L$- and the $\Gamma$-shape have been counted
twice. The mirror images of these configurations
(under $i_1 \leftrightarrow i_1'$) on the other hand,
whose number is the same, are \emph{not} counted in
the corresponding terms $N_1(\cdot, \cdot, \cdot ; i_1')$,
$N_3(\cdot, \cdot, \cdot ; i_1')$. Hence adding together
the contributions for $i_1$ and $i_1'$ restores the
balance, and implies the statement.
The symmetry argument breaks down if $i_1 \in V_n \setminus V_{n-1}$.
The path arguments may break down if $i_1$ or $i_1'$ is too close to $o$,
so there is a set of exceptions $P$.
\end{proof}
\begin{remark}
When we take the limit $n \to \infty$, we are going to
sum over all $i_1 \in \Z^2$ to have a workable expression
as a Fourier integral. Then one needs to correct for the
exceptions $P$ individually. These can be handled with
the ideas we used for $|\cT_e|$ and $|\cT_f^c|$; see \cite{Pr94}.
The divergent term in \eqref{e:cTf} is cancelled by similar
divergent terms in the contributions of the exceptional
points in $P$. It also follows from the estimates in
Lemma \ref{lem:Aasymp} that the contribution of the boundary terms
$i_1, i_1' \in V_n \setminus V_{n-1}$ is negligible in the limit.
\end{remark}
\subsubsection{Summation formulas for the $N_1$ and $N_3$ terms}
In evaluating the $N_1$ and $N_3$ terms, the bridges
receive weight $-\omega$. The necessary modifications of the
graph are encoded in the matrices:
\eqnst
{ \Delta_{i_1}(L)
= \Delta'_n + \delta_{i_1}(L), \qquad\qquad
\Delta_{i_1}(\Gamma)
= \Delta'_n + \delta_{i_1}(\Gamma), }
where
\eqnsplst
{ \delta_{i_1}(L)
&= \begin{blockarray}{cccccc}
j_3 & o & i_1 & a = w_4 & b = w_3 & \\
\begin{block}{(ccccc)c}
-1 & 1 & 0 & 0 & 0 & j_3 \\
0 & 0 & -\omega & 0 & 0 & o \\
0 & 0 & 0 & -\omega & 0 & j_2 \\
0 & 0 & 0 & 0 & -\omega & j_4 \\
\end{block}
\end{blockarray} \\
\delta_{i_1}(\Gamma)
&= \begin{blockarray}{cccccc}
j_3 & o & i_1 & a = w_1 & b = w_4 & \\
\begin{block}{(ccccc)c}
-1 & 1 & 0 & 0 & 0 & j_3 \\
0 & 0 & -\omega & 0 & 0 & o \\
0 & 0 & 0 & -\omega & 0 & j_2 \\
0 & 0 & 0 & 0 & -\omega & j_4 \\
\end{block}
\end{blockarray} }
Due to Theorem \ref{thm:MT-Pr} and Lemma \ref{lem:N_i's}(i) we have:
\eqnspl{e:N_i's-det-form}
{ &N^{L,i_1}_1 + N^{L,i_1}_3
= \lim_{\omega \to \infty} \frac{-1}{\omega^3}
\frac{\det (\Delta_{i_1}(L))}{\det(\Delta'_n)}
= \lim_{\omega \to \infty} \frac{-1}{\omega^3} \det ( I + \delta_{i_1}(L) G_n ) \\
&= \begin{vmatrix}
1 + G_n(o,j_3) & G_n(o,o) & G_n(o,j_2) & G_n(o,j_4) \\
\qquad -G_n(j_3,j_3) & \qquad -G_n(j_3,o) & \qquad -G_n(j_3,j_2) & \qquad -G_n(j_3,j_4) \\[1ex]
G_n(i_1,j_3) & G_n(i_1,o) & G_n(i_1,j_2) & G_n(i_1,j_4) \\[1ex]
G_n(w_4,j_3) & G_n(w_4,o) & G_n(w_4,j_2) & G_n(w_4,j_4) \\[1ex]
G_n(w_3,j_3) & G_n(w_3,o) & G_n(w_3,j_2) & G_n(w_3,j_4)
\end{vmatrix} \\
&=: \det(M_L). }
We similarly get
\eqnspl{e:N_i's-det-form2}
{ &N^{\Gamma,i_1}_1 + N^{\Gamma,i_1}_3 \\
&= \begin{vmatrix}
1 + G_n(o,j_3) & G_n(o,o) & G_n(o,j_2) & G_n(o,j_4) \\
\qquad -G_n(j_3,j_3) & \qquad -G_n(j_3,o) & \qquad -G_n(j_3,j_2) & \qquad -G_n(j_3,j_4) \\[1ex]
G_n(i_1,j_3) & G_n(i_1,o) & G_n(i_1,j_2) & G_n(i_1,j_4) \\[1ex]
G_n(w_4,j_3) & G_n(w_4,o) & G_n(w_4,j_2) & G_n(w_4,j_4) \\[1ex]
-G_n(w_1,j_3) & -G_n(w_1,o) & -G_n(w_1,j_2) & -G_n(w_1,j_4)
\end{vmatrix} \\
&=: - \det(M_\Gamma). }
In order to deal with divergences as $n \to \infty$, we regularize
by replacing $G_n$ in rows 2--4 of the matrices $M_L$ and $M_\Gamma$
with the Green's function of the geometrically killed random walk:
\eqnst
{ G_n(z,w; r)
:= \frac{1}{4} \sum_{m = 0}^\infty r^m \Pr^z [ S(m) = w,\, \tau_{V_n^c} > m ], \quad 0 < r \le 1, }
where $\tau_{V_n^c}$ is the hitting time of $V_n^c$.
Let $M_{L,r}$ and $M_{\Gamma,r}$ be the matrices obtained this way.
We also let
\eqnst
{ A(z,w;r)
:= \frac{1}{4} \lim_{N \to \infty} \sum_{m=0}^N r^m \left( \Pr^z [ S(m) = z ] - \Pr^z [ S(m) = w ] \right),
\qquad z, w \in \Z^2,\, 0 < r \le 1, }
and
\eqnst
{ G_{k,l}(r)
:= \frac{1}{4} \sum_{m = 0}^\infty r^m \Pr^o [ S(m) = (k,l) ], \quad (k,l) \in \Z^2,\, 0 < r < 1. }
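The series defining $G_{k,l}(r)$ converges geometrically for $r < 1$ and is straightforward to evaluate numerically. The following sketch (ours, not part of the argument; the choice $r = 1/2$ and the truncation level $M$ are arbitrary) iterates the distribution of the walk and accumulates $\frac{1}{4} \sum_m r^m \Pr^o[S(m) = (k,l)]$; with $r = 1/2$, truncating at $M = 40$ steps leaves an error below $2^{-40}$.

```python
M, r = 40, 0.5
R = M + 1                      # the walk cannot leave [-M, M]^2 in M steps
size = 2 * R + 1
p = [[0.0] * size for _ in range(size)]
p[R][R] = 1.0                  # distribution of S(0), started at o
G = {}                         # G[(k,l)] accumulates (1/4) sum_m r^m p_m(k,l)
weight = 0.25                  # (1/4) r^m
for m in range(M + 1):
    for i in range(size):
        for j in range(size):
            if p[i][j]:
                v = (i - R, j - R)
                G[v] = G.get(v, 0.0) + weight * p[i][j]
    # one step of the simple random walk: spread each cell's mass equally
    q = [[0.0] * size for _ in range(size)]
    for i in range(1, size - 1):
        for j in range(1, size - 1):
            mass = p[i][j] / 4.0
            if mass:
                q[i - 1][j] += mass
                q[i + 1][j] += mass
                q[i][j - 1] += mass
                q[i][j + 1] += mass
    p = q
    weight *= r
print(G[(0, 0)])               # approximately 0.2683 at r = 1/2
```

The output respects the lattice symmetries $G_{k,l}(r) = G_{l,k}(r) = G_{-k,l}(r)$, as it should.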
The following two propositions, that we prove in
Sections \ref{sssec:estimates}--\ref{sssec:summation},
state summation formulas for the $N_1$ and $N_3$ terms
in the limit $n \to \infty$. These two propositions form the remaining
part of the computation of $|\cT_e^c \cap \cT_f^c|$.
\begin{proposition}
\label{prop:limn}
We have
\eqnsplst
{ \lim_{n \to \infty} \sum_{i_1 \in V_n} \left[ \det(M_L) - \det(M_\Gamma) \right]
= \lim_{r \uparrow 1} \sum_{(k,l) \in \Z^2} \det(C_{k,l}(r)), }
where
\eqnspl{e:Mr}
{ C_{k,l}(r)
:= \begin{pmatrix}
\frac{3}{4} & \frac{1}{4} & \frac{1}{\pi} - \frac{1}{4} & \frac{1}{\pi} - \frac{1}{4} \\
G_{k,l-1} & G_{k,l} & G_{k+1,l} & G_{k-1,l} \\
G_{k+1,l-1} & G_{k+1,l} & G_{k+2,l} & G_{k,l} \\
G_{k,l} - G_{k,l-2} & G_{k,l+1} - G_{k,l-1} & G_{k+1,l+1} - G_{k+1,l-1} & G_{k-1,l+1} - G_{k-1,l-1}
\end{pmatrix}, }
with each $G$-entry evaluated at $r$.
\end{proposition}
\begin{proposition}
\label{prop:summation}
We have
\eqnst
{ \sum_{(k,l) \in \Z^2} \det(C_{k,l}(r))
= \frac{1}{32 \pi^4} \iint \iint
\frac{i \sin \beta_1 \, \det(M_1)\, d\alpha_1\, d\beta_1\, d\alpha_2\, d\beta_2}{D_r(\alpha_1,\beta_1)
D_r(\alpha_2, \beta_2) D_r(\alpha_1 + \alpha_2,\beta_1 + \beta_2)}, }
where
\eqnst
{ M_1
:= \begin{pmatrix}
\frac{3}{4} & \frac{1}{4} & \frac{1}{\pi} - \frac{1}{4} & \frac{1}{\pi} - \frac{1}{4} \\
e^{i(\beta_1 + \beta_2)} & 1 & e^{-i(\alpha_1 + \alpha_2)} & e^{i(\alpha_1 + \alpha_2)} \\
e^{i(\alpha_2 - \beta_2)} & e^{i \alpha_2} & e^{2i \alpha_2} & 1 \\
e^{-i\beta_1} & 1 & e^{i \alpha_1} & e^{-i \alpha_1}
\end{pmatrix}, }
and $D_r(\alpha,\beta) = 2 - r (\cos \alpha + \cos \beta)$.
\end{proposition}
\begin{remark}
\label{rem:not-integrable}
When $r = 1$, the integral does not exist as a Lebesgue integral. In order to
see this, consider the region of integration where $(\alpha_2,\beta_2)$ is in
a small neighbourhood of $(0,0)$, and $(\alpha_1, \beta_1)$ is in a small
neighbourhood of $(\pi/4,\pi/4)$, say. Subtract the last column from
the other columns, and expand the determinant along the third row.
The first three terms each contain a factor that vanishes as
$(\alpha_2,\beta_2) \to (0,0)$, making the singularity due to
$D_1(\alpha_2,\beta_2)$ integrable. But the last term is
proportional to $(D_1(\alpha_2,\beta_2))^{-1}$, which is not
integrable. Proposition \ref{prop:summation} exhibits a delicate
cancellation taking place.
\end{remark}
\subsubsection{Green function estimates}
\label{sssec:estimates}
For the proofs of Propositions \ref{prop:limn} and \ref{prop:summation},
we are going to need some estimates on Green functions.
\begin{lemma}
\label{lem:Aasymp} \ \\
(i) There exists a constant $K$ such that
\eqnst
{ A(z,w;1)
= \frac{1}{2 \pi} \log |w - z| + K + O \left( \frac{1}{|w - z|^2} \right),
\quad \text{$|w - z| \ge 1$.} }
(ii) Uniformly in $0 < r \le 1$ and $w \in \Z^2$, we have
\eqnst
{ A(z,w;r) - A(o,w;r)
= O (\log |z|),
\quad \text{$|z| \ge 2$.} }
(iii) Uniformly in $0 < r \le 1$ and for $|f| = 1 = |h|$ we have
\eqnsplst
{ \partial^{(1)}_f A(z,o;r)
&= O \left( \frac{1}{|z|} \right),
\quad \text{$|z| \ge 1$;} \\
\partial^{(2)}_h A(z,o;r)
&= O \left( \frac{1}{|z|} \right),
\quad \text{$|z| \ge 1$;} \\
\partial^{(1)}_f \partial^{(2)}_h A(z,o;r)
&= O \left( \frac{1}{|z|^2} \right),
\quad \text{$|z| \ge 1$.} }
\end{lemma}
\begin{proof}
Statement (i) is \cite{LLbook}*{Theorem 4.4.4}.
When $r = 1$, statements (ii) and (iii) follow immediately
from (i). For the case $0 < r < 1$, the proof
of \cite{LLbook}*{Theorem 4.4.4} can be adapted, and we
sketch how this can be done. By considering a ``lazy''
random walk (that holds in place with probability $\eps$
on each step), we may replace the simple random walk by
an aperiodic one. (Indeed, for the lazy walk
$G_{\eps}(z,w;r) = (1 - \eps r)^{-1} G(z,w;r)$; see
\cite{LLbook}*{(4.17)}.)
Following the proof of \cite{LLbook}*{Theorem 4.4.4}, write
\eqnsplst
{ A(o,z;r)
&= \sum_{m \le |z|^2} r^m \p^m(o,o) - \sum_{m \le |z|^2} r^m \p^m(o,z)
+ \sum_{m > |z|^2} r^m (\p^m(o,o) - \p^m(o,z)). }
Let
\eqnst
{ B(z;r)
= \sum_{1 \le m \le |z|^2} \frac{r^m}{m}. }
Then the computations in \cite{LLbook} show that
\eqnst
{ \sum_{m \le |z|^2} r^m \p^m(o,o)
= c_1 \, B(z;r) + C + O(|z|^{-2}), }
with $c_1$ and $C$ independent of $z$ and $r$. We also have that
$\sum_{m \le |z|} r^m \p^m(o,z)$ decays faster than any power of $|z|$
(uniformly in $r$). An application of the local central limit
theorem yields that
\eqnst
{ \sum_{|z| < m \le |z|^2} \left[ r^m \p^m(o,z) - r^m \frac{c_1}{m} e^{-|z|^2/m} \right]
= O ( |z|^{-2} ). }
Also, the proof of \cite{LLbook}*{Lemma 4.3.2} shows that
\eqnsplst
{ \sum_{|z| < m \le |z|^2} \frac{r^m}{m} e^{-|z|^2/m}
&= \int_1^\infty \frac{1}{y} \exp \left( - \frac{y}{2} - \frac{\beta |z|^2}{y} \right) dy
+ O (|z|^{-2}) \\
&=: I_1(z;r) + O(|z|^{-2}), }
where $- \beta = \log r \in (-\infty,0)$.
Therefore,
\eqnst
{ \sum_{m \le |z|^2} r^m \left( \p^m(o,o) - \p^m(o,z) \right)
= c_1 B(z;r) + C + c_1 I_1(z;r) + O(|z|^{-2}). }
A similar computation yields
\eqnsplst
{ \sum_{m > |z|^2} r^m \left( \p^m(o,o) - \p^m(o,z) \right)
&= c_1 \int_0^1 \frac{1}{y} ( 1 - e^{-y/2} )
e^{- \beta |z|^2 / y } \, dy
+ O(|z|^{-2}) \\
&=: c_1 I_2(z;r) + O(|z|^{-2}). }
Note that $I_1(z;r) = O(1)$ and $I_2(z;r) = O(1)$, uniformly in
$z$ and $r$. Statement (ii) now follows from
\eqnsplst
{ |A(z,w;r) - A(o,w;r)|
&= c_1 |B(w-z;r) - B(w;r)| + O(1) \\
&\le c_1 |B(w-z;1) - B(w;1)| + O(1) \\
&= O ( \log |z| ). }
In order to prove the statements in (iii), write
\eqnspl{e:partial1A}
{ \partial^{(1)}_f A(z,o;r)
&= c_1 [B(z+f;r) - B(z;r)] + c_1 [I_1(z+f;r)-I_1(z;r)] \\
&\qquad + c_1 [I_2(z+f;r) - I_2(z;r)] + O(|z|^{-2}). }
Using that $|z+f|^2 - |z|^2 = 2 \langle z, f \rangle + 1 = O(|z|)$,
the first term is $O(|z|^{-1})$. In order to estimate the second term,
write
\eqnst
{ I_1(z+f;r) - I_1(z;r)
= \int_1^\infty \frac{1}{y} e^{-y/2}
\left( e^{-\beta |z+f|^2/y} - e^{-\beta |z|^2/y} \right) dy. }
We treat the cases $\beta/y \le |z|^{-1}$ and $\beta/y > |z|^{-1}$ separately.
When $\beta/y \le |z|^{-1}$, we have
\eqnspl{e:beta|z|2/y1}
{ \left| e^{-\beta |z+f|^2/y} - e^{-\beta |z|^2/y} \right|
&= e^{-\beta |z|^2/y} \left| e^{-\beta (2 \langle z, f \rangle + 1)/y} - 1 \right| \\
&\le e^{-\beta |z|^2/y} \frac{C \beta |z|}{y} \\
&= \frac{C'}{|z|} \frac{\beta |z|^2}{y} e^{-\beta |z|^2/y} \\
&= O(|z|^{-1}). }
When $\beta/y > |z|^{-1}$, we have
\eqn{e:beta|z|2/y2}
{ e^{-\beta |z+f|^2/y},\, e^{-\beta |z|^2/y}
= O \left( e^{-c|z|} \right). }
This shows that the second term in \eqref{e:partial1A} is
$O(|z|^{-1})$. Similar considerations apply to the third term in
\eqref{e:partial1A}. The argument for $\partial^{(2)}_h A(z,o;r)$
is identical.
Finally, for the last statement of (iii) we write
\eqnspl{e:partial12A}
{ \partial^{(1)}_f \partial^{(2)}_h A(z,o;r)
&= c_1 [B(z+f-h;r) - B(z+f;r) - B(z-h;r) + B(z;r)] \\
&\qquad + c_1 [I_1(z+f-h;r) - I_1(z+f;r) - I_1(z-h;r) + I_1(z;r)] \\
&\qquad + c_1 [I_2(z+f-h;r) - I_2(z+f;r) - I_2(z-h;r) + I_2(z;r)] \\
&\qquad + O(|z|^{-2}). }
In the first term, cancellations take place between the four
summations. The net result is that apart from $O(1)$ terms (that are
each $O(|z|^{-2})$), there are $O(|z|)$ pairs of terms that come with
opposite signs. For each pair (treating the cases $r \ge 1 - |z|^{-1}$ and
$r < 1 - |z|^{-1}$ separately), we have the estimate
\eqnst
{ \frac{r^{m_1}}{m_1} - \frac{r^{m_2}}{m_2}
= O(|z|^{-3}),
\quad \text{if $m_1 = |z|^2 + O(|z|)$ and $m_2 = |z|^2 + O(|z|)$.} }
Summing these we get that the first term in \eqref{e:partial12A} is
$O(|z|^{-2})$. For the second term in \eqref{e:partial12A},
we argue similarly to \eqref{e:beta|z|2/y1}--\eqref{e:beta|z|2/y2}
(treating the cases $\beta/y \le |z|^{-1}$ and $\beta/y > |z|^{-1}$
separately). This gives
\eqnst
{ e^{-\beta |z+f-h|^2/y} - e^{-\beta |z+f|^2/y} - e^{-\beta|z-h|^2/y} + e^{-\beta |z|^2/y}
= O (|z|^{-2}), }
and it follows that the second term in \eqref{e:partial12A} is
$O(|z|^{-2})$. The argument for the third term is similar.
This completes the proof.
\end{proof}
Let $\{ S_r(m) \}_{m \ge 0}$ be the random walk killed at a
$\mathsf{Geometric}(1-r)$ time that is independent of the walk.
Below we interpret $A(S_r(m),w;r)$ as $0$ after the killing time.
\begin{lemma}
\label{lem:Gn(z,w)}
For all $0 < r \le 1$, $z, w \in V_n$ we have
\eqnst
{ G_n(z,w;r)
= \E^z \left[ A(S_r(\tau_{V_n^c}),w;r) \right] - A(z,w;r). }
\end{lemma}
\begin{proof}
The case $r = 1$ is \cite{LLbook}*{Lemma 4.6.2(b)}. The proof
is similar when $0 < r < 1$. Note that
$M_m := A(S_r(m),w;r) - \frac{1}{4} \sum_{j = 0}^{m-1} \mathbf{1}_{S_r(j) = w}$
is a martingale. This gives
\eqnst
{ A(z,w;r)
= \E^z [ A(S_r(N \wedge \tau_{V_n^c}),w;r) ]
- \E^z \left[ \sum_{0 \le j < N \wedge \tau_{V_n^c}} \mathbf{1}_{S_r(j) = w} \right]. }
Letting $N \to \infty$ and using bounded and monotone convergence,
respectively, for the two terms we get the statement of the Lemma.
\end{proof}
\begin{lemma}
\label{lem:estimates}
Uniformly in $0 < r \le 1$, $z \in V_n$, $n \ge 1$ and for $|f| = 1 = |h|$, we have
\begin{align}
G_n(o,o;r)
&= O \left( \log n \right)
\label{e:Gn(o,o)} \\
G_n(z,o;r) - G_n(o,o;r)
&= -A(z,o;r) + O \left( |z| \frac{\log n}{n} \right)
= O \left( \log |z| \right)
\label{e:Gn(z,o)} \\
\partial^{(1)}_f G_n(z,o;r)
&= \partial^{(1)}_f A(z,o;r)
+ O \left( \frac{\log n}{\dist(z,V_n^c)} \right) \\
&= O \left( \frac{1}{|z|} \right) + O \left( \frac{\log n}{\dist(z,V_n^c)} \right)
\label{e:partial(1)} \\
\partial^{(2)}_h G_n(z,o;r)
&= \partial^{(2)}_h A(z,o;r)
+ O \left( \frac{1}{n} \right)
= O \left( \frac{1}{|z|} \right)
\label{e:partial(2)} \\
\partial^{(1)}_f \partial^{(2)}_h G_n(z,o;r)
&= \partial^{(1)}_f \partial^{(2)}_h A(z,o;r)
+ O \left( \frac{1}{n\, \dist(z,V_n^c)} \right) \\
&= O \left( \frac{1}{|z|^2} \right) + O \left( \frac{1}{n\, \dist(z,V_n^c)} \right).
\label{e:partialboth}
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:estimates}]
The estimate \eqref{e:Gn(o,o)} follows from
\eqnst
{ G_n(o,o;r)
\le G_n(o,o;1)
= \E^o [ A(S(\tau_{V_n^c}),o;1) ] = O ( \log n ). }
In order to prove \eqref{e:Gn(z,o)}, we use Lemma \ref{lem:Gn(z,w)}
to write
\eqnsplst
{ G_n(z,o;r) - G_n(o,o;r)
&= -A(z,o;r) + \E^z [ A(S_r(\tau_{V_n^c}),o;r) ]
- \E^o [ A(S_r(\tau_{V_n^c}),o;r) ]. }
Due to Lemma \ref{lem:Aasymp}(ii), the first term is
$O(\log |z|)$. By the same lemma, the random variable
inside the expectations is $O(\log n)$. Due to a difference
estimate for harmonic functions \cite{LLbook}*{Theorem 6.3.8},
the total variation distance between the exit distributions
of the random walk (without killing) started from $z$ and $o$,
respectively, is $O(|z|/n)$. This implies the same for the
killed random walk, and the statement follows.
The proofs of \eqref{e:partial(1)} and \eqref{e:partial(2)} are similar
to the proof of \eqref{e:partialboth}, so we only give the latter.
We write
\eqnsplst
{ \partial^{(1)}_{f} \partial^{(2)}_{h} G_n(z,o;r)
&= - \partial^{(1)}_f \partial^{(2)}_h A(z,o;r)
+ \E^{z+f} \left[ A(S_{\bar{\tau}_n},h;r) - A(S_{\bar{\tau}_n},o;r) \right] \\
&\qquad\qquad - \E^{z} \left[ A(S_{\bar{\tau}_n},h;r) - A(S_{\bar{\tau}_n},o;r) \right]. }
Due to Lemma \ref{lem:Aasymp}(iii), the first term is $O(|z|^{-2})$.
The random variable inside the expectations is $O(\frac{1}{n})$, again
due to Lemma \ref{lem:Aasymp}(iii). Again due to the difference estimate for
harmonic functions \cite{LLbook}*{Theorem 6.3.8}, the total variation distance
between exit distributions of the random walk (without killing) started
from $z$ and $z+f$, respectively, is $O(1/\dist(z,V_n^c))$. This
implies the claim.
\end{proof}
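As an illustration of \eqref{e:Gn(z,o)} at $r = 1$ (ours, not part of the proof), one can compute $G_n(z,o;1)$ by iterating the walk absorbed outside the box. We assume here the form $V_n = \{-n,\dots,n\}^2$, and use the classical value $a(e_1) = 1$ of the potential kernel, so that in the present normalization $A(e_1,o;1) = 1/4$:

```python
def green_row(n, z, iters=1500):
    # G_n(z, . ; 1): (1/4) x expected number of visits to each site of the
    # box V_n = {-n,...,n}^2 before exiting, for SRW started at z.
    size = 2 * n + 1
    p = [[0.0] * size for _ in range(size)]
    p[z[0] + n][z[1] + n] = 1.0
    G = [[0.0] * size for _ in range(size)]
    for _ in range(iters):
        for i in range(size):
            for j in range(size):
                G[i][j] += 0.25 * p[i][j]
        q = [[0.0] * size for _ in range(size)]
        for i in range(size):
            for j in range(size):
                mass = p[i][j] / 4.0
                if mass:
                    for ii, jj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ii < size and 0 <= jj < size:
                            q[ii][jj] += mass  # steps leaving the box are absorbed
        p = q
    return G

n = 6
G = green_row(n, (0, 0))
# By symmetry of the walk, G_n(e_1, o; 1) = G_n(o, e_1; 1), so one run suffices.
diff = G[1 + n][0 + n] - G[0 + n][0 + n]
print(diff)  # close to -A(e_1, o; 1) = -1/4, up to the boundary error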
\subsubsection{Proof of the summation formulas}
\label{sssec:summation}
\begin{proof}[Proof of Proposition \ref{prop:limn}.]
In the first row of $M_L - M_\Gamma$ we can take the limit $n \to \infty$ directly
and we obtain the first row of $M_r$. In order to deal with the divergent entries,
we use row and column operations to exhibit cancellations in the determinant
that allow us to take the limit $n \to \infty$.
Subtracting the second column from the other columns and then subtracting
the second row from the third and fourth rows we have:
\eqnsplst
{ &\det(M_{L}) - \det(M_{\Gamma}) \\
&\qquad = \begin{vmatrix}
\frac{2}{4} + o(1) & \frac{1}{4} + o(1) & \frac{1}{\pi} - \frac{2}{4} + o(1) & \frac{1}{\pi} - \frac{2}{4} + o(1) \\
\partial^{(2)}_{e_2} G_n & G_n & \partial^{(2)}_{-e_1} G_n & \partial^{(2)}_{e_1} G_n \\
\partial^{(1)}_{e_1}\partial^{(2)}_{e_2} G_n & \partial^{(1)}_{e_1} G_n & \partial^{(1)}_{e_1}\partial^{(2)}_{-e_1} G_n & \partial^{(1)}_{e_1}\partial^{(2)}_{e_1} G_n \\
(\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2}) \partial^{(2)}_{e_2} G_n
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2}) G_n
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{-e_1} G_n
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{e_1} G_n \end{vmatrix}, }
where each entry in rows 2--4 is evaluated at $(i_1,o;r=1)$.
We split the determinant into two terms by writing $G_n(i_1,o;1)$
(the second entry in the second row) as
\eqnst
{ G_n(i_1,o;1)
= G_n(o,o;1) + (G_n(i_1,o;1) - G_n(o,o;1)). }
This gives the terms:
\eqnsplst
{ \det(M_{L}) - \det(M_{\Gamma})
= G_n(o,o;1) \, \det(\tilde{M}^o) + \det(\tilde{M}^\diff), }
where $\tilde{M}^o$ is the minor of the $4 \times 4$ matrix displayed above
obtained by removing the second row and second column,
and $\tilde{M}^\diff$ is obtained from that matrix by replacing the entry
$G_n(i_1,o;1)$ by $G_n(i_1,o;1) - G_n(o,o;1)$.
The estimates of Lemma \ref{lem:estimates} with $z = i_1$ show that
\eqnst
{ \tilde{M}^o
= \begin{pmatrix}
\frac{2}{4} + o(1) & \frac{1}{\pi} - \frac{2}{4} + o(1) & \frac{1}{\pi} - \frac{2}{4} + o(1) \\
O (|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) & O(|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) & O(|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) \\
O (|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) & O(|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) & O(|z|^{-2} + n^{-1} \dist(z,V_n^c)^{-1}) \end{pmatrix}. }
This implies that
\eqnst
{ G_n(o,o;1) \sum_{i_1 \in V_n} \det(\tilde{M}^o)
= G_n(o,o;1) \sum_{z \in \Z^2} \det(M^o) + O \left( \frac{\log n}{n} \right), }
where
\eqnst
{ M^o
:= \begin{pmatrix}
\frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} \\
\partial^{(1)}_{e_1}\partial^{(2)}_{e_2} A & \partial^{(1)}_{e_1}\partial^{(2)}_{-e_1} A & \partial^{(1)}_{e_1}\partial^{(2)}_{e_1} A \\
(\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2}) \partial^{(2)}_{e_2} A
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{-e_1} A
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{e_1} A \end{pmatrix}, }
with each entry evaluated at $(z,o;1)$.
\begin{lemma}
\label{lem:sum0}
We have $\sum_{z \in \Z^2} \det(M^o) = 0$.
\end{lemma}
\begin{proof}
We are going to use that $A(z,o;1) = \lim_{r \uparrow 1} A(z,o;r)$. (We note that it is not
strictly necessary here to regularize, and we could argue directly at $r = 1$.
But this helps to avoid some delicate integrability issues, and we will
need regularization anyway when we consider $\tilde{M}^\diff$.)
Due to the uniformity in $r$ of the bounds in Lemma \ref{lem:Aasymp}(iii),
by dominated convergence we get
\eqnst
{ \sum_{z \in \Z^2} \det(M^o)
= \lim_{r \uparrow 1} \sum_{z \in \Z^2} \det(M^o_r), }
where $M^o_r$ is defined the same way as $M^o$, but each entry evaluated
at $(z,o;r)$. We are going to use the
Fourier formula (see \cite{LLbook}*{Proposition 4.2.3}):
\eqnst
{ A(z,o;r)
= \frac{1}{8 \pi^2} \iint \frac{1 - e^{i (\alpha k + \beta l)}}{2 - r(\cos \alpha + \cos \beta)}
\, d\beta \, d\alpha,
\quad 0 < r < 1, }
where $z = (k,l)$, and both integrals are over $[-\pi,\pi]$. This implies
\eqnst
{ \partial^{(1)}_{e_1} \partial^{(2)}_{e_2} A(z,o;r)
= \frac{1}{8 \pi^2} \iint e^{i (\alpha k + \beta l)}
\frac{(e^{i \alpha} - 1)(e^{-i \beta} - 1)}{2 - r(\cos \alpha + \cos \beta)}
\, d\beta \, d\alpha, }
and similar formulas hold for the other entries in rows 2--3.
It follows that
\eqn{e:Fourier}
{ \det(M^o_r)
= \frac{1}{64 \pi^4} \iint\!\iint \frac{e^{i(\alpha_1 + \alpha_2)k + i(\beta_1 + \beta_2)l}
\det(C^o) \, d\beta_1\, d\alpha_1\, d\beta_2\, d\alpha_2}
{(2 - r (\cos \alpha_1 + \cos \beta_1))(2 - r (\cos \alpha_2 + \cos \beta_2))}, }
where
\eqnsplst
{ \det(C^o)
&= \begin{vmatrix}
\frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} \\
(e^{i \alpha_1} - 1)(e^{-i \beta_1} - 1) & (e^{i \alpha_1} - 1)(e^{i \alpha_1} - 1) & (e^{i \alpha_1} - 1)(e^{-i \alpha_1} - 1) \\
-2 i \sin \beta_2 (e^{-i \beta_2} - 1) & -2 i \sin \beta_2 (e^{i \alpha_2} - 1) & -2 i \sin \beta_2 (e^{-i \alpha_2} - 1)
\end{vmatrix} \\
&= -2 i \sin \beta_2 (e^{i \alpha_1} - 1) \begin{vmatrix}
\frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} \\
e^{-i \beta_1} - 1 & e^{i \alpha_1} - 1 & e^{-i \alpha_1} - 1 \\
e^{-i \beta_2} - 1 & e^{i \alpha_2} - 1 & e^{-i \alpha_2} - 1
\end{vmatrix}. }
Since the integrand in \eqref{e:Fourier} is smooth, summation over $(k,l) = z \in \Z^2$
amounts to setting $(\alpha_2, \beta_2) = (-\alpha_1, -\beta_1)$ and keeping
only the integrals over $\alpha_1, \beta_1$. Therefore,
\eqnst
{ \sum_{(k,l) \in \Z^2} \det(M^o_r)
= \frac{1}{16 \pi^2} \iint (2 i \sin \beta_1)(e^{i \alpha_1} - 1)
\frac{\begin{vmatrix}
\frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} \\
e^{-i \beta_1} - 1 & e^{i \alpha_1} - 1 & e^{-i \alpha_1} - 1 \\
e^{i \beta_1} - 1 & e^{-i \alpha_1} - 1 & e^{i \alpha_1} - 1
\end{vmatrix}}{(2 - r (\cos \alpha_1 + \cos \beta_1))^2}
\, d\beta_1 \, d\alpha_1. }
Write the factor in front of the determinant as
$e^{i \alpha_1} - 1 = (\cos \alpha_1 - 1) + (i \sin \alpha_1)$, and
split the integral into a sum of two terms. Then the first term is
anti-symmetric under $\alpha_1 \leftrightarrow - \alpha_1$ (since this
exchanges the second and third columns in the determinant). The second term is
anti-symmetric under $(\alpha_1,\beta_1) \leftrightarrow (-\alpha_1,-\beta_1)$
(since this exchanges the second and third rows). Hence both
terms contribute $0$ to the integral and this completes the proof of the lemma.
\end{proof}
We return to the proof of Proposition \ref{prop:limn}.
Applying the estimates of Lemma \ref{lem:estimates} now to
the entries of $\tilde{M}^\diff$, we get
\eqnst
{ \sum_{i_1 \in V_n} \det(\tilde{M}^\diff)
= \sum_{z \in \Z^2} \det(M^\diff) + O \left( \frac{\log n}{n} \right), }
where
\eqnst
{ \det(M^\diff)
= - \begin{vmatrix}
\frac{2}{4} & \frac{1}{4} & \frac{1}{\pi} - \frac{2}{4} & \frac{1}{\pi} - \frac{2}{4} \\
\partial^{(2)}_{e_2} A & A & \partial^{(2)}_{-e_1} A & \partial^{(2)}_{e_1} A \\
\partial^{(1)}_{e_1}\partial^{(2)}_{e_2} A & \partial^{(1)}_{e_1} A & \partial^{(1)}_{e_1}\partial^{(2)}_{-e_1} A & \partial^{(1)}_{e_1}\partial^{(2)}_{e_1} A \\
(\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2}) \partial^{(2)}_{e_2} A
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2}) A
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{-e_1} A
& (\partial^{(1)}_{-e_2}-\partial^{(1)}_{e_2})\partial^{(2)}_{e_1} A \end{vmatrix}, }
with each entry in rows 2--4 evaluated at $(z,o;1)$.
We use that $A(z,o;1) = \lim_{r \uparrow 1} A(z,o;r)$.
The bounds of Lemma \ref{lem:Aasymp}(ii),(iii) and dominated convergence imply that
\eqnst
{ \sum_{z \in \Z^2} \det(M^\diff)
= \lim_{r \uparrow 1} \sum_{z \in \Z^2} \det(M^\diff_r), }
where $M^\diff_r$ is defined in the same way as $M^\diff$, except
the $A$-entries are evaluated at $(z,o;r)$.
Since we now have $0 < r < 1$, we can write
\eqnsplst
{ A(z,o;r)
&= G(o,o;r) - G(z,o;r) \\
\partial^{(1)}_f A(z,o;r)
&= - \partial^{(1)}_f G(z,o;r) \\
\partial^{(2)}_h A(z,o;r)
&= - \partial^{(2)}_h G(z,o;r) \\
\partial^{(1)}_f \partial^{(2)}_h A(z,o;r)
&= - \partial^{(1)}_f \partial^{(2)}_h G(z,o;r). }
Due to Lemma \ref{lem:sum0}, we can drop the term $G(o,o;r)$ from the $A$ entry
in $M^\diff_r$. Now undoing the row and column operations brings the
determinant to the form \eqref{e:Mr}, as required.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:summation}.]
We use the Fourier formula:
\eqnst
{ G_{k,l}(r)
= \frac{1}{8 \pi^2} \iint \frac{e^{i(\alpha k + \beta l)}}{D_r(\alpha,\beta)}
\, d\alpha\, d\beta }
with variables $(\alpha_1,\beta_1)$ in row 4, $(\alpha_2,\beta_2)$ in row 3,
and $(\alpha_3,\beta_3)$ in row 2. This writes
$\det(C_{k,l}(r))$ as a 6-fold integral. Since $0 < r < 1$, the integrand is smooth,
and therefore summation over $(k,l) \in \Z^2$ amounts to setting the Fourier variables
$\alpha_3 = - (\alpha_1 + \alpha_2)$ and $\beta_3 = -(\beta_1 + \beta_2)$.
This yields the formula in the statement.
\end{proof}
\subsection{Further computations in 2D}
\label{ssec:corr-123}
Poghosyan, Grigorev, Priezzhev and Ruelle \cite{PGPR10} extended
Priezzhev's calculations of the height probabilities to the
correlation function between height $0$ and height $1, 2, 3$.
Their result is that for some explicit non-zero constants $c_h$, $d_h$
one has
\eqnst
{ \nu [ \eta(o) = 0 ,\, \eta(y) = h ]
- p(0) p(h)
= \frac{c_h \log |y| + d_h}{|y|^4} + O \left( \frac{1}{|y|^5} \right),
\quad \text{as $|y| \to \infty$, $h = 1, 2, 3$.} }
The presence of the logarithmic term in the correlation function had been
predicted by Piroux and Ruelle \cite{PR05}, who also predicted,
based on conformal field theory calculations, that
\eqnst
{ \nu [ \eta(o) = h_1 ,\, \eta(y) = h_2 ]
- p(h_1) p(h_2)
\sim c_{h_1,h_2} \, \frac{\log^2 |y|}{|y|^4}
\quad \text{as $|y| \to \infty$, $h_1, h_2 = 1, 2, 3$.} }
\begin{open}
Show that the correlation between heights $1, 2, 3$ is of order
$(\log^2 |y|)/|y|^4$.
\end{open}
The presence of logarithmic terms was brought to light by computing
height probabilities near the boundary on the discrete upper half
plane \cites{PR05,JPR06}. Two natural boundary conditions are:
\begin{itemize}
\item[(i)] \textbf{open}: at boundary sites $\Delta^\op_{xx} = 4$,
one particle leaves the system on toppling;
\item[(ii)] \textbf{closed}: at boundary sites $\Delta^\cl_{xx} = 3$,
no particle leaves the system on toppling.
\end{itemize}
The potential kernel on the discrete upper-half plane, with either
boundary condition, is easily expressed in terms of the potential
kernel on $\Z^2$, and Jeng, Piroux and Ruelle use this to
extend Priezzhev's calculations to these cases. After lengthy
computations they arrive at the following result. Let
\eqnsplst
{ p^\op(i;m)
&= \nu^\op [ \eta((0,m)) = i ]; \\
p^\cl(i;m)
&= \nu^\cl [ \eta((0,m)) = i ]. }
Then, with explicit constants $a_i, b_i$,
\eqnsplst
{ p^\op(i;m)
&= p(i) + \frac{1}{m^2} \left( a_i + \frac{b_i}{2} + b_i \log m \right)
+ o(m^{-2}), \quad \text{as $m \to \infty$, $i = 1, 2, 3$;} \\
p^\cl(i;m)
&= p(i) - \frac{1}{m^2} \left( a_i + b_i \log m \right)
+ o(m^{-2}), \quad \text{as $m \to \infty$, $i = 1, 2, 3$.} }
Jeng, Piroux and Ruelle make the remarkable observation that,
up to terms of order $o(m^{-2})$, the probabilities of heights $2$
and $3$ are linear combinations of the probabilities of heights
$0$ and $1$. That is, with either boundary condition $* = \op, \cl$
they get:
\eqnst
{ \frac{48 - 12 \pi + 5 \pi^2 - \pi^3}{2(\pi - 2)} p^*(0;m)
+ (\pi - 8) p^*(1;m) + 2 (\pi - 2) p^*(2;m)
= \frac{(\pi - 2)(\pi - 1)}{\pi} + o(m^{-2}). }
They conjecture that the same relationship between the height probabilities
will hold in all domains and with any boundary condition.
\subsection{Minimal configurations}
We close this section by giving a theorem that states the
general type of events for which the determinantal
computations sketched in Sections \ref{ssec:height-0},
\ref{ssec:corr-0-0} and \ref{ssec:scaling-0} can be applied.
At the same time, we give an alternative formulation that
highlights the connection with the Transfer-Current Theorem.
Let $G = (V \cup \{ s \}, E)$ be a finite connected multigraph.
\begin{definition}
Let $W \subset V$, and let $\xi$ be a particle configuration
on $W$. We say that $\xi$ is \textbf{minimal}, if the sandpile
$\eta^*$ defined by
\eqnst
{ \eta^*(x)
:= \begin{cases}
\xi(x) & \text{if $x \in W$;} \\
\eta^\max(x) & \text{if $x \in V \setminus W$;}
\end{cases} }
is recurrent, but $\eta^* - \mathbf{1}_x \not\in \cR_G$
for any $x \in W$.
\end{definition}
\begin{theorem}\cites{MD91,JW12}
Let $\xi$ be minimal on $W$. There exists a set of edges
$\cE_W$ touching $W$, such that
\eqnst
{ \nu_G [ \eta : \eta(x) = \xi(x),\, x \in W ]
= \det ( K_G (e,f) )_{e, f \in \cE_W}. }
\end{theorem}
The reason the theorem works is that minimal sandpile events
can be expressed, via a particular version of the burning bijection,
as the absence of a fixed set of edges from the uniform spanning tree.
\section{Infinite graphs}
\label{sec:measures}
In this section we look at whether the sandpile dynamics
via topplings can be extended to infinite graphs.
Let $G = (V,E)$ be a locally finite, connected, infinite graph.
A sequence $V_1 \subset V_2 \subset \dots \subset V$
of finite sets of vertices such that $\cup_{n=1}^\infty V_n = V$
is called an \textbf{exhaustion} of $G$. We let
$G_n = (V_n \cup \{ s \}, E_n)$ denote the wired graph
obtained from $V_n$. We write $\nu_n$ for the
stationary distribution of the sandpile Markov chain
on $G_n$. The questions we will be interested in are:\\
(i) Does $\nu_n \Rightarrow \text{ some limit } \nu$?\\
(ii) If yes, are avalanches $\nu$-a.s.~finite, if we add
a single particle to the infinite configuration?
A fruitful approach to the above questions turns out to
be to translate them into questions about the uniform
spanning tree via the burning bijection.
\subsection{The wired uniform spanning forest}
\label{ssec:WSF}
The following theorem, due to \cites{Pem91,Hagg95}, says
that the measure $\UST_{G_n}$ converges
to a unique limit, regardless of the exhaustion.
In order to formulate this as weak convergence of probability
measures, regard a spanning tree as a set of edges. Then
$\UST_{G_n}$ can be viewed as a probability measure
on $2^E$ (note that edges in $E_n$, including the ones
leading to $s$, are uniquely identified with elements of $E$).
\begin{theorem}
There exists a measure $\mathsf{WSF}$ such that for
any exhaustion we have $\mathsf{UST}_{G_n} \Rightarrow \mathsf{WSF}$
as $n \to \infty$, in the sense of weak convergence of
probability measures. Under $\mathsf{WSF}$, each connected
component is an infinite tree, almost surely.
\end{theorem}
The limit $\mathsf{WSF}$ is called the
\textbf{wired uniform spanning forest} measure.
\begin{definition}
An infinite tree has \textbf{one end}, if any two infinite
self-avoiding paths in the tree have a finite symmetric difference.
\end{definition}
\begin{theorem}\cites{Pem91,BLPS}
\label{thm:Pemantle}
When $G = \Z^d$, we have the following.\\
(i) If $2 \le d \le 4$ then $\mathsf{WSF}$-a.s.~there is a single tree
and it has one end.\\
(ii) If $d \ge 5$ then $\mathsf{WSF}$-a.s.~there are infinitely many
trees and each one has one end.
\end{theorem}
Given a spanning tree $\tau$ in a finite graph with a sink $s$,
we say that vertex $y$ is a \textbf{descendant} of vertex $x$,
if $x$ lies on the unique path from $y$ to $s$ in $\tau$.
This notion extends naturally to infinite one-ended trees:
in this case $y$ is called a \textbf{descendant} of $x$,
if $y$ lies on the unique infinite self-avoiding path
starting at $x$.
It is usually not an easy problem to decide whether for a given
infinite graph each tree has one end $\WSF$-a.s. Nevertheless the
one end property is known for a large class of graphs beyond $\Z^d$;
see \cites{BLPS,LMS08}. Examples of graphs where the one end property
fails are given by the direct product of $\Z$ with any finite
connected graph. On such graphs, $\WSF$-a.s.~there is a single tree with two
ends.
\subsection{One end property and the sandpile model}
\label{ssec:one-end}
The usefulness of the one end property for the sandpile model
is illustrated by the following theorem.
\begin{theorem}
\label{thm:measure} \cite{JW12}
Suppose that $\mathsf{WSF}$-a.s.~each tree has one end. Then
there exists a measure $\nu$ such that for any exhaustion
$\nu_n \Rightarrow \nu$. That is, for any finite $Q \subset V$
and any particle configuration $\xi$ on $Q$ we have
\eqnst
{ \lim_{n \to \infty} \nu_n \left[ \eta : \eta_Q = \xi \right]
= \nu \left[ \eta : \eta_Q = \xi \right]. }
\end{theorem}
The main idea of the proof is a decomposition of the burning
bijection of Majumdar and Dhar into two phases. Such a
decomposition was used in \cite{Pr94} when $Q = \{ o \}$
(see Section \ref{ssec:height-123}), and is also implicit
in \cite{MD91}.
Fix a finite set $Q \subset V$, and suppose that our aim is to
show that under $\nu_n$, the marginal distribution of the
sandpile in $Q$ converges as $n \to \infty$.
\medbreak
{\bf Burning bijection anchored at $Q$.} We split the burning
process into two phases.
\emph{Phase I.} Follow the usual burning process with the
restriction that no vertex of $Q$ is allowed to burn. Phase I ends
when there are no more vertices that can be burnt this way.
\emph{Phase II.} Now follow the usual burning process
to burn the remaining vertices.
A formal definition can be given along the lines of the case
$Q = \{ o \}$ presented in Section \ref{ssec:height-123}.
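The two phases are easy to experiment with. The following Python sketch (our own illustration with ad hoc names, not the formal construction of \cite{JW12}) runs the burning test -- a vertex burns once its height is at least its number of unburnt neighbours -- on a small wired path, first with $Q$ forbidden, and then on the remainder:

```python
def burn(eta, nbrs, burnt, forbidden=frozenset()):
    """Repeatedly burn vertices outside `forbidden` whose height is at
    least their number of unburnt neighbours; return the burnt set."""
    burnt = set(burnt)
    progress = True
    while progress:
        progress = False
        for x in eta:
            if x in burnt or x in forbidden:
                continue
            if eta[x] >= sum(1 for y in nbrs[x] if y not in burnt):
                burnt.add(x)
                progress = True
    return burnt

# Wired path on vertices 0..4; the endpoints are joined to the sink 's'.
nbrs = {0: ['s', 1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 's']}
eta = {0: 1, 1: 0, 2: 1, 3: 1, 4: 1}   # a recurrent configuration
Q = {2}

phase1 = burn(eta, nbrs, burnt={'s'}, forbidden=Q)  # Phase I: Q never burns
W = set(eta) - phase1                               # burnt only in Phase II
phase2 = burn(eta, nbrs, burnt=phase1)              # Phase II: burn the rest

assert phase2 == set(eta) | {'s'}   # eta is recurrent: everything burns
print(W)                            # {1, 2}
```

In this example $W = \{1, 2\}$ is strictly larger than $Q = \{2\}$: the height-$0$ vertex at $1$ cannot burn in Phase I, because its only burnable route passes through $Q$.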
\medbreak
Given $\eta \in \cR_{G_n}$, let $W_n(\eta)$ denote the
set of vertices that are burnt in Phase II (so that
$Q \subset W_n(\eta) \subset V_n$). A bijection
$\varphi_Q : \cR_{G_n} \to \cT_{G_n}$ can be constructed
from the above burning rule similarly to Sections
\ref{ssec:burning} and \ref{ssec:height-123}.
The proof of Theorem \ref{thm:measure} is based on
the following lemma that we state without proof; see \cite{JW12}.
\begin{lemma}
\label{lem:cond-indep}
Fix $W$ such that $Q \subset W \subset V_n$.
Under $\nu_n$, the restrictions of the sandpile to $V_n \setminus W$
and $W$, respectively, are conditionally independent, given
the event $\{ \eta : W_n(\eta) = W \}$.
\end{lemma}
\begin{proof}[Sketch of the proof of Theorem \ref{thm:measure}.]
Due to Lemma \ref{lem:cond-indep} we can write
\eqnsplst
{ \nu_n [ \eta : \eta_Q = \xi ]
&= \sum_{Q \subset W \subset V_n} \ \nu_n [ \eta : W_n(\eta) = W,\, \eta_Q = \xi ] \\
&= \sum_{Q \subset W \subset V_n} \ \nu_n [ \eta: W_n(\eta) = W ] \, p_{W,\xi}, }
where the numbers $p_{W,\xi}$ do not depend on $n$. On the other hand,
the event $\{ W_n = W \}$ is spanning tree local, and hence
$\nu_n [ W_n = W ] \to q_W$, for some numbers $q_W$. This suggests
that
\eqnst
{ \nu [ \eta_Q = \xi ]
= \sum_{\substack{Q \subset W \subset V \\ \text{$W$ finite}}} \ q_W \, p_{W,\xi}. }
The proof can be completed by showing that the random sets $W_n$
converge weakly to a limit $W_\infty$ that is a.s.~finite.
For this, the following property is key: in the \emph{first step} of Phase II,
no vertex in $W_n \setminus Q$ can burn. Indeed, in Phase I we have
examined such vertices, and they were found to be not burnable.
This implies that the spanning tree will contain no edges
between $W_n \setminus Q$ and $V \setminus W_n$. In fact,
$W_n$ equals the set of all descendants of $Q$ under the bijection;
see Exercise \ref{ex:desc}. Due to the one end hypothesis of the
theorem, the set of descendants of $Q$ is finite $\WSF$-a.s.
From this one can conclude the convergence $W_n \Rightarrow W_\infty$,
where $W_\infty$ is the set of all descendants of $Q$ in the wired
spanning forest.
\end{proof}
The following theorem shows that on transient graphs at least,
the \textbf{sandpile measure} $\nu$ constructed in
Theorem \ref{thm:measure} is nicely behaved, in that it
has finite avalanches. The theorem can be proved along the
lines of \cite{JR08}*{Theorem 3.11}, although that proof was stated for $\Z^d$.
\begin{theorem}\cite{JR08}*{Theorem 3.11}
\label{thm:finite}
Assume the hypotheses of Theorem \ref{thm:measure}. If in addition
$G$ is transient, then for $\nu$-a.e.~$\eta$ and all $x \in V$,
the configuration $\eta + \mathbf{1}_x$ can be stabilized with
finitely many topplings.
\end{theorem}
\begin{proof}[Idea of the proof.]
On transient graphs, $\E_\nu [ n(x,y,\cdot) ] = G(x,y) < \infty$.
Hence $\nu$-a.s., every site topples finitely many times, when a
particle is added at $x$. However, this is not enough, since we may
still have infinitely many vertices toppling (note that
$\sum_{y \in V} G(x,y) = \infty$).
In order to show that only finitely many vertices topple, one can
use a decomposition of the avalanche into \textbf{waves},
introduced by Ivashkevich, Ktitarev and Priezzhev \cite{IKP94}.
Waves are defined as follows. After adding a particle at $x$, we
topple $x$, and then all other vertices that can be toppled, but we
do not allow $x$ to topple a second time.
It is not difficult to see that each vertex topples at most once
under this restriction. The set of vertices that toppled is called
the \textbf{first wave}. After the first wave, if $x$ is still
unstable (this will be the case if and only if all of its
neighbours were in the first wave), topple $x$ a second time and
topple all other vertices that can be toppled, not allowing $x$
to topple a third time. This is called the \textbf{second wave}, etc.
Ivashkevich, Ktitarev and Priezzhev show that the ensemble of
all possible waves started at $x$ is in bijection with the ensemble of
all spanning forests of $G_n$ with two components such that $x$ and
$s$ are in distinct components.
The expected number of waves under $\nu$ is finite:
$\E_\nu [ n(x,x,\cdot) ] = G(x,x) < \infty$. Therefore, it is sufficient
to show that each wave is finite $\nu$-a.s. The latter can be deduced
from the one end assumption.
\end{proof}
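The wave decomposition is straightforward to simulate. The following minimal sketch (our own toy example on a wired path, with $d = 1$, so the toppling threshold is $2$) decomposes the avalanche started from the all-$1$ configuration plus one particle; the assertion checks the property that no vertex topples twice within a wave:

```python
N = 7                     # wired path 0..N-1; endpoint topplings lose a particle to the sink

def topple(eta, x):
    eta[x] -= 2
    if x > 0:     eta[x - 1] += 1
    if x < N - 1: eta[x + 1] += 1

def stabilize_by_waves(eta, x):
    waves = []
    while eta[x] >= 2:
        topple(eta, x)                       # x topples once to start a wave
        wave = {x}
        progress = True
        while progress:
            progress = False
            for y in range(N):
                if y != x and eta[y] >= 2:
                    assert y not in wave     # each vertex topples once per wave
                    topple(eta, y)
                    wave.add(y)
                    progress = True
        waves.append(wave)
    return waves

eta = [1] * N
eta[3] += 1                                  # add one particle at x = 3
waves = stabilize_by_waves(eta, 3)
print(len(waves), eta)                       # 4 waves; final [1, 1, 1, 0, 1, 1, 1]
```

Here the avalanche consists of $4$ nested waves, matching the fact that the number of waves equals the number of topplings at $x$.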
On recurrent graphs, finiteness of avalanches is much more subtle,
and this is largely open.
\begin{open}
Consider $\Z^2$. Is it true that for $\nu$-a.e.~$\eta$ the
configuration $\eta + \mathbf{1}_0$ can be stabilized
with finitely many topplings? Note that for \emph{all} $x \in \Z^2$
we have
\eqnst
{ \E_\nu [ n(o, x; \cdot) ]
= \lim_{n \to \infty} \E_{\nu_n} [ n(o, x; \cdot) ]
= \lim_{n \to \infty} G_n(o,x)
= \infty. }
Hence \emph{on average} every vertex topples infinitely often.
\end{open}
On graphs of the form $\Z \times G_0$, where $G_0$ is a finite connected
graph, avalanches are not finite in general. When $G_0$ consists of a single
vertex, this follows from the fact that $\nu$ concentrates on the
single configuration $\eta \equiv 1$ (and hence all vertices
topple infinitely often); see Exercise \ref{ex:1D} and \cite{MRSV00}.
When $G_0$ has at least two vertices, the stationary distributions
do not have a unique weak limit point. Let $G_{-n,m}$ denote the
wired graph constructed from $\{ -n, -n+1, \dots, m-1, m \} \times G_0$, and
let $\nu_{-n,m}$ denote the stationary distribution of the sandpile
on $G_{-n,m}$. It can be shown \cite{JL07} that there are two distinct ergodic weak limit
points of $\nu_{-n,m}$, as $n, m \to \infty$. This arises from the
fact that the burning process on $G_{-n,m}$ operates both from the left
and the right end of the graph. A typical recurrent configuration
has a ``left-burnable'' and a ``right-burnable'' region, and these
give rise to two distinct ergodic sandpile measures $\nu^L$ and $\nu^R$.
With respect to either $\nu^L$ or $\nu^R$, there is a strictly positive probability
of both finite and infinite avalanches; see \cite{JL07}.
\subsection{The sandpile group of infinite graphs}
When avalanches are $\nu$-a.s.~finite, the addition operators
$E_x$ are defined $\nu$-a.e.~for all $x \in V$. It can be shown
that they leave $\nu$ invariant and the Abelian property holds:
$E_x E_y = E_y E_x$; see \cite{JR08}. Hence the addition operators
generate an Abelian group of measure-preserving transformations of
$(\cR, \nu)$. Little is known about this group. The case that
is perhaps best understood is sandpiles that dissipate particles
on every toppling.
For $d \ge 1$ and an integer $\gamma \ge 1$ we define the
\textbf{dissipative sandpile} with bulk dissipation $\gamma$ as follows.
Let $V_n \subset \Z^d$ be finite, and let
$G^{(\gamma)}_n = (V_n \cup \{ s \}, E^{(\gamma)}_n)$ denote the
graph obtained from the wired graph $(V_n,E_n)$ by adding $\gamma$
edges between each $x \in V_n$ and $s$. That is, a vertex $x$ in
configuration $\eta$ will be stable when $\eta(x) < 2d + \gamma$, and
when an unstable vertex is toppled, it sends $\gamma$ particles
to the sink, in addition to sending one particle to each of its
neighbours. The effect of toppling can be formally written
in terms of the graph Laplacian of $G^{(\gamma)}_n$. This is
\eqnst
{ \Delta^{(\gamma)}_{xy}
= \begin{cases}
2d + \gamma & \text{if $x = y$;} \\
-1 & \text{if $x \sim y$;} \\
0 & \text{otherwise.}
\end{cases} }
Then
\eqnst
{ T_x \eta(y)
= \eta(y) - \Delta^{(\gamma)}_{xy}, \quad x,y \in V_n. }
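As a toy illustration of the dissipative toppling rule (our own example with $d = 1$ and $\gamma = 1$, so the threshold is $2d + \gamma = 3$): each toppling sends one particle to each neighbour and $\gamma$ particles to the sink, so mass is conserved once the dissipated particles are counted.

```python
GAMMA = 1
THRESH = 2 + GAMMA      # d = 1: toppling threshold 2d + gamma = 3
N = 21                  # window of Z, large enough that boundary sites never topple
eta = [0] * N
eta[N // 2] = 6         # six particles at the origin
dissipated = 0
unstable = True
while unstable:
    unstable = False
    for x in range(N):
        if eta[x] >= THRESH:
            eta[x] -= THRESH         # one particle to each neighbour, gamma to the sink
            dissipated += GAMMA
            if x > 0:     eta[x - 1] += 1
            if x < N - 1: eta[x + 1] += 1
            unstable = True

assert max(eta) < THRESH             # the final configuration is stable
assert sum(eta) + dissipated == 6    # mass balance
print(eta[N // 2 - 1 : N // 2 + 2])  # [2, 0, 2]
```

The origin topples twice, dissipating $2$ particles, and the remaining $4$ particles sit at the two neighbours of the origin.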
Maes, Redig and Saada \cite{MRS04} show that Dhar's formalism of the sandpile group
carries through in the limit $V_n \uparrow \Z^d$, and the limiting sandpile measure
$\nu$ can be identified with the Haar measure on a compact Abelian group.
\section{Stabilizability of infinite configurations}
\label{sec:infinite-conf}
In Theorem \ref{thm:finite} we saw that under certain conditions
we can add particles to a $\nu$-typical configuration, and only
finitely many topplings result a.s. A more general question that
is interesting in its own right is: what infinite configurations can
be stabilized (in some appropriate sense)? A more basic question
that is still not fully understood is: what happens if we add a single column of
$n$ particles to a stable background configuration, and attempt to
stabilize?
\subsection{Relaxing a column of particles}
\label{ssec:column}
If we start with a large number of particles at the origin and stabilize,
what will be the shape of the region visited by the particles?
We collect some results on this question in three related models.
Striking computer simulations of these questions are available:
see for example \cites{LP10,LP09,PS13}.
\subsubsection{Three models}
{\bf A. Sandpile.} Start with $n$ particles at $0 \in \Z^d$, and no particles
elsewhere. Now stabilize via topplings. Let
\eqnst
{ S_n
= \{ x \in \Z^d : \text{$x$ was visited by a particle during stabilization} \}. }
More generally: start with $h$ particles at each $x \in \Z^d \setminus \{ 0 \}$,
where $h \le 2d - 2$ (the case $h = 2d - 1$ being trivial). Here $h$ is
allowed to be \emph{negative}, that is, we allow a ``hole'' of depth $|h|$
that first has to be ``filled'', before topplings can occur. Let $S_{n,h}$
denote the set of vertices visited.
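A minimal simulation of Model A in $d = 1$ with background $h = 0$ (our own toy example; the window is chosen large enough that no particle reaches its boundary):

```python
N = 15                         # window of Z centred at the origin
o = N // 2
eta = [0] * N                  # background h = 0
eta[o] = 4                     # n = 4 particles at the origin
visited = {o}
unstable = True
while unstable:
    unstable = False
    for x in range(1, N - 1):  # interior sweep; boundary sites never topple here
        if eta[x] >= 2:        # 2d = 2 in one dimension
            eta[x] -= 2
            eta[x - 1] += 1; visited.add(x - 1)
            eta[x + 1] += 1; visited.add(x + 1)
            unstable = True

S = sorted(v - o for v in visited)
print(S, eta[o - 2 : o + 3])   # S_4 = {-2,...,2}; final heights [1, 1, 0, 1, 1]
```

With $n = 4$ the particles spread over $5$ sites, leaving the final configuration $1, 1, 0, 1, 1$ around the origin.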
\medbreak
{\bf B. Rotor-router.} Start with $n$ particles at the origin and
arbitrary initial rotors everywhere on $\Z^d$. Each particle in turn
follows rotor-router walk until it arrives at a vertex that has not
been visited before, and there it stops. Let
\eqnst
{ A_n
= \{ \text{vertices occupied after all particles stopped} \}. }
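A sketch of rotor-router aggregation in $d = 1$ (our own minimal example; the rotors cycle through the directions $+1, -1$, and the initial rotor states are an arbitrary choice). In one dimension the occupied cluster is always an interval containing the origin, since each particle stops at a site adjacent to the current cluster:

```python
n = 4
rotor = {}                   # rotor state at each site; directions cycle +1, -1
occupied = set()
for _ in range(n):
    cur = 0
    while cur in occupied:   # rotor-router walk until an unoccupied site is reached
        rotor[cur] = (rotor.get(cur, 0) + 1) % 2
        cur += (+1, -1)[rotor[cur]]
    occupied.add(cur)

A = sorted(occupied)
print(A)                     # an interval of n sites: [-2, -1, 0, 1]
assert max(A) - min(A) + 1 == n
```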
\medbreak
{\bf C. Divisible sandpile.} Start with a non-negative real mass $m$
at the origin and no mass anywhere else. If $x \in \Z^d$ has mass
$\ge 1$, distribute the mass in excess of $1$ equally among the neighbours.
In this model topplings do not commute, but the stabilization is still
well-defined; see \cite{LP09}. Let
\eqnst
{ D_m
= \{ x \in \Z^d : \text{$x$ has final mass $=1$} \}. }
Heuristically, this model corresponds to
taking $n = m |h|$ in Model A and letting $h \to -\infty$.
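The divisible sandpile is equally easy to sketch (our own example in $d = 1$ with $m = 4$). A computation with the odometer suggests the limit configuration $(\tfrac{1}{2}, 1, 1, 1, \tfrac{1}{2})$ around the origin, which repeated toppling sweeps approach geometrically fast:

```python
N = 11                        # window of Z; mass never reaches the boundary here
o = N // 2
mass = [0.0] * N
mass[o] = 4.0                 # mass m = 4 at the origin
for _ in range(2000):         # repeated sweeps; convergence is geometric
    for x in range(1, N - 1):
        if mass[x] > 1.0:
            excess = mass[x] - 1.0        # distribute the excess over 1 equally
            mass[x] = 1.0
            mass[x - 1] += excess / 2
            mass[x + 1] += excess / 2

assert abs(sum(mass) - 4.0) < 1e-9        # mass is conserved
limit = [0.5, 1.0, 1.0, 1.0, 0.5]         # predicted limit around the origin
assert all(abs(mass[o - 2 + i] - limit[i]) < 1e-6 for i in range(5))
```

The set $D_m$ of fully occupied sites is $\{-1, 0, 1\}$, with the leftover half-unit of mass at each of the two boundary sites.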
\subsubsection{Shape theorems / shape estimates}
Let us write $B_r = \{ x \in \Z^d : |x| < r \}$ and let
$\omega_d = \text{volume of the unit ball in $\R^d$}$.
Levine and Peres \cite{LP09} show that the rotor-router model
and the divisible sandpile satisfy spherical shape theorems
in the strong sense that there exist $c, c' > 0$ such that
if $n = \omega_d r^d$ then
\eqnst
{ B_{r - c \log r} \subset A_n \subset B_{r ( 1 + c' r^{-1/d} \log r )}; }
and there exist $c, c' > 0$ depending on $d$ such that
if $m = \omega_d r^d$ then
\eqn{e:strong-circ}
{ B_{r-c} \subset D_m \subset B_{r+c'}. }
Simulations suggest that for the sandpile the asymptotic shape
is \emph{not} circular. Levine and Peres \cite{LP09} prove
that if $-h \ge 2-2d$, then
\eqnst
{ B_{c_1 r - c_2} \subset S_{n,h}, }
where $r = (n/\omega_d)^{1/d}$, $c_1 = (2d-1-h)^{-1/d}$ and $c_2$ only depends on $d$.
Also, when $-h \ge 1-d$, then for every $\eps > 0$ they get
\eqnst
{ S_{n,h} \subset B_{c'_1 r + c'_2}, }
where $c'_1 = (d - \eps - h)^{-1/d}$, and $c'_2$ depends only on
$d$, $h$ and $\eps$.
The inner and outer bounds approach each other as
$h \downarrow -\infty$. This reinforces the idea that this
limit corresponds to the divisible sandpile, for which the
limit shape is circular in the strong sense \eqref{e:strong-circ}.
For the values $d - 1 \le h \le 2d - 2$, Fey, Levine and Peres \cite{FLP10}
prove an outer bound of a cube of order $n^{1/d}$:
for any $\eps > 0$ they get
\eqn{e:outer-cube}
{ S_{n,h} \subset \{ x \in \Z^d : \| x \|_\infty \le r \}, }
where $r = \frac{d+\eps}{2d-1-h} (n/\omega_d)^{1/d}$.
\subsubsection{Scaling limit of the final configuration}
In the sandpile model, simulations show intricate fractal
patterns in the final configuration reached from a
column of height $n$. Pegden and Smart \cite{PS13} prove that this pattern
has a scaling limit. In order to state their result, let
\eqnsplst
{ s_n(x)
&= (n \mathbf{1}_0)^\circ(x); \\
\bar{s}_n(x)
&= s_n(n^{1/d} x); }
where $\bar{s}_n$ is extended to all of $\R^d$ in a piecewise
constant way.
\begin{theorem}\cite{PS13}
There exists a unique $s \in L^\infty(\R^d)$, such that
for all functions $\varphi$ continuous with compact support
we have
\eqnst
{ \int_{\R^d} \bar{s}_n\, \varphi\, dx \stackrel{n \to \infty}{\longrightarrow}
\int_{\R^d} s\, \varphi\, dx. }
Moreover, $\int_{\R^d} s\, dx = 1$, $0 \le s \le 2d-1$, and $s$
vanishes outside some ball.
\end{theorem}
The main idea of the proof is the following. Let $v_n(x)$ be the number
of times $x$ topples (called the \textbf{odometer} function, in analogy with
the rotor-router model). Then the discrete Laplacian of $v_n$ is
a bounded function away from $0$, and hence $v_n$ can be compared
with the function $\Phi_n$, where $\Phi_n$ is discrete harmonic away from
$0$ and has discrete Laplacian equal to $-n$ at $0$. Let
$w_n = v_n - \Phi_n$. The proof consists of two parts. First,
compactness considerations show that along subsequences $s_n$ and $w_n$
have limits $s$ and $w$ satisfying the Poisson equation $\Delta w = s$,
with the PDE interpreted in the weak sense. Then it is shown that
subsequential limits are unique. The second part relies on regularity
properties of weak solutions of the PDE $\Delta w = s$, where
$s \in L^\infty(\R^d)$.
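The defining property of the odometer, $s_n = n \mathbf{1}_0 + \Delta v_n$ with $\Delta$ the discrete Laplacian, can be verified directly in a toy run (our own example with $d = 1$ and $n = 4$):

```python
N = 15
o = N // 2
eta = [0] * N
eta[o] = 4                    # n = 4 particles at the origin
v = [0] * N                   # odometer: number of topplings at each site
unstable = True
while unstable:
    unstable = False
    for x in range(1, N - 1):
        if eta[x] >= 2:
            eta[x] -= 2; v[x] += 1
            eta[x - 1] += 1; eta[x + 1] += 1
            unstable = True

# s_n = n * 1_0 + (discrete Laplacian of the odometer)
for x in range(1, N - 1):
    lap = v[x - 1] + v[x + 1] - 2 * v[x]
    assert eta[x] == (4 if x == o else 0) + lap
```

In this run the origin topples $3$ times and each of its neighbours once, and the identity holds at every interior site.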
\subsection{Explosions}
\label{ssec:explosions}
Given an unstable sandpile on $\Z^d$, we can attempt to stabilize it
by carrying out (legal) topplings in such a way that if at any time
some vertex $x$ is unstable, then our procedure ensures that $x$
is toppled at a later time. Let us call such a toppling
sequence \textbf{exhaustive}.
\begin{definition} We call a sandpile $\eta$ on $\Z^d$
\textbf{stabilizable}, if there exists an exhaustive toppling sequence
such that every vertex topples finitely often.
\end{definition}
\begin{definition}
A stable background configuration $\eta$ on $\Z^d$ is called
\textbf{explosive}, if there exists $1 \le n < \infty$ such that
in attempting to stabilize $\eta + n \mathbf{1}_0$ all of
$\Z^d$ topples. The background is called \textbf{robust}, if
there are finitely many topplings for all $n \ge 1$.
\end{definition}
{\bf Note:} If the background is explosive, then in fact all vertices
topple infinitely many times.
\begin{example}
Write $\overline k$ for the configuration that equals the constant
value $k$ everywhere. It is easy to see that $\overline{2d - 1}$ is explosive.
On the other hand, $\overline{2d-2}$ is robust, due to \eqref{e:outer-cube}.
\end{example}
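The dichotomy in the example can be seen in a small simulation (our own sketch in $d = 1$, where $2d - 1 = 1$ and $2d - 2 = 0$). On a finite wired interval the ``explosion'' terminates only because particles exit through the sink, but already every site of the window topples:

```python
def stabilize(eta):
    """Greedy toppling on a wired interval; returns the odometer."""
    N = len(eta)
    v = [0] * N
    unstable = True
    while unstable:
        unstable = False
        for x in range(N):
            if eta[x] >= 2:                    # 2d = 2 for d = 1
                eta[x] -= 2; v[x] += 1
                if x > 0:     eta[x - 1] += 1  # endpoint particles go to the sink
                if x < N - 1: eta[x + 1] += 1
                unstable = True
    return v

N = 21
v_explosive = stabilize([1] * (N // 2) + [2] + [1] * (N // 2))  # background 2d-1, one added
v_robust    = stabilize([0] * (N // 2) + [1] + [0] * (N // 2))  # background 2d-2, one added

assert min(v_explosive) >= 1    # every site of the window topples
assert sum(v_robust) == 0       # no topplings at all
```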
The following two examples, due to Fey, Levine and Peres \cite{FLP10},
show that there are robust configurations arbitrarily close to
$\overline{2d-1}$, and explosive ones arbitrarily close to
$\overline{2d-2}$. For the first example, let
\eqnst
{ \Lambda(m)
= \{ x \in \Z^d : m \nmid x_i,\, 1 \le i \le d \}, }
that is, remove from $\Z^d$ all vertices that have a coordinate
divisible by $m$. Then for any $m \ge 1$ the background
$\overline{2d-2} + \mathbf{1}_{\Lambda(m)}$ is robust;
see \cite{FLP10}*{Theorem 1.2}. For the second example, let
\eqnst
{ \beta(x)
= \begin{cases}
1 & \text{with probability $\eps$;} \\
0 & \text{with probability $1 - \eps$.}
\end{cases} }
Then for any $\eps > 0$, with probability $1$,
the background $\overline{2d-2} + \beta$ is explosive; see \cite{FLP10}*{Proposition 1.4}.
\subsection{Ergodic configurations}
Finally, we state some results on stabilizability of sandpiles that are
random samples from a translation invariant ergodic measure on
$\{ 0, 1, 2, \dots \}^{\Z^d}$. It is tempting to assume that the
boundary for stabilizability would be given by whether the
particle density is above or below the critical sandpile density
$\rho_c = \E_\nu [ \eta(0) ]$. However, this is
not so, even for product measures, as is demonstrated in various
ways in \cite{FLW10}.
The following theorem states some results proved by
Fey and Redig \cite{FR05} and Meester and Quant \cite{MQ05}.
\begin{theorem}\cites{FR05,MQ05}
\label{thm:stabilize}
Let $\mu$ be a translation invariant ergodic measure on sandpiles on $\Z^d$. \\
(a) If $\E_\mu [ \eta(0) ] < d$, then $\mu$-a.e.~$\eta$ is stabilizable. \\
(b) If $\E_\mu [ \eta(0) ] > 2d - 1$, then $\mu$-a.e.~$\eta$ is not stabilizable.
\end{theorem}
The picture of stabilizability is more complete in dimension $d = 1$,
since the upper and lower bounds in Theorem \ref{thm:stabilize}(a),(b) coincide.
Fey, Meester and Redig \cite{FMR09} determined what happens at the critical density.
\begin{theorem}\cite{FMR09}*{Theorem 3.2}
Let $\mu$ be a product measure on sandpiles on $\Z$ such that
$\E_\mu [ \eta(0) ] = 1$ and $\mu [ \eta(0) = 0 ] > 0$. Then
$\mu$-a.e.~$\eta$ is not stabilizable.
\end{theorem}
\bigbreak
{\bf Acknowledgements.}
I thank Lionel Levine and Laurent Saloff-Coste for offering the opportunity
to give a course at the Summer School. I thank Mathav Murugan for being
available to run two tutorials at short notice. The Summer School has been a
very stimulating environment. I thank all participants for their
questions, comments and feedback, which I have attempted to incorporate
into this survey.
\begin{bibdiv}
\begin{biblist}[\resetbiblist{99}]
\bib{AJ04}{article}{
author={Athreya, Siva R.},
author={J{\'a}rai, Antal A.},
title={Infinite volume limit for the stationary distribution of abelian
sandpile models},
journal={Comm. Math. Phys.},
volume={249},
date={2004},
number={1},
pages={197--213},
issn={0010-3616},
review={\MR{2077255 (2005m:82106)}},
}
\bib{BTW88}{article}{
author={Bak, Per},
author={Tang, Chao},
author={Wiesenfeld, Kurt},
title={Self-organized criticality},
journal={Phys. Rev. A (3)},
volume={38},
date={1988},
number={1},
pages={364--374},
issn={1050-2947},
review={\MR{949160 (89g:58126)}},
}
\bib{BA91}{article}{
author={Barsky, D. J.},
author={Aizenman, M.},
title={Percolation critical exponents under the triangle condition},
journal={Ann. Probab.},
volume={19},
date={1991},
number={4},
pages={1520--1536},
issn={0091-1798},
review={\MR{1127713 (93b:60224)}},
}
\bib{BLPS}{article}{
author={Benjamini, Itai},
author={Lyons, Russell},
author={Peres, Yuval},
author={Schramm, Oded},
title={Uniform spanning forests},
journal={Ann. Probab.},
volume={29},
date={2001},
number={1},
pages={1--65},
issn={0091-1798},
review={\MR{1825141 (2003a:60015)}},
}
\bib{BR02}{article}{
author={Le Borgne, Yvan},
author={Rossin, Dominique},
title={On the identity of the sandpile group},
note={LaCIM 2000 Conference on Combinatorics, Computer Science and
Applications (Montreal, QC)},
journal={Discrete Math.},
volume={256},
date={2002},
number={3},
pages={775--790},
issn={0012-365X},
review={\MR{1935788 (2003j:82054)}},
}
\bib{Bbook}{book}{
author={Bollob{\'a}s, B{\'e}la},
title={Modern graph theory},
series={Graduate Texts in Mathematics},
volume={184},
publisher={Springer-Verlag},
place={New York},
date={1998},
pages={xiv+394},
isbn={0-387-98488-7},
review={\MR{1633290 (99h:05001)}},
}
\bib{BH57}{article}{
author={Broadbent, S. R.},
author={Hammersley, J. M.},
title={Percolation processes. I. Crystals and mazes},
journal={Proc. Cambridge Philos. Soc.},
volume={53},
date={1957},
pages={629--641},
review={\MR{0091567 (19,989e)}},
}
\bib{BP93}{article}{
author={Burton, Robert},
author={Pemantle, Robin},
title={Local characteristics, entropy and limit theorems for spanning
trees and domino tilings via transfer-impedances},
journal={Ann. Probab.},
volume={21},
date={1993},
number={3},
pages={1329--1371},
issn={0091-1798},
review={\MR{1235419 (94m:60019)}},
}
\bib{Dhar90}{article}{
author={Dhar, Deepak},
title={Self-organized critical state of sandpile automaton models},
journal={Phys. Rev. Lett.},
volume={64},
date={1990},
number={14},
pages={1613--1616},
issn={0031-9007},
review={\MR{1044086 (90m:82053)}},
}
\bib{Dhar06}{article}{
author={Dhar, Deepak},
title={Theoretical studies of self-organized criticality},
journal={Phys. A},
volume={369},
date={2006},
number={1},
pages={29--70},
issn={0378-4371},
review={\MR{2246566 (2007g:82042)}},
}
\bib{DM90}{article}{
author={Dhar, Deepak},
author={Majumdar, S. N.},
title={Abelian sandpile model on the Bethe lattice},
journal={J. Phys. A},
volume={23},
date={1990},
number={19},
pages={4333--4350},
issn={0305-4470},
review={\MR{1076905 (91m:82098)}},
}
\bib{Durre}{article}{
author={D{\"u}rre, Maximilian},
title={Conformal covariance of the abelian sandpile height one field},
journal={Stochastic Process. Appl.},
volume={119},
date={2009},
number={9},
pages={2725--2743},
issn={0304-4149},
review={\MR{2554026 (2011c:60311)}},
}
\bib{FLP10}{article}{
author={Fey, Anne},
author={Levine, Lionel},
author={Peres, Yuval},
title={Growth rates and explosions in sandpiles},
journal={J. Stat. Phys.},
volume={138},
date={2010},
number={1-3},
pages={143--159},
issn={0022-4715},
review={\MR{2594895 (2011c:82051)}},
}
\bib{FLW10}{article}{
author={Fey, Anne},
author={Levine, Lionel},
author={Wilson, David B.},
title={Approach to criticality in sandpiles},
journal={Phys. Rev. E (3)},
volume={82},
date={2010},
number={3},
pages={031121, 14},
issn={1539-3755},
review={\MR{2787987 (2012a:82060)}},
}
\bib{FMR09}{article}{
author={Fey, Anne},
author={Meester, Ronald},
author={Redig, Frank},
title={Stabilizability and percolation in the infinite volume sandpile
model},
journal={Ann. Probab.},
volume={37},
date={2009},
number={2},
pages={654--675},
issn={0091-1798},
review={\MR{2510019 (2010c:60289)}},
}
\bib{FR05}{article}{
author={Fey-den Boer, Anne},
author={Redig, Frank},
title={Organized versus self-organized criticality in the abelian
sandpile model},
journal={Markov Process. Related Fields},
volume={11},
date={2005},
number={3},
pages={425--442},
issn={1024-2953},
review={\MR{2175021 (2006g:60136)}},
}
\bib{FU96}{article}{
author={Fukai, Yasunari},
author={Uchiyama, K{\^o}hei},
title={Potential kernel for two-dimensional random walk},
journal={Ann. Probab.},
volume={24},
date={1996},
number={4},
pages={1979--1992},
issn={0091-1798},
review={\MR{1415236 (97m:60098)}},
}
\bib{Grimmett}{book}{
author={Grimmett, Geoffrey},
title={Percolation},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={321},
edition={2},
publisher={Springer-Verlag},
place={Berlin},
date={1999},
pages={xiv+444},
isbn={3-540-64902-6},
review={\MR{1707339 (2001a:60114)}},
}
\bib{Grbook2}{book}{
author={Grimmett, Geoffrey},
title={The random-cluster model},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={333},
publisher={Springer-Verlag},
place={Berlin},
date={2006},
pages={xiv+377},
isbn={978-3-540-32890-2},
isbn={3-540-32890-4},
review={\MR{2243761 (2007m:60295)}},
}
\bib{Hagg95}{article}{
author={H{\"a}ggstr{\"o}m, Olle},
title={Random-cluster measures and uniform spanning trees},
journal={Stochastic Process. Appl.},
volume={59},
date={1995},
number={2},
pages={267--275},
issn={0304-4149},
review={\MR{1357655 (97b:60170)}},
}
\bib{Hamm57a}{article}{
author={Hammersley, J. M.},
title={Percolation processes: Lower bounds for the critical probability},
journal={Ann. Math. Statist.},
volume={28},
date={1957},
pages={790--795},
issn={0003-4851},
review={\MR{0101564 (21 \#374)}},
}
\bib{Hamm59}{article}{
author={Hammersley, J. M.},
title={Bornes sup\'erieures de la probabilit\'e critique dans un
processus de filtration},
language={French},
conference={
title={Le calcul des probabilit\'es et ses applications. Paris, 15-20
juillet 1958},
},
book={
series={Colloques Internationaux du Centre National de la Recherche
Scientifique, LXXXVII},
publisher={Centre National de la Recherche Scientifique},
place={Paris},
},
date={1959},
pages={17--37},
review={\MR{0105751 (21 \#4487)}},
}
\bib{HvdHS03}{article}{
author={Hara, Takashi},
author={van der Hofstad, Remco},
author={Slade, Gordon},
title={Critical two-point functions and the lace expansion for spread-out
high-dimensional percolation and related models},
journal={Ann. Probab.},
volume={31},
date={2003},
number={1},
pages={349--408},
issn={0091-1798},
review={\MR{1959796 (2005c:60130)}},
}
\bib{HS90}{article}{
author={Hara, Takashi},
author={Slade, Gordon},
title={Mean-field critical behaviour for percolation in high dimensions},
journal={Comm. Math. Phys.},
volume={128},
date={1990},
number={2},
pages={333--391},
issn={0010-3616},
review={\MR{1043524 (91a:82037)}},
}
\bib{HS00a}{article}{
author={Hara, Takashi},
author={Slade, Gordon},
title={The scaling limit of the incipient infinite cluster in
high-dimensional percolation. I. Critical exponents},
journal={J. Statist. Phys.},
volume={99},
date={2000},
number={5-6},
pages={1075--1168},
issn={0022-4715},
review={\MR{1773141 (2001g:82053a)}},
}
\bib{HS00b}{article}{
author={Hara, Takashi},
author={Slade, Gordon},
title={The scaling limit of the incipient infinite cluster in
high-dimensional percolation. II. Integrated super-Brownian excursion},
note={Probabilistic techniques in equilibrium and nonequilibrium
statistical physics},
journal={J. Math. Phys.},
volume={41},
date={2000},
number={3},
pages={1244--1293},
issn={0022-2488},
review={\MR{1757958 (2001g:82053b)}},
}
\bib{Harris}{article}{
author={Harris, T. E.},
title={A lower bound for the critical probability in a certain
percolation process},
journal={Proc. Cambridge Philos. Soc.},
volume={56},
date={1960},
pages={13--20},
review={\MR{0115221 (22 \#6023)}},
}
\bib{HLMPPW}{article}{
author={Holroyd, Alexander E.},
author={Levine, Lionel},
author={M{\'e}sz{\'a}ros, Karola},
author={Peres, Yuval},
author={Propp, James},
author={Wilson, David B.},
title={Chip-firing and rotor-routing on directed graphs},
conference={
title={In and out of equilibrium. 2},
},
book={
series={Progr. Probab.},
volume={60},
publisher={Birkh{\"a}user},
place={Basel},
},
date={2008},
pages={331--364},
review={\MR{2477390 (2010f:82066)}},
}
\bib{HKPV06}{article}{
author={Hough, J. Ben},
author={Krishnapur, Manjunath},
author={Peres, Yuval},
author={Vir{\'a}g, B{\'a}lint},
title={Determinantal processes and independence},
journal={Probab. Surv.},
volume={3},
date={2006},
pages={206--229},
issn={1549-5787},
review={\MR{2216966 (2006m:60068)}},
}
\bib{IKP94}{article}{
author={Ivashkevich, Eugene V.},
author={Ktitarev, Dmitri V.},
author={Priezzhev, Vyatcheslav B.},
title={Waves of topplings in an Abelian sandpile},
journal={Phys. A},
volume={209},
number={3--4},
pages={347--360},
year={1994},
issn={0378-4371},
}
\bib{IP98}{article}{
author={Ivashkevich, E. V.},
author={Priezzhev, Vyatcheslav B.},
title={Introduction to the sandpile model},
journal={Phys. A},
volume={254},
number={1--2},
date={1998},
pages={97--116},
issn={0378-4371},
}
\bib{JL07}{article}{
author={J{\'a}rai, Antal A.},
author={Lyons, Russell},
title={Ladder sandpiles},
journal={Markov Process. Related Fields},
volume={13},
date={2007},
number={3},
pages={493--518},
issn={1024-2953},
review={\MR{2357385 (2010c:82064)}},
}
\bib{JR08}{article}{
author={J{\'a}rai, Antal A.},
author={Redig, Frank},
title={Infinite volume limit of the abelian sandpile model in dimensions
$d\geq 3$},
journal={Probab. Theory Related Fields},
volume={141},
date={2008},
number={1-2},
pages={181--212},
issn={0178-8051},
review={\MR{2372969 (2009c:60268)}},
}
\bib{JRS13}{article}{
author={J{\'a}rai, Antal A.},
author={Redig, Frank},
author={Saada, Ellen},
title={Zero dissipation limit in the abelian avalanche model},
status={In preparation},
}
\bib{JW12}{article}{
author={J{\'a}rai, Antal A.},
author={Werning, Nicol{\'a}s},
title={Minimal configurations and sandpile measures},
journal={J. Theoret. Probab.},
issn={0894-9840},
}
\bib{JPR06}{article}{
author={Jeng, Monwhea},
author={Piroux, Geoffroy},
author={Ruelle, Philippe},
title={Height variables in the Abelian sandpile model:
scaling fields and correlations},
journal={J. Stat. Mech. Theory Exp.},
date={2006},
pages={P10015+63},
issn={1742-5468},
}
\bib{Jensen}{book}{
author={Jensen, Henrik Jeldtoft},
title={Self-organized criticality},
series={Cambridge Lecture Notes in Physics},
volume={10},
note={Emergent complex behavior in physical and biological systems},
publisher={Cambridge University Press},
place={Cambridge},
date={1998},
pages={xiv+153},
isbn={0-521-48371-9},
review={\MR{1689042 (2001d:92003)}},
}
\bib{KW11}{article}{
author={Kenyon, Richard},
author={Wilson, David B.},
title={Spanning trees of graphs on surfaces and the intensity
of loop-erased random walk on $Z^2$},
date={2011},
status={Preprint. {\tt http://arxiv.org/abs/1107.3377}},
}
\bib{Kesten80}{article}{
author={Kesten, Harry},
title={The critical probability of bond percolation on the square lattice
equals $\frac{1}{2}$},
journal={Comm. Math. Phys.},
volume={74},
date={1980},
number={1},
pages={41--59},
issn={0010-3616},
review={\MR{575895 (82c:60179)}},
}
\bib{Kesten87}{article}{
author={Kesten, Harry},
title={Scaling relations for $2$D-percolation},
journal={Comm. Math. Phys.},
volume={109},
date={1987},
number={1},
pages={109--156},
issn={0010-3616},
review={\MR{879034 (88k:60174)}},
}
\bib{KN11}{article}{
author={Kozma, Gady},
author={Nachmias, Asaf},
title={Arm exponents in high dimensional percolation},
journal={J. Amer. Math. Soc.},
volume={24},
date={2011},
number={2},
pages={375--409},
issn={0894-0347},
review={\MR{2748397 (2012a:60273)}},
}
\bib{KS04}{article}{
author={Kozma, Gady},
author={Schreiber, Ehud},
title={An asymptotic expansion for the discrete harmonic potential},
journal={Electron. J. Probab.},
volume={9},
date={2004},
pages={no. 1, 1--17 (electronic)},
issn={1083-6489},
review={\MR{2041826 (2005f:60165)}},
}
\bib{LLbook}{book}{
author={Lawler, Gregory F.},
author={Limic, Vlada},
title={Random walk: a modern introduction},
series={Cambridge Studies in Advanced Mathematics},
volume={123},
publisher={Cambridge University Press},
place={Cambridge},
date={2010},
pages={xii+364},
isbn={978-0-521-51918-2},
review={\MR{2677157 (2012a:60132)}},
}
\bib{LSW02}{article}{
author={Lawler, Gregory F.},
author={Schramm, Oded},
author={Werner, Wendelin},
title={One-arm exponent for critical 2D percolation},
journal={Electron. J. Probab.},
volume={7},
date={2002},
pages={no. 2, 13 pp. (electronic)},
issn={1083-6489},
review={\MR{1887622 (2002k:60204)}},
}
\bib{LP09}{article}{
author={Levine, Lionel},
author={Peres, Yuval},
title={Strong spherical asymptotics for rotor-router aggregation and the
divisible sandpile},
journal={Potential Anal.},
volume={30},
date={2009},
number={1},
pages={1--27},
issn={0926-2601},
review={\MR{2465710 (2010d:60112)}},
}
\bib{LP13}{article}{
author={Levine, Lionel},
author={Peres, Yuval},
title={The looping constant of ${\mathbb{Z}}^d$},
journal={Random Structures Algorithms},
issn={1098-2418},
url={http://dx.doi.org/10.1002/rsa.20478},
status={To appear.},
year = {2013},
}
\bib{LP10}{article}{
author={Levine, Lionel},
author={Propp, James},
title={What is $\dots$ a sandpile?},
journal={Notices Amer. Math. Soc.},
volume={57},
date={2010},
number={8},
pages={976--979},
issn={0002-9920},
review={\MR{2667495}},
}
\bib{LMS08}{article}{
author={Lyons, Russell},
author={Morris, Benjamin J.},
author={Schramm, Oded},
title={Ends in uniform spanning forests},
journal={Electron. J. Probab.},
volume={13},
date={2008},
pages={no. 58, 1702--1725},
issn={1083-6489},
review={\MR{2448128 (2010a:60031)}},
}
\bib{LPbook}{book}{
author={Lyons, Russell},
author={Peres, Yuval},
title={Probability on Trees and Networks},
publisher={Cambridge University Press},
status={In preparation. Current version available at \hfil\break
{\tt http://mypage.iu.edu/\string~rdlyons/}},
date={2013},
}
\bib{MRS04}{article}{
author={Maes, Christian},
author={Redig, Frank},
author={Saada, Ellen},
title={The infinite volume limit of dissipative abelian sandpiles},
journal={Comm. Math. Phys.},
volume={244},
date={2004},
number={2},
pages={395--417},
issn={0010-3616},
review={\MR{2031036 (2004k:82070)}},
}
\bib{MRSV00}{article}{
author={Maes, Christian},
author={Redig, Frank},
author={Saada, Ellen},
author={Van Moffaert, A.},
title={On the thermodynamic limit for a one-dimensional sandpile process},
journal={Markov Process. Related Fields},
volume={6},
date={2000},
number={1},
pages={1--21},
issn={1024-2953},
review={\MR{1758981 (2001k:60142)}},
}
\bib{MD91}{article}{
author={Majumdar, S. N.},
author={Dhar, D.},
title={Height correlations in the Abelian sandpile model},
journal={J. Phys. A},
volume={24},
date={1991},
number={7},
pages={L357--L362},
issn={0305-4470},
}
\bib{MD92}{article}{
author={Majumdar, S. N.},
author={Dhar, D.},
title={Equivalence between the Abelian sandpile model and the
$q \to 0$ limit of the Potts model},
journal={Phys. A},
volume={185},
date={1992},
number={1--4},
pages={129--145},
issn={0378-4371},
}
\bib{Manna90}{article}{
author={Manna, S. S.},
title={Large-scale simulation of avalanche cluster distribution in sand pile model},
journal={J. Statist. Phys.},
volume={59},
date={1990},
number={1-2},
pages={509--521},
issn={0022-4715},
}
\bib{MQ05}{article}{
author={Meester, Ronald},
author={Quant, Corrie},
title={Connections between `self-organised' and `classical' criticality},
journal={Markov Process. Related Fields},
volume={11},
date={2005},
number={2},
pages={355--370},
issn={1024-2953},
review={\MR{2150148 (2006d:82054)}},
}
\bib{MRZ01}{article}{
author={Meester, Ronald},
author={Redig, Frank},
author={Znamenski, Dmitri},
title={The abelian sandpile: a mathematical introduction},
journal={Markov Process. Related Fields},
volume={7},
date={2001},
number={4},
pages={509--523},
issn={1024-2953},
review={\MR{1893138 (2003f:60175)}},
}
\bib{ML97}{article}{
author={Merino L{\'o}pez, Criel},
title={Chip firing and the Tutte polynomial},
journal={Ann. Comb.},
volume={1},
date={1997},
number={3},
pages={253--259},
issn={0218-0006},
review={\MR{1630779 (99k:90232)}},
}
\bib{PS13}{article}{
author={Pegden, Wesley},
author={Smart, Charles K.},
title={Convergence of the Abelian sandpile},
journal={Duke Math. J.},
volume={162},
date={2013},
number={4},
pages={627--642},
issn={0012-7094},
review={\MR{3039676}},
}
\bib{Pem91}{article}{
author={Pemantle, Robin},
title={Choosing a spanning tree for the integer lattice uniformly},
journal={Ann. Probab.},
volume={19},
date={1991},
number={4},
pages={1559--1574},
issn={0091-1798},
review={\MR{1127715 (92g:60014)}},
}
\bib{PR05}{article}{
author={Piroux, Geoffroy},
author={Ruelle, Philippe},
title={Logarithmic scaling for height variables in the Abelian sandpile model},
journal={Phys. Lett. B},
year={2005},
volume={607},
pages={188--196},
issn={0370-2693},
}
\bib{PGPR10}{article}{
author={Poghosyan, Vahagn S.},
author={Grigorev, S. Y.},
author={Priezzhev, Vyatcheslav B.},
author={Ruelle, Philippe},
title={Logarithmic two-point correlators in the abelian sandpile model},
journal={J. Stat. Mech. Theory Exp.},
date={2010},
number={7},
pages={P07025, 27},
issn={1742-5468},
review={\MR{2720344 (2012a:82026)}},
}
\bib{PP11}{article}{
author={Poghosyan, Vahagn S.},
author={Priezzhev, Vyatcheslav B.},
title={The problem of predecessors on spanning trees},
journal={Acta Polytechnica},
year={2011},
volume={51},
number={2},
issn={1210-2709},
}
\bib{PPR11}{article}{
author={Poghosyan, Vahagn S.},
author={Priezzhev, Vyatcheslav B.},
author={Ruelle, Philippe},
title={Return probability for the loop-erased random walk and
mean height in the Abelian sandpile model: a proof},
journal={J. Stat. Mech. Theory Exp.},
date={2011},
pages={P10004+12},
issn={1742-5468},
}
\bib{Pr94}{article}{
author={Priezzhev, Vyatcheslav B.},
title={Structure of two-dimensional sandpile. I. Height probabilities},
journal={J. Statist. Phys.},
volume={74},
date={1994},
number={5--6},
pages={955--979},
issn={0022-4715},
}
\bib{Pr00}{article}{
author={Priezzhev, Vyatcheslav B.},
title={The upper critical dimension of the abelian sandpile model},
journal={J. Statist. Phys.},
volume={98},
date={2000},
number={3--4},
pages={667--684},
issn={0022-4715},
review={\MR{1749227 (2000m:82022)}},
}
\bib{PDDK96}{article}{
author={Priezzhev, Vyatcheslav B.},
author={Dhar, Deepak},
author={Dhar, Abhishek},
author={Krishnamurthy, Supriya},
title={Eulerian walkers as a model of self-organized criticality},
journal={Phys. Rev. Lett.},
volume={77},
date={1996},
number={25},
pages={5079--5082},
issn={0031-9007},
}
\bib{RT09}{article}{
author={R{\'a}th, Bal{\'a}zs},
author={T{\'o}th, B{\'a}lint},
title={Erd\H os-R\'enyi random graphs $+$ forest fires $=$ self-organized
criticality},
journal={Electron. J. Probab.},
volume={14},
date={2009},
pages={no. 45, 1290--1327},
issn={1083-6489},
review={\MR{2511285 (2010h:60269)}},
}
\bib{Redig}{article}{
author={Redig, Frank},
title={Mathematical aspects of the abelian sandpile model},
conference={
title={Mathematical statistical physics},
},
book={
publisher={Elsevier B. V., Amsterdam},
},
date={2006},
pages={657--729},
review={\MR{2581895 (2011g:60182)}},
}
\bib{Smirnov01}{article}{
author={Smirnov, Stanislav},
title={Critical percolation in the plane: conformal invariance, Cardy's
formula, scaling limits},
language={English, with English and French summaries},
journal={C. R. Acad. Sci. Paris S\'er. I Math.},
volume={333},
date={2001},
number={3},
pages={239--244},
issn={0764-4442},
review={\MR{1851632 (2002f:60193)}},
}
\bib{SW01}{article}{
author={Smirnov, Stanislav},
author={Werner, Wendelin},
title={Critical exponents for two-dimensional percolation},
journal={Math. Res. Lett.},
volume={8},
date={2001},
number={5-6},
pages={729--744},
issn={1073-2780},
review={\MR{1879816 (2003i:60173)}},
}
\bib{Spbook}{book}{
author={Spitzer, Frank},
title={Principles of random walk},
edition={2},
note={Graduate Texts in Mathematics, Vol. 34},
publisher={Springer-Verlag},
place={New York},
date={1976},
pages={xiii+408},
review={\MR{0388547 (52 \#9383)}},
}
\bib{W96}{article}{
author={Wilson, David Bruce},
title={Generating random spanning trees more quickly than the cover time},
conference={
title={Proceedings of the Twenty-eighth Annual ACM Symposium on the Theory of Computing},
address={Philadelphia, PA},
date={1996},
},
book={
publisher={ACM},
place={New York},
},
date={1996},
pages={296--303},
review={\MR{1427525}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}\label{sect:intro}
Nematic networks change their shape when their orientational order is induced to change thermally or, if dye molecules are present, optically. Mechanical strains range from several hundred \% for elastomers down to 4--5\% for glassy networks. We concentrate on the latter since they are strong (moduli $\sim 10^9-10^{10}$ Pa) and their directors are immobile, eliminating further causes of mechanical response.
The experiments of Ikeda and Yu \cite{Ikeda:03} show two uniquely interesting new phenomena:\hfil\break
(i) Large, optically-driven response in the direction of the polarization of light. Samples are polydomain, so the only directions defined are $\vec{k}_0$, the incident light's wave vector, and $\vec{E}_0$, the incident light's electric (and polarization) vector. These experiments demonstrated at once both the ease of delivering the stimulus and the control of mechanics by optics.
These aspects have been explored by many authors \cite{Finkphoto,Corbett:06,Mahadevan04,Corbett:08,Cviklinski:03,White:08}.\hfil\break
(ii) Curling, in the direction of $\vec{E}_0$, occurred with large amplitude. Remarkably, curling continued so that much of the photo-glass sheet eclipsed itself; see Fig.~\ref{fig:still}. In shadow, one might expect curling to cease, or indeed reverse, since the initially lower side of the sheet is now uppermost and being irradiated.
\begin{figure}[!b]
\includegraphics[width=0.4\columnwidth]{still_unbend.pdf}
\caption{A nematic sheet of Ikeda and Yu bending in response to illumination from above and self-eclipsing as the deformation develops. The right hand half is stuck to the support.}\label{fig:still}
\end{figure}
No explanation of this seemingly paradoxical phenomenon was advanced by \cite{Ikeda:03} or by subsequent authors. Our theory here shows that the phenomena uncovered by these seminal experiments reveal much about the non-linear and dynamical processes behind nematic photo-solid absorption of light and mechanical response. We suggest further experiments.
\section{Absorption, photomechanics and bend actuation}\label{sect:photomechanics}
Dye molecules are linear in their ground ({\it trans}, t) states and bent when excited ({\it cis}, c) by photon absorption. The number fraction of {\it cis}, $n\s{c}=1-n\s{t}$, increases under illumination of intensity $I(x)$ and decreases by thermal recovery to the {\it trans} state at a rate $1/\tau$, where $\tau$ is the c-lifetime and $I(x)$ is the intensity (Poynting flux) at depth $x$ into the sheet/cantilever. Thus
\begin{equation}
\tau\frac{\partial n\s{c}}{\partial t}\equiv\dot n\s{c}=\frac{I}{I_m}n\s{t}-n\s{c}=-(\alpha I/I_0+1)n\s{c}+\alpha I/I_0,\label{eq:dynamics}
\end{equation}
where $I_m$ is a material parameter defined by $1/I_m =\Gamma\tau$, and $\alpha=I_0/I_m \equiv \Gamma I_0/(1/\tau)$ \cite{Corbett_PRL:07}. The constant $\Gamma$ subsumes an absorption cross section per chromophore and a quantum efficiency, and the dot denotes the derivative with respect to the reduced time $t/\tau$. We neglect {\it cis} absorption, background absorption and scattering in order to establish the qualitative aspects of the Ikeda and Yu phenomenon as simply as possible; since the resulting model nevertheless captures the effect, these additional sources of absorption are seemingly not central to it.
The parameter $\alpha$ measures the ratio of the forward rate $\Gamma I_0$ (using the surface light intensity) to the back rate $1/\tau$. Large $\alpha$ implies strong perturbation from $n\s{c}=0$. Small $\alpha$ is the Beer limit, where $n\s{c}\simeq0$ and absorption is by a dye population little perturbed from the dark state. We show that the Ikeda and Yu experiments reveal that non-linearity ($\alpha\gtrsim1$) is vital. The photo-stationary state, $\dot n\s{c}=0$, gives
\begin{equation}
n\s{c}=\frac{\alpha\mathcal{I}}{1+\alpha\mathcal{I}},\label{eq:stationary}
\end{equation}
where $\mathcal{I}(x)=I(x)/I_0$ is an intensity at depth $x$ reduced by the intensity of light just having entered (at $x=0$).
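As a numerical aside (not part of the original analysis; the routine names, initial condition and Euler step are our own choices), eq.~(\ref{eq:dynamics}) at fixed reduced intensity $\mathcal{I}=I/I_0$ can be integrated in reduced time to confirm relaxation to the photo-stationary value (\ref{eq:stationary}):

```python
def nc_relax(alpha, I_red, t_max=30.0, dt=1e-3):
    """Euler-integrate eq. (1) in reduced time t/tau at fixed reduced
    intensity I_red = I/I0:  d(nc)/d(t/tau) = alpha*I_red - (alpha*I_red + 1)*nc,
    starting from the all-trans dark state nc(0) = 0."""
    nc = 0.0
    for _ in range(int(t_max / dt)):
        nc += dt * (alpha * I_red - (alpha * I_red + 1.0) * nc)
    return nc

def nc_stationary(alpha, I_red):
    """Photo-stationary cis fraction, eq. (2): alpha*I / (1 + alpha*I)."""
    return alpha * I_red / (1.0 + alpha * I_red)
```

Note that the relaxation rate in reduced units is $\alpha\mathcal{I}+1$, so strong illumination both raises the stationary {\it cis} fraction and speeds the approach to it.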
Intensity decreases with depth through the photon absorption in eq.~(\ref{eq:dynamics}) above, which drives the conversion $n\s{t}\rightarrow n\s{c}$:
\begin{equation}
\frac{\partial I}{\partial x}=-\frac{n\s{t}}{d}I,\label{eq:light absorption}
\end{equation}
where the Beer length $d$ subsumes cross sections, number densities {\it etc.}, and absorption depends on the number fraction $n\s{t}$ of absorbers. When $n\s{t} =1$ ($\alpha \ll 1$), the Beer limit $I(x)=I_0\e{-x/d}$ is obtained. We require finite conversion to get a mechanical response at all and to obtain the observed dynamics. Hence (\ref{eq:light absorption}) must be solved in the non-linear limit of $n\s{c}(x)\neq0$, that is, with $n\s{t}$ a function of $I$ itself \cite{Corbett_PRL:07}, either statically, eq.~(\ref{eq:stationary}), or dynamically, eq.~(\ref{eq:dynamics}).
Creation of {\it cis} isomers lowers order and gives a photo-contraction along $\vec E$, that is, a strain $\epsilon\s{p}=-Cn\s{c}$ in its simplest form, with $C$ a dimensionless scaling. For $\epsilon\s{p}\sim -0.04$ at $n\s{c}\sim 0.8$ (say), one has $C\sim 1/20$. An $\epsilon\s{p}(x)$ varying with depth gives curvature $1/R$ as the solid aims to reduce the elastic cost of deviating from its new, natural local shape. The effective strain is
\begin{equation}
\epsilon(x)=\frac{x}{R}+K-\epsilon\s{p},
\end{equation}
where $K$ is a uniform, depth-independent strain fixed below by force balance.
The longitudinal stress at a depth $x$ in the sheet is a modulus times this strain. Integrating the stress and the moment of the stress through the thickness, $w$, of the sheet to get the force and the torque, setting both these to zero, and cancelling the modulus yields the two equations \cite{Corbett_PRL:07}
\begin{equation}
0=\int_{0}^{w}\left[\frac{x}{R}+K+ Cn\s{c}(x)\right]\textrm{d} x=\int_{0}^{w}\left[\frac{x}{R}+K+ C n\s{c}(x)\right]x\,\textrm{d} x.
\end{equation}
Eliminating $K$ between these two equations, one obtains for $w/R$
\begin{equation}
\frac{w}{R}=\frac{12C}{w^2}\int_{0}^{w}\left(\frac{w}{2}-x\right)n\s{c}(x)\textrm{d} x.\label{eq:curvature}
\end{equation}
These equations must hold generally, even in a dynamically evolving system, in a limit where inertia can be ignored, for instance in the creeping motion seen by \cite{Ikeda:03}.
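Eq.~(\ref{eq:curvature}) can be read off numerically (an illustration of ours; the trial {\it cis} profiles are hypothetical): a depth-uniform {\it cis} fraction produces no bend, whereas conversion confined to a skin near the illuminated face $x=0$ does.

```python
def reduced_curvature(nc_profile, C=0.05, w=1.0, n=10000):
    """Evaluate eq. (5), w/R = (12*C/w**2) * integral_0^w (w/2 - x) nc(x) dx,
    by the trapezoidal rule for a given cis-fraction profile nc(x)."""
    h = w / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * (w / 2.0 - x) * nc_profile(x)
    return 12.0 * C / w**2 * h * total

uniform_bend = reduced_curvature(lambda x: 0.8)                  # no gradient, no bend
skin_bend = reduced_curvature(lambda x: 0.8 if x < 0.2 else 0.0)  # converted skin bends
```

The antisymmetric weight $(w/2-x)$ makes explicit that only the depth-variation of $n\s{c}$, not its mean, drives curvature.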
\section{Photo-stationary dye populations and mechanical response}\label{sect:stationary}
Using the stationary population (\ref{eq:stationary}) in the form $n\s{t}=1-n\s{c}=1/(1+\alpha\mathcal{I})$ in (\ref{eq:light absorption}) for $\mathcal{I}$ gives $\partial\mathcal{I}/\partial(x/d)=-\mathcal{I}/(1+\alpha\mathcal{I})$.
Integration gives \cite{Statman:03,Corbett_PRL:07}
\begin{equation}
\ln(\mathcal I(x))+\alpha({\mathcal I(x)}-1)=-x/d.\label{eq:productlog}
\end{equation}
The solution for ${\mathcal I(x)}$ is in terms of the Lambert-W function (or ProductLog function), $W(c)$, which satisfies $c = W(c)\e{W(c)}$. Thus ${\mathcal I(x)}= \frac{1}{\alpha} W(\alpha \e{\alpha -x/d})$. For large $\alpha$ -- intense light giving a high forward ${\it t}\rightarrow{\it c}$ rate compared with the decay rate $1/\tau$ -- the penetration is linear and very deep, $\mathcal I\sim1-x/(\alpha d)$ rather than exponential, ${\mathcal I}=\e{-x/d}$, which accounts for substantial mechanical response \cite{Corbett_PRL:07,Huo:10,Huo:11} when one would otherwise expect little response due to only a thin skin $d\ll w$ being Beer-penetrated. Dynamics will turn out to be strong evidence for non-linear effects.
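The implicit relation (\ref{eq:productlog}) is easily inverted numerically without special functions (a sketch of ours; the bisection routine and its bracket are our choices), and reproduces both the Beer limit and the deep, nearly linear penetration at large $\alpha$:

```python
import math

def reduced_I(x_over_d, alpha):
    """Solve eq. (7), ln(I) + alpha*(I - 1) = -x/d, for the reduced intensity
    I in (0, 1] by bisection; equivalent to I = W(alpha*e^(alpha - x/d))/alpha."""
    f = lambda I: math.log(I) + alpha * (I - 1.0) + x_over_d
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)
```

For $\alpha\to0$ this returns $\e{-x/d}$; for $\alpha\gg1$ it approaches the linear profile $1-x/(\alpha d)$.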
Taking the solution (\ref{eq:productlog}) for $\mathcal I$ into $n\s{c}=\alpha\mathcal{I}/(1+\alpha\mathcal{I})$ and putting $n\s{c}(\mathcal I(x))$ into eq.~(\ref{eq:curvature}), then integrating using a variable change $\textrm{d} (x/d) =-(1/\mathcal I+\alpha)\textrm{d} \mathcal I$, and finally eliminating for $1/R$ gives \cite{Mahadevan04,Corbett_PRL:07}:
\begin{equation}
\frac{(w/D)}{R}=\alpha\!\left[\frac{w}{d}\mathcal{I}_w-(1-\mathcal{I}_w)(1-\frac{w}{2d})-\frac{\alpha}{2}(1-\mathcal{I}_w^2)\right]\label{eq:curvature_Iw}
\end{equation}
where $\mathcal{I}_w\equiv\mathcal{I}(w)$ is the reduced intensity (defined after eq.~(\ref{eq:stationary})) at the lower surface, obtained by solving (\ref{eq:productlog}) at $x=w$, and the dimensionless combination $D=12C\!\!\left(\frac{d}{w}\right)^{\! 2}$ sets the scale of the reduced curvature $w/R$. (For $C=1/20$ and $w/d = 3$ of our illustration, $D=1/15$.)
The curvature $1/R$ increases as incident intensity, measured by $\alpha$, increases, but eventually must decrease again for very high $\alpha$ -- penetration is deep and $\mathcal{I}(w)\sim1-w/(\alpha d) \rightarrow 1$. The {\it cis} fraction $n\s{c}$ saturates to a high value (dye is depleted) and the consequent small variation of photo-strain $\epsilon\s{p}(x)$ with depth cannot induce bending.
\begin{figure}[h]
\includegraphics[width=0.8\columnwidth]{curvature-light.pdf}
\caption{Curvature as a function of incident light intensity $\alpha$ for $w/d=3$, with the optimal intensity, $\alpha\s{m}$ at $\diamondsuit$, for this thickness indicated. Two intensities $\alpha = 1, 5$ ($\times,\circ$) are used illustratively below.}\label{fig:curvature-light}
\end{figure}
Fig.~\ref{fig:curvature-light} shows the non-monotonic dependence of curvature on incident light intensity $\alpha$: When incident light is very weak, the {\it cis} fraction is small, thereby producing a small curvature. When incident light is very strong and therefore deeply penetrating, the {\it cis} fraction is close to 1 and nearly uniform through the thickness, and thus again there is hardly any curvature. Curvature is maximized at an optimal intermediate reduced light intensity, $\alpha\s{m}$ say.
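The non-monotonic curve of Fig.~\ref{fig:curvature-light} can be regenerated directly from (\ref{eq:productlog}) and (\ref{eq:curvature_Iw}) (our own numerical sketch for $w/d=3$; the helper names and sample intensities are ours):

```python
import math

def I_at_depth(x_over_d, alpha):
    # invert eq. (7), ln(I) + alpha*(I - 1) = -x/d, by bisection on (0, 1]
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.log(mid) + alpha * (mid - 1.0) + x_over_d > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def reduced_curvature(alpha, w_over_d=3.0):
    """Right-hand side of eq. (8): the reduced curvature (w/D)/R versus the
    incident intensity alpha, with I_w solved from eq. (7) at x = w."""
    Iw = I_at_depth(w_over_d, alpha)
    return alpha * (w_over_d * Iw
                    - (1.0 - Iw) * (1.0 - 0.5 * w_over_d)
                    - 0.5 * alpha * (1.0 - Iw * Iw))

weak, near_opt, strong = (reduced_curvature(a) for a in (0.1, 1.2, 10.0))
```

In this sketch the interior maximum sits near $\alpha\approx1.2$ for $w/d=3$, between the two illustrative intensities $\alpha=1,5$ of Fig.~\ref{fig:curvature-light}.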
\subsection{Photo-stationary shapes of sheets}\label{subsect:stationary}
We are concerned with the interplay between bend as a function of incident intensity and the way bend, by making the sheet oblique to the incident light, itself influences the effective intensity and penetration of the light responsible for the bend. This feedback determines the photo-stationary state and also the complex dynamics that we later examine. To establish and illustrate such qualitative effects, we adopt the simplest possible variation of intensity penetrating the upper surface of the sheet, namely $I_0\cos\theta$, which simply expresses the dilution of the incident flux $I_0$ to an effective flux by obliquity. This assumption will be deficient in detail since (a) there are complicated Fresnel coefficients governing the wave amplitude refracted into the medium, and (b) the wave entering is not completely refracted to be along the normal to the sheet (though it is nearly so in the case of strong absorption where Snell's law takes an extreme form). In consequence, the electric vector will not be along the local beam direction, see Fig.~\ref{fig:geometry}(b), and its effect on inducing bend will have another angular factor. Consequently, though our analysis reproduces the qualitative features of self-eclipsing and dynamics dependent on \textit{trans}-depletion front penetration, the precise shapes of sheets to be compared with future experiments may depend on these refinements of angular dependence.
\begin{figure}[!hbt]
\includegraphics[width=\columnwidth]{set_up.pdf}
\caption{A cantilever illuminated from above by light entering the solid with intensity $I_0$. (a) Initially flat, (b) curled up so that the tangent $\vec{t}(s)$ at $s$ makes an angle $\theta(s)$ with $\vec{y}$, and light is incident at angle $\theta(s)$ to the cantilever's normal.}\label{fig:geometry}
\end{figure}
At an arc distance $s$ along the sheet, the tangent and normal are rotated through an angle $\theta(s)$; see Fig.~\ref{fig:geometry}(b). It is the local $x$ direction (the thickness direction) that enters the attenuation equation (\ref{eq:productlog}), and the effective intensity is $\alpha_0 \cos\theta$, where $\alpha_0= I_0/I_m$ is the effective intensity were the beam to strike normally. Hence $\theta(s)$ enters the solution ${\mathcal I}$ to be injected into eq.~(\ref{eq:curvature_Iw}) for the local curvature, itself having explicit $\theta$-dependence through the factor $\alpha_0\cos\theta$.
Eq.~(\ref{eq:curvature_Iw}) can be written in a superficially simpler form as
\begin{equation}
\textrm{d} \theta/\textrm{d} s=\frac{1}{R}=\frac{D}{w}a \cos\theta(s),\label{eq:curvature cos}
\end{equation}
where there is also $\theta$-dependence in $a$,
\begin{equation}
a= \alpha_0\left[\frac{w}{d}\mathcal{I}_w-(1-\mathcal{I}_w)\left(1-\frac{w}{2d}\right)-\frac{\alpha_0\cos\theta}{2}(1-\mathcal{I}_w^{2})\right],\label{eq:a}
\end{equation}
both explicitly and buried in $\mathcal{I}_w$, which is now evaluated at the effective intensity $\alpha_0\cos\theta$. This differential equation can be integrated to give $\theta(s)$, and then a second integration gives the photo-stationary shape $(x(s),y(s))$ of the sheet bending in the $x$--$y$ plane. See Fig.~\ref{fig:stationary shape} for shapes corresponding to two effective intensities $\alpha_0$ (that would be falling on flat sheets) that are respectively less than and greater than $\alpha\s{m}$; the two cases are qualitatively different.
\begin{figure}[h]
\includegraphics[width=0.8\columnwidth]{stationary-shape.pdf}
\caption{Photo-stationary shapes for $\alpha_0=1$ and $\alpha_0=5$. The positions of maximum curvature correspond to angles, and therefore arc positions, marked on Fig.~\ref{fig:curvature-light} ($\times,\diamondsuit$). The reduction of curvature by $w/D$ is used here for lengths too. Sheets of reduced length $L/(w/D) = 14$ are shown. For a given $\alpha_0$ and $w/d$, these are master curves; sheets of smaller $L$ will terminate at intermediate places along the curve.}\label{fig:stationary shape}
\end{figure}
Consider incident light with $\alpha_0=1 < \alpha\s{m}$, that is, smaller than the $\alpha$ for optimal curvature at this $w/d$. The effective intensity incident locally on the sheet, $\alpha=\alpha_0\cos\theta$, decreases with $\theta$. Therefore for $\alpha_0=1$ the maximum curvature obtains at $s=0$, as shown in Fig.~\ref{fig:stationary shape}, where $\alpha(s)$ is greatest, namely $\alpha_0$. However, incident light with $\alpha_0=5$ is more intense than optimal: As $\theta$ increases, $\alpha$ decreases down to $\alpha\s{m}$ (if the total length $L$ is large enough) and the curvature increases to a maximum. If $\theta$ continues to increase, then $\alpha(s)$, and thus the curvature, decreases to 0. Thus the maximum curvature for $\alpha_0=5$ obtains at an intermediate $s$ in the sheet; see Fig.~\ref{fig:stationary shape}.
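The photo-stationary shapes of Fig.~\ref{fig:stationary shape} follow from a forward integration of (\ref{eq:curvature cos})--(\ref{eq:a}) with the obliquity feedback $\alpha=\alpha_0\cos\theta$ entering both $a$ and $\mathcal{I}_w$. The sketch below is ours (step count and parameter values are our choices, with $w/d=3$, $D=1/15$ as in the text and reduced length $L/(w/D)=14$ as in Fig.~\ref{fig:stationary shape}); the bisection helper repeats eq.~(\ref{eq:productlog}) so the block is self-contained.

```python
import math

def I_at_depth(x_over_d, alpha):
    # invert eq. (7), ln(I) + alpha*(I - 1) = -x/d, by bisection on (0, 1]
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.log(mid) + alpha * (mid - 1.0) + x_over_d > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sheet_shape(alpha0, w_over_d=3.0, D=1.0 / 15.0, w=1.0, L=210.0, n=4000):
    """Euler-integrate d(theta)/ds = (D/w)*a*cos(theta), eqs. (9)-(10), with
    effective intensity alpha0*cos(theta); returns theta(L) and the tip (x, y),
    using (dx/ds, dy/ds) = (cos(theta), sin(theta)) as in the parametric forms."""
    ds = L / n
    theta = x = y = 0.0
    for _ in range(n):
        a_eff = alpha0 * math.cos(theta)        # obliquity-diluted intensity
        Iw = I_at_depth(w_over_d, a_eff)
        a = alpha0 * (w_over_d * Iw
                      - (1.0 - Iw) * (1.0 - 0.5 * w_over_d)
                      - 0.5 * a_eff * (1.0 - Iw * Iw))
        theta += ds * (D / w) * a * math.cos(theta)
        x += ds * math.cos(theta)
        y += ds * math.sin(theta)
    return theta, x, y

theta_end, x_end, y_end = sheet_shape(5.0)
```

Because the drive vanishes as $\cos\theta\to0$, the integrated $\theta(s)$ approaches $\pi/2$ from below but never crosses it: the sheet asymptotically turns its face parallel to the beam.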
To gain simple insight into the arc-position dependence of the curvature, one can look for solutions of eq.~(\ref{eq:curvature cos}) that retain only the explicit, leading $\theta$-dependence via $\cos\theta$, ignoring the further $\theta$-dependence in $a$, which we now take to be constant.
Reducing arc lengths by $w/(aD)$, that is $s=u\,w/(aD)$, then $\textrm{d} \theta/\textrm{d} u=\cos\theta(u)$ integrates simply to $\sin\theta(u)=\tanh(u)$. Recognising that $(x(u),y(u))=w/(aD)\int_{0}^{u}\textrm{d} u'(\cos\theta(u'),\sin\theta(u'))$ and using $\cos\theta(u)=\textrm{d} \theta/\textrm{d} u$ yields the parametric forms
\begin{align}
&x(u)=\frac{w}{aD}\theta(u)=\frac{w}{aD}\sin^{-1}(\tanh(u))\notag\\
&y(u)=\frac{w}{aD}\ln(\cosh(u))
\end{align}
and eliminating $u$ gives
\begin{equation}
y(x)=-\frac{w}{aD}\ln\left(\cos[x/(w/aD)]\right)
\end{equation}
which is qualitatively of the form in Fig.~\ref{fig:stationary shape}.
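This closed form is easy to check numerically. The following is a minimal sketch (pure Python, illustrative step sizes only), an aside rather than part of the derivation: it integrates $\textrm{d}\theta/\textrm{d} u=\cos\theta$ directly and compares against $\theta(u)=\sin^{-1}(\tanh u)$.

```python
import math

# Integrate d(theta)/du = cos(theta), theta(0) = 0, by classical RK4,
# and compare with the closed form theta(u) = asin(tanh(u)).
def integrate_theta(u_max, n):
    h = u_max / n
    theta = 0.0
    for _ in range(n):
        k1 = math.cos(theta)
        k2 = math.cos(theta + 0.5 * h * k1)
        k3 = math.cos(theta + 0.5 * h * k2)
        k4 = math.cos(theta + h * k3)
        theta += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return theta

theta_num = integrate_theta(2.0, 1000)
theta_exact = math.asin(math.tanh(2.0))
assert abs(theta_num - theta_exact) < 1e-9
```

The reduced shape $(x(u),y(u))$ then follows by scaling $u$ back to arc length with the factor $w/(aD)$.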
\subsection{The marginal effect of gravity}\label{sect:gravity}
To rule out any role of gravity in determining static shapes, and later the dynamics of a glass sheet or cantilever, we calculate an extreme bound from Fig.~\ref{fig:geometry}(a). Considering such a sheet of width $W$, we can calculate the curvature close to the clamped end induced by gravity, the torque being estimated as if the sheet were flat ({\it i.e.} estimating the most critical location for curvature and over-estimating the torque). Equating elastic and gravitational torques,
\begin{equation}
\frac{EWw^3}{12R}=\frac{\rho wWgL^2}{2}
\Rightarrow\frac{w}{R}=\frac{\rho g}{E}\frac{6L^2}{w}\equiv\frac{6L^2}{lw}
\end{equation}
where $l=E/\rho g$ is a characteristic length emerging from matching elasticity ($E$ -- the Young's modulus) with gravity ($g,\rho$ -- acceleration of gravity and density of the photo-glass). For photo-glasses $l\sim10^5$m.
If gravity were to change the beam's curvature by that of a quarter circle of radius $R$, thus $L=\pi R/2$, then $1/R=\pi/(2L)$ in the above yields a length $L\s{g}=(\pi lw^2/12)^{1/3}$ beyond which gravitational effects compete with elastic ones. Ikeda and Yu \cite{Ikeda:03} have $w=7\mu$m and $L\sim3$mm (half their total sample length, since they clamped in the middle and not at one end), whence $L\s{g}\sim10^{-2}$m is comfortably larger than their $L$. Mol~{\it et al.} \cite{Mol:05} have $w\sim10\mu$m -- 40$\mu$m, and their $L=10^{-2}$m is comfortably below the (deliberately conservative) estimates of $L\s{g}=2$--$4\times10^{-2}$m. We henceforth ignore gravitational effects.
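These orders of magnitude are straightforward to reproduce. The sketch below plugs in the approximate values quoted in the text for photo-glasses (assumed illustrative numbers, not precise material data):

```python
import math

l = 1e5    # m; characteristic length l = E/(rho*g) for photo-glasses
w = 7e-6   # m; sheet thickness (Ikeda and Yu)
L = 3e-3   # m; half the sample length (clamped at the middle)

# Length above which gravity competes with elasticity
L_g = (math.pi * l * w**2 / 12.0) ** (1.0 / 3.0)

assert L_g > L              # gravity is negligible for this geometry
assert 1e-3 < L_g < 1e-1    # L_g ~ 10^-2 m, as quoted
```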
\section{Dynamical photomechanical response and eclipsing}\label{sect:dynamics}
The varying {\it cis}-fraction of dye with depth and time, $n\s{c}(x,t)$, arises from penetration of a {\it trans} depletion front. Sometimes this front is loosely called a bleaching wave or front, since the converted dye is not an effective absorber of the original colour of light when in the {\it cis} state. However this bleaching is reversible (not chemical) and is otherwise referred to as saturated absorption \cite{Stryland:00}. The resulting curvature (\ref{eq:curvature}) determined by $n\s{c}$ is thus a function of time, non-monotonic since $1/R$ depends on the spatial variation of $n\s{c}(x)$. Thus we have time-dependent over-bend, further complicated by (a) the fact that the flux driving the depletion front varies with $\theta(s)$, which results from the accumulation of curvature $\partial\theta/\partial s$ from $s'=0$ to $s$, and (b) the development of angles $\theta(s,t)>\pi/2$ giving eclipsing, the incident light being blocked from falling on the sheet at arc positions before that $s$ where $\theta=\pi/2$. Such sections are in the dark, their {\it cis} fraction $n\s{c}(x)$ recovers, and they lose their curvature. The sections of sheet that are doing the eclipsing necessarily have their under sides now exposed to the light, and their curvature is reduced and eventually reversed. A reversal can lead to double eclipsing. We now explore this complex dynamics.
The {\it cis} fraction at any time can be taken from (\ref{eq:light absorption}), solving for $n\s{t}$ and replacing it by $1-n\s{c}$. Thus $n\s{c}=1+(d/\mathcal{I})\,\partial\mathcal I/\partial x=1-d\,\partial A/\partial x$ where the absorption $A=-\ln(\mathcal I)$. When the above $n\s{c}$ is injected into eq.~(\ref{eq:curvature}), the $1$ term in $n\s{c}$ gives a vanishing integral. The gradient term either integrates trivially against $w/2$, or integrates by parts against $-x$ to give overall
\begin{equation}
\frac{w}{D}\frac{\partial\theta}{\partial s}\equiv\frac{w/D}{R}={\textstyle \frac{1}{2}} \frac{w}{d}A(w,t)-\int_{0}^{w}A(x,t)\frac{\textrm{d} x}{d}.\label{eq:curvature-A}
\end{equation}
We have used $A(0,t)=0$ since $\mathcal I(0,t)=1$.
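A useful consistency check on eq.~(\ref{eq:curvature-A}): at $t=0$, before any depletion, Beer's law gives $A(x,0)=x/d$ and the two terms cancel exactly, so the sheet starts flat. A minimal numerical sketch (midpoint rule, illustrative $w/d=3$):

```python
# Curvature (w/D)/R = (1/2)(w/d) A(w) - int_0^w A(x) dx/d,
# evaluated with the t = 0 Beer-law profile A(x) = x/d.
w_over_d = 3.0
n = 10000
dx = w_over_d / n
# midpoint-rule integral of A dx/d in the reduced variable x/d
integral = sum((i + 0.5) * dx * dx for i in range(n))
curvature = 0.5 * w_over_d * w_over_d - integral
assert abs(curvature) < 1e-9   # initially flat sheet
```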
The coupled, non-linear pair of PDEs (\ref{eq:dynamics}) and (\ref{eq:light absorption}) for $\dot n\s{c}(\equiv-\dot n\s{t})$ and $\mathcal I'$ can be reduced to a single temporal quadrature for $A$ \cite{Corbett:08b} at each $s$:
\begin{equation}
\dot A=x/d-A+\alpha_0\cos\theta(s,t)(\e{-A}-1),\label{eq:A}
\end{equation}
with $A(0,t)=0$ and $A(x,0)=x/d$. The latter is Beer's law of exponential decay $\mathcal I=\e{-x/d}$ of light in an as-yet undepleted dye population. This second condition needs careful re-examination after eclipsing, when sheets start to be irradiated from the back face.
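At a fixed depth $x$ and with $\theta$ held constant, eq.~(\ref{eq:A}) is a one-variable ODE that relaxes to the photo-stationary absorption. A minimal forward-Euler sketch, with illustrative values assumed for this aside ($w/d=3$, mid-depth, effective intensity $\alpha_0\cos\theta=5$):

```python
import math

x_over_d, alpha = 1.5, 5.0   # illustrative: mid-depth of a w/d = 3 sheet
A = x_over_d                  # Beer-law initial condition A(x, 0) = x/d
dt = 1e-3                     # time step in units of tau
for _ in range(20000):        # integrate to t/tau = 20
    A += dt * (x_over_d - A + alpha * (math.exp(-A) - 1.0))

# at the photo-stationary state the right-hand side vanishes
residual = x_over_d - A + alpha * (math.exp(-A) - 1.0)
assert abs(residual) < 1e-9
```

The dye depletion lowers $A$ well below its Beer-law value, which is the nonlinearity driving deep light penetration.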
We solve (\ref{eq:A}) (see \cite{Corbett:08b}) for depletion front solutions and inject them into (\ref{eq:curvature-A}) for $(w/D)/R(s,t)$. The form of the dynamics, especially eclipsing, depends critically on the sheet length $L$ and the reduced intensity $\alpha$. Longer $L$ means that higher angles $\theta(s,t)=\int_{0}^{s}\textrm{d} s'\partial\theta/\partial s'$ can be accumulated and $\theta\rightarrow\pi/2$ is more achievable. When the tip of the sheet approaches $\pi/2$, it can be convected over to $\theta>\pi/2$ by continuing light penetration at $s<L$ since curvature increases with a further increasing gradient of $n\s{c}(x)$ deeper through the sheet thickness. This non-linearity in response leads to eclipsing before later recovery.
\begin{figure}[h]
\includegraphics[width=0.8\columnwidth]{dynamics_stack.pdf}
\caption{Bend curve sequences in $t/\tau$ of 0.01, 0.1, 0.3, 4, $\infty$ for (a) $\alpha_0 = 5$ and (b) $\alpha_0=1$ (where the $t/\tau = 4$ curve sits on top of that for $t/\tau = \infty$). Reduced sheet length is again $L/(w/D) = 14$, and $w/d = 3$.}\label{fig:dynamics}
\end{figure}
Fig.~\ref{fig:dynamics}(a) shows the dynamics for $\alpha_0=5$. The corresponding photo-stationary state in Fig.~\ref{fig:stationary shape} is the dashed line in Fig.~\ref{fig:dynamics}(a). The sheet starts bending from an initially flat shape. One already sees overshoot at $t/\tau=0.1$. The overshot section is now illuminated on what was the back face, while at least part of the sheet with $\theta(s) < \pi/2$ is eclipsed. The maximum curvature is more or less in the middle of the sheet where it was most strongly bent before being eclipsed. After overshoot, parts with $\theta(s) > \pi/2$ bend backward because of the reversed illumination, as can be seen in $t/\tau=0.3$, while eclipsed parts unbend exponentially in time since they are in the dark. Approach to the stationary state takes a long time, in terms of the fundamental time scale $\tau$, because of the complex sequence of overshoots.
Fig.~\ref{fig:dynamics}(b) has $\alpha_0=1$, which is slightly less than the maximal value $\alpha_m$ for $w/d = 3$. Now strongest bending is closer to the fixed end, as expected from Fig.~\ref{fig:stationary shape} for this $\alpha_{0}$. Overshoot is not so extreme and the approach to stationarity is much quicker. For smaller $\alpha_{0}$, for instance 0.5 for this length $L$, overshoot still occurs, but only slightly, whereas for $\alpha_{0} = 0.1$ with this $L$, it is lost.
\section{Conclusions}\label{sect:conclusions}
We have demonstrated that because the effective illumination is controlled by orientation, and at the same time drives orientation since it induces bend, the bending of photo-responsive sheets is subtle. To explore the qualitative behaviour that should arise, both in statics and in dynamics, a simple geometrically-inspired dependence of light penetration on angle of incidence is adopted. The other essential physical driver of this photo-mechanics is that conversion of the isomerising guest dye molecules to their excited state has to be considerable (i) to perturb the local structure of the glass and induce mechanical change, and (ii) to allow deep penetration (via a front of depletion of the ground state species) so that bend can actually occur. We then find:
\begin{itemize}
\item Photo-stationary shapes arising are qualitatively different according to whether the intensity of illumination normally incident is more or less than a characteristic value that depends on both a material constant and on the thickness of the sheet reduced by the Beer length for absorption. These stationary states cannot be self-eclipsing.
\item Seminal experiments on nematic glass bend response did display self-eclipsing as response proceeded. By analysing the dynamical evolution of the mechanics, we show that this must have arisen as a transient effect and, accordingly, must be impossible to explain with just linear, Lambert-Beer, light absorption. We further predict that the route to the final photo-stationary state must show partial unbending via back bend, and could display multiple eclipses.
\end{itemize}
The two regimes both suggest further experiments which should also
give insight into how complex the true angular dependence of
absorption is. For example the experiments of Ikeda and Yu
\cite{Ikeda:03} could be illuminated for longer, such that appreciable
backbend (in addition to eclipsing) is observed. Eclipsing itself
depends quite crucially on the length of the photo-responsive sheets,
therefore repeating the experiments of Ikeda and Yu with different
sample lengths should also reveal complicated behaviour. Perhaps simplest of all would be to coat the back face of the sheet with a reflective coating, thus removing effects from illuminating the back surface and allowing one to focus entirely on front-surface illumination and eclipsing.
\vspace{.3cm}
\noindent \textit{Acknowledgements.} DC is grateful for support from BBSRC and XC from the Chinese Government visiting student programme. We thank Professor Yu for the photograph of a self-eclipsing photo-responsive nematic glass sheet. DC would like to dedicate this paper to the memory of Tess Tracey.
\section*{Appendix}
The appendix provides additional details
and proofs that were omitted from the main body of the paper due to
space constraints.
\section{Proof of Lemma~\ref{prop:lowerbound} ($SDP^2$ Provides a Lower Bound)}
In the following we consider performing sequential regression, similar to the second simple approach, but where in each step the action variants are not standardized apart. We show that the result of our algorithm, which uses a different computational procedure, is equivalent to the result of this sequential procedure. We then argue that this approach provides a lower bound.
Recall that the input value function $V_i$ has the form $V_i= \max_x \mbox{avg}_y V(x,y)$,
which we can represent in explicit expanded form as
$\max_x \frac{1}{n}[V(x,1) + V(x,2) + \ldots + V(x,n)]$.
Figure~\ref{Fig:IC_ExoRegrExp}(a) shows this expanded form of $V$ for our running example.
To establish this relationship we show that after the sequential algorithm regresses $E(1),$ $\ldots,$ $E(k)$ the intermediate value function has the form
\begin{equation}
\label{eq:template-form}
\max_x \frac{1}{n}[W(x,1) + W(x,2) + \ldots + W(x,k) + V(x,k+1) +\ldots + V(x,n)]
\end{equation}
as shown in Figure~\ref{Fig:IC_ExoRegrExp}(b).
That is, the first $k$ portions $V(x,\ell)$ change in the same structural manner into a diagram $W(x,\ell)$ and the remaining portions retain their original form. In addition, $W(x,\ell)$ is the result of regressing $V(x,\ell)$ through $E(\ell)$ which is the same form as calculated by step 3 of the template method.
Therefore, when all $E(\ell)$ have been regressed, the result is $V= \max_x \mbox{avg}_y W(x,y)$ which is the same as the result of the template method.
We prove the form in Eq~(\ref{eq:template-form}) by induction over $k$. The base case $k=0$ when no actions have been regressed clearly holds.
We next consider regression of $E(k)$.
We use the restriction that regression (via TVDs) does not introduce new
variables to conclude that we can regress $V$ by regressing each
element in the sum separately. Similarly, we use the restriction that
probability choice functions do not introduce new variables to
conclude that we can push the multiplication $prob(E_j(k)) \otimes
Regr(V, E_j(k))$ into each element of the sum (cf.\
\cite{SannerBo09,JoshiKeKh11} for similar claims).
Therefore, each action variant $E_j(k)$ produces a function of the form
$V^j= \max_x$ $\frac{1}{n}[U^j_1(x,1) + U^j_2(x,2) + \ldots + U_k^j(x,k) + U_{k+1}^j(x,k+1) +\ldots +
U^j_n(x,n)]$
where the superscript $j$ indicates regression by the $j$th variant and the form and subscript in $U_\ell$ indicate that different portions may have changed differently.
To be correct, we must standardize apart these functions
and add them using the binary operation $\oplus$.
We argue below that (C1)
if we do not standardize apart in this step then we get a lower bound on the
true value function,
and (C2) when we do not standardize apart the result has a special form
where only the $k$'th term is changed and all the terms $\ell\not=k$
retain the same value they had before regression.
In addition the $k$'th term changes in a generic way from $V(x,k)$ to $W(x,k)$.
In other words,
if we do not standardize apart the action variants of $E(k)$ then the
result of regression has the form
in Eq~(\ref{eq:template-form}).
It remains to show that C1 and C2 hold.
C1 is true because for any functions $f^1$ and $f^2$ we have
$[\max_{x_1} \mbox{avg}_{y_1} f^1(x_1,y_1)] + [\max_{x_2} \mbox{avg}_{y_2} f^2(x_2,y_2)]
\geq
\max_{x} [\mbox{avg}_{y_1}$ $f^1(x,y_1) + \mbox{avg}_{y_2} f^2(x,y_2)]
=
\max_{x} \mbox{avg}_{y} [ (f^1(x,y) + f^2(x,y))]
$ where the last equality holds because $y_1$ and $y_2$ range over the same
set of objects.
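The inequality behind C1 is easy to spot-check numerically. The sketch below (hypothetical random tables over small domains) verifies that aggregating the two functions separately never falls below aggregation over a shared $\max$ variable:

```python
import random

random.seed(0)
X, Y = range(4), range(5)

def max_avg(f):
    # max over x of the average over y
    return max(sum(f[x][y] for y in Y) / len(Y) for x in X)

for _ in range(1000):
    f1 = [[random.uniform(-1, 1) for _ in Y] for _ in X]
    f2 = [[random.uniform(-1, 1) for _ in Y] for _ in X]
    # right-hand side: a single max over a shared x
    shared = max(sum(f1[x][y] + f2[x][y] for y in Y) / len(Y) for x in X)
    assert max_avg(f1) + max_avg(f2) >= shared - 1e-12
```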
For C2 we consider the regression operation and the restriction on the
dynamics of exogenous actions. Recall that we allow only unary
predicates to be changed by the exogenous actions. To simplify the
argument assume that there is only one such predicate $sp()$.
According to the conditions of the lemma, $V_i=\max_x \mbox{avg}_y V(x,y)$ can refer to $sp()$ only as
$sp(y)$. That is, the only argument allowed to be used with $sp()$ is
the unique variable for which we have average aggregation.
Now consider the regression of $E(k)$ over the explicit sum
$V=\max_x \frac{1}{n}[W(x,1) + W(x,2) + \ldots + W(x,k-1) + V(x,k) +\ldots + V(x,n)]$ which is the form guaranteed by the inductive assumption.
Because $E(k)$ can only change $sp(k)$, and because $sp(k)$ can appear only in
$V(x,k)$, none of the other terms
is changed by the regression.
This holds for all action variants $E_j(k)$.
The sequential algorithm next multiplies each element of the sum by the
probability of the action variant, and then adds the sums without
standardizing apart. Now, when $\ell\not=k$, the $\ell$'th term
is not changed by regression of $E_j(k)$.
Then for each $j$ it is multiplied by $Pr(E_j(k))$ and
finally all the $j$ terms are summed together.
This yields exactly the original term ($W(x,\ell)$ for $\ell<k$ and $V(x,\ell)$ for $\ell>k$).
The term $\ell=k$ does change and this is exactly as in the template method, that is
$V(x,k)$ changes to $W(x,k)$.
Therefore C2 holds.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{Diagrams/ExoRegrExpansion.png}
\caption{Regression via the Template method (a) Expanded form of
Figure~\ref{Fig:IC_DynRegr}(e). (b) Expanded form of
the value function after regressing $E(1),E(2),\ldots, E(k)$.
}
\label{Fig:IC_ExoRegrExp}
\end{center}
\end{figure}
\section{Proof of Theorem~\ref{MLBThm} (Monotonic Lower Bound)}
The proof of Lemma~\ref{prop:lowerbound} and the text that follows it imply that for all $V$ satisfying {\bf A1-A4} we have $T'[V]\leq T[V]$.
Now, when $R$ is non-negative, $V_0=R$, and $V_{i+1}=T'[V_{i}]$,
this implies that for all $i$ we have $T'[V_i]\leq T[V_i]\leq V^*$.
We next show that under the same conditions on $V_0$ and $R$
we have that for all $i$
\begin{equation}
\label{eq:monlbstep}
V_i \leq T'[V_i] = V_{i+1}.
\end{equation}
Combining the two we get
$V_i \leq V_{i+1} = T'[V_i] \leq T[V_i] \leq V^*$
as needed.
We prove Eq (\ref{eq:monlbstep})
by induction on $i$. For the base case it is obvious that $V_0\leq V_1$ because $V_0=R$ and $V_1=R+W$ where $W$ is the regressed and discounted value function which is guaranteed to be non-negative.
For the inductive step, note that all the individual operations we use with GFODDs (regress, $\oplus$, $\otimes$, $\max$) are monotonic. That is, for any functions (GFODDs) with $f_1\geq f_2$ and $f_3\geq f_4$ we have $regress(f_1)\geq regress(f_2)$ and $op(f_1,f_3)\geq op(f_2,f_4)$. As a result, the same is true for any sequence of such operations, and in particular for the sequence of operations that defines $T'[V]$. Therefore, $V_{i-1} \leq V_{i}$ implies
$V_i = T'[V_{i-1}] \leq T'[V_i] = V_{i+1} $.
\section{Proof of Observation~\ref{obs:slplans} (Relation to Straight Line Plans)}
The template method provides a symbolic way to calculate a lower bound on the value function.
It is interesting to consider what kind of lower bound this provides.
Consider regression over $E(k)$ and the source of approximation in the
sequential argument where we do not standardize apart. Treating $V_n$ as the
next-step value function captures the ability to take the best action in the
next state, which is reached after the current exogenous action.
Now by calculating $\max_{x} \mbox{avg}_{y} [ (f^1(x,y) + f^2(x,y))]$
the choice of the next action (determined by $x$) is done without knowledge
of which action variant $E_j(k)$ has occurred. Effectively, we have pushed the
expectation over action variants $E_j(k)$ into the $\max$ over actions for
the next step. Now, because this is done for all $k$, and at every iteration
of the value iteration algorithm, the result is similar to having replaced the
true $m$ step to go value function
\begin{eqnarray*}
\max_{\alpha_1} Exp_{\beta_1} \max_{\alpha_2} Exp_{\beta_2} \ldots \max_{\alpha_m} Exp_{\beta_m} f(R,\{\alpha_i\},\{\beta_i\})
\end{eqnarray*}
(where $\alpha_i$ is the user action in the
$i$'th step and $\beta_i$ is the compound exogenous action in the $i$'th step)
with
$\max_{\alpha_1}$$\max_{\alpha_2}$$\ldots$
$\max_{\alpha_m}Exp_{\beta_1} $$Exp_{\beta_2} $$\ldots $$Exp_{\beta_m} $$f(R,\{\alpha_i\},\{\beta_i\})$.
The last expression is the value of the best linear plan, known as the
{\em straight line plan approximation}. The analogy given here does not go through
completely due to two facts. First, the $\max$ and expectation are over
arguments and not actions.
In particular, when there is more than one agent action template (e.g., $load$, $unload$, $drive$), we explicitly maximize over agent actions in Step~\ref{sdp_4} of $SDP^1$. These max steps are therefore done correctly and are not swapped with expectations.
Second, we do still standardize apart agent actions so that their outcomes are
taken into consideration. In other words the expectations due to randomization in the outcome of agent actions are performed correctly and are not swapped with max steps.
On the other hand,
when there is only one agent action template and the action is
deterministic we get exactly
straight line plan approximation.
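The ordering between the straight-line-plan value and the interleaved value can be checked on a toy two-stage problem. In the sketch below (hypothetical random payoff tables over small domains), fixing both actions before any exogenous outcome is seen never beats interleaving max with expectation:

```python
import random

random.seed(1)
A, B = range(3), range(3)   # action arguments and exogenous outcomes

def true_value(f):
    # max_{a1} Exp_{b1} max_{a2} Exp_{b2} f
    return max(
        sum(max(sum(f[a1][b1][a2][b2] for b2 in B) / len(B) for a2 in A)
            for b1 in B) / len(B)
        for a1 in A)

def straight_line(f):
    # max_{a1,a2} Exp_{b1,b2} f: actions chosen before outcomes are seen
    return max(
        sum(f[a1][b1][a2][b2] for b1 in B for b2 in B) / len(B) ** 2
        for a1 in A for a2 in A)

for _ in range(200):
    f = [[[[random.uniform(-1, 1) for _ in B] for _ in A]
          for _ in B] for _ in A]
    assert straight_line(f) <= true_value(f) + 1e-12
```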
\section{Preparation for Proof of Theorem~\ref{GFODDThm} (Correctness of Model Evaluation Algorithm)}
We start by proving the correctness of the evaluation step on its own
without the specialization for $\max_x \mbox{avg}_y$ aggregation and the
additional steps for reductions.
The pseudocode for the Eval procedure was given above.
Note that the two children of node $n$ may have aggregated different sets of
variables (due to having additional parents). Therefore in the code we
aggregate the table from each side separately (down to $maxvar(n)+1$) before
taking the union. Once the two sides are combined we still need to aggregate
the variables between $maxvar(n)+1$ and $maxabove(n)+1$ before returning the
table.
We have the following:
\begin{proposition}
\label{prop:gfoddeval-correct}
The value returned by the Eval procedure is exactly $\mbox{map}_B(I)$.
\end{proposition}
\begin{proof}
Given a node $n$, the value of $maxabove(n)$, and a concrete substitution
$\zeta$ (for variables $z_1$ to $z_{maxabove(n)}$)
reaching $n$ in $I$ we consider the
corresponding block in the brute force evaluation procedure and in our
procedure.
For the brute force evaluation we fix the values of $z_1$ to
$z_{maxabove(n)}$
to agree with $\zeta$
and
consider the aggregated value when all variables down to $z_{maxabove(n)}+1$
have been aggregated.
For Eval($n$) we consider the entry in the table returned by the procedure
which is consistent with $\zeta$.
Since the table may include some variables implicitly (those that are smaller
than $maxabove(n)$ but do not appear below $n$), we simply expand the table
entry with the values from $\zeta$.
We next prove by induction over the structure of the diagram that the
corresponding entries are identical. First, note that if this holds at the
root where $above(n)$ is the empty set, then the proposition holds because
all variables are aggregated and the value is $\mbox{map}_B(I)$.
For the base case, it is easy to see that the claim holds at a leaf, because
all substitutions reaching the leaf have the same value, and the block is
explicitly aggregated at the leaf.
Given any node $n$, we have two cases. In the first case, $maxself(n)$
$\leq$ $maxabove(n)$, that is, all variables in $n.lit$ are already substituted in $\zeta$. In this case, for any $\zeta$,
the entire block
traverses $n_{\downarrow c}$ (where $c$ is either $t$ or $f$ as appropriate).
Clearly, the join with $bl^{n_{\downarrow c}}(I)$ identifies the correct child
$c$ with
respect to the entry of $\zeta$. Consider the table entries in
$M^{\downarrow c}(I)$
that are extensions of the substitution $\zeta$
possibly specifying more variables.
More precisely, if the child node is $n'$, the entries include the
variables up to
$\ell=maxabove(n')$.
By the inductive hypothesis the value in each entry is a correct aggregation
of all the variables down to $\ell+1$.
Now since the remaining variables are explicitly aggregated at $n$, the value
calculated at $n$ is correct.
In the second case, $maxself(n)>
maxabove(n)$ which means that some extensions of $\zeta$
traverse $n_{\downarrow t}$ and some traverse $n_{\downarrow f}$.
However, as in the previous case, by the inductive hypothesis we know that
the extended entries at the children are correct aggregations of their
values. Now it is clear that the union operation correctly collects these
entries together into one block, and as before because the remaining
variables are explicitly aggregated at $n$, the result is correct.
\qed
\end{proof}
\section{Proof of Theorem~\ref{GFODDThm} (Correctness of Edge Marking in Model Evaluation Algorithm)}
We start by giving a more detailed version of the extension of the
algorithm that collects edge sets.
In addition to the substitution and value,
every table entry is associated with a set of edges.
\\
(1)
When calculating the join we add the edge $n_{\downarrow f}$
to the corresponding table returned by the call to
Eval($n_{\downarrow f}$)
and similarly for $n_{\downarrow t}$ and
Eval($n_{\downarrow t}$).
\\
(2)
When a node aggregates an average variable the set of edges for the
new entry is the union of edges in all the entries aggregated.
\\
(3)
When a node aggregates a max variable the set of edges for the
new entry is the set of edges from the winning value. In case of a tie
we pick the set of edges which is smallest lexicographically.
\\
(4) A leaf node returns the empty set as its edge set.
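These bookkeeping rules can be made concrete with a small sketch (hypothetical table entries; each entry is a (value, edge-set) pair):

```python
# hypothetical entries aggregated at one node: (value, edge set)
entries = [(5.0, {"e1", "e4"}), (5.0, {"e1", "e3"}), (3.0, {"e2"})]

# rule (2): average aggregation -> mean value, union of edge sets
avg_value = sum(v for v, _ in entries) / len(entries)
avg_edges = set().union(*(e for _, e in entries))

# rule (3): max aggregation -> best value; ties broken by the
# lexicographically smallest edge set
best_value = max(v for v, _ in entries)
winning_edge_sets = sorted(sorted(e) for v, e in entries if v == best_value)
max_edges = set(winning_edge_sets[0])

assert avg_edges == {"e1", "e2", "e3", "e4"}
assert best_value == 5.0 and max_edges == {"e1", "e3"}
```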
The proof of Theorem~\ref{GFODDThm}
is similar to the proof above, in that we define a property of nodes
and prove it inductively, but in this case it is simpler to argue by way of
contradiction.
\begin{proof}
The correctness of the value returned was already shown in Proposition~\ref{prop:gfoddeval-correct}. We therefore focus on showing that the set of edges returned is identical to the one returned by the brute force method.
For a node $n$ and a concrete substitution $\zeta$ (for
variables $z_1$ to $z_{maxabove(n)}$) reaching $n$ in $I$, define $B_\zeta$
to be the sub-diagram of $B$ rooted at $n$ where $z_1$ to $z_{maxabove(n)}$
are substituted by $\zeta$, and with the aggregation function of
$z_{maxabove(n)+1},\ldots,z_{N}$ as in $B$ where $z_N$ is the last variable
in the aggregation function.
We claim that for each node $n$, and $\zeta$ that reaches $n$,
the entry in the table
returned by $n$ which is consistent with $\zeta$ has the value $v=\mbox{map}_{B_\zeta}(I)$ and
set of edges $E$, where $E$ is the lexicographically smallest set of edges of
a block achieving the value $v$.
Note that if the claim holds at the root $n$ then the theorem holds because
$above(n)$ is empty. In the rest of the proof we
argue that the set of edges returned is lexicographically smallest.
Now consider any $I$ and any $B$ and assume by way of contradiction that the
claim does not hold for $I$ and $B$. Let $n$ be the lowest node in $B$ for
which this happens. That is the claim does hold for all descendants of $n$.
It is easy to see that such a node $n$ cannot be a leaf, because for any leaf
the set
$E$ is the empty set and this is what the procedure returns.
For an internal node $n$, again we have two cases.
If
$maxself(n)\leq
maxabove(n)$,
then
the entire block corresponding to $\zeta$
traverses $n_{\downarrow c}$ (where as above $c$ is $t$ or $f$).
In this case, if the last variable (the only one with average aggregation)
has not yet been aggregated then the tables are full and the claim clearly
holds because aggregation is done directly at node $n$. Otherwise, $n$'s
child aggregated the variables beyond $z_k$ for some $k\geq m=maxabove(n)$.
Let $\eta$ be a substitution for $z_{m+1},\ldots,z_k$. Then by the assumption
we know that each entry in the table returned by the child,
which is consistent with
$\zeta,\eta$ has value $\mbox{map}_{B_{\zeta,\eta}}(I)$ and the lexicographically
smallest set of edges corresponding to a block achieving this value.
Now, at node $n$ we aggregate $z_{m+1},\ldots,z_k$ using this table.
Consider the relevant sub-table with entries $\zeta,\eta_i,v_i,\hat{E}_i$
where $\hat{E}_i$ is $E_i$ with the edge $n_{\downarrow c}$ added to it by
the join operation.
Because $z_{m+1},\ldots,z_k$ use $\max$ aggregation, the aggregation at $n$
picks a $v_i$ with the largest value and the corresponding $\hat{E}_i$ where in
case of tie in $v_i$ we pick the entry with smallest $\hat{E}_i$.
By our assumption this set $\hat{E}_i$ is not the lexicographically smallest set
corresponding to a block of substitutions realizing the value
$\mbox{map}_{B_{\zeta}}(I)$.
Therefore, there must be a block of valuations
$\zeta \eta'$ where
$\eta'$ is the substitution for $z_{m+1},\ldots,z_k$
realizing the same value $v_i$ and whose edge set $E'$ is
lexicographically smaller than $\hat{E}_i$. But in this case $\eta'=\eta_j$ for
some $j$, and $E'\setminus n_{\downarrow c}$ is
lexicographically smaller than $E_i$
which (by construction, because the algorithm chose $E_i$) is
lexicographically smaller than $E_j$.
Thus the entry for $E_j$ is incorrect.
This contradicts our assumption that $n$ is the lowest node violating
the claim.
The second case, where $maxself(n)>
maxabove(n)$, is argued similarly. In this case the substitutions
extending $\zeta$ may traverse either $n_{\downarrow t}$ or $n_{\downarrow
f}$.
We first aggregate some of the variables in each child's table.
We then take the union of the tables to form the block of $\zeta$ (as well as
other blocks) and aggregate the remaining $z_{m+1},\ldots,z_k$.
As in the previous case, both of these direct aggregation steps
preserve the minimality of the corresponding sets $E_i$.
\qed
\end{proof}
\section{Conclusions}
The paper presents service domains as an abstraction of planning problems with additive rewards and with multiple simultaneous but independent exogenous events. We provide a new relational SDP algorithm and the first complete analysis of such an algorithm with provable guarantees. In particular our algorithm, the template method, is guaranteed to provide a monotonic lower bound on the true value function
under some technical conditions. We have also shown that this lower bound lies between the value of straight line plans and the true value function. As a second contribution we introduce new evaluation and reduction algorithms for the GFODD representation, that in turn facilitate efficient implementation of the SDP algorithm.
Preliminary experiments demonstrate the viability of our approach and that our algorithm can be applied even in situations that violate some of the assumptions used in the analysis.
The paper provides a first step toward analysis and solutions of general problems with exogenous events by focusing on a well defined subset of such models.
Identifying more general conditions for existence of compact solutions, representations for such solutions, and associated algorithms is an important challenge for future work. In addition,
the problems involved in evaluation and application of diagrams
are computationally demanding. Techniques to speed up these computations are
an important challenge for future work.
\section{Experimental Validation}
In this section we present an empirical demonstration of our
algorithms. To that end
we implemented our algorithms in Prolog as an
extension of the {\sc FODD-Planner} \cite{JoshiKh08}, and compared it
to SPUDD \cite{HoeyStHuBo99} and MADCAP \cite{SannerUtDe10}
that take advantage of propositionally factored state spaces,
and implement VI
using propositional algebraic decision diagrams (ADD)
and affine ADDs respectively.
For SPUDD and MADCAP, the domains
were specified in
the Relational Domain Description Language (RDDL)
and translated into propositional descriptions
using software provided for the IPPC 2011 planning competition
\cite{Sanner10}.
All experiments were run
on an Intel Core 2 Quad CPU @ 2.83GHz. Our
system was given
$3.5$Gb of memory and SPUDD and MADCAP were given $4$Gb.
We tested all three systems on the IC domain as described above where
shops and trucks have binary inventory levels (empty or full).
We present results for the IC domain,
because it satisfies all our assumptions and because
the propositional systems fare better in this case.
We also present results for a more complex
IC domain (advanced IC, or AIC, below) where the inventory can be in
one of $3$ levels ($0$, $1$, and $2$)
and a shop can have one of $2$ consumption
rates ($0.3$ and $0.4$).
AIC does not satisfy assumption {\bf A3}.
As the experiments show, even with this small extension, the combinatorics
render the propositional approach infeasible.
In both cases, we constructed the set of focus states to include all
possible states over 2 shops.
This provides exact reduction for states with 2 shops but the reduction is approximate for larger states as in our experiments.
Figure~\ref{Fig:exp} summarizes our results, which we discuss from left to right and top to bottom.
The top left plot shows runtime as a function of iterations for AIC and illustrates
that the variable elimination method is significantly faster than
brute force evaluation and that it enables us to run many more iterations.
The top right plot shows the total time (translation from
RDDL to a propositional description and off-line
planning for 10 iterations of VI) for
the 3 systems for one problem instance per size for AIC.
SPUDD runs out of memory and fails with more than 4 shops, and MADCAP
can handle at most 5 shops.
Our planning time (being domain size agnostic) is constant.
Runtime plots for IC are omitted but they show a similar qualitative picture,
where the propositional systems fail with more than
$8$ shops for SPUDD and $9$ shops for MADCAP.
The middle two plots show the cost of using the policies, that is, the
on-line execution time as a function of increasing
domain size in test instances.
To control run time for our policies we show the time
for the GFODD policy produced after 4 iterations, which is
sufficient to solve any problem in IC and AIC.\footnote{
Our system does not achieve structural convergence because the reductions
are not comprehensive. We give results at 4 iterations as this is
sufficient for solving all problems in this domain.
With more iterations, our policies are larger and their execution is
slower.
}
On-line time for
propositional systems is fast for the domain sizes they solve,
but our system can solve problems of much larger size (recall that the
state space grows exponentially with the number of shops).
The bottom two plots show the total discounted reward accumulated by
each system (as well as a random policy) on $15$ randomly generated problem instances
averaged over 30 runs.
In both cases all algorithms are significantly better than the random
policy. In IC our approximate policy is not distinguishable from the
optimal (SPUDD). In AIC the propositional
policies are slightly better
(differences are statistically significant).
In summary, our system provides a non-trivial approximate policy
but is
sub-optimal in some cases, especially in AIC where {\bf A3} is
violated. On the other hand its offline planning time is independent
of domain size, and it can solve instances that cannot be solved by
the propositional systems.
\section{Evaluation and Reduction of GFODDs}
The symbolic operations in the SDP algorithm yield
diagrams that are redundant in the sense that portions
of them can be removed without changing the values they compute.
Recently, \cite{JoshiKeKh11,JoshiKeKh10}
introduced the idea of model checking
reductions to compress such diagrams.
The basic idea is simple. Given a
set of ``focus states'' $S$, we evaluate the diagram on every
interpretation in $S$. Any portion of the diagram that does not ``contribute'' to
the final value in any of the interpretations is removed.
The result is a diagram which is exact on the focus states, but
may be approximate on other states. We refer the reader to \cite{JoshiKeKh11,JoshiKeKh10}
for further motivation and justification. In that work,
several variants of this idea have been analyzed formally (for $\max$ and $\min$ aggregation), have been shown
to perform well empirically (for $\max$ aggregation), and methods for generating $S$ via random walks have been developed.
In this section we develop the second contribution of the paper,
providing an efficient realization of this idea
for
$\max_x\mbox{avg}_y$ aggregation.
The basic reduction algorithm, which we refer to below as brute
force model checking for GFODDs, is:
(1)
Evaluate the diagram on each example
in our focus set $S$
marking all edges that actively participate in
generating the final value returned for that example. Because we have
$\max_x \mbox{avg}_y$ this value is given by the ``winner'' of max
aggregation. This is a block of substitutions that includes one assignment to
$x$ and all possible assignments to $y$. For each such block collect
the set of edges traversed by any of the substitutions in the block. When
picking the max block,
also collect the edges traversed by that
block, breaking ties by lexicographic ordering over edge sets.
(2)
Take the union of the marked edges over all examples, redirecting
any edge not in this set to the 0 leaf.
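This two-step procedure can be sketched concretely. The encoding below is ours, not the paper's: internal nodes are a dictionary mapping a node id to a triple (atom, true child, false child), where the atom is a function of the interpretation and substitution; leaves map ids to values; aggregation is fixed to $\max_x\mbox{avg}_y$.

```python
from statistics import mean

def trace(nodes, leaves, root, interp, sub):
    """Follow one substitution to a leaf; return (leaf value, edges traversed).
    Edges are identified as (node id, branch), matching the 1t/1f notation."""
    edges, node = set(), root
    while node not in leaves:
        atom, t_child, f_child = nodes[node]
        branch = atom(interp, sub)
        edges.add((node, branch))
        node = t_child if branch else f_child
    return leaves[node], edges

def marked_edges(nodes, leaves, root, interp, objects):
    """Step (1): edges of the winning max-block, ties broken lexicographically."""
    best = None
    for x in objects:                      # one block per assignment to x
        vals, block_edges = [], set()
        for y in objects:                  # all assignments to y within the block
            v, e = trace(nodes, leaves, root, interp, {'x': x, 'y': y})
            vals.append(v)
            block_edges |= e
        cand = (mean(vals), sorted(block_edges))
        if (best is None or cand[0] > best[0]
                or (cand[0] == best[0] and cand[1] < best[1])):
            best = (cand[0], cand[1], block_edges)
    return best[2]

def reduce_diagram(nodes, leaves, root, focus):
    """Step (2): union the marked edges over all focus states and
    redirect every unmarked edge to a fresh 0 leaf."""
    keep = set()
    for interp, objects in focus:
        keep |= marked_edges(nodes, leaves, root, interp, objects)
    new_leaves = dict(leaves, ZERO=0.0)
    new_nodes = {n: (atom,
                     t if (n, True) in keep else 'ZERO',
                     f if (n, False) in keep else 'ZERO')
                 for n, (atom, t, f) in nodes.items()}
    return new_nodes, new_leaves
```

The reduced diagram computes the same value as the original on every focus state, but may underestimate elsewhere, matching the approximation described in the text.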
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{Diagrams/VE2.png}
\caption{GFODD Evaluation (a) Brute Force method.
(b) Variable Elimination Method.}
\label{Fig:VE}
\end{center}
\vspace{-0.2in}
\end{figure*}
Consider again the example of evaluation in Figure~\ref{Fig:VE}(a), where
we assigned node identifiers 1,2,3.
We identify edges by their parent node and its
branch so that the left-going edge from the root is edge $1t$. In this case
the final value $7/3$ is achieved by multiple blocks of
substitutions, and two distinct sets of edges $1t2f3t3f$ and $1f3t3f$.
Assuming $1$$<$$2$$<$$3$ and $f$$<$$t$,
$1f3t3f$ is lexicographically smaller and is chosen as the marked set.
This process is illustrated in the tables of Figure~\ref{Fig:VE}(a).
Referring to the reduction procedure, if our focus set $S$ includes only this interpretation, then the edges
$1t, 2t, 2f$ will be redirected to the value 0.
\section{Introduction}
Relational Markov Decision Processes (RMDPs) offer an attractive formalism
to study both reinforcement learning and probabilistic planning in relational
domains.
However, most work on RMDPs has
focused on planning and learning when the only transitions in the
world are a result of the agent's actions.
We are interested in a class of problems modeled as
{\em service domains}, where
the world is affected by exogenous service requests
in addition to the agent's actions.
In this paper we use the
inventory control (IC) domain as a motivating running example and for experimental validation.
The domain models
a retail company faced with the task of maintaining the inventory
in its shops to meet consumer demand.
Exogenous events (service requests) correspond to arrival of customers at shops
and, at any point in time,
any number of service requests can occur
independently of each other and independently of the agent's action.
Although we focus on IC,
independent exogenous service requests are common
in many other problems, for example, in
fire and emergency response, air traffic control,
and service centers such as
taxicab companies, hospitals, and restaurants.
Exogenous events present a challenge for planning
and reinforcement learning algorithms
because the number of possible next states,
the ``stochastic branching
factor'', grows exponentially
in the number of possible
simultaneous service requests.
In this paper we consider symbolic dynamic programming (SDP) to solve
RMDPs, as it allows us to reason more
abstractly than is typical in forward planning and reinforcement
learning.
The SDP solutions for propositional MDPs can be adapted to
RMDPs by grounding the RMDP for each size to get
a propositional encoding, and
then using a ``factored approach'' to solve the resulting
planning
problem, e.g., using algebraic decision diagrams (ADDs) \cite{HoeyStHuBo99} or
linear function approximation \cite{GuestrinKoPaVe03}.
This approach can easily model exogenous events
\cite{BoutilierDeHa99}
but it plans for a fixed domain size and requires increased time and space due to
the grounding.
The relational (first order logic) SDP approach
\cite{BoutilierRePr01} provides a solution which is independent of the domain
size, i.e., it holds for any problem instance.
On the other hand, exogenous events make the first order formulation much
more complex. To our knowledge, the only work to have approached this
is \cite{SannerBo07,Sanner08}.
While Sanner's work
is ambitious in that it attempted
to solve a very general class of problems,
the solution used linear function approximation,
approximate policy iteration, and some heuristic logical
simplification steps to
demonstrate that some problems can be solved.
It is not clear when the combination of ideas
in that work
is applicable,
both in terms of the algorithmic approximations and in
terms of the symbolic simplification algorithms.
In this paper we make a different compromise by constraining the class
of problems and aiming for a complete symbolic solution.
In particular, we introduce the class of service domains, that have
a simple form of independent object-focused exogenous events, so that
the transition in each step can be modeled as first taking the agent's action,
and then following a sequence of ``exogenous actions'' in any order.
We then investigate a relational SDP approach to solve such problems.
The main contribution of this paper is a new symbolic algorithm
that is proved to provide a lower bound approximation on the true value function for service domains under certain technical assumptions.
While the assumptions are somewhat strong, they allow us to provide the first complete analysis of relational SDP with exogenous events, which is important for understanding such problems. In addition, while the assumptions are needed for the analysis, they are not needed for the algorithm itself, which can be applied in more general settings.
Our second main contribution provides algorithmic support to implement
this algorithm
using the GFODD
representation of \cite{JoshiKeKh11}. GFODDs provide a scheme for capturing and manipulating functions over relational structures. Previous work has analyzed some theoretical properties of this representation but did not provide practical algorithms.
In this paper we develop
a model evaluation algorithm for GFODDs
inspired by variable elimination (VE),
and a model checking reduction for GFODDs.
These are crucial for efficient
realization of the new approximate SDP algorithm.
We illustrate the new algorithm
in two variants of the IC domain,
where one satisfies our assumptions and the other does not.
Our results demonstrate that
the new algorithm can be implemented efficiently, that its
size-independent solution scales much better than propositional
approaches \cite{HoeyStHuBo99,SannerUtDe10}, and that
it produces high quality policies. %
\subsubsection*{Acknowledgements}
This work was partly supported by NSF under grants IIS-0964457 and IIS-0964705
and the CI fellows award for Saket Joshi. Most of this work was done when Saket
Joshi was at Oregon State University.
\bibliographystyle{splncs03}
\section{Preliminaries: Relational Symbolic Dynamic Programming}
We assume familiarity with basic notions of Markov Decision Processes
(MDPs) and First Order Logic
\cite{RusselNo02,Puterman1994}. Briefly, an MDP is given by a set of states $S$, actions $A$,
transition function $Pr(s'|s,a)$, immediate reward function $R(s)$ and
discount factor $\gamma<1$.
The solution of an MDP is a policy that maximizes
the expected discounted total reward obtained by following that policy starting from any state.
The Value Iteration algorithm (VI) calculates the optimal value function $V^*$ by iteratively performing Bellman backups
$V_{i+1} = T[V_i]$ defined for each state $s$ as,
\begin{equation}
\label{eq:viflat}
V_{i+1}(s)\leftarrow \max_a\{ R(s) + \gamma \sum_{s'} Pr(s'|s,a) V_i(s')\}.
\end{equation}
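For concreteness, the Bellman backup of Eq.~(\ref{eq:viflat}) can be implemented directly for a small enumerated MDP. The two-state chain below is an illustrative toy, not a domain from this paper:

```python
# Value iteration for a tiny enumerated MDP (illustrative two-state chain):
# states 0 and 1, actions 'stay' and 'go'.
GAMMA = 0.9
S = [0, 1]
A = ['stay', 'go']
R = {0: 0.0, 1: 1.0}
# P[(s, a)] maps next states s' to Pr(s'|s,a)
P = {(0, 'stay'): {0: 1.0}, (0, 'go'): {0: 0.2, 1: 0.8},
     (1, 'stay'): {1: 1.0}, (1, 'go'): {0: 0.8, 1: 0.2}}

def bellman_backup(V):
    # V_{i+1}(s) = max_a { R(s) + gamma * sum_{s'} Pr(s'|s,a) V_i(s') }
    return {s: max(R[s] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)].items())
                   for a in A)
            for s in S}

V = {s: 0.0 for s in S}
for _ in range(200):  # sup-norm error contracts by a factor of GAMMA per backup
    V = bellman_backup(V)
```

Since the backup is a $\gamma$-contraction, the loop converges to $V^*$; the SDP algorithms discussed below compute the same backup symbolically, without enumerating states.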
\noindent{\bf Relational MDPs:}
Relational MDPs are simply MDPs where the states and actions are described
in a function-free first order logical language. In particular, the
language allows a set of logical constants, a set of logical
variables, a set of predicates (each with its associated arity),
but no functions of arity greater than 0.
A state corresponds to an \emph{interpretation} in first order logic
(we focus on finite interpretations)
which specifies (1) a finite set of $n$ domain elements also known as objects, (2)
a mapping of constants to domain elements, and (3) the truth values of all the predicates over tuples of domain elements of appropriate size (to match the arity of the predicate).
Atoms are predicates applied to appropriate tuples of arguments. An atom is said to be ground when all its arguments are constants or domain elements.
For example, using this notation ${empty}({x_1})$ is an atom and
${empty}({shop23})$
is a ground atom involving the predicate
${empty}$
and object
${shop23}$
(expressing that the shop $shop23$ is empty in the IC domain).
Our notation does not distinguish constants
and variables as this will be clear from the context.
One of the advantages of relational SDP algorithms, including the one in this paper, is that the number of objects $n$ is not known or used at planning time and the resulting policies generalize across domain sizes.
The state transitions induced by agent actions are modeled exactly as in previous SDP work \cite{BoutilierRePr01}.
The agent has a set of
action types $\{A\}$ each parametrized with a tuple of objects to
yield an action template $A(x)$ and a concrete ground action $A(o)$
(e.g. template $unload(t,s)$ and concrete action $unload(truck1,shop2)$). To
simplify notation, we use $x$ to refer to a
single variable or a tuple of variables of the appropriate arity.
Each agent action has a finite number of action variants
$A_j(x)$
(e.g., action success vs. action failure),
and when the user performs $A(x)$ in state $s$ one of the variants
is chosen randomly using the state-dependent action choice distribution $Pr(A_j(x) | A(x))$.
Similar to previous work we model the reward as some additive function over the domain. To avoid some technical complications, we use average instead of sum in the reward function; this yields the same result up to a multiplicative factor.
\noindent{\bf Relational Expressions and GFODDs:}
To implement planning algorithms for relational MDPs we require a symbolic representation of functions
to compactly describe the rewards, transitions, and eventually value functions.
In this paper we use the GFODD representation of \citeAY{JoshiKeKh11} but
the same ideas work for any
representation that can express open-expressions and closed
expressions over interpretations (states).
An expression represents
a function mapping interpretations to real values.
An open expression $f(x)$, similar to an open formula in first order logic,
can be evaluated in interpretation $I$ once we substitute the variables $x$
with concrete objects in $I$.
A closed expression $(\mbox{aggregate}_x f(x))$, much like a closed first
order logic formula, aggregates the value of $f(x)$ over all possible
substitutions of $x$ to objects in $I$.
First order logic limits $f(x)$ to
have values
in $\{0,1\}$ (i.e., evaluate to {\em false} or {\em true}) and provides the aggregation $\max$
(corresponding to existential quantification)
and $\min$
(corresponding to universal quantification)
that can be used individually on each variable in $x$.
Expressions are more general: they allow additional aggregation functions (for example, average), so that aggregation generalizes quantification in logic, and
they allow $f(x)$ to take numerical values.
On the other hand, our expressions require aggregation operators
to be at the front of the formulas and thus correspond to logical expressions in prenex normal form.
This enables us to treat the aggregation portion and formula portion separately in our algorithms.
In this paper we
focus on average and max aggregation.
For example, in the IC domain we might use the expression:
``$\max_t, \mbox{avg}_s,$ (if $\neg empty(s)$ then 1, else if $tin(t,s)$ then 0.1, else 0)''. Intuitively, this awards a 1 for every non-empty shop, and at
most one shop is awarded a 0.1 if there is a truck at that shop. The value of this expression is given by picking the $t$ that maximizes the average over $s$.
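As an illustration, this expression can be evaluated directly on a small concrete state; the state below (three shops, one truck) is hypothetical:

```python
from statistics import mean

# Hypothetical IC state: shop s1 is empty and the truck t1 is at s1.
shops = ['s1', 's2', 's3']
trucks = ['t1']
empty = {'s1'}
tin = {('t1', 's1')}

def f(t, s):
    # open formula: if not empty(s) then 1, else if tin(t,s) then 0.1, else 0
    if s not in empty:
        return 1.0
    return 0.1 if (t, s) in tin else 0.0

# closed expression: max_t avg_s f(t, s)
value = max(mean(f(t, s) for s in shops) for t in trucks)
```

Here the two stocked shops each contribute 1 and the empty shop with the truck contributes 0.1, so the expression evaluates to $2.1/3 = 0.7$.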
GFODDs provide a graphical representation and associated algorithms to
represent open and closed expressions. A GFODD is given by an aggregation
function, exactly as in the expressions, and a labeled directed acyclic
graph that represents the open formula portion of the expression. Each leaf
in the GFODD is labeled with a non-negative numerical value, and each
internal node is labeled with a first-order atom (allowing for equality atoms)
where we allow atoms to use constants or variables as arguments.
As in propositional diagrams \cite{BaharFrGaHaMaPaSo93}, for efficiency reasons, the order over nodes in the diagram must conform to a fixed ordering over node labels, which are first order atoms in our case.
Figure~\ref{Fig:IC_Dynamics}(a) shows an
example GFODD capturing the expression given in the previous
paragraph. %
Given a diagram
$B=(\mbox{aggregate}_x f(x))$, an interpretation $I$, and a substitution of variables in
$x$ to objects in $I$, one can traverse a path to a leaf which gives the
value for that substitution. The values of all substitutions are aggregated
exactly as in expressions.
In particular,
let the variables as ordered in the aggregation function be $x_1,\ldots,x_n$.
To calculate
the final value, $\mbox{map}_B(I)$,
the semantics prescribes that we enumerate all substitutions
of variables $\{x_i\}$ to objects in $I$ and then perform the aggregation
over the variables, going from $x_n$ to $x_1$. We can therefore think of
the aggregation as if it organizes the substitutions into blocks (with fixed
values for the first $k-1$ variables and all values for the $k$'th variable), and then
aggregates the value of each block separately, repeating this from $x_n$ to
$x_1$.
We call the algorithm that follows this definition directly {\em brute
force evaluation}.
A detailed example is shown in Figure~\ref{Fig:VE}(a).
To evaluate the diagram in Figure~\ref{Fig:VE}(a) on the interpretation shown there we enumerate all $3^3=27$ substitutions of 3 objects to 3 variables, obtain a value for each, and then aggregate the values.
In the block where $x_1=a$, $x_2=b$, and $x_3$ varies over $a,b,c$ we get the values $3, 2, 2$ and an aggregated value of $7/3$.
This can be done for every block, and then we can aggregate over substitutions of $x_2$ and $x_1$. The final value in this case is $7/3$.
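Brute force evaluation can be sketched generically for any prefix of $\max$/avg aggregators, applied outermost first. Encoding the diagram simply as a function $f$ over complete substitutions is our simplification:

```python
from statistics import mean

def evaluate(aggs, objects, f, sub=()):
    """Brute force evaluation: recurse over variables outermost-first.
    aggs is a list of 'max'/'avg', one entry per variable; f maps a complete
    tuple of objects (one per variable) to the open-formula value."""
    if len(sub) == len(aggs):
        return f(sub)                      # a full substitution: read off a value
    vals = [evaluate(aggs, objects, f, sub + (o,)) for o in objects]
    return max(vals) if aggs[len(sub)] == 'max' else mean(vals)
```

For instance, with aggregators `['max', 'avg']`, objects $\{1,2,3\}$, and the toy open formula $f(x,y)=xy$ (our example, not the figure's diagram), the value is $\max_x \mbox{avg}_y\, xy = 6$. The cost is exponential in the number of variables, which motivates the variable elimination method used in the experiments.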
Any binary operation $op$ over real values can be generalized to
open and closed
expressions in a natural way. If $f_1$ and $f_2$ are
two closed expressions, $f_1\ op\ f_2$ represents the function which
maps each interpretation $w$ to $f_1(w)\ op\ f_2(w)$.
We follow the general convention of using
$\oplus$ and $\otimes$ to denote $+$ and $\times$ respectively
when they are applied to expressions.
This provides a definition but not an implementation of binary operations over expressions.
The work in
\cite{JoshiKeKh11} showed that
if the binary operation is
{\em safe}, i.e.,\ it distributes with respect to all aggregation operators,
then there is a simple algorithm (the Apply procedure)
implementing the binary operation over expressions.
For example
$\oplus$ is safe w.r.t.\ $\max$ aggregation, and it is easy to see that
$(\max_x f(x)) \oplus (\max_x g(x))$
$= \max_x \max_y (f(x) + g(y))$, and
the open formula portion (diagram portion) of the result
can be
calculated directly from the open expressions $f(x)$ and $g(y)$.
The Apply procedure \cite{WangJoKh08,JoshiKeKh11} calculates a diagram
representing $f(x)+ g(y)$ using operations over the graphs representing $f(x)$ and $g(y)$.
Note that we
need to standardize
apart, as in the renaming of $g(x)$ to
$g(y)$ for such operations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{Diagrams/RewdTVDs1.png}
\caption{IC Dynamics and Regression
(a) An example GFODD.
(b) TVD for $empty(s)$ under
the deterministic action $unload(t^*,s^*)$.
(c) Regressing the GFODD of (a)
over $unload(t^*,s^*)$.
(d) Object Maximization.
In these diagrams and throughout the paper, left-going edges represent the true
branch out of the node and right-going edges represent the false branch.
}
\label{Fig:IC_Dynamics}
\end{center}
\end{figure}
\noindent{\bf SDP for Relational MDPs:}
SDP provides a symbolic
implementation
of the value iteration update of Eq~(\ref{eq:viflat}) that avoids state enumeration
implicit in that equation.
The SDP algorithm of \cite{JoshiKeKh11}, generalizing \cite{BoutilierRePr01}, calculates one
iteration of value iteration as follows. As input we get (as GFODDs) closed
expressions $V_n$, $R$ (we use Figure~\ref{Fig:IC_Dynamics}(a) as the reward in the example below), and open expressions for the probabilistic choice of
actions $Pr(A_j(x)|A(x))$ and for the dynamics of deterministic action
variants.
The action dynamics are specified by providing a diagram (called truth value
diagram or TVD) for each variant $A_j(x)$ and predicate template
$p(y)$. The corresponding TVD, $T(A_j(x),p(y))$, is an open expression that specifies the
truth value of $p(y)$ {\em in the next state} when $A_j(x)$ has been executed
{\em in the current state}.
Figure~\ref{Fig:IC_Dynamics}(b) shows the TVD of $unload(t^*,s^*)$ for predicates $empty(s)$.
Note that in contrast to other representations
of planning operators (but similar to the successor state axioms of \cite{BoutilierRePr01}) TVDs
specify the truth value after
the action and not the change in truth value.
Since unload is deterministic we have only one variant and
$Pr(A_j(x)|A(x))=1$. We illustrate probabilistic actions in the
next section.
Following \cite{WangJoKh08,JoshiKeKh11} we require
that $Pr(A_j(x)|A(x))$ and $T(A_j(x),p(y))$
have no aggregations and cannot introduce new variables,
that is, the first refers to $x$ only and the
second to $x$ and $y$ but no other variables. This implies that the
regression and product terms in the algorithm below do not change the
aggregation function and therefore enables the analysis of the algorithm.
The SDP algorithm of \cite{JoshiKeKh11} implements Eq~(\ref{eq:viflat}) using the following
4 steps. We denote this as $V_{i+1}=SDP^1(V_i)$.
\begin{enumerate}
\item \label{sdp_1} {\bf Regression:} The $n$ step-to-go value function
$V_n$ is regressed over every deterministic variant $A_j(x)$ of every
action $A(x)$ to produce $Regr(V_n, A_j(x))$.
Regression is conceptually similar to goal regression in deterministic planning but it needs to be done for all (potentially exponential number of) paths in the diagram, each of which can be thought of as a goal in the planning context.
This can be done efficiently
by replacing every atom in the open formula portion of $V_{n}$
(a node in the GFODD representation)
by its corresponding TVD without changing the aggregation function.
Figure~\ref{Fig:IC_Dynamics}(c) illustrates the process of block replacement for the diagram of
part (a).
Note that $tin()$ is not affected by the action; therefore its
TVD simply repeats the predicate value, and the corresponding node is
unchanged by block replacement.
Therefore, in this example, we are effectively replacing only one node with its TVD. The TVD leaf valued 1 is connected to the left child (true branch) of the node and the 0 leaf is connected to the right child (false branch).
To maintain the diagrams sorted we must in fact
use a different implementation than block replacement;
the implementation
does not affect the constructions or proofs in the paper and we therefore refer the reader to \cite{WangJoKh08} for the details.
\item \label{sdp_2} {\bf Add Action Variants:} The Q-function
$Q_{V_n}^{A(x)}$ $=$ $R$ $\oplus$ $[\gamma$ $\otimes$
$\oplus_j(Pr(A_j(x))$ $\otimes$ $Regr(V_n, A_j(x)))]$ for each
action $A(x)$ is generated by combining regressed diagrams using the
binary operations $\oplus$ and $\otimes$ over expressions.
Recall that probability diagrams do not refer to additional
variables. The multiplication can therefore be done directly on the open formulas
without changing the aggregation function.
As argued by \cite{WangJoKh08}, to guarantee correctness, both
summation steps
($\oplus_j$ and $R\oplus$ steps) must
standardize apart the functions before adding them.
\item \label{sdp_3} {\bf Object Maximization:} Maximize over the action
parameters $Q_{V_n}^{A(x)}$ to produce $Q_{V_n}^A$ for each action $A(x)$,
thus obtaining the value achievable by the best ground instantiation of
$A(x)$ in each state. This step is implemented by converting action parameters $x$ in
$Q_{V_n}^{A(x)}$ to variables, each associated with the $max$ aggregation
operator, and appending these operators to the head of the aggregation
function.
For example, if object maximization were applied to the diagram of Figure~\ref{Fig:IC_Dynamics}(c)
(we skipped some intermediate steps) then $t^*, s^*$
would be replaced with variables
and given max aggregation so that the aggregation is
as shown in part (d) of the figure.
Therefore, in step 2, $t^*, s^*$ are constants (temporarily added to the logical language) referring to concrete objects in the world, and in step 3 we turn them into variables and specify the aggregation function for them.
\item \label{sdp_4} {\bf Maximize over Actions:} The $n+1$ step-to-go value function $V_{n+1}$ $=$
$\max_A Q_{V_n}^A$, is generated by combining the diagrams using
the binary operation $\max$ over expressions.
\end{enumerate}
The main advantage of this approach is that the regression operation, and the
binary operations over expressions $\oplus$, $\otimes$, $\max$
can be
performed symbolically and therefore the final value function output by the
algorithm is a closed expression in the same language.
We therefore get a completely symbolic form of value iteration.
Several instantiations
of this idea have been implemented \cite{KerstingOtDe04,HolldoblerKaSk06,SannerBo09,WangJoKh08}.
Except for the work of \cite{JoshiKeKh11,SannerBo09}, previous work has handled
only max aggregation.
Previous work
\cite{JoshiKeKh11} relies on the fact that the binary operations
$\oplus$, $\otimes$, and $\max$ are safe with respect to $\max,\min$
aggregation to provide a GFODD based SDP algorithm for problems
where the reward function has $\max$ and $\min$ aggregations.
In this paper we use reward functions with
$\max$ and $\mbox{avg}$ aggregation.
The binary operations
$\oplus$ and $\otimes$ are safe with respect to $\mbox{avg}$ but the binary operation $\max$ is not.
For example $2 + \mbox{avg}\{1,2,3\} = \mbox{avg}\{2+1,2+2,2+3\}$ but $\max\{2 , \mbox{avg}\{1,2,3\}\} \not=\mbox{avg}\{\max\{2,1\}, \max\{2,2\}, \max\{2,3\}\}$.
To address this issue we introduce a new implementation for this case in the next section.
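These (non-)distributivity facts can be checked directly; the small script below uses the same numbers as the example above, plus a two-binding toy for the standardize-apart case:

```python
from statistics import mean

vals = [1, 2, 3]

# + distributes over avg, so the binary operation (+) is safe w.r.t. avg:
assert 2 + mean(vals) == mean(2 + v for v in vals)

# max does not distribute over avg, so binary max is not safe w.r.t. avg:
assert max(2, mean(vals)) != mean(max(2, v) for v in vals)

# (+) is safe w.r.t. max once the arguments are standardized apart:
# max_x f(x) + max_y g(y) equals the max over all pairs (x, y).
f = {1: 5, 2: 7}
g = {1: 4, 2: 1}
assert max(f.values()) + max(g.values()) == max(f[x] + g[y] for x in f for y in g)
```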
\section{Model and Algorithms for Service Domains}
We now proceed to describe our extensions to SDP to handle
exogenous events. Exogenous events
refer to spontaneous changes to the state without
agent action.
Our main modeling assumption, denoted {\bf A1}, is that we have
{\em object-centered exogenous actions} that are
automatically taken in every time step.
In particular, for every object $i$
in the domain we have action $E(i)$ that acts on object $i$ and
the conditions and effects of $\{E(i)\}$ are such that they
are mutually non-interfering: given any state $s$, all the actions
$\{E(i)\}$ are applied simultaneously, and this is equivalent to their
sequential application in any order.
We use the same GFODD action representation described in the previous section to capture
the dynamics of $E(i)$.
\noindent
{\bf Example: IC Domain.}
We use a simple version of the inventory control domain (IC) as a
running example, and for some of the experimental results.
In IC the objects are a depot, a truck and a number of shops.
A shop can be empty or full (i.e.,
the inventory has only two levels), and the truck can be either at the depot or at a shop.
The reward is the fraction (average) of non-empty shops.
Agent actions are deterministic and they capture stock replacement. In particular,
a shop can be filled by {\it unload}ing inventory from the
truck in one step. The truck can be {\it load}ed in a depot and
{\it drive}n from any location (shop or depot) to any location in one
step.
The exogenous action $E(i)$ has two variants;
the success variant $E_{succ}(i)$ (customer
arrives at shop $i$, and if non-empty the inventory
becomes empty)
occurs with probability 0.4 and the fail variant $E_{fail}(i)$ (no customer, no changes
to state) occurs with probability 0.6.
Figure~\ref{Fig:IC_DynRegr} parts (a)-(d) illustrate the model for IC and its GFODD representation.
In order to facilitate the presentation of algorithmic steps, Figure~\ref{Fig:IC_DynRegr}(e) shows a slightly different reward function (continuing previous examples) that is used as the reward in our running example.
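Because the actions $\{E(i)\}$ are mutually non-interfering (assumption {\bf A1}), one exogenous transition can be simulated per shop independently. A minimal sketch, using our own state encoding (the set of empty shops):

```python
import random

def exogenous_step(empty_shops, shops, p_cust=0.4, rng=random):
    """One exogenous transition for IC: at every shop i, variant E_succ(i)
    (a customer arrives, so a non-empty shop becomes empty) fires with
    probability p_cust, and E_fail(i) (no change) fires otherwise.
    Returns the new set of empty shops."""
    return {s for s in shops if s in empty_shops or rng.random() < p_cust}
```

Note that a single step has up to $2^n$ distinct outcomes over $n$ shops, which is exactly the exponential stochastic branching factor discussed in the introduction.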
For our analysis we make two further modeling assumptions. {\bf A2}: we assume that
exogenous action $E(i)$ can only affect
unary properties of the object $i$.
To simplify the presentation we consider a single such predicate $sp(i)$
that may be affected, but any number of such predicates can be handled.
In IC, the special predicate $sp(i)$ is $empty(i)$
specifying whether the shop is empty. {\bf A3:} we assume that $sp()$ does
not appear in the precondition of any agent action. It follows that $E(i)$ only affects $sp(i)$ and
that $sp(i)$ can appear in the precondition of $E(i)$ but cannot appear in the precondition of any other action.
\subsection{The Template Method}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{Diagrams/ExoDynamics1.png}
\caption{Representation and template method for IC. (a) TVD for $empty(j)$ under action variant $E_{succ}(i)$. (b) TVD for $empty(j)$ under action variant $E_{fail}(i)$.
(c) A specialized form of (a) under $i=j$.
This is simply the value 1 and is therefore a GFODD given by a single leaf node.
(d) $Pr(E_{succ}(i)|E(i))$ which is simply the value 0.4.
(e) A simple reward function. (f) Grounding (e) using Skolem constant $a$. (g) Regressing (f) over $E_{succ}(a)$ and multiplying with the probability diagram in (d). (h) Regressing (f) over $E_{fail}(a)$ and multiplying by its probability diagram. (i) Adding (g) and (h) without standardizing apart. (j) Reintroducing the Avg aggregation.}
\label{Fig:IC_DynRegr}
\end{center}
\vspace*{-0.2in}
\end{figure}
Extending SDP to handle exogenous events is complicated because the events depend on
the objects in the domain and on their number, and exact solutions can result in complex expressions
that require counting formulas over the domain
\cite{SannerBo07,Sanner08}.
A possible simple approach would explicitly calculate
the composition of the agent's actions with all the exogenous events. But this
assumes that we know the number of objects $n$ (and thus does not generalize) and
results in an exponential number of action variants, which makes it infeasible.
A second simple approach would be to directly modify the SDP algorithm so that
it sequentially regresses the value function over each of
the ground exogenous actions
before performing the regression over the agent
actions, which is correct by our assumptions. However, this approach, too, requires us to know $n$
and, because it effectively grounds the solution, it suffers in terms of
generality.
We next describe the {\em template method}, one of our main contributions, which
provides a completely abstract approximate SDP solution for the exogenous event model.
We make our final assumption, {\bf A4}, that the reward function (and inductively $V_i$)
is a closed expression of the
form $\max_x\mbox{avg}_y V(x,y)$ where $x$ is a (potentially empty) set of variables and $y$
is a single variable, and
in $V(x,y)$ the predicate $sp()$
appears instantiated only as $sp(y)$.
The IC domain as described above satisfies all our assumptions.
The template method
first runs the following 4 steps, denoted $SDP^2(V_i)$,
and then follows with the 4 steps of SDP as given above for user actions.
The final output of our approximate Bellman backup, $T'$,
is $V_{i+1}= T'(V_i) = SDP^1(SDP^2(V_i))$.
\noindent
1. %
{\bf Grounding:}
Let $a$ be a Skolem constant
not in $V_i$.
Partially ground $V$ to get $V= \max_x V(x,a)$ \\
2. \label{sdp_1new} {\bf Regression:} The function $V$ is
regressed over every deterministic variant $E_j(a)$ of the exogenous
action centered at $a$ to produce $Regr(V, E_j(a))$. \\
3. \label{sdp_2new} {\bf Add Action Variants:} The value function
$V=$ $\oplus_j$$(Pr(E_j(a))$ $\otimes$ $Regr(V, E_j(a)))$ is updated.
As in $SDP^1$, multiplication is done directly on the open formulas
without changing the aggregation function.
Importantly, in contrast with $SDP^1$,
here we do not
standardize apart the functions when performing $\oplus_j$. This leads to an
approximation. \\
4. %
{\bf Lifting:}
Let the output of the previous step be
$V= \max_x W(x,a)$.
Return $V=\max_x\mbox{avg}_y W(x,y)$.
Thus, the algorithm grounds $V$ using a generic object for exogenous actions,
then performs regression for a single generic exogenous action, and finally
reintroduces the aggregation. Figure~\ref{Fig:IC_DynRegr} parts (e)-(j) illustrate this
process.
We now show that our algorithm provides a monotonic lower bound on the value function.
The crucial step is the analysis of $SDP^2(V_i)$. We have:
\begin{lemma}
\label{prop:lowerbound}
Under assumptions {\bf A1, A2, A4}
the
value function calculated by $SDP^2(V_i)$ is a lower bound
on the value of
regression of $V_i$ through all exogenous actions.
\end{lemma}
Due to space constraints the complete proof is omitted and we only provide a sketch.
This proof and other omitted details can be found in the full version of this paper \cite{JoshiKhTaRaFe-arxiv-2013}.
\begin{proof} (sketch) The main idea in the proof is to show that, under our assumptions, the result of our algorithm
is equivalent to
sequential regression of all exogenous
actions, where in each step the action variants are not standardized
apart.
Recall that the input value function $V_i$ has the form $V= \max_x \mbox{avg}_y V(x,y)$ $= \max_x \frac{1}{n}[V(x,1) + V(x,2) + \ldots + V(x,n)]$.
To establish this relationship we show that after the sequential algorithm regresses $E(1),\ldots,E(k)$ the intermediate value function has the form
$\max_x \frac{1}{n}[W(x,1) + W(x,2) + \ldots + W(x,k) + V(x,k+1) +\ldots + V(x,n)]$. That is, the first $k$ portions change in the same structural manner into a diagram $W$ and the remaining portions retain their original form $V$. In addition, $W(x,\ell)$ is the result of regressing $V(x,\ell)$ through $E(\ell)$ which is the same form as calculated by step 3 of the template method.
Therefore, when all $E(\ell)$ have been regressed, the result is $V= \max_x \mbox{avg}_y W(x,y)$ which is the same as the result of the template method.
The sequential algorithm is correct by definition when standardizing apart but yields a lower bound when not
standardizing apart.
This is true because for any functions $f^1$ and $f^2$ we have
$[\max_{x_1} \mbox{avg}_{y_1} f^1(x_1,y_1)] + [\max_{x_2} \mbox{avg}_{y_2} f^2(x_2,y_2)]
\geq
\max_{x} [\mbox{avg}_{y_1}$ $f^1(x,y_1) + \mbox{avg}_{y_2} f^2(x,y_2)]
=
\max_{x} \mbox{avg}_{y} [ (f^1(x,y) + f^2(x,y))]
$ where the last equality holds because $y_1$ and $y_2$ range over the same
set of objects. Therefore, if $f^1$ and $f^2$ are the results of regression for different variants from step 2, adding them without standardizing apart as in the last equation yields a lower bound.
\qed
\end{proof}
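The inequality at the heart of this argument can be checked numerically. The following Python sketch is our illustration, not part of the algorithm: random lookup tables over small finite domains stand in for the regressed summands $f^1$ and $f^2$, and we compare the standardized-apart sum with the shared-variable (non-standardized) version.

```python
import random

# Hypothetical tables standing in for f^1(x, y) and f^2(x, y).
random.seed(0)
X = range(3)
Y = range(4)
f1 = {(x, y): random.uniform(0, 10) for x in X for y in Y}
f2 = {(x, y): random.uniform(0, 10) for x in X for y in Y}

def avg(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

# Standardized apart: each summand keeps its own max variable.
standardized = (max(avg(f1[(x, y)] for y in Y) for x in X)
                + max(avg(f2[(x, y)] for y in Y) for x in X))

# Not standardized apart: one shared max_x over the summed averages,
# which (since y1 and y2 range over the same objects) equals
# max_x avg_y (f1(x, y) + f2(x, y)).
shared = max(avg(f1[(x, y)] + f2[(x, y)] for y in Y) for x in X)

# The non-standardized combination is a lower bound.
assert shared <= standardized + 1e-9
```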
The lemma requires that $V_i$ used as input satisfies {\bf A4}.
If this
holds for the reward function, and if $SDP^1$ maintains this property then
{\bf A4} holds inductively for all $V_i$. Put together this implies that the template method
provides a lower bound on the true Bellman backup.
It therefore remains to show how $SDP^1$ can be implemented for $\max_x \mbox{avg}_y$ aggregation
and that it maintains the form {\bf A4}.
First consider regression. If assumption
{\bf A3} holds, then our algorithm using regression through TVDs
does not introduce new occurrences of $sp()$ into $V$.
Regression also does not change the aggregation function.
Similarly, the probability diagrams do not introduce $sp()$ and do not change the aggregation function.
Therefore {\bf A4} is maintained by these steps. For the other steps we need to discuss
the binary operations
$\oplus$ and $\max$.
For $\oplus$, using the same argument as above, we see that
$[\max_{x_1} \mbox{avg}_{y_1} f^1(x_1,y_1)] + [\max_{x_2} \mbox{avg}_{y_2} f^2(x_2,y_2)]
=
\max_{x_1} \max_{x_2} [\mbox{avg}_{y}$ $f^1(x_1,y) + f^2(x_2,y)]$ and therefore it suffices to standardize apart the $x$ portion but $y$ can be left intact and {\bf A4} is maintained.
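As a quick numerical sanity check of this identity (our illustration, with arbitrary random tables for $f^1$ and $f^2$), both sides agree exactly because the average over the shared variable $y$ distributes over the sum:

```python
import random

random.seed(1)
X = range(3)
Y = range(5)
# Hypothetical summands f^1, f^2 as lookup tables (illustrative only).
f1 = {(x, y): random.uniform(-5, 5) for x in X for y in Y}
f2 = {(x, y): random.uniform(-5, 5) for x in X for y in Y}

def avg(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

lhs = (max(avg(f1[(x, y)] for y in Y) for x in X)
       + max(avg(f2[(x, y)] for y in Y) for x in X))

# Standardize apart only the max variables x1, x2; share the avg variable y.
rhs = max(avg(f1[(x1, y)] + f2[(x2, y)] for y in Y)
          for x1 in X for x2 in X)

assert abs(lhs - rhs) < 1e-9
```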
Finally, recall that we need a new implementation for the binary operation $\max$ with $\mbox{avg}$ aggregation.
This can be done as follows:
to perform $\max\{[\max_{x_1}$ $\mbox{avg}_{y_1}$ $f^1(x_1,y_1)],$
$[\max_{x_2} \mbox{avg}_{y_2} f^2(x_2,y_2)]\}$ we can introduce two new variables
$z_1,z_2$ and write the expression: ``$\max_{z_1,z_2} \max_{x_1} \max_{x_2}
\mbox{avg}_{y_1} \mbox{avg}_{y_2}$ (if $z_1=z_2$ then $f^1(x_1,y_1)$ else
$f^2(x_2,y_2)$)''. This is clearly correct whenever the interpretation has at least two objects because $z_1,z_2$ are
unconstrained.
Now, because the branches of the if statement are mutually exclusive, this expression can be further simplified to
``$\max_{z_1,z_2} \max_{x} \mbox{avg}_{y} $ (if $z_1=z_2$ then $f^1(x,y)$ else
$f^2(x,y)$)''. The implementation uses an equality node at the root with label $z_1=z_2$, and hangs $f^1$ and $f^2$ at the true and false branches.
Crucially it does not need to standardize apart the
representation of $f^1$ and $f^2$ and thus {\bf A4} is maintained.
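The equality-node construction can likewise be verified numerically. The snippet below (an illustrative sketch with hypothetical tables $f^1$, $f^2$) compares the direct maximum of the two max-avg expressions with the $z_1,z_2$ encoding over a domain of at least two objects:

```python
import random

random.seed(2)
X = range(3)
Y = range(4)
Z = range(2)  # at least two objects, so z1 == z2 and z1 != z2 both occur
f1 = {(x, y): random.uniform(0, 10) for x in X for y in Y}
f2 = {(x, y): random.uniform(0, 10) for x in X for y in Y}

def avg(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

direct = max(max(avg(f1[(x, y)] for y in Y) for x in X),
             max(avg(f2[(x, y)] for y in Y) for x in X))

# Equality-node construction: one shared max_x / avg_y, branch on z1 == z2.
encoded = max(max(avg((f1 if z1 == z2 else f2)[(x, y)] for y in Y)
                  for x in X)
              for z1 in Z for z2 in Z)

assert abs(direct - encoded) < 1e-9
```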
This establishes that the approximation returned by
our algorithm, $T'[V_i]$,
is a lower bound of the true Bellman backup $T[V_i]$.
An additional argument (details available in \cite{JoshiKhTaRaFe-arxiv-2013})
shows
that this is a monotonic lower bound, that is, for all $i$ we have
$T[V_i]\geq V_i$ where
$T[V]$ is the true Bellman backup.
It is well known
(e.g., \cite{McMahanLG05})
that if this holds then the value of the greedy policy w.r.t.\ $V_i$ is at least $V_i$
(this follows from the monotonicity of the policy update operator $T_{\pi}$).
The significance is, therefore, that $V_i$
provides an immediate certificate on the quality of the resulting greedy policy.
Recall that
$T'[V]$ is our approximate backup,
$V_0=R$ and $V_{i+1}=T'[V_i]$. We have:
\begin{theorem}
When assumptions {\bf A1, A2, A3, A4} hold and the reward function is non-negative we have for all $i$:
$V_i \leq V_{i+1} = T'[V_i] \leq T[V_i] \leq V^*$.
\label{MLBThm}
\end{theorem}
As mentioned above, although the assumptions are required for our analysis, the algorithm can be applied more widely.
Assumptions {\bf A1} and {\bf A4} provide our basic modeling assumption per object centered exogenous events and additive rewards. It is easy to generalize the algorithm to have events and rewards based on object tuples instead of single objects.
Similarly, while the proof fails when {\bf A2} (exogenous events only affect special unary predicates) is violated
the algorithm can be applied directly without modification. When {\bf A3} does not hold, $sp()$ can appear with multiple arguments and the algorithm needs to be modified.
Our implementation introduces an additional approximation: at each iteration
boundary we unify all the arguments of $sp()$ with the average
variable $y$. In this way the algorithm can be applied inductively for all $i$.
These extensions of the algorithm are demonstrated in our experiments.
\smallskip
\noindent{\bf Relation to Straight Line Plans:}
The template method provides a
symbolic way to calculate a lower bound on the value function.
It is interesting to consider what kind of lower bound this provides.
Recall that the
{\em straight line plan approximation} (see e.g., discussion in \cite{BoutilierDeHa99})
does not calculate a policy and instead at any state it seeks the best linear plan with
highest expected reward.
As the next observation argues
(proof available in \cite{JoshiKhTaRaFe-arxiv-2013}) the template method
provides a related approximation.
We note, however, that unlike previous work on straight line plans
our computation is done symbolically and
calculates the approximation for all start states simultaneously.
\begin{observation}
\label{obs:slplans}
The template method provides an approximation that is related to the value
of the best straight line plan.
%
When there is only one
deterministic agent action template we
get exactly the value of the straight line plan.
Otherwise,
the approximation is bounded between the
value of the straight line plan and the optimal value.
\end{observation}
\section{Introduction}
Second-harmonic microscopy is a promising imaging technique based
on a phenomenon called second-harmonic generation (SHG) or
frequency-doubling. SHG requires an intense laser beam passing
through a material with nonvanishing second-order susceptibility
\cite{GeneralConsideration}. A second electromagnetic field is
emitted at exactly twice the frequency of the incoming field.
Roughly speaking,
\begin{equation}
\mathbf{E}_{2\omega} \sim \mathbf{E}_\omega \chi^{(2)}
\mathbf{E}_\omega,
\end{equation} where $\chi^{(2)}$ is the second-order susceptibility tensor.
A necessary condition for an object to have a nonvanishing second-order
susceptibility tensor is a noncentrosymmetric structure.
Thus SHG only occurs in a few types of physical bodies: crystals
\cite{miller1964}, interfaces like cell membranes \cite{heinz1991,
campagnola2003}, nanoparticles \cite{zavelani2008, hui2004}, and
natural structures like collagen or neurons
\cite{brown2003,mertz2004}. This makes SHG a very good contrast
mechanism for microscopy, and it has been used in biomedical imaging.
SHG signals have a very low intensity because the coefficients in
$\chi^{(2)}$ are typically of the order of a picometer per volt
\cite{choy1976}. This is why a high-intensity laser
beam is required in order to produce a second-harmonic field that
is large enough to be detected by the microscope. Second-harmonic
microscopy has several advantages. Among them, the technique does not
involve excitation of molecules, so it is not subject to phototoxicity
or photobleaching. The excitation
uses near infrared light which has a very good penetration
capacity, and many natural structures (such as collagen)
exhibit strong SHG properties, so there is no need for
probes or dyes in certain cases. SHG images can be collected
simultaneously with standard microscopy and
two-photon-excitation-fluorescence microscopy for membrane imaging
(see, for instance, \cite{campagnola2003}).
The coherent nature of the SHG signal allows us to use nonlinear
holography for measuring the complex two-dimensional (amplitude
and phase) SHG signal \cite{hsieh2009three, pu2008harmonic}. The
idea is quite similar to conventional linear holography
\cite{cuche1999digital, schnars1994direct}. A frequency doubling
crystal is used to produce a coherent reference beam at the
second-harmonic frequency, which makes it possible to measure the phase of
the field emitted by the reflector \cite{hsieh2011imaging}.
On the other hand,
since only the dye/membrane produces the second-harmonic signal,
SHG microscopy allows precise imaging of the dye/membrane, free
of any scattering from the surrounding medium, in contrast to the
fundamental frequency image, where the signal measured is produced
by both the reflector and the medium. As will be shown in this
paper, this is the main feature which makes second-harmonic
imaging very efficient when it is not possible to obtain an image
of the medium without the dye in order to filter the medium noise.
In practical situations \cite{hsieh2011imaging}, it is not
possible to get an image without the reflector. The main purpose
of this work is to justify that the second-harmonic generation
acts in such situations as a powerful contrast imaging approach.
More precisely, we study the case of a nanoparticle with
nonvanishing second-order susceptibility tensor $\chi^{(2)}$ embedded
in a randomly heterogeneous medium illuminated by an incoming
electromagnetic field at a fixed frequency $\omega$. We give
asymptotic formulas for the electromagnetic field diffracted by
the particle and the medium at the fundamental frequency and at
the second-harmonic frequency. Then we use a backpropagation
algorithm in order to recover the position of the particle from
boundary measurements of the fields. We study the images obtained
by backpropagation both in terms of resolution and stability. In
particular, we elucidate that the second-harmonic field provides
a more stable image than that from fundamental frequency imaging,
with respect to medium noise, and that the signal-to-noise ratio
for the second-harmonic image depends neither on
$\chi^{(2)}$ nor on the volume of the particle. These
are the main findings of this study.
The paper is organized as follows. In section \ref{sec2} we
formulate the problem of SHG. In section \ref{sec3}, asymptotic
expansions in terms of the size of the small reflector (the
nanoparticle) of the scattered field at the fundamental frequency
and the second-harmonic generated field are derived. In section
\ref{sec5}, we introduce backpropagation imaging functions for
localizing the point reflector using the scattered field at the
fundamental frequency as well as the second-harmonic field. In
section \ref{sec6}, we perform a stability and resolution analysis
of the backpropagation imaging functions. We show that the medium
noise affects the stability and resolution of the imaging
functions in different ways. We prove that using the
second-harmonic field renders enhanced stability for the
reconstructed image. Our main findings are
delineated by a few numerical examples in section \ref{sec7}. The
paper ends with a short discussion.
\section{Problem formulation} \label{sec2}
Consider a small electric reflector $\Omega_r$ embedded in a
randomly heterogeneous medium in $\mathbb{R}^2$. We assume that
the medium has random fluctuations described by a random process $
\mu$ with Gaussian statistics and mean zero. Furthermore, we
assume that $\mu$ is compactly supported in $\mathbb{R}^2$ and let
$\Omega_\mu := \mbox{supp}(\mu)$. We also assume that the
refractive index of the background homogeneous medium
$\mathbb{R}^2 \setminus \overline{\Omega_\mu}$ is $1$. The medium
is illuminated by a plane wave at frequency $\omega >0$, intensity
$U_I >0$, and direction $\theta \in \mathbb{S}^1$:
\begin{equation} U_0 (x)= U_I e^{i \omega \theta
\cdot x},\end{equation} with $\mathbb{S}^1$ being the unit circle.
We assume that the incoming plane wave is polarized in the
transverse magnetic direction.
The small reflector $\Omega_r$ is in $\Omega_\mu$ and has a
refractive index given by
\begin{equation}
[\sigma_r-1]\textbf{1}_{\Omega_r}(x),
\end{equation}
where $\sigma_r$ is the refractive index contrast of the reflector,
$\Omega_r$ is compactly supported in $\Omega_\mu$ with volume $|\Omega_r|$,
and $\textbf{1}_{\Omega_r}$ is the characteristic function of $\Omega_r$.
The squared refractive index $n(x)$ in the whole space has then the following form:
\begin{equation}
\frac{1}{n(x)}= \left(1+\mu(x) +
[\sigma_r-1]\textbf{1}_{\Omega_r}(x)\right).
\end{equation}
The scattered field $u_s$ generated by the plane wave satisfies
the Helmholtz equation:
\begin{equation} \label{sgheq1}
\left\{
\begin{aligned}
\nabla\cdot\left(([{\sigma_r}-1]\textbf{1}_{\Omega_r} + \mu + 1) \nabla (u_s + U_0)\right)
+\omega^2 (u_s + U_0) = 0 \quad \mbox{in } \mathbb{R}^2, \\
\lim\limits_{\vert x\vert \to \infty} \sqrt{\vert
x \vert}(\frac{\partial u_s}{\partial \vert x \vert} -i\omega
u_s) =
0.
\end{aligned}
\right.
\end{equation}
The point reflector also scatters a second field $v$ at frequency
$2\omega$. The field $v$ satisfies, up to $O(||\mu||^2_{L^\infty
(\Omega_\mu)})$, the following Helmholtz equation
\cite{LightWaves,GeneralConsideration,Soussi}:
\begin{equation} \label{sgheq2}
\left\{
\begin{aligned}
\left(\Delta + \frac{(2 \omega)^2}{[\sigma_r -1] \textbf{1}_{\Omega_r} + 1} (1- \frac{\mu}{[\sigma_r -1]
\textbf{1}_{\Omega_r} + 1}
)\right) v=
\sum_{k,l= 1,2} \chi_{kl} \partial_{x_k} U \partial_{x_l} U \textbf{1}_{\Omega_r} \quad \mbox{in } \mathbb{R}^2,\\
\lim\limits_{\vert x\vert \to \infty} \sqrt{\vert x \vert}\left(\frac{\partial v}{\partial \vert x \vert} -
2 i\omega v\right) = 0,
\end{aligned}
\right.
\end{equation}
where $\chi$ is the electric polarization of the reflector, and
can be written as $\chi(x)=(\chi_{ij})_{i,j=1,2}\mathbf{1}_{\Omega_r}(x)$, and $U=u_s+U_0$ is the total field.
Here the second-harmonic field is assumed to be in the transverse
electric mode. The polarization of the second-harmonic field is
given by symmetry properties of the second-order susceptibility
tensor $\chi$. This transverse magnetic--transverse electric
polarization mode is known to be supported by a large class of
optical nonlinear materials
\cite{shen1984principles}. We choose this polarization mode so that a two-dimensional study of second-harmonic
generation with scalar fields is possible. The results would be similar in the general
three-dimensional case, but the computations would be considerably more involved.
The coupled problems (\ref{sgheq1}) and (\ref{sgheq2})
have been mathematically investigated in \cite{bao1, bao3, bao2}.
Let us consider $\Omega$ to be a domain large enough so that
$\Omega_\mu = \mbox{supp}(\mu) \Subset \Omega$ and measure the
fields $u_s$ and $v$ on its boundary $\partial \Omega$. The goal
of the imaging problem is to locate the reflector from the
far-field measurements of the scattered field $u_s$ at the
fundamental frequency and/or the second-harmonic generated field
$v$. It will be shown in this paper that the use of the
second-harmonic field yields better stability properties than
the use of the scattered field at the fundamental frequency in the
presence of medium noise.
\section{Small-volume expansions} \label{sec3}
In this section, we establish small-volume expansions for the
solutions of problems (\ref{sgheq1}) and (\ref{sgheq2}). We
assume that the reflector is of the form $\Omega_r =z_r+\delta B$,
where its characteristic size $\delta$ is small, $z_r$ is its
location, and $B$ is a smooth domain such that $B\subset B(0,1)$.
\subsection{Fundamental frequency problem}
Let $U^{(\mu)} =u_s^{(\mu)}+U_0$ be the total field that would be
observed in the absence of any reflector. The scattered field
$u_s^{(\mu)}$ satisfies
\begin{equation} \label{eqdepart} \left\{
\begin{aligned}\nabla\cdot \left((1+\mu) \nabla (u^{(\mu)}_s
+U_0) \right) + \omega^2 (u^{(\mu)}_s + U_0) = 0 \quad \mbox{in
}
\mathbb{R}^2, \\
\lim\limits_{\vert x\vert \to \infty} \sqrt{\vert
x \vert}(\frac{\partial u^{(\mu)}_s}{\partial \vert x \vert} -i\omega
u^{(\mu)}_s) =
0.
\end{aligned}
\right.
\end{equation}
Therefore,
$$
\nabla\cdot (1+\mu) \nabla u^{(\mu)}_s + \omega^2 u^{(\mu)}_s = -
\nabla \cdot \mu \nabla U_0 \quad \mbox{in } \mathbb{R}^2.
$$
Since $\Omega_\mu \Subset \Omega$, the following estimate holds
\begin{equation} \label{estimateborn}
|| u_s^{(\mu)} ||_{H^1(\Omega)}
\leq C ||\mu||_{L^\infty}
\end{equation}
for some positive constant $C$ independent of $\mu$. Here,
$H^1(\Omega)$ is the set of functions in $L^2(\Omega)$ whose weak
derivatives are in $L^2(\Omega)$. We refer the reader to Appendix
\ref{appenda} for a proof of (\ref{estimateborn}), which uses the
same arguments as those in \cite{abboud2,abboud1}. Actually, one
can prove that
$$
u_s^{(\mu)}(x) = - \int_{\Omega_\mu} \mu(y) \nabla U_0(y) \cdot
\nabla G^{(0)}_\omega(x,y) dy + O (||\mu||_{L^\infty}^2), \quad x
\in \Omega.
$$
Moreover, writing
$$
\nabla\cdot \left((1+\mu) \nabla (u^{(\mu)}_s +U_0) \right) = -
\omega^2 (u^{(\mu)}_s + U_0),
$$
it follows by using Meyers' theorem \cite{meyers} (see also
\cite[pp. 35-45]{meyers2}) that there exists $\eta>0$ such that
for all $0\leq \eta^\prime \leq \eta$,
$$
\begin{array}{lll}
|| \nabla u_s^{(\mu)} ||_{L^{2+\eta^\prime}(\Omega^\prime)}
&\leq &|| \nabla (u_s^{(\mu)} + U_0) ||_{L^{2+\eta^\prime}(\Omega)}
+ || \nabla U_0 ||_{L^{2+\eta^\prime}(\Omega)}\\ \noalign{\medskip} &\leq& C
|| u_s^{(\mu)} + U_0 ||_{L^{2+\eta^\prime}(\Omega)}
+ || \nabla U_0 ||_{L^{2+\eta^\prime}(\Omega)} \\ \noalign{\medskip} &\leq& C
|| u_s^{(\mu)}||_{L^{2+\eta^\prime}(\Omega)} + C^\prime
\end{array}
$$
for some positive constants $C$ and $C^\prime$, where
$\Omega^\prime \Subset \Omega$. From the continuous embedding of
$H^1(\Omega)$ into $L^{2+\eta^\prime}(\Omega)$ and
(\ref{estimateborn}) we obtain
$$
|| u_s^{(\mu)}||_{L^{2+\eta^\prime}(\Omega)} \leq
C^{\prime\prime},
$$
for some constant $C^{\prime\prime}$ independent of $\mu$.
Therefore,
\begin{equation} \label{estimateborn2}
|| \nabla u_s^{(\mu)} ||_{L^{2+\eta^\prime}(\Omega^\prime)}
\leq C
\end{equation}
for some constant $C$ independent of $\mu$.
Now, on one hand, by subtracting (\ref{sgheq1}) from
(\ref{eqdepart}), we get
\begin{multline} \label{subtract}
\nabla\cdot\left(([{\sigma_r}-1] \textbf{1}_{\Omega_r} + \mu + 1)
\nabla (u_s - u_s^{(\mu)})\right)
+\omega^2 (u_s - u_s^{(\mu)}) = - \nabla \cdot
[{\sigma_r}-1] \textbf{1}_{\Omega_r} \nabla U_0 \\ - \nabla \cdot
[{\sigma_r}-1] \textbf{1}_{\Omega_r} \nabla u_s^{(\mu)} \quad \mbox{in } \mathbb{R}^2.
\end{multline}
On the other hand, we have
$$\begin{array}{lll}
|| [{\sigma_r}-1] \textbf{1}_{\Omega_r} \nabla u_s^{(\mu)}
||_{L^2(\Omega)} &\leq &\displaystyle C | \Omega_r|^{\frac{\eta}{8 + 2\eta}}
|| \nabla u_s^{(\mu)} ||_{L^{2+
\frac{\eta}{2}}(\Omega)} \\
\noalign{\medskip} &\leq & C | \Omega_r|^{\frac{\eta}{8 + 2\eta}} || \nabla
u_s^{(\mu)} ||_{L^{2}(\Omega)}^{\frac{1}{4+\eta}} || \nabla
u_s^{(\mu)} ||_{L^{2+\eta}(\Omega)}^{\frac{1}{4+\eta}},\\
\end{array}
$$
and hence, by (\ref{estimateborn}) and (\ref{estimateborn2}), we
arrive at
$$
|| [{\sigma_r}-1]\textbf{1}_{\Omega_r} \nabla u_s^{(\mu)}
||_{L^2(\Omega)} \leq C | \Omega_r|^{\frac{\eta}{8 + 2 \eta}}
||\mu||_{L^\infty}^{\frac{2}{4+\eta}}.$$ Therefore, we can neglect
in (\ref{subtract}) the term $\nabla \cdot
[{\sigma_r}-1]\textbf{1}_{\Omega_r} \nabla u_s^{(\mu)}$ as
$||\mu||_{L^\infty} \rightarrow 0$.
Let $w^{(\mu)}$ be defined by
\begin{equation}\label{eqwmu}
\begin{array}{l} \displaystyle \nabla \cdot (1 + \mu+ [\sigma_r -1] \textbf{1}_{\Omega_r}) \nabla w^{(\mu)} + \omega^2 w^{(\mu)} =
\nabla \cdot [{\sigma_r}-1]\textbf{1}_{\Omega_r} \nabla (x-z_r)
\quad \mbox{in }
\mathbb{R}^2,
\end{array}
\end{equation}
subject to the Sommerfeld radiation condition. Using the Taylor
expansion
$$
U_0(x) = U_0(z_r) + (x-z_r)\cdot \nabla U_0(z_r) + O(|x-z_r|^2),
$$
one can derive the inner expansion
\begin{equation}
\label{inner} (u_s-u_s^{(\mu)})(x) = w^{(\mu)}(x) \cdot \nabla
U_0(z_r) + O(\delta^2),
\end{equation}
for $x$ near $z_r$. The following estimate holds. We refer the
reader to Appendix \ref{appendb} for its proof.
\begin{proposition} \label{propb} There exists a positive constant $C$
independent of $\delta$ such that
$$|| u_s - u_s^{(\mu)} - w^{(\mu)}(x)
\cdot \nabla U_0(z_r) ||_{H^1(\Omega)} \leq C \delta^2.
$$
\end{proposition}
Let $G_{\omega}^{(\mu)}$ be the outgoing Green function in the
random medium, that is, the solution to
\begin{equation} \label{gmuomega} (\nabla \cdot (1+ \mu) \nabla +\omega^2) G_{\omega}^{(\mu)}(.,z)=-\delta_z \
\quad \mbox{in } \mathbb{R}^2,
\end{equation}
subject to the Sommerfeld radiation condition. Here, $\delta_z$ is
the Dirac mass at $z$. An important property satisfied by
$G_{\omega}^{(\mu)}$ is the reciprocity property
\cite{ammarimethods}:
\begin{equation}
\label{reciprocity} G_{\omega}^{(\mu)}(x,z) =
G_{\omega}^{(\mu)}(z,x), \qquad x\neq z.
\end{equation}
Let us denote by $G_{\omega}^{(0)}$ the outgoing background Green
function, that is, the solution to
\begin{equation} \label{gomega} (\Delta+\omega^2) G_{\omega}^{(0)}(.,z)=-\delta_z \
\qquad \mbox{in } \mathbb{R}^2,
\end{equation}
subject to the Sommerfeld radiation condition.
The Lippmann-Schwinger representation formula:
$$\begin{array}{lll}
(G_\omega^{(\mu)} - G_{\omega}^{(0)})(x,z_r) &=&\displaystyle
\int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(\mu)}(y,z_r)\cdot
\nabla G_{\omega}^{(0)}(x,y)\, dy\\
\noalign{\medskip} &=&\displaystyle \int_{\Omega_\mu} \mu(y) \nabla
G_\omega^{(0)}(y,z_r)\cdot \nabla G_{\omega}^{(0)}(x,y)\, dy \\
\noalign{\medskip} && \displaystyle + \int_{\Omega_\mu} \mu(y) \nabla (G_\omega^{(\mu)} -
G_{\omega}^{(0)})(y,z_r)\cdot \nabla G_{\omega}^{(0)}(x,y)\, dy
\end{array}
$$
holds for $x \in \partial \Omega$. Since $\Omega_\mu \Subset
\Omega$, we have
$$\begin{array}{l}
\displaystyle \bigg|(G_\omega^{(\mu)} - G_{\omega}^{(0)})(x,z_r) -
\int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(0)}(y,z_r)\cdot \nabla
G_{\omega}^{(0)}(x,y)\, dy \bigg| \leq\\ \noalign{\medskip} \qquad\displaystyle
||\mu||_{L^\infty} || \nabla
G_{\omega}^{(0)}(x,\cdot)||_{L^\infty(\Omega_\mu)} || \nabla
(G_\omega^{(\mu)} -
G_{\omega}^{(0)})(\cdot,z_r)||_{L^2(\Omega_\mu)}.
\end{array}
$$
Similarly to (\ref{estimateborn}), one can prove that
\begin{equation} \label{estimateborng} ||
\nabla (G_\omega^{(\mu)} - G_{\omega}^{(0)})(\cdot,z_r)
||_{L^2(\Omega_\mu)}
\leq C ||\mu||_{L^\infty},
\end{equation}
and hence, there exists a positive constant $C$ independent of
$\mu$ such that
\begin{equation} \label{guome}
\displaystyle \bigg|(G_\omega^{(\mu)} - G_{\omega}^{(0)})(x,z_r) -
\int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(0)}(y,z_r)\cdot \nabla
G_{\omega}^{(0)}(x,y)\, dy \bigg| \leq C ||\mu||_{L^\infty}^2,
\end{equation}
uniformly in $x \in \partial \Omega$.
Since \begin{equation} \label{nabla2} || \nabla \nabla
G_{\omega}^{(0)}(x,\cdot)||_{L^\infty(\Omega_\mu)} \leq C
\end{equation} uniformly in $x \in \partial \Omega$,
the estimate
\begin{equation} \label{guomenabla}
\displaystyle \bigg| \nabla (G_\omega^{(\mu)} - G_{\omega}^{(0)})(x,z_r) -
\nabla \int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(0)}(y,z_r)\cdot
\nabla G_{\omega}^{(0)}(x,y)\, dy \bigg| \leq C
||\mu||_{L^\infty}^2,
\end{equation}
holds in exactly the same way as in (\ref{guome}). Therefore, the
following Born approximation holds.
\begin{proposition} We have
$$\begin{array}{lll}
\displaystyle G_\omega^{(\mu)}(x,z_r) &=& \displaystyle G_{\omega}^{(0)}(x,z_r) -
\int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(0)}(y,z_r)\cdot \nabla
G_{\omega}^{(0)}(x,y)\, dy + O( ||\mu||_{L^\infty}^2),\\
\noalign{\medskip} \displaystyle \nabla G_\omega^{(\mu)} (x,z_r) &=&\displaystyle \nabla
G_{\omega}^{(0)}(x,z_r) - \nabla \int_{\Omega_\mu} \mu(y) \nabla
G_\omega^{(0)}(y,z_r)\cdot \nabla G_{\omega}^{(0)}(x,y)\, dy +
O(||\mu||_{L^\infty}^2)
\end{array}
$$
uniformly in $x \in \partial \Omega$.
\end{proposition}
We now turn to an approximation formula for $w^{(\mu)}$ as
$||\mu||_{L^\infty} \rightarrow 0$. By integrating by parts we get
$$
w^{(\mu)}(x) = (1-\sigma_r) \int_{\Omega_r} \nabla (w^{(\mu)}(y) -
(y-z_r))\cdot \nabla G_{\omega}^{(\mu)}(x,y)\, dy, \quad x \in
\mathbb{R}^2.
$$
Using (\ref{nabla2}) we have, for $x$ away from $\Omega_r$,
\begin{equation} \label{wmuapp3}
w^{(\mu)}(x) = (1-\sigma_r) [\int_{\Omega_r} \nabla (w^{(\mu)}(y)
- (y-z_r))\, dy] \cdot [\nabla G_{\omega}^{(\mu)}(x,z_r) +
O(\delta)].
\end{equation}
Now let $\textbf{1}_B$ denote the characteristic function of $B$.
Let $\widetilde{w}$ be the solution to
\begin{equation} \label{fv1conduc}\left\{
\begin{array}{l} \displaystyle \nabla \cdot (1 + [\sigma_r -1] \textbf{1}_B) \nabla \widetilde{w} = 0 \quad \mbox{in }
\mathbb{R}^2,\\
\noalign{\medskip}
\widetilde{w}(\widetilde{x}) - \widetilde{x} \rightarrow 0 \quad \mbox{as } |\widetilde{x}| \rightarrow
+
\infty.\\
\end{array} \right. \end{equation}
The following result holds. We refer the reader to Appendix
\ref{appendc} for its proof.
\begin{proposition} \label{propappendc} We have
\begin{equation}
\label{wmuapp}\nabla\left( w^{(\mu)}(y) - (y-z_r) \right)= \delta
\nabla \widetilde{w}(\widetilde{y}) + O( \delta [||\mu||_{L^\infty} +
(\delta \omega)^2]),
\end{equation}
where the scaled variable $$\widetilde{y} =
\frac{y-z_r}{\delta}.$$
\end{proposition}
From (\ref{wmuapp}), it follows that
\begin{equation}
\label{wmuapp2} \int_{\Omega_r} \nabla (w^{(\mu)}(y) - (y-z_r))\,
dy = \delta^2 \int_B \nabla \widetilde{w}(\widetilde{x}) \,
d\widetilde{x} + O(\delta^3[ ||\mu||_{L^\infty} + (\delta
\omega)^2]).
\end{equation}
Define the polarization tensor associated to $\sigma_r$ and $B$ by
(see \cite{ammari2004reconstruction})
$$
M(\sigma_r, B) := (\sigma_r -1) \int_B \nabla
\widetilde{w}(\widetilde{x})\, d\widetilde{x} ,
$$
where $\widetilde{w}$ is the solution to (\ref{fv1conduc}). The
matrix $ M(\sigma_r, B)$ is symmetric and definite (positive definite if
$\sigma_r>1$ and negative definite if $\sigma_r<1$). Moreover, if $B$ is a
disk, then $M(\sigma_r, B)$ takes the form
\cite{ammari2004reconstruction}:
$$M(\sigma_r, B) =
\frac{2 (\sigma_r -1)}{\sigma_r+1} \vert B\vert I_2,$$ where $I_2$ is the
identity matrix.
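For concreteness, the disk formula and the stated sign property can be evaluated directly. The following snippet is our illustration (the helper name is ours); it computes the scalar multiplying $I_2$ for a disk of radius $1$:

```python
import math

# M(sigma_r, B) = 2 (sigma_r - 1) / (sigma_r + 1) * |B| * I_2 for a disk B,
# so it suffices to track the scalar factor in front of the identity.
def disk_polarization_scalar(sigma_r, radius=1.0):
    area = math.pi * radius ** 2  # |B| for a disk
    return 2 * (sigma_r - 1) / (sigma_r + 1) * area

# Sign property: positive definite for sigma_r > 1, negative for sigma_r < 1,
# and vanishing when there is no refractive-index contrast.
assert disk_polarization_scalar(2.0) > 0
assert disk_polarization_scalar(0.5) < 0
assert abs(disk_polarization_scalar(1.0)) < 1e-12
```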
To obtain an asymptotic expansion of $u_s(x) - u_s^{(\mu)}(x)$ in
terms of the characteristic size $\delta$ of the scatterer, we
take the far-field expansion of (\ref{inner}). Plugging formula
(\ref{wmuapp2}) into (\ref{wmuapp3}), we obtain the following
small-volume asymptotic expansion.
\begin{proposition} \label{propappendb}
We have
\begin{equation}\label{DAusnf}
u_s(x)=u_s^{(\mu)}(x) - \delta^2 M(\sigma_r, B) \nabla
U_0(z_r)\cdot \nabla G_\omega^{(\mu)} (x,z_r) +O(\delta^3[1 +
||\mu||_{L^\infty} + (\delta \omega)^2]),
\end{equation}
uniformly in $x \in \partial \Omega$.
\end{proposition}
Finally, using (\ref{guomenabla}) we arrive at the following
result.
\begin{thm} \label{propappendbf}
We have as $\delta$ goes to zero
\begin{equation}\label{DAus} \begin{array}{lll}
(u_s-u_s^{(\mu)})(x) &=& - \displaystyle \delta^2 M(\sigma_r, B) \nabla
U_0(z_r)\cdot \bigg[ \nabla G_\omega^{(0)} (x,z_r) + \nabla
\int_{\Omega_\mu} \mu(y) \nabla G_\omega^{(0)}(y,z_r)\cdot \nabla
G_{\omega}^{(0)}(x,y)\, dy \bigg] \\ \noalign{\medskip} && +O(\delta^3 [1 +
||\mu||_{L^\infty} + (\delta \omega)^2] + \delta^2
||\mu||^2_{L^\infty} ),
\end{array}
\end{equation}
uniformly in $x \in \partial \Omega$.
\end{thm}
Theorem \ref{propappendbf} shows that the asymptotic expansion
(\ref{DAus}) is uniform with respect to $\omega$ and $\mu$,
provided that $\omega \leq C/\delta$ and $||\mu||_{L^\infty} \leq
C^\prime \sqrt{\delta}$ for two positive constants $C$ and
$C^\prime$.
\subsection{Second-harmonic problem}
We apply similar arguments to derive a small-volume expansion for
the second-harmonic field at frequency $2 \omega$.
Introduce $G_{2\omega}^{(\sigma_r,\mu)}(.,z)$ the outgoing
solution of $$ \left(\Delta + \frac{(2 \omega)^2}{[\sigma_r -1]
\textbf{1}_{\Omega_r} + 1} (1- \frac{\mu}{[\sigma_r -1]
\textbf{1}_{\Omega_r} + 1} )\right)
G_{2\omega}^{(\sigma_r,\mu)}(.,z) = -\delta_z \qquad \mbox{in }
\mathbb{R}^2.$$ Let $G_{2\omega}^{(0)}$ be the outgoing solution
to (\ref{gomega}) with $\omega$ replaced by $2\omega$.
Similarly to (\ref{DAus}), an asymptotic expansion for
$G_{2\omega}^{(\sigma_r,\mu)}$ in terms of $\delta$ can be
derived. We have
$$
(G_{2\omega}^{(\sigma_r,\mu)} - G_{2\omega}^{(\mu)}) (x,z) =
O(\delta^2)$$ for $x\neq z$ and $x,z$ away from $z_r$. Here
$G_{2\omega}^{(\mu)}$ is the solution to (\ref{gmuomega}) with
$\omega$ replaced by $2\omega$. Moreover, the Born approximation
yields
$$\begin{array}{lll} (G_{2\omega}^{(\sigma_r,\mu)} -
G_{2\omega}^{(0)})(x,z)&=&\displaystyle - (2\omega)^2 \int_{\Omega_\mu}
\mu(y)G_{2\omega}^{(0)}(y,z)G_{2\omega}^{(0)}(x,y) dy + O(\delta^2
+ ||\mu||_{L^\infty}^2)
\end{array}
$$
for $x\neq z$ and $x,z$ away from $z_r$. From the integral
representation formula:
$$
v(x)= -\int_{\Omega_r}
\sum_{k,l= 1,2} \chi_{kl} \partial_{x_k} U(y) \partial_{x_l} U(y) G_{2\omega}^{(\sigma_r,\mu)}(x,y) dy,
$$
it follows that
\begin{equation}
v(x)=-\delta^2 |B| \left(\sum_{k,l} \chi_{kl}
\partial_{x_k} U(z_r) \partial_{x_l} U(z_r) \right) G^{(\sigma_r,\mu)}_{2\omega}(x,z_r) +
O(\delta^3),
\end{equation}
where $|B|$ denotes the volume of $B$, and hence, keeping only the
terms of first-order in $\mu$ and of second-order in
$\delta$:\begin{multline}
v(x)=-\delta^2 |B| \left(\sum_{k,l}
\chi_{kl} \partial_{x_k}U(z_r) \partial_{x_l} U(z_r)\right)
\\ \left[ G_{2\omega}^{(0)}(x,z_r) - 4\omega^2 \int_\Omega \mu(y) G_{2\omega}^{(0)} (x,y)
G_{2\omega}^{(0)}(y,z_r)\, dy
+ O(||\mu||^2_{L^\infty}) \right]+
O(\delta^3).
\end{multline}
We denote by $(S)^\theta$ the source term (the source term
strongly depends on the angle $\theta$ of the incoming plane
wave):
\begin{equation}(S)^\theta=\left(\sum_{k,l}\chi_{kl} \partial_{x_k}U(z_r)
\partial_{x_l} U(z_r)\right).
\end{equation}
Now, since
\begin{equation}
U(x)=U_Ie^{i\omega\theta\cdot x} + \int_\Omega \mu(y) \nabla
G_\omega^{(0)}(x,y) \cdot \nabla U_0(y) dy +
O(||\mu||^2_{L^\infty} + \delta),
\end{equation}
which follows by using the Born approximation and the inner
expansion (\ref{inner}), we can give an expression for the partial
derivatives of $U$. We have
\begin{equation}
\partial_{x_k}U(x)=i\omega \theta_k U_I e^{i\omega \theta
\cdot x} - i \omega U_I \theta \cdot \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_k}
G_\omega^{(0)}(x,y) dy + O(||\mu||^2_{L^\infty} + \delta).
\end{equation}
We can rewrite the source term as
\begin{multline}
\left(\sum_{k,l}\chi_{k,l} \partial_{x_k}U(z_r)\partial_{x_l} U(z_r)
\right)= - \omega^2 U_I^2 \sum_{k,l} \chi_{kl} \bigg[ \theta_k \theta_l
e^{2i\omega \theta\cdot z_r} \\
- \theta_k e^{i\omega \theta \cdot z_r} \theta \cdot \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy - \theta_l e^{i\omega \theta \cdot z_r} \theta \cdot \int_\Omega
\nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_k}
G_\omega^{(0)}(z_r,y) dy \\
+ \theta \cdot \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy \theta \cdot \int_\Omega \nabla (\mu(y)
e^{i
\omega \theta \cdot y}) \partial_{x_k}
G_\omega^{(0)}(z_r,y) dy \bigg]
\\
+ O(||\mu||^2_{L^\infty} + \delta).
\end{multline}
Assume that $\mu \in \mathcal{C}^{0,\alpha}$ for $0<\alpha<{1/2}$.
From
\begin{equation}\label{transfoSrand}\begin{array}{lll}
\displaystyle \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy &=&\displaystyle \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y} - \mu(z_r) e^{i
\omega \theta \cdot z_r}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy \\ \noalign{\medskip} &=&\displaystyle - \int_\Omega \nabla
\partial_{x_l} G_\omega^{(0)}(z_r,y) (\mu(y) e^{i
\omega \theta \cdot y} - \mu(z_r) e^{i
\omega \theta \cdot z_r}) dy
\end{array}
\end{equation}
one can show that, for $0<\alpha^\prime \leq \alpha$, we have
\cite{trudinger}
$$
\bigg| \theta \cdot \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy \theta \cdot \int_\Omega \nabla (\mu(y)
e^{i
\omega \theta \cdot y}) \partial_{x_k}
G_\omega^{(0)}(z_r,y) dy \bigg| \leq C
||\mu||^2_{\mathcal{C}^{0,\alpha^\prime}},$$ where $C$ is a
positive constant independent of $\mu$.
So, if we split $(S)^\theta$ into a deterministic part and a
random part:
$$(S)^\theta=(S)_{det}^\theta + (S)_{rand}^\theta + O(||\mu||^2_{\mathcal{C}^{0,\alpha}}
+ \delta),$$ we get
\begin{equation}\label{DEFSDET}
(S)_{det}^\theta = - \omega^2 U_I^2 e^{i2\omega \theta \cdot z_r} \sum_{k,l} \chi_{k,l} \theta_k
\theta_l,
\end{equation}
and
\begin{equation} \label{DEFSRAND}
\begin{array}{lll} \displaystyle (S)_{rand}^\theta &= & \displaystyle\omega^2 U_I^2 e^{i\omega \theta \cdot z_r} \sum_{k,l} \chi_{k,l} \bigg[
\theta_k \theta \cdot \int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_l}
G_\omega^{(0)}(z_r,y) dy \\ \noalign{\medskip} && \displaystyle + \theta_l \theta \cdot
\int_\Omega \nabla (\mu(y) e^{i
\omega \theta \cdot y}) \partial_{x_k}
G_\omega^{(0)}(z_r,y) dy\bigg].
\end{array}
\end{equation}
Finally, we obtain the following result.
\begin{thm} Assume that $\mu \in \mathcal{C}^{0,\alpha}$ for $0<\alpha<{1/2}$.
Let $0<\alpha^\prime\leq \alpha$. The following asymptotic
expansion holds for $v$ as $\delta$ goes to zero:
\begin{multline}\label{DAv}
v(x) = -\delta^2 |B| \bigg( (S)_{det}^\theta \left[ \G{x}{z_r}
- 4\omega^2 \int_\Omega \mu(y) G_{2\omega}^{(0)} (x,y)
\G{y}{z_r}dy \right] + (S)_{rand}^\theta \G{x}{z_r}
\bigg)\\
+ O(\delta ^3 + \delta^2 ||\mu||^2_{\mathcal{C}^{0,\alpha^\prime}})
\end{multline}
uniformly in $x\in \partial \Omega$.
\end{thm}
\section{Imaging functional} \label{sec5}
In this section, two imaging functionals are presented for
locating small reflectors. For the sake of simplicity, we assume
that $B$ and $\Omega$ are disks centered at $0$ with radius $1$
and $R$, respectively.
\subsection{The fundamental frequency case}
We assume that we are in possession of the following data: $\{
u_s(x), \ x\in \partial\Omega\}$. We introduce the
reverse-time imaging functional\begin{equation}\label{DefI}
\forall z^S \in \Omega,\ I(z^S)=\int_{\partial \Omega \times
\mathbb{S}^1} \frac{1}{i\omega} e^{-i\omega \theta \cdot
z^S}\theta^\top \overline{\nabla G_\omega^{(0)} (x,z^S)}
u_s(x) d\sigma(x) d\sigma(\theta),
\end{equation}
where $\top$ denotes the transpose. Introduce the matrix:
\begin{equation}\label{DEFR}
R_\omega(z_1,z_2)= \int_{\partial \Omega }\overline{\nabla
G_\omega^{(0)} (x,z_1) } \nabla G_\omega^{(0)} (x,z_2)^\top
d\sigma(x), \qquad z_1,z_2 \in \Omega^\prime \Subset \Omega.
\end{equation}
Using (\ref{DAus}), we have the following expansion for $I(z^S),
z^S \in \Omega^\prime$,
\begin{multline}\label{DAI}
I(z^S)= \int_{\partial \Omega \times
\mathbb{S}^1} \frac{1}{i\omega} e^{-i\omega \theta \cdot
z^S}\theta^\top \overline{\nabla G_\omega^{(0)} (x,z^S)}
u_s^{(\mu)}(x) d\sigma(x) d\sigma(\theta) \\- \frac{2 \pi \delta^2 (\sigma_r-1)}{\sigma_r+1} U_I
\int_{\mathbb{S}^1} e^{-i\omega \theta \cdot (z^S-z_r) }
\theta^\top \bigg[ R_\omega(z^S,z_r)\\
\noalign{\medskip} + \int_{\partial \Omega} \overline{\nabla G_\omega^{(0)}
(x,z^S)} \left( \nabla \int_{\Omega_\mu} \mu (y) \nabla
G_\omega^{(0)}(y,z_r) \cdot \nabla G_\omega^{(0)}(x,y) dy\right)^\top
d\sigma(x)
\bigg] \theta d\sigma(\theta) \\
+ O(\delta^3 + \delta^2 ||\mu||_{L^\infty}^2).
\end{multline}
Note that
$$\begin{array}{l}\displaystyle
\int_{\partial \Omega} \overline{\nabla G_\omega^{(0)} (x,z^S)}
\left(\nabla \int_{\Omega_\mu} \mu (y) \nabla
G_\omega^{(0)}(y,z_r) \cdot \nabla G_\omega^{(0)}(x,y) dy \right)^\top
d\sigma(x) \\ \qquad \displaystyle = \int_{\Omega_\mu} \mu (y)
\int_{\partial \Omega} \overline{\nabla G_\omega^{(0)}
(x,z^S)} \left(\nabla\nabla G_\omega^{(0)}(x,y) \nabla
G_\omega^{(0)}(y,z_r)\right)^\top d\sigma(x) dy. \end{array}$$
\begin{rem}
Here, backpropagating the boundary data and, in addition, averaging
it over all possible illumination angles
in $\mathbb{S}^1$ has two motivations. As will be shown later in
section \ref{sec6}, the first is to increase the resolution
and make the peak at the reflector's location isotropic. If we do
not sum over illumination angles equi-distributed over the sphere,
we get a more ``8-shaped'' spot, as shown in Figure~\ref{graphI}.
The second is that averaging over multiple measurements
increases the stability of the imaging functional with respect to
measurement noise. \end{rem}
\begin{rem}
If we could take an image of the medium in the absence of the
reflector before taking the real image, we would be in possession
of the boundary data $\{ u_s - u_s^{(\mu)}, \ x\in \partial
\Omega\}$, and thus we would be able to detect the reflector in a
very noisy background. But in some practical situations
\cite{hsieh2011imaging}, it is not possible to get an image
without the reflector. As will be shown in section \ref{sec6},
second-harmonic generation can be seen as a powerful contrast
imaging approach \cite{hsieh2011imaging}. In fact, we will prove
that the second-harmonic image is much more stable with respect to
the medium noise and to the volume of the particle than the
fundamental frequency image.
\end{rem}
\subsection{Second-harmonic backpropagation}
If we write a similar imaging functional for the second-harmonic field $v$, assuming that we are in possession
of the boundary data $\{ v(x), \ x\in \partial\Omega\}$, we
get \begin{equation}\label{DefJ}
\forall z^S \in \Omega,\ J(z^S)=\int_{\partial\Omega \times \mathbb{S}^1} v(x)\overline{G_{2\omega}^{(0)}(x,z^S)}
e^{-2i \omega \theta \cdot z^S} d\sigma(x)d\sigma(\theta).
\end{equation} As before, using (\ref{DAv}) we can expand $J$ in terms of $\delta$ and $\mu$. Keeping only the terms
of first order in $\mu$ and of second order in $\delta$, we get
\begin{multline}
J(z^S)=- \pi \delta^2 \int_{\mathbb{S}^1} e^{-2i\omega \theta \cdot z^S}
\bigg[ (S)_{det}^\theta \Big(\int_{\partial \Omega} \overline{\G{x}{z^S}}
\G{x}{z_r} d\sigma(x) \\ - 4 \omega^2 \int_{\partial \Omega} \overline{\G{x}{z^S}}\int_\Omega
\mu(y) \G{y}{x} \G{y}{z_r} dy d\sigma(x) \Big) \\ +(S)_{rand}^\theta \int_{\partial \Omega}
\overline{\G{x}{z^S}} \G{x}{z_r} d\sigma(x) \bigg] d\sigma(\theta)+
O(\delta^3 + \delta^2
||\mu||_{\mathcal{C}^{0,\alpha^\prime}}^2),
\end{multline}
where $0<\alpha^\prime\leq \alpha$. Now, define
$Q_{2\omega}$ by
\begin{equation}\label{DefQ}
Q_{2\omega}(x,z)= \int_{\partial \Omega} \G{y}{x} \overline{\G{y}{z}}
d\sigma(y).
\end{equation}
Then we have
\begin{multline}\label{DAJ}
J(z^S)= - \pi \delta^2 \int_{\mathbb{S}^1} e^{-2i\omega \theta \cdot z^S} \bigg[
(S)_{det}^\theta \left( Q_{2\omega}(z_r,z^S) -
4\omega^2 \int_{\Omega_\mu} \mu(y) \G{y}{z_r} Q_{2\omega}(y,z^S) dy \right) \\+ (S)_{rand}^\theta Q_{2\omega}(z_r,z^S)
\bigg] d\sigma(\theta) +
O(\delta^3 + \delta^2 ||\mu||_{\mathcal{C}^{0,\alpha^\prime}}^2 ).
\end{multline}
\section{Statistical analysis} \label{sec6}
In this section, we perform a resolution and stability analysis of
both functionals. Since the image we get is a superposition of a
deterministic image and of a random field created by the medium
noise, we can compute the expectation and the covariance functions
of those fields in order to estimate the signal-to-noise ratio.
For the reader's convenience we give our main results in the
following proposition.
\begin{proposition} Let $l_\mu$ and $\sigma_\mu$ be respectively the
correlation length and the standard deviation of the process
$\mu$. Assume that $l_\mu$ is smaller than the wavelength
$2\pi/\omega$. Let $(SNR)_I$ and $(SNR)_J$ be defined by
\begin{equation}
\label{snridef}
(SNR)_I=\frac{\mathbb{E}[I(z_r)]}{(Var[I(z_r)])^{1/2}},
\end{equation}
and
\begin{equation}
\label{snrjdef} (SNR)_J= \frac{\mathbb{E}[J(z_r)]
}{(Var[J(z_r)])^{\frac{1}{2}}}.
\end{equation}
We have
\begin{equation}
(SNR)_I \approx \frac{\sqrt{2}\pi^{3/2}\omega\delta^2
U_I}{\sigma_\mu l_\mu \sqrt{\omega\text{ diam } \Omega_\mu}}
\frac{|\sigma_r-1|}{\sigma_r+1},
\end{equation}
and
\begin{equation}
(SNR)_J \geq \frac{ l_\mu^{\alpha} \left(\int_{\mathbb{S}^1}\left(\sum_{k,l} \chi_{k,l} \theta_k
\theta_l \right) d\theta\right) }{\sqrt{C} \sigma_\mu \min(\omega^{-\alpha},1) \max_{k,l} \left\vert \chi_{k,l} \right\vert\sqrt{ \left(\omega \text{diam } \Omega_\mu\right)^{3+2\alpha} + 1 } }.
\end{equation}
Here, $\text{diam}$ denotes the diameter and $\alpha$ is the upper bound for the H\"older regularity of the random
process $\mu$ (see section \ref{noiseassumptions}).
\end{proposition}
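To get a feel for these expressions, the closed form for $(SNR)_I$ is easy to evaluate numerically. The following Python sketch does so for purely hypothetical parameter values (none are taken from the text) and checks the $\delta^2$ scaling of $(SNR)_I$:

```python
import math

def snr_i(omega, delta, u_i, sigma_mu, l_mu, diam, sigma_r):
    """Evaluate the asymptotic formula for (SNR)_I:
    sqrt(2) pi^{3/2} omega delta^2 U_I
    / (sigma_mu l_mu sqrt(omega diam(Omega_mu))) * |sigma_r-1|/(sigma_r+1)."""
    return (math.sqrt(2) * math.pi**1.5 * omega * delta**2 * u_i
            / (sigma_mu * l_mu * math.sqrt(omega * diam))
            * abs(sigma_r - 1) / (sigma_r + 1))

# Hypothetical values: wavelength 1 (omega = 2 pi), reflector radius
# delta = 0.05, unit-amplitude illumination, 10% medium noise with
# correlation length 0.05, diam(Omega_mu) = 1, contrast sigma_r = 2.
val = snr_i(2 * math.pi, 0.05, 1.0, 0.1, 0.05, 1.0, 2.0)
print(val)

# (SNR)_I scales like delta^2: doubling delta multiplies it by 4.
ratio = snr_i(2 * math.pi, 0.10, 1.0, 0.1, 0.05, 1.0, 2.0) / val
print(ratio)  # approximately 4
```

The quadratic dependence on $\delta$ is what makes small reflectors hard to image at the fundamental frequency in a noisy medium.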
\subsection{Assumptions on the random process $\mu$}\label{noiseassumptions}
Let $z(x)$, $x\in \mathbb{R}^2$, be a stationary random process with
Gaussian statistics, zero mean, and a covariance function given by
$R(\vert x-y \vert )$ satisfying $R(0)= \sigma_\mu^2$, $\vert R(0)- R(s)\vert \ \leq \sigma_\mu^2 \frac{s^{2\alpha}}{l_\mu^{2\alpha}} $ and $R$ is decreasing. Then, $z$ is a
$\mathcal{C}^{0,\alpha'}$ process for any $\alpha' <\alpha$
(\cite[Theorem 8.3.2]{adler2010geometry}). Let $F$ be a smooth
odd bounded function, with derivative bounded by one. For example
$F=\arctan$ is a suitable choice. Take $$\mu(x)
=F[z(x)].$$ Then $\mu$ is a bounded
$\mathcal{C}^{0,\alpha'}$ stationary process with zero
mean. We want to compute the expectation of its norm. Introduce
\begin{equation}
p(h)=\max_{\Vert x-y \Vert \leq \sqrt{2} h } \mathbb{E} \vert z(x) - z(y) \vert.
\end{equation}
One can also write $p(u)=\sqrt{2} \sqrt{R(0) - R(\sqrt{2}u)}$.
According to \cite{adler2010geometry}, for all $h, t \in
\Omega_\mu$, almost surely,
\begin{equation}
\vert z(t+h) - z(t) \vert \leq 16\sqrt{2 }[\log(B)]^{1/2} p(\frac{\vert h \vert}{l_\mu} ) + 32\sqrt{2} \int_0^{\frac{\vert h \vert}{l_\mu}} \left(-\log u \right)^{1/2} dp(u),
\end{equation} where $B$ is a positive random variable with $\mathbb{E} [B^n] \leq (4\sqrt{2})^n$
(\cite[Formula 3.3.23]{adler2010geometry}). We have that
\begin{equation}
p( \vert h \vert ) \leq \sqrt{2}^{1+\alpha} \sigma_\mu \frac{\vert h \vert^\alpha}{l_\mu^\alpha} .
\end{equation} By integration by parts we find that
\begin{equation}
\int_0^{\frac{\vert h \vert}{l_\mu}} \left(-\log u \right)^{1/2} dp(u) = \left[ (-\log u)^{1/2} p(u)
\right]_0^{\frac{\vert h \vert}{l_\mu}} + \frac{1}{2} \int_0^{\frac{\vert h \vert}{l_\mu} } (-\log u)^{-1/2} u^{-1} p(u) du.
\end{equation}
For any $\varepsilon >0$, since $s^\varepsilon\sqrt{-\log s} \leq
\frac{1}{\sqrt{ \varepsilon}} e^{1/2} $ on $[0,1]$,
we have, as $\vert h \vert $ goes to $0$, that \begin{equation} \left[ (-\log
u)^{1/2} p(u) \right]_0^{\frac{\vert h \vert}{l_\mu}} \leq
e^{\frac{1}{2 }} \frac{\sqrt{2}^{1+\alpha}\sigma_\mu}{\sqrt{ \varepsilon}}
\frac{\vert h \vert^{\alpha-\varepsilon}}{l_\mu^\alpha} .
\end{equation}
Similarly, when $\vert h \vert <\frac{1}{2e}$, for every $0<u<\vert h \vert $, $$(-\log
u)^{-1/2} u^{-1} p(u) \leq \sqrt{2}^{1+\alpha} \sigma_\mu
\frac{u^{\alpha-1}}{l_\mu^\alpha }.$$ So we get, when $\vert h \vert $ goes to $0$, for every
$\varepsilon >0$,
\begin{equation}
\int_0^{\frac{\vert h \vert}{l_\mu}} \left(-\log u \right)^{1/2} dp(u) \leq \frac{e^{\frac{1}{2}} \sqrt{2}^{1+\alpha} \sigma_\mu }{\sqrt{ \varepsilon}} \frac{ \vert h \vert^{\alpha-\varepsilon}}{l_\mu^\alpha} + \frac{ \sqrt{2}^{1+\alpha} \sigma_\mu }{\alpha } \frac{\vert h \vert^\alpha}{l_\mu^\alpha}.
\end{equation}
Therefore, when $\vert h \vert $ goes to zero, we have for any $\varepsilon>0$:
\begin{equation}
\vert z(t+h) -z(t) \vert \leq 32 \sqrt{2}^\alpha \log(B)^{1/2} \sigma_\mu \frac{\vert h \vert^\alpha}{l_\mu^\alpha} +64 e^{\frac{1}{2}} \sqrt{2}^\alpha \sigma_\mu \frac{1}{l_\mu^\alpha} \left[ \frac{1}{\sqrt{\varepsilon}} \vert h \vert^{\alpha-\varepsilon} +\frac{1}{2} \vert h \vert^\alpha \right].
\end{equation}
Since $F'\leq 1$, composing with $F$ yields, for any $x,y \in
\mathbb{R}^2$,
\begin{equation}
\vert \mu(x) -\mu(y) \vert \leq \vert z(x)-z(y) \vert .
\end{equation} We get the following estimate on $\Vert \mu \Vert_{\mathcal{C}^{0,\alpha'}}$, for any $\alpha' \in (0,\alpha)$,
almost surely
\begin{equation}
\sup_{\substack{x, y \in \Omega_\mu \\ \vert x-y \vert \leq h }}\frac{\vert \mu (x) - \mu (y) \vert }{\vert x - y \vert^{\alpha'}} \leq 32 \sqrt{2}^\alpha \log(B)^{1/2} \sigma_\mu \frac{h^{\alpha-\alpha'}}{l_\mu^\alpha} +64 e^{\frac{1}{2}} \sqrt{2}^\alpha \sigma_\mu \frac{1}{l_\mu^\alpha} \left[ \frac{1}{\sqrt{\alpha-\alpha'}} +\frac{1}{2} h^{\alpha-\alpha'} \right]
\end{equation}
Consequently,
\begin{equation}
\Vert \mu \Vert_{\mathcal{C}^{0,\alpha'}} \leq 64\sqrt{2}^\alpha \frac{e^{\frac{1}{2}} \left[ \log (B)^{1/2} +1 \right] }{\sqrt{\alpha- \alpha'}}
\frac{\sigma_\mu}{l_\mu^{\alpha}},
\end{equation} which gives, since $\mathbb{E}[\log B] \leq \mathbb{E}[B] -1 \leq 4\sqrt{2} -1$,
\begin{equation} \label{estc0alpha}
\mathbb{E}[\Vert \mu \Vert_{\mathcal{C}^{0,\alpha'}}^2] \leq 64^2 2^{4+\alpha} \frac{e}{\alpha- \alpha'} \frac{\sigma_\mu^2}{l_\mu^{2\alpha}}.
\end{equation}
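As an illustration of this noise model, one can synthesize a sample of $\mu = F[z]$ on a grid. The sketch below is a minimal example under illustrative choices (grid size, parameters, and the powered-exponential covariance $R(s)=\sigma_\mu^2 e^{-(s/l_\mu)^{2\alpha}}$, which satisfies $R(0)=\sigma_\mu^2$, monotonicity, and $\vert R(0)-R(s)\vert \leq \sigma_\mu^2 (s/l_\mu)^{2\alpha}$); none of these choices come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_mu, l_mu, alpha = 0.1, 0.2, 0.4   # illustrative parameters
n = 12                                   # small n x n spatial grid
xs = np.linspace(0.0, 1.0, n)
pts = np.array([(x, y) for x in xs for y in xs])

# Covariance R(|x-y|) = sigma^2 exp(-(|x-y|/l)^{2 alpha}):
# R(0) = sigma^2, R is decreasing, and R(0)-R(s) <= sigma^2 (s/l)^{2 alpha}.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = sigma_mu**2 * np.exp(-((d / l_mu) ** (2 * alpha)))

# Sample z ~ N(0, cov) via Cholesky (small jitter for numerical
# definiteness), then apply the bounded odd function F = arctan.
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
z = L @ rng.standard_normal(n * n)
mu = np.arctan(z).reshape(n, n)

print(mu.shape, float(np.abs(mu).max()))
```

Since $F=\arctan$ is bounded by $\pi/2$ with $\vert F'\vert \leq 1$, the sample $\mu$ inherits boundedness and the H\"older regularity of $z$.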
\subsection{Standard backpropagation}
\subsubsection{Expectation}
We use (\ref{DAI}) and the fact that $\mathbb{E}[\mu(x)]=0$ for all
$x\in \Omega$, to find that
\begin{equation}\label{ExpectI}
\mathbb{E}[I(z^S)]=-2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I\int_{\mathbb{S}^1}e^{-i\omega \theta \cdot (z^S-z_r) }\theta^\top R_\omega(z^S,z_r) \theta d\theta.
\end{equation}
We now use the Helmholtz-Kirchhoff theorem. Since (see \cite{ammarimethods}):
\begin{equation}
\lim_{R\rightarrow \infty} \int_{\vert y\vert
=R}\nabla G_\omega^{(0)}(x,y) \overline{\nabla G_\omega^{(0)}(z,y)}^\top d\sigma(y) = \frac{1}{\omega} \nabla_z \nabla_x \text{ Im}\left[ G_\omega^{(0)} (x,z)\right]
\end{equation}
and \begin{equation}\label{IMG}
\text{ Im}\left[ G_\omega^{(0)}(x,z)\right]=\frac{1}{4}J_0(\omega \vert x-z\vert),
\end{equation} we can compute an approximation of $R_\omega$.
\begin{multline}\label{DAR1}
\frac{1}{\omega} \nabla_z \nabla_x \text{ Im}\left[ G_\omega^{(0)} (x,z)\right] = \frac{1}{4}
\bigg[ \omega J_0(\omega \vert x-z \vert) \left(\frac{(x-z)}{\vert x-z\vert }\frac{(x-z) ^\top}{\vert x-z\vert }
\right) \\ -\frac{2J_1(\omega \vert x-z \vert)}{\vert x-z \vert }\left(\frac{(x-z)}{\vert x-z\vert }
\frac{(x-z) ^\top}{\vert x-z\vert } \right) \\ + \frac{J_1(\omega \vert x-z\vert)}{\vert x-z\vert }
I_2 \bigg],
\end{multline}
where $I_2$ is the $2\times2$ identity matrix. We can see that
$R_\omega$ decreases as $\vert z_r-z^S\vert^{-\frac{1}{2}}$. The
imaging functional has a peak at location $z^S=z_r$. Evaluating
$R_\omega$ at $z^S=z_r$ we get
\begin{equation}
R_\omega (z_r,z_r)= \frac{\omega}{8} I_2.
\end{equation} So we get the expectation of $I$ at the point $z_r$:
\begin{equation}\label{ExpectIb}
\mathbb{E}[I(z_r)]\approx -\frac{\pi^2(\sigma_r-1)}{2(\sigma_r+1)}\omega\delta^2 U_I.
\end{equation}
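The value $R_\omega(z_r,z_r)=\frac{\omega}{8}I_2$ can be checked numerically by discretizing the boundary integral (\ref{DEFR}) on a circle of large radius, with the outgoing Green function $G_\omega^{(0)}(x,z)=\frac{i}{4}H_0^{(1)}(\omega\vert x-z\vert)$. A minimal sketch (radius, frequency, and quadrature size are arbitrary choices):

```python
import numpy as np
from scipy.special import hankel1

omega = 2 * np.pi          # wavelength 1 (arbitrary choice)
R = 60.0                   # boundary radius, many wavelengths
N = 4000                   # quadrature points on the circle
t = 2 * np.pi * np.arange(N) / N
bx, by = R * np.cos(t), R * np.sin(t)
ds = 2 * np.pi * R / N

def grad_G(xx, yy, z):
    """Gradient in z of G(x,z) = (i/4) H_0^{(1)}(omega |x-z|)."""
    dx, dy = z[0] - xx, z[1] - yy
    r = np.hypot(dx, dy)
    fac = -(1j / 4) * omega * hankel1(1, omega * r) / r
    return fac * dx, fac * dy

z = np.array([0.0, 0.0])
gx, gy = grad_G(bx, by, z)
# R_omega(z, z) = int_{|x|=R} conj(grad G) (grad G)^T dsigma(x)
M = np.array([[np.sum(np.conj(a) * b) for b in (gx, gy)]
              for a in (gx, gy)]) * ds

print(np.real(M) / (omega / 8))   # should be close to the 2x2 identity
```

Since $\vert H_1^{(1)}(\omega R)\vert^2 = \frac{2}{\pi\omega R}(1+O((\omega R)^{-2}))$, the discretized matrix converges to $\frac{\omega}{8}I_2$ as $R$ grows, in agreement with the Helmholtz-Kirchhoff computation above.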
\subsubsection{Covariance}
Let
\begin{equation} \label{eq41}
\text{Cov}\left(I(z^S),I(z^{S'}) \right) =
\mathbb{E}\bigg[\left(I(z^S)- \mathbb{E}[I(z^S)] \right)
\overline{\left( I(z^{S'})- \mathbb{E}[I(z^{S'})]\right)} \bigg].
\end{equation}
Define \begin{equation} \widetilde{R}_\omega (z^S,z_r,y) =
\int_{\partial \Omega} \overline{\nabla G_\omega^{(0)}(x,z^S)}
\left(\nabla\nabla G_\omega^{(0)}(x,y) \nabla
G_\omega^{(0)}(y,z_r)\right)^\top d\sigma(x).
\end{equation}
Using (\ref{DAI}) and (\ref{ExpectIb}), we get
\begin{multline}
I(z^S)- \mathbb{E}[I(z^S)]= \int_{\partial \Omega \times
\mathbb{S}^1} \frac{1}{i\omega} e^{-i\omega \theta \cdot z^S}\theta^\top \overline{\nabla G_\omega^{(0)}
(x,z^S) } u_s^{(\mu)}(x) d\sigma(x)d\theta\\ -2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 }
U_I \int_{\mathbb{S}^1} e^{-i\omega \theta \cdot (z^S-z_r) }
\bigg[\int_\Omega \mu(y) \theta^\top \widetilde{R}_\omega (z^S,z_r,y)\theta dy \bigg]
d\theta.
\end{multline}
The computations are a bit tedious. For brevity, we write the
quantity above as
\begin{equation}
I(z^S)-\mathbb{E}[I(z^S)] = A_I(z^S) + B_I(z^S),
\end{equation}
with
\begin{equation}
A_I(z^S)= \int_{\partial \Omega\times \mathbb{S}^1}
\frac{1}{i\omega} e^{-i\omega \theta \cdot z^S}\theta^\top
\overline{\nabla G_\omega^{(0)} (x,z^S) } u_s^{(\mu)}(x)
dxd\theta,
\end{equation}
and
\begin{equation}
B_I(z^S) = -2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I
\int_{\mathbb{S}^1} e^{-i\omega \theta \cdot (z^S-z_r) }
\bigg[\int_\Omega \mu(y) \theta^\top \widetilde{R}_\omega
(z^S,z_r,y)\theta dy \bigg] d\theta.
\end{equation}
We now compute each term of the product in (\ref{eq41})
separately.
\paragraph{Main speckle term:}
We need to estimate the typical size of $A_I$. From
(\ref{estimateborn}), keeping only terms of first-order in $\mu$
yields
\begin{equation}
A_I(z^S)= -\int_{\partial \Omega\times \mathbb{S}^1}
\frac{1}{i\omega} e^{-i\omega \theta \cdot z^S}\theta^\top
\overline{\nabla G_\omega^{(0)} (x,z^S) } \int_\Omega \mu(y)
\nabla G_\omega^{(0)}(x,y)\cdot \nabla U_0(y)dydxd\theta
+O(\Vert \mu \Vert_\infty^2),
\end{equation}
so we have:
\begin{equation}
A_I(z^S)= -U_I \int_{\Omega\times \mathbb{S}^1}
e^{-i\omega\theta\cdot (z^S-y)} \mu(y) \theta^\top
R_\omega(z^S,y)\theta dyd\theta,
\end{equation}
and hence,
\begin{multline}
A_I (z^S)\overline{A_I (z^{S'})} =U_I^2 \int_{\mathbb{S}^1}
e^{-i\omega \theta \cdot (z^S-z^{S'})} \\ \bigg[ \int
\int_{\Omega \times \Omega} e^{i\omega \theta \cdot (y-y')} \mu(y)
\mu(y') \theta^\top R_\omega(z^S,y) \overline{R_\omega(z^{S'},y')}
\theta dydy'\bigg]d \theta.
\end{multline}
We assume that the medium noise is localized and stationary on its
support $\Omega_\mu$. We also assume that the correlation length
$l_\mu$ is smaller than the wavelength. We denote by $\sigma_\mu$ the
standard deviation of the process $\mu$. We can then write:
\begin{equation}
\mathbb{E}\bigg[A_I (z^S)\overline{A_I (z^{S'})}\bigg] = U_I^2
\sigma_\mu^2 l_\mu^2\int_{\mathbb{S}^1}e^{i\omega \theta \cdot
(z^S-z^{S'}) } \int_{\Omega _\mu} \theta^\top R_\omega (z^S,y)
\overline{R_\omega (z^{S'},y)}\theta dy d\theta.
\end{equation}
We introduce
\begin{equation}\label{defP}
P_\omega(z^S,y,z^{S'}):=\int_{\mathbb{S}^1}e^{i\omega \theta \cdot
(z^S-z^{S'}) } \theta^\top R_\omega (z^S,y) \overline{R_\omega
(z^{S'},y)}\theta d\theta,
\end{equation}
where $R_\omega$ is defined by (\ref{DEFR}). Therefore, we have
\begin{equation}\label{EstA}
\mathbb{E}\bigg[A_I (z^S)\overline{A_I (z^{S'})}\bigg] = U_I^2
\sigma_\mu^2 l_\mu^2 \int_{\Omega _\mu} P_\omega (z^S,y,z^{S'})
dy.
\end{equation}
Hence, $A_I$ is a complex field with Gaussian statistics of mean
zero and covariance given by (\ref{EstA}). It is a speckle field
and is not localized.
We compute its typical size at point $z^S=z^{S'}=z_r$, in order to
get signal-to-noise estimates. Using (\ref{DAR1}), we get that for $\vert x-z\vert \gg 1$:
\begin{equation*}
\lim_{R\rightarrow \infty} \int_{\vert y\vert =R}\nabla
G_\omega^{(0)}(x,y) \overline{\nabla G_\omega^{(0)}(z,y)}^\top d\sigma(y)
= \frac{\omega }{4}J_0(\omega \vert x-z\vert)
\left(\frac{(x-z)}{\vert x-z\vert }\frac{(x-z) ^\top}{\vert
x-z\vert } \right).
\end{equation*}
Since we have, for $\vert x-z\vert \gg 1$, \begin{equation}\label{equivJ0}
J_0(\omega \vert x -z \vert ) \sim \frac{\sqrt{2} \cos(\omega \vert x-z \vert -
\frac{\pi}{4})}{\sqrt{\pi \omega \vert x-z\vert}},
\end{equation} we obtain that \begin{equation}\label{DAR}
R_\omega (x,z) \approx \frac{\sqrt{\omega}\cos (\omega \vert x-z\vert -\pi/4)}{2\sqrt{2\pi }} \vert x-z\vert ^{-1/2} \left(\frac{(x-z)}{\vert x-z\vert }\frac{(x-z) ^\top}{\vert x-z\vert } \right) \ \text{for } \vert x-z\vert \gg 1.
\end{equation}
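The expansion (\ref{equivJ0}) is the standard large-argument asymptotics of $J_0$; a quick numerical comparison at a few (arbitrary) sample points:

```python
import numpy as np
from scipy.special import j0

# Compare J_0(s) with its large-argument approximation
# sqrt(2/(pi s)) cos(s - pi/4); the sample points are arbitrary.
samples = np.array([20.0, 50.0, 200.0])
exact = j0(samples)
asym = np.sqrt(2 / (np.pi * samples)) * np.cos(samples - np.pi / 4)
err = np.abs(exact - asym)
print(np.c_[samples, exact, asym, err])
```

The next term of the expansion is of relative size $O(1/s)$, so the discrepancy shrinks roughly like $s^{-3/2}$ in absolute value.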
Now we can write
\begin{equation}
\mathbb{E}\bigg[A_I (z_r)\overline{A_I (z_r)}\bigg] \approx U_I^2
\sigma_\mu^2 l_\mu^2 \int_{\Omega _\mu}
\left(\frac{\sqrt{\omega}}{2\sqrt{2\pi }}\right)^2\frac{1}{2} \vert
y-z_r\vert ^{-1} \int_{\mathbb{S}^1} \theta^\top
\left(\frac{(y-z_r)}{\vert y-z_r\vert }\frac{(y-z_r) ^\top}{\vert
y-z_r\vert }\right)\theta d\theta dy.
\end{equation}
If we compute the term:
\begin{equation}
\int_{\mathbb{S}^1} \theta^\top \left(\frac{(y-z_r)}{\vert y-z_r\vert }\frac{(y-z_r) ^\top}{\vert y-z_r\vert }\right)\theta d\theta = \int_0^{2\pi} \bigg[\left( \frac{(y-z_r)_1}{\vert y-z_r \vert}\right)^2 \cos^2\theta + 2\frac{(y-z_r)_1 (y-z_r)_2}{\vert y-z_r \vert^2}\cos\theta\sin\theta + \left(\frac{(y-z_r)_2}{\vert y-z_r \vert}\right)^2 \sin^2\theta\bigg] d\theta,
\end{equation}
then, after integration (the cross term vanishes, while $\cos^2\theta$ and $\sin^2\theta$ each average to $1/2$), we get
\begin{equation}\label{eqpi}
\int_{\mathbb{S}^1} \theta^\top \left(\frac{(y-z_r)}{\vert
y-z_r\vert }\frac{(y-z_r) ^\top}{\vert y-z_r\vert }\right)\theta
d\theta = \pi.
\end{equation}
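The value $\pi$ in (\ref{eqpi}) is easy to confirm by quadrature, with an arbitrary unit vector standing in for $(y-z_r)/\vert y-z_r\vert$:

```python
import numpy as np

# Riemann sum of (theta . u)^2 over S^1 for an arbitrary unit vector u.
u = np.array([0.6, 0.8])               # |u| = 1
n = 20000
t = 2 * np.pi * np.arange(n) / n
integrand = (np.cos(t) * u[0] + np.sin(t) * u[1]) ** 2
val = integrand.sum() * (2 * np.pi / n)
print(val)  # close to pi
```

By rotation invariance the result does not depend on the direction of $u$, consistent with (\ref{eqpi}).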
So we have:
\begin{equation}
\mathbb{E}\bigg[A_I (z_r)\overline{A_I (z_r)}\bigg] \approx \pi
U_I^2 \sigma_\mu^2 l_\mu^2 \int_{\Omega _\mu}
\left(\frac{\sqrt{\omega}}{4\sqrt{\pi }}\right)^2 \vert
y-z_r\vert ^{-1} dy,
\end{equation}
and therefore,
\begin{equation}\label{TERMA}
\mathbb{E}\bigg[A_I (z_r)\overline{A_I (z_r)}\bigg] \approx \pi \frac{\omega }{8} U_I^2
\sigma_\mu^2 l_\mu^2 \text{diam } \Omega_\mu.
\end{equation}
\paragraph{Secondary speckle term:}
We have
\begin{multline}
B_I (z^S)\overline{B_I (z^{S'})}=\left(2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 }
U_I \right)^2 \int_{\mathbb{S}^1} e^{-i\omega \theta \cdot (z^S-z^{S'}) }
\\\bigg[\int\int_{\Omega \times \Omega} \mu(y) \mu(y') \theta^\top \widetilde{R}_\omega (z^S,z_r,y)
\overline{\widetilde{R}_\omega (z^{S'},z_r,y')}\theta dy dy'\bigg]
d\theta.
\end{multline}
So we get the expectation:
\begin{multline}
\label{DAspeckle2} \mathbb{E}\bigg[B_I (z^S)\overline{B_I
(z^{S'})}\bigg] =\left(2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I \right)^2 \sigma_\mu^2
l_\mu^2 \\ \int_{\mathbb{S}^1} e^{-i\omega \theta \cdot (z^S-z^{S'}) } \theta^\top\bigg[ \int_{\Omega_\mu} \widetilde{R}_\omega(z^S,z_r,y)\overline{\widetilde{R}_\omega(z^{S'},z_r,y)} dy \bigg]\theta d\theta.
\end{multline} This term also creates a speckle field on the image.
As before, we compute the typical size of this term at point $z_r$.
We first get an estimate on $\widetilde{R}_\omega$.
\begin{equation}
\vert\left( \widetilde{R}_\omega(z^S,z_r,y)\right)_{i,j} \vert \leq \vert \partial_j G_\omega^{(0)}(y,z_r) \vert \vert \sum_{k=1,2}\int_{\partial \Omega} \partial_{y_i}\overline{G_\omega^{(0)}(x,z^S)} \partial_{y_i} \partial_{y_k} G_\omega^{(0)}(x,y) d\sigma(x) \vert.
\end{equation}
We recall the Helmholtz-Kirchhoff theorem
\begin{equation}
\int_{\partial \Omega} \overline{G_\omega^{(0)}(x,y)}
G_\omega^{(0)}(x,z) d\sigma(x) \sim \frac{1}{4\omega} J_0(\omega
\vert y-z\vert) \quad \mbox{as } R\rightarrow\infty,
\end{equation}
from which
\begin{equation}
\int_{\partial \Omega} \partial_{y_i}\overline{G_\omega^{(0)}(x,z^S)} \partial_{y_i} \partial_{y_k} G_\omega^{(0)}(x,y) d\sigma(x) = \frac{1}{4\omega} \left(\partial_i \partial_i \partial_k f\right) (z^S-y),
\end{equation}
where $f$ is defined by $f(x)=J_0(\omega \vert x \vert)$.
We have
\begin{equation}
\partial_i \partial_j \partial_k f (x) = \omega \left( \frac{3\left(a_{i,j,k}(x)-b_{i,j,k}(x)\right)}{\vert x \vert^2 }\left[ J_0'(\omega \vert x \vert) -\omega \vert x \vert J_0''(\omega \vert x \vert) \right]+ a_{i,j,k}(x) \omega^2 J_0^{(3)}(\omega \vert x \vert )\right),
\end{equation} where $a_{i,j,k}$ and $b_{i,j,k}$ are rational functions of the components of $x$, bounded by $1$.
Now, recall the power series of $J_0$:
\begin{equation}
J_0(z) = \sum_k (-1)^k \frac{\left(\frac{1}{4}z^2\right)^k}{(k!)^2}.
\end{equation} We can write
\begin{equation}
J_0'(\omega \vert x \vert) -\omega \vert x \vert J_0''(\omega \vert x \vert)
=-\frac{\omega^3}{8} \vert x\vert^3 +o(\vert x \vert^3).
\end{equation}
Hence, since $J_0^{(3)}(x)\sim \frac{3}{8} x$ when $x\rightarrow
0$, we can prove the following estimate for $x$ around $0$:
\begin{equation}
\frac{1}{4\omega}(\partial_i \partial_j \partial_k f)(x) \sim \frac{3b_{i,j,k}(x)}{32} \omega^3 \vert x \vert .
\end{equation}
In order to get the decay of $\widetilde{R}_\omega$ for large
arguments we use the following formulas: $J_0'=-J_1$,
$J_0''=\frac{1}{x}J_1-J_0$, and $J_0^{(3)}= J_1-\frac{2}{x^2}J_1+
\frac{1}{x} J_0$. We get
\begin{equation}
\frac{1}{4\omega}\vert \partial_i \partial_j \partial_k f (x)
\vert \leq \omega^2 (\omega \vert x \vert )^{-1/2} \quad \mbox{as
} \vert x\vert \rightarrow \infty.
\end{equation}
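As a numerical sanity check of the small-argument behavior used above: combining the recurrences gives $J_0'(s)-sJ_0''(s)=sJ_0(s)-2J_1(s)\sim -s^3/8$ and $J_0^{(3)}(s)=J_1(s)-\frac{2}{s^2}J_1(s)+\frac{1}{s}J_0(s)\sim \frac{3s}{8}$, which can be compared against finite differences (the sample point and step size below are arbitrary):

```python
import numpy as np
from scipy.special import j0, j1

# Third derivative of J_0 via the recurrences J_0' = -J_1 and
# J_1' = J_0 - J_1/x:  J_0''' = J_1 - 2 J_1/x^2 + J_0/x.
def d3_j0(x):
    return j1(x) - 2 * j1(x) / x**2 + j0(x) / x

# Central finite difference for the third derivative (step arbitrary).
def d3_fd(f, x, h=1e-3):
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

s = 0.1
print(d3_j0(s), d3_fd(j0, s), 3 * s / 8)      # all close for small s
print(s * j0(s) - 2 * j1(s), -s**3 / 8)       # small-argument expansion
```

Both expansions agree with the recurrence-based values to the expected order in $s$.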
We also have the following estimate:
\begin{equation}
\vert \nabla G_\omega^{(0)}(y,z_r)
\vert \leq \left(\frac{2}{\pi}\right)^{1/2}\max \left( \frac{1}{ \vert y-z_r \vert},
\frac{\omega}{\sqrt{\omega \vert y-z_r \vert }}\right).
\end{equation}
We can now write the estimate on $(\widetilde{R}_\omega)_{i,j}$:
\begin{equation}
\vert \left(\widetilde{R}_\omega(z^S,z_r,y)\right)_{i,j}\vert \leq \omega^2 \left(\frac{2}{\pi}\right)^{1/2} \min\left(\omega \vert y-z_r \vert , \frac{1}{\sqrt{\omega \vert y-z^S \vert}} \right) \max \left( \frac{1}{\omega \vert y-z_r \vert}, \frac{1}{\sqrt{\omega \vert y-z_r \vert }}\right).
\end{equation}We can now go back to estimating the term $B_I$. We split the domain of integration $\Omega_\mu =B(z_r,\omega^{-1})
\cup \Omega_\mu \backslash B(z_r,\omega^{-1})$ to get
\begin{multline}
\left\vert \mathbb{E}\bigg[B_I(z_r)\overline{B_I(z_r)}\bigg] \right\vert \leq \left(2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I \right)^2 \sigma_\mu^2
l_\mu^2 \\ 4\pi \omega^4 \frac{2}{\pi}\bigg[ \int_{\Omega_\mu \backslash B(z_r,\omega^{-1})} \frac{1}{\vert y-z_r\vert^2}dy + \int_{B(z_r,\omega^{-1})} \omega^2 \, dy \bigg].
\end{multline}
Hence,
\begin{equation}\label{TERMB}
\left\vert \mathbb{E}\bigg[B_I(z_r)\overline{B_I(z_r)}\bigg] \right\vert \leq 8\left(2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I \right)^2 \omega^4 \sigma_\mu^2
l_\mu^2 \log(\omega \text{ diam }\Omega_\mu ).
\end{equation}
\paragraph{Double products:}
The double products $A_I \overline{B_I}$ and $B_I \overline{A_I}$ have a typical amplitude that is the geometric mean of the
typical amplitudes of $A_I$ and $B_I$, so they are always
dominated by the larger of the two main terms $\vert A_I\vert^2$ and $\vert
B_I\vert^2$.
\subsubsection{Signal-to-noise ratio estimates}
We can now give an estimate of the signal-to-noise ratio $(SNR)_I$
defined by (\ref{snridef}). Using (\ref{ExpectIb}), (\ref{TERMA}), and
(\ref{TERMB}) we get
\begin{equation}\label{SNRI1}
(SNR)_I \approx \frac{\frac{\pi^2(\sigma_r-1)}{2(\sigma_r+1)}\omega\delta^2 U_I}{\sigma_\mu l_\mu \left( \pi
\frac{\omega}{8} \text{ diam }\Omega_\mu
+8\left(2\pi \delta^2\frac{\sigma_r-1}{\sigma_r+1 } U_I \right)^2 \omega^4 \log(\omega \text{ diam }\Omega_\mu )\right)^{1/2}}.
\end{equation}
Since $\delta \ll \frac{2\pi}{\omega}$ we have that $\delta \omega
\ll 1$, so we can estimate $(SNR)_I$ as follows:
\begin{equation}\label{SNRI}
(SNR)_I \approx \frac{\sqrt{2}\pi^{3/2}\frac{\sigma_r-1}{\sigma_r+1}\omega\delta^2 U_I}{\sigma_\mu l_\mu \sqrt{\omega \text{ diam }
\Omega_\mu}}.
\end{equation}
The perturbation in the image $I$ comes from different phenomena.
The first, and most important, is the fact that we image
not only the field scattered by the reflector, but also the field
scattered by the medium's random inhomogeneities. This is why the
signal-to-noise ratio depends on the volume and the contrast of
the particle we are trying to locate: it has to stand out from
the background. The other terms in the estimate (\ref{SNRI1}) of
$(SNR)_I$ are due to the phase perturbation of the field scattered
by the particle when it reaches the boundary of $\Omega$, which
can be seen as a travel time fluctuation of the wave scattered by
the reflector. Both terms are much smaller than the first one.
$(SNR)_I$ depends on the ratio ${\omega}/{l_\mu}$: if the medium
noise has a shorter correlation length, then the perturbation
induced in the phase of the fields is more likely to self-average.
\subsection{Second-harmonic backpropagation}
\subsubsection{Expectation}
We have:
\begin{multline}
\mathbb{E}[J(z^S)] = -\pi \delta^2 \int_{\mathbb{S}^1} e^{-2i\omega \theta \cdot z^S} \bigg[
(S)_{det}^\theta\int_{\partial \Omega} \overline{\G{x}{z^S}} \G{x}{z_r} dx \\
+ \mathbb{E}[(S)_{rand}^\theta] \int_{\partial \Omega}
\overline{\G{x}{z^S}} \G{x}{z_r} dx \bigg]d\theta.
\end{multline}
Since $\mathbb{E}[(S)_{rand}^\theta] =0$ we obtain by using
(\ref{DEFSDET}) that
\begin{equation}
\mathbb{E}[J(z^S)] =
\pi \delta^2 \omega^2 U_I^2 \int_{\mathbb{S}^1} \left(\sum_{k,l} \chi_{k,l} \theta_k \theta_l \right) e^{2i\omega \theta \cdot (z_r-z^S)} d\theta \int_{\partial \Omega} \overline{\G{x}{z^S}} \G{x}{z_r}
dx.
\end{equation}
If we define $\widetilde{Q}_{2\omega}$ as
\begin{equation}\label{defqtilde}
\widetilde{Q}_{2\omega}(x,y)= \int_{\mathbb{S}^1} \left(\sum_{k,l} \chi_{k,l} \theta_k
\theta_l \right) e^{2i\omega \theta \cdot (x-y)} d\theta,
\end{equation}
then it follows that
\begin{equation}
\mathbb{E}[J(z^S)] = \pi \delta^2 \omega^2 U_I^2 \widetilde{Q}_{2\omega}(z_r,z^S) Q_{2\omega}(z_r,z^S),
\end{equation} where $Q_{2\omega}$ is given by (\ref{DefQ}).
To get the typical size of this term we first
use the Helmholtz-Kirchhoff theorem \cite{ammarimethods}:
\begin{equation}\label{EQUIVQ}
Q_{2\omega}(z_r,z^S) \sim \frac{1}{2\omega}\text{ Im} \left( \G{z_r}{z^S}\right).
\end{equation}
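Together with (\ref{IMG}), the approximation (\ref{EQUIVQ}) gives $Q_{2\omega}(z_r,z_r)\approx \frac{1}{8\omega}$. This can be checked by discretizing (\ref{DefQ}) with $G_{2\omega}^{(0)}(x,z)=\frac{i}{4}H_0^{(1)}(2\omega\vert x-z\vert)$ on a large circle (radius, frequency, and quadrature size below are arbitrary choices):

```python
import numpy as np
from scipy.special import hankel1

omega = 2 * np.pi            # fundamental frequency (arbitrary choice)
k = 2 * omega                # second-harmonic wavenumber
R, N = 60.0, 4000            # boundary radius, quadrature points
t = 2 * np.pi * np.arange(N) / N
bnd = R * np.c_[np.cos(t), np.sin(t)]
ds = 2 * np.pi * R / N

def G(y, z):
    """Outgoing Green function (i/4) H_0^{(1)}(k |y-z|)."""
    r = np.linalg.norm(y - z, axis=-1)
    return (1j / 4) * hankel1(0, k * r)

z = np.array([0.0, 0.0])
# Q_{2 omega}(z, z) = int_{|y|=R} |G(y,z)|^2 dsigma(y)
Q = np.sum(G(bnd, z) * np.conj(G(bnd, z))) * ds
print(Q.real * 8 * omega)    # close to 1
```

Since $\vert H_0^{(1)}(kR)\vert^2 \approx \frac{2}{\pi k R}$ for large $kR$, the discretized integral converges to $\frac{1}{4k}=\frac{1}{8\omega}$, as used in the evaluation of $\mathbb{E}[J(z_r)]$ below.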
Therefore, we obtain that
\begin{equation}\label{ExpectJ}
\mathbb{E}[J(z_r)] = \frac{\pi}{8} \delta^2\omega U_I^2 \int_{\mathbb{S}^1} \left(\sum_{k,l} \chi_{k,l} \theta_k
\theta_l \right) d\theta.
\end{equation}
\subsubsection{Covariance}
We have:
\begin{multline}
J(z^S)-\mathbb{E}[J](z^S) = \pi \delta^2 \int_{\mathbb{S}^1}e^{-2i\omega \theta \cdot z^S}\Big[ (S)_{det}^\theta 4\omega^2 \int_\Omega \G{s}{z_r}\mu(s) Q_{2\omega}(s,z^S)ds \\ - (S)_{rand}^\theta Q_{2\omega}(z_r,z^S) \Big] d\theta.
\end{multline}
Set \begin{equation} A_J(z^S)= 4 \pi \delta^2 \omega^2 \int_{\mathbb{S}^1}e^{-2i\omega \theta \cdot z^S} (S)_{det}^\theta \int_\Omega
\G{s}{z_r}\mu(s) Q_{2\omega}(s,z^S)ds d\theta,
\end{equation} and
\begin{equation}B_J(z^S)=-\pi \delta^2 \int_{\mathbb{S}^1}e^{-2i\omega \theta \cdot z^S} (S)_{rand}^\theta Q_{2\omega}(z_r,z^S) d\theta.
\end{equation}
Then we can write the covariance function,
\begin{equation}
\text{Cov}\left(J(z^S), J(z^{S'}) \right) = \mathbb{E}\bigg[\left(J(z^S)- \mathbb{E}[J(z^S)] \right)
\overline{\left( J(z^{S'})- \mathbb{E}[J(z^{S'})]\right)} \bigg],
\end{equation}
in the form
\begin{equation}
\text{Cov}\left(J(z^S), J(z^{S'}) \right) =\mathbb{E}\bigg[ A_J(z^S)
\overline{A_J(z^{S'})} +B_J(z^S)\overline{B_J(z^{S'})}
+A_J(z^S)\overline{B_J(z^{S'})} +
\overline{A_J(z^S)}B_J(z^{S'})\bigg].
\end{equation}
We now compute the first two terms separately and then deal with the double products.
\paragraph{The speckle term $A_J\overline{A_J}$:}
\medskip
From
\begin{multline}
A_J(z^S) \overline{A_J(z^{S'})} = 16 \pi^2\delta^4\omega^{4} \int_{\mathbb{S}^1} e^{-2i\omega \theta \cdot( z^S-z^{S'})}\vert (S)_{det}^\theta \vert^2
\\ \int \int _{\Omega \times \Omega} \G{s}{z_r}
\overline{\G{s'}{z_r}} \mu(s) \overline{\mu(s')}
Q_{2\omega}(s,z^S) \overline{Q_{2\omega}(s',z^{S'})} ds ds' d\theta,
\end{multline}
it follows by using (\ref{DEFSDET}) that
\begin{multline}
A_J(z^S) \overline{A_J(z^{S'})} =16 \pi^2\delta^4 \omega^{8} U_I^4 \int_{\mathbb{S}^1} e^{-2i\omega \theta \cdot( z^S-z^{S'})} \vert \sum_{k,l} \chi_{k,l} \theta_k \theta_l \vert^2 d\theta \\ \int
\int _{\Omega \times \Omega} \G{s}{z_r} \overline{\G{s'}{z_r}} \mu(s)
\overline{\mu(s')} Q_{2\omega}(s,z^S) \overline{Q_{2\omega}(s',z^{S'})} ds
ds'.
\end{multline}
If we write $C_\mu(s,s')=\mathbb{E}[\mu(s) \mu(s')]$, then we find
that
\begin{multline}
\mathbb{E}[A_J(z^S) \overline{A_J(z^{S'})}] = 16 \pi^2\delta^4 \omega^{8} U_I^4 \int_{\mathbb{S}^1}e^{-2i\omega \theta \cdot( z^S-z^{S'})} \vert \sum_{k,l} \chi_{k,l} \theta_k \theta_l \vert^2 d\theta \\
\int \int _{\Omega \times \Omega} \G{s}{z_r} \overline{\G{s'}{z_r}} C_\mu(s,s') Q_{2\omega}(s,z^S)
\overline{Q_{2\omega}(s',z^{S'})} ds ds' ,
\end{multline}
since $\mu$ is real.
As previously, we assume that the medium noise is localized and
stationary on its support $\Omega_\mu$. We denote by
$\sigma_\mu$ the standard deviation of the process $\mu$ and by
$l_\mu$ its correlation length. We can then write
\begin{multline}
\mathbb{E}[A_J(z^S) \overline{A_J(z^{S'})}] =
16 \pi^2\delta^4 \omega^{8} U_I^4 \sigma_\mu^2 l_\mu^2 \int_{\mathbb{S}^1}e^{-2i\omega \theta \cdot( z^S-z^{S'})} \vert \sum_{k,l} \chi_{k,l} \theta_k \theta_l \vert^2 d\theta \\ \int _{\Omega_\mu} \vert \G{s}{z_r}\vert^2 Q_{2\omega}(s,z^S)
\overline{Q_{2\omega}(s,z^{S'})} ds .
\end{multline}
The term $\mathbb{E}[A_J(z^S) \overline{A_J(z^{S'})}]$ shows the generation of a nonlocalized
speckle image, creating random secondary peaks. We will later
estimate the size of those peaks in order to find the
signal-to-noise ratio. We compute the typical size of this term.
We get, using (\ref{EQUIVQ}):
\begin{multline} \label{CovA}
\mathbb{E}[A_J(z^S) \overline{A_J(z^{S'})}] \approx 4\pi^2 U_I^4\delta^4 \omega^{6} \sigma_\mu^2 l_\mu^2 \\ \int_{\mathbb{S}^1}\vert \sum_{k,l}
\chi_{k,l} \theta_k \theta_l \vert^2d\theta \int
_{\Omega_\mu} \vert \G{s}{z_r}\vert^2 \text{ Im } \G{s}{z^S}
\text{ Im } \G{s}{z^{S'}} ds .
\end{multline}
Then we use the facts that $$\vert \G{x}{y} \vert \approx \frac{1}{4\sqrt{2\pi\omega}} \vert
x-y \vert^{-1/2}$$ and $$\text{ Im }\G{x}{y} = \frac{1}{4}
J_0(2\omega \vert x-y \vert )\approx \frac{\cos \left( 2\omega \vert x-y \vert - \pi/4 \right) }{4 \sqrt{\pi \omega}}
\vert x-y\vert ^{-1/2}$$ if $\vert x-y\vert \gg 1$. Then, as
previously, we decompose $\Omega_\mu= \left(\Omega_\mu \backslash
B(z_r,\omega^{-1})\right) \cup B(z_r,\omega^{-1})$. Using (\ref{CovA}), we arrive at
\begin{multline}
\mathbb{E}[A_J(z_r) \overline{A_J(z_r)}] \approx 4\pi^2 U_I^4\delta^4 \omega^{6} \sigma_\mu^2 l_\mu^2\int_{\mathbb{S}^1} \vert \sum_{k,l} \chi_{k,l} \theta_k \theta_l \vert^2 d\theta \\ \bigg( \frac{1}{512\pi^2\omega^2}
\int_{\Omega_\mu \backslash B(z_r,\omega^{-1})}\frac{ \cos^2 \left( 2\omega \vert s-z_r \vert - \pi/4 \right) }{ \vert s-z_r\vert^{2}} ds +\frac{1}{16}\int_{ B(z_r,\omega^{-1})} \vert G_{2\omega}^{(0)}(s,z_r) \vert^2 J_0(2\omega \vert s-z_r \vert )^2ds\bigg),
\end{multline}
which yields
\begin{equation}\label{estAJ}
\mathbb{E}[A_J(z_r) \overline{A_J(z_r)}] \approx \frac{\pi}{128} U_I^4\delta^4\omega^4\sigma_\mu^2 l_\mu^2\log(\omega \text{ diam }\Omega_\mu )\int_{\mathbb{S}^1} \vert \sum_{k,l} \chi_{k,l} \theta_k \theta_l \vert^2 d\theta .
\end{equation}
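The large-argument approximation of $J_0$ invoked above can be checked numerically. Here is a hedged stdlib Python sketch (the sample points are arbitrary), computing $J_0$ from its integral representation $J_0(x)=\frac{1}{\pi}\int_0^\pi \cos(x\sin t)\,dt$:

```python
import math

def j0_int(x, n=4000):
    """Bessel J0 via (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((j + 0.5) * h)) for j in range(n)) * h / math.pi

def j0_asym(x):
    """Large-argument form sqrt(2/(pi x)) * cos(x - pi/4)."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - math.pi / 4.0)

# the absolute discrepancy decays like x^{-3/2}, so 0.5/x is a loose bound
for x in (20.0, 50.0, 120.0):
    assert abs(j0_int(x) - j0_asym(x)) < 0.5 / x
```

This oscillatory-decaying tail is what produces the logarithmic factor after integrating $\cos^2(2\omega\vert s-z_r\vert-\pi/4)/\vert s-z_r\vert^2$ over $\Omega_\mu \backslash B(z_r,\omega^{-1})$.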
\paragraph{The localized term $B_J\overline{B_J}$:}
\medskip
We have
\begin{equation}
B_J(z^S) \overline{B_J(z^{S'})} = \pi^2\delta^4 Q_{2\omega}(z_r,z^S)
\overline{Q_{2\omega}(z_r, z^{S'})}\int_{\mathbb{S}^1} e^{-2i\omega \theta
\cdot( z^S-z^{S'})} \vert (S)_{rand}^\theta \vert ^2 d\theta.
\end{equation}
Using (\ref{DEFSRAND}) and (\ref{transfoSrand}) we have that
$(S)^\theta_{rand}$ can be re-written as
\begin{multline}
(S)^\theta_{rand}=- \omega^2 U_I^2 \int_\Omega \left(\mu(y) e^{i\omega
\theta \cdot y} - \mu(z_r) e^{i\omega \theta \cdot z_r}\right) \\
\bigg[ \sum_{k,l} \chi_{k,l} \left(\theta_k \theta \cdot \nabla \partial_{x_l}
G^{(0)}_\omega (z_r,y)+ \theta_l \theta \cdot \nabla \partial_{x_k} G^{(0)}_\omega (z_r,y) \right) \bigg]
dy.
\end{multline}
We need an estimate of the variance of $(S)^\theta_{rand}$. As in
section \ref{sec2} we have the following estimate for any
$0<\alpha' <1/2$:
\begin{equation}
\frac{1}{4}\vert y-z_r \vert^{\alpha'} \left\vert \partial_{x_k} \partial_{x_l} H_0^{(1)}(\omega \vert y-z_r \vert ) \right\vert \leq \frac{1}{2} \min \left( 1, \sqrt{\frac{2}{\pi}} \omega^{3/2} \vert y-z_r \vert^{\alpha'-1/2} \right) \max\left( 1, \vert y-z_r \vert^{\alpha'-2} \right).
\end{equation}
We get, for any $\alpha' < \min(\alpha, \frac{1}{2})$,
\begin{equation}
\vert S^\theta_{rand} \vert \leq \omega^2 U_I^2\Vert \mu
\Vert_{\mathcal{C}^{0,\alpha'}} \max_{k,l} \left\vert \chi_{k,l}
\right\vert\omega^{2-2\alpha'} \bigg[
\frac{8\sqrt{2\pi}}{3/2+\alpha'} \left( \omega \text{diam }
\Omega_\mu \right)^{3/2+\alpha'} + \frac{\pi}{\alpha'} \bigg],
\end{equation}
and
\begin{multline}
\left\vert \mathbb{E}[B_J(z^S)\overline{B_J(z^{S'})}] \right\vert \leq \frac{128 \pi^3}{(3/2+\alpha')^2} \omega^{4-2\alpha'} \delta^4 U_I^4\max_{k,l} \left\vert \chi_{k,l} \right\vert ^2 \mathbb{E}\left[ \Vert \mu \Vert_{\mathcal{C}^{0,\alpha'}}^2\right] \\ \bigg[\left( \omega \text{diam } \Omega_\mu \right)^{3+2\alpha'} + \frac{1}{\alpha'} \bigg] Q_{2\omega}(z_r,z^S)
\overline{Q_{2\omega}(z_r, z^{S'})}.
\end{multline}
Note that $Q_{2\omega}(z_r,z^S)$, defined in (\ref{DefQ}), behaves
like $\frac{1}{8\omega}J_0(2\omega \vert z_r - z^S \vert)$ which
decreases like $\vert z_r -z^S \vert^{-1/2}$ as $\vert z_r -z^S
\vert$ becomes large. The term $B_J$ is thus localized around $z_r$: it
may shift, lower or blur the main peak, but it will not contribute
to the speckle field elsewhere on the image. We still need to estimate its
typical size at the point $z_r$ in order to get the signal-to-noise
ratio there. Using (\ref{EQUIVQ}) and (\ref{estc0alpha})
we get
\begin{equation}
\mathbb{E}[B_J(z_r)\overline{B_J(z_r)}] \leq \frac{2^{17+\alpha} \pi^3}{(3/2+\alpha')^2} \frac{e}{\alpha-\alpha'} \omega^{2-2\alpha'} \delta^4 U_I^4 \max_{k,l} \left\vert \chi_{k,l} \right\vert ^2 \bigg[\left( \omega \text{diam } \Omega_\mu \right)^{3+2\alpha'} + \frac{1}{\alpha'} \bigg] \frac{\sigma_\mu^2}{l_\mu^{2\alpha}}.
\end{equation}
Since $(\omega \text{diam } \Omega_\mu )^{3+2\alpha' } \leq (\omega \text{diam } \Omega_\mu)^{3+2\alpha} +1$, we may take $\alpha'=\frac{\alpha}{2}$ and set $C = \frac{2^{18+1/2} \pi^3 e }{(3/2)^2}$.
We get that
\begin{equation}\label{estBJ}
\mathbb{E}[B_J(z_r)\overline{B_J(z_r)}] \leq C \omega^2\min \left(\omega^{-2\alpha},1\right) \delta^4 U_I^4 \max_{k,l} \left\vert \chi_{k,l} \right\vert ^2 \frac{\sigma_\mu^2}{l_\mu^{2\alpha}} \bigg[ \left( \omega \text{diam } \Omega_\mu \right)^{3+2\alpha} + 1 \bigg] .
\end{equation}
\begin{rem}
We note that even though the term $B_J$ is localized, meaning it
would not create too much of a speckle far away from the
reflector, it is still the dominant term of the speckle field
around the reflector's location.
\end{rem}
\paragraph{The double products $A_J\overline{B_J}$ and $\overline{A_J}B_J$:}
\medskip
These double products have the size of the geometric mean of the first
two terms, so we only need to
concentrate on the first two terms. Moreover, they are still
localized because of the factor $Q_{2\omega}(z_r,z^S)$, which decreases as $\vert z_r -
z^S\vert^{-1/2}$.
\subsubsection{Signal-to-noise ratio}
As before, we define the signal-to-noise ratio $(SNR)_J$ by
(\ref{snrjdef}).
Using (\ref{ExpectJ}), (\ref{estAJ}) and (\ref{estBJ}),
\begin{equation}
\frac{\mathbb{E}[J(z_r)]}{\left(Var(J(z_r))\right)^{\frac{1}{2}}}\geq \frac{ l_\mu^{\alpha} \left(\int_{\mathbb{S}^1}\left(\sum_{k,l} \chi_{k,l} \theta_k
\theta_l \right) d\theta\right) }{\sqrt{C} \sigma_\mu \min(\omega^{-\alpha},1) \max_{k,l} \left\vert \chi_{k,l} \right\vert\sqrt{ \left(\omega \text{diam } \Omega_\mu\right)^{3+2\alpha} + 1 } }.
\end{equation}
The difference here with the standard backpropagation is that the
$(SNR)$ depends neither on the dielectric contrast of the
particle, nor on the nonlinear susceptibility, nor even on the particle's
volume. All the background noise created by the propagation of the
illuminating wave in the medium is filtered because the small
inhomogeneities only scatter waves at frequency $\omega$. The
nanoparticle is the only source at frequency $2\omega$ so it does
not need to stand out from the background. The perturbations seen
on the image $J$ are due to travel time fluctuations of the wave
scattered by the nanoparticle (for the speckle field) and to the
perturbations of the source field at the localization of the
reflector (for the localized perturbation). The second-harmonic
image is better resolved than the fundamental frequency image.
\subsection{Stability with respect to measurement noise}
We now compute the signal-to-noise ratio in the presence of
measurement noise without any medium noise ($\mu=0$). The signals
$u_s$ and $v$ are corrupted by an additive noise $\nu(x)$ on
$\partial \Omega$. In real situations it is of course impossible
to achieve measurements for infinitely many plane-wave
illuminations. So in this part we assume that the functional $J$
is calculated as an average over $n$ different illuminations,
uniformly distributed in $\mathbb{S}^1$. We consider independent
and identically distributed random processes $\nu^{(j)}(x),\ x\in
\partial \Omega$, $j\in \{1,\dots,n\}$, representing the measurement noise. We use the
model of \cite{TDerivativ}: if we assume that the surface of
$\Omega$ is covered with sensors half a wavelength apart and that
the additive noise has variance $\sigma^2$ and is independent from
one sensor to another, we can model the additive noise process
by a Gaussian white noise with covariance function
$$\mathbb{E}(\nu(x) \overline{\nu(x')}) = \sigma_\nu^2
\delta(x-x'),$$ where $\sigma_\nu^2 = \sigma^2 \frac{\lambda}{2}$.
\subsubsection{Standard backpropagation}
We write, for each $j\in \{1,\dots,n\}$, $u_s^{(j)}$ as
\begin{equation}
u_s^{(j)}(x)=-2\pi \delta^2 \frac{\sigma_r-1}{\sigma_r+1} U_I e^{i \omega \theta^{(j)} \cdot z_r}
\nabla G_\omega^{(0)}(x,z_r) \cdot(i\omega \theta^{(j)} )+ o(\delta^2)+ \nu^{(j)}(x),
\end{equation} where $\nu^{(j)}$ is the measurement noise associated with the $j$-th illumination.
We can write $I$ as
\begin{equation}
I(z^S) =\frac{1}{n} \sum_{j=1}^n\int_{\partial \Omega}\frac{1}{i\omega}e^{-i \omega \theta^{(j)} \cdot z^S}(\theta^{(j)})^\top\overline{\nabla G_\omega^{(0)}(x,z^S)} u_s^{(j)}(x)dx.
\end{equation}
Further, \begin{multline} I(z^S)= -2\pi \delta^2 \frac{\sigma_r-1}{\sigma_r+1} U_I \frac{1}{n}\sum_{j=1}^n e^{i \omega \theta^{(j)}
\cdot (z_r-z^S)}(\theta^{(j)})^\top R_\omega(z_r,z^S) \theta^{(j)}
\\ + \frac{1}{n}\sum_{j=1}^n \int_{\partial \Omega}\frac{1}{i\omega} e^{-i \omega
\theta^{(j)} \cdot z^S}(\theta^{(j)})^\top\overline{\nabla
G_\omega^{(0)}(x,z^S)} \nu^{(j)} (x)dx.
\end{multline}
We get that
\begin{equation}
\mathbb{E}[I(z^S)]= -2\pi \delta^2 \frac{\sigma_r-1}{\sigma_r+1} U_I \frac{1}{n}\sum_{j=1}^n e^{i \omega \theta^{(j)}
\cdot (z_r-z^S)}(\theta^{(j)})^\top R_\omega(z_r,z^S) \theta^{(j)},
\end{equation}
so that, using (\ref{DAR1}) and (\ref{IMG})
\begin{equation}
\mathbb{E}[I(z_r)]\sim -\frac{\pi(\sigma_r-1)}{4(\sigma_r+1)}\omega\delta^2 U_I.
\end{equation}
We compute the covariance
\begin{multline}
Cov(I(z^S),I(z^{S'})) = \mathbb{E}\bigg[\frac{1}{n^2}\left( \sum_{j=1}^n \frac{1}{i\omega}e^{-i\omega \theta^{(j)} \cdot
z^S} \int_{\partial\Omega } \nu^{(j)}(x) (\theta^{(j)})^\top
\overline{\nabla G_\omega^{(0)}(x,z^S)} dx\right) \\ \left(\sum_{l=1}^n\frac{-1}{i\omega} e^{i\omega \theta^{(l)} \cdot
z^{S'}} \int_{\partial\Omega} \nu^{(l)}(x') (\theta^{(l)})^\top\nabla G_\omega^{(0)}(x',z^{S'})dx' \right) \bigg],
\end{multline}
and obtain that
\begin{equation}
Cov(I(z^S),I(z^{S'})) =\sigma^2 \frac{\lambda}{2}\frac{1}{\omega^2 n^2} \sum_{j=1}^n e^{-i\omega \theta^{(j)}
\cdot (z^S-z^{S'})} (\theta^{(j)})^\top R_\omega(z^S,z^{S'})
\theta^{(j)}.
\end{equation}
The signal-to-noise ratio is given by
\begin{equation}
(SNR)_I = \frac{\mathbb{E}[I(z_r)] }{\left(Var(I(z_r))\right)^{\frac{1}{2}}}.
\end{equation}
If we compute
\begin{equation}
Var(I(z_r))\sim \sigma^2 \frac{\pi}{8 \omega^2 n},
\end{equation}
then $(SNR)_I$ can be expressed as
\begin{equation}
(SNR)_I=\frac{ \sqrt{ \pi n} \delta^2 \omega^2 [\sigma_r-1] U_I }{ [\sigma_r +1] \sigma }.
\end{equation}
The backpropagation functional is very stable with respect to
measurement noise. Of course, increasing the number of measurements improves
the stability because the measurement noise is averaged out. We
will see in the following that second-harmonic imaging is also
quite stable with respect to measurement noise.
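The stabilizing effect of averaging is generic: the variance of the mean of $n$ independent noises decays like $1/n$, independently of the specific Green's-function kernels above. A small seeded Monte Carlo sketch (the noise level, trial count, and illumination counts are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(7)
SIGMA, TRIALS = 1.0, 4000

def var_of_mean(n):
    """Sample variance of the average of n i.i.d. N(0, SIGMA^2) noise draws."""
    means = [statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(n))
             for _ in range(TRIALS)]
    return statistics.pvariance(means)

ratio = var_of_mean(1) / var_of_mean(16)
# averaging over 16 "illuminations" cuts the noise variance roughly 16-fold
assert 10.0 < ratio < 24.0
```

This is the $1/n$ factor appearing in the variance formulas of this section.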
\subsubsection{Second-harmonic backpropagation}
We write, for each $j\in \{1,\dots,n\}$, $v^{(j)}$ as
\begin{equation}
v^{(j)}(x)=-\delta^2 (2\omega)^2 \left(\sum_{k,l} \chi_{k,l} \partial_{x_k}U^{(j)}(z_r) \partial_{x_l}
U^{(j)}(z_r)\right) \G{x}{z_r} + \nu^{(j)}(x),
\end{equation}
where $\nu^{(j)}$ is the measurement noise at the $j$-th measurement. Without any medium noise
the source term $(S)$ can be written as
\begin{equation}
(S)^{\theta^{(j)}}= \sum_{k,l} \chi_{k,l} \partial_{x_k}U^{(j)}(z_r) \partial_{x_l}
U^{(j)}(z_r) =- \omega^2U_I^2 e^{2i\omega \theta^{(j)} \cdot z_r} \sum_{k,l}
\chi_{k,l} \theta^{(j)}_k \theta^{(j)}_l.
\end{equation}
So we can write $J$ as
\begin{equation}
J(z^S)=\frac{1}{n}\sum_{j=1}^n \int_{ \partial \Omega}v^{(j)}(x) \overline{\G{x}{z^S}}
e^{-2i\omega \theta^{(j)} \cdot z^S} dx,
\end{equation}
or equivalently,
\begin{multline}
J(z^S)= -\delta^2 (2\omega)^2\frac{1}{n}\sum_{j=1}^n (S)^{\theta^{(j)}}\int_{\partial \Omega} \G{x}{z_r} \overline{\G{x}{z^S}}
e^{-2i\omega \theta^{(j)} \cdot z^S} dx\\+\frac{1}{n}\sum_{j=1}^n \int_{\partial \Omega } \nu^{(j)}(x) \overline{\G{x}{z^S}}
e^{-2i\omega \theta^{(j)} \cdot z^S}dx .
\end{multline}
We get that
\begin{equation}
\mathbb{E}[J(z^S)]= -\delta^2 (2\omega)^2\frac{1}{n}\sum_{j=1}^n (S)^{\theta^{(j)}} e^{-2i\omega \theta^{(j)}\cdot z^S} Q_{2\omega}(z_r,z^S),
\end{equation}
so that, using (\ref{EQUIVQ}):
\begin{equation}
\mathbb{E}[J(z_r)]\sim \delta^2 U_I^2 \frac{\omega^3}{2n} \sum_{k,l,j} \chi_{k,l} \theta^{(j)}_k \theta^{(j)}_l.
\end{equation}
We can compute the covariance
\begin{multline}
Cov(J(z^S),J(z^{S'}))=\mathbb{E }\bigg[ \frac{1}{n^2}\left(\sum_{j=1}^ne^{-2i\omega \theta^{(j)} \cdot
z^S} \int_{\partial \Omega}
\nu^{(j)}(x) \overline{\G{x}{z^S}} dx \right) \\ \left( \sum_{l=1}^n e^{2i\omega \theta^{(l)} \cdot
z^{S'}} \int_{\partial \Omega}\nu^{(l)}(x)\G{x'}{z^{S'}}dx'\right) \bigg],
\end{multline}
which yields
\begin{equation}
Cov(J(z^S),J(z^{S'}))=
\sigma^2 \frac{\lambda}{2} Q_{2\omega}(z^{S'},z^S)\frac{1}{n^2}\sum_{j=1}^ne^{-2i\omega
\theta^{(j)} \cdot (z^S-z^{S'})} .
\end{equation}
Now we have
\begin{equation}
Var(J(z_r))^{1/2}\sim \frac{\sigma}{2\omega} \sqrt{\frac{\pi}{2n}}.
\end{equation}
The signal-to-noise ratio,
\begin{equation}
(SNR)_J=\frac{\mathbb{E}[J(z_r)] }{\left(Var(J(z_r))\right)^{\frac{1}{2}}},
\end{equation}
is given by
\begin{equation}
(SNR)_J= \frac{ 2\delta^2 \omega^2 U_I\left(\sum_j \sum_{k,l} \chi_{k,l}
\theta^{(j)}_k \theta^{(j)}_l \right) }{\pi\sigma\sqrt{n}}.
\end{equation}
Even though it appears that the $(SNR)$ is proportional to
$\frac{1}{\sqrt{n}}$, the term $\sum_j \theta^{(j)}_k
\theta^{(j)}_l $ is actually of order $n$. In fact, if we pick
$\theta^{(j)}=\left(\cos \frac{2j\pi}{n}, \sin\frac{2j\pi}{n}\right)$ we get that \begin{equation}
\sum_{k,l} \chi_{k,l} \sum_j \theta^{(j)}_k \theta^{(j)}_l=
\sum_{j=1}^n \left( \chi_{1,1}\cos^2 \frac{2j\pi}{n} +
\chi_{2,2}\sin^2\frac{2j\pi}{n} +2\chi_{1,2}\sin \frac{2j\pi}{n}
\cos \frac{2j\pi}{n} \right),
\end{equation}
and hence,
\begin{equation}
\sum_{k,l} \chi_{k,l} \sum_j \theta^{(j)}_k \theta^{(j)}_l
\sim\frac{n}{2} \max[\chi_{1,1}, \chi_{2,2}] .
\end{equation}
Therefore, we can conclude that
\begin{equation}
(SNR)_J= \frac{ \delta^2 \omega^2U_I^2\sqrt{n}\max[\chi_{1,1}, \chi_{2,2}] }{\pi \sigma_\nu }.
\end{equation}
The signal-to-noise ratio is very similar to the one obtained in the
classical backpropagation case, so the sensitivity with respect to
relative measurement noise should be similar. It is noteworthy
that in practice, due to the very small size of the second-harmonic (SHG) signal
($\chi$ has a typical size of $10^{-12} \ m/V$), the measurement
noise levels will be higher for the second-harmonic signal.
\section{Numerical results} \label{sec7}
\subsection{The direct problem}
We consider the medium to be the square $[-1,1]^2$. The medium has
an average propagation speed of $1$, with random fluctuations having
Gaussian statistics (see Figure \ref{graphMssref}). To simulate
$\mu$ we use the algorithm described in \cite{TDerivativ} which
generates random Gaussian fields with Gaussian covariance function
and take a standard deviation equal to $0.02$ and a correlation
length equal to $0.25$. We consider a small reflector in the
medium $\Omega_r=z_r+\delta B(0,1)$ with $z_r=(-0.2,0.5)$ and
$\delta=0.004/\pi$, represented on Figure \ref{graphM}. The
contrast of the reflector is $\sigma_r=2$. We fix the frequency to
be $\omega=8$. We get the boundary data $u_{s}$ when the medium is
illuminated by the plane wave $U_{I}(x)=e^{i \omega \theta \cdot
x}$. The correlation length of the medium noise was picked so that
it has a similar size as the wavelength of the illuminating plane
wave. We get the boundary data by using an integral representation
for the field $u_{s,\theta}$. We also compute the boundary data
for the second-harmonic field $v$. We compute the imaging
functionals $I$ and $J$ respectively defined in (\ref{DefI}) and
(\ref{DefJ}) for different illumination settings
(see Figures \ref{graphI} and \ref{graphJ} for instance).
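The medium noise $\mu$ is generated with the two-dimensional algorithm of \cite{TDerivativ}; as a hedged illustration of the same spectral-synthesis idea, here is a one-dimensional stdlib Python sketch with the values $\sigma_\mu=0.02$ and $l_\mu=0.25$ used above (the spectral grid parameters are arbitrary). The deterministic part checks that the discretized spectrum reproduces the target Gaussian covariance $C(r)=\sigma^2 e^{-r^2/(2\ell^2)}$:

```python
import math
import random

SIGMA, ELL, KMAX, NK = 0.02, 0.25, 60.0, 800

def spectrum(k):
    """1-D spectral density of the Gaussian covariance C(r) = SIGMA^2 exp(-r^2/(2 ELL^2))."""
    return SIGMA ** 2 * ELL * math.sqrt(2.0 * math.pi) * math.exp(-0.5 * (ELL * k) ** 2)

def gaussian_field(xs, seed=1):
    """Spectral synthesis: mu(x) = sum_j sqrt(S(k_j) dk / pi) (a_j cos k_j x + b_j sin k_j x)."""
    rng = random.Random(seed)
    dk = KMAX / NK
    ks = [(j + 0.5) * dk for j in range(NK)]
    coeffs = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in ks]
    return [sum(math.sqrt(spectrum(k) * dk / math.pi)
                * (a * math.cos(k * x) + b * math.sin(k * x))
                for k, (a, b) in zip(ks, coeffs)) for x in xs]

def cov_from_spectrum(r):
    """C(r) recovered as (1/pi) * integral_0^inf S(k) cos(k r) dk (midpoint rule)."""
    dk = KMAX / NK
    return sum(spectrum((j + 0.5) * dk) * math.cos((j + 0.5) * dk * r)
               for j in range(NK)) * dk / math.pi

for r in (0.0, 0.25, 0.5):
    assert abs(cov_from_spectrum(r) - SIGMA ** 2 * math.exp(-r ** 2 / (2 * ELL ** 2))) < 1e-6

mu = gaussian_field([0.05 * i for i in range(40)])
assert len(mu) == 40 and all(math.isfinite(v) for v in mu)
```

The two-dimensional version used in the experiments follows the same recipe, with a 2-D spectral density and wave-vector grid.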
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{medwithref.eps}
\caption{\label{graphM} Medium with the reflector.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{medwithoutref.eps}
\caption{\label{graphMssref} Medium without the reflector (permittivity variations zoomed out).}
\end{minipage}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{Ui.eps}
\caption{\label{UI} Incoming field $U_I$.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{Us0.eps}
\caption{\label{Us0} Background field in the absence of a reflector $u_s^{(\mu)}$.}
\end{minipage}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{US.eps}
\caption{\label{graphus} Total scattered field $u_s$.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{V.eps}
\caption{\label{graphItilde30} Second-harmonic field $v$.}
\end{minipage}
\end{figure}
\subsection{The imaging functionals and the effects of the number of plane wave illuminations}
We compute the imaging functionals $I$ and $J$ respectively
defined in (\ref{DefI}) and (\ref{DefJ}) for four
different illumination settings. We fix the noise level
($\sigma_\mu =0.02$), the volume of the particle ($v_r=10^{-2}$)
and the contrast $\sigma_r = 2$. In Figures~\ref{graphI} and
\ref{graphJ} the image is obtained after backpropagating the
boundary data from one illumination ($\theta =0$). On the
following graphs, we average over several illumination
angles:\begin{itemize} \item $4$ uniformly distributed angles for
Figures~\ref{graphI4} and~\ref{graphJ4}. \item $8$ uniformly
distributed angles for Figures~\ref{graphI8} and~\ref{graphJ8}.
\item $32$ uniformly distributed angles for
Figures~\ref{graphI32} and~\ref{graphJ32}.
\end{itemize}
As predicted, the shape of the spot in the fundamental frequency
image is very dependent on the illumination angles, whereas with
second-harmonic imaging we get an acceptable image with only one
illumination. In applications, averaging over different
illuminations is useful because it increases the stability with
respect to measurement noise. It is noteworthy that, as expected,
the resolution of the second-harmonic image is twice as high as
that of the fundamental frequency image.
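As an illustration of this factor-two resolution gain, one can compare the focal-spot radii, i.e. the first zeros, of the Bessel kernels $J_0(\omega r)$ and $J_0(2\omega r)$ that set the spot sizes at the two frequencies. A hedged stdlib sketch with the value $\omega=8$ used in the experiments (the bisection brackets are chosen by hand to isolate the first zero):

```python
import math

def j0(x, n=2000):
    """Bessel J0 via (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((j + 0.5) * h)) for j in range(n)) * h / math.pi

def first_zero(f, a, b, iters=80):
    """Bisection for a sign change of f; [a, b] must bracket exactly one zero."""
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(m) * fa > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

omega = 8.0
r_fund = first_zero(lambda r: j0(omega * r), 1e-6, 0.4)      # spot radius at omega
r_sh = first_zero(lambda r: j0(2.0 * omega * r), 1e-6, 0.2)  # spot radius at 2*omega
assert abs(r_fund / r_sh - 2.0) < 1e-6  # second harmonic: half the spot radius
```

Both radii are $j_{0,1}/(2\omega)$ and $j_{0,1}/(4\omega)$ respectively, where $j_{0,1}\approx 2.405$ is the first zero of $J_0$, hence the exact ratio of two.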
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{I1.eps}
\caption{\label{graphI} $I$ with $1$ illumination.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{J1.eps}
\caption{\label{graphJ} $J$ with $1$ illumination.}
\end{minipage}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{I4.eps}
\caption{\label{graphI4} $I$ with $4$ illuminations.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{J4.eps}
\caption{\label{graphJ4} $J$ with $4$ illuminations.}
\end{minipage}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{I8.eps}
\caption{\label{graphI8} $I$ with $8$ illuminations.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{J8.eps}
\caption{\label{graphJ8} $J$ with $8$ illuminations.}
\end{minipage}
\end{figure}
\begin{figure}[!h]
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{I32.eps}
\caption{\label{graphI32} $I$ with $32$ illuminations.}
\end{minipage}
\begin{minipage}{.45\linewidth}
\centering\includegraphics[width=\linewidth]{J32.eps}
\caption{\label{graphJ32} $J$ with $32$ illuminations.}
\end{minipage}
\end{figure}
\subsection{Statistical analysis}
\subsubsection{Stability with respect to medium noise}
Here we show numerically that the second-harmonic imaging is more
stable with respect to medium noise. In Figure~\ref{graphbruitIJ},
we plot the standard deviation of the error $\vert
z_{est}-z_r\vert $ where $z_{est}$ is the estimated location of
the reflector. For each level of medium noise we compute the error
over $120$ realizations of the medium, using the same parameters
as above. The imaging functional $J$ is clearly more robust than
$I$.
\begin{figure}[!h]
\centering\includegraphics[width=\linewidth]{bruitmilieulog.eps}
\caption{\label{graphbruitIJ} Standard deviation of the
localization error with respect to the medium noise level for
standard backpropagation (top) and second-harmonic image
(bottom).}
\end{figure}
\subsubsection{Effect of the volume of the particle}
We show numerically that the quality of the second-harmonic image
does not depend on the volume of the particle. We fix the medium
noise level ($\sigma_\mu =0.02$) and plot the standard deviation
of the error with respect to the volume of the particle
(Figure~\ref{graphvolumeIJ}). We can see that if the particle is
too small, the fundamental backpropagation algorithm cannot
differentiate the reflector from the medium and the main peak gets
buried in the speckle field. The volume of the particle does not
have much influence on the second-harmonic image quality.
\begin{figure}[!h]
\centering\includegraphics[width=\linewidth]{volumelog.eps}
\caption{\label{graphvolumeIJ} Standard deviation of the
localization error with respect to the reflector's volume (log
scale) for standard backpropagation (top) and second-harmonic
image (bottom).}
\end{figure}
\subsubsection{Stability with respect to measurement noise}
We compute the imaging functionals with a set of data
obtained without any medium noise and perturbed with
a Gaussian white noise for each of $8$ different illuminations.
For each noise level, we average the results over $100$ images.
Figure~\ref{graphmeasnoise} shows that both functionals have
similar behaviors.
\begin{figure}[!h]
\centering\includegraphics[width=\linewidth]{bruitmesure.eps}
\caption{\label{graphmeasnoise} Standard deviation of the
localization error with respect to measurement noise level for
standard backpropagation (top) and second-harmonic image
(bottom).}
\end{figure}
As mentioned before, in applications, the weakness of the SHG
signal will induce a much higher relative measurement noise than
in the fundamental data. Since the model we use for measurement
noise has a zero expectation, averaging measurements over
different illuminations can improve the stability significantly as
shown in Figure~\ref{graphmeasnoise10}, where the images have been
obtained with $16$ illuminations instead of $8$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\linewidth]{bruitmesurelog16.eps}
\caption{\label{graphmeasnoise10} Standard deviation of the localization error with respect to measurement
noise level, when averaged over $16$ illuminations of angles uniformly distributed between $0$ and
$2\pi$ for standard backpropagation (top) and second-harmonic image (bottom).}
\end{center}
\end{figure}
\section{Concluding remarks}
We have studied how second-harmonic imaging can be used to locate
a small reflector in a noisy medium, given asymptotic formulas for
the second-harmonic field, and investigated statistically the
behavior of the classical and second-harmonic backpropagation
functionals. We have proved that the second-harmonic backpropagation
algorithm is more stable with respect to medium noise. Our results can also be
extended to the case of multiple scatterers as long as they are
well-separated.
\section{Introduction}
Quantum groups, named after Drinfeld's seminal work \cite{dr}, are natural Hopf algebraic generalizations of usual groups, arising in several branches of mathematics. As in classical group theory, the problem of their classification is a fundamental one.
An important aspect of the classification problem for quantum groups is the determination of the quantum subgroups of the known quantum groups. Let us recall some significant contributions to this topic.
\begin{enumerate}
\item Podles \cite{po} was the first to consider this problem, and he classified the compact quantum subgroups of Woronowicz' quantum group $SU_{q}(2)$, for $q \in [-1,1] \setminus\{0\}$. For other approaches, see \cite{bn} (for the finite quantum subgroups when $q=-1$) or \cite{fst} (when $q \not=-1$).
\item The finite quantum subgroups of $GL_q(n)$ were classified by M\"uller \cite{mu}, for $q$ an odd root of unity. From this work arose in particular an infinite family of pairwise non-isomorphic Hopf algebras of the same dimension: this was one of the series of counterexamples to Kaplansky's tenth conjecture.
\item The work of M\"uller was subsequently generalized by Andruskiewitsch and Garcia in \cite{ag}, where they determined the quantum subgroups of $G_q$, with $G$ a connected, simply connected simple algebraic group and $q$ a root of unity of odd order.
\item Another generalization of M\"uller's work was provided by Garcia \cite{gar}, who studied the two-parameter deformations $GL_{\alpha, \beta}(n)$, and classified the quantum subgroups in the odd root of unity case.
\item The compact quantum subgroups of $SO_{-1}(3)$ were determined by Banica and the first author in \cite{bb09}: these are the compact quantum groups acting faithfully on the classical space consisting of $4$ points.
\item The compact quantum subgroups of $O_n^*$, the half-liberated orthogonal quantum groups from \cite{basp}, were determined by Dubois-Violette and the first author in \cite{bicdub}.
\end{enumerate}
From these works emerged several new interesting classes of quantum groups, and several hints of what the classification of quantum groups should be. The approaches in (2), (3) and (4) deal with non-semisimple quantum groups and do not treat the case $q=-1$, while this is certainly the most interesting case if we have semisimple finite quantum groups in mind.
The present paper is a contribution to the case $q=-1$: we determine the compact quantum subgroups of the compact quantum group $SU_{-1}(3)$, as follows.
\begin{theorem}\label{subSu3}
Let $G$ be a non-classical compact quantum subgroup of $SU_{-1}(3)$. Then one of the following statements holds.
\begin{enumerate}
\item $G$ is isomorphic to a $K_{-1}$, a twist at $-1$ of a compact subgroup $K \subset SU(3)$ containing the subgroup of diagonal matrices having $\pm 1$ as entries.
\item $G$ is isomorphic to a quantum subgroup of $U_{-1}(2)$.
\end{enumerate}
\end{theorem}
The quantum subgroups of $U_{-1}(2)$ can be determined by using techniques similar to those of Podles \cite{po}.
We shall not discuss this here. Note that it follows from Theorem \ref{subSu3} and its proof that if $G$ is a non-classical compact quantum subgroup of $SU_{-1}(3)$ acting irreducibly on $\C^3$,
then $G$ is isomorphic to a $K_{-1}$, a twist at $-1$ of a compact subgroup $K \subset SU(3)$ containing the subgroup of diagonal matrices having $\pm 1$ as entries, and acting irreducibly on $\C^3$. Thus for any quantum subgroup of $SU_{-1}(3)$ acting irreducibly on the fundamental representation, the tensor category of representations is symmetric (in Hopf algebra terms, the Hopf algebra $\rep(G)$ is cotriangular). This seems to be an interesting phenomenon, that does not hold in general (e.g. for the quantum group $U_{-1}(2)$).
As in \cite{bb09}, the starting point is that $SU_{-1}(3)$ is a twist at $-1$ of the classical group $SU(3)$ (a 2-cocycle deformation).
This furnishes a number of representation-theoretic tools, developed in Section 3, to study the $\cs^*$-algebra $C(SU_{-1}(3))$ and its quotients, which are used in an essential way to prove Theorem \ref{subSu3}. Note that the representation theory of twisted function algebras on finite groups is fully discussed in \cite{eg2}, with a precise description of the irreducible representations. However the fusion rules, which would lead to the full classification of the Hopf algebra quotients, are not discussed in \cite{eg2}, and we do not see any general method to compute them. What we get here in the case of $SU_{-1}(3)$ are some partial fusion rules, for some special representations of $C(SU_{-1}(3))$, which however are sufficiently generic to get the necessary information to classify the quantum subgroups.
The paper is organized as follows. Section 2 consists of preliminaries. In Section 3 we recall the twisting (2-cocycle deformation) procedure for Hopf algebras and develop the aforementioned representation-theoretic tools for representations of twisted $\cs^*$-algebras of functions. In Section 4 we briefly recall how the quantum group $SU_{-1}(2m+1)$ can be obtained by twisting, and Section 5 is devoted to the proof of Theorem \ref{subSu3}.
\bigskip
We would like to thank S.~Echterhoff for informative discussions.
\section{Preliminaries}
\subsection{Compact quantum groups} We first recall some basic facts concerning compact quantum groups. The book \cite{ks}
is a convenient reference for the topic of compact quantum groups,
and all the definitions we omit can be found there.
All algebras in this paper will be unital, and
$\otimes$ will denote the minimal tensor product of $\cs^*$-algebras
as well as the algebraic tensor product; this should cause no confusion.
\begin{definition}
A \textbf{Woronowicz algebra} is a $\cs^*$-algebra $A$ endowed with
a $*$-morphism
$\Delta : A \to A \otimes A$
satisfying the coassociativity condition and the cancellation law
$$\overline{\Delta(A)(A \otimes 1)} = A \otimes A
= \overline{\Delta(A)(1\otimes A)}.$$
The morphism $\Delta$ is called
the comultiplication of $A$.
\end{definition}
The category of
Woronowicz algebras is defined in the obvious way (see \cite{wa0} for details).
A commutative Woronowicz algebra is necessarily isomorphic with $C(G)$, the
algebra of continuous functions on a compact group $G$, unique up to isomorphism,
and the category of \textbf{compact quantum groups} is defined to be the category
dual to the category of Woronowicz algebras.
Hence to any Woronowicz algebra $A$ corresponds a unique compact quantum group
according to the heuristic formula $A = C(G)$.
Woronowicz's original definition for matrix compact quantum groups
\cite{wo} is still the most useful in concrete situations, and
we have the following fundamental result \cite{wo2}.
\begin{theorem}
Let $A$ be a $\cs^*$-algebra endowed with a $*$-morphism
$\Delta : A \to A \otimes A$. Then $A$ is a Woronowicz algebra if and only if
there exists a family of unitary matrices
$(u^\lambda)_{\lambda \in \Lambda} \in M_{d_\lambda}(A)$ satisfying the following three
conditions.
\begin{enumerate}
\item The $*$-subalgebra $A_0$ generated by the entries $u_{ij}^\lambda$
of the matrices $(u^\lambda)_{\lambda \in \Lambda}$ is dense in $A$.
\item For $\lambda \in \Lambda$ and $i,j \in \{1, \ldots, d_\lambda \}$,
one has $\Delta(u_{ij}^\lambda) = \sum_{k=1}^{d_{\lambda}}
u_{ik}^\lambda \otimes u_{kj}^\lambda$.
\item For $\lambda \in \Lambda$, the transpose matrix $(u^\lambda)^t$ is
invertible.
\end{enumerate}
\end{theorem}
In fact the $*$-algebra $A_0$ in the theorem is canonically defined, and is what we call a compact Hopf algebra (a CQG algebra in \cite{ks}): a Hopf $*$-algebra
having all its finite-dimensional comodules equivalent to unitary ones, or equivalently a Hopf $*$-algebra having a positive and faithful Haar integral (see \cite{ks} for details).
The counit and antipode of $A_0$, denoted respectively $\varepsilon$ and $S$,
are referred to as the counit and antipode of $A$.
The Hopf algebra $A_0$ is called the \textbf{algebra of representative functions}
on the compact quantum group $G$ dual to $A$, with another heuristic formula
$A_0 = \rep(G)$.
Conversely, starting from
a compact Hopf algebra, the universal
$\cs^*$-completion yields a Woronowicz algebra in the above sense: see the book
\cite{ks}. In fact, in general, there are possibly several different $\cs^*$-norms
on $A_0$, in particular the reduced one (obtained from the GNS-construction associated to the Haar integral), but we will not be concerned with this problem, the compact quantum groups considered in this paper being co-amenable.
Of course, any group-theoretic statement about a compact quantum group $G$ must be
interpreted in terms of the Woronowicz algebra $C(G)$ or of the Hopf $*$-algebra $\mathcal R(G)$. In particular,
as usual, a (compact) \textbf{quantum subgroup} $H \subset G$ corresponds to a surjective Woronowicz algebra
morphism $C(G)\to C(H)$, or to a surjective
Hopf $*$-algebra morphism $\rep(G)\to\rep(H)$.
\subsection{The quantum groups $U_{-1}(n)$ and $SU_{-1}(n)$} In this subsection we briefly recall the definition of the compact quantum groups $U_{-1}(n)$ and $SU_{-1}(n)$ \cite{wo88,koe,ro}.
\begin{definition}
The $*$-algebra $\rep(U_{-1}(n))$ is the universal $*$-algebra generated by variables $(u_{ij})_{1\leq i,j \leq n}$ with relations making the matrix $u=(u_{ij})$ unitary and
$$u_{ij}u_{kl} = (-1)^{\delta_{ik} + \delta_{jl}} u_{kl}u_{ij}, \ \forall i,j,k,l$$
The $\cs^*$-algebra $C(U_{-1}(n))$ is the enveloping $\cs^*$-algebra of $\rep(U_{-1}(n))$.
\end{definition}
The relations $u_{ij}^*u_{kl} = (-1)^{\delta_{ik} + \delta_{jl}} u_{kl}u_{ij}^*$ automatically hold in $\rep(U_{-1}(n))$
and $C(U_{-1}(n))$, hence the matrix $u^t$ is also unitary.
It follows that $\rep(U_{-1}(n))$ is a compact Hopf $*$-algebra, and hence that $C(U_{-1}(n))$ is a Woronowicz algebra, with comultiplication, counit and antipode defined by
$$\Delta(u_{ij}) = \sum_k u_{ik}\otimes u_{kj}, \
\varepsilon(u_{ij}) = \delta_{ij}, \ S(u_{ij}) = u_{ji}^*$$
The quantum determinant
$$D= \sum_{\sigma \in S_n} u_{1\sigma(1)} \cdots u_{n\sigma(n)} =\sum_{\sigma \in S_n} u_{\sigma(1)1} \cdots u_{\sigma(n)n} $$
is a unitary central group-like element of $\rep(U_{-1}(n))$.
\begin{definition}
The $*$-algebra $\rep(SU_{-1}(n))$ is the quotient of $\rep(U_{-1}(n))$ by the $*$-ideal generated by $D-1$, and the $\cs^*$-algebra $C(SU_{-1}(n))$ is the enveloping $\cs^*$-algebra of $\rep(SU_{-1}(n))$.
\end{definition}
It follows, since $D$ is group-like, that $\rep(SU_{-1}(n))$ is a compact Hopf $*$-algebra, and that $C(SU_{-1}(n))$ is a Woronowicz algebra, with comultiplication, counit and antipode defined by the same formulas as above.
The following Lemma will be used in Section 5.
\begin{lemma}\label{reduc}
For any $i\in \{1, \ldots , n+1\}$, there exists a surjective Hopf $*$-algebra map $\pi_i : \rep(SU_{-1}(n+1))\rightarrow \rep(U_{-1}(n))$ whose kernel is the Hopf $*$-ideal generated by the elements $u_{ki}$, $u_{ik}$, $k \not=i$. In particular, if $\pi : \rep(SU_{-1}(n+1))\twoheadrightarrow A$ is a surjective Hopf $*$-algebra map such that
for some fixed $i$ we have $\pi(u_{ki})=0=\pi(u_{ik})$ for $k \not=i$, then there exists a surjective Hopf $*$-algebra map $\rep(U_{-1}(n)) \twoheadrightarrow A$.
\end{lemma}
\begin{proof}
It follows from the definitions that there exists a Hopf $*$-algebra map $\pi_i$ such that $\pi_i(u_{ki})=0=\pi_i(u_{ik})$ for $k \not=i$, $\pi_i(u_{ii})=D^{-1}$, $\pi_i(u_{jk})=u_{jk}$ for $j,k<i$, $\pi_i(u_{jk}) =u_{j,k-1}$ for $j<i$ and $k>i$, $\pi_i(u_{jk})=u_{j-1,k}$ for $j>i$ and $k<i$, $\pi_i(u_{jk})=u_{j-1,k-1}$ for $j,k>i$. By definition $\pi_i$ vanishes on $I$, the $*$-ideal generated by the elements in the statement of the lemma, so induces a surjective $*$-algebra map $\overline{\pi_i} : \rep(SU_{-1}(n+1))/I\rightarrow \rep(U_{-1}(n))$, and it is not difficult to construct an inverse isomorphism to $\overline{\pi_i}$, and hence $I = {\rm Ker}(\pi_i)$. The last assertion is an immediate consequence of the first one.
\end{proof}
\subsection{Representations of $\cs^*$-algebras}
In this short subsection, we collect a few useful facts on representations of $*$-algebras and $\cs^*$-algebras.
If $A$ is a $*$-algebra, a representation of $A$ always means a Hilbert space representation of $A$, i.e. a $*$-algebra map $A \rightarrow \mathcal B(H)$ into the $*$-algebra of bounded operators on a Hilbert space $H$. As usual, the set of isomorphism classes of irreducible representations of $A$ is denoted by $\widehat{A}$. If $\rho, \pi$ are representations of $A$, we write $ \rho \prec \pi$ if $\rho$ is isomorphic to a sub-representation of $\pi$.
The following classical result will be a key tool. See e.g. \cite{dix} for a proof.
\begin{theorem}\label{extend}
Let $A \subset B$ be an inclusion of $\cs^*$-algebras, and let $\rho$ be an irreducible representation of $A$. Then there exists an irreducible representation $\pi$ of $B$ such that $\rho \prec \pi_{|A}$.
\end{theorem}
Let $A$ be a $*$-algebra. If $\rho : A \rightarrow \mathcal B(H)$ is a finite-dimensional representation, then the character of $\rho$ is the linear map $\chi = {\rm tr} \rho$, where ${\rm tr}$ is the usual trace. Two finite-dimensional representations of $A$ are isomorphic if and only if they have the same character.
\medskip
Now assume that $A$ is a Hopf $*$-algebra. The trivial representation is $\varepsilon$, the counit of $A$. Let $\rho : A \rightarrow \mathcal B(H)$ be a finite-dimensional representation of $A$. Recall that the dual representation $\rho^\vee: A \rightarrow \mathcal B(\overline{H})$ (where $\overline{H}$ is the conjugate Hilbert space of $H$) is defined by $\rho^\vee(a)(\overline{x})=\overline{\rho (S(a^*))(x)}$, for any $a\in A$ and $x \in H$. We have $\varepsilon \prec \rho\otimes \rho^\vee$, and when $\rho$ is irreducible, this property characterizes $\rho^\vee$ up to isomorphism.
\section{2-cocycle deformations}
We now recall the usual twisting (2-cocycle deformation) construction for Hopf algebras, which is dual to the theory initiated by Drinfeld, and developed by Doi \cite{do}. We also develop the representation theoretic machinery needed to study the quotients of a twisting of a Hopf algebra of representative functions on a compact group.
Let $Q$ be a Hopf $*$-algebra. We use Sweedler's notation
$\Delta(x) = x_{1} \otimes x_{2}$. Recall (see e.g. \cite{do})
that a \textbf{unitary 2-cocycle} on $Q$ is a convolution invertible linear map
$\sigma : Q \otimes Q \longrightarrow \C$ satisfying
$$\sigma(x_{1},y_{1}) \sigma(x_{2}y_{2},z) =
\sigma(y_{1},z_{1}) \sigma(x,y_{2} z_{2})$$
$$\sigma^{-1}(x,y)=\overline{\sigma(S(x)^*,S(y)^*)}$$
and $\sigma(x,1) = \sigma(1,x) = \varepsilon(x)$, for $x,y,z \in Q$.
Following \cite{do} and \cite{sc1}, we associate various $*$-algebras to
a unitary 2-cocycle.
$\bullet$ First consider the $*$-algebra
$_{\sigma} \! Q$. As a vector space we have $_{\sigma} \! Q = Q$ and the product and involution
of $_{\sigma}Q$ are defined to be
$$\{x\} \{y\} = \sigma(x_{1}, y_{1}) \{x_{2} y_{2}\},
\quad
\{x\}^*=\sigma^{-1}(x_2^*, S(x_1)^*)\{x_3^*\}, \quad x,y \in Q,$$
where an element $x \in Q$ is denoted $\{x\}$, when viewed as an element
of $_{\sigma} \!Q$.
$\bullet$ We also have the $*$-algebra $Q_{\sigma^{-1}}$. As a vector space we have
$Q_{\sigma^{-1}} = Q$ and the product and involution of
$Q_{\sigma^{-1}}$ are defined to be
$$\langle x \rangle \langle y \rangle = \sigma^{-1}(x_{2}, y_{2}) \langle x_{1} y_{1} \rangle, \quad
\langle x\rangle^*=\sigma(S(x_3)^*, x_2^*)\langle x_1^*\rangle,
\quad x,y \in Q,$$
where an element $x \in Q$ is denoted $\langle x \rangle$, when viewed as an element
of $Q_{\sigma^{-1}}$. The unitary cocycle condition ensures that $_{\sigma} \! Q$
and $Q_{\sigma^{-1}}$ are associative $*$-algebras with $1$ as a unit. The algebras $_{\sigma} \! Q$
and $Q_{\sigma^{-1}}$ are in fact anti-isomorphic, see e.g. \cite{bi03}.
If $Q$ is a compact Hopf algebra, then the Haar integral on $Q$, viewed as a linear map on $_{\sigma}Q$ and $Q_{\sigma^{-1}}$, is still a faithful state (this can be seen by using the orthogonality relations \cite{wo, ks}).
We denote by $\cs^*_r(_{\sigma}Q)$ and $\cs_r^*(Q_{\sigma^{-1}})$ the respective $\cs^*$-completions obtained from the GNS-constructions associated to the Haar integral.
$\bullet$ Finally we have the
Hopf $*$-algebra $Q^{\sigma} = {_{\sigma} \! Q}\!_{\sigma^{-1}}$.
As a coalgebra $Q^{\sigma}
= Q$. The product and involution of $Q^{\sigma}$ are defined to be
$$[x] [y]= \sigma(x_{1}, y_{1})
\sigma^{-1}(x_{3}, y_{3}) [x_{2} y_{2}], \quad [x]^*=\sigma(S(x_5)^*, x_4^*) \sigma^{-1}(x_2^*, S(x_1)^*) [x_3^*],
\quad x,y \in Q,$$
where an element $x \in Q$ is denoted $[x]$, when viewed as an element
of $Q^{\sigma}$,
and we have the following formula for the antipode of
$Q^{\sigma}$:
$$S^{\sigma}([x]) = \sigma(x_{1}, S(x_{2}))
\sigma^{-1}(S(x_{4}), x_{5})[ S(x_{3})].$$
The Hopf algebras $Q$ and $Q^{\sigma}$ have equivalent tensor categories of comodules
\cite{sc1}. If $Q$ is a compact Hopf algebra, then $Q^{\sigma}$ is also a compact Hopf algebra, the Haar integral on $Q^\sigma$ being the one of $Q$, and the $\cs^*$-tensor categories of unitary comodules over $Q$ and $Q^\sigma$ are equivalent \cite{bdv}. If $Q= \rep(G)$, the algebra of representative functions on a compact group $G$, we denote by $C(G)^\sigma$ the enveloping $\cs^*$-algebra of $\rep(G)^\sigma$.
\medskip
Very often unitary 2-cocycles are induced by simpler quotient Hopf $*$-algebras (quantum subgroups).
More precisely let $\pi : Q \to L$ be a Hopf $*$-algebra surjection and let
$\sigma : L \otimes L \to \C$ be a unitary 2-cocycle on $L$.
Then $\sigma_{\pi} = \sigma \circ (\pi \otimes \pi) : Q \otimes Q \to \C$ is a unitary 2-cocycle.
In what follows the cocycle $\sigma_\pi$ will simply be denoted by $\sigma$; this should not cause any confusion.
We first record the following elementary result from \cite{bb09}.
\begin{proposition}\label{corespsubgroup}
Let $\pi : Q \to L$ be a Hopf $*$-algebra surjection and let
$\sigma : L \otimes L \to \C$ be a unitary $2$-cocycle. Denote by $[\pi]:Q^\sigma \to L^\sigma$ the map $[x]\mapsto[\pi(x)]$.
Then there is a bijection between the following data.
\begin{enumerate}
\item Surjective Hopf $*$-algebra maps $ f : Q \to R$ such that there exists
a Hopf $*$-algebra map $g : R \to L$ satisfying $g \circ f = \pi$.
\item Surjective Hopf $*$-algebra maps $ f' : Q^{\sigma} \to R'$ such that there exists
a Hopf $*$-algebra map $g' : R' \to L^{\sigma}$ satisfying $g' \circ f' = [\pi]$.
\end{enumerate}
\end{proposition}
Similarly, the following result is essentially contained in \cite{bb09}.
\begin{proposition}\label{gene}
Let $\pi : Q \to L$ be a Hopf $*$-algebra surjection and let
$\sigma : L \otimes L \to \C$ be a unitary $2$-cocycle on $L$. We have an injective
$*$-algebra map
\begin{align*}
\theta : Q^{\sigma} & \longrightarrow Q \otimes {_\sigma \! L} \otimes L_{\sigma^{-1}} \\
[x] & \longmapsto x_{2} \otimes \{\pi(x_1)\} \otimes \langle \pi(x_3) \rangle
\end{align*}
that induces an isomorphism to the subalgebra of coinvariant elements
$$Q^{\sigma} \simeq (Q \otimes {_\sigma \! L} \otimes L_{\sigma^{-1}})^{{\rm co}(L^{\rm cop} \otimes L)}$$
where the respective right coactions of $L^{\rm cop} \otimes L$ on $Q$ and ${_\sigma \! L} \otimes L_{\sigma^{-1}}$ are defined by
\begin{align*}
Q & \rightarrow Q \otimes L^{\rm cop} \otimes L \quad \quad \quad \quad \quad {_\sigma \! L} \otimes L_{\sigma^{-1}} \rightarrow {_\sigma \! L} \otimes L_{\sigma^{-1}} \otimes L^{\rm cop} \otimes L \\
x & \mapsto x_2 \otimes \pi(x_1) \otimes \pi(x_3) \quad \quad \{\pi(x)\} \otimes \langle \pi(y) \rangle \mapsto \{\pi(x_1)\} \otimes \langle \pi(y_2)\rangle \otimes S^{-1}\pi(x_2) \otimes S\pi(y_1)
\end{align*}
If moreover $Q$ and $L$ are cosemisimple and $h_Q$ and $h_L$ denote their respective Haar integrals, we have $(h_Q \otimes h_L \otimes h_L)\theta =h_Q$.
\end{proposition}
\begin{proof}
It follows from the definitions that $\theta$ is a $*$-algebra map and that $({\rm id}_Q \otimes \varepsilon \otimes \varepsilon)\theta = {\rm id}_{Q^\sigma}$, hence $\theta$ is injective. It is a direct verification to check that
$\theta(Q^\sigma) \subset (Q \otimes {_\sigma \! L} \otimes L_{\sigma^{-1}})^{{\rm co}(L^{\rm cop} \otimes L)}$, and that $\theta$ induces the announced isomorphism, with inverse $ ({\rm id}_Q \otimes \varepsilon \otimes \varepsilon)$. The last assertion is immediate.
\end{proof}
We now specialize to the case $Q=\rep(G)$, the algebra of representative functions on a classical compact group $G$.
\begin{proposition}\label{fixcocycle}
Let $G$ be a compact group, let $\Gamma \subset G$ be a closed subgroup and let $\sigma$ be a unitary $2$-cocycle on $\mathcal R(\Gamma)$. Put $B= \cs^*_r(_{\sigma}\rep(\Gamma)) \otimes \cs_r^*(\rep(\Gamma)_{\sigma^{-1}})$. Then there exists a $\cs^*$-algebra embedding
$$\theta : C(G)^\sigma \longrightarrow C(G) \otimes B$$
inducing a $\cs^*$-algebra isomorphism
$$C(G)^\sigma \simeq (C(G) \otimes B)^{\Gamma^{\rm op} \times \Gamma}$$
for some natural actions of $\Gamma^{\rm op} \times \Gamma$ on $G$ and $B$.
\end{proposition}
\begin{proof}
The restriction map $\mathcal R(G) \rightarrow \mathcal R(\Gamma)$ enables us to use the previous proposition.
The previous injective $*$-algebra map $\theta : \mathcal R(G)^{\sigma} \rightarrow \rep(G) \otimes {_\sigma \! \rep(\Gamma)} \otimes \rep(\Gamma)_{\sigma^{-1}}$ induces a $*$-algebra map $C(G)^\sigma \longrightarrow C(G) \otimes B$, still denoted $\theta$ (recall that $C(G)^\sigma$ is the enveloping $\cs^*$-algebra of $\rep(G)^\sigma$). The co-amenability of $\mathcal R(G)^\sigma$ \cite{ba99} and the last observation in the previous proposition show that $\theta$ is injective at the $\cs^*$-algebra level. The coactions of the previous proposition induce actions of $\Gamma^{\rm op} \times \Gamma$ on $\mathcal R(G)$ and on ${_\sigma \! \rep(\Gamma)} \otimes \rep(\Gamma)_{\sigma^{-1}}$, and hence on $C(G)$ and on $B$. We have, by the previous proposition, an isomorphism $\rep(G)^{\sigma} \simeq (\rep(G) \otimes {_\sigma \! \rep(\Gamma)} \otimes \rep(\Gamma)_{\sigma^{-1}})^{\Gamma^{\rm op} \times \Gamma}$, and hence, since $(\rep(G) \otimes {_\sigma \! \rep(\Gamma)} \otimes \rep(\Gamma)_{\sigma^{-1}})^{\Gamma^{\rm op} \times \Gamma}$ is dense in $(C(G) \otimes B)^{\Gamma^{\rm op} \times \Gamma}$, an isomorphism
$$C(G)^\sigma \simeq (C(G) \otimes B)^{\Gamma^{\rm op} \times \Gamma}$$
This gives the announced result.
\end{proof}
\begin{remark}{\rm
The right action of $\Gamma^{\rm op} \times \Gamma$ on $G$ in the previous result
is given by
\begin{align*}
G \times (\Gamma^{\rm op} \times \Gamma) &\longrightarrow G \\
(g, (r,s)) & \longmapsto rgs
\end{align*}
The $C^*$-algebra $(C(G) \otimes B)^{\Gamma^{\rm op} \times \Gamma}$ is naturally identified with
$C(G \times_{\Gamma^{\rm op} \times \Gamma} B)$, the algebra of continuous functions $f : G \rightarrow B$
such that $f(g.(r,s))=(r,s)^{-1}.f(g)$, $\forall g \in G$, $\forall (r,s) \in \Gamma^{\rm op} \times \Gamma$.
Thus it follows that $C(G)^\sigma$ is (the algebra of sections on) a continuous bundle of $\cs^*$-algebras over the orbit space $G/(\Gamma^{\rm op} \times \Gamma) \simeq \Gamma \setminus G/ \Gamma$, with fiber at an orbit $\Gamma g\Gamma$ the fixed point algebra $B^{(\Gamma^{\rm op} \times \Gamma)_g}$, where $(\Gamma^{\rm op}\times\Gamma)_g= \{(r,s) \in \Gamma \times \Gamma, \ rgs=g\}$: see e.g. Lemma 2.2 in \cite{ee}. Hence the representation theory of $C(G)^{\sigma}$ is determined by the representation theory of the fibres $B^{(\Gamma^{\rm op} \times \Gamma)_g}$.
}
\end{remark}
The following result will be our main tool to study the representations and quotients of a Woronowicz algebra of type $C(G)^\sigma$.
\begin{proposition}\label{maintool}
Let $G$ be a compact group, let $\Gamma \subset G$ be a closed subgroup and let $\sigma$ be a unitary $2$-cocycle on $\mathcal R(\Gamma)$. Then for each $g \in G$ we have a $*$-algebra map
\begin{align*}
\theta_g : C(G)^{\sigma} & \longrightarrow \cs_r^*({_\sigma \! \mathcal R(\Gamma})) \otimes \cs_r^*(\mathcal R(\Gamma)_{\sigma^{-1}}) \\
\mathcal R(G)^{\sigma} \ni [f] & \longmapsto f_{2}(g) \{ {f_1}_{|\Gamma}\} \otimes \langle {f_3}_{|\Gamma} \rangle \in {_\sigma \! \mathcal R(\Gamma}) \otimes \mathcal R(\Gamma)_{\sigma^{-1}}
\end{align*}
If $\Gamma$ is finite, then $\dim({\rm Im}(\theta_g))= |\Gamma g \Gamma|$.
Assume moreover that
${_\sigma \! \mathcal R(\Gamma})$ and $\mathcal R(\Gamma)_{\sigma^{-1}}$ are full matrix algebras; then $\theta_g$ defines a representation of $\mathcal R(G)^{\sigma}$ of dimension $|\Gamma|$.
\begin{enumerate}
\item Every irreducible representation of $C(G)^\sigma$ is isomorphic to a subrepresentation of $\theta_g$ for some $g \in G$. In particular every irreducible representation of $C(G)^\sigma$ is finite-dimensional and has dimension at most $|\Gamma|$.
\item The representation $\theta_g$ is irreducible if and only if $|\Gamma g \Gamma|= |\Gamma|^2$, if and only if $\#\{(s,t) \in \Gamma \times \Gamma \ | \ sgt=g\}=1$. Any irreducible representation of dimension $| \Gamma|$ of $C(G)^{\sigma}$ is isomorphic to an irreducible representation $\theta_g$ as above.
\item For $g,h \in G$, we have $\theta_g \simeq \theta_h \iff \Gamma g \Gamma = \Gamma h \Gamma$.
\item For $g,h \in G$, we have $\theta_g \otimes \theta_h \simeq \oplus_{s \in \Gamma} \theta_{gsh}$.
\item Assume furthermore that $\Gamma$ is abelian. Then each $s \in \Gamma$ defines a 1-dimensional representation $\varepsilon_s$ of $C(G)^\sigma$, and for $s \in \Gamma$, we have $\theta_s\simeq \oplus_{t\in \Gamma} \varepsilon_t$.
\end{enumerate}
\end{proposition}
\begin{proof}
The representations $\theta_g$ are defined using the previous embedding $\theta$, by $\theta_g= ({\rm ev}_g \otimes {\rm id} \otimes {\rm id})\theta$, where ${\rm ev}_g$ is the evaluation at $g$.
We assume now that $\Gamma$ is finite. As a linear space, we view $\cs_r^*({_\sigma \! \mathcal R(\Gamma})) \otimes \cs_r^*(\mathcal R(\Gamma)_{\sigma^{-1}})$ as $C(\Gamma \times \Gamma)$.
Consider the continuous linear map
\begin{align*}
\theta'_g : C(G) &\longrightarrow C(\Gamma \times \Gamma) \\
f &\longmapsto ( (s,t) \mapsto f(sgt))
\end{align*}
For $f \in \mathcal R(G)$, we have $\theta'_g(f)=\theta_g([f])$, hence $\theta'_g(\mathcal R(G))=\theta_g(\mathcal R(G)^\sigma)$ and $\theta'_g(C(G))=\theta_g(C(G)^\sigma)$ by the density of $\mathcal R(G)$ and the finite-dimensionality of the target space.
We have ${\rm Ker}(\theta'_g) =\{f \in C(G) \ | \ f_{|\Gamma g\Gamma}=0\}=I$ and since $\theta'_g(C(G)) \simeq C(G)/I\simeq C(\Gamma g \Gamma)$, we have $\dim(\theta'_g(C(G)))= |\Gamma g \Gamma|=\dim(\theta_g(C(G)^\sigma))$.
Assume now that
${_\sigma \! \mathcal R(\Gamma})$ and $\mathcal R(\Gamma)_{\sigma^{-1}}$ are full matrix algebras.
By counting dimensions, ${_\sigma \! \mathcal R(\Gamma}) \otimes \mathcal R(\Gamma)_{\sigma^{-1}} \cong M_{|\Gamma|}(\C)$.
The irreducible representations of $C(G) \otimes {_\sigma \! \mathcal R(\Gamma}) \otimes \mathcal R(\Gamma)_{\sigma^{-1}}$ all are of the form ${\rm ev}_g \otimes {\rm id} \otimes {\rm id}$, and since
$\theta$ defines an embedding $C(G)^\sigma \hookrightarrow C(G) \otimes {_\sigma \! \mathcal R(\Gamma}) \otimes \mathcal R(\Gamma)_{\sigma^{-1}}$, it follows from Theorem \ref{extend} that any irreducible representation of $C(G)^\sigma$ is isomorphic to a subrepresentation of some $\theta_g$, and hence is finite-dimensional of dimension $\leq |\Gamma|$. This proves (1). The matrix representation $\theta_g$ is irreducible if and only if $\theta_g$ is surjective, if and only if $|\Gamma g \Gamma|=\dim({\rm Im}(\theta_g))= |\Gamma|^2$, and this proves (2).
Consider now the linear map
\begin{align}
\label{eq:character}
\chi'_g : C(G) & \longrightarrow \C \\
f & \mapsto \frac{1}{|\Gamma|} \sum_{s,t \in \Gamma}f(sgt) \nonumber
\end{align}
Let $\chi_g$ be the character of $\theta_g$. Let us check that $\chi_g([f])=\chi'_g(f)$ for any $f \in \mathcal R(G)$. By the density of $\mathcal R(G)$ and $\mathcal R(G)^\sigma$, this will show that for $g,h \in G$, we have $\chi_g=\chi_h \iff \chi'_g=\chi'_h$.
Consider the normalized Haar integral $h : C(\Gamma) \rightarrow \C$, $f \mapsto \frac{1}{|\Gamma|} \sum_{s \in \Gamma}f(s)$. Then $h$, viewed as a linear map on
${_\sigma \! \mathcal R(\Gamma})$, is still a trace since it is invariant under the natural ergodic action of the finite group $\Gamma$ on the matrix algebra ${_\sigma \! \mathcal R(\Gamma})$, and hence we have $h=\frac{1}{\sqrt{|\Gamma|}} {\rm tr}$, where ${\rm tr}$ is the usual trace. Thus we have, for $f \in \mathcal R(G)$,
\begin{align*}
\chi_g([f])=({\rm tr} \otimes {\rm tr})\theta_g ([f]) &= |\Gamma|(h \otimes h)\theta_g([f])= |\Gamma|(h\otimes h)( f_{2}(g) \{f_{1_{|\Gamma}} \} \otimes \langle f_{3_{|\Gamma}} \rangle) \\
& = \frac{1}{|\Gamma|} \sum_{s,t \in \Gamma} f_1(s) f_2(g) f_3(t) = \frac{1}{|\Gamma|} \sum_{s,t \in \Gamma}
f(sgt) = \chi'_g(f)
\end{align*}
Let $g,h \in G$. If $\Gamma g \Gamma=\Gamma h \Gamma$, then $\chi'_g=\chi'_h$, and hence $\chi_g=\chi_h$, and it follows that $\theta_g \simeq \theta_h$.
Conversely, assume that $\Gamma g \Gamma\not=\Gamma h \Gamma$, and let $f \in C(G)$ be such that
$f_{|\Gamma g \Gamma}=0$ and $f_{|\Gamma h \Gamma}=1$. We have $\chi'_g(f)=0$ and $\chi'_h(f)=1$: this shows that $\chi_g \not=\chi_h$ and hence that $\theta_g$ and $\theta_h$ are not isomorphic.
This proves (3).
For $g,h \in G$, let us show that $(\chi_g \otimes \chi_h)\Delta = \sum_{s \in \Gamma} \chi_{gsh}$. This will prove (4). For $f$ in $\mathcal R(G)$, we have
\begin{align*}(\chi_g \otimes \chi_h)\Delta([f]) &= \chi_g([f_1])\chi_h([f_2])=\frac{1}{|\Gamma|^2}\sum_{r,s,t,u \in \Gamma}f_1(rgs)f_2(tsu) \\
& = \frac{1}{|\Gamma|^2}\sum_{r,s,t,u \in \Gamma} f(rgsthu) = \frac{1}{|\Gamma|}\sum_{r,s,u \in \Gamma} f(rgshu)
= \sum_{s \in \Gamma} \chi_{gsh}([f])
\end{align*}
and we have the result by density of $\mathcal R(G)^\sigma$ in $C(G)^\sigma$.
Assume finally that $\Gamma$ is abelian. Then $\rep(\Gamma)$ is cocommutative and $\rep(\Gamma)^\sigma=\rep(\Gamma)$. For $s \in \Gamma$, the $*$-algebra map $\varepsilon_s : \rep(G)^\sigma \rightarrow \C$ is obtained by composing the restriction $\mathcal R(G)^\sigma \rightarrow \rep(\Gamma)^\sigma=\rep(\Gamma)$ with the evaluation at $s$. For $s \in \Gamma$ and $f$ in $\rep(G)$, we have
$$\chi_s([f])=\frac{1}{|\Gamma|}\sum_{r,t \in \Gamma} f(rst) = \frac{1}{|\Gamma|}\sum_{r,t \in \Gamma} \varepsilon_{rst}([f])=\sum_{r \in \Gamma}\varepsilon_r([f])$$
and again we get the result by density of $\mathcal R(G)^\sigma$ in $C(G)^\sigma$.
\end{proof}
We arrive at a useful criterion ensuring that a quotient of a twisted function algebra on a compact group is still a twisted function algebra on a compact subgroup.
\begin{theorem}\label{critsub}
Let $G$ be a compact group and let $\sigma$ be a unitary 2-cocycle on $\rep(G)$ induced by a finite abelian subgroup $\Gamma \subset G$ such that ${_\sigma \! \mathcal R(\Gamma})$ is a full matrix algebra. Let $A$ be a Woronowicz algebra quotient of $C(G)^\sigma$. Then all the irreducible representations of the $\cs^*$-algebra $A$ have dimension $\leq |\Gamma|$, and if $A$ has an irreducible representation of dimension $|\Gamma|$, then there exists a compact subgroup $\Gamma \subset K \subset G$ such that $A \simeq C(K)^\sigma$.
\end{theorem}
\begin{proof}
We are in the situation of Proposition \ref{maintool}, since the algebras ${_\sigma \! \mathcal R(\Gamma})$ and $\mathcal R(\Gamma)_{\sigma^{-1}}$ are anti-isomorphic. Thus if $\rho$ is an irreducible representation of $A$ of dimension $|\Gamma|$, then $\rho\pi$ is also an irreducible representation of $C(G)^\sigma$ (with $\pi : C(G)^\sigma \rightarrow A$ being the given quotient map), and so there exists $g \in G$ such that $\rho\pi\simeq \theta_g$. That is, $\theta_g$ factors through a representation of $A$. The isomorphisms from Proposition \ref{maintool}
$$\theta_g \otimes \theta_{g^{-1}} \simeq \oplus_{s \in \Gamma} \theta_{gsg^{-1}}\simeq \theta_{1} \oplus ( \oplus_{s \in \Gamma, s\not=1} \theta_{gsg^{-1}})\simeq
(\oplus_{s \in \Gamma}\varepsilon_s)\oplus (\oplus_{s \in \Gamma, s\not=1} \theta_{gsg^{-1}}) $$
show that $\theta_{g^{-1}}$ is the dual of the representation $\theta_g$ of $C(G)^{\sigma}$. Thus, $\theta_{g^{-1}}$ factors through a representation of $A$, as do all the simple constituents of $\theta_g \otimes \theta_{g^{-1}}$. In particular, each $\varepsilon_s$, $s \in \Gamma$, defines a representation of $A$, and we get a surjective $*$-algebra map $A \rightarrow \rep(\Gamma)$. We conclude by Proposition \ref{corespsubgroup}.
\end{proof}
\section{Application to $SU_{-1}(2m+1)$ and $U_{-1}(2m+1)$}
From now on we assume that $n=2m+1$ is odd.
We recall how the quantum groups $SU_{-1}(2m+1)$ and $U_{-1}(2m+1)$ can be obtained by 2-cocycle deformation, using a 2-cocycle induced from the group $\mathbb Z_2^{2m}$, and then use the results of the previous section to get information on their quantum subgroups.
We denote by $\mathbb Z_2$ the cyclic group on two elements, and we use the identification
$$\mathbb Z_2^{2m} = \langle t_1, \ldots , t_{2m+1} \ | \ t_it_j=t_jt_i, \ t_1^2=\cdots = t_{2m+1}^2= 1= t_1 \cdots t_{2m+1} \rangle$$
Let $\sigma : \mathbb Z_2^{2m} \times \mathbb Z_2^{2m} \rightarrow \{\pm 1\}$ be the unique bicharacter such that
$$\sigma(t_i,t_j)=
-1 = -\sigma(t_j,t_i) \ \text{for $1\leq i<j \leq 2m$}$$
$$\sigma(t_i,t_i)=(-1)^m \ \text{for $1 \leq i\leq 2m+1$}$$
$$\sigma(t_i,t_{2m+1})= (-1)^{m-i} = -\sigma(t_{2m+1},t_i) \ \text{ for $1\leq i\leq 2m$}
$$
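For $m=1$ one can check by hand that these prescriptions extend to a well-defined bicharacter on $\mathbb Z_2^2$. As an illustration, here is a short numerical verification; the encoding of $\sigma$ by a matrix $M$ acting on exponent vectors is our own bookkeeping device, not part of the text.

```python
from itertools import product

# Elements of Z_2^2 (the case m = 1) encoded as exponent vectors (x1, x2),
# with t3 = t1 * t2.  We extend the stated values on generators
# bimultiplicatively: sigma(x, y) = (-1)^(sum_ij M_ij x_i y_j).
# This particular M is just one convenient way to package the table of values.
M = [[1, 1], [0, 1]]

def sigma(x, y):
    return (-1) ** sum(M[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

def mul(x, y):  # group law of Z_2^2
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

t1, t2 = (1, 0), (0, 1)
t3 = mul(t1, t2)  # so that t1 t2 t3 = 1 in the presentation

# The values stated in the text, for m = 1:
assert sigma(t1, t2) == -1 == -sigma(t2, t1)
assert sigma(t1, t1) == sigma(t2, t2) == sigma(t3, t3) == -1   # (-1)^m
assert sigma(t1, t3) == 1 == -sigma(t3, t1)                    # (-1)^(m-1)
assert sigma(t2, t3) == -1 == -sigma(t3, t2)                   # (-1)^(m-2)

# A bicharacter is automatically a 2-cocycle; check the identity anyway.
G = list(product([0, 1], repeat=2))
for x, y, z in product(G, repeat=3):
    assert sigma(x, y) * sigma(mul(x, y), z) == sigma(y, z) * sigma(x, mul(y, z))
print("bicharacter and cocycle checks passed")
```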
It is well-known that the twisted group algebra $\C_\sigma \mathbb Z_2^{2m}$ is isomorphic to the matrix algebra
$M_{2^{m}}(\C)$.
There exists a surjective Hopf $*$-algebra morphism
\begin{align*}
\pi : \mathcal R(SU(2m+1)) &\rightarrow \C \mathbb Z_2^{2m} \\
u_{ij} & \longmapsto \delta_{ij}t_{i}
\end{align*}
induced by the restriction of functions to $\Gamma$, the subgroup of $SU(2m+1)$ formed by diagonal matrices having $\pm 1$ as entries, composed with the Fourier transform $\rep(\Gamma) \simeq \C \widehat{\Gamma} \simeq \C \mathbb Z_2^{2m}$. Thus we may form the twisted Hopf algebra $\mathcal R(SU(2m+1))^\sigma$, and it is not difficult to check that there exists a surjective Hopf $*$-algebra map $\mathcal R(SU_{-1}(2m+1)) \rightarrow \mathcal R(SU(2m+1))^\sigma$, $u_{ij} \mapsto [u_{ij}]$, which is known to be an isomorphism (there are several ways to show this, a simple one being to invoke the presentation Theorem 3.5 in \cite{gkm}). Hence we have $C(SU_{-1}(2m+1)) \simeq C(SU(2m+1))^\sigma$, with $\sigma$ induced from the subgroup $\Gamma \simeq \mathbb Z_2^{2m}$, and we are in the framework of Theorem \ref{critsub}. Similarly $C(U_{-1}(2m+1)) \simeq C(U(2m+1))^\sigma$.
If $K$ is a compact subgroup of $SU(2m+1)$ with $\Gamma \subset K$, we denote by $K_{-1}$ the compact quantum group corresponding to the Woronowicz algebra $C(K)^\sigma$. With this language, the following result is an immediate consequence of Theorem \ref{critsub}.
\begin{theorem}\label{subsu(2m+1)}
Let $G$ be a compact quantum subgroup of $SU_{-1}(2m+1)$. Then all the irreducible representations of the $\cs^*$-algebra $C(G)$ have dimension $\leq 4^m$, and if $C(G)$ has an irreducible representation
of dimension $4^{m}$, then there exists a compact subgroup $\Gamma \subset K \subset SU(2m+1)$ such that $G \simeq K_{-1}$.
\end{theorem}
A similar statement holds as well with $SU_{-1}(2m+1)$ replaced by $U_{-1}(2m+1)$.
\section{Quantum subgroups of $SU_{-1}(3)$}
This section is devoted to the proof of Theorem \ref{subSu3}. We first need some preliminary results, and we begin by fixing some notation.
For a permutation $\nu \in S_3$, we put
$$SU(3)^{\nu}= \{ g=(g_{ij}) \in SU(3) \ | \ g_{ij}=0 \ {\rm if} \ \nu(j)\not=i\}$$
and also
$$SU(3)^{\rm \Sigma} = \cup_{\nu \in S_3} SU(3)^{\nu}.$$
For $g \in SU(3)^\Sigma$, we denote by $\nu_g$ the unique element of $S_3$ such that $g \in SU(3)^{\nu_g}$.
The following result is easily verified (and has an obvious generalization for any $n$).
\begin{lemma}\label{1rep}
Any element $g =(g_{ij}) \in SU(3)^\Sigma$ defines a $*$-algebra map $\varepsilon_g : C(SU_{-1}(3)) \rightarrow \C$ such that $\varepsilon_g(u_{ij})=\epsilon(\nu_g)g_{ij}$ (where $\epsilon(\nu_g)$ is the signature of $\nu_g$). Conversely any $1$-dimensional representation of $C(SU_{-1}(3))$ arises in this way.
\end{lemma}
As in the previous section, the subgroup of $SU(3)$ formed by diagonal matrices having $\pm 1$ as entries
is denoted $\Gamma$.
If $g\in\Gamma$, then $\varepsilon_g$ is of course the representation of the same name from Proposition \ref{maintool}.
We denote by
$SU(3)_{\rm reg}$ the subset of matrices in $SU(3)$ for which there exists a row or a column having no zero coefficient.
Recall from Section 4 and Proposition \ref{maintool} that each $g \in SU(3)$ defines a representation
$$\theta_g : C(SU_{-1}(3)) \longrightarrow \C_\sigma\Gamma \otimes \C_\sigma\Gamma \simeq M_2(\C) \otimes M_2(\C) \simeq M_4(\C)$$
The twisted group algebra $\C_\sigma\Gamma$ is presented by generators $T_1$, $T_2$, $T_3$ and relations $T_1^2=-1=T_2^2=T_3^2$, $1=T_{1}T_{2}T_3$, $T_iT_j=-T_jT_i$ if $i \not=j$
(where in the notation of the previous sections, $T_i= \{t_i\}=\langle t_i\rangle$).
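One concrete model of this presentation (an illustrative choice, not the only possible normalization) takes $T_k = i\sigma_k$, with $\sigma_1, \sigma_2, \sigma_3$ the Pauli matrices. The following sketch verifies the relations and the fact that these elements span $M_2(\C)$, confirming $\C_\sigma\Gamma \simeq M_2(\C)$ in this model.

```python
import numpy as np

# T_k = i * (k-th Pauli matrix): one concrete realization of the presentation.
I2 = np.eye(2)
T = [1j * np.array(P) for P in ([[0, 1], [1, 0]],
                                [[0, -1j], [1j, 0]],
                                [[1, 0], [0, -1]])]

for k in range(3):
    assert np.allclose(T[k] @ T[k], -I2)                # T_k^2 = -1
for j in range(3):
    for k in range(3):
        if j != k:
            assert np.allclose(T[j] @ T[k], -T[k] @ T[j])  # T_j T_k = -T_k T_j
assert np.allclose(T[0] @ T[1] @ T[2], I2)              # T_1 T_2 T_3 = 1

# {1, T_1, T_2, T_3} is a linear basis of M_2(C).
basis = np.stack([I2, *T]).reshape(4, 4)
assert np.linalg.matrix_rank(basis) == 4
print("relations hold and C_sigma(Gamma) spans M_2(C) in this model")
```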
With this notation, the representation $\theta_g$ ($g \in SU(3)$) has the following form
\begin{align*}
\theta_{g} : C(SU_{-1}(3)) &\longrightarrow \C_\sigma\Gamma \otimes \C_\sigma\Gamma \\
u_{ij} &\longmapsto g_{ij} T_i \otimes T_j
\end{align*}
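These formulas can be checked numerically. Working in the concrete model $T_k = i\sigma_k$ (Pauli matrices, one possible realization of $\C_\sigma\Gamma \simeq M_2(\C)$), the sketch below verifies that $\theta_g$ respects the defining relations of $\rep(SU_{-1}(3))$, that the matrix $(\theta_g(u_{ij}))$ is unitary, and that the quantum determinant $D$ is sent to $1$; the random $SU(3)$ sampler is our own utility.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Concrete model of C_sigma(Gamma): T_k = i * (k-th Pauli matrix).
T = [1j * np.array(P) for P in ([[0, 1], [1, 0]],
                                [[0, -1j], [1j, 0]],
                                [[1, 0], [0, -1]])]

def random_su3(rng):
    # Our own sampler: QR of a random complex matrix, determinant normalized to 1.
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, _ = np.linalg.qr(z)
    return q / np.linalg.det(q) ** (1 / 3)

g = random_su3(rng)
U = [[g[i, j] * np.kron(T[i], T[j]) for j in range(3)] for i in range(3)]

# Relations u_ij u_kl = (-1)^(delta_ik + delta_jl) u_kl u_ij hold in the image.
for i, j, k, l in np.ndindex(3, 3, 3, 3):
    sign = (-1) ** ((i == k) + (j == l))
    assert np.allclose(U[i][j] @ U[k][l], sign * U[k][l] @ U[i][j])

# The 3x3 block matrix (theta_g(u_ij)) is unitary in M_3(M_4(C)) = M_12(C).
blockU = np.block(U)
assert np.allclose(blockU @ blockU.conj().T, np.eye(12))

# The quantum determinant goes to det(g) * 1 = 1, since T_1 T_2 T_3 = 1
# and T_{s(1)} T_{s(2)} T_{s(3)} = sign(s) * 1.
D = sum(U[0][s[0]] @ U[1][s[1]] @ U[2][s[2]] for s in permutations(range(3)))
assert np.allclose(D, np.eye(4))
print("theta_g is a *-representation with theta_g(D) = 1")
```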
\begin{lemma}\label{critirred}
The representation $\theta_g$ is irreducible if and only if $g \in SU(3)_{\rm reg}$. If $g \in SU(3)^{\Sigma}$, then $\theta_g$ is isomorphic to a direct sum of one-dimensional representations.
\end{lemma}
\begin{proof}
The first assertion follows directly from (2) in Proposition \ref{maintool}. The second assertion follows from the fact that if $g \in SU(3)^{\Sigma}$, the algebra $\theta_g(C(SU_{-1}(3)))$ is commutative (this is clear from the above description of $\theta_g$).
\end{proof}
Our next aim is to describe the tensor products $\varepsilon_g \otimes \theta_h$.
\begin{lemma}\label{fusion}
Let $g \in SU(3)^\Sigma$ and let $h \in SU(3)$. Then the representations $\varepsilon_g \otimes \theta_h$ and $ \theta_{g h}$ are isomorphic.
\end{lemma}
\begin{proof}
Put $g =(\delta_{i, \nu(j)}a_i)$ with $\nu \in S_3$. We have, for any $i,j$,
$$(\varepsilon_g \otimes \theta_{h})\Delta(u_{ij}) = \sum_k\varepsilon_g(u_{ik})h_{kj} T_k \otimes T_j
=\epsilon(\nu)a_ih_{\nu^{-1}(i)j} T_{\nu^{-1}(i)}\otimes T_j$$
It is straightforward to check that there exists an automorphism
$\alpha_\nu$ of $\mathbb C_\sigma \Gamma$ such that $\alpha_\nu(T_i)=\varepsilon(\nu)T_{\nu(i)}$ for any $i$. We have
$$\alpha_\nu \otimes {\rm id}(\varepsilon_g \otimes \theta_{h})\Delta(u_{ij})
=a_ih_{\nu^{-1}(i)j} T_{i}\otimes T_j=\theta_{g h}(u_{ij})$$
and hence, since $\alpha_\nu$ is (necessarily) an inner automorphism of the matrix algebra $\C_\sigma\Gamma$, we conclude that the representations $\varepsilon_g \otimes \theta_h$ and $ \theta_{g h}$ are isomorphic.
\end{proof}
Before going into the proof of Theorem \ref{subSu3}, we need a final piece of notation. For $1\leq i,j \leq 3$, we put
$$SU(3)^{[i,j]}= \{ g=(g_{ij}) \in SU(3) \ | \ g_{ik}=0 \ {\rm if} \ k\not=j, \ g_{kj}=0 \ {\rm if } \ i\not=k, \ g \not \in SU(3)^{\Sigma}\}$$
\begin{proof}[Proof of Theorem \ref{subSu3}]
Let $G \subset SU_{-1}(3)$ be a non-classical compact quantum subgroup, with corresponding surjective Woronowicz algebra map $\pi : C(SU_{-1}(3)) \rightarrow C(G)$. Recall that we have to prove that one of the following assertions holds.
\begin{enumerate}
\item There exists a compact subgroup $\Gamma \subset K \subset SU(3)$ such that $G$ is isomorphic to $K_{-1}$.
\item $G$ is isomorphic to a quantum subgroup of $U_{-1}(2)$.
\end{enumerate}
We already know from Theorem \ref{subsu(2m+1)} that if $C(G)$ has an irreducible representation of dimension 4, then (1) holds. So we assume that all irreducible representations of $C(G)$ have dimension $<4$.
We denote by $X$ the set of (isomorphism classes of) irreducible representations of $C(G)$ having dimension $d$ satisfying $1<d<4$. We remark that $X$ is non-empty since $C(G)$ is non-commutative.
Let $\rho \in X$. Then $\rho$ defines an irreducible representation $\rho\pi$ of $C(SU_{-1}(3))$, and hence by Proposition \ref{maintool} there exists $g \in SU(3)$ such that $\rho\pi \prec \theta_g$. If $g \in SU(3)_{\rm reg}$, then by Lemma \ref{critirred} $\theta_g$ is irreducible and $\rho\pi\simeq \theta_g$ has dimension 4, which contradicts our assumptions. Hence
$g \not \in SU(3)_{\rm reg}$. If $g \in SU(3)^{\Sigma}$, then by Lemma \ref{critirred} $\theta_g$ is a direct sum of representations of dimension $1$, hence $\rho$ has dimension $1$, which again contradicts our assumption, and hence $g \not \in SU(3)^{\Sigma}$. Thus there exist $i,j$ such that $g \in SU(3)^{[i,j]}$. Suppose that $i \not=j$. Then
$\rho\pi \otimes \rho\pi \prec \theta_{g} \otimes \theta_g \simeq \oplus_{s \in \Gamma}\theta_{gsg}$ (by Proposition \ref{maintool}). For any $s \in \Gamma$, $sg \in SU(3)^{[i,j]}$, and a direct computation shows that $gsg \in SU(3)_{\rm reg}$, so the constituents of this decomposition are irreducible representations. By a dimension argument there exists
$s \in \Gamma$ such that $\rho\pi \otimes \rho\pi \simeq \theta_{gsg}$, and hence $\rho \otimes \rho$ is irreducible of dimension $4$; this is a contradiction.
We have thus proved that for any $\rho \in X$, there exists $i \in \{1,2,3\}$ and $g \in SU(3)^{[i,i]}$ such that
$\rho\pi \prec \theta_g$. Assume that there exist $\rho,\rho' \in X$ with $\rho\pi \prec \theta_g$, $\rho'\pi \prec \theta_{g'}$ for $g \in SU(3)^{[i,i]}$, $g'\in SU(3)^{[j,j]}$ and $i \not=j$. Then $\rho\pi \otimes \rho'\pi \prec \theta_{g} \otimes \theta_{g'} \simeq \oplus_{s \in \Gamma}\theta_{gsg'}$. Once again, for any $s \in \Gamma$, $gsg' \in SU(3)_{\rm reg}$, and we conclude as before that $\rho \otimes \rho'$ is an irreducible representation of dimension $4$, a contradiction.
Thus we have proved that there exists $i \in \{1,2,3 \}$ such that for any $\rho \in X$, we have $\rho\pi \prec \theta_g$ for some $g \in SU(3)^{[i,i]}$, and hence $\rho\pi(u_{ik})=0=\rho\pi(u_{ki})$ for any $k \not=i$ and $\rho \in X$.
Let $\phi$ be a $1$-dimensional representation of $C(G)$. By Lemma \ref{1rep}, there exists $\nu \in S_3$ and $g \in SU(3)^\nu$ such that $\phi\pi= \varepsilon_g$. Let $\rho \in X$ with $\rho\pi \prec \theta_h$ for $h \in SU(3)^{[i,i]}$. Then $\phi\pi \otimes \rho\pi \prec \varepsilon_g \otimes \theta_h\simeq \theta_{g h}$ by Lemma \ref{fusion}. It is straightforward to check that $g h \in SU(3)^{[\nu(i),i]}$. By a previous case we must have $\nu(i)=i$. Hence $\phi\pi(u_{ik})=0=\phi\pi(u_{ki})$ for any $k \not=i$.
Summarizing, we have shown that for any $\rho \in \widehat{C(G)}$, we have $\rho\pi(u_{ik})=0=\rho\pi(u_{ki})$ for any $k \not=i$. The irreducible representations of a $C^*$-algebra separate its elements, so we conclude that $\pi(u_{ik})=0=\pi(u_{ki})$ for any $k \not=i$, and by Lemma \ref{reduc}, we are in situation (2). This concludes the proof.
\end{proof}
\begin{corollary}\label{irred}
Let $G$ be a non-classical compact quantum subgroup of $SU_{-1}(3)$ acting irreducibly on $\C^3$.
Then $G$ is isomorphic to a $K_{-1}$, a twist at $-1$ of a compact subgroup $K \subset SU(3)$ containing the subgroup of diagonal matrices having $\pm 1$ as entries, and acting irreducibly on $\C^3$.
\end{corollary}
\begin{proof}
We have shown in the previous proof that if $C(G)$ does not have an irreducible representation of dimension $4$, then the fundamental $3$-dimensional representation of $G$ is not irreducible. Thus if $G$ acts irreducibly on $\C^3$, there exist an irreducible representation of dimension 4 of $C(G)$ and a compact subgroup $\Gamma \subset K \subset SU(3)$ such that $G$ is isomorphic to $K_{-1}$, and $K$ acts irreducibly on $\C^3$ since $G$ does.
\end{proof}
\begin{remark}{\rm
The proof of Theorem \ref{subSu3} works just as well with $SU(3)$ replaced by $SO(3)$. In particular one recovers, in a less precise form, the results of \cite{bb09}: if $G \subset SO_{-1}(3)$ is a non-classical compact quantum subgroup, then either there exists a compact subgroup $\Gamma \subset K \subset SO(3)$ such that $G$ is isomorphic to $K_{-1}$ or
$G$ is isomorphic to a quantum subgroup of $O_{-1}(2)$. }
\end{remark}
\begin{remark}{\rm
Corollary \ref{irred} also holds with $SU_{-1}(3)$ replaced by $U_{-1}(3)$ (and $SU(3)$ replaced by $U(3)$), with a similar proof.
}
\end{remark}
\section{Introduction}
\label{intro}
The jammed state of condensed matter is characterized by the
sudden arrest of a system's internal dynamics. Macroscopically,
the jammed system develops a yield stress and behaves, essentially, like a
solid. A pile of sand under the action of gravity, the clogged flow of powders
through a pipe, or coagulated colloidal microstructures with high elastic moduli
are a few examples with practical applications that exhibit jamming. On the
theoretical side, random packings of hard and soft elements (spheres, disks,
etc.) display behavior similar to that observed in real systems and are often
used as prototype systems for the study of jamming,
which has recently been studied extensively both by simulations
\cite{Mak00,Her02,Her03,Zha05,Her05,Silbert05-01,Silbert06,Cia09-01,Cha09,Xu09}
and experiments \cite{Che09-01,Che09-02}.
As expected for strongly interacting systems, there are several theoretical
proposals \cite{Fierro05,Henkes05,Hentschel07,Song08} to describe this state,
most of them relying on approximate mean-field arguments.
There are some facts that seem well settled today about jamming.
First, a quench is an essential ingredient for a system to reach the jammed
state: it prevents the crystallization that may occur if particle positions
were allowed to rearrange slowly during equilibration. Second, the critical
jamming density is affected by
particle size ratio \cite{Xu09}, shape \cite{Schreck10} and the preparation
protocol, but its critical properties are the same (recently, Chaudhuri
{\em et al}. \cite{Cha09} showed that this critical point is not unique, even
for large
systems). Third, the jamming point in monodisperse and bidisperse packings is
manifested in structural properties as the $\delta$-function behavior of the
first peak of the radial distribution
function and the split of its second neighbor peak \cite{Silbert06}.
Most studies of jamming at zero temperature focus on only
one preparation protocol (random packings of possibly
overlapping particles, quenched to the nearest minimum energy state).
One may ask, then: is it possible to produce a jammed state from a
completely ordered system (a crystal)? Is the quench alone enough to take the
packing out of its global energy minimum and trap it in a jammed state? In
particular, will a jammed system prepared in this way be denser
than the initial crystal? All these questions, and others, will be addressed in
this paper. Jammed states will be produced by the quenching, and further
decompression, of crystalline disk packings.
At first sight, this initial condition is not suitable for producing a jammed
state, since disordered packings are commonly associated with jamming. However,
it will be shown that the structural features of jamming are present in such
systems, establishing this preparation protocol as a valid one for producing
jammed packings. It will also be shown that this initial condition opens the
possibility of reaching jammed states in a distinct region of the
packing phase diagram, inaccessible from ordinary random initial
packing algorithms. Along with the studies of bidisperse packings, the
decompression of polydisperse packings is explored.
Simulation details are provided in Sect. \ref{sim}, all results are presented
in Sect. \ref{results}, and Sect. \ref{summary} is reserved for the conclusions.
\section{\label{sim}Simulation methods}
The particles are soft, elastic disks, which interact through linear springs.
The compression potential energy between disks $i$ and $j$ is given by:
\begin{equation}
\label{ener-law}
U_{ij}=\frac{1}{2}\kappa(R_i+R_j-r_{ij})^2\Theta(R_i+R_j-r_{ij}),
\end{equation}
where $\kappa$ is the elastic constant (taken as $\kappa=1$ and equal for all
contacts), $R_i$ is the radius of the $i$-th disk, $r_{ij}$ is the distance between
the disks' centers, and $\Theta(x)$ is the step function.
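For concreteness, this contact law can be written in a few lines of Python (a minimal sketch; the helper name `contact_energy` is ours, not from the paper):

```python
import math

def contact_energy(Ri, Rj, rij, kappa=1.0):
    """Harmonic overlap energy: nonzero only when the disks overlap."""
    overlap = Ri + Rj - rij
    return 0.5 * kappa * overlap**2 if overlap > 0 else 0.0

# Two unit disks with centers 1.8 apart overlap by 0.2:
assert math.isclose(contact_energy(1.0, 1.0, 1.8), 0.5 * 0.2**2)
# Non-overlapping disks cost nothing:
assert contact_energy(1.0, 1.0, 2.5) == 0.0
```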
Initially, the packing consists of $N$ disks of radius $R_0$ (taken as the
unit length), arranged in a triangular lattice. The contact energy, eq.
(\ref{ener-law}), is given in units of $\kappa R_0^2$. The system's periodic
boundary lengths are given by $L_X=2R_0N_X$ and $L_Y=\sqrt{3}R_0N_Y$, where
$N_X\times N_Y=N$ is the total number of disks. The values $N_X=N_Y=50$,
giving $N=2500$, are chosen throughout the experiments. The results presented
here are essentially the same for systems with $N_X=N_Y=10$ and $N=100$. This
choice of boundary lengths perfectly accommodates a
triangular lattice of equal disks, which implies that the initial
packing fraction has the largest possible value for a two-dimensional
monodisperse system (the two-dimensional analogue of the Kepler conjecture) \cite{Hal05}:
\begin{equation}
\label{pack-tri}
\phi_0=\frac{N\pi R_0^2}{L_XL_Y}=\frac{\pi}{\sqrt{12}}\approx0.907.
\end{equation}
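This value is easy to reproduce numerically with the boundary lengths defined above; a quick check:

```python
import math

N_X = N_Y = 50
N = N_X * N_Y
R0 = 1.0
L_X, L_Y = 2 * R0 * N_X, math.sqrt(3) * R0 * N_Y  # periodic box as above

phi0 = N * math.pi * R0**2 / (L_X * L_Y)
assert math.isclose(phi0, math.pi / math.sqrt(12))
print(round(phi0, 3))  # 0.907
```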
The quench is performed by changing the disk radii by a suitable
amount, the dispersity degree $\Delta R$. Such a change is instantaneous,
in order to trap the system in a jammed state (quench). For bidisperse
packings, a
number $N_+=f_+N$ of the disks (randomly chosen) have their radii increased by
$\Delta R$ while the rest of the particles, $N_-=(1-f_+)N$, have their
radii decreased by $\Delta R$ (similar to what was used in \cite{Boc92}). The
number fractions chosen are $f_+=0.400$, $0.500$, $0.600$, $0.700$ and $0.800$.
For polydisperse packings, the radii are changed by an amount uniformly
distributed between $\left[-\Delta R, \Delta R\right]$. Fig. \ref{quench}
shows a quenched configuration.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=quenched_config_2.eps,width=10.0cm,height=8.5cm}}
\caption{An illustration of a quenched packing ($f_+=0.500$ and
$\Delta R/R_0=0.300$).
\label{quench}}
\end{figure}
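The bidisperse quench described above can be sketched as follows (a minimal sketch; the helper name `bidisperse_quench` and the fixed random seed are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bidisperse_quench(N, f_plus, R0, dR):
    """Instantaneously change the radii: a randomly chosen fraction f_plus
    grows by dR, the remaining disks shrink by dR."""
    radii = np.full(N, R0)
    grow = rng.choice(N, size=int(f_plus * N), replace=False)
    radii -= dR           # shrink every disk...
    radii[grow] += 2 * dR # ...then grow the chosen fraction
    return radii

radii = bidisperse_quench(2500, 0.5, 1.0, 0.3)
assert np.isclose(radii.mean(), 1.0)  # f_+ = 0.5 leaves the mean radius at R0
assert set(np.round(radii, 3)) == {0.7, 1.3}
```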
These
changes introduce a compression potential energy, since there will always be
some overlap between nearest neighbor grown disks.
Given the absence of energy dissipation, the minimization is performed by the
conjugate gradient method \cite{nr}. A dissipative packing cannot be studied
with this numerical method. A molecular dynamics (MD) approach is more
suitable, and
certainly will yield very distinct results (for a study on the phase diagram
of dissipative packings see \cite{Cia09-02}). Finally, random packings can also
be prepared with an MD approach by swelling void particles (void expansion
method \cite{Schenker09}), which increases the volume fraction and takes the
system through the jamming point.
It should be noticed that, with this protocol, a jammed state is not reached at
lower number fractions, for instance $f_+\leq0.300$.
Given the small probability of forming large-large contacts at the beginning,
the available space for rearrangements is larger than the one needed for
complete relaxation. This leads to a {\em melting} of the packing
instead of jamming (the emphasis indicates that the melting picture should
be supported by additional simulations).
Therefore, such low number fraction packings do not behave as their high-$f_+$
counterparts; jamming occurs only if the mean radius of the packing is
larger than $R_0$.
Since the goal is to find the maximum packing fraction with a
vanishing potential energy, an initial minimization is performed right after
the quench. The energy
minimum is achieved when the difference between the current and the last
energy values is no more than $10^{-10}$.
After reaching the nearest energy minimum, particles are slowly decompressed,
{\em i.e.}, particle's radii are decreased by a small constant amount $\gamma$
at each cycle, which provides a slightly larger space for them to relax
(expansion step). After each expansion step, an energy minimization is
performed in order to take the system closer to the zero potential energy
state. The decompression is finished when the total potential energy is less
than a predefined value, $\epsilon$. Hence, at the end of the protocol, for a
suitable $\epsilon$ value, the system should be very close to the jamming point.
In sect. \ref{res_params}, the influence of both parameters on the results
will be shown.
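The decompression loop just described can be sketched as follows. This is a toy illustration only: `decompress` and `frozen_minimize` are hypothetical names, and the paper's conjugate-gradient relaxation is replaced by a frozen two-disk stand-in just to exercise the loop logic:

```python
import numpy as np

def decompress(radii, positions, minimize, gamma=1e-5, eps=1e-6):
    """Outer loop of the protocol: relax, then shrink every radius by gamma,
    repeating until the total compression energy drops below eps."""
    n = 0  # number of expansion (decompression) steps taken
    while True:
        positions, energy = minimize(positions, radii)
        if energy < eps:
            return radii, n
        radii = radii - gamma
        n += 1

# Toy stand-in for the conjugate-gradient relaxation: two disks pinned at
# center distance 1.9, so only the radius shrinkage removes the overlap.
def frozen_minimize(positions, radii):
    overlap = max(radii[0] + radii[1] - 1.9, 0.0)
    return positions, 0.5 * overlap**2

radii, n = decompress(np.array([1.0, 1.0]), None, frozen_minimize,
                      gamma=1e-3, eps=1e-6)
print(n)  # 50 shrink steps remove the 0.1 overlap in this toy setup
```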
The average packing fraction $\left<\phi\right>$, from now on referred to as
the critical packing fraction (CPF), is measured at the end of the
decompression as a function of $\Delta R/R_0$. The averages are over
realizations (typically $20$ for each case) and the error bars are calculated
as $\sqrt{\left<(\phi-\left<\phi\right>)^2\right>}$ in each case.
Also, the packing structure and order are
studied through the calculation of the Radial Distribution Function (RDF),
$g(r)$, and the orientational order parameter:
\begin{equation}
\label{psi}
\Psi_j=\frac{1}{z_j}\sum\limits_{k=1}^{z_j}e^{i6\theta_{jk}},
\end{equation}
where the sum runs over the $z_j$ nearest neighbors of disk $j$ and
$\theta_{jk}$ is the angle between the line joining the $j$-th and $k$-th
disks centers and the $x$ axis \cite{Sta97}. The absolute value of $\Psi_j$ is
measured at the end of the full minimization, and is presented as an average
over particles and runs. Its value is unity for a perfect
triangular array of particles, while $\left|\Psi_j\right|<1$ for disordered
packings. Two disks are considered first neighbors if they overlap.
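The order parameter for a single disk can be computed directly from the neighbor positions; a short sketch (the helper name `psi6` is ours):

```python
import numpy as np

def psi6(center, neighbors):
    """Orientational order parameter of one disk, given the positions of
    its overlapping (first) neighbors."""
    d = np.asarray(neighbors, dtype=float) - np.asarray(center, dtype=float)
    angles = np.arctan2(d[:, 1], d[:, 0])  # bond angle with the x axis
    return np.mean(np.exp(6j * angles))

# A perfect hexagonal neighbor shell, as in the triangular lattice,
# gives |Psi_j| = 1:
hexagon = [(np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)) for k in range(6)]
assert np.isclose(abs(psi6((0.0, 0.0), hexagon)), 1.0)
```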
It should be noticed that, in most studies of bidisperse packings
\cite{Her02,Her03,Her05,Cha09,Xu09}, particle size differences are given in
terms of the size ratio, $\sigma$, instead of the size difference, $\Delta R$.
Both quantities are connected by:
\[
\frac{\Delta R}{R_0}=\frac{\sigma-1}{\sigma+1}.
\]
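This conversion can be checked with a one-liner (`dispersity_from_ratio` is a hypothetical helper):

```python
def dispersity_from_ratio(sigma):
    """Convert the size ratio sigma to the size difference Delta R / R_0."""
    return (sigma - 1.0) / (sigma + 1.0)

# sigma = 1.4 corresponds to Delta R / R_0 = 1/6:
assert abs(dispersity_from_ratio(1.4) - 1.0 / 6.0) < 1e-12
```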
\section{\label{results}Results and discussion}
This section holds all numerical results. First, the dependence of the CPF on
the simulation parameters $\gamma$ and $\epsilon$ is shown. Then the full CPF
results follow, along with the packing structure and order.
\subsection{\label{res_params}Independence on the simulation parameters}
The simulation is controlled by two sensitive parameters, the decompression
rate, $\gamma$, and the minimum compression energy, $\epsilon$. Figure
\ref{rpf-params} holds the results for the CPF for three distinct parameter
sets:
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=CPF-params.eps,width=7.0cm,height=5.5cm}}
\caption{CPF for distinct values of the decompression parameter, $\gamma$, and
the energy minimum, $\epsilon$.
\label{rpf-params}}
\end{figure}
The three cases presented, all for $f_+=0.500$, have the following set of
parameter values: case 1, $\gamma=10^{-5}$ and $\epsilon=10^{-6}$; case 2,
$\gamma=10^{-6}$ and $\epsilon=10^{-6}$; case 3, $\gamma=10^{-6}$ and
$\epsilon=10^{-8}$. As seen in this figure, all results agree well within
simulation error. Hence, all the following results will be given for case 1,
unless noticed otherwise. This value of $\epsilon$ corresponds to
a minimum energy per particle of the order $10^{-9}$. Similar results hold
for the polydisperse packing (not shown).
\subsection{Jamming packing fraction}
In fig. \ref{rpf-all}, the results for the CPF for all
number fraction, $f_+$, and size dispersity, $\Delta R/R_0$, values are shown.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=CPF_case1_all_vRevised_EPJE.eps,width=7.0cm,height=5.5cm}}
\caption{All results for the CPF, for distinct particle number fraction, $f_+$,
and size dispersity. Symbols correspond to number fractions of $f_+=0.400$
(circles), $f_+=0.500$ (squares), $f_+=0.600$ (diamonds), $f_+=0.700$
(triangles), $f_+=0.800$ (left triangles) and polydisperse packing
(inverted triangles).
The inset is a zoom to the region close to the curves minima and the dashed
line represents the CPF value for the RCP state \cite{Her02},
$\phi_{RCP}=0.842$.
\label{rpf-all}}
\end{figure}
From this graph, one can see that the final packing fraction is
never larger than the one for the triangular lattice, eq. (\ref{pack-tri}),
and it goes through a minimum. From the inset of fig. \ref{rpf-all},
it can be seen that the minimum CPF changes with number fraction. Its lowest
value is $0.843$ for $f_+=0.400$ at $\Delta R/R_0=0.120$, while the largest
minimum is $0.845$ for $f_+=0.800$ at $\Delta R/R_0=0.150$. The CPF value for
the random close packing (RCP) state is shown as a reference
value, since it is the jamming packing fraction for the monodisperse packing
\cite{Her02}. These results indicate that the RCP
state corresponds to the lowest critical packing fraction achieved with this
protocol, with packing properties distinct from the original study (in
that case \cite{Her02}, $f_+=0.500$ and $\Delta R/R_0=1/6$). For the
polydisperse packing, the minimum CPF is $0.844$ at $\Delta R/R_0=0.200$.
Recent results on jamming of bidisperse sphere packings \cite{Xu09} and
ellipsoid and dimer packings \cite{Schreck10}, which focused on the influence
of the size ratio on the CPF, show that this quantity presents a maximum in
the size ratio range $[1,\infty)$. These references also carried out their
simulations with random initial packings.
(In \cite{Xu09}, the largest CPF occurs at $f_+=0.500$ and $\Delta R/R_0=1/3$).
Here, all CPF values are also above the RCP one, but the CPF goes
through a minimum instead of a maximum. The reason behind this distinction seems
to be mainly the initial packing. The jamming point in a random monodisperse
packing may be increased with an appropriate size ratio (the system packs more
efficiently). On the other hand, the jamming point for a regular packing can
only be decreased from the value given in eq. (\ref{pack-tri}). This can be
seen as a consequence of the fact that, as stated earlier, the triangular
lattice is the most dense packing of equal disks \cite{Hal05}.
Therefore, one can conclude that, by performing a decompression simulation with
a regular initial packing, one can reach a very dense jammed state that is not
accessible from a random initial packing. This is the main result of this
paper and is what is meant by a novel route to unjamming in the title.
The small dispersity behavior of the CPF can be understood as follows. When
the regular triangular array of disks is
quenched, the total area occupied by the disks, initially given by
$A_0=N\pi R_0^2$, is changed to:
\[
A_{b0}=\pi\left[\sum\limits_{i=1}^{N_+}(R_0+\Delta R)^2+\sum\limits_{i=1}^{N_-}
(R_0-\Delta R)^2\right]-A_{ovlp},
\]
where $A_{ovlp}$ represents the total overlapped area. Expanding the squared
terms and using the fact that $f_++f_-=1$, one arrives at the following
expression for the initial modified area:
\[
A_{b0}=N\pi R_0^2+2(2f_+-1)N\pi R_0\Delta R+N\pi\Delta R^2-A_{ovlp}.
\]
Since the experiment is carried through particle decompression at a constant
rate, when
the overlapped area vanishes, the total area occupied
by the disks should be given by the exact same expression, but for a distinct
mean radius $R$, instead of $R_0$:
\[
A_b=N\pi R^2+2(2f_+-1)N\pi R\Delta R+N\pi\Delta R^2.
\]
The value of this mean radius is $R=R_0-r$, with $r=\gamma n$, where $n$ is
the number of decompression steps. Defining $x=\Delta R/R_0$ and $y=r/R_0$,
and dividing both sides by $L_XL_Y$ one has:
\[
\frac{A_b}{L_XL_Y}=\frac{N\pi}{L_XL_Y}\left[(R_0-\gamma n)^2+
2(2f_+-1)(R_0-\gamma n)\Delta R+\Delta R^2\right].
\]
The final average CPF, among different experiments with fixed
boundary lengths can be obtained using the average $r$ value in
$A_b$. Therefore, using eq. (\ref{pack-tri}), this relationship yields:
\begin{equation}
\label{avg-rpf}
\left<\phi_b\right>=
\phi_0\left[1+2(2f_+-1)x+x^2-2(2f_+-1)x\left<y\right>-2\left<y\right>+\left<y^2\right>\right].
\end{equation}
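The algebra leading to eq. (\ref{avg-rpf}) can be checked symbolically, treating $y$ as deterministic so that $\left<y\right>$ and $\left<y^2\right>$ reduce to $y$ and $y^2$; a sketch with sympy:

```python
import sympy as sp

x, y, f = sp.symbols('x y f_plus')

# A_b/(N*pi*R0^2) with R = R0*(1 - y) and Delta R = R0*x:
area = (1 - y)**2 + 2*(2*f - 1)*(1 - y)*x + x**2

# Bracket of the averaged CPF expression, with <y> -> y and <y^2> -> y^2:
bracket = 1 + 2*(2*f - 1)*x + x**2 - 2*(2*f - 1)*x*y - 2*y + y**2

assert sp.simplify(area - bracket) == 0
print("bidisperse bracket verified")
```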
A similar argument holds for a polydisperse packing. In this case, the area
occupied by the disks after the initial compression is given by:
\[
A_{p0}=\pi\sum_{i=1}^{N}(R_0+\Delta R_i)^2-A_{ovlp},
\]
where $\Delta R_i$ is the change in the $i$-th disk radius. At the end of the
decompression phase, the total disk area is:
\[
A_p=\pi\sum_{i=1}^{N}(R+\Delta R_i)^2,
\]
with $R$ given as above. Since the quantity $\Delta R_i$ is uniformly
distributed in the range $[-\Delta R,\Delta R]$, one may use the moments of
$\Delta R_i$,
\begin{eqnarray}
\label{mom-uni}
\frac{1}{N}\sum\limits_{i=1}^{N}\Delta R_i=0,\\
\label{mom-uni-2}
\frac{1}{N}\sum\limits_{i=1}^{N}\Delta R_i^2=\frac{\Delta R^2}{3},
\end{eqnarray}
to write
\[
A_p=N\pi\left[(R_0-\gamma n)^2+\frac{\Delta R^2}{3}\right].
\]
Following the same steps as in the bidisperse case, the average CPF can
be written as:
\begin{equation}
\label{avg-rpf-poly}
\left<\phi_p\right>=\phi_0\left[1-2\left<y\right>+\left<y^2\right>+
\frac{x^2}{3}\right].
\end{equation}
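The uniform-distribution moments, eqs. (\ref{mom-uni})-(\ref{mom-uni-2}), and the polydisperse bracket of eq. (\ref{avg-rpf-poly}) can be verified symbolically (a sympy sketch, with $y$ treated as deterministic as before):

```python
import sympy as sp

t, D, x, y = sp.symbols('t DeltaR x y', positive=True)

# Moments of Delta R_i, uniform on [-DeltaR, DeltaR]:
assert sp.integrate(t, (t, -D, D)) / (2 * D) == 0
assert sp.simplify(sp.integrate(t**2, (t, -D, D)) / (2 * D) - D**2 / 3) == 0

# Polydisperse bracket: A_p/(N*pi*R0^2) with R = R0*(1 - y), Delta R = R0*x:
area = (1 - y)**2 + x**2 / 3
bracket = 1 - 2*y + y**2 + x**2 / 3
assert sp.expand(area - bracket) == 0
print("polydisperse moments and bracket verified")
```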
Hence, with the knowledge of the average number of decompression steps, one
can match equations (\ref{avg-rpf}) and (\ref{avg-rpf-poly}) with the results
given in fig. \ref{rpf-all}. The
results for $\left<y\right>$ and $\left<y^2\right>$ are given in fig.
\ref{avg-rad}.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=Data_r_both_all_revisedEPJE.eps,width=8.0cm,height=6.5cm}}
\caption{Upper panel: average number of decompression cycles. Symbols follow
the convention of fig. \ref{rpf-all}. Lower panel: average squared number of
decompression cycles.
\label{avg-rad}}
\end{figure}
Consider the following argument to explain this result.
If this decompression experiment were performed in the
monodisperse case, where each disk has exactly $6$ contacts symmetrically
placed around its center, no change in the structure would ever occur due to
the decompression, and the packing fraction at which the
compression energy vanishes would be given by (\ref{pack-tri}). Therefore, the
disk radii would have to return to their original value in order to reach this
packing fraction, which implies $\left<y\right>=x$ and $\left<y^2\right>=x^2$.
Since at low dispersity these relationships are approximately realized, one can
infer that at the jamming point one has $\left<y\right>=ax^\alpha$ and
$\left<y^2\right>=bx^\beta$.
These four parameters $a,b,\alpha,\beta$ represent the effect of structural
rearrangements on the quantity $r$. Table \ref{tab_1} holds their values as
measured from power law fits to the curves in fig. \ref{avg-rad}. Since
deviations from power law behavior do not occur at the same dispersity for all
cases, the fits were performed up to $x=0.040$ for $f_+=0.400$, $0.050$ for
$f_+=0.500$, $0.060$ for $f_+=0.600$ and $x=0.100$ for the other cases. The
parameters approach their monodisperse values as one
increases the number fraction. The polydisperse packing is an exception since
it has no monodisperse limit. Also, one can see that, at all number fractions
and the polydisperse case, the relationship $\beta=2\alpha$ holds. The data
do not imply a simple relationship between the coefficients $a$ and $b$.\\
\begin{table}
\caption{Decompression step parameters, measured from the curves in fig. \ref{avg-rad}.}
\label{tab_1}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$f_+$ & $0.400$ & $0.500$ & $0.600$ & $0.700$ & $0.800$ & poly \\
\hline
$a$ & $0.214$ & $0.413$ & $0.588$ & $0.722$ & $0.895$ & $0.244$ \\
\hline
$b$ & $0.046$ & $0.171$ & $0.346$ & $0.520$ & $0.802$ & $0.061$ \\
\hline
$\alpha$ & $0.898$ & $0.941$ & $0.956$ & $0.956$ & $0.979$ & $0.941$ \\
\hline
$\beta$ & $1.80$ & $1.88$ & $1.91$ & $1.91$ & $1.96$ & $1.89$ \\
\hline
\end{tabular}
\end{table}
By using this form for the average number of decompression steps, the average
CPF, eq. (\ref{avg-rpf}), can be cast in the following form:
\begin{equation}
\label{avg-rpf-2}
\frac{\left<\phi\right>_b}{\phi_0}=1+2(2f_+-1)x+x^2-2(2f_+-1)ax^{\alpha+1}-2ax^\alpha+bx^{2\alpha}.
\end{equation}
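The substitution of $\left<y\right>=ax^\alpha$ and $\left<y^2\right>=bx^{2\alpha}$ (using $\beta=2\alpha$) into the bracket of eq. (\ref{avg-rpf}) can also be checked symbolically; a sympy sketch:

```python
import sympy as sp

x, a, b, f, alpha = sp.symbols('x a b f_plus alpha', positive=True)

y_mean = a * x**alpha           # <y> = a x^alpha
y2_mean = b * x**(2 * alpha)    # <y^2> = b x^beta with beta = 2 alpha

substituted = (1 + 2*(2*f - 1)*x + x**2
               - 2*(2*f - 1)*x*y_mean - 2*y_mean + y2_mean)
target = (1 + 2*(2*f - 1)*x + x**2
          - 2*(2*f - 1)*a*x**(alpha + 1) - 2*a*x**alpha + b*x**(2*alpha))

assert sp.simplify(substituted - target) == 0
print("substitution verified")
```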
This curve, with the parameters $a$, $b$,
$\alpha$ and $\beta$ of the $f_+=0.700$ case, is shown as a dashed line
in fig. \ref{fp70-fit}. One can see that, at low dispersity, this equation
agrees well with the results.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=CPF_fp70_lineFit.eps,width=7.0cm,height=5.5cm}}
\caption{Eq. (\ref{avg-rpf-2}) plotted with the parameters correspondent to
the $f_+=0.700$ case.
\label{fp70-fit}}
\end{figure}
For the polydisperse packing, the average CPF can be written as:
\begin{equation}
\label{avg-rpf-poly-2}
\frac{\left<\phi\right>_p}{\phi_0}=1-2ax^\alpha+bx^{2\alpha}+\frac{1}{3}x^2.
\end{equation}
The comparison between figs. \ref{rpf-all} and \ref{avg-rad} shows that the
deviation from the power law behavior of $\left<y\right>$ coincides with the
region where the CPF reaches its minimum value.
Since a complete understanding
of this fact would be given by a more detailed (and complicated) theoretical
approach, such as the one proposed in \cite{Song08}, only the packing
structure is probed here.
The reason behind this choice is that the structure relaxation during
decompression is the main cause for the results seen in figs. \ref{rpf-all} and
\ref{avg-rad}.
\subsection{Packing structure}
The RDF is measured at the end of the
full minimization process. These measurements are performed regarding
the type of particle contact, {\em i.e.}, the probabilities of
small-small, $g_{SS}(r)$, large-large, $g_{LL}(r)$, and small-large,
$g_{SL}(r)$, contacts are measured. In the triangular array of monodisperse
disks, one expects $g(r)$ to have sharp peaks at $r=1$, $\sqrt{3}$ and $2$
particle diameters.
Figure \ref{rdf-1} shows all RDFs for
$\Delta R/R_0=0.120$ and all number fractions. Figure
\ref{rdf-2} has these same functions but for a dispersity value of $0.500$. In
both cases,
all interparticle distances are normalized by the corresponding final average
particle diameter. For instance, for a small-small contact, the final small
particle diameter is $2\left<R\right>=2(R_0-\Delta R-\left<r\right>)$, where
$\left<r\right>$ is the average radius decrease accumulated during
decompression, fig. \ref{avg-rad}. Hence, the distance range where these
contact probabilities are measured appears distinct for each contact type and
number fraction, especially for large-large contacts, which have a larger mean
particle diameter.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=RDF_dr012_normDiam_all_v2.eps,width=7.0cm,height=5.5cm}}
\caption{RDFs for a dispersity value of $0.120$. Each line, shifted for
clarity, corresponds to a distinct number fraction (from top to bottom):
$f_+=0.800$, $0.700$, $0.600$, $0.500$, $0.400$.
\label{rdf-1}}
\end{figure}
In all curves in fig. \ref{rdf-1}, the structural signature of jamming,
a delta-like first peak and a split second peak, is seen.
Also, the second small-small contact peaks are not located precisely at
$r=\sqrt{3}$ and $2$. Instead, they are shifted to the right, consistent with
the results shown in \cite{Xu09}. A brief explanation of this fact is that
these two peaks occur only when three particles form a triangular cluster.
Hence, this type of cluster should be absent for small-small and small-large
contacts. Moreover, small-large
contacts have a second peak at intermediate positions between small-small and
large-large contacts, as expected, since the mean diameter of such a contact
is given by $2(R_0-\left<r\right>)$.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=RDF_dr050_normDiam_all_v2.eps,width=7.0cm,height=5.5cm}}
\caption{RDFs for a dispersity value of $0.500$. Curves are shown in accordance
to the convention in fig. \ref{rdf-1}.
\label{rdf-2}}
\end{figure}
A marked feature of these graphs is that the $g_{SS}(r)$ and $g_{SL}(r)$ peaks
between $r=1$ and $\sqrt{3}$ decrease with $f_+$, while those at $r=2$ become
sharper. This indicates that, for larger $f_+$, small particles relax to
positions farther away from each other, even though they all start from the
same structure.
The $g_{LL}(r) $ peaks at $r=1$, $\sqrt{3}$ and $2$, increase and become
sharper, while the intermediate ones between $r=1$ and
$\sqrt{3}$ almost disappear at the largest number fraction. Since sharp peaks
at $\sqrt{3}$ and $2$ are a signature of a triangular lattice structure, one
may infer that such large particle rich packings relax to structures similar
to a crystalline one. This fact also explains why the large-large peaks
between $r=1$ and $\sqrt{3}$ become smaller. Such characteristic should
be expected, since the initial packing is regular and only large-large particle
contacts, at the outset, imply compression. Therefore, if a group of first
neighbors become large particles, they will probably remain in this cluster
up to the end of the process.
An illustration of such a configuration is given in fig. \ref{config_1}. It
shows a well mixed packing with an occasional pocket of large particle
crystals (lower left corner).
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=config_f50_dr012.eps,width=10.0cm,height=8.5cm}}
\caption{Packing configuration for $\Delta R/R_0=0.12$ and $f_+=0.50$. Particle
periodic images are omitted.
\label{config_1}}
\end{figure}
On the other hand, the pair correlation functions at large dispersity, fig.
\ref{rdf-2}, show markedly distinct features of the packing structure.
First of all, small particle aggregates become
progressively more distant, in small particle mean diameter units, for larger
$f_+$. Second, $g_{LL}(r)$ at $f_+=0.400$ and $f_+=0.500$ shows
several small peaks between those at $r=1$ and $2$, as in fig.
\ref{rdf-1}, but the one at $r=\sqrt{3}$ is not easy to distinguish. Only at
high number fractions does this peak become clear. This means that large
particles form, again, structures close to crystalline ones at high number
fraction. Finally, one can see a clear change in $g_{SL}(r)$
from $f_+=0.700$ to $0.800$. The peaks for the lower number fraction are
sharper than the corresponding ones at larger number fraction. This is
probably due to the fact that small particles can be more easily accommodated in
vacancies among the large particle contact network (approximately triangular).
Figure \ref{config_2}
shows a snapshot of a packing at the largest number fraction and dispersity.
One clearly sees that the structure is a crystal with defects.
Finally, the broad $g_{SS}(r)$ peak around $r=1$, at $f_+=0.800$, does not
imply overlap between small particles. In fact, the $g_{SS}(r)$ value is zero
for $r<1$. The reason for this apparent broad peak is that the bin size is
larger for smaller average contact diameter. This quantity decreases for
increasing number fraction and dispersity. Hence, the bin size at $f_+=0.800$
is significantly larger than those at lower number fractions.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=config_f80_dr050.eps,width=10.0cm,height=8.5cm}}
\caption{Packing configuration for $\Delta R/R_0=0.500$ and $f_+=0.800$. Small
particles are enlarged for better visualization.
\label{config_2}}
\end{figure}
Although not visible in the RDF plots, the area below the first peak of each
RDF changes as a function of the number fraction and dispersity. This might
have implications for the mechanism
that leads to the CPF values seen in fig.
\ref{rpf-all}. In order to study this influence, the area below the first
peak of each RDF is measured. The $r$ range over which this area is
computed extends from the first peak
position (contact diameter) up to this distance plus the dispersity,
$\Delta R$. This is an (unnormalized) account of the average number of
neighbors of a given type around a given particle \cite{Silbert06}. A more
natural approach to this question would be to compare the coordination numbers
for each contact type. However, any protocol for producing jammed states is
known to produce a number of rattlers (particles with no contacts), and these
particles also contribute to the packing relaxation. A comparison of
coordination numbers would therefore exclude them from the analysis.
In this notation,
\[
N_{SS}(\Delta R)=\int\limits_{d_{SS}}^{d_{SS}+\Delta R}g_{SS}(r)dr,
\]
\[
N_{LL}(\Delta R)=\int\limits_{d_{LL}}^{d_{LL}+\Delta R}g_{LL}(r)dr,
\]
\[
N_{SL}(\Delta R)=\int\limits_{d_{SL}}^{d_{SL}+\Delta R}g_{SL}(r)dr.
\]
This choice for the integration limits ensures that, for small-small contacts,
only small particles are within this range. At the first few
dispersity values, which are of the order of the $g(r)$ bin size, this
integration leads to an overestimation of the number of neighbors.
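As an illustration, the integral defining these neighbor counts can be evaluated with a simple trapezoidal rule. The sketch below uses a synthetic Gaussian peak as a stand-in for a tabulated RDF, not data from the simulations:

```python
import numpy as np

def first_peak_area(r, g, d_contact, delta_R):
    """Trapezoidal estimate of the (unnormalized) neighbor count:
    the area under g(r) from the contact diameter to contact + dispersity."""
    m = (r >= d_contact) & (r <= d_contact + delta_R)
    rs, gs = r[m], g[m]
    return float(np.sum((gs[1:] + gs[:-1]) * np.diff(rs)) / 2.0)

# Synthetic RDF: a single Gaussian peak centered at the contact diameter r = 1.
r = np.linspace(0.5, 3.0, 2001)
g = np.exp(-((r - 1.0) / 0.05) ** 2)
area = first_peak_area(r, g, d_contact=1.0, delta_R=0.5)
# area ~ 0.0443, i.e. half the full Gaussian area 0.05 * sqrt(pi).
```

The integration window starts exactly at the peak position, so only the right half of the peak contributes, mirroring the choice of limits above.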
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=first_peaks_all.eps,width=7.0cm,height=5.5cm}}
\caption{Area under the first peak of the pair correlations of the three
contact types. Symbols follow fig. \ref{rpf-all}.
\label{g_area}}
\end{figure}
One can directly infer that, up to intermediate dispersities, where the critical
packing fraction reaches its lowest value, small particles increase their
probability of being around other particles of both types (top and bottom
panels), with a more pronounced effect at $f_+=0.800$, while the probability of
large-large particle contacts reaches its
lowest value (middle panel). In addition, the large-large contacts
barely change with number fraction, and their minimum occurs at a
dispersity value close to that of the minimum jamming density, fig.
\ref{rpf-all}.
These results, along with the RDFs in figs. \ref{rdf-1} and \ref{rdf-2}, imply
that less efficient packings correspond to more small particle clusters and
fewer large particle ones. A possible explanation is
that the initial compression is provided solely by large
particles, so the packing relaxation takes up the available
space provided by small-small neighbors, {\em i.e.}, large particles
push small ones in order to decrease the potential energy. This deforms the
initial structure, and small-small
contacts form in a structure distinct from the initial one. Since
the initial structure is the densest possible, the critical packing will
occur at a lower density. Also,
the larger the number of small particle contacts, the larger the space
available for structure rearrangements. Therefore, fewer decompression cycles
are needed to reach the minimum energy.
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=RDF_multi_some_v2.eps,width=7.0cm,height=5.5cm}}
\caption{Pair correlation of multidisperse packings for dispersity values
(from top to bottom): $\Delta R/R_0=0.500$, $0.200$, $0.120$, $0.050$ and
$0.010$. Curves are shifted for clarity.
\label{rdf_multi}}
\end{figure}
For polydisperse packings, the jammed structure shows a completely distinct
scenario. Since there are several particle sizes, the chance for the formation
of crystalline regions during the packing relaxation is very low. Therefore,
the packing structure should be strongly amorphous, as shown in fig.
\ref{rdf_multi}, at large $\Delta R/R_0$. Moreover, this amorphization seems
to be a continuous process, since the peaks at low dispersity become smoother
for larger $\Delta R/R_0$ until they merge and, eventually, disappear.
Since the results for the RDFs implied that small particle contacts
introduce disorder, one may look for a correlation between the CPF and the
orientational order of the contacts. Fig. \ref{psi_all} shows the average
value, over particles and runs, measured at the end of the decompression, of
the orientational order parameter related to the triangular lattice, eq.
(\ref{psi}). Note that, given the definition of first neighbors
adopted here, this order parameter contains no contribution from
rattlers.
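Since eq. (\ref{psi}) is not reproduced in this excerpt, the sketch below assumes the standard bond-orientational order parameter for the triangular lattice, $\psi_{6,j}=\frac{1}{n_j}\sum_k e^{6i\theta_{jk}}$, and uses hypothetical positions and neighbor lists:

```python
import numpy as np

def mean_psi6(positions, neighbors):
    """Average bond-orientational order parameter psi_6 over particles:
    psi6_j = (1/n_j) * sum_k exp(6i * theta_jk), with theta_jk the angle
    of the bond from particle j to its contact neighbor k."""
    values = []
    for j, nbrs in neighbors.items():
        if not nbrs:
            continue  # particles with no contacts (rattlers) are skipped
        bonds = positions[nbrs] - positions[j]
        angles = np.arctan2(bonds[:, 1], bonds[:, 0])
        values.append(np.exp(6j * angles).mean())
    return np.mean(values)

# One particle surrounded by six neighbors on a perfect triangular shell.
shell_angles = np.arange(6) * np.pi / 3
pos = np.vstack([[0.0, 0.0],
                 np.c_[np.cos(shell_angles), np.sin(shell_angles)]])
order = abs(mean_psi6(pos, {0: [1, 2, 3, 4, 5, 6]}))
# order = 1 for the perfect triangular arrangement; disorder lowers it.
```

Skipping contact-free particles in the loop mirrors the remark above that rattlers do not contribute to the order parameter.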
\vspace{2em}
\begin{figure}[h]
\rotatebox{0}{\epsfig{file=Psi_all.eps,width=8.0cm,height=6.5cm}}
\caption{Orientational order parameter for all number fractions and dispersity
values. Symbols and colors are as in fig. \ref{rpf-all}: $f_+=0.400$
(circles), $f_+=0.500$ (squares), $f_+=0.600$ (diamonds), $f_+=0.700$
(triangles), $f_+=0.800$ (left triangles), polydisperse (inverted triangles).
\label{psi_all}}
\end{figure}
First of all, one sees that, at low dispersity, this order parameter decreases
continuously. This is consistent with the arguments given earlier, that small
particle contacts introduce disorder into the system, which then packs less
efficiently.
Moreover, the fast increase of the critical packing fraction at large
dispersities and number fractions can be seen to correlate with a fast increase
in the orientational order, also consistent with the appearance of large
crystalline regions.
On the other hand, the smooth increase in $\left<\phi\right>$ with dispersity
is not accompanied by a corresponding increase in the order parameter.
This implies that the inversion in the CPF curves is due to pockets of large
particle clusters, as seen in fig. \ref{g_area} (middle panel), regardless of
the overall decrease in order. Large particle clusters have a high value of
$\left|\left<\Psi\right>\right|$, and, given the smooth increase in $N_{LL}(r)$
with dispersity, one may infer that their number is small, since the whole
system has a low value of the order parameter. Only when large particle
clusters are formed does the global order increase, yielding a denser packing.
In the polydisperse case, the orientational order parameter continuously
decreases with increasing dispersity. Its value
is higher than for the bidisperse cases up to $\Delta R/R_0=0.200$. This is
surprising, since one would expect more particle sizes to lead to less
order (from eqs. (\ref{mom-uni}) and (\ref{mom-uni-2}) one can see that the
size dispersion of a polydisperse packing is proportional to $\Delta R$). Given
that there is no monodisperse limit for this packing when
$\Delta R/R_0\rightarrow1$, one can expect that the order parameter for the
$f_+=0.400$ and $0.500$ cases will eventually exceed the polydisperse
one at some dispersity beyond $0.500$.
\section{\label{summary}Summary and conclusions}
We presented a numerical study of the jamming properties and structure of
a two-dimensional packing of elastic disks, for bi- and polydisperse cases. The
attention was focused on
the value of the maximal packing fraction for which the compression energy is
zero. This was measured through numerical decompression experiments on a
disordered packing, initially arranged in a crystalline (triangular lattice)
structure. The critical packing fraction (CPF) was measured at the end of the
decompression and was shown as a function of the dispersity degree,
$\Delta R/R_0$. For bidisperse packings, the CPF was also studied as a
function of the number fraction of large disks.
The general trend of the CPF is an initial decrease with dispersity down to a
minimum value, followed by an increase, fig. \ref{rpf-all}. The lowest CPF
value is close to the RCP value, but obtained for a number fraction of
$f_+=0.400$, instead of the original value obtained at $f_+=0.500$
\cite{Her02}.
The CPF behavior with $\Delta R/R_0$ differs from that observed in
\cite{Xu09,Schreck10} because, in the
present case, one starts from the densest possible packing and, therefore,
the introduction of disorder, through the quench and internal structure
rearrangement, will certainly lead to a lower jamming packing fraction, since
a more efficient packing can be achieved with an increase in local order
\cite{Tor00}.
At low dispersity, the system behaves approximately as the monodisperse
crystal, as seen in the results for the average number of decompression steps,
fig.
\ref{avg-rad}. The departure from the monodisperse regime can be attributed to
an increase in the number of small-small particle contacts, fig. \ref{g_area}.
The structure reveals that, for low dispersity, the decompressed packing has
significant order, as revealed by the long range behavior of $g_{SS}(r)$,
$g_{LL}(r)$ and $g_{SL}(r)$.
The packing structure is mostly disordered at intermediate dispersities, and at
the highest dispersity, large-large particle contacts bear most of the
translational order in the system, which forms a crystal with defects, with
the few small particles scattered in the scarce space available between
the large particle contacts.
For polydisperse packings, long range order is completely
absent at large dispersities: since each disk size deviation is chosen from a
uniform distribution of values in the range $[-\Delta R, \Delta R]$, a larger
$\Delta R$ yields a very broad distribution of particle sizes.
The data for the local orientational order gives a similar picture. For larger
number fractions, the packings are more ordered locally compared to lower
number fraction cases. Also, for low dispersities, the packing is more
ordered, as expected, since it behaves like the monodisperse one. Above the
low dispersity range, the local orientational order decreases with
$\Delta R/R_0$, despite the smooth increase in the CPF. However, the
fast increase of the CPF at large number fraction and dispersities
is strongly correlated with a fast increase in this order parameter, implying
that such packings are close to a crystal.
\section*{Acknowledgements} I thank C. Brito for a very welcome reading of this
paper. This work is financially supported by CNPq and FAPESPA.
1702.04493
\section{Introduction}
\IEEEPARstart{T}{o} meet the ever-increasing demands for high-data-rate multimedia access, the capacity of next-generation wireless networks has to increase exponentially. One promising way to boost the capacity is to exploit new spectrum bands. Recently, millimeter wave (mm-wave) bands from 28 GHz to 300 GHz, which previously were only considered for indoor and fixed outdoor scenarios \cite{hur2013millimeter}, have been proposed as a promising candidate for new spectrum in 5G networks. This proposal is supported by recent experiments in the United States and Korea \cite{rappaport2014millimeter,roh2014millimeter}, showing that mm-wave signals can cover up to 200 meters.
Lately, channel measurements have confirmed some unique propagation characteristics of mm-wave signals \cite{rappaportchannel}. It turns out that mm-wave signals are sensitive to blockages, which causes totally different path loss laws for line-of-sight (LOS) and non-line-of-sight (NLOS) mm-wave signals. Furthermore, diffraction and scattering effects are shown to be limited for mm-wave signals. This makes the conventional channel model for sub-6 GHz systems no longer suitable, and thus more sophisticated channel models are needed for the performance analysis of mm-wave networks.
Another distinguishing characteristic of mm-wave signals is the directional transmission. Thanks to the small wavelength of mm-wave signals, large-scale directional antenna arrays can be leveraged to provide substantial array gains and synthesize highly directional beams, which help to compensate for the additional free space path loss caused by the ten-fold increase of the carrier frequency \cite{rappaport2014millimeter}.
More importantly, different from the rich diffraction and scattering environment in sub-6 GHz systems, directional antennas will dramatically change the signal power, as well as the interference power. In mm-wave networks, the signal or interference power is highly directional and closely related to the angles of departure/arrival (AoDs/AoAs). In particular, the directional antenna array will provide variable power gains corresponding to different AoDs/AoAs. Even a slight shift of AoD/AoA may lead to a large array gain variation. Therefore, it is necessary and critical to incorporate the directional antenna arrays when analyzing mm-wave networks.
\subsection{Related Works and Motivation}
There exist several studies of the coverage performance of mm-wave networks \cite{7094802,venugopal2015interference,robertadhoc,robertcoverage,7105406,7279196,7397837,actualpattern,7370940,7357653,7154396,6824746}.
Analytical results for coverage and rate coverage probabilities in noise-limited mm-wave networks were presented in \cite{7094802}. Although directional transmission does, to some extent, suppress co-channel interference, a dense deployment is usually required to overcome the blockage in mm-wave networks, which makes mm-wave networks prone to be interference-limited. Hence, including only noise in the coverage analysis is not sufficient.
Analytical results on signal-to-interference-plus-noise ratio (SINR) and rate coverage based on a simplified directional antenna pattern were obtained for device-to-device (D2D) \cite{venugopal2015interference}, ad hoc \cite{robertadhoc}, and cellular \cite{robertcoverage,7105406} networks, respectively.
To maintain analytical tractability, the antenna pattern was simplified as a \emph{flat-top} pattern, which is a widely used simplification. Since the directional antenna array is a differentiating feature in mm-wave systems, it is crucial and intriguing to accurately incorporate it into the mm-wave network performance analysis.
Basically, the flat-top antenna pattern quantizes the continuously varying antenna array gains in a binary manner. Although it significantly simplifies the analytical derivation, the oversimplified flat-top pattern will lead to pessimistic coverage results, as will be revealed in this paper. Moreover, it is difficult to analyze the impact of directional antenna arrays with the flat-top antenna pattern, as only a few parameters are extracted to abstractly depict the actual antenna pattern. In practice, some critical parameters of the antenna beam pattern such as beamwidth, the $n$-th minor lobe maxima, nulls, and front-back ratio are all determined by the array size. Nevertheless, with the flat-top antenna pattern, these parameters can only be determined qualitatively and inaccurately according to the array size. As a side effect, the quantized antenna array gain also hinders further investigations of directional antenna arrays. For example, it is difficult to analyze beam misalignment, which is a critical problem in mm-wave networks \cite{hur2013millimeter}.
Recently, there have been some works considering the actual antenna pattern.
Two works considered random beamforming in mm-wave networks and used the actual antenna pattern, but only focused on the single link analysis without interference \cite{7279196,7397837}, and adopted some asymptotic approximation in the analysis.
The actual antenna pattern was adopted in \cite{actualpattern} for evaluating the capacity of an interfered communication link. However, all the interferers were assumed to use the same array gain, which weakens the practicality of the analytical result. An SINR coverage analysis incorporating the actual antenna pattern was carried out in \cite{7370940}. While the coverage probability is analytically given, the multiple integrals (4 nested integrals in the expression) prevent practical evaluation. Also, a Rayleigh fading channel model is not realistic for mm-wave networks due to their poor scattering property. All these works demonstrated that the actual antenna pattern suffers from poor analytical tractability. In addition, there are some works proposing different approximate antenna patterns. In \cite{7357653}, a Gaussian antenna pattern was numerically shown to be a good candidate to approximate the actual antenna pattern but does not lend itself to further analysis. Moreover, though the aforementioned works presented some analytical results with the actual antenna pattern, none of them unraveled how the array size will influence mm-wave networks, which is a critical and unique problem in mm-wave systems, and has only been reported through some simulation works \cite{7154396,6824746}.
To this end, an antenna pattern that not only approximates the actual antenna pattern accurately and realistically, but also with acceptable analytical tractability, is required to reveal more insights on directional antenna arrays in mm-wave networks.
In summary, there is so far no comprehensive investigation on the impact of directional antenna arrays in mm-wave networks. In this work, we will fill this gap with new analytical results of coverage probabilities that adopt more accurate approximations for the actual antenna pattern.
\subsection{Contributions}
We investigate the coverage\footnote{The terminology ``coverage'' is used for both cellular and ad hoc networks.} probabilities in mm-wave networks with a random spatial network model, where transmitters are modeled as a homogeneous Poisson point process (PPP) \cite{haenggi2012stochastic}, and the blockage effect is reflected by a \emph{LOS ball} blockage model \cite{robertcoverage}.
All the transmitters are assumed to utilize analog beamforming to serve the corresponding users. The main contributions of this work are summarized as follows.
\begin{itemize}
\item We first present a general framework for the coverage analysis in mm-wave networks, with arbitrary interference power distributions and antenna patterns, under the assumption that the information signal power is gamma distributed. Compared to previous results, the new expression of the coverage probability is more compact and can be evaluated more efficiently.
\item Based on the general framework, analytical expressions of the coverage probabilities for both mm-wave ad hoc and cellular networks are provided. For these two types of networks, two approximate antenna patterns are proposed to achieve a good balance between accuracy and analytical tractability. While the proposed approximate antenna patterns are more complicated than the flat-top pattern, our analytical results are more tractable for practical evaluation, thanks to a new approach to deal with gamma distributed signal powers and interferers located in a finite region.
\item With the highly tractable coverage probabilities at hand, the impact of directional antenna arrays in both mm-wave ad hoc and cellular networks is investigated. We show that the coverage probabilities are monotone increasing functions of the array size. Moreover, the increasing functions are similar in both kinds of networks, which is the product of an exponential and a polynomial function of the inverse of array size. Asymptotic outage probabilities are also derived when the number of antennas goes to infinity, which shows that the asymptotic outage probability is inversely proportional to the array size. This is the first analytical result on the impact of antenna arrays that has been derived in mm-wave networks.
\item All the analytical results are shown to be computationally efficient through numerical evaluations. Numerical results also show that NLOS signals and NLOS interference have negligible impact on the coverage probability in mm-wave networks. Moreover, the interference power in mm-wave networks is shown to be dominated by the directional antenna array gains, and large-scale directional antenna arrays are needed in mm-wave networks to maintain an acceptable coverage probability. With the increasing network density, the coverage probability has a peak value in mm-wave cellular networks, while it monotonically decreases in ad hoc networks.
\end{itemize}
\subsection{Organization}
The remainder of this paper is organized as follows. We shall present the system model in Section \ref{sec_sys}, and a general coverage analysis framework for mm-wave networks is introduced in Section \ref{sec_frame}. Then the coverage probabilities, as well as the impact of directional antenna arrays, for mm-wave ad hoc and cellular networks are derived in Sections \ref{sec_ad} and \ref{sec_cellular}, respectively. Numerical results will be presented in Section \ref{numer}, and conclusions will be drawn in Section \ref{conclu}.
\section{System Model}\label{sec_sys}
\subsection{Network and Channel Models}\label{II-A}
We consider downlink transmission in both mm-wave ad hoc and cellular networks. We will first present the common features for both types of networks, and the differences will be specified later. The transmitters are assumed to be distributed according to a homogeneous PPP \cite{haenggi2012stochastic}, which has been shown to be a network model with both reasonable accuracy and analytical tractability \cite{6620915}. As depicted in Fig. \ref{systemmodel}, we consider the receiver at the origin, which, under an expectation over the point process, becomes the typical receiver.
We assume that each receiver has a single receive antenna and is receiving signals from the corresponding transmitter equipped with a directional antenna array composed of $N_\mathrm{t}$ elements. All transmitters operate at a constant power $P_\mathrm{t}$.
We use the LOS ball \cite{robertcoverage,7061455} to model the blockage effect as shown in Fig. \ref{systemmodel}. Specifically, we define a LOS radius $R$, which represents the distance between a receiver and its nearby blockages, and the LOS probability of a certain link is one within $R$ and zero outside the radius. Compared with other blockage models adopted in the performance analysis for mm-wave networks, e.g., the 3GPP-like urban micro-cellular model, the LOS ball model has a better fit with real-world blockage scenarios \cite{7593259}. The incorporation of the blockages induces different path loss laws for LOS and NLOS links.
It has been pointed out in \cite{robertadhoc,robertcoverage} that NLOS signals and NLOS interference are negligible in mm-wave networks. Hence, we will focus on the analysis where the typical receiver is associated with a LOS transmitter and the interference stems from LOS interferers. The relevant transmitters thus form a PPP, denoted as $\Phi$, with density $\lambda_\mathrm{b}$ in a disk of radius $R$ centered at the origin. In Section \ref{numer}, we will justify the LOS assumption through simulations.
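Under the LOS ball model, the relevant transmitters form a homogeneous PPP restricted to a disk of radius $R$. A minimal sampling sketch follows; the parameter values are purely illustrative:

```python
import numpy as np

def sample_los_transmitters(lam_b, R, rng):
    """Sample a homogeneous PPP of density lam_b inside the LOS ball:
    Poisson-distributed count, points placed uniformly in the disk."""
    n = rng.poisson(lam_b * np.pi * R ** 2)
    radii = R * np.sqrt(rng.uniform(size=n))  # sqrt gives uniform area density
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.c_[radii * np.cos(angles), radii * np.sin(angles)]

rng = np.random.default_rng(0)
pts = sample_los_transmitters(lam_b=1e-3, R=200.0, rng=rng)
# On average, lam_b * pi * R^2 ~ 126 LOS transmitters fall inside the ball.
```

The square root applied to the uniform radial draw is what makes the points uniform in area rather than clustered near the origin.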
Directional antenna arrays are leveraged to provide significant beamforming gains to overcome the path loss and to synthesize highly directional beams. Universal frequency reuse is assumed, and thus the received signal for the typical receiver is given by
\begin{equation}
\begin{split}
y&=\sqrt{\beta}r_0^{-\frac{\alpha}{2}}\mathbf{h}_{x_0}\mathbf{w}_{x_0}\sqrt{P_\mathrm{t}}s_{x_0}\\
&\relphantom{=}+\sum_{x\in\Phi^\prime}\sqrt{\beta}\|x\|^{-\frac{\alpha}{2}}\mathbf{h}_{x}\mathbf{w}_x\sqrt{P_\mathrm{t}}s_x+n_0,\label{receivedsignal}
\end{split}
\end{equation}
where $r_0=\|x_0\|$ is the distance between the typical receiver and its corresponding transmitter, while $\|x\|$ is the distance between the transmitter at location $x$ and the typical receiver. The locations of the interfering transmitters are denoted as $\Phi^\prime$, and the channel vector between the interferer and the typical receiver is denoted as $\mathbf{h}_{x}$. The path loss exponent and intercept are symbolized by $\alpha$ and $\beta$ \cite{rappaportchannel}. In addition, the beamforming vector of the transmitter at location $x$ is denoted as $\mathbf{w}_x$, and $n_0$ stands for the additive white Gaussian noise (AWGN) with power being $\sigma^2$.
\begin{figure}[tbp]
\centering
\subfigure
{
\centering\includegraphics[height=5.6cm]{./systemmodel1}\label{systemmodel}
}\quad\quad
\subfigure
{
\centering\includegraphics[height=3.6cm]{./system2}\label{system2}
}
\caption{(a): A sample mm-wave network where transmitters are modeled as a PPP. The LOS ball is used to model the blockage effect in the network. (b): Illustration of the spatial AoDs $\vartheta_x$ and $\varphi_x$.}
\end{figure}
One main difference between the models for mm-wave ad hoc and cellular networks is the distance $r_0$ between the typical receiver and its corresponding transmitter. In ad hoc networks, each transmitter is assumed to have a corresponding receiver at a fixed distance $r_0$ that is called the \emph{dipole distance}. On the other hand, in cellular networks, the distance $r_0$ between the typical user and its serving base station (BS) is random, due to the random locations of BSs and users. We assume that the typical user is associated with its nearest BS, which is commonly adopted in cellular network analysis. The difference in $r_0$ also gives rise to another difference between these two kinds of networks, i.e., the set of interferers $\Phi^\prime$. In ad hoc networks, a dipolar pair is added with the receiver at the origin, and this pair becomes the typical pair. Therefore, $\Phi^\prime=\Phi$. On the other hand, because each user in cellular networks is associated with the nearest BS, which is part of the PPP $\Phi$, the set of interfering BSs $\Phi^\prime=\Phi\backslash\{x_0\}$ forms a PPP conditional on $x_0$ within a ring with inner radius $r_0$ and outer radius $R$.
Next we will present the channel model. Due to high free-space path loss, the mm-wave propagation environment is well characterized by a clustered channel model, i.e., the Saleh-Valenzuela model \cite{rappaportchannel},
\begin{equation}
\mathbf{h}_x=\sqrt{N_\mathrm{t}}\sum_{l=1}^L\rho_{xl}\mathbf{a}_\mathrm{t}^H(\vartheta_{xl}),
\end{equation}
where $(\cdot)^H$ symbolizes the conjugate transpose and $L$ is the number of clusters. The complex small-scale fading gain of the $l$-th cluster is denoted as $\rho_{xl}$. Due to the poor scattering environment, especially for LOS signals and interference, the Rayleigh fading assumption commonly used in sub-6 GHz systems no longer holds, which has also been noted in recent works \cite{7593259}. In this paper, we assume, as in \cite{robertcoverage}, that $|\rho_{xl}|$ follows independent Nakagami-$M$ fading for each link.
For mm-wave channels containing LOS components, the effect of NLOS signals is negligible since the channel gains of NLOS paths are typically 20 dB weaker than those of LOS signals \cite{rappaportchannel}. Hence, for the remainder of this paper, we will focus on LOS paths, i.e., $L=1$, and adopt a uniformly random single path (UR-SP) channel model that is commonly used in mm-wave network analysis \cite{7279196,7397837,7160780,6484896}.
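A consequence of the Nakagami-$M$ amplitude fading assumption is that the power gain $|\rho_x|^2$ is gamma distributed with shape $M$ and unit mean, which is the gamma-signal-power property the coverage framework relies on. A quick numerical check (illustrative parameters):

```python
import numpy as np

def nakagami_power_gain(M, size, rng):
    """Power gain |rho|^2 under Nakagami-M amplitude fading:
    Gamma(shape=M, scale=1/M), hence unit mean and variance 1/M."""
    return rng.gamma(shape=M, scale=1.0 / M, size=size)

rng = np.random.default_rng(1)
gains = nakagami_power_gain(M=3, size=200_000, rng=rng)
# Sample mean ~ 1 and sample variance ~ 1/3, matching Gamma(3, 1/3).
```

Larger $M$ concentrates the gain around its unit mean, recovering a nearly deterministic LOS link as $M\to\infty$.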
In addition, $\mathbf{a}_t(\vartheta_x)$ represents the transmit array response vector corresponding to the spatial AoD $\vartheta_x$, and it has been shown in \cite[Fig. 3]{mine} that uniform distribution is an excellent approximation for the distribution of spatial AoDs. We consider the uniform linear array (ULA) with $N_\mathrm{t}$ antenna elements. Therefore, the array response vectors are written as
\begin{equation}
\begin{split}
\mathbf{a}_\mathrm{t}(\vartheta_x)=\frac{1}{\sqrt{N_\mathrm{t}}}\left[1,\cdots,e^{{j2\pi k\vartheta_x}},
\cdots,e^{{j2\pi\left(N_\mathrm{t}-1\right) \vartheta_x}}\right]^T,
\end{split}
\end{equation}
where $\vartheta_x=\frac{d}{\lambda}\cos\phi_x$ is assumed uniformly distributed over $\left[-\frac{d}{\lambda},\frac{d}{\lambda}\right]$, and $0\le k<N_\mathrm{t}$ is the antenna index. Furthermore, $d$, $\lambda$, and $\phi_x$ are the antenna spacing, wavelength, and physical AoD. In order to enhance the directionality of the beam, the antenna spacing $d$ should be no larger than half-wavelength to avoid grating lobes \cite{balanis2005antenna}.
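The ULA response vector can be formed directly from this definition. A minimal sketch, assuming half-wavelength spacing $d=\lambda/2$:

```python
import numpy as np

def ula_response(theta, n_t):
    """ULA array response for spatial AoD theta = (d/lambda) * cos(phi),
    normalized so that the vector has unit norm."""
    k = np.arange(n_t)
    return np.exp(2j * np.pi * k * theta) / np.sqrt(n_t)

# Spatial AoD for a physical AoD of 60 degrees with d = lambda / 2.
a = ula_response(theta=0.5 * np.cos(np.pi / 3), n_t=16)
# ||a||^2 = 1 by construction, thanks to the 1/sqrt(n_t) normalization.
```

The $1/\sqrt{N_\mathrm{t}}$ factor matches the normalization used in the array response definition above.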
\subsection{Analog Beamforming and Antenna Pattern}\label{II-B}
While various space-time processing techniques can be applied at each multi-antenna mm-wave transmitter, we focus on analog beamforming, where the beam direction is controlled via phase shifters. Due to the low cost and low power consumption, analog beamforming has already been adopted in commercial mm-wave systems such as WiGig (IEEE 802.11ad) \cite{rappaport2014millimeter}. Assuming the spatial AoD of the channel between the transmitter at location $x$ and its serving user is $\varphi_x$, the optimal analog beamforming vector is well known and given by
\begin{equation}
\mathbf{w}_x=\mathbf{a}_\mathrm{t}(\varphi_x),\label{beamalign}
\end{equation}
which means the transmitter should align the beam direction exactly with the AoD of the channel to obtain the maximum power gain.
As shown in Fig. \ref{system2}, based on the optimal analog beamforming vector \eqref{beamalign}, for the typical receiver, the product of small-scale fading gain and beamforming gain of the transmitter at location $x$ is given by
\begin{equation}
\left|\mathbf{h}_x\mathbf{w}_x\right|^2=N_\mathrm{t}\left|\rho_x\right|^2\left|\mathbf{a}_\mathrm{t}^H(\vartheta_x)\mathbf{a}_\mathrm{t}(\varphi_x)\right|^2,
\end{equation}
where $\left|\rho_x\right|^2$ is the power gain of small-scale fading. By defining the array gain function $G_\mathrm{act}(x)$ as
\begin{equation}
G_\mathrm{act}(x)\triangleq\frac{\sin^2\left(\pi N_\mathrm{t}x\right)}{N_\mathrm{t}^2\sin^2\left(\pi x\right)},
\end{equation}
the normalized array gain of the transmitter at location $x$ can be expressed as
\begin{equation}
\begin{split}
\left|\mathbf{a}_\mathrm{t}^H(\vartheta_x)\mathbf{a}_\mathrm{t}(\varphi_x)\right|^2&=
\frac{1}{N_\mathrm{t}^2}\left|\sum_{i=0}^{N_\mathrm{t}-1}{e^{j2\pi i(\vartheta_x-\varphi_x)}}\right|^2\\
&=\frac{\sin^2\left[\pi N_\mathrm{t}(\vartheta_x-\varphi_x)\right]}{N_\mathrm{t}^2\sin^2\left[\pi (\vartheta_x-\varphi_x)\right]}= G_\mathrm{act}(\vartheta_x-\varphi_x),
\label{sinsin}
\end{split}
\end{equation}
where $\vartheta_x$ and $\varphi_x$ are independent uniformly distributed random variables over $\left[-\frac{d}{\lambda},\frac{d}{\lambda}\right]$. The array gain function in \eqref{sinsin} is a normalized \emph{Fej\'er kernel} with factor $\frac{1}{N_\mathrm{t}}$ and is referred to as the \emph{actual antenna pattern}.
In fact, the difference $\vartheta_x-\varphi_x$ in \eqref{sinsin} can be replaced by a single uniformly distributed random variable, as stated in the following lemma. This substitution does not change the distribution of the array gain.
\begin{lemma}
The array gain $G_\mathrm{act}(\vartheta_x-\varphi_x)$ is equal in distribution to $G_\mathrm{act}\left(\frac{d}{\lambda}\theta_x\right)$, where $\theta_x$ is a uniformly distributed random variable over $[-1,1]$.
\end{lemma}
\begin{IEEEproof}
The proof is based on the uniform distribution of $\vartheta_x$ and $\varphi_x$, and the periodic property of the function $e^{j2\pi x}$ in \eqref{sinsin}. The proof has been established in \cite[Appendix A]{7279196}.
\end{IEEEproof}
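As a quick numerical sanity check, two properties of the actual antenna pattern used above, unit gain under perfect beam alignment and period-one periodicity (which underlies the lemma), can be verified with a short script. The function below is an illustrative sketch, not part of the system model.

```python
import numpy as np

def g_act(x, n_t):
    """Fejer-kernel array gain sin^2(pi*N*x) / (N*sin(pi*x))^2, with limit 1 at integers."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                    # limit value where sin(pi*x) = 0
    m = ~np.isclose(np.sin(np.pi * x), 0.0)
    out[m] = (np.sin(np.pi * n_t * x[m]) ** 2
              / (n_t * np.sin(np.pi * x[m])) ** 2)
    return out

xs = np.linspace(-0.49, 0.49, 99)
print(g_act(0.0, 64)[0])                                # unit gain at perfect alignment
print(np.allclose(g_act(xs, 16), g_act(xs + 1.0, 16)))  # period 1, as used in the lemma
```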
Although the Fej\'er kernel has a relatively simple analytical form, it does not lend itself to further analysis due to the sine functions in both the numerator and denominator, which calls for an approximate antenna pattern that is both accurate and tractable for the performance analysis of mm-wave networks.
Next we will introduce two new approximate antenna patterns, as well as the flat-top antenna pattern, which has been widely used in existing works. Fig. \ref{motivation} visualizes these antenna patterns and evaluates the coverage probabilities with different antenna patterns through simulation.
\begin{figure*}
\centering
\subfigure[Visualization of four different antenna patterns when $N_\mathrm{t}=64$.]
{
\centering\includegraphics[height=5.6cm]{./moti1}\label{motivation1}
}
\subfigure[Coverage probability evaluations using four different antenna patterns in mm-wave cellular networks when $R=200$ m, $N_\mathrm{t}=64$, $\lambda_\mathrm{b}=1\times 10^{-3}$ m$^{-2}$, $M=3$, and $\alpha=2.1$.]
{
\centering\includegraphics[height=5.6cm]{./moti2}\label{motivation2}
}
\caption{The comparisons between different approximate antenna patterns.}\label{motivation}
\end{figure*}
\emph{1) Flat-top antenna pattern}:
Most of the existing works \cite{venugopal2015interference,robertadhoc,robertcoverage,7105406} adopt this simplified antenna pattern in the coverage analysis, where the array gains within the half-power beamwidth (HPBW) \cite{balanis2005antenna} are assumed to equal the maximum power gain, and the array gains for the remaining AoDs are approximated by the first minor maximum gain of the actual antenna pattern. While this simple approximation is highly tractable, it introduces substantial discrepancies in the evaluation of the network coverage probability, as shown in Fig. \ref{motivation2}\footnote{The gap can be narrowed by heuristically choosing different parameters for the flat-top pattern, e.g., beamwidth and front-back ratio, but the overall shape of the coverage probability remains different.}.
\emph{2) Sinc antenna pattern}: Instead of the actual antenna pattern, a tight lower bound is widely adopted for the numerical analysis in antenna theory. Since the antenna spacing $d$ is usually no larger than half-wavelength to avoid grating lobes, and $\sin x\simeq x$ for small $x$, the array gain function can be approximately expressed as \cite[Equation (6-10d)]{balanis2005antenna}
\begin{equation}
G_\mathrm{sinc}(x)
\triangleq\frac{\sin^2\left(\pi N_\mathrm{t}x\right)}{\left(\pi N_\mathrm{t} x\right)^2},
\end{equation}
which is a squared sinc function. The accuracy of this tight lower bound is shown in \cite[Appendix I, II]{balanis2005antenna}. Fig. \ref{motivation1} shows that the sinc antenna pattern is almost indistinguishable from the actual antenna pattern, and there is almost no error when using it to investigate the coverage probability, as illustrated in Fig. \ref{motivation2}. Moreover, the sinc antenna pattern is more tractable than the actual one, owing to the absence of the sine function in the denominator.
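Since $\sin(\pi x)\le\pi x$ for $x\ge0$, the squared sinc indeed lower-bounds the Fej\'er kernel. The following sketch, with illustrative helper functions and parameters, confirms both the bound and its tightness on the main lobe.

```python
import numpy as np

def g_act(x, n_t):
    """Actual (Fejer-kernel) pattern, with the 0/0 limit at aligned angles resolved to 1."""
    s = np.sin(np.pi * x)
    num = np.sin(np.pi * n_t * x) ** 2
    return np.where(np.isclose(s, 0.0), 1.0, num / np.maximum((n_t * s) ** 2, 1e-300))

def g_sinc(x, n_t):
    """Sinc pattern sin^2(pi*N*x) / (pi*N*x)^2; note np.sinc(t) = sin(pi*t)/(pi*t)."""
    return np.sinc(n_t * x) ** 2

n_t = 64
xs = np.linspace(-0.5, 0.5, 2001)
print(np.all(g_sinc(xs, n_t) <= g_act(xs, n_t) + 1e-9))      # lower bound everywhere
lobe = np.linspace(-1.0 / (2 * n_t), 1.0 / (2 * n_t), 201)
print(np.max(np.abs(g_act(lobe, n_t) - g_sinc(lobe, n_t))))  # tiny error on the main lobe
```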
\emph{3) Cosine antenna pattern}:
Another antenna pattern approximation is based on the cosine function as follows:
\begin{equation}\label{approxpattern}
G_\mathrm{cos}(x)=
\begin{cases}
\cos^2\left(\frac{\pi N_\mathrm{t}}{2}x\right)&|x|\le\frac{1}{N_\mathrm{t}},\\
0&\text{otherwise},
\end{cases}
\end{equation}
where the nonzero part is an elementary function with better analytical tractability. In Fig. \ref{motivation1}, we observe that the cosine antenna pattern provides a good approximation for the main lobe while sacrificing accuracy in the side lobes. When incorporated into the coverage probability, the cosine antenna pattern yields a negligible gap from the actual antenna pattern, which makes it a desirable trade-off between accuracy and tractability in the performance analysis of mm-wave networks.
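The defining properties of the cosine pattern, unit gain at broadside, a null at the main-lobe edge $|x|=1/N_\mathrm{t}$, and a close match to the actual pattern inside the main lobe, can be checked numerically. The sketch below uses illustrative parameters.

```python
import numpy as np

def g_cos(x, n_t):
    """Cosine pattern: cos^2(pi*N*x/2) on |x| <= 1/N, zero elsewhere (no side lobes)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1.0 / n_t, np.cos(np.pi * n_t * x / 2.0) ** 2, 0.0)

def g_act(x, n_t):
    s = np.sin(np.pi * x)
    return np.where(np.isclose(s, 0.0), 1.0,
                    np.sin(np.pi * n_t * x) ** 2 / np.maximum((n_t * s) ** 2, 1e-300))

n_t = 64
print(float(g_cos(0.0, n_t)))                  # broadside gain
print(float(g_cos(1.5 / n_t, n_t)))            # side lobes are zeroed out
lobe = np.linspace(-1.0 / n_t, 1.0 / n_t, 401)
print(np.max(np.abs(g_cos(lobe, n_t) - g_act(lobe, n_t))))   # modest main-lobe mismatch
```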
Fig. \ref{motivation} shows that the sinc and cosine antenna patterns are more accurate than the flat-top pattern, since they faithfully capture the impact of directional antenna arrays in mm-wave networks. Specifically, given the operating frequency and the antenna spacing, the antenna pattern is critically determined by the array size. In the flat-top pattern, however, it is difficult to quantitatively and accurately depict the variation of the HPBW and the first minor maximum for different array sizes and AoDs. Moreover, the binary quantization of the array gains cannot reflect the roll-off characteristic of the actual antenna pattern and is therefore unable to provide different array gains for different AoDs. In other words, the flat-top antenna pattern precludes any analysis of the impact of directional antenna arrays, which is a critical and unique issue in mm-wave systems. On the contrary, the sinc and cosine antenna patterns are explicit functions of the array size, which makes it possible to investigate the relation between the coverage probability and the directional antenna arrays. These two patterns will be adopted in the coverage analysis of mm-wave ad hoc and cellular networks in Sections \ref{sec_ad} and \ref{sec_cellular}, respectively.
\section{A General Framework for Coverage Analysis of mm-wave Networks}\label{sec_frame}
In this section, we will develop a general framework for the coverage analysis of mm-wave networks. The main result is a tractable expression for the coverage probability, for arbitrary antenna patterns and interference distributions. It will then be applied in the following two sections to evaluate mm-wave ad hoc networks and mm-wave cellular networks.
\subsection{Signal-to-interference-plus-noise Ratio (SINR) Analysis}\label{III-A}
We assume that each transmitter has full information about the AoD of the channel between itself and its serving user, which can be obtained through sophisticated beam training protocols \cite{rappaport2014millimeter}. The transmitters can align the beam to the AoD direction, according to \eqref{beamalign}, using analog beamforming to obtain the maximum antenna array gain. The SINR at the typical receiver is then given by
\begin{equation}
\mathrm{SINR}=\frac{P_\mathrm{t}N_\mathrm{t}|\rho_{x_0}|^2\beta r_0^{-\alpha}}{\sigma^2 + \sum_{x\in\Phi^\prime}P_\mathrm{t}g_x\beta \|x\|^{-\alpha}}= \frac{|\rho_{x_0}|^2 r_0^{-\alpha}}{\sigma_\mathrm{n}^2 + \sum_{x\in\Phi^\prime}g_x \|x\|^{-\alpha}},\label{generalSINR}
\end{equation}
where $\sigma_\mathrm{n}^2=\frac{\sigma^2}{\beta P_\mathrm{t}N_\mathrm{t}}$ is the normalized noise power and $g_x$ is the channel gain from the interfering transmitter at location $x$, including both the small-scale fading gain and the directional antenna array gain\footnote{Since $g_x$ is an arbitrary channel gain, when normalizing the noise power by $N_\mathrm{t}$, we abbreviate the normalized channel gain $\frac{g_x}{N_\mathrm{t}}$ as $g_x$ with a slight abuse of notation.}. In this section, we assume $(g_x)_{x\in\Phi^\prime}$ is a family of independent and identically distributed non-negative random variables, whose common distribution will be specified in the following two sections.
Besides the complicated directional antenna array gains, there is another difficulty when calculating the SINR distribution. Note that the channel gain of the signal $|\rho_{x_0}|^2$ follows a gamma distribution $\mathrm{Gamma}\left(M,\frac{1}{M}\right)$, where $M$ is the Nakagami parameter. Compared with the exponentially distributed power gain induced by Rayleigh fading, this gamma distribution brings additional challenges to the derivation. Note that a gamma distributed signal power appears widely in the evaluation of various multi-antenna systems: previous works showed that the signal power is gamma distributed under more general transmission techniques, e.g., maximal ratio transmission \cite{chang}, and in other network settings such as heterogeneous networks \cite{chang1}.
\subsection{Coverage Analysis Framework}\label{III-B}
The coverage probability, defined as the probability that the received SINR is greater than a certain threshold $\tau$, is written as
\begin{equation}
\begin{split}
p_\mathrm{c}(\tau)&=\mathbb{P}\left(\frac{|\rho_{x_0}|^2 r_0^{-\alpha}}{\sigma_\mathrm{n}^2 + \sum_{x\in\Phi^\prime}g_x \|x\|^{-\alpha}}>\tau\right)\\
&=\mathbb{P}\left[|\rho_{x_0}|^2>\tau r_0^\alpha\left(\sigma_\mathrm{n}^2 +I\right)\right],\label{coverageprob}
\end{split}
\end{equation}
where $I=\sum_{x\in\Phi^\prime}g_x \|x\|^{-\alpha}$.
As mentioned before, one main difficulty of the analysis comes from the gamma distributed random variable $|\rho_{x_0}|^2$. Previous works on the coverage analysis of mm-wave networks \cite{mine, venugopal2015interference,robertadhoc,robertcoverage} adopted an upper bound for the cumulative distribution function (cdf) of a normalized gamma random variable. In contrast, in this paper, we derive an exact expression for this probability. The coverage probability \eqref{coverageprob} is first rewritten as
\begin{equation}
\begin{split}
p_\mathrm{c}(\tau)&\overset{(a)}{=}\mathbb{E}_{r_0}\left\{\sum_{n=0}^{M-1}\frac{(M\tau r_0^\alpha)^n}{n!}\mathbb{E}_I\left[(\sigma_\mathrm{n}^2+I)^ne^{-M\tau r_0^\alpha (\sigma_\mathrm{n}^2+I)}\right]\right\}\\
&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)\right],
\label{nthde}
\end{split}
\end{equation}
where $s=M\tau r_0^\alpha$, and $\mathcal{L}(s)=e^{-s\sigma_\mathrm{n}^2}\mathbb{E}_I\left[e^{-sI}\right]$ is the Laplace transform of the noise and interference. The distance $r_0$ is random in cellular networks but deterministic in ad hoc ones. The notation $\mathcal{L}^{(n)}(s)=(-1)^n\mathbb{E}_I\left[(\sigma_\mathrm{n}^2+I)^ne^{-s(\sigma_\mathrm{n}^2+I)}\right]$ stands for the $n$-th derivative of $\mathcal{L}(s)$, and step $(a)$ follows from the cdf of a gamma random variable.
Next, we will derive the coverage probability based on the expression \eqref{nthde}. In particular, we will show that, for arbitrary distributions of the channel gain, the $n$-th derivative of the Laplace transform can be expressed in a recursive form. The coverage probability can then be expressed via the induced $\ell_1$-norm of a lower triangular Toeplitz matrix. This approach yields a more compact analytical result for the coverage probability than previous works, thanks to the more careful handling of the gamma distributed fading gain. More importantly, this framework enables further analyses of mm-wave networks, e.g., investigating the impact of directional antenna arrays in later sections, which cannot be obtained from existing results.
The first step is to derive the Laplace transform $\mathcal{L}(s)$, given in the following lemma. As mentioned in Section \ref{II-A}, we focus on the LOS interference within the LOS radius $R$ in the following derivation.
\begin{lemma}Assuming a lower bound $\kappa$ on the distance between the typical receiver and the nearest interferer, the Laplace transform of noise and interference is
\begin{align}
\label{Ls}\nonumber
\mathcal{L}(s)&=\exp\bigg(-s\sigma_\mathrm{n}^2-\pi\lambda_\mathrm{b}\Big\{R^2- \kappa^2+\delta\kappa^2\mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(s\kappa^{-\alpha} g)\right]\\
&\relphantom{=}-\delta R^2\mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(sR^{-\alpha}g)\right]\Big\}\bigg)\nonumber\\
&\triangleq \exp\left\{\eta(s)\right\},
\end{align}
where $\delta=\frac{2}{\alpha}$, $\mathrm{E}_p(z)$ is the generalized exponential integral \cite[Page \text{xxxv}]{zwillinger2014table}, and $g$ is a random variable with the same distribution as the $g_x$ in \eqref{generalSINR}.\label{lem1}
\end{lemma}
\begin{IEEEproof}
The Laplace transform of the interference $I$, denoted as $\mathcal{L}_I(s)$, is well known and is written as \cite{6042301}
\begin{equation}\label{eq14}
\begin{split}
\mathcal{L}_I(s)&=\mathbb{E}_I\left[e^{-sI}\right]\\
&=\exp\left\{-2\pi\lambda_\mathrm{b}\int_\kappa^R{\left(1-\mathbb{E}_{g}[\exp(-sgx^{-\alpha})]\right)}x\mathrm{d}x\right\}.
\end{split}
\end{equation}
Note that the expectation over $g$ is itself an integral, so \eqref{eq14} involves a double integral. Since the integrand is integrable, by Fubini's theorem we can swap the order of the expectation and the integration, and part of the exponent of $\mathcal{L}_I(s)$ can be recast as
\begin{align}
&\relphantom{=}2\int_\kappa^R{\left(1-\mathbb{E}_{g}[\exp(-sgx^{-\alpha})]\right)}x\mathrm{d}x\nonumber\\
&=2\mathbb{E}_{g}\left[\int_\kappa^R\left[1-\exp(-sgx^{-\alpha})\right]x\mathrm{d}x\right]\label{swap}\\
\nonumber&= R^2- \kappa^2+\mathbb{E}_{g}\left[(sg)^{\delta}\int_{sgR^{-\alpha}}^{sg\kappa^{-\alpha}} e^{-t}\mathrm{d}t^{-\delta}\right]\\
&= R^2- \kappa^2+\delta\kappa^2\mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(s\kappa^{-\alpha}g)\right]-\delta R^2\mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(sR^{-\alpha}g)\right].\label{intlap}
\end{align}
By substituting \eqref{intlap} into \eqref{eq14}, the Laplace transform $\mathcal{L}(s)$ in Lemma \ref{lem1} can be obtained.
\end{IEEEproof}
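The integral identity \eqref{intlap} can be verified numerically for a fixed (deterministic) channel gain $g$, in which case the expectations drop out: the closed form with the generalized exponential integral should match a direct quadrature of the exponent in \eqref{eq14}. The sketch below uses mpmath and illustrative parameter values.

```python
import mpmath as mp

alpha = 2.5
delta = 2.0 / alpha
s, g, kappa, R = 1.3, 0.7, 1.0, 200.0

# Closed form from Lemma 1 (deterministic g, so the expectations are trivial):
closed = (R**2 - kappa**2
          + delta * kappa**2 * mp.expint(1 + delta, s * g * kappa**(-alpha))
          - delta * R**2 * mp.expint(1 + delta, s * g * R**(-alpha)))

# Direct evaluation of 2 * int_kappa^R (1 - exp(-s*g*x^-alpha)) x dx:
direct = mp.quad(lambda x: 2 * x * (1 - mp.exp(-s * g * x**(-alpha))),
                 [kappa, 10, R])

print(closed, direct)   # the two expressions agree
```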
Calculating the Laplace transform over a PPP usually involves two expectations: one over the interferers' locations and one over the channel gains.
Note that in the derivation of Lemma \ref{lem1}, we first take the expectation over the interferers' locations and then average over the channel gains, as shown in \eqref{swap}, which is the reverse order compared to the conventional derivation \cite{6042301} and existing works on mm-wave networks \cite{mine, venugopal2015interference,robertadhoc,robertcoverage}. The reason for this order is that, in mm-wave networks, the distribution of the channel gains, which involves the directional antenna array gains, is much more complicated than in sub-6 GHz networks, so we defer the expectation over the gains to the second step to maintain analytical tractability. The benefits of this swapping will become apparent in later sections.
Based on the Laplace transform derived in Lemma \ref{lem1}, the coverage probability is given in the following theorem.
\begin{theorem}The coverage probability \eqref{coverageprob} is given by
\begin{equation}
p_\mathrm{c}(\tau)=
\begin{dcases}
\left\Vert\exp\left(\mathbf{C}_M\right)\right\Vert_1&\mathrm{ad\,\,hoc},\\
\int_0^Rf_{r_0}(r)\left\Vert\exp\left\{\mathbf{C}_M(r)\right\}\right\Vert_1\mathrm{d}r&\mathrm{cellular},\label{frameexpr}
\end{dcases}
\end{equation}
where $f_{r_0}(r)$ is the probability density function (pdf) of the distance between the typical receiver and its associated transmitter, and $\mathbf{C}_M$ is an $M\times M$ lower triangular Toeplitz matrix
\begin{equation}
\mathbf{C}_M=\left[{\begin{IEEEeqnarraybox*}[][c]{,c/c/c/c/c,}
c_0&{}&{}&{}&{}\\
c_1&c_0&{}&{}&{}\\
c_2&c_1&c_0&{}&{}\\
\vdots &{}&{}& \ddots &{}\\
c_{M-1}&\cdots&c_2& c_1 &c_0
\end{IEEEeqnarraybox*}} \right],\label{topmatrix}
\end{equation}
whose nonzero entries are determined by
\begin{equation}
c_k=\frac{(-s)^k}{k!}\eta^{(k)}(s),
\end{equation}\label{th1}
and $c_k>0$ for $k\ge1$.
\end{theorem}
\begin{IEEEproof}
See Appendix \ref{AB}.
\end{IEEEproof}
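The expression in Theorem \ref{th1} is straightforward to evaluate numerically: build the lower triangular Toeplitz matrix from the coefficients $c_k$, take the matrix exponential, and compute the induced $\ell_1$-norm, i.e., the maximum absolute column sum. The sketch below uses placeholder coefficients, since the actual $c_k$ depend on the network setting.

```python
import numpy as np
from scipy.linalg import expm, toeplitz

def coverage_from_coeffs(c):
    """||exp(C_M)||_1 for the M x M lower triangular Toeplitz matrix with first column c."""
    c = np.asarray(c, dtype=float)
    first_row = np.r_[c[0], np.zeros(len(c) - 1)]
    C = toeplitz(c, first_row)              # lower triangular Toeplitz matrix
    return np.linalg.norm(expm(C), 1)       # induced l1-norm = max abs column sum

# Placeholder coefficients with c_0 < 0 and c_k > 0 for k >= 1, as in Theorem 1.
print(coverage_from_coeffs([-0.5, 0.2, 0.1]))
```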
As stated in Section \ref{III-A}, the main assumption in Theorem \ref{th1} is the gamma distributed signal power, and the theorem holds for arbitrary interference distributions and antenna patterns. Furthermore, note that we adopt a general exponent $\eta(s)$ of the Laplace transform $\mathcal{L}(s)$, and thus Theorem \ref{th1} can be viewed as a generalization of the results in \cite{chang,chang1}. When the exponent $\eta(s)$ is specified as \cite[Equation (35)]{chang} and \cite[Equation (36)]{chang1} according to different network settings and fading assumptions, Theorem \ref{th1} specializes to the expressions therein. In mm-wave networks, the channel gain $g$ includes not only the small-scale fading gain, but also the directional antenna array gain. With Theorem \ref{th1} at hand, in order to obtain the specific coverage probability expression for a given channel gain $g$, the only parameters that need to be determined are the entries $\{c_k\}_{k=0}^{M-1}$ of the matrix $\mathbf{C}_M$. While we focus on analog beamforming, the framework proposed in this section is also applicable to mm-wave networks adopting other transmission techniques, e.g., hybrid precoding \cite{el2014spatially,7397861}. In the following two sections, we shall derive the coverage probabilities for different network settings and antenna patterns.
\section{Coverage Analysis for mm-Wave Ad Hoc Networks}\label{sec_ad}
Millimeter-wave communication has been proposed as a promising technique for next-generation ad hoc networks with short-range transmission, e.g., military battlefield networks \cite{5273811}, high-fidelity video transmission \cite{rappaport2014millimeter}, and D2D networks \cite{7010536}. In this section, we will first derive an analytical expression of the coverage probability for mm-wave ad hoc networks, based on which we will then investigate the critical role of directional antenna arrays in such networks.
\subsection{Coverage Analysis}\label{IV-A}
In mm-wave ad hoc networks, a dipole model is adopted, where the communication distance between the typical receiver and its associated transmitter is assumed to be fixed at the dipole distance \cite{haenggi2012stochastic}. As mentioned in Section \ref{II-A}, we assume that the typical dipole pair is in the LOS condition, i.e., $r_0\le R$. In fact, if the typical receiver were associated with an NLOS transmitter outside the LOS radius, due to the huge path loss and high noise power at mm-wave bands, the coverage probability would be fairly low (close to zero) for a practical SINR threshold, and is therefore of little analytical significance. Furthermore, in ad hoc networks, the nearest interferer can be arbitrarily close to the typical receiver, i.e., $\kappa=0$. According to \eqref{generalSINR}, the received SINR is given by
\begin{equation}
\mathrm{SINR}=\frac{|\rho_{x_0}|^2r_0^{-\alpha}}
{\sigma_\mathrm{n}^2 + \sum_{x\in\Phi}|\rho_x|^2G_\mathrm{act}\left(\frac{d}{\lambda}\theta_x\right) \|x\|^{-\alpha}}.
\end{equation}
As mentioned in Section \ref{II-B}, the sinc antenna pattern is an excellent approximation of the actual antenna pattern with better analytical tractability, so we propose to adopt it in the analysis of mm-wave ad hoc networks.
Note that in Section \ref{III-B}, we have pointed out that the main task to derive the coverage probability $p_\mathrm{c}(\tau)$ is to determine the entries in the matrix $\mathbf{C}_M$. The channel gain $g$ is the product of the gamma distributed small-scale fading gain $|\rho_x|^2$ and the directional antenna array gain $G_\mathrm{sinc}\left(\frac{d}{\lambda}\theta_x\right)$. First, a unique property of the directional array gain with the sinc antenna pattern is presented to help derive the coverage probability.
\begin{lemma}\label{lem3}
For $p\in\mathbb{Z}^+$,
\begin{equation}
\int_0^\infty \left(\frac{\sin x}{x}\right)^{2p}\mathrm{d}x=\frac{\pi}{2(2p-1)!}{{2p-1} \atopwithdelims \langle \rangle{p-1}},
\end{equation}
where $n \atopwithdelims \langle \rangle k$ are the Eulerian numbers, i.e., ${n \atopwithdelims \langle \rangle k}=\sum_{j=0}^{k+1}(-1)^j{{n+1}\choose j}(k-j+1)^n$.
\end{lemma}
\begin{IEEEproof}
The proof can be found in \cite[Lemma 2]{mine}.
\end{IEEEproof}
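Lemma \ref{lem3} can be checked against the classical values $\int_0^\infty (\sin x/x)^2\,\mathrm{d}x=\pi/2$ (for $p=1$) and $\int_0^\infty (\sin x/x)^4\,\mathrm{d}x=\pi/3$ (for $p=2$). The short script below implements the Eulerian-number formula; the function names are illustrative.

```python
import math

def eulerian(n, k):
    """Eulerian number <n, k> via the alternating-sum formula in Lemma 3."""
    return sum((-1) ** j * math.comb(n + 1, j) * (k - j + 1) ** n
               for j in range(k + 2))

def sinc_moment(p):
    """int_0^infty (sin x / x)^(2p) dx = pi / (2 (2p-1)!) * <2p-1, p-1>."""
    return math.pi / (2 * math.factorial(2 * p - 1)) * eulerian(2 * p - 1, p - 1)

print(sinc_moment(1), math.pi / 2)   # p = 1: both equal pi/2
print(sinc_moment(2), math.pi / 3)   # p = 2: both equal pi/3
```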
Based on Lemma \ref{lem3}, a lower bound of the coverage probability with the sinc antenna pattern is derived in the following proposition.
\begin{prop}\label{th2}The coverage probability of mm-wave ad hoc networks with the sinc antenna pattern is tightly lower bounded by
\begin{equation}\label{adcov}
p_\mathrm{c}^{\mathrm{sinc}}(\tau)\ge\left\Vert\exp\left(\frac{1}{N_\mathrm{t}}\mathbf{C}_M\right)\right\Vert_1.
\end{equation}
The coefficients in $\mathbf{C}_M$ are given by
\begin{equation}
\begin{split}
c_k&=\Bigg[\frac{\pi R^2\lambda_\mathrm{b}\lambda}{\alpha d}\sum_{p=\max\{1,k\}}^\infty\frac{(-\tau r_0^\alpha)^p{{2p-1} \atopwithdelims \langle \rangle{p-1}} \Gamma(M+p)}{R^{\alpha p}(2p-1)!(p-k)!\left(p-\delta\right)\Gamma(M)}\\
&\relphantom{=}-\frac{\delta\lambda_\mathrm{b}\lambda}{ d}\left(\delta\right)_k\Gamma\left(-\delta\right)\frac{\Gamma\left(M+\delta\right)}{\Gamma(M)}\tau^{\delta}r_0^2\xi\\
&\relphantom{=}+\mathbf1(k\le1)\frac{\tau Mr_0^\alpha\sigma^2}{\beta P_\mathrm{t}}\Bigg]\times\frac{(-1)^{k+1}}{k!},
\end{split}
\end{equation}
where $\Gamma(\cdot)$ denotes the gamma function, $(x)_n$ represents the falling factorial, $\mathbf{1}(\cdot)$ is the indicator function, and
\begin{equation}
\xi=\int_{0}^\infty\left|\frac{\sin x}{x}\right|^{2\delta}\mathrm{d}x.
\end{equation}
\end{prop}
\begin{IEEEproof}
See Appendix \ref{AD}.
\end{IEEEproof}
\emph{Remark 1:} According to recent mm-wave channel measurements \cite{rappaportchannel}, the path loss exponent $\alpha$ is less than 3, which ensures the convergence of $\xi$.
\emph{Remark 2:} Although the expressions in Proposition \ref{th2} involve a summation of infinitely many terms, it turns out that, in practical evaluation, the series converges quickly, and the high-order terms contribute little to the sum. Hence, using a finite number of terms is sufficient for numerical computation. In addition, $\xi$ only depends on the path loss exponent $\alpha$ and can easily be evaluated numerically and offline. Overall, the expression in Proposition \ref{th2} is much easier to evaluate than existing results \cite{robertadhoc,venugopal2015interference} that contain multiple nested integrals.
\emph{Remark 3:} Note that the derivation in Appendix \ref{AD} is based on the Laplace transform provided in Lemma \ref{lem1}, where we swap the order of two expectations as mentioned in Section \ref{III-B}.
With the help of this swapping operation, we are able to derive a more tractable expression for the Laplace transform, which verifies the benefits and superiority of the proposed analytical framework.
\emph{Remark 4:} For a given coverage probability, the maximum transmitter density can be numerically determined from Proposition \ref{th2}.
\subsection{Impact of Directional Antenna Arrays}\label{IV-B}
Next we investigate how directional antenna arrays affect the coverage probability in mm-wave networks. Increasing the array size enhances the signal quality, but may also increase interference power. The overall effect is revealed in the following corollary.
\begin{corollary}\label{coro1}
The tight lower bound of the coverage probability \eqref{adcov} is a non-decreasing concave function of the array size, and it can be rewritten as
\begin{equation}\label{eqcoro1}
p_\mathrm{c}^{\mathrm{sinc}}(t)\ge e^{c_0t}\left(1+\sum_{n=1}^{M-1}\beta_nt^n\right),
\end{equation}
where $t=\frac{1}{N_\mathrm{t}}$, and
\begin{equation}
\beta_n=
\frac{\left\Vert \left(\mathbf{C}_M-c_0\mathbf{I}_M\right)^n\right\Vert_1}{n!}\quad n\ge1.
\end{equation}
When $t\to0$, i.e., $N_\mathrm{t}\to\infty$, the asymptotic outage probability is given by
\begin{equation}\label{asymp}
\tilde{p}_\mathrm{o}^\mathrm{sinc}(t)\sim\frac{\mu}{N_\mathrm{t}},
\end{equation}
where $\mu=-\sum_{n=0}^{M-1}c_n>0$.
\end{corollary}
\begin{IEEEproof}
See Appendix \ref{AE}.
\end{IEEEproof}
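The rewriting in \eqref{eqcoro1} relies on $\mathbf{C}_M-c_0\mathbf{I}_M$ being nilpotent with non-negative entries, so that the matrix exponential factors and the induced $\ell_1$-norm is attained in the first column. A numeric sketch with illustrative coefficients confirms the identity.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm, toeplitz

c = np.array([-0.8, 0.3, 0.15, 0.05])   # illustrative: c_0 < 0, c_k > 0 for k >= 1
M = len(c)
C = toeplitz(c, np.r_[c[0], np.zeros(M - 1)])
N = C - c[0] * np.eye(M)                # nilpotent lower-triangular part

def lhs(t):
    """||exp(t * C_M)||_1 as in the original bound, with t = 1/N_t."""
    return np.linalg.norm(expm(t * C), 1)

def rhs(t):
    """e^{c_0 t} (1 + sum_n beta_n t^n) with beta_n = ||N^n||_1 / n!."""
    beta = [np.linalg.norm(np.linalg.matrix_power(N, n), 1) / factorial(n)
            for n in range(1, M)]
    return np.exp(c[0] * t) * (1.0 + sum(b * t ** n
                                         for n, b in enumerate(beta, start=1)))

print([(t, lhs(t), rhs(t)) for t in (0.25, 1.0, 2.0)])
```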
It can be seen from Corollary \ref{coro1} that $p_\mathrm{c}^\mathrm{sinc}(t)\to 1$ as $t\to0$ for all network parameters. Hence, for any coverage requirement $1-\epsilon$, there exists a minimum antenna array size $N_\mathrm{t}$ that satisfies it regardless of the other network parameters, and this size can be numerically determined from Corollary \ref{coro1}.
The lower bound in Corollary \ref{coro1} indicates how antenna arrays affect the coverage probability: increasing the array size always benefits the coverage probability in ad hoc networks. Simulations in Section \ref{numer} will show that this bound is tight. Moreover, the lower bound is the product of an exponential function and a polynomial of order $M-1$ in the inverse array size $t$. In the special case $M=1$, i.e., a Rayleigh fading channel, the lower bound reduces to an exponential function.
The asymptotic coverage probability \eqref{asymp} shows that the asymptotic outage probability is inversely proportional to the array size.
To the best of the authors' knowledge, this is the first analytical result on the impact of antenna arrays in mm-wave network analysis.
\emph{Remark 5:} Note that the derivation of Corollary \ref{coro1} builds on the analytical framework proposed in Section \ref{sec_frame}. In particular, it benefits greatly from the careful treatment of the gamma distributed signal power via the lower triangular Toeplitz matrix representation. If the upper bound in \cite{mine, venugopal2015interference,robertadhoc,robertcoverage} were used instead, we would not be able to explicitly reveal the impact of antenna arrays, which, from another perspective, confirms the advantages of the proposed analytical framework.
\section{Coverage Analysis for mm-Wave Cellular Networks}\label{sec_cellular}
In this section, we will analyze the coverage probability for mm-wave cellular networks. While the sinc antenna pattern could still be employed to obtain a highly accurate approximation of the actual antenna pattern, its numerical evaluation is more complicated and the resulting expression reveals little insight.
In particular, as shown in \eqref{frameexpr}, an additional integral over the distance between the serving BS and the typical user is needed. Furthermore, since $\kappa=r_0$ in cellular networks, the infinite series in the integrand does not converge quickly.
Instead, we will adopt the cosine antenna pattern in this section, which leads to a more tractable expression. The impact of antenna arrays on the coverage probability will then be investigated.
\subsection{Coverage Analysis}
In contrast to existing works \cite{robertcoverage}, we will present an analytical result for the coverage probability that fully reflects the directionality in mm-wave cellular networks. Note that although the proposed approximate antenna pattern is more complicated than the flat-top pattern, the resulting expression, based on the analytical framework in Section \ref{sec_frame}, is more compact and tractable. With the cosine antenna pattern, the coverage probability is derived in the following proposition.
\begin{prop}\label{th3}
The coverage probability of mm-wave cellular networks with the cosine antenna pattern is given by
\begin{equation}
p_\mathrm{c}^{\cos}(\tau)=\pi\lambda_\mathrm{b}\int_0^{R^2}e^{-\pi\lambda_\mathrm{b}r}\left\Vert\exp\left\{\frac{1}{N_\mathrm{t}}\mathbf{C}_M(r)\right\}\right\Vert_1\mathrm{d}r.\label{26}
\end{equation}
The nonzero entries in $\mathbf{C}_M$ are determined by
\begin{equation}
\begin{split}
c_k(r)&=\frac{2\sqrt{\pi}\lambda_\mathrm{b}\lambda\Gamma\left(k+\frac{1}{2}\right)\Gamma(M+k)\tau^k}{d(k!)^2(\alpha k-2)\Gamma(M)}\\
&\relphantom{=}\times\left[J_k\left(-\tau\right)r-J_k\left(-\frac{\tau}{R^\alpha}r^\frac{1}{\delta}\right)R^{2-\alpha k}r^\frac{ k}{\delta}\right]\\
&\relphantom{=}+\mathbf{1}(k\le1)\frac{(-1)^{k+1}M\tau\sigma^2}{\beta P_\mathrm{t}},
\end{split}
\end{equation}
where
\begin{equation}
J_k\left(x\right)={}_3F_2\left(k+\frac{1}{2},k-\delta,k+M;k+1,k+1-\delta;x\right),
\end{equation}
with ${}_3F_2(a_1,a_2,a_3;b_1,b_2;z)$ denoting the generalized hypergeometric function.
\end{prop}
\begin{IEEEproof}
See Appendix \ref{AF}.
\end{IEEEproof}
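The function $J_k$ can be evaluated directly with mpmath's generalized hypergeometric routine; the default values of $M$ and $\alpha$ below match the illustrative parameters used in the figures.

```python
import mpmath as mp

def J(k, x, M=3, alpha=2.1):
    """J_k(x) = 3F2(k+1/2, k-delta, k+M; k+1, k+1-delta; x) with delta = 2/alpha."""
    delta = 2.0 / alpha
    return mp.hyp3f2(k + 0.5, k - delta, k + M,
                     k + 1, k + 1 - delta, x)

print(J(0, 0))      # any pFq equals 1 at the origin
print(J(1, -0.3))   # value used only for illustration
```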
Note that the coefficients $c_k(r)$ in Proposition \ref{th3} can be expressed via the well-known hypergeometric function, which can be calculated efficiently in modern numerical software, rather than the infinite summations in Proposition \ref{th2} for ad hoc networks. This illustrates that the cosine antenna pattern enables a more tractable analysis for cellular networks. We will use Proposition \ref{th3} as an approximation for the coverage probability with the actual antenna pattern, and the accuracy of the cosine antenna pattern will be verified in Section \ref{numer}. Furthermore, similar to Remark 3, the swap of the two expectations in \eqref{swap} enables the derivation of Proposition \ref{th3} and turns out to be an effective and tractable approach to handle complicated channel gain distributions in mm-wave networks.
\emph{Remark 6:} With Proposition \ref{th3}, we can numerically calculate the required BS density as well as the minimum number of antennas for a desirable coverage probability. Furthermore, the optimal BS density that achieves the maximum coverage probability, as we will see in Section \ref{VI-A}, can also be numerically determined by Proposition \ref{th3}.
\subsection{Impact of Directional Antenna Arrays}
In the last subsection, we derived an analytical expression for the coverage probability of mm-wave cellular networks with the cosine antenna pattern. However, it is difficult to further analyze the impact of directional antenna arrays, since the array size parameter $N_\mathrm{t}$ appears inside an integral of the induced $\ell_1$-norm of a matrix exponential. As an alternative, a lower bound for the coverage probability in Proposition \ref{th3} is provided next, based on which we will present the impact of directional antenna arrays.
\begin{corollary}\label{coro2}
A lower bound of the coverage probability \eqref{26} is given by
\begin{equation}\label{cecov}
p_\mathrm{c}^{\cos}(\tau)\ge\left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)\left\Vert\exp\left\{\frac{1}{N_\mathrm{t}(1-e^{-\pi\lambda_\mathrm{b}R^2})}\mathbf{D}_M\right\}\right\Vert_1.
\end{equation}
The nonzero entries in $\mathbf{D}_M$ are determined by
\begin{equation}
\begin{split}
d_k&=\frac{2\lambda\Gamma\left(k+\frac{1}{2}\right)\Gamma(M+k)\tau^k}{\sqrt{\pi}d(k!)^2(\alpha k-2)\Gamma(M)}
\bigg[y_k\left(-\tau\right)-(\pi\lambda_\mathrm{b})^2R^{2-\alpha k}\\
&\relphantom{=}\times\int_0^{R^2}e^{-\pi\lambda_\mathrm{b}r}r^\frac{\alpha k}{2}J_k\left(-\frac{\tau}{R^{\alpha}}r^\frac{1}{\delta}\right)\mathrm{d}r\bigg]\\
&\relphantom{=}+\mathbf{1}(k\le1)\frac{(-1)^{k+1}M\tau\sigma^2}{\beta P_\mathrm{t}\left({\pi\lambda_\mathrm{b}}\right)^{\frac{1}{\delta}}}\gamma\left(1+\frac{1}{\delta},\pi\lambda_\mathrm{b}R^2\right),
\end{split}
\end{equation}
where $\gamma(s,x)$ is the lower incomplete gamma function \cite[Page 890]{zwillinger2014table} and
\begin{equation}
\begin{split}
y_k(x)&=J_k(x)\left[1-e^{-\pi\lambda_\mathrm{b}R^2}\left(1+\pi\lambda_\mathrm{b}R^2\right)\right]\\
&\relphantom{=}+\mathbf{1}(k=0)\left(
\pi\lambda_\mathrm{b}R^2-1+e^{-\pi\lambda_\mathrm{b}R^2}\right).
\end{split}
\end{equation}
\end{corollary}
\begin{IEEEproof}
See Appendix \ref{AG}.
\end{IEEEproof}
With this lower bound, the integrand no longer involves the induced $\ell_1$-norm of a matrix exponential, which makes it possible to reveal the impact of antenna arrays, as stated in the following corollary.
\begin{corollary}\label{coro3}
The lower bound of the coverage probability \eqref{26} is a non-decreasing concave function of the array size, and it can be rewritten as
\begin{equation}\label{b33}
p_\mathrm{c}^{\cos}(t)\ge \left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)e^{\beta_0t}\left(1+\sum_{n=1}^{M-1}\beta_nt^n\right),
\end{equation}
where $t=\frac{1}{N_\mathrm{t}}$ is the inverse of the array size and
\begin{equation}
\beta_n=
\begin{dcases}
\frac{d_0}{1-e^{-\pi\lambda_\mathrm{b}R^2}}&n=0,\\
\frac{\left\Vert \left(\mathbf{D}_M-d_0\mathbf{I}_M\right)^n\right\Vert_1}{n!\left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)}&n\ge1.
\end{dcases}
\end{equation}
When $t\to0$, i.e., $N_\mathrm{t}\to\infty$, the asymptotic outage probability is given by
\begin{equation}
\tilde{p}_\mathrm{o}^\mathrm{cos}(t)\sim \frac{\mu}{N_\mathrm{t}}+e^{-\pi\lambda_\mathrm{b}R^2},
\end{equation}
where $\mu=-\sum_{n=0}^{M-1}d_n>0$.
\end{corollary}
\begin{IEEEproof}
The proof is similar to that of Corollary \ref{coro1}.
\end{IEEEproof}
It turns out that this lower bound of the coverage probability with the array size is quite similar to that in mm-wave ad hoc networks, yet with additional terms brought by the user association. This similarity shows that the impact of directional antenna arrays in mm-wave networks does not depend much on the user association strategy. Although this result is based on the cosine antenna pattern and a lower bound, later we will show its accuracy via simulations. Similar to Remark 5, the key tool here is the analytical framework proposed in Section \ref{sec_frame}, which enables us to investigate the impact of antenna arrays in mm-wave cellular networks.
\section{Numerical Results}\label{numer}
\begin{figure*}
\centering
\subfigure[SINR, SIR, and SNR coverage probabilities in mm-wave cellular networks when $R=200$ m, $N_\mathrm{t}=64$, $\tau=10$ dB, $M=3$, and $\alpha=2.1$.]
{
\centering\includegraphics[height=5.6cm]{./NLOS_c}\label{f1a}
}
\subfigure[SINR, SIR, and SNR coverage probabilities in mm-wave ad hoc networks when $R=180$ m, $N_\mathrm{t}=64$, $\tau=5$ dB, $M=5$, $\alpha=2.2$, and $r_0=25$ m.]{
\centering\includegraphics[height=5.6cm]{./NLOS_a}\label{f1b}
}
\caption{The impact of NLOS signals and interference in mm-wave (a) cellular networks and (b) ad hoc networks.}\label{f1}
\end{figure*}
In this section, we will present numerical results of coverage probabilities in both mm-wave ad hoc and cellular networks. We assume that the bandwidth is 1 GHz, and the transmit power of each BS is set as 1 Watt. The separation between the antenna elements is $d=\frac{\lambda}{4}$, i.e., quarter-wavelength to avoid the grating lobes. From the recent measurements of mm-wave signal propagations \cite{rappaportchannel}, the path loss exponent $\alpha$ is close to 2 and the intercept is $\beta=-61.4$ dB. All simulation results shown in this section are averaged over $5\times10^5$ realizations.
\subsection{The Role of NLOS Signals and Interference}\label{VI-A}
In Section \ref{II-A}, we stated the assumption that NLOS signals and NLOS interference are negligible in mm-wave networks, which will be justified in this subsection. To model the NLOS signals and interference, we set the propagation parameters as follows: the path loss exponent is $\alpha_\mathrm{NLOS}=4$ and the intercept is $\beta_\mathrm{NLOS}=-72$ dB \cite{rappaportchannel}. Due to the richer reflections and scattering environment of NLOS propagations, Rayleigh fading is assumed as the small-scale propagation model of the NLOS signals and interference. According to recent measurements, a practical value of the LOS ball radius $R$ should be in the order of hundred meters \cite{7593259}.
In Fig. \ref{f1a}, we show a simulation of the SINR coverage probability without incorporating the NLOS serving BS and NLOS interferers, whose curve almost coincides with that including NLOS components. This demonstrates that the impact of NLOS signals and interference is negligible and validates the LOS assumption made in Section \ref{II-A}, i.e., we only need to focus on the analysis where the typical receiver is associated with a LOS transmitter and the interference is brought by LOS interferers. The underlying reasons are as follows for different BS densities: 1) When the BS density is low, the network operates in the noise-limited regime, and thus only the LOS signal matters; 2) At medium BS densities, there is a certain probability to have a LOS serving BS, and the interference gradually affects the SINR coverage. However, the LOS interference power is much higher than the NLOS one. On the other hand, when the typical link is NLOS, it is difficult to achieve a satisfactory SINR value; 3) Very dense mm-wave networks are LOS interference-limited, which has been investigated in \cite{mine,robertcoverage}.
For mm-wave ad hoc networks, the typical dipole pair is assumed to be LOS. As explained in Section \ref{IV-A}, the coverage probability is unsatisfactory due to the huge path loss and high noise power when the signal link is NLOS. Fig. \ref{f1b} demonstrates the impact of NLOS interferers when the tagged transmitter is in the LOS condition. It shows that, with a LOS transmitter associated with the typical receiver, the NLOS interference is also negligible, for reasons similar to those in cellular networks \cite{robertadhoc}.
Hence, it is reasonable to neglect the NLOS components in the analysis of mm-wave networks. Although the NLOS components are shown to be minor, all the simulations still include them for completeness and consistency. Moreover, we retain the actual antenna pattern \eqref{sinsin} in the remaining simulations.
\subsection{Coverage Analysis}
\begin{figure*}
\centering
\subfigure[Coverage probability in mm-wave ad hoc networks.]
{
\centering\includegraphics[height=5.6cm]{./SINR_a}\label{f2a}
}
\subfigure[Coverage probability in mm-wave cellular networks when $R=200$ m, $N_\mathrm{t}=128$, $\lambda_\mathrm{b}=1\times 10^{-3}$ m$^{-2}$, $M=3$, and $\alpha=2.1$.]{
\centering\includegraphics[height=5.6cm]{./SINR_c}\label{f2b}
}
\caption{Coverage analysis using (a) Proposition \ref{th2} for mm-wave ad hoc networks, and (b) Proposition \ref{th3} and Corollary \ref{coro2} for mm-wave cellular networks.}
\end{figure*}
The effects of noise and interference in mm-wave networks are also investigated. In Fig. \ref{f1a}, we evaluate the signal-to-interference ratio (SIR) and signal-to-noise ratio (SNR) coverage probabilities versus the BS density in mm-wave cellular networks.
It was found in \cite{7061455} that the SIR coverage probability will monotonically decrease with the increasing BS density in sub-6 GHz networks with the dual-slope path loss model, which, however, no longer holds in mm-wave cellular networks.
This is because the small-scale fading differs between LOS and NLOS propagations in mm-wave networks, whereas it was assumed to be identical for both in \cite{7061455}.
When the BS density gradually increases, the signal link tends to experience Nakagami fading rather than Rayleigh fading. This change in small-scale fading results in a slight increase of the SIR coverage probability, which also implicitly illustrates that Nakagami fading provides better coverage than Rayleigh fading.
Therefore, as a lower bound of both SIR and SNR coverage probabilities, the SINR coverage probability in mm-wave cellular networks has a peak value with the increasing BS density. On the other hand, different from mm-wave cellular networks, the SINR coverage probability decreases with network densification due to the fixed dipole distance and arbitrarily close interferers, which is shown in Fig. \ref{f1b}. This evaluation indicates the importance of analyzing the SINR distribution in mm-wave cellular networks, while the SIR coverage can be used as a good metric for mm-wave ad hoc networks.
In this subsection, we will verify our analytical results in Sections \ref{sec_ad} and \ref{sec_cellular} through simulations.
In Fig. \ref{f2a}, the SINR coverage probabilities for mm-wave ad hoc networks are evaluated. It can be seen that the analytical results match the simulations with negligible gaps, which implies the accuracy of the bound in Proposition \ref{th2}. In Remark 1, we mentioned that using finite terms for the summations in $\{c_k\}_{k=0}^{M-1}$ is sufficient. In the numerical evaluation of Proposition \ref{th2} in Fig. \ref{f2a}, we only use 5 terms in the summations and it turns out that the higher-order terms are negligible for practical evaluation.
In Fig. \ref{f2b}, the coverage probability for a mm-wave cellular network is evaluated. We see that both the analytical results in Proposition \ref{th3} and Corollary \ref{coro2} give an approximate coverage probability with minor gaps. The expression in Proposition \ref{th3} yields a very good approximation for smaller SINR thresholds and a tight bound for larger ones. This is because the major approximations made in the cosine antenna pattern \eqref{approxpattern} are on the side lobe gains that are approximated to be zeros, while the main lobe gains are approximated accurately with the cosine function. When the SINR threshold gets large, the interference power is smaller, which also means the interference is more likely to be produced by side lobe gains. Therefore, the gap will gradually increase due to the relatively crude approximation of the side lobe gains.
The analytical result in Corollary \ref{coro2} provides a lower bound of the expression in Proposition \ref{th3}. Although it is not guaranteed to be a lower or upper bound of the exact SINR coverage probability, it gives a good approximation as shown in Fig. \ref{f2b}, with more analytical tractability and potential for further analysis, which will be discussed in detail in the next subsection. The results presented in Fig. \ref{f2b} show the effectiveness and rationale of the proposed cosine antenna pattern \eqref{approxpattern} in coverage analysis for mm-wave cellular networks, which is an ideal candidate for further performance analysis in mm-wave cellular networks.
\subsection{Impact of Directional Antenna Arrays}
\begin{figure*}
\centering
\subfigure[Impact of antenna arrays in mm-wave ad hoc networks when $R=200$ m, $\tau=5$ dB, $\lambda_\mathrm{b}=1\times 10^{-3}$ m$^{-2}$, $\alpha=2.1$, and $r_0=25$ m.]
{
\centering\includegraphics[height=5.6cm]{./Nt_a}\label{f3a}
}
\subfigure[Impact of antenna arrays in mm-wave cellular networks when $R=200$ m, $\tau=5$ dB, $\lambda_\mathrm{b}=1\times 10^{-3}$ m$^{-2}$, and $\alpha=2.1$.]
{
\centering\includegraphics[height=5.6cm]{./Nt_c}\label{f3b}
}
\caption{Investigation on the impact of antenna arrays using (a) Corollary \ref{coro1} for mm-wave ad hoc networks, and (b) Corollary \ref{coro3} for mm-wave cellular networks.}
\end{figure*}
In this subsection, we will discuss the impact of directional antenna arrays on coverage probability in mm-wave networks\footnote{In Figs. \ref{f3a} and \ref{f3b}, the x-axes are reversed, and the y-axes are in the logarithmic scale.}.
Fig. \ref{f3a} demonstrates that the analytical result in Corollary \ref{coro1} for mm-wave ad hoc networks well matches the simulation result. We see that the increase of the array size leads to an improvement of the coverage probability, which confirms the monotonicity property in Corollary \ref{coro1}. In the following, we provide some intuitive explanations for this phenomenon.
Enlarging the array increases the maximum array gain for both signal and interference at the same pace, in proportion to the array size, so there is almost no performance gain from the larger maximum array gain itself. Nevertheless, a larger array also narrows the beams, which reduces the probability that the interferers direct their main lobes towards the typical receiver. Moreover, note that the lower bound derived in Corollary \ref{coro1} is non-decreasing and concave, which means that the coverage benefit of leveraging more antennas gradually diminishes with the increasing array size. In addition, we observe that increasing the Nakagami parameter $M$ results in an increase of the coverage probability.
In Fig. \ref{f3b}, the impact of antenna arrays in mm-wave cellular networks is investigated. For the analytical result, we evaluate the coverage probability using the expression in Corollary \ref{coro3}, which gives a lower bound of the coverage probability adopting the cosine antenna pattern. Although Rayleigh fading, i.e., $M=1$, is only a special case of the analysis in this paper and not suitable for LOS mm-wave channels, it is valuable for checking the lower bound in Corollary \ref{coro3}. As stated in Section \ref{IV-B}, when $M=1$, the lower bound \eqref{b33} reduces to a purely exponential one, which appears linear on the logarithmic scale in Fig. \ref{f3b}. When the Nakagami parameter $M$ increases, the polynomial term takes effect and makes the lower bound non-decreasing and concave. It turns out that the lower bound derived in Corollary \ref{coro3} can be regarded as an effective expression for analyzing the impact of directional antenna arrays in mm-wave cellular networks, and that the cosine antenna pattern is a satisfactory surrogate of the actual antenna pattern for tractable analysis in mm-wave networks.
\section{Conclusions}\label{conclu}
In this paper, we first proposed a general framework for coverage analysis in mm-wave networks. It was then applied to derive new tractable expressions of coverage probabilities for mm-wave ad hoc and cellular networks, where two approximate antenna patterns with good accuracy and analytical tractability were adopted.
We have shown that, as the network density increases, the coverage probability reaches a peak in mm-wave cellular networks, while it monotonically decreases in ad hoc networks.
More importantly, analytical results show that the coverage probabilities of both types of networks increase with the antenna array size as non-decreasing concave functions.
It will be interesting to extend the proposed analytical framework to more advanced precoding techniques, e.g., hybrid precoding \cite{el2014spatially,7397861}. Moreover, a coverage analysis that includes the beam misalignment caused by imperfect channel information is also a promising future research direction.
\appendices
\section{}\label{AB}
Defining $x_n=\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)$, the coverage probability \eqref{nthde} can be expressed as
\begin{equation}\label{25}
p_\mathrm{c}(\tau)=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}x_n\right],
\end{equation}
where $x_0=\mathcal{L}(s)=\exp\{\eta(s)\}$ is given in Lemma \ref{lem1}. Next, we will express $x_n$ in a recursive form. It is obvious that $\mathcal{L}^{(1)}(s)=\eta^{(1)}(s)\mathcal{L}(s)$, and according to the formula of Leibniz for the $n$-th derivative of the product of two functions \cite{roman1980formula}, we have
\begin{equation}
\mathcal{L}^{(n)}(s)=\frac{\mathrm{d}^{n-1}}{\mathrm{d}s^{n-1}}\mathcal{L}^{(1)}(s)=\sum_{i=0}^{n-1}{{n-1}\choose i} \eta^{(n-i)}(s)\mathcal{L}^{(i)}(s),
\end{equation}
followed by
\begin{equation}
\frac{(-s)^n}{n!}\mathcal{L}^{(n)}(s)=\sum_{i=0}^{n-1}\frac{n-i}{n}\frac{(-s)^{(n-i)}}{(n-i)!}\eta^{(n-i)}(s)\frac{(-s)^i}{i!}\mathcal{L}^{(i)}(s).
\end{equation}
Therefore, the recursive relationship of $x_n$ is
\begin{equation}\label{recur}
x_n=\sum_{i=0}^{n-1}\frac{n-i}{n}c_{n-i}x_i,\quad c_k=\frac{(-s)^k}{k!}\eta^{(k)}(s).
\end{equation}
We define two power series as follows to solve for $x_n$,
\begin{equation}\label{eq40}
C(z)\triangleq\sum_{n=0}^\infty c_nz^n,\quad
X(z)\triangleq\sum_{n=0}^\infty x_nz^n.
\end{equation}
Following the method in \cite[Appendix A]{chang1}, using the properties that $C^{(1)}(z)=\sum_{n=0}^{\infty}nc_nz^{n-1}$ and $C(z)X(z)=\sum_{n=0}^\infty\sum_{i=0}^nc_{n-i}x_iz^n$, from \eqref{recur}, we obtain the differential equation
\begin{equation}
X^{(1)}(z)=C^{(1)}(z)X(z),
\end{equation}
whose solution is
\begin{equation}
X(z)=\exp\left\{C(z)\right\}.\label{40}
\end{equation}
Therefore, according to \eqref{25}, \eqref{eq40} and \eqref{40}, the coverage probability is given by
\begin{equation}
\begin{split}
p_\mathrm{c}(\tau)&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}x_n\right]=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{1}{n!}\left.{X^{(n)}(z)}\right|_{z=0}\right]\\
&=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}\frac{1}{n!}\frac{\mathrm{d}^n}{\mathrm{d}z^n}\left.{e^{C(z)}}\right|_{z=0}\right].\label{chang}
\end{split}
\end{equation}
From \cite[Page 14]{henrici1974applied}, the first $M$ coefficients of the power series $e^{C(z)}$ form the first column of the matrix exponential $\exp\{\mathbf{C}_M\}$, whose exponent is given in \eqref{topmatrix}.
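This coefficient-extraction step can be verified numerically. The sketch below (with arbitrary placeholder values for $\{c_k\}$) computes $\{x_n\}$ both through the recursion \eqref{recur} and as the first column of $\exp\{\mathbf{C}_M\}$, and checks that the two agree:

```python
import numpy as np
from scipy.linalg import expm

# Check that the first M coefficients of the power series e^{C(z)} equal the
# first column of expm(C_M), where C_M is the lower-triangular Toeplitz matrix
# built from c_0, ..., c_{M-1}. The c_k values are arbitrary placeholders.
M = 5
c = np.array([-0.8, 0.5, 0.2, 0.07, 0.02])

# x_n from the recursion x_n = sum_{i<n} ((n-i)/n) c_{n-i} x_i, with x_0 = e^{c_0}
x = np.zeros(M)
x[0] = np.exp(c[0])
for n in range(1, M):
    x[n] = sum((n - i) / n * c[n - i] * x[i] for i in range(n))

# The same coefficients from the matrix exponential
C = sum(c[k] * np.diag(np.ones(M - k), -k) for k in range(M))
x_expm = expm(C)[:, 0]
assert np.allclose(x, x_expm)
```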
\begin{figure*}
\begin{equation}\label{eq45}
c_k=\begin{dcases}
s\sigma_\mathrm{n}^2+\delta\pi\lambda_\mathrm{b}s\left\{R^{2-\alpha}\mathbb{E}_{g}[g\mathrm{E}_{\delta}(sR^{-\alpha}g)]-\kappa^{2-\alpha}\mathbb{E}_{g}[g\mathrm{E}_{\delta}(s\kappa^{-\alpha}g)]\right\}&k=1,\\
\frac{\pi\delta\lambda_\mathrm{b}s^k}{ k!}\left\{R^{2-\alpha k} \mathbb{E}_{g}\left[g^k\mathrm{E}_{1+\delta-k}(sR^{-\alpha}g)\right]-\kappa^{2-\alpha k}\mathbb{E}_{g}\left[g^k\mathrm{E}_{1+\delta-k}(s\kappa^{-\alpha} g)\right]\right\}&k\ge2.\\
\end{dcases}
\end{equation}
\hrule
\end{figure*}
Equation \eqref{chang} can be further expressed as \eqref{frameexpr}. Furthermore, due to the fact $\frac{\mathrm{d}}{\mathrm{d}z}\mathrm{E}_p(z)=-\mathrm{E}_{p-1}(z)$, the coefficients can be recast as \eqref{eq45}.
It can be proved that $z^{2-\alpha k}\mathrm{E}_{1+\delta-k}(z)$ is a monotone decreasing function with respect to $z$, and therefore the coefficients $c_k>0$ for $k\ge1$.
Combining the above steps completes the proof of Theorem \ref{th1}.
\section{
}\label{AD}
Since $\kappa=0$ in mm-wave ad hoc networks, \eqref{intlap} can be simplified as
\begin{equation}
\begin{split}
&\relphantom{=}R^2-\delta R^2\mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(sR^{-\alpha}g)\right]\\
&\overset{(b)}=R^2-\delta R^2\Bigg\{\frac{s^\delta}{R^2}\Gamma\left(-\delta\right)\mathbb{E}_{g}\left[g^{\delta}\right]+\frac{\alpha}{2}\\
&\relphantom{=}-\sum_{p=1}^\infty\frac{(-s)^p}{R^{\alpha p}p!\left(p-\delta\right)}\mathbb{E}_{g}\left[g^p\right]\Bigg\}\\
&=\delta R^2\sum_{p=1}^\infty\frac{(-s)^p}{R^{\alpha p}p!\left(p-\delta\right)}\mathbb{E}_{g}\left[g^p\right]
-\delta s^\delta\Gamma\left(-\delta\right)\mathbb{E}_{g}\left[g^{\delta}\right]\\
&=\delta R^2\sum_{p=1}^\infty\frac{(-s)^p\Gamma(M+p)}{R^{\alpha p}p!\left(p-\delta\right)\Gamma(M)M^p}\int_0^1\frac{\sin^{2p}\left(\frac{\pi d }{\lambda}N_\mathrm{t}\theta\right)}{\left(\frac{\pi d}{\lambda}N_\mathrm{t}\theta\right)^{2p}}\mathrm{d}\theta\\
&\relphantom{=}-\delta s^\delta\Gamma\left(-\delta\right)\frac{\Gamma\left(M+\delta\right)}{\Gamma(M)M^\delta}\int_0^1\left|\frac{\sin\left(\frac{\pi d }{\lambda}N_\mathrm{t}\theta\right)}{\frac{\pi d}{\lambda}N_\mathrm{t}\theta}\right|^{2\delta}\mathrm{d}\theta\\
&\overset{(c)}\le \frac{R^2\lambda}{\alpha dN_\mathrm{t}}\sum_{p=1}^\infty\frac{(-s)^p{{2p-1} \atopwithdelims \langle \rangle{p-1}} \Gamma(M+p)}{R^{\alpha p}(2p-1)!p!\left(p-\delta\right)\Gamma(M)M^p}\\
&\relphantom{=}-\frac{\delta s^{\delta}\lambda}{\pi dN_\mathrm{t}}\Gamma\left(-\delta\right)\frac{\Gamma\left(M+\delta\right)}{\Gamma(M)M^\delta}\xi,
\end{split}
\end{equation}
where $(b)$ follows from the series expansion of the generalized exponential integral, and $(c)$ follows from Lemma \ref{lem3}; the upper bound is obtained by extending the upper integration limit to infinity, and it is tight because the tiny ripple tails of the high-order powers of the sinc function contribute negligibly to the integral. Therefore, the exponent of the Laplace transform is given by
\begin{equation}
\begin{split}
\eta(s)&=-\frac{\pi R^2\lambda_\mathrm{b}\lambda}{\alpha dN_\mathrm{t}}\sum_{p=1}^\infty\frac{(-s)^p{{2p-1} \atopwithdelims \langle \rangle{p-1}} \Gamma(M+p)}{R^{\alpha p}(2p-1)!p!\left(p-\delta\right)\Gamma(M)M^p}\\
&+\frac{\delta s^{\delta}\lambda_\mathrm{b}\lambda}{ dN_\mathrm{t}}\Gamma\left(-\delta\right)\frac{\Gamma\left(M+\delta\right)}{\Gamma(M)M^\delta}\xi-\frac{s\sigma^2}{\beta P_\mathrm{t}N_\mathrm{t}}.\label{lsad}
\end{split}
\end{equation}
The coefficients in Proposition \ref{th2} can be easily obtained via taking the $k$-th derivative of \eqref{lsad}.
\section{
}\label{AE}
According to \eqref{Ls}, the Laplace transform of noise and interference is
\begin{equation}
\begin{split}
\mathcal{L}(s)&=x_0=\exp\{\eta(s)\}\\
&=\exp\left(-s\sigma_\mathrm{n}^2-\pi\lambda_\mathrm{b}R^2\left\{1-\delta \mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(sR^{-\alpha}g)\right]\right\}\right).
\end{split}
\end{equation}
Note that $1-\delta \mathbb{E}_{g}\left[\mathrm{E}_{1+\delta}(sR^{-\alpha}g)\right]$ is a positive term due to the facts that $\mathrm{E}_{1+\delta}(z)$ is a monotone decreasing function of $z$ and $\mathrm{E}_{1+\delta}(0)=\frac{1}{\delta}$. Hence, the Laplace transform $x_0$ is non-decreasing with the antenna array size $N_\mathrm{t}$, where $\eta(s)$ is given in \eqref{lsad}. According to the recursive relationship \eqref{recur} among the $\{x_n\}$, every $x_n$ is a non-decreasing function of $N_\mathrm{t}$. Recalling that $p_\mathrm{c}(\tau)=\mathbb{E}_{r_0}\left[\sum_{n=0}^{M-1}x_n\right]$, the monotonicity in Corollary \ref{coro1} is proved, and the concavity of the lower bound can be proved via similar steps.
We first write $\mathbf{C}_M$ in the form
\begin{equation}
\mathbf{C}_M=c_0\mathbf{I}_M+(\mathbf{C}_M-c_0\mathbf{I}_M),
\end{equation}
where the first term is a scalar matrix. Since $\mathbf{C}_M$ is a lower triangular Toeplitz matrix, the second part is a nilpotent matrix, i.e., $(\mathbf{C}_M-c_0\mathbf{I}_M)^n=\mathbf{0}$ for $n\ge M$. Hence, according to the properties of matrix exponential, we have
\begin{equation}
\exp\left\{\frac{1}{N_\mathrm{t}}\mathbf{C}_M\right\}=e^{c_0\frac{1}{N_\mathrm{t}}}
\sum_{n=0}^{M-1}\frac{1}{n!}\left[\frac{1}{N_\mathrm{t}}\left(\mathbf{C}_M-c_0\mathbf{I}_M\right)\right]^n.
\end{equation}
Since Theorem \ref{th1} has shown that $c_k>0$ for $k\ge1$, $\mathbf{C}_M-c_0\mathbf{I}_M$ is a strictly lower triangular Toeplitz matrix with all positive entries, and so are the matrices $(\mathbf{C}_M-c_0\mathbf{I}_M)^n$. Therefore,
\begin{equation}
\left\Vert\exp\left\{\frac{1}{N_\mathrm{t}}\mathbf{C}_M\right\}\right\Vert_1=
e^{c_0\frac{1}{N_\mathrm{t}}}
\sum_{n=0}^{M-1}\frac{1}{n!}\left[\frac{1}{N_\mathrm{t}^n}\left\Vert\left(\mathbf{C}_M-c_0\mathbf{I}_M\right)^n\right\Vert_1\right],
\end{equation}
which completes the proof of \eqref{eqcoro1} in Corollary \ref{coro1}.
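The nilpotent decomposition above admits a quick numerical sanity check (the $c_k$ values and $N_\mathrm{t}$ below are placeholders; the off-diagonal entries are taken positive, as guaranteed by Theorem \ref{th1}):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Check of the nilpotent decomposition: for a lower-triangular Toeplitz C_M
# with positive off-diagonal entries,
#   ||exp(C_M/Nt)||_1 = e^{c0/Nt} * sum_{n<M} ||(C_M - c0 I)^n||_1 / (n! Nt^n).
# The c_k values and Nt are placeholders.
M, Nt = 4, 64
c = np.array([-0.6, 0.4, 0.15, 0.05])
C = sum(c[k] * np.diag(np.ones(M - k), -k) for k in range(M))
Nil = C - c[0] * np.eye(M)                       # nilpotent part: Nil**M == 0

lhs = np.linalg.norm(expm(C / Nt), 1)            # induced l1-norm (max column sum)
rhs = np.exp(c[0] / Nt) * sum(
    np.linalg.norm(np.linalg.matrix_power(Nil, n), 1) / (factorial(n) * Nt**n)
    for n in range(M))
assert np.isclose(lhs, rhs)
assert np.allclose(np.linalg.matrix_power(Nil, M), 0)   # nilpotency
```

The equality of the norms holds because every term in the sum is a nonnegative lower triangular Toeplitz matrix, whose maximum column sum is attained at the first column.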
When $t\to0$, omitting the higher-order terms, the linear Taylor expansion of the coverage probability is
\begin{equation}
\left\Vert\exp\left\{\frac{1}{N_\mathrm{t}}\mathbf{C}_M\right\}\right\Vert_1\sim1+\frac{c_0+\left\Vert\mathbf{C}_M-c_0\mathbf{I}_M\right\Vert_1}{N_\mathrm{t}}=1+\frac{\sum_{n=0}^{M-1}c_n}{N_\mathrm{t}},
\end{equation}
where the slope
\begin{equation}
\sum_{n=0}^{M-1}c_n\overset{(d)}{<}\sum_{n=0}^\infty c_n=\sum_{n=0}^\infty \frac{(-s)^n}{n!}\eta^{(n)}(s)\overset{(e)}{=}\eta(0)=0.
\end{equation}
Step $(d)$ follows from the fact that $c_k>0$ for $k\ge1$, as proved in Appendix \ref{AB}, and $(e)$ follows by Taylor-expanding $\eta$ around the point $s$ and evaluating it at $0$.
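Step $(e)$ can be checked symbolically with a polynomial test function standing in for the true $\eta$ (the choice below is hypothetical; any smooth function works):

```python
import sympy as sp

# Step (e): sum_n (-s)^n / n! * eta^{(n)}(s) is the Taylor expansion of eta
# around the point s, evaluated at 0. eta below is an arbitrary test function.
s = sp.symbols('s')
eta = -sp.Rational(3, 2) * s**2 - s / 5          # hypothetical eta with eta(0) = 0
total = sum((-s)**n / sp.factorial(n) * sp.diff(eta, s, n) for n in range(5))
assert sp.simplify(total - eta.subs(s, 0)) == 0
```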
\section{
}\label{AF}
Following similar steps as in Appendix \ref{AD}, \eqref{intlap} can be derived as
\begin{equation}\label{44}
\begin{split}
&\relphantom{=}2\int_{r_0}^R{\left(1-\mathbb{E}_g[\exp(-sgx^{-\alpha})]\right)}x\mathrm{d}x\\
&=\delta R^2\sum_{k=1}^\infty\frac{(-sR^{-\alpha})^k}{k!(k-\delta)}\mathbb{E}_g[g^k]-\delta r_0^2\sum_{k=1}^\infty\frac{(-sr_0^{-\alpha})^k}{k!(k-\delta)}\mathbb{E}_g[g^k].
\end{split}
\end{equation}
Based on the cosine antenna pattern \eqref{approxpattern}, we have
\begin{equation}\label{45}
\begin{split}
&\relphantom{=}\sum_{k=1}^\infty\frac{(-z)^k}{k!(k-\delta)}\mathbb{E}_g[g^k]\\
&=\frac{\lambda}{\pi dN_\mathrm{t}}\sum_{k=0}^\infty\frac{(-z)^k}{k!(k-\delta)}\int_0^\pi\cos^{2k}\frac{x}{2}\mathrm{d}x+\frac{\lambda}{\delta dN_\mathrm{t}}\\
&=\frac{\lambda}{\sqrt{\pi} dN_\mathrm{t}}\sum_{k=0}^\infty\frac{(-z)^k\Gamma\left(\frac{1}{2}+k\right)}{(k!)^2(k-\delta)}+\frac{\lambda}{\delta dN_\mathrm{t}}\\
&\overset{(f)}{=}\frac{\lambda}{\delta dN_\mathrm{t}}\left[1-{}_3F_2\left(\frac{1}{2},-\delta,M;1,1-\delta;-\frac{z}{M}\right)\right],\\
\end{split}
\end{equation}
where $(f)$ applies the definition (series expansion) of the generalized hypergeometric function \cite[Page 1000]{zwillinger2014table} in reverse.
Substituting \eqref{45} into \eqref{44}, the exponent of the Laplace transform is given by
\begin{equation}
\begin{split}
\eta(s)&=-\frac{s\sigma^2}{\beta P_\mathrm{t}N_\mathrm{t}}-\frac{\pi\lambda_\mathrm{b}\lambda}{dN_\mathrm{t}}\bigg\{\left[J_0\left(-\frac{sr_0^{-\alpha}}{M}\right)-1\right]r_0^2\\
&\relphantom{=}-\left[J_0\left(-\frac{s R^{-\alpha}}{M}\right)-1\right]R^2\bigg\}.
\end{split}
\end{equation}
Note that the derivative for the generalized hypergeometric function is
\begin{equation}
\begin{split}
&\relphantom{=}\frac{\mathrm{d}}{\mathrm{d}z}{}_3F_2(a_1,a_2,a_3;b_1,b_2;z)\\
&=\frac{\prod_{i=1}^3a_i}{\prod_{j=1}^2b_j}{}_3F_2(a_1+1,a_2+1,a_3+1;b_1+1,b_2+1;z).
\end{split}
\end{equation}
Based on this expression and Theorem \ref{th1}, the entries in $\mathbf{C}_M$ in Proposition \ref{th3} are obtained.
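The derivative identity for ${}_3F_2$ can be verified numerically with a truncated series and a central finite difference (the parameter values below are arbitrary test inputs, not values from the system model):

```python
# Numerical check of the 3F2 derivative identity above, using a truncated
# hypergeometric series and a central finite difference. The parameter values
# are arbitrary test inputs.

def hyp3f2(a, b, z, terms=200):
    """Truncated series sum_n [(a1)_n (a2)_n (a3)_n / ((b1)_n (b2)_n)] z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a[0] + n) * (a[1] + n) * (a[2] + n) * z / ((b[0] + n) * (b[1] + n) * (n + 1))
    return total

a, b, z, h = [0.5, -0.4, 3.0], [1.0, 0.6], -0.3, 1e-6
lhs = (hyp3f2(a, b, z + h) - hyp3f2(a, b, z - h)) / (2 * h)
rhs = (a[0] * a[1] * a[2]) / (b[0] * b[1]) * hyp3f2([x + 1 for x in a], [x + 1 for x in b], z)
assert abs(lhs - rhs) < 1e-6
```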
\section{
}\label{AG}
Similar to Appendix \ref{AB}, we define a power series
\begin{equation}
Q(z)\triangleq\mathbb{E}_{r_0}\left[X(z)\right]=\sum_{n=0}^\infty q_nz^n.
\end{equation}
Recall that $X(z)=\exp\{C(z)\}$ in \eqref{40}, and we obtain the following lower bound with a slight abuse of notation due to the fact that $C(z)$ is a function of $r_0$ in cellular networks,
\begin{equation}
\begin{split}
Q(z)&=\pi\lambda_\mathrm{b}\int_0^{R^2}\exp\left\{-\pi\lambda_\mathrm{b}r+\frac{1}{N_\mathrm{t}}C(z;r)\right\}\mathrm{d}r\\
&=\pi\lambda_\mathrm{b}\int_0^{R^2}e^{-\pi\lambda_\mathrm{b}r}\exp\left\{\frac{1}{N_\mathrm{t}}\sum_{k=0}^\infty c_k(r)z^k\right\}\mathrm{d}r\\
&\overset{(g)}{\ge}\left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)\exp\Bigg\{\frac{\pi\lambda_\mathrm{b}}{N_\mathrm{t}(1-e^{-\pi\lambda_\mathrm{b}R^2})}\\
&\relphantom{=}\times\sum_{k=0}^\infty\left(\int_0^{R^2}e^{-\pi\lambda_\mathrm{b}r}c_k(r)\mathrm{d}r\right)z^k\Bigg\}\\
&\triangleq\left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)\exp\left\{\frac{1}{N_\mathrm{t}(1-e^{-\pi\lambda_\mathrm{b}R^2})}D(z;r)\right\}.
\end{split}
\end{equation}
In fact, $Q(z)$ can be viewed as $\left(1-e^{-\pi\lambda_\mathrm{b}R^2}\right)\mathbb{E}_{r_0^\prime}\left[\exp\left\{ \frac{1}{N_\mathrm{t}}C(z;r_0^\prime)\right\}\right]$ for the random variable with pdf $f_{r_0^\prime}(r)=\frac{\pi\lambda_\mathrm{b}}{1-e^{-\pi\lambda_\mathrm{b}R^2}}e^{-\pi\lambda_\mathrm{b}r}$. Due to the convexity of the exponential function, we apply Jensen's inequality in $(g)$ and obtain the lower bound.
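Step $(g)$ is simply Jensen's inequality for the convex exponential. A quick Monte Carlo check with the truncated-exponential distance distribution is sketched below; the test function standing in for $\frac{1}{N_\mathrm{t}}C(z;\cdot)$ is hypothetical:

```python
import numpy as np

# Monte Carlo check of step (g): E[e^X] >= e^{E[X]} for X = C(z; r0') with
# r0'^2 drawn from the truncated pdf f(r) ~ exp(-pi*lambda_b*r) on [0, R^2].
# lam and the test function C are placeholders.
rng = np.random.default_rng(0)
lam, R2 = np.pi * 1e-3, 200.0**2                 # pi*lambda_b and R^2
u = rng.uniform(0.0, 1.0 - np.exp(-lam * R2), 100_000)
r = -np.log(1.0 - u) / lam                       # inverse-CDF sampling, r in [0, R^2]

C = lambda r: -0.5 * np.sqrt(r)                  # hypothetical stand-in for C(z;r)/Nt
lhs = np.mean(np.exp(C(r)))                      # E[ e^{C} ]
rhs = np.exp(np.mean(C(r)))                      # e^{ E[C] }
assert lhs >= rhs                                # Jensen's inequality
```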
Therefore, the coverage probability is given by
\begin{equation}
p_\mathrm{c}^{\cos}(\tau)=\sum_{n=0}^{M-1}\frac{1}{n!}\frac{\mathrm{d}^n}{\mathrm{d}z^n}\left.{Q(z)}\right|_{z=0},
\end{equation}
which can be further expressed as in Corollary \ref{coro3}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
% arXiv:1702.04565
\section{Introduction}
The proliferation of mobile devices with built-in sensors has made mobile crowdsensing an efficient sensing paradigm, especially in people-centric and Internet of Things (IoT) services. Crowdsensing users collect sensing data using their personal mobile devices, e.g.,~mobile phones and IoT gadgets. However, the development of crowdsensing services is impeded by many challenges, especially the criticism of the privacy protection of crowdsensing users. Service providers require true data, which is a key factor in optimizing data-originated services~\cite{vaidya2006privacy}. This introduces the conflicting incentives of maximizing the privacy protection of users and the prediction accuracy of service providers. Most of the existing incentive models in the literature are monetarily motivated with a sole profit-maximization objective, e.g.,~\cite{YangXueFangEtAl2012,XuXiangYang2015,luo2016incentive}, while the privacy incentive of users is neglected. Therefore, conventional monetary-based incentive models are inapplicable in privacy-preserving crowdsensing systems, and new privacy-aware incentive models are required. Several major questions related to developing privacy-aware incentive models in mobile crowdsensing arise. First, how does the crowdsensing service define the contributions and payoff allocations of users with varying privacy levels? Second, do crowdsensing coalitions change the attained privacy of the cooperative users? Third, how do cooperative users divide the coalition payoff among themselves?
This article provides answers for the aforementioned questions by presenting a novel incentive framework for privacy preservation and accuracy maximization in mobile crowdsensing. The sensing users select their preferred data anonymization levels without knowing the privacy preferences of the other users. The data anonymization is inversely proportional to the accuracy of data analytics of the service provider. Accordingly, the users are paid based on their marginal contributions to the service accuracy. The users can also be penalized with a negative payoff if they cause a marginal harm to the service accuracy, e.g.,~an outlier providing misleading data. Moreover, a set of $k$~cooperative users can work jointly by forming a crowdsensing coalition, increasing the anonymity privacy protection measured by the $k$-anonymity metric. The total coalition payoff is then divided among the cooperative users based on their marginal contributions to the coalition's data quality. Our experiments on a real-world dataset of a crowdsensing activity recognition system show that the payoff allocation of a particular user does not directly depend on the contributed data size but on the data quality. Likewise, the payoff allocation is found to decrease as the privacy level increases.
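A minimal sketch of the marginal-contribution payoff rule described above is given below. It is purely illustrative: `accuracy` is a toy stand-in for the provider's real data analytics, and the payoff rate is a free parameter, not a value from the framework.

```python
# Hypothetical sketch of a marginal-contribution payoff rule: each user is
# paid in proportion to the change in service accuracy caused by removing
# their data (a leave-one-out marginal contribution). An outlier whose data
# harms the accuracy receives a negative payoff.

def accuracy(records):
    """Toy accuracy: fraction of records whose label matches the ground truth."""
    if not records:
        return 0.0
    return sum(1 for label, truth in records if label == truth) / len(records)

def marginal_payoffs(contributions, rate=1.0):
    """Payoff of user i = rate * (accuracy with user i - accuracy without)."""
    pooled = [rec for user in contributions for rec in user]
    base = accuracy(pooled)
    payoffs = []
    for i in range(len(contributions)):
        others = [rec for j, user in enumerate(contributions) if j != i for rec in user]
        payoffs.append(rate * (base - accuracy(others)))
    return payoffs

# Three users: accurate, noisy, and an outlier (who earns a negative payoff).
users = [
    [(1, 1), (0, 0), (1, 1)],   # all labels correct
    [(1, 1), (0, 1)],           # half correct
    [(0, 1), (1, 0)],           # all wrong: harms the service accuracy
]
p = marginal_payoffs(users)
assert p[0] > 0 and p[2] < 0    # helpful user rewarded, outlier penalized
```

Note that the payoff depends on the quality of the contributed records, not on their number, mirroring the experimental observation above.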
The rest of this article is organized as follows. We first present an overview of mobile crowdsensing in people-centric and IoT services and review some related incentive mechanisms. Next, we discuss the privacy preservation in mobile crowdsensing. Then, we propose an incentive framework for privacy preservation and accuracy maximization in crowdsensing services. After that, we present numerical experiments based on a real-world crowdsensing dataset. Finally, we outline some interesting research directions and conclude the article.
\section{Mobile Crowdsensing and IoT}
This section first gives an overview of mobile crowdsensing in IoT and then reviews some monetary incentive mechanisms in mobile crowdsensing.
\subsection{Architectures and Data Management}
In mobile crowdsensing, mobile devices and human intelligence are jointly adopted for collecting sensing data regardless of geographic separation among users and service providers. As shown in Figure~\ref{fig:sensing_architecture}, the design of mobile crowdsensing services includes the following stages:
\begin{figure*}
\begin{centering}
\includegraphics[width=1\linewidth,trim= 0 1cm 0 0]{Figures/architecture}
\par\end{centering}
\caption{System model of mobile crowdsensing.\label{fig:sensing_architecture}}
\end{figure*}
\begin{itemize}
\item \emph{Data Sensing and Gathering}: Crowdsensing users sense and collect data using mobile devices including phones, wearable devices, and in-vehicle sensing devices. Users can also annotate the sensory data with subjective observations and reports such as their emotions and surrounding events. The data is sent to the cloud server through various types of networks including cellular and Wi-Fi networks.
\item \emph{Data Analytics}: After receiving the raw data from the users, cloud computing can be used to store and process the large-scale data. Data analytics, e.g.,~machine learning methods, are typically applied to extract useful information and make effective predictions. Services also support data visualization, generate reports, and provide platforms to share the outcomes with other collaborative entities, e.g.,~social networking services.
\end{itemize}
\subsection{Applications}
Mobile crowdsensing has become an efficient sensing paradigm in people-centric and IoT services. People-centric services contain sensing, computing, and communication components that aim to assist human life. The following are some pertinent crowdsensing applications.
\begin{itemize}
\item \emph{Traffic Monitoring}: Mobile Millennium\footnote{\url{http://traffic.berkeley.edu}, accessed on 18 December 2016.} is a traffic crowdsensing service that collects geolocation data from taxi drivers. It also assimilates data obtained in real time from radars and loop detectors, as well as historical databases. Drivers can access the traffic information for accurate real-time traffic conditions, e.g.,~traffic congestion points.
\item \emph{Wi-Fi Sharing}: WiFi-Scout\footnote{\url{http://wifi-scout.sns-i2r.org}, accessed on 18 December 2016.} is a crowdsensing service for sharing reviews and connection quality of Wi-Fi hotspots. Users can easily search for free and paid Wi-Fi hotspots covering the locations that they will be visiting. The users are also rewarded based on their compliance and review quality.
\item \emph{Healthcare}: PatientsLikeMe\footnote{\url{https://www.patientslikeme.com}, accessed on 18 December 2016.} is a healthcare crowdsensing service that collects health data from patients. The patients provide their experience on medication, supplement, or devices. PatientsLikeMe also sells the collected data to pharmaceutical companies in order to improve and develop effective medication and healthcare equipment.
\end{itemize}
\subsection{Monetary Incentive Models}
Mobile crowdsensing should incorporate efficient incentive mechanisms to attract and retain enough crowdsensing users. In~\cite{musthag2011exploring}, the authors compared the resulting data quality and user compliance of three incentive schemes. The \emph{uniform} scheme pays users a fixed rate of $4$ cents per completed task. The \emph{variable} scheme selects the payoff in the range of $2$ to $12$ cents based on the required task and user performance. Finally, the \emph{hidden} scheme includes a lottery factor in defining the payoff values, where the users are not informed of the expected payoff before completing the task. The study showed that the variable scheme reduces the total cost by $50\%$ compared to the uniform scheme for the same completion rate and performance. The hidden scheme was found to be the least effective of the three.
Next, we review monetary incentive mechanisms for mobile crowdsensing with an emphasis on reverse auction mechanisms~\cite{Klemperer2004}, as they fit well and are commonly applied for mobile crowdsensing with multiple users. As shown in Figure~\ref{fig:reverse_auction}, a typical reverse auction takes place between the crowdsensing users and the service, with the users competing among themselves to perform the sensing task. The service provider first announces the description of the crowdsensing tasks to potential mobile users. Users are rational entities and will set their bids based on the cost of the crowdsensing task. In order to maximize the utility of the crowdsensing service, the auction system determines the task assignment and payoff of each user, covering both selected and rejected bids. For example, the crowdsensing tasks can be assigned to the winning users with the lowest bids, who then perform the sensing and submit the data to the service, and the service provider pays the winning users the agreed payoff. Table~\ref{tab:summary_auctions} provides a summary of the monetary incentive models reviewed in this section. In the table, ``risk-neutral'' means that the user is indifferent to payoff risk, e.g.,~between a guaranteed \$$5$ payoff and a $50\%$ chance of a \$$10$ payoff. A ``profitable'' solution guarantees a nonnegative utility for the service provider. An ``individual rational'' solution guarantees a nonnegative utility for each user. A ``truthful'' solution guarantees that the users cannot increase their payoff by submitting misleading bids for the crowdsensing task. Therefore, a truthful incentive mechanism makes bidding the true cost of performing the crowdsensing task a dominant strategy for rational users.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth,trim= 1.5cm 0.5cm 1cm 0]{Figures/reverse_auction}
\par\end{centering}
\caption{Crowdsensing incentive mechanism as a reverse auction.\label{fig:reverse_auction}}
\end{figure}
\begin{table*}
\caption{Summary of the monetary incentive models in mobile crowdsensing.\label{tab:summary_auctions}}
\centering{
\begin{tabular}{>{\raggedright}p{2.3cm}>{\raggedright}p{3cm}>{\raggedright}p{3cm}>{\raggedright}p{3cm}>{\raggedright}p{3cm}}
\hline
\textbf{\noun{\small{}model}} & \textbf{\noun{\small{}Main entities}} & \textbf{\noun{\small{}Payoff scheme }} & \textbf{\noun{\small{}Maximization objective}} & \textbf{\noun{\small{}Solution Properties}}\tabularnewline
\hline
\hline
{\small{}Bayesian auction~\cite{CaoBrahmaVarshney2015}} & {\small{}Multiple risk-neutral users } & {\small{}Threshold winner payoff} & {\small{}The target tracking accuracy} & {\small{}Bayesian Nash equilibrium (profitable and individual rational)}\tabularnewline
\hline
{\small{}Sealed-bid auction~\cite{YangXueFangEtAl2012}} & {\small{}Fixed budget with risk-neutral users} & {\small{}Threshold winner payoff} & {\small{}The service utility (more user and less payoff)} & {\small{}Profitable and individual rational}\tabularnewline
\hline
{\small{}Stackelberg competition~\cite{YangXueFangEtAl2012}} & {\small{}A leader (service) and followers (users) } & {\small{}Threshold winner payoff} & {\small{}The service utility} & {\small{}Nash equilibrium (profitable and individual rational)}\tabularnewline
\hline
{\small{}Vickrey auction~\cite{XuXiangYang2015}} & {\small{}Multiple risk-neutral users} & {\small{}Contribution-dependent payoff} & {\small{}Data integrity} & {\small{}Profitable and truthful}\tabularnewline
\hline
{\small{}All-pay auction~\cite{luo2016incentive}} & {\small{}Risk-averse and risk-neutral users} & {\small{}All-pay contribution-dependent payoff} & {\small{}The service utility} & {\small{}Nash equilibrium (profitable and individual rational)}\tabularnewline
\hline
\end{tabular}
\end{table*}
We divide the incentive schemes into two main categories of \emph{threshold winner} and \emph{contribution-dependent} payoffs.
\subsubsection{Threshold Winner Payoff}
In this payoff scheme, only the winning users will be paid for performing the sensing task and there is no payoff allocation for rejected users. For example, the authors in~\cite{CaoBrahmaVarshney2015} presented a Bayesian reverse auction model for target tracking with crowdsourcing, assuming that the value estimate of a user can be drawn from a continuous probability distribution. The residual energy of the mobile devices has an impact on the prior distribution of the user bids. The objective of this model is maximizing the total target tracking utility of the service by solving the multiple-choice knapsack problem. Likewise, the authors in~\cite{YangXueFangEtAl2012} proposed two complementary payoff scenarios of \emph{user-centric} and \emph{platform-centric} schemes. In the user-centric scheme, the service defines the payoff using a reverse auction by following the steps shown in Figure~\ref{fig:reverse_auction}. In the platform-centric scheme, the crowdsensing problem is formulated as a Stackelberg game. The Nash equilibrium is solved using backward induction and found to be unique. A major limitation of~\cite{CaoBrahmaVarshney2015,YangXueFangEtAl2012} is assuming a known prior distribution of user bids. In the real world, users can collude and submit misleading bids to increase their own payoff. This problem is solved in contribution-dependent payoff schemes as discussed next.
\subsubsection{Contribution-Dependent Payoff}
A practical incentive mechanism requires all participants to be truthful. One principal way in achieving truthful user interaction is by choosing an appropriate pricing scheme where the payoff allocations of participants are not solely defined by their bids. The authors in~\cite{XuXiangYang2015} applied the Vickrey-Clarke-Groves (VCG) reverse auction with the objective of minimizing the sum of payoff values to crowdsensing users. A user is paid based on the difference between the sum of costs with and without that particular user. Reporting truthful bids is a weakly-dominant strategy in the VCG auction. The authors in~\cite{luo2016incentive} modeled the mobile crowdsensing problem as an all-pay auction where the crowdsensing users are not required to submit their bids at the beginning of the auction. Instead, the payoff is calculated based on the user contributions after completing the sensing tasks. The users with the highest contribution receive a payoff while the rest of the users do not receive any payoff allocation.
\section{Privacy Preservation in Mobile Crowdsensing}
Even though most of the existing works in the literature focus on monetary incentive models to achieve the maximum possible payoff allocation, privacy preservation is still a top priority for crowdsensing users. In this section, we first discuss the data anonymization properties which can be used to measure the privacy protection. Then, we discuss the challenges of privacy preservation in mobile crowdsensing.
\subsection{Privacy Properties and Data Anonymization}
Mobile crowdsensing comes with challenging privacy issues. In particular, crowdsensing users are typically concerned that their personal information can be leaked from the collected data. Personal information of users can be categorized into three main classes:
\begin{itemize}
\item \emph{Explicit identifiers} are the data attributes which directly reveal the user identity, e.g.,~full name and social security number.
\item \emph{Non-explicit identifiers} can be combined with background knowledge to reveal the user identity, e.g.,~zip code and birth date.
\item \emph{Sensitive attributes} can be utilized to extract private information about the user, e.g.,~realtime activity tracking using accelerometer data~\cite{kwapisz2011activity}.
\end{itemize}
Explicit identifiers should be completely removed before trading the crowdsensing data among businesses. To protect the non-explicit identifiers and sensitive attributes, data anonymization methods can be applied to sensing data.
Privacy is defined by the information gain of an adversary. The following syntactic privacy properties can be used to define the privacy protection requirements.
\begin{itemize}
\item \emph{$k$-anonymity}~\cite{Sweeney2002}: This property is developed to guarantee that a data sample of a particular user in public datasets cannot be re-identified by potential intruders. Specifically, for a crowdsensing service to possess the $k$-anonymity property, each user should not be distinguishable from at least $k-1$ other users. For example, a user should be unidentifiable by combining the available gender and birth date crowdsensing data. This can be achieved by transformation techniques, such as identity generalization and suppression, to reduce the granularity of the data. For example, the birth dates can be replaced by date ranges instead of the exact values.
\item \emph{$l$-diversity}~\cite{machanavajjhala2007diversity}: The $k$-anonymity property does not work well if the sensitive data attributes lack diversity. For example, if only a few users of a healthcare crowdsensing service share a particular zip code and all of them are infected by a disease, then an adversary who knows that a user lives in that zip code can infer that user's health status. In order to avoid this privacy threat, the $l$-diversity property requires that each equivalence class has at least $l$ ``well-represented'' values. An equivalence class is a set of data samples with the same anonymized data attributes.
\item \emph{$t$-closeness}~\cite{LiLiVenkatasubramanian2007}: The $t$-closeness property requires the distribution of sensitive values within each equivalence class to be \textquotedblleft close\textquotedblright{} to their distribution in the entire original dataset. $t$-closeness is an extension of the $l$-diversity model as it takes the distribution of sensitive values into account. $t$-closeness can be achieved by adding random noise to sensitive data attributes. For example, adding Gaussian noise to accelerometer data can restrict the tracking of particular activities.
\end{itemize}
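As a concrete illustration of $k$-anonymity, the following minimal Python sketch generalizes hypothetical (gender, birth year, zip code) quasi-identifiers into coarser ranges and reports the size of the smallest equivalence class; the schema and generalization rules are illustrative, not taken from any specific crowdsensing service.

```python
from collections import Counter

def generalize(record):
    """Coarsen quasi-identifiers: birth year -> decade range,
    zip code -> 3-digit prefix (hypothetical rules)."""
    gender, birth_year, zip_code = record
    decade = (birth_year // 10) * 10
    return (gender, f"{decade}-{decade + 9}", zip_code[:3] + "**")

def k_anonymity(records):
    """Size of the smallest equivalence class after generalization;
    the released data is k-anonymous for any k up to this value."""
    classes = Counter(generalize(r) for r in records)
    return min(classes.values())

records = [
    ("F", 1987, "63901"), ("F", 1985, "63902"),
    ("M", 1992, "63901"), ("M", 1990, "63905"),
]
print(k_anonymity(records))  # -> 2: each record shares its generalized attributes with one other
```

Stronger generalization (e.g.,~wider date ranges) enlarges the equivalence classes and hence the achievable $k$, at the cost of data utility.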
\subsection{Challenges of Privacy Preservation in Mobile Crowdsensing}
The authors in~\cite{pournajaf2016participant} reviewed the privacy threats and protection methods during task management in mobile crowdsensing. A taxonomy of privacy methods was provided, including pseudonyms, connection anonymization, and spatial cloaking. The authors also highlighted the challenge of defining the user contribution in incentive-based task assignment. The authors in~\cite{ganti2011mobile} discussed the privacy and data integrity of mobile crowdsensing, observing that privacy requirements are user-dependent.
Achieving the syntactic privacy properties can reduce the accuracy of data analytics algorithms, and applying strict data anonymization to all users results in poor analytics accuracy. Instead, the users can be given the choice of setting their preferred data anonymization level, such that users who contribute accurate data receive higher payoff allocations. The trade-off between privacy preservation and accuracy maximization must therefore be taken into consideration, which is the main objective of the next section.
\section{Incentive Mechanism for Privacy Preserving Crowdsensing}
In this section, we introduce a privacy preserving incentive framework for mobile crowdsensing where participating users can protect their private data by data anonymization. The level of data protection will accordingly be used to set the resulting payoff allocation such that the users have an incentive in providing their true data. We first present the system model and major entities. Then, we discuss the proposed incentive framework which is intended to maximize the accuracy of data analytics while preserving the privacy of the crowdsensing users.
\subsection{System Model}
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth,trim= 1cm 0.5cm 0.5cm 0]{Figures/system_model}
\par\end{centering}
\caption{System model of the privacy preserving crowdsensing framework supporting both data anonymization and identity generalization through crowdsensing coalition formulation. Cooperative users are connected using device-to-device (D2D) communication.\label{fig:system_model}}
\end{figure}
As shown in Figure~\ref{fig:system_model}, the crowdsensing system under consideration consists of the following three main entities:
\begin{itemize}
\item \textbf{Crowdsensing users} are the participants which collect sensing data using their personal mobile devices, e.g.,~mobile phones and IoT gadgets. The contribution of a particular user to the crowdsensing community is defined based on the quality of the sensing data from the data analytics perspectives. A user with positive contribution to the sensing process is considered \emph{pivotal}. The users can apply data anonymization, e.g.,~adding noise to the sensing data, to protect their privacy and personally-identifying information. Additionally, crowdsensing coalitions can be built as an efficient scheme for achieving $k$-anonymity protection, where $k$ is the number of cooperative users in the coalition.
\item A \textbf{service provider} buys data from the crowdsensing users through a mediator, applies data analytics, and delivers a service to a set of customers. The provider makes a profit by charging the customers a subscription fee.
\item A \textbf{mediator} is the auction management entity that controls the exchange of data between the crowdsensing users and the service provider. Moreover, the mediator divides the payoff received from the service provider among the crowdsensing users based on their contributions to the crowdsensing system.
\end{itemize}
We next discuss the privacy preserving model through which the crowdsensing users can sell data to the service provider and receive a payoff according to their individual contributions as illustrated in Figure~\ref{fig:system_model}. First, we define the individual contributions and resulting payoffs of the users from data analytics perspectives. Second, we develop a privacy preserving mechanism which gives the users an incentive for contributing their true data with the least possible data anonymization level. Third, we consider the case where the users can form a crowdsensing coalition for identity generalization, and we present a fair payoff allocation among the cooperative users.
\subsection{Data Analytics}
Crowdsensing data $\mathcal{D}=\left\{ \left(\mathbf{x}_{i},y_{i}\right)\right\} _{i=1}^{L}$ usually includes tuples of a sensing feature set $\mathbf{x}_{i}\in\mathbb{R}^{M}$ and a class label $y_{i}\in\mathbb{R}$, where $L$ is the number of data tuples and $M$ is the number of data attributes. The feature set $\mathbf{x}_{i}$ includes the sensing data, e.g.,~images in vision services and geographic coordinates in transport services. The class label $y_{i}$ contains human input and is only available in supervised data analytics, e.g.,~specifying accident events in transport services. After collecting sufficient data, the service provider applies data analytics methods, e.g.,~deep learning~\cite{lecun2015deep}, to build data-driven services. For example, transport services can provide accurate prediction of vehicle arrival times and road congestion. We denote the accuracy function of the data analytics model trained using dataset $\mathcal{D}$ as $f\left(\mathcal{D}\right)$. $f\left(\mathcal{D}\right)$ measures the performance of the service in providing accurate prediction of the ground truth.
\subsection{Incentive Mechanism Design}
We consider a set of $N$ users which are connected to a privacy preserving crowdsensing service. Each user $n$ generates true sensing data $\mathcal{D}_{n}$ and selects a data anonymization level $p_{n}$. The data anonymization can be performed by adding random noise to the true data $\mathbf{x}_{i}$ subject to $p_{n}$, e.g.,~Gaussian noise $\mathcal{N}\left(0,p_{n}\mathbf{I}_{M}\right)$ with zero mean and a variance of $p_{n}$, where $\mathbf{I}_{M}$ is the identity matrix of size $M$. Each user submits its anonymized data $\tilde{\mathcal{D}}_{n}$ and data anonymization level $p_{n}$ to the crowdsensing mediator, without knowing the preferences of the other users. The full anonymized dataset $\tilde{\mathcal{D}}=\underset{1\leq n\leq N}{\cup}\tilde{\mathcal{D}}_{n}$ and data anonymization preferences $\mathcal{P}=\left\{ p_{1},\ldots,p_{N}\right\} $ are collected by the mediator from all users. According to the VCG auction~\cite{Klemperer2004}, the mediator calculates the payoff of user~$n$ as follows:
\begin{equation}
F_{n}=f(\tilde{\mathcal{D}})-f(\mathcal{\tilde{\mathcal{D}}}_{-n}),\label{eq:utility_1}
\end{equation}
where $\mathcal{\tilde{\mathcal{D}}}_{-n}$ is the anonymized data after excluding the data of user $n$. The following three cases for the payoff function exist:
\begin{itemize}
\item If $F_{n}>0$, the user will receive a positive payoff allocation of $F_{n}$ as its data contribution increases the accuracy. These users are called pivotal.
\item If $F_{n}=0$, the user's contribution changes neither the crowdsensing outcome nor the service accuracy. Such users receive zero payoff and can be advised to decrease their data anonymization level.
\item If $F_{n}<0$, the user has a negative contribution, e.g.,~excessive data anonymization, and will accordingly be penalized with a negative payoff. The data collected from such users should not be used in the data analytics.
\end{itemize}
Sending the true data to the service provider is a weakly-dominant strategy under the VCG rules regardless of the data anonymization levels of the other users.
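The payoff rule and its three cases can be sketched as follows. Note that the accuracy function here is a toy stand-in for $f(\cdot)$ (the fraction of low-noise samples), not the trained deep-learning accuracy used later in the article, and each user's data is summarized by a hypothetical (sample count, noise variance) pair.

```python
def accuracy(datasets):
    """Toy stand-in for f(D): fraction of samples whose anonymization
    noise variance is below 1.0 (illustration only)."""
    total = sum(n for n, _ in datasets)
    if total == 0:
        return 0.0
    clean = sum(n for n, noise in datasets if noise < 1.0)
    return clean / total

def vcg_payoffs(user_data):
    """Payoff rule F_n = f(D-tilde) - f(D-tilde excluding user n)."""
    full = accuracy(list(user_data.values()))
    return {n: full - accuracy([d for m, d in user_data.items() if m != n])
            for n in user_data}

# Hypothetical users: (sample count, anonymization noise variance p_n).
payoffs = vcg_payoffs({1: (100, 0.0), 2: (50, 2.0), 3: (80, 0.5)})
print(payoffs)  # Users 1 and 3 are pivotal (F_n > 0); User 2 over-anonymizes and is penalized (F_2 < 0)
```

Under this rule, a user cannot raise its own payoff by distorting its submission, which is the weakly-dominant truthfulness property of the VCG mechanism.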
\subsection{Crowdsensing Coalition}
A set of $k$ users can cooperate to form a crowdsensing coalition, denoted by $\mathcal{K}$, which increases the privacy level by providing the data of the cooperative users under one generalization identity and achieving $k$-anonymity privacy protection. Those $k$ users must be connected using device-to-device~(D2D) communication without traversing the service provider. The generalization identity guarantees that a data sample cannot be used to identify its source from the $k$ cooperative users. $\mathcal{K}$ is a virtual alliance of users which work collectively and are seen as one sensing entity by the service provider. Specifically, the service provider cannot identify the source of data samples as a particular data sample can relate to any of the $k$ cooperative users. The payoff of the coalition is
\begin{equation}
F_{\mathcal{K}}=f(\tilde{\mathcal{D}})-f(\mathcal{\tilde{\mathcal{D}}}_{-\mathcal{K}}),\label{eq:utility_2}
\end{equation}
where $\mathcal{\tilde{\mathcal{D}}}_{-\mathcal{K}}$ is the anonymized data after excluding the data from all users in the coalition $\mathcal{K}$. Solution concepts from cooperative game theory, such as the Shapley value and Nash bargaining solution~\cite{peleg2007introduction}, can be applied to share the resulting payoffs among the cooperative users in the coalition $\mathcal{K}$. From the Shapley value, the payoff allocation, i.e.,~monetary payment, of each user is defined based on its contribution to the coalition.
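The Shapley-value sharing mentioned above can be sketched by averaging each user's marginal contribution over all orders in which the users could join the coalition. The coalition payoff values for Users 2 and 3 below are hypothetical stand-ins for $F_{\mathcal{K}}$.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: average the marginal contribution
    v(S + {p}) - v(S) over all join orders (fine for small coalitions)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        seen = frozenset()
        for p in order:
            phi[p] += v(seen | {p}) - v(seen)
            seen = seen | {p}
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Hypothetical coalition payoffs F_K for Users 2 and 3.
F = {frozenset(): 0.0, frozenset({2}): 0.3,
     frozenset({3}): 0.5, frozenset({2, 3}): 0.9}
shares = shapley([2, 3], F.get)
print(shares)  # shares sum to F_K = 0.9, with User 3 earning the larger part
```

The shares are efficient (they sum to the coalition payoff) and reflect each user's individual contribution, which is why the Shapley value is a natural fair-division rule here.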
\section{Numerical Results}
In this section, we present numerical experiments to evaluate the performance of the proposed privacy preserving framework.
\subsection{System Setup}
We use a real-world dataset~\cite{kwapisz2011activity} from a crowdsensing activity recognition system covering six activities: walking, jogging, upstairs, downstairs, sitting, and standing. The dataset includes $L=1{,}098{,}207$ samples of accelerometer data collected by $N=36$~users. The mobile devices sampled at a rate of $20$~Hz, resulting in $M=120$ data features of framed $3$-axial acceleration. We assume that the service provider uses deep learning~\cite{lecun2015deep} to develop the prediction service. The service provider buys the crowdsensing data from the users through the auction mediator and sells an activity tracking service to customers.
We assume that Users~2 and 3 protect their sensitive activities by adding varied levels of Gaussian noise $\mathcal{N}\left(0,p_{n}\mathbf{I}_{M}\right)$ to the acceleration data. Accordingly, Users~2 and 3 acquire the $t$-closeness property, where $t$ is equal to the variance of the added noise $p_{n}$. The payoff of each user is defined based on the payoff rule in~(\ref{eq:utility_1}). Moreover, Users~2 and 3 can collaborate in the crowdsensing coalition $\mathcal{K}$ to acquire the $k$-anonymity protection, where $k=2$ for two cooperative users. The coalition's total payoff is defined based on the payoff rule in~(\ref{eq:utility_2}), while the payoff sharing among Users~2 and 3 is defined according to the Shapley value.
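The noise-based anonymization used in this setup can be sketched with the standard-library random module; the feature frame below is synthetic (not the actual accelerometer dataset), and the empirical variance of the added noise approximates the chosen level $p_n$.

```python
import random

random.seed(0)

def anonymize(frame, p_n):
    """Add zero-mean Gaussian noise with variance p_n to every feature,
    mirroring the N(0, p_n I_M) scheme described above."""
    sd = p_n ** 0.5
    return [x + random.gauss(0.0, sd) for x in frame]

# Synthetic frame with M = 120 features (not the real accelerometer data).
frame = [random.uniform(-1.0, 1.0) for _ in range(120)]
noisy = anonymize(frame, p_n=4.0)
residuals = [a - b for a, b in zip(noisy, frame)]
emp_var = sum(r * r for r in residuals) / len(residuals)
print(emp_var)  # empirical noise variance, close to p_n = 4.0
```

Larger $p_n$ gives stronger $t$-closeness protection but, as the next experiments show, lowers the achievable prediction accuracy.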
\subsection{User Contributions and Pivotal Users}
Figure~\ref{fig:agent_contribution} shows the contributed data rates from each user and the resulting service accuracy $f(\cdot)$ obtained by training a deep learning model on the data of each user separately. Two key results can be noted. Firstly, the data rate varies among different users, but there is no correlation between the service accuracy from the data analytics perspective and the contributed data rate from the sensing perspective. The service accuracy depends on the quality of the used mobile device, the user's performance during task execution, and the data annotation. For example, User~1 contributes more data than User~2, while the accuracy resulting from the data of User~1 is lower than that of User~2. Secondly, Users~3 and 6 are pivotal, scoring the highest standalone accuracy values of $68.3\%$ and $68.1\%$, respectively; the standalone accuracy for the rest of the users is less than $64\%$. The pivotal users are important to the service provider to ensure high service accuracy.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figures/agent_contribution}
\par\end{centering}
\caption{User contribution to the crowdsensing service.\label{fig:agent_contribution}}
\end{figure}
\subsection{The Impact of Privacy on Accuracy}
In Figure~\ref{fig:privacy_accuracy}, we consider the impact of the data anonymization level on the accuracy of the crowdsensing service. Several important results are observed. Firstly, there is an inverse relationship between the prediction accuracy and the data anonymization level. The maximum service accuracy of $f(\mathcal{D})=92.5\%$ is achieved when all users provide true data samples without any anonymization; this maximum decreases as User~3 increases its data anonymization level, which users may nonetheless require to protect their privacy. Secondly, the service provider has an incentive to reject users with high data anonymization levels. For example, the service will reject User~3 when its data anonymization level is greater than $8$, labeled as ``critical point 1'' in Figure~\ref{fig:privacy_accuracy}, due to the resulting harm to the overall system accuracy. Thirdly, the prediction accuracy decreases as more users adopt the data anonymization scheme. For example, the accuracy is negatively affected when both Users~2 and 3 apply the data anonymization compared to the case of User~3 only. Accordingly, the crowdsensing system has an incentive to reduce the number of users applying the data anonymization scheme. As presented next, this can be achieved by increasing the payoff allocation of users which provide their true data.
\begin{figure}
\begin{centering}
\subfloat[\label{fig:privacy_accuracy}]{\begin{centering}
\includegraphics[width=1\columnwidth]{Figures/privacy_accuracy}
\par\end{centering}
}
\par\end{centering}
\begin{centering}
\subfloat[\label{fig:privacy_payoffs}]{\begin{centering}
\includegraphics[width=1\columnwidth]{Figures/privacy_payoffs}
\par\end{centering}
}
\par\end{centering}
\caption{Performance of the proposed privacy preserving framework under varied privacy levels. (a)~The resulting accuracy of the deep learning service trained on the crowdsensing data. (b)~The payoff allocation of Users~2 and~3. The privacy level is equal to the variance of the added Gaussian noise.}
\end{figure}
\subsection{Payoff Allocation}
Figure~\ref{fig:privacy_payoffs} shows the payoff allocation of Users~2 and 3 under varied data anonymization levels. Firstly, the payoff allocation of any user decreases as its data anonymization level increases. At data anonymization levels equal to or greater than the over-anonymization levels marked in Figure~\ref{fig:privacy_payoffs}, the users are penalized with a negative payoff. Secondly, pivotal users receive a higher payoff than normal and low-performing users, e.g.,~the payoff of User~3 is greater than that of User~2. In the crowdsensing coalition case, the payoff allocation to the cooperative users is found using the Shapley value, which reflects the individual contribution of each user. The cooperative users thus receive the same payoff in the coalition case as in the standalone case, while additionally gaining the $k$-anonymity privacy protection.
\section{Future Directions}
Based on the proposed incentive framework, the following open research directions can be further pursued.
\subsection{Cooperation and Competition Among Service Providers}
To collect high-quality data, service providers may cooperate or compete with each other to attract and retain crowdsensing users. With cooperation, service providers collude to set payoff strategies that maximize their profit as a cooperative coalition. In the competitive scenario, service providers can apply non-cooperative games and Nash equilibrium solutions to set the service's subscription fee and the prices of the crowdsensing data. The strategic interaction among providers can also benefit the users through higher revenues.
\subsection{Incentive Mechanism Design for Fog Computing}
Analyzing the crowdsensing data can be computationally expensive. Fog computing provides a solution by allowing partial data processing at the mobile devices owned by users. In such a design, the users are paid not only for the sensing data, but also for the available computing power. Incentive mechanisms are required to attract large contributions from users as fog nodes. Likewise, mobile devices come with varying hardware resources; methods for defining the user contributions in fog computing are also required.
\subsection{Dynamic and Heterogeneous Crowdsensing}
Crowdsensing users can be heterogeneous in terms of sensing precision and technical experience. Thus, the service provider has an incentive to attract powerful users by increasing their payoff allocations, and the incentive mechanism has to optimize these payoff values. Additionally, users asynchronously join and leave the crowdsensing system. Stochastic optimization methods, e.g.,~Markov decision processes, can be formulated to determine the optimal payoff rates over time, e.g.,~to attract users during off-peak times.
\section{Conclusion}
Privacy awareness has the potential of significantly boosting the performance of mobile crowdsensing, attracting more sensing users, and enabling the protection of privileged information. This article has presented an incentive mechanism for privacy preservation and accuracy maximization in mobile crowdsensing. It has been shown that the coalition strategy can be used by users to send their data under one generalized identity, increase the $k$-anonymity privacy protection, and share the resulting payoffs among cooperative users based on their individual sensing contribution. The proposed incentive framework has been evaluated using a real-world crowdsensing dataset. Finally, open research directions have been presented.
\section*{Acknowledgment}
This work was supported in part by Singapore MOE Tier 1 (RG18/13 and RG33/12) and MOE Tier 2 (MOE2014-T2-2-015 ARC4/15 and MOE2013-T2-2-070 ARC16/14).
\bibliographystyle{ieeetr}
\section{Introduction}\label{Sec1}
In what follows, we let $\sigma(x)$ denote the sum of divisors of the positive integer $x$. We will denote the deficiency of $x$ by $D(x)=2x-\sigma(x)$, the aliquot sum of $x$ by $s(x)=\sigma(x)-x$, and the abundancy index of $x$ by $I(x)=\sigma(x)/x$.
A number $M$ satisfying $\sigma(M)=2M$ is called a \emph{perfect number}. For example, $6$ and $28$ are perfect since
$$\sigma(6) = 1 + 2 + 3 + 6 = 2 \cdot 6$$
$$\sigma(28) = 1 + 2 + 4 + 7 + 14 + 28 = 2 \cdot 28.$$
The Euclid-Euler Theorem states that $M$ is an even perfect number if and only if
$$M = (2^t - 1){2^{t-1}},$$
where $2^t - 1$ (and therefore $t$) is prime. (If $t$ is prime, then $2^t - 1$ is not necessarily prime.) Primes of the form $2^t - 1$ are called Mersenne primes. Currently, there are $51$ known Mersenne primes (with the latest being discovered by the Great Internet Mersenne Prime Search in December of 2018), corresponding to $51$ even perfect numbers \cite{GIMPS}.
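The Euclid-Euler correspondence is easy to check computationally. The following sketch enumerates the Mersenne primes $2^t - 1$ for small $t$ and verifies that each resulting even number is perfect, i.e.,~satisfies $\sigma(M) = 2M$.

```python
def sigma(x):
    """Sum of divisors of x (trial division; fine for small x)."""
    return sum(d for d in range(1, x + 1) if x % d == 0)

def is_prime(x):
    return x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1))

# Every Mersenne prime 2^t - 1 yields the even perfect number (2^t - 1) 2^(t-1).
perfect = [(2 ** t - 1) * 2 ** (t - 1)
           for t in range(2, 8) if is_prime(2 ** t - 1)]
print(perfect)  # [6, 28, 496, 8128]
assert all(sigma(M) == 2 * M for M in perfect)
```

Note that $t = 4$ and $t = 6$ are skipped because $2^4 - 1 = 15$ and $2^6 - 1 = 63$ are composite, consistent with the requirement that $2^t - 1$ itself be prime.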
It is currently unknown whether there are infinitely many even perfect numbers. It has been conjectured, and is widely believed, that no odd perfect numbers exist. (There are no odd perfect numbers less than ${10}^{1500}$ \cite{OchemRao}, making the existence of an odd perfect number appear very unlikely.)
Euler proved that an odd perfect number $N$ must necessarily have the so-called \emph{Eulerian form}
$$N = q^k n^2$$
where $q$ is the special prime satisfying $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n)=1$. Descartes, Frenicle, and subsequently Sorli conjectured that $k=\nu_{q}(N)=1$ always holds \cite{Beasley}. Sorli predicted that $k=1$ after testing large numbers with eight distinct prime factors for perfection \cite{Sorli}.
Dris conjectured that $q^k < n$ \cite{Dris2}, on the basis of the result $I(q^k) < \sqrt[3]{2} < I(n)$.
We state these conjectures here for ease of reference later on.
\begin{conjecture}\label{DFS}
If $q^k n^2$ is an odd perfect number given in Eulerian form, then $k=1$.
\end{conjecture}
\begin{conjecture}\label{Dris}
If $q^k n^2$ is an odd perfect number given in Eulerian form, then $q^k < n$.
\end{conjecture}
Dris \cite{Dris} showed that the equation
$$i(q)=\frac{\sigma(n^2)}{q^k}=\frac{2n^2}{\sigma(q^k)}=\frac{D(n^2)}{s(q^k)}=\frac{2s(n^2)}{D(q^k)}=\gcd(n^2,\sigma(n^2))$$
holds. We also know that the index $i(q)$ is an integer which is at least $3$ by a result of Dris \cite{Dris2}. (The lower bound on $i(q)$ has since been improved by several authors.)
Furthermore, we can express $i(q)$ as
$$i(q)=q\sigma(n^2) - 2(q-1)n^2.$$
Set $E=n$, $F=\sigma(q^k)/2$, and $K = \gcd(E,F)$.
In this note, we compute expressions for the following GCDs
$$G = \gcd\bigg(\sigma(q^k),\sigma(n^2)\bigg)$$
$$H = \gcd\bigg(n^2,\sigma(n^2)\bigg)$$
and
$$I = \gcd\bigg(n,\sigma(n^2)\bigg).$$
It turns out that it is possible to express all of them in terms of $E$, $F$, and $\gcd(E,F)$.
As far as the author is aware, the approach presented in this paper is new and has not been considered before in the literature.
\section{Preliminaries}\label{Sec2}
Define
$$G = \gcd\bigg(\sigma(q^k),\sigma(n^2)\bigg)$$
$$H = \gcd\bigg(n^2,\sigma(n^2)\bigg)$$
and
$$I = \gcd\bigg(n,\sigma(n^2)\bigg).$$
The following lemma gives an identity that relates the values of $G, H,$ and $I$.
\begin{lemma}\label{G times H equals I squared}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then $G \times H = I^2$.
\end{lemma}
\begin{proof}
We have
$$\sigma(q^k) = \frac{2n^2}{i(q)}$$
and
$$\sigma(n^2) = {q^k}{i(q)},$$
so that we get
$$G=\gcd\left(\sigma(q^k),\sigma(n^2)\right) = \gcd\bigg(\frac{2n^2}{i(q)}, {q^k}{i(q)}\bigg) = \frac{\gcd\Bigg(n^2, \bigg(i(q)\bigg)^2\Bigg)}{i(q)} = \frac{\Bigg(\gcd\bigg(n, i(q)\bigg)\Bigg)^2}{i(q)}$$
$$= \frac{\Bigg(\gcd\bigg(n, \gcd(n^2, \sigma(n^2))\bigg)\Bigg)^2}{i(q)} = \frac{\Bigg(\gcd\bigg(\gcd(n, n^2),\sigma(n^2)\bigg)\Bigg)^2}{i(q)} = \frac{\Bigg(\gcd\bigg(n,\sigma(n^2)\bigg)\Bigg)^2}{i(q)} = \frac{I^2}{H},$$
since $\sigma(n^2)$ is odd and $\gcd(q,n)=1$.
\end{proof}
Using the identity in Lemma \ref{G times H equals I squared}, we can now derive the following divisibility conditions.
\begin{lemma}\label{G divides I and I divides H}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. Then $G$ divides $I$ and $I$ divides $H$.
\end{lemma}
\begin{proof}
The proof of the divisibility constraint $I \mid H$ follows from the GCD property
$$\bigg(a \mid b\bigg) \implies \bigg(\gcd(a,c) \mid \gcd(b,c)\bigg).$$
Afterwards, the proof of the divisibility constraint $G \mid I$ then follows from Lemma \ref{G times H equals I squared}.
\end{proof}
Set
$$J=\frac{I}{G}=\frac{H}{I}.$$
By Lemma \ref{G times H equals I squared} and Lemma \ref{G divides I and I divides H}, $J$ is an (odd) integer.
The following lemma computes the value of $J$, in terms of $E, F,$ and $\gcd(E,F)$. (The proof is due to the anonymous MSE user mathlove \cite{mathlove}.)
\begin{lemma}\label{Value for J}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then we obtain
$$J = \frac{n}{\gcd\bigg(\sigma(q^k)/2,n\bigg)}.$$
\end{lemma}
\begin{proof}
We have
$$H = \frac{n^2}{\sigma(q^k)/2}.$$
Hence, we obtain
$$J = \frac{H}{I} = \frac{n^2}{\sigma(q^k)/2\cdot\gcd\bigg(n,\sigma(n^2)\bigg)}=\frac{n^2}{\gcd\bigg(n\cdot\sigma(q^k)/2,\sigma(q^k)\sigma(n^2)/2\bigg)}=\frac{n^2}{\gcd\bigg(n\cdot\sigma(q^k)/2,q^k n^2\bigg)}$$
$$=\dfrac{n}{\gcd\bigg(\sigma(q^k)/2,q^k n\bigg)}=\frac{n}{\gcd\bigg(\sigma(q^k)/2,n\bigg)}.$$
\end{proof}
\begin{remark}\label{General Equation for G, H, I, and J}
Notice that we then have
$$J = \frac{I}{G} = \frac{H}{I} = \sqrt{\frac{H}{G}}$$
so that
$$H = G \times {J^2}.$$
\end{remark}
We can now compute expressions for $I$ and $G$, using Lemma \ref{Value for J}.
\begin{lemma}\label{Values for I and G}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then we obtain
$$I = \Bigg(\frac{n}{\sigma(q^k)/2}\Bigg)\cdot{\gcd\bigg(\sigma(q^k)/2,n\bigg)}$$
and
$$G = \frac{\Bigg(\gcd\bigg(\sigma(q^k)/2,n\bigg)\Bigg)^2}{\sigma(q^k)/2}.$$
\end{lemma}
\begin{proof}
By Lemma \ref{Value for J}, $J = E/\gcd(F,E)$, while $H = E^2/F$. Hence
$$I = \frac{H}{J} = \frac{E^2/F}{E/\gcd(F,E)} = \Bigg(\frac{E}{F}\Bigg)\cdot\gcd(F,E) = \Bigg(\frac{n}{\sigma(q^k)/2}\Bigg)\cdot{\gcd\bigg(\sigma(q^k)/2,n\bigg)}$$
and
$$G = \frac{I}{J} = \frac{\bigg(\gcd(F,E)\bigg)^2}{F} = \frac{\Bigg(\gcd\bigg(\sigma(q^k)/2,n\bigg)\Bigg)^2}{\sigma(q^k)/2}.$$
\end{proof}
\section{What happens when $J=1$?}\label{Sec4}
Let us examine the case $J=1$ to see whether it is interesting.
First, we prove the following unconditional lemma.
\begin{lemma}\label{When J is 1}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. Then $J = 1$ holds if and only if $n \mid \sigma(q^k)/2$.
\end{lemma}
\begin{proof}
Recall that from Lemma \ref{Value for J}, we have
$$J = \frac{n}{\gcd\bigg(\sigma(q^k)/2,n\bigg)}.$$
This is equal to one if and only if
$$\gcd\bigg(\sigma(q^k)/2,n\bigg) = n,$$
which holds if and only if $n \mid \sigma(q^k)/2$.
\end{proof}
\begin{remark}\label{Remark on J equal to 1}
Note that $J = 1$ if and only if $G = H = I$ holds.
\end{remark}
If $J=1$, then from Lemma \ref{When J is 1}, we get $n \mid \sigma(q^k)/2$, so that $n \leq \sigma(q^k)/2 < q^k$ (the last inequality holds since $I(q^k) < 2$). But Brown \cite{Brown} proved the estimate $q < n$ in 2016. Hence, $J = 1$ implies that $k > 1$. We record this in the succeeding lemma.
\begin{lemma}\label{When J is 1 then k is not 1}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. If $J = 1$, then both Conjecture \ref{DFS} and Conjecture \ref{Dris} are false.
\end{lemma}
Recall from Remark \ref{General Equation for G, H, I, and J} that $H = G \times J^2$. Since $H \geq 3$ holds \cite{Dris2}, we cannot have $G = J = 1$. We therefore get the following lemma.
\begin{lemma}\label{When J is 1 then G is not 1}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. If $J = 1$, then $G \neq 1$ holds.
\end{lemma}
\begin{theorem}\label{Equivalent Conditions 2}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. The following conditions are equivalent to $J=1$:
\begin{enumerate}
{
\item{$n \mid \sigma(q^k)/2$}
\item{$\sigma(n^2) \mid q^k n$.}
}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof follows from Lemma \ref{When J is 1}, and by writing the equation
$$\frac{\sigma(n^2)}{n} = \frac{q^k n}{\sigma(q^k)/2}$$
in the form
$$\frac{q^k n}{\sigma(n^2)} = \frac{\sigma(q^k)/2}{n}.$$
\end{proof}
\section{What happens when $F$ is squarefree?}\label{Sec3}
We rewrite the equation
$$\frac{\sigma(n^2)}{q^k} = \frac{n^2}{\sigma(q^k)/2}$$
in the form
$$\frac{\sigma(n^2)}{n} = \frac{q^k n}{\sigma(q^k)/2}$$
to get the succeeding proposition.
The following theorem is similar in spirit to Theorem \ref{Equivalent Conditions 2}.
\begin{theorem}\label{Equivalent Conditions 1}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then the following conditions are equivalent:
\begin{enumerate}
{
\item{$\sigma(q^k)/2 \mid n$}
\item{$n \mid \sigma(n^2)$}
\item{$G = \sigma(q^k)/2$}
\item{$I = n$}
}
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence of the first two conditions follows from the fact that $\gcd(q^k,\sigma(q^k))=1$.
Next, we show that the third condition is equivalent to the first. Recall from Lemma \ref{Values for I and G} that
$$G = \frac{\Bigg(\gcd\bigg(\sigma(q^k)/2,n\bigg)\Bigg)^2}{\sigma(q^k)/2}.$$
We then see that $G = \sigma(q^k)/2$ if and only if $\sigma(q^k)/2 \mid n$.
Lastly, we show that the fourth condition is equivalent to the second. To this end, suppose that
$$n = I = \gcd(n, \sigma(n^2)).$$
By the definition of GCD, it follows that $n \mid \sigma(n^2)$. Conversely, assume that $n \mid \sigma(n^2)$. Then we obtain
$$I = \gcd(n, \sigma(n^2)) = n$$
by the definition of GCD, and we are done.
\end{proof}
Under the condition that $\sigma(q^k)/2$ is squarefree, we get the following result.
\begin{theorem}\label{When F is squarefree}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. If $F = \sigma(q^k)/2$ is squarefree, then $J \neq 1$.
\end{theorem}
\begin{proof}
Suppose that $\sigma(q^k)/2$ is squarefree. (Note that, since $\sigma(q^k)/2 \mid n^2$ holds in general, then this hypothesis implies that $\sigma(q^k)/2 \mid n$ is true.) Assume to the contrary that $J = 1$. By Theorem \ref{Equivalent Conditions 2}, we have $n \mid \sigma(q^k)/2$. Since $\sigma(q^k)/2$ and $n$ are both positive, we obtain $\sigma(q^k)/2 = n$. This contradicts a result of Steuerwald in 1937 \cite{Steuerwald}, who proved that $n$ must contain a square factor.
\end{proof}
\subsection{What happens when $H$ is squarefree?}\label{Subsec3.1}
Suppose that $H$ is squarefree. By Lemma \ref{G times H equals I squared}, we have the equation
$$G \times H = I^2.$$
Since this implies $H \mid I^2$ and $H$ is squarefree, it follows that $H \mid I$. However, by Lemma \ref{G divides I and I divides H}, we have $I \mid H$. Since $I$ and $H$ are both positive, we obtain $I = H$. This means that $J = 1$. By the contrapositive of Theorem \ref{When F is squarefree}, it follows that $\sigma(q^k)/2$ is not squarefree.
We record the immediately preceding results in the following statements.
\begin{theorem}\label{When H is squarefree thm}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. If $H = G \times J^2$ is squarefree, then $J = 1$.
\end{theorem}
\begin{corollary}\label{When H is squarefree cor}
Suppose that $N = q^k n^2$ is an odd perfect number given in Eulerian form. If $H$ is squarefree, then $F = \sigma(q^k)/2$ is not squarefree.
\end{corollary}
\section{On the equation $H = I$}\label{Sec5}
Recall that we have the biconditional
$$J = 1 \iff H = I$$
from Remark \ref{Remark on J equal to 1}.
In this section, we shall attempt a naive determination of the asymptotic density of positive integers $m$ satisfying the equation $\gcd(m,\sigma(m^2))=\gcd(m^2,\sigma(m^2))$.
The author tried searching for examples and counterexamples via Sage Cell Server.
All positive integers from $1$ to $100$ (except for the integer $99$) satisfy the equation.
The following integers in the range $1 \leq m \leq 1000$ \emph{do not} satisfy $\gcd(m,\sigma(m^2))=\gcd(m^2,\sigma(m^2))$.
$$99 = {3^2}\cdot{11}$$
$$154 = 2\cdot 7\cdot 11$$
$$198 = 2\cdot{3^2}\cdot{11}$$
$$273 = 3\cdot 7\cdot 13$$
$$322 = 2\cdot 7\cdot 23$$
$$396 = {2^2}\cdot{3^2}\cdot{11}$$
$$399 = 3\cdot 7\cdot 19$$
$$462 = 2\cdot 3\cdot 7\cdot 11$$
$$469 = 7\cdot 67$$
$$495 = {3^2}\cdot 5\cdot 11$$
$$518 = 2\cdot 7\cdot 37$$
$$546 = 2\cdot 3\cdot 7\cdot 13$$
$$553 = 7\cdot 79$$
$$620 = {2^2}\cdot 5\cdot 31$$
$$651 = 3\cdot 7\cdot 31$$
$$693 = {3^2}\cdot 7\cdot 11$$
$$741 = 3\cdot 13\cdot 19$$
$$742 = 2\cdot 7\cdot 53$$
$$770 = 2\cdot 5\cdot 7\cdot 11$$
$$777 = 3\cdot 7\cdot 37$$
$$792 = {2^3}\cdot{3^2}\cdot 11$$
$$798 = 2\cdot 3\cdot 7\cdot 19$$
$$903 = 3\cdot 7\cdot 43$$
$$938 = 2\cdot 7\cdot 67$$
$$966 = 2\cdot 3\cdot 7\cdot 23$$
$$990 = 2\cdot{3^2}\cdot 5\cdot 11$$
A simple inspection yields that primes and prime powers satisfy the equation, so that there are infinitely many solutions.
The following Pari/GP-routines efficiently determine the numbers and percentages of solutions, up to a certain search limit. One can easily adjust the range.
\begin{lstlisting}
c=0;for(m=1,10,if(gcd(m,sigma(m^2))==gcd(m^2,sigma(m^2)),c=c+1));print(c," ",((c/10)*1.0))
c=0;for(m=1,100,if(gcd(m,sigma(m^2))==gcd(m^2,sigma(m^2)),c=c+1));print(c," ",((c/100)*1.0))
\end{lstlisting}
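For readers without access to Pari/GP, the routine above can be ported to Python as follows (an illustrative reimplementation; the function names are ours). It reproduces the counts reported in this section:

```python
from math import gcd, isqrt

def sigma(x):
    # sum of divisors of x, by trial division up to sqrt(x)
    total = 0
    for d in range(1, isqrt(x) + 1):
        if x % d == 0:
            total += d
            if d * d != x:
                total += x // d
    return total

def count_solutions(limit):
    # count m <= limit with gcd(m, sigma(m^2)) == gcd(m^2, sigma(m^2))
    return sum(1 for m in range(1, limit + 1)
               if gcd(m, sigma(m * m)) == gcd(m * m, sigma(m * m)))

assert count_solutions(10) == 10    # all of 1..10 satisfy the equation
assert count_solutions(100) == 99   # 99 is the only exception below 100
assert count_solutions(1000) == 974
```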
To summarize, we have the table below which shows the counts and percentages of the number of solutions to the equation
$$\gcd(m,\sigma(m^2))=\gcd(m^2,\sigma(m^2)),$$
up to $10$, ${10}^2$, ${10}^3$, ${10}^4$, ${10}^5$, and ${10}^6$, respectively:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Upper limit & Count & Percentage \\ \hline
10& 10 & 100\% \\ \hline
100& 99 & 99\% \\ \hline
1000& 974 & 97.4\% \\ \hline
10000& 9561 & 95.61\% \\ \hline
100000& 93845 & 93.845\% \\ \hline
1000000& 923464 & 92.3464\% \\ \hline
\end{tabular}
\end{center}
The author was only able to test up to ${10}^6$, because the Pari/GP interpreter of Sage Cell Server crashes once a search limit of ${10}^7$ is specified.
These computations do not constitute a rigorous proof, but they are strong evidence that the asymptotic density in question is less than one.
We state and prove this assertion in the following theorem, which the author first conjectured in the year 2020:
\begin{theorem}\label{AsymptoticDensity1}
The asymptotic density $\mathscr{A}$ of positive integers $m$ with
$$\gcd(m,\sigma(m^2))=\gcd(m^2,\sigma(m^2))$$
satisfies
$$\mathscr{A} < 1.$$
\end{theorem}
\begin{proof}
Generalizing the first (counter)example of $99$ is trivial.
If ${3^2}\cdot{11} \parallel m$, then $11 \parallel \gcd(m,\sigma(m^2))$ and $11^2 \parallel \gcd(m^2,\sigma(m^2))$. So the asymptotic density in question is less than
$$1-\frac{2}{3^3}\cdot\frac{10}{11^2} = \frac{3247}{3267} \approx 0.993878.$$
Also, if $3 \parallel m$, then with probability $1$ there exist two distinct primes $y$ and $z$ congruent to $1$ modulo $3$ such that $y \parallel m$ and $z \parallel m$. In this case, we get $3 \parallel \gcd(m,\sigma(m^2))$ and $3^2 \parallel \gcd(m^2,\sigma(m^2))$. So the asymptotic density in question is less than
$$1-\frac{2}{3^2} = \frac{7}{9} \approx 0.\overline{777}.$$
\end{proof}
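The valuation claims in the first part of the proof can be spot-checked numerically. The following Python snippet (illustrative, not part of the proof) verifies, for the members of the counterexample list above with ${3^2}\cdot{11} \parallel m$, that $11 \parallel \gcd(m,\sigma(m^2))$ while $11^2 \parallel \gcd(m^2,\sigma(m^2))$:

```python
from math import gcd, isqrt

def sigma(x):
    # sum of divisors of x, by trial division up to sqrt(x)
    total = 0
    for d in range(1, isqrt(x) + 1):
        if x % d == 0:
            total += d
            if d * d != x:
                total += x // d
    return total

def v(p, x):
    # p-adic valuation: the exponent of p in x
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

# members of the counterexample list with 3^2 * 11 exactly dividing m
for m in (99, 198, 495, 990):
    s = sigma(m * m)
    assert v(11, gcd(m, s)) == 1        # 11 || gcd(m, sigma(m^2))
    assert v(11, gcd(m * m, s)) == 2    # 11^2 || gcd(m^2, sigma(m^2))
```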
The real open problem is whether the asymptotic density $\mathscr{A}$ is $0$. We state this in the succeeding conjecture:
\begin{conjecture}\label{AsymptoticDensity2}
The asymptotic density $\mathscr{A}$ of positive integers $m$ with
$$\gcd(m,\sigma(m^2))=\gcd(m^2,\sigma(m^2))$$
satisfies
$$\mathscr{A} = 0.$$
\end{conjecture}
\begin{remark}\label{Remark Asymptotic Density H equals I}
In an answer to one of the author's questions in MathOverflow, Aaron Meyerowitz \cite{Meyerowitz} (\url{https://mathoverflow.net/users/8008}) made the following assertions regarding Conjecture \ref{AsymptoticDensity2}: "I think the density does go to zero, but quite slowly. If $p \equiv 1 \pmod 6$ is prime then there are two solutions $0<r<s<p-1$ of $$u^2+u+1 \equiv 0 \pmod p.$$
If $p\parallel m$ then, with probability $1,$ there are two distinct primes $u$ and $v,$ each congruent to $r \pmod p,$ with $u \parallel m$ and $v \parallel m.$ (Either or both could be congruent to $s$ as well.) Then $p \parallel \gcd(m,\sigma(m^2))$ while $p^2 \parallel \gcd(m^2,\sigma(m^2)).$ So the asymptotic density for this not to happen is $1-\frac{p-1}{p^2}<1-\frac{1}{p+2}$. If we can argue that the chance that none of these events happen is asymptotically $\prod(1-\frac{p-1}{p^2})$ over the primes congruent to $1 \pmod 6,$ then that asymptotic density is $0$."
\end{remark}
\section{Some Further Considerations}\label{Sec6}
\subsection{Bounds for $K, G, I$ and $J$}\label{Subsec6.1}
Recall from Section \ref{Sec1} that we have set $E = n$ and $F = \sigma(q^k)/2$. In this section, we compute a lower bound for $K = \gcd(E,F)$.
We begin with the following proposition.
\begin{theorem}\label{GCD of E and F}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then $K \neq 1$.
\end{theorem}
\begin{proof}
Assume to the contrary that $K = 1$. Then, we can simplify the expression for $J$ (from Lemma \ref{Value for J}) as
$$J = \frac{n}{\gcd\bigg(\sigma(q^k)/2,n\bigg)} = \frac{E}{K} = E.$$
Recall from Remark \ref{General Equation for G, H, I, and J} that
$$J^2 = \frac{H}{G}.$$
We solve for $G$ and then obtain
$$G = \frac{H}{J^2} = \frac{E^2}{F}\cdot\frac{1}{E^2} = \frac{1}{F},$$
whereupon we get a contradiction from $F = \sigma(q^k)/2 \geq 3$ and $G \geq 1$.
\end{proof}
\begin{remark}\label{Remark GCD of E and F}
By Theorem \ref{GCD of E and F}, $K \geq 2$ must hold. Since $E$ and $F$ are both odd, then we also obtain $K \neq 2$. Consequently, we have the lower bound
$$K = \gcd(E,F) \geq 3.$$
\end{remark}
As a corollary, we obtain the following (unconditional) bounds for $G$, $I$ and $J$. (We have also used the definitional property of $K = \gcd(E,F)$, i.e. that $K$ divides both $E$ and $F$.)
\begin{corollary}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then the following bounds hold:
\begin{enumerate}
{
\item{$\frac{9}{F} \leq G \leq F$}
\item{$\frac{3E}{F} \leq I \leq E$}
\item{$\frac{E}{F} \leq J \leq \frac{E}{3}$}
}
\end{enumerate}
\end{corollary}
\subsection{On the constraint $\sigma(w^2) \equiv 0 \pmod w$}\label{Subsec6.2}
Lastly, the author also tried checking for examples of numbers $2 \leq w \leq {10}^6$ satisfying the divisibility constraint
$$w \mid \sigma(w^2)$$
using the following Pari-GP script, via Sage Cell Server:
\begin{lstlisting}
for(w=2, 1000000, if((Mod(sigma(w^2),w) == 0),print(w,factor(w))))
\end{lstlisting}
Here is the output:
$$39 = 3 \cdot {13}$$
$$793 = {13} \cdot {61}$$
$$2379 = 3 \cdot {13} \cdot {61}$$
$$7137 = {3^2} \cdot {13} \cdot {61}$$
$$13167 = {3^2} \cdot 7 \cdot {11} \cdot {19}$$
$$76921 = {13} \cdot {61} \cdot {97}$$
$$78507 = {3^2} \cdot {11} \cdot {13} \cdot {61}$$
$$230763 = 3 \cdot {13} \cdot {61} \cdot {97}$$
$$238887 = {3^2} \cdot {11} \cdot {19} \cdot {127}$$
$$549549 = {3^2} \cdot 7 \cdot{11} \cdot {13} \cdot{61}$$
$$692289 = {3^2} \cdot {13} \cdot {61} \cdot {97}$$
$$863577 = {3^2} \cdot {{11}^2} \cdot {13} \cdot {61}$$
Note that all of the known examples are \emph{odd}. The author double-checked the list of the first $199$ terms of OEIS sequence \textit{A232354} (\url{https://oeis.org/A232354/b232354.txt}) and verified that all of them are odd. Additionally, none of the terms $w$ in that list satisfies $\sigma(w^2)/w = d^e$ (where $d$ is prime), except for $w = 39$.
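These outputs are straightforward to double-check; the following Python snippet (illustrative) re-verifies the divisibility $w \mid \sigma(w^2)$ for the first few listed terms:

```python
from math import isqrt

def sigma(x):
    # sum of divisors of x, by trial division up to sqrt(x)
    total = 0
    for d in range(1, isqrt(x) + 1):
        if x % d == 0:
            total += d
            if d * d != x:
                total += x // d
    return total

examples = [39, 793, 2379, 7137, 13167]
for w in examples:
    assert sigma(w * w) % w == 0       # w | sigma(w^2)
    assert w % 2 == 1                  # all known examples are odd
assert sigma(39 * 39) // 39 == 61      # for w = 39, the quotient is prime
```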
\section{Future Research}\label{Sec7}
We leave the following problem for other researchers to solve.
\begin{conjecture}
If $N = q^k n^2$ is an odd perfect number given in Eulerian form, then unconditionally we have
$$\gcd(\sigma(q^k), \sigma(n^2)) \neq \gcd(n^2, \sigma(n^2)).$$
\end{conjecture}
\section{Acknowledgments}\label{Sec8}
The author thanks the anonymous MSE user Peter (\url{https://math.stackexchange.com/u/82961}) for sharing Pari/GP-routines. The author also would like to give credit to the anonymous MSE user mathlove (\url{https://math.stackexchange.com/users/78967}) for providing a proof of Lemma \ref{Value for J} \cite{mathlove}. The author is likewise indebted to Aaron Meyerowitz of MathOverflow \cite{Meyerowitz}.
\section{Introduction}\label{intro}
The unavoidable coupling between an open quantum system and its environment allows the environment to thermalize the quantum system~\cite{Breuer2002,Gardiner2004,Scully2012}. Cooling a quantum system has long been a challenge and is one of the most desirable quantum technologies. It plays a crucial role in the initialization of quantum applications including but not limited to adiabatic quantum computing~\cite{Childs2001,Farhi2001,Sarandy2005,Hammerer2009,You2011} and ultrahigh-precision measurements using mechanical resonators~\cite{Bocko1996,Caves1980,Li2011}.
A passive and straightforward way of cooling a quantum system is attaching the system to a cold bath. To cool a system down to an even lower temperature, a successful approach called sideband cooling employs transitions from the upper state of the system, e.g., a qubit, to an auxiliary intermediate state, which can then quickly relax into the ground state~\cite{Chu1998,Cohen-Tannoudji1998,Phillips1998,Leibfried2003,Valenzuela2006,Wineland1978,Neuhauser1978,Monroe1995}. One disadvantage of this cooling method is the requirement of an appropriate intermediate state. An alternative is provided by selective quantum measurements~\cite{Lloyd1997,Scully2001,Li2011,Xu2014,Pyshkin2016}, in which the measurement outcomes are read out so that the unwanted samples can be discarded. A major disadvantage of this approach is that many measurements are required to achieve the cooling target, and the survival probability is quite small.
Since quantum systems cannot be isolated from the surrounding environment, an interesting idea is to exploit the environment to cool them down. In the spirit of this idea, frequent quantum measurements that modify the thermodynamic properties of the environment have been proposed theoretically~\cite{Erez2008,Gordon2009,Gordon2010,Gelbwaser2014,Kurizki2015,Kurizki2017} and verified experimentally~\cite{Gonzalo2010}. In the open-quantum-system theory, the Markovian limit renders an invariant and unidirectional energy-flow rate from the system to the environment as the system is cooled down to the thermal-equilibrium state. However, there are many scenarios in which the relaxation time scale of the environment is not sufficiently small compared with the dynamical time scale of the system, which means that the bath has a memory effect on the system. The environmental memory effect has been used to extract work from the bath~\cite{Gelbwaser2013}, exceed the classical Carnot bound~\cite{Zhang2014}, and freeze the system state~\cite{Gordon2010}.
On account of the memory capacity of structured baths, energy may flow from the system into the environment and then back to the system until thermal equilibrium is reached. More importantly, structured baths make it possible to manipulate the dynamics of the open system. The free unitary evolution of the combined density matrix $\rho$ of the system and the environment over a time $\tau$ can be expressed by $\mU_{\tau}[\rho(0)]$. Supposing that the system and the environment can be roughly decoupled at this moment by, e.g., a nonselective impulsive measurement in the basis of the bare system Hamiltonian, the combined density matrix becomes $\mM\mU_{\tau}[\rho(0)]$. If the time interval $\tau$ is chosen such that, at the end of this interval, the energy flowing from the system to the environment exceeds that moving in the reverse direction, then a net amount of system energy is retained in the environment by the measurement. Performing a sequence of periodic measurements with a constant separation time $\tau$, with the system freely evolving in contact with the environment between measurements, the combined density matrix can be written as
\begin{equation}\label{process}
\rho(t=n\tau)=(\mM\mU_{\tau})^n[\rho(0)]
\end{equation}
after $n$ consecutive measurements. Repeating, through periodic quantum measurements, the process in which system energy flows into the environment pushes the quantum system into a quasisteady state. The effective temperature of the system is controllable by the measurement frequency and under certain conditions becomes even lower than the environmental temperature. This method may be realized in many existing experimental scenarios, such as microcavities and quantum dots~\cite{Gordon2009}. The cooling phenomenon was attributed to the quantum anti-Zeno effect~\cite{Erez2008}, yet we will see that this argument is controversial.
Recently, we proposed a compact criterion concerning the spectral density function (SDF) of a zero-temperature environment to discriminate quantum Zeno and anti-Zeno effects~\cite{Zhang2018}. Inspired by a similar spectral analysis, we find in this work that the bath structure markedly affects the availability and efficiency of system cooling, and that the cooling effect is not equivalent to the anti-Zeno effect. In a finite-temperature environment, we find that the cooling effect arising for a short measurement interval entails the breakdown of the rotating-wave approximation (RWA) and a nontrivial contribution from the time-dependent damping rate of the system. The counter-rotating terms impact the final quasisteady state and cannot be neglected. We examine the spectrum of the environment and identify two conditions contributing to cooling: (i) the logarithmic derivative of the spectrum at the system transition frequency is large enough; (ii) the spectrum has a sharp high-frequency cutoff.
The rest of this work is organized as follows. In Sec.~\ref{dyna}, we focus on the free evolution $\mU_{\tau}(\rho)$ described by a time-convolutionless master equation, which briefly introduces the dynamics of the system in a finite-temperature bath without RWA. We analyze the dynamical contributions from the rotating-wave and counter-rotating terms in detail. Section~\ref{thdy} is devoted to the system dynamics under periodic nondemolition measurements. The connection and distinction between the quantum anti-Zeno effect and cooling are clarified. In Sec.~\ref{cool}, we establish the cooling conditions by spectral analysis and thereby study the cooling phenomenon in the modified Lorentzian model and the super-Ohmic model. We close this work with a summary in Sec.~\ref{conc}.
\section{Evolution of the system without measurements}\label{dyna}
\subsection{Time-convolutionless master equation for the system in a finite-temperature bath without RWA}
We consider a two-level system (qubit or TLS) with Bohr frequency $\om_a$ undergoing decay into a finite-temperature bath. The bath can be represented by a set of bosonic harmonic oscillators. The total Hamiltonian in the Schr\"{o}dinger picture has a general form ($\hbar=1$),
\begin{eqnarray}\non
H&=&\frac{1}{2}\om_a\si_z +\sum_k\om_k a_k^\dag a_k+\sum_k(g_k \si_+ a_k + g_k^*\si_- a_k^\dag) \\ \label{H}
&+&\sum_k(g_k \si_- a_k + g_k^* \si_+ a_k^\dag),
\end{eqnarray}
where $\si_z$ and $\si_{\pm}$ are respectively the Pauli matrix and the inversion operators of the system, $a_k^\dag$ and $a_k$ are respectively the creation and annihilation operators for the $k$th mode of the environment with frequency $\omega_k$, and $g_k$ describes the coupling strength between the system and the $k$th mode. The interaction Hamiltonian contains both the rotating-wave terms $\si_+a_k$ and $\si_-a_k^\dag$ and the counter-rotating terms $\si_+a_k^\dag$ and $\si_-a_k$.
In the weak-coupling regime, the evolution of the reduced density matrix of the system $\rho_S(t)$ over time $t$ can be written in the Schr\"{o}dinger picture as (a detailed and general derivation is presented in Appendix~\ref{SNME})
\begin{equation}
\label{ME}
\begin{aligned}
\frac{d}{dt}\rho_S(t)=&-i\left[\frac{1}{2}\om_a\si_z +\sum_{j=\pm}\De_j(t)\si_j^\dag \si_j,\rho_S(t)\right]\\
&+\sum_{j=\pm}
\frac{\Ga_j(t)}{2}\mL[\si_j](\rho_S(t))\\
&+\sum_{j=\pm}\left\{\left[\frac{\Ga_j(t)}{2}+i\De_j(t)\right]\si_j\rho_S(t)\si_j
+{\rm H.c.}\right\}.
\end{aligned}
\end{equation}
Here $\De_{+(-)}(t)$ and $\Ga_{+(-)}(t)$ are respectively the time-dependent Lamb shift of the ground (excited) state $|g\ra$ ($|e\ra$) and the time-dependent transition rate for $|g\ra\to|e\ra$ ($|e\ra\to|g\ra$). All of these coefficients can be decomposed into two parts: $\De_{\pm}(t)=\De^r_{\pm}(t)+\De^{cr}_{\pm}(t)$ and $\Ga_{\pm}(t)=\Ga^r_{\pm}(t)+\Ga^{cr}_{\pm}(t)$. Note, throughout this work, the superscripts $r$ and $cr$ are respectively used to signify the contributions from the rotating-wave terms and the counter-rotating terms in the interaction Hamiltonian and the subscripts $+$ and $-$ are respectively used to signify the contributions from the transitions $|g\ra\to|e\ra$ and $|e\ra\to|g\ra$. The Lamb shifts and the transition rates are respectively defined as
\begin{equation}
\label{De}
\begin{aligned}
\De_+^r(t)\equiv&\int_0^\infty d\om n_T(\om)G_0(\om)\frac{1-\cos[(\om-\om_a)t]}{\om-\om_a},\\
\De_+^{cr}(t)\equiv&-\int_0^\infty d\om [n_T(\om)+1]G_0(\om)\frac{1-\cos[(\om+\om_a)t]}{\om+\om_a},\\
\De_-^r(t)\equiv&-\int_0^\infty d\om [n_T(\om)+1]G_0(\om)\frac{1-\cos[(\om-\om_a)t]}{\om-\om_a},\\
\De_-^{cr}(t)\equiv&\int_0^\infty d\om n_T(\om)G_0(\om)\frac{1-\cos[(\om+\om_a)t]}{\om+\om_a},
\end{aligned}
\end{equation}
and
\begin{equation}
\label{Ga}
\begin{aligned}
\Ga_+^r(t)\equiv&2t\int_0^\infty d\om n_T(\om)G_0(\om){\rm sinc}[(\om-\om_a)t],\\
\Ga_+^{cr}(t)\equiv&2t\int_0^\infty d\om [n_T(\om)+1]G_0(\om){\rm sinc}[(\om+\om_a)t],\\
\Ga_-^r(t)\equiv&2t\int_0^\infty d\om [n_T(\om)+1]G_0(\om){\rm sinc}[(\om-\om_a)t],\\
\Ga_-^{cr}(t)\equiv&2t\int_0^\infty d\om n_T(\om)G_0(\om){\rm sinc}[(\om+\om_a)t],
\end{aligned}
\end{equation}
where $n_T(\om)=(e^{\be\om}-1)^{-1}$ is the temperature-dependent ($\be=1/T$ with $k_B\equiv1$) average population of the oscillator (bath mode) with frequency $\om$ at temperature $T$, and $G_0(\om)=\sum_k|g_k|^2\de(\om-\om_k)$ is the SDF at zero temperature. The Lindblad superoperators in the second line of Eq.~(\ref{ME}) are defined as
\begin{equation}
\mL[\si](\rho)\equiv 2\si\rho\si^\dag-\{\si^\dag \si,\rho\},
\end{equation}
representing the quantum jump process of the TLS characterized by an arbitrary system operator $\si$. The last line of Eq.~(\ref{ME}) represents the contribution of the so-called nonsecular terms and involves the two-photon processes. Note this line also includes $\si_+\rho_S(t)\si_+$ and $\si_-\rho_S(t)\si_-$, which stem from the cross interaction between the rotating-wave and the counter-rotating terms.
We can then obtain the generalized Bloch equations for the elements in $\rho_S(t)$ straightforwardly from Eq.~(\ref{ME}). The diagonal terms evolve according to~\cite{Kofman2004}
\begin{equation}\label{re}
\frac{d}{dt}\rho_{ee}(t)=-\frac{d}{dt}\rho_{gg}(t)=-\Ga_-(t)\rho_{ee}(t)+\Ga_+(t)\rho_{gg}(t),
\end{equation}
whereas the off-diagonal terms evolve according to
\begin{equation}\label{reg}
\begin{aligned}
&\frac{d}{dt}\rho_{eg}(t)=\frac{d}{dt}\rho_{ge}^*(t)\\
=&-\left\{\frac{\Ga_-(t)+\Ga_+(t)}{2}+i[\om_a+\De_-(t)-\De_+(t)]\right\}\rho_{eg}(t)\\
&+\left\{\frac{\Ga_-(t)+\Ga_+(t)}{2}-i[\De_-(t)-\De_+(t)]\right\}\rho_{ge}(t).
\end{aligned}
\end{equation}
It is clear that the time-dependent Lamb shifts $\De_{\pm}(t)$ enter only the dephasing dynamics of the system, whereas the time-dependent transition rates $\Ga_{\pm}(t)$ affect both the population decay and the dephasing. The cooling or heating issue in this work revolves around Eq.~(\ref{re}) for the populations, and retaining the time dependence of the damping rates $\Ga_{\pm}(t)$ is crucial for cooling.
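As a sanity check on Eq.~(\ref{re}): in the Markovian limit, where $\Ga_{\pm}$ reduce to constants, the population equation has the closed-form solution $\rho_{ee}(t)=\rho_{ss}+[\rho_{ee}(0)-\rho_{ss}]e^{-(\Ga_++\Ga_-)t}$ with the steady-state value $\rho_{ss}=\Ga_+/(\Ga_++\Ga_-)$. The short Python sketch below (with illustrative constant rates, not tied to any specific bath model in this work) integrates the equation and confirms this:

```python
import math

def evolve(rho_ee0, gp, gm, t, steps=200000):
    # forward-Euler integration of d(rho_ee)/dt = -gm*rho_ee + gp*(1 - rho_ee)
    # with constant (Markovian) rates gp = Gamma_+ and gm = Gamma_-
    dt = t / steps
    r = rho_ee0
    for _ in range(steps):
        r += dt * (-gm * r + gp * (1.0 - r))
    return r

gp, gm = 0.3, 1.0            # illustrative constant rates
rho_ss = gp / (gp + gm)      # steady-state excited population
assert abs(evolve(0.9, gp, gm, t=20.0) - rho_ss) < 1e-6

# at finite time, the integrator matches the closed-form solution
t2 = 2.0
exact = rho_ss + (0.9 - rho_ss) * math.exp(-(gp + gm) * t2)
assert abs(evolve(0.9, gp, gm, t2) - exact) < 1e-4
```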
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{NME.eps}
\caption{(Color online) Transition rates and Lamb shifts appearing in the master equation~(\ref{ME}), each relevant in a different physical scenario. The time-convolutionless master equation includes all eight items. Under the rotating-wave approximation, the items with superscript $r$ are retained (red rounded rectangle), while the remaining four items, as well as those in the third line of Eq.~(\ref{ME}), vanish. For the zero-temperature environment or the vacuum-field bath, $\De_-^r(t)$, $\Ga_-^r(t)$, $\De_+^{cr}(t)$, and $\Ga_+^{cr}(t)$ are retained (blue rounded rectangle), while the remaining four items vanish. With the vacuum field under RWA, only the two items $\De_-^r(t)$ and $\Ga_-^r(t)$ survive, corresponding to the overlap of the red and blue rounded rectangles. All of these coefficients become time independent in the Markovian limit. }
\label{NME}
\end{figure}
In different scenarios, the perturbative and time-convolutionless master equation~(\ref{ME}) can be reduced to simpler forms, as summarized in the sketch of Fig.~\ref{NME}. The last, nonsecular term of Eq.~(\ref{ME}) is dropped under the secular approximation, and our master equation then reduces to a master equation in the Lindblad form. The counter-rotating terms in Hamiltonian~(\ref{H}) are discarded under the RWA. Consequently, the counter-rotating contributions to both the Lamb shift $\De^{cr}(t)$ and the state-transition rate $\Ga^{cr}(t)$ disappear from Eq.~(\ref{ME}), as do the terms for the cross interaction. For a zero-temperature environment (vacuum-field bath), i.e., in the limit $\be\to\infty$ and $n_T(\om)\to 0$, we have $\De_+^r(t)=\De_-^{cr}(t)=\Ga_+^r(t)=\Ga_-^{cr}(t)=0$. Physically, the vacuum-field assumption removes part of both the transition rate for $|g\ra\to|e\ra$ and the energy shift of $|g\ra$ due to the rotating-wave terms, as well as part of both the transition rate for $|e\ra\to|g\ra$ and the energy shift of $|e\ra$ due to the counter-rotating terms. With the Markovian approximation, which is valid on a long time scale and popular in the literature on environmental effects, all of the time-dependent energy shifts and transition rates in the master equation~(\ref{ME}) become time independent.
\subsection{Thermal contributions}\label{thermal}
In the general framework of open-quantum-system dynamics, the environment is assumed to be in a thermal equilibrium state at temperature $T$. The bath is described by a canonical ensemble whose constituent oscillators obey Bose-Einstein statistics. Here we assume that the ground- and excited-state populations of the qubit likewise obey a canonical distribution, which defines its time-dependent effective temperature $T_S(t)$. The excited-state population then satisfies the Fermi-Dirac statistics
\begin{equation}
\rho_{ee}(t)=\frac{1}{e^{\be_S(t)\om_a}+1},
\end{equation}
where $\be_S(t)\equiv1/T_S(t)$. In the following, we use the time evolution of the excited population $\rho_{ee}(t)$ to characterize that of the effective temperature. Heating and cooling of the qubit correspond to an increase and a decrease of the excitation population $\rho_{ee}(t)$, respectively.
On a very short time scale, one can apply an adiabatic approximation to Eq.~(\ref{re}), such that $\rho_{ee}(t)\simeq\rho_{ee}(0)$. The solution for the excitation population $\rho_{ee}(t)$ in Eq.~(\ref{re}) can then be written as
\begin{equation}\label{ree}
\rho_{ee}(t)\approx\rho_{ee}(0)\left[1+e^{\be_S(0)\om_a}J_+(t)-J_-(t)\right],
\end{equation}
where $J_-(t)=J_-^r(t)+J_-^{cr}(t)$ and $J_+(t)=J_+^r(t)+J_+^{cr}(t)$ with
\begin{equation}\label{J}
\begin{aligned}
J_+^r(t)\equiv&\int_0^{t} dt' \Ga_+^r(t')\\
=&t^2\int_0^\infty d\om n_T(\om)G_0(\om){\rm sinc}^2\left[\frac{(\om-\om_a )}{2}t\right],\\
J_+^{cr}(t)\equiv&\int_0^{t} dt' \Ga_+^{cr}(t')\\
=&t^2\int_0^\infty d\om [n_T(\om)+1]G_0(\om){\rm sinc}^2\left[\frac{(\om+\om_a )}{2}t\right],\\
J_-^r(t)\equiv&\int_0^{t} dt' \Ga_-^r(t')\\
=&t^2\int_0^\infty d\om [n_T(\om)+1]G_0(\om){\rm sinc}^2\left[\frac{(\om-\om_a )}{2}t\right],\\
J_-^{cr}(t)\equiv&\int_0^{t} dt' \Ga_-^{cr}(t')\\
=&t^2\int_0^\infty d\om n_T(\om)G_0(\om){\rm sinc}^2\left[\frac{(\om+\om_a )}{2}t\right].
\end{aligned}
\end{equation}
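As a numerical illustration of Eq.~(\ref{J}), the four integrals can be evaluated on a frequency grid. The following sketch is ours, not part of the original analysis; it assumes the Lorentzian SDF and the parameters quoted later in Fig.~\ref{rL} ($\al=0.01$, $\Lam=0.25\om_a$, $\om_0=1.5\om_a$, $\be\om_a=2$), with $\om_a=1$ fixing the units:

```python
import numpy as np

def sinc2(x):
    # sin(x)/x squared; note np.sinc(y) = sin(pi*y)/(pi*y)
    return np.sinc(x / np.pi) ** 2

def J_terms(t, beta=2.0, wa=1.0, alpha=0.01, Lam=0.25, w0=1.5):
    # Frequency grid in units of w_a; a simple Riemann sum suffices here.
    w = np.linspace(1e-6, 20.0, 40001)
    dw = w[1] - w[0]
    G0 = alpha * w * Lam**2 / (Lam**2 + (w - w0) ** 2)   # Lorentzian SDF
    nT = 1.0 / np.expm1(beta * w)                        # Bose-Einstein occupation
    k_r = sinc2(0.5 * (w - wa) * t)                      # rotating-wave kernel
    k_cr = sinc2(0.5 * (w + wa) * t)                     # counter-rotating kernel
    Jp_r = t**2 * np.sum(nT * G0 * k_r) * dw
    Jp_cr = t**2 * np.sum((nT + 1) * G0 * k_cr) * dw
    Jm_r = t**2 * np.sum((nT + 1) * G0 * k_r) * dw
    Jm_cr = t**2 * np.sum(nT * G0 * k_cr) * dw
    return Jp_r, Jp_cr, Jm_r, Jm_cr
```

The rotating-wave kernel peaks at $\om=\om_a$, whereas the counter-rotating kernel peaks at $\om=-\om_a$, outside the physical range, which is why the $cr$ contributions are comparatively small at long times.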
Thus, for various initial populations, one finds quite similar dynamics for $\rho_{ee}(t)$ (see the inset of Fig.~\ref{rL}). Note that even if the temperature of the system is initially equal to that of the environment, the state of the system still evolves in time, since it becomes entangled with the environment before they approach a new thermal equilibrium. The excitation population of the system remains unchanged under the Born-Markovian approximation, yet it changes with time for a structured environment, as displayed by the black solid line in the inset of Fig.~\ref{rL}.
The relative deviation of the excitation population at time $t$ is
\begin{equation}\label{dr}
\frac{\rho_{ee}(t)-\rho_{ee}(0)}{\rho_{ee}(0)}=\int_0^\infty d\om F[\be_S(0),\be,t,\om]G_0(\om),
\end{equation}
where the filter function $F$ can be divided into rotating-wave and counter-rotating components, $F(\be_S,\be,t,\om)=F^r(\be_S,\be,t,\om)+F^{cr}(\be_S,\be,t,\om)$, respectively defined as
\begin{equation}\label{filter}
\begin{aligned}
F^r(\be_S,\be,t,\om)\equiv&t^2\frac{e^{\be_S\om_a}-e^{\be\om}}{e^{\be\om}-1}{\rm sinc}^2\left(\frac{\om-\om_a}{2}t\right),\\
F^{cr}(\be_S,\be,t,\om)\equiv&t^2 e^{\be_S\om_a}\frac{e^{\be\om}-e^{-\be_S\om_a}}{e^{\be\om}-1}{\rm sinc}^2\left(\frac{\om+\om_a}{2}t\right).
\end{aligned}
\end{equation}
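The sign structure of the two components can be checked numerically. In the sketch below (ours; $\be_S=\be$ with $\be\om_a=2$ and $\om_a=1$ assumed), $F^r$ and $F^{cr}$ are built directly from the kernel weights $e^{\be_S\om_a}n_T-(n_T+1)$ and $e^{\be_S\om_a}(n_T+1)-n_T$ implied by Eq.~(\ref{ree}), rather than from the closed forms:

```python
import numpy as np

def sinc2(x):
    return np.sinc(x / np.pi) ** 2      # sin(x)/x squared

def F_r(w, t, beta=2.0, beta_S=2.0, wa=1.0):
    # Rotating-wave component: absorption weight minus emission weight
    nT = 1.0 / np.expm1(beta * w)
    weight = np.exp(beta_S * wa) * nT - (nT + 1.0)
    return t**2 * weight * sinc2(0.5 * (w - wa) * t)

def F_cr(w, t, beta=2.0, beta_S=2.0, wa=1.0):
    # Counter-rotating component: always non-negative at non-negative T_S
    nT = 1.0 / np.expm1(beta * w)
    weight = np.exp(beta_S * wa) * (nT + 1.0) - nT
    return t**2 * weight * sinc2(0.5 * (w + wa) * t)
```

With $\be_S=\be$, the weight of $F^r$ changes sign exactly at $\om=\om_a$, while the weight of $F^{cr}$ stays positive, in line with the discussion that follows.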
The contribution of the rotating-wave terms, which represent the energy exchange between the qubit and the environment, can be further divided into two parts with respect to the bath-mode frequency $\om$, with $\om_a\be_S(0)/\be$ serving as the boundary between the low- and high-frequency domains. The oscillators with frequency above $\om_a\be_S(0)/\be$ render $F^r[\be_S(0),\be,t,\om]$ negative and can thus be used to cool down the qubit, whereas the oscillators with frequency below $\om_a\be_S(0)/\be$ render $F^r[\be_S(0),\be,t,\om]$ positive, heating the qubit up. Meanwhile, the fluctuation of the thermal field, arising from the counter-rotating (energy-nonconserving) terms in the interaction Hamiltonian, generates virtual particles. Without initial population inversion of the system, that is, at non-negative temperature, the function $F^{cr}[\be_S(0),\be,t,\om]$ is always positive, implying that the virtual process carries energy to the qubit and heats it up.
In particular, when $\be_S(0)=\be$, the boundary separating the positive and negative regimes of $F^r(\be,\be,t,\om)$ is exactly the system transition frequency $\om_a$. Combining the contributions from the rotating-wave and counter-rotating filter functions in the high-frequency ($\om>\om_a$) regime, the upper bound for cooling is determined by the condition $F^r(\be,\be,\tau,\om)+F^{cr}(\be,\be,\tau,\om)\leq0$. In the low-temperature limit, it can be estimated as
\begin{equation}\label{wup}
\om\lesssim\om_a[1+4\be\om_a\exp(-\be\om_a)],
\end{equation}
which is obtained by replacing the squared sine function in Eq.~(\ref{filter}) with its mean value $1/2$. The bath spectrum can thus be roughly divided into three parts according to the cooling and heating effects: low- and high-frequency oscillators with $\om<\om_a$ and $\om>\om_a[1+4\be\om_a\exp(-\be\om_a)]$ effectively heat up the qubit, whereas those with moderate frequencies $\om_a<\om<\om_a[1+4\be\om_a\exp(-\be\om_a)]$ cool it down.
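The estimate~(\ref{wup}) can be checked numerically by locating the zero crossing of the time-averaged $F^r+F^{cr}$ above resonance. A sketch of ours, assuming $\be\om_a=2$ and working in units of $\om_a$ (the kernel weights follow from Eq.~(\ref{ree})):

```python
import numpy as np

beta, wa = 2.0, 1.0     # beta*w_a = 2, as in the figures

def g(w):
    # Time-averaged F^r + F^cr with sin^2 replaced by 1/2 (common factors dropped)
    nT = 1.0 / np.expm1(beta * w)
    w_r = np.exp(beta * wa) * nT - (nT + 1.0)       # rotating-wave weight (< 0 above w_a)
    w_cr = np.exp(beta * wa) * (nT + 1.0) - nT      # counter-rotating weight (always > 0)
    return w_r / (w - wa) ** 2 + w_cr / (w + wa) ** 2

# Bisection for the zero crossing above resonance
lo, hi = 1.5, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
w_upper = 0.5 * (lo + hi)

w_estimate = wa * (1 + 4 * beta * wa * np.exp(-beta * wa))   # Eq. (wup)
```

For $\be\om_a=2$ the bisection root lies within a few percent of the closed-form estimate, confirming that the mean-value replacement is adequate here.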
Note that Eq.~(\ref{ree}) is obtained under the adiabatic approximation, so the above interpretation applies on a short time scale. Scrutinizing the filter functions $F^r(\be_S,\be,t,\om)$ and $F^{cr}(\be_S,\be,t,\om)$, the factor $e^{\be_S\om_a}$ in $F^{cr}(\be_S,\be,t,\om)$ indicates that the contribution from the counter-rotating terms dominates that from the rotating-wave terms, especially at extremely low temperature. Thus one should be cautious when applying the rotating-wave approximation at short time scales and finite temperatures.
\section{Excitation dynamics under non-demolition measurements}\label{thdy}
In this section we consider the complete process described by Eq.~(\ref{process}). A standard method to periodically decouple the system and the environment is to instantaneously perform a nonselective projective $\si_z$ measurement on the qubit at the moments $n\tau$, $n\geq1$, thereby projecting the qubit state onto the energy eigenstates $|e\ra$ and $|g\ra$,
\begin{equation}\label{measurement}
\rho_S(n\tau)\mapsto \rho_S^M(n\tau)=\frac{1}{2}\left[\rho_S(n\tau)+\si_z\rho_S(n\tau)\si_z\right].
\end{equation}
Since the $\si_z$ projective measurement commutes with the bare Hamiltonian of the system, it serves as a quantum nondemolition (QND) measurement on the system. The effect of the QND measurement is to retain the qubit's $\si_z$-diagonal elements and erase the off-diagonal ones. It should be stressed that the measurements are nonselective, i.e., the measurement outcomes are not read out and not used to discard unwanted samples. A detailed dynamical description for the interval $[0, \tau]$ is provided in Appendix~\ref{PM}; it applies equally to any $[(n-1)\tau, n\tau]$, $n=1,2,3,\cdots$.
Applying $n$ periodic measurements with a constant time spacing $\tau$, we obtain
\begin{equation}\label{rhoM}
\begin{aligned}
\rho_{ee}^M(t=n\tau)\approx&\left[\rho_{ee}(0)-\frac{J_+(\tau)}{J(\tau)}\right]
e^{-\frac{J(\tau)}{\tau}t}+\frac{J_+(\tau)}{J(\tau)},
\end{aligned}
\end{equation}
with $J(\tau)\equiv J_+(\tau)+J_-(\tau)$. Equation~(\ref{rhoM}) is a formal solution showing that, under the periodic measurements, the excitation population follows an exponential-like decay towards a quasisteady value $J_+(\tau)/J(\tau)$, with an effective decay rate $J(\tau)/\tau$. Both the steady state and the decay rate depend on the measurement time interval $\tau$. In the Markovian limit of sufficiently large $\tau\to\infty$, Eq.~(\ref{rhoM}) reduces to
\begin{equation}\label{rMar}
\rho_{ee}^M(t)=\left[\rho_{ee}(0)-\rho_{ee}^B\right]e^{-\Ga_0 t}+\rho_{ee}^B,
\end{equation}
where $\Ga_0\equiv \lim_{\tau\to\infty}J(\tau)/\tau=2\pi[2n_T(\om_a)+1]G_0(\om_a)$ is the free decay rate determined by Fermi's golden rule, and $\rho_{ee}^B\equiv \lim_{\tau\to\infty}J_+(\tau)/J(\tau)=(e^{\be\om_a}+1)^{-1}$, meaning that the effective temperature of the system eventually equals that of the thermal bath. Cooling therefore cannot be realized in this limit.
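A sketch of ours shows how the exponential form of Eq.~(\ref{rhoM}) emerges: iterating the adiabatic single-interval update $\rho\to\rho+(1-\rho)J_+(\tau)-\rho J_-(\tau)$ implied by Eq.~(\ref{ree}) contracts the population geometrically toward the fixed point $J_+(\tau)/J(\tau)$. The Lorentzian bath parameters of Fig.~\ref{rL} are assumed:

```python
import numpy as np

def J_pm(tau, beta=2.0, wa=1.0, alpha=0.01, Lam=0.25, w0=1.5):
    # J_+(tau) and J_-(tau) of Eq. (J) for the Lorentzian SDF, on a grid
    w = np.linspace(1e-6, 20.0, 40001)
    dw = w[1] - w[0]
    G0 = alpha * w * Lam**2 / (Lam**2 + (w - w0) ** 2)
    nT = 1.0 / np.expm1(beta * w)
    k_r = np.sinc(0.5 * (w - wa) * tau / np.pi) ** 2
    k_cr = np.sinc(0.5 * (w + wa) * tau / np.pi) ** 2
    Jp = tau**2 * np.sum((nT * k_r + (nT + 1) * k_cr) * G0) * dw
    Jm = tau**2 * np.sum(((nT + 1) * k_r + nT * k_cr) * G0) * dw
    return Jp, Jm

def measured_population(rho0, tau, n_steps):
    # Iterate the adiabatic gain/loss update between successive QND measurements
    Jp, Jm = J_pm(tau)
    rho = rho0
    for _ in range(n_steps):
        rho = rho + (1.0 - rho) * Jp - rho * Jm
    return rho, Jp / (Jp + Jm)     # final population and quasisteady value
```

Each iteration multiplies the distance to the fixed point by $1-J(\tau)$, so $\rho^M_{ee}(n\tau)$ decays as $[1-J(\tau)]^n\approx e^{-J(\tau)t/\tau}$, reproducing Eq.~(\ref{rhoM}).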
\begin{figure}[htbp]\centering
\includegraphics[width=0.5\textwidth]{rL.eps}
\caption{(Color online) Main panel: excitation dynamics of the TLS in a finite-temperature bath. The initial population is $\rho_{ee}(0)=0.15$. The orange dot-dashed line represents the free evolution of the excitation population based on Eq.~(\ref{re}). The red short-dashed line represents the free dynamics in the Markovian limit based on Eq.~(\ref{rMar}). The black solid line represents the excitation population over time under periodic measurements with time interval $\om_a\tau=2.5$; the measurement moments are marked by the black circles. The green dashed line represents the exponential approximation of the black solid line based on Eq.~(\ref{rhoM}). The black short-dotted and the red dotted horizontal lines depict the final excitation population with and without measurements, respectively. Inset: the normalized free dynamics of the excitation population under different initial conditions. The red dashed line and the black solid line are the evolutions with and without the adiabatic approximation~(\ref{ree}) for initial population $\rho_{ee}(0)=0.12$, for which the effective temperature of the system is initially equal to that of the bath. The blue short-dashed line and the green dot-dashed line are the evolutions with and without the adiabatic approximation for initial population $\rho_{ee}(0)=0.15$, respectively. Here the environmental spectrum is chosen to be the Lorentzian type of Eq.~(\ref{Lorentz}), with $\al=0.01$, $\Lam=0.25\om_a$, and $\om_0=1.5\om_a$. The inverse temperature of the bath is set as $\be\om_a=2$.}
\label{rL}
\end{figure}
The excitation dynamics of the qubit in a finite-temperature bath, with and without measurements, is portrayed in the main panel of Fig.~\ref{rL}. Note that in this figure the environmental temperature is fixed by $\be\om_a=2$, so that $\rho_{ee}(0)=0.12$ would set the system to the same effective temperature. The red short-dashed line and the orange dot-dashed line show the free normalized excitation-population dynamics with and without the Markovian approximation, respectively. Both asymptotically decrease to the same steady-state value determined by the environmental temperature, although the system starts from an effective temperature higher than that of the environment, since $\rho_{ee}(0)=0.15$. The effect of the periodic measurements on the excitation population is shown by the numerical evaluation based on the time-convolutionless master equation~(\ref{re}) and the measurement projection~(\ref{measurement}) (the black solid line), and it is approximately captured by the analytical result~(\ref{rhoM}). In either case the decay rate is larger than that of the free evolution, and the population approaches a new steady value lower than the free-evolution one. To demonstrate that these results are insensitive to the initial condition and the analytical technique, in the inset of Fig.~\ref{rL} we plot the free evolution of the excitation population for two initial conditions: one for which the separated system and environment initially have the same temperature, and one identical to that in the main panel. They follow quite similar dynamics: the population first rises slightly on a very short time scale and then rapidly declines to a value below its initial one, so that one can always find a proper time spacing $\tau$ for the nonselective measurements to reduce the excitation population, i.e., to cool down the system.
It is known that the effective decay rate is a crucial quantity for identifying the quantum Zeno effect (QZE) and the quantum anti-Zeno effect (QAZE) in open-quantum-system dynamics~(\ref{rMar}). In particular, the QZE occurs if the effective decay rate is smaller than the free decay rate, i.e., $J(\tau)/\tau<\Gamma_0$, and the QAZE occurs if $J(\tau)/\tau>\Gamma_0$. In this work, the effective decay rate $J(\tau)/\tau$ in Eq.~(\ref{rhoM}) extends the results of Refs.~\cite{Kofman1996,Kofman2000,Zhang2018} by including the contributions from the counter-rotating terms and replacing the SDF $G_0(\om)$ at zero temperature by $[2n_T(\om)+1]G_0(\om)$ at finite temperature. Accordingly, the sign of the second derivative of the SDF $[2n_T(\om)+1]G_0(\om)$ at $\om_a$ can be regarded as a criterion~\cite{Zhang2018} to distinguish the QZE from the QAZE.
The measurements also lead to a quasisteady state shared by the TLS and the environment. The effective temperature of the TLS could be different from that of the bath. When $T_S(\infty)$ at thermal equilibrium is higher than $T$, i.e., $J_+(\tau)/J(\tau)>(e^{\be\om_a}+1)^{-1}$, the TLS is heated up; otherwise, when $T_S(\infty)<T$, i.e., $J_+(\tau)/J(\tau)<(e^{\be\om_a}+1)^{-1}$, the TLS is cooled down.
A controversial problem emerging in previous works is whether or not the cooling of the system is equivalent to the QAZE. Based on the above analysis, the QAZE requires
\begin{equation}\label{QAZEc}
J(\tau)>\Ga_0\tau,
\end{equation}
whereas the cooling condition reads
\begin{equation}\label{Coolc}
J(\tau)>\Ga_0\frac{\int_0^\tau dt'\Ga_+(t')}{\Ga_+(\infty)},
\end{equation}
which can be obtained from Eqs.~(\ref{J}) and (\ref{rhoM}) with the relation $J_+(\infty)/J(\infty)=\Ga_+(\infty)/\Ga(\infty)=(e^{\be\om_a}+1)^{-1}$. The right-hand sides of Eqs.~(\ref{QAZEc}) and (\ref{Coolc}) are equivalent in the Markovian limit $\Ga_+(t')=\Ga_+(\infty)$, or on a long time scale $\tau\to\infty$. Physically, the QAZE is determined by the decay rate, whereas cooling relies on the thermal equilibrium state. Thus cooling is usually related to the QAZE, but the two should not be regarded as the same thing. In fact, an SDF with a large high-frequency profile has been found to be beneficial to the QAZE~\cite{Zhang2018} but not to cooling, as for the super-Ohmic model analyzed in the next section.
\section{Cooling conditions}\label{cool}
\subsection{General theory}\label{Gt}
Under periodical measurements, the TLS approaches a new quasisteady state given by
\begin{equation}
\rho_{ee}^M(\infty)=\rho_{ee}^B\left[1+M(\tau)\right].
\end{equation}
Here $M(\tau)$ is a dimensionless measurement-modified factor defined as
\begin{equation}\label{M}
M(\tau)=\frac{e^{\be\om_a}J_+(\tau)-J_-(\tau)}{J(\tau)}
=\frac{\int_0^\infty d\om F(\be,\be,\tau,\om)G_0(\om)}{J(\tau)},
\end{equation}
which represents the deviation of the new equilibrium established by measurements with time spacing $\tau$ from the old one without measurements. The filter function $F(\be,\be,\tau,\om)$ has been defined in Eqs.~(\ref{dr}) and (\ref{filter}). $M(\tau)<0$ and $M(\tau)>0$ can be regarded as cooling and heating factors, respectively. In the weak-coupling regime, $M(\tau)$ is independent of both the initial condition and the coupling strength (more details can be found in Appendix~\ref{appr}). The denominator of $M(\tau)$,
\begin{equation}
\begin{aligned}
J(\tau)=&\tau^2\int_0^\infty d\om\left(\frac{2}{e^{\be\om}-1}+1\right)\\
&\times\left[{\rm sinc}^2\left(\frac{\om-\om_a}{2}\tau\right)+{\rm sinc}^2\left(\frac{\om+\om_a}{2}\tau\right)\right]G_0(\om)
\end{aligned}
\end{equation}
is always positive. Consequently, whether the system is cooled down or heated up is determined by the sign of the numerator and hence by the sign of the filter function.
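The sign of $M(\tau)$ can be evaluated directly from the first form of Eq.~(\ref{M}), $M=(e^{\be\om_a}J_+-J_-)/J$. A sketch of ours, assuming the Lorentzian SDF and parameters of Fig.~\ref{rL} (the common factor $\tau^2$ cancels in the ratio):

```python
import numpy as np

def M_factor(tau, beta=2.0, wa=1.0, alpha=0.01, Lam=0.25, w0=1.5):
    # M(tau) = (e^{beta*w_a} J_+ - J_-)/J for the Lorentzian SDF of Eq. (Lorentz)
    w = np.linspace(1e-6, 20.0, 40001)
    dw = w[1] - w[0]
    G0 = alpha * w * Lam**2 / (Lam**2 + (w - w0) ** 2)
    nT = 1.0 / np.expm1(beta * w)
    k_r = np.sinc(0.5 * (w - wa) * tau / np.pi) ** 2    # rotating-wave kernel
    k_cr = np.sinc(0.5 * (w + wa) * tau / np.pi) ** 2   # counter-rotating kernel
    Jp = np.sum((nT * k_r + (nT + 1) * k_cr) * G0) * dw
    Jm = np.sum(((nT + 1) * k_r + nT * k_cr) * G0) * dw
    return (np.exp(beta * wa) * Jp - Jm) / (Jp + Jm)
```

Consistent with Fig.~\ref{ML} below, very frequent measurements heat the TLS, while a moderate spacing yields $M(\tau)<0$, i.e., cooling.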
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{w12.eps}
\caption{(Color online) Main panel: the main frequency domain available for cooling (the shaded area), in the plane of the environmental-mode frequency versus the measurement time interval. The parameters are chosen as $\be_S(0)\om_a=\be\om_a=2$. The blue dot-dashed line and the green solid line are the lower and upper bounds of the main effective cooling domain, respectively. The red dashed line is a reference curve given by Eq.~(\ref{omtau}), an analytical estimate of the frequency benchmark around which the filter function satisfies $F^{cr}(\be,\be,\tau,\om)=0$ as a necessary cooling condition. The black dotted horizontal line depicts the measurement-time-spacing-averaged $\om_2$ used to estimate the upper bound of the cooling domain. Inset: an example of the filter function with $\om_a\tau=2$. The shaded area with negative values represents the main frequency-dependent cooling domain.}
\label{w12}
\end{figure}
The inset of Fig.~\ref{w12} plots an example of the filter function $F(\be,\be,\tau,\om)$ in Eq.~(\ref{M}) for a representative environmental temperature and measurement time interval. The frequencies $\om_1$ and $\om_2$ indicate the lower and upper bounds of the negative-value domain of the filter function, i.e., of the main cooling domain. Over the whole frequency range, the environmental oscillators can be approximately divided into three parts: the oscillators with $\om<\om_1$ or $\om>\om_2$ contribute to heating the TLS, while those with $\om_1<\om<\om_2$ contribute to cooling it. In the main panel of Fig.~\ref{w12}, we show the cooling domain bounded by $\omega_1$ and $\omega_2$ as functions of the measurement time spacing $\tau$ (the shaded area). When $\tau$ is comparatively small, both the lower bound $\om_1$ and the upper bound $\om_2$ of the cooling domain decrease remarkably with $\tau$. Following the discussion at the end of Sec.~\ref{thermal}, the contribution from the counter-rotating filter function $F^{cr}$ defined in Eq.~(\ref{filter}) (in charge of heating) dominates that from the rotating-wave filter function $F^r$ (in charge of cooling) when $\tau$ is small and $\om>\om_a$. Thus the cooling domain for short $\tau$ can be estimated by the condition that the counter-rotating filter function $F^{cr}$ vanishes, which analytically gives rise to
\begin{equation}\label{omtau}
\om=2\pi/\tau-\om_a
\end{equation}
as plotted by the red dashed line. As one can check in the main panel of Fig.~\ref{w12}, the cooling domain indeed appears around $2\pi/\tau-\om_a$ when $\omega_a\tau<3$ and $\om/\omega_a>1$. For a large measurement time spacing, the cooling domain lies between $\om_a$ and the upper bound $\om_2$. The long-measurement-time limit of $\om_2$ can be estimated by setting $F(\be,\be,\tau,\om)=0$, which resembles Eq.~(\ref{wup}) and yields
\begin{equation}\label{w2}
\om_2\approx \om_a[1+4\be\om_a\exp(-\be\om_a)],
\end{equation}
as plotted by the black dotted line in the main panel. Accordingly, the upper bound of $\omega_2$ over temperature is about $2.47\omega_a$, attained at $\be\om_a=1$. Therefore, it is proper to use $\om=2\omega_a$ as a rough boundary to differentiate the heating and cooling contributions from the various ranges of the environmental spectrum.
For many systems it is experimentally challenging to squeeze a number of measurements into an extremely short time scale. Physically, it is therefore meaningful to consider the cooling condition for a moderate $\tau$, such that the cooling benchmark curve~(\ref{omtau}) stays in the near-resonant regime with respect to the system frequency $\om_a$. One can then divide the whole frequency domain into low- and high-frequency parts and obtain a more compact expression of the measurement-modified factor $M(\tau)$ for a not-too-small measurement time interval. The necessary details can be found in Appendix~\ref{appr}. In addition to the spectral analysis, we take the low-temperature limit, $e^{\be\om_a}\pm1\sim e^{\be\om_a}$ and $e^{-\be\om_a}\sim0$, since we focus on the cooling property of the bath. Eventually the measurement-modified factor $M(\tau)$ of Eq.~(\ref{M}) can be approximated as
\begin{widetext}
\begin{equation}\label{Mt}
M(\tau)\approx\frac{-\be[2G'_0(\om_a)-\be G_0(\om_a)]\left(\om_a-\frac{\sin\om_a\tau}{\tau}\right)
+\frac{\tau^2}{2}e^{\be\om_a}\int_0^{2\om_a} d\om {\rm sinc}^2\left(\frac{\om+\om_a}{2}\tau\right)G_0(\om) +e^{\be\om_a}\int_{2\om_a}^\infty d\om\frac{G_0(\om)}{\om^2}} {G_0(\om_a)\left(\pi\tau-\frac{2}{\om_a}\right)
+\om_a G''_0(\om_a)+2\int_{2\om_a}^{\infty}d\om \frac{G_0(\om)}{\om^2}}.
\end{equation}
\end{widetext}
There are three terms in the numerator of $M(\tau)$ in Eq.~(\ref{Mt}). The first represents the cooling effect of the oscillators that are nearly resonant with the TLS transition frequency. The second also comes from the near-resonant oscillators but stems from the counter-rotating terms, and it oscillates rapidly compared to the first; it determines the proper time interval $\tau$ for the minimum temperature (cooling bound) achieved by measurements, reached when the integral attains its minimum value. The third term describes the heating effect of the off-resonant oscillators; its contribution becomes considerable when the SDF $G_0(\om)$ grows faster than $\om$ in the high-frequency regime. The denominator of $M(\tau)$ increases with the time interval $\tau$, consistent with $M(\tau)\to 0$ as $\tau\to\infty$, i.e., without measurements.
To further simplify the result in Eq.~(\ref{Mt}), we assume that the center of $G_0(\om)$ is not far off resonance from $\om_a$ and ignore the contribution from the oscillators with $\om>2\om_a$. Then in the limit of large $\tau$, we have
\begin{equation}\label{Mt1}
\begin{aligned}
M(\tau)\approx&-\be\om_a\frac{2G'_0(\om_a)-\be G_0(\om_a)} {G_0(\om_a)\left(\pi\tau-\frac{2}{\om_a}\right)
+\om_a G''_0(\om_a)}\\
&+\frac{e^{\be\om_a}\int_0^{2\om_a} d\om\frac{G_0(\om)}{(\om+\om_a)^2}} {G_0(\om_a)\left(\pi\tau-\frac{2}{\om_a}\right)+\om_a G''_0(\om_a)}.
\end{aligned}
\end{equation}
Equation~(\ref{Mt1}) loosely implies a necessary condition on the SDF $G_0(\om)$ for cooling the system: the logarithmic derivative of $G_0(\om)$ at $\om_a$ should be larger than $\be/2$, i.e., the cooling effect of the near-resonant oscillators should be dominant. We can thus deduce an analytical cooling criterion,
\begin{equation}\label{coolcondition}
\frac{G'_0(\om_a)}{G_0(\om_a)}>\frac{\be}{2}.
\end{equation}
Meanwhile, the third term in the numerator of Eq.~(\ref{Mt}) indicates a second condition for cooling: the heating contribution from the off-resonant oscillators should be as small as possible, which means the SDF $G_0(\om)$ should have a sharp high-frequency cutoff, with the cutoff frequency not far above $\om_a$. In the following, we check the cooling factor $M(\tau)$ as well as these two conditions for two popular spectra: the modified Lorentzian model and the super-Ohmic model. Both are found to be helpful for cooling the system under certain conditions.
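The criterion~(\ref{coolcondition}) is easy to test numerically for the spectra examined in the following subsections. A sketch of ours, in units of $\om_a$, with the parameters of Figs.~\ref{ML} and \ref{MD} assumed; the $1/f$-noise form mentioned in the conclusion is included for contrast:

```python
import numpy as np

beta, wa = 2.0, 1.0

def log_derivative(G, w, h=1e-6):
    # d(ln G)/dw by a central finite difference
    return (np.log(G(w + h)) - np.log(G(w - h))) / (2 * h)

# SDFs with the figure parameters (alpha = 0.01 in all cases)
lorentzian = lambda w: 0.01 * w * 0.25**2 / (0.25**2 + (w - 1.5) ** 2)
debye = lambda w: 0.01 * w**3 / 2.0**2          # s = 3, w_c = 2 (below the cutoff)
one_over_f = lambda w: 0.01 / w                 # 1/f noise, for contrast
```

For the super-Ohmic form the logarithmic derivative at $\om_a$ is simply $s/\om_a$, which reproduces the bound $\be_{\rm max}=2s/\om_a$ discussed below; for $1/f$ noise it is negative, so criterion~(\ref{coolcondition}) can never be met.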
\subsection{Modified Lorentzian model}
An environment often used in the literature is described by the modified Lorentzian model with the SDF~\cite{Erez2008,Gordon2009},
\begin{equation}\label{Lorentz}
G_0(\om)=\alpha \om\frac{\Lam^2}{\Lam^2+(\om-\om_0)^2},
\end{equation}
where $\al$ is a dimensionless coupling strength, $\om_0$ is the Lorentzian peak, and $\Lam$ is the Lorentzian width. This model can describe the environment of a cavity with not-so-high finesse mirrors, i.e., a leaky cavity.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{ML.eps}
\caption{(Color online) Main panel: the measurement-modified factor $M(\tau)$ vs the measurement interval $\om_a\tau$ for the modified Lorentzian model. Here $\al=0.01$, $\Lam=0.25\om_a$, and $\om_0=1.5\om_a$. The inverse temperature of the bath is set as $\be\om_a=2$. The black solid line, the orange dot-dashed line, and the red dashed line represent the exact~(\ref{M}), approximated~(\ref{Mt}) and time-smoothing~(\ref{Mt1}) results, respectively. Inset: the minimum $M_{\rm min}$ vs the inverse temperature $\be\om_a$. The solid and dashed lines represent the exact~(\ref{M}) and approximated~(\ref{Mt}) results, respectively. Other parameters are identical to those in the main panel.}
\label{ML}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Lw0.eps}
\caption{Main panel: the optimal measurement time interval $\tau_{\rm min}$ for cooling vs the spectral peak $\om_0/\om_a$ in the modified Lorentzian model. The circles represent the exact results and the line is the reference line $2\pi\om_a/(\om_0+\om_a)$. Inset: the minimum $M_{\rm min}$ vs the peak $\om_0$. Parameters are the same as in Fig.~\ref{ML}.}
\label{Lw0}
\end{figure}
The numerical results for the measurement-modified factor $M(\tau)$ of this model are shown by the black solid, orange dot-dashed, and red dashed lines in the main panel of Fig.~\ref{ML}, obtained for the same parameters from the exact expression~(\ref{M}), the approximation~(\ref{Mt}) for large $\tau$ and low temperature, and the time-smoothing expression~(\ref{Mt1}), respectively. Note that we only present the cooling results with $M(\tau)\leq0$, which occur for a large time spacing $\tau\gtrsim2/\omega_a$ and fluctuate with $\tau$. Our approximate analytical result (the orange dot-dashed line) captures the main features of $M(\tau)$ and also mimics the subsequent fluctuations, while the time-smoothing result of Eq.~(\ref{Mt1}) (the red dashed line) outlines the main tendency of the exact result in the large-$\tau$ regime.
Both the black solid line and the orange dot-dashed line indicate that an optimized measurement time spacing $\tau$ can be obtained numerically: the minimum value $M_{\rm min}$ corresponds to the maximal cooling efficiency. In the inset of Fig.~\ref{ML}, we depict the minimum measurement-modified factor as a function of the inverse temperature $\be$ of the bath. From Eq.~(\ref{Mt1}), the absolute values of both the cooling effect [the first line of Eq.~(\ref{Mt1})] and the heating effect [the second line of Eq.~(\ref{Mt1})] increase with increasing inverse temperature $\beta$. The competition between them leads to a nonmonotonic behavior of $M_{\rm min}$ with respect to $\be$, so that the cooling efficiency can also be optimized at a moderate environmental temperature. Our approximation from Eq.~(\ref{Mt}), given by the dashed line, fits the exact result (the solid line) well.
The modified Lorentzian model, with a pronounced peak and comparatively low wings, is a good testbed for the cooling conditions proposed by the general theory of Sec.~\ref{Gt}. One can choose a proper measurement time interval that places the spectral peak inside the cooling domain and thereby efficiently cool down the open system. We plot the optimal measurement time interval $\tau_{\rm min}$ for cooling as a function of the peak position $\om_0$ (the circles) in the main panel of Fig.~\ref{Lw0}. According to Eq.~(\ref{omtau}), when the spectral peak $\om_0$ is off resonant with respect to the system frequency, the maximal cooling is attained near $\tau\approx2\pi/(\om_0+\om_a)$, where the heating contribution from the counter-rotating filter function $F^{cr}(\be,\be,\tau,\om)$ vanishes. To achieve the lowest temperature, one thus has to apply more frequent measurements for a larger spectral-peak frequency. As for the minimum measurement-modified factor $M_{\rm min}$ itself, the results in the inset of Fig.~\ref{Lw0} indicate that it is also optimized at a moderate peak position $\om_0$.
\subsection{Super-Ohmic model}
Another environment widely used in thermodynamics and solid-state physics is the super-Ohmic model, whose SDF reads~\cite{Leggett1987}
\begin{equation}
G_0(\om)=\al\om_c^{1-s}\om^s\Theta(1-\om/\om_c).
\end{equation}
Here $\al$ is a dimensionless coupling parameter, $\Theta(1-\om/\om_c)$ is a sharp cutoff function, and $\om_c$ is the cutoff frequency. A particular example of this general model is the Debye model, with the well-known expression $G_0(\om)=\al\om_c^{-2}\om^3\Theta(1-\om/\om_c)$. The cubic frequency dependence of the Debye model arises from the coupling strength $g_k\propto\sqrt{\om_k}$ and the Debye density of states $\sum_k\de(\om-\om_k)\propto \om^2$; the cutoff frequency $\om_c$ is then the Debye frequency.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{MD.eps}
\caption{(Color online) Main panel: the measurement-modified factor $M(\tau)$ vs the time interval $\om_a\tau$ for the Debye model. Parameters are chosen as $\al=0.01$ and $\om_c=2\om_a$. The inverse temperature of the bath is set as $\be\om_a=2$. The black solid line, the orange dot-dashed line and the red dashed line represent the exact~(\ref{M}), approximated~(\ref{Mt}), and time-smoothing~(\ref{Mt1}) results, respectively. Inset: the minimum $M_{\rm min}$ vs the inverse temperature $\be\om_a$ (blue) and the exponent $s$ (green). The solid and dashed lines represent the exact~(\ref{M}) and approximated~(\ref{Mt}) results for different $\be\om_a$, respectively. The dot-dashed and short-dashed lines represent the exact~(\ref{M}) and approximated~(\ref{Mt}) results with different $s$, respectively.}
\label{MD}
\end{figure}
The measurement-modified factor $M(\tau)$ of the Debye model versus the measurement time spacing $\tau$ is shown by the three lines in the main panel of Fig.~\ref{MD}. Again, our analytical approximations in Eqs.~(\ref{Mt}) and (\ref{Mt1}) quantitatively describe the exact result for $M(\tau)$. Roughly, a moderate $\tau$ yields an efficient cooling effect, indicated by a negative $M(\tau)$, and the absolute value of $M(\tau)$ declines with increasing $\tau$. Similar to the Lorentzian model, the minimum measurement-modified factor $M_{\rm min}$ behaves nonmonotonically with respect to the inverse temperature of the bath (see the inset of Fig.~\ref{MD}). Furthermore, we plot $M_{\rm min}$ versus the bath exponent $s$ as the green dot-dashed line in the inset of Fig.~\ref{MD}. We find that $|M_{\rm min}|$ is enhanced with increasing $s$, while cooling is conditioned on $s\geq2$. Our approximation from Eq.~(\ref{Mt}) (the short-dashed line) fits the exact result well. In fact, one can deduce from Eqs.~(\ref{Mt1}) and (\ref{coolcondition}) that the exponent $s$ of the super-Ohmic model dramatically affects the cooling. From the loose cooling condition~(\ref{coolcondition}), the maximum inverse temperature (minimum temperature) is $\be_{\rm max}=2s/\om_a$, so that, as described by the green lines in the inset of Fig.~\ref{MD}, a larger $s$ allows the TLS to reach a lower temperature by measurements.
\section{Conclusion}\label{conc}
In summary, we have explicitly investigated the spectral structure of the bath required to cool down an open quantum system coupled to a finite-temperature environment, in the context of nonselective and nondemolition measurements. The memory effect of the bath, which yields time-dependent damping rates, is found to be the primary prerequisite for cooling: one can exploit the modified dynamics of the open system as well as a time-dependent energy flow from the system to the bath. This is reminiscent of the non-Markovian effect of the environment; a closer examination of the relationship between the present cooling results and non-Markovianity measures~\cite{Hall2014,Li2018} is left for future work. The secondary prerequisite is performing nonselective measurements to periodically decouple the system and the environment, which could also be replaced by alternative measurement or control schemes. To obtain the cooling conditions particularly relevant to the environmental structure, we have derived a compact form of the time-convolutionless master equation for a system in a finite-temperature bath.
Our model is a standard spin-boson model without the rotating-wave approximation. We find that the contribution from the counter-rotating terms cannot be neglected, especially at short time scales and low temperature. Taking both the rotating-wave and counter-rotating terms into account, the environmental bosonic oscillators nearly resonant with the system transition frequency $\om_a$ can be used to cool down the system, while oscillators with lower or higher frequencies heat it up. Thus, to cool the system, one should strengthen the coupling between the system and the near-resonant modes while weakening the influence of the far-off-resonant modes. Based on this idea, we propose two cooling conditions on the bath structure: (i) the logarithmic derivative of the spectrum around $\om_a$ should be as large as possible; (ii) the spectrum should have a sharp high-frequency cutoff. Both conditions must be satisfied simultaneously; together they allow one to estimate roughly whether a given environment permits cooling of the system. For example, the $1/f$-noise model is popular for solid-state environments, appearing in systems ranging from the voltages and currents in vacuum tubes to diodes and transistors~\cite{Niemann2013}. Although it satisfies condition (ii), the negative logarithmic derivative of its SDF throughout the whole frequency domain makes it impossible to cool down the quantum system. We have checked our conditions in the Lorentzian and Debye models and thereby identified proper parameters for cooling. We find that an environment with an SDF of Lorentzian or super-Ohmic form can be used to cool the system efficiently by discrete measurements.
\section*{Acknowledgments}
We acknowledge grant support from the National Science Foundation of China (Grants No. 11575071 and No. U1801661), Zhejiang Provincial Natural Science Foundation of China under Grant No. LD18A040001, and the Fundamental Research Funds for the Central Universities.
quant-ph/0505213
\section{Introduction}
\label{S:intro}
Fully controllable and scalable quantum computers are likely many years
from realization. This motivates study and development of somewhat
less ambitious quantum information processors, defined as devices that
fail to satisfy one or more of DiVincenzo's five criteria for a quantum
computer~\cite{DiVincenzo00}. An example of such a quantum
information processor is a mixed-state quantum system, which fails to
pass DiVincenzo's second requirement, that the system be prepared in a
simple initial state.
The prime example of a mixed-state quantum information processor is
provided by liquid-state NMR experiments in quantum information
processing~\cite{Jones01}. Current NMR experiments, which operate with
initial states that are highly mixed thermal states, use a technique
called pseudo-pure-state synthesis to process the initial thermal state
and thereby to simulate pure-state quantum information processing. This
technique suffers from an exponential loss of signal strength as the
number of qubits per molecule increases and thus is not scalable. There
is a different technique for processing the initial thermal state,
called algorithmic cooling~\cite{Schulman99}, which pumps entropy from
a subset of qubits into the remaining qubits, leaving the special
subset in a pure state and the remaining qubits maximally mixed.
Algorithmic cooling provides an in-principle method for making
liquid-state NMR---or any qubit system that begins in a thermal
state---scalable, in essence by providing an efficient algorithmic
method for cooling a subset of the initially thermal qubits to a pure
state, thereby satisfying DiVincenzo's second criterion.
Knill and Laflamme~\cite{kl98} proposed a related mixed-state
computational model, which they called DQC1, in which there is just one
initial pure qubit, along with $n$ qubits in the maximally mixed state.
Although provably less powerful than a pure-state quantum
computer~\cite{Ambainis00}, DQC1 can efficiently perform some
computational tasks for which no polynomial-time classical algorithms
are known. In particular, a DQC1 quantum circuit can be used to
evaluate, with fixed accuracy independent of $n$, the normalized trace,
${\rm{tr}}(U_n)/2^n$, of any $n$-qubit unitary operator $U_n$ that can be
implemented efficiently in terms of quantum
gates~\cite{kl98,Laflamme02}. In Sec.~\ref{S:classical} we consider
briefly whether there might be efficient classical algorithms for
estimating the normalized trace, and we conclude that this is unlikely.
The efficient quantum algorithm for estimating the normalized trace
provides an exponential speedup over the best known classical algorithm
for simulations of some quantum processes~\cite{pklo04,elpc04}. Knill
and Laflamme referred to the power of this mixed-state computational
model as the ``power of one qubit.''
Study of the power of one qubit is motivated partly by NMR experiments,
but our primary motivation in this paper is to investigate the role of
entanglement in quantum computation, using DQC1 as a theoretical test
bed for the investigation. For pure-state quantum computers, Jozsa and
Linden \cite{Josza99} have shown that exponential speedup over a
classical computer requires that entanglement not be restricted to
blocks of qubits of fixed size as problem size increases. Entanglement
that increases with problem size is thus a necessary prerequisite for
the exponential speedup achieved by a pure-state quantum computer. On
the other hand, the Gottesman-Knill theorem \cite{N&C} demonstrates
that global entanglement is far from sufficient for exponential
speedup. While this means that the role of entanglement is not
entirely understood for pure-state quantum computers, far less is known
about the role of entanglement in mixed-state quantum computers. When
applied to mixed-state computation, the Jozsa-Linden proof does not
show that entanglement is a requirement for exponential speedup.
Indeed, it has not previously been shown that there is any entanglement
in the DQC1 circuits that provide an exponential speedup over classical
algorithms.
The purpose of this paper is to investigate the existence of and amount
of entanglement in the DQC1 circuit that is used to estimate the
normalized trace. The DQC1 model consists of a \emph{special qubit\/}
(qubit~0) in the initial state $|0\rangle\langle0|={1\over2}(I_1+Z)$,
where $Z$ is a Pauli operator, along with $n$ other qubits in the
completely mixed state, $I_n/2^n$, which we call the \emph{unpolarized
qubits}. The circuit consists of a Hadamard gate on the special qubit
followed by a controlled unitary on the remaining
qubits~\cite{Laflamme02}: \vspace{-1em}
\begin{equation}
\label{E:circuit}
\Qcircuit @C=.5em @R=-.5em {
& \lstick{\ket{0}\!\bra{0}} & \gate{H} & \ctrl{1} & \meter & \push{\rule{0em}{4em}} \\
& & \qw & \multigate{4}{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \\
\lstick{\mbox{$I_n/2^n$}} & & \qw & \ghost{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \gategroup{2}{2}{6}{2}{.6em}{\{}
}
\end{equation}
After these operations, the state of the $n+1$ qubits becomes
\begin{equation}
\rho_{n+1}=
{1\over2N}
\Bigl(|0\rangle\langle0|\otimes I_n+|1\rangle\langle1|\otimes I_n
+|0\rangle\langle1|\otimes U_n^\dagger
+|1\rangle\langle0|\otimes U_n\Bigr)
= \frac{1}{2N}
\begin{pmatrix}
I_n & \, U_n^\dag \\
\, U_n & I_n
\end{pmatrix} \;,
\label{E:rhoout}
\end{equation}
where $N=2^n$. The information about the normalized trace of $U_n$ is
encoded in the expectation values of the Pauli operators $X$ and $Y$ of
the special qubit, i.e., $\langle X\rangle={\rm Re}[{\rm{tr}}(U_n)]/2^n$ and
$\langle Y\rangle={\rm Im}[{\rm{tr}}(U_n)]/2^n$.
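As a numerical sanity check (a sketch using NumPy; the qubit count and the random unitary are arbitrary choices, not taken from the paper), one can build the state of Eq.~(\ref{E:rhoout}) directly and confirm that $\langle X\rangle$ encodes the real part of the normalized trace:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
N = 2**n

# Random n-qubit unitary via QR decomposition of a complex Gaussian matrix.
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0]

# Output state of the DQC1 circuit, Eq. (E:rhoout):
# rho = (1/2N) [[I_n, U^dag], [U, I_n]] in the basis of the special qubit.
rho = np.block([[np.eye(N), U.conj().T],
                [U, np.eye(N)]]) / (2 * N)

# Pauli X on the special qubit, identity on the n unpolarized qubits.
X = np.kron(np.array([[0, 1], [1, 0]]), np.eye(N))

expX = np.trace(rho @ X).real
print(np.isclose(expX, np.trace(U).real / N))  # True: <X> = Re[tr(U)]/2^n
```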
To read out the desired information, say, about the real part of the
normalized trace, one runs the circuit repeatedly, each time measuring
$X$ on the special qubit at the output. The measurement results are
drawn from a distribution whose mean is the real part of the normalized
trace and whose variance is bounded above by 1. After $L$ runs, one
can estimate the real part of the normalized trace with an accuracy
$\epsilon\sim1/\sqrt L$. Thus, to achieve accuracy $\epsilon$ requires
that the circuit be run $L\sim1/\epsilon^2$ times. More precisely, what
we mean by estimating with fixed accuracy is the following: let $P_e$
be the probability that the estimate is farther from the true value
than $\epsilon\,$; then the required number of runs is
$L\sim\ln(1/P_e)/\epsilon^2$. That the number of runs required to
achieve a fixed accuracy does not scale with number of qubits and
scales logarithmically with the error probability is what is meant by
saying that the DQC1 circuit provides an efficient method for
estimating the normalized trace.
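The run-count argument can be illustrated with a simple simulation (a sketch; the unitary, seed, and accuracy target are arbitrary): sample $\pm1$ outcomes of $X$ with the probabilities implied by $\langle X\rangle$ and average over $L$ runs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
N = 2**n
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0]

target = np.trace(U).real / N        # <X> on the special qubit
p_plus = (1 + target) / 2            # probability of the +1 outcome of X

eps = 0.05
L = int(50 / eps**2)                 # L ~ ln(1/P_e)/eps^2 runs, generous margin
outcomes = np.where(rng.random(L) < p_plus, 1, -1)
estimate = outcomes.mean()

print(abs(estimate - target) < eps)  # True for this seed
```

Note that $L$ depends only on the accuracy target, not on $n$, which is the sense in which the estimation is efficient.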
Throughout much of our analysis, we use a generalization of the DQC1
circuit, in which the initial pure state of the special qubit is
replaced by the mixed state ${1\over2}(I_1+\alpha Z)$,
which has polarization $\alpha$,
\begin{equation}
\label{E:circuitalpha}
\Qcircuit @C=.5em @R=-.5em {
& \lstick{{1\over2}(I_1+\alpha Z)}
& \gate{H} & \ctrl{1} & \meter & \push{\rule{0em}{4em}} \\
& & \qw & \multigate{4}{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \\
\lstick{\mbox{$I_n/2^n$}} & & \qw & \ghost{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \\
& & \qw & \ghost{U_n} & \qw & \qw \gategroup{2}{2}{6}{2}{.6em}{\{}
}
\end{equation}
giving an overall initial state
\begin{equation}
\rho_i={1\over2N}(I_1+\alpha Z)\otimes I_n
={1\over2N}\!\left[I_{n+1}+
\alpha
\begin{pmatrix}
I_n&0\\
0&-I_n
\end{pmatrix}
\right]\;.
\end{equation}
We generally assume that $\alpha\ge0$, except where we explicitly note
otherwise. After the circuit is run, the system state becomes
\begin{equation}
\rho_{n+1}(\alpha)=
{1\over2N}\!\left[I_{n+1}+
\alpha
\begin{pmatrix}
0&U_n^\dag\\
U_n&0
\end{pmatrix}
\right]=
\frac{1}{2N}
\begin{pmatrix}
I_n & \alpha U_n^\dag \\
\alpha U_n & I_n
\end{pmatrix} \;.
\label{E:rhooutalpha}
\end{equation}
The effect of subunity polarization is to reduce the expectation
values of $\langle X\rangle$ and $\langle Y\rangle$ by a factor of
$\alpha$, thereby making it more difficult to estimate the normalized
trace. Specifically, the number of runs required to estimate the
normalized trace becomes $L\sim\ln(1/P_e)/\alpha^2\epsilon^2$. Reduced
polarization introduces an additional overhead, but as long as the
special qubit has nonzero polarization, the model still provides an
efficient estimation of the normalized trace. What we are dealing with
is really the ``power of even the tiniest fraction of a qubit.''
For $n+1$ qubits, \emph{all\/} states contained in a ball of radius
$r_{n+1}$ centered at the completely mixed state are
separable~\cite{b99,gb03} (distance is measured by the Hilbert-Schmidt
norm). Unitary evolution leaves the distance from the completely mixed
state fixed, so at all times during the circuit~(\ref{E:circuitalpha}),
the system state is a fixed distance
$\sqrt{{\rm{tr}}(\rho_i-I_{n+1}/2N)^2}=\alpha 2^{-(n+1)/2}$ from the
completely mixed state. This suggests that with $\alpha$ small enough,
there might be an exponential speedup with demonstrably separable
states. This suggestion doesn't pan out, however, because the radius
of the separable ball decreases exponentially faster than
$2^{-(n+1)/2}$. The best known lower bound on $r_n$ is $2\times
6^{-n/2}$~\cite{gb04}; for the system state to be contained in a ball
given by this lower bound, we need $\alpha\le 2\times 3^{-(n+1)/2}$.
The exponential decrease of $\alpha$ means that an exponentially
increasing number of runs is required to estimate the normalized trace
with fixed accuracy. More to the point, the possibility that the
actual radius of the separable ball might decrease slowly enough to
avoid an exponential number of runs is ruled out by the existence of a
family of $n$-qubit entangled states found by D\"ur~\emph{et
al.}~\cite{dur99}, which establishes an upper bound on $r_n$ that goes
as $2\times 2^{-n}$ for large $n$, implying that $\alpha\le 2\times
2^{-(n+1)/2}$ if the system state is to be in the ball given by the
upper bound. These considerations do not demonstrate the impossibility
of an exponential speedup using separable states, but they do rule out
the possibility of finding such a speedup within the maximal separable
ball about the completely mixed state.
We are thus motivated to look for entanglement in states of the
form~(\ref{E:rhooutalpha}), for at least some unitary operators $U_n$.
Initial efforts in this direction are not encouraging. It is clear
from the start that the marginal state of the $n$ unpolarized qubits
remains completely mixed, so these qubits are not entangled among
themselves. Moreover, in the state~(\ref{E:rhooutalpha}), as was shown
in Ref.~\cite{pklo04}, the special qubit is unentangled with the $n$
unpolarized qubits, no matter what $U_n$ is used. To see this, one
plugs the eigendecomposition of the unitary, $U_n=\sum_j
e^{i\phi_j}|e_j\rangle\langle e_j|$, into the expression for
$\rho_{n+1}(\alpha)$. This gives a separable decomposition
\begin{equation}
\rho_{n+1}(\alpha)=
{1\over2N}\sum_j
(|a_j\rangle\langle a_j|+|b_j\rangle\langle b_j|)
\otimes|e_j\rangle\langle e_j|\;,
\end{equation}
where $|a_j\rangle=\cos\theta|0\rangle+e^{i\phi_j}\sin\theta|1\rangle$
and $|b_j\rangle=\sin\theta|0\rangle+e^{i\phi_j}\cos\theta|1\rangle$,
with $\sin2\theta=\alpha$.
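This separable decomposition is easy to verify numerically (a sketch; the eigenbasis and phases are generated at random so that the eigendecomposition of $U_n$ is exact by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 2, 0.7
N = 2**n

# Build U_n = sum_j e^{i phi_j} |e_j><e_j| from a random orthonormal basis.
Q = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0]
phi = rng.uniform(0, 2 * np.pi, N)
U = Q @ np.diag(np.exp(1j * phi)) @ Q.conj().T

# Target state, Eq. (E:rhooutalpha).
rho = np.block([[np.eye(N), alpha * U.conj().T],
                [alpha * U, np.eye(N)]]) / (2 * N)

# Separable decomposition with sin(2 theta) = alpha.
theta = 0.5 * np.arcsin(alpha)
sep = np.zeros((2 * N, 2 * N), dtype=complex)
for j in range(N):
    ph, e = np.exp(1j * phi[j]), Q[:, j]
    a = np.array([np.cos(theta), ph * np.sin(theta)])  # |a_j>
    b = np.array([np.sin(theta), ph * np.cos(theta)])  # |b_j>
    qubit = np.outer(a, a.conj()) + np.outer(b, b.conj())
    sep += np.kron(qubit, np.outer(e, e.conj())) / (2 * N)

print(np.allclose(sep, rho))  # True
```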
No entanglement of the special qubit with the rest and no entanglement
among the rest---where then are we to find any entanglement? We look
for entanglement relative to other divisions of the qubits into two
parts. In such bipartite divisions the special qubit is grouped with a
subset of the unpolarized qubits. To detect entanglement between the
two parts, we use the Peres-Horodecki partial transpose
criterion~\cite{p96,hhh96}, and we quantify whatever entanglement we
find using a closely related entanglement monotone which we call the
\emph{multiplicative negativity\/}~\cite{Vidal02}. The Peres-Horodecki
criterion and the multiplicative negativity do not reveal all
entanglement---they can miss what is called bound entanglement---but we
are nonetheless able to demonstrate the existence of entanglement in
states of the form~(\ref{E:rhoout}) and~(\ref{E:rhooutalpha}). For
convenience, we generally refer to the multiplicative negativity simply
as the negativity. The reader should note, as we discuss in
Sec.~\ref{S:negativity}, that the term ``negativity'' was originally
applied to an entanglement measure that is closely related to, but
different from the multiplicative negativity.
The amount of entanglement depends, of course, on the unitary
operator~$U_n$ and on the bipartite division. We present three results
in this regard. First, in Sec.~\ref{S:examples}, we construct a family
of unitaries $U_n$ such that for $\alpha>1/2$, $\rho_{n+1}(\alpha)$ is
entangled for all bipartite divisions that put the first and last
unpolarized qubits in different parts, and we show that for all such
divisions, the negativity is $(2\alpha+3)/4$ for $\alpha\ge1/2$ ($5/4$
for $\alpha=1$), independent of~$n$. Second, in Sec.~\ref{S:random},
we present numerical evidence that the state $\rho_{n+1}$ of
Eq.~(\ref{E:rhoout}) is entangled for typical unitaries, i.e., those
created by random quantum circuits. For $n+1=5,\ldots,10$, we find
average negativities between 1.155 and just above 1.16 for the
splitting that puts $\left\lfloor n/2\right\rfloor$ of the unpolarized
qubits with qubit~0. Third, in Sec.~\ref{S:bounds}, we show that for
all unitaries and all bipartite divisions of the $n+1$ qubits, the
negativity of $\rho_{n+1}(\alpha)$ is bounded above by the constant
$\sqrt{1+\alpha^2}$ ($\sqrt2\simeq1.414$ for $\alpha=1$), independent
of $n$. Thus, when $n$ is large, the negativity achievable by the DQC1
circuit~(\ref{E:circuit}) becomes a vanishingly small fraction of the
maximum negativity, $\sim2^{n/2}$, for roughly equal bipartite
divisions.
The layout of the paper is as follows. In Sec.~\ref{S:classical} we
examine the classical problem of estimating the normalized trace of a
unitary. In Sec.~\ref{S:negativity} we review pertinent properties of
the negativity before applying it to obtain our three key results in
Secs.~\ref{S:examples}-\ref{S:bounds}. We conclude in
Sec.~\ref{S:conclusion} and prove a brief Lemma in an Appendix.
Throughout we use $\breve A$ to stand for the partial transpose of an
operator $A$ relative to a particular bipartite tensor-product
structure, and we rely on context to make clear which bipartite
division we are using at any particular point in the paper.
\section{Classical Evaluation of the Trace}\label{S:classical}
In this section we outline briefly a classical method for evaluating the
trace of a unitary operator that can be implemented efficiently in terms
of quantum gates, and we indicate why this appears to be a problem that
is exponentially hard in the number of qubits.
The trace of a unitary matrix $U_n\equiv U$ is the sum over the
diagonal matrix elements of~$U$:
\begin{equation}
\label{E:sum1}
{\rm{tr}}(U)=\sum_{\mathbf{a}} \bra{\mathbf{a}}U\ket{\mathbf{a}}\;.
\end{equation}
Here $\mathbf{a}$ is a bit string that specifies a computational-basis state
of the $n$ qubits. By factoring $U$ into a product of elementary gates
from a universal set and inserting a resolution of the identity between
all the gates, we can write ${\rm{tr}}(U)$ as a sum over the amplitudes of
Feynman paths. A difficulty with this approach is that the sum must be
restricted to paths that begin and end in the same state. We can
circumvent this difficulty by preceding and succeeding $U$ with a
Hadamard gate on all the qubits. This does not change the trace, but
does allow us to write it as
\begin{equation}
\label{E:sum2}
{\rm{tr}}(U)=\sum_{\mathbf{a},\mathbf{b},\mathbf{c}}
\bra{\mathbf{a}}H^{\otimes n}\ket{\mathbf{b}}\bra{\mathbf{b}}U\ket{\mathbf{c}}\bra{\mathbf{c}}H^{\otimes n}\ket{\mathbf{a}}
={1\over2^n}\sum_{\mathbf{a},\mathbf{b},\mathbf{c}}
(-1)^{\mathbf{a}\cdot(\mathbf{b}+\mathbf{c})}\bra{\mathbf{b}}U\ket{\mathbf{c}}\;.
\end{equation}
Now if we insert a resolution of the identity between the elementary
gates, we get ${\rm{tr}}(U)$ written as an unrestricted sum over Feynman-path
amplitudes, with an extra phase that depends on the initial and final
states.
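For small $n$, Eq.~(\ref{E:sum2}) can be checked by brute force (a sketch; the mod-2 inner product $\mathbf{a}\cdot(\mathbf{b}+\mathbf{c})$ is evaluated with bitwise operations on the integer labels of the bit strings):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 2
N = 2**n
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0]

# tr(U) = (1/2^n) sum_{a,b,c} (-1)^{a.(b+c)} <b|U|c>, Eq. (E:sum2).
total = 0j
for a, b, c in product(range(N), repeat=3):
    parity = bin(a & (b ^ c)).count("1") % 2   # a.(b+c) mod 2
    total += (-1)**parity * U[b, c]
total /= N

print(np.isclose(total, np.trace(U)))  # True
```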
Following Dawson~\emph{et al.}~\cite{dhhmno05}, we consider two
universal gate sets: (i)~the Hadamard gate $H$, the $\pi/4$ gate $T$,
and the controlled-NOT gate and (ii)~$H$ and the Toffoli gate. With
either of these gate sets, most of the Feynman paths have zero
amplitude. Dawson~\emph{et al.}~\cite{dhhmno05} introduced a
convenient method, which we describe briefly now, for including only
those paths with nonzero amplitude. One associates with each wire in
the quantum circuit a classical bit value corresponding to a
computational basis state. The effect of an elementary gate is to
change, deterministically or stochastically, the bit values at its
input and to introduce a multiplicative amplitude. The two-qubit
controlled-NOT gate changes the input control bit $x$ and target bit
$y$ deterministically to output values $x$ and $y\oplus x$, while
introducing only unit amplitudes. Similarly, the three-qubit Toffoli
gate changes the input control bits $x$ and $y$ and target bit $z$
deterministically to $x$, $y$, and $z\oplus xy$, while introducing only
unit amplitudes. The $T$ gate leaves the input bit value $x$ unchanged
and introduces a phase $e^{ix\pi/4}$. The Hadamard gate changes the
input bit value $x$ stochastically to an output value $y$ and
introduces an amplitude $(-1)^{xy}/\sqrt2$.
The classical bit values trace out the allowed Feynman paths, and the
product of the amplitudes introduced at the gates gives the overall
amplitude of the path. In our application of evaluating the
trace~(\ref{E:sum2}), a path is specified by $n$ input bit values
(which are identical to the output bit values), $n$ random bit values
introduced by the initial Hadamard gates, and $h$ random bit values
introduced at the $h$ Hadamard gates required for the implementation of
$U$. This gives a total of $2n+h$ bits to specify a path and thus
$2^{2n+h}$ allowed paths. We let $\mathbf{x}$ denote collectively the $2n+h$
path bits.
If we apply the gate rules to a Hadamard-Toffoli circuit, the only gate
amplitudes we have to worry about are the $\pm1/\sqrt2$ amplitudes
introduced at the Hadamard gates. There being no complex amplitudes,
the trace cannot be complex. Indeed, for this reason, achieving
universality with the $H$-Toffoli gate set requires the use of a simple
encoding, and we assume for the purposes of our discussion that this
encoding has already been taken into account. With all this in mind,
we can write the trace~(\ref{E:sum2}) as a sum over the allowed paths,
\begin{equation}
{\rm{tr}}(U)=\frac{1}{2^{n+h/2}}\sum_{\mathbf{x}}(-1)^{\psi(\mathbf{x})}\;.
\end{equation}
Here $\psi(\mathbf{x})$ is a polynomial over $\mathbb{Z}_2$, specifically, the mod-2
sum of the products of input and output bit values at each of the
Hadamard gates. The downside is that a string of Toffoli gates
followed by a Hadamard can lead to a polynomial that is high order in
the bit values. As pointed out by Dawson~\emph{et
al.}~\cite{dhhmno05}, we can deal with this problem partially by
putting a pair of Hadamards on the target qubit after each Toffoli
gate, thus replacing the quadratic term in the output target bit with
two new random variables and preventing the quadratic term from
iterating to higher order terms in subsequent Toffoli gates. In doing
so, we are left with a cubic term in $\psi(\mathbf{x})$ from the amplitude of
the first Hadamard. The upshot is that we can always make $\psi(\mathbf{x})$
a cubic polynomial.
Notice now that we can rewrite the trace as
\begin{equation}
{\rm{tr}}(U)
=\frac{1}{2^{n+h/2}}\left[
\begin{pmatrix}
\mbox{number of $\mathbf{x}$ such}\\ \mbox{that $\psi(\mathbf{x})=0$}
\end{pmatrix}-
\begin{pmatrix}
\mbox{number of $\mathbf{x}$ such}\\ \mbox{that $\psi(\mathbf{x})=1$}
\end{pmatrix}\right]\;,
\end{equation}
thus reducing the problem of evaluating the trace exactly to counting
the number of zeroes of the cubic polynomial $\psi(\mathbf{x})$. This is a
standard problem from computational algebraic geometry, and it is known
that counting the number of zeroes of a general cubic polynomial over
any finite field is \#{\bf P} complete \cite{ek90}. It is possible
that the polynomials that arise from quantum circuits have some special
structure that can be exploited to give an efficient algorithm for
counting the number of zeroes, but in the absence of such structure,
there is no efficient classical algorithm for computing the trace
exactly unless the classical complexity hierarchy collapses and
\emph{all\/} problems in \#{\bf P} are efficiently solvable on a
classical computer.
Of course, it is not our goal to compute the trace exactly, since the
quantum circuit only provides an efficient method for estimating the
normalized trace to fixed accuracy. This suggests that we should
estimate the normalized trace by sampling the amplitudes of the allowed
Feynman paths. The normalized trace,
\begin{equation}
{{\rm{tr}}(U)\over2^n}=\frac{1}{2^{2n+h}}\sum_{\mathbf{x}}2^{h/2}(-1)^{\psi(\mathbf{x})}\;,
\end{equation}
which lies between $-1$ and $+1$, can be regarded as the average of
$2^{2n+h}$ quantities whose magnitude, $2^{h/2}$, is exponentially
large in the number of Hadamard gates. To estimate the average with
fixed accuracy requires a number of samples that goes as $2^h$,
implying that this is not an efficient method for estimating the
normalized trace. The reason the method is not efficient is pure
quantum mechanics, i.e., that the trace is a sum of amplitudes, not
probabilities.
If we apply the gate rules to a Hadamard-$T$-controlled-NOT circuit,
the bit value on each wire in the circuit is a mod-2 sum of appropriate
bit values in $\mathbf{x}$, but now we have to worry about the amplitudes
introduced by the Hadamard and $T$ gates. The trace~(\ref{E:sum2})
can be written as
\begin{equation}
{\rm{tr}}(U)={1\over2^{n+h/2}}
\sum_{\mathbf{x}}e^{i(\pi/4)\chi(\mathbf{x})}(-1)^{\phi(\mathbf{x})}\;.
\label{E:sum3}
\end{equation}
Here $\phi(\mathbf{x})$ is a polynomial over $\mathbb{Z}_2$, obtained as the mod-2
sum of the products of input and output bit values at each of the
Hadamard gates. Since the output value is a fresh binary variable and
the input value is a mod-2 sum of bit values in $\mathbf{x}$, $\phi(\mathbf{x})$ is a
purely quadratic polynomial over $\mathbb{Z}_2$. The function $\chi(\mathbf{x})$ is
a mod-8 sum of the input bit values to all of the $T$ gates. Since
these input bit values are mod-2 sums of bit values in $\mathbf{x}$,
$\chi(\mathbf{x})$ is linear in bit values, but with an unfortunate mixture of
mod-2 and mod-8 addition. We can get rid of this mixture by preceding
each $T$ gate with a pair of Hadamards, thus making the input to
every $T$ gate a fresh binary variable. With this choice, $\chi(\mathbf{x})$
becomes a mod-8 sum of appropriate bit values from $\mathbf{x}$.
We can rewrite the sum~(\ref{E:sum3}) in the following way:
\begin{equation}
{\rm{tr}}(U)={1\over2^{n+h/2}}
\sum_{j=0}^7 e^{i(\pi/4)j} \left[
\begin{pmatrix}
\mbox{number of $\mathbf{x}$ such that}\\
\mbox{$\chi(\mathbf{x})=j$ and $\phi(\mathbf{x})=0$}
\end{pmatrix}
-
\begin{pmatrix}
\mbox{number of $\mathbf{x}$ such that}\\
\mbox{$\chi(\mathbf{x})=j$ and $\phi(\mathbf{x})=1$}
\end{pmatrix}
\right].
\end{equation}
Thus the problem now reduces to finding simultaneous (binary) solutions
to the purely quadratic $\mathbb{Z}_2$ polynomial $\phi(\mathbf{x})$ and the purely
linear $\mathbb{Z}_8$ polynomial $\chi(\mathbf{x})$. One has to be careful here to
note that we are only interested in binary solutions, so we are not
solving $\chi(\mathbf{x})=j$ over all values in $\mathbb{Z}_8$. The number of
solutions of a purely quadratic polynomial over $\mathbb{Z}_2$ can be obtained
trivially~\cite{ek90}, but the constraint over $\mathbb{Z}_8$ means that one
must count the number of solutions over a mixture of a field and a
ring. The complexity class for this problem is not known, but given
the equivalence to counting the number of solutions of a cubic
polynomial over $\mathbb{Z}_2$, it seems unlikely that there is an efficient
classical algorithm. Moreover, an attempt to estimate the normalized
trace by sampling allowed paths obviously suffers from the problem
already identified above.
\section{Properties of negativity}
\label{S:negativity}
In this section we briefly review properties of negativity as an
entanglement measure, focusing on those properties that we need in the
subsequent analysis (for a thorough discussion of negativity, see
Ref.~\cite{Vidal02}).
Let $A$ be an operator in the joint Hilbert space of two systems,
system~1 of dimension $d_1$ and system~2 of dimension $d_2$. The
partial transpose of $A$ with respect to an orthonormal basis of
system~2 is defined by taking the transpose of the matrix elements of
$A$ with respect to the system-2 indices. A partial transpose can also
be defined with respect to any basis of system~1. Partial
transposition preserves the trace, and it commutes with taking the
adjoint.
The operator that results from partial transposition depends on which
basis is used to define the transpose, but these different partial
transposes are related by unitary transformations on the transposed
system and thus have the same eigenvalues and singular values.
Moreover, partial transposition on one of the systems is related to
partial transposition on the other by an overall transposition, which
also preserves eigenvalues and singular values. Despite the
nonuniqueness of the partial transpose, we can talk meaningfully about
its invariant properties, such as its eigenvalues and singular values.
Similar considerations show that the eigenvalues and singular values
are invariant under local unitary transformations.
The singular values of an operator $O$ are the eigenvalues of
$\sqrt{O^\dag O}\equiv|O|$ (or, equivalently, of $\sqrt{O
O^\dag}$). Any operator has a polar decomposition $O=T|O|$, where
$T$ is a unitary operator. Writing $|O|=W^\dag SW$, where $W$ is
the unitary that diagonalizes $|O|$ and $S$ is the diagonal matrix of
singular values, we see that any operator can be written as $O=VSW$,
where $V=TW^\dag$ and $W$ are unitary operators.
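In numerical terms, the decomposition $O=VSW$ is just the singular-value decomposition (a sketch for a random operator):

```python
import numpy as np

rng = np.random.default_rng(4)
O = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# O = V S W with V, W unitary and S the diagonal matrix of singular values.
V, s, W = np.linalg.svd(O)
print(np.allclose(O, V @ np.diag(s) @ W))      # True
print(np.allclose(V.conj().T @ V, np.eye(4)))  # True: V is unitary
print(np.all(s >= 0))                          # True: singular values
```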
We denote a partial transpose of $A$ generically by $\breve A$. We
write the eigenvalues of $\breve A$ as $\lambda_j(\breve A)$ and denote
the singular values by $s_j(\breve A)$. If $A$ is Hermitian, so is
$\breve A$, and the singular values of $\breve A$, i.e., the
eigenvalues of $|\breve A|$, are the magnitudes of the eigenvalues,
i.e., $s_j(\breve A)=|\lambda_j(\breve A)|$.
If a joint density operator~$\rho$ of systems 1 and 2 is separable, its
partial transpose $\breve\rho$ is a positive operator. This gives the
Peres-Horodecki entanglement criterion~\cite{p96,hhh96}: if
$\breve\rho$ has a negative eigenvalue, then $\rho$ is entangled (the
converse is not generally true). The magnitude of the sum of the
negative eigenvalues of the partial transpose, denoted by
\begin{equation}
\mathcal{N}(\rho)\equiv-\sum_{\lambda_j(\breve\rho)<0}\lambda_j(\breve\rho)\;,
\end{equation}
is a measure of the amount of entanglement. Partial transposition
preserves the trace, so ${\rm{tr}}(\breve\rho)=1$, from which we get
\begin{equation}
1+2\mathcal{N}(\rho)=\sum_j|\lambda_j(\breve\rho)|=
{\rm{tr}}|\breve\rho|\equiv\mathcal{M}(\rho)\;,
\end{equation}
where $\mathcal{M}(\rho)$ is a closely related entanglement measure. The
quantity $\mathcal{N}(\rho)$ was originally called the
\emph{negativity\/}~\cite{Vidal02}; we can distinguish the two measures
by referring to $\mathcal{M}(\rho)$ as the \emph{multiplicative negativity}, a
name that emphasizes one of its key properties and advantages over
$\mathcal{N}(\rho)$. In this paper, however, we use the multiplicative
negativity exclusively and so refer to it simply as ``the negativity''.
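The criterion and the measure are straightforward to implement (a sketch; for a Hermitian $\rho$ the singular values of $\breve\rho$ are the magnitudes of its eigenvalues, so an eigendecomposition suffices):

```python
import numpy as np

def partial_transpose(rho, d1, d2):
    """Transpose the second (d2-dimensional) factor of a d1 x d2 state."""
    r = rho.reshape(d1, d2, d1, d2)            # indices (i, j; k, l)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def mult_negativity(rho, d1, d2):
    """M(rho) = tr|breve rho| = sum of |eigenvalues| of the partial transpose."""
    return np.abs(np.linalg.eigvalsh(partial_transpose(rho, d1, d2))).sum()

# A Bell pair is maximally entangled: M = d = 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.isclose(mult_negativity(np.outer(bell, bell), 2, 2), 2.0))  # True

# A product state is separable: M = 1.
prod = np.kron([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)
print(np.isclose(mult_negativity(np.outer(prod, prod), 2, 2), 1.0))  # True
```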
The negativity $\mathcal{M}(\rho)$ equals one for separable states, and it is
an entanglement monotone~\cite{Vidal02}, meaning that (i)~it is a
convex function of density operators and (ii)~it does not increase
under local operations and classical communication. The negativity has
the property of being multiplicative in the sense that the $\mathcal{M}$ value
for a state that is a product of states for many pairs of systems is
the product of the $\mathcal{M}$ values for each of the pairs. By the same
token, $\log\mathcal{M}(\rho)$, called the \emph{log-negativity\/}, is
additive, but the logarithm destroys convexity so the log-negativity is
not an entanglement monotone~\cite{Vidal02}. For another point of view
on the monotonicity of the log-negativity, see Ref.~\cite{Plenio05}.
The minimum value of the negativity is one, but we need to know the
maximum value to calibrate our results. Convexity guarantees that the
maximum value is attained on pure states. We can find the
maximum~\cite{SLee03} by considering the Schmidt decomposition of a
joint pure state of systems~1 and 2,
\begin{equation}
|\psi\rangle=\sum_{j=1}^d\sqrt{\mu_j}|j,j\rangle\;,
\end{equation}
where $d=\min(d_1,d_2)$. Taking the partial transpose of $\rho$
relative to the Schmidt basis of system~2 gives
\begin{equation}
\breve\rho
=\sum_{j,k=1}^d\sqrt{\mu_j\mu_k}|j,k\rangle\langle k,j|\;,
\end{equation}
with eigenvectors and eigenvalues
\begin{eqnarray}
|j,j\rangle\;,&&\mbox{eigenvalue $\mu_j$,}\nonumber\\
{1\over\sqrt2}(|j,k\rangle\pm|k,j\rangle)\;,
&&\mbox{eigenvalue $\pm\sqrt{\mu_j\mu_k}$,}\quad j<k.
\end{eqnarray}
This gives a negativity
\begin{equation}
\mathcal{M}(\psi)=
1+2\sum_{j<k}\sqrt{\mu_j\mu_k}
=\sum_{j,k=1}^d\sqrt{\mu_j\mu_k}
=\biggl(\sum_{j=1}^d\sqrt{\mu_j}\biggr)^2\;.
\end{equation}
The concavity of the square root implies $\sum_j\sqrt{\mu_j}\le\sqrt
d$, with equality if and only if $\mu_j=1/d$ for all $j$, i.e.,
$|\psi\rangle$ is maximally entangled. We end up with
\begin{equation}
1\le\mathcal{M}(\rho)\le d\;.
\end{equation}
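These bounds can be checked directly. The sketch below (NumPy; the function names are ours) builds a pure state with prescribed Schmidt coefficients, computes the negativity from the eigenvalues of the partial transpose, and verifies both $\mathcal{M}(\psi)=\bigl(\sum_j\sqrt{\mu_j}\bigr)^2$ and the saturation $\mathcal{M}=d$ for the maximally entangled state:

```python
import numpy as np

def negativity(rho, dA, dB):
    """Sum of |eigenvalues| of the partial transpose over system B."""
    r = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA*dB, dA*dB)
    return np.abs(np.linalg.eigvalsh(r)).sum()

d = 3
mu = np.array([0.5, 0.3, 0.2])           # Schmidt coefficients, sum to 1
psi = (np.sqrt(mu)[:, None] * np.eye(d)).reshape(d * d)  # sum_j sqrt(mu_j)|j,j>
rho = np.outer(psi, psi.conj())
assert np.isclose(negativity(rho, d, d), np.sqrt(mu).sum() ** 2)

# The maximally entangled state (mu_j = 1/d) saturates the bound M = d
mu_max = np.full(d, 1.0 / d)
psi_max = (np.sqrt(mu_max)[:, None] * np.eye(d)).reshape(d * d)
rho_max = np.outer(psi_max, psi_max)
assert np.isclose(negativity(rho_max, d, d), d)
```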
The negativity is the sum of the singular values of
$\breve\rho$. For states of the form we are interested in, given by
Eq.~(\ref{E:rhooutalpha}), the negativity is determined by the
singular values of the partial transpose of the unitary operator~$U_n$.
To see this, consider any bipartite division of the qubits. Performing
the partial transpose on the part that does not include the special
qubit, we have
\begin{equation}
\breve\rho_{n+1}(\alpha)=
\frac{1}{2N}
\begin{pmatrix}
I_n & \alpha\breve U_n^\dag \\
\alpha\breve U_n & I_n
\end{pmatrix} \;,
\label{E:breverhooutalpha}
\end{equation}
where $\breve U_n$ is the partial transpose of $U_n$ relative to the
chosen bipartite division. Notice that if we make our division between
the special qubit and all the rest, then $\breve U_n=U_n^T$ is a
unitary operator, and $\breve\rho_{n+1}(\alpha)$ is the quantum state
corresponding to using $U_n^T$ in the circuit~(\ref{E:circuitalpha});
this shows that for this division, the negativity is 1, consistent with
our earlier conclusion that the special qubit is not entangled with the
other qubits. For a general division, we know there are unitaries $V$
and $W$ such that $\breve U_n=VSW$, where $S$ is the diagonal matrix of
singular values $s_j(\breve U_n)$. This allows us to write
\begin{equation}
\breve\rho_{n+1}(\alpha)=
\begin{pmatrix}
W^\dagger & 0 \\
0 & V
\end{pmatrix}
\frac{1}{2N}
\begin{pmatrix}
I_n & \alpha S \\
\alpha S & I_n
\end{pmatrix}
\begin{pmatrix}
W & 0 \\
0 & V^\dagger
\end{pmatrix}\;,
\end{equation}
showing that $\breve\rho_{n+1}(\alpha)$ is a unitary transformation
away from the matrix in the middle and thus has the same eigenvalues.
The block structure of the middle matrix makes it easy to find these
eigenvalues, which are given by $[1\pm\alpha s_j(\breve U_n)]/2N$.
This allows us to put the negativity in the form
\begin{equation}
\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)={1\over2N}
\sum_{j=1}^N\Bigl(|1+\alpha s_j(\breve U_n)|+|1-\alpha s_j(\breve U_n)|\Bigr) =
{1 \over N} \sum_{j=1}^N \max\bigl(\abs{\alpha} s_j(\breve U_n), 1\bigr)\;,
\label{E:Msingular}
\end{equation}
which is valid for both positive and negative values of $\alpha$. An
immediate consequence of Eq.~(\ref{E:Msingular}) is that
$\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)=\mathcal{M}\bigl(\rho_{n+1}(-\alpha)\bigr)$,
as one would expect. Since $\rho_{n+1}(\alpha)$ is a mixture of
$\rho_{n+1}(+1)=\rho_{n+1}$ and $\rho_{n+1}(-1)$, convexity tells us
immediately that
$\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)\le\mathcal{M}\bigl(\rho_{n+1}\bigr)$, i.e.,
that a mixed input for the special qubit cannot increase the negativity
over that for a pure input. More generally, we have that the
negativity cannot decrease at any point as $\alpha$ increases from 0
to~1.
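The singular-value expression~(\ref{E:Msingular}) can be tested numerically. In the sketch below (NumPy; names are ours, and we assume the convention that the special qubit is the most significant tensor factor), we take $n=2$, draw a random $U_2$, choose the division that puts the special qubit with qubit~1, and compare the negativity computed from the eigenvalues of the full partial transpose with the singular-value formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def pt_last(M, d_rest, d_last):
    """Partial transpose of M over the last tensor factor."""
    n = d_rest * d_last
    return (M.reshape(d_rest, d_last, d_rest, d_last)
             .transpose(0, 3, 2, 1).reshape(n, n))

N = 4                                   # N = 2^n with n = 2
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(Z)                  # random unitary from a QR decomposition

alpha = 0.8
rho = np.block([[np.eye(N), alpha * U.conj().T],
                [alpha * U, np.eye(N)]]) / (2 * N)

rho_pt = pt_last(rho, 4, 2)             # transpose qubit 2 only
M_direct = np.abs(np.linalg.eigvalsh(rho_pt)).sum()

s = np.linalg.svd(pt_last(U, 2, 2), compute_uv=False)
M_formula = np.maximum(abs(alpha) * s, 1.0).sum() / N

assert np.isclose(M_direct, M_formula)
```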
\section{Entanglement in the DQC1 Circuit} \label{S:examples}
In this section, we construct a family of unitaries $U_n$ that produce
global entanglement in the DQC1 circuit~(\ref{E:circuit}). For
$\alpha=1$, the negativity produced by this family is equal to $5/4$,
independent of $n$, for all bipartite divisions that put the first and
last unpolarized qubits in different parts. We conjecture that this is
the maximum negativity that can be achieved in a circuit of the
form~(\ref{E:circuit}).
Before the measurement, the output state of the
circuit~(\ref{E:circuitalpha}) is given by Eq.~(\ref{E:rhooutalpha}).
To construct the unitaries $U_n$, we first introduce a two-qubit
unitary matrix
\begin{equation}
\label{E:U2}
U_{2} \equiv \begin{pmatrix}
A_1 & C_1 \\
D_1 & B_1 \\
\end{pmatrix} \;,
\end{equation}
where $A_1$, $B_1$, $C_1$, and $D_1$ are single-qubit ($2\times2$)
matrices that must satisfy $A_1^\dagger A_1+D_1^\dagger D_1=
B_1^\dagger B_1+C_1^\dagger C_1=I_1$ and $A_1^\dagger C_1+D_1^\dagger
B_1=0$ to ensure that $U_2$ is unitary. The $n$-qubit unitary $U_n$
is then defined by
\begin{eqnarray}
U_n &\equiv&
\begin{pmatrix}
I_{n-2} \otimes A_1 & X_{n-2} \otimes C_1 \\
X_{n-2} \otimes D_1 & I_{n-2} \otimes B_1
\end{pmatrix} \nonumber \\
&=&|0\rangle\langle0|\otimes I_{n-2}\otimes A_1
+|1\rangle\langle1|\otimes I_{n-2}\otimes B_1 \nonumber \\
&&\phantom{|}
+|0\rangle\langle1|\otimes X_{n-2}\otimes C_1
+|1\rangle\langle0|\otimes X_{n-2}\otimes D_1
\;.
\label{E:Un}
\end{eqnarray}
Here we use $X_1$, $Y_1$, and $Z_1$ to denote single-qubit Pauli
operators. A subscript $k$ on the identity operator or a Pauli operator
denotes a tensor product in which that operator acts on each of $k$
qubits. If we adopt the convention that $X_0 = I_0 = 1$, then $U_n$
reduces to $U_2$ when $n=2$. It is easy to design a quantum circuit
that realizes $U_n$. The structure of the circuit is illustrated by
the case of $U_4$:
\begin{equation}
\label{E:Ucircuit}
\Qcircuit @C=.5em @R=0.5em {
& \lstick{\mbox{1st qubit}} & \ctrl{2} & \ctrl{3} & \multigate{1}{U_2} & \ctrl{3} & \ctrl{2} & \qw \\
& \lstick{\mbox{4th qubit}} & \qw & \qw & \ghost{U_2} & \qw & \qw & \qw \\
& \lstick{\mbox{2nd qubit}} & \targ & \qw & \qw & \qw & \targ & \qw \\
& \lstick{\mbox{3rd qubit}} & \qw & \targ & \qw & \targ & \qw & \qw }
\end{equation}
In general, the two-qubit unitary $U_2$, acting on the first and last
qubits, is bracketed by controlled-NOT gates from the first qubit,
acting as control, to each of the other qubits, except the last, as
targets.
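The circuit identity can be confirmed directly. The sketch below (NumPy; our own code, assuming the convention that qubit~1 is the most significant tensor factor, as in the block form of Eq.~(\ref{E:Un})) builds $U_3$ from the blocks of a random $U_2$ and checks that it equals a CNOT from qubit~1 to qubit~2, followed by $U_2$ on qubits~1 and~3, followed by the same CNOT:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U2, _ = np.linalg.qr(Z)                      # random two-qubit unitary
A, C = U2[:2, :2], U2[:2, 2:]                # blocks of Eq. (E:U2)
D, B = U2[2:, :2], U2[2:, 2:]

I2 = np.eye(2)
X = np.array([[0.0, 1], [1, 0]])
P0, P1 = np.diag([1.0, 0]), np.diag([0, 1.0])   # |0><0|, |1><1|
S01 = np.array([[0.0, 1], [0, 0]])              # |0><1|
S10 = S01.T                                     # |1><0|

# U_3 from Eq. (E:Un) with n = 3, so X_{n-2} = X_1
U3 = (np.kron(P0, np.kron(I2, A)) + np.kron(P1, np.kron(I2, B))
      + np.kron(S01, np.kron(X, C)) + np.kron(S10, np.kron(X, D)))
assert np.allclose(U3 @ U3.conj().T, np.eye(8))  # unitarity

# Circuit form: CNOT from qubit 1 to qubit 2, then U_2 on qubits 1 and 3
CX12 = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(X, I2))
U2_13 = (np.kron(P0, np.kron(I2, A)) + np.kron(P1, np.kron(I2, B))
         + np.kron(S01, np.kron(I2, C)) + np.kron(S10, np.kron(I2, D)))
assert np.allclose(U3, CX12 @ U2_13 @ CX12)
```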
Because $I_1$ and $X_1$ are invariant under transposition, it is clear
from the form of $U_n$ that the state~(\ref{E:rhooutalpha}) is invariant
under transposition of any qubit other than~0, 1, and~$n$. We
can use this fact to find the negativity for all bipartite divisions.
First consider any bipartite division that puts qubits 1 and $n$ in the
same part. There are two possibilities. If the special qubit is in
the same part as qubits~1 and $n$, then partial transposition on the
other part leaves $\rho_{n+1}(\alpha)$ unchanged, so the negativity
is~1. If the special qubit is not in the same part as~1 and $n$, then
partial transposition on the part that includes~1 and $n$ is the same
as partial transposition of all the unpolarized qubits, a case we
already know to have negativity equal to 1. We conclude that any
bipartite division that puts~1 and $n$ in the same part has negativity
equal to 1.
Turn now to bipartite divisions that put qubits~1 and $n$ in different
parts. There are two cases to consider: (i)~the special qubit is in
the same part as qubit~1, and (ii)~the special qubit is in the same part
as qubit~$n$. In case~(i), partial transposition of the part that
contains qubit~$n$ gives
\begin{equation}
\breve\rho_{n+1}(\alpha)={1\over2N}
\begin{pmatrix}
I_n & \alpha \breve U_n^\dagger \\
\alpha \breve U_n & I_n
\end{pmatrix}
\qquad\mbox{with}\qquad
\breve U_n=
\begin{pmatrix}
I_{n-2} \otimes A_1^T & X_{n-2} \otimes C_1^T \\
X_{n-2} \otimes D_1^T & I_{n-2} \otimes B_1^T
\end{pmatrix}\;.
\end{equation}
In case~(ii), partial transposition of the part that contains qubit~1
gives
\begin{equation}
\breve\rho_{n+1}(\alpha)={1\over2N}
\begin{pmatrix}
I_n & \alpha \breve U_n^\dagger \\
\alpha \breve U_n & I_n
\end{pmatrix}
\qquad\mbox{with}\qquad
\breve U_n=
\begin{pmatrix}
I_{n-2} \otimes A_1 & X_{n-2} \otimes D_1 \\
X_{n-2} \otimes C_1 & I_{n-2} \otimes B_1
\end{pmatrix}\;.
\end{equation}
The basic structure of $\breve\rho_{n+1}(\alpha)$ is the same in
both cases. Without changing the spectrum, we can reorder the rows and
columns to block diagonalize $\breve\rho_{n+1}(\alpha)$ so that
there are $N/4$ blocks, each of which has the form
\begin{equation}
{1\over2N}
\begin{pmatrix}
I_2 & \alpha \breve U_2^\dagger \\
\alpha \breve U_2 & I_2
\end{pmatrix}
={4\over N}\breve\rho_3(\alpha)\;,
\label{E:breverho2}
\end{equation}
where $\breve\rho_3(\alpha)$ is the appropriate partial transpose of
the three-qubit output state. Thus the spectrum of
$\breve\rho_{n+1}(\alpha)$ is the same as the spectrum of
$\breve\rho_3(\alpha)$, except that each eigenvalue is reduced by a
factor of $4/N$. In calculating the negativity, since each eigenvalue
is ($N/4$)-fold degenerate, the reduction factor of $4/N$ is cancelled
by a degeneracy factor of $N/4$, leaving us with the fundamental result
of our construction,
\begin{equation}
\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)=\mathcal{M}\bigl(\rho_3(\alpha)\bigr)\;.
\end{equation}
This applies to both cases of bipartite splittings that we are
considering, showing that all divisions have the same negativity as the
corresponding $n=2$ construction.
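A quick numerical illustration of this $n$-independence (NumPy; our own sketch, with the same ordering conventions as above): a random $U_2$ plugged into the construction~(\ref{E:Un}) for $n=3$ yields the same negativity for the division $\{0,1,2\}|\{3\}$ as the $n=2$ case does for $\{0,1\}|\{2\}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def pt_last(M, d_rest, d_last):
    n = d_rest * d_last
    return (M.reshape(d_rest, d_last, d_rest, d_last)
             .transpose(0, 3, 2, 1).reshape(n, n))

def negativity(U, alpha):
    """Negativity of rho(alpha) with the last qubit transposed."""
    N = U.shape[0]
    rho = np.block([[np.eye(N), alpha * U.conj().T],
                    [alpha * U, np.eye(N)]]) / (2 * N)
    return np.abs(np.linalg.eigvalsh(pt_last(rho, N, 2))).sum()

Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U2, _ = np.linalg.qr(Z)
A, C, D, B = U2[:2, :2], U2[:2, 2:], U2[2:, :2], U2[2:, 2:]

I2, X = np.eye(2), np.array([[0.0, 1], [1, 0]])
P0, P1 = np.diag([1.0, 0]), np.diag([0, 1.0])
S01 = np.array([[0.0, 1], [0, 0]]); S10 = S01.T
U3 = (np.kron(P0, np.kron(I2, A)) + np.kron(P1, np.kron(I2, B))
      + np.kron(S01, np.kron(X, C)) + np.kron(S10, np.kron(X, D)))

for alpha in (0.5, 1.0):
    assert np.isclose(negativity(U2, alpha), negativity(U3, alpha))
```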
We now specialize to a particular choice of $U_2$ given by
\begin{equation}
A_1=
\begin{pmatrix}
0 & 0 \\ 0 & 1
\end{pmatrix}\;,
\quad
B_1=
\begin{pmatrix}
1 & 0 \\ 0 & 0
\end{pmatrix}\;,
\quad
C_1=
\begin{pmatrix}
0 & 1 \\ 0 & 0
\end{pmatrix}\;,
\quad\mbox{and}\quad
D_1=
\begin{pmatrix}
0 & 0 \\ 1 & 0
\end{pmatrix}\;.
\label{E:ABCD}
\end{equation}
For this choice, the two cases of bipartite division lead to the same
partial transpose. The spectrum of
\begin{equation}
\breve\rho_3(\alpha)={1\over8}
\begin{pmatrix}
I_1 & 0 & \alpha A_1 & \alpha D_1 \\
0 & I_1 & \alpha C_1 & \alpha B_1 \\
\alpha A_1 & \alpha D_1 & I_1 & 0\\
\alpha C_1 & \alpha B_1 & 0 & I_1
\end{pmatrix}
\end{equation}
is
\begin{equation}
{\rm{Spec}}\bigl(\breve\rho_3(\alpha)\bigr)=
{1\over 8}(1+2\alpha,1,1,1,1,1,1,1-2\alpha)\;,
\end{equation}
giving a negativity equal to 1 for $\alpha\le1/2$ and a
negativity
\begin{equation}
\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)=\mathcal{M}\bigl(\rho_3(\alpha)\bigr)=
{1\over4}(2\alpha+3)
\quad\mbox{for}\quad
\alpha\ge1/2\;.
\end{equation}
This result shows definitively that the circuit~(\ref{E:circuitalpha})
can produce entanglement, at least for $\alpha>1/2$. We stress that the
negativity achieved by this family of unitaries is independent of
$n\ge2$.
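The spectrum and the resulting negativity for the choice~(\ref{E:ABCD}) are easy to confirm numerically (NumPy; our own sketch, using the same conventions as above):

```python
import numpy as np

A = np.array([[0, 0], [0, 1.0]])
B = np.array([[1.0, 0], [0, 0]])
C = np.array([[0, 1.0], [0, 0]])
D = np.array([[0, 0], [1.0, 0]])
U2 = np.block([[A, C], [D, B]])          # the choice of Eq. (E:ABCD)

def pt_last(M, d_rest, d_last):
    n = d_rest * d_last
    return (M.reshape(d_rest, d_last, d_rest, d_last)
             .transpose(0, 3, 2, 1).reshape(n, n))

for alpha in (0.3, 0.5, 0.8, 1.0):
    rho = np.block([[np.eye(4), alpha * U2.conj().T],
                    [alpha * U2, np.eye(4)]]) / 8
    lam = np.sort(np.linalg.eigvalsh(pt_last(rho, 4, 2)))
    expected = np.sort([1 + 2*alpha, 1, 1, 1, 1, 1, 1, 1 - 2*alpha])
    assert np.allclose(lam, np.array(expected) / 8)
    neg = np.abs(lam).sum()
    # negativity is 1 for alpha <= 1/2 and (2 alpha + 3)/4 above that
    assert np.isclose(neg, max(1.0, (2*alpha + 3) / 4))
```

In particular, at $\alpha=1$ the loop confirms the negativity $5/4$.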
For $\alpha=1$, the negativity achieved by this family is
$5/4$. For large $n$, this amount of negativity is a vanishingly small
fraction of the maximum possible negativity, $\sim2^{n/2}$, for roughly
equal divisions of the qubits. This raises the question whether it is
possible for other unitaries to achieve larger negativities. A first
idea might be to find two-qubit unitaries $U_2$ that yield a higher
negativity $\mathcal{M}(\rho_3)=\mathcal{M}(\rho_{n+1})$ when plugged into the
construction of this section, but the bounds we find in
Sec.~\ref{S:bounds} dispose of this notion, since they show that $5/4$
is the maximum negativity that can be achieved for $n=2$. Another
approach would be to generalize the construction of this section in a
way that is obvious from the circuit~(\ref{E:Ucircuit}), i.e., by
starting with a $k$-qubit unitary in place of the two-qubit unitary of
Eq.~(\ref{E:Ucircuit}). Numerical investigation of the case $k=3$ has
not turned up negativities larger than $5/4$. We conjecture that $5/4$
is the maximum negativity that can be achieved by states of the
form~(\ref{E:rhoout}). Though we have not been able to prove this
conjecture, we show in the next section that typical unitaries for
$n+1\le10$ achieve negativities less than $5/4$ and in the following
section that the negativity is rigorously bounded by $\sqrt2$.
We stress that we are not suggesting that the construction of this
section, with $U_2$ given by Eq.~(\ref{E:ABCD}), achieves the maximum
negativity for all values of $\alpha$, for that would mean that we
believed that the negativity cannot exceed 1 for $\alpha\le1/2$, which
we do not. Although we have not found entanglement for $\alpha\le
1/2$, we suspect there are states with negativity greater than 1 as
long as $\alpha$ is large enough that $\rho_{n+1}(\alpha)$ lies outside
the separable ball around the maximally mixed state, i.e.,
$\alpha\ge2^{(n+1)/2}r_{n+1}$. The bound of Sec.~\ref{S:bounds} only
says that $\mathcal{M}\bigl(\rho_{n+1}(\alpha)\bigr)\le\sqrt{1+\alpha^2}$, thus
allowing negativities greater than 1 for all values of $\alpha$ except
$\alpha=0$. Moreover, since the negativity does not detect bound
entanglement, there could be entangled states that have a negativity
equal to~1.
\section{The Average Negativity of a Random Unitary} \label{S:random}
Having constructed a family of unitaries that yields a DQC1 state with
negativity $5/4$, a natural question to ask is, ``What is the
negativity of a typical state produced by the
circuit~(\ref{E:circuit})?'' To address this question, we choose the
unitary operator in the circuit~(\ref{E:circuit}) at random and
calculate the negativity. Of course, one must first define what it
means for a unitary to be ``typical'' or ``chosen at random''. The
natural measure for defining this is the Haar measure, which is the
unique left-invariant measure for the group ${\sf U}(N)$
\cite{Conway90}. The resulting ensemble of unitaries is known as the
Circular Unitary Ensemble, or CUE, and it is parameterized by the
Hurwitz decomposition \cite{Hurwitz1897}. Although this is an exact
parameterization, implementing it requires computational resources that
grow exponentially in the size of the unitary~\cite{Emerson03}. To
circumvent this, a pseudo-random distribution that requires resources
growing polynomially in the size of the unitary was formulated and
investigated in Ref.~\cite{Emerson03}. This is the distribution from
which we draw our random unitaries, and we summarize the procedure for
completeness.
We first define a random ${\sf SU}(2)$ unitary as
\begin{equation}
R(\theta,\phi,\chi) =
\begin{pmatrix}
e^{i \phi} \cos\theta & e^{i \chi} \sin\theta \\
-e^{-i \chi} \sin\theta & e^{-i \phi} \cos\theta \\
\end{pmatrix} \;,
\end{equation}
where $\theta$ is chosen uniformly between $0$ and $\pi/2$, and $\phi$
and $\chi$ are chosen uniformly between $0$ and $2\pi$. A random
unitary applied to each of the $n$ qubits is then
\begin{equation}
{\sf R} = \bigotimes_{i=1}^{n} R(\theta_i, \phi_i, \chi_i) \;,
\end{equation}
where a separate random number is generated for each variable at each
value of $i$. Now define a mixing operator ${\sf M}$ in terms of
nearest-neighbor $Z\otimes Z$ couplings as
\begin{equation}
{\sf M} =
\exp\left(i \frac{\pi}{4} \sum_{j=1}^{n-1} Z^{(j)} \otimes Z^{(j+1)} \right) \;.
\end{equation}
The pseudo-random unitary is then given by
\begin{equation}
{\sf R}_j{\sf M}{\sf R}_{j-1}\cdots{\sf M}{\sf R}_2{\sf M}{\sf R}_1 \;,
\end{equation}
where $j$ is a positive integer that depends on $n$, and each ${\sf
R}_k$ is chosen randomly as described above. For a given $n$, the
larger $j$ is, the more accurately the pseudo-random unitary
distribution resembles the actual CUE. From the results in
Ref.~\cite{Emerson03}, $j=40$ gives excellent agreement with the CUE
for unitary operators on up to 10 qubits, so this is what we
use in our calculations.
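The procedure can be sketched compactly as follows (NumPy; the function names are ours, and we exploit the fact that ${\sf M}$ is diagonal in the computational basis, so it can be exponentiated entrywise):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)

def random_su2():
    """One random rotation R(theta, phi, chi) with the stated distributions."""
    th = rng.uniform(0, np.pi / 2)
    ph, ch = rng.uniform(0, 2 * np.pi, size=2)
    return np.array([[np.exp(1j*ph)*np.cos(th),   np.exp(1j*ch)*np.sin(th)],
                     [-np.exp(-1j*ch)*np.sin(th), np.exp(-1j*ph)*np.cos(th)]])

def layer(n):
    """R = tensor product of n independent random single-qubit rotations."""
    return reduce(np.kron, [random_su2() for _ in range(n)])

def mixing(n):
    """M = exp(i pi/4 sum_j Z^(j) Z^(j+1)); diagonal in the computational basis."""
    bits = (np.arange(2**n)[:, None] >> np.arange(n - 1, -1, -1)) & 1
    z = 1 - 2 * bits                               # Z eigenvalues, +/-1 per qubit
    zz = (z[:, :-1] * z[:, 1:]).sum(axis=1)        # sum of neighboring products
    return np.diag(np.exp(1j * np.pi / 4 * zz))

def pseudo_random_unitary(n, j=40):
    """R_j M R_{j-1} ... M R_1 (j = 40 used in the text)."""
    U = layer(n)
    M = mixing(n)
    for _ in range(j - 1):
        U = layer(n) @ M @ U
    return U

U = pseudo_random_unitary(3, j=10)
assert np.allclose(U @ U.conj().T, np.eye(8), atol=1e-10)
```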
Due to boundary effects, not all bipartite splittings that put $k$
unpolarized qubits in one part are equivalent. Nevertheless, we
consider only bipartite divisions that split the qubits along
horizontal lines placed at various points in the circuit of
Eq.~(\ref{E:circuit}). We refer to the division that groups the last
$k$ qubits together as the $(n+1-k,k)$ splitting. For $\alpha=1$, we
calculate the average negativity and standard deviation of a
pseudo-random state $\rho_{n+1}$ for two different bipartite
splittings, $(n,1)$ and $(\left\lfloor n/2 \right\rfloor +1,
\left\lceil n/2 \right\rceil )$. These results are plotted in
Fig.~\ref{F:random}. For $n+1=5,\ldots,10$, the average negativity for
the roughly equal splitting lies between 1.135 and just above 1.16. The
standard deviation appears to converge exponentially to zero, as in
Ref.~\cite{Scott03}, a behavior that is typical of asymptotically
equivalent matrices. In addition, for $9+1$ qubits, we calculate the
average negativity and standard deviation for all nontrivial ($k\ne n$)
bipartite splittings $(n+1-k,k)$, and the results are shown in
Fig.~\ref{F:allsplits}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.458]{meanneg.eps}
\includegraphics[scale=.458]{devneg.eps}
\caption{{\bf Left}: Average negativity of the state $\rho_{n+1}$ of
Eq.~(\protect\ref{E:rhoout}) ($\alpha=1$) for a randomly chosen unitary
$U_n$ for two different bipartite splittings, $(n,1)$ and
$\left(\left\lfloor n/2 \right\rfloor +1, \left\lceil
n/2 \right\rceil\right)$. The $(n,1)$ splitting appears to
reach an upper bound quickly, whereas the other splitting is still
rising slowly at 10 qubits. {\bf Right}: Semi-log plot of the standard
deviation in the negativity of the randomly chosen state $\rho_{n+1}$.
The fit curves show that the standard deviation is decaying
exponentially, so that for large numbers of qubits, almost all
unitaries give the same negativity.} \label{F:random}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.6]{allsplits10.eps}
\caption{Average negativity of the state $\rho_{10}$ of
Eq.~(\protect\ref{E:rhoout}) ($\alpha=1$) for a randomly chosen unitary
$U_9$ as a function of bipartite splitting number $k$ for bipartite
splittings $(10-k,k)$. The error bars give the standard deviations.
The function attains a maximum when the bipartite split puts half of the
qubits on which the unitary acts in each part.}
\label{F:allsplits}
\end{center}
\end{figure}
\section{Bounds on the Negativity} \label{S:bounds}
In this section, we return to allowing the special qubit in the circuit
(\ref{E:circuitalpha}) to have initial polarization~$\alpha$. Since the
value of $n$ is either clear from context or fixed, we reduce the
notational clutter by denoting the state $\rho_{n+1}(\alpha)$ of
Eq.~(\ref{E:rhooutalpha}) as~$\rho_\alpha$.
Given a particular bipartite division, the partial transpose of
$\rho_\alpha$ with respect to the part that does not include the
special qubit is
\begin{equation}
\breve \rho_\alpha = \frac{I_n+\alpha \breve C}{2N} \;,
\end{equation}
where
\begin{equation}
\label{E:Cdef}
\breve C \equiv
\begin{pmatrix}
0 & \breve U_n^\dag \\
\breve U_n & 0
\end{pmatrix} \;.
\end{equation}
Using the binomial theorem, we can expand
${\rm{tr}}(\breve\rho^{\,s}_\alpha)$ in terms of ${\rm{tr}}(\breve C^k)$:
\begin{equation}
\label{E:binomialform}
{\rm{tr}}(\breve\rho^{\,s}_\alpha) =
\left({1\over2N}\right)^{\! \! s} \sum_{k=0}^s {s \choose k} \alpha^k {\rm{tr}}(\breve C^k) \;.
\end{equation}
When $k$ is odd, $\breve C^k$ is block off-diagonal, so its trace
vanishes. When $k$ is even, we have
\begin{equation}
{\rm{tr}}(\breve C^k) = 2\,{\rm{tr}}\Bigl((\breve U_n \breve U_n^\dag)^{k/2}\Bigr)\;.
\end{equation}
When $k=2$, this simplifies to ${\rm{tr}}(\breve C^2)=2{\rm{tr}}(\breve U_n\breve
U_n^\dag)=2{\rm{tr}}(U_n U_n^\dag)=2N$. The crucial step here follows
immediately from the property ${\rm{tr}}(\breve A\breve B)={\rm{tr}}(AB)$, which we
prove as a Lemma in the Appendix. Note that in general ${\rm{tr}}(\breve A_1
\breve A_2 \ldots \breve A_l) \not= {\rm{tr}}(A_1 A_2 \ldots A_l)$ if $l>2$,
so we cannot give a similar general calculation of
${\rm{tr}}(\breve\rho^{\,s}_\alpha)$ for even $s \ge 4$, since it involves terms
of this form.
Using Eq.~(\ref{E:binomialform}), we can now obtain three independent
constraint equations on the eigenvalues
$\lambda_j=\lambda_j(\breve\rho_\alpha)$ of the partial transpose
$\breve\rho_\alpha$:
\begin{equation} \label{E:sumsofpowers}
\sum_{j=1}^{2N}\lambda_j^s=
{\rm{tr}}(\breve \rho^{\,s}_\alpha)
= \frac{1}{2^s N^{s-1}}[(1+\alpha)^s + (1-\alpha)^s]\;,\qquad s=1,2,3.
\end{equation}
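These constraint equations can be verified numerically. The sketch below (NumPy; our own code) builds $\rho_\alpha$ for a random $U_n$ with $n=3$, takes the partial transpose over the last two unpolarized qubits, and checks Eq.~(\ref{E:sumsofpowers}) for $s=1,2,3$:

```python
import numpy as np

rng = np.random.default_rng(4)

n, alpha = 3, 0.6
N = 2 ** n
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(Z)                       # random n-qubit unitary
rho = np.block([[np.eye(N), alpha * U.conj().T],
                [alpha * U, np.eye(N)]]) / (2 * N)

# Partial transpose over the last two unpolarized qubits
d_rest, d_last = 4, 4
m = 2 * N
rho_pt = (rho.reshape(d_rest, d_last, d_rest, d_last)
             .transpose(0, 3, 2, 1).reshape(m, m))

for s in (1, 2, 3):
    lhs = np.trace(np.linalg.matrix_power(rho_pt, s)).real
    rhs = ((1 + alpha) ** s + (1 - alpha) ** s) / (2 ** s * N ** (s - 1))
    assert np.isclose(lhs, rhs)
```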
Since the negativity is given by
\begin{equation} \label{E:negdef}
\mathcal{M}(\rho_\alpha) = \sum_j \abs{\lambda_j} \;,
\end{equation}
we can find an upper bound on the negativity by maximizing
$\sum_j\abs{\lambda_j}$ subject to the
constraints~(\ref{E:sumsofpowers}). If we consider only the $s=1,2$
constraints, we obtain a nontrivial upper bound on the negativity with
little effort. We find that adding the constraint $s=3$ adds nothing
asymptotically for large $N$, but for small $N$ yields a tighter bound
than we get from the $s=1,2$ constraints, although this comes at the
cost of considerably more effort. We emphasize that these bounds apply
to all bipartite divisions and to all unitaries $U_n$. Notice that we
have no reason to expect these bounds to be saturated, since the traces
of higher powers of $\breve\rho_\alpha$ impose additional constraints
that we are ignoring. The one exception is the case of three qubits,
where the $s=1,2,3$ constraints are a complete set, and indeed, in this
case, the $s=1,2,3$ bound is $5/4$, which is saturated by the unitary
found in Sec.~\ref{S:examples}.
The remainder of this section is devoted to calculating the $s=1,2$ and
$s=1,2,3$ upper bounds. A graphical summary of our results for
$\alpha=1$ is presented in Fig.~\ref{F:totalfig}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.6]{totalfig.eps}
\caption{Plot of the bounds on the negativity of states of the
form~(\ref{E:rhoout}), i.e., for a pure-state input in the zeroth
register ($\alpha=1$). The uppermost plot is the simple analytic bound
$\mathcal{M}_{1,2}=\sqrt2$, obtained using the $s=1,2$~constraint equations;
the next largest plot is the numerically constructed $s=1,2,3$ bound.
One can see that the $s=1,2,3$ bound asymptotes to the $s=1,2$ bound.
As noted in the text, these bounds are independent of the unitary $U_n$
and the bipartite division. The flat line shows the negativity $5/4$
for the state constructed in Sec.~\ref{S:examples}, currently the state
of the form~(\ref{E:rhoout}) with the largest demonstrated negativity;
notice that for $n+1=3$, this state attains the $s=1,2,3$ bound. The
lowest two sets of data points display the expected negativities for a
randomly chosen unitary using the bipartite splittings $(n,1)$ and
$\left(\left\lfloor n/2 \right\rfloor +1, \left\lceil n/2
\right\rceil\right)$, which were also plotted in Fig.~\ref{F:random}.}
\label{F:totalfig}
\end{center}
\end{figure}
\subsection{The $s=1,2$ Bound}
We can use Lagrange multipliers to reduce the problem to maximizing a
function of one variable, but first we must deal with the absolute
value in Eq.~(\ref{E:negdef}). To do so, we assume that $t$ of the
eigenvalues are negative and the $2N-t$ others are nonnegative, where
$t$ becomes a parameter that must now be included in the maximization.
We want to maximize
\begin{equation}
\mathcal{M}_{1,2} = -\sum_{i=1}^t \lambda_i + \sum_{j=t+1}^{2N} \lambda_j\;,
\end{equation}
subject to the constraints
\begin{equation} \label{E:cons}
\sum_{k=1}^{2N} \lambda_k = 1
\quad \mbox{and} \quad \sum_{k=1}^{2N}
\lambda_k^2 = \frac{1+\alpha^2}{2N} \;.
\end{equation}
The notation we adopt here for the indices is that $i$ labels negative
eigenvalues and $j$ labels nonnegative eigenvalues, while $k$ can label
either. This serves to remind us of the sign of an eigenvalue just by
looking at its index.
Introducing Lagrange multipliers $\mu$ and $\nu$, the function we want
to maximize is
\begin{equation}
f(\lambda_k,t) =
-\sum_{i=1}^t \lambda_i + \sum_{j=t+1}^{2N} \lambda_j +
\mu\!\left(\sum_{k=1}^{2N} \lambda_k -1\right)
+ \nu\!\left(\sum_{k=1}^{2N} \lambda_k^2 -\frac{1+\alpha^2}{2N}\right)\;.
\end{equation}
Differentiating with respect to $\lambda_i$ and then $\lambda_j$, we find
\begin{eqnarray}
-1+\mu+2\nu \lambda_i = 0 \;, \\
+1+\mu+2\nu \lambda_j = 0 \;.
\end{eqnarray}
We immediately see that in the maximal solution, all the negative
eigenvalues are equal, and all the nonnegative eigenvalues are equal.
We can now reformulate the problem in the following way. If we call
the two eigenvalues $\lambda_-$ and $\lambda_+$, our new problem is to
maximize
\begin{equation} \label{E:newneg}
\mathcal{M}_{1,2} = \sum_k \abs{\lambda_k}= -t\lambda_-+(2N-t)\lambda_+\;,
\end{equation}
subject to the constraints
\begin{eqnarray} \label{E:constraint1}
t \lambda_- + (2N-t) \lambda_+ &=& 1\;,\\
t \lambda_-^2 + (2N-t) \lambda_+^2 &=& \frac{1+\alpha^2}{2N}\;.
\label{E:constraint2}
\end{eqnarray}
We can now do the problem by solving the constraints for $\lambda_-$
and $\lambda_+$ in terms of $t$, plugging these results into
$\mathcal{M}_{1,2}$, and then maximizing over $t$.
Before continuing, we note two things. First, $t$ cannot be $2N$, for
if it were, then all the eigenvalues would be negative, making it
impossible to satisfy Eq.~(\ref{E:constraint1}). Second, unless
$\alpha=0$, $t$ cannot be 0, for if it were, then all the eigenvalues
would be equal to $1/2N$ by Eq.~(\ref{E:constraint1}), a situation
Eq.~(\ref{E:constraint2}) says can occur only if $\alpha=0$. Since we
are not really interested in the case $\alpha=0$, for which
$\rho_\alpha$ is always the maximally mixed state, we assume $\alpha>0$
and $0<t<2N$ in what follows.
Solving Eqs.~(\ref{E:constraint1}) and (\ref{E:constraint2}) and
plugging the solutions into Eq.~(\ref{E:newneg}), we get the two
solutions
\begin{equation}
\label{E:ellipse}
\mathcal{M}_{1,2} = \frac{N-t \pm \alpha \sqrt{t(2N-t)}}{N} \;.
\end{equation}
We choose the positive branch, since it contains the maximum.
Maximizing with respect to $t$ treated as a continuous variable, we
obtain the upper bound,
\begin{equation} \label{E:maxneg}
\mathcal{M}_{1,2} = \sqrt{1+\alpha^2}\, \stackrel{\alpha \to 1}{=}
\sqrt 2 \simeq 1.414\;,
\end{equation}
which occurs when the degeneracy parameter is given by
\begin{equation}
\label{E:degeneracy}
t = N\!\left(1 -\frac{1}{\sqrt{1+\alpha^2}} \right)
\stackrel{\alpha \to 1}{\simeq} 0.292\,N \;.
\end{equation}
The numbers on the right are for the case $\alpha = 1$, corresponding
to the special qubit starting in a pure state. Notice that the upper
bound~(\ref{E:maxneg}) allows a negativity greater than 1 for all
$\alpha$ except $\alpha=0$.
Since we did not yet enforce the condition that $t$ be a positive
integer, the bound~(\ref{E:maxneg}) can be made tighter for specific
values of $N$ and $\alpha$ by calculating $t$ and checking which of the
two nearest integers yields a larger $\mathcal{M}_{1,2}$. Asymptotically,
however, the ratio $t/N$ can approach any real number, so this bound
for continuous $t$ is the same as the bound for integer $t$ in the
limit $N\to\infty$.
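As a quick cross-check (our own sketch), one can maximize the positive branch of Eq.~(\ref{E:ellipse}) over integer $t$ directly and compare with the bound~(\ref{E:maxneg}):

```python
import numpy as np

def bound_12(N, alpha):
    """Max of the positive branch of Eq. (E:ellipse) over integer 0 < t < 2N."""
    t = np.arange(1, 2 * N)
    vals = (N - t + alpha * np.sqrt(t * (2.0 * N - t))) / N
    return vals.max()

for alpha in (0.5, 1.0):
    for n in (5, 10, 15):
        N = 2 ** n
        b = bound_12(N, alpha)
        # Integer t stays below the continuous bound sqrt(1 + alpha^2) ...
        assert b <= np.sqrt(1 + alpha ** 2) + 1e-12
        # ... and nearly attains it, since the optimum t/N can be approached
        assert b > np.sqrt(1 + alpha ** 2) - 1e-3
```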
\subsection{The $s=1,2,3$ Bound}
To deal with this case, we again make the assumption that $t$ of the
eigenvalues are negative and $2N-t$ are nonnegative and thus write
\begin{equation}
\mathcal{M}_{1,2,3} = -\sum_{i=1}^t \lambda_i + \sum_{j=t+1}^{2N} \lambda_j\;,
\end{equation}
as before. In addition to the constraints~(\ref{E:cons}), we now have a
third constraint
\begin{equation}
\label{E:constraint3}
\sum_{k=1}^{2N} \lambda_k^3 = \frac{1+3 \alpha^2}{4N^2} \;.
\end{equation}
We specialize to the case $\alpha = 1$ for the remainder of this
subsection, because it is our main interest, and the algebra for
the general case becomes difficult.
Introducing three Lagrange multipliers, we can write the function we
want to maximize as
\begin{equation}
\label{E:lagrange123}
f(\lambda_k,t) =
-\sum_{i=1}^t \lambda_i + \sum_{j=t+1}^{2N} \lambda_j +
\mu\!\left( \sum_{k=1}^{2N} \lambda_k - 1 \right)+
\nu\!\left( \sum_{k=1}^{2N} \lambda_k^2 - \frac{1}{N} \right) +
\xi\!\left( \sum_{k=1}^{2N} \lambda_k^3 - \frac{1}{N^2} \right)\;.
\end{equation}
Differentiating with respect to $\lambda_i$ and then $\lambda_j$ gives
\begin{eqnarray}
-1+\mu+2\nu \lambda_i + 3 \xi \lambda_i^2 &=& 0 \;, \\
+1+\mu+2\nu \lambda_j + 3 \xi \lambda_j^2 &=& 0 \;.
\end{eqnarray}
These equations being quadratic, we see that there are at most two
distinct negative eigenvalues and at most two distinct nonnegative
eigenvalues. Since the sum of the two solutions of either of these
equations is $-2\nu/3\xi$, however, we can immediately conclude either
that one of the potentially nonnegative solutions is negative or that
one of the potentially negative solutions is positive. Hence, we find
that at least one of the four putative eigenvalues has the wrong sign,
implying that there are at most three distinct eigenvalues, though we
don't know whether one or two of them are negative.
Labelling the three eigenvalues by $A$, $B$, and $C$, we can reduce the
problem to solving the three constraint equations,
\begin{eqnarray} \label{E:123}
u A + v B + w C & = & 1\;,\nonumber \\
u A^2 + v B^2 + w C^2 & = & 1/N\;,\\
u A^3 + v B^3 + w C^3 & = & 1/N^2\;,\nonumber
\end{eqnarray}
for $A$, $B$, and $C$ and then maximizing $\mathcal{M}_{1,2,3}$ over the
degeneracy parameters $u$, $v$, and $w$, which are nonnegative
integers satisfying the further constraint
\begin{equation}
u+v+w=2N\;.
\end{equation}
We do not associate any particular sign with $A$, $B$, and $C$; the
signs are determined by the solution of the equations.
One might hope that the symmetry of Eqs.~(\ref{E:123}) would allow for
a simple analytic solution, but this appears not to be the case. In
solving the three equations, one is inevitably led to a sixth-order
polynomial in one of the variables, with the coefficients given as
functions of $u$, $v$, and $w$. Rather than try to solve this
equation, which appears intractable, we elected to do a brute force
optimization for any given value of $2N$ by solving Eqs.~(\ref{E:123})
for each possible value of $u$, $v$, and $w$. Picking the solution
that has the largest negativity then yields the global maximum. We did
this for each $N$ up to $2N=78$. The values of
$u$, $v$, and $w$ that maximize the negativity are always
\begin{equation} \label{E:uvw}
u=\left[N\left(1-\frac{1}{\sqrt 2}\right)\right] \;,
\ v=1 \;,
\ w = 2N-1-u \;,
\end{equation}
where $[x]$ denotes the integer nearest to $x$. The unique eigenvalue
corresponding to $v=1$ is the largest positive eigenvalue, $w$ is the
degeneracy of another positive eigenvalue, and $u$ is the degeneracy of
the negative eigenvalue. Notice that the degeneracy of the negative
eigenvalue is exactly what was found in the $s=1,2$ case. Using the
results~(\ref{E:uvw}) as a guide, we did a further numerical
calculation of the maximum for larger values of $N$, by considering
only the area around the degeneracy values given by
Eq.~(\protect\ref{E:uvw}). While this is not a certifiable global
maximum, the perturbation expansion described below matches so well
that the two are indistinguishable if they are plotted together for
$n+1>7$. This gives us confidence that the numerically determined
upper bound $\mathcal{M}_{1,2,3}$, which we plot in Fig.~\ref{F:totalfig}, is
indeed a global maximum for all $N$.
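A minimal version of this brute-force search can be sketched as follows (NumPy; our own code, shown for $n+1=3$, i.e., $2N=8$ and $\alpha=1$, with a plain Newton solver standing in for whatever root finder one prefers). It solves Eqs.~(\ref{E:123}) from a grid of starting points for every degeneracy triple and recovers the $5/4$ bound:

```python
import numpy as np
from itertools import product

np.seterr(all='ignore')   # divergent Newton starts are filtered out below

N = 4                     # n + 1 = 3 qubits, alpha = 1

def solve(u, v, w, x0, iters=80):
    """Newton's method for the constraints (E:123); None if no convergence."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        A, B, C = x
        F = np.array([u*A + v*B + w*C - 1,
                      u*A**2 + v*B**2 + w*C**2 - 1/N,
                      u*A**3 + v*B**3 + w*C**3 - 1/N**2])
        J = np.array([[u, v, w],
                      [2*u*A, 2*v*B, 2*w*C],
                      [3*u*A**2, 3*v*B**2, 3*w*C**2]])
        try:
            dx = np.linalg.solve(J, F)
        except np.linalg.LinAlgError:
            return None
        x = x - dx
        if not np.all(np.isfinite(x)):
            return None
        if np.max(np.abs(dx)) < 1e-14:
            break
    A, B, C = x
    res = [u*A + v*B + w*C - 1,
           u*A**2 + v*B**2 + w*C**2 - 1/N,
           u*A**3 + v*B**3 + w*C**3 - 1/N**2]
    return x if np.max(np.abs(res)) < 1e-10 else None

best = 0.0
starts = list(product((-0.4, -0.1, 0.1, 0.4), repeat=3))
for u in range(1, 2 * N - 1):
    for v in range(1, 2 * N - u):
        w = 2 * N - u - v             # u + v + w = 2N, all >= 1
        for x0 in starts:
            sol = solve(u, v, w, x0)
            if sol is not None:
                best = max(best, u*abs(sol[0]) + v*abs(sol[1]) + w*abs(sol[2]))

assert abs(best - 1.25) < 1e-8        # the 5/4 bound for n + 1 = 3
```

The maximizing solution found this way is the spectrum $(3/8,\,1/8\times6,\,-1/8)$ of Sec.~\ref{S:examples}, with degeneracies $(u,v,w)=(1,1,6)$ as in Eq.~(\ref{E:uvw}).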
We have used the numerical work to help formulate a perturbation
expansion that gives the first correction to the $N\to\infty$ behavior
of the $s=1,2,3$ bound. Defining $x=1/N$, we rewrite the constraint
equations~(\ref{E:123}) as
\begin{eqnarray} \label{E:con}
a A + b B + c\,C & = & x\;,\nonumber \\
a A^2 + b B^2 + c\,C^2 & = & x^2\;,\\
a A^3 + b B^3 + c\,C^3 & = & x^3\;,\nonumber
\end{eqnarray}
where $a=u/N$, $b=v/N$, and $c=w/N$. We also have the constraint
\begin{equation} \label{E:con2}
a+b+c=2\;.
\end{equation}
As $x$ is the variable that is asymptotically small, we seek an
expansion in terms of it.
Our numerical work tells us that there are two positive eigenvalues,
one of which is larger and nondegenerate. In formulating our
perturbation expansion, we let $B$ and $C$ be the positive eigenvalues,
with $B$ being the larger one, having degeneracy $v=b_1\ge1$. We do
not assume that $b_1$ is 1, as the numerics show, but rather let the
equations force us to that conclusion. With this assumption, the form
of the constraints~(\ref{E:con}) shows that the variables have the
following expansions to first order beyond the $N\to\infty$ form:
\begin{eqnarray} \label{E:degs}
a & = & a_0 + a_1 x^{1/3}\;,\nonumber \\
b & = & b_1x\;,\\
c & = & c_0 + c_1x^{1/3}\;,\nonumber
\end{eqnarray}
and
\begin{eqnarray} \label{E:evs}
A & = & A_0 x + A_1 x^{4/3}\;,\nonumber \\
B & = & B_1 x^{2/3}\;,\\
C & = & C_0 x + C_1 x^{4/3}\;.\nonumber
\end{eqnarray}
We see that we are actually expanding in the quantity $y=x^{1/3}$. In
terms of these variables, the negativity is given by
\begin{eqnarray}
\mathcal{M}_{1,2,3}&=&
\frac{-a A + b B + c C}{x}\nonumber\\
&=&-a_0A_0+c_0C_0+(-a_0A_1-a_1A_0+c_0C_1+c_1C_0)x^{1/3}+O(x^{2/3})\;,
\end{eqnarray}
which we now endeavor to maximize.
Substituting Eqs.~(\ref{E:degs}) and (\ref{E:evs}) into the
constraints~(\ref{E:con}) and (\ref{E:con2}) and equating terms with
equal exponents of $x$, we obtain, to zero order,
\begin{eqnarray}
a_0 + c_0 & = & 2\;,\nonumber \\
a_0 A_0 + c_0 C_0 & = & 1\;,\\
a_0 A_0^2+ c_0 C_0^2 & = & 1 \;.\nonumber
\end{eqnarray}
Solving for $a_0$, $c_0$, and $C_0$ in terms of $A_0$ and substituting
the results into the zero-order piece of $\mathcal{M}_{1,2,3}$ gives
\begin{equation}
\label{E:neg1}
\mathcal{M}_{1,2,3}=\frac{1-4A_0+2A_0^2}{1-2A_0+2A_0^2}\;.
\end{equation}
Maximizing Eq.~(\ref{E:neg1}) gives $A_0^2=1/2$ and, hence,
$A_0=-1/\sqrt{2}$, since $A$ is the negative eigenvalue. This leads to
$a_0=1-1/\sqrt{2}$, $c_0 = 1+1/\sqrt{2}$, and $C_0=1/\sqrt{2}$, and the
resulting $N\to\infty$ upper bound is $\mathcal{M}_{1,2,3}=\sqrt2$, as
expected.
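This zero-order solution is easy to cross-check numerically. The following Python sketch (a purely illustrative brute-force check, not part of the derivation) verifies that the stated values satisfy the zero-order constraints and locates the maximum of Eq.~(\ref{E:neg1}) by a grid scan over negative $A_0$:

```python
import numpy as np

s = 1 / np.sqrt(2)
a0, c0, A0, C0 = 1 - s, 1 + s, -s, s

# Zero-order constraints: a0 + c0 = 2, a0*A0 + c0*C0 = 1, a0*A0^2 + c0*C0^2 = 1
assert abs(a0 + c0 - 2) < 1e-12
assert abs(a0 * A0 + c0 * C0 - 1) < 1e-12
assert abs(a0 * A0**2 + c0 * C0**2 - 1) < 1e-12

# Eq. (E:neg1): M(A0) = (1 - 4*A0 + 2*A0^2) / (1 - 2*A0 + 2*A0^2)
def M123(a):
    return (1 - 4*a + 2*a**2) / (1 - 2*a + 2*a**2)

# A is the negative eigenvalue, so scan A0 < 0 for the maximum.
grid = np.linspace(-5.0, -1e-3, 200001)
a0_star = grid[np.argmax(M123(grid))]

# The maximum sits at A0 = -1/sqrt(2), where M = sqrt(2).
assert abs(a0_star - A0) < 1e-3
assert abs(M123(A0) - np.sqrt(2)) < 1e-12
```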
If we carry this process out to first order beyond the $N\to\infty$
behavior, we obtain, after some algebraic manipulation, $\mathcal{M}_{1,2,3} =
\sqrt2-b_1^{1/3}x^{1/3}/2^{7/6}+O(x^{2/3})$. Maximizing this simply
means making $b_1$ as small as possible, i.e., choosing $b_1=1$, whence
we obtain the following asymptotic expression for the $s=1,2,3$ upper
bound:
\begin{equation}
\mathcal{M}_{1,2,3} = \sqrt2 - \frac{1}{2^{7/6}N^{1/3}} +
O\!\left(\frac{1}{N^{2/3}}\right)\;.
\end{equation}
This shows that the upper bound of $\sqrt 2$ is approached
monotonically from below in the asymptotic regime. In addition, the
procedure verifies that in the maximum solution, the largest positive
eigenvalue is nondegenerate. For the case of qubits we have $N=2^n$,
implying that the approach to the $N\to\infty$ bound is exponentially
fast.
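The rate of approach can be made concrete by tabulating the leading asymptotic expression with the $O(N^{-2/3})$ remainder dropped (an illustrative sketch of the formula above, not a statement about the exact bound):

```python
from math import sqrt

# Leading asymptotic upper bound, remainder term dropped:
# M(N) ~ sqrt(2) - 1 / (2**(7/6) * N**(1/3))
def bound(N):
    return sqrt(2) - 1.0 / (2**(7/6) * N**(1/3))

# For qubit registers N = 2**n, so the gap to sqrt(2) shrinks as 2**(-n/3).
gaps = [sqrt(2) - bound(2**n) for n in range(2, 12)]

# The bound approaches sqrt(2) monotonically from below ...
assert all(bound(2**n) < sqrt(2) for n in range(2, 12))
assert all(g1 > g2 for g1, g2 in zip(gaps, gaps[1:]))
# ... and each extra qubit shrinks the leading-order gap by a factor 2**(-1/3).
assert abs(gaps[1] / gaps[0] - 2**(-1/3)) < 1e-12
```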
\section{Conclusion} \label{S:conclusion}
The mixed-state quantum circuit~(\ref{E:circuitalpha}) provides an
efficient method for estimating the normalized trace of a unitary
operator, a task that is thought to be exponentially hard on a
classical computer. If one believes that global entanglement is the
essential resource for the exponential speedup achieved by quantum
computation, then the question begging to be answered is whether there
is any entanglement in the circuit's output
state~(\ref{E:rhooutalpha}). The purpose of this paper was to
investigate this question.
A notable feature of the circuit~(\ref{E:circuitalpha}) is that it
provides an efficient method for estimating the normalized trace no
matter how small the initial polarization $\alpha$ of the special qubit
in the zeroth register, as long as that polarization is not zero.
Since all the other qubits are initially completely unpolarized, we are
led to characterize the computational power of this circuit as the
``power of even the tiniest fraction of a qubit.'' We provide
preliminary results regarding the entanglement that can be achieved for
$\alpha<1$. Our results are consistent with, but certainly do not
demonstrate, the conclusion that separable states cannot provide an
exponential speedup and that entanglement is possible no matter how
small $\alpha$ is. The question of entanglement for subunity
polarization of the special qubit deserves further investigation.
Our key conclusions concern the case where the special qubit is
initially pure ($\alpha=1$). We find that the
circuit~(\ref{E:circuit}) typically does produce global entanglement,
but the amount of this entanglement is quite small. Using
multiplicative negativity to measure the amount of entanglement, we
show that as the number of qubits becomes large, the multiplicative
negativity in the state~(\ref{E:rhoout}) is a vanishingly small
fraction of the maximum possible multiplicative negativity for roughly
equal splittings of the qubits. This hints that the key to
computational speedup might be the global character of the
entanglement, rather than the amount of the entanglement. In the
spirit of the pioneering contribution of Wyler~\cite{Wyler74}, what
happier motto can we find for this state of affairs than {\it Multum ex
Parvo}, or A Lot out of A Little.
\begin{acknowledgments}
The authors thank H.~N. Barnum, A.~Denney, B.~Eastin, K.~Manne, N.~C.
Menicucci, and A.~Silberfarb for useful
discussions. The Matlab code used to calculate the results of
Sec.~\ref{S:random} made use of T.~S.~Cubitt's freely available
algorithm for taking the partial transpose of a matrix; this and other
useful algorithms written by Cubitt are available at the website
\href{http://www.dr-qubit.org/matlab.html}{http://www.dr-qubit.org/matlab.html}.
The quantum circuits in this paper were typeset using Qcircuit, which
is freely available online at
\href{http://info.phys.unm.edu/Qcircuit}{http://info.phys.unm.edu/Qcircuit}.
This work was supported in part by US Army Research Office Contract
No.~W911NF-04-1-0242.
\end{acknowledgments}
q-bio/0505041
\section*{Introduction}
The origin of homochirality is usually believed to be closely
connected with the origin of life (see Bada 1995 for an overview).
It may have even been a {\it prerequisite} for life in that the structural
stability provided by chiral polymers may have been essential for the
assembly of the first replicating molecule.
If this is so, it would probably mean that the origin of homochirality
had to be a physical one.
Possible candidates for a physical origin of homochirality include the
presence of polarized light from a nearby neutron star
(Rubenstein et al.\ 1983),
magnetic fields (Thiemann 1984, Rikken and Raupach 2000), or mechanisms
involving the electroweak force (e.g., Hegstrom, 1984).
However, Bailey et al.\ (1998) and Bailey (2001) later showed that
supernova remnants do not actually display circularly polarized light.
Another perhaps more likely possibility is that homochirality developed
rather as a {\it consequence} of life.
This would mean that some primitive form of life should have been possible
without chirality having played any role in this.
In connection with the origin of life one used to discuss the hypothesis
of a relatively simple self-replicating molecule (e.g.\ Frank 1953).
This picture ignores the possible importance of compartmentalization
that may be required for achieving the concentrations necessary for the
chemical reactions to take place.
This led to the concept of a very early lipid world that would have
preceded the often discussed RNA world.
Some insight into these ideas can be gained by looking at
recent theoretical attempts to build life from scratch invoking a series of steps
and chemical processes that are thermodynamically possible
(Rasmussen {\em et al., } 2003).
Interestingly enough, their approach involves peptide nucleic acid
(PNA) because of its charge carrying properties and the molecule's hydrophobic backbone.
Its potential as a contemporary genome, which would for example require machinery such as a protein transcriptase, was not utilized at this stage, although
it may undoubtedly become a candidate for carrying genetic information at
later evolutionary stages.
Although this is speculation and details are unknown, the idea of a
combined PNA/lipid world provides
an attractive scenario for discussing the origin of homochirality in the
context of genetic evolution (Nelson {\em et al., } 2000, Pooga {\em et al., } 2001).
We picture here a situation where PNA has developed to having autocatalytic
properties, just like RNA in the RNA world (Woese 1967).
PNA can be achiral if its peptide backbone is derived from glycine.
The step toward a chiral backbone invoking, for example, poly-alanine,
seems like a relatively minor one (one of the two H atoms in the
CH$_2$ group is substituted by CH$_3$).
However, there are two different ways of doing this, leading to either
{\sc l} or {\sc d} alanine.
The assembly of mixed {\sc l} and {\sc d} PNA poly-nucleotides is unlikely,
just as it is unlikely in the corresponding case of DNA polymerization
(Joyce {\em et al., } 1984).
Moreover, the addition of a nucleotide of opposite handedness is known
to `spoil' further polymerization (also known as `enantiomeric
cross-inhibition').
This makes it increasingly unlikely that either {\sc l} or {\sc d} polymers
grow to any appreciable length beyond just a few monomers.
The main difference between PNA and DNA polymerization is that DNA
can attach new monomers only at the 3' end of the ribose sugar
(e.g.\ Turner {\em et al., } 2000), so polymerization is uni-directional.
By contrast, PNA does not have this restriction and can polymerize
in a bi-directional fashion, i.e.\ at either end.
The latter case has been addressed in a number of recent studies
starting with Sandars (2003), but the former case is more
readily amenable to laboratory verification, as is shown by recent
experiments confirming the process of enantiomeric cross-inhibition
(Schmidt et al.\ 1997, Kozlov et al.\ 1998).
Given that the differences between uni-directional and bi-directional
polymerization have not yet been explored, we first extend the
formalism of Sandars (2003) to the uni-directional case and then focus
on the comparison between the two.
Although enantiomeric cross-inhibition seems to be an important
ingredient of homochirality, this can only work if the production of
new monomers is somehow biased toward the enantiomeric excess of the
already existing polymers -- even if this bias is extremely tiny.
This is the second important ingredient of homochirality and is
known as {\it autocatalysis}.
Certain chemical reactions are indeed known to have such properties
(Soai {\em et al., } 1995, Sato et al.\ 2003, Mathew {\em et al., } 2004).
It is important to point out that these reactions are only based on
dimerization, but they are nevertheless quite valuable in establishing
the basic elements of homochiralization in chemical systems
(Kitamura et al.\ 1998, Plasson et al.\ 2004), and can lead to
quantitative predictions (Blackmond et al.\ 2001, Buono and Blackmond 2003).
For a recent review see the paper by Mislow (2003).
The importance of the combined action of enantiomeric cross-inhibition
on the one hand and autocatalysis on the other has been well known
since the very early work of Strong (1898) and a
seminal paper of Frank (1953), who first proposed a
simple mathematical model consisting of only two variables representing
the relative numbers of left and right handed building blocks.
His paper was tremendously insightful in that he understood not only the
two basic ingredients needed for homochirality, but he was also aware
that there are two rather different scenarios through which homochirality
can be achieved, depending basically on how frequent the creation of a
potential life bearing molecule is.
If the creation of life was sufficiently frequent, life may have emerged
at different locations on the Earth's surface (including the oceans),
giving rise to the interesting possibility of having different life
forms of opposite handedness simultaneously.
This is the case studied recently by Brandenburg and Multam\"aki (2004),
who estimated that left and right handed life forms could have coexisted
for not more than the first 500 million years.
This is because different populations will spread over the Earth's surface
and come eventually into contact, extinguishing one of the two life forms.
The other possibility is that the creation of life was an infrequent
event, in which case there was ever only one life form, which was then the
one that led eventually to the global population over the Earth's surface.
Regardless of which of the two scenarios applies, the final outcome would
have been the same.
In his paper, Frank (1953) only analyzed the second alternative in detail.
This is also the scenario discussed in most of the approaches since then,
which all discuss homochirality as the result of a bifurcation process
[see also Saito and Hyuga (2004a) for a recent classification of different
possibilities].
This forms also the basis for the model discussed in the present paper,
where we present a modification of a detailed polymerization model
proposed recently by Sandars (2003).
In this model the enantiomeric excess grows exponentially in time.
However, if the creation of life is a frequent event, the process
toward global homochirality can only occur linearly in time
(Brandenburg and Multam\"aki 2004; see also Saito and Hyuga 2004b for
related work).
In the model by Sandars (2003),
autocatalysis is incorporated by assuming that the rate of monomer
production of given handedness is proportional to the concentration of
polymers of the same handedness.
As noted above, this effect alone, i.e.\ without the additional effect
of enantiomeric cross-inhibition, cannot lead to complete homochirality,
because the initial enantiomeric excess is not (or only weakly) amplified.
In order to model this quantitatively, Sandars (2003) assumed that
polymerization can, at a certain rate, also occur with monomers of
opposite handedness.
This reaction produces chemically inactive products and thus acts
as a means of removing oppositely oriented building blocks (that are
already in the minority) from the system.
This model has been studied further by Wattis and Coveney (2005) and
by Brandenburg {\em et al. } (2005a, hereafter referred to as BAHN) who showed
that, for large enough fidelity of the catalyst, the departure from a
homochiral state occurs exponentially fast at a growth rate that depends
on the fidelity and the rate of enantiomeric cross-inhibition.
They also discussed a model consisting only of primers and dimers
which can be reduced to a set of two ordinary differential equations
which are similar to those of Frank (1953).
An important difference to Frank's model is the form of the
cross-inhibition term.
As discussed by Blackmond (2004), the feedback term in his model
corresponds to the formation of inactive dimers with one left and one
right handed building block. This is unrealistic, because dimers with
two left or two right handed building blocks should also form.
This led her to the conclusion that the dimers must act as catalysts.
We have emphasized that
the original model of Sandars assumed that polymerization can occur on
either end of the polymer.
While this may be a reasonable assumption in general (and probably also
for PNA), it is not realistic for RNA polymerization where polymerization
can usually only proceed in a uni-directional fashion.
Since uni-directional polymerization leads to a simpler model, and since
the derivation of the bi-directional polymerization model has already
been discussed elsewhere (see, e.g., BAHN), the uni-directional case
is ideal for introducing the basic ingredients of the model.
Following the mathematical description of the uni-directional model, we
present numerical solutions that show that the main conclusions obtained
from the earlier bi-directional polymerization models carry over to the
uni-directional case.
This addresses possible objections that the Sandars model is not
applicable to RNA and DNA polymerization, which is more easily
amenable to detailed laboratory verification.
\section*{Polymerization model}
\label{polymerization}
The starting point of the model is a basic polymerization process
\begin{equation}
L _{n} + L_{1} \stackrel{ k_{S}~}{\longrightarrow} L_{n+1} ,
\end{equation}
where $L_{n}$ denotes left handed polymers of length $n$ and $k_{S}$ the reaction rate. The corresponding model of the polymerization process reads
\begin{equation}
{{\rm d} {}\over{\rm d} {} t}[L_n] = -k_{S} [L_{1}] \left( [L_{n}] - [L_{n-1}]\right),
\label{Lneqn}
\end{equation}
where $[ L_{n} ]$ is the concentration of $L_{n}$.
New building blocks are continuously
added to the model, e.g.\ by the inclusion of a substrate
that provides a source $Q_L$ of new monomers, i.e.\
\begin{equation}
{{\rm d} {}\over{\rm d} {} t}[L_1] = Q_{L} - \sum _{n=1}^N k_{S} [L_{1}] [L_{n}] .
\label{L1eqn}
\end{equation}
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{pwave_uni}
\end{center}\caption[]{
Wave-like propagation of a finite amplitude perturbation
in the uni-directional polymerization model.
The initial profile is a gaussian.
Note the undisturbed outward propagation of the wave at $n=N$.
The time difference between the different curves is $20/(k_SQ)^{1/2}$.
We have shown the first and last times as dashed and solid lines,
respectively, and all other times as dotted lines.
The parameters are $N=50$ and $k_C/k_S=1$.
}\label{pwave}\end{figure}
The solution of Eqs.~(\ref{Lneqn}) and (\ref{L1eqn})
is simply a wave traveling toward longer polymers at velocity $k_{S}[L_1]$
(see \Fig{pwave}), as can also be seen by considering the continuous limit
of this equation,
$\partial[L_n]/\partial t=-k_{S}[L_1]\partial[L_n]/\partial n$.
Note that, in contrast to a similar result for bi-directional
polymerization (see BAHN, their Fig.~1), the functional form of $[L_n]$
is continuous between $n=1$ and $n=2$.
In the bi-directional case $[L_1]$ is about twice as large as $[L_2]$.
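The wave-like behavior is easy to reproduce numerically. In the following Python sketch we hold $[L_1]$ fixed at unity (an illustrative simplification, so that the wave speed is simply $k_S[L_1]$) and advect a Gaussian initial profile, as in \Fig{pwave}:

```python
import numpy as np

# Uni-directional polymerization hierarchy, Eq. (Lneqn), with the monomer
# concentration [L_1] held fixed at 1 (an illustrative simplification), so
# the wave speed is v = k_S [L_1] = 1.  A Gaussian initial bump is advected.
kS, L1 = 1.0, 1.0
N = 60
n = np.arange(2, N + 1)
L = np.exp(-((n - 15) / 3.0) ** 2)       # initial profile, peaked at n = 15

dt, T = 0.01, 20.0
for _ in range(int(T / dt)):
    inflow = np.concatenate(([0.0], L[:-1]))  # no perturbation enters at n = 2
    L += dt * kS * L1 * (inflow - L)

# After t = 20 the peak has advected by ~ v*t = 20, i.e. to n ~ 35.
peak = n[np.argmax(L)]
assert 31 <= peak <= 38
```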
The model becomes more interesting when the right handed polymers, $R_n$,
are also included. The interaction between the mirrored strands is assumed
to occur through two separate phenomena: enantiomeric
cross-inhibition and enzymatic autocatalysis. The autocatalysis makes the left handed, respective right handed, polymers catalyze the production of left, respective right, handed building blocks.
The source terms $Q_L$ and $Q_R$ are proportional to the concentration of the
achiral substrate $[S]$ and a corresponding reaction coefficient $k_{C}$.
In the case of perfect fidelity, $f=1$, the source terms are written as
\begin{equation}
Q_L=k_C[S]C_L,\quad
Q_R=k_C[S]C_R,
\end{equation}
where $C_L$ and $C_R$ are some measures of the catalytic effect of the
already existing left-handed and right-handed polymers.
These should be monotonically increasing functions of the overall
concentrations of the left handed and right handed polymers, respectively.
The exact functional form of these expressions is not known.
In fact, different authors have chosen different prescriptions for
$C_L$ and $C_R$.
The qualitative results of the models do, however, not seem to be affected by this choice.
We find it natural to assume that
\begin{equation}
C_{L} = \sum _{n=1}^{N} n [ L_{n} ] ,
\end{equation}
\begin{equation}
C_{R} = \sum _{n=1}^{N} n [ R_{n} ] .
\end{equation}
In the more general case of finite fidelity of the assumed autocatalysis,
i.e.\ for $0<f<1$, there will be `cross-talk' between the
two handednesses, so we write
\begin{equation}
Q_L=k_C[S]\Big\{{\textstyle{1\over2}}(1+f)C_L+{\textstyle{1\over2}}(1-f)C_R+C_{0L}\Big\},
\label{QLdef}
\end{equation}
\begin{equation}
Q_R=k_C[S]\Big\{{\textstyle{1\over2}}(1+f)C_R+{\textstyle{1\over2}}(1-f)C_L+C_{0R}\Big\} .
\label{QRdef}
\end{equation}
Here the terms $C_{0L}$ and $C_{0R}$ allow for the possibility of
non-catalytic production of left and right handed monomers.
However, in the following we assume $C_{0L} = C_{0R} =0$.
(The inclusion of $C_{0L}$ and $C_{0R}$ terms leads to so-called imperfect
bifurcations; see Fig.~6 of BAHN.)
The enantiomeric cross-inhibition occurs when a building block attaches to a polymer of the
opposite handedness. The resulting polymer cannot continue to grow at the affected end and can therefore be considered spoiled. This phenomenon has been observed in
template-directed polymerization experiments by Joyce {\em et al.} (1984). When the cross-inhibition is
included, the set of reactions in the model is (for $n\ge2$)
\begin{eqnarray}
L_{n}+L_1&\stackrel{k_S~}{\longrightarrow}&L_{n+1},
\label{react1}\\
L_{n}+R_1&\stackrel{k_I~}{\longrightarrow}&L_nR_1,
\label{react2}
\end{eqnarray}
and for both of these we have the complementary reactions
obtained by exchanging $L$ and $R$, giving four reactions in total.
The new parameter $k_{I}$ measures
the rate at which the cross-inhibition occurs.
The rate equations now read (for $n\ge2$)
\begin{equation}
{{\rm d} {}[L_{n}]\over{\rm d} {} t}=k_S[L_1]\left([L_{n-1}]-[L_{n}]\right)
-k_I[L_{n}][R_1],
\label{Ln_new2}
\end{equation}
\begin{equation}
{{\rm d} {}[R_{n}]\over{\rm d} {} t}=k_S[R_1]\left([R_{n-1}]-[R_{n}]\right)
-k_I[R_{n}][L_1].
\label{Rn_new2}
\end{equation}
The evolution of the spoiled polymers, $L_{n}R_{1}$ and $R_{n}L_{1}$,
can be discarded, because, in contrast to bi-directional polymerization,
their concentrations do not enter the uni-directional model.
In comparison with bi-directional polymerization we note that here
for $n=2$ there is no extra 1/2 factor in front of the $[L_1]^2$ and
$[R_1]^2$ terms in \Eqs{Ln_new2}{Rn_new2}.
This is because with polymerization from either end the total reaction
rate would be twice as big.
However, when two monomers interact, the corresponding reaction equation
is the same for uni-directional and bi-directional polymerization, because
the two reacting monomers are indistinguishable.
Thus, whether the first binds to the second or the second to the first
monomer does not make a difference.
This is then equivalent to saying that for two monomers polymerization
can occur both on the 3' and on the 5' end of the ribose sugar.
In effect, this removes an awkward 1/2 factor for the $n=2$ equations
in the model of Sandars (2003); see also Eq.~(7) of BAHN.
The reactions \eqs{react1}{react2} imply the presence of additional loss
terms in the evolution equations of monomers, so instead of \Eq{L1eqn}
we now have
\begin{equation}
{{\rm d} {}\over{\rm d} {} t}[L_1]=Q_{L}-\lambda_L[L_1],
\label{L1eqn2}
\end{equation}
\begin{equation}
{{\rm d} {}\over{\rm d} {} t}[R_1]=Q_{R}-\lambda_R[R_1],
\label{R1eqn2}
\end{equation}
where we have defined decay rates
\begin{equation}
\lambda_L=k_{S}\left([L_1]+\sum_{n=1}^N [L_{n}]\right)
+k_{I}\sum_{n=1}^N [R_{n}],
\label{lamL}
\end{equation}
\begin{equation}
\lambda_R=k_{S}\left([R_1]+\sum_{n=1}^N [R_{n}]\right)
+k_{I}\sum_{n=1}^N [L_{n}].
\label{lamR}
\end{equation}
Comparing again with the bi-directional model, the present model has an
extra $[L_1]$ (or $[R_1]$) term, but there is no factor of 2 in front
of the $k_S$ and $k_I$ terms and the sums over the concentrations of
semi-spoiled polymers are also absent.
From symmetry considerations it follows that there always exists a racemic
steady state ($[R_n]=[L_n]$) of the rate equations. In fact, we can show that a steady
state is given by (for $n\ge2$)
\begin{equation}
[L_n]= \left( 1+ \frac{k_I}{k_S} \right) ^{-(n-1)}[L_1]
\quad\mbox{(racemic)}.
\label{racemic}
\end{equation}
In particular, if $k_I=k_S$, then $[L_n]=[L_1]/2^{n-1}$,
i.e.\ $[L_n]$ drops by a factor of 2 from one $n$ to the next.
This is also true of the ratio between $[L_1]$ and $[L_2]$, while in the
bi-directional model their ratio is 4.
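The racemic steady state is easily checked numerically. The following Python sketch (with arbitrary illustrative rate constants) confirms that the profile~(\ref{racemic}) makes the right-hand side of Eq.~(\ref{Ln_new2}) vanish when $[R_1]=[L_1]$:

```python
# Quick numerical check of Eq. (racemic): with [R_1] = [L_1], the profile
# [L_n] = (1 + kI/kS)**(-(n-1)) [L_1] zeroes the right-hand side of
# Eq. (Ln_new2) for every n >= 2.
kS, kI, L1 = 1.0, 0.7, 1.0    # arbitrary illustrative rate constants

def Ln(n):
    return (1 + kI / kS) ** (-(n - 1)) * L1

max_res = max(abs(kS * L1 * (Ln(m - 1) - Ln(m)) - kI * Ln(m) * L1)
              for m in range(2, 30))
assert max_res < 1e-12
```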
While the existence of a racemic solution is trivial, the interesting question is whether there
exist other fixed points of the equations and, if so, which of these fixed points are stable
under which conditions. As was shown in BAHN, the model typically
goes through a pitchfork bifurcation from a single stable fixed point (the racemic solution) to
a state with two homochiral stable fixed points where the racemic solution corresponds to an unstable fixed point.
The order parameter controlling the bifurcation is the fidelity $f$
of the autocatalysis.
In \Fig{bifurc} we show the enantiomeric excess, defined here as
\begin{equation}
\eta={C_R-C_L\over C_R+C_L},
\label{etdef}
\end{equation}
for $k_I/k_S=1$ and $k_I/k_S=0.1$.
We also compare with the corresponding result from the bi-directional
polymerization model.
The difference between the two cases is, however, surprisingly small.
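For readers who wish to experiment with the uni-directional model, the following Python sketch integrates the rate equations above with the substrate concentration $[S]$ held fixed and $C_{0L}=C_{0R}=0$ (both illustrative simplifications). As a consistency check, an exactly racemic initial state remains racemic to machine precision, since the equations are symmetric under $L\leftrightarrow R$:

```python
import numpy as np

# Forward-Euler integration of the uni-directional model.  [S] is held fixed
# and C_{0L} = C_{0R} = 0 (illustrative simplifications).  L[0] is [L_1].
def step(L, R, dt, kS=1.0, kI=1.0, kC=1.0, S=1.0, f=0.8):
    n = np.arange(1, len(L) + 1)
    CL, CR = np.sum(n * L), np.sum(n * R)
    QL = kC * S * (0.5*(1 + f)*CL + 0.5*(1 - f)*CR)   # Eq. (QLdef)
    QR = kC * S * (0.5*(1 + f)*CR + 0.5*(1 - f)*CL)   # Eq. (QRdef)
    lamL = kS * (L[0] + L.sum()) + kI * R.sum()       # Eq. (lamL)
    lamR = kS * (R[0] + R.sum()) + kI * L.sum()       # Eq. (lamR)
    dL, dR = np.empty_like(L), np.empty_like(R)
    dL[0] = QL - lamL * L[0]                          # Eq. (L1eqn2)
    dR[0] = QR - lamR * R[0]                          # Eq. (R1eqn2)
    dL[1:] = kS * L[0] * (L[:-1] - L[1:]) - kI * L[1:] * R[0]   # Eq. (Ln_new2)
    dR[1:] = kS * R[0] * (R[:-1] - R[1:]) - kI * R[1:] * L[0]   # Eq. (Rn_new2)
    return L + dt * dL, R + dt * dR

N = 20
L = np.full(N, 1e-3)
R = L.copy()                  # exactly racemic initial condition
for _ in range(20000):        # integrate to t = 2 with dt = 1e-4
    L, R = step(L, R, 1e-4)

n = np.arange(1, N + 1)
eta = (np.sum(n*R) - np.sum(n*L)) / (np.sum(n*R) + np.sum(n*L))
assert abs(eta) < 1e-12       # racemic state stays racemic
assert np.all(L >= 0) and np.all(np.isfinite(L))
```

Starting instead from a slightly biased state and varying the fidelity $f$ allows the bifurcation behavior of \Fig{bifurc} to be explored.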
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{bifurc}
\end{center}\caption[]{
Bifurcation diagram for two different values of $k_I/k_S$ (=1 in the upper
panel and 0.1 in the lower panel).
Note the transition from a racemic to homochiral state as a function of
the autocatalytic fidelity $f$. Homochirality is measured in terms of $\eta =
\sum_{n} n \left([ R_{n} ] - [ L_{n} ]\right)/\sum_{n} n \left([ R_{n} ] + [ L_{n} ]\right)$.
For weak enantiomeric cross-inhibition ($k_I/k_S=0.1$ in the lower panel)
the range of permissible values of the fidelity parameter is decreased,
demonstrating the importance of enantiomeric cross-inhibition.
}\label{bifurc}\end{figure}
\section*{Polymer dissociation}
The model described in the last section provides a possible explanation of homochirality, without
appealing to external mechanisms for the symmetry breaking. One may also argue that the model
is rather realistic in that it explicitly considers the polymerization process. Less satisfactory are some
of the details in the description of the polymerization process. Perhaps most importantly, the polymerization process is
irreversible; no chain-breaking is included in the model.
As we have already pointed out in an earlier paper
(Brandenburg {\em et al., } 2005b, hereafter referred to as BAN), this is unrealistic
because for large enough fidelity the polymer length always tends to diverge.
Also, the model cannot be
self-contained since there is no feedback from the polymers back to the substrate.
Before discussing in more detail the differences between uni-directional and
bi-directional polymerization in the presence of dissociation, let us first
recall the main aspects of the polymerization model with dissociation, as
developed recently by BAN.
The dissociation process is described by the reaction
\begin{eqnarray*}
L_n &\stackrel{\gamma_S~}{\longrightarrow} & L_m + L_{n-m},
\end{eqnarray*}
and the corresponding reaction for the right handed polymers. It turns out that there are a
number of subtleties that need consideration when constructing the detailed model of the
chain breaking. For example, if we assume that the fragments can continue to
polymerize, the result is a catastrophic over-abundance of the short chains. The reason for this is
that all building blocks ($L_{1}$ and $R_{1}$) are used to produce longer polymers whereas
polymers of length two or more cannot (according to the reactions above) agglomerate into longer polymers. One remedy would of course be to include agglomeration in the model,
but the disadvantage of this is that the model then becomes significantly more complex due to the
higher degree of nonlinearity.
These issues are discussed in further detail in BAN, where
a number of possible modifications of the model are also considered.
We focus here on the model where the polymerization fragments
are recycled back into the achiral substrate.
In the rest of this paper we discuss the modifications necessary to
incorporate dissociation in a uni-directional polymerization model.
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{pnl}
\end{center}\caption[]{
Isotactic equilibrium states with polymerization, dissociation, and
recycling of fragments into the substrate, for different values of
${\cal M}$ (upper panel), and the mean polymer length $N_L$ (lower panel,
solid line), compared with the bi-directional polymerization model of
BAHN (dotted line).
}\label{pnl}\end{figure}
In the presence of dissociation, the new system of equations is
\begin{eqnarray*}
{{\rm d} {}\over{\rm d} {} t}[L_n] & = & p_n^{(L)}-(n-1)\gamma_S[L_n], \\
{{\rm d} {}\over{\rm d} {} t}[R_n] & = & p_n^{(R)}-(n-1)\gamma_S[R_n],
\end{eqnarray*}
where $p_n^{(L)}$ and $p_n^{(R)}$
indicate the terms due to polymerization described above. The source term in the substrate
is given by
\begin{equation}
Q=W_L+W_R+W_{LR}+W_{RL}
\end{equation}
where
\begin{equation}
W_L=\sum_{n=1}^N n w_n^{(L)},\quad
W_R=\sum_{n=1}^N n w_n^{(R)},
\end{equation}
are the total numbers of recycled left handed and right handed building
blocks, and
\begin{equation}
W_{LR}=\sum_{n=1}^N (n+1) w_n^{(LR)},\quad
W_{RL}=\sum_{n=1}^N (n+1) w_n^{(RL)}
\end{equation}
are the corresponding contributions from fragmented (inactive) polymers.
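As a minimal consistency check on the recycling bookkeeping, the following Python sketch switches off polymerization and the handedness/inactive-polymer channels (illustrative simplifications) and verifies that breakup plus recycling conserves the total number of building blocks $\sum_n n\,[L_n] + [S]$:

```python
import numpy as np

# Bookkeeping sketch for dissociation with recycling (polymerization switched
# off for clarity): a chain of length n breaks at any of its n-1 bonds at rate
# gamma_S, and all fragments are returned to the substrate, so the total
# number of building blocks  sum_n n*[L_n] + S  is conserved.
gamma_S = 0.5
N = 30
n = np.arange(1, N + 1)
L = np.exp(-0.1 * n)          # arbitrary initial length distribution
S = 1.0                       # substrate, in units of building blocks

total0 = np.sum(n * L) + S
dt = 1e-3
for _ in range(2000):         # integrate to t = 2
    breakup = (n - 1) * gamma_S * L     # loss rate of chains of length n
    S += dt * np.sum(n * breakup)       # each lost chain recycles n monomers
    L -= dt * breakup

assert abs(np.sum(n * L) + S - total0) < 1e-9
```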
As in the bi-directional case, the
average polymer length scales with a quarter power of the parameter ${\cal M}=
(k_{S} / \gamma _{S})\sum _{n=1}^N n \left( [L_{n}] + [ R_{n}] \right)$.
Thus, in order to achieve appreciable polymer length, the
normalized total mass must be sufficiently large.
Histograms of the chain distribution and the dependence of the chain
length on the total normalized mass are given in \Fig{pnl} and compared
with the bi-directional case.
For small chain mass (${\cal M}\leq10$) the chains tend to be very
short ($N_L\approx1...2$), which is common to both bi-directional and
uni-directional cases.
For larger total mass, however, the two cases begin to depart from each
other such that for the same total mass the chains are slightly shorter
in the uni-directional case.
\section*{Conclusions}
In the present paper we have modified the polymerization model of Sandars
(2003) such that polymerization is only possible on one of the two ends
of the polymer.
Although PNA polymerization is probably still bi-directional,
this is normally not the case for RNA polymerization.
The significance of considering RNA polymerization is that it is readily
amenable to direct experimental verification (e.g., Joyce 1984).
One of the perhaps most curious properties of the model is the
wave-like evolution of the polymer length after initializing the
polymerization process.
This prediction could possibly be tested experimentally by setting
up a range of different polymerization experiments that are being
stopped at different times.
A subsequent analysis, as is done for DNA sequencing, might then reveal
a structure as seen in \Fig{pwave}.
We emphasize that homochirality appears spontaneously when two separate mechanisms are present in the polymerization process: autocatalysis and enantiomeric cross-inhibition. The accuracy of the
autocatalysis is parameterized by a fidelity factor. At low fidelity the polymerization leads to
a racemic solution whereas at higher fidelity a homochiral state is reached from an initially (almost) racemic solution. The corresponding bifurcation diagram displays a classic pitchfork bifurcation
and the autocatalytic fidelity acts as a control parameter.
The differences between uni-directional and bi-directional polymerization
are however surprisingly small.
In the second part of this paper we have extended the model to include
dissociation within the framework of uni-directional polymerization.
As in the case of bi-directional polymerization,
the model becomes chemically more realistic in that longer chains are
now possible.
Moreover, the model is constructed to be self-contained
in that the need for external
replenishing of the substrate is now replaced by the recycling
of dissociation fragments.
With respect to chirality, the qualitative behavior of the model
is shown to persist under the inclusion of dissociation. We therefore conclude
that the existence of a transition between a racemic and homochiral state, as a function of the
autocatalytic fidelity, is a robust phenomenon within the class of models under consideration.
\section*{References}
\begin{list}{}{\leftmargin 3em \itemindent -3em\listparindent \itemindent
\itemsep 0pt \parsep 1pt}\item[]
Bada, J. L.\ynat{1995}{374}{594}
{595}{Origins of homochirality}
Bailey, J., Chrysostomou, A., Hough, J. H., Gledhill, T. M., McCall,
A., Clark, S., M\'enard, F. and Tamura, M.\ysci{1998}{281}{672}
{674}{Circular polarization in star forming regions: implications for
biomolecular homochirality}
Bailey, J.\yoleb{2001}{31}{167}
{183}{Astronomical sources of circularly polarized light and the origin
of homochirality}
Blackmond, D. G.\ypnas{2004}{101}{5732}
{5736}{Asymmetric autocatalysis and its implications for the origin
of homochirality}
Blackmond, D. G., McMillan, C. R., Ramdeehul, S., Schorm, A., and
Brown J. M.\yjacs{2001}{123}{10103}
{10104}{Origins of asymmetric amplification in autocatalytic alkylzinc
additions}
Brandenburg, A., Andersen, A., H\"ofner, S., and Nilsson, M.\poleb{2005a}
{Homochiral growth through enantiomeric cross-inhibition}
Preprints available online at: \url{http://arXiv.org/abs/q-bio/0401036} (BAHN).
Brandenburg, A., Andersen, A., and Nilsson, M.\poleb{2005b}
{Dissociation in a polymerization model of homochirality}
Preprints available online at: \url{http://arXiv.org/abs/q-bio/0502008} (BAN).
Brandenburg, A., and Multam\"aki, T.\yjourS{2004}{Int.\ J.\ Astrobiol.}{3}{209}
{219}{How long can left and right handed life forms coexist?}
Buono, F. G., and Blackmond, D. G.\yjacs{2003}{125}{8978}
{8979}{Kinetic evidence for a tetrameric transition state in the
asymmetric autocatalytic alkylation of pyrimidyl aldehydes}
Frank, F. C.\yjour{1953}{Biochim.\ Biophys.\ Acta}{11}{459}
{464}{On Spontaneous Asymmetric Synthesis}
Joyce, G. F., Visser, G. M., van Boeckel, C. A. A., van Boom, J. H.,
Orgel, L. E., and Westrenen, J.\ynat{1984}{310}{602}
{603}{Chiral selection in poly(C)-directed synthesis of oligo(G)}
Hegstrom, R. A.\yjour{1984}{Orig.\ Life}{14}{405}
{414}{Parity nonconservation and the origin of biological chirality --
theoretical calculations}
Kitamura, M., Suga, S., Oka, H., and Noyori, R.\yjacs{1998}{120}{9800}
{9809}{Quantitative analysis of the chiral amplification in the amino
alcohol-promoted asymmetric alkylation of aldehydes with dialkylzincs}
Kozlov, I. A., Pitsch, S., and Orgel, L. E.\ypnas{1998}{95}{13448}
{13452}{Oligomerization of activated D- and L-guanosine mononucleotides
on templates containing D- and L-deoxycytidylate residues}
Mathew, S. P., Iwamura, H., and Blackmond,
D. G.\yjour{2004}{Ang. Chem. Int. Ed.}{43}{3317}
{3331}{Amplification of enantiomeric excess in a proline-mediated reaction}
Mislow, K.\yjour{2003}{Collect. Czech. Chem. Commun.}{68}{849}
{864}{Absolute asymmetric synthesis: a commentary}
Nelson, K. E., Levy, M., and Miller, S. L.\yjour{2000}
{Proc.\ Nat.\ Acad.\ Sci.\ U.S.A.}{97}{3868}
{3871}{Peptide Nucleic Acids rather than RNA may have been
the first genetic molecule}
Plasson, R., Bersini, H., and Commeyras, A.\ypnas{2004}{101}{16733}
{16738}{Recycling Frank: spontaneous emergence of homochirality in
noncatalytic systems}
Pooga, M., Land, T., Bartfai, T., and
Langel, \"U.\yjour{2001}{Biomol.\ Eng.}{17}{183}
{192}{PNA oligomers as tools for specific modulation of gene expression}
Rasmussen, S., Chen, L., Nilsson, M., and
Abe, S.\yjour{2003}{Artif.\ Life}{9}{269}
{316}{Bridging nonliving and living matter}
Rikken, G. L. J. A. and Raupach, E.\ynat{2000}{405}{932}
{935}{Enantioselective magnetochiral photochemistry}
Rubenstein, E., Bonner, W. A., Noyes, H. P.,
and Brown, G. S.\ynat{1983}{306}{118}
{118}{Super-novae and life}
Sandars, P. G. H.\yoleb{2003}{33}{575}
{587}{A toy model for the generation of homochirality during polymerization}
Saito, Y. and Hyuga, H.\yjour{2004a}{J.\ Phys.\ Soc.\ Jap.}{73}{33}
{35}{Complete homochirality induced by the nonlinear autocatalysis
and recycling} (SH)
Saito, Y. and Hyuga, H.\yjour{2004b}{J.\ Phys.\ Soc.\ Jap.}{73}{1685}
{1688}{Homochirality proliferation in space}
Sato, I., Urabe, H., Ishiguro, S., Shibata, T., and
Soai, K.\yjour{2003}{Angew. Chem. Int. Ed.}{42}{315}
{317}{Amplification of chirality from extremely low to greater than
99.5\% {\it ee} by asymmetric autocatalysis}
Schmidt, J. G., Nielsen, P. E., and Orgel, L. E.\yjacs{1997}{119}{1494}
{1495}{Enantiomeric cross-inhibition in the synthesis of oligonucleotides
on a nonchiral template}
Soai, K., Shibata, T., Morioka, H., and Choji, K.\ynat{1995}{378}{767}
{768}{Asymmetric autocatalysis and amplification of enantiomeric excess
of a chiral molecule}
Strong, W. M.\ynat{1898}{59}{53}
{54}{Stereochemistry and vitalism}
Thiemann, W.\yoleb{1984}{14}{421}
{426}{Speculations and facts on the possible inductions of chirality
through earth magnetic field}
Turner, P. C., McLennan, A. G., Bates, A. D., and
White, M. R. H.\ybook{2000}{Molecular biology}
{BIOS Scientific Publishers, Taylor \& Francis Group, London and New York}
Wattis, J. A. D. and Coveney, P. V.\poleb{2005}
{Symmetry-breaking in chiral polymerization}
Preprints available online at: \url{http://arXiv.org/abs/physics/0402091}
Woese, C.\ybook{1967}{The Genetic Code}{New York: Harper and Row}
\end{list}
\end{document}
1907.00168
\section{Introduction}
The automatic correction of errors in text [\textit{In a such situaction} $\rightarrow$ \textit{In such a situation}] is receiving more and more attention from the natural language processing community. A series of competitions has been devoted to grammatical error correction (GEC): the CoNLL-2013 shared task~\citep{conll2013}, the CoNLL-2014 shared task~\citep{conll2014}, and finally the BEA 2019 shared task~\citep{bea19}. This paper presents the contributions from the Cambridge University Engineering Department to the latest GEC competition at the BEA 2019 workshop.
We submitted systems to two different tracks. The {\em low-resource track} did not permit the use of parallel training data except for a small development set with around 4K sentence pairs. For our low-resource system we extended our prior work on finite state transducer based GEC~\citep{fst-gec} to handle new error types such as punctuation errors as well as insertions and deletions of a small number of frequent words. For the {\em restricted track}, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction.\footnote{Models will be published at \url{http://ucam-smt.github.io/sgnmt/html/bea19_gec.html}.} We confirm the results of \citet{backtranslation-gec} and report substantial gains by applying back-translation~\citep{backtranslation} to GEC -- a data augmentation technique common in machine translation. Furthermore, we noticed that large parts of the training data do not match the target domain. We mitigated the domain gap by over-sampling the in-domain training corpus, and by fine-tuning through continued training. Our final model is an ensemble of four neural machine translation (NMT) models and two neural language models (LMs) with Transformer architecture~\citep{transformer}. Our purely neural system was also part of the joint submission with the Cambridge University Computer Lab described by \citet{bea19-cled}.
\section{Low-resource Track Submission}
\subsection{FST-based Grammatical Error Correction}
\begin{figure*}[t!]
\centering
\small
\includegraphics[width=0.82\linewidth]{i.png}
\caption{Input FST $I$ representing the source sentence `In a such situaction there is no other way.'. We follow standard convention and highlight the start state in bold and the final state with a double circle.}
\label{fig:input-fst}
\end{figure*}
\citet{fst-gec} investigated the use of finite state transducers (FSTs) for neural grammatical error correction. They proposed a cascade of FST compositions to construct a hypothesis space which is then rescored with a neural language model. We will outline this approach and explain our modifications in this section. For more details we refer to \citep{fst-gec}.
In a first step, the source sentence is converted to an FST $I$ (Fig.~\ref{fig:input-fst}). This initial FST is augmented by composition (denoted with the $\circ$-operator) with various other FSTs to cover different error types. Composition is a widely used standard operation on FSTs and supported efficiently by FST toolkits such as OpenFST~\citep{openfst}. We construct the hypothesis space as follows:\footnote{Note that our description differs from \citep{fst-gec} in the following ways: First, we use additional FSTs to allow insertions and deletions. Second, we integrate penalties directly into the FSTs rather than using special tokens in combination with a penalization transducer.}
\begin{figure}[t!]
\centering
\small
\includegraphics[width=0.4\linewidth]{d.png}
\caption{Deletion FST $D$ which can map any token in the list $R$ from Tab.~\ref{tab:deletion-list} to $\epsilon$. The $\sigma$-label matches any symbol and maps it to itself.}
\label{fig:deletion-fst}
\end{figure}
\begin{table}
\centering
\small
\begin{tabular}{|c|c|}
\hline
\textbf{Deletion Frequency} & \textbf{Token} \\
\textbf{(dev set)} & \\
\hline
164 & the \\
78 & , \\
50 & a \\
33 & to \\
20 & it \\
18 & of \\
16 & in \\
12 & that \\
8 & will \\
8 & have \\
8 & for \\
8 & an \\
7 & is \\
7 & - \\
6& they \\
6 & 's \\
6 & and \\
5 & had \\
\hline
\end{tabular}
\caption{List of tokens $R$ that can be deleted by the deletion transducer $D$ in Fig.~\ref{fig:deletion-fst}.}\label{tab:deletion-list}
\end{table}
\begin{figure}[t!]
\centering
\small
\includegraphics[width=0.7\linewidth]{e.png}
\caption{Edit FST $E$ which allows substitutions with a cost of $\lambda_\text{sub}$. The $\sigma$-label matches any symbol and maps it to itself at no cost.}
\label{fig:edit-fst}
\end{figure}
\begin{figure}[t!]
\centering
\small
\includegraphics[width=0.68\linewidth]{a.png}
\caption{Insertion FST $A$ for adding the symbols ``,'', ``-'', and ``'s'' at a cost of $\lambda_\text{ins}$. The $\sigma$-label matches any symbol and maps it to itself at no cost.}
\label{fig:add-fst}
\end{figure}
\begin{table*}
\centering
\small
\begin{tabular}{|ccc|cc|ccc|ccc|}\hline
{\bf Sub} & {\bf Del} & {\bf Ins} & {\bf LM} & {\bf Beam} & \multicolumn{3}{c|}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
& & & & & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERRANT} \\ \hline
\multicolumn{5}{|l|}{Best published: \citet{fst-gec}} & 54.12 & 25.52 & 44.21 & \multicolumn{3}{c|}{n/a} \\ \hline
\checkmark & & & 1x & 8 & 58.59 & 24.14 & 45.58 & 42.44 & 14.68 & 30.79 \\
\checkmark & \checkmark & & 1x & 8 & 59.01 & 26.07 & 47.11 & 41.21 & 16.47 & 31.69 \\
\checkmark & \checkmark & \checkmark & 1x & 8 & 52.89 & 26.68 & 44.20 & 40.09 & 19.97 & 33.36 \\
\checkmark & \checkmark & \checkmark & 2x & 8 & 54.05 & 26.71 & 44.87 & 40.70 & 20.01 & 33.73 \\
\checkmark & \checkmark & \checkmark & 2x & 16 & 57.05 & 27.22 & 46.80 & 42.02 & 19.76 & 34.29 \\
\checkmark & \checkmark & \checkmark & 2x & 32 & 58.48 & 28.21 & 48.15 & 42.37 & 19.92 & 34.58 \\
\hline
\end{tabular}
\caption{Results on the low-resource track. The $\lambda$-parameters are tuned on the BEA-2019 dev set.}\label{tab:results-low-resource}
\end{table*}
\begin{enumerate}
\item We compose the input $I$ with the deletion transducer $D$ in Fig.~\ref{fig:deletion-fst}. $D$ allows the deletion of tokens on the short list shown in Tab.~\ref{tab:deletion-list} at a cost of $\lambda_\text{del}$. We selected $R$ by looking up all tokens that were deleted in the dev set more than five times and then manually filtered that list slightly. We did not use the full list of dev set deletions to avoid under-estimating $\lambda_\text{del}$ in tuning.
\item In a next step, we compose the transducer from step 1 with the edit transducer $E$ in Fig.~\ref{fig:edit-fst}. This step addresses substitution errors such as spelling or morphology errors. Like \citet{fst-gec}, we use the confusion sets of \citet{chris-lm} based on CyHunspell for spell checking \citep{cyhunspell}, the AGID morphology database for morphology errors \citep{agid}, and manually defined corrections for determiner and preposition errors to construct $E$. Additionally, we extracted all substitution errors from the BEA-2019 dev set which occurred more than five times, and added a small number of manually defined rules that fix tokenization around punctuation symbols.
\item We found it challenging to allow insertions in LM-based GEC because the LM often prefers inserting words with high unigram probability such as articles and prepositions before less predictable words like proper names. We therefore restrict insertions to the three tokens ``,'', ``-'', and ``'s'' and allow only one insertion per sentence. We achieve this by adding the transducer $A$ in Fig.~\ref{fig:add-fst} to our composition cascade.
\item Finally, we map the word-level FSTs to the subword-level by composition with a mapping transducer $T$ that applies byte pair encoding \citep[BPE]{bpe} to the full words. Word-to-BPE mapping transducers have been used in prior work to combine word-level models with subword-level neural sequence models~\citep{fst-gec,sgnmt1,sgnmt2,mbr-nmt}.
\end{enumerate}
In a more condensed form, we can describe the final transducer as:
\begin{equation}
\label{eq:fst-cascade}
I\circ D \circ E \circ A \circ T
\end{equation}
with $D$ for deletions, $E$ for substitutions, $A$ for insertions, and $T$ for converting words to BPE tokens. Path scores in the FST in Eq.~\ref{eq:fst-cascade} are the accumulated penalties $\lambda_\text{del}$, $\lambda_\text{sub}$, and $\lambda_\text{ins}$. The $\lambda$-parameters are tuned on the dev set using a variant of Powell search~\citep{powell}. We apply standard FST operations like output projection, $\epsilon$-removal, determinization, minimization, and weight pushing~\citep{mohri-lang,mohri-push} to help downstream decoding. Following \citet{fst-gec} we then use the resulting transducer to constrain a neural LM beam decoder.
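The hypothesis space of Eq.~\ref{eq:fst-cascade} can be mimicked on a toy scale by directly enumerating penalty-weighted single-edit candidates. The following pure-Python sketch is purely illustrative: the token lists, confusion set, and penalty values are stand-ins for the transducers $D$, $E$, and $A$, which in the actual system are built and composed with OpenFST and then rescored by a neural LM.

```python
# Toy analogue of the hypothesis space I∘D∘E∘A from Eq. (1): enumerate
# single-edit candidates of a token sequence, each carrying the penalty
# (lambda_del, lambda_sub, lambda_ins) its edit would accumulate as an
# FST path score. All lists and penalty values below are illustrative.
LAM_DEL, LAM_SUB, LAM_INS = 1.0, 1.5, 2.0

DELETABLE = {"the", ",", "a", "to"}            # subset of the list R (Tab. 2)
CONFUSIONS = {"situaction": ["situation"]}     # toy confusion set for E
INSERTABLE = [",", "-", "'s"]                  # the three tokens allowed by A

def single_edit_candidates(tokens):
    """Yield (candidate_tokens, penalty) pairs for at most one deletion,
    substitution, or insertion; the identity path carries penalty 0."""
    yield tokens, 0.0
    for i, tok in enumerate(tokens):
        if tok in DELETABLE:                                   # transducer D
            yield tokens[:i] + tokens[i + 1:], LAM_DEL
        for sub in CONFUSIONS.get(tok, []):                    # transducer E
            yield tokens[:i] + [sub] + tokens[i + 1:], LAM_SUB
    for i in range(len(tokens) + 1):                           # transducer A
        for ins in INSERTABLE:
            yield tokens[:i] + [ins] + tokens[i:], LAM_INS

cands = list(single_edit_candidates("In a such situaction".split()))
```

In the real system these accumulated penalties are path weights in the composed FST, and the constrained neural LM beam decoder trades them off against the LM score.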
\subsection{Experimental Setup}
\label{sec:low-resource-setup}
Our LMs are Transformer~\citep{transformer} decoders (\texttt{transformer\_big}) trained using the Tensor2Tensor library~\citep{t2t}. We delay SGD updates~\citep{ucam-wmt18,danielle-syntax} with factor 2 to simulate 500K training steps with 8 GPUs on 4 physical GPUs. Training batches contain about 4K source and target tokens. Our LM training set comprises the monolingual {\em news2015}-{\em news2018} English training sets\footnote{\url{http://www.statmt.org/wmt19/translation-task.html}} from the WMT evaluation campaigns~\citep{wmt18} after language detection~\citep{langdetect} (138M sentences) and subword segmentation using byte pair encoding~\citep{bpe} with 32K merge operations. For decoding, we use our SGNMT tool~\citep{sgnmt1,sgnmt2} with OpenFST backend~\citep{openfst}.
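The delayed SGD updates mentioned above amount to gradient accumulation: gradients of several consecutive batches are summed before a single parameter update, simulating a proportionally larger number of GPUs. A minimal sketch on a toy quadratic loss follows; the loss, learning rate, and batch stream are illustrative and not the Tensor2Tensor settings.

```python
# Minimal sketch of delayed SGD updates (gradient accumulation): gradients
# of `delay` consecutive batches are summed before one parameter update,
# which simulates a batch `delay` times larger on fewer physical GPUs.
# The toy loss f(w) = (w - 3)^2 is illustrative only.
def grad(w, _batch):
    return 2.0 * (w - 3.0)          # df/dw; the toy "batch" is ignored

def delayed_sgd(w, batches, lr=0.1, delay=2):
    acc, seen = 0.0, 0
    for b in batches:
        acc += grad(w, b)           # accumulate instead of updating
        seen += 1
        if seen == delay:           # one update per `delay` batches
            w -= lr * acc / delay   # apply the averaged gradient
            acc, seen = 0.0, 0
    return w

w_final = delayed_sgd(0.0, range(100), lr=0.1, delay=2)
```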
\subsection{Results}
We report M2~\citep{m2} scores on the CoNLL-2014 test set~\citep{conll2014} and span-based ERRANT scores~\citep{errant} on the BEA-2019 dev set~\citep{bea19}. On CoNLL-2014 we compare with the best published results obtained with a comparable amount of parallel training data. We refer to \citep{bea19} for a full comparison of BEA-2019 systems. We tune our systems on BEA-2019 and only report the performance on CoNLL-2014 for comparison to prior work.
Tab.~\ref{tab:results-low-resource} summarizes our low-resource experiments. Our substitution-only system already outperforms the prior work of \citet{fst-gec}. Allowing for deletions and insertions improves the ERRANT score on BEA-2019 Dev by 2.57 points. We report further gains on both test sets by ensembling two language models and increasing the beam size.
\subsection{Differences Between CoNLL-2014 and BEA-2019 Dev}
\label{sec:diff-conll-bea}
Our results in Tab.~\ref{tab:results-low-resource} differ significantly between the CoNLL-2014 test set and the BEA-2019 dev set. Allowing insertions is beneficial on BEA-2019 Dev but decreases the M2 score on CoNLL-2014. Increasing the beam size improves our system by 3.28 points on CoNLL-2014 while the impact on BEA-2019 Dev is smaller (+0.85 points). These differences can be partially explained by comparing the error type frequencies in the reference annotations in both test sets (Tab.~\ref{tab:conll-vs-bea}). Samples in CoNLL-2014 generally need more corrections per sentence than in BEA-2019 Dev. More importantly, the CoNLL-2014 test set contains fewer missing words, but many more unnecessary words than BEA-2019 Dev. This mismatch interferes with tuning as we explicitly tune insertion and deletion penalties.
\begin{table}
\centering
\small
\begin{tabular}{|l|r|r|r|r|}
\hline
& \multicolumn{2}{c|}{{\bf Per Sentence}} & \multicolumn{2}{c|}{{\bf Per Word}} \\
& {\bf CoNLL} & {\bf BEA} & {\bf CoNLL} & {\bf BEA} \\ \hline
Missing & 0.35 & 0.46 & 1.51\% & 2.30\% \\
Replacement & 1.52 & 1.31 & 6.62\% & 6.57\% \\
Unnecessary & 0.42 & 0.19 & 1.83\% & 0.98\% \\ \hline
Total & 2.29 & 1.96 & 9.95\% & 9.86\% \\
\hline
\end{tabular}
\caption{Number of correction types in CoNLL-2014 and BEA-2019 Dev references.}\label{tab:conll-vs-bea}
\end{table}
\section{Restricted Track Submission}
In contrast to our low-resource submission, our restricted system entirely relies on neural models and does not use any external NLP tools, spell checkers, or hand-crafted confusion sets. For simplicity, we also chose to use standard implementations~\citep{t2t} of standard Transformer~\citep{transformer} models with standard hyper-parameters. This makes our final system easy to deploy as it is a simple ensemble of standard neural models with minimal preprocessing (subword segmentation). Our contributions on this track focus on NMT training techniques such as over-sampling, back-translation, and fine-tuning. We show that over-sampling effectively reduces domain mismatch. We found back-translation~\citep{backtranslation} to be a very effective technique to utilize unannotated training data. However, while over-sampling is commonly used in machine translation to balance the number of real and back-translated training sentences, we report that using over-sampling this way for GEC hurts performance. Finally, we propose a combination of checkpoint averaging~\citep{ckpt-avg} and continued training to adapt our NMT models to the target domain.
\subsection{Experimental Setup}
\begin{table}
\centering
\small
\begin{tabular}{|l|l|l|}
\hline
& {\sc Base} & {\sc Big} \\ \hline
T2T HParams set & \texttt{trans.\_base} & \texttt{trans.\_big} \\
\# physical GPUs & 4 & 4 \\
Batch size & 4,192 & 2,048 \\
SGD delay factor & 2 & 4 \\
\# training iterations & 300K & 400K \\
Beam size & 4 & 8 \\
\hline
\end{tabular}
\caption{NMT setups {\sc Base} and {\sc Big} used in our experiments for the restricted track.}\label{tab:trans-setups}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{|l|r|r|}
\hline
& \multicolumn{2}{c|}{{\bf Number of Sentences}} \\
& {\bf With Identities} & {\bf W/o Identities} \\ \hline
FCE & 28K & 18K \\
Lang-8 & 1,038K & 498K \\
NUCLE & 57K & 21K \\
W\&I+LOCNESS & 34K & 23K \\ \hline
{\bf Total} & {\bf 1,157K} & {\bf 560K} \\
\hline
\end{tabular}
\caption{BEA-2019 parallel training data with and without removing pairs where source and target sentences are the same.}\label{tab:train-corpus}
\end{table}
We use neural LMs and neural machine translation (NMT) models in our restricted track entry. Our neural LM is as described in Sec.~\ref{sec:low-resource-setup}. Our LMs and NMT models share the same subword segmentation. We perform exploratory NMT experiments with the {\sc Base} setup, but switch to the {\sc Big} setup for our final models. Tab.~\ref{tab:trans-setups} shows the differences between both setups. Tab.~\ref{tab:train-corpus} lists some corpus statistics for the BEA-2019 training sets. In our experiments without fine-tuning we decode with the average of the 20 most recent checkpoints~\citep{ckpt-avg}. We use the SGNMT decoder~\citep{sgnmt1,sgnmt2} in all our experiments.
\paragraph{In-domain corpus over-sampling}
\begin{table*}
\centering
\small
\begin{tabular}{|c|c|ccc|ccc|}\hline
{\bf W\&I+LOCNESS} & {\bf Ratio} & \multicolumn{3}{c|}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
{\bf Over-sampling Rate} & & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERRANT} \\ \hline
1x & 1:33 & 59.88 & 17.46 & 40.30 & 38.20 & 15.09 & 29.24 \\
4x & 1:8 & 59.16 & 17.20 & 39.76 & 40.40 & 16.67 & 31.44 \\
8x & 1:4 & 57.73 & 17.76 & 39.81 & 39.19 & 16.73 & 30.90 \\
\hline
\end{tabular}
\caption{Over-sampling the BEA-2019 in-domain corpus W\&I+LOCNESS under {\sc Base} models. The second column contains the ratio of W\&I+LOCNESS samples to training samples from the other corpora.}\label{tab:results-locness-os}
\end{table*}
The BEA-2019 training corpora (Tab.~\ref{tab:train-corpus}) differ significantly not only in size but also in their closeness to the target domain. The W\&I+LOCNESS corpus is most similar to the BEA-2019 dev and test sets in terms of domains and the distribution over English language proficiency, but only consists of 34K sentence pairs. To increase the importance of in-domain training samples we over-sampled the W\&I+LOCNESS corpus with different rates. Tab.~\ref{tab:results-locness-os} shows that over-sampling by a factor of 4 (i.e.\ adding the W\&I+LOCNESS corpus four times to the training set) improves the ERRANT $F_{0.5}$-score by 2.2 points on the BEA-2019 dev set and does not lead to substantial losses on the CoNLL-2014 test set. We will over-sample the W\&I+LOCNESS corpus by a factor of four in all subsequent experiments.
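Over-sampling itself is a one-liner: the in-domain pairs are simply repeated in the training mix, shifting the in-domain to out-of-domain ratio (e.g.\ from roughly 1:33 to roughly 1:8 at rate 4, cf.\ Tab.~\ref{tab:results-locness-os}). The sketch below uses placeholder corpus contents, scaled down by a factor of 1000.

```python
# Sketch of in-domain over-sampling: repeat the W&I+LOCNESS pairs `rate`
# times in the training mix. Corpus contents are placeholders; sizes
# mirror Tab. 4 in units of thousands (34K in-domain, 1,123K other).
def oversample(in_domain, out_of_domain, rate=4):
    """Return a training list with the in-domain corpus added `rate` times."""
    return in_domain * rate + out_of_domain

mix = oversample(["wi_pair"] * 34, ["other_pair"] * 1123, rate=4)
ratio = mix.count("other_pair") / mix.count("wi_pair")   # about 1:8
```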
\paragraph{Removing identity mappings}
\begin{table}
\centering
\small
\begin{tabular}{|@{\hspace{0.2em}}c@{\hspace{0.2em}}|@{\hspace{0.4em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{0.4em}}|@{\hspace{0.4em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{0.4em}}|}\hline
{\bf Identity} & \multicolumn{3}{@{\hspace{0em}}c@{\hspace{0em}}}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
{\bf Removal} & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERR.} \\ \hline
$\times$ & 59.16 & 17.20 & 39.76 & 40.40 & 16.67 & 31.44 \\
\checkmark & 53.34 & 28.83 & 45.59 & 33.04 & 23.14 & 30.44 \\
\hline
\end{tabular}
\caption{Impact of identity removal on {\sc Base} models.}\label{tab:results-id}
\end{table}
Previous work often suggested removing unchanged sentences (i.e.\ source and target sentences are equal) from the training corpora~\citep{fst-gec,gec-pretraining,marcin2018}. We note that removing these identity mappings can be seen as a measure to control the balance between precision and recall. As shown in Tab.~\ref{tab:results-id}, removing identities encourages the model to make more corrections and thus leads to higher recall but lower precision. Whether this results in an improvement in $F_{0.5}$ score depends on the test set. For the subsequent experiments we found that removing identities in the parallel training corpora but not in the back-translated synthetic data works well in practice.
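The filtering step is a simple predicate over sentence pairs; the example pairs below are made up for illustration.

```python
# Sketch of identity removal: drop training pairs whose source already
# equals the target. Per the text, this is applied to the parallel
# corpora but not to the back-translated synthetic data.
def remove_identities(pairs):
    return [(src, trg) for src, trg in pairs if src != trg]

pairs = [("He go home .", "He goes home ."),
         ("She reads .", "She reads .")]       # second pair is an identity
filtered = remove_identities(pairs)
```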
\paragraph{Back-translation}
\begin{table*}[!b]
\centering
\small
\begin{tabular}{|c|c|c|ccc|ccc|}\hline
{\bf Over-sampling Rate} & {\bf Number of} & {\bf Ratio} & \multicolumn{3}{c|}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
{\bf (Real Data)} & {\bf Synthetic Sentences} & & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERRANT} \\ \hline
1x & 0 & - & 53.34 & 28.83 & 45.59 & 33.04 & 23.14 & 30.44 \\
1x & 1M & 1:1.6 & 56.17 & 31.30 & 48.47 & 37.79 & 23.86 & 33.84 \\
1x & 3M & 1:4.8 & 61.40 & 34.29 & 53.02 & 42.62 & 25.30 & 37.49 \\
1x & 5M & 1:7.9 & 64.18 & 34.27 & 54.64 & 44.69 & 25.59 & 38.88 \\ \hline
3x & 3M & 1:1.6 & 57.12 & 32.55 & 49.63 & 40.08 & 24.79 & 35.68 \\
6x & 5M & 1:1.3 & 59.15 & 33.99 & 51.52 & 41.52 & 25.05 & 36.69 \\
\hline
\end{tabular}
\caption{Using back-translation for GEC ({\sc Base} models). The third column contains the ratio between real and synthetic sentence pairs.}\label{tab:results-backtrans}
\end{table*}
Back-translation~\citep{backtranslation} has become the most widely used technique for leveraging monolingual data in neural machine translation. Back-translation extends the existing parallel training set by additional training samples with real English target sentences but synthetic source sentences. Different methods have been proposed to synthesize the source sentence such as using dummy tokens~\citep{backtranslation}, copying the target sentence~\citep{copytarget}, or sampling from or decoding with a reverse sequence-to-sequence model~\citep{backtranslation,backtranslation-fb,backtranslation-gec}. The most popular approach is to generate the synthetic source sentences with a reverse model that is trained to transform target to source sentences using beam search. In GEC, this means that the reverse model learns to introduce errors into a correct English sentence. Back-translation has been applied successfully to GEC by~\citet{backtranslation-gec}. We confirm the effectiveness of back-translation in GEC and discuss some of the differences between applying this technique to grammatical error correction and machine translation.
Our experiments with back-translation are summarized in Tab.~\ref{tab:results-backtrans}. Adding 1M synthetic sentences to the training data already yields very substantial gains on both test sets. We achieve our best results with 5M synthetic sentences (+8.44 on BEA-2019 Dev). In machine translation, it is important to maintain a balance between authentic and synthetic data~\citep{backtranslation,backtranslation-ana,sys-uedin-wmt16}. Over-sampling the real data is a common practice to rectify that ratio if large amounts of synthetic data are available. Interestingly, over-sampling real data in GEC hurts performance (row 3 vs.\ 5 in Tab.~\ref{tab:results-backtrans}), and it is possible to mix real and synthetic sentences at a ratio of 1:7.9 (last three rows in Tab.~\ref{tab:results-backtrans}). We will proceed with the 5M setup for the remainder of this paper.
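The shape of the back-translation loop for GEC can be summarized in a few lines. The `reverse_model` function below is a hypothetical stand-in: in practice it is a reverse NMT system, trained on target-to-source data and decoded with beam search, that introduces errors into clean monolingual sentences.

```python
# Sketch of back-translation for GEC: a reverse model maps corrected ->
# uncorrected text, so pairing its output with the clean input yields a
# synthetic (errorful source, clean target) training pair.
def reverse_model(clean_sentence):
    # Hypothetical noiser standing in for the trained reverse NMT model.
    return clean_sentence.replace("an ", "a ")

def back_translate(monolingual):
    """Pair each clean target sentence with a synthesized errorful source."""
    return [(reverse_model(t), t) for t in monolingual]

synthetic = back_translate(["He is an engineer ."])
```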
\paragraph{Fine-tuning}
\begin{figure}[t!]
\centering
\small
\includegraphics[width=0.93\linewidth]{ft.pdf}
\caption{Span-based ERRANT $F_{0.5}$ scores on the BEA-2019 dev set over the number of fine-tuning training iterations (single GPU, SGD delay factor~\citep{danielle-syntax} of 16).}
\label{fig:fine-tuning}
\end{figure}
\begin{table*}
\centering
\small
\begin{tabular}{|c|c|ccc|ccc|}\hline
{\bf Fine-tuning} & {\bf Checkpoint} & \multicolumn{3}{c|}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
{\bf (Continued Training)} & {\bf Averaging} & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERRANT} \\ \hline
& & 63.61 & 33.39 & 53.86 & 44.16 & 25.01 & 38.29 \\
& \checkmark & 64.18 & 34.27 & 54.64 & 44.69 & 25.59 & 38.88 \\
\checkmark & & 64.98 & 33.05 & 54.46 & 48.62 & 27.19 & 42.00 \\
\checkmark & \checkmark & 66.03 & 34.17 & 55.65 & 48.99 & 26.87 & 42.06 \\
\hline
\end{tabular}
\caption{Fine-tuning through continued training on W\&I+LOCNESS and checkpoint averaging with a {\sc Base} model with 5M back-translated sentences.}\label{tab:results-ft}
\end{table*}
As explained previously, we over-sample the W\&I+LOCNESS corpus by a factor of 4 to mitigate the domain gap between the training set and the BEA-2019 dev and test sets. To further adapt our system to the target domain, we fine-tune the NMT models on W\&I+LOCNESS after convergence on the full training set. We do this by continuing training on W\&I+LOCNESS from the last checkpoint of the first training pass. Fig.~\ref{fig:fine-tuning} plots the $F_{0.5}$ score on the BEA-2019 dev set for two different setups. For the red curve, we average all checkpoints~\citep{ckpt-avg} (including the last unadapted checkpoint) up to a certain training iteration. Checkpoints are dumped every 500 steps. The green curve does not use any checkpoint averaging. Checkpoint averaging helps to smooth out fluctuations in $F_{0.5}$ score, and also generalizes better to CoNLL-2014 (Tab.~\ref{tab:results-ft}).
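Checkpoint averaging itself is an element-wise mean over the saved parameter values. The sketch below uses dicts of floats as stand-ins for the actual parameter tensors.

```python
# Sketch of checkpoint averaging: element-wise mean of parameter values
# across several checkpoints (dicts of floats here; tensors in practice).
# Averaging adapted checkpoints together with the last unadapted one
# smooths the fine-tuning curve.
def average_checkpoints(checkpoints):
    keys = checkpoints[0].keys()
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in keys}

avg = average_checkpoints([{"w": 1.0, "b": 0.0},
                           {"w": 3.0, "b": 2.0}])
```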
\paragraph{Final system}
\begin{table*}[!b]
\centering
\small
\begin{tabular}{|ccc|ccc|ccc|}\hline
{\bf NMT} & {\bf Fine-tuning} & {\bf LM} & \multicolumn{3}{c|}{{\bf CoNLL-2014}} & \multicolumn{3}{c|}{{\bf BEA-2019 Dev}} \\
& & & {\bf P} & {\bf R} & {\bf M2} & {\bf P} & {\bf R} & {\bf ERRANT} \\ \hline
\multicolumn{3}{|l|}{Best published: \citet{gec-pretraining}} & 71.57 & 38.65 & 61.15 & \multicolumn{3}{c|}{n/a} \\ \hline
1x & & & 64.04 & 35.74 & 55.28 & 45.86 & 26.46 & 40.00 \\
1x & \checkmark & & 66.57 & 35.21 & 56.50 & 51.57 & 27.49 & 43.88 \\
1x & \checkmark & 2x & 61.53 & 40.44 & 55.72 & 48.30 & 33.08 & 44.23 \\ \hline
4x & \checkmark & & 70.37 & 35.12 & 58.60 & 55.84 & 27.80 & 46.47 \\
4x & \checkmark & 2x & 66.89 & 39.85 & 58.90 & 53.17 & 32.89 & 47.34 \\
\hline
\end{tabular}
\caption{Final results on the restricted track with {\sc Big} models and back-translation.}\label{tab:results-restricted}
\end{table*}
Tab.~\ref{tab:results-restricted} contains our experiments with the {\sc Big} configuration. In addition to W\&I+LOCNESS over-sampling, back-translation with 5M sentences, and fine-tuning with checkpoint averaging, we report further gains by adding the language models from our low-resource system (Sec.~\ref{sec:low-resource-setup}) and ensembling. Our best system (4 NMT models, 2 language models) achieves 58.9 M2 on CoNLL-2014, which is slightly (2.25 points) worse than the best published result on that test set~\citep{gec-pretraining}. However, we note that we have tailored our system towards the BEA-2019 dev set and not the CoNLL-2013 or CoNLL-2014 test sets. As we argued in Sec.~\ref{sec:diff-conll-bea}, our results throughout this work suggest strongly that the optimal system parameters for these test sets are very different from each other, and that our final system settings are not optimal for CoNLL-2014. We also note that unlike the system of \citet{gec-pretraining}, our system for the restricted track does not use spell checkers or other NLP tools but relies solely on neural sequence models.
\section{Conclusion}
We participated in the BEA 2019 Shared Task on grammatical error correction with submissions to the low-resource and the restricted track. Our low-resource system is an extension of prior work on FST-based GEC~\citep{fst-gec} to allow insertions and deletions. Our restricted track submission is a purely neural system based on standard NMT and LM architectures. We pointed out the similarity between GEC and machine translation, and demonstrated that several techniques which originate from MT research such as over-sampling, back-translation, and fine-tuning, are also useful for GEC. Our models have been used in a joint submission with the Cambridge University Computer Lab~\citep{bea19-cled}.
\section*{Acknowledgments}
This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) grant EP/L027623/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service\footnote{\url{http://www.hpc.cam.ac.uk}} funded by EPSRC Tier-2 capital grant EP/P020259/1.
2301.11185
\section*{Acknowledgments}
\label{sec:acknowledgements}
We thank Dominique Fahrnbach for his intensive work on a
Wasserstein-based approach as part of his
Master's thesis. We thank
Malte Kaspereit and Malvina Supper for many
inspiring discussions on particle separation processes, in particular
for providing the parameter values for Table \ref{Tab:
parameter} and the formulas for calculating the numbers in Table \ref{tab:inputvalues}.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 416229255 - SFB 1411.
\section*{Appendix}
\label{sec:appendix}
\begin{proof}[Proof of Lemma \ref{Lemma: Lipschitz_continuity_of_p_elementary}]
We show that for every feasible solution of \eqref{Prob: Dual_Purity_Constraint_elementary} the entries $Y_1,Y_2$ are bounded. To this end, w.l.o.g. let $\varepsilon_1=-1, \varepsilon_2=1, \varepsilon_i >0$ for every $i\in I \setminus\{1\}$ since every constraint
$$ \langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c,\tilde{\mu} \rangle \geq \varepsilon_i \text{ with } \varepsilon_i < 0$$
can equivalently be expressed by
$$\langle \text{sign} (1+\varepsilon_i) \mathbbm{1}_{T_i^C}^c,\tilde{\mu} \rangle \geq 1+\varepsilon_i.$$
In order to prove this equivalence, we add $1$ on both sides and consider the complement $T_i^C$ of $T_i$.
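Spelling this step out, and assuming $\tilde{\mu}$ is normalized so that $\langle \mathbbm{1}^c,\tilde{\mu} \rangle = 1$ (the normalization is our reading of the "add $1$ on both sides" argument, not stated explicitly here):

```latex
\begin{align*}
\langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c,\tilde{\mu} \rangle \geq \varepsilon_i
&\iff -\langle \mathbbm{1}_{T_i}^c,\tilde{\mu} \rangle \geq \varepsilon_i
  && (\text{sign}(\varepsilon_i)=-1)\\
&\iff 1-\langle \mathbbm{1}_{T_i}^c,\tilde{\mu} \rangle \geq 1+\varepsilon_i
  && (\text{add } 1 \text{ on both sides})\\
&\iff \langle \mathbbm{1}_{T_i^C}^c,\tilde{\mu} \rangle \geq 1+\varepsilon_i
  && (\mathbbm{1}_{T_i}+\mathbbm{1}_{T_i^C}=\mathbbm{1}),
\end{align*}
```

and $\text{sign}(1+\varepsilon_i)=1$ because w.l.o.g.\ $|\varepsilon_i|\leq 1$.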
Now, we first prove that $\text{Tr}(Y_1)<\infty$:
Let $t=\EW{}$, and let $v_i$ be the eigenvectors and $\lambda_i$ the eigenvalues of $Y_1$; then \eqref{Constr: Dual_elementary} implies:
\begin{equation}\label{Eq: help3}
\begin{aligned}
\lambda_{\min}\left(\begin{bmatrix}
\Sigma & 0 \\ 0 & \varepsilon_{\EW{}}
\end{bmatrix}\right) \text{Tr}(Y_1) & = \sum_{i=1}^n \lambda_i \lambda_{\min}\left(\begin{bmatrix}
\Sigma & 0 \\ 0 & \varepsilon_{\EW{}}
\end{bmatrix}\right) \overset{*}{\leq} \sum_{i=1}^n \lambda_i v_i^\top \begin{bmatrix}
\Sigma & 0 \\ 0 & \varepsilon_{\EW{}}
\end{bmatrix} v_i\\
& \leq \left\langle \begin{bmatrix}
\Sigma & 0 \\ 0 & \varepsilon_{\EW{}}
\end{bmatrix}, Y_1 \right\rangle \overset{{\eqref{Constr: Dual_elementary}}}{\leq} \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\EW{}) - \sum_{i\in I} \text{sign}(\varepsilon_i) y_i,
\end{aligned}
\end{equation}
where (*) holds due to the Rayleigh-Ritz principle, see e.g. \cite{Brezis2010a} for further details. We show that \eqref{Eq: help3} is bounded from above for every feasible solution to \eqref{Prob: Dual_Purity_Constraint_elementary} by considering the following LP:
\begin{equation}\label{Eq: help4}
\min_{y\in \mathbb{R}^I_{\geq 0}} \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\EW{})y_i:\ \sum_{i\in I}\varepsilon_i y_i \geq 0,
\end{equation}
whose constraint can be derived from \eqref{Obj: Objective_Dual_elementary} and the fact that both $\Sigma$ and $Y_2$ are positive semidefinite. Moreover, this is equivalent to
\begin{equation*}
\min_{y\in \mathbb{R}^I_{\geq 0}} -y_1+\sum_{i\in I \setminus \{1\}} y_i:\ \sum_{i\in I}\varepsilon_i y_i \geq 0
\end{equation*}
due to $\EW{}\in T_i$ for every $i\in I$. Furthermore, it is bounded from below by $0$ since its dual LP:
\begin{align*}
\max_{z\geq 0} 0\cdot z : - z &\leq -1,\\
\varepsilon_i z &\leq 1 &&\text{ for every } i\in I\setminus\{1\},
\end{align*}
is feasible for $z=1$ since w.l.o.g. $|\varepsilon_i|\leq 1$. Consequently, this provides a lower bound of $0$ to \eqref{Eq: help4} and thereby an upper bound to $\text{Tr}(Y_1)$ via \eqref{Eq: help3}.
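This duality argument can be illustrated numerically (this is not part of the proof); the following sketch instantiates the LP \eqref{Eq: help4} with hypothetical tolerances $\varepsilon$ satisfying the w.l.o.g.\ normalization, and checks that $y=0$ attains the objective value $0$ while $z=1$ is a dual certificate for the lower bound $0$:

```python
# Hypothetical tolerances: eps_1 = -1 and eps_i in (0, 1] for i > 1,
# mirroring the w.l.o.g. normalization in the proof.
eps = [-1.0, 0.5, 0.8, 1.0]

# Primal LP: min -y_1 + sum_{i>1} y_i  s.t.  sum_i eps_i * y_i >= 0, y >= 0.
# y = 0 is feasible and attains objective value 0.
y = [0.0] * len(eps)
assert sum(e * yi for e, yi in zip(eps, y)) >= 0
primal_obj = -y[0] + sum(y[1:])

# Dual LP: max 0*z  s.t.  -z <= -1 and eps_i * z <= 1 for i > 1, z >= 0.
# z = 1 is feasible because |eps_i| <= 1, so weak duality bounds the
# primal objective from below by the dual objective value 0.
z = 1.0
assert -z <= -1 and all(e * z <= 1.0 for e in eps[1:])
dual_obj = 0.0 * z

assert primal_obj == 0.0 == dual_obj
```

Any other choice of tolerances with $\varepsilon_1=-1$ and $\varepsilon_i\in(0,1]$ yields the same primal and dual values.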
Next, we prove that $\text{Tr}(Y_2)<\infty$: Let $\lambda_{\min}(\Sigma)> 0$ denote the minimal eigenvalue of $\Sigma$ and $\lambda_i$ the eigenvalues of $Y_2$ with respect to the eigenvectors $v_i$. Then, on the one hand, we have
\begin{equation}\label{Eq: help1}
\begin{aligned}
\varepsilon_\Sigma \lambda_{\min}(\Sigma) \text{Tr}(Y_2) & = \varepsilon_\Sigma \sum_{i=1}^n \lambda_i \lambda_{\min}(\Sigma) \overset{(*)}{\leq} \varepsilon_\Sigma \sum_{i=1}^n \lambda_i v_i^\top \Sigma v_i = \varepsilon_\Sigma \left\langle \Sigma , \sum_{i=1}^n \lambda_i v_iv_i^\top \right\rangle\\
& = \varepsilon_\Sigma \langle \Sigma , Y_2 \rangle \overset{\eqref{Obj: Objective_Dual_elementary}}{\leq} \sum_{i\in I} \varepsilon_i y_i
\end{aligned}
\end{equation}
where (*) holds because of the Rayleigh-Ritz principle. In order to show that \eqref{Eq: help1} is bounded, we show that the following linear program is bounded from above:
\begin{equation}\label{Eq: LP_help}
\max_{y\in \mathbb{R}^I_{\geq 0}} \varepsilon^\top y:\ \tau^\top y \leq \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\EW{}),
\end{equation}
where $\tau_i = \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\EW{}).$ Note that $\tau\neq 0$ due to $\EW{} \in T_2$. As before, the constraint in \eqref{Eq: LP_help} can be derived from \eqref{Constr: Dual_elementary} with $t=\EW{}$ in the following way:
\begin{equation}\label{Eq: help2}
\begin{aligned}
\sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\EW{}) & \geq \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\EW{}) - \langle \begin{bmatrix}
\Sigma & 0 \\ 0 & \varepsilon_{\EW{}}
\end{bmatrix}, Y_1 \rangle \geq \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\EW{})y_i
\end{aligned}
\end{equation}
Then, weak duality implies
\begin{equation}
\eqref{Eq: LP_help} \leq \min_{z\in \mathbb{R}_{\geq 0}} z \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\EW{}):\ z \tau -\varepsilon \geq 0.
\end{equation}
Observe that $z=1$ is a feasible solution since
$$\tau_i=\text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}(\EW{})=1\geq\varepsilon_i$$
for every $i\in I\setminus\{1\}$ and $\tau_1=-1=\varepsilon_1$. Thus, we obtain an upper bound for \eqref{Eq: LP_help} and thereby for $\text{Tr}(Y_2)$.
Finally, we have proved that the coefficients of $p(t)$ are bounded and the claim follows.
\end{proof}
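The Rayleigh--Ritz trace bound $\lambda_{\min}(A)\,\text{Tr}(Y) \leq \langle A, Y\rangle$ used at (*) in the proof above can be sanity-checked numerically; a minimal sketch with hand-picked (hypothetical) matrices, not taken from the paper:

```python
# Symmetric A with eigenvalues 1 and 3 (computed by hand), PSD diagonal Y.
A = [[2.0, 1.0], [1.0, 2.0]]
Y = [[2.0, 0.0], [0.0, 1.0]]

lam_min = 1.0  # smallest eigenvalue of A
trace_Y = Y[0][0] + Y[1][1]
# Frobenius inner product <A, Y> = Tr(A Y)
inner = sum(A[i][j] * Y[j][i] for i in range(2) for j in range(2))

# lambda_min(A) * Tr(Y) <= <A, Y>  (here: 1 * 3 <= 6)
assert lam_min * trace_Y <= inner
```

The inequality holds for every symmetric $A$ and positive semidefinite $Y$, since $\langle A, Y\rangle = \sum_i \lambda_i(Y)\, v_i^\top A v_i \geq \lambda_{\min}(A) \sum_i \lambda_i(Y)$.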
\begin{proof}[Proof of Lemma \ref{Lemma:inner_approx}]
We first suppose w.l.o.g. that $t\in L_h$. Then, there exists a $\bar{t}\in L_h$ such that $\|t-\bar{t}\|\leq \delta_N\sqrt{m}$ and hence
\begin{align*}
f^c(t)+L\delta_N\sqrt{m} & \geq f^c(t)+L\|t-\bar{t}\| \overset{(1)}{\geq} f^c(t) +|p(\bar{t})-p(t)|\\
& \overset{\eqref{Obj: Objective_Dual_elementary}}{\geq} \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^c(t) - \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) + p(\bar{t}) \\
& \overset{(2)}{\geq} \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(t) - \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(t) + p(\bar{t}) = f(\bar{t}),
\end{align*}
where (1) holds due to the definition of $L$ and (2) holds due to \eqref{Eq: Urysohn_approx}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lemma: crucial_points}]
First, we observe that the polynomial $p_y(t)$ is minimized at $\frac{2\EW{}y_5+y_4-y_3}{2y_5}$ if $y_5>0$ since
$$p'_y(t)=0 \Rightarrow y_3-y_4+2y_5 t - 2\EW{}y_5=0 \Rightarrow t=\frac{2\EW{}y_5+y_4-y_3}{2y_5}, \quad p''_y(t)=2y_5 >0.$$
We show that, given an arbitrary fixed $\delta>0$, the minimum $f_{\min}$ of $f^c$ is attained at a point in $[a^- -\delta,a^- + \delta]\cup [a^+ -\delta,a^+ +\delta]\cup \bigcup_{\bar{t}\in T_N} [\bar{t}-\delta,\bar{t}+\delta]$ or at $p_{\text{min}}$: Suppose not, i.e., there exists a $\bar{t}\in T_N$ such that
$$f_{\min}\in S\coloneqq (\bar{t}+\delta,\bar{t}+\delta_N-\delta)\cap [a^--\delta,a^-+\delta]^C\cap [a^+-\delta,a^++\delta]^C \cap \{p_{\min}\}^C.$$
Since $S$ is an open set, we can find an open interval $(f_{\min}-\varepsilon,f_{\min}+\varepsilon)\subseteq S$. On the one hand, this implies that
$$\sum_{\tau\in T_N} \mathbbm{1}_{[\tau,\tau+\delta_N)}^c(t)z_\tau + a\mathbbm{1}_{[a^-,a^+]}^c(t) = \sum_{\tau\in T_N} \mathbbm{1}_{[\tau,\tau+\delta_N)}^c(f_{\min})z_\tau + a\mathbbm{1}_{[a^-,a^+]}^c(f_{\min})$$
for every $t\in (f_{\min}-\varepsilon,f_{\min}+\varepsilon)$. On the other hand, we have that $|f_{\min}-p_{\min}|>\varepsilon$. Now, let w.l.o.g. $f_{\min} < p_{\min}$. Then $p_y(f_{\min}+\varepsilon/2) < p_y(f_{\min})$ and consequently $f^c(f_{\min}+\varepsilon/2)< f^c(f_{\min})$, a contradiction to the fact that $f_{\min}$ was a minimizer of $f^c$.
\end{proof}
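The stationary-point computation in the proof above can be checked numerically; a small sketch with hypothetical coefficients (here $E$ stands for $\EW{}$), not part of the proof:

```python
# Hypothetical coefficients of p_y(t) = y3*t - y4*t + y5*t^2 - 2*E*y5*t.
y3, y4, y5, E = 0.7, 0.2, 1.5, 3.0

def p(t):
    return y3 * t - y4 * t + y5 * t ** 2 - 2 * E * y5 * t

# Claimed minimizer t_min = (2*E*y5 + y4 - y3) / (2*y5).
t_min = (2 * E * y5 + y4 - y3) / (2 * y5)

# The central difference of a quadratic equals its derivative exactly,
# so this checks p'(t_min) = 0 up to rounding.
h = 1e-4
assert abs((p(t_min + h) - p(t_min - h)) / (2 * h)) < 1e-8

# Convexity (p'' = 2*y5 > 0) makes t_min a global minimizer.
for t in (t_min - 1.0, t_min + 0.3, t_min + 2.0):
    assert p(t) >= p(t_min)
```

Any coefficients with $y_5>0$ behave the same way.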
\begin{proof}[Proof of Lemma \ref{Lemma: discretized_inner_approx}]
We compute the exact value of $\Delta_p\coloneqq p_y(p_{\text{min}}-\delta_N) - p_y(p_{\text{min}})$:
\begin{align*}
\Delta_p & = y_{3} (p_{\text{min}}-\delta_N) - y_{4}(p_{\text{min}}-\delta_N)+ y_{5} (p_{\text{min}}-\delta_N)^2 - y_5 2\EW{}(p_{\text{min}}-\delta_N)\\
& \qquad -(y_{3}p_{\text{min}} - y_{4} p_{\text{min}}+ y_{5} p_{\text{min}}^2 - y_52\EW{}p_{\text{min}}) \\
& = - y_{3} \delta_N + y_{4}\delta_N -2 y_{5}p_{\text{min}} \delta_N + y_{5}\delta^2_N +y_52\EW{}\delta_N\\
& \overset{\text{Lemma \ref{Lemma: crucial_points}}}{=} - y_{3} \delta_N + y_{4}\delta_N + y_{5}\delta^2_N -2 y_{5}\frac{2\EW{}y_5+y_{4}-y_{3}}{2y_{5}} \delta_N +y_52\EW{}\delta_N\\
& = -y_{3} \delta_N + y_{4} \delta_N + y_{5}\delta^2_N - (2\EW{}y_5+y_{4}-y_{3}) \delta_N +y_52\EW{}\delta_N\\
& = y_{5}\delta^2_N.
\end{align*}
Then, Lemma \ref{Lemma: refinement_main} shows that for every $t\in T$, we have that $f^c_{y,z}(t)\geq 0$.
\end{proof}
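The identity $\Delta_p = y_5\delta_N^2$ computed above can likewise be verified numerically; an illustrative sketch with hypothetical coefficients (with $E$ for $\EW{}$ and $d$ for $\delta_N$):

```python
# Hypothetical coefficients of p_y(t) = y3*t - y4*t + y5*t^2 - 2*E*y5*t.
y3, y4, y5, E, d = 0.7, 0.2, 1.5, 3.0, 0.01

def p(t):
    return y3 * t - y4 * t + y5 * t ** 2 - 2 * E * y5 * t

# Minimizer of the quadratic p_y.
p_min = (2 * E * y5 + y4 - y3) / (2 * y5)
delta_p = p(p_min - d) - p(p_min)

# Delta_p collapses to y5 * d^2, independently of y3, y4 and E.
assert abs(delta_p - y5 * d ** 2) < 1e-10
```

This reflects that at the vertex of a parabola the linear terms cancel, leaving only the quadratic term $y_5\delta_N^2$.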
\begin{proof}[Proof of Lemma \ref{Lem: SIP_has_nearby_MIP_solution_a>0}]
First, observe that since \eqref{Prob: dist_robust_chromatography_conic} has a strictly feasible solution, every point on the line segment between this strictly feasible solution and $(a^-,a^+,y,z)$ is also strictly feasible. Slightly abusing notation, we identify $(a^-,a^+,y,z)$ with an arbitrarily close point on this line segment, i.e., w.l.o.g. we assume $(a^-,a^+)\in \text{int}(P)$ and $a^-,a^+\notin T_N$. Consequently, as $a^-,a^+\in \text{int}(P)$, we can define $(a^-)',(a^+)'\in P\cap T_N$ as follows
\begin{equation}
(a^-)'\coloneqq \max\{\bar{t}: \bar{t}\in T_N, \bar{t} < a^-\},\text{ and } (a^+)'\coloneqq \min\{\bar{t}: \bar{t}\in T_N, \bar{t} > a^+\}.
\end{equation}
Moreover, since $(a^-)',(a^+)'\in T_N$, we observe that by following the arguments in the proof of Theorem \ref{Thm: MIP_onedim}, we can define $\tilde{b},\Delta^-,\Delta^+$ in such a way that the constraints \eqref{Constr: sum_Delta_leq_2}--\eqref{Constr: MIP_onedim_nonneg_yz} are satisfied. Hence, it suffices to show that there are also $y',z'$ that satisfy \eqref{Constr: discretized_dual_objective_geq0}--\eqref{Constr: MIP_onedim_a-_special}:
We observe that as $a^-\neq (a^-)'$, we have $\lim_{t\uparrow a^-}z_{(a^-)'} + p_y(t)=0$ since otherwise $a^-\pm\delta$ would be feasible for sufficiently small $\delta>0$ and thus $a^-$ would not be optimal for \eqref{Prob: dist_robust_chromatography_conic}. Similarly, we conclude $\lim_{t\downarrow a^+}z_{(a^+)'-\delta_N} + p_y(t)=0$ and obtain
$$p_y(a^-) \leq 0 \text{ and } p_y(a^+) \leq 0.$$
We distinguish now between two cases:
\bigskip
Case 1: Suppose strict inequality holds for either $a^-$ or $a^+$:
\smallskip
Case 1.1: Consider $p_y(a^-)<0$.
Then, for sufficiently small $\delta_N$, we have $z_{(a^-)'}>y_5 \delta_N / \bound{+}{(a^-)'}$. Now, we set
$$y'_2=y_2+y_5 \delta_N^2 \text{ and } z'_{(a^-)'}=z_{(a^-)'}-y_5 \delta_N / \bound{+}{(a^-)'}.$$
Inserting these new values into \eqref{Constr: discretized_dual_objective_geq0} does not alter \eqref{Constr: discretized_dual_objective_geq0} and thus we obtain $\eqref{Constr: discretized_dual_objective_geq0} - y_5 \delta_N^2 + \delta_N^2 y_5 \geq b$,
due to \eqref{Eq: dual_conic_obj2}.
Moreover, for $\bar{t}\neq (a^-)'$, \eqref{Constr: discretized_purity_strengthened} with $(a^-)',(a^+)',y',z'$ holds immediately due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}= (a^-)'$, \eqref{Constr: discretized_purity_strengthened} evaluates to:
\begin{align*}
& a + z_{(a^-)'}-y_5 \delta_N / \bound{+}{(a^-)'} +p_y((a^-)') + y_5\delta_N^2-y_5\delta_N^2\\
& = a-y_5 \delta_N / \bound{+}{(a^-)'} +z_{(a^-)'} +p_y((a^-)') \geq 0,
\end{align*}
since $a-y_5 \delta_N / \bound{+}{(a^-)'}\geq 0$ for sufficiently small $\delta_N$ and $z_{(a^-)'} +p_y((a^-)') \geq 0$ due to \eqref{Eq: dual_conic_constraint_SIP}.
Next, \eqref{Constr: discretized_purity_strengthened_shifted} holds for every $\bar{t}\neq (a^-)',(a^+)'$ since
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted}
& \geq a\tilde{b}_{\bar{t}}+z_{\bar{t}} +p_y(\bar{t}+\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = \lim_{\delta \uparrow \delta_N} a\tilde{b}_{\bar{t}} +z_{\bar{t}} +p_y(\bar{t}+\delta) \geq 0,
\end{align*}
where the first inequality holds since $a\tilde{b}_{\bar{t}+\delta_N} \geq a\tilde{b}_{\bar{t}}$ whenever $\bar{t}\neq (a^+)'$ and the second one holds due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}= (a^-)'$, we observe $|p_y(\bar{t}+\delta)-p_y(\bar{t})|\leq L_y\delta_N$ for $\delta\in(0,\delta_N]$, where $L_y$ denotes the Lipschitz constant of $p_y$. Furthermore, we obtain
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = a+z_{(a^-)'} -y_5 \delta_N / \bound{+}{(a^-)'} +p_y((a^-)'+\delta_N)\\
& \geq a -L_y\delta_N -y_5 \delta_N / \bound{+}{(a^-)'} + z_{(a^-)'} + p_y((a^-)')\\
& \geq 0,
\end{align*}
where $a -L_y\delta_N -y_5 \delta_N / \bound{+}{(a^-)'}\geq 0$ for sufficiently small $\delta_N$ and $z_{(a^-)'} + p_y((a^-)')\geq 0$ due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}= (a^+)'$, we obtain with the same Lipschitz argument:
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = a+z_{(a^+)'} + p_y((a^+)'+\delta_N)\\
& \geq a -L_y\delta_N + z_{(a^+)'} + p_y((a^+)')\\
& \geq 0.
\end{align*}
Lastly,
\eqref{Constr: MIP_onedim_a+_special} holds immediately due to \eqref{Eq: dual_conic_constraint_SIP} since
$$\eqref{Constr: MIP_onedim_a+_special} = z_{(a^+)'} + p_y((a^+)') + y_5 \delta_N^2 - y_5\delta_N^2 = z_{(a^+)'} + p_y((a^+)'),$$
which is nonnegative due to \eqref{Eq: dual_conic_constraint_SIP}. Again due to \eqref{Eq: dual_conic_constraint_SIP}, we have
$\eqref{Constr: MIP_onedim_a-_special} = \lim_{\delta\uparrow \delta_N} z_{(a^-)'-\delta_N} + p_y((a^-)'-\delta_N+\delta) \geq 0$.
\smallskip
Case 1.2: Consider $p_y(a^+)<0$. Then, for sufficiently small $\delta_N$, we have $z_{(a^+)'-\delta_N}>y_5 \delta_N / \bound{+}{(a^+)'-\delta_N}$. Now, we set
$$y'_2=y_2+y_5 \delta_N^2 \text{ and }z'_{(a^+)'-\delta_N}=z_{(a^+)'-\delta_N}-y_5 \delta_N / \bound{+}{(a^+)'-\delta_N}.$$
Again, inserting these new values into \eqref{Constr: discretized_dual_objective_geq0} does not alter \eqref{Constr: discretized_dual_objective_geq0} and we obtain $\eqref{Constr: discretized_dual_objective_geq0} - y_5 \delta_N^2 + \delta_N^2 y_5 \geq b,$
due to \eqref{Eq: dual_conic_obj2}.
Moreover, \eqref{Constr: discretized_purity_strengthened} with $(a^-)',(a^+)',y',z'$ holds immediately for $\bar{t}\neq (a^+)'-\delta_N$ due to \eqref{Eq: dual_conic_constraint_SIP}.
If $\bar{t}= (a^+)'-\delta_N$, we have:
\begin{align*}
\eqref{Constr: discretized_purity_strengthened} & = a + z_{(a^+)'-\delta_N}-y_5 \delta_N / \bound{+}{(a^+)'-\delta_N} +p_y((a^+)'-\delta_N) + y_5\delta_N^2-y_5\delta_N^2 \\
& = a-y_5 \delta_N / \bound{+}{(a^+)'-\delta_N} + z_{(a^+)'-\delta_N} +p_y((a^+)'-\delta_N)\\ & \geq a-y_5 \delta_N / \bound{+}{(a^+)'-\delta_N} + z_{(a^+)'-\delta_N} +p_y(a^+) - L_y\delta_N,
\end{align*}
where $L_y$ denotes the Lipschitz constant of $p_y$. Moreover, we obtain $a-y_5 \delta_N / \bound{+}{(a^+)'-\delta_N} -L_y\delta_N \geq 0$ for sufficiently small $\delta_N$ and $z_{(a^+)'-\delta_N} +p_y(a^+) = 0$ as otherwise $a^+$ would not be optimal for \eqref{Prob: dist_robust_chromatography_conic}.
Next, \eqref{Constr: discretized_purity_strengthened_shifted} holds for every $\bar{t}\neq (a^+)'$ since
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & \geq a\tilde{b}_{\bar{t}}+z_{\bar{t}} -y_5 \delta_N / \bound{+}{(a^+)'-\delta_N}\mathbbm{1}_{\{(a^+)'-\delta_N\}}(\bar{t}) +p_y(\bar{t}+\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = \lim_{\delta \uparrow \delta_N} a\tilde{b}_{\bar{t}}+z_{\bar{t}} -y_5 \delta_N / \bound{+}{(a^+)'-\delta_N}\mathbbm{1}_{\{(a^+)'-\delta_N\}}(\bar{t}) +p_y(\bar{t}+\delta) \geq 0,
\end{align*}
where we applied that $a\tilde{b}_{\bar{t}+\delta_N} \geq a\tilde{b}_{\bar{t}}$ whenever $\bar{t}\neq (a^+)'$ for the first inequality. The nonnegativity holds immediately by \eqref{Eq: dual_conic_constraint_SIP} if $\bar{t}\neq (a^+)'-\delta_N$ and with the same Lipschitz argument as above for \eqref{Constr: discretized_purity_strengthened}. Now, let $\bar{t}=(a^+)'$, then
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = z_{(a^+)'} +p_y((a^+)'+\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = z_{(a^+)'} +p_y((a^+)'+\delta_N)
\end{align*}
However, due to \eqref{Eq: dual_conic_constraint_SIP}, we know that
$z_{(a^+)'} +p_y((a^+)'+\delta)\geq 0,$
for every $\delta \in (0,\delta_N)$ and thus
$$\eqref{Constr: discretized_purity_strengthened_shifted} = \lim_{\delta \uparrow \delta_N} z_{(a^+)'} +p_y((a^+)'+\delta) \geq 0.$$
Moreover, for $\delta\downarrow 0$, these arguments also prove \eqref{Constr: MIP_onedim_a+_special}, whereas \eqref{Constr: MIP_onedim_a-_special} holds since
\begin{align*}
\eqref{Constr: MIP_onedim_a-_special} & = z_{(a^-)'-\delta_N} + p_y((a^-)')+y_5\delta_N^2-y_5\delta_N^2\\
& = \lim_{\delta\uparrow \delta_N} z_{(a^-)'-\delta_N} + p_y((a^-)'-\delta_N + \delta),
\end{align*}
which is nonnegative for every $\delta\in (0,\delta_N)$ due to \eqref{Eq: dual_conic_constraint_SIP}.
\bigskip
Case 2: Suppose $p_y(a^-)=p_y(a^+)=0$. Since $b>0$, $p_y\equiv 0$ is not feasible, and we have that $\frac{\partial}{\partial t} p_y(a^-)<0$ and $\frac{\partial}{\partial t} p_y(a^+)>0$ as these are the only sign changes of the quadratic polynomial $p_y$. Hence, $p_y((a^-)'), p_y((a^+)')>0$ and, in particular, $p_{\text{min}}\in (a^-,a^+)$.
Now, let $(p_{\text{min}}^-)'\coloneqq \max\{\bar{t}\in T_N: \bar{t} \leq p_{\text{min}}\}$. Then, we set
\begin{align*}
& y_1' \coloneqq y_1+y_5\delta_N^2,\\
& z_{(p_{\text{min}}^-)'}' \coloneqq z_{(p_{\text{min}}^-)'}+\frac{1}{4}y_5\delta_N/\bound{+}{(p_{\text{min}}^-)'},\\
& z_{(p_{\text{min}}^-)'+\delta_N}' \coloneqq z_{(p_{\text{min}}^-)'+\delta_N}+\frac{1}{4}y_5\delta_N/\bound{+}{(p_{\text{min}}^-)'+\delta_N},\\
& z_{(a^-)'-\delta_N}' \coloneqq z_{(a^-)'-\delta_N}+\frac{1}{4}y_5\delta_N/\bound{+}{(a^-)'-\delta_N},\\
& z_{(a^+)'}' \coloneqq z_{(a^+)'}+\frac{1}{4}y_5\delta_N/\bound{+}{(a^+)'}.
\end{align*}
As above, we immediately obtain the validity of \eqref{Constr: discretized_dual_objective_geq0}. For \eqref{Constr: discretized_purity_strengthened} with $\bar{t}\neq (p_{\text{min}}^-)'$, we obtain
\begin{align*}
\eqref{Constr: discretized_purity_strengthened}& = a\tilde{b}_{\bar{t}} + z_{\bar{t}}' +p_{y'}(\bar{t}) - y_5\delta_N^2\\
& \geq a\tilde{b}_{\bar{t}} + p_y(\bar{t}) - 2y_5\delta_N^2 \eqqcolon (*),
\end{align*}
where
$$(*) > \begin{cases}
p_y((a^-)') - 2y_5\delta_N^2 \geq 0 & \text{ if } \bar{t} < (a^-)', \delta_N \text{ suff. small}\\
p_y((a^+)') - 2y_5\delta_N^2 \geq 0 & \text{ if } \bar{t} > (a^+)', \delta_N \text{ suff. small}\\
a + p_y(p_{\text{min}}) - 2y_5\delta_N^2 \geq 0 & \text{ if } \bar{t} \in [(a^-)',(a^+)'] \setminus\{p_{\text{min}}\}, \delta_N \text{ suff. small}.
\end{cases}$$
For the remaining case $\bar{t}=(p_{\text{min}}^-)'$, we have that $\bar{t}\in (a^-,a^+)$ and thus
$$a \tilde{b}_{\bar{t}} + z_{\bar{t}}' + p_{y'}(\bar{t}) -y_5\delta_N^2 \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} \frac{1}{4}y_5\delta_N/ \bound{+}{\bar{t}}-2y_5\delta_N^2 \geq 0$$
for sufficiently small $\delta_N$. Similarly, we show that $y',z'$ satisfy \eqref{Constr: discretized_purity_strengthened_shifted}:
To this end, we observe that if $\bar{t}+\delta_N \neq (p_{\text{min}}^-)'$, we have
$$\eqref{Constr: discretized_purity_strengthened_shifted} \geq a\tilde{b}_{\bar{t}+\delta_N} + p_y(\bar{t}+\delta_N)-2y_5\delta_N^2,$$
which equals $(*)$ and is thus nonnegative. Moreover, if $\bar{t}+\delta_N =(p_{\text{min}}^-)' \in (a^-,a^+)$, we have
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = \lim_{\delta\uparrow \delta_N} a + z_{\bar{t}} + \frac{1}{2}y_5\delta_N/\bound{+}{\bar{t}} + p_y(\bar{t}+\delta)-2y_5\delta_N^2\\
& \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} \frac{1}{4}y_5\delta_N/\bound{+}{\bar{t}}-2y_5\delta_N^2\geq 0,
\end{align*}
for sufficiently small $\delta_N$.
Lastly, $\eqref{Constr: MIP_onedim_a+_special} = z_{(a^+)'} + \frac{1}{4}y_5\delta_N/\bound{+}{(a^+)'} + p_y((a^+)') -2y_5\delta_N^2 \geq 0$ holds for sufficiently small $\delta_N$, whereas $\eqref{Constr: MIP_onedim_a-_special} = z_{(a^-)'-\delta_N} + \frac{1}{4}y_5\delta_N/\bound{+}{(a^-)'-\delta_N} + p_y((a^-)') -2y_5\delta_N^2 \geq 0$
holds if $\delta_N$ is sufficiently small.
Finally, the objective value of our adjusted solution
$(a^-)',(a^+)',y',z'$ satisfies
$$c((a^-)',(a^+)',y',z') \geq c(a^-,a^+,y,z) - 2\|c\|_\infty \delta_N.$$
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lem: SIP_has_nearby_MIP_solution_a<0}]
As in the proof of Lemma \ref{Lem: SIP_has_nearby_MIP_solution_a>0}, we w.l.o.g. assume that $(a^-,a^+)\in \text{int}(P),\ a^-,a^+\notin T_N$ and define $(a^-)',(a^+)'\in P\cap T_N$ as follows
\begin{equation}
(a^-)'\coloneqq \min\{\bar{t}: \bar{t}\in T_N, \bar{t} > a^-+\delta_N\},\text{ and } (a^+)'\coloneqq \max\{\bar{t}: \bar{t}\in T_N, \bar{t} < a^+-\delta_N\}.
\end{equation}
Moreover, since $(a^-)',(a^+)'\in T_N$, we can again define $\tilde{b},\Delta^-,\Delta^+$ in such a way that the constraints \eqref{Constr: sum_Delta_leq_2}--\eqref{Constr: MIP_onedim_nonneg_yz} are satisfied. Hence, we continue by proving \eqref{Constr: discretized_dual_objective_geq0}--\eqref{Constr: MIP_onedim_a-_special}:
We now observe that
\begin{equation}\label{Eq: Lemma7_help}
\lim_{t\downarrow a^-} a+z_{(a^-)'-2\delta_N} + p_y(t)=0
\end{equation}
since otherwise $a^-$ would not be optimal for \eqref{Prob: dist_robust_chromatography_conic} as $a^-\pm \delta$ would be feasible for sufficiently small $\delta>0$. Similarly, we conclude $\lim_{t\uparrow a^+} a+z_{(a^+)'+\delta_N} + p_y(t)=0$ and obtain
$$p_y(a^-) \leq -a \text{ and } p_y(a^+) \leq -a.$$
We distinguish now between two cases:
\bigskip
Case 1: Suppose strict inequality holds for either $a^-$ or $a^+$.
Case 1.1: Consider $p_y(a^-)<-a$.
Then, for sufficiently small $\delta_N$, we have $z_{(a^-)'-2\delta_N}>y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N}$. Now, we set $y'_2=y_2+y_5 \delta_N^2$ and $z'_{(a^-)'-2\delta_N}=z_{(a^-)'-2\delta_N}-y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N}$. Inserting these new values into \eqref{Constr: discretized_dual_objective_geq0} again does not alter \eqref{Constr: discretized_dual_objective_geq0} and we obtain:
$\eqref{Constr: discretized_dual_objective_geq0} = \eqref{Constr: discretized_dual_objective_geq0} - y_5 \delta_N^2 + \delta_N^2 y_5 \geq b,$
due to \eqref{Eq: dual_conic_constraint_SIP}.
Moreover, \eqref{Constr: discretized_purity_strengthened} with $(a^-)',(a^+)',y',z'$ holds immediately for $\bar{t}\neq (a^-)'-2\delta_N$ due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}= (a^-)'-2\delta_N$, we have:
\begin{align*}
\eqref{Constr: discretized_purity_strengthened} & = z_{(a^-)'-2\delta_N}-y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N} +p_y((a^-)'-2\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = z_{(a^-)'-2\delta_N}-y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N} +p_y((a^-)'-2\delta_N)\\
& \geq z_{(a^-)'-2\delta_N} +p_y(a^-)+a \overset{\eqref{Eq: Lemma7_help}}{=} 0,
\end{align*}
where the inequality holds since we have that $|p_y((a^-)'-2\delta_N)-p_y(a^-)|\leq -a - y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N}$ if $\delta_N$ is sufficiently small.
For \eqref{Constr: discretized_purity_strengthened_shifted}, we first suppose $\bar{t}\neq (a^-)'-2\delta_N$ and observe
$$a\tilde{b}_{\bar{t}+\delta_N} = a\mathbbm{1}_{[(a^-)'-\delta_N,(a^+)'+\delta_N]}(\bar{t}+\delta_N) \geq a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta_N)$$ since $a\leq 0$ and $[(a^-)'-\delta_N,(a^+)'+\delta_N] \subseteq [a^-,a^+].$
Then, we conclude
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & \geq
a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta_N)+z_{\bar{t}} +p_y(\bar{t}+\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = \lim_{\delta\uparrow \delta_N} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta) +z_{\bar{t}} +p_y(\bar{t}+\delta),
\end{align*}
which is nonnegative due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}=(a^-)'-2\delta_N$, we have
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = z_{(a^-)'-2\delta_N} - y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N} +p_y((a^-)'-\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& \geq z_{(a^-)'-2\delta_N} - y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N} +p_y((a^-)'-\delta_N) \geq 0,
\end{align*}
where the nonnegativity holds since $|p_y((a^-)'-\delta_N)-p_y(a^-)|\leq -a - y_5 \delta_N / \bound{+}{(a^-)'-2\delta_N}$ if $\delta_N$ is sufficiently small.
We further conclude that
\eqref{Constr: MIP_onedim_a+_special} holds since
\begin{align*}
\eqref{Constr: MIP_onedim_a+_special} & = z_{(a^+)'} + p_y((a^+)')\\
& \geq a + z_{(a^+)'} + p_y((a^+)') \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0
\end{align*}
and lastly,
\begin{align*}
\eqref{Constr: MIP_onedim_a-_special} & = z_{(a^-)'-\delta_N} + p_y((a^-)')\\
& \geq a + z_{(a^-)'-\delta_N} + p_y((a^-)')\\
& = \lim_{\delta\downarrow 0} a\mathbbm{1}_{[a^-,a^+]}((a^-)'-\delta) + z_{(a^-)'-\delta_N} + p_y((a^-)'-\delta) \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0.
\end{align*}
\smallskip
Case 1.2: Consider $p_y(a^+)<-a$. Then, for sufficiently small $\delta_N$, we have $z_{(a^+)'+\delta_N} \geq -a -p_y(a^+) > y_5 \delta_N / \bound{+}{(a^+)'+\delta_N}$. Now, we set $y'_2=y_2+y_5 \delta_N^2$ and $z'_{(a^+)'+\delta_N}=z_{(a^+)'+\delta_N}-y_5 \delta_N / \bound{+}{(a^+)'+\delta_N}$. Again, inserting these new values into \eqref{Constr: discretized_dual_objective_geq0} gives:
$$\eqref{Constr: discretized_dual_objective_geq0} - y_5 \delta_N^2 + \delta_N^2 y_5 \geq 0,$$
due to \eqref{Eq: dual_conic_constraint_SIP}.
Moreover, \eqref{Constr: discretized_purity_strengthened} with $(a^-)',(a^+)',y',z'$ holds immediately for $\bar{t}\neq (a^+)'+\delta_N$ due to \eqref{Eq: dual_conic_constraint_SIP}. If $\bar{t}= (a^+)'+\delta_N$, we have:
\begin{align*}
\eqref{Constr: discretized_purity_strengthened} = & a + z_{(a^+)'+\delta_N}-y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} +p_y((a^+)'+\delta_N) + y_5\delta_N^2-y_5\delta_N^2 \\
& = a - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} +z_{(a^+)'+\delta_N} +p_y((a^+)'+\delta_N)\\
& \geq a + z_{(a^+)'+\delta_N} +p_y(a^+)\\
& \geq 0,
\end{align*}
where the first inequality holds since we have that $|p_y((a^+)'+\delta_N)-p_y(a^+)|\leq -a - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N}$ if $\delta_N$ is sufficiently small and the latter inequality holds due to
\eqref{Eq: dual_conic_constraint_SIP}.
Next, \eqref{Constr: discretized_purity_strengthened_shifted} holds for every $\bar{t}\neq (a^+)'+\delta_N$ since
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = \lim_{\delta \uparrow \delta_N} a\mathbbm{1}_{[(a^-)',(a^+)']}(\bar{t}+\delta) + z_{\bar{t}} +p_y(\bar{t}+\delta) + y_5\delta_N^2-y_5\delta_N^2\\
& \geq \lim_{\delta \uparrow \delta_N} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta) +z_{\bar{t}} +p_y(\bar{t}+\delta) \geq 0,
\end{align*}
with \eqref{Eq: dual_conic_constraint_SIP}.
For $\bar{t}=(a^+)'+\delta_N$, we have
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = z_{(a^+)'+\delta_N} - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} + p_y((a^+)'+2\delta_N) + y_5\delta_N^2-y_5\delta_N^2\\
& = z_{(a^+)'+\delta_N} - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} +p_y((a^+)'+2\delta_N)\\
& \geq z_{(a^+)'+\delta_N} - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} + p_y(a^+) -L_y\delta_N\\
& \geq -a - y_5 \delta_N / \bound{+}{(a^+)'+\delta_N} - 2L_y\delta_N,
\end{align*}
where $L_y$ denotes the Lipschitz constant of $p_y$ and thus for sufficiently small $\delta_N$ the term is nonnegative.
We further conclude that \eqref{Constr: MIP_onedim_a+_special} holds immediately since
\begin{align*}
\eqref{Constr: MIP_onedim_a+_special} & = z_{(a^+)'} + p_y((a^+)') + y_5\delta_N^2 - y_5\delta_N^2 \geq a + z_{(a^+)'} + p_y((a^+)') \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0
\end{align*}
and lastly,
\begin{align*}
\eqref{Constr: MIP_onedim_a-_special} & = z_{(a^-)'-\delta_N} + p_y((a^-)') \geq a + z_{(a^-)'-\delta_N} + p_y((a^-)')\\
& = \lim_{\delta\downarrow 0} a + z_{(a^-)'-\delta_N} + p_y((a^-)'-\delta) \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0.
\end{align*}
\bigskip
Case 2: Suppose $p_y(a^-)=p_y(a^+)=-a$. Since $b>0$, $p_y\equiv 0$ is not feasible, and we have that $\frac{\partial}{\partial t} p_y(a^-)<0$ and $\frac{\partial}{\partial t} p_y(a^+)>0$ as these are the only sign changes of the quadratic polynomial $p_y$. Hence, $p_y((a^-)'-\delta_N), p_y((a^+)'+\delta_N)>0$ and, in particular, $p_{\text{min}}\in (a^-,a^+)$.
Solely setting $y_1'\coloneqq y_1+y_5\delta_N^2$ may violate one of the constraints \eqref{Constr: discretized_dual_objective_geq0}--\eqref{Constr: MIP_onedim_a-_special}, particularly at the points around $p_{\text{min}}$. Thus, we also consider
$$L\coloneqq [p_{\text{min}}-3\delta_N,p_{\text{min}}+2\delta_N]\cap T_N$$
and set $z_{\bar{t}}'\coloneqq z_{\bar{t}}+\frac{1}{|L|}y_5\delta_N/\bound{+}{\bar{t}}$ for every $\bar{t}\in L$.
As in the previous cases, we immediately obtain the validity of \eqref{Constr: discretized_dual_objective_geq0}. For \eqref{Constr: discretized_purity_strengthened} with $\bar{t}\notin L$ or $\bar{t}\in L$ with $\bar{t}\leq p_{\text{min}}-2\delta_N$, we obtain
\begin{align*}
\eqref{Constr: discretized_purity_strengthened} & = a\tilde{b}_{\bar{t}} + z_{\bar{t}}' +p_{y'}(\bar{t}) - y_5\delta_N^2\\
& = a\mathbbm{1}_{[(a^-)',(a^+)']}(\bar{t}) + z_{\bar{t}} + p_y(\bar{t}) - 2y_5\delta_N^2 \\
& \geq a\mathbbm{1}_{[(a^-)',(a^+)']}(\bar{t}) + p_y(\bar{t}) - 2y_5\delta_N^2 \eqqcolon (*).
\end{align*}
We continue by estimating
\begin{align*}
(*) & \geq a + p_y(\bar{t}) -p_y(p_{\text{min}}) + p_y(p_{\text{min}}) - 2y_5\delta_N^2 \\
& \geq a + p_y(p_{\text{min}}-2\delta_N) -p_y(p_{\text{min}}) + p_y(p_{\text{min}}) - 2y_5\delta_N^2 \\
& \geq a + 2(p_y(p_{\text{min}}-\delta_N) -p_y(p_{\text{min}})) + p_y(p_{\text{min}}) - 2y_5\delta_N^2 \\
& \overset{\text{Proof of Lemma \ref{Lemma: discretized_inner_approx}}}{\geq} a + 2y_5\delta_N^2 + p_y(p_{\text{min}}) - 2y_5\delta_N^2 \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0.
\end{align*}
For the remaining case $\bar{t}\in L$, we have
$$\eqref{Constr: discretized_purity_strengthened} = a + z_{\bar{t}} + \frac{1}{|L|}y_5\delta_N/ \bound{+}{\bar{t}} + p_{y}(\bar{t}) -2y_5\delta_N^2 \geq a + z_{\bar{t}} + p_{y}(\bar{t}) \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0,$$
where the first inequality holds for sufficiently small $\delta_N$.
Similarly, we show that $y',z'$ satisfy \eqref{Constr: discretized_purity_strengthened_shifted}:
For the shifted constraint, we have
$$\eqref{Constr: discretized_purity_strengthened_shifted} \geq a\mathbbm{1}_{[(a^-)',(a^+)']}(\bar{t}+\delta_N) + p_y(\bar{t}+\delta_N)-2y_5\delta_N^2,$$
which equals $(*)$ if $\bar{t}+\delta_N\notin L$ and is thus nonnegative in this case. The same argument also holds if $\bar{t}+\delta_N \in L$ but $\bar{t}\notin L$ since then $\bar{t}+\delta_N \leq p_{\text{min}}-2\delta_N$. Hence, we now consider the remaining case, where $\bar{t},\bar{t}+\delta_N \in L$:
\begin{align*}
\eqref{Constr: discretized_purity_strengthened_shifted} & = a\tilde{b}_{\bar{t}+\delta_N} + z_{\bar{t}} + \frac{1}{|L|}y_5\delta_N/\bound{+}{\bar{t}} + p_y(\bar{t}+\delta_N)-2y_5\delta_N^2 \geq 0
\end{align*}
since on the one hand $a\tilde{b}_{\bar{t}+\delta_N} + z_{\bar{t}} + p_y(\bar{t}+\delta_N) = \lim_{\delta\uparrow \delta_N} a + z_{\bar{t}} + p_y(\bar{t}+\delta) \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0$ and on the other hand $\frac{1}{|L|}y_5\delta_N/\bound{+}{\bar{t}}-2y_5\delta_N^2 \geq 0$ for sufficiently small $\delta_N$.
Moreover, since $a \leq -2y_5\delta_N^2$ for sufficiently small $\delta_N$, we have
\begin{align*}
\eqref{Constr: MIP_onedim_a+_special} & = z_{(a^+)'} + p_y((a^+)') - 2y_5\delta_N^2 \geq a + z_{(a^+)'} + p_y((a^+)') \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0.
\end{align*}
Similarly, with $a \leq -2y_5\delta_N^2$, we obtain
\begin{align*}
\eqref{Constr: MIP_onedim_a-_special} & = z_{(a^-)'-\delta_N} + p_y((a^-)') - 2y_5\delta_N^2 \geq a + z_{(a^-)'-\delta_N} + p_y((a^-)')\\
& = \lim_{\delta\downarrow 0} a + z_{(a^-)'-\delta_N} + p_y((a^-)'-\delta) \overset{\eqref{Eq: dual_conic_constraint_SIP}}{\geq} 0.
\end{align*}
Finally, the objective value of our adjusted solution
$(a^-)',(a^+)',y',z'$ satisfies
$$c((a^-)',(a^+)',y',z') \geq c(a^-,a^+,y,z) - 2\|c\|_\infty \delta_N - 2\|c\|_\infty \delta_N = c(a^-,a^+,y,z) - 4\|c\|_\infty \delta_N.$$
\end{proof}
\section{Computational Results}
\label{Sec:comp-results}
To evaluate the introduced reformulation approaches, we test them on a
prototypical application in the setup of material design processes. A
fundamental and simultaneously challenging task in this active research field consists in the separation of a mixture of substances into its individual components, characterized by different
criteria. In this process, the
particle mixture flows along a so-called chromatographic column with
material-dependent velocities. Loosely speaking, while flowing along the column,
different materials can be separated. At the end of the process, the concentration over time of each particle is detected and documented in the \emph{chromatogram}. Challenges then consist in
an optimized setup of particle separation, in particular the layout
of (one or more) such columns. A fundamental question consists in
determining points in time when to perform different ways to separate
the materials. In addition, the separated materials need to satisfy quality
requirements. Collecting one or more separated materials is called \emph{fractionation}.
In this application, we consider Polyethylenglykol
(PEG) molecules that shall be separated
with respect to their \emph{degree of polymerization} $x$, i.e., the (discrete) number $x$ of
monomeric units. Quality requirement then state that at least a certain
fraction of the separated material needs to have the specified degree.
Chromatographic processes are prone to uncertainties that already in
very simplified settings
can impact the separation results negatively. In particular, the \emph{redidencetime distributions} (RTDs), which distribute the time a PEG needs to pass the column may be uncertain themselves. In order to maintain quality
requirements even under uncertainty, robust
protection is sought which is a current challenge.
In our example, we use realistic settings from \cite{kaspereitneu}. The uncertain RTDs are assumed to be normal distributions
$\uRTD{\EW{}}{t,x}$, which is a standard assumption in practice. Each distribution describes the residence time for one polymerization degree $x$. We naturally assume that the mean $\EW{x}$ and the variance
$\varianz{x}$ are uncertain.
We denote $X$ as the set of all polymerization degrees within the injection and $X_w \subset X$ as the set of the polymerization degrees of the desired particles.
Assuming a mixture of different PEGs, the aim is to set up
the
separation such that as large a share as possible is collected of the
desired PEG size.
Thus, we need to find a time interval, i.e., a point in time $b_1$ at which
to start and a later point in time $b_2 \geq b_1$ at which to end the
fractionation. On the one hand, we wish to collect as much of the
desired PEG as possible. This amount is determined by the area under its
concentration distribution in the resulting chromatogram. As this area is strongly correlated with the quantity
$b_2 - b_1$, we determine $b_1, b_2$ such that this difference is maximized.
On the other hand, quality requirements on the
end product need to be met. It is required that the percentage of the
desired PEG in the end product does not fall below a given bound, i.e.,
we require a purity of at least some value $R\geq 0$. We show next that this fundamental setting in particle
separation falls exactly into the class of uncertain chance
constraints that we have analyzed in Section \ref{Sec: DRO
reformulation} (in particular the so-called Case 2). Indeed, it can be optimized by studying a
one-dimensional domain $T$ only. In formulas, we aim to solve:
\begin{subequations}\label{Prob: DRO_chromatography}
\begin{align}
\max &~b_2-b_1\\
& \text{s.t.}\ 0 \le \sum_{x\in X} \min_{\mass{x}\in \mathcal{U}_x}~ (\mathbbm{1}_{X_w}(x) - R)\initial{x}\langle \mathbbm{1}_{[b_1,b_2]}(t),\mass{x}\rangle \label{Constr: purity_primal}
\end{align}
\end{subequations}
with $b_1$ being the start and $b_2$ being the end of the fractionation, $q_0$ the initial particle size distribution (PSD), and the uncertain probability measures $\mass{x}\in \mathcal{U}_x$ if and only if $\mass{x}$ satisfies the following constraints:
\begin{subequations}\label{Prob: chroma}
\begin{align}
& \mass{x} \in \mathcal{M}(T)_{\ge 0} && \forall x \in X\\
&\langle 1, \mass{x} \rangle \ge 1 && \forall x \in X \\
&\langle -1, \mass{x} \rangle \ge -1 && \forall x \in X \\
& \langle -t, \mass{x}\rangle \ge -\EW{x, +} & & \forall x \in X \label{Constr: chroma_EW_1} \\
& \langle t, \mass{x}\rangle \ge \EW{x, -} & & \forall x \in X \label{Constr: chroma_EW_2} \\
&\langle -t^2 + (\EW{x,+}+\EW{x,-})t ,\mass{x}\rangle \ge -\varianz{x}^2\varepsilon_{\varianz{x}}+\EW{x,+}\EW{x,-} && \forall x \in X \label{Constr: chroma_var}\\
& \langle -\mathbbm{1}_{[\tau,\tau+h)}(t),\mass{x} \rangle \ge - h \cdot \bound{\EW{}}{\tau,x} && \forall x \in X, \forall \tau \in \bar{T}, \label{Constr: chroma_Envelope}
\end{align}
\end{subequations}
where $\EW{x, +}, \EW{x, -}$ are the upper and lower bounds on the uncertain mean $\EW{\mass{x}}(t)$. Additionally, we want to elaborate on the change of parameters in \eqref{Constr: chroma_var}:
The modeling in Section \ref{Sec: DRO reformulation}, which was motivated by \cite{Delage2010a}, restricts the variance with respect to the nominal mean $\EW{x}$ instead of the uncertain mean $\EW{\mass{x}}(t)$. Since in chromatography the fluctuations of $\EW{\mass{x}}(t)$ are rather strong, but do not coincide with an increased variance, we developed a minor refinement of \eqref{Constr: Second_Moment}. To this end, we first exploit the linearity of $\EW{\mass{}}$ and obtain
\begin{equation}\label{Eq: Second_Moment_refined}
\langle t^2,\mass{x}\rangle - \EW{\mass{x}}(t)^2 = \EW{\mass{x}}(t^2-2t\EW{\mass{x}}(t) + \EW{\mass{x}}(t)^2) = \EW{\mass{x}}(t-\EW{\mass{x}}(t))^2 \leq \varepsilon_{\varianz{x}} \varianz{x}^2.
\end{equation}
As $\EW{\mass{x}}(t)^2=(\langle t,\mass{x}\rangle)^2$ is not linear in $\mass{x}$, we approximate $\EW{\mass{x}}(t)^2$ by a McCormick envelope. To this end, note that $\EW{x,-}\leq \EW{\mass{x}}(t) \leq \EW{x,+}$ and thus the corresponding McCormick envelope provides:
$$\EW{\mass{x}}(t)^2 \leq \EW{x,+}\EW{\mass{x}}(t) + \EW{\mass{x}}(t)\EW{x,-} -\EW{x,+}\EW{x,-},$$
which results in the following relaxation of \eqref{Eq: Second_Moment_refined}
$$\langle t^2,\mass{}\rangle - \EW{x,+}\EW{\mass{x}}(t) - \EW{\mass{x}}(t)\EW{x,-} \leq \varepsilon_{\varianz{x}} \varianz{x}^2 -\EW{x,+}\EW{x,-}.$$
Hence, we include this relaxation as \eqref{Constr: chroma_var} instead of \eqref{Eq: Second_Moment_refined}.
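As a sanity check, the over-estimation property underlying the McCormick envelope can be verified numerically. The following sketch (with hypothetical bounds standing in for $\EW{x,-},\EW{x,+}$) confirms that $(\EW{x,-}+\EW{x,+})z - \EW{x,-}\EW{x,+} \geq z^2$ holds on the whole interval:

```python
def mccormick_upper(z, lo, hi):
    # McCormick over-estimator of z**2 for z in [lo, hi]:
    # (lo + hi)*z - lo*hi - z**2 = -(z - lo)*(z - hi) >= 0 on [lo, hi]
    return (lo + hi) * z - lo * hi

# hypothetical bounds on the uncertain mean (illustration only)
lo, hi = 3.2, 3.4
zs = [lo + k * (hi - lo) / 100 for k in range(101)]
assert all(z * z <= mccormick_upper(z, lo, hi) + 1e-12 for z in zs)
```

Note that the envelope is tight at the endpoints $z=\EW{x,-}$ and $z=\EW{x,+}$, so the relaxation is exact whenever the uncertain mean attains one of its bounds.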
Although \eqref{Prob: DRO_chromatography} is affected by multiple
uncertainties as every $x$ yields uncertain
distributions, we argue next that we can still apply the results from Section \ref{Sec: DRO reformulation}:
We observe that every term
$$p_x\coloneqq \min_{\mass{x}\in \mathcal{U}_x}~ (\mathbbm{1}_{X_w}(x) - R)\initial{x}\langle \mathbbm{1}_{[b_1,b_2]}(t),\mass{x}\rangle$$
in the finite sum \eqref{Constr: purity_primal} can be optimized
separately. Thus, each term can be reformulated by its dual program
$\max_{\tilde{b},\Delta^-,\Delta^+,y,z\in \mathcal{V}_x} d_x(y,z)$ as
given by Theorem \ref{Thm: strong_duality}, where $\mathcal{V}_x$ denotes the corresponding set of feasible solutions. This means that
\eqref{Constr: purity_primal} is equivalent to $0\leq \sum_{x\in X}
\max_{\tilde{b},\Delta^-,\Delta^+,y,z\in \mathcal{V}_x}
d_x(y,z)$. Moreover, for every $x\in X$ the feasible set that arises
by applying Theorem \ref{Thm: MIP_onedim} to $d_x$ provides a
sufficient condition for
$\tilde{b},\Delta^-,\Delta^+,y,z\in\mathcal{V}_x$. Thus, the
intersection of these sets together with $0\leq \sum_{x\in X}
d_x(y,z)$ provides a sufficient condition for the quality requirement
condition \eqref{Constr: purity_primal} to hold. In order to obtain
an algorithmically tractable robust counterpart, we apply the
discretization
from Section \ref{Sec: DRO reformulation} and obtain a mixed-integer
linear optimization problem. Its optimum
provides a solution to \eqref{Prob: DRO_chromatography}, i.e. robust fractionation times that are protected against uncertainties in the system.
\subsection{Application to chromatography with realistic data from chemical engineering}
For our example we used process and optimization
parameters that are typical when working with PEGs. In the following, the most important ones are explained first. Depending on these, the nominal mean, as well as its minimum and maximum deviation can then be calculated.
For the nominal values, we start by noting that
a \emph{solvent} is injected together with the
mixture, which transports the latter through the column. In our case,
this is \emph{acetonitrile} (ACN), and we denote its ratio by $x_\text{ACN}$.
The ACN ratio
impacts the so-called retention time that the PEGs need to flow through the column.
The so-called number of theoretical plates $NTP$ is a quantitative measure
of the separation efficiency of a chromatographic column. It
influences the peak width
through $\varianz{x} = \sqrt{\frac{\EW{x}^2}{NTP}}$. Using the typical numbers
from Table \ref{Tab: parameter}, the nominal mean for each PEG can
be determined using \cite{kaspereitneu}.
\begin{table}[H]
\caption{Process and optimization parameters for example chromatogram}
\centering
\begin{tabular}{c|c}
term & value \\
\hline
\hline
degrees of polymerization of the desired PEGs $x \in X_w$ & 32 \\
\hline
degrees of polymerization of all PEGs within the injection $x \in X$ & 30,31,32,33 \\
\hline
required purity $R$ & 0.95 \\
\hline
number of theoretical plates $NTP$ & 120{,}000 \\
\hline
ACN-ratio $x_\text{ACN}$ & 0.25 \\
\hline
uncertainty in variance $\varepsilon_{\sigma}$ & 0.01 \\
\hline
\end{tabular}
\label{Tab: parameter}
\end{table}
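For illustration, the peak widths implied by the relation $\varianz{x} = \sqrt{\EW{x}^2/NTP}$ follow directly from the parameters in Tables \ref{Tab: parameter} and \ref{tab:inputvalues}; a minimal sketch:

```python
import math

NTP = 120_000  # number of theoretical plates (Table "parameter")
nominal_means = {30: 2.93, 31: 3.10, 32: 3.29, 33: 3.50}  # minutes

# peak width sigma_x = sqrt(mu_x**2 / NTP) = mu_x / sqrt(NTP)
sigma = {x: math.sqrt(mu ** 2 / NTP) for x, mu in nominal_means.items()}
```

The resulting widths (about $0.008$ to $0.010$ min) are small compared to the peak spacing of roughly $0.2$ min, consistent with the sharp peaks visible in the chromatograms.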
In practice, the ACN-ratio is uncertain because of the imprecise pump
for injecting ACN.
We consider realistic uncertainties $x_\text{ACN} \in
[0.25 -\varepsilon_{\text{ACN}},0.25+\varepsilon_{\text{ACN}}]$ and choose a
small ($\varepsilon_{\text{ACN}} =
0.004$), a medium ($\varepsilon_{\text{ACN}} =
0.0042$) as well as a large ($\varepsilon_\text{{ACN}} =
0.0044$)
uncertainty set.
The uncertainty on $x_\text{ACN}$
leads to uncertainty in the
mean
$\EW{x}$.
Again with \cite{kaspereitneu}, the \emph{minimum and maximum retention
times} $\EW{x, -}, \EW{x, +}$ are calculated; the resulting numbers are displayed in Table \ref{tab:inputvalues}.
\begin{table}[H]
\caption{(Bounds on) retention times in minutes calculated via \cite{kaspereitneu}, using parameters in Table~\ref{Tab: parameter}.}
\centering
\begin{tabular}{|r||r||r r r||r r r||r r r|}\hline
$x$ & $\EW{x}$ & $\varepsilon_{\EW{}}$ &$\EW{x, -}$ & $\EW{x, +}$ & $\varepsilon_{\EW{}}$ &$\EW{x, -}$ & $\EW{x, +}$& $\varepsilon_{\EW{}}$ &$\EW{x, -}$ & $\EW{x, +}$ \\
\hline
30 & 2.93 & 0.004 & 2.868 & 2.994 & 0.0042 & 2.865 & 2.998 & 0.0044 & 2.862 & 3.001 \\
31 & 3.10 & 0.004 & 3.033 & 3.175 & 0.0042 & 3.030 & 3.179 & 0.0044 & 3.026 & 3.183 \\
32 & 3.29 & 0.004 & 3.212 & 3.373 & 0.0042 & 3.209 & 3.377 & 0.0044 & 3.205 & 3.382 \\
33 & 3.50 & 0.004 & 3.407 & 3.589 & 0.0042 & 3.403 & 3.593 & 0.0044 & 3.399 & 3.598 \\
\hline
\end{tabular}
\label{tab:inputvalues}
\end{table}
We solved \eqref{Prob: DRO_chromatography} using
Gurobi version 8.1.1 on a standard notebook.
The number of variables and constraints as well as the run time
depend strongly on the value of $\delta$, as shown for three examples
in Table~\ref{Tab:runtime}. For our application, a discretization of
$\delta = 0.001$ minutes is appropriate.
\begin{table}[H]
\caption{Running times for solving model \eqref{Prob:
DRO_chromatography} with different discretization $\delta$.}
\centering
\begin{tabular}{|r|r|r|r|}\hline
$\delta$ (min) & number of variables & number of constraints & CPU time (sec) \\
\hline
0.001 & 6673 & 10450 & 1.13\\
0.0005 & 13323 & 20900 & 2.12 \\
0.0001 & 66523 &104500 & 12.67 \\
\hline
\end{tabular}
\label{Tab:runtime}
\end{table}
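The numbers in Table~\ref{Tab:runtime} also allow a quick consistency check: the model size scales (almost exactly) linearly in $1/\delta$, as expected since the discretization contributes a fixed number of constraints per lattice point. A small sketch using the table values:

```python
# (delta, number of variables, number of constraints) from Table "runtime"
rows = [(0.001, 6673, 10450), (0.0005, 13323, 20900), (0.0001, 66523, 104500)]

# constraints scale exactly like 1/delta: n_con * delta is constant (10.45)
assert all(abs(ncon * d - 10.45) < 1e-9 for d, _, ncon in rows)

# variables scale almost linearly in 1/delta (within about 0.5 %)
ratios = [nvar * d for d, nvar, _ in rows]
assert max(ratios) / min(ratios) < 1.005
```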
Next, we investigate robust optimization of
fractionation times. The y-axis of a chromatogram shows
its output signal (an electric impulse) that is proportional to the
concentration. For example, each of the four
sharp gray peaks in Figure \ref{Fig: improvement} corresponds to a different
PEG.
Under the considered uncertainty sets, the
peaks can lie within the black boxes around them and may overlap.
We point out that we have shown robust fractionation times
to be algorithmically tractable by solving the
mentioned mixed-integer linear optimization problem, even though
elementary functions are involved. In addition, we next compare the
quality of robust optima when the ambiguity sets are
constrained. To this end,
we compare the robust solution with moment
control, obtained by solving \eqref{Prob: chroma}, with robust
solutions obtained without moment control, i.e. without constraints \eqref{Constr: chroma_EW_1}, \eqref{Constr: chroma_EW_2} and \eqref{Constr: chroma_var}.
For the small uncertainty set, there is nearly no overlap of the
uncertainty sets given by the envelopes of the distributions (i.e. the black boxes).
In Figure~\ref{Fig: no_improvement_too_small},
the
components stay well separated and do not
overlap much. Therefore, the solution is governed by
equation~\eqref{Constr: chroma_Envelope}, and the robust fractionation
times without
moment control
(the interval defined by the red dotted line) as
well as those with bounded moments (the interval
defined by the blue dotted line) yield the same time of $b_2-b_1=0.192$ min.\\
For the large uncertainty set, the overlap of the envelopes
is so large that the bound on the variance no longer brings a
benefit and both fractionation times are similar (0.112 min), see Figure~\ref{Fig: no_improvement_too_big}.
\begin{figure}[H]
\includegraphics[width=8cm]{schwankung_42.png}
\caption{Chromatogram with four PEGs with medium-sized
uncertainty ($\varepsilon_{\EW{}}=0.0042$). Optimal fractionation times are displayed on the
x-axis. Additional variance control on the ambiguity set impacts the
robust fractionation times and should be used, since it makes the approach less conservative.}
\label{Fig: improvement}
\end{figure}
Finally, we study the medium uncertainty set,
where the situation is different. For the resulting
chromatogram, the envelopes overlap slightly. As a consequence, the maximum robust fractionation time without variance control is
0.12 minutes (red). If, in addition, bounds on
the moments are enforced, a considerably larger
fractionation time of 0.169 min (blue) is obtained,
see Figure~\ref{Fig: improvement}.
Thus, by bounding the variance in cases where
the overlap is not too large, a better robust solution is
obtained, for which the fractionation is longer while the
quality requirements are still met.
We conclude that for medium-sized uncertainty sets, robust
reformulations should also enforce additional information on the
ambiguity sets.
\begin{figure}
\begin{minipage}[t]{0.49\textwidth}
\captionsetup{width=.99\linewidth}
\includegraphics[width=\textwidth]{schwankung_44.png}
\captionof{figure}{
Chromatogram with four PEGs with large
uncertainty set ($\varepsilon_{\EW{}}=0.0044$).
Variance control has only little impact on the robust fractionation times because the uncertainty
sets overlap too much.
}
\label{Fig: no_improvement_too_big}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\captionsetup{width=.999\linewidth}
\includegraphics[width=\textwidth]{schwankung_4.png}
\captionof{figure}{Chromatogram with four PEGs with small
uncertainty set ($\varepsilon_{\EW{}}=0.004$).
Variance control has only little impact on the robust fractionation times, as the uncertainty sets
almost do not overlap.}
\label{Fig: no_improvement_too_small}
\end{minipage}
\end{figure}
\section{Conclusion}
\label{Sec: Conclusion}
In this paper, we have presented a novel approach for reformulating
distributionally robust optimization problems where elementary
functions are allowed. We have shown that a suitably discretized
formulation yields a mixed-integer semidefinite optimization
model. For a one-dimensional version of the problem with specific confidence sets, we have proven
that the resulting MIP formulation converges to the true robust
counterpart which shows the high quality of the reformulation. The
fact that elementary functions can be included in the model pushes the
applicability of duality-based reformulations of distributional
robustness significantly beyond convexity.
\section{Distributionally robust constraints dependent on elementary functions}\label{Sec: DRO_elementary_functions}
For both Cases 1 and 2 from Section \ref{Sec: Problem Setting}, we consider the DRO constraint \eqref{Prob: DRO_with_undefined_ambiguity_set}, where $\Omega$ is defined by \eqref{Eq: Sec2_first_moment}, \eqref{Eq: Sec2_second_moment} and \eqref{Eq: Sec2_confidence_sets}.
To this end, let again $b\in \mathbb{R}$, consider a compact set $T\subseteq
\mathbb{R}^m$, and let $I\subseteq \mathbb{N}$ denote a finite index set. Next, we define the considered ambiguity set. We assume that
a ``typical'', i.e., nominal distribution with mean $\EW{} \in \mathbb{R}^m$
and covariance matrix $\Sigma \in \mathbb{R}^{m\times m}$ is given, for example from
expert knowledge or by estimation from given data. In formulas, we consider
\begin{subequations}
\label{Prob: Primal_Purity_Constraint_elementary}
\begin{align}
b \leq \min_{\mass{}}~& \langle \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^c,\mass{}\rangle && \label{Constr: Objective_Primal_elementary}\\
\text{s.t.}~& \mass{} \in \mathcal{M}(T)_{\ge 0}, \\
& \langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix}, \mass{}\rangle \succeq 0, \label{Constr: First_Moment_elementary} \\
&\langle -(t-\EW{})(t-\EW{})^\top ,\mass{}\rangle \succeq -\varepsilon_\Sigma\Sigma, && \label{Constr: Second_Moment_elementary}\\
&\langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c(t), \mass{} \rangle \ge \varepsilon_i && i\in I \label{Constr: Primal_elementary_confidence_set},
\end{align}
\end{subequations}
where a choice of $T_1=T, \varepsilon_1=1$ and $T_2=T,
\varepsilon_2=-1$ implies that $\mass{}(T)=1$, i.e. $\mass{}$ is a
probability measure on $T$. In the following, we aim at deriving an
algorithmically tractable
reformulation of this set of constraints. We note that in order to
dualize \eqref{Prob: Primal_Purity_Constraint_elementary}, we consider
continuous approximators $a_i \mathbbm{1}_{A_i}^c,
\text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c$ of the indicator functions
$a_i \mathbbm{1}_{A_i}, \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}$. The
existence of approximators that are arbitrarily close to the indicator
functions is guaranteed by Urysohn's Lemma, see e.g. \cite{Munkres2000a}. In particular, we choose an upper approximator $\mathbbm{1}^c_{A_i} \geq \mathbbm{1}_{A_i}$ whenever $a_i\geq 0$ and a lower approximator whenever $a_i<0$. The opposite approximators are chosen for $\mathbbm{1}_{T_i}$, i.e., we choose $\mathbbm{1}_{T_i}^c\leq \mathbbm{1}_{T_i}$ if $\varepsilon_i>0$ and $\mathbbm{1}_{T_i}^c\geq \mathbbm{1}_{T_i}$ whenever $\varepsilon_i<0$. This establishes the following key property:
\begin{equation}\label{Eq: Urysohn_approx}
a_i\mathbbm{1}^c_{A_i} \geq a_i\mathbbm{1}_{A_i}\text{ and } \text{sign}(\varepsilon_i)\mathbbm{1}^c_{T_i} \leq \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}.
\end{equation}
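Such continuous approximators can be written down explicitly in simple cases. The following sketch constructs a piecewise-linear upper approximator of $\mathbbm{1}_{[a,b]}$ (an illustration for interval sets only; Urysohn's Lemma covers the general case) and checks the dominance required in \eqref{Eq: Urysohn_approx}:

```python
def indicator(t, a, b):
    return 1.0 if a <= t <= b else 0.0

def upper_indicator(t, a, b, eps):
    # continuous over-approximation of 1_[a,b]: equals 1 on [a,b] and
    # decays linearly to 0 within distance eps of the interval
    if a <= t <= b:
        return 1.0
    dist = min(abs(t - a), abs(t - b))
    return max(0.0, 1.0 - dist / eps)

# dominance 1^c >= 1 on a test grid, as required for a_i >= 0
grid = [i / 1000 for i in range(-500, 1501)]
assert all(upper_indicator(t, 0.0, 1.0, 0.01) >= indicator(t, 0.0, 1.0)
           for t in grid)
```

A lower approximator is obtained analogously by letting the function drop to $0$ inside an $\varepsilon$-neighborhood of the boundary instead of outside it.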
In the following, we define the necessary ingredients for
reformulating such a DRO constraint by dualizing
\eqref{Prob: Primal_Purity_Constraint_elementary}. Subsequently, a
tractable
and high-quality inner approximation of the resulting
constraint is obtained.
We first employ duality theory using an adjoint operator:
\begin{remark}\label{Rem: existence_adjoint_operator}
Let $\mathcal{S}^m$ denote the set of symmetric $m$ by $m$ matrices. It might not be immediately clear whether an adjoint operator with respect to the primal operator $\mathcal{A} : \mathcal{M}(T) \rightarrow \mathcal{S}^{m+1}\times \mathcal{S}^m \times \mathbb{R}^I$ of \eqref{Prob: Primal_Purity_Constraint_elementary} exists.
However, it is constructed in a quite straightforward manner:
First, we observe that for the inner products containing matrices $M\in \mathcal{S}^k$, we have
$$\langle \langle M, \mass{} \rangle , Y \rangle_F = \langle \langle M, Y\rangle_F , \mass{} \rangle \text{ for arbitrary }\mass{}\in \mathcal{M}(T),Y\in \mathcal{S}^k,$$
where $\langle \cdot,\cdot\rangle_F: \mathcal{S}^k\times \mathcal{S}^k\rightarrow \mathbb{R}$ denotes the Frobenius inner product. In particular, for $k\in \{m,m+1\}$, this includes the matrices
$$M\in \left\{\begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix}, -(t-\EW{})(t-\EW{})^\top \right\}.$$
For the inner products containing only the entries $\text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}$ of $\mathcal{A}$, we have
$$\langle \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}, \mass{} \rangle y = \langle \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i} y , \mass{} \rangle \text{ for every }\mass{}\in \mathcal{M}(T), y\in \mathbb{R}.$$
Hence, we have constructed an adjoint operator $\mathcal{B}: \mathcal{S}^{m+1}\times \mathcal{S}^m\times \mathbb{R}^I\rightarrow \mathcal{C}(T)$ to $\mathcal{A}$, which is defined by
$$\left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix}, Y_1\right\rangle + \langle -(t-\EW{})(t-\EW{})^\top, Y_2\rangle + \sum_{i\in I}\text{sign}(\varepsilon_i) \mathbbm{1}_{T_i} y_i.$$
Moreover, $\mathcal{B}$ is unique due to Riesz' representation
theorem, see e.g. \cite{Brezis2010a}.
\end{remark}
With this adjoint operator, we derive the following dual program for \eqref{Prob: Primal_Purity_Constraint_elementary}:
\begin{subequations}
\label{Prob: Dual_Purity_Constraint_elementary}
\begin{align}
b \le \max_{y_i,Y_1,Y_2}~& \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \label{Obj: Objective_Dual_elementary} \\
\text{s.t.}~& \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^c - \left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix} , Y_1 \right\rangle -\langle -(t-\EW{})(t-\EW{})^\top , Y_2\rangle \notag \\
& -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c y_i \in \mathcal{C}(T)_{\ge 0}, \label{Constr: Dual_elementary}\\
& Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I,
\end{align}
\end{subequations}
where $\mathcal{C}(T)_{\ge 0}$ denotes the cone of the continuous,
nonnegative functions on $T$.
As is typical in reformulation
approaches in robust optimization, we aim at using strong duality.
Indeed, we next establish strong duality between \eqref{Prob:
Primal_Purity_Constraint_elementary} and \eqref{Prob:
Dual_Purity_Constraint_elementary}, which can be seen as a direct
corollary of Corollary 3.0.2 in \cite{Shapiro2000a} or as a direct
consequence of the dualization theory illustrated, e.g., in \cite{Barvinok2002a}.
\begin{theorem}\label{Thm: strong_duality}
Suppose that $\mass{} \sim \mathcal{N}(\EW{},\Sigma)$ is both a strictly positive Radon measure and feasible for \eqref{Prob: Primal_Purity_Constraint_elementary}. Then, the duality gap between the problems \eqref{Prob: Primal_Purity_Constraint_elementary} and \eqref{Prob: Dual_Purity_Constraint_elementary} is zero.
\end{theorem}
\begin{proof}
We observe that $\mass{} \sim \mathcal{N}(\EW{},\Sigma)$ is feasible for \eqref{Prob: Primal_Purity_Constraint_elementary}, i.e. \eqref{Prob: Primal_Purity_Constraint_elementary} is ``consistent'' in the sense of Shapiro. Furthermore, $T$ is compact and the functions in the objective as well as in the constraints of \eqref{Prob: Primal_Purity_Constraint_elementary} are continuous. Due to the isometry of the metric spaces $(\mathcal{S}^n, \langle\cdot , \cdot \rangle_F)$ and $(\mathbb{R}^{\frac{n(n+1)}{2}}, \langle \cdot , \cdot \rangle)$, we can further reformulate \eqref{Prob: Primal_Purity_Constraint_elementary} as a conic program with $\mathcal{A}\mass{} -b\in K$ for a suitable cone $K\subseteq \mathbb{R}^{n(n+1)+|I|}$.
Hence, strong duality follows from Corollary 3.1 in \cite{Shapiro2000a}.
\end{proof}
\subsection{Computation of feasible solutions by a discretized robust counterpart}\label{Subsec: discretized_DRO_reformulation_elementary}
In this section, we derive an algorithmically tractable model
for the robust counterpart \eqref{Prob:
Dual_Purity_Constraint_elementary}.
A standard approach to find an approximate solution to this
semiinfinite (SIP) feasibility problem
is to sample the semiinfinite constraint \eqref{Constr:
Dual_elementary} and solve the resulting finite-dimensional SDP
that only contains the sampled constraints. However, a
feasible solution to a finite subset of the constraints in
\eqref{Constr: Dual_elementary} does not necessarily satisfy
\eqref{Constr: Dual_elementary} itself.
This
means that a computed solution may not satisfy \eqref{Prob:
Dual_Purity_Constraint_elementary}, and thus, by solving Case 1 or 2
with respect to this relaxation of \eqref{Prob:
Dual_Purity_Constraint_elementary}, we might obtain a solution
that is not necessarily protected against the uncertainties in the
ambiguity set $\Omega$, i.e. it is not robust and does not necessarily satisfy \eqref{Prob: Primal_Purity_Constraint_elementary}.
In this work, however, we aim for a robust constraint for $\Omega$, as
for many applications a guaranteed protection is important,
e.g.
in medical applications.
To this end, we propose a discretization scheme that
provides an inner approximation of \eqref{Constr:
Dual_elementary}. This means that every solution of the
discretization of \eqref{Prob: Dual_Purity_Constraint_elementary} will
indeed satisfy \eqref{Prob: Dual_Purity_Constraint_elementary} and
thereby guarantee that the corresponding decision variables
$a_i,a_i^-,a_i^+$ are feasible for \eqref{Prob:
Primal_Purity_Constraint_elementary}. This robust formulation will
make use of Lipschitz continuity of the non-elementary functions in
\eqref{Constr: Dual_elementary} for which we introduce some further notation.
First, for fixed $Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I$, we denote the left-hand side of \eqref{Constr: Dual_elementary} by
\begin{align*}
f^{c}(t)\coloneqq & \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^{c}(t) - \left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix} , Y_1 \right\rangle +\langle (t-\EW{})(t-\EW{})^\top , Y_2\rangle\\
&\quad -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^{c}(t) y_i.
\end{align*}
In particular, this implies the equivalence
$$\eqref{Constr: Dual_elementary} \Leftrightarrow f^c(t)\geq 0 \text{ for every } t \in T.$$
Moreover, we observe that $f^c(t)$ consists of a sum of elementary functions and the polynomial term:
$$p_Y(t)\coloneqq -\left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix} , Y_1 \right\rangle +\langle (t-\EW{})(t-\EW{})^\top , Y_2\rangle.$$
Furthermore, $p_Y$ is Lipschitz continuous since $T$ is compact and its coefficients $Y_1,Y_2$ are bounded:
\begin{lemma}\label{Lemma: Lipschitz_continuity_of_p_elementary}
Assume that $\EW{} \in T_i$ for every $i\in I$ with $\varepsilon_i>0$ and $\EW{} \notin T_i$ if $\varepsilon_i<0$. Then, the polynomial $p_Y(t)$ is Lipschitz continuous in $t$ with a uniform Lipschitz constant $L$.
\end{lemma}
The proof of this lemma is based on showing that the coefficients $Y_1,Y_2$ of $p_Y$ are bounded. As it is rather technical, we postpone it to the appendix for ease of presentation and continue by discussing the resulting modeling power.
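To make Lemma \ref{Lemma: Lipschitz_continuity_of_p_elementary} concrete, a Lipschitz constant can be bounded explicitly from the matrix entries in the one-dimensional case. The following sketch (with hypothetical values for $\EW{}$, $\Sigma$, $\varepsilon_{\EW{}}$, $Y_1$, $Y_2$; the sign of the $Y_1$ term does not affect the bound) compares such a bound against sampled difference quotients:

```python
import numpy as np

# one-dimensional sketch (m = 1) with hypothetical data
mu, Sigma, eps_mu = 3.0, 0.1, 0.05
Y1 = np.array([[1.0, 0.3], [0.3, 2.0]])  # 2x2 PSD dual variable
Y2 = 0.4                                 # scalar dual variable (m = 1)
t_lo, t_hi = 2.0, 4.0                    # compact domain T = [2, 4]

def p(t):
    # polynomial part of f^c for m = 1:
    # -<[[Sigma, t-mu], [t-mu, eps_mu]], Y1>_F + (t-mu)**2 * Y2
    frob = Sigma * Y1[0, 0] + 2 * (t - mu) * Y1[0, 1] + eps_mu * Y1[1, 1]
    return -frob + (t - mu) ** 2 * Y2

# |p'(t)| = |-2*Y1[0,1] + 2*(t-mu)*Y2| <= 2*|Y1[0,1]| + 2*diam(T)*|Y2| on T
L_bound = 2 * abs(Y1[0, 1]) + 2 * (t_hi - t_lo) * abs(Y2)

# sampled difference quotients never exceed the Lipschitz bound
ts = np.linspace(t_lo, t_hi, 2001)
slopes = np.abs(np.diff(p(ts)) / np.diff(ts))
assert slopes.max() <= L_bound + 1e-9
```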
Indeed, its assumption on the confidence sets $T_i$, namely that
$\EW{} \in T_i$ whenever $\varepsilon_i>0$ and $\EW{} \notin T_i$ whenever
$\varepsilon_i<0$, limits the choice of probability measures $\mass{}\in
\Omega$. We note that this limitation is rather mild, as we are only restricted in our modeling power by not being able to force deviation from $\EW{}$. However, most distributions are concentrated around their respective expectation to some degree. Since the requirement above still allows us to force the probability mass of $\mass{}\in \Omega$
towards the estimated expected value $\EW{}$, it seems not very restrictive. In particular, discrepancy-based approaches such as Wasserstein balls yield a similar structure.
If confidence
sets are used,
restrictions in modeling are fairly common, see for example the so-called nesting condition in \cite{Wiesemann2014a} and the references therein.
In addition, there are relevant settings where the assumption from the
above lemma can be
weakened. Indeed, in Section \ref{Sec: DRO reformulation} we will
show that for one-dimensional $T$, no such assumption is needed at all.
In the following lemma, we establish an inner approximation of the DRO constraint \eqref{Constr: Dual_elementary}. To this end, we denote by $T_N=\delta_N \mathbbm{Z}^m\cap T$ the standard lattice with step size $\delta_N \in \mathbb{R}_{>0}$, which serves as a discretization of $T$. Moreover, we define a \emph{level set} $L_h$ by
$$L_h\coloneqq \left\{t\in T:\ \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(t)-\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(t) =h\right\},$$
where $h$ denotes the \emph{height} of the specific level set.
The motivation for considering these level sets is that on the boundaries of $L_h$ the elementary functions $\mathbbm{1}_{A_i},\mathbbm{1}_{T_i}$ change abruptly, and any potential Lipschitz constant $L$ for the continuous approximations $\mathbbm{1}_{A_i}^c,\mathbbm{1}_{T_i}^c$ of $\mathbbm{1}_{A_i},\mathbbm{1}_{T_i}$ tends to infinity the closer the approximation is. However, in most applications, we can assume that $A_i\cap T_N\neq \emptyset$ and $T_i\cap T_N\neq \emptyset$ whenever $\delta_N$ is sufficiently small, e.g. if every $A_i$ and $T_i$ contains open sets. In particular, we assume that $\delta_N$ is chosen small enough such that for every $t\in L_h$ there is a $\bar{t}\in T_N\cap L_h$ with $\|t-\bar{t}\|\leq \sqrt{m}\delta_N$. Thus, we address the jumps of $f^c$ on these boundaries by guaranteeing that for every $t\in L_h$ there is a nearby sample point also contained in $L_h$. In addition, as seen in Lemma \ref{Lemma: Lipschitz_continuity_of_p_elementary}, we can address the differences of $f^c$ evaluated on sample points $\bar{t}\in T_N$ versus non-sample points $t\in T\setminus T_N$ by exploiting the Lipschitz continuity of the polynomial part $p_Y$ of $f^c$.
Finally, we observe that the union of all these level sets satisfies $\bigcup_{h} L_h=T$ and forms a finite, disjoint decomposition of $T$. Thus, we have addressed all potential deviations of $f^c$ between values on $T\setminus T_N$ and $T_N$.
\begin{lemma}\label{Lemma:inner_approx}
Let $L$ be the Lipschitz constant of $p_Y$. Let further $\delta_N$ be sufficiently small such that for every $t\in T$ with $t\in L_h$, there exists a $\bar{t}\in T_N\cap L_h$ with $\|t-\bar{t}\|\leq \delta_N\sqrt{m}$. Then, the finitely many constraints \begin{equation}\label{Eq: Dual_elementary_discretized}
f^c(\bar{t})-L\delta_N\sqrt{m} \geq 0 \text{ for every } \bar{t}\in T_N
\end{equation}
imply the semiinfinite constraint
$$f^c(t) \geq 0 \text{ for every } t\in T.$$
\end{lemma}
For ease of presentation, we postpone the proof of Lemma \ref{Lemma:inner_approx} to the appendix. Note that Lemma \ref{Lemma:inner_approx} provides a sufficient criterion for the SIP constraint \eqref{Constr: Dual_elementary}. Thus, replacing \eqref{Constr: Dual_elementary} by \eqref{Eq: Dual_elementary_discretized} gives an inner approximation of \eqref{Prob: Dual_Purity_Constraint_elementary}. Therefore, the existence of $y,Y_1,Y_2$ satisfying \eqref{Eq: Dual_elementary_discretized} in addition to the remaining constraints of \eqref{Prob: Dual_Purity_Constraint_elementary} guarantees that the DRO constraint \eqref{Prob: Primal_Purity_Constraint_elementary} is satisfied.
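The mechanism behind Lemma \ref{Lemma:inner_approx} can be illustrated numerically: whenever a Lipschitz function clears the margin $L\delta_N\sqrt{m}$ on the lattice $T_N$, it is nonnegative on all of $T$. A one-dimensional sketch with a hypothetical stand-in for $f^c$:

```python
import numpy as np

L, delta, m = 2.0, 0.01, 1  # Lipschitz constant, lattice step, dimension
grid = np.arange(0.0, 1.0 + delta / 2, delta)  # lattice T_N on T = [0, 1]

def f(t):
    # hypothetical Lipschitz function (|f'| <= 0.5 <= L) playing the role of f^c
    return 0.5 + 0.25 * np.sin(2 * t)

# finitely many discretized constraints: f(t_bar) - L*delta*sqrt(m) >= 0 on T_N ...
certified = bool(np.all(f(grid) - L * delta * np.sqrt(m) >= 0))

# ... imply the semiinfinite constraint f >= 0 on all of T
if certified:
    fine = np.linspace(0.0, 1.0, 100_001)
    assert np.all(f(fine) >= 0)
```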
\subsection{Tractable approximations for DRO with convex objective}
We note that \eqref{Prob: Primal_Purity_Constraint_elementary} is
often considered as a (non-convex) DRO constraint embedded in an otherwise convex program, e.g. as illustrated by Cases 1 and 2 in Section \ref{Sec: Problem Setting}. Hence, instead of considering constant $a_i,A_i$, we investigate in the following
paragraphs how the approximation from Lemma \ref{Lemma:inner_approx} can be
applied to Case 1, i.e., with decision variables $a_i$, and to Case 2, with
decision variables $a_i^-,a_i^+$ that define the box
$A_i=[a_i^-,a_i^+]$. For the sake of simplicity, we assume that the
objective of the DRO is linear. However, the results below hold analogously for other convex objective functions as well. For Case 1, let $a\in P \subseteq \mathbb{R}^k$ be a decision variable and consider:
\begin{subequations}\label{Prob: primal_linear_a_i}
\begin{align}
\max_{a\in P, Y_1,Y_2,y} & c^\top a \\
\text{ s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b, \label{Constr: primal_linear_a_i_objective_constraint1} \\
& \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^c(t) - \left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_1
\end{bmatrix} , Y_1 \right\rangle \notag \\
& +\left\langle (t-\EW{})(t-\EW{})^\top , Y_2\right\rangle -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) y_i\geq 0, \qquad \forall t\in T \label{Constr: primal_linear_a_i_objective_constraint2}\\
& Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I.
\end{align}
\end{subequations}
It turns out that computing lower bounds for \eqref{Prob: primal_linear_a_i} is tractable:
\begin{theorem}\label{Thm: SDP_for_elementary_with_varying_a_i}
A solution to the following semidefinite problem yields a feasible solution to the semiinfinite problem \eqref{Prob: primal_linear_a_i}.
\begin{subequations} \label{Prob: discretized_elementary_linear_obj}
\begin{align}
\max_{a\in P, Y_1,Y_2,y} & c^\top a\\
\text{ s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b \\
& \sum_{i=1}^k a_i \mathbbm{1}_{A_i}(\bar{t}) - \left\langle \begin{bmatrix}
\Sigma & \bar{t}-\EW{}\\
(\bar{t}-\EW{})^\top & \varepsilon_1
\end{bmatrix} , Y_1 \right\rangle +\langle (\bar{t}-\EW{})(\bar{t}-\EW{})^\top , Y_2\rangle \notag \\
& -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\bar{t}) y_i - L\delta_N\sqrt{m}\geq 0 \qquad\qquad\qquad\qquad\qquad \forall \bar{t}\in T_N, \label{Constr: discretized_purity_elementary_linear_obj}\\
& Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I.
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
Let $a\in P$ be arbitrary. Due to Lemma \ref{Lemma:inner_approx}, we observe that for every $a$ Constraint \eqref{Constr: discretized_purity_elementary_linear_obj} implies $f^c(t)\geq 0$ for every $t\in T$, i.e. \eqref{Constr: primal_linear_a_i_objective_constraint2}. Hence, the claim follows.
\end{proof}
We note that the objective $\sum_{i=1}^k a_i \mathbbm{1}_{A_i}$
is linear and thus convex in the $a_i$. Thus, if the number
of confidence sets $|I|$ is low, Problem \eqref{Prob:
discretized_elementary_linear_obj} satisfies the (weakened)
conditions needed for Theorem 1 in \cite{Wiesemann2014a} and can be
exactly reformulated as a convex program by applying their methods,
whereas the proposed method in this paper only provides a lower bound
on \eqref{Prob: primal_linear_a_i}. However, our approach can also be
used for a large number of confidence sets. In addition, it does not
depend on the convexity of $\sum_{i=1}^k a_i \mathbbm{1}^c_{A_i}$ and
can also be used in non-convex settings. This can be seen from the following result for Case 2, where $T=[0,M]^m$ and the $A_i=[a_i^-,a_i^+]$ are assumed to be hypercubes:
\begin{subequations}\label{Prob: primal_linear_A_i}
\begin{align}
\max & \sum_{i=1}^k (c^-_i)^\top a^-_i + (c^+_i)^\top a^+_i\\
\text{s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b, \label{Constr: primal_linear_A_i_objective_constraint1} \\
& \sum_{i=1}^k a_i \mathbbm{1}_{A_i}^c(t) - \left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_1
\end{bmatrix} , Y_1 \right\rangle \notag \\
& \qquad +\left\langle (t-\EW{})(t-\EW{})^\top , Y_2\right\rangle -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) y_i\geq 0, && \forall t\in T \label{Constr: primal_linear_A_i_objective_constraint2}\\
& a_i^-,a_i^+\in P, Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I.
\end{align}
\end{subequations}
Note that $\sum_{i=1}^k a_i \mathbbm{1}^c_{[a_i^-,a_i^+]}$ is non-convex in the variables $a_i^-,a_i^+\in \mathbb{R}^m$.
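The stated non-convexity can be checked on a tiny hypothetical instance: for a fixed evaluation point $t$, the value $a\mathbbm{1}_{[a^-,a^+]}(t)$ jumps from $0$ to $a$ as the box grows, which violates the midpoint inequality required for convexity. A minimal sketch (all numbers hypothetical):

```python
# Concrete check that (a_minus, a_plus) -> a * 1_{[a_minus, a_plus]}(t)
# is non-convex: a convex function g must satisfy
# g(midpoint) <= (g(x) + g(y)) / 2, which fails below.
def g(a_minus, a_plus, a=1.0, t=0.5):
    return a * (1.0 if a_minus <= t <= a_plus else 0.0)

x, y = (0.0, 0.4), (0.6, 1.0)                    # t = 0.5 lies in neither box
mid = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)     # box [0.3, 0.7] contains t

assert g(*x) == 0.0 and g(*y) == 0.0
assert g(*mid) == 1.0        # 1 > (0 + 0) / 2: midpoint inequality violated
```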
In the following theorem, we model the elementary function
$\mathbbm{1}_{[a_i^-,a_i^+]}:T_N\rightarrow \mathbb{R}$ by binary variables
$\tilde{b}_{\bar{t}}^i$. Additionally, we ensure that these variables properly model $\mathbbm{1}_{[a_i^-,a_i^+]}(\bar{t})$ by tracking the ``jumps'' from $0$ to $1$
at $a_{ij}^-$ in direction $j\in [m]$ by additional binary variables
$\Delta_{\bar{t}}^{-,i,j}$ and the ``jumps'' from $1$ to $0$ at $a_{ij}^+$ in direction $j\in [m]$ by
$\Delta_{\bar{t}}^{+,i,j}$, respectively. A similar modeling was given
by Dienstbier et al. in \cite{Dienstbier2020a} for an engineering application in
the design of particulate products.
\begin{theorem}\label{Thm: MIP_multidim}
Let $M_\delta\coloneqq \{-\delta_N,0,\delta_N,\ldots, M\}$ denote the discretization of $[0,M]$ extended by $-\delta_N$, and let $T_0^j=\{\bar{t}\in T_N: \bar{t}_j=0\}\subseteq T_N$ denote the set of boundary points of $T_N$ with vanishing $j$-th coordinate. Then, a solution to the following MISDP yields a feasible solution to \eqref{Prob: primal_linear_A_i}.
\begin{subequations}\label{Prob: Dual_Purity_Constraint_elementary_discretized}
\begin{align}
\max &\sum_{i=1}^k (c^-_i)^\top a^-_i + (c^+_i)^\top a^+_i\\
\text{s.t.}~& \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b \label{Constr: discretized_dual_objective_elementary_geq0} \\
& \sum_{i=1}^k a_i\tilde{b}_{\bar{t}}^i - \left\langle \begin{bmatrix}
\Sigma & \bar{t}-\EW{}\\
(\bar{t}-\EW{})^\top & \varepsilon_1
\end{bmatrix} , Y_1 \right\rangle \notag\\
& \qquad +\langle (\bar{t}-\EW{})(\bar{t}-\EW{})^\top , Y_2\rangle \notag \\
& \qquad -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\bar{t}) y_i -L\delta_N\sqrt{m}\geq 0 && \forall \bar{t}\in T_N, \label{Constr: discretized_purity_elementary}\\
& \tilde{b}_{\bar{t} + e_j\delta_N}^i -\tilde{b}_{\bar{t}}^i = \Delta_{\bar{t}}^{-,i,j}-\Delta_{\bar{t}}^{+,i,j} && \forall \bar{t}\in T_N, i\in [k], j\in [m], \label{Constr: jump_def}\\
& \sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} \Delta_{\bar{t}}^{-,i,j}+\Delta_{\bar{t}}^{+,i,j} \leq 2 && \forall i\in [k], j\in [m], t_0\in T_0^j, \label{Constr: sum_delta_bound}\\
& a_{ij}^-\geq \sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} (l+\delta_N) \Delta_{\bar{t}}^{-,i,j} && \forall i\in [k],j\in [m], t_0\in T_0^j, \label{Constr: a-_lower_bound}\\
& a_{ij}^+\leq M-\sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} (M-l) \Delta_{\bar{t}}^{+,i,j} && \forall i\in [k], j\in [m], t_0\in T_0^j,\label{Constr: a+_upper_bound}\\
& a_{ij}^+-a_{ij}^- \geq M \sum_{l\in M_\delta: \bar{t}=t_0+le_j} \Delta_{\bar{t}}^{+,i,j}\notag\\
& \quad\ -\hspace{-0.7cm}\sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} \left( (M-l) \Delta_{\bar{t}}^{+,i,j} + (l+\delta_N) \Delta_{\bar{t}}^{-,i,j}\right) && \forall i\in [k], j\in [m], t_0\in T_0^j,\label{Constr: adiff_lower_bound}\\
& 0 \leq a_{ij}^+ -a_{ij}^-\leq \sum_{\bar{t}\in T_N} \tilde{b}_{\bar{t}}^i-1 && \forall i\in [k],\forall j\in [m],\label{Constr: address_empty_set}\\
& a^-_i,a^+_i\in P, y \in \mathbb{R}_{\geq 0}^I, Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0},\\
& \Delta_{\bar{t}}^{-,i,j},\Delta_{\bar{t}}^{+,i,j},\tilde{b}_{\bar{t}}^i\in \{0,1\},
\end{align}
\end{subequations}
where $\tilde{b}_{\bar{t}}^i\coloneqq 0$ for every $\bar{t}\notin T_N$.
\end{theorem}
\begin{proof}
We consider a feasible solution $\Delta_{\bar{t}}^{-,i,j},\Delta_{\bar{t}}^{+,i,j},\tilde{b}_{\bar{t}}^i, a^-_i,a^+_i$ for \eqref{Prob: Dual_Purity_Constraint_elementary_discretized} and show that for every $i\in [k], \bar{t}\in T_N$ we have $\tilde{b}_{\bar{t}}^i=\mathbbm{1}_{[a_i^-,a_i^+]}(\bar{t})$. To this end, note that for every $i\in [k]$ there indeed exists an index $\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$ due to \eqref{Constr: address_empty_set}. Now, given an arbitrary index $\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$, we first show that $\tilde{b}_{\bar{t}}^i=1$ implies $\mathbbm{1}_{[a_i^-,a_i^+]}(\bar{t})=1$, i.e., $\bar{t}\in [a_i^-,a_i^+]$:
\bigskip
We first observe that for every direction $j$, there exist $t_0\in T_0^j$ and $\kappa_j\in \{0,\delta_N,2\delta_N,\ldots,M\}$ such that $$\bar{t} = t_0+\kappa_j e_j,$$
i.e., we consider the line in direction $j$ passing through $\bar{t}$ and consequently through $t_0$ as well. Then, we define $\kappa_j^{\max}$ as the index of the last element on this line with $\tilde{b}_t^i=1$, i.e.,
$$\kappa_j^{\max}\coloneqq \max \{ l\in \{0,\delta_N,2\delta_N,\ldots,M\}: \tilde{b}_{t_0+le_j}^i=1\}.$$
Thus, $\tilde{b}_{t_0+(\kappa_j^{\max}+\delta_N)e_j}^i=0$ and \eqref{Constr: jump_def} implies $\Delta_{t_0+\kappa_j^{\max}e_j}^{-,i,j}=0, \Delta_{t_0+\kappa_j^{\max}e_j}^{+,i,j}=1$. Moreover, \eqref{Constr: a+_upper_bound} implies
\begin{equation}\label{Eq: a+_help}
a_{ij}^+ \leq M-(M-\kappa_j^{\max})=\kappa_j^{\max}=\bar{t}_j + (\kappa_j^{\max}-\kappa_j),
\end{equation}
where the latter equality originates from the definition of $\kappa_j$ above.
Similarly, we define
$$\kappa_j^{\min}\coloneqq \min \{l\in \{0,\delta_N,2\delta_N,\ldots,M\}: \tilde{b}_{t_0+le_j}^i=1\}.$$
Thus, $\tilde{b}_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^i=0$ and \eqref{Constr: jump_def} implies $\Delta_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^{-,i,j}=1, \Delta_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^{+,i,j}=0$. Moreover, \eqref{Constr: a-_lower_bound} implies
\begin{equation}\label{Eq: a-_help}
a_{ij}^- \geq (\kappa_j^{\min}-\delta_N)+\delta_N=\kappa_j^{\min} = \bar{t}_j + \kappa_j^{\min}-\kappa_j.
\end{equation}
Moreover, due to \eqref{Constr: sum_delta_bound}, we know that these are the only nonzero entries of $\Delta_{t_0+le_j}^{-,i,j},\Delta_{t_0+le_j}^{+,i,j}$. Thus, due to \eqref{Constr: adiff_lower_bound}, we obtain
$$a_{ij}^+-a_{ij}^- \geq M - (M-\kappa_j^{\max})-\kappa_j^{\min} = \kappa_j^{\max}-\kappa_j^{\min},$$
which implies equality in both \eqref{Eq: a+_help} and \eqref{Eq: a-_help} and thus $\bar{t}_j=\kappa_j\in [\kappa_j^{\min},\kappa_j^{\max}]=[a_{ij}^-, a_{ij}^+]$ for every index $\bar{t}\in T_N$ with $\tilde{b}_{\bar{t}}^i=1$.
\bigskip
For the reverse implication, we need to show that $\bar{t}\in [a_i^-,a_i^+]$ implies $\tilde{b}_{\bar{t}}^i=1$. Due to \eqref{Constr: address_empty_set}, we obtain that $[a_i^-,a_i^+]\neq \emptyset$ implies the existence of a $\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$. In particular, the previous implication shows that $\bar{t}\in [a_i^-,a_i^+]$. Beginning with this $\bar{t}$, we prove the following claim for an arbitrary direction $j$:
\begin{equation}\label{Eq: claim_main_Thm}
\tilde{b}_{\bar{t}}^i=1 \text{ implies } \tilde{b}_{\bar{t}+le_j}^i =1 \text{ for every } l: \bar{t}_j+l\in [a_{ij}^-, a_{ij}^+].
\end{equation}
Let $\bar{t}=t_0+\kappa_je_j$ with $t_0\in T_0^j$ as above. Then, with the same definitions of $\kappa_j^{\min},\kappa_j^{\max}$, the arguments from the previous implication that led to equality in \eqref{Eq: a+_help} and \eqref{Eq: a-_help} imply $\kappa_j^{\min}=a_{ij}^-$, $\kappa_j^{\max}=a_{ij}^+$. Moreover, the definition of $\kappa_j^{\min}, \kappa_j^{\max}$ leads to:
$$1=\tilde{b}_{t_0+\kappa_j^{\min}e_j}^i=\tilde{b}_{t_0+(\kappa_j^{\min}+\delta_N)e_j}^i=\ldots =\tilde{b}_{t_0+\kappa_j^{\max}e_j}^i=1$$
with $(t_0+\kappa_j^{\min}e_j)_j=a_{ij}^-$ and $(t_0+\kappa_j^{\max}e_j)_j=a_{ij}^+$.
Hence, our claim \eqref{Eq: claim_main_Thm} follows and as the direction $j$ was chosen arbitrarily, we obtain that $\mathbbm{1}_{[a_i^-,a_i^+]}(\bar{t})=1$ also implies $\tilde{b}_{\bar{t}}^i=1$.
\end{proof}
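To make the jump-variable encoding concrete, the following sketch builds the variables $\tilde{b}$, $\Delta^-$, $\Delta^+$ for a hypothetical one-dimensional instance ($m=k=1$, grid in integer units so that $\delta_N=1$) and checks the jump and endpoint constraints from the theorem on this instance:

```python
# 1-D toy instance (m = 1, k = 1) of the jump-variable model: the binary
# profile b encodes 1_{[a_minus, a_plus]} on the grid, Dm marks the
# 0 -> 1 jump (Delta^-), Dp the 1 -> 0 jump (Delta^+).
# Grid in integer units, so delta_N = 1 and T_N = {0, ..., M}.
M, a_minus, a_plus = 10, 2, 6
T_N = range(M + 1)
b = {t: 1 if a_minus <= t <= a_plus else 0 for t in T_N}
get_b = lambda t: b.get(t, 0)            # b := 0 outside T_N
Dm = {t: max(get_b(t + 1) - get_b(t), 0) for t in range(-1, M + 1)}
Dp = {t: max(get_b(t) - get_b(t + 1), 0) for t in range(-1, M + 1)}

# Jump definition:  b_{t + delta_N} - b_t = Delta^-_t - Delta^+_t.
assert all(get_b(t + 1) - get_b(t) == Dm[t] - Dp[t] for t in T_N)
# At most one jump of each kind along the line through the grid.
assert sum(Dm.values()) + sum(Dp.values()) <= 2
# The jump positions recover the box endpoints:
#   a^- >= sum (l + delta_N) * Delta^-_l,
#   a^+ <= M - sum (M - l) * Delta^+_l   (equality here).
assert a_minus >= sum((l + 1) * Dm[l] for l in Dm)
assert a_plus <= M - sum((M - l) * Dp[l] for l in Dp)
```

On this instance the last two inequalities are tight, which is exactly the equality argument used in the proof above.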
Theorem \ref{Thm: MIP_multidim} yields
sufficient criteria for robustness of the DRO constraint. This is a
considerable advantage, as to our knowledge no efficient alternative
approach is readily available.
Although binary SDP optimization is algorithmically
tractable, solving \eqref{Prob:
Dual_Purity_Constraint_elementary_discretized} may become
computationally challenging for modern solvers when the cardinality
of $T_N$ is large.
This challenge may be addressed as follows: instead of bounding the
slope of $p$ through its Lipschitz constant $L$, more elaborate bounds
that strengthen Lemma \ref{Lemma:inner_approx} may reduce the number of
necessary sample points for a good approximation of \eqref{Prob:
primal_linear_A_i}. In Section \ref{Sec: DRO reformulation}, we
present one such refinement for one-dimensional domains $T$. Instead
of a binary SDP, we obtain a binary MIP as an approximation of \eqref{Prob:
primal_linear_A_i} that can typically be solved much faster.
As a next step, one could also generalize Cases 1 and 2 from Section \ref{Sec: Problem Setting}
by simultaneously picking two types of decision variables out of
$a_i,A_i,\varepsilon_i,T_i$. This, however, leads to bilinearities in
\eqref{Prob: primal_linear_a_i}, as these variables appear either in
products with the dual variables, as $\varepsilon_i$ in \eqref{Constr:
primal_linear_a_i_objective_constraint1} and $T_i$ through
$\mathbbm{1}_{T_i}^c$ in \eqref{Constr:
primal_linear_a_i_objective_constraint2}, or in products among
themselves, as $a_i,A_i$ in \eqref{Constr:
primal_linear_a_i_objective_constraint2}. Although bilinearities
with elementary functions on $A_i$ and $T_i$ might be algorithmically tractable,
as these elementary functions can be approximated by binary variables,
we stick to Cases 1 and 2 from Section \ref{Sec: Problem Setting} and
postpone the study of more general cases to future research.
\section{Refinement for DRO on a one-dimensional domain $T$ and envelope confidence sets}\label{Sec: DRO reformulation}
Let us first illustrate the value of our approach even for one-dimensional domains $T$. To this end, let us briefly recall the robust value-at-risk example from Section \ref{Sec: Problem Setting}, where we consider the following DRO problem:
\begin{subequations}
\begin{align}
\text{VaR}_{\alpha,\Omega}(X)\coloneqq \max -a^+ & \\
\text{s.t.}\ & \alpha \leq \min_{\P\in \Omega} \P((-\infty,a^+]).
\end{align}
\end{subequations}
Observe that in this particular example, the function $\mathbbm{1}_{(-\infty,a^+]}$ is monotonically increasing in $a^+$ and decreasing in $t$ and thus fits into the framework in \cite{Wiesemann2014a} and, given we indeed know $\EW{\mass{}}, \sigma$ and omit confidence set constraints, also fits \cite{Zymler2013a}. However, in financial applications it is crucial not to be overly conservative, which motivates the current section, where we refine the framework from Section \ref{Sec: DRO_elementary_functions} for one-dimensional random variables with support $T$ by applying a sharper approximation of $\Omega$.
Moreover, note that a one-dimensional domain $T$ significantly simplifies the framework in comparison to Section \ref{Sec: DRO_elementary_functions}, as the SDP constraints
\eqref{Constr: First_Moment_elementary} and \eqref{Constr: Second_Moment_elementary} simplify to
$$\eqref{Constr: First_Moment_elementary} \Leftrightarrow (\EW{\mass{}}(t)-\EW{})^2 \leq \varepsilon_{\EW{}}/\sigma \text{ and } \eqref{Constr: Second_Moment_elementary} \Leftrightarrow \EW{\mass{}}(t-\EW{})^2 \leq \varepsilon_{\sigma} \sigma^2,$$
where, due to rescaling of $\varepsilon_{\EW{}}$ and $\varepsilon_{\sigma}$, we can w.l.o.g. set $\sigma=1$. Observe that \eqref{Constr: First_Moment_elementary} is now represented by
\begin{equation*}
\langle t,\mass{}\rangle = \EW{\mass{}}(t)\in [\EW{-},\EW{+}]
\end{equation*}
with the predefined bounds $\EW{-}=\EW{}(1-\varepsilon_{\EW{}})$ and $\EW{+}=\EW{} (1+\varepsilon_{\EW{}})$. However, in the remainder of this section, we will consider general bounds $\EW{-},\EW{+}$ as our results also hold in this more general case. For \eqref{Constr: Second_Moment_elementary}, we obtain by linearity of $\EW{\mass{}}$, that
$$\langle t^2-2\EW{}t,\mass{}\rangle = \EW{\mass{}}(t^2-2\EW{}t) = \EW{\mass{}}(t-\EW{})^2 - \EW{}^2 \leq \varepsilon_\sigma \sigma^2 -\EW{}^2.$$
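The identity above only uses $\langle 1,\mass{}\rangle = 1$ together with linearity of $\EW{\mass{}}$; it can be checked on a hypothetical discrete probability measure:

```python
# Check of the second-moment identity used above,
#   <t^2 - 2*E*t, mu> = E_mu[(t - E)^2] - E^2,
# for a discrete probability measure mu (weights w on points ts) and a
# fixed reference value E (all numbers hypothetical).
E = 1.0
ts = [0.0, 0.5, 1.0, 2.0]
w = [0.1, 0.3, 0.4, 0.2]
assert abs(sum(w) - 1.0) < 1e-12          # mu is a probability measure

lhs = sum(wi * (t * t - 2 * E * t) for wi, t in zip(w, ts))
rhs = sum(wi * (t - E) ** 2 for wi, t in zip(w, ts)) - E ** 2
assert abs(lhs - rhs) < 1e-12
```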
In order to improve Program \eqref{Prob: DRO_with_undefined_ambiguity_set}, we further restrict the uncertain measure $\mass{}$ as follows: let $\rho$ denote the probability density corresponding to $\mass{}$; then we define an \emph{envelope constraint} as
\begin{equation}\label{Eq: envelope}
0\leq \uRTD{}{t} \leq \bound{}{t}.
\end{equation}
Here, the nonnegativity constraint is redundant as $\mass{}\in \mathcal{M}(T)_{\geq 0}$ and thus $\rho \geq 0$. The upper bound however can model important information. In particular, the bound can be used to exclude measures that concentrate all the probability mass around a single point and thus have a high density at this point, e.g. Dirac point measures. Moreover, if the uncertainty is parametrized, bounds for \eqref{Eq: envelope} may be fairly simple to obtain. This can be illustrated by the following example:
\begin{example}
Let $T=[t_0,t_{\max}]$ and let $\uRTD{s}{t}$ be the PSD of a normal distribution with expectation $s$. Suppose $s$ varies between $\EW{-}$ and $\EW{+}$ and $\uRTD{}{t}$ lies in the convex hull of $\{\uRTD{s}{t}: s \in [\EW{-},\EW{+}]\}$. Then, a valid definition of $\bound{}{t}$ would be
\[
\bound{}{t} \coloneqq\left\{\begin{array}{lll} \uRTD{\EW{-}}{t} , & t\in [t_0, \EW{-}] \\
\uRTD{\EW{} }{\EW{}} , & t\in [\EW{-}, \EW{+}]\\
\uRTD{\EW{+}}{t} , & t\in [\EW{+},t_{\max}].\end{array}\right.
\]
\end{example}
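A minimal numerical sketch of this envelope (assuming unit variance and hypothetical values for $\EW{-},\EW{+}$ and the domain) confirms that the piecewise definition of $\bound{}{t}$ dominates every admissible density and hence every convex combination of them:

```python
import math

# Envelope bound from the example: the mean s of a unit-variance normal
# density rho_s varies in [E_minus, E_plus]; the piecewise bound
# dominates every rho_s (all numbers hypothetical).
E_minus, E_plus = 0.8, 1.2
rho = lambda s, t: math.exp(-0.5 * (t - s) ** 2) / math.sqrt(2 * math.pi)
peak = rho(0.0, 0.0)                     # maximal density value 1/sqrt(2*pi)

def bound(t):
    if t <= E_minus:
        return rho(E_minus, t)           # closest admissible mean is E_minus
    if t >= E_plus:
        return rho(E_plus, t)            # closest admissible mean is E_plus
    return peak                          # between the means: the global peak

for s in (0.8, 0.9, 1.0, 1.1, 1.2):
    for i in range(201):
        t = -2.0 + 4.0 * i / 200         # sample t in [-2, 2]
        assert rho(s, t) <= bound(t) + 1e-12
```

Domination of each $\uRTD{s}{t}$ carries over to convex combinations, since an upper bound on every member of a family also bounds any mixture of them.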
Although constraints of type \eqref{Eq: envelope} may be useful to refine a DRO constraint like \eqref{Prob: Primal_Purity_Constraint_elementary}, we observe that simply adding the semiinfinite constraint
$$\uRTD{}{t}\leq \bound{}{t}\text{ for every } t \in T$$
adds another level of difficulty to \eqref{Prob: Primal_Purity_Constraint_elementary}. In order to avoid this, we only consider a finite sample $T_N\coloneqq \delta_N \mathbb{Z} \cap T$ of $T$ and approximate only finitely many of the constraints in \eqref{Eq: envelope}. Here, $\delta_N>0$ denotes the sample step width. Note that, for ease of presentation, this sample coincides with $T_N$ as known from Section \ref{Subsec: discretized_DRO_reformulation_elementary}. Now, we use this discretization to bound the mass under $\uRTD{}{t}$ by using confidence set constraints as follows:
\begin{remark}\label{Rem: Riemannsum}
Consider intervals $[\tau, \tau +\delta_N)$ for all $\tau \in T_N$ and define the corresponding confidence set constraints by
\begin{equation*}
\mass{}([\tau,\tau+\delta_N))=\int_{\tau}^{\tau + \delta_N} 1 d\mass{}(t) = \langle \mathbbm{1}_{[\tau,\tau+\delta_N)}(t), \mass{} \rangle \le \delta_N \cdot \max_{t\in [\tau,\tau+\delta_N]}\bound{}{t}.
\end{equation*}
For $\delta_N \rightarrow 0$, we illustrate that this converges to the constraint \eqref{Eq: envelope}:
As $\rho$ is the PSD that corresponds to $\mass{}$, we have $d\mass{}=\rho(t)dt$. Let $R$ denote the antiderivative of $\rho$; then
\[\frac{1}{\delta_N} \int_{\tau}^{\tau + \delta_N} 1 d\mass{}(t)= \frac{R(\tau +\delta_N) - R(\tau)}{\delta_N} \rightarrow \rho(\tau)\qquad\qquad (\delta_N\rightarrow 0)\]
and since $\lim_{\delta_N\rightarrow 0} \max_{t\in [\tau,\tau+\delta_N]}\bound{}{t}=\bound{}{\tau}$ the claim follows. Defining $\bound{+}{\tau}\coloneqq \max_{t\in [\tau,\tau+\delta_N]}\bound{}{t}$ leads to
\begin{equation}\label{Eq: envelope_discretized}
\langle \mathbbm{1}_{[\tau,\tau+\delta_N)}(t), \mass{} \rangle \leq \delta_N \cdot \bound{+}{\tau}.
\end{equation}
\end{remark}
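The convergence argument in the remark can be checked numerically for a simple hypothetical density with known antiderivative:

```python
# For the density rho(t) = 2t on [0, 1] (antiderivative R(t) = t^2), the
# scaled interval mass (R(tau + delta) - R(tau)) / delta tends to
# rho(tau) as delta -> 0; here the error is exactly delta.
rho = lambda t: 2.0 * t
R = lambda t: t * t
tau = 0.3
for delta in (0.1, 0.01, 0.001):
    ratio = (R(tau + delta) - R(tau)) / delta   # scaled mass of [tau, tau+delta)
    assert abs(ratio - rho(tau)) <= delta + 1e-9
```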
Combining the above moment and envelope constraints \eqref{Eq: envelope_discretized}, the DRO constraint \eqref{Prob: DRO_with_undefined_ambiguity_set}, the main target of investigation in this section, simplifies to:
\begin{subequations}
\label{Prob: Primal_Purity_Constraint_envelope}
\begin{align}
b \leq \min~& \langle a\mathbbm{1}_{[a^-,a^+]}(t),\mass{}\rangle && \label{Constr: Objective_Primal_Purity_Constraint}\\
\text{s.t.}~& \mass{} \in \mathcal{M}(T)_{\ge 0}\\
&\langle 1, \mass{} \rangle \geq 1 \label{Constr: Primal_Purity_Constraint_envelope1}\\
&\langle -1, \mass{} \rangle \geq -1 \\
& \langle -t, \mass{}\rangle \geq -\EW{+},\label{Constr: First_Moment1} \\
& \langle t, \mass{}\rangle \ge \EW{-}, \label{Constr: First_Moment2}\\
&\langle -t^2+2\EW{}t ,\mass{}\rangle \geq -\varianz{}^2\varepsilon_\sigma + \EW{}^2 \label{Constr: Second_Moment}\\
& \langle -\mathbbm{1}_{[\tau,\tau+\delta_N)}(t),\mass{} \rangle \geq - \delta_N \cdot \bound{+}{\tau} && \forall \tau \in T_N. \label{Constr: envelope_discretized}
\end{align}
\end{subequations}
We observe that Constraint \eqref{Constr: envelope_discretized} would remain semiinfinite if we incorporated it for every $\tau \in T$. In order to apply conic duality to \eqref{Prob: Primal_Purity_Constraint_envelope}, we therefore only took finitely many such $\tau$, which results in a larger ambiguity set $\Omega$. From a game-theoretic perspective, a larger $\Omega$ translates to a stronger adversarial player and consequently a more conservative solution to \eqref{Prob: DRO_with_undefined_ambiguity_set}.
Since Problem \eqref{Prob: Primal_Purity_Constraint_envelope} is a special case of \eqref{Prob: Dual_Purity_Constraint_elementary}, it can be dualized with the same methods as illustrated in Section \ref{Sec: DRO_elementary_functions}. Here, the dual variables originating from constraints \eqref{Constr: Primal_Purity_Constraint_envelope1} -- \eqref{Constr: Second_Moment} correspond to the dual variables $y_k$; e.g., \eqref{Constr: First_Moment1} corresponds to $y_3$. Additionally, we denote the dual variables that correspond to the envelope constraint \eqref{Constr: envelope_discretized} as $z \in \mathbb{R}_{\ge 0}^{T_N}$. As a dual program we obtain:
\begin{subequations}
\label{Prob: Dual_Purity_Constraint}
\begin{align}
\sup_{y \in \mathbb{R}^{5}_{\ge 0}, z \in \mathbb{R}_{\ge 0}^{T_N}}& \langle (1,-1, -\EW{+}, \EW{-},-\varepsilon_\sigma \varianz{}^2+\EW{}^2 ),y\rangle - \delta_N \sum_{\tau \in T_N} \bound{+}{\tau} \boundvariable{\tau} \\
\text{s.t.}\ & a\mathbbm{1}_{[a^-,a^+]}^c(t) - y_{1} + y_{2}+y_{3} t - y_{4} t + y_{5} (t^2-2\EW{}t) \notag\\
& \qquad\qquad\qquad\qquad\qquad + \sum_{\tau \in T_N} \mathbbm{1}^c_{[\tau,\tau+\delta_N)}(t) \boundvariable{\tau}\in \mathcal{C}(T)_{\ge 0}\label{Constr: dual_purity}
\end{align}
\end{subequations}
Hence, to ensure that the objective of \eqref{Prob: Dual_Purity_Constraint} remains larger than $b$, we need strong duality:
\begin{coroll}\label{Thm: strong_duality}
The duality gap of the problems \eqref{Prob: Primal_Purity_Constraint_envelope} and \eqref{Prob: Dual_Purity_Constraint} is zero.
\end{coroll}
\begin{proof}
We observe that, as in the strong duality result of Section \ref{Sec: DRO_elementary_functions}, $\mass{}$ given by a normal distribution $\mathcal{N}(\EW{},\sigma^2)$ is feasible for \eqref{Prob: Primal_Purity_Constraint_envelope}, i.e. \eqref{Prob: Primal_Purity_Constraint_envelope} is ``consistent'' as defined in \cite{Shapiro2000a}. Furthermore, $T$ is compact and the functions in the objective as well as in the constraints of \eqref{Prob: Primal_Purity_Constraint_envelope} are continuous. Hence, strong duality follows from Corollary 3.1 in \cite{Shapiro2000a}.
\end{proof}
Observe that we can neglect to explicitly demand continuity in \eqref{Constr: dual_purity} since the left-hand side consists only of continuous functions. Hence, the above program is a semiinfinite program, in particular a linear program with infinitely many constraints, as $T\subseteq \mathbb{R}$ is compact.
\subsection{Computation of optimal solutions by a discretized counterpart}\label{Subsec: discretized_DRO_reformulation}
For the remainder of this section, we assume $T=[0,M]$ to be an interval and recall
$$T_N \coloneqq \delta_N \mathbb{Z} \cap [0,M] = \{0, \delta_N,2\delta_N,\ldots,M\}.$$
In order to simplify Constraint \eqref{Constr: dual_purity}, one observes that, since every $\bar{t}\in T_N$ is contained in exactly one of the intervals $[\tau,\tau+\delta_N)$, namely the one with $\tau=\bar{t}$, we have:
\[\sum_{\tau \in T_N} \mathbbm{1}_{[\tau,\tau+\delta_N)}^c(\bar{t}) \boundvariable{\tau} = \boundvariable{\bar{t}}.\]
Thus, discretizing Constraint \eqref{Constr: dual_purity} over $T_N$ leads to the following relaxation:
$$a\mathbbm{1}_{[a^-,a^+]}^c(\bar{t}) - y_{1} + y_{2}+y_{3} \bar{t} - y_{4}\bar{t} + y_{5} \bar{t}^2 - y_{5}2\EW{}\bar{t} +\boundvariable{\bar{t}} \geq 0 \qquad \forall \bar{t}\in T_N.$$
For the remainder of this section, suppose we focus on Case 2 as illustrated in \eqref{Prob: primal_linear_A_i}, i.e. we aim to optimize a linear function over $a^-,a^+$ subject to the DRO Constraint \eqref{Prob: Dual_Purity_Constraint}:
\begin{subequations}\label{Prob: dist_robust_chromatography_discretized_conic}
\begin{align}
\max_{a^-,a^+,y,z}~ & c^\top (a^-,a^+)^\top \\
\text{s.t.}~ & b \le \langle (1,-1, -\EW{+}, \EW{-},-\varepsilon_\sigma\varianz{}^2+\EW{}^2 ),y\rangle - \delta_N \sum_{\bar{t}\in T_N} \bound{+}{\bar{t}} \boundvariable{\bar{t}} \label{Eq: dual_conic_obj}\\
& a\mathbbm{1}_{[a^-,a^+]}^c(\bar{t}) - y_{1} + y_{2}+y_{3} \bar{t} - y_{4}\bar{t} + y_{5} (\bar{t}^2 -2\EW{}\bar{t})+\boundvariable{\bar{t}} \geq 0 \qquad \forall \bar{t}\in T_N \label{Eq: dual_conic_constraint}\\
& a^-, a^+\in P, y \in \mathbb{R}^{5}_{\ge 0}, z \in \mathbb{R}_{\ge 0}^{T_N},
\end{align}
\end{subequations}
where $P\subseteq T\times T \subseteq \mathbb{R}^2$ denotes a polytope. However, since \eqref{Eq: dual_conic_constraint} is only a relaxation of \eqref{Constr: dual_purity}, a solution to \eqref{Prob: dist_robust_chromatography_discretized_conic} does not necessarily satisfy \eqref{Constr: dual_purity}. In order to identify potential infeasibilities, Figures \ref{fig: xnotinXwminlinks} -- \ref{fig: xinXwminrechts} below illustrate the form of Constraint \eqref{Eq: dual_conic_constraint} for possible choices of $a,a^-,a^+$ with $z=0$. Moreover, we denote the minimizer of the polynomial
$$p_y(t)\coloneqq - y_{1} + y_{2}+y_{3} t - y_{4} t + y_{5} (t^2-2\EW{}t)$$
by $p_{\text{min}}$ and thereby illustrate the different interactions between the elementary function $a\mathbbm{1}_{[a^-,a^+]}$ on the one hand and the polynomial $p_y$ on the other hand. Note that $a\mathbbm{1}_{[a^-,a^+]}$ is chosen as an example for such interactions, as there are potentially also rapid changes in \eqref{Constr: dual_purity} caused by different values of $z_{\bar{t}}$ and $z_{\bar{t}+\delta_N}$.
\begin{minipage}[t]{0.3333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xnotinXwminlinks}
\captionof{figure}{$a<0$ and $p_\text{min}<a^-$}
\label{fig: xnotinXwminlinks}
\end{minipage}
\begin{minipage}[t]{0.3333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xnotinXwminmitte}
\captionof{figure}{$a<0$ and $a^-\leq p_\text{min}\leq a^+$}
\label{fig: xnotinXwminmitte}
\end{minipage}
\begin{minipage}[t]{0.33333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xnotinXwminrechts}
\captionof{figure}{$a<0$ and $p_\text{min} > a^+$}
\label{fig: xnotinXwminrechts}
\end{minipage}
\begin{minipage}[t]{0.3333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xinXwminlinks}
\captionof{figure}{$a>0$ and $p_\text{min}<a^-$}
\label{fig: xinXwminlinks}
\end{minipage}
\begin{minipage}[t]{0.3333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xinXwminmitte}
\captionof{figure}{$a>0$ and $a^-\leq p_\text{min}\leq a^+$}
\label{fig: xinXwminmitte}
\end{minipage}
\begin{minipage}[t]{0.33333\textwidth}
\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{xinXwminrechts}
\captionof{figure}{$a>0$ and $p_\text{min}>a^+$}
\label{fig: xinXwminrechts}
\end{minipage}
In order to modify \eqref{Prob: dist_robust_chromatography_discretized_conic} in such a way that its solutions are also feasible for \eqref{Prob: Dual_Purity_Constraint}, we develop a modification of Constraint \eqref{Eq: dual_conic_constraint} similar to the one given by Lemma \ref{Lemma:inner_approx}. However, Lemma \ref{Lemma:inner_approx} exploits Lipschitz continuity, which is a general, but not very targeted argument. In contrast to this approach, we aim to identify the critical points of \eqref{Constr: dual_purity} with the following Lemma \ref{Lemma: crucial_points} and subsequently find an inner approximation that sharpens Constraint \eqref{Constr: dual_purity} enough to make these critical points feasible. To ease the presentation, we introduce
$$f_{y,z}^c(t) \coloneqq p_y(t) + \sum_{\tau\in T_N} \mathbbm{1}_{[\tau,\tau+\delta_N)}^c(t)z_\tau + a\mathbbm{1}_{[a^-,a^+]}^c(t)$$
and observe that Constraint \eqref{Constr: dual_purity} can be rewritten as $f_{y,z}^c(t)\geq 0$ for every $t\in T$.
\begin{lemma}\label{Lemma: crucial_points}
Let Constraint \eqref{Eq: dual_conic_constraint} be satisfied for all $\bar{t} \in T_N$ and suppose Constraint \eqref{Constr: dual_purity} is violated. Then it is violated either at $p_{\text{min}} = \frac{2\EW{}y_5+y_4-y_3}{2y_5}$ or at a point $t'\in [a^- -\delta,a^- + \delta]\cup [a^+ -\delta,a^+ +\delta]\cup \bigcup_{\bar{t}\in T_N} [\bar{t}-\delta,\bar{t}+\delta]$, where $\delta>0$ can be chosen arbitrarily small.
\end{lemma}
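The closed-form expression for $p_{\text{min}}$ follows from setting $p_y'(t) = y_3-y_4+y_5(2t-2\EW{})$ to zero, which requires $y_5>0$. A quick numerical check with hypothetical dual values $y$:

```python
# Check of the closed-form minimizer p_min = (2*E*y5 + y4 - y3)/(2*y5)
# of the quadratic p_y (hypothetical dual values, y5 > 0).
E = 1.0
y1, y2, y3, y4, y5 = 0.5, 0.2, 0.3, 0.7, 2.0
p_y = lambda t: -y1 + y2 + y3 * t - y4 * t + y5 * (t * t - 2 * E * t)
p_min = (2 * E * y5 + y4 - y3) / (2 * y5)

# p_y is minimal at p_min: nearby points are never smaller.
for eps in (1e-3, 1e-2, 0.1):
    assert p_y(p_min) <= p_y(p_min + eps)
    assert p_y(p_min) <= p_y(p_min - eps)
```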
For ease of presentation, we postpone the proof of Lemma \ref{Lemma: crucial_points} to the appendix. Besides of identifying the critical points of \eqref{Eq: dual_conic_constraint}, we observe that the constant $a$ influences Constraint \eqref{Eq: dual_conic_constraint} differently, depending whether $a>0$ (see Figures \ref{fig: xinXwminlinks} - \ref{fig: xinXwminrechts}) or $a<0$ (see Figures \ref{fig: xnotinXwminlinks} - \ref{fig: xnotinXwminrechts}). Hence, we apply different approximation schemes for $a>0$ and $a<0$ respectively in order to achieve the following sharpened version of \eqref{Eq: dual_conic_constraint}:
\begin{lemma}\label{Lemma: refinement_main}
Let $a^-,a^+ \in T_N$ and let $y,z$ satisfy Constraint \eqref{Eq: dual_conic_constraint}, i.e.,
$$f_{y,z}^c(\bar{t})= a\mathbbm{1}_{[a^-,a^+]}^c(\bar{t}) - y_{1} + y_{2}+y_{3} \bar{t} - y_{4}\bar{t} + y_{5} (\bar{t}^2 -2\EW{}\bar{t})+\boundvariable{\bar{t}} \geq 0 \qquad \forall \bar{t}\in T_N,$$
as well as the following variant of \eqref{Eq: dual_conic_constraint}:
\begin{equation}\label{Eq: additional_constraint_(24c)_strengthened}
f_{y,z}^c(\bar{t}+\delta_N) -z_{\bar{t}+\delta_N}+z_{\bar{t}} \geq 0\text{ for every } \bar{t}\in T_N\setminus \{M\}
\end{equation}
and
\begin{align}
& z_{a^+}+p_y(a^+) \geq 0, \label{Eq: additional_constraint_for_jumps}\\
& z_{a^--\delta_N}+p_y(a^-) \geq 0.\label{Eq: additional_constraint_for_jumps2}
\end{align}
Then, the variables $y,z$ satisfy a lifted version of Constraint \eqref{Constr: dual_purity}, namely
\begin{equation}\label{Eq: lifted_constraint}
f^c_{y,z}(t) + p_y(p_{\text{min}}-\delta_N)-p_y(p_{\text{min}}) \geq 0 \text{ for every } t\in T.
\end{equation}
\end{lemma}
\begin{proof}
Let w.l.o.g. $t\in [\bar{t},\bar{t}+\delta_N)$. Here, Lemma \ref{Lemma: crucial_points} implies that all the potential minimizers of $f^c_{y,z}$ are contained in
$$\{p_{\text{min}}\}\cup\bigcup_{\bar{t}\in T_N}[\bar{t}-\delta,\bar{t}+\delta]$$
since $a^-,a^+\in T_N$. Thus, we have that all potential minimizers are contained in
$$[\bar{t},\bar{t}+\delta] \cup [\bar{t}+\delta_N-\delta,\bar{t}+\delta_N]\cup\{p_{\text{min}}\}.$$
Hence, for $t\in [\bar{t},\bar{t}+\delta_N)$, we have
\begin{align*}
f^c_{y,z}(t) &\geq \min\left\{f^c_{y,z}(\bar{t}),C_1(\delta),C_2(\delta), f^c_{y,z}(\bar{t}+\delta_N), C_3\right\},
\end{align*}
with $C_1(\delta)\coloneqq \min_{\delta'\in (0,\delta]} f^c_{y,z}(\bar{t}+\delta')$, $C_2(\delta)\coloneqq\min_{\delta'\in (0,\delta]} f^c_{y,z}(\bar{t}+\delta_N-\delta')$ and $C_3\coloneqq f^c_{y,z}(p_{\text{min}})$.
We immediately observe that $f^c_{y,z}(\bar{t}),f^c_{y,z}(\bar{t}+\delta_N)\geq 0$ due to \eqref{Eq: dual_conic_constraint} and it suffices to prove that $C_1(\delta),C_2(\delta)$ and $C_3$ are larger than $-p_y(p_{\text{min}}-\delta_N)+p_y(p_{\text{min}})$ for sufficiently small $\delta>0$:
\begin{align*}
C_1(\delta) & = \min_{\delta'\in (0,\delta]} f^c_{y,z}(\bar{t}+\delta')\\
& = \min_{\delta'\in (0,\delta]} a\mathbbm{1}^c_{[a^-,a^+]}(\bar{t}+\delta')+ \sum_{\tau\in T_{N}}\mathbbm{1}^c_{[\tau,\tau+\delta_N)}(\bar{t}+\delta')z_\tau + p_y(\bar{t}+\delta')\\
& \overset{\eqref{Eq: Urysohn_approx}}{\geq} \min_{\delta'\in (0,\delta]} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta')+ \sum_{\tau\in T_{N}}\mathbbm{1}_{[\tau,\tau+\delta_N)}(\bar{t}+\delta')z_\tau + p_y(\bar{t}+\delta')\\
& = \min_{\delta'\in (0,\delta]} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta')+ z_{\bar{t}} + p_y(\bar{t}+\delta')\\
& \geq \min_{\delta'\in (0,\delta]} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta')+ z_{\bar{t}} + p_y(\bar{t})-L\delta'\\
& \geq -L\delta,
\end{align*}
where the last inequality holds by \eqref{Eq: additional_constraint_for_jumps} if $\bar{t}=a^+$ and by \eqref{Eq: dual_conic_constraint} otherwise. The same arguments hold for $C_2(\delta)$ and we obtain:
\begin{align*}
C_2(\delta)& = \min_{\delta'\in (0,\delta]} f^c_{y,z}(\bar{t}+\delta_N-\delta')\\
& \overset{\eqref{Eq: Urysohn_approx}}{\geq} \min_{\delta'\in (0,\delta]} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta_N-\delta')+ z_{\bar{t}} + p_y(\bar{t}+\delta_N-\delta')\\
& \geq \min_{\delta'\in (0,\delta]} a\mathbbm{1}_{[a^-,a^+]}(\bar{t}+\delta_N-\delta')+ z_{\bar{t}} + p_y(\bar{t}+\delta_N)-L\delta'\\
& \geq -L\delta,
\end{align*}
where the last inequality holds by \eqref{Eq: additional_constraint_for_jumps2} if $\bar{t}=a^--\delta_N$ and by \eqref{Eq: additional_constraint_(24c)_strengthened} otherwise. Let us now consider $f^c_{y,z}(p_{\text{min}})$ with $p_{\text{min}}\in [\bar{t},\bar{t}+\delta_N)$, then
\begin{align*}
C_3 & = f^c_{y,z}(p_{\text{min}})\\
& = a\mathbbm{1}^c_{[a^-,a^+]}(p_{\text{min}})+ \sum_{\tau\in T_{N}}\mathbbm{1}^c_{[\tau,\tau+\delta_N)}(p_{\text{min}})z_\tau + p_y(p_{\text{min}})\\
& \overset{\eqref{Eq: Urysohn_approx}}{\geq} a\mathbbm{1}_{[a^-,a^+]}(p_{\text{min}})+ \sum_{\tau\in T_{N}}\mathbbm{1}_{[\tau,\tau+\delta_N)}(p_{\text{min}})z_\tau + p_y(p_{\text{min}})\\
& = a\mathbbm{1}_{[a^-,a^+]}(p_{\text{min}})+ z_{\bar{t}} + p_y(p_{\text{min}})\\
& \geq a\mathbbm{1}_{[a^-,a^+]}(p_{\text{min}})+ z_{\bar{t}} + p_y(\bar{t})-(p_y(\bar{t})-p_y(p_{\text{min}}))\\
& = a\mathbbm{1}_{[a^-,a^+]}(\bar{t})+ z_{\bar{t}} + p_y(\bar{t})-(p_y(\bar{t})-p_y(p_{\text{min}}))\\
& \overset{\eqref{Eq: dual_conic_constraint}}{\geq} -(p_y(\bar{t})-p_y(p_{\text{min}}))\\
& \geq -(p_y(p_{\text{min}}-\delta_N)-p_y(p_{\text{min}})).
\end{align*}
Finally, we choose $\delta<(p_y(p_{\text{min}}-\delta_N)-p_y(p_{\text{min}}))/L$ and the claim follows.
\end{proof}
We note that in this proof, we used \eqref{Eq: dual_conic_constraint}, \eqref{Eq: additional_constraint_(24c)_strengthened}, \eqref{Eq: additional_constraint_for_jumps} and \eqref{Eq: additional_constraint_for_jumps2} to ensure the feasibility of all the points close to the sample points in $T_N$, which, due to the assumption that $a^-,a^+\in T_N$, includes $a^-$ and $a^+$ as well. In Section \ref{Sec: DRO_elementary_functions}, this property was established by exploiting the Lipschitz continuity of $p_y$ with a global Lipschitz constant $L$. In contrast to this global argument, Lemma \ref{Lemma: refinement_main} is only based on the local slope at $p_{\text{min}}$, which in general is a weaker assumption and thereby strengthens our approximation significantly. In particular, combining these statements implies the following sufficient condition for \eqref{Constr: dual_purity}.
\begin{lemma}\label{Lemma: discretized_inner_approx}
Let $a^-,a^+\in T_N$ and suppose $y\in \mathbb{R}^{5}, z\in \mathbb{R}^{T_N}$ satisfy
\begin{equation}\label{Eq: additional_constraint_(24c)_strengthened_final}
f_{y,z}^c(\bar{t}) - y_{5}\delta^2_N \geq 0\text{ for every } \bar{t}\in T_N
\end{equation}
and
\begin{subequations}
\begin{align}
& z_{a^+}+p_y(a^+) - y_{5}\delta^2_N\geq 0,\\
& z_{a^--\delta_N}+p_y(a^-) - y_{5}\delta^2_N\geq 0 \text{ if } a<0,\\
& f_{y,z}^c(\bar{t}+\delta_N) -z_{\bar{t}+\delta_N}+z_{\bar{t}} -y_{5}\delta^2_N\geq 0\text{ for every } \bar{t}\in T_N\setminus\{M\}. \label{Eq: additional_constraint_(24c)_strengthened_final_shifted}
\end{align}
\end{subequations}
Then, $y\in \mathbb{R}^{5}, z\in \mathbb{R}^{T_N}$ satisfy $f_{y,z}^c(t)\geq 0$ for every $t\in T$, i.e. \eqref{Constr: dual_purity}.
\end{lemma}
As the proof of Lemma \ref{Lemma: discretized_inner_approx} is rather elementary, it is postponed to the appendix. We now have all the ingredients to prove that the following MIP provides feasible solutions to \eqref{Constr: dual_purity} and thus enables us to approximately solve \eqref{Prob: dist_robust_chromatography_discretized_conic} with the original SIP constraint \eqref{Constr: dual_purity}.
\begin{theorem}\label{Thm: MIP_onedim}
Suppose $a^-,a^+\in P$ implies $a^-\leq a^+$. Then, a solution to the following MIP satisfies \eqref{Constr: dual_purity}.
\begin{subequations}\label{Prob: MIP_onedim}
\begin{align}
\max_{a^-,a^+,\tilde{b},\Delta^-,\Delta^+,y,z}~ & c^\top (a^-,a^+)^\top \\
\text{s.t.}~ & \langle (1,-1, -\EW{+}, \EW{-},-\varepsilon_\sigma\varianz{}^2+\EW{}^2 ),y\rangle \notag \\
& \quad - \delta_N \sum_{\bar{t}\in T_N} \bound{+}{\bar{t}} \boundvariable{\bar{t}} \geq b \label{Constr: discretized_dual_objective_geq0}\\
& a\tilde{b}_{\bar{t}} + z_{\bar{t}} + p_y(\bar{t})-y_{5}\delta_N^2 \geq 0 && \forall \bar{t}\in T_N, \label{Constr: discretized_purity_strengthened}\\
& a\tilde{b}_{\bar{t}+\delta_N} +z_{\bar{t}} +p_y(\bar{t}+\delta_N) -y_{5}\delta_N^2\geq 0 &&\forall \bar{t}\in T_N\setminus\{M\} \label{Constr: discretized_purity_strengthened_shifted}\\
& z_{a^+}+p_y(a^+)-y_5\delta_N^2 \geq 0 \label{Constr: MIP_onedim_a+_special}\\
& z_{a^--\delta_N}+p_y(a^-)-y_5\delta_N^2 \geq 0 \label{Constr: MIP_onedim_a-_special}\\
& \sum_{\bar{t}\in T_N} \lift{\bar{t}}{+}+\lift{\bar{t}}{-} \leq 2 \label{Constr: sum_Delta_leq_2}\\
& \tilde{b}_{\bar{t}+\delta_N}-\tilde{b}_{\bar{t}}=\lift{\bar{t}}{-}-\lift{\bar{t}}{+} && \forall \bar{t}\in T_N, \label{Constr: discretized_fract_times}\\
& a^+-a^- = \sum_{\bar{t}\in T_N} \tilde{b}_{\bar{t}}-1 \label{Constr: adiff_onedim}\\
& a^- = \sum_{\bar{t}\in T_N} (\bar{t}+\delta_N) \Delta_{\bar{t}}^- \label{Constr: tbar_geq_a^-}\\
& a^+ = \sum_{\bar{t}\in T_N} \bar{t} \Delta_{\bar{t}}^+ \label{Constr: tbar_leq_a^+}\\
& a^-,a^+ \in P,\label{Constr: a^-_a^+_in_P}\\
& \tilde{b},\Delta^+,\Delta^- \in \{0,1\}^{T_N}\\
& y \in \mathbb{R}^{5}_{\ge 0}, z \in \mathbb{R}_{\ge 0}^{T_N} \label{Constr: MIP_onedim_nonneg_yz}
\end{align}
\end{subequations}
where $\tilde{b}_{\bar{t}} \coloneqq 0$ for every $\bar{t}\notin T_N$.
\end{theorem}
\begin{proof}
Compared to \eqref{Prob: dist_robust_chromatography_discretized_conic}, we restricted the variables $a^-,a^+$ to be in $T_N\subseteq T$ and modeled $\mathbbm{1}_{[a^-,a^+]}(\bar{t})$ with the help of decision variables $\tilde{b}_{\bar{t}}$. Hence, we show that $\tilde{b}_{\bar{t}}=1\Leftrightarrow\mathbbm{1}_{[a^-,a^+]}(\bar{t})=1$ for every $\bar{t}\in T_N$ in order to prove the claim:
Let $\tilde{b}_{\bar{t}}=1$, then we define on the one hand
$$\kappa^{\max}\coloneqq \max\{\bar{t}\in T_N: \tilde{b}_{\bar{t}}=1\}.$$
This implies that $\tilde{b}_{\kappa^{\max}}=1$ and $\tilde{b}_{\kappa^{\max}+\delta_N}=0$ and thus with \eqref{Constr: discretized_fract_times} we obtain $\Delta^-_{\kappa^{\max}}=0, \Delta^+_{\kappa^{\max}}=1$. On the other hand let
$$\kappa^{\min}\coloneqq \min\{\bar{t}\in T_N: \tilde{b}_{\bar{t}}=1\}.$$
Similarly, we observe that $\tilde{b}_{\kappa^{\min}}=1,\tilde{b}_{\kappa^{\min}-\delta_N}=0$ and consequently \eqref{Constr: discretized_fract_times} implies $\Delta^-_{\kappa^{\min}-\delta_N}=1, \Delta^+_{\kappa^{\min}-\delta_N}=0$. Thus, we have identified two indices $\kappa^{\min}-\delta_N,\kappa^{\max}\in T_N$ with nonzero $\Delta^-_{\kappa^{\min}-\delta_N}$ and $\Delta^+_{\kappa^{\max}}$, respectively. Due to \eqref{Constr: sum_Delta_leq_2}, these are the only such indices and we obtain $\Delta^-_{\bar{t}}=0$ for every $\bar{t}\in T_N\setminus\{\kappa^{\min}-\delta_N\}$ and $\Delta^+_{\bar{t}}=0$ for every $\bar{t}\in T_N\setminus\{\kappa^{\max}\}$. Moreover, Constraints \eqref{Constr: tbar_geq_a^-} and \eqref{Constr: tbar_leq_a^+} imply $a^-=\kappa^{\min}$ and $a^+=\kappa^{\max}$. Lastly, the definitions of $\kappa^{\min},\kappa^{\max}$ imply
$$a^-=\kappa^{\min}\leq \bar{t} \leq \kappa^{\max}=a^+.$$
For the reverse implication, we first observe that if there exists a feasible solution to \eqref{Prob: MIP_onedim}, there exists a $\bar{t}\in T_N$ with $\tilde{b}_{\bar{t}}=1$: To this end, we recall that we assumed that \eqref{Constr: a^-_a^+_in_P} implies $a^-\leq a^+$. Applied to \eqref{Constr: adiff_onedim}, we obtain $\sum_{\bar{t}\in T_N}\tilde{b}_{\bar{t}}\geq 1$ and thus the existence of a nonzero $\tilde{b}_{\bar{t}}$.
Thus, we can follow the same arguments as in the previous implication and conclude that
$$\kappa^{\min}=a^-,\ \kappa^{\max}=a^+,\ \Delta^-_{\kappa^{\min}-\delta_N}=1,\ \Delta^+_{\kappa^{\max}}=1$$
and
$\Delta^-_{\bar{t}}=0$ for every $\bar{t}\in T_N\setminus\{\kappa^{\min}-\delta_N\}$ as well as $\Delta^+_{\bar{t}}=0$ for every $\bar{t}\in T_N\setminus\{\kappa^{\max}\}$. Thus, we obtain for the respective $\tilde{b}_{\bar{t}}$:
$$\tilde{b}_{\kappa^{\min}}=\ldots = \tilde{b}_{\kappa^{\max}}=1.$$
Finally, since $\kappa^{\min}=a^- \leq \bar{t} \leq a^+ = \kappa^{\max}$, we have that $\tilde{b}_{\bar{t}}=1$.
\medskip
We have now established the equivalence between \eqref{Constr: discretized_purity_strengthened}, \eqref{Constr: discretized_purity_strengthened_shifted} and
\eqref{Eq: additional_constraint_(24c)_strengthened_final}, \eqref{Eq: additional_constraint_(24c)_strengthened_final_shifted} respectively. Thus, the conditions for Lemma \ref{Lemma: discretized_inner_approx} are satisfied and since these constraints are an inner approximation of \eqref{Constr: dual_purity} the result follows.
\end{proof}
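The recovery of $a^-$ and $a^+$ from the binary variables in the proof above can be checked numerically. The following Python sketch is an illustration only; the grid width, the interval endpoints and all constants are assumptions, and the jump variables $\Delta^\pm$ are reconstructed directly from $\tilde{b}$ instead of being determined by a MIP solver:

```python
# Illustrative grid T_N = {0, d, ..., 6} with width d (a stand-in for delta_N)
# and an interval [a_minus, a_plus] whose endpoints lie on the grid.
d = 0.5
grid = [i * d for i in range(13)]
a_minus, a_plus = 1.5, 4.0

b = {t: 1 if a_minus <= t <= a_plus else 0 for t in grid}
b_ext = lambda t: b.get(t, 0)  # b_t := 0 outside T_N

# Reconstruct the jump indicators from the difference constraint
# b_{t+d} - b_t = Delta^-_t - Delta^+_t:
delta_minus = {t: max(b_ext(t + d) - b_ext(t), 0) for t in grid}  # up-jump
delta_plus = {t: max(b_ext(t) - b_ext(t + d), 0) for t in grid}   # down-jump

# At most two jumps in total, matching the cardinality constraint:
assert sum(delta_minus.values()) + sum(delta_plus.values()) <= 2

# kappa_min - d carries the up-jump, kappa_max the down-jump, so the linear
# expressions of the MIP recover the interval endpoints:
rec_a_minus = sum((t + d) * delta_minus[t] for t in grid)
rec_a_plus = sum(t * delta_plus[t] for t in grid)
assert abs(rec_a_minus - a_minus) < 1e-9 and abs(rec_a_plus - a_plus) < 1e-9
```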
We would like to highlight that Theorem \ref{Thm: MIP_onedim} computes feasible solutions, and thereby lower bounds, for Case 2 from Section \ref{Sec: Problem Setting} with an ambiguity set given by \eqref{Prob: Primal_Purity_Constraint_envelope}. Moreover, we observe that for $\delta_N\rightarrow 0$, this bound will converge to the actual value of Case 2.
\subsection{Convergence}
In the present section, we prove that $\varepsilon_N$-optimal solutions of the discretized problem \eqref{Prob: MIP_onedim} converge towards an optimal solution of the SIP:
\begin{subequations}\label{Prob: dist_robust_chromatography_conic}
\begin{align}
\max_{a^-,a^+,y,z}~ & c^\top (a^-,a^+)^\top \\
\text{s.t.}~ & b \leq \langle (1,-1, -\EW{+}, \EW{-},-\varepsilon_\sigma\varianz{}^2+\EW{}^2 ),y\rangle - \delta_N \sum_{\tau\in T_N} \bound{+}{\tau} \boundvariable{\tau} \label{Eq: dual_conic_obj2}\\
& a\mathbbm{1}_{[a^-,a^+]}^c(t) - y_{1} + y_{2} + y_{3}t - y_{4}t + y_{5} (t^2-2\EW{}t)\notag\\
& \qquad + \sum_{\tau\in T_N} \mathbbm{1}_{[\tau,\tau+\delta_N)}^c(t) z_\tau \geq 0 && \forall t\in T \label{Eq: dual_conic_constraint_SIP}\\
& (a^-, a^+)^\top\in P, \label{Constr: dist_rob_conic_a_in_polytope}\\
& y \in \mathbb{R}^{5}_{\ge 0},z \in \mathbb{R}_{\ge 0}^{T_N}\label{Constr: dist_robust_chromatic_conic_yz_nonneg}
\end{align}
\end{subequations}
Finding discretized counterparts of an SIP and proving their convergence is a rather standard approach in semiinfinite programming, perhaps best illustrated by Lemma 6.1 in \cite{Shapiro2009a}. However, one usually considers relaxations of the SIP obtained by sampling the SIP constraint, whereas \eqref{Prob: MIP_onedim} is an inner approximation of \eqref{Prob: dist_robust_chromatography_conic}. Thus, we instead adjust the arguments in the proof of Lemma 6.1 in \cite{Shapiro2009a} for our purpose.
To this end, it is crucial to ensure that for every optimal solution to \eqref{Prob: dist_robust_chromatography_conic}, there exists a sequence of solutions to the discretized program \eqref{Prob: MIP_onedim} defined by $T_N$, whose objective value converges to the optimal value of \eqref{Prob: dist_robust_chromatography_conic} if $\delta_N\rightarrow 0$. Hence, we consider the following lemmas, where we abbreviate our previous notation and denote
$$c((a^-,a^+,y,z)) \coloneqq c^\top ((a^-)^\top,(a^+)^\top)^\top.$$
\begin{lemma}\label{Lem: SIP_has_nearby_MIP_solution_a>0}
Let $a>0$ and $\delta_N$ be sufficiently small. Then, for every optimal solution $(a^-,a^+,y,z)$ to \eqref{Prob: dist_robust_chromatography_conic}, there exists a solution $((a^-)',(a^+)',y',z')_N$ to the discretized program \eqref{Prob: MIP_onedim} such that $c((a^-,a^+,y,z)_N')\geq c((a^-,a^+,y,z)) - 2\|c\|_\infty\delta_N$.
\end{lemma}
\begin{lemma}\label{Lem: SIP_has_nearby_MIP_solution_a<0}
Let $a<0$, $b>0$ and $\delta_N$ be sufficiently small. Then, for every optimal solution $(a^-,a^+,y,z)$ to \eqref{Prob: dist_robust_chromatography_conic}, there exists a solution $((a^-)',(a^+)',y',z')_N$ to the discretized program such that $c((a^-,a^+,y,z)_N')\geq c((a^-,a^+,y,z)) - 4\|c\|_\infty\delta_N$.
\end{lemma}
The proofs of Lemmas \ref{Lem: SIP_has_nearby_MIP_solution_a>0} and \ref{Lem: SIP_has_nearby_MIP_solution_a<0} are based on increasing $\mathbbm{1}_{[a^-,a^+]}$ by enlarging/shrinking the interval $[a^-,a^+]$ and then adjusting the remaining variables accordingly. As these manipulations are rather tedious and lengthy, we postpone them to the appendix in order to ease the presentation and focus on the subsequent convergence result.
\begin{theorem}\label{Thm: convergence}
Suppose every optimal solution to \eqref{Prob: dist_robust_chromatography_conic} satisfies $a^-<a^+$. If $\varepsilon_N \downarrow 0$ and $\delta_N \downarrow 0$ as $N \rightarrow \infty$, then any accumulation point of a sequence $\{(a^-,a^+,y,z)_N\}$ of $\varepsilon_N$-optimal solutions of the discretized problems \eqref{Prob: MIP_onedim} is an optimal solution of the problem \eqref{Prob: dist_robust_chromatography_conic}.
\end{theorem}
We note that for $a>0$ and non-Dirac measures, the assumption $a^-<a^+$ is a direct implication of \eqref{Constr: Objective_Primal_Purity_Constraint} if $b$ is strictly positive.
\begin{proof}
Let $\overline{(a^-,a^+,y,z)}$ be an accumulation point of the sequence $\{(a^-,a^+,y,z)_N\}$. By passing to a subsequence if necessary, we can assume that $(a^-,a^+,y,z)_N \rightarrow \overline{(a^-,a^+,y,z)}$. Note that Theorem \ref{Thm: MIP_onedim} implies that $(a^-,a^+,y,z)_N$ is feasible for \eqref{Prob: dist_robust_chromatography_conic} and thus satisfies \eqref{Eq: dual_conic_obj2} -- \eqref{Constr: dist_robust_chromatic_conic_yz_nonneg}. Since every considered function is continuous, this immediately implies that the accumulation point $\overline{(a^-,a^+,y,z)}$ also satisfies \eqref{Eq: dual_conic_obj2} -- \eqref{Constr: dist_robust_chromatic_conic_yz_nonneg}.
Let us now consider an arbitrary optimal solution $(a^-,a^+,y,z)$ to \eqref{Prob: dist_robust_chromatography_conic}.
Lemmas \ref{Lem: SIP_has_nearby_MIP_solution_a>0} and \ref{Lem: SIP_has_nearby_MIP_solution_a<0} show that, for sufficiently small $\delta_N$, there exists a solution $((a^-)',(a^+)',y',z')_N$ to the discretized program such that $c((a^-,a^+,y,z)_N')\geq c((a^-,a^+,y,z)) - 4\|c\|_\infty\delta_N$. Moreover, since $(a^-,a^+,y,z)_N$ is $\varepsilon_N$-optimal, we have $c((a^-,a^+,y,z)_N)+\varepsilon_N \geq c((a^-,a^+,y,z)_N')$ and combining these statements leads to:
\begin{align*}
c((a^-,a^+,y,z)) - 4\|c\|_\infty\delta_N & \leq c((a^-,a^+,y,z)_N')\\
& \leq c((a^-,a^+,y,z)_N)+\varepsilon_N\\
& \leq c((a^-,a^+,y,z))+\varepsilon_N
\end{align*}
Letting $N\rightarrow \infty$, and consequently $\delta_N,\varepsilon_N\rightarrow 0$, we conclude
$$c((a^-,a^+,y,z))= c(\overline{(a^-,a^+,y,z)}).$$
Since $(a^-,a^+,y,z)$ was an optimal solution to \eqref{Prob: dist_robust_chromatography_conic}, we have that $\overline{(a^-,a^+,y,z)}$ is optimal for \eqref{Prob: dist_robust_chromatography_conic}.
\end{proof}
Theorem \ref{Thm: convergence} indicates that the inner approximation given by Theorem \ref{Thm: MIP_onedim} may not be too conservative. In the remainder of this paper, we provide numerical evidence from particle separation processes that further supports this intuition.
\section{Introduction}
\label{Sec:introduction}
Determining optimal solutions or optimizing processes has been studied
in applied mathematics for decades. Nowadays, deep structural
insights have led to many beautiful, practically efficient
methodologies and implementations.
Going a step further, real-world applications in particular are often
strongly affected by uncertainty. Since processes can typically
neither be fully controlled nor can parameters be measured exactly,
optimization under uncertainty is a research area that currently
receives increased attention. In this work, we consider quite general
optimization models under uncertainty that allow algorithmically
tractable reformulations.
In more detail, let $b\in \mathbb{R}$ be a scalar, $\Omega$ a set of probability measures, $x\in
\mathbb{R}^k$ decision variables and $t\in T$ the variable of a potential
adversary. For the sake of simplicity, assume $t$ is one-dimensional. Let the latter be defined on a compact domain $T$ and distributed according to $\P$. Here, $v:\mathbb{R}^k\times T\rightarrow \mathbb{R}$ denotes a function that connects the decision variables with the random variable $t$, e.g. $v(x,t)=x^\top t$ if we want to depict uncertain coefficients in a linear program. Then, a \emph{distributionally robust constraint} or DRO constraint is defined by
\begin{equation}\label{Eq: DRO_constraint}
b\leq \min_{\P\in \Omega} \mathbbm{E}_{\P}\left(v(x,t)\right).
\end{equation}
Constraints of this form have been intensively studied for decades. In
particular, setting $\Omega=\{\P\}$, such \emph{stochastic
constraints} are a central object of investigation in stochastic
optimization:
\begin{equation*}
b\leq \mathbbm{E}_{\P}\left(v(x,t)\right).
\end{equation*}
On the other hand, setting $\Omega=\{\delta_t: t\in T\}$, where $\delta_t$ denotes the Dirac point measure at $t\in T$, \eqref{Eq: DRO_constraint} provides a \emph{robust constraint}
\begin{equation*}
b\leq \min_{t\in T} v(x,t).
\end{equation*}
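On a finite grid standing in for the compact set $T$, this reduction can be verified directly: the expectation under a Dirac measure $\delta_t$ is simply $v(x,t)$, so minimizing over $\{\delta_t : t\in T\}$ yields $\min_{t\in T} v(x,t)$. The function $v$, the grid and the point $x$ below are illustrative assumptions:

```python
# v models uncertain coefficients of a quadratic in t (illustrative choice).
v = lambda x, t: x[0] * t + x[1] * t**2
T = [i / 100 for i in range(101)]  # grid discretization of T = [0, 1]
x = (1.0, -2.0)

# Expectation under the Dirac measure delta_t is just the point evaluation:
expectation_under_dirac = {t: v(x, t) for t in T}
robust_lhs = min(expectation_under_dirac.values())

# The DRO constraint over Dirac measures coincides with the robust constraint:
assert robust_lhs == min(v(x, t) for t in T)
```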
This special case of \eqref{Eq: DRO_constraint} constitutes a
central object of interest in robust optimization, and its
understanding is crucial in order to make progress in the field. In
particular, challenges consist in deriving algorithmically tractable
solution approaches for solving the robust counterparts, with a focus on
reformulations, decomposition and approximation algorithms. Next,
we briefly review some relevant literature on optimization under uncertainty, and distributional robustness in particular.
\bigskip
When the uncertainty is distributed by a given distribution or the
constraints need to be satisfied in a probabilistic sense, stochastic
optimization is useful (see e.g. \cite{prekopa1998SO},
\cite{birge2006introduction} and \cite{Shapiro2003}). However, often
the underlying distributions are unknown.
Then, protecting problems against predefined
uncertainties is a natural approach which lies at the core of robust optimization. A priori,
uncertainty sets are defined. A robust problem then optimizes the
worst-case cost and guarantees feasibility for all realizations of
uncertainty within the uncertainty sets (see \cite{soyster1973convex},
\cite{ben1998robust}, \cite{ben1999robust} and \cite{ben2000robust}).
In recent years, models interpolating between these extreme
versions have been studied intensively, among them probust functions, where the uncertain variable has a non-stochastic and a stochastic part \cite{adelhuette2021joint}.
If the distribution itself is uncertain, distributionally robust
optimization \eqref{Eq: DRO_constraint} is a natural modelling
approach.
For recent surveys of this area, we refer to the very detailed
overviews \cite{Rahimian2019a} or \cite{Fengmin2021a} and the
references therein.
To summarize,
there are two main approaches for deciding about the uncertain
distributions against which protection is sought. They are assumed to
reside in a so-called ambiguity set. In discrepancy-based ambiguity
sets, it is assumed that a `typical', nominal distribution is
given. The ambiguity set then contains all distributions within an appropriately defined
distance from the nominal distribution. Natural distances are given
by Wasserstein balls \cite{mohajerin2018data}.
In contrast, another modelling ansatz consists in defining
moment-based ambiguity sets where
the moments of the considered distributions satisfy given
bounds. As argued later, we will follow this modeling approach here. \cite{ghaoui2003worst} studies mean-variance and
Value-at-Risk risk measures, where the ambiguity set has a bounded mean and covariance matrix, and
develops algorithmically tractable reformulations of the robust counterpart. \cite{Popescu2007a} also considers moment information,
where the problem is reduced to optimizing a univariate mean-variance robust objective.
In \cite{Xu2017a}, Slater conditions are used to show correctness of a
duality-based reformulation of the robust counterpart. Additionally, two
discretization schemes are presented that provide convergent approximate solutions to the problem. Exact reformulations of DRO problems that contain \eqref{Eq: DRO_constraint} as a constraint are only known for specific ambiguity sets. \cite{Delage2010a} presents such reformulations for
convex problems and ambiguity sets with moment bounds.
Incorporating additional information into moment-based ambiguity sets is a challenging task. However, if the probability distribution is known to be unimodal, the authors of \cite{Parys2015a} determine an SDP reformulation for \eqref{Eq: DRO_constraint}. The present work differs from \cite{Parys2015a} mainly in addressing the interplay of an outer-level DRO with \eqref{Eq: DRO_constraint} as a constraint.
In contrast to adding unimodality information, the article \cite{Wiesemann2014a}, under some convexity assumptions, presents a tractable duality-based reformulation of \eqref{Eq: DRO_constraint} that incorporates information on the confidence sets. Under the given assumptions, their approach can be applied to a DRO with \eqref{Eq: DRO_constraint} as a constraint.
In this paper, we use ambiguity sets similar to \cite{Delage2010a} and
assume the mean and covariance matrix to be in a given range, in addition to confidence set information as illustrated in \cite{Wiesemann2014a}. However, we are able not only to consider convex models but also to address nonconvexities, as we consider elementary functions -- the building blocks of any continuous function.
This work is structured as follows.
In Section \ref{Sec: Problem Setting}, the distributionally robust
problem formulation including elementary functions is introduced. Motivating
problem classes are presented that can be modeled by the DRO formulation
studied here. Section \ref{Sec: DRO_elementary_functions}
introduces a novel
semiinfinite inner approximation of the corresponding robust
counterpart together with a suitable discretization approach that
leads to a finite-dimensional mixed-integer positive semidefinite optimization
model. It is shown that its feasible solutions are also
feasible for the original robust DRO model. Subsequently,
Section \ref{Sec: DRO reformulation} makes the approach precise for
one-dimensional ambiguity sets and introduces an appropriate
mixed-integer linear optimization problem for the discretized robust
counterpart. It is proven that, with the discretization width tending to
zero, an optimal solution of the inner approximation converges to an
optimum of the original robust counterpart. Computational results are
given in Section \ref{Sec:comp-results}. As motivating application, a
fundamental and difficult optimization task in material design is
studied. It is argued that particle separation under uncertainty
falls
into the class of problems studied here, whereas known approaches fall
short either in modelling capabilities or in quality of the obtained
solutions. We will make this clearer in the problem setting section.
Using realistic settings, it
turns out that the robust counterpart can be solved practically
efficiently via the mixed-integer linear model from Section \ref{Sec:
DRO reformulation}.
\section{Problem Setting and Notation}\label{Sec: Problem Setting}
\subsection{DRO Constraints Containing Elementary Functions}
We are concerned with optimization problems with DRO
constraints of the form \eqref{Eq:
DRO_constraint}, where $v(x,t)$ may consist of \emph{elementary functions}, i.e.,
$$v(x,t)=\sum_{i=1}^k a_i\mathbbm{1}_{A_i}(t), \text{ where } \mathbbm{1}_{A_i}(t)\coloneqq \begin{cases}
1 & \text{ if } t\in A_i\\
0 & \text{ otherwise.}
\end{cases}$$
The decisions $x$ may either influence the height of an elementary
function by setting $x_i=a_i$ or determine the underlying sets
$A_i$. In the remainder of this paper, we will investigate both
situations separately. In principle, the approach could be extended to
consider both cases simultaneously; this, however, leads to bilinear
terms and is not considered further here.
Considering functions $v$ as above in \eqref{Eq: DRO_constraint} leads to
$$\mathbbm{E}_{\P}(v(x,t)) = \mathbbm{E}_{\P}\left(\sum_{i=1}^k a_i\mathbbm{1}_{A_i}(t)\right) = \sum_{i=1}^k a_i \mathbbm{P}(A_i)$$
and consequently the following formulation of \eqref{Eq: DRO_constraint}:
\begin{equation}\label{Eq: DRO_constraint_Prob}
b \leq \min_{\mathbbm{P}\in \Omega} \sum_{i=1}^k a_i \mathbbm{P}(A_i).
\end{equation}
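For discrete measures, the identity $\mathbbm{E}_{\P}(v(x,t))=\sum_i a_i\P(A_i)$ and the resulting constraint \eqref{Eq: DRO_constraint_Prob} can be checked directly. In the following sketch, the coefficients, the sets $A_i$ and the two candidate distributions standing in for $\Omega$ are all illustrative assumptions:

```python
a = [2.0, -1.0]                # coefficients a_i (illustrative)
A = [(0.0, 0.5), (0.25, 1.0)]  # sets A_i = [l, u] inside T = [0, 1]

def indicator(interval, t):
    l, u = interval
    return 1.0 if l <= t <= u else 0.0

def expectation(weights, support):
    """E_P[v] for a discrete measure P given by (weights, support)."""
    return sum(w * sum(ai * indicator(Ai, t) for ai, Ai in zip(a, A))
               for w, t in zip(weights, support))

def prob(weights, support, interval):
    """P(A_i) for the same discrete measure."""
    return sum(w for w, t in zip(weights, support) if indicator(interval, t))

# Two candidate discrete distributions playing the role of Omega:
Omega = [([0.5, 0.5], [0.1, 0.9]), ([0.25, 0.75], [0.3, 0.6])]

for w, s in Omega:
    lhs = expectation(w, s)
    rhs = sum(ai * prob(w, s, Ai) for ai, Ai in zip(a, A))
    assert abs(lhs - rhs) < 1e-12  # E_P[v] = sum_i a_i P(A_i)

dro_lhs = min(expectation(w, s) for w, s in Omega)  # min over the ambiguity set
```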
As mentioned, we address the following two cases for the DRO constraint
within an optimization model:
\textbf{Case 1:} The sets $A_i$ are given parameters, and optimization is
performed over the coefficients $x_i=a_i$:
\begin{subequations}\label{Prob: Case1}
\begin{align}
\max_{a\in P}\ & c^\top a \\
\text{s.t.}\ & b \leq \min_{\mathbbm{P}\in \Omega} \sum_{i=1}^k a_i \mathbbm{P}(A_i),\label{Constr: Case1}
\end{align}
\end{subequations}
where, for ease of exposition, $P\subseteq \mathbb{R}^k$ denotes a set of additional convex constraints.
We note that this problem class includes a variety of problems. We briefly illustrate this by an academic example on the mean-variance model from portfolio optimization, see Example 3 in \cite{Sengupta1985a}:
To this end, consider an investor who aims to minimize the risk of the portfolio. Suppose the investor only has $k$ risky assets $O_i$ at their disposal, each providing a revenue $r_i$ in case of an event $A_i$ and $0$ otherwise, i.e. $O_i=r_i \mathbbm{1}_{A_i}$. Let these assets be independent and identically distributed with probability measure $\P\in \Omega$, where $\Omega$ denotes the pre-defined ambiguity set as described in Section \ref{Sec: Problem Setting}. Then, the mean-variance model reads:
$$\min_x x^\top \left( \varepsilon_\Sigma\Sigma \right)x: \min_{\P\in \Omega} E_{\P}\left(\sum_{i=1}^k x_iO_i\right) \geq w, \sum_{i=1}^k x_i=1, x\geq 0,$$
which, as the assets $O_i$ are i.i.d., is equivalent to
$$\min_x \sum_{i=1}^k \varepsilon_\Sigma\sigma_i x_i^2: \min_{\P\in \Omega} \sum_{i=1}^k x_ir_i\P(A_i) \geq w, \sum_{i=1}^k x_i=1, x\geq 0.$$
As $\varepsilon_{\Sigma},\sigma_i,x_i\geq 0$, we can determine the asset allocation by replacing $x_i^2$ with $x_i$ in the objective and thereby obtain a linear objective. Moreover, by substituting $y_i=r_ix_i$, we obtain
$$-\max_y -\sum_{i=1}^k \frac{\varepsilon_\Sigma\sigma_i}{r_i} y_i: \min_{\P\in \Omega} \sum_{i=1}^k y_i\P(A_i) \geq w, y\in P,$$
where $P=\left\{y\in \mathbb{R}^k: \sum_{i=1}^k \frac{y_i}{r_i}=1, y\geq 0\right\}.$
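The substitution $y_i=r_ix_i$ above can be sanity-checked numerically: the budget constraint $\sum_i x_i=1$ becomes $\sum_i y_i/r_i=1$, and the DRO constraint keeps its value. All revenues, weights and probabilities below are made up for illustration:

```python
r = [1.2, 0.8, 2.0]  # revenues r_i (illustrative)
x = [0.5, 0.3, 0.2]  # a feasible allocation, sums to 1
p = [0.9, 0.7, 0.4]  # worst-case probabilities P(A_i) (illustrative)

y = [ri * xi for ri, xi in zip(r, x)]  # the substitution y_i = r_i * x_i

# Budget constraint transforms into sum_i y_i / r_i = 1:
assert abs(sum(yi / ri for yi, ri in zip(y, r)) - 1.0) < 1e-9

# The DRO constraint's left-hand side is unchanged under the substitution:
lhs_x = sum(xi * ri * pi for xi, ri, pi in zip(x, r, p))
lhs_y = sum(yi * pi for yi, pi in zip(y, p))
assert abs(lhs_x - lhs_y) < 1e-9
```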
However, instead of optimizing over the decision variables, we might also optimize over the sets $A_i$:
\textbf{Case 2:} The coefficients $a_i$ are given
parameters and it is assumed that the sets $A_i=[a_i^-,a_i^+]\subseteq
\mathbb{R}^m$ determine hypercubes. Optimization is performed over the
boundaries of these hypercubes, i.e., we set
$x_i=((a^-_i)^\top,(a^+_i)^\top)$. For well-posedness of $\mathbbm{P}(A_i)$, we assume w.l.o.g. that $A_i\subseteq T$ and consider:
\begin{subequations}\label{Prob: Case2}
\begin{align}
\max_{((a^-)^\top,(a^+)^\top)\in P}\ & \sum_{i=1}^k\sum_{j=1}^m c^-_{ij}a^-_{ij}+c^+_{ij}a^+_{ij} \\
\text{s.t.}\ & b \leq \min_{\mathbbm{P}\in \Omega} \sum_{i=1}^k a_i \mathbbm{P}([a_i^-,a_i^+]),\label{Constr: Case2}
\end{align}
\end{subequations}
where again $P\subseteq \mathbb{R}^{2mk}$ is a set of (convex) constraints. It will turn out that Case 2 is more challenging than Case 1, as the function
$$v(a^-,a^+,t)\coloneqq \sum_{i=1}^k a_i\mathbbm{1}_{[a_i^-,a_i^+]}(t)$$
is non-convex not only in $t$ but also in $((a_i^-)^\top,(a_i^+)^\top)$. Despite this mathematical challenge, this case already covers interesting applications in chemical separation processes, as illustrated in Section \ref{Sec:comp-results}. Moreover, we demonstrate the generality of Case 2 in an exemplary manner by the following application to risk theory:
One
of the most prominent risk measures in risk theory is the so-called
\emph{value-at-risk}. It is often applied as a tool to aid both
financial controlling and reporting \cite{Dowd2002a}. We refer to \cite{ghaoui2003worst} for further details on this topic. At a confidence level of $\alpha$, the value-at-risk
$\text{VaR}_{\alpha}(X)$ is defined as follows. Given
a random variable $X$ that measures the loss (e.g., of a market participant), suppose
that this loss is distributed according to $F_X$. Then the value-at-risk at confidence
level $\alpha$ is
$$\text{VaR}_{\alpha}(X) = -\inf\{x\in \mathbb{R}: F_X(x) > \alpha\}.$$
Hence, by assuming that $X$ may be randomly distributed by one of the uncertain distributions $\P\in \Omega$, we define a \emph{robust value-at-risk}:
\begin{subequations}\label{Prob: VaR}
\begin{align}
\text{VaR}_{\alpha,\Omega}(X)\coloneqq \max_{a^+}\ & -a^+ \\
\text{s.t.}\ & \alpha \leq \min_{\P\in \Omega} \P((-\infty,a^+]).
\end{align}
\end{subequations}
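A discretized version of \eqref{Prob: VaR} can be sketched as follows: over a finite grid of thresholds $a^+$ and a finite family of candidate loss distributions standing in for $\Omega$, we select the largest $-a^+$ such that every candidate places at least mass $\alpha$ on $(-\infty,a^+]$. The confidence level, grid and distributions below are illustrative assumptions:

```python
alpha = 0.9
# Discrete candidate loss distributions: lists of (probability, loss) pairs.
Omega = [
    [(0.5, -1.0), (0.4, 0.5), (0.1, 2.0)],
    [(0.7, -0.5), (0.2, 1.0), (0.1, 1.5)],
]

def cdf(dist, a_plus):
    """P((-inf, a_plus]) for a discrete distribution."""
    return sum(p for p, loss in dist if loss <= a_plus)

candidates = [i * 0.25 for i in range(-8, 9)]  # grid of thresholds a_plus
# Feasible thresholds: every candidate distribution reaches level alpha
# (a small tolerance guards against floating-point accumulation):
feasible = [a for a in candidates
            if min(cdf(dist, a) for dist in Omega) >= alpha - 1e-9]
robust_var = -min(feasible)  # max of -a_plus over feasible thresholds
```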
With our results in Sections \ref{Sec: DRO_elementary_functions} and
\ref{Sec: DRO reformulation}, we provide a mixed-integer linear
programming (MIP) approach to compute lower bounds on
$\text{VaR}_{\alpha,\Omega}$, thereby complementing the upper bounds
given in Section 4 of \cite{ghaoui2003worst}. Moreover, for the
one-dimensional case illustrated above, we can even prove in Section \ref{Sec: DRO reformulation} that these bounds converge to the true solution of \eqref{Prob: VaR}.
In the remainder of this section, we introduce some basic notation and
fundamental concepts.
For further details we refer to the seminal book
\cite{Barvinok2002a} and to \cite{Shapiro2000a}.
The major challenge in addressing Problems \eqref{Prob: Case1} and
\eqref{Prob: Case2} above lies in the DRO constraints \eqref{Constr:
Case1} and \eqref{Constr: Case2}. If these were linear constraints
of the form $b\leq \min_{p\in \Omega} \langle a,p \rangle$ with a set
of convex constraints $\Omega$, one could apply standard
reformulation arguments from robust optimization that consist in replacing the inner
adversarial optimization problem by the feasible region of its
dual and solve the resulting model as a
standard finite-dimensional convex problem. An inner product that
allows a similar reformulation of the DRO constraints is given as follows.
Let us denote by $\P$ a probability measure on the compact domain $T$ that is defined by a probability density $\uRTD{}{t}$, i.e. $d\P = \uRTD{}{t}dt.$
On the one hand, according to the Riesz--Markov--Kakutani representation theorem, the above measure $\P$ is unique, i.e. it is the only measure that satisfies
$$I(f)=\int f d\P$$
for the linear functional $I:\mathcal{C}(T)\rightarrow \mathbb{R}$,
\[I(f)\coloneqq\int_T f(t)\uRTD{}{t}dt.\]
As illustrated in Section III.3.2 in \cite{Barvinok2002a}, the corresponding inner product
$$\langle f, \P\rangle \coloneqq \int_T f d\P$$
constitutes a non-degenerate inner product or a \emph{duality}. In particular, this duality is not only defined on probability measures but more generally on \emph{signed Radon measures}, which we denote by $\mathcal{M}(T)$. More importantly, as our results do not require a probability measure, we denote the measure over which we minimize by $\mass{}$ instead of $\P$ to indicate that we are generally referring to a signed Radon measure.
Finally, with the help of the above product $\langle \cdot , \cdot \rangle: \mathcal{C}(T)\times \mathcal{M}(T)\rightarrow \mathbb{R}$, we can consider the inner product of $\mass{}=\P$ and the function $\sum_{i=1}^k a_i\mathbbm{1}_{A_i}$ and formulate \eqref{Eq: DRO_constraint_Prob} as follows:
\begin{subequations}\label{Prob: DRO_with_undefined_ambiguity_set}
\begin{align}
b \le \min~& \langle \sum_{i=1}^k a_i\mathbbm{1}_{A_i}(t),\mass{}\rangle & \\
\text{s.t.}~& \mass{} \in \mathcal{M}(T)_{\ge 0},\\
&\langle 1, \mass{} \rangle \ge 1, \label{Constr: DRO_with_undefined_ambiguity_set_1}\\
&\langle -1, \mass{} \rangle \ge -1, \label{Constr: DRO_with_undefined_ambiguity_set_2}\\
& \mass{} \in \Omega,
\end{align}
\end{subequations}
where $\mass{}\in \mathcal{M}(T)_{\geq 0}$ indicates that $\mass{}$ is
contained in the cone of nonnegative Radon measures. In addition, the
remaining constraints require $\mass{}$ to be both a probability
measure and an element of the ambiguity set $\Omega$.
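To make the pairing concrete, the following toy sketch (purely illustrative; the grid size, the sets $A_i$, and the coefficients $a_i$ are hypothetical and not from the paper) discretizes $T=[0,1]$ and evaluates $\langle \sum_i a_i\mathbbm{1}_{A_i},\mass{}\rangle$ together with the normalization $\langle 1,\mass{}\rangle$ for a uniform probability measure:

```python
# Toy discretization of T = [0, 1] into n cells; a measure is represented by
# nonnegative cell weights w, and <f, mu> becomes a weighted sum of cell values.
n = 10
midpoints = [(i + 0.5) / n for i in range(n)]

# Hypothetical piecewise-constant integrand sum_i a_i * 1_{A_i}(t) with
# a_1 = 2 on A_1 = [0, 0.5) and a_2 = 5 on A_2 = [0.5, 1).
def f(t):
    return 2.0 if t < 0.5 else 5.0

w = [1.0 / n] * n                                       # uniform probability measure
inner = sum(f(t) * wi for t, wi in zip(midpoints, w))   # <f, mu>
mass = sum(w)                                           # <1, mu>
```

In this discretized picture, the normalization constraints simply pin the total weight to one, while the remaining constraints carve out the ambiguity set.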
We note in passing that this modeling approach will allow us to
reformulate \emph{nonconvex} DRO constraints that depend on elementary
functions. Algorithmic tractability will be achieved via a
discretization approach that leads to a mixed-integer semidefinite
program (SDP).
\subsection{Strengthening DRO Models by Moment Control and
Confidence Sets}
One of the major challenges in distributional robustness consists in
choosing the ambiguity set $\Omega$ such that \eqref{Prob:
DRO_with_undefined_ambiguity_set} is algorithmically tractable and
large enough to protect the solutions
against all realistic uncertainties, while avoiding the inclusion of unrealistic
ones.
We note that additional constraints may slow down the solution of
\eqref{Prob: DRO_with_undefined_ambiguity_set}, but can improve its
objective value. Thus, identifying good
constraints is a crucial task in its own right. In the following paragraphs,
we elaborate on three classes of constraints that can be added to
our DRO model while maintaining algorithmic tractability.
First, a typical approach assumes that the \emph{first moment}, i.e. the expectation, of $\mass{}$ is either known, see e.g. \cite{Parys2015a}, or at least contained in an ellipsoid \cite{Delage2010a}. In this article, we follow the modeling in \cite{Delage2010a}, where it is assumed that estimates of the true expectation $\EW{}$ and covariance matrix $\Sigma$ are known. Moreover, we assume that the ellipsoid is shaped by these two parameters and a third parameter $\varepsilon_{\EW{}}>0$, chosen such that the ellipsoid given by $$\varepsilon_{\EW{}}-(\EW{\mass{}}(t)-\EW{})^\top \Sigma^{-1} (\EW{\mass{}}(t)-\EW{})\geq 0, \quad \Sigma\succ 0$$
contains $\EW{\mass{}}(t)$. By applying the Schur complement, we obtain the following equivalent SDP constraint, which fits the format of \eqref{Prob: DRO_with_undefined_ambiguity_set}:
\begin{equation}\label{Eq: Sec2_first_moment}
\left\langle \begin{bmatrix}
\Sigma & t-\EW{}\\
(t-\EW{})^\top & \varepsilon_{\EW{}}
\end{bmatrix}, \mass{}\right\rangle \succeq 0.
\end{equation}
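As a numerical sanity check of the Schur-complement step (a sketch with illustrative values, not part of the model), recall that for $\Sigma \succ 0$ the block matrix with blocks $\Sigma$, $x$, $x^\top$, $\varepsilon$ is positive definite if and only if $\varepsilon - x^\top\Sigma^{-1}x > 0$:

```python
# Sylvester's criterion check of the Schur complement fact
#   [[Sigma, x], [x^T, eps]] > 0  <=>  eps - x^T Sigma^{-1} x > 0  (Sigma > 0),
# on a 2x2 Sigma with illustrative numbers.

def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def block_pd(sigma, x, eps):
    """Positive definiteness of the 3x3 block matrix via leading principal minors."""
    m = [[sigma[0][0], sigma[0][1], x[0]],
         [sigma[1][0], sigma[1][1], x[1]],
         [x[0], x[1], eps]]
    return (m[0][0] > 0
            and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0
            and det3(m) > 0)

sigma = [[2.0, 0.0], [0.0, 2.0]]   # Sigma (positive definite)
x = [1.0, 1.0]                     # plays the role of t - mu
# Schur complement: eps - x^T Sigma^{-1} x = eps - 1,
# so the block matrix is positive definite exactly for eps > 1.
```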
Second, it is often natural to assume that the underlying uncertain
probability distributions are monomodal, see
e.g. \cite{Parys2015a}. We will also use this assumption in the computational
results section. This property is challenging to model. On the positive side, one of the
main results in \cite{Parys2015a} is that, if $\Omega$ contains
monomodal distributions with fixed first and second moments,
\eqref{Prob: DRO_with_undefined_ambiguity_set} can be reformulated as
an SDP. However,
incorporating this SDP into either \eqref{Prob: Case1} or \eqref{Prob:
Case2} leads to bivariate variables and is thereby intractable in
general for both problems.
This fact supports the statement in \cite{Rahimian2019a} that ``with the current state of literature, monomodality cannot be modeled in a tractable manner''.
In the present article, we address this challenge by exploiting the fact that monomodal distributions tend to have a relatively small variance. We therefore apply a moment approach along similar lines as the one presented in \cite{Delage2010a}: in addition to the bounds on the first moment, we impose an upper bound on the \emph{second moment} as follows:
\begin{equation}\label{Eq: Sec2_second_moment}
\langle -(t-\EW{})(t-\EW{})^\top ,\mass{}\rangle \succeq -\varepsilon_\Sigma\Sigma.
\end{equation}
Note that this constraint is equivalent to the bound $\text{Var}_{\mass{}}(t)\preceq \varepsilon_\Sigma \Sigma$.
Finally, we can also add \emph{confidence set} constraints as considered, e.g., in \cite{Wiesemann2014a}, which restrict the probability mass of certain subsets $T_i\subseteq T$, i.e.,
\begin{equation}\label{Eq: Sec2_confidence_sets}
\langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}(t), \mass{} \rangle \ge \varepsilon_i \text{ for every } i\in I.
\end{equation}
For $\varepsilon_i>0$, these constraints model $\mass{}(T_i)\geq \varepsilon_i$ and for $\varepsilon_i<0$, they model $\mass{}(T_i)\leq -\varepsilon_i$. In particular, the normalization constraints \eqref{Constr: DRO_with_undefined_ambiguity_set_1} and \eqref{Constr: DRO_with_undefined_ambiguity_set_2} can be modeled in this way by setting $T_i=T$ and $\varepsilon_i=\pm 1$.
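The sign convention can be verified mechanically; in this sketch, `mass_on_Ti` stands for $\mass{}(T_i)$ (a hypothetical value, for illustration only):

```python
# Check of the sign trick in the confidence-set constraints:
#   <sign(eps_i) * 1_{T_i}, mu> >= eps_i
# encodes mu(T_i) >= eps_i for eps_i > 0 and mu(T_i) <= -eps_i for eps_i < 0.
def constraint_holds(mass_on_Ti, eps):
    sign = 1.0 if eps > 0 else -1.0
    return sign * mass_on_Ti >= eps

# eps = 0.3 enforces mu(T_i) >= 0.3; eps = -0.3 enforces mu(T_i) <= 0.3.
```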
\subsection{Relation to the Literature}
We note that $\sum_{i=1}^k a_i \mathbbm{P}(A_i) = \mathbbm{E}_{\P}(v(x,t))$ encodes the expectation of a nonconvex, in our case piecewise-constant, function $v$ in $t\sim \mathbbm{P}$. This is a crucial distinction from the results presented in \cite{Wiesemann2014a} and \cite{Delage2010a}, where the underlying function $v(x,t)$ has to be both convex and piecewise affine in $x$ and $t$, see Condition (C3) in \cite{Wiesemann2014a} and Assumption 2 in
\cite{Delage2010a}. In \cite{Wiesemann2014a} and \cite{Xu2017a}, there are exceptions to these assumptions; however, in these exceptions the number of confidence sets $|I|$ has to be very low (see Observation 1ff. in the electronic compendium of \cite{Wiesemann2014a}) or even $|I|=0$ \cite{Xu2017a}.
In the present article, we generalize this
setting by considering sums of elementary functions $\mathbbm{1}_{A_i}(t)$, which generally do not satisfy any of those assumptions.
Lastly, we comment on discrepancy-based DRO models that restrict
$\Omega$, in particular Wasserstein balls. Given a nominal
probability distribution $\mass{0}\in \Omega$, these models typically
limit the probability mass that needs to be transferred to arrive at
another $\mass{}\in \Omega$. Here, we use confidence set
constraints to achieve a similar effect. Moreover, in our
computational results in Section \ref{Sec:comp-results}, we consider
an ambiguity set $\Omega$ that contains realistic uncertainties
with rather large pairwise Wasserstein distances, so that
an additional Wasserstein constraint does not strengthen the model.
1302.3023
\section{Introduction}
Over several decades the interest in iron (II) phthalocyanine (FePc) has been
motivated by various applications as well as by its similarity to iron proteins.
In more recent times FePc has become popular as a model system for core-level
spectroscopy (XAS, XMCD) studies.\cite{Miedema, Stepanow,Barto10} Certain progress
has been made in creating artificial ordered structures of FePc.\cite{Abel,Gredig}
Perhaps the most significant finding of the recent decade was the discovery of an
unquenched orbital moment of iron in FePc by means of in-field M\"ossbauer
spectroscopy\cite{Filoti} and XMCD.\cite{Barto10}
This has in turn brought about a surge of computational activity on FePc. Along
with various density-functional calculations,\cite{ReynoldsFiggis,LiaoScheiner,
MaromKronik,KHO,Brena,Nakamura,Wang,Sumimoto,Bialek} it is worth mentioning
multiplet structure calculations\cite{Thole,Miedema,Stepanow} based on a
phenomenological model. The main ingredients of the latter approach are the
Coulomb repulsion, allowed for by way of the Slater-Condon parameters, and the
crystal (ligand) field (CF) on Fe$^{2+}$. The multiplet calculations\cite{Thole,Miedema,Stepanow}
were mainly aimed at simulating x-ray absorption spectra; however, they produced
an interesting by-product. This is a map of ground states of Fe$^{2+}$ in CF
parameter space (Fig. 2 of Ref. \onlinecite{Miedema}). An early version of such
a diagram for the point group $D_{4h}$ was produced by K\"onig and Schnakig,\cite{KoenigS}
but the idea itself goes back to the classical work of Tanabe and Sugano,\cite{TS}
who dealt with the cubic symmetry. Unfortunately, in the case of $D_{4h}$ one can
plot only 2-dimensional sections of the 3-dimensional space of CF parameters,
the choice of these sections in Refs. \onlinecite{KoenigS} and \onlinecite{Miedema}
being rather suboptimal. Besides, the diagrams in Refs. \onlinecite{KoenigS} and
\onlinecite{Miedema} have the disadvantage that CF parameters in energy units are
plotted on the axes, and so the diagrams depend on the Slater-Condon (or Racah)
parameters employed. By contrast, the original work of Tanabe and Sugano\cite{TS}
presented the result in terms of a dimensionless ratio of the CF parameter to
Racah's $B$, which led to the celebrated series of universal diagrams. Still,
Miedema's diagrams are of interest. They have an enigmatic cornered shape: the
domain boundaries are piecewise linear, with characteristic, repeatedly
encountered slopes. These features of the diagrams have so far remained unexplained.
As regards agreement with experiment, the calculations leave much to be desired.
Density-functional calculations make inconclusive predictions of the ground state.
Thus, Reynolds and Figgis\cite{ReynoldsFiggis} could not decide between $^3E_g$ and
$^3B_{2g}$ because the two lie too close in energy. Marom and Kronik\cite{MaromKronik}
found either $^3B_{2g}$ ($e_g^4 a_{1g}^1 b_{2g}^1$)
or $^3A_{2g}$ ($a_{1g}^2 b_{2g}^2 e_g^{\uparrow\uparrow} $), depending on
computational details. \cite{remark1}
More
recently, Nakamura et al.\cite{Nakamura} found $^3A_{2g}$ in an isolated FePc
molecule, but $^3E_g$ in a columnar stack of such molecules. Establishing the symmetry
of the ground state does not settle the dispute: within the correct $^3E_g$
one should further distinguish between the configurations $b_{2g}^2 e_g^3 a_{1g}^1$,
as conjectured by Dale et al.,\cite{Dale} and $a_{1g}^2 e_g^3 b_{2g}^1$, found in
the multiplet calculations.\cite{Miedema,Stepanow} The two ground-state
configurations
lead to distinct types of magnetic behavior.
This work aims at determining the CF parameters of Fe(II) phthalocyanine.
As we will show below, the known experimental facts on that compound (obtained by
magnetic and spectroscopic measurements) in connection with our CF analysis
leave no choice: there is only one domain in the field of CF parameters yielding a
ground state that does
not contradict established knowledge. In this way, our calculations resolve
the long-standing puzzle about the ground state of FePc. As
a byproduct, the peculiar shape of Miedema's diagrams is explained.
In the following, we consider the $3d^6$ configuration in
a CF of symmetry $D_{4h}$ and allow for Coulomb repulsion between the $3d$
electrons. The CF is {\em a priori} assumed neither strong nor weak as compared
with the Coulomb interaction. Hybridization of the Fe $3d$ orbitals
to neighboring ligands is thought to be included into the relevant CF parameters.
In that sense, our theory would better be called a ligand field theory; however,
since the ligand $p$ orbitals are not treated explicitly, we keep the term
CF theory for simplicity. The spin-orbit coupling is neglected at first (since it is much
weaker than either the CF or the Coulomb repulsion) but taken into account in a
later discussion of magnetic properties.
This paper is organized as follows. In the next section we briefly review the
experimental facts that bear on our knowledge of the ground state of FePc and
reiterate the current status of this knowledge. Further, in Section III, a diagram
of ground states of FePc is constructed from numerical calculations. Our diagram
is similar to that of Ref. \onlinecite{Miedema}, the main two differences being
that (i) dimensionless coordinates of the Tanabe-Sugano type are used and (ii)
the section of the 3-dimensional space of CF parameters is chosen on the principle
that the coordination polyhedron is a plane square. In Section IV, the same diagram
is reproduced analytically, which includes explicit expressions for all domain
boundaries. The subsequent discussion in Section V hinges upon the good agreement
of the exact (numerical) and approximate (analytical) diagrams. The piecewise-linear
shape of the domain boundaries finds a natural explanation in the linearity of the
underlying equations. A conclusion is made that FePc is in a strong CF mode and
approximate values of the CF parameters are given (or rather, ratios of CF
parameters to Racah's $C$). The ground-state configuration turns out to be
$a_{1g}^2 e_g^3 b_{2g}^1$ ($^3E_g$), as in Refs. \onlinecite{Miedema,Stepanow}.
Section VI recapitulates the conclusions.
\section{Experimental facts and their implications}
\subsection{Magnetic susceptibility}
As early sources of our knowledge of the ground state of FePc one usually cites
magnetic susceptibility studies of $\beta$-FePc powder\cite{Dale} and single
crystals.\cite{Barraclough} The experimental data of both papers are in
reasonable agreement with each other. At temperatures between 100 and 300 K the
susceptibility follows the Curie-Weiss law with $\mu_{\rm eff} \approx
3.8\,\mu_{\rm B}$ (for powder). This is between the spin-only values of $\mu_{\rm eff}$
for $S=1$ and $S=2$ ($2\sqrt{2}\,\mu_{\rm B} \approx 2.8\,\mu_{\rm B}$ and $2\sqrt{6}\,
\mu_{\rm B} \approx 4.9\,\mu_{\rm B}$, respectively). Below about 20 K the
susceptibility of $\beta$-FePc becomes temperature-independent.
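The spin-only benchmarks quoted above follow from the standard formula $\mu_{\rm eff} = g\sqrt{S(S+1)}\,\mu_{\rm B}$ with $g=2$; a minimal sketch:

```python
import math

# Spin-only effective moment, mu_eff = g * sqrt(S(S+1)) mu_B with g = 2
# (standard formula; reproduces the benchmark values quoted in the text).
def mu_spin_only(S, g=2.0):
    return g * math.sqrt(S * (S + 1))

mu_s1 = mu_spin_only(1)   # 2*sqrt(2) mu_B for S = 1
mu_s2 = mu_spin_only(2)   # 2*sqrt(6) mu_B for S = 2
```

The measured $\mu_{\rm eff} \approx 3.8\,\mu_{\rm B}$ indeed falls between these two values.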
These facts found an explanation in a simple model with $S=1$ and effective
$g$-factors employed in both works.\cite{Dale,Barraclough} The spectrum of the
model consists of a singlet ground state with $M_S=0$ and an excited doublet
with $M_S=\pm 1$ situated at $\sim 70$ cm$^{-1}$. It is unclear why Dale et
al.\cite{Dale} thought to justify this model by proposing $b_{2g}^2 e_g^3 a_{1g}^1$
($^3E_g$) as the ground configuration (and calling it an orbital singlet). Their
work contains no experimental evidence of $^3E_g$ being the ground state of FePc.
Barraclough et al.\cite{Barraclough} noticed the discrepancy between the orbitally
degenerate $^3E_g$ and Dale's assertion that the ground state should be an orbital
singlet, and postulated $^3B_{2g}$ instead. As pointed out in Ref. \onlinecite{StillmanTh},
this was no proof; $^3A_{2g}$ would have done equally well.
The model used in Refs. \onlinecite{Dale} and \onlinecite{Barraclough} is not
without its difficulties. For instance, it cannot explain the presence of an excited state
(or states) at $\sim 300$ cm$^{-1}$, as pointed out in Ref. \onlinecite{Dale}.
The existence of such an excited state follows from the fact that the
susceptibility deviates from the Curie-Weiss law above room temperature, as
observed by Lever.\cite{Lever} (A slight downward curvature is also visible
in $\chi^{-1}$ vs $T$ data obtained more recently on $\alpha$-FePc.\cite{Filoti})
This can be viewed as an argument in favor of $^3E_g$ rather than an orbital singlet.
The six-fold degenerate $^3E_g$ would be split by the spin-orbit interaction
into 4 singlets and a doublet, the overall splitting being $\sim\zeta\sim 400$
cm$^{-1}$. The observed susceptibility behavior would find a plausible explanation
if one of the singlets was the ground state, the doublet (or a quasi-doublet)
was situated at $\sim 70$ cm$^{-1}$, and a further state (or states) at $\sim 300$
cm$^{-1}$.
Another difficulty of Dale's triplet model lies in the values of the $g$-factors,
which differ significantly from 2. Thus, Dale et al.\cite{Dale} obtain $g_{\perp}=2.86$
(and $g_{||}=1.93$). That is, nearly one Bohr magneton has to come from an orbital
moment. Such a large orbital contribution is explained more naturally by the
presence of an unquenched orbital moment (i.e., by orbital degeneracy of the ground
state) rather than by mixing in of excited states. We note that Barraclough et
al.,\cite{Barraclough} who assert most emphatically the equivalence of their
approach to that of Ref. \onlinecite{Dale}, obtained an isotropic $g$-factor,
$g_{\perp}=g_{||}=2.64$. Generally speaking, Barraclough's $g$-factors should be
more trustworthy, since they were deduced from data measured on a single
crystal.\cite{Barraclough} The difficulty, however, is that, according to Eq. (4)
of Ref. \onlinecite{Dale}, the zero-field splitting must vanish for
$g_{\perp}=g_{||}$. At the same time, it is emphasized that this splitting,
$\sim 70$ cm$^{-1}$, is very large.\cite{Barraclough}
In any case, it should be regarded as firmly established that the susceptibility
is a maximum in the plane of the FePc molecule.\cite{Barraclough} This conclusion
has been recently confirmed in an independent experiment.\cite{Barto10} As regards
the ground states conjectured to explain the susceptibility data, they cannot be
viewed as deduced from experiment.
\subsection{Other techniques}
An x-ray diffraction experiment of Coppens et al.\cite{Coppens} found the occupation
numbers of the Fe 3d orbitals in FePc: $b_{2g}^{1.65} e_g^{2.13} a_{1g}^{0.88}
b_{1g}^{0.75}$. On account of covalency, these numbers sum up to 5.41 rather than 6.
Restoring the normalization to 6, one has $b_{2g}^{1.83} e_g^{2.36} a_{1g}^{0.98}
b_{1g}^{0.83}$. Coppens et al. regarded their result as a direct confirmation of
Dale's conjecture, $b_{2g}^2 e_g^3 a_{1g}^1$ ($^3E_g$). Yet, the analysis in Ref.
\onlinecite{Coppens} was limited to spin-triplet states. An unprejudiced look at
the quintet states, in particular at $b_{2g}^2 e_g^{\uparrow\uparrow}
a_{1g}^{\uparrow} b_{1g}^{\uparrow}$ ($^5B_{2g}$), suggests a higher degree of
agreement with Coppens' results. However, $^5B_{2g}$ can be ruled out because it
would have resulted in too high a magnetic moment, $\mu_{\rm eff} = 4.9\,\mu_{\rm B}$.
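The renormalization of Coppens' occupation numbers is a one-line rescaling; the following sketch reproduces the values quoted above:

```python
# Rescale the experimental occupation numbers (summing to 5.41 because of
# covalency) to the nominal 6 electrons of the d^6 configuration.
occ = {"b2g": 1.65, "eg": 2.13, "a1g": 0.88, "b1g": 0.75}
total = sum(occ.values())                           # 5.41
scaled = {k: 6 * v / total for k, v in occ.items()}
# scaled is approximately {b2g: 1.83, eg: 2.36, a1g: 0.98, b1g: 0.83}
```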
Turning now to the optical absorption experiments of Stillman and Thomson,\cite{StillmanTh}
we note that they were carried out on FePc solution in dichlorobenzene. This system
is chemically distinct from either the free FePc molecule or $\alpha$ or $\beta$
FePc. Therefore, without casting doubt upon Stillman and Thomson's assertion of
a $^3A_{2g}$ ground state, we state merely that their result is not relevant to
the system under consideration herein.
A M\"ossbauer spectroscopy study of Filoti et al.\cite{Filoti} found in $\alpha$-FePc
a very large (66 T) hyperfine field on $^{57}$Fe. Unlike the usual Fermi contact
field, the hyperfine field in $\alpha$-FePc has a positive sign (meaning
${\bm H}_{\rm hf} \uparrow\uparrow {\bm \mu}_{\rm Fe}$) and can only originate from
a large unquenched orbital moment. The latter was estimated to be about
1,\cite{Filoti} but no definite information about its orientation could be obtained.
A more recent XMCD experiment of Bartolom\'e et al.\cite{Barto10} found in FePc
an orbital moment of $0.53\,\mu_{\rm B}$ lying in the plane of the molecule. In the
same work\cite{Barto10} it was demonstrated by direct measurements that the plane
of the molecule contains the easy magnetization direction, in agreement with the
early finding of Barraclough et al.\cite{Barraclough}
To summarize the section, there is no experimental evidence of the ground state
of FePc being either $^3B_{2g}$ or $^3A_{2g}$. Nor do Coppens'
data\cite{Coppens} provide sufficient confirmation for Dale's conjecture of
$b_{2g}^2 e_g^{\uparrow\downarrow\uparrow} a_{1g}^{\uparrow}$ ($^3E_g$). All one
can say at this point is that it should be a $^3E_g$ state endowed with magnetic
anisotropy of an easy-plane kind.
\section{Numerical calculations}
\subsection{Crystal field Hamiltonian}
The CF Hamiltonian operating on a single $3d$ electron in a tetragonal ($D_{4h}$)
environment is written as follows:
\begin{equation}
{\cal H}_{\rm CF} = B_{20} O_2^0 + B_{40} O_4^0 + B_{44} O_4^4
\label{HCF}
\end{equation}
Here $O_n^m$ are Stevens' operator equivalents\cite{Stevens} in the
$\ell$-representation ($\ell =2$): $O_2^0 = 3\ell_z^2-6$, $O_4^0 = 35\ell_z^4
-155\ell_z^2+72$, $O_4^4 = \frac{1}{2}(\ell_+^4+\ell_-^4)$; $B_{nm}$ are CF
parameters. In older literature one sometimes comes across Ballhausen's CF
parameters.\cite{Ballhausen} These are related to the $B_{nm}$'s in a simple way:
\begin{equation}
Dq = \textstyle\frac{12}{5}B_{44},~~~Ds = 3B_{20},~~~
Dt = \textstyle\frac{12}{5}B_{44} - 12B_{40}
\label{Ballh}
\end{equation}
It is well known that the five real $d$ orbitals belong to distinct irreducible
representations of the point group $D_{4h}$. Therefore, in the basis of those
orbitals the CF Hamiltonian (\ref{HCF}) takes a diagonal form, the eigenvalues
being\cite{Ballhausen}
\begin{equation}
\begin{array}{lcccl}
E(d_{xy}) & = & E(b_{2g}) & = & 6 B_{20} + 12 B_{40} - 12 B_{44} \\
E(d_{xz,yz}) & = & E(e_g) & = & -3 B_{20} - 48 B_{40} \\
E(d_{z^2}) & = & E(a_{1g}) & = & -6 B_{20} + 72 B_{40} \\
E(d_{x^2-y^2})& = & E(b_{1g}) & = & 6 B_{20} + 12 B_{40} + 12 B_{44}
\end{array}
\label{d_energies}
\end{equation}
Note that Ballhausen's original equations (5-14) and (5-15) need to be augmented
with the cubic terms, $+6Dq$ and $-4Dq$, respectively, before being converted to
the Stevens notation by means of Eqs. (\ref{Ballh}).
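As a quick consistency check, a short sketch (with illustrative parameter values; not the code used for the actual calculations) evaluates Eqs. (\ref{d_energies}) and (\ref{Ballh}); note that the CF is traceless once the doubly degenerate $e_g$ level is counted twice:

```python
# One-electron CF energies, Eq. (d_energies), and the Ballhausen conversion,
# Eq. (Ballh), for illustrative parameter values.
def cf_levels(b20, b40, b44):
    return {"b2g": 6 * b20 + 12 * b40 - 12 * b44,
            "eg": -3 * b20 - 48 * b40,        # doubly degenerate (d_xz, d_yz)
            "a1g": -6 * b20 + 72 * b40,
            "b1g": 6 * b20 + 12 * b40 + 12 * b44}

def ballhausen(b20, b40, b44):
    return 12 * b44 / 5, 3 * b20, 12 * b44 / 5 - 12 * b40   # Dq, Ds, Dt

levels = cf_levels(0.7, -0.2, 1.3)
dq, ds, dt = ballhausen(0.7, -0.2, 1.3)
# Counting e_g twice, the five levels sum to zero, and
# E(b1g) - E(b2g) = 24 * B44 independently of B20, B40.
trace = levels["b2g"] + 2 * levels["eg"] + levels["a1g"] + levels["b1g"]
```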
So far no restrictions have been imposed on the CF, except that it should be
compatible with the point group $D_{4h}$. Yet, much more is known about the
structure of the FePc molecule than just the symmetry of the Fe site. Thus,
the nearest environment of the iron atom consists of four nitrogen atoms making
a plane square, the Fe-N bonds being aligned with either the $x$ or the $y$ axis.
This fact enables us to reduce the number of independent CF parameters by one.
A rather general CF model known as the superposition model (see Ref.
\onlinecite{NewmanNg} for a comprehensive review), relates pairs of CF parameters
$B_{nm}$ with equal $n$ on the basis of the shape of the coordination polyhedron.
Omitting the rather straightforward calculations, we state the result: for a plane
square the superposition model demands that
\begin{equation}
B_{44} = \textstyle\frac{35}{3} B_{40}
\label{superpos}
\end{equation}
\subsection{Hamiltonian matrix}
Our calculations dealt with a Hamiltonian consisting of the CF (\ref{HCF}) and
the Coulomb repulsion and operating on the $3d^6$ configuration. The basis states
were taken in the form of simple products of one-electron $d$ orbitals,
\begin{equation}
\prod_{i=1}^6 | m_i \sigma_i \rangle
\label{prod}
\end{equation}
with $m_i =0,\pm 1,\pm 2$, and $\sigma_i = \pm 1/2$. There are ${10 \choose 6}
= 210$ such states in total.
Nonzero matrix elements of ${\cal H}_{\rm CF}$ (\ref{HCF}) are of two kinds. First
of all, there are diagonal matrix elements, given by
\begin{equation}
B_{20} \sum_{i=1}^6 \left( 3m_i^2 - 6 \right) +
B_{40} \sum_{i=1}^6 \left( 35m_i^4 - 155m_i^2 + 72 \right)
\label{diagCF}
\end{equation}
Secondly, there are nonzero matrix elements between the states (\ref{prod}) that
differ in one pair of quantum numbers $m_i$, $m_i$ being $-2$ in one of the states
and $+2$ in the other one. All such matrix elements equal $12B_{44}$.
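The basis size and the diagonal rule (\ref{diagCF}) are easy to verify in a few lines (an illustrative sketch, not the code behind the actual calculations):

```python
from itertools import combinations

# The 3d^6 basis: 6-element subsets of the 10 spin orbitals (m, sigma),
# giving C(10, 6) = 210 states, and the diagonal CF element, Eq. (diagCF).
spin_orbitals = [(m, s) for m in (-2, -1, 0, 1, 2) for s in (0.5, -0.5)]
basis = list(combinations(spin_orbitals, 6))

def diag_cf(state, b20, b40):
    return sum(b20 * (3 * m**2 - 6) + b40 * (35 * m**4 - 155 * m**2 + 72)
               for m, _ in state)

# Per-orbital contributions reproduce Eq. (d_energies) up to the B44 term:
# m = 0 gives -6*B20 + 72*B40 (a1g); m = +-1 gives -3*B20 - 48*B40 (e_g).
```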
The matrix elements of the Coulomb repulsion have been treated extensively in the
literature. Here we follow Griffith's fundamental treatise.\cite{Griffith} Again,
there are two distinct kinds of nonzero matrix elements. The diagonal ones are
given by
\begin{equation}
\sum_{k=0,2,4} F^k \sum_{i>j} \left[ c_{m_i m_i}^k c_{m_j m_j}^k
-\delta_{\sigma_i \sigma_j} \left( c_{m_i m_j}^k \right)^2 \right]
\label{diagCoul}
\end{equation}
where $F^k$ are the Slater-Condon parameters ($k=0,2,4$) and
\begin{equation}
c_{mm'}^k = \sqrt{\frac{4\pi}{2k+1}} \int Y_{2m}^* Y_{2m'} Y_{k,m-m'} d\Omega
\label{Gaunt}
\end{equation}
The integral in Eq. (\ref{Gaunt}) is known as the Gaunt coefficient. Numerical
values of $c_{mm'}^k$ were taken from Table 4.4 of Griffith's book.\cite{Griffith}
The inner sum in Eq. (\ref{diagCoul}) is taken over all 15 pairs of filled $d$
orbitals. The first term in brackets describes the so-called Coulomb contribution,
while the second one, relevant to pairs of orbitals with parallel spins only,
is the exchange contribution.
The Coulomb repulsion also has off-diagonal matrix elements. These are nonzero
only between the states with equal $M_L$ and $M_S$ that differ in two occupied
$d$ orbitals, say, $|m_{i1}\sigma_{i1}\rangle$ and $|m_{j1}\sigma_{j1}\rangle$
in State \#1, as against $|m_{i2}\sigma_{i2}\rangle$ and $|m_{j2}\sigma_{j2}
\rangle$ in State \#2. It must hold that $m_{i1}+m_{j1} = m_{i2}+m_{j2}$ and
$\sigma_{i1}+\sigma_{j1} = \sigma_{i2}+\sigma_{j2}$. The matrix element between
the states \#1 and \#2 is expressed as follows
$$ \sum_{k=0,2,4} F^k \left[ \delta_{\sigma_{i1}\sigma_{i2}}
\delta_{\sigma_{j1}\sigma_{j2}} c_{m_{i1}m_{i2}}^k c_{m_{j2}m_{j1}}^k \right. $$
\begin{equation}
\left. -\delta_{\sigma_{j1}\sigma_{i2}} \delta_{\sigma_{i1}\sigma_{j2}}
c_{m_{j1}m_{i2}}^k c_{m_{j2}m_{i1}}^k \right]
\label{offdiagCoul}
\end{equation}
Thus, the matrix elements of the Hamiltonian are linear combinations of the CF
parameters, $B_{20}$, $B_{40}$, and $B_{44}$, as well as the Slater-Condon
parameters, $F^0$, $F^2$, and $F^4$. The latter are conveniently replaced by the
Racah parameters,
\begin{equation}
F^0 = A +\textstyle\frac{7}{5}C, ~~~F^2 = 49B+7C,~~~F^4 = \textstyle\frac{63}{5} C
\label{defRacah}
\end{equation}
The parameter $A$ is hereafter set to zero, because its only effect is to shift
the energies of all the states of $d^n$ by the same amount, $An(n-1)/2$.
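The conversion (\ref{defRacah}) is easily inverted; the following sketch (the inverse formulas are elementary algebra, not quoted from a reference) round-trips the parameters:

```python
# Slater-Condon <-> Racah conversion, Eq. (defRacah), and its inverse.
def racah_to_slater(a, b, c):
    return a + 7 * c / 5, 49 * b + 7 * c, 63 * c / 5   # F0, F2, F4

def slater_to_racah(f0, f2, f4):
    c = 5 * f4 / 63
    b = (f2 - 7 * c) / 49
    a = f0 - 7 * c / 5
    return a, b, c

# Round trip with A = 0 (as in the text) and the ratio B/C = 0.227 used
# for Fe2+ in Section III (C set to 1 as the energy unit).
a, b, c = 0.0, 0.227, 1.0
f0, f2, f4 = racah_to_slater(a, b, c)
back = slater_to_racah(f0, f2, f4)
```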
\subsection{Degeneracy diagram}
The calculation consisted in setting and numerically diagonalizing the Hamiltonian
matrix for given sets of parameters $B_{20}$, $B_{40}$, $B_{44}$, $B$, $C$, and
subsequently determining the degeneracy of the ground state. Five characteristic
values of degeneracy were encountered:
$$
\begin{array}{rcl}
1: &~~~~^1A~~~~& S=0,~{\rm no~orbital~degeneracy} \\
3: & ^3A & S=1,~{\rm no~orbital~degeneracy} \\
5: & ^5A & S=2,~{\rm no~orbital~degeneracy} \\
6: & ^3E & S=1,~{\rm double~orbital~degeneracy} \\
10: & ^5E & S=2,~{\rm double~orbital~degeneracy}
\end{array}
$$
At this stage the ground states are labeled tentatively. Thus $A$ can stand for
any of the following: $A_{1g}$, $A_{2g}$, $B_{1g}$, or $B_{2g}$, which we are unable
to distinguish. On the other hand, $^3E$=$^3E_g$ and $^5E$=$^5E_g$, as will be explained
in Section IV.
The construction of the diagram (Figure \ref{numerical}) was organized as follows.
All energies were expressed in the units of the Racah parameter $C$. The ratios
$B_{20}/C$ and $B_{40}/C$ were treated as independent variables defined on a dense
mesh. In the spirit of Tanabe and Sugano,\cite{TS} the ratio $B/C$ was fixed to
a value appropriate for Fe$^{2+}$, $B/C = 0.227$, as in Table 7.3 of Ref.
\onlinecite{AB}. The CF parameter $B_{44}$ was not regarded as an independent one.
Rather, it was found from Eq. (\ref{superpos}), as prescribed by the superposition
model.\cite{NewmanNg} As a result, the $B_{20}/C-B_{40}/C$ plane was partitioned
into domains of five different kinds, according to the degeneracy found at each
point. The diagram (Figure \ref{numerical}) has a cornered shape reminiscent of
the diagrams in Refs. \onlinecite{Miedema} and \onlinecite{KoenigS}. The domain
boundaries appear to be straight lines with characteristic slopes. Several sets of
parallel lines are encountered. The central part of the diagram is an area of
weak CF; in compliance with Hund's first rule, the ground state here has $S=2$.
The periphery of Figure \ref{numerical} is a region of strong CF; here $S=0$ or 1.
\begin{figure}
\centerline{\includegraphics[width=0.47\textwidth]{fig1.eps}}
\caption{\label{numerical} Partition of CF parameter space among differently
degenerate ground states, as found from numerical calculations in the absence
of spin-orbit interaction. The possible ground states are denoted according to their total spin
and the absence ($^{2S+1}A$) or presence ($^{2S+1}E$) of orbital degeneracy.
They will be further specified in Figure \ref{analytic}.}
\end{figure}
The numerical calculations have the advantage of producing an immediate graphical
result. However, it is not easy to analyze the character of a ground state
expressed in a 210-dimensional basis. Of special interest to us is $^3E_g$, which
appears in six non-adjacent domains in Figure \ref{numerical}. So we would like
to know if those $^3E_g$ are similar or distinct. Furthermore, we would like to
find out the origin of the cornered shape of the diagram: why the boundaries are
straight and the slopes repeated. Finally, we ask how the diagram
would change if $B/C$ and/or $B_{44}/B_{40}$ were different from the values used so
far. Answers to the above questions should be sought by means of analytical
calculations.
\section{Analytical treatment}
\subsection{Weak crystal field: quintet states}
In the weak-field approximation the CF is treated as a perturbation with respect
to the intra-atomic Coulomb interaction, whose eigenstates are spectral terms with
certain $L$ and $S$. Since the CF acts on spatial but not on spin variables, terms
with different $S$ do not mix together (as long as the spin-orbit coupling is
neglected). In the $d^6$ configuration there is a single quintet term, $^5D$,
whose Coulomb energy is\cite{Griffith}
\begin{equation}
E_{\rm Coulomb} = -35B + 7C
\label{Coulomb5D}
\end{equation}
The remaining task consists in diagonalizing the CF Hamiltonian (\ref{HCF}) on the
wave functions of $^5D$, since there are no other terms with $S=2$. To this end,
it is convenient to interpret Eq. (\ref{HCF}) in a slightly different way than
was done in Section III. Namely, $O_n^m$ are now regarded as Stevens' operators
in the $L$ representation ($L=2$): $O_2^0 = 3L_z^2 - 6$ etc. Since $^5D$ contains
a single $d$ electron above a half-filled shell, it is only this one electron that
is exposed to the CF. Therefore, $L=\ell$ and the coefficients $B_{nm}$ in Eq.
(\ref{HCF}) are the same in both representations. So we can simply take over the
one-electron CF energies (\ref{d_energies}). In doing so, we capitalize the irrep
labels, to indicate that they now refer to many-electron states, and append the
multiplicity 5. We also add the common Coulomb term $E_{\rm Coulomb}$ (\ref{Coulomb5D}). The resulting
energies of the quintet states are as follows:
$$
\begin{array}{lll}
E(^5B_{2g}) & = -35B + 7C + 6 B_{20} + 12 B_{40} - 12 B_{44} & ~({\rm Q1}) \\
E(^5E_g) & = -35B + 7C - 3 B_{20} - 48 B_{40} & ~({\rm Q2}) \\
E(^5A_{1g}) & = -35B + 7C - 6 B_{20} + 72 B_{40} & ~({\rm Q3}) \\
E(^5B_{1g}) & = -35B + 7C + 6 B_{20} + 12 B_{40} + 12 B_{44} & ~({\rm Q4})
\end{array}
$$
\subsection{Strong crystal field: singlet states}
In the strong-CF approximation the zeroth-order states are constructed from
one-electron eigenstates of the CF Hamiltonian (\ref{HCF}), then their energies
are corrected for the Coulomb repulsion. A first question that arises is: which
six one-electron $d$ states are filled in a CF of symmetry $D_{4h}$? To give
a possibly general answer, it is convenient to express all relevant energies in
the units of $B_{44}$. Thus, the one-electron CF energies (\ref{d_energies}) are
divided by $B_{44}$. Then, equating pairs of the so modified expressions, one
obtains 5 equations linear in $B_{20}/B_{44}$ and $B_{40}/B_{44}$. The corresponding
lines in the parameter plane $B_{20}/B_{44} - B_{40}/B_{44}$ (Figure \ref{fig2})
are loci of points where the sequence of CF levels changes. For example, the levels
$a_{1g}$ and $b_{1g}$ cross over on a line described by
\begin{equation}
-B_{20}/B_{44} + 5 B_{40}/B_{44} = 1
\label{example}
\end{equation}
as readily obtained by equating the last two Eqs. (\ref{d_energies}). Equation
(\ref{example}) describes the upper one of the two parallel lines in Figure \ref{fig2};
the lower line arises from the condition $E(a_{1g}) = E(b_{2g})$. Likewise, the
equation $E(e_g) = E(b_{1g,2g})$ generates a pair of parallel lines with a negative
slope. A single line passing through the origin is produced by the relation $E(a_{1g})
=E(e_g)$. Finally, the equation $E(b_{1g}) = E(b_{2g})$ leads to no line; this is why
there are 5 solid lines in Figure \ref{fig2}, rather than 6 as expected
combinatorially. The CF levels $b_{1g}$ and $b_{2g}$ do not swap at any inner point
of Figure \ref{fig2}, but do so at infinity, where $B_{44}$ changes sign. Therefore,
$b_2$ (short for $b_{2g}$) always stands to the left of $b_1$ ($b_{1g}$) in the
level sequences indicated within each one of the 12 domains. The sequence labels,
read from left to right, name the CF levels in order of ascending energy if $B_{44}>0$,
and in order of descending energy if $B_{44}<0$.
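The interval boundaries quoted below ($-40/21$, $-4/7$, etc.) can be reproduced by solving the pairwise crossing conditions along the superposition-model line $B_{40}/B_{44}=3/35$. A sketch, assuming the one-electron coefficients read off from Eqs. (Q1)--(Q4):

```python
from fractions import Fraction as F
from itertools import combinations

# One-electron CF coefficients (B20, B40, B44), read off from Eqs. (Q1)-(Q4).
cf = {"b2g": (F(6), F(12), F(-12)), "eg": (F(-3), F(-48), F(0)),
      "a1g": (F(-6), F(72), F(0)), "b1g": (F(6), F(12), F(12))}

# Along the superposition-model line B40/B44 = 3/35 each crossing E_i = E_j
# becomes linear in x = B20/B44:  d20*x + d40*(3/35) + d44 = 0.
r = F(3, 35)
crossings = {}
for (a, ca), (b, cb) in combinations(cf.items(), 2):
    d20, d40, d44 = (u - v for u, v in zip(ca, cb))
    if d20 == 0:
        continue  # the b1g/b2g pair: no crossing at finite B20/B44
    crossings[a + "=" + b] = -(d44 + d40 * r) / d20

# Five boundaries: -40/21, -4/7, 16/21, 10/7, 24/7 -- hence six intervals.
print(sorted(crossings.values()))
```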
\begin{figure}
\centerline{\includegraphics[width=0.47\textwidth]{fig2.eps}}
\caption{\label{fig2} Partition of the $B_{20}/B_{44}-B_{40}/B_{44}$ plane among all
possible permutations of the four one-electron CF levels. }
\end{figure}
Up to this point no restrictions have been placed on the CF, apart from those imposed
by the $D_{4h}$ symmetry. Now we do restrict the CF by demanding that it must
additionally comply with the superposition model, Eq. (\ref{superpos}). This implies
that the system is now bound to the horizontal dashed line in Figure \ref{fig2}.
The dashed line cuts through six domains. The corresponding intervals on the abscissa
axis are numbered 1 to 6 in order of ascending $B_{20}/B_{44}$, with $B_{44}>0$.
Thus, the interval \#1 stands for $B_{20}/B_{44} <-\frac{40}{21}$, \#2 means that
$-\frac{40}{21} < B_{20}/B_{44} < -\frac{4}{7}$ etc. The same intervals, but with
$B_{44}<0$, will be referred to by overscore numbers $\overline{1}$ to $\overline{6}$.
Now, examining the CF level sequences in the above 12 intervals, one encounters only
3 situations where there is a CF gap between the highest occupied and the lowest
unoccupied orbitals. The corresponding ground-state electronic configurations are as
follows:
\begin{equation}
\begin{array}{ll}
b_{2g}^2 e_g^4 & {\rm ~intervals}~\#2,\,\#3,\,\#4 \\
a_{1g}^2 e_g^4 & {\rm ~intervals}~\#5,\,\#6,\,\#\overline{1} \\
a_{1g}^2 b_{1g}^2 b_{2g}^2 & {\rm ~intervals}~\#\overline{4},\,\#\overline{5}
\end{array}
\label{intervals}
\end{equation}
Within the above intervals of $B_{20}/B_{44}$ the ground state is a singlet, provided
that the CF is sufficiently strong. In all other cases the Fermi level is caught at
the partially occupied quadruply degenerate $e_g$ level and there is a possibility
of both a singlet and a (spin) triplet ground state with the same energy.
A subsequent allowance for intra-atomic (Hund's) exchange makes the triplet states
energetically more favorable than the singlet ones. Therefore, singlets are not viable
candidates for the ground state in any interval where triplets with the same CF energy
are possible. In such intervals only the triplets will be considered (in the next
subsection).
Conversely, in the three cases where there are viable singlet states (\ref{intervals}),
competing triplet states will be taken into consideration as well, constructed from
excited CF configurations. Such triplets still have a chance of becoming the ground state
on account of Hund's exchange in situations where the CF is not strong enough, near
interval boundaries, etc.
Let us now turn to our direct task --- computing the energies of the singlet states
(\ref{intervals}). The CF energies are computed most readily by summing up the
energies of the six occupied one-electron states as given by Eqs. (\ref{d_energies}).
First-order correlation corrections are then computed following Slater's
prescription:\cite{Slater} for each pair of occupied $d$ states a so-called Coulomb
integral $J(d_1,d_2)$ is added; a further {\em exchange} contribution $K(d_1,d_2)$
is deducted for pairs with equal spins. $J$'s and $K$'s between the real $d$ orbitals
were expressed in terms of the Racah parameters by Griffith, see Table A26 of his
book.\cite{Griffith} The resulting singlet energies are as follows:
$$
\begin{array}{lcrl}
E(b_{2g}^2e_g^4) & = & -168 B_{40} - 24 B_{44} -30 B + 15 C & ~~({\rm S1}) \\
E(a_{1g}^2e_g^4) & = & -24 B_{20} - 48 B_{40} + 10 B + 15 C & ~~({\rm S2}) \\
E(a_{1g}^2b_{1g}^2b_{2g}^2) & = & 12 B_{20} + 192 B_{40} - 20 B + 15 C & ~~({\rm S3})
\end{array}
$$
We note that for low-lying single-product states, such as those considered in this work,
the factor of $C$ depends solely on $S$ and is given by
\begin{equation}
({\rm factor~of~}C) = (S_{\max}+1)^2 - (S+1)^2
\label{factorofC}
\end{equation}
where $S_{\max}=n/2$ is the hypothetical maximum spin of $n$ electrons
in the absence of the Pauli principle. For fewer than six $d$ electrons, states with
$S=S_{\max}$ are allowed and their energies contain no term in $C$. For $d^6$,
$S_{\max}=3$ and the factors of $C$ equal 15, 12, and 7 for $S=0$, 1, and 2,
respectively. Equation (\ref{factorofC}) is a consequence of the great simplicity
acquired by the Coulomb and exchange integrals when $A$ and $B$ are set to zero:
$$
J(d_i,d_j) = K(d_i,d_j) = (1+2\delta_{ij}) C,
$$
cf. Table A26 of Ref. \onlinecite{Griffith}. No simple relations are known for the
factors of $B$, which have to be calculated in each case separately.
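Equation (\ref{factorofC}) and the quoted factors are easily verified; a minimal check:

```python
# Check of Eq. (factorofC): the factor of C for a low-lying single-product
# state depends only on S, via (S_max + 1)^2 - (S + 1)^2 with S_max = n/2.
def factor_of_C(S, S_max):
    return (S_max + 1) ** 2 - (S + 1) ** 2

# d^6: S_max = 3 gives the factors 15, 12, 7 for S = 0, 1, 2 quoted in the text.
print([factor_of_C(S, 3) for S in (0, 1, 2)])  # [15, 12, 7]

# For fewer than six d electrons, S = S_max is allowed and its factor vanishes.
assert all(factor_of_C(n / 2, n / 2) == 0 for n in range(1, 6))
```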
\subsection{Strong crystal field: triplet states}
Constructing the (spin) triplet states and finding their energies proceeds in
a similar fashion. One peculiarity is the large number of triplets (9 in total),
which have to be constructed for all 12 intervals of $B_{20}/B_{44}$. Where no triplet
state is permitted by the ground CF configuration, the first excited configuration
will be considered instead.
We proceed from the interval \#1, $B_{20}/B_{44} < -\frac{40}{21}$, $B_{44}>0$. Here
(as well as in the interval \#$\overline{6}$, $B_{20}/B_{44} > \frac{24}{7}$,
$B_{44}<0$) the ground CF configuration is $b_{1g}^2 b_{2g}^2 e_g^2$, which allows
one triplet state, $d_{x^2-y^2}^2 d_{xy}^2 d_{xz}^{\uparrow} d_{yz}^{\uparrow}$,
as well as three singlet ones. According to Hund's first rule, it is the
triplet that becomes the ground state upon allowance for the Coulomb interaction.
(It was for this reason that the singlets were left out in the previous subsection.)
The symmetry of the triplet state is $^3A_{2g}$, as determined
by the antisymmetrized product of $d_{xz}$ and $d_{yz}$.
The energy is computed following the same
prescription as in the previous subsection and equals
$$
E(b_{1g}^2 b_{2g}^2 e_g^{\uparrow\uparrow}) = 18 B_{20} - 48 B_{40} - 9B + 12C
~~~~~~({\rm T1})
$$
Let us move to the interval \#2, $-\frac{40}{21} < B_{20}/B_{44} < -\frac{4}{7}$,
$B_{44}>0$. The ground CF configuration, $b_{2g}^2e_g^4$, consists of fully occupied
orbitals and is necessarily a singlet. To construct a spin triplet state, one spin-down
electron is promoted, with a simultaneous reversal of spin, from the $e_g$ orbital
to the first unoccupied CF level $b_{1g}$. The result is either $d_{xy}^2 d_{xz}^2
d_{yz}^{\uparrow} d_{x^2-y^2}^{\uparrow}$ or $d_{xy}^2 d_{yz}^2 d_{xz}^{\uparrow}
d_{x^2-y^2}^{\uparrow}$. This is a doubly orbitally degenerate state $^3E_g$. Its
energy is
$$
E(b_{2g}^2 e_g^{\uparrow\downarrow\uparrow} b_{1g}^{\uparrow})
= 9 B_{20} - 108 B_{40} - 12 B_{44} - 24 B + 12 C ~~~ ({\rm T2})
$$
Proceeding as before, we find that the most favorable spin triplet state in the
interval \#3 is another $^3E_g$, whose energy is given by
$$
E(b_{2g}^2 e_g^{\uparrow\downarrow\uparrow} a_{1g}^{\uparrow})
= -3 B_{20} - 48 B_{40} - 24 B_{44} - 28 B + 12 C ~~~ ({\rm T3})
$$
The remaining six spin-triplet states include: a $^3B_{2g}$ in the intervals \#4 and \#5,
with
$$
E(e_g^4 b_{2g}^{\uparrow} a_{1g}^{\uparrow})
= -12 B_{20} - 108 B_{40} - 12 B_{44} - 22 B + 12 C ~~~ ({\rm T4})
$$
a $^3E_g$ in the interval \#6, with
$$
E(a_{1g}^2 e_g^{\uparrow\downarrow\uparrow} b_{2g}^{\uparrow})
= -15 B_{20} + 12 B_{40} - 12 B_{44} - 14 B + 12 C ~~ ({\rm T5})
$$
a $^3E_g$ in the interval \#$\overline{1}$, with
$$
E(a_{1g}^2 e_g^{\uparrow\downarrow\uparrow} b_{1g}^{\uparrow})
= -15 B_{20} + 12 B_{40} + 12 B_{44} - 14 B + 12 C ~~ ({\rm T6})
$$
a $^3A_{2g}$ in the intervals \#$\overline{2}$ and \#$\overline{3}$, with
$$
E(a_{1g}^2 b_{1g}^2 e_g^{\uparrow\uparrow})
= -6 B_{20} + 72 B_{40} + 24 B_{44} - 29 B + 12 C ~~~ ({\rm T7})
$$
a $^3E_g$ in the interval \#$\overline{4}$, with
$$
E(a_{1g}^2 b_{1g}^2 b_{2g}^{\uparrow} e_g^{\uparrow})
= 3 B_{20} + 132 B_{40} + 12 B_{44} - 29 B + 12 C ~~ ({\rm T8})
$$
and a $^3E_g$ in the interval \#$\overline{5}$, with
$$
E(b_{1g}^2 b_{2g}^2 a_{1g}^{\uparrow} e_g^{\uparrow})
= 15 B_{20} + 72 B_{40} - 13 B + 12 C ~~~~ ({\rm T9})
$$
Note that the factor of $C$ in Eqs. (T1--T9) is invariably 12, as follows from Eq.
(\ref{factorofC}) with $S=1$ and $S_{\max}=3$.
\subsection{The $B_{20}/C - B_{40}/C$ diagram}
The search for the ground state consists in a systematic comparison of energies of
pairs of candidate states, as given by Eqs. (Q1-Q4, S1-S3, T1-T9). For example,
equating (Q1) to (T1) results in
\begin{equation}
-12 B_{20} + 60 B_{40} - 12 B_{44} = 26 B + 5 C
\label{Q1T1}
\end{equation}
Eliminating $B_{44}$ by means of Eq. (\ref{superpos}) and dividing the result by $C$,
one arrives at an equation of a straight line in the plane of the parameters $B_{20}/C$
and $B_{40}/C$:
\begin{equation}
\frac{B_{40}}{C} = -0.15 \frac{B_{20}}{C} - 0.0625 - 0.325 \frac{B}{C}
\label{Q1T1_bis}
\end{equation}
To the left of this line there should be a domain where the ground state is the triplet
$T_1$ ($^3A_{2g}$ or $b_{1g}^2 b_{2g}^2 e_g^{\uparrow\uparrow}$); to the right of the
line, towards the origin, lies the domain where the ground state is the quintet $Q_1$ ($^5B_{2g}$).
In the spirit of Tanabe-Sugano, the ratio $B/C$ is fixed, $B/C=0.227$, as in Table 7.3
of Ref. \onlinecite{AB}.
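The passage from Eq. (\ref{Q1T1}) to Eq. (\ref{Q1T1_bis}) can be retraced with exact rational arithmetic. The sketch below folds $B_{44}$ into the $B_{40}$ term via $B_{44}=(35/3)B_{40}$ (the ratio implied by Eq. (\ref{superpos})) and recovers the $T_1Q_1$ row of Table I:

```python
from fractions import Fraction as F

# Energies (Q1) and (T1) as coefficient vectors of (B20, B40, B44, B, C).
Q1 = (F(6), F(12), F(-12), F(-35), F(7))
T1 = (F(18), F(-48), F(0), F(-9), F(12))

d = [q - t for q, t in zip(Q1, T1)]      # (Q1) - (T1) = 0 on the borderline
d20, d40 = d[0], d[1] + F(35, 3) * d[2]  # eliminate B44 via B44 = (35/3) B40
# Solve for B40/C:  B40/C = a*(B20/C) + b + b'*(B/C)
a, b, bp = -d20 / d40, -d[4] / d40, -d[3] / d40
print(a, b, bp)  # -3/20 -1/16 -13/40, the T1Q1 row of Table I
```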
\begin{figure}
\centerline{\includegraphics[width=0.49\textwidth]{fig3.eps}}
\caption{\label{analytic} Partition of CF parameter space among possible ground states
of FePc. The labels are mnemonic: so $S_1$ is a singlet whose energy is given by Eq. (S1).
The borderlines are as described by Eq. (\ref{A_1}) with the coefficients of Table I.}
\end{figure}
Proceeding as above, one obtains equations for all 33 borderlines appearing in the
diagram, Figure \ref{analytic}. (It suffices to consider pairs of states belonging to
the same, or perhaps, to adjacent intervals of $B_{20}/B_{44}$.) By analogy with Eq.
(\ref{Q1T1_bis}), these expressions are presented as
\begin{equation}
\frac{B_{40}}{C} = a \frac{B_{20}}{C} + b + b' \frac{B}{C}
\label{A_1}
\end{equation}
The numerical factors $a$, $b$, and $b'$ are listed in Table I. A line is referred to by
naming the two domains it separates. Remarkably, six characteristic slopes recur
in the second column of Table I,
\begin{equation}
0,-\textstyle\frac{9}{200},-\frac{3}{20},\frac{9}{80},\frac{3}{50},{\rm ~and~}\frac{1}{40}.
\label{slopes}
\end{equation}
These are obtained by means of Eq. (\ref{superpos}) from the interval boundaries in
Figure \ref{fig2}: so $B_{20}/B_{44} = 24/7$ leads to the slope $B_{40}/B_{20} = 1/40$ etc.
One exception that is not on the list (\ref{slopes}) but is encountered twice in the second
column of Table I is $-3/160$.
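The mapping from interval boundaries to the characteristic slopes (\ref{slopes}) is a one-line computation: under $B_{44}=(35/3)B_{40}$ a boundary $B_{20}/B_{44}=r$ corresponds to the slope $B_{40}/B_{20}=(3/35)/r$. A sketch:

```python
from fractions import Fraction as F

# Under the superposition model B44 = (35/3) B40, an interval boundary
# B20/B44 = r of Figure 2 turns into a borderline slope
# a = B40/B20 = (3/35)/r in the B20/C - B40/C plane.
boundaries = [F(-40, 21), F(-4, 7), F(16, 21), F(10, 7), F(24, 7)]
slopes = sorted(F(3, 35) / r for r in boundaries)
print(slopes)  # the values -3/20, -9/200, 1/40, 3/50, 9/80
# The sixth characteristic slope, 0, is the r -> infinity limit
# (the b1g/b2g crossing "at infinity").
```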
\begin{table}
\caption{Values of coefficients in Eq. (\ref{A_1}).}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{rrrr}
\hline \hline
label & $a~~~$ & $b~~~$ & $b'~~~~~~$ \\ \hline
$T_1Q_1$ & $-3/20$ & $-1/16$ & $-13/40~~~$ \\
$T_2Q_1$ & $1/40$ & $1/24$ & $11/120~~~$ \\
$T_1T_2$ & $-9/200$ & $0$ & $-3/40~~~$ \\
$~~~~S_1T_2$ & $~~~~-9/200$ & $~~~~~3/200$ & $~~~~-3/100~~~$ \\
$S_1Q_1$ & $-3/160$ & $1/40$ & $1/64~~~$ \\
$S_1T_3$ & $1/40$ & $1/40$ & $-1/60~~~$ \\
$S_1T_4$ & $3/50$ & $3/200$ & $-1/25~~~$ \\
$T_3Q_1$ & $-9/200$ & $1/40$ & $7/200~~~$ \\
$T_3Q_2$ & $0$ & $1/56$ & $1/40~~~$ \\
$T_4Q_2$ & $-9/200$ & $1/40$ & $13/200~~~$ \\
$T_4Q_3$ & $-3/160$ & $1/64$ & $13/320~~~$ \\
$T_5Q_3$ & $-9/200$ & $1/40$ & $21/200~~~$ \\
$S_2Q_3$ & $-3/20$ & $1/15$ & $3/8~~~$ \\
$S_2T_5$ & $9/80$ & $-3/80$ & $-3/10~~~$ \\
$S_2T_4$ & $3/50$ & $-3/200$ & $-4/25~~~$ \\
$Q_1Q_2$ & $9/80$ & $0$ & $0~~~$ \\
$Q_2Q_3$ & $1/40$ & $0$ & $0~~~$ \\
$T_3T_4$ & $9/80$ & $0$ & $-3/40~~~$ \\
$T_4T_5$ & $1/40$ & $0$ & $-1/15~~~$ \\
$T_6Q_3$ & $9/80$ & $-1/16$ & $-21/80~~~$ \\
$T_7Q_3$ & $0$ & $-1/56$ & $-3/140~~~$ \\
$T_7Q_4$ & $3/50$ & $-1/40$ & $-3/100~~~$ \\
$S_2T_6$ & $-9/200$ & $3/200$ & $3/25~~~$ \\
$T_6T_7$ & $-9/200$ & $0$ & $3/40~~~$ \\
$Q_3Q_4$ & $-3/20$ & $0$ & $0~~~$ \\
$T_8Q_4$ & $1/40$ & $-1/24$ & $-1/20~~~$ \\
$T_7T_8$ & $9/80$ & $0$ & $0~~~$ \\
$S_3Q_4$ & $-3/20$ & $-1/5$ & $-3/8~~~$ \\
$S_3T_8$ & $9/80$ & $3/80$ & $9/80~~~$ \\
$T_9Q_4$ & $9/80$ & $1/16$ & $11/40~~~$ \\
$S_3T_9$ & $1/40$ & $-1/40$ & $7/120~~~$ \\
$T_1Q_4$ & $3/50$ & $1/40$ & $13/100~~~$ \\
$T_1T_9$ & $1/40$ & $0$ & $1/30~~~$ \\ \hline \hline
\end{tabular}
\label{tab:fit&}
\end{center}
\end{table}
\section{Discussion}
In the preceding section we constructed a diagram of ground states of FePc in the
absence of spin-orbit coupling, Figure \ref{analytic}. A total of 16 distinct ground
states are present in the diagram: 3 singlets ($S_1-S_3$), 9 spin triplets ($T_1-T_9$),
and 4 spin quintets ($Q_1-Q_4$). The respective energies are given by Eqs. (S1-S3, T1-T9,
Q1-Q4). Explicit expressions were derived for the domain boundaries, Eq. (\ref{A_1}) and
Table I. The boundaries are segments of straight lines, which is a consequence of the
linearity of Eqs. (S1-S3, T1-T9, Q1-Q4). This gives the diagram
its peculiar cornered shape, with characteristic, repeated slopes. As is clear from the
structure of Eq. (\ref{A_1}), the slopes do not depend on the ratio $B/C$.
Taking a slightly different $B/C$ would shift the domain boundaries somewhat, but
would not affect their slopes. By contrast, the slopes will change if the ratio
$B_{44}/B_{40}$ deviates from the value prescribed by the superposition model, Eq.
(\ref{superpos}). Moreover, such a deviation of $B_{44}/B_{40}$ from 35/3 may lead
to a loss of parallelism of certain boundary lines. For example, from a simple
analysis of the one-electron CF energies (\ref{d_energies}) one finds
$$
\begin{array}{l}
({\rm slope~of~}T_1Q_1) = ({\rm slope~of~}Q_3Q_4) = (5-B_{44}/B_{40})^{-1} \\
({\rm slope~of~}S_3Q_4) = (2B_{44}/B_{40} -30)^{-1} \\
({\rm slope~of~}S_2Q_3) = -3/20
\end{array}
$$
Evidently, the above lines are parallel only if the condition (\ref{superpos}) is
fulfilled. In reality the superposition model is an approximation and small deviations
from Eq. (\ref{superpos}) are to be expected. In the above example, the borderlines
$T_1Q_1$ and $Q_3Q_4$ will remain parallel exactly, while the others only
approximately. A more extensive analysis of this matter is beyond the scope of the
present work.
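The slope formulas above follow from treating $t=B_{44}/B_{40}$ as a free parameter: equating two candidate energies gives $d_{20}B_{20}+(d_{40}+t\,d_{44})B_{40}={\rm const}$, hence the slope $-d_{20}/(d_{40}+t\,d_{44})$. A sketch checking the parallelism claim, with CF coefficient vectors read off from Eqs. (Q1--Q4, S2--S3, T1):

```python
from fractions import Fraction as F

# CF coefficient vectors (B20, B40, B44) of the candidate energies involved.
Q1 = (F(6), F(12), F(-12));  T1 = (F(18), F(-48), F(0))
Q3 = (F(-6), F(72), F(0));   Q4 = (F(6), F(12), F(12))
S2 = (F(-24), F(-48), F(0)); S3 = (F(12), F(192), F(0))

def slope(u, v, t):
    """Slope dB40/dB20 of the u = v borderline when B44 = t * B40."""
    d20, d40, d44 = (x - y for x, y in zip(u, v))
    return -d20 / (d40 + t * d44)

t = F(35, 3)                       # superposition-model ratio B44/B40
assert slope(T1, Q1, t) == slope(Q3, Q4, t) == F(1) / (5 - t)   # = -3/20
assert slope(S3, Q4, t) == F(1) / (2 * t - 30)                  # = -3/20
assert slope(S2, Q3, t) == F(-3, 20)                            # t-independent
# Away from t = 35/3 the parallelism is lost:
t2 = F(10)
assert slope(T1, Q1, t2) == F(1) / (5 - t2) != slope(S2, Q3, t2)
print("all four borderlines parallel only at B44/B40 = 35/3")
```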
On the whole, the diagram constructed analytically (Figure \ref{analytic}) is remarkably
similar to that calculated numerically (Figure \ref{numerical}). We take it as a sign of
validity of the strong-CF approximation used to compute the energies of the singlet and
triplet states, Eqs. (S1-S3, T1-T9). (N.B. The quintet energies (Q1-Q4) are essentially
exact, without relying on the weakness of the CF.) This demonstrates the applicability
of techniques based on single-determinant wave functions, even though it is important
to allow for correlations (nonzero $B$ and $C$).
Our next task is to locate the standpoint of FePc in the diagrams of Figures
\ref{numerical} and \ref{analytic}. In the subsequent discussion the domain boundaries
are assumed to be positioned as in the more accurate Figure \ref{numerical}, whereas
the ground states associated with the domains are as constructed analytically and
indicated in Figure \ref{analytic}. The search can be limited to an acute angle adjacent
to the abscissa axis, within
the first quadrant of Figure \ref{analytic}:
\begin{equation}
0 < B_{40} < 0.45 B_{20}
\label{condition}
\end{equation}
Indeed, the $d_{x^2-y^2}$ orbital of Fe
overlaps most strongly with the ligand orbitals and therefore has a much higher energy
than the other $3d$ orbitals, in particular, $d_{xy}$. By Eqs. (\ref{d_energies}),
$E(d_{x^2-y^2}) - E(d_{xy}) = 24B_{44} > 0$, whence by Eq. (\ref{superpos}),
$B_{40}>0$. To prove the right-hand part of the double inequality (\ref{condition}),
one should rewrite the CF Hamiltonian (\ref{HCF}), taken in conjunction with Eq.
(\ref{superpos}), as a classical anisotropy energy,
$$ E_a = B_{20} (3\cos^2 \theta -1) ~~~~~~~~~~~~~~~~~~~~~~~~~~ $$
$$ + B_{40} (35\cos^4 \theta -30\cos^2 \theta + 3
+ \textstyle\frac{35}{3} \sin^4 \theta \cos 4\phi ) $$
and demand that $\theta =\pi /2$, $\phi = \pi /4$ be a local minimum. This is to account
for the well established fact that the easy magnetization direction lies in the plane of
the FePc molecule.\cite{Barraclough,Barto10}
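The right-hand bound in (\ref{condition}) can be retraced: setting $u=\cos^2\theta$ and $\phi=\pi/4$ (so $\cos 4\phi=-1$, a minimum in $\phi$ precisely when $B_{40}>0$), the requirement $dE_a/du>0$ at $u=0$ yields $3B_{20}-(20/3)B_{40}>0$, i.e. $B_{40}<0.45\,B_{20}$. A sketch with exact fractions:

```python
from fractions import Fraction as F

# E_a at phi = pi/4 (cos 4 phi = -1), written in u = cos^2(theta):
#   E_a(u) = B20*(3u - 1) + B40*(35u^2 - 30u + 3 - (35/3)*(1 - u)^2)
# Coefficients of dE_a/du at u = 0 (i.e. theta = pi/2):
dE_B20 = F(3)                  # d/du of (3u - 1)
dE_B40 = F(-30) + F(70, 3)     # d/du of the B40 bracket at u = 0: -30 + 70/3
# An in-plane minimum requires dE_a/du > 0, i.e. 3*B20 - (20/3)*B40 > 0,
# so B40/B20 must stay below:
threshold = -dE_B20 / dE_B40
print(threshold)  # 9/20, i.e. the 0.45 bound of Eq. (condition)
```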
A further experimental fact to take into consideration is that the ground state is
a spin triplet ($S=1$) and that it is orbitally degenerate ($^3E_g$).\cite{Filoti,Barto10}
Within the sector defined by the condition (\ref{condition}) there are only two domains
where $^3E_g$ is the ground state --- a quadrangle $T_3$ and a triangle $T_5$. We carried
out an extensive numerical study of the magnetic susceptibility (with due allowance for
the spin-orbit coupling) and found that $\chi_{||}(T) > \chi_{\perp}(T)$ everywhere within
$T_3$, but $\chi_{||}(T) < \chi_{\perp}(T)$ inside $T_5$. (Here the subscript ``$||$'' refers
to the direction parallel to the 4-fold symmetry axis.) One has to conclude, therefore,
that the standpoint of FePc in Figure \ref{analytic} lies inside the triangle $T_5$.
The corresponding ground-state configuration is $a_{1g}^2 e_g^{\uparrow\downarrow\uparrow}
b_{2g}^{\uparrow}$, cf. Eq. (T5). It is distinct from the configuration $T_3$, or
$b_{2g}^2 e_g^{\uparrow\downarrow\uparrow} a_{1g}^{\uparrow}$, postulated by Dale et
al.\cite{Dale} and adopted by Filoti et al.\cite{Filoti} Using a simple model, the latter
authors demonstrated that $T_3$ necessarily has an easy-axis anisotropy, which agrees
with our analysis. The experiment,\cite{Barraclough,Barto10} however, insists on an
easy-plane anisotropy and so $T_3$ has to be definitively abandoned. After all, Dale's
choice of $T_3$ was a mere conjecture, without a sufficient experimental foundation.
It should also be noted that Miedema et al.\cite{Miedema,Stepanow} proceeded from the
correct ground-state configuration $T_5$, even though they did not explain their choice.
\begin{figure}
\centerline{\includegraphics[width=0.45\textwidth]{fig4.eps}}
\caption{\label{chi} Temperature dependence of reciprocal susceptibility. Closed circles are
experimental data,\cite{Barraclough} solid line is 1.22 times the calculated $\chi^{-1}$. }
\end{figure}
The difference between the two $^3E_g$ configurations is easy to understand.
In both cases there is one $e_g$ hole; the two real $e_g$ orbitals ($d_{xz}$
and $d_{yz}$) can be combined to give states with $\ell_z=\pm 1$. An extra singly
occupied orbital in $T_3$ is
$a_{1g}$ ($d_{z^2}$), with $\ell_z=0$. Therefore, the $z$-component of the total
orbital moment is $L_z=\pm 1$ and the spin-orbit coupling leads to an easy-axis
anisotropy in $T_3$. In $T_5$ it is the $d_{xy}$ orbital ($b_{2g}$) that is
singly occupied and the situation is quite different. Now three orbitals,
$d_{xy}$, $d_{xz}$, and $d_{yz}$, are accessible to the holes. If the $e_g$ and
$b_{2g}$ levels were perfectly degenerate, there would be no anisotropy at all.
The fact that the degeneracy is lifted results in a weak easy-plane anisotropy,
such as the one observed.
We attempted to refine the position of the system inside the triangle $T_5$
on the basis of the available susceptibility data.\cite{Barraclough} We find that
the most likely standpoint is near the left corner of the triangle, at $B_{20}/C=0.84$,
$B_{40}/C=0.0074$. Powder susceptibility was calculated as $\frac{1}{3}\chi_{||} +
\frac{2}{3}\chi_{\perp}$, with $B=917\,{\rm cm}^{-1}$ and $C=4040\,{\rm cm}^{-1}$
(as in Table 7.3 of Ref. \onlinecite{AB}). The spin-orbit coupling constant $\zeta$
was set to $400\,{\rm cm}^{-1}$. The susceptibility computed in this way proved higher than
the experimental one and had to be reduced by a factor of 0.8 to make both curves
match. (Accordingly, in Fig. \ref{chi} the experimental reciprocal
susceptibility\cite{Barraclough} is compared with the calculated $\chi^{-1}$ times 1.22.)
The reduction factor 0.8 can be attributed to covalency, neglected in our model.
Apart from the rescaling, the calculated $\chi^{-1}(T)$ does agree with the experiment.
In our calculation the six states of the sextet $^3E_g$ are split by the spin-orbit interaction. The ground
state is a singlet and so is the first excited state, situated $20\,{\rm cm}^{-1}$ above
the ground state. The second excited state, at $52\,{\rm cm}^{-1}$, is a doublet, followed
by two singlets, at $165\,{\rm cm}^{-1}$ and $225\,{\rm cm}^{-1}$. It will be recalled that
the model spectrum of Refs. \onlinecite{Dale} and \onlinecite{Barraclough} consisted of
a ground singlet and an excited doublet at $64\,{\rm cm}^{-1}$. The most essential
distinction of our spectrum is the presence of an excited singlet at $20\,{\rm cm}^{-1}$.
A clue to this point might be provided by a measurement of the specific heat. The isolated
molecule has no magnetic moment but the application of an external magnetic field $H_x$
in the easy plane gives rise to a spin moment $m_S^x=-2\mu_{\rm B} \langle \hat S_x \rangle$
that saturates at about $m_S^x \approx 2 \mu_{\rm B}$ for fields exceeding 40 T, in agreement
with $S=1$. We find a ratio of orbital and spin moments
$m_L^x/m_S^x=\langle \hat L_x \rangle / \left( 2 \langle \hat S_x \rangle \right) \approx 0.65$
for our refined parameter set in reasonable agreement with the ratio of 0.83 that was
measured by XMCD.\cite{Barto10,remark} Therefore, we confirm the existence of an extraordinarily
large, highly unquenched orbital moment in FePc.
\section{Conclusion}
Published experimental data suggest that FePc has an orbitally degenerate ground state
with $S=1$, the easy magnetization direction lying in the plane of the molecule.
There is a single domain in the CF parameter space where these conditions are met ---
the triangle $T_5$ in Figure \ref{analytic}. The corresponding ground-state
configuration is $a_{1g}^2 e_g^3 b_{2g}^1$. The standpoint of FePc is situated in
the left corner of the triangle, about $B_{20}/C=0.84$, $B_{40}/C=0.0074$, whereas $B_{44}$
is given by Eq. (\ref{superpos}). This point lies in a strong-CF region, where the notion
of single-determinant states has a certain validity.
\begin{acknowledgments}
The authors are thankful to Dr. Guillaume Radtke for helpful discussions. A significant
part of this work was carried out during a three-month stay of M.D.K. at the University
of Aix-Marseille and he wishes to express his gratitude to the staff at the Faculty of
Sciences for hospitality and to CNRS for financial support.
\end{acknowledgments}
\section{Introduction}
The Hasse-Arf theorem (see \cite[IV.3]{SeL})
controls the jumps of the ramification filtration for abelian groups.
The aim of this short note is to give an intuitive geometric argument for why
the Hasse-Arf theorem is true.
Unfortunately, our approach does not
provide a new proof of the Hasse-Arf theorem; it is based on the
recently proved Oort conjecture for cyclic groups, and the known proof of
the Oort conjecture relies heavily on the Hasse-Arf theorem.
However, we have enjoyed this argument and we believe that it reveals the nature
of the Hasse-Arf theorem. We would like to mention that the author has spent a
lot of time trying to prove the Hasse-Arf theorem with the methods
of the article \cite{KaranKontSemi} but without success.
After posting a first version on the arXiv, M. Matignon informed the author that
essentially the same observation was made in his article \cite[section 6 p.16]{MatignonGreen98}, written jointly with B. Green. He also explained that the proof of Sen's theorem \cite{Sen69} by Lubin
\cite{Lubin95}, implies the truth of the Hasse-Arf theorem and is really a geometric proof
of the Hasse-Arf theorem. We would like to thank him for his response.
We would like to also thank M. Romagny for his comments.
Let $\O=k[[t]]$ be a complete local ring over an algebraically closed field $k$ of
characteristic $p>0$.
This ring $\O$ is equipped with a valuation $v$ and a local uniformiser $t$.
Suppose that a finite $p$-group $G$ acts on $\O$. Then a ramification
filtration is defined by
\[
G_i:=\{ \sigma \in G : v(\sigma(t)-t) \geq i+1\}.
\]
The Hasse-Arf theorem reduces \cite[IV.3 exer. 3]{SeL} to the following statement
for the cyclic case:
\begin{theorem}
Let $G$ be a cyclic $p$-group of order $p^n$. The ramification filtration
is given by
\[
G_0=G_1=\cdots=G_{j_0}\gneqq G_{j_0+1}=\cdots=G_{j_1}\gneqq G_{j_1+1}
=\cdots= G_{j_{n-1}} \gneqq \{1\},
\]
i.e. the jumps of the ramification filtration appear at the integers $j_0,\ldots,j_{n-1}$.
Then, for suitable positive integers $i_0,\ldots,i_{n-1}$,
\[
j_k=i_0 + i_1 p +i_2 p^2 +\cdots + i_k p^k.
\]
\end{theorem}
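As a concrete illustration (the break data below are hypothetical, chosen only to show the arithmetic), for $p=2$ and $i_0=1$, $i_1=2$, $i_2=1$ the theorem gives jumps $j_0=1$, $j_1=5$, $j_2=9$:

```python
def lower_jumps(p, i):
    """Jumps j_k = i_0 + i_1*p + ... + i_k*p^k of the lower ramification
    filtration, in the form guaranteed by the theorem."""
    return [sum(i[m] * p ** m for m in range(k + 1)) for k in range(len(i))]

# Hypothetical break data for p = 2, n = 3 (illustration only):
print(lower_jumps(2, [1, 2, 1]))  # [1, 5, 9]
```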
Notice that, since $G$ is assumed to be a $p$-group,
$G_1=G$.
The Harbater-Katz-Gabber compactification theorem asserts that there is a Galois cover
$X\rightarrow \mathbb{P}^1$ ramified only at one point $P$ of $X$ with Galois group
$G=\mathrm{Gal}(X/\mathbb{P}^1)=G_1$ such that $G_1(P)=G_1$ and the action of
$G_1(P)$ on the completed local ring $\O_{X,P}$ coincides with the original
action of $G_1$ on $\O$.
The Oort conjecture (now a theorem, proved by A. Obus, S. Wewers \cite{ObusWewers} and F. Pop \cite{PopOort}) states:
\begin{theorem}
Let $X$ be a projective nonsingular curve, defined over an
algebraically closed field $k$ of characteristic $p>0$, acted on by a cyclic group $G$.
There is a proper smooth family of curves $\mathcal{X} \rightarrow {\rm Spec} A$,
where $A$ is a local ring with maximal ideal $m$ such that $A/m=k$,
the special fibre $\mathcal{X}_m=\mathcal{X} \times_{{\rm Spec} A} {\rm Spec} k$ is the
original curve $X$, the generic fibre $\mathcal{X}_0$ is a curve defined over
a field of characteristic $0$, and the action of $G$ lifts to $\mathcal{X}$.
\end{theorem}
We will apply the Oort conjecture to the Harbater-Katz-Gabber compactification $X$ coming from
the local action of $G={\mathbb{Z}}/p^n{\mathbb{Z}}$ on $\O$. We construct a relative notion of horizontal
ramification divisor. This divisor intersects the special fibre in a single point $P$ and
the generic fibre in a set of points $P_1,\ldots,P_s$.
The Hasse-Arf theorem has the following
geometric interpretation:
\begin{theorem}
The number $i_k$ equals the number of orbits of size $p^k$ of the action of the
group $G$ on the ramification
points in the generic fibre.
\end{theorem}
The method of proof reflects the philosophy of lifting geometrical objects
from positive characteristic to characteristic zero, and using the easier
characteristic zero case. This was one of the main ideas of modular
representation theory and is also one of the main ideas in defining cohomology theories
over Witt rings, see for example \cite{SerreMexico}.
\section{Horizontal Ramification Divisors}
Let $P$ be the unique ramification point on the special fibre.
Let $\sigma \in G_1(P)$, $\sigma \neq 1$, and let $\tilde{\sigma}$ be a lift
of $\sigma$ in $\mathcal{X}$. The scheme $\mathcal{X}$ is regular at $P$,
and the completion of $\mathcal{O}_{\mathcal{X},P}$ is isomorphic to the ring $R[[T]]$.
The Weierstrass preparation theorem \cite[prop. VII.6]{BourbakiComm} implies that:
\[
\tilde{\sigma}(T)-T=g_{\tilde{\sigma}}(T) u_{\tilde{\sigma}}(T),
\]
where $g_{\tilde{\sigma}}(T)$ is a distinguished Weierstrass polynomial
of degree $m+1$ and $u_{\tilde{\sigma}}(T)$ is a unit in $R[[T]]$.
The polynomial $g_{\tilde{\sigma}}(T)$ gives rise to a horizontal divisor
that corresponds to the fixed points of $\tilde{\sigma}$. This
horizontal divisor need not be irreducible.
The branch divisor corresponds to the union of the fixed points of
any $\sigma \in G_1(P)$.
The next lemma gives an alternative definition of a horizontal branch divisor for the
relative curves $\mathcal{X} \rightarrow \mathcal{X}^G$, that works even when
$G$ is not a cyclic group.
\begin{lemma} \label{lemmaBRANCH}
Let $\mathcal{X} \rightarrow {\rm Spec} A$ be an $A$-curve, admitting a
fibrewise action of the finite group $G$, where $A$ is a
Noetherian local ring.
Let $S={\rm Spec} A$, and $\Omega_{\mathcal{X}/S}$, $\Omega_{\mathcal{Y}/S}$ be
the sheaves of relative differentials of $\mathcal{X}$ over $S$ and
$\mathcal{Y}$ over $S$, respectively. Let $\pi:\mathcal{X} \rightarrow
\mathcal{Y}$ be the quotient map.
The sheaf
\[
\mathcal{L}(-D_{\mathcal{X}/\mathcal{Y}})= \Omega_{\mathcal{X}/S} ^{-1}
\otimes_S \pi^* \Omega_{\mathcal{Y}/S}
\]
is the ideal sheaf of the horizontal Cartier divisor
$D_{\mathcal{X}/\mathcal{Y}}$. The intersection of $D_{\mathcal{X}/\mathcal{Y}}$ with the special and generic fibre
of $\mathcal{X}$ gives the ordinary branch divisors for curves.
\end{lemma}
\begin{proof}
We will first prove that
the above defined divisor $D_{\mathcal{X}/\mathcal{Y}}$ is indeed
an effective Cartier divisor. According to \cite[Cor. 1.1.5.2]{KaMa}
it is enough to prove that
\begin{itemize}
\item $D_{\mathcal{X}/\mathcal{Y}}$ is a closed subscheme which is flat over $S$.
\item for all geometric points ${\rm Spec} k \rightarrow S$ of $S$, the
closed subscheme $D_{\mathcal{X}/\mathcal{Y}}\otimes_S k$ of $\mathcal{X} \otimes_S k$ is a
Cartier divisor in $\mathcal{X} \otimes _S k/k$.
\end{itemize}
In our case the special fibre is a nonsingular curve.
Since the base is a local ring and the special fibre is nonsingular,
the deformation $\mathcal{X} \rightarrow {\rm Spec} A$ is smooth.
(See the remark after the definition 3.35 p.142 in \cite{LiuBook}).
The smoothness of the curves $\mathcal{X}\rightarrow S$,
and $\mathcal{Y}\rightarrow S$, implies that the sheaves
$\Omega_{\mathcal{X}/S}$ and $\Omega_{\mathcal{Y}/S}$ are $S$-flat,
\cite[cor. 2.6 p.222]{LiuBook}.
On the other hand, the sheaf $\Omega_{\mathcal{Y}/{\rm Spec} A}$ is
$\O_{\mathcal{Y}}$-flat by \cite[Prop. 1.1.5.1]{KaMa}.
Therefore, $\pi^*(\Omega_{\mathcal{Y}/{\rm Spec} A})$ is $\O_{\mathcal{X}}$-flat
and ${\rm Spec} A$-flat \cite[Prop. 9.2]{Hartshorne:77}.
Finally, observe that the intersection with the special and generic
fibre is the ordinary branch divisor for curves according to
\cite[IV p.301]{Hartshorne:77}.
\end{proof}
For a curve $X$ and a branch point $P$ of $X$ we will
denote by $i_{G,P}$ the order function of the filtration of $G$ at $P$.
The Artin representation of the group $G$ is defined
by $\mathrm{ar}_P(\sigma)=-f_P i_{G,P}(\sigma)$ for $\sigma\neq 1$ and
$\mathrm{ar}_P(1)= f_P\sum_{\sigma\neq 1} i_{G,P}(\sigma)$ \cite[VI.2]{SeL}.
We are going to use the Artin representation at both the special
and generic fibre. In the special fibre we always have $f_P=1$ since the
field $k$ is algebraically closed. The field of quotients of $A$ need
not be algebraically closed, so a fixed point there might have $f_P > 1$.
The integer $i_{G,P}(\sigma)$
is equal to the multiplicity of $P\times P$ in the intersection of
$\Delta .\Gamma_\sigma$ in the relative $A$-surface
$\mathcal{X} \times_{{\rm Spec} A} \mathcal{X}$,
where $\Delta$ is the
diagonal and $\Gamma_\sigma$ is the graph of $\sigma$ \cite[p. 105]{SeL}.
Since the diagonals $\Delta_0,\Delta_\eta$ and the graphs of $\sigma$ in the special and generic fibres respectively of
$\mathcal{X}\times_{{\rm Spec} A} \mathcal{X}$ are algebraically equivalent divisors we have:
\begin{proposition}\label{bertin-gen}
Assume that $A$ is an integral domain, and let $\mathcal{X}\rightarrow {\rm Spec} A$
be a deformation of $X$.
Let $\bar{P}_i$, $i=1,\cdots,s$ be the horizontal branch divisors
that intersect the special fibre at the point $P$, and let $P_{i}$ be
the corresponding points on the generic fibre. For the Artin
representations attached to the points $P,P_{i}$ we have:
\begin{equation} \label{equali}
\mathrm{ar}_P(\sigma)=\sum_{i=1}^s \mathrm{ar}_{P_{i}}(\sigma).
\end{equation}
\end{proposition}
This generalizes a result of J. Bertin \cite{BertinCRAS}. Moreover,
if we set $\sigma=1$ in the above formula we obtain a relation
between the valuations of the differents
in the special and the generic fibre, since the value
of the Artin representation at $1$ is the valuation of
the different \cite[prop. 4.IV, prop. 4.VI]{SeL}. This observation
is equivalent to claim 3.2 in \cite{MatignonGreen98} and is one
direction of a local
criterion for good reduction theorem proved in \cite[3.4]{MatignonGreen98},
\cite[sec. 5]{KatoDuke87}.
\subsection{The Artin representation on the generic fibre}
We can assume that after a base change of the family $\mathcal{X} \rightarrow {\rm Spec}(A)$
the points $P_i$ at the generic fibre have degree $1$.
Observe also that
at the generic fibre the Artin representation can be computed as follows:
\[
\mathrm{ar}_{Q}(\sigma)=\left\{
\begin{array}{l}
1 \mbox{ if } \sigma(Q)=Q,\\
0 \mbox{ if } \sigma(Q)\neq Q.
\end{array}
\right.
\]
The set of points $S:=\{P_1,\ldots,P_s\}$ in which the ramification
divisor meets the generic fibre is acted on by the group $G$.
Let $S_k$ be the subset of $S$ of points with stabilizer ${\mathbb{Z}}/p^{n-k}{\mathbb{Z}}$, i.e.
\[
P\in S_k \mbox{ if and only if } G(P)={\mathbb{Z}}/p^{n-k}{\mathbb{Z}}.
\]
Let $s_k$ be the order of $S_k$.
Observe that for a point $Q$ in the generic fibre, $\sigma(Q)$ and $Q$ have
the same stabilizers (the stabilizers are conjugate and $G$ is abelian), so the sets $S_k$ are
acted on by $G$. Therefore the orders of $S_k$ are $s_k=p^{k}i_k$, where $i_k$
is the number of orbits
of the action of $G$ on $S_k$.
Observe that
\[
G_{j_k}=\left\{
\begin{array}{ll}
{\mathbb{Z}}/p^{n-k}{\mathbb{Z}} & \mbox{ for } 0 \leq k \leq n-1 \\
\{1\} & \mbox{ for } k \geq n.
\end{array}
\right.
\]
An element in $G_{j_k}$ fixes only elements with stabilizers that contain $G_{j_k}$.
So $G_{j_0}$ fixes only $S_0$, $G_{j_1}$ fixes both $S_0$ and $S_1$ and
$G_{j_k}$ fixes all elements in $S_0,S_1,\ldots,S_k$.
Equation (\ref{equali}) implies that an element $\sigma$ in $G_{j_k}-G_{j_{k+1}}$
satisfies $\mathrm{ar}_P(\sigma)=j_k$, and counting fixed points on the generic fibre we arrive at
\[
j_k=i_0+ p i_1 + \cdots + p^k i_k.
\]
This completes the proof of the Hasse-Arf theorem. The argument is illustrated in figure
\ref{picture}.
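The counting above can be checked numerically. In the sketch below the orbit numbers $i_k$ are made-up values (they are not computed from a curve); the point is only that $s_k=p^k i_k$ and $j_k=i_0+p i_1+\cdots+p^k i_k$ are forced by the orbit sizes:

```python
# Hypothetical orbit counts i_k for G = Z/p^n acting on the branch points
# of the generic fibre (values made up for the example).
p, n = 3, 3
i = [2, 1, 4]                                    # i_k, k = 0, ..., n-1
s = [p**k * i[k] for k in range(n)]              # s_k = |S_k| = p^k i_k
j = [sum(p**m * i[m] for m in range(k + 1))      # j_k = i_0 + p i_1 + ... + p^k i_k
     for k in range(n)]
assert s == [2, 3, 36]
assert j == [2, 5, 41]
```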
\begin{figure}[h]
\caption{The horizontal Ramification divisor \label{picture}}
\begin{center}
\includegraphics[scale=0.6]{./CyclicActions.eps}
\end{center}
\end{figure}
|
1302.3477
|
\section{Introduction}
One of the central and most recognizable results of statistical physics is the value of the constant-volume heat capacity, $C_v$, of a harmonic classical solid:
\begin{equation}
C_v=3N
\end{equation}
\noindent where $N$ is the number of atoms and $k_{\rm B}=1$. Known as the Dulong-Petit law, Eq. (1) is the result of a solid having $3N$ phonons \cite{landau}.
Experimentally, $C_v$ is almost never $3N$ even in the classical limit $\frac{\hbar\omega_{\rm D}}{T}\ll 1$, where $\omega_{\rm D}$ is the Debye frequency, an effect attributed to anharmonicity of interatomic interactions \cite{cowley,marad,anderson,ida,grimvall0,grimbook,grimvall,oga1,dorog,wallace,oga2}. In addition to heat capacity, anharmonicity governs many other properties of condensed matter systems, including thermal expansion, thermal and electric conductivity, elasticity, phase transitions, defect mobility, melting and so on.
There has been a large amount of research into anharmonic effects \cite{cowley,anderson,grimbook} that has resulted in a qualitative understanding of the effect of anharmonicity on system properties. The common approach is to expand the potential energy $U$ in a Taylor series over atomic displacements $u$:
\begin{align}
\label{expa}
&U=\frac{1}{2}\sum\limits_{ll^\prime}\phi(r_0^{ll^\prime})+\frac{1}{2}\sum\limits_{ll^\prime x}\phi_x({ll^\prime})(u_x^l-u_x^{l^\prime})+\\ \nonumber
&\frac{1}{4}\sum\limits_{ll^\prime xy}\phi_{xy}(ll^\prime)(u_x^l-u_x^{l^\prime})(u_y^l-u_y^{l^\prime})+\\ \nonumber
&\frac{1}{12}\sum\limits_{ll^\prime xyz}\phi_{xyz}(ll^\prime)(u_x^l-u_x^{l^\prime})(u_y^l-u_y^{l^\prime})(u_z^l-u_z^{l^\prime})+\\ \nonumber
&\frac{1}{48}\sum\limits_{ll^\prime xyz\omega}\phi_{xyz\omega}(ll^\prime)(u_x^l-u_x^{l^\prime})(u_y^l-u_y^{l^\prime})(u_z^l-u_z^{l^\prime})(u_\omega^l-u_\omega^{l^\prime}) \nonumber
\end{align}
\noindent where the anharmonic coefficients $\phi_{x...}$ are given by the derivatives at equilibrium separations in a usual way \cite{marad}.
As noted by Cowley \cite{cowley}, $\phi_{x...}$ are very complicated to evaluate even if the potential functions are known. Complications related to evaluating $\phi_{x...}$ necessitated approximations, which, as Cowley further notes \cite{cowley}, are quite inadequate for real systems and are useful in order-of-magnitude calculations only. However, assuming that interactions include only pair and short-range (nearest-neighbor) interactions $\phi$ and considering, for example, a face-centered cubic lattice, low-order perturbation theory gives $C_v$ as a function of $\phi$ and $T$ as \cite{marad}:
\begin{align}
C_v=&3N\Big(1-T\frac{1}{8}\frac{\phi^{\rm IV}(r_0)}{(\phi^{''}(r_0))^2}+T\frac{172.3}{4608}\frac{(\phi^{'''}(r_0))^2}{(\phi^{''}(r_0))^3}-\nonumber\\
&\frac{1}{3}\frac{\hbar^2}{M}\frac{1}{T^2}\phi^{''}(r_0)+O(T^{-3})\Big)
\label{mar}
\end{align}
This relationship is one of the few that provide a closed form for evaluation of $C_v$, assuming that $\phi$ are known and, importantly, represent a faithful representation of interatomic forces.
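To make the ingredients of this expression concrete, the sketch below evaluates $\phi^{''}$, $\phi^{'''}$ and $\phi^{\rm IV}$ by finite differences for a Lennard-Jones pair potential in reduced units ($\epsilon=\sigma=1$); this potential is chosen only for illustration and is not one of the force fields used in our simulations:

```python
# Finite-difference evaluation of the pair-potential derivatives entering
# the perturbative C_v above, for an illustrative Lennard-Jones potential
# in reduced units (eps = sigma = M = 1).
phi = lambda r: 4 * (r**-12 - r**-6)
r0 = 2**(1 / 6)               # equilibrium separation: phi'(r0) = 0
h = 1e-2

phi2 = (phi(r0 - h) - 2 * phi(r0) + phi(r0 + h)) / h**2
phi3 = (phi(r0 + 2*h) - 2*phi(r0 + h) + 2*phi(r0 - h) - phi(r0 - 2*h)) / (2 * h**3)
phi4 = (phi(r0 - 2*h) - 4*phi(r0 - h) + 6*phi(r0) - 4*phi(r0 + h) + phi(r0 + 2*h)) / h**4

assert 56 < phi2 < 58         # exact value 72/2**(1/3) ~ 57.15
assert phi3 < 0 and phi4 > 0  # softening outwards, stiffening at 4th order

# leading anharmonic slope of C_v/3N in reduced units, from the formula above
slope = -phi4 / (8 * phi2**2) + (172.3 / 4608) * phi3**2 / phi2**3
```

The finite-difference stencils are standard central-difference approximations; with $h=10^{-2}$ they reproduce the analytic derivatives to well under one percent, which is sufficient for order-of-magnitude estimates of the anharmonic coefficients.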
Unfortunately, the quantitative evaluation of anharmonicity effects has remained a challenge, with the frequent result that the accuracy of leading-order anharmonic perturbation theory is unknown and the magnitude of anharmonic terms is challenging to justify \cite{cowley,marad,grimvall0,wallace,fultz}. Experimental data such as phonon lifetimes and frequency shifts can provide quantitative estimates for anharmonicity effects and anharmonic expansion coefficients in particular, although this involves complications, and limits the predictive power of the theory \cite{cowley}. As noted starting from the early studies \cite{marad,grimvall0}, the main problem with the approach based on expansions such as Eq. (2) and subsequent understanding of anharmonic effects is that good-quality models for interatomic forces are not generally available.
The problem of the anharmonic theory relying on the knowledge of interatomic interaction models has been noted earlier \cite{cowley,marad,wallace}. It has been stated that ``undoubtedly the unsatisfactory nature of these models is the limiting factor in our understanding of many anharmonic properties'', and that if anharmonic calculations are to have any quantitative significance, realistic models are necessary \cite{cowley}. Consequently, theoretical work on interatomic potentials was stated as an essential future effort at the time \cite{cowley}. We note in passing that despite the progress in materials modeling since that time, the problem remains. Indeed, apart from a relatively small number of materials, a negligible fraction of all known ones, it has proved impossible to develop a general recipe for successfully mapping the interatomic interactions onto the sets of empirical functions to be used in expansions such as (2). The problem is particularly acute with modern materials, which often have complicated interactions in the form of hydrogen-bonded, polymeric and many-body interactions, magnetic correlations, non-trivial band gap changes with temperature, anisotropy, layered structures, a large number of distinct atoms in organic and biological systems, and other factors that severely limit the development of high-quality interaction models. Importantly, because the anharmonic effects are believed to be small, a small departure of a potential model from a high-quality one renders the partitioning into harmonic and anharmonic parts (2) and the subsequent interpretation of anharmonicity meaningless.
Another approach to treat anharmonicity is to invoke the Gr\"{u}neisen approximation, where the softening of phonon frequencies $\omega_i$ is quantified by parameters $\gamma_i=-\frac{V}{\omega_i}\left(\frac{\partial\omega_i}{\partial V}\right)_T$, and to discuss the macroscopic equations of state \cite{anderson}. However, $C_v$ was not previously calculated in this approach in a form free of adjustable parameters and suitable for direct numerical evaluation.
In view of persisting difficulties of evaluating anharmonic effects, it is important to have an alternative general method of estimation of anharmonic $C_v$. It is also important to have a method applicable not only to crystalline systems, but also to amorphous solids as well as liquids, systems for which the traditional perturbation approaches are not suitable, as discussed below in more detail.
In the course of studying the problem of glass transition, we have proposed \cite{prb} that the effects of anharmonicity on $C_v$ can be evaluated as
\begin{equation}
C_v=3N(1+\alpha T)
\label{10}
\end{equation}
\noindent where $\alpha$ is the coefficient of thermal expansion.
There is no contradiction in the relationship (\ref{10}) between the constant-volume $C_v$ and thermal expansion, as might be perceived. As discussed below in detail, the relationship is due to the softening of bulk modulus with temperature at constant volume due to intrinsic anharmonicity, an effect that can be related to $\alpha$ in Gr\"{u}neisen approximation.
In Eq. (\ref{10}), all potentially complicated effects of anharmonicity discussed above are evaluated by one parameter, $\alpha$. Importantly, $\alpha$ is not an adjustable parameter, but is fixed by system properties. Another important feature of Eq. (\ref{10}) is that $\alpha$ can be independently measured or calculated in a straightforward way no matter how complicated interactions in a system are. Appealingly simple, Eq. (\ref{10}) provides an important and straightforward way of estimating the effect of anharmonicity on $C_v$. Perhaps not unexpectedly, the simplicity is achieved by making approximations, and this paper is partly devoted to assessing these approximations, a point to which we return below.
Importantly, Eq. (\ref{10}) can be used to evaluate anharmonicity in two important types of condensed matter systems, glasses and liquids, for which calculations based on anharmonic expansions such as (2) do not work. Indeed, the evaluation of the anharmonic terms in Eq. (2) and coefficients $\phi_{x...}$ involves sums over wave vectors $k$ in a crystal \cite{cowley,marad}. On the other hand, $k$ are not defined in amorphous glasses, at least not at large $k$. In liquids, an expansion such as (2) cannot be made even in principle because atoms do not oscillate around fixed positions as in solids, which is the starting point of theories based on Eq. (2) and similar ones.
In this paper, we extend our new approach, and address the validity of Eq. (\ref{10}) across a wide range of crystals, glasses and viscous liquids. We perform molecular dynamics (MD) simulations, and find a good agreement between simulation results and Eq. (\ref{10}) in several crystalline and amorphous solids as well as viscous liquids in a wide temperature range.
We note that using MD simulation to study Eq. (\ref{10}) has two important advantages over experiments. First, experimental $C_v$ is calculated from the measured $C_p$ as $C_v=C_p-VT\alpha^2 B$, where $B$ is the bulk modulus. There are uncertainties in experimentally determined $\alpha$ and $B$, particularly at high temperature, which implies uncertainty in $C_v$ \cite{anderson}. In the MD simulation, this problem does not arise because simulations can be performed at constant volume. Second, the classical limit $\frac{\hbar\omega_{\rm D}}{T}\ll 1$ giving $C_v=3N$ is not achieved in many experimental systems due to high $\omega_{\rm D}$ \cite{anderson}. Consequently, it is often not clear to what extent the deviation of experimental $C_v$ from $3N$ is due to anharmonicity or the quantum effect of phonon excitation. This issue does not arise in our MD simulations, which are classical.
We finally note that when evaluations of anharmonic effects are possible for certain systems, traditional perturbation approaches achieve accuracy at the level of order-of-magnitude agreement with experiments or simulations (see, e.g., Refs. \cite{marad,wallace,grimvall0}). We aim for at least the same level of accuracy in our new general method of evaluating anharmonic effects. The accuracy is determined by certain approximations that are used to derive a simple form of Eq. (\ref{10}). We find that Eq. (\ref{10}) gives a correct order-of-magnitude evaluation of anharmonic effects, a result considered the best that can be achieved in the traditional perturbation expansion approximations.
\section{Theory}
We start with the derivation of Eq. (\ref{10}). The free energy of a harmonic solid in the high-temperature approximation is $F=3NT\ln\frac{\hbar\bar{\omega}}{T}$, where $\bar{\omega}^{3N}=\omega_1\omega_2...\omega_{3N}$ is the geometrically averaged phonon frequency \cite{landau}. In the harmonic case, $\bar{\omega}$ is constant, giving the entropy $S=-\left(\frac{\partial F}{\partial T}\right)_v=3N\left(1+\ln\frac{T}{\hbar\bar{\omega}}\right)$ and $C_v=T\left(\frac{\partial S}{\partial T}\right)_v=3N$. Anharmonicity results in the decrease of $\bar{\omega}$ with temperature. Then, $S=3N\left(1+\ln\frac{T}{\hbar\bar{\omega}}-\frac{T}{\bar{\omega}}\frac{{\rm d}\bar{\omega}}{{\rm d}T}\right)$, and
\begin{equation}
C_v=3N\left(1-\frac{2T}{\bar{\omega}}\frac{{\rm d}\bar{\omega}}{{\rm d}T}+ \frac{T^2}{\bar{\omega}^2}\left(\frac{{\rm d}\bar{\omega}}{{\rm
d}T}\right)^2-\frac{T^2}{\bar{\omega}}\frac{{\rm d^2}\bar{\omega}}{{\rm d}T^2}\right)
\label{cv}
\end{equation}
\noindent where the derivatives are taken at constant volume.
In the high-temperature limit where $F=3NT\ln\frac{\hbar\bar{\omega}}{T}$, Eq. (\ref{cv}) is exact, and is the starting point of our theory. Evaluation of $C_v$ requires the knowledge of $\frac{{\rm d}\bar{\omega}}{{\rm d}T}$, which we calculate below.
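Eq. (\ref{cv}) can be checked against $C_v=T\left(\frac{\partial S}{\partial T}\right)_v$ directly. In the sketch below the trial frequency $\bar{\omega}(T)=e^{-aT}$ is an arbitrary assumption made only for the test (units with $\hbar=1$, entropy per $3N$):

```python
import math

# Consistency check of the closed-form C_v above: for a trial omega_bar(T)
# (here exp(-a*T), assumed only for this check), T dS/dT computed
# numerically must match the closed form.
a, T, h = 1e-3, 200.0, 1e-3
w   = lambda t: math.exp(-a * t)        # trial averaged phonon frequency
wp  = lambda t: -a * w(t)               # d(omega_bar)/dT
wpp = lambda t: a * a * w(t)            # d2(omega_bar)/dT2
S   = lambda t: 1 + math.log(t / w(t)) - t * wp(t) / w(t)   # entropy per 3N
cv_from_S  = T * (S(T + h) - S(T - h)) / (2 * h)            # C_v = T dS/dT
cv_formula = 1 - 2*T*wp(T)/w(T) + (T*wp(T)/w(T))**2 - T*T*wpp(T)/w(T)
assert abs(cv_from_S - cv_formula) < 1e-6
```

For this trial frequency both routes give $C_v/3N=1+2aT$, so the agreement is exact up to finite-difference error.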
The phonon pressure, $P_{\mathrm{ph}}$, is $P_{\mathrm{ph}}=-\left(\frac{\partial F}{\partial V}\right)_T=\frac{3NT\gamma}{V}$, where $\gamma$ is the average Gr\"{u}neisen parameter $\gamma=\frac{1}{3N}\sum\limits_{i=1}^{3N}\gamma_i$ and $\gamma_i=-\frac{V}{\omega_i}\left(\frac{\partial\omega_i}{\partial V}\right)_T$ \cite{anderson}. This gives the bulk modulus $B_{\mathrm{ph}}=-\frac{3NT\gamma(q-1)}{V}$ and $\left(\frac{\partial B_{\mathrm{ph}}}{\partial T}\right)_v=-\frac{3N\gamma(q-1)}{V}$, where $q=\frac{\partial\ln\gamma}{\partial\ln V}$. Experimentally, $q$ is known to be fairly constant across a range of systems (e.g. $q$=2.1 for Pb, 3.2 for Ge \cite{grimbook}, 1.4 for MgO \cite{anderson}, 1.5--2 for alkali halides \cite{roberts}, 1.7 for MgSiO$_3$ perovskite \cite{stix} etc.). For simplicity, we set $\left(\frac{\partial B_{\mathrm{ph}}}{\partial T}\right)_v=-\frac{3N\gamma}{V}$ as this does not affect our order-of-magnitude evaluations of $c_v$, a point to which we return below. Using $\gamma=\frac{V\alpha B}{C_v}$ and $B=B_0+B_{\mathrm{ph}}$, where $B$ and $B_0$ are the total and static bulk moduli, respectively, we find
\begin{equation}
\left(\frac{\partial B_{\mathrm{ph}}}{\partial T}\right)_v=-\alpha(B_0+B_{\mathrm{ph}})
\label{bulk}
\end{equation}
\noindent where we set $C_v=3N$ in this approximation.
For small $\alpha T$, which is often the case in the experimental temperature range, Eq. (\ref{bulk}) implies $B\propto -T$, consistent with the experiments \cite{anderson}. We note that experimentally, $B$ linearly decreases with temperature at both constant pressure and constant volume (constant-volume decrease can be small in some systems) \cite{anderson,gold,and1,yamamo}. The decrease of $B$ with $T$ at constant volume is due to the intrinsic anharmonicity related to the softening of interatomic potential at large vibrational amplitudes; the decrease of $B$ at constant pressure has an additional contribution from thermal expansion.
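Both steps above are easy to verify numerically. The sketch below checks $B_{\mathrm{ph}}=-V\left(\frac{\partial P_{\mathrm{ph}}}{\partial V}\right)_T=-\frac{3NT\gamma(q-1)}{V}$ for $\gamma(V)=\gamma_0(V/V_0)^q$, and integrates Eq. (\ref{bulk}), written for the total modulus $B=B_0+B_{\mathrm{ph}}$ as ${\rm d}B/{\rm d}T=-\alpha B$, to recover $B(T)=B(0)e^{-\alpha T}$; all parameter values are illustrative:

```python
import math

# (i) Phonon bulk modulus from P_ph = 3NT*gamma(V)/V with
#     gamma(V) = gamma0*(V/V0)**q (illustrative parameters).
N, T, V0, gamma0, q = 1.0, 300.0, 1.0, 1.5, 2.0
gamma = lambda V: gamma0 * (V / V0)**q
P_ph  = lambda V: 3 * N * T * gamma(V) / V
V, h = 0.9, 1e-6
B_num   = -V * (P_ph(V + h) - P_ph(V - h)) / (2 * h)   # B = -V dP/dV
B_exact = -3 * N * T * gamma(V) * (q - 1) / V
assert abs(B_num - B_exact) < 1e-3

# (ii) Euler integration of dB/dT = -alpha*B from T = 0 to 500 K
#      reproduces the exponential (near-linear) softening.
alpha, B = 2e-5, 50.0          # alpha in 1/K, B in arbitrary units
dT = 0.01
for _ in range(int(500 / dT)):
    B += -alpha * B * dT
assert abs(B - 50.0 * math.exp(-alpha * 500)) < 1e-3
```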
The next step is to assume that $\bar{\omega}^2\propto B$, a relationship that holds true if $\omega_i^2\propto B$. For acoustic modes, $\omega_i^2\propto B$ because $\omega_i^2=k^2c^2\propto B+\frac{4}{3}G$ and the shear modulus $G$ scales with $B$ via the Poisson ratio that is nearly constant in all systems. Therefore, $\bar{\omega}^2\propto B$ is applicable to any system as long as the phonon spectrum is treated in Debye approximation, as is often the case. In a general case of a spectrum that includes optic modes, the relationship $\bar{\omega}^2\propto B$ can be addressed by studying how $\omega_i$ and $B$ change in response to external parameters such as temperature and pressure. It has been found that $\omega_i^2\propto B$ is the case for optic modes in a wide temperature range, both longitudinal and transverse \cite{aguado}. The increase of $\omega_i$ including acoustic and optic modes is also seen in a wide pressure range, accompanied by the simultaneous increase of $B$ \cite{klug}.
Finally, combining $\bar{\omega}^2\propto B_0+B_{\mathrm{ph}}$ and Eq. (\ref{bulk}), we find $\frac{1}{\bar{\omega}}\left(\frac{{\rm d}\bar{\omega}}{{\rm d}T}\right)_v=-\frac{\alpha}{2}$. Putting the last relationship in Eq. (\ref{cv}) gives Eq. (\ref{10}). We note that the last two terms in Eq. (\ref{cv}) cancel out if
$\left(\frac{{\rm d}\bar{\omega}}{{\rm d}T}\right)_v\propto\bar{\omega}$, as is the case here.
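Explicitly, with $\frac{1}{\bar{\omega}}\left(\frac{{\rm d}\bar{\omega}}{{\rm d}T}\right)_v=-\frac{\alpha}{2}$ constant in $T$ one has $\frac{1}{\bar{\omega}}\frac{{\rm d}^2\bar{\omega}}{{\rm d}T^2}=\frac{\alpha^2}{4}$, and the cancellation in Eq. (\ref{cv}) can be seen term by term:

```latex
\frac{C_v}{3N}
= 1 - 2T\left(-\frac{\alpha}{2}\right)
    + T^2\left(-\frac{\alpha}{2}\right)^2
    - T^2\cdot\frac{\alpha^2}{4}
= 1 + \alpha T .
```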
As follows from the previous discussion, the evaluation of $c_v$ can be made more precise if values of $q$ are retained in the calculation. In this case, $\left(\frac{\partial B_{\mathrm{ph}}}{\partial T}\right)_v=-\delta(B_0+B_{\mathrm{ph}})$, where $\delta=\alpha(q-1)$. Combining it with $\bar{\omega}^2\propto B_0+B_{\mathrm{ph}}$ gives $\frac{1}{\bar{\omega}}\left(\frac{{\rm d}\bar{\omega}}{{\rm d}T}\right)_v=-\frac{\delta}{2}$. Using it in Eq. (\ref{cv}) gives
\begin{equation}
C_v=3N(1+\delta T)
\label{11}
\end{equation}
Here, similar to Eq. (\ref{10}), all anharmonic effects are represented by one parameter, $\delta$. This parameter quantifies the decrease of $B$ with temperature at constant volume. Concerned with demonstrating an order-of-magnitude evaluation of $c_v$ using our new approach, we will not pursue Eq. (\ref{11}) further, and concentrate on Eq. (\ref{10}).
\section{Molecular dynamics simulations}
We now discuss our MD simulations. We have aimed for diversity of structures and interactions, and consequently chosen several systems with different symmetry, structure and interatomic potentials: crystalline Ge, NaCl, Al$_2$O$_3$ (corundum), TiO$_2$ (rutile), ZrSiO$_4$, SiO$_2$ glass and a model liquid system. For Ge, we used the many-body environment-dependent (``bond-order'') Tersoff potential \cite{tersoff}. Here, the interaction strength between any two atoms depends on their environment and coordination. This potential therefore represents an example of a crystalline system which cannot be treated in the expansion approach (2). Empirical potentials for Al$_2$O$_3$ \cite{cor}, TiO$_2$ \cite{tio2}, NaCl \cite{martin}, ZrSiO$_4$ \cite{zir1,zir2,zir3} and SiO$_2$ glass \cite{tsu} included long-range Coulomb and short-range Buckingham or Morse interactions. Ref. \cite{sio2} discusses the details of the generation of the SiO$_2$ glass structure. For the liquid, we employed Lennard-Jones (LJ) potentials designed to simulate a binary liquid in the supercooled viscous state \cite{lj}. The binary liquid consists of two distinct atomic types with different interaction parameters and effective sizes to avoid crystallization at low temperature.
We note here that the empirical potentials we employed may or may not closely reproduce the experimental $\alpha$ or other properties such as $c_v$ or $B$. However, this is not important for our study as we aim to show that a given force field, even though approximate, still results in the relationship between $c_v$ and $\alpha$ given by Eq. (\ref{10}). In this sense, it is only important that a force field gives a physically sensible set of $\omega_i$ (e.g., real $\omega_i$) and other physical characteristics such as elasticity and thermal expansion that show commonly observed temperature dependence, because our derivation of Eq. (\ref{10}) is based on these properties and relationships between them.
We have used the DL\_POLY programme \cite{dlpoly} for our MD simulations. For solids, the number of atoms was in the 12,000-27,000 range depending on the system. For the LJ liquid, we used 64,000 atoms. We have verified that increasing the number of atoms does not change the results. The energy of the system, $E$, was calculated in constant-energy, constant-volume ensemble simulations by equilibrating the system at a given temperature. The system volume and $\alpha$ were calculated in constant-pressure ensemble simulations. We have performed simulations in a wide temperature range (see Figure 1) with a temperature step of 1 K. Each temperature point was simulated on a separate processor using our high-throughput computing cluster. The constant-volume specific heat was calculated as $c_v=\frac{1}{N}\frac{{\rm d}E}{{\rm d}T}$. To reduce the fluctuations of the derivative, we have fitted the energy using high-order polynomials and cubic splines, and verified that $c_v$ is not sensitive to the polynomial order used and fitting parameters.
In Figures 1--2 we show the calculated $c_v$ and relative volume $\frac{V}{V_0}$, where $V_0$ is the system volume at the lowest simulated temperature, for 6 solid systems and for the LJ liquid. We observe that $c_v$ for the different systems increases above the Dulong-Petit value of 3 in a wide temperature range. We found an exception to this behavior in crystalline Ar, where the increase of $c_v$ is preceded by its decrease at low temperature. In soft crystals such as Ar with large anharmonicity ($\gamma=$3.5), we do not expect the starting assumptions of the Gr\"{u}neisen approximation that we employed here to hold. Figures 1--2 enable us to see how well the slope of $c_v$ predicted by Eq. (\ref{10}) agrees with the actual value of $\alpha$. Consequently, we calculated $\alpha_c$ from Figure 1a as $c_v=3(1+\alpha_c T)$ and $\alpha=\frac{1}{V_0}\frac{\Delta V}{\Delta T}$ from Figures 1b and 2b. For the LJ liquid, $\alpha_c$ was calculated from the linear increase of $c_v$ at low temperature in Fig. 2a, for the reasons discussed below in detail. For some systems, $c_v$ and $\frac{V}{V_0}$ are not linear with temperature in the whole temperature range. In this case, we have calculated $\alpha_c$ and $\alpha$ at each temperature, and have taken the average.
\begin{figure}
\begin{center}
{\scalebox{0.4}{\includegraphics{fig1.eps}}}
\end{center}
\caption{$c_v$ (a) and $\frac{V}{V_0}$ (b) for simulated crystalline and amorphous systems.}
\label{1}
\end{figure}
\begin{figure}
\begin{center}
{\scalebox{0.4}{\includegraphics{fig2.eps}}}
\end{center}
\caption{$c_v$ (a) and $\frac{V}{V_0}$ (b) for LJ liquid. (c) shows coordinates of three atoms with large atomic displacements.}
\label{2}
\end{figure}
The calculated values of $\alpha_c$ and $\alpha$ are: crystalline Ge ($\alpha_c=3.6\cdot 10^{-5}$ K$^{-1}$, $\alpha=2.6\cdot 10^{-5}$ K$^{-1}$), TiO$_2$ ($\alpha_c=1.1\cdot 10^{-5}$ K$^{-1}$, $\alpha=2.8\cdot 10^{-5}$ K$^{-1}$), NaCl ($\alpha_c=7\cdot 10^{-5}$ K$^{-1}$, $\alpha=14\cdot 10^{-5}$ K$^{-1}$), ZrSiO$_4$ ($\alpha_c=1.3\cdot 10^{-5}$ K$^{-1}$, $\alpha=2\cdot 10^{-5}$ K$^{-1}$), Al$_2$O$_3$ ($\alpha_c=1.3\cdot 10^{-5}$ K$^{-1}$, $\alpha=0.7\cdot 10^{-5}$ K$^{-1}$), SiO$_2$ glass ($\alpha_c=2.4\cdot 10^{-5}$ K$^{-1}$, $\alpha=2.9\cdot 10^{-5}$ K$^{-1}$), LJ liquid ($\alpha_c=1.75\cdot 10^{-3}$ K$^{-1}$, $\alpha=1.72\cdot 10^{-3}$ K$^{-1}$).
We observe that the anharmonic contribution to $c_v$ can be evaluated by Eq. (\ref{10}) fairly well. $\frac{\lvert\alpha_c-\alpha\rvert}{\alpha}$, averaged over all systems, is within 40\%. The discrepancy between the predicted and calculated values is within the approximations we made in deriving Eq. (\ref{10}), including our neglecting the dependence of $\gamma$ on volume, which can alter the predicted $\alpha_c$ by up to a factor of 2 (see Eq. (\ref{11}) and the discussion preceding Eq. (\ref{bulk})). We therefore find that Eq. (\ref{10}) gives the correct order-of-magnitude evaluation of anharmonic effects, a result considered the best that can be achieved using the traditional perturbation expansion approximations (see, e.g., Refs. \cite{marad,wallace,grimvall0}).
\section{Heat capacity of a liquid in the viscous regime}
We now discuss why and how the above theory applies to viscous liquids in addition to solids. We first demonstrate that the temperature range where $c_v$ linearly increases in Fig. 2a, and where we calculate $\alpha_c$, corresponds to a viscous liquid. Following a somewhat general definition, a viscous liquid is a liquid whose relaxation time $\tau$ is much larger than the Debye vibration period of about 0.1 ps: $\tau\gg\tau_{\rm D}$. Here, $\tau$ is the average time between consecutive atomic jumps in a liquid at one point in space \cite{frenkel}. In Fig. 2c we plot the coordinates of three atoms from the simulation of the binary LJ liquid at 50 K, corresponding to temperature in the middle of the linear increase of $c_v$ in Fig. 2a. We observe atomic displacements reaching 8 \AA\ during the time of our simulation, indicating that large-amplitude diffusive motion is present, as is the case for liquids. Second, we estimate $\tau$, by its definition, as the average time between atomic jumps. Averaged over different atoms, $\tau$ is approximately 15 ps. Therefore, $\tau\gg\tau_{\rm D}$, corresponding to a viscous liquid.
There are two types of motion in a liquid: phonon motion that includes one longitudinal mode and two transverse modes with frequency $\omega>\frac{1}{\tau}$, and diffusional motion \cite{frenkel}. Consequently, the total liquid energy, $E$, is the sum of the phonon energy, $E_{\mathrm{ph}}$, and diffusional energy, $E_{\mathrm{dif}}$: $E=E_{\mathrm{ph}}+E_{\mathrm{dif}}$. $E_{\mathrm{dif}}$ includes both kinetic energy of diffusing atoms and potential energy of their interaction with other atoms. As argued by Frenkel, a particle spends time $\tau$ vibrating in between jumps \cite{frenkel}. The time it takes a particle to jump from one equilibrium position to the next is approximately equal to $\tau_{\rm D}$. Therefore, the probability of a jump is $\rho=\frac{\tau_{\rm D}}{\tau}$. In statistical equilibrium, the number of atoms in the transitory diffusing state is $N_{\mathrm{dif}}=N\rho$, where $N$ is the total number of atoms, giving
\begin{equation}
N_{{\mathrm{dif}}}=N\frac{\tau_{\rm D}}{\tau}
\label{num}
\end{equation}
Eq. (\ref{num}) implies that in a viscous liquid where $\tau\gg\tau_{\rm D}$, the relative number of diffusing atoms at any given moment of time is negligible. Consequently, $E_{\mathrm{dif}}$ can be ignored, giving $E=E_{\mathrm{ph}}$ at any given moment of time. It is easy to show that the same result, $E=E_{{\mathrm{ph}}}$, also applies to the energy averaged over time $\tau$ \cite{next}.
The phonon energy of a liquid in the regime $\tau\gg\tau_{\rm D}$ is given, to a very good approximation, by the phonon energy of its solid. This is supported by the explicit equation for the liquid energy in the next section, and can be qualitatively discussed as follows. The only difference between the phonon states in a liquid and a solid is that the former does not support all transverse modes as a solid does, but only modes with frequency $\omega>\frac{1}{\tau}$ \cite{frenkel}. When $\tau\gg\tau_{\rm D}$, the fraction of missing transverse modes in a liquid is negligible and, furthermore, contributes a vanishingly small term to the phonon energy because the phonon density of states is proportional to $\omega^2$.
We therefore conclude that the energy of the viscous liquid is equal to the phonon energy, as in the solid. Consequently, Eq. (\ref{10}), derived for solids on the basis of phonons and Gr\"{u}neisen approximation, applies to viscous liquids too. This explains our earlier finding that the increase of liquid $c_v$ in the low-temperature viscous regime in Fig. 2a is well described by our proposed Eq. (\ref{10}).
\section{The origin of non-monotonic behavior of liquid $c_v$}
It is interesting to note the non-monotonic behavior of $c_v$ in Fig. 2 with a maximum. We explain this behavior as a result of two competing effects. On one hand, $c_v$ increases in the viscous regime due to anharmonicity as discussed above. On the other hand, $c_v$ decreases at high temperature as a result of progressively decreasing number of transverse waves with frequency $\omega>\frac{1}{\tau}$. We have studied this effect in a series of recent papers \cite{lenergy1,lenergy2,lenergy3}, and shown that the associated decrease of $c_v$ is in quantitative agreement with experimental data of many liquids. Explicitly, the energy of a classical liquid is \cite{lenergy2}:
\begin{equation}
E=NT\left(1+\frac{\alpha T}{2}\right)\left(3-\left(\frac{\tau_{\rm D}}{\tau}\right)^3\right)
\label{energy}
\end{equation}
At low temperature when $\tau\gg\tau_{\rm D}$, Eq. (\ref{energy}) gives $E=3NT\left(1+\frac{\alpha T}{2}\right)$ and $C_v=3N(1+\alpha T)$, Eq. (\ref{10}). This is the result we observe in Fig. 2a at low temperature. At high temperature when $\tau\rightarrow\tau_{\rm D}$, the last term in Eq. (\ref{energy}), $\left(3-\left(\frac{\tau_{\rm D}}{\tau}\right)^3\right)$, cannot be ignored. Its decrease with temperature dominates over $T\left(1+\frac{\alpha T}{2}\right)$ because $\tau$ decreases with temperature exponentially or faster. The result is that in the low-viscous regime $\tau\rightarrow\tau_{\rm D}$, $c_v$ decreases with temperature \cite{lenergy1,lenergy2,lenergy3}. The combination of the two competing effects gives the maximum of $c_v$ seen in Fig. 2a.
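The competition between the two effects can be illustrated with Eq. (\ref{energy}) directly. The sketch below assumes an Arrhenius relaxation time $\tau=\tau_{\rm D}e^{U/T}$ with illustrative values of $U$ and $\alpha$ (not fitted to the simulated LJ liquid), and recovers both the low-temperature behavior $c_v=3(1+\alpha T)$ and an interior maximum of $c_v$:

```python
import math

# Liquid energy per N from the expression above, with an assumed Arrhenius
# relaxation time tau(T) = tau_D*exp(U/T), so (tau_D/tau)**3 = exp(-3U/T).
# U, alpha and the temperature grid are illustrative values.
alpha, U = 1e-4, 600.0                  # 1/K, K
E = lambda T: T * (1 + alpha * T / 2) * (3 - math.exp(-3 * U / T))

Ts = [50 + 10 * k for k in range(96)]   # 50 .. 1000 K
h = 0.1
cv = [(E(T + h) - E(T - h)) / (2 * h) for T in Ts]   # c_v = dE/dT per atom

k_max = cv.index(max(cv))
assert 0 < k_max < len(cv) - 1                       # interior maximum of c_v
assert abs(cv[0] - 3 * (1 + alpha * Ts[0])) < 0.01   # viscous regime: 3(1+alpha*T)
assert cv[-1] < 3 < max(cv)                          # high-T decrease below 3
```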
We finally note that experimentally, the non-monotonic behavior of $c_v$ shown in Figure 2a is challenging to observe. On one hand, the noticeable decrease of $c_v$ requires $\tau$ approaching $\tau_{\rm D}$ as discussed above and, therefore, requires experimenting with low-viscous liquids such as metallic, noble-atom and some molecular liquids \cite{lenergy3}. These liquids tend to easily crystallize on cooling, preventing the formation of the viscous regime and the accompanying linear increase of $c_v$. On the other hand, the linear increase of $c_v$ could be observed in viscous liquids such as silicates, chalcogenides and other systems. However, reaching the low-viscous regime $\tau\rightarrow\tau_{\rm D}$ in these systems and the accompanying decrease of $c_v$ due to the loss of transverse waves requires high temperatures where experiments are challenging. Moreover, viscous liquids often have strong bonds and a high Debye temperature of internal vibrations, with the result that $c_v$ continues to increase even at high temperature due to progressive excitation of internal vibrations, counteracting the decrease of $c_v$ due to the loss of transverse modes. As a result, experiments typically observe either the decrease of $c_v$ in low-viscous liquids or the increase of $c_v$ in high-viscous liquids, but not both. These problems do not arise in our MD simulations, in which we were able to reach both the low-viscous ($\tau\rightarrow\tau_{\rm D}$) and the high-viscous liquid state ($\tau\gg\tau_{\rm D}$) and which, furthermore, were classical.
\section{Summary}
In summary, we have discussed a new way of evaluating the effects of anharmonicity on a system's thermodynamic functions such as heat capacity, and have demonstrated its good predictive power. Importantly, our theory can be used to evaluate the anharmonic $c_v$ in a system of any complexity, including glasses and viscous liquids, in contrast to previous treatments of anharmonicity. In liquids, anharmonicity results in the increase of $c_v$ at low temperature, contributing to non-monotonic behavior and a maximum of $c_v$.
\section*{Reproducibility Statement}
In an effort to make our work transparent and reproducible, all of the data and code used in this work have been made publicly available. Detailed instructions on how to access the data and reproduce the results presented here can be found in the online documentation available at \url{http://citymag.gitlab.io/nuri/paper}. The analysis was performed on 86\,GB of magnetic field data using the open-source software \textsc{NURI} \cite{nuri_software}, specifically designed to analyze time-series data produced by our urban magnetometry network.
\section{Introduction}
Cities are among the most complex systems and are of utmost importance for humanity. The multifaceted and dynamic properties of cities are determined by intricate combinations of natural, anthropogenic, and socio-economic factors. In recent years, a novel approach to the study of cities was introduced \cite{DOBLER2015115}, in which a city is studied, similarly to an astronomical object in the multi-messenger astronomy approach, with an array of observational instruments such as, for example, multispectral cameras \cite{hyperspec1,hyperspec2,hyperspec3}. The analysis of such data has led to important insights into the workings of cities \cite{s16122047,rs13081426,rs12162540}, of importance in such diverse areas as improving energy efficiency, reducing pollution, and increasing our understanding of social organization via the analysis of the work/sleep patterns of urban dwellers (with measurements carried out in a way that ensures the privacy of individuals \cite{privacy}).
Motivated by the success of the multispectral approach, we built a prototype network for urban magnetometry \cite{2019_bowen} and conducted measurements in the San Francisco Bay Area, analyzing the dominant sources of magnetic signals and learning to extract subtle information in the presence of much larger backgrounds.
Here, we report the next step in the urban-magnetometry program, in which we compare the magnetic signatures of two cities: Berkeley (CA) and the Brooklyn borough of New York City (NY). Apart from the anticipated result that ``New York never sleeps'', our measurements indicate that each city has distinct magnetic signatures that can, perhaps, be exploited for the analysis of anomalies in city operation and of long-term trends in the development of cities.
\section{Experimental details}
\subsection{Sensor type and data acquisition}
Two types of magnetometers were used to measure the magnetic field in Brooklyn. The base stations were built using three-axis fluxgate magnetometers manufactured by BioMed Jena GmbH. These Biomed sensors are tied to a specific location and have a sensitivity of about 70\,pT/$\sqrt{\mathrm{Hz}}$. A power supply from the same manufacturer is used to connect the magnetometers to a computer. Digitized magnetic field measurement data are transferred using a Universal Serial Bus (USB) connection. The data streamed from the Biomed sensors are sampled at 3960\,Hz and recorded on the computer using the publicly available {\sc UrbanMagnetometer} software\,\cite{data_acq}. Data from each magnetic field direction (that is, X, Y and Z) are stored hourly in separate binary files.
Field measurements were performed using Magnetoresistive Vector Magnetometers (VMR) manufactured by Twinleaf LLC\footnote{\url{https://twinleaf.com/vector/VMR/}} with a sensitivity of 300\,pT/$\sqrt{\mathrm{Hz}}$. In this work, we analyze the total scalar field and not the individual vector measurements from each axis.\footnote{There may be a small systematic effect in these measurements: the VMR sensors are based on HMC1001 series sensors and have variation over time and temperature such that the calibration of each axis can be off by as much as 10\,\%, which may cause the recorded ``scalar'' measurements to not be purely scalar.} In terms of acquisition, the Twinleaf sensors do not require any data-acquisition device and can be powered directly from a laptop USB port, making them ideal for field measurements (see Section \ref{Sec:field}).
The geomagnetic field was acquired from the United States Geological Survey (USGS) using the open-source library Geomag Algorithms\footnote{\url{https://github.com/usgs/geomag-algorithms}}. The USGS station (FRN) nearest to Berkeley is located 200 miles away, in Fresno, California. For the Brooklyn data, the nearest USGS station (FRD) is in Corbin, Virginia, about 300 miles away from New York City.
\subsection{Activity period}
Data from the Biomed sensors were obtained over four weeks from each city during the calendar year 2016 for Berkeley and 2018 for Brooklyn. More specifically, the data used from Berkeley were taken from Monday March 14, 2016, through Monday April 11, 2016. The data from Brooklyn were acquired from Monday May 7, 2018, through Monday June 4, 2018; this period included the US Federal Memorial Day holiday, observed on the last Monday of the month (05/28/2018), which is highlighted in red in Fig.\,\ref{Fig:full_time_series}. Holidays are usually characterized by a quieter magnetic environment due to a drop-off in human activities; this can be particularly noticeable when the holiday falls on a weekday and the environment surrounding the sensor has an overall magnetic field of higher amplitude during working hours. Finally, the geomagnetic field measured by the respective USGS station closest to each city has variations representing only a small fraction of our measured variation, i.e., 3\% or below.
\subsection{Sensor locations}
The Berkeley measurements (originally presented in \citep{2019_bowen}) were conducted using geographically separated magnetometers in the city of Berkeley. The 4-week data used in this work were generated by one of the Biomed sensors located in a residential area 90 meters away from the Bay Area Rapid Transit (BART) rail system. The city of Berkeley has about 120,000 residents, living primarily in houses with a few low-rise buildings in the downtown. A BART line, which crosses the city, is the dominant source of magnetic field above the natural background during daytime \citep{fraser}.
In Brooklyn, the Biomed sensor was placed on the 12\textsuperscript{th} floor of the downtown Transit Building (370 Jay Street), in one of the corner offices of NYU's Center for Urban Science and Progress. In sharp contrast to Berkeley, which has a population density of about 4,600 people per square kilometer (2020 census), Brooklyn is over 3 times denser, with a population density approaching 15,000 people per square kilometer, and its downtown constitutes a major transportation axis connecting it to downtown Manhattan. Located 40 meters underneath the sensor's position, beneath the Transit Building, is the Jay Street-MetroTech subway station, which is served by three subway lines at all times and by several additional lines during commute hours.
Due to limited resources, we were only able to stream data seamlessly from one base station in Brooklyn, located in the Jay Street building. As the observations are limited to a single-point measurement from the building, the question arises as to whether or not these magnetic field fluctuations are characteristic of the magnetic environment of Brooklyn, or, alternatively, if they are fluctuations in the magnetic field that are the result of a geographically localized set of magnetic sources in the Jay Street building. To address this concern, we performed a series of auxiliary experiments using multiple portable magnetic sensors (see Section \ref{Sec:field}) and demonstrate that while local processes measured by the Jay Street sensor are predominant, characteristic observations of the Brooklyn urban magnetic field can also be extracted from the data.
\section{Comparative Data Analysis}
\subsection{Time-domain observations}
The total scalar field for the entire four-week period for both cities is shown in Fig.\,\ref{Fig:full_time_series}. While daily variations of the magnetic field in Berkeley appear similar regardless of the day of the week (weekday fluctuations are similar to weekend fluctuations), the periodic behavior observed in the Brooklyn weekday data appears to stop on weekends and holidays. We note, however, that since the sensor in Brooklyn is placed within a business building, the drop in activity within the building during weekends and holidays (e.g., stopped elevators, lights off) is a direct cause of the drop in magnetic field activity observed by the magnetometer. One should also note that the measured field is relatively far from the geomagnetic mean, indicating that the field at the sensor has a large contribution from a local source.
The change in variance during weekdays and weekends is shown in the top plots of Fig.\,\ref{Fig:distcomp}. We note that the dispersion of the magnetic field in Berkeley is two orders of magnitude smaller at night than during the day, dropping from $10^{-2}$ to $10^{-4}\,{\mathrm{\upmu T}^2}$, while nightly variations in downtown Brooklyn remain high, with a variance lying above 0.1\,$\mathrm{\upmu T}^2$ at all times. Decreased-amplitude fluctuations in Berkeley occur roughly between 1 and 4:30\,AM, when BART is not in service. We also note that nighttime activities differ slightly between weekdays and weekends; this is a direct consequence of reduced public-transport activity on weekends. In Brooklyn, however, the changes between daytime and nighttime variations are less pronounced, and the weekend variation has only minor day/night variability. While a decrease in anthropogenic activity during weekdays is usually observed at around 4--7\,PM, when business activities are reduced, the decrease in the magnetic field only starts to be seen at around 11\,PM, suggesting that the magnetic field measured by the Biomed sensor is not solely driven by the occupancy of the building.
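The day/night variance split described above can be sketched as follows. The record here is synthetic (white noise whose amplitude is larger during assumed daytime hours), and the hour boundaries, sampling rate and amplitudes are illustrative assumptions, not the measured values.

```python
import numpy as np

# Illustrative day/night variance comparison on a synthetic one-week record
# sampled at 1 Hz: daytime samples (06:00-23:00, assumed) are given a 10x
# larger noise amplitude, i.e. a 100x larger variance, than nighttime ones.
hours = 7 * 24
t = np.arange(hours * 3600)                     # seconds over one week
hour_of_day = (t // 3600) % 24
day_sample = (hour_of_day >= 6) & (hour_of_day < 23)

rng = np.random.default_rng(2)
amp = np.where(day_sample, 0.1, 0.01)           # uT, made-up amplitudes
field = 49.4 + amp * rng.normal(size=t.size)    # uT, around the Berkeley mean

var_hourly = field.reshape(hours, 3600).var(axis=1)   # variance per hour
hod = np.arange(hours) % 24
day_var = var_hourly[(hod >= 6) & (hod < 23)].mean()
night_var = var_hourly[(hod < 6) | (hod >= 23)].mean()
```

Splitting the variance per hour in this way recovers the two-orders-of-magnitude day/night contrast built into the toy record, mirroring the $10^{-2}$ to $10^{-4}\,\mathrm{\upmu T}^2$ drop seen in the Berkeley data.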
The bottom plots of Fig.\,\ref{Fig:distcomp} show the distributions of magnetic field data for the full dataset as well as for daytime and nighttime periods. For each city, day and nighttime distributions were fitted independently using a skewed Gaussian profile
\begin{multline}
f\left(x;A,\mu,\sigma,\gamma\right) = \frac{A}{\sigma\sqrt{2\pi}}\exp{\left[-(x-\mu)^2/\left(2\sigma^2\right)\right]}\\
\times\left\{1+\mathrm{erf}\left[\frac{\gamma(x-\mu)}{\sigma\sqrt{2}}\right]\right\} ,
\end{multline}
where $A$, $\mu$, $\sigma$ and $\gamma$ correspond respectively to the amplitude, mean, standard deviation, and skewness of the profile, and $\mathrm{erf[\,]}$ is the error function. The best-fit results for each distribution are presented in Table\,\ref{table:best_fit}. Two observations can be made that distinguish the Berkeley magnetic field variations from those in Brooklyn. First, while the daytime and nighttime distributions recorded by the sensor in Brooklyn are centered around a consistent mean magnetic field of about 92.8\,$\mathrm{\upmu T}$, the means of the two distributions in Berkeley differ with high significance, from 49.361(6)\,$\mathrm{\upmu T}$ during the day to 49.925(9)\,$\mathrm{\upmu T}$ at night. Second, the skewness of the distribution in Berkeley changes: the nighttime profile shows an increased skewness compared with the daytime variations. In Brooklyn, on the other hand, the distribution remains roughly Gaussian at all times, with a low absolute skewness of around 1.
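The fitting step can be sketched with \textsc{scipy}; the data below are synthetic (generated from the profile itself, with true values loosely mimicking the Berkeley daytime column of Table\,\ref{table:best_fit}), not the actual histograms.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Skewed Gaussian profile used for the day/night field distributions.
def skew_gauss(x, A, mu, sigma, gamma):
    g = A / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return g * (1.0 + erf(gamma * (x - mu) / (sigma * np.sqrt(2))))

# Synthetic "histogram": model curve plus noise; the true values roughly
# mimic the Berkeley daytime best-fit parameters (A is arbitrary here).
rng = np.random.default_rng(0)
x = np.linspace(47.0, 52.0, 400)
true_params = (1000.0, 49.4, 0.3, 1.4)          # A, mu [uT], sigma [uT], gamma
y = skew_gauss(x, *true_params) + rng.normal(0.0, 5.0, x.size)

# least-squares fit starting from a symmetric (gamma = 0) initial guess
popt, pcov = curve_fit(skew_gauss, x, y, p0=(800.0, 49.0, 0.5, 0.0))
```

With low noise the fit recovers the input location, width and skewness, which is the sense in which the parameters of Table\,\ref{table:best_fit} characterize the measured distributions.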
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Params}} &
\multicolumn{2}{c|}{\textbf{Berkeley}} &
\multicolumn{2}{c|}{\textbf{Brooklyn}}\\
\cline{2-5} &
\textbf{Daytime} &
\textbf{Nighttime} &
\textbf{Daytime} &
\textbf{Nighttime}\\
\hline
$A$ & 67,980(407) & 11,488(654) & 158,547(504) & 79,559(321) \\ \hline
$\mu$ [$\mathrm{\upmu T}$] & 49.361(6) & 49.925(9) & 92.802(14) & 92.833(18) \\ \hline
$\sigma$ [$\mathrm{\upmu T}$] & 0.281(5) & 0.212(18) & 0.930(12) & 0.617(12) \\ \hline
$\gamma$ & 1.39(8) & -4.61(148) & -1.15(5) & -0.93(7)\\ \hline
\end{tabular}
\caption{Best skewed Gaussian fit of daytime and nighttime magnetic field distributions for Berkeley and Brooklyn data.}
\label{table:best_fit}
\end{table}
\subsection{Frequency content}
In Fig.\,\ref{Fig:psd_high_freq}, we show the Power Spectral Density (PSD) at high frequency for both the Berkeley and Brooklyn data. The drop in magnetic field activity in Berkeley at nighttime, identified in the variance plot (see top panels of Fig.\,\ref{Fig:distcomp}), can be explained by the decrease of low-frequency signals (up to 10\,Hz) in the PSD. We also note a significant difference in the amplitude of the power-line and other high-frequency signals between the two cities (see bottom panels of Fig.\,\ref{Fig:psd_high_freq}).
Low-frequency signals for both daytime and nighttime periods are shown in Fig.\,\ref{Fig:psd_low_freq}. We notice that a 20-minute signal at $8.3\times10^{-4}\,\mathrm{Hz}$ is observed during daytime in Berkeley, which is known to be associated with BART activities \cite{2019_bowen}. In Brooklyn, a similar signal is also observed, but at nighttime. In order to identify this 20-minute periodic signal in the time-series data, we made 100-minute averages of the daytime Berkeley and nighttime Brooklyn data. Applying a high-pass filter with a cutoff frequency of 0.001\,Hz improves the extraction and visibility of the 20-minute periodic signal (see bottom panels in Fig.\,\ref{Fig:psd_low_freq}). While the signal is already visible in the averaged 100-minute data for Berkeley, the noise in the unfiltered data from Brooklyn makes identifying the 20-minute signal more challenging.
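The filtering step can be sketched on a synthetic record (a 20-minute sine riding on a slow drift) rather than the actual data; the 1\,Hz toy sampling rate, the Butterworth filter order and the amplitudes are assumptions, while the 0.001\,Hz cutoff matches the value used above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy version of the extraction: a 20-minute periodic component
# (f = 1/1200 s ~ 8.3e-4 Hz) riding on a slow drift, high-pass filtered
# at the 0.001 Hz cutoff quoted in the text. Sampling rate, filter order
# and amplitudes are illustrative assumptions.
fs = 1.0                                    # Hz (toy sampling rate)
t = np.arange(6 * 3600) / fs                # six hours of data
f20 = 1.0 / (20 * 60)                       # the 20-minute spectral line
drift = 1e-3 * t                            # slow drift, to be removed
signal = 0.2 * np.sin(2 * np.pi * f20 * t) + drift

b, a = butter(2, 0.001 / (fs / 2), btype="highpass")
filtered = filtfilt(b, a, signal)           # zero-phase high-pass
```

After filtering, the drift is suppressed and the spectrum of the residual is dominated by the 20-minute line, even though the cutoff sits just above $8.3\times10^{-4}$\,Hz (the finite filter roll-off only partially attenuates the line).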
In Fig.\,\ref{Fig:wavelet}, we show a scalogram which demonstrates the richness of urban magnetic field data. The quiet nighttime perturbations in Berkeley, previously shown in \cite{2019_bowen}, are recovered. Using the full-rate data, one can see that the high-frequency ranges are richer in anthropogenic activity. In particular, irregular signals below the power-line frequency can often be seen and are more prominent in the Berkeley data.
\subsection{Auxiliary Field Measurements}
\label{Sec:field}
The measurements previously made in Berkeley \cite{2019_bowen} revealed coherent magnetic field fluctuations in a geographically distributed magnetometer array. The
significant correlations between stations allowed identification of these
fluctuations with a ``global'' magnetic field which characterizes the
magnetic signature of the city of Berkeley (or, rather, the broader East
Bay). Therefore, in order to fully determine the signature of Brooklyn, or of New York City (NYC) at large, a comparative analysis of in-situ data taken from two different environments must be made.
In Fig.\,\ref{Fig:samples}, we show the behavior of the magnetic field in five distinct locations throughout Brooklyn. Each measurement was acquired using magnetoresistive vector magnetometers manufactured by Twinleaf LLC with data sampled at 200\,Hz and sensitivity of up to 300\,pT/$\sqrt{\mathrm{Hz}}$. As one can notice, downtown Brooklyn is an urban environment with a high diversity of magnetic field sources (e.g., elevators moving in buildings, cars on surface streets, subways crossing the Manhattan Bridge), thereby making the identification of a more global magnetic signature more challenging.
All field measurements were acquired using two stations to allow cross-correlation between both instruments. Figure\,\ref{Fig:xcorr} demonstrates how challenging the cross-correlation between individual stations that are geographically separated can be. While two stations placed close to each other (i.e., within a few meters, see first column) hold highly cross-correlated data, the information quickly becomes uncorrelated the further away one station is from another.
However, using low-pass filters, it becomes possible to correlate signals from different environments. For instance, in the second column of Fig.\,\ref{Fig:xcorr}, we cross-correlate the magnetic field recorded from the sidewalk in front of the Jay Street building with the magnetic field recorded inside the twelfth floor of the same building, identifying a correlated signal with a lag time of 5 minutes between the two stations. Similarly, when recording the magnetic field inside two buildings located across the street from each other (see last column in Fig.\,\ref{Fig:xcorr}), a noticeable anti-correlated behavior can be observed.
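The lag estimate between two stations can be illustrated with a cross-correlation sketch on synthetic records; the shared smoothed-noise signal, noise levels and 1\,Hz sampling are assumptions, while the 5-minute lag mirrors the value quoted above.

```python
import numpy as np

# Sketch of the two-station lag estimate via cross-correlation. A shared
# low-frequency signal is buried in independent sensor noise, with station 2
# lagging station 1 by 300 s (5 minutes). All signals here are synthetic.
fs = 1.0                                    # Hz (toy sampling rate)
n = 2 * 3600                                # two hours of data
rng = np.random.default_rng(1)

# shared slow signal: white noise smoothed over ~50 s
common = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
lag_true = 300                              # 5 minutes at 1 Hz

s1 = common + 0.05 * rng.normal(size=n)
s2 = np.roll(common, lag_true) + 0.05 * rng.normal(size=n)

# full cross-correlation of mean-removed records; the argmax gives the
# relative lag (positive lag = s2 delayed with respect to s1)
xc = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = int(np.argmax(xc)) - (n - 1)
```

The peak position of the cross-correlation recovers the built-in delay; in practice, low-pass filtering both records first (as done above) sharpens the shared low-frequency content relative to uncorrelated local noise.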
\section{Discussion}
\subsection{Optimal sensor network distributions}
This work represents an initial demonstration of the potential complexity in small-scale magnetic field variability in dense urban environments. Indeed, in the context of the ``magnetic field of a city'', our observations show that the power in high spatial frequency modes is larger in a dense city like Brooklyn; this may be a feature of larger cities with a wider variety of magnetic sources. Small-spatial-scale effects must therefore be considered when designing optimal sensor network systems so they can map out the spatial variability of magnetic field on multiple scales.
\subsection{Impact of small-scale effects on global field}
In this work, we attempt to characterize the global magnetic field of cities, that is, the portion of the magnetic field that has spatial and temporal variation but is observed to have spatiotemporal correlations over the extent of the city system. A global magnetic system can be defined as the set of extended and point sources that contribute non-negligibly to its magnetic field. In the case of a city system, this often contains multiple subsystems such as individual buildings and trains.
A subsystem generally contains a geographically localized (within a volume) set of magnetic sources, which are both extended and point-like in nature. For instance, building-specific fluctuations represent subsystems within larger systems where the time dependence includes effects from both the structure itself and the behavioral signals from the population that is using the structure. In our work, the base station located within the Jay Street building is subject to fields due to sources with a spatial extent comparable to that of the building, as well as any point-sources within its subsystem. Presumably, the field measured with a sensor in the building can also have contributions from other subsystems like trains, for example, from the subway station underneath the Jay Street building.
Spatial and temporal variations in the magnetic field are often observed in the data and can be due to a variety of reasons, including the motion of magnetized objects or time variations in the currents generating the magnetic field. While not all point sources, that is, sources confined to a certain small volume, have their magnetic field measurable beyond the vicinity of a few nearby sensors, these sources come with some characteristic radial dependence that perturbs the larger-scale magnetic field, thereby making the extraction of the underlying global city-system field more challenging.
\subsection{Inferring global properties from local measurements}
Local measurements in a dense urban environment may have periodicities similar to the daily/weekly trends observed in all urban systems and one could probably argue that the subsystems are coupled to these large-scale systems. However, periodicities in the global field (e.g. the extended urban environment) are harder to measure.
We further point out that point-source perturbations can be used to understand buildings within the global field from an in-situ experimental standpoint. However, it is hard to constrain any dynamics using a point-source assumption and a small number of sensors. Our interpretation of ``subsystem'' urban magnetic fields (i.e., local fluctuations) is that they basically consist of multiple dipoles (or multipoles) moving in potentially complex ways. A single sensor (even with vector measurements) is unable to uniquely determine a dipole moment and orientation. A minimum of two sensors is therefore needed; the same is true for fields generated by a line current. An added challenge in understanding the variation of a localized source using a few measurement sites comes from the time dependence of the signals.
Further investigations will need to address to what extent localized fields can, in practice, be isolated and identified using a magnetometer network.
The magnetic environment within a subsystem may be highly chaotic. A determination of the large-scale field properties from local measurements
requires an analysis of the statistical variability in the measurements. An exception to a purely statistical approach might be when there is a single dominant source in the local subsystem which can be subtracted from the local field.
\section{Conclusion}
In this pilot study, magnetic signatures obtained in different urban environments (Berkeley and Brooklyn) were compared. We find that there are major differences in magnetic signatures in these two test cases, for example, the difference in the contrast of magnetic signatures between day and night time.
There are many ways to analyze the rich urban magnetic data. As we have shown in this work, some of them allow reducing the complex data stream to a few key parameters that can be used to monitor the dynamics of the city.
The results of this work point towards a number of possible future directions: for example, an extension to the sensor networks introduced in \cite{2019_bowen}, or correlation with other (nonmagnetic) data, for instance those from multispectral cameras \cite{hyperspec3,s16122047}.
A specific advantage of magnetometry for urban studies is that it can provide information on the functioning of infrastructure within its boundaries, but at a distance (e.g., a moving elevator or operating machinery within a building can be detected from the outside). Uses might therefore include post-disaster assessment (e.g., vulnerability of partially destroyed buildings), infrastructure monitoring (e.g., assessment of sensorless bridges with short bursts of observations), monitoring the stability of the power grid (with instabilities being precursors of outages), etc.
Some interesting multidisciplinary questions one could address include: How does an anomalous event such as an epidemic or pandemic affect the urban magnetic signature? Are there significant monthly and/or seasonal variations of magnetic signatures? What are the origins of these variations, and are they the same for different cities? It is the authors' belief that answering these questions of ``comparative urban magnetometry'' will teach us a lot about cities, and that this knowledge will eventually translate into tangible economic and social benefits.
There are also technical improvements that can benefit future urban-magnetometry studies. For example, if a measurement is done near a local source, vibrations of the sensor can lead to spurious signals. These, however, can be identified by correlating the magnetic readout with accelerometer data. In fact, the Twinleaf magnetometers that we used are already equipped with such auxiliary sensors.
\nocite{*}
\section{introduction}
Quantum systems in contact with an environment are an important topic in many branches of physics, including quantum optics \cite{Gardiner2000}, quantum thermodynamics \cite{BenentiWhitney2017}, quantum computing \cite{NielsenChuang2000}, and more. A particularly important class of open quantum systems is that of boundary driven systems, where the system is coupled to the environment only at its extremities. The boundary dissipative coupling drives the system towards a non-equilibrium steady state (NESS) which usually presents a non-vanishing current depending significantly on the properties of the system. For this reason, boundary dissipatively driven quantum systems are particularly important for the study of quantum transport.
Similarly to the case of closed quantum systems, for open quantum systems it is useful to have analytical solutions to guide the physical understanding of more complex systems. However, for open quantum systems the number of known analytical solutions is limited. In the following we consider only open quantum systems whose dynamics is described by a master equation in Lindblad form \cite{GoriniSudarshan1976, Lindblad1976}, for which the generator of the evolution is called the Lindbladian. An exact matrix product ansatz can be constructed for the boundary driven $XXZ$ chain in some regimes of boundary dissipative driving \cite{Prosen2011a, Prosen2011b, Prosen2014, Popkov2013b, Popkov2014, Prosen2015}. Furthermore, for a boundary driven $XX$ chain, also in the presence of dephasing, a cleverly designed ansatz has been used to exactly calculate the one-point and two-point correlation functions \cite{Znidaric2010}. The spectrum of the Lindblad operator (Lindbladian) of the tight-binding fermionic chain (or $XX$ chain) in a dephasing environment has also been exactly computed by mapping it into a Hubbard chain with imaginary interaction strength \cite{MedvedyevaProsen2016}.
A different class of analytically solvable open quantum many-body systems is that of quadratic bosonic or fermionic systems. For boundary driven non-interacting bosonic, fermionic, and $XX$ chains, correlation functions have been analytically computed \cite{Manzano2012, Manzano2013}. In a seminal work in $2008$ \cite{Prosen2008}, Prosen showed that diagonalizing the Lindbladian of any quadratic fermionic system can be reduced to diagonalizing a $4L\times4L$ antisymmetric matrix, which can be further reduced to diagonalizing a $2L\times 2L$ generic matrix \cite{Prosen2010}, that is, a matrix with no obvious symmetries, for which it is difficult to find explicit analytical expressions. He also provided a perturbative expression for the relaxation gap, that is, the real part of the slowest decaying modes, for a boundary driven Ising chain, obtained in the limit of large $L$. Similar calculations were developed for non-interacting bosons \cite{ProsenSeligman2012}.
In our previous work \cite{GuoDario2017}, we showed that for number conserving quadratic systems (be they bosonic, fermionic, or made of spins), finding the rapidities (eigenvalues of the Lindbladian) and the decay modes can be reduced to diagonalizing an $L\times L$ matrix, which can be of a special form, i.e., a bordered Toeplitz matrix, for which analytical solutions are known \cite{Yueh2005, Kouachi2006, Willms2008}. This allowed us to find explicit analytical solutions.
However, for non-number conserving Hamiltonians, the method described in \cite{GuoDario2017} would result in diagonalizing a $2L\times 2L$ block bordered Toeplitz matrix, for which we could not find analytical solutions. Building on this approach, we now address the problem of solving a boundary driven $XY$ chain. We find that the problem contains another symmetry which, when exploited, allows us to turn the problem into diagonalizing an $L\times L$ tridiagonal bordered $2$-Toeplitz matrix, which can be reduced to solving a scalar trigonometric non-linear equation. Moreover, we give an explicit solution for the special case of an Ising chain with transverse baths, and we show that the relaxation gap is independent of the system size $L$.
We here summarize the main steps discussed in the paper, highlighting the key equations. In Sec.\ref{sec:model} we introduce the dissipatively boundary driven model we study. In Sec.\ref{sec:solvemaster} we show how to reduce the problem of diagonalizing the Lindbladian to diagonalizing a $2L\times 2L$ matrix. Then, in Sec.\ref{sec:observable}, we show that, in order to compute observables, it is sufficient to solve a Lyapunov equation (Eq.(\ref{eq:pxxp})), which can be solved efficiently numerically. In Sec.\ref{sec:solveXY} we apply our approach to the case of a boundary driven $XY$ chain, for which we use the symmetries in Eq.(\ref{eq:eigenPsub}) to turn the problem into solving an $L\times L$ bordered $2$-Toeplitz matrix ($\mathbf{Q}^{\pm}$ in Eq.(\ref{eq:Qpm})), which can be turned into solving the trigonometric equation (\ref{eq:lambda}). For the special case of an Ising chain, we find $L$-independent analytical solutions, given in Eq.(\ref{eq:solutions}). In Sec.\ref{sec:summary} we draw our conclusions.
\section{model}\label{sec:model}
We consider an open quantum system of $L$ sites with fermionic particles. Its dynamics is described by the quantum Lindblad master equation \cite{GoriniSudarshan1976, Lindblad1976} with Lindbladian $\mathcal{L}$,
\begin{align}\label{eq:Lindblad}
\frac{d\hat{\rho}}{dt} = \mathcal{L}(\hat{\rho}) = -\im \left[\Hop, \hat{\rho}\right] + \mathcal{D}(\hat{\rho}).
\end{align}
Here $\hat{\rho}$ is the density operator of the system, $\Hop$ is the Hamiltonian, and the dissipator $\mathcal{D}$ describes the dissipative part of the evolution.
We consider the Hamiltonian
\begin{align}\label{eq:Ham}
\Hop = \sum_{i,j=1}^L\mathbf{h}_{i,j}\hat{\alpha}^{\dagger}_i\hat{\alpha}_j + \frac{1}{2}\sum_{i,j=1}^L\mathbf{g}_{i,j}\hat{\alpha}^{\dagger}_i\hat{\alpha}^{\dagger}_j + \frac{1}{2} \sum_{i,j=1}^L\mathbf{g}_{j,i}^{\ast}\hat{\alpha}_i\hat{\alpha}_j,
\end{align}
where $\hat{\alpha}^{\dagger}_j (\hat{\alpha}_j)$ creates (annihilates) one fermion on site $j$. $\mathbf{h}$ is an $L\times L$ Hermitian matrix, and $\mathbf{g}$ is an $L\times L$ anti-symmetric matrix satisfying $\mathbf{g}^t = -\mathbf{g}$. The dissipative part is given by
\begin{align}\label{eq:dissipator}
\mathcal{D}(\hat{\rho}) = &\sum_{i,j=1}^L \left[\mathbf{\Lambda}^+_{i,j}\left(\hat{\alpha}^{\dagger}_i\hat{\rho}\hat{\alpha}_j-\hat{\alpha}_j\hat{\alpha}^{\dagger}_i\hat{\rho}\right)
\right. \nonumber \\ &+ \left. \mathbf{\Lambda}^-_{i,j}\left(\hat{\alpha}_i\hat{\rho}\hat{\alpha}^{\dagger}_j-\hat{\alpha}^{\dagger}_j\hat{\alpha}_i\hat{\rho}\right) + \hc \right],
\end{align}
where $\mathbf{\Lambda}^+$ and $\mathbf{\Lambda}^-$ are $L\times L$ real, symmetric and non-negative matrices. We note that the last two terms of the Hamiltonian in Eq.(\ref{eq:Ham}) are new compared to \cite{GuoDario2017}, while the dissipator $\mathcal{D}$ in Eq.(\ref{eq:dissipator}) remains the same.
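The structure of Eqs.(\ref{eq:Lindblad})--(\ref{eq:dissipator}) can be checked numerically in the simplest case of a single fermionic mode ($L=1$, where $\mathbf{g}$ vanishes by antisymmetry): with scalar rates, the dissipator reduces to standard Lindblad form with jump operators $\hat{\alpha}^{\dagger}$ (rate $2\Lambda^+$) and $\hat{\alpha}$ (rate $2\Lambda^-$). The sketch below vectorizes the Lindbladian, and the expected steady-state occupation is $\Lambda^+/(\Lambda^++\Lambda^-)$; the numerical parameters are illustrative.

```python
import numpy as np

# Minimal numerical check of the Lindblad structure for one fermionic mode
# (L = 1): h = [[eps]], g = 0, and scalar rates lam_p = Lambda^+, lam_m = Lambda^-.
a = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # annihilation operator
ad = a.conj().T
I2 = np.eye(2, dtype=complex)
eps, lam_p, lam_m = 1.0, 0.3, 0.7        # illustrative values

H = eps * ad @ a

def left_right(A, B):
    """Superoperator for rho -> A @ rho @ B, acting on row-stacked rho."""
    return np.kron(A, B.T)

# Lindbladian: -i[H, .] plus jump operators ad (rate 2*lam_p) and a (rate 2*lam_m)
liou = -1j * (left_right(H, I2) - left_right(I2, H))
for rate, L_op in ((2 * lam_p, ad), (2 * lam_m, a)):
    LdL = L_op.conj().T @ L_op
    liou += rate * (left_right(L_op, L_op.conj().T)
                    - 0.5 * left_right(LdL, I2)
                    - 0.5 * left_right(I2, LdL))

# steady state: right eigenvector associated with the zero eigenvalue
w, V = np.linalg.eig(liou)
rho_ss = V[:, np.argmin(np.abs(w))].reshape(2, 2)
rho_ss /= np.trace(rho_ss)
n_ss = float(np.real(np.trace(ad @ a @ rho_ss)))
```

The identity is a left null vector of the vectorized Lindbladian (trace preservation), and the steady-state occupation agrees with the rate-equation balance $2\Lambda^+(1-n)=2\Lambda^- n$.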
\section{solving the master equation}\label{sec:solvemaster}
\subsection{Mapping the density operator into new representations}
As in \cite{GuoDario2017}, first we perform a one-to-one mapping from the density operator basis elements $\vert n_1, n_2, \dots n_L \rangle \langle n'_1, n'_2, \dots n'_L \vert $ to a state vector basis (with $2L$ sites) which we denote as $\vert n_1, \dots n_L, n'_1, \dots n'_{L} \rangle_{\mathcal{A}}$. As a result, the operator $\hat{\alpha}_i$ acting on site $i$ to the left of the density matrix is mapped to $\hat{a}_i$ acting on the state vector on the $i$-th site too, while the operator $\hat{\alpha}_i$ acting on the right of the density matrix is mapped to $\hat{a}^{\dagger}_{L+i}$ acting on the state vector. We refer to the representation defined by the $2L$ modes $\hat{a}_i$ as the $\mathcal{A}$ representation.
To enforce the fermionic anti-commutation relations over all the sites, we perform a second mapping from $2L$ modes $\hat{a}_i$ to another set of $2L$ modes $\hat{b}_i$, which we refer to as the $\mathcal{B}$ representation:
\begin{subequations}
\begin{align}
& \hat{b}_i = \hat{a}_i, \;\;\;\; \hat{b}^{\dagger}_i = \hat{a}^{\dagger}_i \\
&\hat{b}_{L+i} = \mathcal{P}\hat{a}_{L+i}, \;\;\;\; \hat{b}^{\dagger}_{L+i} = \hat{a}^{\dagger}_{L+i}\mathcal{P},
\end{align}
\end{subequations}
where $\mathcal{P}$ is the parity operator \cite{Prosen2008, GuoDario2017} defined as
\begin{align}
\mathcal{P} = e^{\im \pi \sum_{j=1}^{2L}\hat{b}^{\dagger}_j\hat{b}_j}.
\end{align}
The Hamiltonian term in the $\mathcal{B}$ representation can be written as
\begin{align}\label{eq:Hnew}
[ \Hop, \hat{\rho} ]_{\mathcal{B}} =& \sum_{i,j=1}^L \left( \mathbf{h}_{i,j}\hat{b}^{\dagger}_i\hat{b}_j + \frac{1}{2} \mathbf{g}_{i,j}\hat{b}^{\dagger}_i\hat{b}^{\dagger}_j - \frac{1}{2} \mathbf{g}_{i,j}^{\ast}\hat{b}_i\hat{b}_j \right. \nonumber \\ &- \left. \mathbf{h}_{j,i}\hat{b}^{\dagger}_{L+i}\hat{b}_{L+j} - \frac{1}{2} \mathbf{g}_{i,j}\hat{b}_{L+i}\hat{b}_{L+j} \right. \nonumber \\
&+ \left. \frac{1}{2} \mathbf{g}_{i,j}^{\ast}\hat{b}^{\dagger}_{L+i}\hat{b}^{\dagger}_{L+j} \right),
\end{align}
and the dissipative part of Eq.(\ref{eq:Lindblad}) can be written in the $\mathcal{B}$ representation as
\begin{align}\label{eq:Dnew}
\mathcal{D}^{\mathcal{B}} \vert \rho \rangle_{\mathcal{B}} = \sum_{i,j=1}^L & \left( \mathbf{\Lambda}^+_{ij} \hat{b}^{\dagger}_i \hat{b}^{\dagger}_{L+j} - \mathbf{\Lambda}^+_{ji} \hat{b}_i \hat{b}^{\dagger}_j + \mathbf{\Lambda}^-_{ji} \hat{b}_{L+i} \hat{b}_{j} -\right. \nonumber \\
&\left. \mathbf{\Lambda}^-_{ji} \hat{b}^{\dagger}_i \hat{b}_j - {\mathbf{\Lambda}^+_{ij}} \hat{b}^{\dagger}_{L+i} \hat{b}^{\dagger}_j - {\mathbf{\Lambda}^+_{ji}} \hat{b}_{L+i} \hat{b}^{\dagger}_{L+j}\right. \nonumber \\ -
&\left.{\mathbf{\Lambda}^-_{ji}} \hat{b}_{i}\hat{b}_{L+j} -
{\mathbf{\Lambda}^-_{ji}} \hat{b}^{\dagger}_{L+i}\hat{b}_{L+j} \right)\vert \rho \rangle_{\mathcal{B}} ,
\end{align}
where $\mathcal{D}^{\mathcal{B}}$ is the dissipator $\mathcal{D}$ in the ${\mathcal{B}}$ representation, while $\vert \rho \rangle_{\mathcal{B}}$ is the density operator $\hat{\rho}$ in the $\mathcal{B}$ representation. We note that the Lindbladian in the $\mathcal{B}$ representation, $\mathcal{L}^{\mathcal{B}}$, satisfies
\begin{align}
\left[\mathcal{L}^{\mathcal{B}}, \mathcal{P}\right] = 0
\end{align}
since $\mathcal{L}^{\mathcal{B}}$ is quadratic in the operators $\hat{b}$ and $\hat{b}^{\dagger}$, which anti-commute with $\mathcal{P}$ as in \cite{GuoDario2017}. As a result, the even and odd parity sectors are decoupled, and we have dropped the $\mathcal{P}$ operator in Eq.(\ref{eq:Dnew}) by assuming that we work in the even parity sector, namely $\mathcal{P}=1$.
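The anti-commutation of each mode with $\mathcal{P}$, and hence the commutation of any quadratic form with it, can be verified directly in a small Fock space. A minimal sketch (numpy, Jordan--Wigner construction for $2L=4$ modes; the particular quadratic operator tested is an arbitrary example, not one from the text):

```python
import numpy as np
from functools import reduce

n_modes = 4                              # 2L modes with L = 2
I2 = np.eye(2)
Zs = np.diag([1.0, -1.0])                # (-1)^{n_j} on a single site
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilates |1> -> |0>

def ann(j):
    """Annihilation operator b_j with its Jordan-Wigner string."""
    ops = [Zs] * j + [sm] + [I2] * (n_modes - j - 1)
    return reduce(np.kron, ops)

b = [ann(j) for j in range(n_modes)]
P = reduce(np.kron, [Zs] * n_modes)      # parity exp(i pi sum_j n_j)

# each mode anti-commutes with the parity operator ...
for bj in b:
    assert np.allclose(P @ bj + bj @ P, 0)

# ... hence any quadratic combination commutes with it
quad = b[0].conj().T @ b[2] + b[1] @ b[3] + b[0].conj().T @ b[1].conj().T
assert np.allclose(P @ quad - quad @ P, 0)
```

This is the mechanism behind $[\mathcal{L}^{\mathcal{B}}, \mathcal{P}] = 0$: two anti-commuting factors commute with $\mathcal{P}$ as a pair.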
\subsection{Master equation in the new representations}
Combining Eqs.(\ref{eq:Hnew}, \ref{eq:Dnew}), and using the more compact notation $\textbf{b}_{1\rightarrow L}$ for the column vector, $\left( \hat{b}_1; \dots \hat{b}_L \right)$, it is possible to rewrite $\mathcal{L}^{\mathcal{B}}$ as
\begin{align}
\mathcal{L}^{\mathcal{B}} =& \left(
\begin{array}{c}
\textbf{b}_{1\rightarrow L}^{\dagger} \\
\textbf{b}_{1\rightarrow L} \\
\textbf{b}_{(L+1)\rightarrow 2L} \\
\textbf{b}_{(L+1)\rightarrow 2L}^{\dagger} \\
\end{array}
\right)^t \mathbf{G}
\left(
\begin{array}{c}
\textbf{b}_{1\rightarrow L} \\
\textbf{b}_{1\rightarrow L}^{\dagger} \\
\textbf{b}_{(L+1)\rightarrow 2L}^{\dagger} \\
\textbf{b}_{(L+1)\rightarrow 2L} \\
\end{array}
\right) \nonumber \\
&-{\rm tr}(\mathbf{\Lambda}^-+\mathbf{\Lambda}^+),
\end{align}
where the coefficient matrix $\mathbf{G}$ is a $4L\times 4L$ matrix
\begin{align}
\mathbf{G} = \left(
\begin{array}{cccc}
\bar{\mathbf{h}} &-\im\mathbf{g}/2 & \mathbf{\Lambda}^+& \textbf{0} \\
\im\mathbf{g}^{\ast}/2 &-\bar{\mathbf{h}}^t & \textbf{0}& -\mathbf{\Lambda}^- \\
{\mathbf{\Lambda}^-}^t & \textbf{0}& -\bar{\mathbf{h}}^{\dagger}& \im\mathbf{g}/2 \\
\textbf{0} & -{\mathbf{\Lambda}^+}^t & -\im\mathbf{g}^{\ast}/2& \bar{\mathbf{h}}^{\ast} \\
\end{array}
\right). \label{eq:Gmatrix}
\end{align}
Here $\bar{\mathbf{h}} = \frac{1}{2}\left( -\im \mathbf{h} - {\mathbf{\Lambda}^-}^t + \mathbf{\Lambda}^+ \right)$. Denoting
\begin{align}
\mathbf{M} = \left( \begin{array}{cc}
\bar{\mathbf{h}} & -\im\mathbf{g}/2 \\
\im\mathbf{g}^{\ast}/2 & -\bar{\mathbf{h}}^t
\end{array} \right)
\end{align}
and
\begin{align}
\mathbf{J} = \left( \begin{array}{cc}
\mathbf{\Lambda}^+ & \textbf{0} \\
\textbf{0} & -\mathbf{\Lambda}^-
\end{array} \right),
\end{align}
we can rewrite $\mathbf{G}$ in a more compact form
\begin{align}\label{eq:CompactG}
\mathbf{G} = \left( \begin{array}{cc}
\mathbf{M} & \mathbf{J} \\
-\mathbf{Y} \mathbf{J}^t \mathbf{Y} & \mathbf{Y} \mathbf{M}^{\ast} \mathbf{Y}
\end{array} \right).
\end{align}
Here we have used
\begin{align}
\mathbf{Y} = -\im \left( \begin{array}{cc}
\textbf{0} & \textbf{1}_L \\
-\textbf{1}_L & \textbf{0}
\end{array} \right),
\end{align}
where $\textbf{1}_l$ denotes an identity matrix of size $l$. In the following we will also use
matrices
\begin{align}
\mathbf{X} = \left( \begin{array}{cc}
\textbf{0} & \textbf{1}_L \\
\textbf{1}_L & \textbf{0}
\end{array} \right)
\end{align}
and
\begin{align}
\mathbf{Z} = \left( \begin{array}{cc}
\textbf{1}_L & \textbf{0} \\
\textbf{0} & -\textbf{1}_L
\end{array} \right).
\end{align}
Now we assume that there exists a transformation
\begin{align} \label{eq:trans1}
\left( \begin{array}{cccc}
\textbf{b}_{1\rightarrow L} \\
\textbf{b}_{1\rightarrow L}^{\dagger} \\
\textbf{b}_{(L+1)\rightarrow 2L}^{\dagger} \\
\textbf{b}_{(L+1)\rightarrow 2L} \\
\end{array} \right) = \mathbf{W} \left( \begin{array}{cccc}
\textbf{c}_{1\rightarrow L}\\
\textbf{c}_{(L+1)\rightarrow 2L} \\
\textbf{c}_{(L+1)\rightarrow 2L}^{\prime} \\
\textbf{c}_{1\rightarrow L}^{\prime} \\
\end{array} \right),
\end{align}
which preserves the fermionic anti-commutation relations
\begin{align}\label{eq:commutation}
\left\lbrace \left( \begin{array}{cccc}
\textbf{c}_{1\rightarrow L}\\
\textbf{c}_{(L+1)\rightarrow 2L} \\
\textbf{c}_{(L+1)\rightarrow 2L}^{\prime} \\
\textbf{c}_{1\rightarrow L}^{\prime} \\
\end{array} \right), \left( \begin{array}{cccc}
\textbf{c}_{1\rightarrow L}^{\prime}\\
\textbf{c}_{(L+1)\rightarrow 2L}^{\prime} \\
\textbf{c}_{(L+1)\rightarrow 2L} \\
\textbf{c}_{1\rightarrow L} \\
\end{array} \right)^t \right\rbrace = \textbf{1}_{4L}.
\end{align}
From Eq.(\ref{eq:trans1}) we have
\begin{align}\label{eq:trans2}
&\left( \begin{array}{cccc}
\textbf{c}_{1\rightarrow L}^{\prime}\\
\textbf{c}_{(L+1)\rightarrow 2L}^{\prime} \\
\textbf{c}_{(L+1)\rightarrow 2L} \\
\textbf{c}_{1\rightarrow L} \\
\end{array} \right) \nonumber \\
&= \left( \begin{array}{cc}
\textbf{0} & \mathbf{X} \\
\mathbf{X} & \textbf{0}
\end{array} \right)\mathbf{W}^{-1} \left( \begin{array}{cc}
\mathbf{X} & \textbf{0} \\
\textbf{0} & \mathbf{X}
\end{array} \right)\left( \begin{array}{cccc}
\textbf{b}_{1\rightarrow L}^{\dagger} \\
\textbf{b}_{1\rightarrow L} \\
\textbf{b}_{(L+1)\rightarrow 2L} \\
\textbf{b}_{(L+1)\rightarrow 2L}^{\dagger} \\
\end{array} \right).
\end{align}
Substituting Eqs.(\ref{eq:trans1}, \ref{eq:trans2}) into Eq.(\ref{eq:commutation}), we get
\begin{align} \label{eq:Winverse}
\mathbf{W}^{-1} = \left( \begin{array}{cc}
\textbf{0} & \mathbf{X} \\
\mathbf{X} & \textbf{0}
\end{array} \right) \mathbf{W}^{t} \left( \begin{array}{cc}
\mathbf{X} & \textbf{0} \\
\textbf{0} & \mathbf{X}
\end{array} \right).
\end{align}
In the following we refer to the new representation defined by the modes $\hat{c}$ as the $\mathcal{C}$ representation.
Using the transformation in Eq.(\ref{eq:trans1}), we get
\begin{align} \label{eq:LC}
\mathcal{L}^{\mathcal{C}} = & \left(
\begin{array}{c}
\textbf{c}_{1\rightarrow L}^{\prime} \\
\textbf{c}_{L+1\rightarrow 2L}^{\prime} \\
\textbf{c}_{L+1\rightarrow 2L} \\
\textbf{c}_{1\rightarrow L} \\
\end{array}
\right)^t \mathbf{W}^{-1}\mathbf{G}\mathbf{W}
\left(
\begin{array}{c}
\textbf{c}_{1\rightarrow L} \\
\textbf{c}_{L+1\rightarrow 2L} \\
\textbf{c}_{L+1\rightarrow 2L}^{\prime} \\
\textbf{c}_{1\rightarrow L}^{\prime} \\
\end{array}
\right) \nonumber \\
&-{\rm tr}(\mathbf{\Lambda}^-+\mathbf{\Lambda}^+),
\end{align}
where $\mathcal{L}^{\mathcal{C}}$ denotes the Lindbladian $\mathcal{L}$ in the $\mathcal{C}$ representation. This implies that the problem of finding the normal master modes of the system reduces to diagonalizing the $4L\times 4L$ matrix $\mathbf{G}$.
\subsection{Normal master modes}
The matrix $\mathbf{G}$ satisfies two symmetries which imply that, for each eigenvalue $\omega$, the values $-\omega$ and $\pm \omega^*$ are also eigenvalues. These symmetries are the first step in reducing the problem from diagonalizing a $4L\times 4L$ matrix to diagonalizing a $2L\times 2L$ one. More in detail, $\mathbf{G}$ satisfies
\begin{align}
&\left( \begin{array}{cc}
\mathbf{X} & \textbf{0} \\
\textbf{0} & \mathbf{X}
\end{array} \right) \mathbf{G} \left( \begin{array}{cc}
\mathbf{X} & \textbf{0} \\
\textbf{0} & \mathbf{X}
\end{array} \right) = -\mathbf{G}^t \label{eq:xsymmetry} \\
&\left( \begin{array}{cc}
\textbf{0} & \mathbf{Y} \\
-\mathbf{Y} & \textbf{0}
\end{array} \right) \mathbf{G} \left( \begin{array}{cc}
\textbf{0} & \mathbf{Y} \\
-\mathbf{Y} & \textbf{0}
\end{array} \right) = -\mathbf{G}^{\ast} \label{eq:ysymmetry}
\end{align}
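Both symmetries can be verified numerically for random admissible parameter matrices. A sketch (numpy; sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 3
A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
h = (A + A.conj().T) / 2
B = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
g = (B - B.T) / 2
C = rng.normal(size=(L, L)); Lp = C @ C.T
D = rng.normal(size=(L, L)); Lm = D @ D.T

I = np.eye(L); O = np.zeros((L, L))
hbar = (-1j * h - Lm.T + Lp) / 2
G = np.block([
    [hbar,              -1j * g / 2, Lp,                 O],
    [1j * g.conj() / 2, -hbar.T,     O,                  -Lm],
    [Lm.T,              O,           -hbar.conj().T,     1j * g / 2],
    [O,                 -Lp.T,       -1j * g.conj() / 2, hbar.conj()],
])

X = np.block([[O, I], [I, O]])
Y = -1j * np.block([[O, I], [-I, O]])
O2 = np.zeros((2 * L, 2 * L))
S1 = np.block([[X, O2], [O2, X]])    # symmetry (eq:xsymmetry)
S2 = np.block([[O2, Y], [-Y, O2]])   # symmetry (eq:ysymmetry)

assert np.allclose(S1 @ G @ S1, -G.T)
assert np.allclose(S2 @ G @ S2, -G.conj())
```

The first symmetry pairs $\omega$ with $-\omega$, the second pairs $\omega$ with $\omega^{\ast}$, so the spectrum is closed under both sign flip and conjugation.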
Using Eq.(\ref{eq:xsymmetry}) we find that if
\begin{align}
\vec{\mathbf{x}} = \left(
\begin{array}{c}
\vec{\mathbf{u}} \\
\vec{\mathbf{v}} \\
\end{array}
\right)
\end{align}
is a right eigenvector of $\mathbf{G}$ for the eigenvalue $\omega$, then
\begin{align}
\vec{\mathbf{x}}^t \left( \begin{array}{cc}
\mathbf{X} & \textbf{0} \\
\textbf{0} & \mathbf{X}
\end{array} \right) = \left(
\begin{array}{c}
\mathbf{X} \vec{\mathbf{u}} \\
\mathbf{X} \vec{\mathbf{v}} \\
\end{array}
\right)^t
\end{align}
is a left eigenvector of $\mathbf{G}$ corresponding to $-\omega$. Moreover, from Eq.(\ref{eq:ysymmetry}) we obtain that
\begin{align}
\left( \begin{array}{cc}
\textbf{0} & \mathbf{Y} \\
-\mathbf{Y} & \textbf{0}
\end{array} \right) \vec{\mathbf{x}}^{\ast} = \left(
\begin{array}{c}
\mathbf{Y} \vec{\mathbf{v}}^{\ast} \\
-\mathbf{Y} \vec{\mathbf{u}}^{\ast} \\
\end{array}
\right)
\end{align}
is another right eigenvector of $\mathbf{G}$ corresponding to $\omega^{\ast}$.
At this point we define a $2L \times 2L$ matrix $\mathbf{P}$
\begin{align}\label{eq:defineP}
\mathbf{P} = \left(
\begin{array}{cc}
\bar{\mathbf{P}} & -\im\mathbf{g}/2 \\
\im\mathbf{g}^{\ast}/2 & \bar{\mathbf{P}}^{\ast}
\end{array}
\right)
\end{align}
with
\begin{align}
\bar{\mathbf{P}} = \bar{\mathbf{h}} - \mathbf{\Lambda}^+ = \left( -\im \mathbf{h} - {\mathbf{\Lambda}^-}^t - \mathbf{\Lambda}^+ \right)/2
\end{align}
an $L\times L$ matrix. $\mathbf{P}$ satisfies the symmetry
\begin{align}
\mathbf{X} \mathbf{P} \mathbf{X} = \mathbf{P}^{\ast}.
\end{align}
Therefore if $\vec{\mathbf{y}}$ is a right eigenvector of $\mathbf{P}$ corresponding to $\omega$, then $\mathbf{X} \vec{\mathbf{y}}^{\ast}$ is another right eigenvector of $\mathbf{P}$ corresponding to $\omega^{\ast}$. Assuming that $\mathbf{P}$ has the eigen-decomposition
\begin{align}\label{eq:eigenP}
\mathbf{P} \left(\begin{array}{cc}
\mathbf{R} & \mathbf{T}^{\ast} \\
\mathbf{T} & \mathbf{R}^{\ast}
\end{array} \right) = \left(\begin{array}{cc}
\mathbf{R} & \mathbf{T}^{\ast} \\
\mathbf{T} & \mathbf{R}^{\ast}
\end{array} \right) \left(\begin{array}{cc}
\lmp & \textbf{0} \\
\textbf{0} & \lmpc
\end{array} \right),
\end{align}
then, from Eqs.(\ref{eq:CompactG},\ref{eq:defineP},\ref{eq:eigenP}), it is possible to verify that $ \left(\begin{array}{c}
\mathbf{R} \\
\mathbf{T} \\
-\mathbf{R} \\
\mathbf{T}
\end{array} \right)$ and $ \left(\begin{array}{c}
-\mathbf{T}^{\ast} \\
-\mathbf{R}^{\ast} \\
\mathbf{T}^{\ast} \\
-\mathbf{R}^{\ast}
\end{array} \right)$ constitute $2L$ right eigenvectors of $\mathbf{G}$, corresponding to the $2L$ eigenvalues $\lmp$ and $\lmpc$. From the symmetry in Eq.(\ref{eq:xsymmetry}) we know there exist $2L$ additional eigenvalues, which are $-\lmp$ and $-\lmpc$, and we denote the corresponding eigenvectors as $ \left(\begin{array}{c}
\mathbf{A} \\
-\mathbf{B} \\
\mathbf{C} \\
\mathbf{D}
\end{array} \right)$ and $ \left(\begin{array}{c}
\mathbf{D}^{\ast} \\
-\mathbf{C}^{\ast} \\
\mathbf{B}^{\ast} \\
\mathbf{A}^{\ast}
\end{array} \right)$. With these notations, $\mathbf{W}$ can be written as
\begin{align}\label{eq:W4}
\mathbf{W} = \left(
\begin{array}{cccc}
\mathbf{R} & -\mathbf{T}^{\ast} & \mathbf{D}^{\ast} & \mathbf{A} \\
\mathbf{T} & -\mathbf{R}^{\ast} & -\mathbf{C}^{\ast} & -\mathbf{B} \\
-\mathbf{R} & \mathbf{T}^{\ast} & \mathbf{B}^{\ast} & \mathbf{C} \\
\mathbf{T} & -\mathbf{R}^{\ast} & \mathbf{A}^{\ast} & \mathbf{D} \\
\end{array}
\right),
\end{align}
and then $\mathbf{G}$ can be diagonalized as follows
\begin{align}\label{eq:eigen4}
\mathbf{W}^{-1}\mathbf{G}\mathbf{W} = \left(
\begin{array}{cccc}
\lmp & \textbf{0} & \textbf{0} & \textbf{0} \\
\textbf{0} & \lmpc & \textbf{0} & \textbf{0} \\
\textbf{0} & \textbf{0} & -\lmpc & \textbf{0} \\
\textbf{0} & \textbf{0} & \textbf{0} & -\lmp \\
\end{array}
\right).
\end{align}
Substituting Eq.(\ref{eq:eigen4}) into Eq.(\ref{eq:LC}), also noticing from Eq.(\ref{eq:eigenP}) that
\begin{align}
\sum_{i=1}^L\left(\lambda_{P, i}+\lambda_{P, i}^{\ast}\right) = {\rm tr}(\mathbf{P}) = - {\rm tr}(\mathbf{\Lambda}^++\mathbf{\Lambda}^-),
\end{align}
we get the compact expression
\begin{align}\label{eq:LCdiag}
\mathcal{L}^{\mathcal{C}} =& 2\sum_{i=1}^L \lambda_{P, i}\hat{c}^{\prime}_i\hat{c}_i + 2\sum_{i=1}^L \lambda_{P, i}^{\ast}\hat{c}^{\prime}_{L+i}\hat{c}_{L+i}.
\end{align}
This means that, in order to compute all the $4L$ rapidities, it is sufficient to diagonalize the $2L\times 2L$ matrix $\mathbf{P}$ in Eq.(\ref{eq:defineP}).
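This reduction can be checked numerically: the rapidities obtained from the $2L\times 2L$ matrix $\mathbf{P}$, together with their negatives, reproduce the spectrum of the $4L\times 4L$ matrix $\mathbf{G}$, and the trace identity holds. A sketch (numpy; sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
L = 3
A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
h = (A + A.conj().T) / 2
B = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
g = (B - B.T) / 2
C = rng.normal(size=(L, L)); Lp = C @ C.T
D = rng.normal(size=(L, L)); Lm = D @ D.T

I = np.eye(L); O = np.zeros((L, L))
hbar = (-1j * h - Lm.T + Lp) / 2
G = np.block([
    [hbar,              -1j * g / 2, Lp,                 O],
    [1j * g.conj() / 2, -hbar.T,     O,                  -Lm],
    [Lm.T,              O,           -hbar.conj().T,     1j * g / 2],
    [O,                 -Lp.T,       -1j * g.conj() / 2, hbar.conj()],
])

Pbar = (-1j * h - Lm.T - Lp) / 2        # = hbar - Lambda^+
P = np.block([[Pbar, -1j * g / 2], [1j * g.conj() / 2, Pbar.conj()]])

# symmetry X P X = P^* (spectrum closed under conjugation)
X = np.block([[O, I], [I, O]])
assert np.allclose(X @ P @ X, P.conj())

# trace identity: tr(P) = -tr(Lambda^+ + Lambda^-)
assert np.isclose(np.trace(P), -np.trace(Lp + Lm))

# every eigenvalue of P, and its negative, appears in the spectrum of G
wG = np.linalg.eigvals(G)
wP = np.linalg.eigvals(P)
assert all(np.min(np.abs(wG - mu)) < 1e-8 for mu in wP)
assert all(np.min(np.abs(wG + mu)) < 1e-8 for mu in wP)
```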
\section{Computing quadratic observables} \label{sec:observable}
We now show that in order to compute any two-particle observable it is sufficient to solve the Lyapunov equation (\ref{eq:pxxp}). In order to do so we first derive the Lyapunov equation, and then show its connection to the two-particle observables. To start, we define
\begin{align}
\mathbf{E} =& \left( \begin{array}{cc}
\mathbf{R} & -\mathbf{T}^{\ast} \\
\mathbf{T} & -\mathbf{R}^{\ast}
\end{array} \right) \\
\mathbf{F} =& \left( \begin{array}{cc}
\mathbf{D}^{\ast} & \mathbf{A} \\
-\mathbf{C}^{\ast} & -\mathbf{B}
\end{array} \right) \\
{\tilde{\pmb{\lambda}}} =& \left( \begin{array}{cc}
\lmp & \textbf{0} \\
\textbf{0} & \lmpc
\end{array} \right).
\end{align}
We can see that $\mathbf{X} \mathbf{E} \mathbf{X} = -\mathbf{E}^{\ast}$. With these definitions, we can write $\mathbf{W}$ as
\begin{align}
\mathbf{W} = \left( \begin{array}{cc}
\mathbf{E} & \mathbf{F} \\
-\mathbf{Z} \mathbf{E} & -\im\mathbf{Y} \mathbf{F}^{\ast} \mathbf{X}
\end{array} \right).
\end{align}
Using Eq.(\ref{eq:Winverse}), we get
\begin{align}
\mathbf{W}^{-1} = \left( \begin{array}{cc}
\mathbf{X} \mathbf{F}^t \mathbf{X} & \mathbf{F}^{\dagger}\mathbf{Z} \\
-\mathbf{E}^{\dagger} & -\mathbf{E}^{\dagger}\mathbf{Z}
\end{array} \right).
\end{align}
Then from $\mathbf{W}^{-1}\mathbf{W} = \textbf{1}_{4L}$, we get
\begin{align}
\left( \begin{array}{cc}
\mathbf{X} \mathbf{F}^t \mathbf{X} \mathbf{E} - \mathbf{F}^{\dagger}\mathbf{E} & \mathbf{X} \mathbf{F}^t \mathbf{X} \mathbf{F} - \mathbf{F}^{\dagger}\mathbf{X} \mathbf{F}^{\ast}\mathbf{X} \\
\textbf{0} & -\mathbf{E}^{\dagger}\mathbf{F} + \mathbf{E}^{\dagger}\mathbf{X} \mathbf{F}^{\ast}\mathbf{X}
\end{array} \right) = \textbf{1}_{4L},
\end{align}
from which we get two independent matrix equations
\begin{align}
&\mathbf{X} \mathbf{F}^t \mathbf{X} \mathbf{E} - \mathbf{F}^{\dagger}\mathbf{E} = \textbf{1}_{2L} \rightarrow \mathbf{F}^{\dagger} = \mathbf{X} \mathbf{F}^t \mathbf{X} - \mathbf{E}^{-1} \label{eq:indep1} \\
&\mathbf{X} \mathbf{F}^t \mathbf{X} \mathbf{F} - \mathbf{F}^{\dagger}\mathbf{X} \mathbf{F}^{\ast}\mathbf{X} = \textbf{0}.
\end{align}
Now we rewrite Eq.(\ref{eq:eigen4}) in terms of $\mathbf{E}$, $\mathbf{F}$ and ${\tilde{\pmb{\lambda}}}$:
\begin{align}
&\left( \begin{array}{cc}
\mathbf{M} & \mathbf{J} \\
-\mathbf{Y} \mathbf{J}^t \mathbf{Y} & \mathbf{Y} \mathbf{M}^{\ast} \mathbf{Y}
\end{array} \right)\left( \begin{array}{cc}
\mathbf{E} & \mathbf{F} \\
-\mathbf{Z} \mathbf{E} & -\im\mathbf{Y} \mathbf{F}^{\ast} \mathbf{X}
\end{array} \right) \nonumber \\
=&\left( \begin{array}{cc}
\mathbf{E} & \mathbf{F} \\
-\mathbf{Z} \mathbf{E} & -\im\mathbf{Y} \mathbf{F}^{\ast} \mathbf{X}
\end{array} \right)\left(\begin{array}{cc}
{\tilde{\pmb{\lambda}}} & \textbf{0} \\
\textbf{0} & -{\tilde{\pmb{\lambda}}^{\ast}} \\
\end{array} \right),
\end{align}
from which we have
\begin{align}
& \mathbf{M}\mathbf{E} - \mathbf{J}\mathbf{Z} \mathbf{E} = \mathbf{E} {\tilde{\pmb{\lambda}}} \\
& \mathbf{M}\mathbf{F} - \im \mathbf{J} \mathbf{Y} \mathbf{F}^{\ast}\mathbf{X} = -\mathbf{F} {\tilde{\pmb{\lambda}}^{\ast}}.
\end{align}
Since $\mathbf{F}^{\ast} = \mathbf{X} \mathbf{F}\mathbf{X} - {\mathbf{E}^t}^{-1}$, we have
\begin{align}
\mathbf{M}\mathbf{F} - \im\mathbf{J}\mathbf{Y}(\mathbf{X} \mathbf{F}\mathbf{X}-{\mathbf{E}^t}^{-1})\mathbf{X} = -\mathbf{F}{\tilde{\pmb{\lambda}}^{\ast}},
\end{align}
from which we get
\begin{align}
(\mathbf{M} - \mathbf{J}\mathbf{Z})\mathbf{F} + \im\mathbf{J}\mathbf{Y} {\mathbf{E}^t}^{-1}\mathbf{X} = -\mathbf{F} \mathbf{X} {\tilde{\pmb{\lambda}}} \mathbf{X}.
\end{align}
Since $\mathbf{M} -\mathbf{J}\mathbf{Z} = \mathbf{P}$, we have
\begin{align}
\mathbf{P}\mathbf{F}\mathbf{X} \mathbf{E}^t + \im\mathbf{J} \mathbf{Y} = -\mathbf{F}\mathbf{X} {\tilde{\pmb{\lambda}}}\mathbf{E}^t = -\mathbf{F}\mathbf{X} \mathbf{E}^t \mathbf{P}^t,
\end{align}
therefore we get
\begin{align}
-\mathbf{P} \mathbf{F} \mathbf{X} \mathbf{E}^t \mathbf{X} - \mathbf{F}\mathbf{X} \mathbf{E}^t \mathbf{X} \mathbf{P}^{\dagger} = \mathbf{J} \mathbf{Z}.
\end{align}
Denoting $\mathbf{\Omega} = -\mathbf{F}\mathbf{X} \mathbf{E}^t \mathbf{X}$, we get the equation for $\mathbf{\Omega}$
\begin{align}\label{eq:pxxp}
\mathbf{P}\mathbf{\Omega}+\mathbf{\Omega}\mathbf{P}^{\dagger} = \mathbf{J}\mathbf{Z}.
\end{align}
This is a Lyapunov equation, which can be solved with methods that scale as $O(L^3)$ \cite{Lyap1, Lyap2}. We should also stress that for quadratic open systems it is often possible to reduce the analysis of the problem to equations of this form, see for example \cite{ZunkovicProsen2010, BanchiZanardi2014}.
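In practice, the Bartels--Stewart solver in scipy handles exactly this form, $A X + X A^{\dagger} = Q$. A sketch (the $0.1\,\textbf{1}_L$ shift added to $\mathbf{\Lambda}^{\pm}$ below is an assumption made here, not from the text, to guarantee strictly positive definite bath matrices and hence a unique solution):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
L = 4
A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
h = (A + A.conj().T) / 2
B = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
g = (B - B.T) / 2
C = rng.normal(size=(L, L)); Lp = C @ C.T + 0.1 * np.eye(L)
D = rng.normal(size=(L, L)); Lm = D @ D.T + 0.1 * np.eye(L)

I = np.eye(L); O = np.zeros((L, L))
Pbar = (-1j * h - Lm.T - Lp) / 2
P = np.block([[Pbar, -1j * g / 2], [1j * g.conj() / 2, Pbar.conj()]])
J = np.block([[Lp, O], [O, -Lm]])
Zm = np.block([[I, O], [O, -I]])

# solve P Omega + Omega P^dagger = J Z  (continuous Lyapunov form)
Omega = solve_continuous_lyapunov(P, J @ Zm)
residual = np.linalg.norm(P @ Omega + Omega @ P.conj().T - J @ Zm)
assert residual < 1e-8
```

With $\mathbf{\Lambda}^{\pm}$ positive definite the Hermitian part of $\mathbf{P}$ is negative definite, so all rapidities lie strictly in the left half plane and the solution is unique.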
Now we show how Eq.(\ref{eq:pxxp}) enters the computation of observables. We start by demonstrating that
\begin{align}\label{eq:annihilation}
{}_{\mathcal{A}}\langle \id \vert \hat{c}^{\prime}_k = 0, \quad \forall 1 \leq k \leq 2L,
\end{align}
where ${}_{\mathcal{A}}\langle \id \vert$ is the transpose of the identity operator in the $\mathcal{A}$ representation, $\vert \id \rangle = \sum_{i_1, i_2, \dots, i_L}\vert i_1, i_2, \dots, i_L, i_1, i_2, \dots, i_L \rangle_{\mathcal{A}}$. From the inverse of Eq.(\ref{eq:trans1}) we have
\begin{subequations}
\begin{align}
\hat{c}_{i}^{\prime} &= \sum_{k=1}^L\left( \mathbf{T}^t_{ik}\hat{b}_k + \mathbf{R}^t_{ik}\hat{b}^{\dagger}_k + \mathbf{T}^t_{ik}\hat{b}^{\dagger}_{L+k} - \mathbf{R}^t_{ik}\hat{b}_{L+k} \right) \nonumber \\
& =\sum_{k=1}^L\left[ \mathbf{T}^t_{ik}\left(\hat{b}_k +\hat{b}^{\dagger}_{L+k} \right) + \mathbf{R}^t_{ik}\left(\hat{b}^{\dagger}_k-\hat{b}_{L+k}\right) \right]
\\
\hat{c}_{L+i}^{\prime} &= \sum_{k=1}^L \left(-\mathbf{R}^{\dagger}_{ik}\hat{b}_k - \mathbf{T}^{\dagger}_{ik}\hat{b}^{\dagger}_k - \mathbf{R}^{\dagger}_{ik}\hat{b}^{\dagger}_{L+k} + \mathbf{T}^{\dagger}_{ik}\hat{b}_{L+k} \right) \nonumber \\
&=\sum_{k=1}^L \left[-\mathbf{R}^{\dagger}_{ik}\left(\hat{b}_k+\hat{b}^{\dagger}_{L+k}\right) - \mathbf{T}^{\dagger}_{ik}\left(\hat{b}^{\dagger}_k-\hat{b}_{L+k}\right) \right].
\end{align}
\end{subequations}
Therefore to prove Eq.(\ref{eq:annihilation}), it is sufficient to prove that for any $1\leq k \leq L$,
\begin{align}\label{eq:identity1}
{}_{\mathcal{A}}\langle \id \vert \left(\hat{b}_k + \hat{b}^{\dagger}_{L+k}\right) = 0
\end{align}
and
\begin{align}\label{eq:identity2}
{}_{\mathcal{A}}\langle \id \vert \left(\hat{b}^{\dagger}_k - \hat{b}_{L+k}\right) = 0,
\end{align}
both of which have already been proved in \cite{GuoDario2017}. Now using Eq.(\ref{eq:trans1}) we get
\begin{subequations} \label{eq:btoc}
\begin{align}
\hat{b}_i &= \sum_{k=1}^L\left( \mathbf{R}_{ik}\hat{c}_k - \mathbf{T}_{ik}^{\ast}\hat{c}_{L+k} + \mathbf{D}_{ik}^{\ast}\hat{c}_{L+k}^{\prime} + \mathbf{A}_{ik}\hat{c}_k^{\prime} \right) \\
\hat{b}^{\dagger}_i &= \sum_{k=1}^L\left( \mathbf{T}_{ik}\hat{c}_k - \mathbf{R}_{ik}^{\ast}\hat{c}_{L+k} - \mathbf{C}_{ik}^{\ast}\hat{c}_{L+k}^{\prime} - \mathbf{B}_{ik}\hat{c}_k^{\prime} \right).
\end{align}
\end{subequations}
We can thus write
\begin{align}
\mathbf{O} =& {\rm tr}\left(\left(
\begin{array}{c}
\textbf{b}^{\dagger} \\
\textbf{b} \\
\end{array}
\right) \left(
\begin{array}{c}
\textbf{b} \\
\textbf{b}^{\dagger} \\
\end{array}
\right)^t \hat{\rho}\right) \nonumber \\
=& {}_{\mathcal{A}}\langle \id \vert \left(
\begin{array}{c}
\textbf{b}^{\dagger} \\
\textbf{b} \\
\end{array}
\right) \left(
\begin{array}{c}
\textbf{b} \\
\textbf{b}^{\dagger} \\
\end{array}
\right)^t \vert \rho_{\rm ss} \rangle.
\end{align}
Substituting Eqs.(\ref{eq:btoc}) into the above equation, and using Eq.(\ref{eq:annihilation}), we get
\begin{align}
\mathbf{O} =& \left(\begin{array}{cc}
\mathbf{T}\mathbf{A}^t - \mathbf{R}^{\ast}\mathbf{D}^{\dagger} & -\mathbf{T}\mathbf{B}^t + \mathbf{R}^{\ast}\mathbf{C}^{\dagger} \\
\mathbf{R}\mathbf{A}^t - \mathbf{T}^{\ast}\mathbf{D}^{\dagger} & -\mathbf{R}\mathbf{B}^t + \mathbf{T}^{\ast}\mathbf{C}^{\dagger}
\end{array}
\right) \nonumber \\
=& \mathbf{X} \mathbf{E} \mathbf{X} \mathbf{F}^t = -\mathbf{\Omega}^t .
\end{align}
Therefore, the quadratic observables $\mathbf{O}$ can be determined by solving Eq.(\ref{eq:pxxp}).
\section{Solutions for a boundary driven XY model}\label{sec:solveXY}
In the following we apply our method to a boundary driven $XY$ model. The Lindblad equation in this case is
\begin{align}
\mathcal{L}_{\rm XY} (\hat{\rho}) = -\im [\Hop_{\rm XY}, \hat{\rho}] + \mathcal{D}_{\rm XY}(\hat{\rho}),
\end{align}
with
\begin{align}
\Hop_{\rm XY} =& \frac{J\left(1+\gamma\right)}{2}\sum_{i=1}^{L-1}\hat{\sigma}^x_i\hat{\sigma}^x_{i+1} \nonumber \\
&+ \frac{J\left(1-\gamma\right)}{2}\sum_{i=1}^{L-1}\hat{\sigma}^y_i\hat{\sigma}^y_{i+1} + h_z \sum_{i=1}^L\hat{\sigma}^z_i,
\end{align}
and
\begin{align}
\mathcal{D}_{\rm XY}(\hat{\rho}) = \sum_{l=1,L}&\left[\Lambda^+_{l}(2\hat{\sigma}^+_{l}\hat{\rho} \hat{\sigma}^-_{l}-\{\hat{\sigma}^-_{l}\hat{\sigma}^+_{l}, \hat{\rho}\}) \right. \\ &+ \left. \Lambda^-_{l}(2\hat{\sigma}^-_{l}\hat{\rho} \hat{\sigma}^+_{l}- \{\hat{\sigma}^+_{l}\hat{\sigma}^-_{l}, \hat{\rho} \}) \right]. \label{eq:DissLC}
\end{align}
Applying the Jordan-Wigner transformation \cite{JordanWigner1928, LiebMattis1961}, the $XY$ chain can be mapped onto a fermionic chain
\begin{align}
\Hop_F =& J\sum_{i=1}^{L-1}\left[\hat{\alpha}_i\hat{\alpha}^{\dagger}_{i+1}+\hat{\alpha}_{i+1}\hat{\alpha}^{\dagger}_i + \gamma\left(\hat{\alpha}_i\hat{\alpha}_{i+1}+\hat{\alpha}^{\dagger}_{i+1}\hat{\alpha}^{\dagger}_i\right)\right] \nonumber \\
&+h_z\sum_{i=1}^L(2\hat{\alpha}^{\dagger}_i\hat{\alpha}_i-1).
\end{align}
The dissipator can also be mapped into the fermionic $\mathcal{B}$ representation as in \cite{GuoDario2017}, after which we can read off the $L\times L$ matrices $\bar{\mathbf{P}}$ and $\mathbf{g}$, whose non-zero elements are
\begin{align}
&\bar{\mathbf{P}}_{l, l+1} = \bar{\mathbf{P}}_{l+1, l} = -\frac{\im J}{2}; \quad \forall 1 \leq l < L \\
&\bar{\mathbf{P}}_{l, l} = -\im h_z; \quad \forall 1 < l < L \\
&\bar{\mathbf{P}}_{1, 1} = -\im h_z - \frac{\Gamma_1}{2}, \quad \bar{\mathbf{P}}_{L, L} = -\im h_z - \frac{\Gamma_L}{2},
\end{align}
where $\Gamma_1 = \Lambda_1^+ + \Lambda_1^-$ and $\Gamma_L = \Lambda_L^+ + \Lambda_L^-$,
and
\begin{align}
\mathbf{g}_{l, l+1} = J\gamma, \quad \mathbf{g}_{l+1, l} = -J\gamma; \quad \forall 1 \leq l < L.
\end{align}
$\bar{\mathbf{P}}$ is a tridiagonal bordered Toeplitz matrix, and $\mathbf{g}$ is an anti-symmetric tridiagonal Toeplitz matrix. In the isotropic case $\gamma=0$, $\mathbf{P}$ is block diagonal with $\bar{\mathbf{P}}$ and $\bar{\mathbf{P}}^{\ast}$ on its diagonal; $\bar{\mathbf{P}}$ has been shown to be analytically diagonalizable in \cite{Yueh2005, Kouachi2006, Willms2008}, something which we exploited in \cite{GuoDario2017}.\\
Here we show analytical solutions for arbitrary $J, \gamma$, provided $h_z=0$. Eq.(\ref{eq:eigenP}) for $\mathbf{P}$ can be expanded into two independent equations
\begin{subequations}\label{eq:eigenPsub}
\begin{align}
&\bar{\mathbf{P}}\mathbf{R} - \frac{\im \mathbf{g}\mathbf{T}}{2} = \mathbf{R}\lmp \\
&\bar{\mathbf{P}}^{\ast}\mathbf{T} + \frac{\im \mathbf{g}\mathbf{R}}{2} = \mathbf{T}\lmp.
\end{align}
\end{subequations}
We then introduce two $L\times L$ diagonal matrices $\mathbf{K}^{\pm}$ with the diagonal elements
\begin{align}
\mathbf{K}^+_{i, i} &= (-1)^{i+1} \\
\mathbf{K}^-_{i, i} &= (-1)^i,
\end{align}
for $1 \leq i \leq L$. For the $XY$ chain the following relations
\begin{align}
&\mathbf{K}^{\pm} \mathbf{K}^{\pm} = \id_L \\
&\mathbf{K}^{\pm} \bar{\mathbf{P}} \mathbf{K}^{\pm} = \bar{\mathbf{P}}^{\ast} \\
&\mathbf{K}^{\pm} \mathbf{g} \mathbf{K}^{\pm} = -\mathbf{g}
\end{align}
are valid. Using them we can solve Eqs.(\ref{eq:eigenPsub}) with the ansatz
\begin{align}\label{eq:ansatz}
\mathbf{T} = \mathbf{K}^{\pm} \mathbf{R}.
\end{align}
Substituting Eq.(\ref{eq:ansatz}) into Eqs.(\ref{eq:eigenPsub}), we get a single equation
\begin{align}
\left(\bar{\mathbf{P}}- \frac{\im \mathbf{g} \mathbf{K}^{\pm} }{2}\right) \mathbf{R} = \mathbf{R}\lmp.
\end{align}
Therefore, to diagonalize the boundary driven $XY$ chain with zero magnetic field, one only needs to diagonalize the two $L\times L$ matrices
\begin{align}
\mathbf{Q}^{\pm} = \bar{\mathbf{P}}- \im \mathbf{g} \mathbf{K}^{\pm}/2. \label{eq:Qpm}
\end{align}
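This reduction is straightforward to verify numerically for the boundary driven $XY$ chain. A sketch (numpy; the parameter values $L=4$, $J=1$, $\gamma=0.5$, $\Gamma_1=\Gamma_L=1$ are arbitrary choices, with $h_z=0$ as required):

```python
import numpy as np

L, J, gamma = 4, 1.0, 0.5
G1 = GL = 1.0            # Gamma_1, Gamma_L; h_z = 0 throughout

# bordered tridiagonal Pbar and anti-symmetric tridiagonal g
Pbar = np.zeros((L, L), dtype=complex)
gmat = np.zeros((L, L))
for l in range(L - 1):
    Pbar[l, l + 1] = Pbar[l + 1, l] = -1j * J / 2
    gmat[l, l + 1] = J * gamma
    gmat[l + 1, l] = -J * gamma
Pbar[0, 0] = -G1 / 2
Pbar[-1, -1] = -GL / 2

Kp = np.diag([(-1.0) ** i for i in range(L)])   # K^+ (0-based indexing)
Km = -Kp                                        # K^-

# the relations that make the ansatz T = K^{+-} R work (valid for h_z = 0)
assert np.allclose(Kp @ Pbar @ Kp, Pbar.conj())
assert np.allclose(Kp @ gmat @ Kp, -gmat)

P = np.block([[Pbar, -1j * gmat / 2], [1j * gmat.conj() / 2, Pbar.conj()]])
Qp = Pbar - 1j * gmat @ Kp / 2
Qm = Pbar - 1j * gmat @ Km / 2

# the spectra of Q^+ and Q^- together reproduce the spectrum of P
wP = np.linalg.eigvals(P)
wQ = np.concatenate([np.linalg.eigvals(Qp), np.linalg.eigvals(Qm)])
assert all(np.min(np.abs(wP - mu)) < 1e-8 for mu in wQ)
```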
Moreover, $\mathbf{Q}^{\pm}$ is a tridiagonal bordered 2-Toeplitz matrix, whose characteristic determinant is
\begin{align}\label{eq:exactodd}
\Delta_L = &\frac{(d_1d_2)^{m-1}}{\sin(\theta)}\left[d_1d_2\left(\frac{\Gamma_1+\Gamma_L}{2}-\lambda\right)\sin(m+1)\theta \right. \nonumber \\ &- \left. \left(\frac{\Gamma_1\Gamma_L\lambda}{4} + \frac{d_1^2\Gamma_1 + d_2^2\Gamma_L}{2}\right)\sin(m\theta)\right],
\end{align}
when $L=2m+1$ is odd and
\begin{align}\label{eq:exacteven}
\Delta_L = &\frac{(d_1d_2)^{m-1}}{\sin(\theta)}\left[\left(\frac{\Gamma_1\Gamma_L}{4}+d_2^2+\frac{(\Gamma_1+\Gamma_L)\lambda}{2}\right)\sin(m\theta) \right. \nonumber \\
&+ \left. d_1d_2\sin(m+1)\theta + \frac{\Gamma_1\Gamma_L}{4} \frac{d_1}{d_2} \sin(m-1)\theta \right]
\end{align}
when $L=2m$ is even (see Eqs.(4.a, 4.b) in \cite{Kouachi2006}). Here the eigenvalues $\lambda$ and the parameter $\theta$ are related by
\begin{align}
\lambda^2 = d_1^2+d_2^2 + 2d_1d_2\cos(\theta). \label{eq:lambda}
\end{align}
and $d_1, d_2$ are defined as
\begin{align}
&d_1 = -\im J\frac{1 \mp\gamma}{2} \\
&d_2 = -\im J\frac{1 \pm\gamma}{2}
\end{align}
for $\mathbf{Q}^{\pm}$, respectively. Therefore, the diagonalization of the matrices $\mathbf{Q}^{\pm}$ is reduced to solving the scalar trigonometric equations Eqs.(\ref{eq:exactodd}, \ref{eq:exacteven}) in the complex variable $\theta$. For an Ising chain, which corresponds to $\gamma=1$, we have $d_1d_2=0$. In this special case, Eqs.(\ref{eq:exactodd}, \ref{eq:exacteven}) have closed analytic solutions and all the allowed eigenvalues are (see Proposition 4.2 in \cite{Kouachi2006})
\begin{align}
\left\lbrace \pm\im J, -\frac{\Gamma_{1,L}}{2}, -\frac{\Gamma_{1,L}}{4}\pm\frac{\sqrt{\Gamma_{1,L}^2-16J^2}}{4} \right\rbrace . \label{eq:solutions}
\end{align}
It is interesting to point out that none of these eigenvalues depends on $L$, which means that the Lindbladian has a constant relaxation gap irrespective of the system size $L$.\\
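The size-independence of the Ising spectrum is easy to confirm numerically. A sketch (numpy; the parameters $J=\Gamma_1=\Gamma_L=1$ and the chain lengths are arbitrary choices) checking that every eigenvalue of $\mathbf{Q}^{+}$ lies in the set of Eq.(\ref{eq:solutions}), and that the slowest modes $\pm\im J$ have vanishing real part:

```python
import numpy as np

def Q_plus(L, J=1.0, G1=1.0, GL=1.0, gamma=1.0):
    """Build Q^+ for the boundary driven chain with h_z = 0."""
    Pbar = np.zeros((L, L), dtype=complex)
    gmat = np.zeros((L, L))
    for l in range(L - 1):
        Pbar[l, l + 1] = Pbar[l + 1, l] = -1j * J / 2
        gmat[l, l + 1] = J * gamma
        gmat[l + 1, l] = -J * gamma
    Pbar[0, 0] = -G1 / 2
    Pbar[-1, -1] = -GL / 2
    Kp = np.diag([(-1.0) ** i for i in range(L)])
    return Pbar - 1j * gmat @ Kp / 2

J, G = 1.0, 1.0
root = np.sqrt(complex(G**2 - 16 * J**2))
predicted = np.array([1j * J, -1j * J, -G / 2,
                      -G / 4 + root / 4, -G / 4 - root / 4])

for L in (5, 9):   # the allowed eigenvalue set does not change with L
    w = np.linalg.eigvals(Q_plus(L, J, G, G))
    for mu in w:
        assert np.min(np.abs(predicted - mu)) < 1e-8
    # the slowest decaying modes (+-iJ) have zero real part
    assert abs(np.max(w.real)) < 1e-8
```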
\begin{figure}
\includegraphics[width=\columnwidth]{Fig1}
\caption{ (a) Relaxation gap $\Delta$ versus system size $L$ for different values of magnetic field $h_z$. (b) Real and imaginary part of the rapidities $\lambda$ closest to the real axis and with positive imaginary part, for $h_z/J=0.01$ and different system sizes: yellow diamonds for $L=50$, red squares for $L=75$ and blue circles for $L=100$. (c) Real and imaginary part of the rapidities $\lambda$ closest to the real axis and with positive imaginary part, for $L=100$ and different magnetic fields: yellow diamonds for $h_z/J=0.03$, red squares for $h_z/J=0.02$ and blue circles for $h_z/J=0.01$. Common parameters to the three panels are $\gamma=1$, $\Gamma_1=1$, $\Gamma_L=1$.
} \label{fig:fig1}
\end{figure}
We now compare this result with numerical solutions and with the perturbative expression in \cite{Prosen2008}. In that study it was found that for $h_z \ne 0$ the relaxation gap scales as $1/L^3$ (see Eq.(85) of \cite{Prosen2008}). There is no contradiction between that result and ours, because for $h_z=0$ the predicted relaxation gap (the real part of the slowest decaying mode) also vanishes. In Fig.\ref{fig:fig1}(a) we show how the relaxation gap $\Delta$ scales with the system size for different values of the magnetic field $h_z$, computed by numerically diagonalizing the $2L\times 2L$ matrix $\mathbf{P}$ in Eq.(\ref{eq:defineP}). The scaling follows a power law well approximated by $L^{-3}$, and the gap decreases as $h_z$ becomes smaller. We also investigate the real and imaginary parts of the rapidities $\lambda$. For small $h_z$, the rapidities with the smallest real part are found near $\pm\im J$, and they approach these values for larger system size $L$ and as $h_z$ tends to $0$. This is shown in Fig.\ref{fig:fig1}(b-c), where we respectively increase the system size $L$ or decrease $h_z$.
We note here that the presence of slow decaying modes with a large imaginary part can result, when computing two-time observables on the steady state, in long-lasting periodic motions which break the continuous time-translation symmetry, thus resulting in a time crystal \cite{Wilczek2012, WatanabeOshikawa2015, SachaZakrzewski2018}. For studies using two-time correlations to investigate time crystals in dissipative systems see for example \cite{WangPoletti2018, TuckerRey2018}.
\section{conclusions}\label{sec:summary}
In this work we have studied the steady state of a dissipatively boundary-driven quadratic fermionic system in which the particle number is not conserved, i.e. an $XY$ chain. We have shown not only that it is possible to convert the problem of computing all the relaxation rates and normal master modes of the Lindblad master equation into the diagonalization of an $L\times L$ matrix (where $L$ is the number of spins), but also that this matrix has a particular structure (it is a tridiagonal bordered $2$-Toeplitz matrix) which allows the problem to be reduced to a scalar trigonometric equation.
Moreover, for the special case of the Ising chain we find explicit analytical solutions which are independent of the system size $L$.
The method presented here can be useful to study both the time evolution (since it gives access to all the normal master modes and rapidities) and the steady states of open quadratic fermionic systems far from equilibrium, even when the total number of particles is not conserved. Note that, once the problem is brought into the form of an $L\times L$ matrix of bordered $2$-Toeplitz type, it may also be possible to find further analytical solutions to Eqs.(\ref{eq:exactodd}, \ref{eq:exacteven}), as well as expressions for the eigenvectors, by referring to, for example, \cite{Yueh2005, Kouachi2006, Willms2008, Fonseca2007}.
\begin{acknowledgments}
D.P. acknowledges discussions with A.M. Rey. D.P. also acknowledges support from the Ministry of Education of Singapore AcRF MOE Tier-II (Project No. MOE2016-T2-1-065). C.G. acknowledges support from National Natural Science Foundation of China (11504430).
\end{acknowledgments}
1502.02974
\section{XOR games with $d$-outputs}
\section{Introduction}
Quantum non-local correlations are one of the most intriguing aspects of Nature, evidenced in the violation of Bell inequalities. Besides their foundational interest, these correlations have also proven to be useful in information processing tasks such as secure device-independent randomness amplification and expansion \cite{rand}, cryptographic secure key generation \cite{securekey} and reduction of communication complexity \cite{CommCompl}.
Concerning such applications, it is typically of most interest to compute the classical and quantum values of a Bell expression, the classical value being the maximum over local realistic assignments of outcomes, while the quantum value is the maximum attained using measurements on entangled quantum states. However, neither of these values is easy to calculate. Computing the classical value amounts to an integer program and is in general a hard problem \cite{Pitowsky, Hastad}. On the other hand, it is not even known whether the quantum value is computable for all Bell inequalities, since there is a priori no restriction on the dimension of the Hilbert space for the quantum states and measurements, although in some instances it is possible to compute the value efficiently or to find a good approximation. A hierarchy of semi-definite programs from \cite{Navascues} is typically used to obtain upper bounds on the quantum value, although the quality of approximation achieved by these bounds remains unknown. Moreover, the size of these programs increases exponentially with the number of inputs and outputs in the Bell expression. A central problem in non-locality theory is therefore to find easily computable bounds that handle general classes of Bell inequalities.
An important class of Bell inequalities for which the quantum value \textit{can} be computed exactly is the class known as two-party binary \textsc{xor} games or equivalently as bipartite two-outcome correlation inequalities. In a binary \textsc{xor} game, the two parties Alice and Bob receive inputs $x \in [m_A], y \in [m_B]$ (we denote $[m_A] := \{1,\dots, m_A\}$) and respond with outputs $a, b \in \{0,1\}$. The winning constraint for each pair of inputs $(x, y)$ only depends on the \textsc{xor} modulo $2$ of the parties' answers, i.e., the Bell expression in the binary \textsc{xor} game only involves probabilities $P(a \oplus_{2} b = k| x, y)$ for $k \in \{0,1\}$. The fact that these are equivalent to Bell inequalities for correlation functions with binary outcomes is seen by noting that in this case the correlators $\mathcal{E}_{x,y}$ are given by $\mathcal{E}_{x,y} = \sum_{k=0,1} (-1)^k P(a \oplus_{2} b = k | x, y)$. For these games, it was shown in \cite{Cleve, Wehner} based upon a theorem by Tsirelson \cite{Tsirelson} that the quantum value can be computed efficiently by means of a semi-definite program, although computing the classical value is known to be a hard problem even for this class of games \cite{Hastad}. Besides binary \textsc{xor} games, few general results are known regarding the maximum quantum violation of classes of Bell inequalities.
The study of correlation Bell inequalities for binary outcomes was in part driven by the fact that many of the quantum information-processing protocols were developed for qubits, for which binary outcome games appear naturally. Recently, there has been much interest in developing applications of higher-dimensional entanglement \cite{Exp-high-dim, Qudit-Toffoli, Qudit-randomness, Qudit-key-dist} for which Bell inequalities with more than two outcomes may be naturally suited. Therefore, both for fundamental reasons as well as for these applications, the study of Bell inequalities with more outcomes is crucial.
A natural extension of the binary outcome \textsc{xor} games is to the class of generalized \textsc{xor}-d games, where the outputs of the two parties are not restricted to be binary, although the winning constraint still depends upon the generalized \textsc{xor} (addition modulo $d$), with $d$ being the number of outcomes.
The generalization can also be extended to the class known as \textsc{linear} games \cite{Hastad}, where the parties output answers that are elements of a finite Abelian group and the winning constraint depends upon the group operation acting on the outputs. Linear games are the paradigmatic example of non-local games with more than two outcomes, and a study of their classical and quantum values is crucial, especially in light of applications such as \cite{Jed}.
In the context of Bell inequalities, these were first studied in \cite{Buhrman}, where a large-alphabet generalization of the CHSH inequality called CHSH-d was considered; it has since been investigated in \cite{Liang, Ji, Bavarian, Howard, GRRH+15}.
An important property of the \textsc{xor}-d games concerns their relationship with communication complexity: following \cite{vanDam, Wang}, it is seen that correlations (boxes) winning a non-trivial total-function \textsc{xor}-d game for prime $d$ can result in a trivialization of communication complexity.
A related information-theoretic principle called \textit{no quantum advantage in non-local computation} (no-NLC) has also been suggested in \cite{NLC}; this proposes that quantum correlations are those that do not provide any advantage over classical correlations in the task of distributed non-local computation of arbitrary binary functions, while general no-signaling correlations do. It is also of interest to investigate whether the above principle can be extended to functions of more outcomes.
In this paper, we present a novel efficiently computable bound to the quantum value of linear games and use it to derive several interesting properties, with particular emphasis on the important case of \textsc{xor}-d games for prime $d$. We illustrate the bound with the example of the CHSH-d game for prime and prime power $d$, recovering recent results derived using alternative (more technical) methods.
As another illustration, we use the bound to show that, for uniformly chosen inputs, no non-trivial total-function \textsc{xor}-d game can be won with a quantum strategy, and consequently that the no-signaling boxes that trivialize communication complexity cannot be realized within quantum theory. We further prove a large-alphabet generalization of the no-NLC principle, showing that quantum theory provides no advantage in the task of non-local computation of a restricted class of functions with $d$ outcomes for prime $d$.
For the sake of clarity of exposition, we only include sketches of proofs in the main text with details deferred to the Appendices.
\section{A bound on the quantum value of linear games.}
\label{sec:lin-bound}
Linear games are a generalization of \textsc{xor} games to an arbitrary output alphabet size and are defined as follows:
\begin{mydef}
A two-player linear game $\textsl{g}^{l} = (q, f)$ is one where two players Alice and Bob receive questions $u$, $v$ from sets $Q_A$ and $Q_B$ respectively, chosen from a probability distribution $q(u,v)$ by a referee. They reply with respective answers $a, b \in (G,+)$ where $G$ is a finite Abelian group with associated operation $+$. The game is defined by a winning constraint $a + b = f(u,v)$ for some function $f : Q_A \times Q_B \rightarrow G$.
\end{mydef}
The most interesting linear games are arguably the \textsc{xor}-d games, denoted $\textsl{g}^{\oplus}$, which are the linear games corresponding to the cyclic group $\mathbb{Z}_d$, i.e., the integers with addition modulo $d$ ($\oplus_d$).
The value of the linear game is given by the expression
\begin{equation}
\omega_s(\textsl{g}^{l}) = \max_{\{P_{A,B|U,V}\} \in \mathcal{S}} \sum_{\substack{u \in Q_A \\ v \in Q_B}} \sum_{a,b \in G} q(u,v) V(a,b|u,v) P(a,b | u, v),
\end{equation}
where $V(a,b|u,v) = 1$ if $a + b =f(u,v)$ and $0$ otherwise and the maximum is taken over all boxes $\{P_{A,B|U,V}\}$ in the set $\mathcal{S}$ which may correspond to the set of classical $\mathcal{C}$, quantum $\mathcal{Q}$ or more general no-signaling boxes $\mathcal{NS}$.
The maximum classical value of the game (the maximum over all deterministic assignments of $a, b$ for each respective input $u,v$, or their convex combinations) is denoted $\omega_c(\textsl{g}^{l})$; the maximum value of the game achieved by a quantum strategy (POVM measurements on a shared entangled state of arbitrary Hilbert space dimension) is denoted $\omega_q(\textsl{g}^{l})$; and the maximum value achieved by a no-signaling strategy (where neither party can signal their choice of input using the correlations) is denoted $\omega_{ns}(\textsl{g}^{l})$. These games have been studied \cite{Hastad, Khot} in the context of hardness of approximation of several important optimization problems, in attempts to determine whether polynomial-time algorithms exist that approximate the optimum solution to within a constant factor. Linear games belong to the class of unique games \cite{Kempe}: in a unique game $\textsl{g}^u$, for every answer $a$ of Alice there is a unique answer $b = \pi_{u,v}(a)$ of Bob that wins the game, where $\pi_{u,v}$ is some permutation that depends on the input pair $(u,v)$. For every game in this class, a no-signaling box exists that wins the game, so that $\omega_{ns}(\textsl{g}^l) = \omega_{ns}(\textsl{g}^u) = 1$. Such a box for the general unique game with $d$ outcomes is defined by the entries $P(a,b|u,v) = 1/d$ if $b = \pi_{u,v}(a)$ and $0$ otherwise, for all input pairs $(u,v)$; this strategy clearly wins the game and is no-signaling, since the output distribution seen by each party is fully random for every input, i.e., $P(a|u) = P(b|v) = 1/d$.
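The winning no-signaling box described above can be checked explicitly. The sketch below specializes to an \textsc{xor}-d winning condition $a \oplus_d b = f(u,v)$, so that $\pi_{u,v}(a) = f(u,v) - a \bmod d$; the function $f$ and the sizes are arbitrary illustrative choices:

```python
import numpy as np
from itertools import product

# A winning no-signaling box for a unique game, here specialized to an
# XOR-d winning condition a + b = f(u,v) mod d, so that
# pi_{u,v}(a) = f(u,v) - a mod d.  The function f and the sizes d, m are
# arbitrary illustrative choices.
d, m = 3, 2
f = lambda u, v: (u * v) % d

P = np.zeros((d, d, m, m))                      # P[a, b, u, v]
for a, u, v in product(range(d), range(m), range(m)):
    P[a, (f(u, v) - a) % d, u, v] = 1.0 / d     # b = pi_{u,v}(a), weight 1/d

# Wins with certainty on every input pair:
win = min(sum(P[a, (f(u, v) - a) % d, u, v] for a in range(d))
          for u in range(m) for v in range(m))

# No-signaling: both single-party marginals are fully random (= 1/d):
marg_A = P.sum(axis=1)                          # P(a | u, v)
marg_B = P.sum(axis=0)                          # P(b | u, v)
```

Both marginals come out uniform for every input, confirming that the box wins the game while remaining no-signaling.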
As in the case of Boolean functions \cite{BBLV,Brassard2}, the classical value $\omega_c(g^l)$ of any linear game is strictly greater than the random-guess value $1/|G|$; this is shown in Lemma \ref{lem:cl-value}.
\begin{lemma}
\label{lem:cl-value}
For any linear game $g^{l}$ corresponding to a function $f(u,v)$ with $u \in Q_A, v \in Q_B$ and for an arbitrary probability distribution $q(u,v)$, we have
\begin{equation}
\label{cl-low-b}
\omega_c(g^{l}) \geq \frac{1}{|G|} \left( 1 + \frac{|G|-1}{m}\right),
\end{equation}
where $m = \min\{|Q_A|, |Q_B|\}$.
\end{lemma}
\begin{proof}
Let $d = |G|$; Alice and Bob receive inputs $u, v$ of $\log_d |Q_A|$ and $\log_d |Q_B|$ dits, respectively. Suppose w.l.o.g. that $|Q_A| \leq |Q_B|$ (so $m = |Q_A|$), and let the two parties share a uniformly distributed random variable $w$ of $\log_d |Q_A|$ dits. The following classical strategy achieves the lower bound in Eq.(\ref{cl-low-b}). Bob outputs $b = f(w,v)$, while Alice checks whether $u = w$: if so, she outputs the identity element $a = e$; if not, she outputs a uniformly distributed $a \in G$. When $u = w$, which happens with probability $\frac{1}{m}$, we have $a + b = e + f(w,v) = f(u,v)$ and the strategy succeeds. When $u \neq w$, $a + f(w,v)$ is uniformly random since $a$ is uniform, and the strategy succeeds with probability $\frac{1}{d}$. The value achieved by this strategy is therefore $\frac{1}{m} + \left(1 - \frac{1}{m}\right) \frac{1}{d}$.
\end{proof}
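The bookkeeping in the proof can be checked exactly for a toy \textsc{xor}-d game; the group size, input sizes and function below are illustrative placeholders, with the identity $e = 0$ in $\mathbb{Z}_d$. Since the strategy succeeds with the same probability on every input pair, its value is independent of $q(u,v)$, and uniform inputs are assumed here:

```python
from fractions import Fraction
from itertools import product

# Exact bookkeeping of the shared-randomness strategy for a toy XOR-d game
# (G = Z_d with identity e = 0); d, the input sizes and f are placeholders.
# The strategy wins with the same probability on every input pair, so its
# value does not depend on q(u,v); uniform inputs are used below.
d, mA, mB = 3, 3, 4                              # |G|, |Q_A|, |Q_B|; m = mA
f = lambda u, v: (u + 2 * v) % d                 # arbitrary total function

value = Fraction(0)
for u, v, w in product(range(mA), range(mB), range(mA)):
    p = Fraction(1, mA * mB) * Fraction(1, mA)   # uniform (u, v) and shared w
    if u == w:
        value += p                               # a = 0, b = f(w,v): sure win
    else:
        value += p * Fraction(1, d)              # uniform a: wins w.p. 1/d

bound = Fraction(1, d) * (1 + Fraction(d - 1, mA))   # the lemma's lower bound
```

The enumerated value coincides exactly with $\frac{1}{|G|}(1 + \frac{|G|-1}{m})$, here $5/9$.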
Computing the quantum value of a linear game is an onerous task, and efficiently computable bounds are hard to find.
We now present a bound on the quantum value of a linear game in Theorem \ref{thm2}, using the norms of a set of \textit{game matrices} defined through the characters of the associated group. The detailed derivation of the bound is given in the proof of this theorem in Appendix \ref{sec:lin-bound-app}; the utility and possible tightness of the bound (in scenarios such as the CHSH-d game, which is applicable to tasks such as relativistic bit commitment \cite{Jed}) are considered in this section.
\begin{thm}\label{thm2}
\label{norm-bound}
The quantum value of a linear game $\textsl{g}^l$ with input sets $ Q_A, Q_B$ can be bounded as
\begin{eqnarray}
\label{xor-d-bound-2}
\omega_{q}(\textsl{g}^{l}) \leq \frac{1}{|G|} \left[ 1 + \sqrt{|Q_A| |Q_B|} \sum_{x \in G\setminus \{e\}} \Vert \Phi_{x} \Vert \right],
\end{eqnarray}
where
$\Phi_{x} = \sum_{(u,v) \in Q_A \times Q_B} q(u,v) \chi_{x}(f(u,v)) | u \rangle \langle v|$ are the game matrices, $\chi_{x}$ are the characters of the group $G$ and $\Vert \cdot \Vert$ denotes the spectral norm. In particular, for an \textsc{xor}-d game with $m_A$ and $m_B$ inputs for the two parties, the quantum value can be bounded as
\begin{eqnarray}
\label{eq:xor-d-bound-3}
\omega_{q}(\textsl{g}^{\oplus}) \leq \frac{1}{d} \left[ 1 + \sqrt{m_A m_B} \sum_{k= 1}^{d-1} \Vert \Phi_{k} \Vert \right],
\end{eqnarray}
with $\Phi_k = \sum_{\substack{u \in [m_A] \\ v \in [m_B]}} q(u,v) \zeta^{k f(u,v)} |u \rangle \!\langle v|$ and $\zeta = \exp{(2 \pi I/d)}$.
\end{thm}
\begin{proof}
We sketch the proof of the bound using the Fourier transform for the \textsc{xor}-d games here; the generalization to linear games uses the analogous Fourier transform on finite Abelian groups \cite{Terras} and is deferred to Appendix \ref{sec:lin-bound-app}. For a quantum strategy given by projective measurements $\{\Pi_{u}^{a} \}, \{\Sigma_{v}^{b} \}$ on a pure state $| \Psi \rangle \in \mathbb{C}^{D} \otimes \mathbb{C}^{D}$, we introduce the generalized correlators $\langle A_{u}^{x} \otimes B_{v}^{y} \rangle$
for unitary operators defined as
\begin{equation}
A_{u}^{x} = \sum_{a \in G} \zeta^{-ax} \Pi_{u}^{a} \; \; \text{and} \; \; B_{v}^{y} = \sum_{b \in G} \zeta^{-by} \Sigma_{v}^{b}.
\end{equation}
The probabilities $P(a,b|u,v)$ that enter the game expression are calculated from the inverse transform to be
\begin{eqnarray}
\label{eq:game-prob}
P(a \oplus_d b = f(u,v) | u,v) = \frac{1}{d} \sum_{k =0}^{d-1} \zeta^{k f(u,v)} \langle A_{u}^{k} \otimes B_{v}^{k} \rangle.
\end{eqnarray}
Now, with vectors $|\alpha_{k} \rangle, |\beta_{k} \rangle$ and the \textsc{xor}-d game matrices $\Phi_{k}$ defined as
\begin{eqnarray}
&&|\alpha_{k} \rangle = \sum_{u \in Q_A} \left((A_{u}^{k})^{\dagger} \otimes \leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \right) |\Psi \rangle \otimes |u \rangle \; \; \text{,} \; \; \nonumber \\
&&|\beta_{k} \rangle = \sum_{v \in Q_B} \left(\leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \otimes B_{v}^{k} \right) |\Psi \rangle \otimes |v \rangle, \nonumber \\
&&\Phi_{k} = \sum_{(u,v) \in Q_A \times Q_B} q(u,v) \zeta^{k f(u,v)} | u \rangle \langle v|,
\end{eqnarray}
the game expression $\sum_{(u,v) \in Q_A \times Q_B} q(u,v) P(a \oplus_d b = f(u,v)|u,v)$ can be rewritten using Eq.(\ref{eq:game-prob}) as
$(1/d) \sum_{k=0}^{d-1} \langle \alpha_k | \leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \otimes \Phi_{k} | \beta_{k} \rangle$ and the norm bound in Eq.(\ref{eq:xor-d-bound-3}) follows.
\end{proof}
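The inverse-transform identity in Eq.(\ref{eq:game-prob}) can be verified numerically; the sketch below uses a random bipartite state and random rank-one projective measurements (one input per party and $D = d$ suffice to test the identity itself):

```python
import numpy as np

# Numerical check of the inverse-transform identity for the winning
# probability, with a random state and random rank-one projective
# measurements; one input per party and D = d suffice for the identity.
rng = np.random.default_rng(0)
d = 3
zeta = np.exp(2j * np.pi / d)

def random_projectors(dim):
    # columns of a random unitary -> a complete rank-one projective measurement
    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim))
                        + 1j * rng.normal(size=(dim, dim)))
    return [np.outer(Q[:, a], Q[:, a].conj()) for a in range(dim)]

Pi, Sigma = random_projectors(d), random_projectors(d)
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
psi /= np.linalg.norm(psi)

def corr(k):                                  # generalized correlator <A^k B^k>
    A = sum(zeta ** (-a * k) * Pi[a] for a in range(d))
    B = sum(zeta ** (-b * k) * Sigma[b] for b in range(d))
    return psi.conj() @ np.kron(A, B) @ psi

errs = []
for t in range(d):                            # t plays the role of f(u,v)
    direct = sum((psi.conj() @ np.kron(Pi[a], Sigma[(t - a) % d]) @ psi).real
                 for a in range(d))
    fourier = (sum(zeta ** (k * t) * corr(k) for k in range(d)) / d).real
    errs.append(abs(direct - fourier))
```

The direct sum of probabilities $\sum_a \langle \Pi_u^a \otimes \Sigma_v^{t-a} \rangle$ and the Fourier expression agree to machine precision for every target value $t$.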
It should be noted that, as shown in \cite{Kempe}, the quantum value of a linear game can be efficiently approximated: to be precise, for any linear game $\textsl{g}^{l}$ with $\omega_q(\textsl{g}^{l}) = 1 - \delta$, there exists an efficient algorithm, based on a semi-definite program and a rounding procedure, that produces an entangled strategy achieving $\omega_q^{\text{app}}(\textsl{g}^{l}) = 1 - 4 \delta'$, where $\delta/4 \leq \delta' \leq \delta$. While this is highly significant and useful for proving results such as a parallel repetition theorem for the quantum value of such games \cite{Kempe}, it gives a good approximation only when the quantum value is close to unity, which is not the case for simple examples like the CHSH-d game. For uniform input distributions $q(u,v) = 1/(|Q_A| |Q_B|)$, or when the input distribution possesses certain symmetries, the simple linear-algebraic bound above supplements this result and, as we shall see, proves very useful for deriving other interesting properties of these games.
We first illustrate the applicability and possible tightness of the bound by considering the flagship scenario of the CHSH-d game, which generalizes the well-known CHSH game to a higher-dimensional output. In this game, Alice and Bob are asked questions $u, v$ chosen uniformly at random from a finite field $\mathbb{F}_d$ of size $d$, so that $q(u, v) = 1/d^2$, where $d$ is a prime or a prime power. They return answers $a, b \in \mathbb{F}_d$ aiming to satisfy $a \oplus b = u \cdot v$, where the arithmetic operations are those of the finite field. In \cite{Bavarian}, an intensive study of this game was performed, with two significant results obtained on the asymptotic classical and quantum values of the game. We now apply Theorem \ref{thm2} to re-derive in a simple manner the upper bound for the quantum value of CHSH-d. Comparison with the numerical results of \cite{Ji, Liang} indicates that the bound in the following example of the CHSH-d game may not be tight in general; note also that the optimum value of the game for Pauli measurements was recently derived in \cite{Howard}.
\begin{example}[see also \cite{Bavarian}]
\textit{The quantum value of the CHSH-d game for prime and prime power $d$, i.e., $d = p^r$ where $p$ is prime and $r \geq 1$ is an integer, can be bounded as
\begin{equation}
\label{eq:Bavarian-bound}
\omega_q(CHSH-d) \leq \frac{1}{d} + \frac{d-1}{d \sqrt{d}}.
\end{equation} }
\end{example}
\begin{proof}
Let us consider the CHSH-d game with associated function $f(u, v) = u \cdot v$. The entries of the game matrix $\Phi_k$ for prime $d$ are by definition $\Phi_k(u,v) = q(u,v)\zeta^{k (u \cdot v)}$ where $\zeta = \exp{\frac{2 \pi I}{d}}$ and $u, v \in \{0, \dots, d-1\}$, and we consider uniform probability inputs $q(u,v) = 1/d^2$. It is readily seen that for prime $d$, the game matrices $\Phi_k$ for $k \in \{1, \dots, d-1\}$ are equal to each other up to a permutation of rows (or columns). Moreover, a direct calculation using $\sum_{j=0}^{d-1} \zeta^j = 0$ yields that $\Phi_k^{\dagger} \Phi_k = \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}/d^3$, so that $\Vert \Phi_k \Vert = 1/d \sqrt{d}, \; \; \forall k \in [d-1]$. Substitution into Eq.(\ref{eq:xor-d-bound-3}) with $m_A = m_B = d$ yields the bound in Eq.(\ref{eq:Bavarian-bound}) for prime $d$.
Strictly analogous results are obtained for prime power $d = p^{r}$, where $p$ is prime and $r > 1$ is an integer. Note that here the operation $u \cdot v$ in the CHSH-d game is not defined as multiplication modulo $d$, but as multiplication in the finite field $\mathbb{F}_d$, see \cite{fields, Bavarian}. The non-zero elements of $\mathbb{F}_d$ under this multiplication operation form a cyclic group of size $d-1$, and we have $a^d = a, \; \; \forall a \in \mathbb{F}_d$. Here again,
the game matrices $\Phi_k$ for $k \in [d-1]$ are equal to each other up to a permutation of rows (or columns). By explicit calculation, using the following properties of the characters: $\chi_k(a+b) = \chi_k(a) \chi_k(b)$ for any $a, b \in \mathbb{F}_d$; $\chi_k(a) = 1 \Longleftrightarrow a = 0$ and $\sum_{a \in \mathbb{F}_d} \chi_k(a \cdot b) = 0$ for $b \neq 0$ we obtain that $\Phi_k^{\dagger} \Phi_k = \frac{1}{d^3} \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$ for all $k$. Substituting $\Vert \Phi_k \Vert = \frac{1}{d \sqrt{d}}, \; \; \forall k \in [d-1]$ into Eq.(\ref{eq:xor-d-bound-3}) with $|Q_A| = |Q_B| = d$ yields the bound.
\end{proof}
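The computation in this proof is easily checked numerically for a given prime: the sketch below verifies that $\Phi_k^{\dagger} \Phi_k$ is proportional to the identity and that substituting the resulting norms into the bound reproduces Eq.(\ref{eq:Bavarian-bound}), taking $d = 5$ as an example:

```python
import numpy as np

# Check of the CHSH-d game-matrix computation for a prime d with uniform
# inputs q(u,v) = 1/d^2 (d = 5 here as an example).
d = 5
zeta = np.exp(2j * np.pi / d)
uv = np.outer(np.arange(d), np.arange(d))       # the products u * v

for k in range(1, d):
    Phi_k = zeta ** (k * uv) / d**2             # entries q(u,v) zeta^{k u v}
    assert np.allclose(Phi_k.conj().T @ Phi_k, np.eye(d) / d**3)
    assert abs(np.linalg.norm(Phi_k, 2) - d**-1.5) < 1e-12

# Substituting ||Phi_k|| = d^{-3/2} into the norm bound gives
# 1/d + (d-1)/(d sqrt(d)):
bound = (1 + d * (d - 1) * d**-1.5) / d
```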
Given the quantum bound, a natural question is whether there are linear games whose quantum value $\omega_q(g^{l})$ equals one, i.e., whether there are quantum strategies that always win a linear game. Interest in this question also stems from the domain of communication complexity. Following the results of \cite{vanDam, Wang}, any non-trivial total-function \textsc{xor}-d game for prime $d$, with $n$ dits as input $\textbf{u} = (u_1, \dots, u_n), \textbf{v} = (v_1, \dots, v_n)$, is won by a no-signaling box that can result in a trivialization of communication complexity. To elaborate, it was shown that any no-signaling box winning a non-trivial total-function \textsc{xor}-d game for prime $d$ must contain as a sub-box one of the functional boxes of the form $P(a \oplus_d b = f(u,v) | u,v) = 1/d$ for $a, b, u, v \in \{0, \dots, d-1\}$; having $d^n$ copies of the box and addressing this sub-box in each, Alice and Bob can compute any $d$-output function with a single dit of communication, resulting in a trivialization of communication complexity.
We now apply the bound to exclude these boxes that result in a trivialization of communication complexity from the set of quantum boxes. In particular, the following Lemma \ref{comm-comp-lem} shows that no non-trivial game for a total function $f(u,v)$ (a total function is one which is defined for all input pairs $(u,v)$) within the class of \textsc{xor}-d games $\textsl{g}^{\oplus}$ with uniformly chosen inputs can be won by a quantum strategy, meaning that there is no pseudo-telepathy game \cite{Brassard} within this class.
\begin{lemma}
\label{comm-comp-lem}
For \textsc{xor}-d games $\textsl{g}^{\oplus}$ corresponding to total functions with $m$ questions per player, when the input distribution is uniform $q(u, v) = 1/m^2$, $\omega_q(\textsl{g}^{\oplus}) = 1$ iff $\omega_c(\textsl{g}^{\oplus}) = 1$, i.e., when rank$(\Phi_1) = 1$.
\end{lemma}
\begin{proof}
The constraint that the input distribution of questions to the players is uniform, $q(u,v) = 1/m^2$ for all $u, v$, implies $\Vert \Phi_k \Vert \leq 1/m$, since both the maximum (absolute value) column-sum and row-sum matrix norms equal $1/m$. Now, $\omega_q(\textsl{g}^{\oplus}) = 1$ requires, from the bound in Eq.(\ref{eq:xor-d-bound-3}), that $\Vert \Phi_k \Vert = 1/m$ for all $k \in \{1, \dots, d-1\}$. Consider the matrix ${\Phi_1}^{\dagger} \Phi_1$, which has entries $({\Phi_1}^{\dagger} \Phi_1)_{u,v} = \sum_{w=1}^{m} q(w,u) q(w,v) \zeta^{-f(w,u) + f(w,v)}$, where $\zeta = \exp{(2 \pi I/d)}$ is the $d$-th root of unity. Let $\{ \lambda_j \}$ be the entries of the eigenvector of ${\Phi_1}^{\dagger} \Phi_1$ corresponding to the maximum eigenvalue $1/m^2$, written as $\lambda_j = \vert \lambda_j \vert \zeta^{{\theta}_j}$. Let the entries of the eigenvector be ordered by absolute value, $\vert \lambda_1 \vert \geq \dots \geq \vert \lambda_m \vert$, and consider the eigenvalue equation corresponding to $\lambda_1$; we have
\begin{equation}
\sum_{v, w = 1}^{m} |\lambda_v| \zeta^{-f(w,1) + f(w,v) + \theta_v} = m^2 |\lambda_1| \zeta^{\theta_1}.
\end{equation}
Clearly, the above equation can only be satisfied when $\vert \lambda_j \vert = \vert \lambda_{j'} \vert \;\; \forall j, j'$ and when the phases add up coherently, i.e., when $f(w,v) - f(w,1) + \theta_v = f(w',v') - f(w',1) + \theta_{v'}\; \; \forall v,w,v',w'$; in particular, choosing $w = w'$, we get $f(w,v) - f(w,v') = \theta_{v'} - \theta_{v} \; \forall w,v,v'$. With all $|\lambda_{j}|$ equal, the remaining eigenvalue equations (for $u \neq 1$) lead to similar, consistent constraint equations.
We deduce that $\omega_q(\textsl{g}^{\oplus}) = 1$ only when the columns of the game matrix $\Phi_1$ are proportional to each other, the proportionality factor between columns $k, l$ being $\zeta^{f(u,k) - f(u,l)} = \zeta^{\theta_l - \theta_k}$. In this case (with $\text{rank}(\Phi_1) = 1$), a classical winning strategy which always exists for the first column of the game matrix $\Phi_1$ can be straightforwardly extended to a classical winning strategy for the entire game, meaning $\omega_c(\textsl{g}^{\oplus}) = 1$ also.
\end{proof}
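A quick numerical sanity check of the lemma, for the illustrative case $d = m = 3$ with uniform inputs: the total function $f(u,v) = u + v \bmod d$ yields a rank-one game matrix saturating $\Vert \Phi_1 \Vert = 1/m$ (and is won classically), while $f(u,v) = u \cdot v$ (CHSH-3) stays strictly below the threshold:

```python
import numpy as np

# Two total functions with uniform inputs, d = m = 3: f(u,v) = u + v mod d
# has a rank-one game matrix (and is won classically), while
# f(u,v) = u * v mod d (CHSH-3) stays strictly below the threshold 1/m.
d = m = 3
zeta = np.exp(2j * np.pi / d)

def game_matrix(f):
    return np.array([[zeta ** f(u, v) for v in range(m)]
                     for u in range(m)]) / m**2

Phi_lin = game_matrix(lambda u, v: (u + v) % d)
Phi_chsh = game_matrix(lambda u, v: (u * v) % d)

assert np.linalg.matrix_rank(Phi_lin) == 1
norm_lin = np.linalg.norm(Phi_lin, 2)        # saturates the bound 1/m
norm_chsh = np.linalg.norm(Phi_chsh, 2)      # equals 1/(d sqrt(d)) < 1/m
```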
It was recently shown that all the extremal points of the no-signaling polytope for any number of inputs and outputs cannot be realized within quantum theory \cite{our}. It remains an open question whether \textit{all} such vertices lead to a trivialization of communication complexity (at least in a probabilistic setting); if so, this would be a compelling reason for their exclusion from correlations that can be realized in nature. Also, note that while the exclusion of the boxes trivializing communication complexity from the quantum set is not surprising, we include it here as an illustration of the applicability of the bound. Indeed, in subsequent work \cite{GRRH+15}, the techniques used in this paper have also been applied to exclude boxes that win games corresponding to partial functions $f(u,v)$ from the quantum set; this further illustrates the utility of the technique, since these latter boxes do not trivialize communication complexity and therefore cannot be excluded on that basis.
\section{Linear games with no quantum advantage: the task of non-local computation.}
\label{sec:nlc}
Even though quantum non-local correlations cannot be used to transmit information, they enable the performance of several tasks impossible in the classical world, such as the expansion and amplification of intrinsic randomness and device-independent secure key generation. An unexpected limitation of quantum correlations, however, is that they provide no advantage over classical correlations in a fundamental information-theoretic task, namely the non-local distributed computation of Boolean functions \cite{NLC}, even though certain super-quantum no-signaling correlations do.
Consider a Boolean function $f(z_1, \dots, z_n)$ from $n$ bits to $1$ bit. A non-local (distributed) computation of the function is defined as follows. Two parties, Alice and Bob, are given inputs $(x_1, \dots, x_n)$ and $(y_1, \dots, y_n)$ obeying $x_i \oplus_2 y_i = z_i$, each bit $x_i, y_i$ being $0$ or $1$ with equal probability. This ensures that neither party has access to any input $z_i$ on their own. To perform the non-local computation, Alice and Bob must output bits $a$ and $b$ respectively such that $a \oplus_2 b = f(x_1 \oplus_2 y_1, \dots, x_n \oplus_2 y_n)$. Their goal is thus to maximize the probability of success in this task for some given input distribution $p(z_1, \dots z_n) = p(x_1 \oplus_2 y_1, \dots, x_n \oplus_2 y_n)$. In \cite{NLC}, it was shown that, surprisingly, for \textit{any} input distribution $p(z_1, \dots, z_n)$, Alice and Bob sharing quantum resources cannot do any better than with classical resources (both give rise only to a linear approximation of the computation), while they could successfully perform the task if the resources they shared were limited by the no-signaling principle alone. This no-advantage in non-local computation (NANLC) was so striking that it was postulated as an information-theoretic principle that picks out quantum theory from among general no-signaling theories, in relation to the correlations that the theory gives rise to \cite{NLC}.
The above consideration of functions with a single-bit output is important since these encapsulate all decision problems, a natural class of problems used to define computational complexity classes. In the program of characterizing quantum correlations, however, we must consider functions with multi-bit outputs as well as functions with higher input and output alphabets. We now use the bound (\ref{eq:xor-d-bound-3}) to construct a generalized non-local computation task for functions with higher input-output alphabets.
Consider the following generalization of the non-local computation task to \textsc{xor}-d games, namely the non-local computation of functions of $n$ dits $z_1, \dots, z_n \in \{0, \dots, d-1\}$, where $d$ is a prime. In these games, which we label $NLC_d$, Alice and Bob receive $n$ dits $\textbf{x}_n = (x_1, \dots, x_n)$ and $\textbf{y}_n = (y_1, \dots, y_n)$ which obey $x_i \oplus_d y_i = z_i$. Their task is to output dits $a, b$ respectively such that
\begin{equation}
\label{eq:func-NLC}
a \oplus_d b = g(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1}) \cdot (x_n\oplus_d y_{n}),
\end{equation}
where $\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1}$ is the dit-wise \textsc{xor} of the $n-1$ dits, i.e., $\{x_1 \oplus_d y_1, \dots, x_{n-1} \oplus_d y_{n-1}\}$ and $g$ is an arbitrary function from $n-1$ dits to $1$ dit. The inputs are chosen according to
\begin{align}\label{probdistr}
\frac{1}{d^{n+1}} p(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1})
\end{align}
for $p(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1})$ an arbitrary probability distribution. As mentioned previously, all unique games, including the \textsc{xor}-d games, have a no-signaling value of unity, so that in general $\omega_{ns}(NLC_d) = 1 > \omega_q(NLC_d)$.
We now present in Theorem \ref{thm-nlc} the result that the games $NLC_d$ defined above exhibit no quantum advantage; the detailed proof of this theorem is presented in Appendix \ref{sec:app-nlc}.
\begin{thm}
\label{thm-nlc}
The games $NLC_d$ for arbitrary prime $d$ and for input distribution satisfying \eqref{probdistr} have no quantum advantage, i.e., $\omega_c(NLC_d) = \omega_q(NLC_d)$.
\end{thm}
\textit{Sketch of proof.}
Consider the games $NLC_d$ for prime $d$ and arbitrary number $n$ of input dits for each party. Denote the total number of inputs for each party by $m=d^n$, and the corresponding game matrices by ${\Phi^{(n)}_k}$. The $NLC_d$ games are composed of ``building-block games"
$G(t):=\DE{a \oplus_d b= t \cdot (x \oplus_d y)}$,
with $t \in \{0, \dots, d-1\}$.
Denote the Fourier vectors as $|f_j\rangle$, i.e.,
$|f_{j} \rangle = \left(1, \zeta^j, \dots, \zeta^{(d-1)j}\right)^T$, where as usual $\zeta = \exp{\frac{2 \pi I}{d}}$.
We find that ${\Phi^{(n)}_k}^{\dagger} \Phi^{(n)}_k$ are block-circulant matrices and are hence diagonal in the basis formed by the tensor products of the Fourier vectors $\{|f_{i_1}\rangle \otimes \dots \otimes |f_{i_{n}} \rangle\}$ with $i_1, \dots, i_n \in \{0, \dots, d-1\}$. Explicit calculation of the maximum eigenvector yields that $\| \Phi^{(n)}_k \| = d \Lambda$ for $\Lambda := \max_{i_n \in \{0, \dots, d-1\}} \lambda(i_n)$, with $\lambda(i_n)$ being the number of times the game $G((d-1) \cdot i_n)$ appears in the first row of $\Phi^{(n)}_k$. Let $\mu \in \{0, \dots, d-1\}$ denote the value of $i_n$ for which the maximum of $\lambda(i_n)$ is achieved.
For prime $d$, we obtain the following bound
on the quantum value in the uniform case
\begin{equation}
\label{uni-q-bound}
\omega_q(NLC_d) \leq \frac{1}{d}\left(1 + \frac{(d-1) \Lambda}{d^{n-1}} \right).
\end{equation}
The explicit classical strategy in which Alice deterministically outputs $a = \mu x_n$, independently of her input $\textbf{x}_{n-1}$, and Bob outputs $b = \mu y_n$, independently of his input $\textbf{y}_{n-1}$, saturates this bound.
\qed
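The theorem can be verified by brute force in its smallest instance, $d = 2$, $n = 2$, with uniform inputs and $g$ the identity, so that the winning condition reads $a \oplus_2 b = (x_1 \oplus_2 y_1)(x_2 \oplus_2 y_2)$; the exhaustive classical value then coincides with the quantum upper bound computed from the game-matrix norm:

```python
import numpy as np
from itertools import product

# Smallest instance of the theorem: d = 2, n = 2, uniform inputs, g the
# identity, i.e. the winning condition a xor b = (x1+y1)(x2+y2) mod 2.
# The exhaustive classical value matches the quantum norm bound.
d, n = 2, 2
inputs = list(product(range(d), repeat=n))          # m = d^n inputs per party
m = len(inputs)
f = lambda x, y: (x[0] ^ y[0]) * (x[1] ^ y[1])

# Game matrix with uniform q = 1/m^2 and the norm bound (single k = 1 term):
Phi = np.array([[(-1) ** f(x, y) for y in inputs] for x in inputs]) / m**2
q_bound = (1 + m * np.linalg.norm(Phi, 2)) / d

# Classical value by brute force over all deterministic strategies:
omega_c = max(
    sum((a[i] + b[j]) % d == f(x, y)
        for i, x in enumerate(inputs) for j, y in enumerate(inputs)) / m**2
    for a in product(range(d), repeat=m)
    for b in product(range(d), repeat=m))
```

Both quantities equal $3/4$ here, in agreement with Eq.(\ref{uni-q-bound}) with $\Lambda = 1$, illustrating the absence of a quantum advantage in this instance.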
Let us state some open questions in this line of research. Note that the slight restriction in Eq.(\ref{eq:func-NLC}) (a fixed dependence on $x_n \oplus_d y_n$) means that the games do not cover the entire class of functions considered in \cite{NLC}; it remains open whether there is no quantum advantage for the remaining functions in this class as well. It is also of interest to identify other tasks beyond NLC where quantum correlations do not provide an advantage over classical ones, and the bound should be useful in characterizing these. We also remark that the original NANLC principle (like most of the other principles proposed so far) is known not to single out exactly the set of quantum correlations, since there exists a set of so-called almost quantum correlations \cite{Navascues} that also satisfies the principle. The generalized NANLC principle subsumes the original NANLC principle, since the latter corresponds to the special case $d=2$. It remains to be checked whether the generalized NANLC principle proposed here is also satisfied by the almost quantum set, although we expect that it is. Finally, it is also of interest to determine whether any of the inequalities corresponding to these games define facets of the classical polytope (a facet of a polytope is a face with dimension one less than that of the polytope). Games with this property (having $\omega_c = \omega_q$ and defining facets of the classical polytope) define non-trivial boundaries of the quantum set, and it has been posed as an open question in \cite{GYNI, UPB2} whether such games exist for two-party Bell scenarios.
\section{Conclusions.} In this paper, we have presented an easily computable bound on the quantum value of linear games, with particular emphasis on \textsc{xor}-d games for prime $d$. We have illustrated this bound by using it to rule out from the quantum set a class of no-signaling boxes that result in a trivialization of communication complexity; to do so, we have shown that no uniform-input total-function \textsc{xor}-d game can be a pseudo-telepathy game. We have also shown how the recently discovered bound on the CHSH-d game in \cite{Bavarian} can be derived in a simple manner for prime and prime-power $d$; in this context, it is interesting to note that these games have recently found application in relativistic bit commitment \cite{Jed}. Finally, we have extended the NANLC principle to general prime-dimensional outputs, showing that quantum theory provides no advantage over classical theories in the distributed non-local computation of a class of functions with prime-dimensional output.
In the future, it would be interesting to extend the proposed bound on the quantum value to classes of Bell inequalities beyond linear games, especially to the more general unique games. Further applications of the bound, such as the device-independent detection of genuine multipartite entanglement \cite{BGLP, GM} for arbitrary Hilbert space dimensions, multi-party communication complexity, and the identification of information processing tasks with no quantum advantage \cite{NLC}, are of immediate interest.
\textit{Acknowledgements.} We thank P. Horodecki and M. Horodecki for useful discussions, as well as Matej Pivoluska and J\c edrzej Kaniewski for useful comments on an earlier version of this manuscript. R.R. is supported by the ERC AdG grant QOLAPS and the Foundation for Polish Science TEAM project co-financed by the EU European Regional Development Fund. R. A. acknowledges support from the ERC AdG grant OSYRIS, the EU project SIQS, the Spanish project FOQUS and the John Templeton Foundation. G.M. acknowledges support from the Polish Ministry of Science and Higher Education Grant no. IdP2011 000361 and the Brazilian agency Fapemig (Fundação de Amparo à Pesquisa do estado de Minas Gerais).
\section{Introduction}
\section{Related work}\label{section:reliability}
The QUIC protocol currently provides a reliable bytestream abstraction using \textit{streams}.
Application data transiting through QUIC streams is carried inside \texttt{STREAM} frames. Upon detection of a loss, the application data carried by the lost \texttt{STREAM} frames are retransmitted in new \texttt{STREAM} frames sent in new QUIC packets.
QUIC~\cite{ietf-quic-recovery} uses two thresholds to detect losses: a \textit{packet-based} threshold and a \textit{time-based} threshold. The packet-based threshold marks a packet as lost once packets with sufficiently higher packet numbers have been received, indicating that pure reordering is unlikely. The specification recommends that an unacknowledged packet with number $x$ be marked as lost when the acknowledgement of a packet with a number larger than or equal to $x+3$ has been received. This is similar to TCP's fast retransmit heuristic. The time-based threshold marks an unacknowledged packet as lost after a sufficient amount of time when a packet sent later gets acknowledged.
The specification recommends that an unacknowledged packet sent at time $t$ be marked as lost at time $t + \frac{9}{8}*RTT$ if a packet sent later has already been acknowledged. These two thresholds are only reached after at least one round-trip time, resulting in late retransmissions for delay-constrained applications. Such applications would benefit from a reliability mechanism that corrects packet losses \textit{a priori}.
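To make the two thresholds concrete, here is a minimal Python sketch of this loss-detection logic using the recommended defaults. It is an illustration, not the actual picoquic code; the function and variable names are ours.

```python
# Sketch of QUIC's two loss-detection thresholds (recommended defaults).
# `sent` maps packet numbers to send times; `largest_acked` and `now`
# would come from the latest acknowledgement.

PACKET_THRESHOLD = 3   # reordering threshold recommended by the spec
TIME_THRESHOLD = 9 / 8  # fraction of the RTT

def detect_lost_packets(sent, largest_acked, now, rtt):
    """Return the packet numbers that should be declared lost."""
    lost = []
    time_limit = TIME_THRESHOLD * rtt
    for pn, sent_time in sent.items():
        if pn >= largest_acked:
            continue  # only packets older than the largest acked can be lost
        # Packet-based threshold: a packet numbered >= pn + 3 was acknowledged.
        if largest_acked >= pn + PACKET_THRESHOLD:
            lost.append(pn)
        # Time-based threshold: the packet has been outstanding for 9/8 * RTT
        # while a later packet has already been acknowledged.
        elif now - sent_time >= time_limit:
            lost.append(pn)
    return lost
```

Both conditions require feedback from the receiver, which is why at least one round-trip elapses before either threshold can fire.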
There are ongoing discussions within the IETF \cite{coding-for-quic} and the research community \cite{michel2019quic,de2019pluginizing,rquic:2019} to add Forward Erasure Correction (FEC) capabilities to QUIC.
This mechanism consists in sending redundant information (Repair Symbols) before packets (Source Symbols) are detected as lost.
It is especially useful for applications that cannot afford to wait for a retransmission, either because of strong delay requirements or because the connection suffers from long delays. Google experimented with a naive XOR-based FEC solution in early versions of QUIC \cite{swett-quic-fec}. The IRTF Network Coding research group explored alternative solutions~\cite{coding-for-quic}. Relying on an XOR code~\cite{swett-quic-fec} does not allow sending several repair symbols to protect a window of packets, preventing the solution from recovering loss bursts. The standardisation work~\cite{coding-for-quic} provides neither a performance evaluation nor a technique to schedule source and repair symbols. QUIC-FEC~\cite{michel2019quic} proposes several redundancy frameworks and codes for the QUIC protocol. In this previous work, we studied file transfers with different codes such as XOR, Reed-Solomon and Random Linear Codes (RLC). We showed that FEC with QUIC can be beneficial for small file transfers but is harmful for longer bulk transfers compared to Selective-Repeat ARQ (SR-ARQ) mechanisms. One of the main limitations of QUIC-FEC~\cite{michel2019quic} is that the code rate is fixed during the connection, leading to the transmission of unnecessary coded packets.
Pluginized QUIC (PQUIC)~\cite{de2019pluginizing} proposes a FEC plugin equivalent to the RLC part of QUIC-FEC. Finally, rQUIC~\cite{rquic:2021} presents an adaptive algorithm that regulates the code rate as a function of the channel loss rate for QUIC communications. rQUIC differs from our work in two main ways. First, rQUIC assumes that isolated losses are not due to congestion. When it recovers isolated lost packets, rQUIC hides their loss signal from QUIC's congestion control to benefit from a larger bitrate. We follow the IRTF NWCRG recommendations~\cite{irtf-nwcrg-coding-and-congestion-09}: we never hide any signal from the congestion control. If isolated losses are not due to congestion, a congestion control algorithm that ignores isolated losses can be used instead of hiding the loss signal. The second difference is that we adapt the redundancy both to the channel conditions and to the application requirements. Our solution sends neither the same amount of redundancy nor the same pattern of Repair Symbols for a bulk download and for a real-time video conferencing application.
Network coding has been considered for other transport protocols \cite{rfc5109,ietf-rtcweb-fec,cavusoglu2003real}.
TCP/NC~\cite{sundararajan2009tcpnc} adds network coding to TCP connections by applying a coding layer beneath the transport layer. It improves TCP throughput by recovering from packet losses that block the TCP window. CTCP~\cite{kim2012ctcp} pushes the idea further and proposes a revised congestion control algorithm for wireless communications. Tetrys~\cite{tournoux2011fly} proposes a coding mechanism focused on real-time video applications and develops heuristics to adjust the coding rate to the sender's behaviour. RFC5109~\cite{rfc5109} defines a standard RTP packet format to allow the use of FEC for RTP applications.
An IETF draft~\cite{ietf-rtcweb-fec} presents guidelines and requirements for the use of FEC for protecting video and audio streams in WebRTC. Minion~\cite{nowlan2012fitting} also uses coding to support unreliable data transfer above TCP.
Existing works propose multi-path solutions to handle links with poor delay and fluctuating bandwidth~\cite{kuhn2014daps,chiariotti2021hop,garcia2017low} and use FEC to reduce head-of-line blocking.
Unfortunately, none of the current solutions adapts the reliability mechanism to the different classes of applications.
\section{Buffer-limited file transfers}\label{section:buffer_limited}
We here present and evaluate the reliability mechanism for buffer-limited file transfers. In this setup, the receive window (\textit{rwin}) is small compared to the congestion window (\textit{cwin}) of the sender, making every loss event potentially blocking and increasing the download completion time. In addition to protecting the download from tail losses, we protect every window of packets to avoid stalls caused by lost packets blocking the stream flow-control window.
\subsection{Reliability mechanism}
For this use-case, $ds{}()$ returns $\hat{l}$ to ensure that $ad$ stays larger than $md$, according to the estimated loss rate.
$FECPattern{}()$ behaves as shown in Algorithm~\ref{algo:buffer_ew}.
We spread the Repair Symbols along the sent Source Symbols to periodically allow the receiver to unblock its receive window by recovering the lost Source Symbols and delivering the stream data in order to the application. More precisely, the $FECPattern{}()$ operation sends one Repair Symbol every $\frac{1}{\hat{l}}$ Source Symbols. The algorithm needs three loss statistics. The first is the estimated uniform loss rate $\hat{l}$.
The two others are the $\hat{G_p}$ and $\hat{G_r}$ parameters of the Gilbert loss model. The Gilbert model~\cite{gemodel} is a two-state Markov model of the channel that can represent network configurations where losses occur in bursts. These loss patterns cannot easily be recovered by a simple XOR error-correcting code, as shown in the original QUIC article~\cite{langley2017quic}, but can be recovered by the random linear codes used by \textit{FlEC}{}. In the \textit{GOOD} state of the Gilbert model, packets are received, while packets are dropped in the \textit{BAD} state. $\hat{G_p}$ is the transition probability from the GOOD to the BAD state, while $\hat{G_r}$ is the transition probability from the BAD to the GOOD state. To estimate the loss statistics $\hat{l}$, $\hat{G_p}$ and $\hat{G_r}$, we implement a \textit{loss monitor} that estimates the loss rate and the Gilbert model parameters over a QUIC connection.
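A minimal simulation of this two-state model illustrates how the parameters relate to the observed loss pattern. This is a Python sketch for illustration, not the loss monitor implementation; the function name is ours.

```python
import random

# Two-state Gilbert loss model: packets are received in the GOOD state
# and dropped in the BAD state. g_p is the GOOD->BAD transition
# probability, g_r the BAD->GOOD one.
def gilbert_trace(n, g_p, g_r, seed=42):
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n):
        trace.append(state_bad)  # True means the packet is lost
        if state_bad:
            state_bad = rng.random() >= g_r  # leave BAD with probability g_r
        else:
            state_bad = rng.random() < g_p   # enter BAD with probability g_p
    return trace

# The stationary loss rate of this chain is g_p / (g_p + g_r) and the
# expected burst length is 1 / g_r.
```

With, e.g., $\hat{G_p} = 1\%$ and $\hat{G_r} = 33\%$, the model produces a loss rate of about $3\%$ concentrated in bursts of three packets on average.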
When the sender is blocked by the QUIC stream flow control, $FECPattern{}()$ sends more Repair Symbols to recover the remaining potentially lost Source Symbols. While spreading the Repair Symbols along the coding window helps to recover lost Source Symbols more rapidly than a block approach where all the Repair Symbols are sent at the end of the window, it also potentially consumes more bandwidth, as the Repair Symbols do not protect the entire window. With an equal number of losses, some loss patterns will therefore lead to Repair Symbols protecting a loss-free portion of the window while other portions require more Repair Symbols to be recovered.
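The spreading of one Repair Symbol every $\frac{1}{\hat{l}}$ Source Symbols can be illustrated with a toy sketch (ours, not the actual \textit{FlEC}{} scheduler):

```python
# Toy illustration of the interleaving pattern: one Repair Symbol ("R")
# every round(1 / l_hat) Source Symbols ("S").
def interleave(n_source, l_hat):
    period = max(1, round(1 / l_hat))
    out = []
    for i in range(1, n_source + 1):
        out.append("S")
        if i % period == 0:
            out.append("R")
    return "".join(out)
```

For instance, with a $20\%$ estimated loss rate, a Repair Symbol follows every fifth Source Symbol, so the receiver gets a recovery opportunity several times per window rather than only at its end.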
\begin{algorithm}\small
\caption{FECPattern{} for buffer-limited use-case}\label{algo:buffer_ew}
\begin{algorithmic}[1]
\Require $last$, the ID of the last symbol present in the coding window when $FECPattern{}()$ was triggered the last time
\Require $nTriggered$, the number of times $FECPattern{}()$ has been triggered since a new symbol was last added to the window.
\Require $maxTrigger$, the maximum number of times we can trigger this threshold for the same window
\Require $nRSInFlight$, the number of Repair Symbols currently in flight
\Require $W$, the current coding window.
\Require $FCBlocked()$, telling us if we are currently blocked by flow control.
\Require $\hat{l}$, $\hat{G_p}$, $\hat{G_r}$, see Table~\ref{tab:symbols}.
\If {$nRSInFlight \geq 2*\lceil|W|*\hat{l}\rceil$}
\State \Return $false$ \Comment{Wait for feedback before sending new RS}
\EndIf
\State $nUnprotected \gets W.last - last$
\State $n \gets min(\frac{1}{\hat{G_p}}, |W|)$
\State $protect \gets nUnprotected = 0 \lor nUnprotected \geq n \lor FCBlocked()$
\If{$protect \land nUnprotected \neq 0$}
\Comment{\textit{Start Repair Symbols sequence}}
\State $nTriggered \gets 1$
\State $last \gets W.last$
\State $maxTrigger \gets \lceil max(\hat{l}*nUnprotected, \frac{1}{\hat{G_r}}) \rceil$
\ElsIf{$protect$}
\If{$ FCBlocked() \lor nTriggered < maxTrigger$}
\State $nTriggered \gets nTriggered + 1$ \Comment{\textit{Continue sending symbols}}
\Else
\State $protect \gets false$ \Comment{\textit{Enough symbols have been sent}}
\EndIf
\EndIf
\State \Return $protect$
\end{algorithmic}
\end{algorithm}
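For readers who prefer executable code, Algorithm~\ref{algo:buffer_ew} admits a direct Python transcription. The variable names mirror the pseudocode; the class holding the persistent state is ours, and this is a sketch rather than the actual \textit{FlEC}{} implementation.

```python
import math

class FECPatternState:
    """Python transcription of the buffer-limited FECPattern algorithm."""

    def __init__(self):
        self.last = 0          # ID of the last symbol seen at the previous trigger
        self.n_triggered = 0   # triggers since a new symbol was added
        self.max_trigger = 0   # cap on triggers for the same window

    def fec_pattern(self, window_last, window_size, n_rs_in_flight,
                    fc_blocked, l_hat, g_p_hat, g_r_hat):
        # Wait for feedback before sending new Repair Symbols.
        if n_rs_in_flight >= 2 * math.ceil(window_size * l_hat):
            return False
        n_unprotected = window_last - self.last
        n = min(1 / g_p_hat, window_size)
        protect = n_unprotected == 0 or n_unprotected >= n or fc_blocked
        if protect and n_unprotected != 0:
            # Start a new Repair Symbol sequence.
            self.n_triggered = 1
            self.last = window_last
            self.max_trigger = math.ceil(max(l_hat * n_unprotected,
                                             1 / g_r_hat))
        elif protect:
            if fc_blocked or self.n_triggered < self.max_trigger:
                self.n_triggered += 1   # continue sending symbols
            else:
                protect = False          # enough symbols have been sent
        return protect
```

Note how $maxTrigger$ is sized by the larger of the expected number of lost symbols ($\hat{l} \cdot nUnprotected$) and the expected burst length ($1/\hat{G_r}$), so that a single burst cannot exhaust the Repair Symbol budget.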
\subsection{Evaluation}
We now evaluate our generic mechanism under a buffer-limited file transfer use-case. We first study a specific network configuration that could benefit from \textit{FlEC}{}. We then evaluate its overall performance using experimental design.
\subsubsection{\textit{FlEC}{} for SATCOM}
We choose the satellite communications (SATCOM) use-case, where the delay can easily reach several hundreds of milliseconds~\cite{thomas2019quicsat,kuhn-quic-4-sat}. In those cases, end-hosts need a large receive buffer to reach the channel capacity. If they do not use a sufficiently large buffer, packet losses can have a significant impact on the throughput, preventing the sender from sending new data as long as the data at the head of the receive buffer has not been correctly delivered to the application. The studied network configuration has a round-trip time of 400 milliseconds and a bandwidth of 8~Mbps. These are lower-bound values compared to current deployments~\cite{kuhn-quic-4-sat,thomas2019quicsat}. The bandwidth-delay product is thus 400~kB.
Higher-BDP configurations are studied in the experimental design analysis of the next section.
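The 400~kB figure follows directly from the stated link parameters:

```python
# Bandwidth-delay product of the studied SATCOM configuration.
rtt = 0.400          # round-trip time, seconds
bandwidth = 8e6      # bits per second
bdp_bytes = bandwidth * rtt / 8   # bits -> bytes
assert bdp_bytes == 400_000       # i.e. 400 kB
```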
We study the benefits brought by \textit{FlEC}{} with several receive window sizes.
\paragraph{Download completion time and throughput}
Figure~\ref{fig:buffer_bbr_05} shows the download completion time ratio between \textit{FlEC}{} and regular QUIC with a 5~MB file and 0.5\% of packet loss. Each box in the graph is computed from 95 runs with different seeds for the ns-3 rate error model. The bandwidth is set to 8~Mbps and the congestion control is BBR. For each transfer using \textit{FlEC}{}, we decrease the receive window by 5\% at the receiver in order to store the received repair symbols in the remaining space. With receive windows smaller than the BDP (ranging from 70~kB to 400~kB), the sender is flow-control-blocked once per RTT during a time proportional to the $\frac{rwin}{cwin}$ ratio. This implies that the download completion time with small receive windows is large even without any packet loss. When losses occur, the repair symbols sent \emph{a priori} help to unblock the receive window at the receiver-side and avoid blocking the data transfer for more than one RTT. For the 70~kB receive window, the 5\% reduction to store the repair symbols is significant compared to the benefit of FEC and has a negative impact on the goodput.
With the 400~kB receive window, the sender only blocks in the presence of losses during the round-trip. The earlier the loss occurs during the round-trip, the longer the sender will be blocked by the flow control for the next round-trip, since it needs to retransmit the data to unblock the receive window. Sending \emph{a priori} Repair Symbols for these configurations allows reducing or completely avoiding these blocking situations, at the price of a small reduction in goodput. The transmission of Repair Symbols in a sliding-window manner (i.e. interleaved with the Source Symbols) as described in Algorithm~\ref{algo:buffer_ew} helps to recover from losses earlier than sending all the Repair Symbols at once in a block fashion. The price to pay compared to a block pattern is a goodput reduction, as some loss patterns might require more Repair Symbols to be recovered with this method. For the large receive windows, sending Repair Symbols \emph{a priori} does not unblock the window but still helps to recover from tail losses. With such a high RTT, the impact of a tail loss relative to the download completion time is still significant.
Figure~\ref{fig:buffer_bbr_2} shows the result of our experiments with a 2\% packet loss rate. It is thus more common for the sender to become flow-control blocked. This makes the approach worthwhile even for smaller receive window sizes such as 70~kB, as the sender will be slowed down much more often.
\paragraph{Delay-bandwidth tradeoff}
Figure~\ref{fig:buffer_bytes_tradeoff_150kB} illustrates the delay-bandwidth tradeoff operated when using \textit{FlEC}{} instead of regular QUIC. Each point on the figure corresponds to a single experiment and represents the download completion time and the bytes overhead of the solution. The bytes overhead is computed by dividing the total amount of bytes of UDP payload sent by the server by the size of the file transferred (5~MB). For this graph, the experiments use a small receive window of 150~kB and the loss rate is 2\%.
As the receive window is small, sending FEC unblocks the receive window upon losses and allows drastically lowering the download completion time. The price to pay is an additional bytes overhead compared to the regular QUIC solution. In this rwin-limited scenario, the available bandwidth is generally larger than what is used due to the rwin restriction.
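For clarity, the bytes-overhead metric described above can be written as a one-liner. The numbers below are illustrative, not measured values from our experiments.

```python
# Bytes overhead: total UDP payload sent by the server divided by the
# size of the transferred file.
def bytes_overhead(total_udp_payload_bytes, file_size_bytes):
    return total_udp_payload_bytes / file_size_bytes

# e.g. a 5 MB transfer that required 5.4 MB of UDP payload
# (retransmissions and Repair Symbols included) has an overhead of 1.08.
```

An overhead of 1.0 is the theoretical minimum (no loss, no redundancy); both retransmissions and Repair Symbols push it above 1.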
Figure~\ref{fig:buffer_bytes_tradeoff_6MB} shows experimental results for the opposite scenario: the receive window is 6~MB, larger than both the file to transfer and the bandwidth-delay product of the link. This case is similar to the bulk use-case of Section~\ref{section:bulk}. We can see that \textit{FlEC}{} leads to stable latency results at the expense of a larger bytes overhead. As the receive window is larger than the file to transfer, the sender is never flow-control blocked during the download. In this case, \textit{FlEC}{} minimizes the latency essentially by recovering from tail losses.
\begin{figure*}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 1.7cm,width=\linewidth]{dct_buffer_bbr_loss_0.5}
\caption{DCT ratio, 0.5\% losses.}
\label{fig:buffer_bbr_05}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{dct_buffer_bbr_loss_2}
\caption{DCT ratio, 2\% losses}
\label{fig:buffer_bbr_2}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 1.7cm,width=\linewidth]{time_bytes_tradeoff_buffer_bbr_loss_2_rwin_150kB.pdf} \caption{Time-bandwidth tradeoff, 2\% loss.}
\label{fig:buffer_bytes_tradeoff_150kB}
\end{minipage}
\end{figure*}
\begin{figure*}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 1.7cm,width=\linewidth]{time_bytes_tradeoff_buffer_bbr_loss_0.5_rwin_6MB.pdf}
\caption{Time-bandwidth tradeoff with a 0.5\% loss link and a 6MB receive window.}
\label{fig:buffer_bytes_tradeoff_6MB}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{dct_buffer_bbr_experimental_design.pdf}
\caption{Experimental design analysis for several receive window configurations.}
\label{fig:buffer_exp_design}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{dct_buffer_experimental_design_with_bursts_bbr.pdf}
\caption{Experimental design analysis using Gilbert model with bursts of 1 to 5 packets.}
\label{fig:buffer_exp_design_bursts}
\end{minipage}
\end{figure*}
\subsubsection{Experimental design analysis}
Figure~\ref{fig:buffer_exp_design} shows the aggregated results of simulations using experimental design. We show the CDF of the download completion time ratio between \textit{FlEC}{} and \texttt{picoquic}~\cite{picoquic}. Each CDF on the figure is built from 95 experiments with parameters selected from the ranges depicted at the top of Figure~\ref{fig:buffer_exp_design}. Each CDF curve corresponds to downloads using the
receive window size specified in the legend. The congestion control used is still BBR.
We observe positive results with \textit{FlEC}{} for the majority (75\%) of the network configurations, especially for smaller receive window sizes (80\% positive results for windows smaller than or equal to 400~kB). Some configurations still show negative results with \textit{FlEC}{}, even for smaller receive window sizes. These are the configurations whose bandwidth-delay product is small compared to the receive window. To verify this, we computed the average $\frac{BDP}{rwin}$ ratio over all the experiments for which \textit{FlEC}{} took more time to complete than \texttt{picoquic}, obtaining a value of 0.48. For the experiments where the \textit{FlEC}{} download was faster, the average value of this ratio is 1.53.
Let us now assess the performance of our solution with a bursty loss model, to see whether \textit{FlEC}{} remains robust in the presence of loss bursts. Figure~\ref{fig:buffer_exp_design_bursts} shows the results of an experimental design analysis with a Gilbert loss model with $G_p$ ranging from 0.1\% to 1.5\%, $G_r$ set to 33\% (i.e. an expected burst size of 3 packets) and a maximum burst size of 5 packets. Loss events thus occur less often than in Figure~\ref{fig:buffer_exp_design}, leading to fewer blocking periods for QUIC during the experiments but with a higher probability of losing several packets in a row. We can see that Algorithm~\ref{algo:buffer_ew} still offers benefits in the presence of bursty losses. As in Figure~\ref{fig:buffer_exp_design}, \textit{FlEC}{} especially improves the results for experiments with a large $\frac{cwin}{rwin}$ ratio.
\section{Bulk file transfers}\label{section:bulk}
We here present the implementation and evaluation of the reliability mechanism proposed for bulk file transfers. The metric to minimize is the total download completion time. Sending unneeded repair symbols reduces the goodput and increases the download completion time. The expected behaviour is therefore similar to SR-ARQ with tail-loss protection. The Repair Symbols are always sent within what the congestion window allows, meaning that \textit{FlEC}{} does not induce any additional pressure on the link.
\subsection{Bulk reliability mechanism}
For a file transfer, we set the delay-sensitivity threshold to be equal to $-\hat{l}$.
\begin{equation} \label{equation_bulk_l}
\textstyle \hat{r} - md/ad < -\hat{l} \rightarrow sendRepairPacket()
\end{equation}
Since $\hat{l} = 1-\hat{r}$, we can rewrite Equation~\ref{equation_bulk_l} as
\begin{equation} \label{equation_bulk_r}
\textstyle md/ad > 1 \rightarrow sendRepairPacket()
\end{equation}
so that we send Repair Symbols only when a packet has been detected as lost and has not yet been protected. The transmission of a Repair Symbol triggered by this threshold increases $ad$ by 1 until $ad$ becomes equal to $md$.
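The resulting bulk-mode check can be sketched as follows (illustrative Python; the function name is ours). The two forms of the condition in Equations~\ref{equation_bulk_l} and \ref{equation_bulk_r} are algebraically equivalent.

```python
# Bulk-mode threshold sketch. With l_hat = 1 - r_hat, the condition
# r_hat - md/ad < -l_hat reduces to md/ad > 1, i.e. send a Repair
# Symbol only when a lost packet is not yet covered by redundancy.
def send_repair_packet(r_hat, md, ad):
    l_hat = 1 - r_hat
    return r_hat - md / ad < -l_hat   # equivalent to md / ad > 1

# A lost, unprotected packet (md > ad) triggers a Repair Symbol;
# once ad has caught up with md, the condition is false again.
```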
Using the threshold defined in Equation~\ref{equation_bulk_l} ensures reliable delivery of the data but does not improve the download completion time in the case of tail losses, since $md$ only increases after a packet is marked as lost by the QUIC loss detection mechanism. The $FECPattern{}()$ operation controls the \emph{a priori} transmission of Repair Symbols. In contrast with the previous solution~\cite{cohen2020adaptive}, we redefine $FECPattern{}()$ and set it to $true$ only once all the application data has been sent, instead of setting it to $true$ once per RTT. This implies that only the last flight of packets is protected; all previous flights are recovered through retransmissions. Indeed, since the receive window is large enough compared to the congestion window, no packet loss causes a silence period except for losses in the last flight of packets. The total download completion time is thus not impacted by any loss occurring before the last flight. Without FEC, the loss of any packet in the last flight causes a silence period between the transmission of that packet and its retransmission.
We track the loss conditions throughout the download and trigger the $FECPattern{}()$ threshold according to the observed loss pattern. This loss-rate-adaptive approach is especially beneficial when enough packets are exchanged to accurately estimate the loss pattern, i.e. when the file is long or when loss information is shared among connections with the same peer. Once a sufficient number of Repair Symbols has been sent to protect the expected number of lost Source Symbols, the algorithm keeps slots in its congestion window to transmit new data. Another approach would be to define $FECPattern{}()$ to use all the remaining space in the congestion window to send Repair Symbols, with the drawback of potentially consuming more bandwidth than needed.
\subsection{Evaluation}
We now evaluate \textit{FlEC}{} with the $ds{}$() and $FECPattern{}()$ protocol operations defined for the bulk use-case.
\subsubsection{Experimental setup}
We base our implementation on the PQUIC~\cite{de2019pluginizing} pluginized QUIC implementation on commit \textit{68e61c5}~\cite{pquic-github}. PQUIC is itself based on the \texttt{picoquic}~\cite{picoquic} QUIC implementation.
We perform numerous experiments and compare \textit{FlEC}{} with regular QUIC without our plugins. We use ns-3~\cite{riley2010ns3} version 3.33 with the Direct Code Execution (DCE)~\cite{camara2014dce} module, which allows running the code of a real implementation inside ns-3 in a discrete-time environment. This means that the actual code of the QUIC and \textit{FlEC}{} implementations runs while the underlying network is simulated by ns-3, making the experiments fully reproducible while running real code. Figure~\ref{fig:exp_topo} shows the experimental setup. We use ns-3's \textit{RateErrorModel} to generate reproducible loss patterns with different seeds and configure the network queues to 1.5 times the bandwidth-delay product.
We run the experiments on an Ubuntu 16.04 Linux system with 20~GB of RAM and 16-core \textit{Intel(R) Xeon(R) Silver 4314} CPUs.
Although the congestion control is orthogonal to our proposed reliability mechanism, the Reno~\cite{newreno} and CUBIC~\cite{ha2008cubic} congestion control algorithms supported by PQUIC suffer from bandwidth underestimation under severe loss conditions. We thus perform our experiments using the BBR~\cite{bbr} congestion control algorithm, which avoids underestimating the network bandwidth upon packet losses by monitoring the receive rate and delay variations during the transfer. While not explored in this paper, other congestion control algorithms~\cite{gcc,vegas} use signals other than packet losses to detect congestion.
\begin{figure}
\centering
\includegraphics[trim=0cm 0.2cm 0cm 0cm,width=0.7\linewidth]{exp_topo.png}
\caption{Experimental topology using NS-3 with Direct Code Execution.}
\label{fig:exp_topo}
\vspace{-0.5cm}
\end{figure}
\subsubsection{Experimental design}
We evaluate the bulk use-case by transferring files of several sizes, first comparing \textit{FlEC}{} with QUIC's regular reliability mechanism. For this evaluation, we rely on an experimental design~\cite{experimental-design}. This approach consists in defining ranges of parameters instead of choosing precise values, in order to mitigate experimentation bias and explore network configurations that show the limits of the presented solution. We use the WSP~\cite{santiago2012construction} space-filling algorithm to cover the parameter space with 94 points. One experiment is run for each point in the parameter space.
Figure~\ref{fig:results_bulk} shows the cumulative distribution function (CDF) of the Download Completion Time (DCT) ratio between \textit{FlEC}{} and \texttt{picoquic}~\cite{picoquic}, used as our reference QUIC implementation. The experiments consist in downloading files of 10~kB, 40~kB, 100~kB, 1~MB and 10~MB. For each file size, 95 experiments are run using experimental design. 40~kB and 100~kB are the average response sizes for Google Search on mobile and desktop devices~\cite{langley2017quic}. The parameter space is described at the top of the figure. The loss rate varies between 0.1\% and 8\% to cover both small loss rates and loss rates experienced under intense network conditions such as In-Flight Communications~\cite{rula2018mile}. The round-trip time varies between 10~ms and 200~ms to cover both low delays and large delays such as those encountered in satellite communications. As shown in the figure, $ds{}()$ and $FECPattern{}()$ here implement a bulk-friendly reliability mechanism. By automatically protecting the tail of the downloaded file, we obtain results similar to previous works~\cite{de2019pluginizing,michel2019quic}. A few of the experiments with 40~kB and 100~kB files gave poorer results than QUIC: with those file sizes, \textit{FlEC}{} uses one more stream frame to transmit the data, requiring in some rare cases one additional round-trip to transmit this additional packet. While not shown graphically in this article, replacing BBR with CUBIC~\cite{ha2008cubic} provides similar results. These experiments are provided in the artefacts accompanying the article upon publication.
Figure~\ref{fig:results_bulk_flec_vs_ac_rlnc} compares $\textit{FlEC}{}$ with an implementation of AC-RLNC~\cite{cohen2020adaptive} following Table~\ref{tab:use_cases}. We observe that $\textit{FlEC}{}$ outperforms AC-RLNC: sending repair symbols every RTT consumes too much bandwidth for the bulk use-case, while $\textit{FlEC}{}$ only sends repair symbols \emph{a priori} for the last flight of packets, relying on retransmissions for all other packets since their retransmissions arrive before the end of the download.
\subsubsection{Experimenting with a real network}
We now extend our study and analyze the benefits of \textit{FlEC}{} over a real network, between a server running regular QUIC and \textit{FlEC}{} on an Ubuntu 18.04 machine located at UCLouvain and a client wired to a Starlink access point located in Louvain-la-Neuve (Belgium). We performed a total of 20150 uploads of 50~kB from the client to the server. Among these 20150 uploads, 430 encountered at least one packet loss during the transfer. Figure~\ref{fig:starlink_bulk_downloads_with_losses} shows the CDF of the download completion time for these 430 uploads. The median download completion time for these uploads is 247~ms for \textit{FlEC}{} and 272~ms for regular QUIC; the average is 340~ms for \textit{FlEC}{} and 393~ms for QUIC. Unsurprisingly, \textit{FlEC}{} improves the download completion time for the transfers where the loss events occur during the RTT.
\subsubsection{CPU performance} While it has been shown that PQUIC protocol plugins noticeably deteriorate performance~\cite{de2019pluginizing}, we analyze the CPU impact of the \textit{FlEC}{} framework by transferring 1~GB files over the loopback interface. Without \textit{FlEC}{}, we achieved a throughput of 650~Mbps. With \textit{FlEC}{} configured for the bulk use-case (i.e. sending Repair Symbols only at the end of the transfer), the throughput dropped to 300~Mbps. This is in line with earlier observations on PQUIC performance. We believe that with a native implementation, the impact of the \textit{FlEC}{} framework would be barely noticeable.
We also measured the throughput when sending one Repair Symbol every ten Source Symbols and obtained a throughput (i.e. not goodput) of 280~Mbps, meaning that the encoding and decoding of Repair Symbols adds only a small overhead compared to the framework itself.
\begin{figure*}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0.5cm 0cm 0.5cm 0.4cm,width=0.95\linewidth]{bulk_experimental_design_bbr}
\caption{DCT ratio for bulk use-case using BBR. $FECPattern{}()$ and $ds{}()$ ensure that Repair Symbols only protect the tail of the file.}
\label{fig:results_bulk}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0.5cm 0cm 0.5cm 0.4cm,width=0.95\linewidth]{bulk_experimental_design_flec_vs_ac_rlnc_bbr}
\caption{DCT ratio between \textit{FlEC}{} and AC-RLNC~\cite{cohen2020adaptive} for regular bulk use-case using the BBR congestion control.}
\label{fig:results_bulk_flec_vs_ac_rlnc}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0.5cm 0cm 0.5cm 0.4cm,width=0.95\linewidth]{starlink_bulk_downloads_with_losses}
\caption{DCT comparing $\textit{FlEC}{}$ and the regular QUIC for downloads with at least one packet loss, performed on a real Starlink network access.}
\label{fig:starlink_bulk_downloads_with_losses}
\end{minipage}
\end{figure*}
\section{Conclusion}
In this paper, we redefine the QUIC reliability mechanism and enable its per-use-case customization. Flexible Erasure Correction (\textit{FlEC}{}) allows efficiently combining retransmissions and Forward Erasure Correction.
Applications can either use a standard Selective-Repeat ARQ mechanism or tailor a Forward Erasure Correction mechanism that fits their own traffic pattern and sensitivity to delays. Our \textit{FlEC}{} implementation leverages the PQUIC protocol plugins to enable the application to insert its own algorithm to select the level of redundancy and the stream scheduling decisions. We customize \textit{FlEC}{} for three different use-cases. We evaluate and demonstrate that \textit{FlEC}{} can be configured with little to no effort by applications to significantly enhance the quality of experience compared to the existing QUIC loss recovery mechanisms. \textit{FlEC}{} is currently a single-path implementation. In the future, we plan to study how \textit{FlEC}{} can be used together with several network interfaces to improve the transfer for the considered use-case and go further than reducing head-of-line blocking, using a tailored redundancy and path scheduler.
\section*{Artefacts}
Simulation scripts and the code of \textit{FlEC}{} are publicly available from \url{https://github.com/francoismichel/flec}.
\section{Tunable reliability mechanisms}\label{section:tunable}
Sending Repair Symbols for delay-sensitive applications costs bandwidth when there is no loss to recover from. Repair Symbols should therefore be sent carefully, to avoid consuming bandwidth with little or no additional benefit.
Adaptive FEC mechanisms have been proposed to adjust the redundancy overhead to the measured loss rate. Both CTCP~\cite{kim2012ctcp} and TCP/NC~\cite{sundararajan2009tcpnc} adjust their level of redundancy according to the measured loss rate, and rQUIC~\cite{rquic:2021} proposes a similar idea. While these approaches can show significant benefits compared to classical retransmission mechanisms, they still increase the overhead compared to the more efficient Selective-Repeat mechanisms in bulk download use-cases. By using a causal scheduling algorithm, our solution reacts to the current channel conditions and adopts an SR-ARQ-like behaviour when the use-case requires it.
On the other hand, real-time applications cannot fully leverage the benefits of such transport-layer coding mechanisms because there is no way for an application to precisely express its requirements. As a result, such applications typically implement their own coding-enabled protocols~\cite{rfc3550rtp,rfc2733rtpfec}.
We reconcile strong application delay requirements and regular transport protocols by providing them with a tunable reliability mechanism that applications can adapt to their needs with small to no effort.
We use QUIC to demonstrate our ideas, but they could be applicable to other transport protocols as well.
QUIC stacks are mostly implemented as libraries that can be used by a wide range of applications.
While QUIC can easily be tuned on the server-side to better fit the application requirements, obtaining such flexibility on end-user devices is more complicated, as it is the application itself that must tune the underlying stack to meet its requirements.
In the TCP/IP stack, this tuning is mainly done through socket options or system-wide parameters. Socket Intents~\cite{schmidt2013socketintents} and the ongoing work~\cite{ietf-taps-arch-06} within the IETF TAPS working group show that there is interest in passing application knowledge down to the transport protocol.
The QUIC specification~\cite{ietf-quic-transport} does not currently define a specific API between the application and the transport protocol but specifies a set of actions that could be performed by the application on the streams (e.g. reading and writing data on streams) and on the connection itself (e.g. switching on/off 0-RTT connection establishment or terminating the connection). The QUIC specification allows the application to pass information about the relative priority of the streams. However, it is unclear how, e.g., an application could express timeliness constraints.
Similarly, the current FEC specification for QUIC~\cite{coding-for-quic} does not guide the application in choosing a code rate, in deciding which parts of the application data should be FEC-protected, or in deciding when coded symbols should be sent. Furthermore, different applications may require different strategies to send redundancy.
In a video-conferencing application, Repair Symbols could protect a whole video frame, while an IoT application~\cite{eggert2020quiciot} with limited buffers may want to protect the data incrementally to ensure fast in-order delivery despite losses.
\paragraph*{Contributions}
Previously~\cite{cohen2020adaptive}, we provided a joint coding scheme and algorithm in which one can theoretically manage the delay-rate tradeoff to obtain the required QoS. However, that work did not cope with the complex and varied requirements of real applications.
We advocate that sending redundancy packets in transport protocols \textbf{should be done in accordance with the application needs in order to provide satisfactory results.}
In previous solutions, the FEC mechanism does not track both the channel conditions and the application requirements throughout the data transfer. In this work, we consider both the application's requirements and the network conditions to schedule the redundancy more efficiently.
The contribution of this article is thus a complete redefinition of the reliability mechanism of the QUIC protocol by making it general and flexible. We introduce a general loss recovery framework able to implement both a classical SR-ARQ mechanism and FEC.
We leverage the idea of protocol plugins~\cite{de2019pluginizing} and implement our reliability mechanism as a framework exposing two anchor points to applications. Applications can redefine these anchors according to their delay-sensitivity and traffic pattern. We explore three different use-cases and show that adapting the reliability mechanism to the use-case can drastically improve the quality of the transmission. The first use-case is the bulk download scenario, discussed in Section~\ref{section:bulk}. The second use-case, discussed in Section~\ref{section:buffer_limited}, is a scenario where the peer's receive window is small, resulting in the sender being regularly blocked by the flow control during loss events. The third use-case, discussed in Section~\ref{section:messaging}, is a scenario where the application sends messages that must arrive before a specific deadline. The three use-cases are described below.
\subsection{Bulk file transfer}
Bulk file transfer is the simplest use-case we consider. It consists of downloading a single file under the assumption that the receive buffer is large compared to the bandwidth-delay product of the connection. This is the classical use-case for many transport protocols. Current open-source QUIC implementations use default receive window sizes that support such a use-case. The receive window starts at 2~MB for locally-initiated streams in \texttt{picoquic}~\cite{picoquic-flowcontrol-limit}. The Chromium browser's implementation~\cite{chromium-flow-control-limit} starts with an initial receive window of 6~MB per stream and 16~MB for the whole connection. The metric that we minimize here is the total time to download the whole file. This includes REST API messages that often need to be completely transferred in order to be processed correctly by the application.
As already pointed out~\cite{flach2013reducing,michel2019quic,langley2017quic}, a packet loss during the last round-trip-time can have a high relative impact on the download completion time: for small files, it may double due to the loss of a single packet. Protecting these tail packets can drastically improve the total transfer time at a cost significantly smaller than the cost of simply duplicating all these packets.
On the other hand, protecting packets other than the tail ones with FEC can harm the download completion time. Packet losses occurring in the middle of the download can be recovered without FEC before any quiescence period, provided that the receiver uses sufficiently large receive buffers.
\subsection{Buffer-limited file transfers}
In numerous network configurations, the available memory on the end devices is a limiting factor. It is common to see delays longer than 500 milliseconds in satellite communications, while their bandwidth is on the order of several tens of megabits per second~\cite{thomas2019quicsat,kuhn-quic-4-sat}. Furthermore, with the arrival of 5G, some devices will have access to bandwidths of up to 10~Gbps~\cite{rappaport2013millimeter,agiwal2016next}.
While the edge latency of 5G infrastructures is intended to be on the order of a few milliseconds~\cite{3gpp.21.915}, the latency towards the other host of an end-to-end transport connection may be significantly higher, partially due to the large buffers on the routers and the buffer-filling nature of currently deployed congestion control mechanisms.
Packet losses occurring on those high Bandwidth-Delay Product (BDP) network configurations imply a significant memory pressure on reliable transport protocols running on the end devices.
To ensure an in-order delivery, the transport protocol running on these devices needs to keep the data received out-of-order during at least one round-trip-time, requiring receive buffer sizes to grow to dozens of megabytes for each connection.
At the same time, QUIC is also considered for securing connections on IoT devices~\cite{eggert2020quiciot,kumar2019quicmqtt}. Those embedded devices cannot dedicate large buffers for their network connections.
Receive buffers that cannot bear the bandwidth-delay product of the network they are attached to are unable to fully utilize its capacity, even without losses. This typically occurs when the receive window is smaller than the sender's congestion window. Measurements show that TCP receivers frequently suffer from such limitations~\cite{langley2017quic}. The problem gets even worse in case of packet losses, as they prevent the receiver from delivering the data received out-of-order to the application. These data remain in memory, reducing the amount of new data that can be sent until the lost data is correctly retransmitted and delivered to the application.
Sacrificing a few bytes of the receiver memory to handle Repair Symbols and protect the receive window from being blocked upon packet losses can drastically improve the transfer time, even in a file transfer use-case. In such cases, Repair Symbols can be sent periodically along with non-coded packets during the download, not just at the end of the transfer.
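To illustrate why the receive-buffer size matters even without losses, the following sketch models the throughput bound imposed by the receive window. It is a simplified model for intuition only, not part of the \textit{FlEC}{} implementation, and all names are illustrative:

```c
#include <assert.h>

/* Simplified model (not from the FlEC implementation): a receive window
 * smaller than the bandwidth-delay product caps the achievable rate at
 * rwnd / RTT, regardless of the link capacity. */
static double achievable_rate_bps(double link_bps, double rwnd_bytes,
                                  double rtt_s)
{
    double window_limit_bps = rwnd_bytes * 8.0 / rtt_s;
    return link_bps < window_limit_bps ? link_bps : window_limit_bps;
}
```

For instance, with a 50~Mbps satellite link, a 600~ms RTT and a 2~MB receive window, the window limit is roughly 26.7~Mbps: the window, not the link, bounds the transfer, and a single loss freezes part of that window for at least one RTT.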
\subsection{Delay-constrained messaging}
Finally, we consider applications with real-time constraints such as video conferencing.
Those applications send messages (e.g. video frames) that need to be successfully delivered within a short amount of time. The metric to optimize is the number of messages delivered on-time at the destination.
FEC can significantly improve the quality of such transfers by recovering from packet losses without retransmissions, at the expense of using more bandwidth. Researchers have already applied FEC to video applications~\cite{cavusoglu2003real,puri2001forward}. Some~\cite{cavusoglu2003real} take a redundancy rate as input and allocate the Repair Symbols according to the importance of each video frame. Others~\cite{puri2001forward} propose a congestion control scheme that reduces the impact of isolated losses on the sending rate. They then use this congestion control to convey knowledge from the transport layer to the application in order to adapt the transmission to the current congestion. We propose the reverse idea: the application transfers its knowledge directly to the transport protocol, which automatically adjusts its stream scheduler and redundancy rate given the application's requirements.
\section{FlEC}\label{section:flec}
\begin{figure}
\centering
\includegraphics[trim=0cm 0.2cm 0cm 0cm,width=0.9\linewidth]{design}
\caption{Design of the solution: a general framework with two pluggable anchors to redefine the reliability mechanism given the use-case.}
\label{fig:design}
\end{figure}
In this section, we present the Flexible Erasure Correction (\textit{FlEC}{}) framework. \textit{FlEC}{} starts from a previous theoretical work, AC-RLNC~\cite{cohen2020adaptive}, which proposes a decision mechanism to schedule repair symbols depending on the network conditions and the feedback received from the receiver. In this approach, repair symbols are sent in reaction to two thresholds: the first is triggered as a function of the number of degrees of freedom missing at the receiver, while the second sends repair symbols \textit{once every RTT}. The original goal of the proposed algorithm~\cite{cohen2020adaptive} is to trade bandwidth for minimizing the in-order delivery delay of data packets.
We start from this idea of tracking the sent, seen and received degrees of freedom as a first step to propose a redundancy scheduler for the transport layer.
However, while this first idea provides a general behaviour, this may be insufficient for real applications with tight constraints that cannot be expressed with AC-RLNC's parameters. For example, a video-conferencing application may prefer to maximize bandwidth over low-delay links and therefore rely on retransmissions only, while FEC is needed over high-delay links as such retransmissions cannot meet the application's delay constraints.
Instead of proposing configurable constant thresholds to tune the algorithm, we make it dynamic by proposing two redefinable functions: $ds{}()$ (for ``\textit{delay-sensitivity}'') and $FECPattern{}()$. These two functions can be completely redefined to instantiate a reliability mechanism closely corresponding to the use-case. This allows completely different FEC behaviours for use-cases with distinct needs, such as HTTP versus video-conferencing. The $ds{}()$ threshold represents the sensitivity of the application to the in-order delivery delay of the data sent. In AC-RLNC, the FEC scheduler sends redundancy once per RTT. In \textit{FlEC}{}, the $FECPattern{}()$ dynamic function allows triggering the sending of FEC at specific moments of the transfer, depending on the use-case. Sending FEC once every RTT may deteriorate the application performance, especially when the delay is low enough to rely on retransmissions only; a dynamic $FECPattern{}()$ function avoids this problem. For instance, in a bulk download scenario, it can trigger FEC at the end of the download only and rely on retransmissions otherwise. For video transfer, it can trigger FEC after each video frame is sent. Figure~\ref{fig:design} illustrates the idea of \textit{FlEC}{}. The regular QUIC reliability mechanism is based on SR-ARQ. In \textit{FlEC}{}, the SR-ARQ mechanism is a particular case among many other possibilities. Algorithm~\ref{causalRLNCalgo} shows our generic framework and Table~\ref{tab:symbols} defines the variables used by our algorithms. We implement \textit{FlEC}{} using PQUIC~\cite{de2019pluginizing} and define $FECPattern{}()$ and $ds{}()$ as protocol operations. However, the same principles can be applied without PQUIC, with the application redefining the operations natively thanks to the user-space nature of QUIC.
\begin{table}
\centering
\begin{tabularx}{0.9\linewidth}{|c|X|}
\hline
$\hat{l}$ & the estimated uniform loss rate \\
\hline
$\hat{r}$ & the estimated receive rate \\
\hline
$\hat{G_p}$ (resp. $\hat{G_r}$) & the estimated transition probability from the GOOD to the BAD (resp. BAD to GOOD) state of a Gilbert loss model~\cite{gemodel} \\
\hline
$md$ & missing degrees of freedom\\
\hline
$ad$ & added degrees of freedom \\
\hline
$ds{}()$ & customizable threshold eliciting Repair Symbols given the application's delay sensitivity \\
\hline
$FECPattern{}()$ & customizable condition to send FEC using the application's traffic pattern \\
\hline
\end{tabularx}
\caption{Definition of the different symbols.}
\label{tab:symbols}
\end{table}
\begin{algorithm}\small
\begin{algorithmic}[1]
\Require $\hat{l}$
\Require $feedback$, the most recent feedback received from the peer
\Require $W$, the current coding window
\State $\hat{r} \gets 1-\hat{l}$
\State $ad \gets computeAd(W)$
\State $md \gets computeMd(W)$
\If{$feedback = \varnothing$}
\If{FECPattern()}
\State \Return $NewRepairSymbol$
\Else
\State \Return $NewData$
\EndIf
\Else
\State updateLossEstimations(feedback)
\If{FECPattern()}
\State \Return $NewRepairSymbol$
\ElsIf{$\hat{r}-\frac{md}{ad} < ds()$}
\State \Return $NewRepairSymbol$
\Else
\State \Return $NewData$
\EndIf
\EndIf
\caption{Generic redundancy scheduler algorithm. The $ds{}()$ and $FECPattern{}()$ thresholds are redefined by the underlying application. The algorithm is called at each new available slot in the congestion window of the protocol.\label{causalRLNCalgo}}
\end{algorithmic}
\end{algorithm}
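Algorithm~\ref{causalRLNCalgo} can be transcribed in C as follows. This is an illustrative sketch rather than the actual PQUIC plugin code: the structure fields, callback types and the SR-ARQ instantiation at the bottom are hypothetical names, and $updateLossEstimations()$ is omitted.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative transcription of the generic scheduler (Algorithm 1).
 * All names are hypothetical; the real FlEC code lives in PQUIC plugins. */
enum decision { NEW_DATA, NEW_REPAIR_SYMBOL };

struct sched_state {
    double l_hat;        /* estimated loss rate */
    double md, ad;       /* missing / added degrees of freedom */
    bool   has_feedback; /* feedback received from the peer */
};

/* Callbacks redefined per use-case. */
typedef bool   (*fec_pattern_fn)(const struct sched_state *);
typedef double (*ds_fn)(const struct sched_state *);

static enum decision schedule_next(const struct sched_state *s,
                                   fec_pattern_fn fec_pattern, ds_fn ds)
{
    double r_hat = 1.0 - s->l_hat;          /* estimated receive rate */
    /* updateLossEstimations() omitted for brevity. */
    if (fec_pattern(s))
        return NEW_REPAIR_SYMBOL;            /* pattern-elicited FEC */
    if (s->has_feedback && s->ad > 0 &&
        r_hat - s->md / s->ad < ds(s))       /* delay-sensitivity threshold */
        return NEW_REPAIR_SYMBOL;
    return NEW_DATA;
}

/* SR-ARQ-like behaviour: ds() = -l_hat, FECPattern() always false. */
static bool   no_pattern(const struct sched_state *s) { (void)s; return false; }
static double sr_arq_ds(const struct sched_state *s)  { return -s->l_hat; }
```

With $ds{}() = -\hat{l}$, the threshold $\hat{r} - md/ad < -\hat{l}$ simplifies to $md > ad$, i.e. a repair symbol is only sent when more symbols are missing than are covered by in-flight repair symbols.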
The $computeMd$ function computes the number of missing degrees of freedom ($md$), i.e. missing source symbols, in the current coding window. The $computeAd$ function computes the number of added degrees of freedom ($ad$), i.e. repair symbols, that protect at least one packet in the current coding window. Compared to AC-RLNC, we only consider in-flight repair symbols in $ad$, to support retransmissions when repair symbols are lost.
The higher the value returned by $ds{}()$, the more likely repair symbols are to be sent prior to the detection of a lost source symbol, and the more robust to losses the delay between the sending of the source symbols and their arrival at the receiver becomes. The extra cost is bandwidth utilization.
Sending repair symbols \emph{a priori} occupies slots in the congestion window and is likely to increase the delay between the generation of data in the application and its actual transmission. Setting $ds{}()$ to $-\hat{l}$ triggers the transmission of repair symbols only in reaction to a newly lost source symbol, thus implementing a behaviour similar to regular QUIC retransmissions. In this work, retransmissions are done using repair symbols to illustrate that the approach is generic. However, regular uncoded retransmissions can be used for better performance without loss of generality.
$FECPattern{}()$ allows regulating the transmission of \emph{a priori} repair symbols regardless of the channel state, in contrast with AC-RLNC~\cite{cohen2020adaptive} where this threshold is triggered once per RTT.
Table~\ref{tab:use_cases} describes how $ds{}()$ and $FECPattern{}()$ can be redefined to represent reliability mechanisms that fit the studied use-cases. The first row of the table shows how to implement the classical Selective-Repeat ARQ mechanism used by default in QUIC. The second one implements the behaviour of AC-RLNC~\cite{cohen2020adaptive}: $FECPattern{}()$ is triggered once every RTT, according to the EW parameter of AC-RLNC. The third one is tailored for the bulk use-case: $ds{}()$ is set to send Repair Symbols only when there are missing symbols at the receiver, and $FECPattern{}()$ sends Repair Symbols when there is no more data to send. The two other rows are explained in detail in the next sections. In this table, $c$ is a non-negative user-defined constant: the higher $c$ is, the more sensitive the mechanism is to variance in the loss rate.
\begin{table}
\centering
\begin{tabularx}{0.9\linewidth}{|c|c|X|}
\hline
\textbf{Use-case} & \textbf{$ds{}()$} & \textbf{$FECPattern{}()$} \\
\hline
Bulk transfer (SR-ARQ) & $-\hat{l}$ & $false$ \\
\hline
AC-RLNC~\cite{cohen2020adaptive} & $c\cdot\hat{l}$ & $true$ every RTT \\
\hline
Bulk transfer & $-\hat{l}$ & $allStreamSent()$ \\
\hline
Buffer-limited bulk & $c\cdot\hat{l}$& Algorithm~\ref{algo:buffer_ew} \\
\hline
Messaging & $-\hat{l}$ & Algorithm~\ref{algo:ew_messages} \\
\hline
\end{tabularx}
\caption{Definition of $ds{}()$ and $FECPattern{}()$ for the considered use-cases. }
\label{tab:use_cases}
\end{table}
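As a concrete example, the bulk transfer row of Table~\ref{tab:use_cases} could be expressed as the following pair of callbacks. The context structure and names are hypothetical; the actual plugin code differs:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-use-case callbacks for the bulk download row of the
 * table: rely on retransmissions during the transfer and only elicit
 * repair symbols once all stream data has been sent (tail protection). */
struct bulk_ctx {
    double l_hat;              /* estimated loss rate */
    int    unsent_stream_data; /* bytes of stream data not yet sent */
};

static double bulk_ds(const struct bulk_ctx *c)
{
    return -c->l_hat;          /* repair only reacts to detected losses */
}

static bool bulk_fec_pattern(const struct bulk_ctx *c)
{
    return c->unsent_stream_data == 0; /* allStreamSent(): protect the tail */
}
```

During the bulk of the transfer, $FECPattern{}()$ stays false and the scheduler behaves like SR-ARQ; once the stream is fully sent, the tail packets get protected by repair symbols.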
\subsection{Comparing \textit{FlEC}{} and previous work}
\textit{FlEC}{} stems from the shortcomings of AC-RLNC~\cite{cohen2020adaptive} and QUIC-FEC~\cite{michel2019quic}.
As explained earlier in this section, \textit{FlEC}{} shares with AC-RLNC the idea of tracking the state of the communication in terms of received, seen and lost symbols. However, it adds the tight and diverse application requirements to the loop in order to adopt a suitable behaviour for use-cases where FEC can be beneficial. It also adds all the transport-layer considerations, such as staying fair to the congestion control of the protocol upon loss recovery.
\textit{FlEC}{} also builds upon QUIC-FEC as it integrates similar transport-layer considerations. For instance, \textit{FlEC}{} uses a similar wire format as well as the concept of the \texttt{RECOVERED} frame in order to differentiate packet acknowledgements from symbol recoveries. However, QUIC-FEC was designed without considering the application traffic pattern or the channel conditions: the packet redundancy was not adaptive at all.
\section{Implementation}\label{section:implementation}
\textit{FlEC}{} is composed of two parts. First, the general \textit{FlEC}{} framework allows defining reliability mechanisms in a flexible way. This part is generic and is not intended to vary.
The second part contains the $FECPattern{}()$ and $ds{}()$ operations. These operations are designed to vary depending on the use-case, so that applications can redefine them based on their requirements.
We implement our \textit{FlEC}{} framework inside PQUIC~\cite{de2019pluginizing}. We implement the behaviours of the three use-cases discussed in this article by redefining $FECPattern{}()$ and $ds{}()$ to support the adequate reliability mechanism for each of them.
Similarly to previous works~\cite{michel2019quic,cohen2020adaptive}, we rely on random linear codes for the encoding and decoding of the symbols. This choice is made out of implementation convenience, although other error-correcting codes can be used as encoding/decoding tools with little adaptation. We advocate that even simpler codes such as Reed-Solomon can provide benefits for the considered use-cases, although the benefit may be lower (e.g. such simple block codes cannot mix repair symbols across generations, in contrast with random linear codes).
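For intuition, a random linear repair symbol is a coefficient-weighted XOR of the source symbols over $GF(2^8)$. The sketch below shows field multiplication by shift-and-reduce; the reduction polynomial (0x11B here) is only illustrative, as the polynomial of the actual RLC implementation may differ, and production code uses lookup tables or dedicated CPU instructions instead:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Shift-and-reduce multiplication in GF(2^8).  The reduction polynomial
 * 0x11B (x^8+x^4+x^3+x+1) is illustrative only. */
static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;              /* addition in GF(2^8) is XOR */
        uint8_t carry = a & 0x80;
        a <<= 1;
        if (carry)
            a ^= 0x1B;           /* reduce modulo the field polynomial */
        b >>= 1;
    }
    return p;
}

/* A repair symbol is the coefficient-weighted XOR of n source symbols. */
static void encode_repair(uint8_t *repair, size_t len, const uint8_t *coef,
                          const uint8_t **srcs, size_t n)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t acc = 0;
        for (size_t j = 0; j < n; j++)
            acc ^= gf256_mul(coef[j], srcs[j][i]);
        repair[i] = acc;
    }
}
```

A decoder that receives enough independent combinations solves the resulting linear system to recover the missing source symbols, which is what the online system solver mentioned below accelerates.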
We re-implemented the FEC plugin originally proposed in PQUIC~\cite{de2019pluginizing} to match the latest design of the FEC extension for QUIC~\cite{coding-for-quic}. We enhanced the $GF(2^8)$ RLC implementation to use dedicated CPU instructions and added an online system solver for faster symbol recovery. Most of the \textit{FlEC}{} protocol operations consist of monitoring the current packet loop and providing a shim layer between the PQUIC design and the \textit{FlEC}{} symbol scheduling algorithm. While we propose the \textit{FlEC}{} framework as a protocol plugin, it can also be implemented natively and provided by default with the protocol implementation. The application can also provide its native implementation for the $ds{}()$ and $FECPattern{}()$ operations.
The whole \textit{FlEC}{} framework implementation takes 8200 lines of code. It adds a complete FEC extension to the QUIC protocol with the RLC error-correcting code using PQUIC protocol plugins. This code is generic and does not have to be redefined by any application.
The code needed to define $ds{}()$ and $FECPattern{}()$ for the bulk and buffer-limited use-cases amounts to respectively 57 and 97 lines of C code, while the code for the messaging use-case takes 335 lines of C code. These small functions are the parts that applications can redefine to fit their use-case. Applications can also reuse our implementations for the three use-cases explored in this paper.
\begin{comment}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& \textit{FlEC}{} & Bulk & Buffer & Message \\
\hline
\#protoops & 61 & 2 & 2 & 7 \\
\hline
compressed size (kB) & 65.6 & 1.7 & 2.0 & 6.5\\
\hline
\# lines of code & 8249 & 57 & 95 & 335 \\
\hline
\end{tabular}
\caption{Protocol operations and their sizes for the FlEC framework and each considered use-case.}
\label{tab:protoops}
\end{table}
\end{comment}
\section{Introduction}
\IEEEPARstart{T}{he} transport layer is one of the key layers of the protocol stack. It ensures the end-to-end delivery of application data through the unreliable network layer. There are two main families of transport protocols: the unreliable datagram protocols like UDP, DCCP \cite{kohler2006designing}, RTP \cite{rfc1889} or QUIC datagrams~\cite{ietf-quic-datagram-10}, and the reliable ones such as TCP~\cite{rfc793}, SCTP~\cite{rfc4960} or QUIC \cite{langley2017quic}.
In recent years, QUIC has attracted a growing interest thanks to its design. The QUIC specification was finalised in May 2021~\cite{rfc9000} and, as of November 2021, it is already supported by more than 30M domains on the Internet~\cite{zirngibl2021s}. QUIC structures its control information and data into frames and supports stream multiplexing. Finally, QUIC includes the authentication and encryption functions of Transport Layer Security (TLS)~\cite{rfc8446}. By using the latter to encrypt and authenticate all data and most of the headers, QUIC prevents interference from middleboxes. Coupled with the availability of more than two dozen open-source implementations~\cite{quicwg-implems-list}, QUIC has become a very interesting platform for transport-layer research~\cite{kakhki2017taking,de2019pluginizing,polese2019survey,rquic:2019,ietf-quic-recovery}. While QUIC leverages the loss recovery and congestion control techniques that are part of modern TCP implementations, other loss recovery mechanisms using FEC have recently been considered to recover earlier from packet losses~\cite{coding-for-quic,michel2019quic,rquic:2021}.
Packet losses, caused either by congestion or by transmission errors, are frequent in today's networks and are seriously considered in the design of modern transport protocols~\cite{langley2017quic}. The first transport protocols relied on simple Automatic Repeat reQuest (ARQ) mechanisms~\cite{rfc793,bertsekas1992data} to recover from losses. Over the years, a range of heuristics have been proposed. For TCP, this includes the fast retransmit heuristic~\cite{rfc6582}, selective acknowledgements~\cite{rfc2018}, the Eifel algorithm~\cite{ludwig2000eifel}, recent acknowledgments~\cite{ale8985}, tail loss recovery techniques~\cite{rajiullah2015evaluation,rfc5827}, and others. Other reliable transport protocols have also benefited from this effort. SCTP and QUIC include many of the optimizations added to TCP over the years~\cite{budzisz2012taxonomy,langley2017quic}.
While retransmissions remain the prevalent technique to recover from packet losses, coding techniques have been proposed in specific scenarios such as ATM networks \cite{biersack1992performance}, audio/video traffic \cite{carle1997survey} or multicast services \cite{gemmell2003pgm,rfc3452} where the cost of retransmissions grows with the group size. Some of these approaches are supported by RTP extensions~\cite{rfc2733rtpfec,rfc6682rtpfecraptor}. Several of these approaches have been applied to TCP~\cite{sundararajan2009tcpnc,kim2012ctcp}, usually by using a coding sublayer below TCP and hiding the coding functions from the transport layer \cite{medard_xors:2020,sundararajan2011network,cui2014fmtcp}.
Despite these efforts, many Internet applications still select either an unreliable transport (such as UDP) or TCP, which forces in-order delivery and suffers from head-of-line blocking~\cite{langley2017quic}. If an application developer needs another reliability model, she needs to implement the logic directly inside the application.
In this article we propose to revisit the reliability mechanisms in the transport layer. Our main contribution is that we enable applications to finely tune the reliability mechanism of the transport protocol to closely fit their needs. We implement our solution using QUIC and protocol plugins~\cite{de2019pluginizing}, but our ideas are generic and can be applied to other protocols as well. We evaluate the flexibility of our techniques by considering a range of applications and show that our application-tailored reliability mechanism outperforms a one-size-fits-all solution.
This paper is organised as follows. We first discuss the current reliability mechanisms (Section~\ref{section:reliability}) of transport protocols and see how flexibility is currently provided by existing solutions (Section~\ref{section:tunable}). We then propose Flexible Erasure Correction (\textit{FlEC}{}), a novel reliability mechanism that can be easily redefined on a per-application basis (Section~\ref{section:flec}) to adapt the reliability mechanism to the application needs. We implement \textit{FlEC}{} inside QUIC (Section~\ref{section:implementation}) and demonstrate the benefits of the approach by studying three different use-cases (Sections~\ref{section:bulk}-\ref{section:messaging}) with competing needs that can all improve their quality of experience using \textit{FlEC}{}.
\section{Delay-constrained messaging}\label{section:messaging}
In this section, we present the implementation and evaluation of \textit{FlEC}{} tailored for delay-constrained messaging. The goal is to protect whole messages instead of naively interleaving Repair and Source Symbols. Using application knowledge, \textit{FlEC}{} protects as many frames as possible at once.
\subsection{Reliability mechanism}\label{section:messaging_mechanism}
We consider an application sending variable-sized messages, each having its own delivery deadline.
To convey these deadlines, we extend the transport API (Section~\ref{section:messaging_api}). Furthermore, we replace the QUIC stream scheduler to leverage application information (Section~\ref{section:messaging_scheduler}). This is possible because applications are bundled with their QUIC implementation and can easily extend it.
We then discuss and evaluate a specific use-case in Section \ref{section:messaging_evaluation}.
\subsubsection{Application-specific API}\label{section:messaging_api}
We propose the following API enabling an application to send deadline-constrained messages.
\paragraph{\texttt{send\_fec\_protected\_msg(msg, deadline)}}
The application submits its deadline-constrained messages. The QUIC protocol already supports the stream abstraction as an elastic message service. However, the stream priority mechanism proposed by QUIC, while being dynamic, is not sufficient to support message deadlines.
The protocol operation attached to this function inserts the bytes submitted by the application into a new QUIC stream, closes the stream, and attaches the application-defined delivery deadline to it.
The message must be delivered to the receiver within this amount of time to be considered useful.
If the network conditions prevent an on-time delivery, the message may be cancelled, possibly before being sent, and the underlying stream reset.
\paragraph{\texttt{next\_message\_arrival(arrival\_time)}}
This API call allows the application to indicate when it plans to submit the next message. While this API function is not useful for all kinds of unreliable messaging applications, applications having a constant message sending rate such as video-conferencing might benefit from providing such information.
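A sender could combine the two calls as follows. The prototypes below are hypothetical (the exact \textit{FlEC}{}/PQUIC signatures are not shown here) and are stubbed so that the sketch is self-contained:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct flec_conn flec_conn_t;   /* opaque connection handle */

/* Hypothetical prototypes for the two API calls described above,
 * stubbed to record their arguments for illustration. */
static uint64_t last_deadline_us, last_arrival_us;

static int send_fec_protected_msg(flec_conn_t *conn, const uint8_t *msg,
                                  size_t len, uint64_t deadline_us)
{
    (void)conn; (void)msg; (void)len;
    last_deadline_us = deadline_us;     /* real code opens+closes a stream */
    return 0;
}

static void next_message_arrival(flec_conn_t *conn, uint64_t arrival_time_us)
{
    (void)conn;
    last_arrival_us = arrival_time_us;  /* pacing hint for FECPattern() */
}

/* A ~30 fps video sender with a 100 ms delivery budget per frame. */
static void send_video_frame(flec_conn_t *conn, const uint8_t *frame,
                             size_t len, uint64_t now_us)
{
    send_fec_protected_msg(conn, frame, len, now_us + 100000);
    next_message_arrival(conn, now_us + 33333);
}
```

The arrival-time hint is what lets $FECPattern{}()$ decide whether it is worth waiting for the next frame before emitting repair symbols.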
\subsubsection{Application-tailored stream scheduler}\label{section:messaging_scheduler}
The knowledge provided by the application to the transport layer is not only useful for the coded reliability mechanism. The information provided by the application-defined API calls is also valuable for the QUIC stream scheduler. Without this information, the QUIC scheduler schedules high priority streams first and has two different ways to handle the scheduling of streams with the exact same priority: $i)$ round-robin or $ii)$ FIFO.
We let the application define its own scheduler to schedule its streams more accurately.
Algorithm~\ref{stream_scheduler} describes
our QUIC stream scheduler for deadline-constrained messaging applications.
\begin{algorithm}\small
\caption{Application-tailored scheduler for delay-constrained messaging.}\label{stream_scheduler}
\begin{algorithmic}[1]
\Require $ \mathcal S $, the set of available QUIC streams
\Require $\hat{OWD}$, the estimated one-way delay of the connection
\Require $now$, the timestamp representing the current time
\Require $FCBlocked(stream)$, indicating whether the specified stream is flow control-blocked.
\Require $closestDeadlineStream(\mathcal{S}, deadline)$, returning the non-expired stream with the closest delivery deadline to the specified deadline
\State $scheduledStream \gets \varnothing$
\State $currentDeadline \gets now + \hat{OWD}$ \Comment{Initialization}
\While {$scheduledStream = \varnothing$}
\State $candidate \gets closestDeadlineStream(\mathcal{S}, currentDeadline)$
\If {$candidate = \varnothing$}
\State \textbf{break}
\EndIf
\If {$\neg FCBlocked(candidate)$}
\State $scheduledStream \gets candidate$
\Else
\State $\mathcal{S} \gets \mathcal{S} \setminus \{candidate\}$
\State $currentDeadline \gets candidate.deadline$
\EndIf
\EndWhile
\If {$scheduledStream = \varnothing$}
\State \Return $defaultStreamScheduling(\mathcal{S})$ \Comment{Fallback}
\EndIf
\end{algorithmic}
\end{algorithm}
The $closestDeadlineStream()$ function searches among all the streams attached to a deadline for the one with the closest expiration deadline that still has a chance to arrive on-time given the current one-way delay; the scheduler then selects the first such stream that is not flow-control-blocked. Our implementation estimates the one-way delay as $\frac{RTT}{2}$; other methods exist~\cite{frommgen2018multipathowd,huitema-quic-ts-06}. Recent versions of \texttt{picoquic} include a mechanism estimating the one-way delay~\cite{picoquic} when the hosts' clocks are synchronized. Without clock synchronization, the estimated one-way delay can only be interpreted in relative terms: it helps to estimate the one-way delay variation but cannot be used for absolute decision thresholds such as the one in Algorithm~\ref{stream_scheduler}.
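Algorithm~\ref{stream_scheduler} can be sketched in a few lines of Python. Streams and the flow-control predicate are modelled abstractly; names mirror the pseudocode, and `fallback` stands in for $defaultStreamScheduling()$.

```python
# Sketch of the deadline-based stream scheduler (Algorithm 1).
# A stream only needs a `deadline` attribute here.

def schedule_stream(streams, owd, now, fc_blocked, fallback):
    streams = set(streams)
    current_deadline = now + owd
    while streams:
        # Non-expired streams: those still deliverable on time,
        # i.e. with a deadline at or after current_deadline.
        candidates = [s for s in streams
                      if s.deadline >= current_deadline]
        if not candidates:
            break
        # closestDeadlineStream(): soonest-expiring candidate.
        candidate = min(candidates, key=lambda s: s.deadline)
        if not fc_blocked(candidate):
            return candidate
        # Flow-control-blocked: skip it and look further ahead.
        streams.discard(candidate)
        current_deadline = candidate.deadline
    return fallback(streams)  # default scheduling as fallback
```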
\subsubsection{FECPattern{}() and ds{}() for delay-constrained messaging}
We now describe how our application redefines \textit{FlEC}{}. Our application is sensitive to the delivery delay of entire messages more than to the in-order delivery delay of individual packets. We thus set the $ds{}()$ threshold to $-\hat{l}$, as it is useful to retransmit non-recovered lost packets that can still arrive on-time. $FECPattern{}()$ is described in Algorithm~\ref{algo:ew_messages}. The algorithm triggers the sending of Repair Symbols to protect as many messages as possible, based on the messages' deadlines and the next expected message timestamp if provided by the application. The rationale is the following. If the unprotected messages can wait for new messages to arrive before being protected, $FECPattern{}()$ does not send Repair Symbols and waits for the arrival of new messages. Otherwise, Repair Symbols are sent to protect the entire window until it is considered fully protected. This idea of waiting for new messages before protecting comes from the fact that messages can be small, so sending Repair Symbols for each sent message can lead to a high overhead. By waiting, $FECPattern{}()$ adapts the code rate to the application needs.
\begin{algorithm}\small
\caption{$FECPattern{}()$ for delay-constrained messaging.}\label{algo:ew_messages}
\begin{algorithmic}[1]
\Require $ \mathcal S $, the set of available QUIC streams
\Require $\hat{OWD}$, the estimated one-way delay of the connection
\Require $now$, the timestamp representing the current time
\Require $closestDL(\mathcal{S}, deadline)$, returning the soonest-expiring message deadline at or after the specified deadline
\Require $last$, the last protected message.
\Require $nTriggered$, the number of times FECPattern{} has already been triggered since no symbol was added to the window.
\Require $maxTrigger$, the maximum number of times we can trigger this threshold for the same window
\Require $nextMsg$, the maximum amount of time to wait before a new message arrives ($+\infty$ if the message API is not plugged).
\Require $cwin$, $bif$, the congestion window and bytes in flight.
\Require $\theta$, the share of $cwin$ to keep available for directly upcoming messages.
\Require $\hat{l}$, $\hat{G_p}$, $\hat{G_r}$, see Table~\ref{tab:symbols}.
\State $nextDL \gets closestDL(\mathcal{S}, max(now + \hat{OWD}, last.deadline))$
\State $protect \gets (nextDL = \varnothing \lor now + \hat{OWD} + nextMsg + \epsilon \geq nextDL)$
\State $nUnprotected \gets W.last - last$
\If{$protect \land nUnprotected \neq 0$}
\Comment{\textit{Start Repair Symbols sequence}}
\State $nTriggered \gets 1$
\State $last \gets W.last$
\State $maxTrigger \gets \lceil max(\hat{l}*nUnprotected, \frac{1}{\hat{G_{r}}}) \rceil$
\ElsIf{$protect$}
\If{$nTriggered < maxTrigger$}
\State $nTriggered \gets nTriggered + 1$ \Comment{\textit{Continue sending}}
\Else
\State $protect \gets false$ \Comment{\textit{Enough symbols have been sent}}
\EndIf
\EndIf
\State \Return $appLimited() \land protect \land \frac{cwin}{bif} > \theta$
\end{algorithmic}
\end{algorithm}
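The core trade-off of Algorithm~\ref{algo:ew_messages} (protect now, or wait for upcoming messages to amortize the repair overhead) boils down to a single comparison, sketched below in Python. The trigger budget and congestion-window checks of the full algorithm are omitted, and the function name is illustrative.

```python
import math

def should_protect_now(now, owd, next_dl, next_msg, eps=1.0):
    """Return True when waiting next_msg milliseconds for another
    message would push recovery past the closest deadline next_dl.
    next_msg is +inf when the arrival-hint API is not plugged,
    which degenerates to protecting after every message."""
    if next_dl is None:          # no deadline-attached message left
        return True
    return now + owd + next_msg + eps >= next_dl
```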
\subsection{Evaluation}
\label{section:messaging_evaluation}
\begin{figure*}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{results_messages_bbr_with_api_95_pct}
\caption{Message delivery time 95th percentile, comparing \textit{FlEC}{} with API and the regular QUIC.}
\label{fig:p95_with_api_bbr}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{results_messages_bbr_with_api_98_pct}
\caption{Message delivery time 98th percentile, comparing \textit{FlEC}{} with API and the regular QUIC.}
\label{fig:p98_with_api_bbr}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{results_messages_bbr_with_api_99_pct}
\caption{Message delivery time 99th percentile, comparing \textit{FlEC}{} with API and the regular QUIC.}
\label{fig:p99with_api_bbr}
\end{minipage}
\end{figure*}
\begin{figure*}
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{message_long_deliveries_pct_bbr_with_and_without_api}
\caption{Messages received on-time comparing QUIC and \textit{FlEC}{} with and without API, using BBR.}
\label{fig:messages_ratio_with_and_without_api}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{bytes_sent_message_bbr_with_and_without_api}
\caption{Bytes sent by the server, comparing QUIC and \textit{FlEC}{} with and without API, using BBR.}
\label{fig:bytes_sent_with_and_without_api}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\textwidth}
\centering
\includegraphics[trim=0cm 0.7cm 0cm 0cm,width=\linewidth]{video_experimental_design_bbr}
\caption{Experimental design analysis for the delay-constrained messaging use-case using BBR.}
\label{fig:message_experimental_design}
\end{minipage}
\end{figure*}
We evaluate \textit{FlEC}{} under the messaging use-case using an application sending video frames as messages.
We set the deadline to 250 milliseconds, meaning that each frame must be delivered within this time. We use 86 seconds of a video recording from the Tixeo video-conference application~\cite{tixeo}. The framerate and bitrate are adjusted by the application. This recording starts at 15 frames per second during the first 6 seconds and runs at 30 frames per second afterwards. For each frame, we record its delivery delay, i.e. the time between when the application sends it and when it is delivered at the receiver.
We send each video frame in a different QUIC stream to avoid head-of-line blocking across frames upon packet losses.
The regular QUIC solution uses the default round-robin scheduler provided by PQUIC.
In the first set of experiments, we set the bandwidth to 8~Mbps and observe the performance of \textit{FlEC}{} in the presence of losses. For each experiment, the delay is sampled in the $[5, 200]$~ms range. We then perform an experimental design analysis over a wider parameter space.
Figure~\ref{fig:p95_with_api_bbr} and Figure~\ref{fig:p98_with_api_bbr} show the 95$^{th}$ and 98$^{th}$ percentiles of the message delivery times for each experiment.
We can see that while 95\% of the video frames are delivered successfully in every experiment, regular QUIC struggles to deliver 98\% of the submitted frames on time (i.e. before 250 milliseconds) with a one-way delay above 75 milliseconds.
Indeed, with a one-way delay above 75 milliseconds, the lost frames are retransmitted after more than 150 milliseconds and take more than 75 milliseconds to reach the receiver. Note that QUIC's loss detection mechanism takes a bit more than one RTT to consider a packet as lost to avoid spurious retransmissions due to reordering~\cite{ietf-quic-recovery}. These retransmitted frames thus arrive a few milliseconds before the deadline in the best case.
As we can see on Figure~\ref{fig:p98_with_api_bbr}, only a few experiments without \textit{FlEC}{} have more than 98\% of the frames arriving on-time, while \textit{FlEC}{} can cope with one-way delays up to 200 milliseconds. Figure~\ref{fig:p99with_api_bbr} shows that no experiment with regular QUIC succeeded in delivering 99\% of the video frames on time with a one-way delay above 75ms, while \textit{FlEC}{} succeeded in every experiment.
Note that the $FECPattern{}()$ algorithm plugged in this use-case tries to protect as many messages as possible with the same number of Repair Symbols by delaying the sending of Repair Symbols when new messages are expected soon.
This lazy Repair Symbol scheduling explains the plateau present around the 250ms delivery time in Figure~\ref{fig:p99with_api_bbr} and why the frame delivery time is larger than the one-way delay. In order to send as few Repair Symbols as possible, \textit{FlEC}{} delays the sending of Repair Symbols to the last possible moment while ensuring that lost data can be recovered before the deadline.
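The timing argument above can be made concrete with a quick calculation. This is a back-of-the-envelope sketch in which the ``bit more than one RTT'' detection delay is approximated as exactly one RTT.

```python
def best_case_retransmit_delivery(owd_ms):
    # Loss detected after roughly one RTT (2 * OWD), then the
    # retransmission needs one more OWD to reach the receiver,
    # so a retransmitted frame needs about 3 * OWD end-to-end.
    loss_detection = 2 * owd_ms
    return loss_detection + owd_ms

# With a 75 ms one-way delay, a retransmitted frame is delivered
# about 225 ms after its first transmission, leaving only 25 ms of
# slack before the 250 ms deadline; above 83 ms of one-way delay
# a pure retransmission can no longer meet the deadline at all.
```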
Figure~\ref{fig:messages_ratio_with_and_without_api} shows the ratio between the number of messages received on-time by \textit{FlEC}{} and by the regular QUIC implementation. To isolate the effects of the \textit{FlEC}{} API, the Figure also shows \textit{FlEC}{} results without leveraging the application knowledge brought by the API functions ($FlEC_{NO-API}$ on the Figure). This variant uses \texttt{picoquic}'s default stream scheduler and sends Repair Symbols for each newly sent message. As it is only a simplified version of Algorithm~\ref{algo:ew_messages}, we do not show the $FECPattern{}()$ algorithm of this second solution. Almost no experiment ended with fewer messages received on-time using the API-enabled \textit{FlEC}{} compared to QUIC, and a similar gain over regular QUIC is present for both \textit{FlEC}{} versions. The benefit of the \textit{FlEC}{} API lies instead in the amount of redundancy required to obtain those results.
We now analyze the redundancy overhead of our solution. Figure~\ref{fig:bytes_sent_with_and_without_api} shows the ratio of bytes sent by the server between regular QUIC and \textit{FlEC}{} with and without the API defined in Section~\ref{section:messaging_api}. The results of \textit{FlEC}{} without the API show that protecting every message blindly is very costly in terms of bandwidth. Indeed, for this video-conferencing transfer, many video frames sent by the application are smaller than the size of a full QUIC packet. The QUIC \texttt{REPAIR} frames sent by \textit{FlEC}{} contain additional metadata. In this case where the application traffic is thin, protecting every message may double the volume of sent data as shown on Figure~\ref{fig:bytes_sent_with_and_without_api}. Using \textit{FlEC}{} with the message-based API can save a lot of bandwidth by using application-aware stream and redundancy schedulers.
Note the portion of the graph between 5ms and 70ms one-way delays. For those configurations, no Repair Symbol is sent by \textit{FlEC}{}. Indeed, the messages are acknowledged by the peer before $FECPattern{}()$ triggers the sending of repair symbols. \textit{FlEC}{} thus naturally uses SR-ARQ when redundancy is not needed to meet the messages deadlines.
\subsubsection*{Experimental design analysis}
Figure~\ref{fig:message_experimental_design} shows the results of an experimental design analysis using the parameters depicted at the top of the Figure. The Figure shows CDFs of the number of bytes sent by the server and the number of messages received within the deadline. In a few cases, the \textit{FlEC}{} solution using the application-tailored API sends a similar number of bytes to regular QUIC: for some configurations, the delay was low enough that few or no Repair Symbols were sent. We can also see that no experiment revealed a lower number of on-time received messages compared to regular QUIC, showing the robustness of \textit{FlEC}{} under various network conditions.
\subsubsection*{Improvements}
Other information from the application could be leveraged in addition to the message deadlines. For example, the video frame types could influence the stream scheduling: H.264 I-frames are more important than P-frames, as the latter depend on the former to be decoded. The stream scheduling could be further improved by considering the dependencies between frames within a group of H.264 frames. Given the flexibility of \textit{FlEC}{}, the messaging API can easily be extended to let the application transfer this kind of knowledge to the transport stack.
\section{Introduction}
The Sloan Digital Sky Surveys (SDSS) have been observing the skies
from Apache Point Observatory (APO) since 1998 (using the 2.5m Sloan Foundation Telescope, \citealt{2006AJ....131.2332G}) and from Las Campanas Observatory (LCO) since 2017 (using the du Pont 2.5m Telescope).
Representing the fourth phase of SDSS, SDSS-IV \citep{2017AJ....154...28B} consists
of three main surveys: the Extended Baryon Oscillation Spectroscopic
Survey (eBOSS; \citealt{Dawson16}), Mapping Nearby Galaxies
at APO (MaNGA; \citealt{2015ApJ...798....7B}), and the APO Galactic
Evolution Experiment 2 (APOGEE-2; \citealt{Majewski2017}). Within eBOSS,
SDSS-IV has also conducted two smaller programs: the SPectroscopic
IDentification of ERosita Sources (SPIDERS; \citealt{Clerc2016,
Dwelly17}) and the Time Domain Spectroscopic Survey
(TDSS; \citealt{morganson15a}). These programs have investigated a
broad range of scales: cosmology with large-scale structure in
eBOSS; the population of quasars and variable or X-ray-emitting stars
in TDSS and SPIDERS; nearby galaxies in MaNGA; and the Milky Way and
its stars in APOGEE-2.
This paper documents the sixteenth data release from SDSS (DR16), the latest in a
series that began in 2001 (\citealt{2002AJ....123..485S}). It is the fourth data release from SDSS-IV (following DR13: \citealt{2017ApJS..233...25A}; DR14: \citealt{2018ApJS..235...42A} and DR15: \citealt{2019ApJS..240...23A}). A complete overview of the scope of DR16 is provided in \S \ref{sec:scope}, and information on how to access the data can be found in \S \ref{sec:access}. DR16 contains three
important milestones:
\begin{enumerate}
\item The first data from APOGEE-2 South (APOGEE-2S), which is mapping the
Milky Way in the Southern hemisphere from the du Pont Telescope at LCO. With SDSS now operating APOGEE instruments in two hemispheres,
all major components of the Milky Way are accessible (see \S \ref{sec:apogee}).
\item The first and final release of eBOSS spectra from the emission line galaxy (ELG)
cosmology program. The entirety of this large-scale structure survey was conducted in the interval
between DR14 and DR16. Covering the redshift range $0.6<z<1.1$, the eBOSS ELG program
represents the highest redshift galaxy survey ever conducted within SDSS.
\item The full and final release of spectra from the main observing program of eBOSS, completing that cosmological redshift
survey. DR16 therefore marks the end of a twenty-year stretch during
which SDSS performed a redshift survey of the large-scale structure in the
universe. Over this span, SDSS produced a catalog of spectroscopic galaxy
redshifts that is a factor of more than five larger than any other program.
DR16 provides spectra along with usable redshifts for around 2.6 million
unique galaxies. The catalogues that contain the information to accurately measure the clustering statistics of ELGs, luminous red galaxies (LRGs), quasars, and Lyman-$\alpha$ absorption will be released later (see \S \ref{sec:eboss}).
\end{enumerate}
DR16 also represents the full release of the TDSS subprogram, which in total releases spectra for 131,552 variable sources (see \S \ref{sec:tdss}). The SPIDERS subprogram will have a small number of observations in the future to cover eROSITA targets, but DR16 releases a number of Value Added Catalogs (VACs) characterizing both X-ray cluster and X-ray point sources that have already been observed (as well as the optical spectra; see \S \ref{sec:spiders}). There are no new data from MaNGA or MaStar \citep{Yan2019} in DR16; however, a number of new or updated VACs based on DR15 MaNGA data are released (see \S \ref{sec:manga}).
\section{Scope of DR16}
\label{sec:scope}
Following the tradition of previous SDSS data releases, DR16 is a cumulative data release: all previous data releases are included in DR16, and the data products and catalogs of these previous releases remain accessible on our data servers. Table \ref{tab:scope} shows the number of spectra contained in DR16 along with those from previous releases, demonstrating the incremental gains with each release. We strongly advise always using the most recent SDSS data release, as data will have been reprocessed using updated data reduction pipelines, and catalogs may have been updated with new entries and/or improved analysis methods. The changes between DR16 and previous data releases are documented in this paper and on the DR16 website, \url{https://www.sdss.org/dr16}.
\begin{deluxetable*}{llrrrr}
\tablewidth{6.5in}
\tablecaption{SDSS-IV spectroscopic data in DR13--DR16 \label{tab:scope}}
\tablehead{
\colhead{Survey} & \colhead{Target Category} & \colhead{DR13} & \colhead{DR14} & \colhead{DR15} & \colhead{DR16}}
\startdata
{eBOSS} \\
& \multicolumn{1}{r}{LRG samples} & 32968 & 138777 & 138777 & 298762 \\
& \multicolumn{1}{r}{ELG samples} & 14459 & 35094 & 35094 & 269889 \\
& \multicolumn{1}{r}{Main QSO Sample} & 33928 & 188277 & 188277 & 434820 \\
& \multicolumn{1}{r}{Variability Selected QSOs} & 22756 & 87270 & 87270 & 185816 \\
& \multicolumn{1}{r}{Other QSO samples} & 24840 & 43502 & 43502 & 70785 \\
& \multicolumn{1}{r}{TDSS Targets} & 17927 & 57675 & 57675 & 131552\\
& \multicolumn{1}{r}{SPIDERS Targets} & 3133 & 16394 & 16394 & 36300\\
& \multicolumn{1}{r}{Reverberation Mapping} & 849\tablenotemark{a} & 849\tablenotemark{a} & 849\tablenotemark{a} & 849\tablenotemark{a} \\
& \multicolumn{1}{r}{Standard Stars/White Dwarfs} & 53584 & 63880 & 63880 & 84605 \\
\tableline
{APOGEE-2} \T \\
&\multicolumn{1}{r}{Main Red Star Sample} & 109376 & 184148 & 184148 & 281575\\
&\multicolumn{1}{r}{AllStar Entries} & 164562 & 277371 & 277371 & 473307\tablenotemark{b} \\
&\multicolumn{1}{r}{APOGEE-2S Main Red Star Sample} & - & - & - &56480 \\
&\multicolumn{1}{r}{APOGEE-2S AllStar Entries} & - & - & - & 102200 \\
&\multicolumn{1}{r}{APOGEE-2S Contributed AllStar Entries} & - & - & - & 37409 \\
&\multicolumn{1}{r}{NMSU 1-meter AllStar Entries} & 894 & 1018 & 1018 & 1071 \\
&\multicolumn{1}{r}{Telluric AllStar Entries} & 17293 & 27127 & 27127 & 34016 \\
&\multicolumn{1}{r}{APOGEE-N Commissioning stars} & 11917 & 12194 & 12194 & 12194 \\
\tableline
MaNGA \\
&\multicolumn{1}{l}{MaNGA Cubes} & 1390 & 2812 & 4824 & 4824 \\
& \multicolumn{4}{l}{MaNGA main galaxy sample: } \\
& \multicolumn{1}{r}{\tt PRIMARY\_v1\_2} & 600 & 1278 & 2126 & 2126 \\
& \multicolumn{1}{r}{\tt SECONDARY\_v1\_2} & 473 & 947 & 1665 & 1665 \\
& \multicolumn{1}{r}{\tt COLOR-ENHANCED\_v1\_2} & 216 & 447 & 710 & 710 \\
& \multicolumn{1}{l}{MaStar (MaNGA Stellar Library)} & - & - & 3326 & 3326 \\
& \multicolumn{1}{l}{Other MaNGA ancillary targets\tablenotemark{c}} & 31 & 121 & 324 & 324 \\
\vspace{-0.1in}
\tablenotetext{a}{The number of RM targets remains the same, but the number of visits increases.}
\tablenotetext{b}{This number includes multiple entries for some stars; there are 437,485 unique stars.}
\tablenotetext{c}{Many MaNGA ancillary targets were also observed as part of the main galaxy sample, and are counted twice in this table; some ancillary targets are not galaxies.}
\enddata
\end{deluxetable*}
The content of DR16 is given by the following sets of data products:
\begin{enumerate}
\item eBOSS is releasing 860,935 new optical spectra of galaxies and quasars with respect to its previous SDSS data release. These targets were observed between MJD 57520 (May 11th 2016) and 58543 (March 1st 2019), and bring the total number of spectra observed by eBOSS to 1.4 million. This number includes spectra observed as part of the TDSS and SPIDERS sub-surveys, as well as the spectra taken as part of the eBOSS reverberation mapping ancillary program. All spectra, whether released previously or for the first time in this data release, have been processed using the latest version of the eBOSS data reduction pipeline {\tt v5\_13\_0}. In addition to the spectra, eBOSS is also releasing catalogs of redshifts, as well as various value-added catalogs (VACs; see Table \ref{table:vac}). DR16 is the last SDSS data release that will contain new eBOSS spectra from the main program, as this survey has now finished. Additional observations of X-ray sources under the SPIDERS program and continued monitoring of quasars under the reverberation mapping program are planned before the end of SDSS-IV, which will lead to another increment of single-fiber spectra from the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph in DR17.
\item APOGEE-2 is including 751,864 new infrared spectra;\footnote{The number of entries in the AllVisit file, which is larger than the number of combined spectra having entries in the AllStar file as listed in Table \ref{tab:scope}.} the new spectra comprise both observations of 195,936 new stars and additional epochs on targets included in previous DRs. The majority of the stars are in the Milky Way galaxy (including Omega Centauri), but DR16 also contains stars from the Large and Small Magellanic Clouds and dwarf spheroidal satellites. A total of 262,997 spectra, for 102,200 unique stars, were obtained in the southern hemisphere with the {APOGEE-S} spectrograph at LCO. These new spectra were obtained from MJD 57643 to MJD 58301 (September 12th 2016 to July 2nd 2018) for APOGEE-2N at APO and from MJD 57829 to MJD 58358 (March 17, 2017 to August 28, 2018) for APOGEE-2S at LCO. DR16 also includes all previously released APOGEE and APOGEE-2 spectra, which have been re-reduced with the latest version of the APOGEE data reduction and analysis pipeline. In addition to the reduced spectra, element abundances and stellar parameters are included in this data release. APOGEE-2 is also releasing a number of VACs (Table \ref{table:vac}).
\item MaNGA and MaStar are not releasing any new spectra in this data release; the spectra and data products included in DR16 are therefore identical to those released in DR15. However, MaNGA is contributing a number of new or updated VACs in DR16, which are based on the DR15 sample and data products (see Table \ref{table:vac}).
\item Since SDSS data releases are cumulative, {\bf DR16 also includes data from all previous SDSS data releases.} All previously released BOSS, eBOSS, APOGEE, and APOGEE-2 spectra have been reprocessed with the latest reduction and analysis pipelines. The MaNGA and MaStar data in DR16 are identical to those in DR15 \citep{2019ApJS..240...23A}; SDSS-III MARVELS spectra have not changed since DR12 \citep{2015ApJS..219...12A}. SDSS Legacy Spectra in DR16 are the same as those released in their final form in DR8 \citep{2011ApJS..193...29A}, and the SEGUE-1 and SEGUE-2 survey data in DR16 are identical to the final reductions released with DR9 \citep{2012ApJS..203...21A}. The SDSS imaging was most recently changed in DR13 \citep{2017ApJS..233...25A}, when it was recalibrated for eBOSS imaging purposes; DR16 contains this version of the imaging.
\end{enumerate}
An overview of the total spectroscopic content of DR16, with the number of spectra included, is given in Table \ref{tab:scope}. An overview of the value-added catalogs that are new or updated in DR16 can be found in Table \ref{table:vac}; adding these to the VACs previously released in SDSS, there are a total of 46 VACs in DR16\footnote{That is, 40 previously released VACs, 7 of which are updated in DR16, plus 6 VACs new to DR16.}.
\begin{deluxetable*}{lll}
\tablecaption{New or Updated Value Added Catalogs (VACs) \label{table:vac}}
\tablehead{\colhead{Description} & \colhead{Section} & \colhead{Reference(s)}}
\startdata
APOGEE-2 Red Clumps & \S \ref{vac:rc} & \citet{2014ApJ...790..127B}\\
APOGEE-2 \texttt{astroNN} & \S \ref{vac:astroNN} & \citet{2019MNRAS.483.3255L}\\
APOGEE-2 \textit{Joker} & \S \ref{vac:joker} & \citet{PriceWhelan2017, PriceWhelan2018, PriceWhelan2020} \\
APOGEE-2 OCCAM & \S \ref{vac:occam} & \citet{Donor2018,Donor2020} \\
APOGEE-2 StarHorse & \S \ref{vac:starhorse} & \citet{2018MNRAS.476.2556Q, Anders2019}; \\
& & \citet{Quieroz2019} \\
eBOSS ELG classification & \S \ref{vac:eboss}& \citet{Zhang2019} \\
SDSS Galaxy Single Fiber FIREFLY & \S \ref{vac:eboss} & \citet{Comparat2017} \\
SPIDERS X-ray clusters & \S \ref{vac:clusters} & \citet{Clerc2016}, C. Kirkpatrick et al. in prep.\\
SPIDERS Rosat and XMM\tablenotemark{a}-Slew Sources & \S \ref{vac:agn} & \citet{Comparat2020} \\
SPIDERS Multiwavelength Properties of RASS and XMMSL AGN & \S \ref{vac:rass} & \citet{Comparat2020}\\
SPIDERS Black Hole Masses & \S \ref{vac:coffey} & \citet{Coffey2019}\\
MaNGA Stellar Masses from PCA & \S \ref{vac:pca} & \citet{Pace2019a,Pace2019b} \\
MaNGA {\tt PawlikMorph} & \S \ref{vac:pawlikmorph} & \citet{Pawlik2016}
\enddata
\tablenotetext{a}{X-ray Multi-Mirror Mission}
\end{deluxetable*}
\section{Data Access}
\label{sec:access}
The SDSS data products included in DR16 are publicly available through several different channels. The best way to access the data products depends on the particular product, and the goal of the user. The different access options are described on the SDSS website \url{https://www.sdss.org/dr16/data_access/}, and we also describe them below. We provide a variety of tutorials and examples for accessing data products online at \url{https://www.sdss.org/dr16/tutorials/}.
All software that was used by SDSS to reduce and process data, as well as to construct derived data products is publicly available in either SVN or Github repositories; an overview of available software and where to retrieve it is given on \url{https://www.sdss.org/dr16/software/}.
\subsection{Science Archive Server; SAS}
The main path to access the raw and reduced imaging and spectroscopic data directly, as well as obtain intermediate data products and value-added catalogs, is through the SDSS Science Archive Server (SAS, \url{https://data.sdss.org/sas/}). Note that all previous data releases are also available on this server, but we recommend that users always adopt the latest data release, as these are reduced with the latest versions of the data reduction software. The SAS is a file-based system, which allows data downloads by browsing or through tools such as {\tt rsync}, {\tt wget} and Globus Online (see \url{https://www.sdss.org/dr16/data_access/bulk} for more details). The content of each data product on the SAS is outlined in its data model, which can be accessed through \url{https://data.sdss.org/datamodel/}.
\subsection{Science Archive Webapp; SAW}
Most of the reduced images and spectra on the SAS are also accessible through the Science Archive Webapp (SAW), which provides the user with options to display spectra and overlay model fits. The SAW includes search options to access specific subsamples of spectra, e.g. based on coordinates, redshift and/or observing programs. Searches can also be saved as ``permalinks" to allow sharing with collaborators and future use. Links are provided to download the spectra directly from the SAS, and to open SkyServer Explore pages for the objects displayed (see below for a description of the SkyServer). The SAW contains imaging, optical single-fiber spectra (SDSS-I/II, SEGUE, BOSS, eBOSS), infrared spectra (APOGEE-1/2) and stellar spectra of the MaStar stellar library. All of these webapps are linked from \url{https://dr16.sdss.org/}. Just like the SAS, the SAW provides access to previous data releases (back to DR8).
\subsection{Marvin for MaNGA}
Integral-field spectroscopic data (MaNGA) are not available in the SAW because they follow a different data format from the single-object spectra. Instead, the MaNGA data can be accessed through Marvin (\url{https://dr16.sdss.org/marvin/}; \citealt{2019AJ....158...74C}). Marvin can be used to visualize and analyze MaNGA data products and to perform queries on MaNGA meta-data remotely. Marvin also contains a suite of Python tools, available through pip-install, that simplify interacting with the MaNGA data products and meta-data. More information, including installation instructions for Marvin, can be found at \url{https://sdss-marvin.readthedocs.io/en/stable/}. Although no new MaNGA data products are included in DR16, Marvin has been upgraded to provide access to a number of MaNGA value-added catalogs based on DR15 data.
\subsection{Catalog Archive Server, CAS}
The SDSS catalogs can be found and queried on the Catalog Archive Server or CAS \citep{2008CSE....10...30T}. These catalogs contain photometric and spectroscopic properties, as well as derived data products. Several value added catalogs are also available on the CAS. For quick inspection of objects or small queries, the SkyServer webapp (\url{https://skyserver.sdss.org}) is the recommended route to access the catalogs: it contains amongst other facilities the Quick Look and Explore tools, as well as the option for SQL queries in synchronous mode directly in the browser. The SkyServer also contains tutorials and examples of SQL syntax (\url{http://skyserver.sdss.org/public/en/help/docs/docshome.aspx}). For larger queries, CASJobs (\url{https://skyserver.sdss.org/casjobs}) should be used, as it allows for asynchronous queries in batch mode. Users of CASJobs will need to create a (free of cost) personal account, which comes with storage space for query results \citep{2008CSE....10...18L}. A third way to access the SDSS catalogs is through the SciServer (\url{https://www.sciserver.org}), which is integrated with the CAS. SciServer allows users to run Jupyter notebooks in Docker containers, amongst other services.
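As an illustration, a small SQL query against the DR16 spectroscopic catalog can be built programmatically and then submitted through any of the routes above. This is a sketch: the helper function is hypothetical, and the web-service endpoint mentioned in the comment is an assumption that should be checked against the SkyServer documentation.

```python
def qso_query(n=10):
    # Build a query for n spectroscopically confirmed quasars with
    # clean redshift measurements from the SpecObj catalog table.
    return (
        f"SELECT TOP {n} specObjID, ra, dec, z "
        "FROM SpecObj "
        "WHERE class = 'QSO' AND zWarning = 0"
    )

# The resulting string can be pasted into the SkyServer SQL page,
# submitted as a CASJobs batch query, or sent to the SkyServer web
# service (assumed endpoint:
#   https://skyserver.sdss.org/dr16/SkyServerWS/SearchTools/SqlSearch).
```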
\subsection{Data Access for Education}
We are providing access to a growing set of Jupyter Notebooks that have been developed for introductory\footnote{\url{https://github.com/ritatojeiro/SDSSEPO}} and upper-level\footnote{\url{https://github.com/brittlundgren/SDSS-EPO}} university astronomy laboratory courses. These Python-based activities are designed to be run on the SciServer platform\footnote{\url{http://www.sciserver.org/}}, which enables the analysis and visualization of the vast SDSS dataset from a web browser, without requiring any additional software or data downloads.
Additionally, Voyages (\url{http://voyages.sdss.org/}) provides activities and resources to help younger audiences explore the SDSS data. Voyages has been specifically developed to be used in secondary schools, and contains pointers to K-12 US science standards. A Spanish-language version of these resources is available at \url{http://voyages.sdss.org/es}.
\section{APOGEE-2: First Release of Southern Hemisphere Data, and More from the North}
\label{sec:apogee}
APOGEE is performing a chemodynamical investigation across the entire Milky Way Galaxy with two similarly designed near-infrared, high-resolution multiplexed spectrographs. DR16 constitutes the fifth release of data from APOGEE, which has run in two phases (APOGEE-1 and APOGEE-2) spanning both SDSS-III and SDSS-IV. For approximately three years (August 2011 -- July 2014), APOGEE-1 survey observations were conducted with the 2.5m Sloan Foundation Telescope at APO as part of SDSS-III. In August 2014, at the start of SDSS-IV, APOGEE-2 continued data acquisition at the APO northern hemisphere site (APOGEE-2N). With the construction of a second spectrograph (\citealt{Wilson2019}), APOGEE-2 commenced southern hemisphere operations at the 2.5m Ir\'en\'e du Pont Telescope at LCO (APOGEE-2S) in April 2017. \citet{Majewski2017} provides an overview of the APOGEE-1 Survey; an overview of the APOGEE-2 Survey will be provided by S. Majewski et al. (in prep.).
In detail, the APOGEE data in DR16 encompasses all SDSS-III APOGEE-1 data and SDSS-IV APOGEE-2 data acquired with both instruments through August 2018. The current release includes two additional years of APOGEE-2N data and almost doubles the number of stars with available spectra as compared to the previous public release \citep[in DR14: ][]{2018ApJS..235...42A}. DR16 presents the first 16 months of data from APOGEE-2S. Thus, DR16 is the first release from APOGEE that includes data from across the entire night sky.
DR16 contains APOGEE data and information for 437,485 unique stars, including reduced and visit-combined spectra, radial velocity information, atmospheric parameters, and individual element abundances; nearly 1.8 million individual visit spectra are included. Figure \ref{fig:apogeedr16} displays the APOGEE DR16 coverage in Galactic coordinates, where each point represents a single ``field" and is color-coded by the overall survey component (i.e., APOGEE-1, APOGEE-2N, and APOGEE-2S). Fields newly released in DR16 are outlined in black. As shown in this figure, the dual-hemisphere view of APOGEE allows for targeting of all Milky Way components: the inner and outer halo, the four disk quadrants, and the full expanse of the bulge. The first APOGEE-2S observations of various Southern Hemisphere objects, such as Omega Centauri ($l,b = 309^\circ, 15^\circ$) and our current targeting of the Large and Small Magellanic Clouds ($l,b = 280^\circ, -33^\circ$ and $303^\circ, -44^\circ$, respectively), are visible in Figure \ref{fig:apogeedr16} as areas of high observation density. Moreover, DR16 features substantially increased coverage at high Galactic latitudes, as APOGEE continues to piggy-back on MaNGA-led observing during dark time. Figure \ref{fig:apogeenstars} has the same projection, but uses color-coding to convey the number of unique targets in each of the APOGEE fields. Particularly dense regions include the Kepler field, which serves multiple scientific programs, as well as APOGEE ``deep" fields observed with multiple ``cohorts" (see \citealt{Zasowski_2017_apogee2targeting}). Detailed discussions of our targeting strategies for each Galactic component, as well as an evaluation of their efficacy, will be presented in forthcoming focused papers (R. Beaton et al., in prep.; F. Santana et al., in prep.).
\begin{figure*}
\centering
\includegraphics[angle=0,width=15cm]{Figure1.png}
\caption{DR16 APOGEE sky coverage in Galactic coordinates. Each symbol represents a field, which is 7 square degrees for APOGEE-1 (cyan) and APOGEE-2N (blue) and 2.8 square degrees for APOGEE-2S (red); this difference is due to the different fields-of-view of the two telescopes (see \S \ref{apogeeS}). Fields with new data presented in DR16 are highlighted with a black outline. }
\label{fig:apogeedr16}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,width=15cm]{Figure2.png}
\caption{A sky map in Galactic coordinates showing the number of stars per APOGEE field (across APOGEE-1, -2N, and -2S). The disk is targeted with a symmetric dense grid within $|b| < 15\deg$. The bulge and inner Galaxy are densely covered at $l < 30 \deg$. Other special programs, like the {\it Kepler-2} follow-up, have initial data in DR16. The circle sizes reflect the different fields-of-view of APOGEE-N and APOGEE-S; see \S \ref{apogeeS}.}
\label{fig:apogeenstars}
\end{figure*}
\subsection{APOGEE Southern Survey Overview} \label{apogeeS}
The APOGEE-2S Survey has been enabled by the construction of a second APOGEE spectrograph. The second instrument is a near duplicate of the first with comparable performance, simultaneously delivering 300 spectra in the $H$-band wavelength regime ($\lambda$ = 1.5$\mu$m to 1.7$\mu$m) at a resolution of $R\sim 22,500$. Slight differences occur between the two instruments with respect to image quality and resolution across the detectors as described in detail in \citet{Wilson2019}.
The telescopes of the Northern and Southern Hemisphere sites have the same apertures. However, because the du Pont telescope was designed with a slower focal ratio (f/7.5) than the Sloan Foundation telescope (f/5), the resulting field-of-view for APOGEE-2S is smaller than APOGEE-2N and the fibers subtend a smaller angular area. The difference in field-of-view is evident in Figure \ref{fig:apogeedr16} by comparing the size of the red points (LCO fields) to those shown in blue or cyan (APO fields). However, the image quality (seeing) at LCO is generally better than that at APO, and this roughly compensates for the smaller angular diameter fibers such that the typical throughput at LCO is similar to, or even better than, that obtained at APO.
\subsection{General APOGEE Targeting}
Extensive descriptions of the target selection and strategy are found in \citet{Zasowski2013} for APOGEE-1 and in \citet{Zasowski_2017_apogee2targeting} for APOGEE-2. Details about the final selection methods used for APOGEE-2N and APOGEE-2S will be presented in R. Beaton et al. (in prep.) and F. Santana et al. (in prep.), respectively. These papers will provide descriptions of the ancillary and external programs, modifications to the original targeting strategies motivated by evaluations of their effectiveness, and adjustments of the field plan in response to weather gains or losses. We include all targeting information using flags and also provide the input catalogs on the SAS.
APOGEE-2 scientific goals are implemented in a three-tier strategy, in which individual programs aimed at specific science goals are classified as core, goal, or ancillary. The core programs produce a systematic exploration of the major components of the bulge, disk, and halo and are given the highest priority for implementation. The goal programs have more focused science goals, for example, follow-up of Kepler Objects of Interest (KOIs), and are implemented as a secondary priority. Ancillary programs are implemented at the lowest priority; such programs were selected through a competitive proposal process and have only been implemented for APOGEE-2N. Generally, the APOGEE-2N and APOGEE-2S survey science programs are implemented in the same manner.
In addition to a target selection analogous to that for the Northern observations, APOGEE-2S includes External Programs selected by the Chilean National Time Allocation Committee (CNTAC) or the Observatories of the Carnegie Institution for Science (OCIS) and led by individual scientists (or teams) who can be external to the SDSS-IV collaboration. External programs can be ``contributed" or proprietary; contributed data are processed through the normal APOGEE data reduction pipelines and are released along with other APOGEE data, whereas proprietary programs are not necessarily processed through the standard pipelines or released with the public data releases\footnote{To date, all External Programs have been ``contributed", so there are no proprietary external programs.}. The selection of external program targets does not follow the standard APOGEE survey criteria in terms of S/N or even source catalogs; the scientists involved were able to exercise great autonomy in target selection (e.g., no implementation of color cuts). External programs are implemented as classical observing programs, with observations for a given program occurring only on nights assigned to it.
The APOGEE portion of DR16 includes 437,485 unique stars. Among the unique stars, 308,000 correspond to core science targets, 112,000 to goal science targets, 13,000 to ancillary APOGEE-2N program targets, and 37,000 to APOGEE-2S external program targets. These numbers add up to more than 437,485 because some stars fall into multiple categories.
\subsection{APOGEE DR16 Data Products}
The basic procedure for processing and analysis of APOGEE
data is similar to that of DR14 data \citep{2018ApJS..235...42A,Holtzman2018}, but a few notable differences are
highlighted here. Full details, including verification analyses, are presented in \citet{Jonsson2020}.
\subsubsection{Spectral Reduction and Radial Velocity Determinations}
\citet{2015AJ....150..173N} describes the reduction procedure for APOGEE
data. While the basic reduction steps for DR16 were the same as described there, improvements were
implemented in the handling of bad pixels, flat fielding, and
wavelength calibration, all of which were largely motivated by small differences between the data produced by the APOGEE-S and APOGEE-N instruments. As an improvement over DR14, an attempt was
made to provide rough relative flux calibration for the spectra.
This was achieved by using observations of hot stars on the fiber plug plate
for which the spectral energy distributions are known.
Radial velocities were determined, as in DR14, using cross-correlation against a reference grid,
but a new synthetic grid was calculated for the reference grid,
using the same updated models that were used for the derivation
of stellar parameters and abundances (see \S \ref{details}). Unlike in DR14, no constraint based on the $J-K$ color
was placed on the effective temperature range of the synthetic grid; the DR14
constraint had led to a few cases of bad radial velocities.
For the faintest stars in DR16, especially those in dwarf
spheroidal galaxies, the individual visit spectra can have
low $S/N$, and, as a result, the radial velocity determination can fail. In many,
but not all, cases, such objects are flagged as having a bad or
suspect RV combination. Users working with data for
stars with $H>14.5$ need to be very careful with these data,
as incorrect RVs lead to incorrect spectral combination, which
invalidates any subsequent analysis. We intend to remedy this problem in the next data release.
\subsubsection{Atmospheric Parameter and Element Abundance Derivations}\label{details}
Stellar parameters and abundances are determined using the
APOGEE Stellar Parameters and Chemical Abundance Pipeline (ASPCAP,
\citealt{garciaperez2016})\footnote{\url{https://github.com/sdss/apogee}}. For DR16, entirely new synthetic
grids were created for this analysis. These grids were based
on a complete set of stellar atmospheres from the MARCS group \citep{Gustafsson2008}
that covers a wide range of $T_{\rm eff}$, $\log~g$, [Fe/H], [$\alpha$/M],
and [C/M]. Spectral syntheses were performed using the
Turbospectrum code \citep{Plez2012}. The synthesis was done using a revised
APOGEE line-list, which was derived, as before, from matching
very high resolution spectra of the Sun and Arcturus.
The revised line-list differs from that used previously by the
inclusion of lines from FeH, Ce II, and Nd II, some revisions
in the adopted Arcturus abundances, and a proper handling of
the synthesis of a center-of-disk solar spectrum. Details
on the line-list will be presented in V. Smith et al. (in prep). The synthetic
grid for red giants was calculated with seven dimensions, including [N/M] and
micro-turbulent velocity, as well as the atmospheric parameters previously listed; the
range for [C/M] and [N/M] was expanded over that used for DR14. For the giants, the [C/Fe] grid was expanded to include $-1.25$ and $-1.50$ dex and the [N/Fe] dimension to cover from $-0.50$ to $+1.50$ dex.
For dwarfs, an additional dimension was included to account for stellar rotation that included 7 steps (these being $v \sin i$ of 1.5, 3.0, 6.0, 12.0, 24.0, 48.0, and 96.0 km s$^{-1}$). During the stellar parameter and abundance fits, regions in the
spectrum that were not well matched in the solar and Arcturus
spectra were masked. The full details of the spectral grid derivations will be given in a dedicated paper on the APOGEE DR16 pipeline \citep{Jonsson2020}.
The DR16 analysis improves on the measurement of carbon and nitrogen
abundances in dwarf stars over DR14, as DR16 includes separate [C/M]
and [N/M] dimensions for dwarfs.
As for previous data releases, stellar parameters were determined
by searching for the best fit in the synthetic grid. The method
used to normalize the observed and model spectra was improved from
previous releases, and a new minimization option was adopted in
the {\sc FERRE} code \citep{Allende2006}\footnote{\url{https://github.com/callendeprieto/ferre}}. More details on these changes are given in \citet{Jonsson2020}. As in previous releases, after the stellar parameters have been determined, they are held fixed while the elemental abundances are determined; for these, only windows in the spectrum that are sensitive to the element in question are fit, and only the single relevant abundance dimension of the grid is varied. The windows are chosen based on where our synthetic spectra are sensitive to a given element, and at the same time \emph{not} sensitive to another element in the same abundance dimension. In addition to the elements
measured for DR14, an attempt was made to measure the abundance of
cerium using a single line from \citet{Cunha2017}, but these results show
significant scatter and may be of limited utility.
In previous releases, we derived an internal calibration of the abundances to account for biases as a function of $T_{\rm eff}$. For DR16 no such calibration is applied: with the modifications to the abundance pipeline, the trends with effective temperature for most elements have a reduced amplitude compared with previous data processing. The zero-point scale of the abundances was adjusted so that stars in the solar neighborhood (within 0.5 kpc of the Sun, according to {\it Gaia} parallaxes) with near-solar metallicity ($-0.05< $[M/H]$ <0.05$) have a mean [X/M] = 0. The reason for this choice is discussed in detail in \citet{Jonsson2020}.
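The zero-point step described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the ASPCAP implementation: the record layout (keys \texttt{dist\_kpc}, \texttt{m\_h}, and one raw [X/M] entry per element) is hypothetical.

```python
def abundance_zeropoints(stars, elements):
    """Compute per-element zero-point offsets from solar-neighborhood,
    near-solar-metallicity stars, then subtract them from every star.

    `stars` is a list of dicts with keys 'dist_kpc', 'm_h', and one
    raw [X/M] entry per element name in `elements` (hypothetical layout).
    """
    # Calibration sample: within 0.5 kpc of the Sun, -0.05 < [M/H] < +0.05
    calib = [s for s in stars
             if s['dist_kpc'] < 0.5 and -0.05 < s['m_h'] < 0.05]
    offsets = {}
    for el in elements:
        vals = [s[el] for s in calib]
        offsets[el] = sum(vals) / len(vals)  # mean raw [X/M] of the sample
    # Shift so the calibration sample has mean [X/M] = 0 for each element
    for s in stars:
        for el in elements:
            s[el] -= offsets[el]
    return offsets
```

After the shift, the calibration subsample has a mean [X/M] of exactly zero by construction, while all other stars move by the same per-element offset.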
The procedure is described in significantly more detail, along
with an assessment of the quality of the stellar parameters
and abundances, in \citet{Jonsson2020}.
\subsection{Data Quality}
The quality of the DR16 results for radial velocities, stellar parameters, and abundances is
similar to that of previous APOGEE data releases. Figure \ref{fig:apogeehr} shows a $T_{\rm eff}$-$\log~g$ ~diagram for the main sample APOGEE
stars in DR16. The use of MARCS atmosphere models \citep{Gustafsson2008} across the entire $T_{\rm eff}$-$\log~g$ ~range has significantly improved results for cooler giants; previously, Kurucz atmosphere models \citep{Kuruczmodel} were used for these cooler stars, and discontinuities were visible at the transition point between MARCS and Kurucz. While the stellar parameters are overall an improvement over previous DRs, we still apply external calibrations to both $\log~g$ ~and $T_{\rm eff}$. These calibrations are discussed fully in \citet{Jonsson2020}, which also describes the features in Figure \ref{fig:apogeehr} in more detail.
\begin{figure*}
\centering
\includegraphics[angle=0,width=15cm]{Figure3.png}
\caption{Spectroscopic Hertzsprung-Russell diagram, $T_{\rm eff}$ ~versus $\log~g$, for the main red star sample in APOGEE DR16. The points are color-coded by their total metal content, [M/H]. Dwarf-type stars, those with $\log~g > 3.7$~dex, have calibrated stellar parameters for the first time in DR16. New stellar grids also provide reliable measurements to cooler temperatures than in previous DRs.}
\label{fig:apogeehr}
\end{figure*}
Several fields were observed with both the APOGEE-N and APOGEE-S instruments. Comparing the results, we
find close agreement in the derived stellar parameters and abundances, with mean offsets of
$\Delta$ $T_{\rm eff}$ $\sim 10$ K, $\Delta$ $\log~g$ $\sim 0.02$ dex, and abundance offsets of $<0.02$ dex for
most elements.
\subsection{APOGEE Value Added Catalogs}
There are six APOGEE-associated VACs in DR16. A brief description of each VAC and the corresponding publications is given below; they are also listed in Table \ref{table:vac}.
\subsubsection{APOGEE Red Clump Catalog}\label{vac:rc}
DR16 contains the latest version of the APOGEE red-clump (APOGEE-RC) catalog. This catalog is created in the same way as the DR14 version \citep[which is presented in][]{2014ApJ...790..127B}, but with a more stringent $\log~g$~cut. The DR16 catalog contains 39,675 unique stars, about 30\% more than in DR14. The red clump stars are cross-matched to {\it Gaia} DR2 \citep{GaiaDR2} by matching (RA, Dec) within a radius of 2 arcsec using the Vizier xmatch service.\footnote{Accessed through the {\tt gaia\_tools} code available at \url{https://github.com/jobovy/gaia\_tools}.} We include proper motions through this match.
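A positional cross-match of this kind reduces to finding, for each red clump star, the nearest counterpart within 2 arcsec. The following brute-force sketch in plain Python is illustrative only (the actual catalog uses the Vizier xmatch service through \texttt{gaia\_tools}); the function names are our own:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation (arcsec) between two (RA, Dec) positions
    given in degrees, using the Vincenty formula for numerical stability."""
    l1, b1, l2, b2 = (math.radians(x) for x in (ra1, dec1, ra2, dec2))
    dl = l2 - l1
    num = math.hypot(math.cos(b2) * math.sin(dl),
                     math.cos(b1) * math.sin(b2)
                     - math.sin(b1) * math.cos(b2) * math.cos(dl))
    den = (math.sin(b1) * math.sin(b2)
           + math.cos(b1) * math.cos(b2) * math.cos(dl))
    return math.degrees(math.atan2(num, den)) * 3600.0

def crossmatch(cat_a, cat_b, radius_arcsec=2.0):
    """Match each (ra, dec) entry of cat_a to the nearest cat_b entry
    within `radius_arcsec`; returns (i, j) index pairs. O(N*M) brute force."""
    pairs = []
    for i, (ra, dec) in enumerate(cat_a):
        best, best_sep = None, radius_arcsec
        for j, (ra2, dec2) in enumerate(cat_b):
            sep = angular_sep_arcsec(ra, dec, ra2, dec2)
            if sep <= best_sep:
                best, best_sep = j, sep
        if best is not None:
            pairs.append((i, best))
    return pairs
```

Production cross-match services replace the quadratic loop with spatial indexing (e.g., HEALPix or k-d trees on unit vectors), but the matching criterion is the same.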
\subsubsection{APOGEE-\texttt{astroNN}}\label{vac:astroNN}
The APOGEE-\texttt{astroNN} value-added catalog contains the results from applying the \texttt{astroNN} deep-learning code to APOGEE spectra to determine stellar parameters, individual stellar abundances \citep{2019MNRAS.483.3255L}, distances \citep{2019arXiv190208634L}, and ages \citep{2019arXiv190104502M}. Full details of how all of these quantities are determined from the DR16 data are given in \S 2.1 of \citet{2019arXiv190511404B}. In addition, properties of the orbits in the Milky Way (and their uncertainties) for all stars are computed using the fast method of \citet{2018PASP..130k4501M} assuming the \texttt{MWPotential2014} gravitational potential from \citet{2015ApJS..216...29B}. Typical uncertainties in the parameters are 60 K in $T_{\rm eff}$, 0.2 dex in $\log~g$, 0.05 dex in elemental abundances, 5\,\% in distance, and 30\,\% in age. Orbital properties such as the eccentricity, the maximum height above the mid-plane, and the radial and vertical actions are typically precise to 4--8\,\%.
\subsubsection{APOGEE-\textit{Joker}}\label{vac:joker}
The APOGEE-\textit{Joker} VAC contains posterior samplings over binary star orbital parameters (i.e., Keplerian orbital elements) for 224,401 stars with three or more APOGEE visit spectra that pass a set of quality cuts, as described in \citet{PriceWhelan2020}.
The samplings are generated using \textit{The Joker}, a custom Monte Carlo sampler designed to handle the very multi-modal likelihood functions that are natural to sparsely-sampled or noisy radial velocity time series \citep{PriceWhelan2017, PriceWhelan2018}.
For some stars, these samplings are unimodal in period, meaning that the data are very constraining and the orbital parameters can be uniquely summarized; in these cases, we provide summary information about the samplings such as the maximum \textit{a posteriori} sample values.
\citet{PriceWhelan2020} describes the catalog resulting from applying {\it The Joker} to APOGEE DR16. Based on some simple cuts comparing the maximum likelihood posterior sample to the likelihood of a model for each source in which the radial velocities are constant (both quantities are provided in the VAC metadata), we estimate that there are $\gtrsim 25,000$ binary star systems robustly detected by APOGEE (see Section 5 of \citealt{PriceWhelan2020}). The vast majority of these systems have very poorly constrained orbital parameters, but the posterior samplings are still useful for hierarchical modeling of binary star population parameters (e.g., the period and eccentricity distributions), as demonstrated in \citet{PriceWhelan2020}.
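The constant-velocity comparison model amounts to fitting a single radial velocity to all epochs and evaluating its Gaussian likelihood. A minimal sketch follows; the detection threshold and function names are illustrative assumptions, not the VAC's actual cuts:

```python
import math

def constant_rv_loglike(rv, rv_err):
    """Max-likelihood constant-velocity model for an RV time series:
    the best-fit constant is the inverse-variance-weighted mean, and the
    log-likelihood is the Gaussian one evaluated at that mean."""
    w = [1.0 / e**2 for e in rv_err]
    vbar = sum(wi * vi for wi, vi in zip(w, rv)) / sum(w)
    chi2 = sum(wi * (vi - vbar) ** 2 for wi, vi in zip(w, rv))
    lnlike = (-0.5 * chi2
              - sum(math.log(math.sqrt(2 * math.pi) * e) for e in rv_err))
    return vbar, lnlike

def likely_binary(lnlike_orbit, lnlike_const, threshold=10.0):
    """Flag a source as a candidate binary when the best orbital sampling's
    log-likelihood exceeds the constant model's by an (illustrative) margin."""
    return (lnlike_orbit - lnlike_const) > threshold
```

Sources whose epochs scatter far beyond their reported errors yield a large likelihood difference and are flagged; sources consistent with a single velocity are not.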
While finalizing the DR16 Value Added Catalog release, we found a bug in the version of \textit{The Joker} that was used to generate the posterior samplings released in this VAC. This bug primarily impacts long-period orbital parameter samplings, and only for systems with radial velocity measurements that are very noisy or that have a short baseline relative to the periods of interest. The samplings for systems with precise data or with many epochs should not be affected. \citet{PriceWhelan2020} describe this bug in more detail. The VAC will be updated as soon as possible.
\subsubsection{Open Cluster Chemical Abundances and Mapping}\label{vac:occam}
The goal of the Open Cluster Chemical Abundances and Mapping (OCCAM) survey is to create a uniform (same spectrograph, same analysis pipeline) open cluster abundance dataset. We combine proper motion (PM) and radial velocity (RV) measurements from {\it Gaia} DR2 \citep{GaiaDR2} with RV and metallicity measurements from APOGEE to establish membership probabilities for each star observed by APOGEE in the vicinity of an open cluster. DR16 includes the second VAC from the OCCAM survey. We do not impose a minimum number of reliable member stars as in the previous version (released in DR15, \citealt{2019ApJS..240...23A}, and described in detail in \citealt{Donor2018}), but we do enforce a visual quality cut based on each cluster's PM-cleaned color-magnitude diagram (CMD). A detailed description of the updated methods is provided in \citet{Donor2020}. The VAC includes 10,191 APOGEE stars in the vicinity of 126 open clusters. Average RV, PM, and abundances for reliable ASPCAP elements are provided for each cluster, along with the visual quality determination. Membership probabilities based individually upon RV, PM, and [Fe/H]~are provided for each star. The reported cluster PM comes from the kernel smoothing routine used to determine cluster membership. Reported RVs and chemical abundances are simply the average values from cluster members; in practice, the uncertainties for chemical abundances are small and show little variation among stars of the same cluster.
\subsubsection{APOGEE DR16 StarHorse Distances and Extinctions}\label{vac:starhorse}
The APOGEE DR16 StarHorse catalog contains updated distance and extinction estimates obtained with the latest version of the StarHorse code \citep{2018MNRAS.476.2556Q, Anders2019}. The DR14 version of these results was published as part of the APOGEE DR14 Distance VAC (\citealt{2018ApJS..235...42A}, Sect. 5.4.3). DR16 results are reported for 388,815 unique stars, based on the following input data: APOGEE DR16 ASPCAP results; broad-band photometry from several sources (PanSTARRS-1, 2MASS, AllWISE); and parallaxes from {\it Gaia} DR2 corrected for the zero-point offset of $-0.05$ mas found by \citet{2019ApJ...878..136Z}. Typical statistical distance uncertainties amount to $\sim$10\% for giant stars and $\sim$3\% for dwarfs. Extinction uncertainties amount to $\sim$0.07 mag for stars with optical photometry and $\sim$0.17 mag for stars with only infrared photometry. The APOGEE DR16 StarHorse results are presented in \citet{Quieroz2019}, together with updated results derived using spectroscopic information from other surveys.
\section{eBOSS: Final Sample Release}
\label{sec:eboss}
Observations for eBOSS were conducted with the 1000-fiber BOSS spectrograph \citep{Smee2013}
to measure the distance-redshift relation with the
baryon acoustic oscillation (BAO) feature that appears at a scale of roughly 150 Mpc.
The last observations that will contribute to large-scale structure measurements concluded
on March 1, 2019.
All eBOSS observations were conducted simultaneously with either TDSS
observations of variable sources or SPIDERS observations of X-ray sources.
\subsection{eBOSS}
The first generation of SDSS produced a spectroscopic LRG sample \citep{eisenstein01a}
that led to a detection of the BAO feature in the clustering of matter \citep{eisenstein05a}
and the motivation for dedicated large-scale structure surveys within SDSS.
Over the period 2009--2014, BOSS completed a BAO program using more than 1.5 million galaxy
spectra spanning redshifts $0.15<z<0.75$ and more than 150,000 quasars at $z>2.1$
that illuminate the matter density field through the Lyman-$\alpha$ forest.
Operating over the period 2014--2019, eBOSS is the third and final in the series of SDSS large-scale structure surveys.
The eBOSS survey was designed to obtain spectra of four distinct target classes to trace the underlying matter density field
over an interval in cosmic history that was largely unexplored during BOSS.
The LRG sample covers the lowest redshift interval within eBOSS, providing an expansion of the high
redshift tail of the BOSS galaxy sample \citep{reid16a} to a median redshift $z=0.72$. Galaxy
targets \citep{prakash16a} were selected from imaging catalogs derived from
the Wide-field Infrared Survey Explorer \citep[WISE;][]{wright10a}
and SDSS DR13 imaging data.
A new sample of ELG targets covering $0.6<z<1.1$ was observed over the period 2016--2018, leading
to the highest redshift galaxy sample from SDSS. Galaxy targets were identified using imaging from the Dark Energy Camera
\citep[DECam;][]{flaugher15a}.
The ELG selection \citep{raichoor17a} reaches a median redshift $z=0.85$
and represents the first application of the DECam Legacy Survey data \citep[DECaLS;][]{Dey2019} to
spectroscopic target selection in any large clustering survey.
The quasar sample covers the critical redshift range $0.8 < z < 2.2$ and is
derived from WISE infrared and SDSS optical imaging data \citep{myers15a}.
Finally, new spectra of $z>2.1$ quasars were obtained to enhance the final BOSS
Lyman-$\alpha$ forest measurements \citep{bautista17a,masdesbourboux17a}. A summary of all these target categories, with redshift ranges and numbers, is provided in Table \ref{table:eboss}.
The surface area and target densities of each sample were chosen to maximize sensitivity to the clustering of
matter at the BAO scale.
The first major clustering result from eBOSS originated from the two-year, DR14 quasar sample.
Using 147,000 quasars, a measurement of the spherically averaged BAO distance at an effective
redshift $z=1.52$ was performed with 4.4\% precision \citep{qso17a}.
The DR14 LRG sample was used successfully to measure the BAO distance scale at
2.6\% precision \citep{bautista17b} while the DR14 high redshift quasar sample led to improved
measurements of BAO in the auto-correlation of the Lyman-$\alpha$ forest \citep{desaintagathe19a}
and the cross-correlation of Lyman-$\alpha$ forest with quasars \citep{blomqvist19a}.
The DR14 samples have also been used to perform measurements of
redshift-space distortions (RSD) \citep[e.g.][]{zarrouk18a}, tests of inflation \citep[e.g.][]{castorina19a},
and new constraints on the amplitude of matter fluctuations and the
scalar spectral index \citep[e.g.][]{Chabanier19}.
\subsubsection{Scope of eBOSS}
With the completion of eBOSS, the BOSS and eBOSS samples provide six distinct target samples covering
the redshift range $0.2<z<3.5$. The number of targets for each sample is summarized in Table~\ref{table:eboss}
and the surface density of each sample is shown in Figure~\ref{fig:ebossnz}.
Figure \ref{fig:ebosssky} shows the DR16 eBOSS spectroscopic coverage in Equatorial coordinates.
For comparison, the SDSS-III BOSS coverage is shown in gray. The programs that define the unique
eBOSS clustering samples are SEQUELS (Sloan Extended Quasar, ELG, and LRG Survey; initiated during SDSS-III; LRG and quasars),
eBOSS LRG+QSO (the primary program in SDSS-IV observing LRGs and Quasi-stellar objects, or QSOs), and ELG (new to DR16).
\begin{deluxetable}{lcc}
\tablecaption{Main Target Samples in eBOSS and BOSS\label{table:eboss}}
\tablehead{\colhead{Sample} & \colhead{Redshift Range\tablenotemark{a}} & \colhead{Number}}
\startdata
eBOSS LRGs & $0.6<z<1.0$ & 298,762 \\
eBOSS ELGs & $0.6<z<1.1$ & 269,889 \\
eBOSS QSOs & $0.8<z<2.2$ & 434,820 \\
BOSS ``LOWZ"\tablenotemark{b} & $0.15<z<0.43$ & 343,160 \\
BOSS CMASS\tablenotemark{c} & $0.43<z<0.75$ & 862,735 \\
BOSS Lyman-$\alpha$ QSOs & $2.2<z<3.5$ & 158,917
\enddata
\tablenotetext{a}{Range used in clustering analysis}
\tablenotetext{b}{The low redshift targets in BOSS}
\tablenotetext{c}{``Constant mass" targets in BOSS}
\end{deluxetable}
\begin{figure}
\centering
\includegraphics[angle=0,width=8.7cm]{Figure4.png}
\caption{The normalized surface density ($N(z)$) of the spectroscopically confirmed objects used
in the BOSS and eBOSS clustering programs. The SDSS-I, -II, and -III sample of confirmed
quasars is also presented to demonstrate the gains in the number of quasars that eBOSS
produced over the interval $0.8<z<2.2$.}
\label{fig:ebossnz}
\end{figure}
\begin{figure*}
\centering
\includegraphics[angle=0,width=15cm]{Figure5.jpg}
\caption{DR16 eBOSS spectroscopic coverage in Equatorial coordinates (map centered at RA = 8h).
Each symbol represents the location of a completed spectroscopic plate scaled to the approximate
field of view. The SPIDERS-maximal footprint is the same as that of BOSS, and the SPIDERS-complete footprint is that of SEQUELS. For more details on the SPIDERS coverage, see \citet{Comparat2020}.}
\label{fig:ebosssky}
\end{figure*}
\subsubsection{Changes to the eBOSS Spectral Reduction Algorithms}
The data in DR16 were processed with the version {\tt v5\_13\_0} of the
pipeline software \texttt{idlspec2d} \citep{bolton2012,Dawson13a}.
This is the last official version of the software that will be used
for studies of large-scale structure with the SDSS telescope. Table~\ref{tab:pipeline_changes} presents a summary of the major changes in the pipeline during SDSS-IV (eBOSS) and we document the final changes to \texttt{idlspec2d} below.
\begin{deluxetable*}{lll}
\tablecaption{Spectroscopic pipeline major changes \label{tab:pipeline_changes}}
\tablehead{
\colhead{Data Release} & \colhead{\texttt{idlspec2d} version} & \colhead{Major changes}
}
\startdata
DR12 & \texttt{v5\_7\_0} & Final SDSS-III/BOSS release \\
DR13 & \texttt{v5\_9\_0} & Adapting software to SDSS-IV/eBOSS data, new unbiased extraction algorithm \\
DR14 & \texttt{v5\_10\_0} & New unbiased flux correction algorithm, ADR\tablenotemark{a} corrections on individual exposures \\
DR16 & \texttt{v5\_13\_0} & Improved background fitting in extraction, new stellar templates for flux calibration
\enddata
\tablenotetext{a}{atmospheric differential refraction}
\end{deluxetable*}
There were two major changes from DR14 to DR16 to the reduction algorithm.
First, a new set of stellar templates is used for the flux
calibration. This set of templates was produced for the Dark Energy
Spectroscopic Instrument (DESI) pipeline and provided to eBOSS.
These templates reduce residuals in flux calibration relative to previous releases
through improved modeling of spectral lines in the F-type standard stars.
The second major change was in the extraction step, where the background
flux is now fitted prior to the extraction of the flux of individual traces. This modification improved the stability of extraction and removed occasional artifacts
observed in low signal-to-noise spectra. While these changes did not measurably improve the
spectroscopic classification success rates, they represent an improvement
in the overall data quality.
\subsubsection{eBOSS Value Added Catalogs\label{vac:eboss}}
There are two VACs based on eBOSS data which we release in DR16. These
catalogs offer insight into galaxy physics with eBOSS spectra beyond the core cosmological goals.
The catalogs are described below.
\begin{itemize}
\item {\it Classification eBOSS Emission Line Galaxies:}
This catalog gives the classification of $0.32<z<0.8$ eBOSS ELGs into four types: star-forming galaxies, composites, Active Galactic Nuclei (AGN), and Low Ionization Nuclear Emission-line Regions (LINERs). It also contains the parameters used for the classification: [OIII]/H$\beta$, [OII]/H$\beta$, the [OIII] line velocity dispersion, the stellar velocity dispersion, and the $u-g$, $g-r$, $r-i$, and $i-z$ colors. The classification is based on a random forest model trained using $z<0.32$ ELGs labeled using standard optical diagnostic diagrams \citep{Zhang2019}. The code, data, and data models are available at \url{https://github.com/zkdtc/MLC\_ELGs} in addition to the standard location for VACs (see \S \ref{sec:access}).
\item {\it FIREFLY Stellar Population Models of SDSS Galaxy Spectra (single fiber):}
We determine the stellar population properties (age, metallicity, dust reddening, stellar mass, and star formation history) for all single fiber spectra classified as galaxies that were published in this release (including those from SDSS-I, II, III and IV). This catalog contains the newly completed samples of eBOSS LRG and eBOSS ELG and will be useful for a variety of studies on galaxy evolution and cosmology \citep[e.g.][]{Bates2019}.
This is an update of the calculation done by \citet{Comparat2017} on the galaxy spectra in DR14 \citep{2018ApJS..235...42A}.
We perform full spectral fitting on individual galaxy spectra using the \textsc{firefly}\footnote{\url{https://github.com/FireflySpectra/firefly_release}} code \citep{Wilkinson_2015,2017MNRAS.465..688G,goddard2017,wilkinson2017}, which makes use of high spectral resolution stellar population models from \citet{Maraston2011}.
Calculations are carried out using the \citet{Chabrier2003} stellar initial mass function and two input stellar libraries MILES and ELODIE \citep{MILES,MILES_2011,Prugniel2007}.
We publish all catalogs of properties through the SDSS web interfaces (SAS and CAS, see \S \ref{sec:access}) and also make individual best-fit model spectra available through the \textsc{firefly} website\footnote{\url{https://www.sdss.org/dr16/spectro/eboss-firefly-value-added-catalog/}}.
\end{itemize}
In the future, we will also present a catalog of more than 800 candidate strong galaxy gravitational lens systems discovered by the presence of higher redshift background emission-lines in eBOSS galaxy spectra (M. Talbot et al. in prep). This Spectroscopic Identification of Lensing Object (SILO) program extends the method of the BOSS Emission-Line Lens Survey~\citep[BELLS;][]{2012ApJ...744...41B} and Sloan Lens ACS~\citep[SLACS;][]{2006ApJ...638..703B} survey to higher redshift, and has recently been applied to the spectroscopic discovery of strongly lensed galaxies in MaNGA~\citep[SILO;][]{2018MNRAS.477..195T}. The catalog will be released after DR16, but will be based on the DR16 sample.
\subsubsection{Anticipated Cosmology Results from eBOSS}
The final eBOSS BAO and RSD measurements will be presented in a series of independent analyses
for each target class. The measurements performed with LRG, ELG, and $z<2.2$ quasars
will be performed in configuration space and Fourier space. Systematic
errors will be assessed through the use of large N-body mock catalogs populated
with galaxies according to a halo occupation distribution prescription
that approximates the observed data, extending the work done in previous data releases \citep[e.g.][]{GIlMarin2018}. Consensus values of the angular diameter
distance, the Hubble parameter, and $f\sigma_8$
will be provided for each tracer based on the two measurements.
Measurements of the angular diameter distance and the Hubble parameter will be
reported at $z>2.1$ using both the auto-correlation of the final
Lyman-$\alpha$ forest sample and the cross-correlation of the Lyman-$\alpha$ forest
with quasars. All eBOSS results will be combined with the lower redshift
studies from SDSS and BOSS to offer new constraints on the cosmological model
as was done in the DR11 sample for BOSS \citep{aubourg15a}.
As part of the main cosmological goals of eBOSS, there will be a number of VACs based on the final eBOSS data released in DR16. VACs which are planned and will be publicly released in the future include:
\begin{itemize}
\item {\it Large Scale Structure (from ELGs, LRGs and QSOs).} These large-scale structure (LSS) VACs will be based on all available eBOSS data used for the clustering studies. Covering the main target classes, this VAC provides the tools to map the three-dimensional structure of the Universe across $0.6 < z< 2.2$ (A. Ross et al. in prep.).
\item {\it Lyman-$\alpha$ Forest Transmission VAC.} This VAC will contain the estimated fluctuations of transmitted flux fraction used for Lyman-$\alpha$ forest BAO measurements.
The catalog will provide the estimates over the Lyman-$\alpha$ and Lyman-$\beta$
rest frame regions of high redshift quasars (H. du Mas des Bourboux in prep.).
\item {\it eBOSS Quasar Catalog.} Beginning with SDSS-I, SDSS has maintained a tradition of releasing a visually-inspected quasar catalog alongside major data releases. The new SDSS-DR16Q catalog (DR16Q; \citealt{lyke2020}) will represent the most recent and largest catalog of known unique quasars within SDSS.
\end{itemize}
\subsection{Reverberation Mapping Program and Other Repeat Spectroscopy}
The SDSS Reverberation Mapping (SDSS-RM; \citealt{2015ApJS..216....4S}) project is a dedicated multi-object reverberation mapping (RM) program that began observations as a part of SDSS-III in January 2014.
Although not specifically established as a survey within eBOSS, observations of those same targets using
the BOSS spectrograph continued through SDSS-IV.
The SDSS-RM program monitors a sample of 849 quasars in a single $\sim 7\,{\rm deg}^2$ pointing (observed with three plates 7338, 7339 and 7340 with identical targets), with the overall goal of measuring black hole masses via RM in $\sim$100 quasars at a wide range of redshifts (details on the quasar sample itself are provided by \citealt{Shen19a}). During the first season of SDSS-III monitoring, SDSS-RM obtained 32 epochs of SDSS spectroscopy, and has subsequently obtained $\sim 12$ epochs/yr during 2015--2017 and $\sim 6$ epochs/yr during 2018--2020 as part of SDSS-IV. The field has also been monitored photometrically with the Canada-France-Hawaii Telescope (CFHT) and the Steward Observatory Bok telescope in order to increase the observing cadence and the overall yield of RM time-lag measurements. The SDSS-RM field is also coincident with the Pan-STARRS 1 \citep[PS1;][]{Kaiser2010} Medium Deep Field MD07, and thus has been monitored photometrically since 2010. Observations with SDSS and the Bok telescope will continue through 2020.
The program has been largely successful in obtaining RM measurements: \cite{Shen16a} reported several reverberation-mapping measurements from the program after analyzing the first year of spectroscopic data only, and \cite{Li17} measured composite RM signals in the same dataset. \cite{Grier2017} combined the first year of spectroscopy with the first year of photometry and recovered 44 lag measurements in the lowest-redshift subsample using the H$\beta$ emission line. With the additional years of SDSS-IV monitoring included, \cite{Grier19} reported 48 lag measurements using the C{\sc iv} emission line; the addition of another year of SDSS spectroscopy and the inclusion of the PS1 photometric monitoring from 2010--2013 demonstrated the utility of longer time baselines in measuring additional lags (\citealt{Shen19b}). \cite{Homayouni19} measured inter-band continuum lags in many sources, allowing for investigations of accretion-disk properties. Additional studies based on SDSS-RM data that aim to evaluate and improve RM and black hole-mass measurement methodologies have also been completed (\citealt{Wang19, Li19}). The final SDSS-RM dataset, which will make use of the PS1 monitoring of the SDSS-RM field and seven years of SDSS spectroscopic monitoring, will span more than ten years and allow for the measurement of lags in the highest-luminosity subset of the quasar sample.
The SDSS-RM dataset is extremely rich and allows for many other types of investigations beyond RM and black-hole masses. The SDSS-RM group has also reported on many other topics, such as studies of quasar host galaxies (\citealt{Shen15b, Matsuoka15, Yue18}), broad absorption-line variability (\citealt{Grier15, Hemler19}), studies of extreme quasar variability (\citealt{Dexter19}) and investigations of quasar emission-line properties (\citealt{Sun15, Denney16a, Shen16b, Denney16b, Sun18}).
RM observing will continue through 2020 at APO. Building on this program in SDSS-IV, an expanded multi-object spectroscopic RM program is included in the Black Hole Mapper program of the upcoming SDSS-V survey post-2020 (see \S \ref{sec:future}).
In addition to the dedicated RM program, there were several fields in SDSS-III and SDSS-IV that were observed multiple times and thus
offer similar potential for time-domain spectroscopic analyses. Those fields with at least four observations are as follows:
\begin{itemize}
\item {\it Plates 3615 and 3647:} contain the standard BOSS selection of targets. These two plates have identical science targets
and contain 14 epochs that are classified as ``good'' observations during SDSS-III.
\item {\it Plate 6782:} contains targets selected to be likely quasars based on variability from multi-epoch imaging data
in Stripe 82 \citep{2000AJ....120.1579Y,Ivezic2007}\footnote{Also see \url{https://classic.sdss.org/dr7/coverage/sndr7.html} for details on Stripe 82 multi-epoch imaging}. This plate contains four epochs that are classified as ``good'' observations during SDSS-III.
\item {\it Plates 7691 and 10000:} contain a standard eBOSS selection of LRG, quasar, SPIDERS, and TDSS targets. The two plates
have identical selections and were observed nine times during SDSS-IV.
\item {\it Plate 9414:} contains ELG targets and TDSS targets from Stripe 82 and was observed four times
to develop higher signal-to-noise spectra that could be used to test the automated redshift classification schemes.
\end{itemize}
These multi-epoch fields and a few others from BOSS are
described in more detail on the DR16 ``Special Plates'' web page (\url{https://sdss.org/dr16/spectro/special_plates/}).
\subsection{SPIDERS}
\label{sec:spiders}
SPIDERS (Spectroscopic IDentification of EROSITA Sources) is one of two smaller programs conducted within eBOSS. SPIDERS was originally designed as a multi-purpose follow-up program of the Spectrum-Roentgen-Gamma (SRG)/eROSITA all-sky survey \citep{Merloni12,Predehl16}, with the main focus on X-ray selected AGN and clusters of galaxies. Given the delay in the launch of SRG (which took place in July 2019, i.e. after the end of the main eBOSS survey observing), the program was re-purposed to target the X-ray sources from the ROSAT All-Sky Survey \citep[RASS;][]{Voges1999,Voges2000} and XMM-Newton \citep[X-ray Multi-mirror Mission;][]{Jansen2001AA365L1J}, which will eventually have their X-ray emission better characterized by eROSITA.
All SPIDERS spectra taken since the beginning of SDSS-IV have targeted either X-ray sources from the revised data reduction of ROSAT \citep[RASS, 2RXS;][]{Voges1999,Voges2000,Boller16} and XMM-Slew \citep{Saxton08A} catalogs, or red-sequence galaxies in clusters detected by ROSAT (part of the CODEX catalogue, \citealt{Finoguenov2020}) or by XMM \citep[XClass catalogue,][]{Clerc2012}. We define two areas: ``SPIDERS-Maximal'', which corresponds to the sky area covered by an SDSS legacy or BOSS/eBOSS/SEQUELS plate, and ``SPIDERS-Complete'', which corresponds to the area covered by the eBOSS main survey and SEQUELS good plates. The SPIDERS-Maximal (Complete) sky area amounts to $10,800$ ($5,350$) $\deg^2$. The sky area corresponding to SPIDERS-Complete is shown in Figure \ref{fig:ebosssky}.
\subsubsection{SPIDERS Clusters}
In this section we describe the DR16 target selection, data scope, and VACs related to X-ray clusters.
In DR16, 2,740 X-ray selected clusters (out of a total of 4,114) were spectroscopically confirmed by SPIDERS observing over the SPIDERS-Complete area.
This constitutes the largest X-ray cluster spectroscopic sample ever built.
It forms the basis of multiple studies of structure formation over cosmic time \citep{Furnell2018MNRAS4784952F,Erfanianfar2019arXiv190801559E}.
The majority of SPIDERS cluster targets are galaxies selected via the red-sequence technique around candidate X-ray galaxy clusters \citep{rykoff2012, rykoff2014}.
These systems were found by filtering X-ray photon over-densities in RASS with an optical cluster finder tool using SDSS photometry.
The target selection process for these targets is described fully in \citet{Clerc2016}.
The corresponding target bits and target classes are fully described in the SDSS DR14 data release \citep{2018ApJS..235...42A}.
We have also considered several additional SPIDERS cluster target classes which we describe below.
\subsubsection{SPIDERS Target selection update}
New for DR16 are data from chunks {\tt eboss20}, {\tt eboss26}, and {\tt eboss27}.
In chunk 20, \texttt{SPIDERS\_RASS\_CLUS} targets are obtained by extending the red-sequence search up to five times the cluster virial radius in CODEX clusters detected through their weak-lensing signature \citep{Shan2014}.
The virial radius used in the target selection is provided in the value-added catalog.
Moreover, in chunks 26 and 27, we introduce three new target subclasses, taking advantage of deeper optical datasets that enable cluster member measurements at higher redshifts.
\begin{itemize}
\item \texttt{SPIDERS\_CODEX\_CLUS\_CFHT}: Following the procedure described in \citet{brimioulle2013}, pointed Canada France Hawaii Telescope (CFHT)/Megacam observations and CFHT-LS fields provide deep $(u)griz$ photometry. A total of 54 (out of 462 targets) spectra were acquired and are labelled with the bit mask \texttt{EBOSS\_TARGET2 = $2^6$};
\item \texttt{SPIDERS\_CODEX\_CLUS\_PS1}: A sample of 249 high-redshift ($z_{\lambda}>0.5$) CODEX cluster candidates were searched for red-sequence counterparts in Pan-STARRS1 (PS1) \citep{flewelling2016} using a custom algorithm. A total of 129 (out of 1142 targets) spectra were acquired, and are labelled with the bit mask \texttt{EBOSS\_TARGET2 = $2^7$};
\item \texttt{SPIDERS\_CODEX\_CLUS\_DECALS}: These targets are the output of a custom red-sequence finder code applied to DECaLS photometric data\footnote{\url{http://legacysurvey.org/decamls/}} \citep[5th data release;][]{Dey2019}. A total of 48 spectra (out of 495 targets) were acquired and are labelled with the bit mask \texttt{EBOSS\_TARGET2 = $2^8$}.
\end{itemize}
\subsubsection{SPIDERS Galaxies and redshifts}
In the SPIDERS-Complete area, a total of 48,013 galaxy redshifts (observed by SDSS-I to IV) are matched to red-sequence galaxy targets, regardless of any actual membership determination (N. Clerc et al. in prep.).
Of those, 26,527 are SPIDERS targets specifically.
The additional redshifts were collected from past SDSS-I, II, III and other eBOSS programs.
The median $i$-band magnitudes of the 26,527 newly acquired targets are $i_{\rm fiber2}=20.0$ and $i_{\rm cModel}=18.5$.
The spectra are typical of red, passive galaxies at $0.05 \lesssim z \lesssim 0.7$, displaying characteristic absorption features (Ca H+K, G-band, MgI, NaD, etc.).
Such magnitude and redshift ranges and the purposely narrow spectral diversity make automated galaxy redshift determination a straightforward task for the eBOSS pipeline, which is well-optimized in this region of parameter space \citep{bolton2012}.
In total, 47,492 redshifts are successfully determined with a \texttt{ZWARNING\_NOQSO = 0}.
The remaining ($\sim 1\%$) with a non-zero flag are mainly due to unplugged fibers, bad columns on the spectrograph CCD, or very low signal-to-noise; their redshifts are not measured.
Full details on the statistical properties of the sample and in particular the success of redshift determination are given in C. Kirkpatrick et al. (in prep.).
\subsubsection{VAC: SPIDERS X-ray clusters catalog for DR16}\label{vac:clusters}
Within the SPIDERS-Complete area, 2,740 X-ray clusters showing a richness $\lambda_{\rm OPT} > 10$ were spectroscopically validated based on galaxy redshift data from SDSS-I to -IV in their red-sequence.
The richness, $\lambda_{\rm OPT}$, is defined as the sum of the membership probability of every galaxy in the cluster field.
It was measured by the redmapper algorithm \citep{rykoff2012}.
A total of 32,326 valid redshifts were associated with these galaxy clusters, leading to a median number of 10 redshifts per cluster red sequence.
The process of this validation is a combination of automatic and manual evaluations (C. Kirkpatrick et al. in prep).
An automated algorithm performed a preliminary membership assignment and interloper removal based on standard iterative $\sigma$-clipping methods.
The results of the algorithm were visually inspected by six experienced galaxy cluster observers (eleven different people since the beginning of the survey), ensuring at least two independent inspectors per system.
A Web-based interface was specifically developed for this purpose: using as a starting point the result of the automated algorithm, the tool allows each inspector to interactively assess membership based on high-level diagnostics and figures \citep[see Figure 16 in][]{Clerc2016}.
Validation is in most cases a consequence of finding three or more red-sequence galaxies in a narrow redshift window, all within the X-ray estimated virial radius, compatible with them all being galaxy cluster members.
A robust weighted average of the cluster member redshifts provides the cluster systemic redshift.
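The automated membership step described above, iterative $\sigma$-clipping around a systemic redshift followed by a robust redshift estimate, can be sketched as follows. This is an illustrative implementation only, not the SPIDERS validation code: the $3\sigma$ threshold, the convergence criterion, and the use of the median as the robust systemic-redshift estimator are assumptions of the sketch.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s


def sigma_clip_members(z_gal, z0, n_sigma=3.0, max_iter=20):
    """Illustrative iterative sigma-clipping for cluster membership.

    z_gal : array of redshifts of candidate red-sequence galaxies
    z0    : initial guess for the cluster systemic redshift
    Returns a boolean member mask and the final systemic redshift.
    (Assumes at least a few genuine members so the clip never empties.)
    """
    members = np.ones(len(z_gal), dtype=bool)
    z_sys = z0
    for _ in range(max_iter):
        # rest-frame peculiar velocities relative to the current systemic z
        v = C_KMS * (z_gal - z_sys) / (1.0 + z_sys)
        sigma = np.std(v[members])
        new_members = np.abs(v) < n_sigma * max(sigma, 1e-3)
        if np.array_equal(new_members, members):
            break  # converged: membership unchanged
        members = new_members
        # median as a simple robust stand-in for the weighted average
        z_sys = np.median(z_gal[members])
    return members, z_sys
```

Because the velocities are computed in the cluster rest frame, the clipping threshold corresponds to a physical line-of-sight velocity cut, so obvious interlopers are rejected within a few iterations.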
\subsubsection{X-ray point like sources} \label{vac:agn}
Throughout SDSS-IV, the SPIDERS program has been providing spectroscopic observations of ROSAT/RASS and XMMSL1 sources in the BOSS footprint \citep{Dwelly17}.
In addition to those targeted by SPIDERS, a large number of ROSAT and XMMSL1 sources received spectra during the SDSS-I/II \citep[in 2000--2008;][]{2000AJ....120.1579Y} and SDSS-III BOSS \citep[in 2009--2014;][]{Eisenstein2011,Dawson13a} surveys.
By combining the SDSS-I to IV spectra, the spectroscopic completeness achieved for the ROSAT sample is $10,590/21,945=50$\% in the SPIDERS-Complete area.
It increases to $53$\% when considering only high-confidence X-ray detections, and to $95$\% when considering only sources with high-confidence X-ray detections and optical counterparts with magnitudes in the nominal eBOSS survey limits ($17\le i_\texttt{mFiber2} \le 22.5$).
In the SPIDERS-Maximal area, the spectroscopic completeness of the ROSAT sample is lower: $17,300/40,929=42$\% ($45$\% and $62$\%, respectively).
For ROSAT sources, the major difficulty lies in the identification of secure counterparts of the X-ray sources at optical wavelengths, given the large positional uncertainties.
To solve this problem, the Bayesian cross-matching algorithm \textsc{NWAY} \citep{Salvato18a} was used. The priors for this were based on ALLWISE \citep{Cutri2013} infrared (IR) color-magnitude distributions which, at the depth of the 2RXS and XMMSL2 surveys, can distinguish between X-ray emitting and field sources.
WISE positions were matched to photometric counterparts in SDSS.
Therefore, for the DR16 value-added catalogues, instead of reporting the RASS or XMMSL1 measured X-ray fluxes, we report the updated 2RXS and XMMSL2 fluxes.
\citet{Comparat2020} presents the SPIDERS spectroscopic survey of X-ray point-like sources, and a detailed description of the DR16 value-added catalogues. We summarize it below.
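The counterpart-selection logic, a positional match likelihood modulated by a photometric prior, can be illustrated with a toy calculation. This is not the \textsc{NWAY} implementation or its API: the Gaussian positional likelihood, the form of the prior weights, and the normalisation over the candidate list alone (with no "no counterpart" hypothesis) are simplifying assumptions.

```python
import numpy as np


def match_posterior(sep_arcsec, pos_err_arcsec, prior_weight):
    """Toy posterior for X-ray/IR counterpart matching, in the spirit of
    Bayesian cross-matching (NOT the NWAY code).

    sep_arcsec     : angular separations of the candidate counterparts
    pos_err_arcsec : 1-sigma positional uncertainty of the X-ray source
    prior_weight   : relative prior per candidate, e.g. derived from the
                     WISE colour-magnitude distribution of X-ray emitters
    """
    sep = np.asarray(sep_arcsec, dtype=float)
    w = np.asarray(prior_weight, dtype=float)
    # Gaussian likelihood for the positional offset
    like = np.exp(-0.5 * (sep / pos_err_arcsec) ** 2)
    post = like * w
    return post / post.sum()  # normalise over the candidate list
```

A nearby candidate with field-like infrared colours can thus be out-ranked by a slightly more distant candidate whose colours resemble those of known X-ray emitters, which is the essential feature of the prior-based matching.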
\subsubsection{VACs: Multi-wavelength Properties of RASS and XMMSL AGNs}\label{vac:rass}
In these two VACs, we present the multiwavelength characterization over the SPIDERS-Complete area of two highly complete samples of X-ray sources:
\begin{enumerate}
\item The ROSAT All-Sky Survey (RASS) X-ray source catalog \citep[2RXS,][]{Boller16}
\item The XMM-Newton Slew Survey point source catalog \citep[XMMSL2, Version 2,][]{Saxton08A}.
\end{enumerate}
We provide information about the X-ray properties of the sources as well as of their counterparts at longer wavelengths (optical, IR, radio) identified first in the AllWISE IR catalog via a Bayesian cross-matching algorithm \citep{Salvato18a}.
We complement this with dedicated visual inspection of all the SDSS spectra, providing accurate redshift estimates (with confidence levels based on the inspection) and source classification, beyond the standard eBOSS pipeline results.
These two VACs supersede the two analogous ones published in DR14.
\subsubsection{VAC: Spectral Properties and Black Hole Mass Estimates
for SPIDERS DR16 Type 1 AGN}\label{vac:coffey}
This VAC contains optical spectral properties and black hole mass estimates for the DR16 sample of X-ray selected SPIDERS type 1 (optically unobscured) AGN.
This is the DR16 edition of an earlier SPIDERS VAC covering SPIDERS type 1 AGN up to DR14,
which was presented by \citet{Coffey2019} and \citet{2019ApJS..240...23A}.
The spectral regions around the MgII and $\rm H\beta$ emission lines were fit using a multicomponent model in order to derive
optical spectroscopic properties as well as derived quantities such as black hole mass estimates and Eddington ratios.
\subsubsection{Future plans for SPIDERS}
In addition to these programs, which are completed and fully released in DR16, performance verification data taken as part of the eROSITA Final Equatorial Depth Survey (eFEDS) are currently planned to be available by November 2019; these data should consist of 120 deg$^2$ observed to the final eROSITA all-sky survey depth over an equatorial field overlapping with the GAMA09 \citep{Robotham2011} survey window.
To address at least part of the original goals of SPIDERS (i.e. eROSITA follow-up) within SDSS-IV, we plan to dedicate a set of twelve special plates to these targets, to be observed in Spring 2020 and released as part of the final, seventeenth data release. An extensive eROSITA follow-up program is also planned for the next generation of SDSS, SDSS-V \citep[][and see \S \ref{sec:future}]{2017arXiv171103234K}, and for 4MOST \citep{Finoguenov2019Msngr.175...39F,Merloni2019Msngr.175...42M}.
\subsection{TDSS}
\label{sec:tdss}
TDSS (The Time Domain Spectroscopic Survey) is the second large subprogram of eBOSS; its goal is to provide the first large-scale, systematic survey of spectroscopic follow-up to characterize photometric variables. TDSS makes use of the BOSS spectrographs \citep{Smee2013}, using a small fraction (about 5\%) of the optical fibers on eBOSS plates. TDSS observations thus concluded with the end of the main eBOSS survey data collection on 1 March 2019, and the
full and final TDSS spectroscopic data are included in DR16.
There are three main components of TDSS, each now with data collection
complete:
\begin{enumerate}
\item The primary TDSS spectroscopic targets are selected from their
variability within Pan-STARRS1 (PS1) multi-epoch imaging photometry,
and/or from longer-term photometric variability between PS1 and SDSS
imaging data (see e.g. \citealt{Morganson2015}). TDSS single-epoch
spectroscopy \citep[SES;][]{Ruan2016} of these targets establishes the
nature of the photometric variable (e.g., variable star vs. variable quasar, and subclass), and in turn often suggests the character of the underlying variability (e.g., pulsating RR Lyrae vs. flaring
late-type star vs. cataclysmic variable).
More than 108,000 optical spectra of
these TDSS photometric variables have been obtained through DR16 (in both eBOSS and the eBOSS pilot program SEQUELS). Adding in similar variable sources that fortuitously already
have optical spectra within the SDSS archives (from SDSS-I, -II, or -III), approximately one-third of
the TDSS variables can be spectroscopically classified as variable stars,
and the majority of the remaining two-thirds are variable quasars.
\item A sample of 6,500 TDSS spectroscopic fibers was allotted to obtain
repeat spectra of known star and quasar subclasses of unusual and
special interest, which were anticipated or suspected to exhibit spectroscopic
variability in few epoch spectroscopy (FES; see e.g.
\citealt{MacLeod2018}). A recent example in this category is the set of
TDSS spectra of nearly 250 dwarf carbon stars that provide strong
evidence of statistical radial velocity variations indicative of
subclass binarity \citep{Roulston2019}.
\item The more recently initiated TDSS Repeat Quasar Spectroscopy (RQS)
program (see \citealt{MacLeod2018}) obtains multi-epoch spectra for
16,500 known quasars, sampling across a broad range of properties
including redshift, luminosity, and quasar subclass type. This program
has a larger sample size, greater homogeneity, and less a priori
bias toward specific quasar subclasses than the TDSS FES program. All RQS targets have at least one earlier epoch of SDSS spectroscopy already
available in the SDSS archive. The RQS program is designed especially to investigate
quasar spectral variability on multi-year timescales, and in addition to
its own potential for new discoveries of phenomena such as changing-look
quasars or broad absorption line (BAL) variability and others, also
provides a valuable (and timely) resource for planning of yet larger
scale multi-epoch quasar repeat spectral observations anticipated for
the SDSS-V Black Hole Mapper program (see \S \ref{sec:future}).
\end{enumerate}
In total, TDSS has selected or co-selected (in the latter case, often
with eBOSS quasar candidate selections) more than 131,000 spectra in
SDSS-IV that probe the time domain. All of these spectra are now being released in DR16.
\section{MaNGA: Value Added Catalogues Only}
\label{sec:manga}
MaNGA continues to observe galaxies at APO and, following the end of eBOSS observing, now uses all dark time at APO. Technical papers are available which overview the project \citep{2015ApJ...798....7B}, target selection \citep{Wake2017}, instrumentation \citep{Drory2015}, observing \citep{Law2015,Yan2016survey}, and data reduction and calibration strategies \citep{law16,yan16calibration}. For DR16 there is no new release of MaNGA data cubes or analysis products; all remaining data will be released in DR17. However, two new or updated MaNGA-related VACs are provided, which we document here. Previously released VACs, which are still available, include those that provide stellar masses, morphologies, and neutral hydrogen (HI) follow-up (for details of DR15 VACs see \citealt{2019ApJS..240...23A}\footnote{DR15 VACs are found at: \url{https://www.sdss.org/dr15/data_access/value-added-catalogs/}}).
\subsection{Stellar Masses from Principal Component Analysis}\label{vac:pca}
This VAC provides measurements of resolved and total galaxy stellar masses, obtained from a low-dimensional fit to the stellar continuum: \citet{Pace2019a} documents the method used to obtain the stellar continuum fit and measurements of resolved stellar mass-to-light ratio, and \citet{Pace2019b} addresses the aggregation into total galaxy stellar masses, including aperture correction and accounting for foreground stars. The measurements rely on MaNGA data reduction pipeline (DRP) version \texttt{v2\_5\_3}, data analysis pipeline (DAP) version \texttt{2.3.0}, and \texttt{PCAY} version \texttt{1.0.0}\footnote{\url{https://www.github.com/zpace/pcay}}. The VAC includes maps of stellar mass-to-light ratio and $i$-band luminosity (in solar units), a table of aperture-corrected total galaxy stellar masses, a library of synthetic model spectra, and the resulting low-dimensional basis set.
The low-dimensional basis set used to fit the stellar continuum is generated by performing principal component analysis (PCA) on a library of 40,000 synthetic star-formation histories (SFHs): the SFHs are delayed-$\tau$ models (${\rm SFR} \sim t ~ e^{-t / \tau}$) modulated by infrequent starbursts, sharp cutoffs, and slow rejuvenations (see \citealt{Pace2019a}, Section 3.1.1). Broad priors dictate the possible range in stellar metallicity, attenuation by foreground dust, and uncertain phases of stellar evolution such as blue stragglers and blue horizontal branch stars (see \citealt{Pace2019a}, Section 3.1.2). The system of six principal component spectra (``eigenspectra'') is used as a low-dimensional basis set for fitting the stellar continuum. A distribution of stellar mass-to-light ratio is obtained for each MaNGA spaxel (line of sight in a galaxy) by weighting each model spectrum's known mass-to-light ratio by its likelihood given an observed spectrum. The median of that distribution is adopted as the fiducial stellar mass-to-light ratio of a spaxel, and multiplied by the $i$-band luminosity to get an estimate for the stellar mass.
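The likelihood-weighting step can be sketched as follows. This is a schematic of the approach rather than the \texttt{PCAY} implementation: the $\chi^2$ distance in PCA-coefficient space, the uniform coefficient uncertainty \texttt{sigma}, and the weighted-median estimator are illustrative assumptions of the sketch.

```python
import numpy as np


def fiducial_mass_to_light(coeffs_obs, coeffs_models, ml_models, sigma=1.0):
    """Illustrative likelihood weighting of model mass-to-light ratios
    (a sketch of the approach, not the actual PCAY code).

    coeffs_obs    : PCA coefficients of the observed spectrum, shape (k,)
    coeffs_models : PCA coefficients of the model library, shape (n, k)
    ml_models     : known i-band M/L of each library model, shape (n,)
    sigma         : assumed coefficient-space uncertainty (a placeholder)
    """
    # chi-square distance of each model to the observation in PCA space
    chi2 = np.sum(((coeffs_models - coeffs_obs) / sigma) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))  # relative likelihoods
    # weighted median of M/L: take the 50th percentile of the
    # likelihood-weighted distribution as the fiducial value
    order = np.argsort(ml_models)
    cum = np.cumsum(w[order]) / w.sum()
    return ml_models[order][np.searchsorted(cum, 0.5)]
```

Multiplying the returned mass-to-light ratio by the spaxel's $i$-band luminosity then gives the stellar-mass estimate for that line of sight.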
For DR16, $i$-band stellar mass-to-light ratio and $i$-band luminosity maps (both in Solar units) are released. Stellar mass-to-light ratios have been vetted against synthetic spectra, and found to be reliable at median signal-to-noise ratios between $S/N = 2$--$20$, across a wide range of dust attenuation conditions (optical depth in the range 0--4), and across the full range of realistic stellar metallicities ($-2$ dex to $+0.2$ dex) with respect to Solar (see \citealt{Pace2019a}, Section 4.10). Typical ``random'' uncertainties are approximately 0.1 dex (including age-metallicity degeneracies and uncertainties induced by imperfect spectrophotometry), and systematic uncertainties induced by the choice of training star formation histories could be as high as 0.3 dex, but are believed to be closer to 0.1--0.15 dex (see \citealt{Pace2019a}, Sections 4.10 \& 5).
In addition to resolved maps of stellar mass-to-light ratio and $i$-band luminosity, the VAC includes a catalog of total stellar masses for MaNGA DR16 galaxies. We provide the total mass inside the integral field unit (IFU; after interpolating over foreground stars and other unreliable measurements with the median of its 8 nearest neighbors: see \citealt{Pace2019b}, Section 4). We also supply two aperture corrections intended to account for mass falling outside the spatial grasp of the IFU: the first adopts the median stellar mass-to-light ratio of the outermost 0.5 effective radii, and the second (recommended) adopts a mass-to-light ratio consistent with the $(g - r)$ color of the NSA flux minus the flux in the IFU (see \citealt{Pace2019b}, Section 4). A comparison of these total masses with those from the NASA-Sloan Atlas (NSA; \citealt{blanton2011}) and MPA-JHU\footnote{Max Planck Institute for Astrophysics and the Johns Hopkins University} catalog \citep{Brinchmann2004} is shown in Figure \ref{fig:mangavac}.
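The second (recommended) aperture correction amounts to assigning the light that falls outside the IFU a mass-to-light ratio implied by its $(g-r)$ colour. A minimal sketch follows, using a placeholder linear colour--$M/L$ relation: the coefficients \texttt{a} and \texttt{b} below are hypothetical illustration values, not the published calibration.

```python
def aperture_corrected_mass(m_ifu, L_ifu, L_total, g_minus_r,
                            a=-0.6, b=1.15):
    """Sketch of a colour-based aperture correction for total stellar mass
    (illustrative only; a and b parameterize an ASSUMED linear
    log10(M/L) vs. (g-r) relation, not the published calibration).

    m_ifu     : stellar mass measured inside the IFU [Msun]
    L_ifu     : i-band luminosity inside the IFU [Lsun]
    L_total   : total i-band luminosity, e.g. from NSA photometry [Lsun]
    g_minus_r : g-r colour of the flux outside the IFU
    """
    # M/L of the missing light from the assumed colour--M/L relation
    ml_outer = 10.0 ** (a + b * g_minus_r)
    # never subtract mass if the IFU luminosity exceeds the total
    return m_ifu + ml_outer * max(L_total - L_ifu, 0.0)
```

The correction therefore only ever adds mass for the light missed by the IFU, with redder outskirts receiving a proportionally larger mass-to-light ratio.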
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{Figure6.png}
\caption{A comparison of MaNGA-PCA total stellar masses with NSA (blue points and dashed black line) and MPA-JHU (orange points and solid black line) stellar masses as a function of galaxy $g-r$ colour. The lines show a locally-weighted regression. This plot is reproduced from Figure 6 of \citet{Pace2019b}.}
\label{fig:mangavac}
\end{figure}
\subsection{PawlikMorph Catalog} \label{vac:pawlikmorph}
This catalog provides the shape asymmetry, alongside other standard galaxy morphology measurements (CAS, Gini, M20, curve-of-growth radii and S\'ersic fits), based on SDSS DR7 imaging \citep{DR7}, using the 8-connected structure detection algorithm described in \citet{Pawlik2016}\footnote{Available from \url{https://github.com/SEDMORPH/PawlikMorph}} to define the edges of the galaxies. We make this available for all galaxies in the MaNGA DR15 release \citep{2019ApJS..240...23A}. The algorithm is specifically designed to identify faint features in the outskirts of galaxies. In this version, stars are not masked prior to creating the 8-connected binary mask; therefore stars lying within the extended light of the galaxies cause incorrect measurements. More than 10\% of objects with unusual measurements have been visually inspected using {\tt Marvin} and SkyServer, and the {\tt WARNINGFLAG} is set to 1 for the fraction of these where a star or other problem is identified. Users should not use these flagged measurements, and additionally may wish to visually inspect small samples or outliers to ensure that the sample is appropriate for their science goals.
\section{Conclusions and Future Plans}
\label{sec:future}
This data release, which is the sixteenth overall from SDSS (DR16), is notable for containing the first release of data from Southern hemisphere observing as part of APOGEE-2S and the last release of large-scale cosmological redshift-survey data from SDSS (the main program of the eBOSS survey). DR16 contains no new data from the MaNGA survey.
SDSS-IV has one final year of operations remaining, and is planning one final public data release. That data release, which will be the seventeenth from SDSS overall (DR17), will comprise all remaining data taken by all surveys in SDSS-IV. What follows is a brief summary of the intended contents of DR17:
\begin{itemize}
\item Due to an accelerated pace of observing between February 2018 and 1 March 2019, eBOSS has finished observing, and so DR16 is the final data release for both the main eBOSS survey and TDSS. A number of catalogues of redshifts based on eBOSS DR16 spectra have been constructed; these will be released in the future. The successful launch of the eROSITA satellite \citep{Predehl_2014_eROSITA} means there will be a small number of additional SPIDERS plates for follow-up of eROSITA targets, the spectra from which will be released in DR17.
\item MaNGA has been observing in all remaining dark time from APO since 2nd March 2019, and is on schedule to meet, or slightly exceed, its intended goal of 10,000 galaxies. In addition, MaNGA has been approved time to observe a subset ($N\sim$400) of galaxies at an exposure time four times deeper than the typical survey.
\item APOGEE-2 continues to observe from both the Northern (APO) and Southern (LCO) hemispheres. DR16 is the first release of data from the Southern hemisphere; DR17 will be the final release of all APOGEE data from all phases of APOGEE. DR17 will have the complete multi-epoch samples spanning as long as ten years for some targets, as well as reaching both full depth and coverage in the disk, bulge, and halo programs, and completing large-scale programs to characterize photometric objects of interest in Kepler, K2, and TESS.
\end{itemize}
\subsection{SDSS-V}
Starting in 2020, after SDSS-IV has ended observations at APO and LCO, the next generation of SDSS will begin --- SDSS-V \citep{2017arXiv171103234K}\footnote{\url{https://www.sdss.org/future}}. SDSS-V is a multi-epoch spectroscopic survey that will observe nearly six million sources using the existing BOSS and APOGEE spectrographs, as well as very large swathes of the interstellar medium (ISM) in the Local Group using new optical spectrographs and a suite of small telescopes. SDSS-V will operate at both APO and LCO, providing the first all-sky ``panoptic'' spectroscopic view, and will span a wide variety of target types and science goals.
The scientific program is divided into three ``Mappers'':
\begin{itemize}
\item The {\it Milky Way Mapper} (MWM) is targeting millions of stars with the APOGEE and BOSS spectrographs, ranging from the immediate solar neighborhood to the far side of the Galactic disk and the MW's satellite companions. The MWM will probe the formation and evolution of the MW, the physics and life-cycles of its stars, and the architecture of multi-star and planetary systems.
\item The {\it Black Hole Mapper} (BHM) is targeting nearly half a million SMBHs and other X-ray sources (including newly discovered systems from the {\it eROSITA} mission) with the BOSS spectrograph in order to characterize the X-ray sky, measure black hole masses, and trace black hole growth across cosmic time.
\item Finally, the {\it Local Volume Mapper} (LVM) employs a wide-field optical IFU and new optical spectrographs (with $R \sim 4000$) to map $\sim$2500~deg$^2$ of sky, targeting the ISM and embedded stellar populations in the MW and satellite galaxies. These maps will reveal the physics of both star formation and the interplay between these stars and the surrounding ISM.
\end{itemize}
SDSS-V builds upon the operational infrastructure and data legacy of earlier SDSS programs, with the inclusion of several key new developments. Among these are the retirement of the SDSS plug-plate system and the introduction of robotic fiber positioners in the focal planes of both 2.5~m telescopes at APO and LCO. These focal plane systems (FPS) enable more efficient observing and larger target densities than achievable in previous SDSS surveys. In addition, the LVM is facilitated by the construction of several $\le$1~meter telescopes at one or both observatories, linked to several new optical spectrographs based on the DESI design \citep{Martini2018}. SDSS-V continues the SDSS legacy of open data policies and convenient, efficient public data access, with improved data distribution systems to serve its large, diverse, time-domain, multi-object and integral-field data set to the world.
After twenty years of Sloan Digital Sky Surveys, the data from SDSS-IV in DR16 are making significant contributions to our understanding of the components of our Galaxy, of galaxy evolution in general, and of the Universe as a whole. The SDSS-IV project will end with the next data release (DR17), but the future is bright for SDSS, with new technology and exciting new surveys coming in SDSS-V.
\section{Acknowledgements}
Funding for the Sloan Digital Sky Survey IV has been provided by
the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
Science, and the Participating Institutions. SDSS-IV acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrof\'isica de Canarias, The Johns Hopkins University,
Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, Korean Participation Group, Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
National Astronomical Observatories of China, New Mexico State University,
New York University, University of Notre Dame,
Observat\'ario Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth,
University of Utah, University of Virginia, University of Washington, University of Wisconsin,
Vanderbilt University, and Yale University.
Co-authorship on SDSS-IV data papers is alphabetical by last name and offered to all collaboration members who have contributed at least 1 month FTE towards any of the surveys during the period up to the end of data collection; and any external collaboration who has contributed at least 1 month FTE to work critical to the data release.
We would like to thank the Center for Cosmology and AstroParticle Physics (CCAPP) at the Ohio State University for their hospitality during ``DocuBrew'' 2019. This event, held in August 2019, was the main venue for documentation updates for DR16 (including this paper); it was organized by Ashley Ross, Jennifer Johnson and Anne-Marie Weijmans and attended by Rachael Beaton, Joel Brownstein, Brian Cherinka, Kyle Dawson, Sten Hasselquist, Amy Jones, Jade Ho, Karen Masters, Jordan Raddick, Jos\'e S\'anchez-Gallego, Felipe Santana, Michael Talbot (and remotely by Henrik J\"onsson, Julian Bautista, Jon Holtzman, Jennifer Sobeck, Catherine Grier, Johan Comparat, Scott Anderson, Rita Tojeiro, Britt Lundgren and Jesus Pando). Figures \ref{fig:apogeedr16} and \ref{fig:apogeenstars} were made by Christian Hayes. Figure \ref{fig:ebossnz} was made by Ashley Ross, and Figure \ref{fig:ebosssky} by M. Vivek and Julian Bautista.
This research made use of \textsc{astropy}, a community-developed core \textsc{python} ({\tt http://www.python.org}) package for Astronomy \citep{2013A&A...558A..33A}; \textsc{ipython} \citep{PER-GRA:2007}; \textsc{matplotlib} \citep{Hunter:2007}; \textsc{numpy} \citep{:/content/aip/journal/cise/13/2/10.1109/MCSE.2011.37}; \textsc{scipy} \citep{citescipy}; and \textsc{topcat} \citep{2005ASPC..347...29T}.
\section{Introduction}\label{sec:Intro}
\begin{table*}
\centering
\caption{\label{tab:Xagn:census}Subset of existing samples of X-ray selected AGN with spectroscopic redshift.
The area covered is given in square degrees.
The X-ray band signifies whether the sample was built using soft X-rays, hard X-rays or both.
FX$_{lim}$ gives the range in which the flux limits of the samples are located.
These values are order-of-magnitude only; please refer to the articles for exact values.
References are
M05 \citet{Murray2005},
S09 \citet{Salvato2009},
B10 \citet{Brusa2010},
S11 \citet{Salvato11},
F12 \citet{Fotopoulou2012},
K12 \citet{Kochanek2012},
H14 \citet{Hsu14},
N15 \citet{Nandra2015},
Ma16 \citet{Marchesi2016},
Me16 \citet{Menzel16},
X16 \citet{Xue2016},
A17 \citet{Ananna17},
G17 \citet{Georgakakis17},
L17 \citet{Luo2017},
H18 \citet{Hasinger2018},
L19 \citet{Lamassa2019}.
}
\begin{tabular}{c rr ccc}
\hline \hline
name & N & area & \multicolumn{2}{c}{X-ray} & references \\
& & deg$^2$ & band & FX$_{lim}$ & & \\
\hline
SPIDERS & 10,849 & 5128.9 & soft & [-12.5,-12] & this paper \\
XMM-XXL-N & 2,578 & 18.0 & both & [-15,-14] & B10, G17, Me16\\
Stripe 82X & 1,886 & 31.3 & both & [-15,-14] & A17, L19 \\
X-Bootes & 2,424 & 7.7 & both & [-15,-14] & M05,K12 \\
COSMOS & 2,169 & 2.2 & both & [-16,-15] & B10, S09, S11, Ma16, H18\\
AEGIS X & 354 & 0.3 & both & [-17,-16] & N15 \\
CDFS & 653 & 0.2 & both & [-17,-16] & H14, L17 \\
CDFN & 351 & 0.2 & both & [-16.5,-15.5] & X16 \\
LH & 115 & 0.2 & both & [-16,-15] & F12 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{\label{tab:catalogue:location}
Catalogues of spectra presented below, and where to find them.
The unique combination of the values in the columns \texttt{PLATE\_BEST}, \texttt{MJD\_BEST}, \texttt{FIBERID\_BEST} allows users to retrieve the corresponding spectra via the SDSS search interface.
Users can upload a list of identifiers to retrieve the corresponding set of spectra.
}
\begin{tabular}{c c}
\hline
\multicolumn{2}{c}{Official SDSS-DR16 Value Added Catalogues} \\
2RXS & \object{\url{https://data.sdss.org/sas/dr16/eboss/spiders/analysis/VAC_SPIDERS_2RXS_DR16.fits}} \\
XMMSL2 & \object{\url{https://data.sdss.org/sas/dr16/eboss/spiders/analysis/VAC_SPIDERS_XMMSL2_DR16.fits}} \\
\hline
\multicolumn{2}{c}{Official data model, description of the columns} \\
2RXS & \url{https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS/VAC_spiders_2RXS_DR16.html} \\
XMMSL2 & \url{https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS/VAC_spiders_XMMSL2_DR16.html} \\
\hline
\multicolumn{2}{c}{SDSS DR16 optical spectra} \\
\multicolumn{2}{c}{\texttt{PLATE\_BEST}, \texttt{MJD\_BEST}, \texttt{FIBERID\_BEST} at \url{https://dr16.sdss.org/optical/spectrum/search}} \\
\hline
\multicolumn{2}{c}{SPIDERS project web page} \\
\multicolumn{2}{c}{\url{http://www.mpe.mpg.de/XraySurveys/SPIDERS/}} \\
\hline
\end{tabular}
\end{table*}
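The retrieval route via {\tt PLATE\_BEST}, {\tt MJD\_BEST} and {\tt FIBERID\_BEST} described in the table caption above amounts to building one identifier per spectrum. A minimal sketch is given below; the zero-padding convention (4-digit plate, 4-digit fiber) follows standard SDSS spec-file naming and is an assumption about the upload format.

```python
# Build "plate-mjd-fiberid" identifier strings for the SDSS optical
# spectrum search page. Column names follow the data model quoted above;
# the zero-padding convention is an assumption based on standard SDSS
# spec-file naming.
rows = [
    {"PLATE_BEST": 7235, "MJD_BEST": 56603, "FIBERID_BEST": 13},
    {"PLATE_BEST": 424, "MJD_BEST": 51893, "FIBERID_BEST": 7},
]

identifiers = [
    f"{r['PLATE_BEST']:04d}-{r['MJD_BEST']}-{r['FIBERID_BEST']:04d}"
    for r in rows
]
```

The resulting list (one identifier per line) could then be uploaded to the search interface given in the table.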
Since the advent of powerful focusing X-ray telescopes, it has become clear that high-energy emission provides an insightful view of the extragalactic sky. Accreting supermassive black holes dominate the number of detected X-ray sources down to the limiting fluxes of the deepest pencil-beam surveys today; clusters of galaxies, on the other hand, also shine brightly in X-rays due to the presence of hot plasma reaching temperatures of millions of degrees in their potential wells.
X-ray surveys can, therefore, be used to provide some of the most stringent constraints on the cosmological evolution of super massive black holes \citep[see e.g.][]{Hickox2017} and of the large-scale structure itself \citep[see e.g.][]{weinberg_2013_review}.
However, optical spectroscopy is almost always needed in order to unambiguously classify X-ray sources and to measure their distances accurately.
Over the last decade, spectroscopic observations in the optical of X-ray selected active galactic nuclei (AGN) have increased in number by about two orders of magnitude, from hundreds to tens of thousands, when combining deep and medium-deep surveys with wide area surveys \citep{Murray2005,Salvato2009,Brusa2010,Salvato11,Fotopoulou2012,Kochanek2012,Hsu14,Nandra2015,Marchesi2016,Menzel16,Xue2016,Ananna17,Georgakakis17,Luo2017,Hasinger2018,Lamassa2019}.
A subset\footnote{\url{http://www.mpe.mpg.de/XraySurveys}}
of existing samples of X-ray selected AGN with spectroscopic redshift is detailed in Table \ref{tab:Xagn:census}.
In this article, we report on the spectroscopic redshift measurements of 10,849 sources out of 14,759 X-ray candidates over an area of 5,128.9 deg$^2$, obtained using the Sloan Digital Sky Survey (SDSS) telescope and spectrograph infrastructure \citep{Gunn2006,Smee13}; this constitutes the SPectroscopic IDentification of eROSITA Sources (SPIDERS) sample.
Compared to previous samples, SPIDERS covers a different parameter space in terms of area and depth and it
is also the largest X-ray point source spectroscopic catalogue to date.
The spectroscopic data are made public in the 16$^{\rm th}$ release of data from the SDSS \citep[DR16,][]{Ahumada2019DR16}\footnote{\url{https://www.sdss.org}}, together with two `value added catalogues', also part of DR16, for ROSAT and XMM-Slew sources, respectively.
Table \ref{tab:catalogue:location} gives the links to the catalogues and a description of each column.
The SDSS-IV single fibre optical spectroscopic programme is shared between the extended Baryon Oscillation Spectroscopic Survey (eBOSS, main programme), the SPectroscopic IDentification of eROSITA Sources survey (SPIDERS, sub-programme), and the Time-Domain Spectroscopic Survey (TDSS, sub-programme), which share the focal plane during observations.
The complete SPIDERS survey programme provides homogeneous optical spectroscopic observations of X-ray sources, both point-like and extended, paving the way towards systematic spectroscopic observations of eROSITA detections over a large portion of the sky \citep{Merloni12, Predehl16, Kollmeier17, Merloni2019}.
The programme started well before the beginning of SRG/eROSITA operations, by completing the spectroscopic observation of the currently existing wide-area X-ray surveys.
In particular, SPIDERS targeted sources from the ROSAT All-Sky Survey, XMM Slew sources, and XMM-XCLASS catalogues \citep{Voges1999, Voges2000, Saxton08A,Clerc12} within the SDSS-IV footprint \citep{Dawson16,blanton17}.
Clusters of galaxies were selected by cross-correlating faint ROSAT and XCLASS extended sources with red-galaxy excess found in SDSS imaging in the range $0.1<z<0.6$ (\citealt{Clerc16,Finoguenov2019}).
These are the most massive and largest clusters in the X-ray sky, representing a well-defined sample that can be used as a first stepping stone for cluster cosmology experiments via a measurement of the growth of structure (Ider Chitham J. et al. submitted).
Two companion papers (Clerc N. et al. in preparation, Kirkpatrick C. et al. in preparation) describe the observation of clusters in SPIDERS.
Active galactic nuclei were selected by cross-correlating ROSAT and XMM Slew catalogues with optical and near infra-red data \citep{Dwelly17,Salvato18a}.
In this paper, we describe the results of the observation of point-like sources.
More specifically, we detail the case of the active galactic nuclei detected by ROSAT.
The structure of the paper is as follows.
We explain the data and the procedure used to construct the catalogue in Sec.~\ref{sec:Data}.
We describe the redshifts measured in Sec.~\ref{sec:z:measurements}.
We discuss the specific case of stars in Sec.~\ref{sec:Star}.
Finally, we show stacked spectra of several flavours of type 1 AGN in Sec.~\ref{sec:SpectralAnalysis}.
Throughout the paper, we assume the flat $\Lambda$CDM cosmology from \citet{Planck14}.
Magnitudes are given in the AB system \citep{Oke1983}.
\section{Data}\label{sec:Data}
The original SPIDERS targeting, as documented in \citet{Dwelly17}, was based on earlier versions of the X-ray catalogues than the ones that were used to build the SPIDERS-DR16 catalogues, as the X-ray-optical cataloguing methods have evolved and improved since the time of target selection.
Here we first (in section \ref{subsec:TS:summary}) summarise the original target selection for the SPIDERS-AGN samples (based on 1RXS and XMMSL1 catalogues) and the observational completeness of these samples by the end of the SDSS-IV/eBOSS survey.
Then in section \ref{subsec:catalog:2rxs}, we describe in detail the steps that were carried out to build the catalogues released here based on updated X-ray catalogues (2RXS, XMMSL2).
These sections are very technical in nature.
\subsection{Target selection summary}
\label{subsec:TS:summary}
\citet{Dwelly17} documents how the target selection was carried out on the ROSAT (1RXS) and XMM Slew v1.6 (XMMSL1) catalogues \citep{Voges1999, Voges2000, Saxton08A}.
The area considered for target selection was the subset of the SDSS DR13 photometry footprint \citep{Fukugita96,SDSS_DR13_2017ApJS23325A} that was considered suitable for extragalactic survey work by the BOSS team\footnote{\url{http://data.sdss3.org/sas/dr9/boss/lss/boss_survey.fits}}.
It consists of $\sim$10,800 deg$^2$ of extra-galactic sky and contains 32,408 1RXS + 4,325 XMMSL1 X-ray sources.
For 28,515 (1RXS) and 3,142 (XMMSL1) of these X-ray sources, a counterpart was found in the AllWISE catalogue, together with an SDSS-DR13 photometric counterpart \citep[AllWISE,][]{wright10,Cutri2013}.
11,643 (1RXS) and 1,411 (XMMSL1) of these optical counterparts had previously been spectroscopically observed in earlier phases of the SDSS project.
Out of the 16,872 (1RXS) + 1,731 (XMMSL1) potential targets remaining, 9,028 (1RXS) + 873 (XMMSL1) passed suitability filters and were put forward for spectroscopic observation within the main SDSS-IV/eBOSS programme.
For more details on the procedure to select the targets, please refer to \citet{Dwelly17}, particularly their Figs. 8 and 13.
The target catalogues are available here\footnote{\url{https://sas.sdss.org/sas/dr14/eboss/spiders/target/}}.
The sky area observed by the combination of the SDSS-IV/eBOSS main spectroscopic programme, plus the SDSS-III/SEQUELS pilot area, covers approximately half of the wider 10,800 deg$^2$ BOSS imaging footprint considered for the SPIDERS-AGN target selection \citep{Dawson16}. For the purposes of this paper, we define the following `SPIDERS-DR16' footprint. First we consider the sky area covered by the union of 1006 SDSS-IV/eBOSS and SDSS-III/SEQUELS plates (each plate covers a 1.49\,deg radius circle). In order to maximise the contiguity of the footprint, we included 15 plates that do not meet the nominal eBOSS minimum signal-to-noise ratio (S/N).
We then reject any sky areas that lie outside the BOSS imaging footprint or those that are overlapped by any plates that were planned but not observed by the conclusion of SDSS-IV/eBOSS (217.8\,deg$^2$ is rejected).
The total remaining unique sky area in the SPIDERS-DR16 footprint is 5128.9\,deg$^2$.
Figure \ref{fig:mask:ra:dec:dr16} illustrates the SPIDERS DR16 footprints.
Within the SPIDERS-DR16 area, there are 4,713 (1RXS) + 457 (XMMSL1) potential targets available.
We note that during the SDSS-IV observations, the focal plane was shared between three programmes: eBOSS, TDSS, and SPIDERS \citep{Dawson16,blanton17} and so there was competition for fibre resources.
A total of 4,406 (1RXS, 93\%) + 430 (XMMSL1, 94\%) of the targets were eventually observed during the SDSS-III/SEQUELS and SDSS-IV/eBOSS campaigns.
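The footprint area bookkeeping above (a union of 1.49\,deg radius plate circles, minus rejected regions) can be cross-checked with a simple Monte-Carlo estimate. The sketch below uses two hypothetical plate centres; the released footprint was of course built from the actual 1006 plates and the BOSS imaging boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
FULL_SKY_DEG2 = 4 * 180.0**2 / np.pi   # ~41253 deg^2
PLATE_RADIUS = np.radians(1.49)        # each plate covers a 1.49 deg radius circle

def radec_to_xyz(ra, dec):
    """Unit vectors from RA/Dec given in radians."""
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def union_area_deg2(plate_ra_deg, plate_dec_deg, n_samples=400_000):
    """Monte-Carlo estimate of the sky area covered by the union of plates."""
    # Sample points uniformly over the sphere.
    ra = rng.uniform(0, 2 * np.pi, n_samples)
    dec = np.arcsin(rng.uniform(-1, 1, n_samples))
    pts = radec_to_xyz(ra, dec)
    plates = radec_to_xyz(np.radians(plate_ra_deg), np.radians(plate_dec_deg))
    # A point is covered if it lies within 1.49 deg of any plate centre.
    cos_sep = pts @ plates.T
    covered = (cos_sep >= np.cos(PLATE_RADIUS)).any(axis=1)
    return FULL_SKY_DEG2 * covered.mean()

# Two well-separated toy plates: expected union area ~ 14 deg^2.
area = union_area_deg2(np.array([0.0, 90.0]), np.array([0.0, 0.0]))
```

A production mask would additionally subtract the rejected regions (e.g. the 217.8\,deg$^2$ outside the imaging footprint); the Monte-Carlo error shrinks as the inverse square root of the number of covered samples.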
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/SPIDERS_AGN_DR16_footprint.png}\\
\caption{Illustration of the SPIDERS-DR16 footprint (blue, 5,129 deg$^2$) considered in this analysis, shown with an Equatorial Hammer-Aitoff projection. We also show the BOSS imaging footprint (black line, 10,800 deg$^2$), the union of all SEQUELS+eBOSS plates (red, 5,347 deg$^2$), and the Galactic Plane (grey dotted line).}
\label{fig:mask:ra:dec:dr16}
\end{figure}
\subsection{The SPIDERS 2RXS sample}
\label{subsec:catalog:2rxs}
The DR16 SPIDERS 2RXS catalogue is constructed as follows.
We consider the updated ROSAT point-source catalogue \citep[2RXS][]{Boller16} and its counterparts found via the \textsc{NWAY} software \citep{Salvato18a}. This parent catalogue does not correspond exactly to the parent catalogue used (1RXS) at the moment of targeting by \citet{Dwelly17}.
At the bright end (higher detection likelihoods), the catalogues are the same; at the faint end (lower detection likelihoods), there are differences.
For a quantitative comparison between 1RXS and 2RXS, please refer to \citet{Boller16}.
The 2RXS catalogue contains 132,254 sources over the entire sky, of which 21,288 lie in the SPIDERS-DR16 footprint.
We filter the complete source list with the eBOSS footprint mask (and with a galactic latitude cut $|g_{lat}|>15^\circ$).
We match AllWISE positions (column names in the SPIDERS catalogue: \texttt{ALLW\_RA}, \texttt{ALLW\_DEC}) to the SDSS-DR13 photometric catalogues, choosing the brightest counterpart (in \texttt{modelMag\_r}) lying within a 3 arc second radius (larger than the 1.5 arc second radius used for targeting).
In the catalogue, we select only the most likely counterpart detected in SDSS photometry as follows:
\begin{equation}
\label{eqn:A}
A = {\tt ( NWAY\_match\_flag ==1) \; \& \; (FLAG\_SDSSv5b\_best==1) }
.\end{equation}
After this, only one catalogue entry per X-ray source remains; we note, however, that in some rare cases, this is the incorrect counterpart (for example, if the uncertainty on the X-ray position is underestimated, we may miss the true counterpart if it is located beyond the search radius).
We discuss these few cases later in the article.
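The counterpart choice described above (brightest {\tt modelMag\_r} source within a 3 arc second radius) can be sketched as follows. The coordinates are toy values; in the real catalogue the positions come from {\tt ALLW\_RA}, {\tt ALLW\_DEC} and the SDSS photometric columns.

```python
import numpy as np

MATCH_RADIUS_ARCSEC = 3.0  # radius used when building the DR16 catalogue

def angsep_arcsec(ra1, dec1, ra2, dec2):
    """Approximate angular separation in arcsec (flat-sky, fine at ~3")."""
    dra = (ra1 - ra2) * np.cos(np.radians(dec1))
    ddec = dec1 - dec2
    return np.hypot(dra, ddec) * 3600.0

def brightest_within_radius(allw_ra, allw_dec, sdss_ra, sdss_dec, modelmag_r):
    """Return the index of the brightest (smallest modelMag_r) SDSS source
    within MATCH_RADIUS_ARCSEC of the AllWISE position, or -1 if none."""
    sep = angsep_arcsec(allw_ra, allw_dec, sdss_ra, sdss_dec)
    inside = np.where(sep <= MATCH_RADIUS_ARCSEC)[0]
    if inside.size == 0:
        return -1
    return int(inside[np.argmin(modelmag_r[inside])])

# Toy example: sources 0 and 1 lie within 3"; source 0 is brighter and wins.
idx = brightest_within_radius(
    150.0, 2.0,
    np.array([150.0005, 150.0, 150.01]),
    np.array([2.0, 2.0002, 2.0]),
    np.array([18.0, 19.0, 15.0]),
)
```

Note that the brightest source outside the radius (mag 15 here) is never considered, which mirrors the failure mode mentioned above: if the X-ray position error is underestimated, the true counterpart may fall beyond the search radius.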
In the SPIDERS-DR16 footprint, we obtain 19,821 (10,039) X-ray sources with existence likelihoods greater than 6.5 (10).\footnote{As discussed in \cite{Boller16}, above the lower existence likelihood threshold a significant fraction (up to 30\%) of spurious sources is expected, while only about 7\% spurious sources are expected above an existence likelihood of 10.}
We refer to these as `All' the sources of interest (labelled `A' in Figures and Tables, Eq. \ref{eqn:A}).
Among `A', 13,986 (6,853) are in the magnitude range to be observed by the SDSS-IV programme.
We refer to these as candidate `targets' for spectroscopic observation with SDSS (`T', Eq. \ref{eqn:T}).
\begin{align}
\label{eqn:T}
T & = {\tt (SDSS\_FIBER2MAG\_i>=17) } \; \& \; \nonumber \\
& {\tt (SDSS\_FIBER2MAG\_i<=22.5) } \; \& \; \nonumber \\
& {\tt (SDSS\_MODELMAG\_i>=16). }
\end{align}
Then the SDSS spectroscopic information is added based on the optical position, using a 1.5 arc second matching radius between the optical source position ({\tt SDSS\_RA, SDSS\_DEC}) and the fibre position on the sky ({\tt PLUG\_RA, PLUG\_DEC}).
Among `T', 10,590 (6,145) were spectroscopically observed during one of the SDSS editions (for these, in the catalogue, the `DR16\_MEMBER' flag is set to True).
We refer to these as `observed' (`O', Eq. \ref{eqn:O});
\begin{equation}
\label{eqn:O}
O = (T) \; \& \; {\tt (DR16\_MEMBER==1) }
.\end{equation}
Among `O', 10,474 (6,096) were identified or classified.
We refer to these as `identified sources' (`I', Eq. \ref{eqn:I}).
\begin{equation}
\label{eqn:I}
I = (c_2) | (c_3) | (c_4) | (c_5) | (c_6)
,\end{equation}
where
\begin{align}
c_1 & = (O) \; \& \; (\; {\tt (Z\_BEST>-0.5) }\; | \nonumber \\
& {\tt (\; (DR16\_Z>-0.5) \; \& \; (DR16\_Z\_ERR>0) \; ) \; ); }
\end{align}
\begin{equation}
c_2 = (c_1) \; \& \; {\tt (CONF\_BEST==3) }
;\end{equation}
\begin{align}
c_3 & = (c_1) \; \& \; {\tt (CONF\_BEST==2)} \; \& \; \nonumber \\
& {\tt (\; (CLASS\_BEST==``BLAZAR'')}\; | \nonumber \\
& {\tt (CLASS\_BEST==``BLLAC'')}\; );
\end{align}
\begin{align}
c_4 & = (c_1) \; \& \; {\tt (DR16\_SN\_MEDIAN\_ALL>=2)} \; \& \; \nonumber \\
& {\tt (DR16\_ZWARNING==0 ); }
\end{align}
\begin{align}
c_5 & = (c_1) \; \& \; {\tt(CONF\_BEST==2) \; \& \; (DR16\_ZWARNING==0 )} \; \& \; \nonumber \\
& {\tt(VI\_REINSPECT\_FLAG == 0) \; \& \; (VI\_NINSPECTORS>2); }
\end{align}
\begin{align}
c_6 & = (c_1) \; \& \; {\tt(CONF\_BEST==2) \; \& \; (DR16\_ZWARNING==0 ) }\; \& \; \nonumber \\
& {\tt(VI\_AM\_RECONCILED==1). }
\end{align}
Among `I', we measured 10,366 (6,007) reliable redshifts, confirmed by visual inspection.
We refer to these as `good redshifts' (`Z', Eq. \ref{eqn:Z}).
The difference between I and Z consists of a set of 108 (89) featureless, high signal-to-noise blazar spectra whose redshift could not be determined (classification `blazar\_noZ' below, Eq. \ref{eqn:BL});
\begin{align}
\label{eqn:BL}
{\tt blazar\_noZ } & = (I) \; \& \; {\tt (CONF\_BEST<3) }\; \& \; \nonumber \\
& {\tt (\; (CLASS\_BEST==``BLAZAR'')\; |} \nonumber \\
& {\tt (CLASS\_BEST==``BLLAC'')\; );}
\end{align}
\begin{equation}
\label{eqn:Z}
Z = (I) \; \& \; {\tt(blazar\_noZ == False). }
\end{equation}
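The chain of criteria in Eqs.~\ref{eqn:I}--\ref{eqn:Z} amounts to combining boolean columns. A minimal sketch with toy flags is given below; the visual-inspection conditions $c_5$ and $c_6$ are collapsed into a single toy flag, so this illustrates the logic rather than reproducing the released selection exactly.

```python
import numpy as np

# Toy boolean columns standing in for the catalogue flags; real values
# would come from the columns named in the equations above. The
# visual-inspection terms of c_5 and c_6 are collapsed into one toy
# flag "vi_ok" for brevity.
cat = {
    "O":            np.array([1, 1, 1, 1, 0], bool),  # observed
    "Z_BEST_ok":    np.array([1, 1, 0, 1, 1], bool),  # Z_BEST > -0.5
    "DR16_Z_ok":    np.array([0, 1, 1, 0, 1], bool),  # DR16_Z > -0.5 and err > 0
    "CONF_BEST3":   np.array([1, 0, 0, 0, 1], bool),  # CONF_BEST == 3
    "CONF_BEST2":   np.array([0, 1, 1, 1, 0], bool),  # CONF_BEST == 2
    "is_blazar":    np.array([0, 0, 0, 1, 0], bool),  # CLASS_BEST in (BLAZAR, BLLAC)
    "sn_ok_zwarn0": np.array([0, 1, 0, 0, 1], bool),  # SN_MEDIAN_ALL >= 2 and ZWARNING == 0
    "vi_ok":        np.array([0, 0, 1, 0, 0], bool),  # stand-in for the c_5 | c_6 inspection terms
}

c1 = cat["O"] & (cat["Z_BEST_ok"] | cat["DR16_Z_ok"])
c2 = c1 & cat["CONF_BEST3"]
c3 = c1 & cat["CONF_BEST2"] & cat["is_blazar"]
c4 = c1 & cat["sn_ok_zwarn0"]
c56 = c1 & cat["CONF_BEST2"] & cat["vi_ok"]

I = c2 | c3 | c4 | c56                                   # identified sources
blazar_noZ = I & ~cat["CONF_BEST3"] & cat["is_blazar"]   # featureless blazars
Z = I & ~blazar_noZ                                      # good redshifts
```

In this toy example four sources are identified, one of which is a blazar without a measurable redshift, leaving three good redshifts.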
The existence likelihood, denoted {\tt exiML}, is the detection likelihood that was measured by \citet{Boller16} for the 2RXS sample ({\tt RXS\_ExiML}).
Table \ref{tab:summary:number:sources} gives the number of objects in each category A, T, O, I, {\tt blazar\_noZ}, Z for the two existence likelihood thresholds (6.5 and 10).
The redshifts are described in detail in the following section.
\begin{table}
\centering
\caption{
Number of sources in each class for the 2RXS and XMMSL2 catalogues.
`{\tt exiML}' refers to the existence likelihood threshold applied in the X-ray.
`Any' refers to all the sources in the catalogue. For a single X-ray source, several counterparts may be listed (not unique).
`A' refers to all sources matched to their potential best optical counterpart. Each X-ray source is listed only once (Eq. \ref{eqn:A}).
`T' refers to sources that are candidate targets for optical spectroscopic observation (Eq. \ref{eqn:T}).
`O' refers to observed sources (Eq. \ref{eqn:O}).
`I' refers to identified sources (Eq. \ref{eqn:I}).
`Blazar no Z' refers to sources identified as BLAZAR for which we could not measure the redshift (Eq. \ref{eqn:BL}).
`Z' refers to sources with good redshift measurements (Eq. \ref{eqn:Z}).
The last column gives the targets that are uniquely present in the XMMSL2 catalogue, i.e. not in the 2RXS catalogue.
}
\begin{tabular}{ l r r r r}
\hline \hline
& \multicolumn{2}{c}{2RXS} & \multicolumn{2}{c}{XMMSL2} \\
exiML & $>6.5$ & $>10$ & $>10$ & not 2RXS \\ \hline
Any (non-unique) & 21288 & & 3196 & \\
A. All unique & 19821 & 10039 & 2341 & 1475 \\
T. Targets & 13986 & 6853 & 1490 & 773 \\
O. Observed & 10590 & 6145 & 1219 & 502 \\
I. Identified & 10474 & 6096 & 1208 & 496 \\
{\tt blazar\_noZ} & 108 & 89 & 42 & 13 \\
Z. good Z & 10366 & 6007 & 1166 & 483 \\ \hline
\end{tabular}
\label{tab:summary:number:sources}
\end{table}
We investigate the distribution of the A, T, O, I, Z samples (with exiML$>6.5$) as a function of the X-ray flux ({\tt RXS\_SRC\_FLUX}) and optical $i$-band 2 arc-seconds fibre magnitude ({\tt SDSS\_FIBER2MAG\_i}), see Fig. \ref{fig:completeness:success:rate:fluxX}.
The X-ray flux is corrected for Galactic absorption assuming power-law emission, which is appropriate for AGN but not for stars or clusters.
The distribution of soft band X-ray flux for each sample is shown on the top panel.
Most of the sources have fluxes in the range $-13<\log_{10}(F_X\,[\mathrm{erg\,cm^{-2}\,s^{-1}}])<-11.5$; few are brighter.
The number of targets diminishes (with respect to all sources) towards high fluxes, see the curve labelled `T' (in orange).
This is due to the bright fibre-magnitude and model-magnitude cuts, i.e. the brightest X-ray sources are also bright in the optical.
The bottom panel of the figure clearly shows the impact of the optical cuts.
The panels showing the ratio between the observed sample and the targets as a function of X-ray flux or fibre magnitude demonstrate that the observed sample is biased with respect to the targets: the faintest and brightest objects are under-represented.
For the high existence likelihood sample (exiML$>10$), the effect is smaller but still present (third panel of Fig. \ref{fig:completeness:success:rate:fluxX}).
Although we have observed 6145/6853=89\% of the exiML$>10$ targets, there remain small biases as a function of fibre magnitude and X-ray flux at the bright end.
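The completeness curves discussed above are per-bin histogram ratios O/T. A minimal sketch with simulated fluxes follows; real values would come from the {\tt RXS\_SRC\_FLUX} column and the T and O sample masks.

```python
import numpy as np

# Per-bin completeness O/T as a function of log X-ray flux, as in the
# ratio panels. Fluxes here are simulated toy values.
rng = np.random.default_rng(1)
log_flux_T = rng.uniform(-13.0, -11.5, 1000)   # fluxes of all targets (toy)
is_observed = rng.random(1000) < 0.9           # ~90% of targets observed (toy)
log_flux_O = log_flux_T[is_observed]

bins = np.linspace(-13.0, -11.5, 7)
n_T, _ = np.histogram(log_flux_T, bins)
n_O, _ = np.histogram(log_flux_O, bins)

# Guard against empty bins when forming the ratio.
completeness = np.divide(n_O, n_T, out=np.zeros(len(n_T)), where=n_T > 0)
```

A flat completeness curve, as simulated here, is the signature of an observed sample that is a nearly random subsample of the targets; dips at the faint or bright ends reveal the selection biases discussed above.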
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{PLOTS/2RXS_ExiML6p5__xray_hist_dr16.png}
\includegraphics[width=0.8\columnwidth]{PLOTS/2RXS_ExiML6p5__ratio_xray_hist_dr16.png}
\includegraphics[width=0.8\columnwidth]{PLOTS/2RXS_ExiML10__ratio_xray_hist_dr16.png}
\includegraphics[width=0.8\columnwidth]{PLOTS/2RXS_ExiML6p5__fiber2magi_hist_dr16.png}
\includegraphics[width=0.8\columnwidth]{PLOTS/2RXS_ExiML6p5__ratio_fiber2magi_hist_dr16.png}
\caption{Histograms showing the 2RXS samples with exiML$>6.5$ defined in Table \ref{tab:summary:number:sources}: A, T, O, I, Z.
The histogram of X-ray flux shows how bright the targets are (top panel).
The second and third panels show the fraction of observed targets, identified objects, and good redshifts with respect to the target sample.
The second panel is for exiML$>6.5$ and the third panel for exiML$>10$.
They show that the exiML$>10$ Z sample is close to being a random sub-sample of the targeted sample, with a completeness slightly below 90\%.
The histogram of the $i$-band 2 arc-second fibre magnitude (fourth panel) shows the impact of the optical selection made on the counterparts found, which removes the bright objects.
Similarly to the second panel, we show in the fifth panel the ratios O/T, I/T, Z/T as a function of fibre magnitude.
This shows that identifying sources and determining their redshift is more difficult at the faint end.}
\label{fig:completeness:success:rate:fluxX}
\end{figure}
\subsection{The 2RXS catalogue over 10,800 deg$^2$}
\label{subsec:catalog:2rxs:fullAREA}
Over the complete BOSS extragalactic area (10,800 deg$^2$, i.e. 2.1 times the SPIDERS-DR16 area), the total number of targets (26,685) is about twice that present in the SPIDERS-DR16 area (13,986), see Table \ref{tab:summary:number:sources:full:area}.
Here, the fraction of observed targets is 63.1\% over 10,800 deg$^2$ instead of 75.4\% over SPIDERS-DR16, so the completeness is lower.
Furthermore, the observed targets were chosen following different targeting schemes (previous SDSS editions), so the observed sample is further from being a random sampling of the complete set of targets.
This complicates the statistical analysis; for example, extracting an unbiased redshift distribution becomes tedious.
This is the main reason we excluded this additional area from the catalogue and the analysis presented here.
Using the ZWARNING=0 criterion from the SDSS pipeline (visual inspections are not available for the complete area), we obtain an estimate of the total number of good redshifts, 16,128 (95.7\% of the observed), but cannot guarantee that all of them are correct, due to the lack of visual inspection.
To reach 97.8\% of good redshifts (as in the SPIDERS-DR16 footprint), further inspection of the spectra is required.
It would also enable the proper flagging of blazars, whose redshifts are difficult to fit.
\begin{table}
\centering
\caption{
Comparison of the number of 2RXS sources in each class in the SPIDERS-DR16 area and in the BOSS extragalactic area.
}
\begin{tabular}{ l r r r r}
\hline \hline
area & SPIDERS-DR16 & BOSS \\
deg$^2$ & 5,129 & 10,800 \\
\hline
A. All & 19,821 & 37,961 \\
T. Targets & 13,986 & 26,685 \\
O. Observed & 10,590 & 16,851 \\
Z. good Z & 10,366 & `16,128' \\ \hline
\end{tabular}
\label{tab:summary:number:sources:full:area}
\end{table}
\subsection{The SPIDERS-AGN XMMSL2 sample}
\label{subsec:catalog:xmmsl}
The DR16 SPIDERS XMMSL2 catalogue is constructed in a similar fashion to the 2RXS.
The existence likelihood, denoted {\tt exiML}, is the maximum of the detection likelihoods in the three bands in which the point sources were detected \citep{Saxton08A}\footnote{\url{https://www.cosmos.esa.int/web/xmm-newton/xmmsl2-ug}}: {\tt exiML} = max({\tt XMMSL2\_DET\_ML\_B6}, {\tt XMMSL2\_DET\_ML\_B7}, {\tt XMMSL2\_DET\_ML\_B8}).
The catalogue contains 3,196 unique X-ray sources in the eBOSS footprint.
A large fraction of them (866) is also present in the 2RXS catalogue, so that after removing common sources the catalogue contains 2,330 sources.
Applying the same procedure as for 2RXS, 3,196 (2,330) sources reduce to 2,341 (1,475 not in 2RXS) sources of interest (A), 1,490 (773) targets and 1,166 (483) good redshifts, see Table \ref{tab:summary:number:sources}.
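The {\tt exiML} definition above is a per-source maximum over the per-band detection likelihoods; a minimal sketch follows (representing a band without a detection as NaN is an assumption about the encoding).

```python
import numpy as np

# exiML for XMMSL2: per-source maximum of the per-band detection
# likelihoods. Column names follow the text; NaN marks a band with no
# detection (an assumption about the encoding here).
det_ml_b6 = np.array([12.0, np.nan, 8.0])   # XMMSL2_DET_ML_B6
det_ml_b7 = np.array([9.0, 15.0, np.nan])   # XMMSL2_DET_ML_B7
det_ml_b8 = np.array([np.nan, 11.0, 14.0])  # XMMSL2_DET_ML_B8

exi_ml = np.nanmax(np.vstack([det_ml_b6, det_ml_b7, det_ml_b8]), axis=0)
# → [12., 15., 14.]
```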
\subsection{Summary of observations}
By combining the observations of the 2RXS and the XMMSL2 samples, we accumulated 10,849 good redshifts out of 14,759 targets over the SPIDERS-DR16 area.
The fraction of observed targets is O/T$\sim$73.5\% and could increase to 90--95\% with another dedicated programme.
The fraction of identified targets among the observed is high: Z/O$=97.8\%$.
Given that the 2RXS catalogue covers the full sky, one could extend the match to spectroscopic observations over larger areas, but the completeness would then be much lower (O/T$\sim$30\%) and the observed redshifts may constitute a biased sample with respect to the complete sample.
In the next decade, the combination of eROSITA with SDSS-V, 4MOST and DESI should enable the construction of a large full-sky X-ray AGN catalogue, see the discussion in Sec. \ref{sec:Discussion}.
\section{Redshift Measurement and Classification}\label{sec:z:measurements}
Automated redshift fitting for AGN is a demanding task \citep{Paris2012, Paris2014, Paris2017, Paris2018, Busca2018arXiv180809955B}.
To increase confidence in the automatically obtained redshifts, we visually inspect the SPIDERS spectra.
The visual inspection procedure and the reconciliation of results between inspectors is detailed in \citet{Dwelly17}.
After inspection, we report the successful measure of redshifts for 97.8\% of the observed targets.
Please refer to \citet{Menzel16} for a specific and detailed discussion on the accuracy of spectroscopic redshifts for X-ray selected AGNs.
Overall, since the number of redshift failures is quite small, we cannot study this population statistically in depth.
Nevertheless, we see a hint that the magnitude (or fiber magnitude) distribution of undetermined redshifts is skewed towards the fainter magnitudes.
Indeed, obtaining a redshift is expected to be more difficult for fainter objects than for brighter ones.
\subsection{Classifications}
In addition to the redshift confidence flag ({\tt CONF\_BEST}), the visual inspection enables a classification into AGN types ({\tt CLASS\_BEST}).
However, because the SPIDERS catalogues have been assembled by combining various generations of SDSS observations and visual inspections, the final classification is heterogeneous.
For simplicity, we can group the observed objects with reliable redshifts ({\tt CONF\_BEST}==3) into the following broadly defined families: AGNs (type 1 and 2), stars, AGN in clusters and galaxies in clusters. These additional classification flags are made available here\footnote{\url{http://www.mpe.mpg.de/XraySurveys/SPIDERS/}}.
\begin{enumerate}
\item Stars are identified with {\tt CLASS\_BEST==``STAR''}.
\item Blazars are identified with {\tt CLASS\_BEST==``BLLAC''} or {\tt ``BLAZAR''}.
\item Type 1 AGN (or broad-line AGN, or un-obscured AGN of optical type 1), comprising {\tt CLASS\_BEST==``BALQSO'', ``QSO\_BAL'', ``QSO'', ``BLAGN''}.
\item Type 2 AGN (or narrow-line AGN, narrow-line AGN candidates, or obscured AGN of optical type 2), comprising {\tt CLASS\_BEST==``NLAGN'', ``GALAXY''}.
\item Possible cluster members. This class considers the possibility of ROSAT mistakenly identifying a source as point-like instead of extended, due to its poor PSF.
In this latter case, some or all of the X-ray flux may be due to a cluster of galaxies. In order to take that eventuality into account, galaxies or QSOs are counted as possible cluster members if their redshifts are within 0.01 and their positions within 1 arc minute of a redMaPPer cluster \citep{Rykoff2014} or a SPIDERS cluster \citep{Clerc16,Finoguenov2019}. These cannot be counted within the 2RXS or XMMSL2 X-ray flux-limited AGN sample, since some of the associated flux may come from the host cluster.
\end{enumerate}
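The redshift and angular-separation criterion used for possible cluster members (item 5) can be sketched as follows; this is an illustrative implementation (the function names are ours, not part of the released catalogue code), assuming coordinates in decimal degrees:

```python
import numpy as np

def angular_separation_arcmin(ra1, dec1, ra2, dec2):
    """Angular separation on the sky in arcminutes (haversine formula).
    All inputs are in decimal degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    d_ra, d_dec = ra2 - ra1, dec2 - dec1
    a = np.sin(d_dec / 2) ** 2 + np.cos(dec1) * np.cos(dec2) * np.sin(d_ra / 2) ** 2
    return np.degrees(2 * np.arcsin(np.sqrt(a))) * 60.0

def is_possible_cluster_member(z_src, ra_src, dec_src, z_cl, ra_cl, dec_cl):
    """Flag a GALAXY/QSO as a possible cluster member if its redshift is
    within 0.01 and its position within 1 arcmin of a known cluster."""
    sep = angular_separation_arcmin(ra_src, dec_src, ra_cl, dec_cl)
    return bool(abs(z_src - z_cl) < 0.01 and sep < 1.0)
```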
Among the good redshift class (`Z'), after visual inspection, we list (in parentheses, separated by a plus sign) the occurrences in the 2RXS+XMMSL2 catalogue in each family: type 1 AGN (8216+941), Type 2 AGN candidates (1331+119), possible clusters members (503+62) and stars (278+27), see Table \ref{tab:redshift:classes}.
We note that among the cluster member candidates, `GALAXY' refers here to spectra without any obvious signature of an AGN. The top panel of Fig.~\ref{fig:ext_vs_exiML} shows that these sources are usually either associated with a low X-ray source detection likelihood (in which case the source would just be a galaxy in the field), or among the brightest members of a galaxy cluster (large extension in X-ray images, e.g., bottom panel of Fig.~\ref{fig:ext_vs_exiML}). Most of the sources classified as `GALAXY/Cluster' have a low {\tt p\_any} value in NWAY \citep{Salvato18a}, indicating that the reliability of the association is also low.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/GALAXY_ext_eximl.png}
\includegraphics[width=0.9\columnwidth]{PLOTS/cluster1.png}
\caption{{\bf Top}: Distribution of all sources in the X-ray detection likelihood vs X-ray extension (in arc seconds) plane. All counterparts are shown in grey (label: CTPS to ROSAT/2RXS sources). Sources classified as `GALAXY' are coloured according to their {\tt p\_any} parameter \citep[see Sec. 3 of][for a definition of {\tt p\_any}]{Salvato18a}. The majority of the galaxies are either associated with a very low significance X-ray detection, and thus just galaxies in the field (bottom part), or with extended sources (upper right part), indicating that they could be passive galaxy members of clusters or local (low-redshift) extended galaxies. {\bf Bottom}: The central object in the figure is the counterpart associated with a 2RXS source, with a low {\tt p\_any} but a high extension in the X-ray images. In fact, the 2RXS source in this case was extended and the associated galaxy is the central galaxy of a cluster at redshift 0.145. This type of source populates the top right quadrant of the top panel of the figure.}
\label{fig:ext_vs_exiML}
\end{figure}
We note that each population samples the fiber magnitude, model magnitude, and redshift histograms in a different fashion (see Fig. \ref{fig:mag:fibermag:i:histograms:classbest}).
The stars sample the brighter end of the magnitude distribution, while the AGN exclusively populate the fainter end; indeed, at faint broad-band magnitudes, redshifts can only be determined thanks to strong emission lines.
The galaxies in clusters sample intermediate magnitudes.
\begin{table}
\centering
\caption{
Number of identified redshifts in each class: AGN (type 1, type 2, or blazar), potential cluster members, stars.
`exiML' refers to the existence likelihood threshold applied in the X-ray catalogue.
`Z' refers to sources with good redshift measurements.
}
\begin{tabular}{ l r r r r}
\hline \hline
& \multicolumn{2}{c}{2RXS} & XMMSL2\\
exiML & $>6.5$ & $>10$ & $>10$ \\
Z & 10366 & 6007 & 1166 \\ \hline
Type 1 AGN & 8216 & 4904 & 941 \\
Type 2 AGN & 1331 & 602 & 119 \\
BLAZAR AGN & 38 & 51 & 17 \\
Cluster & 503 & 362 & 62 \\
-(GAL/QSO) & (387/116) & (264/98) & (40/22) \\
STAR & 278 & 88 & 27 \\ \hline
\end{tabular}
\label{tab:redshift:classes}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/2RXS_ExiML6p5__SDSS_FIBER2MAG_i_ratioZ_dr16.png}
\caption{Fractional contribution of each object class (AGN, cluster members, stars) to the number of good spectroscopic redshifts obtained for 2RXS sources as a function of magnitude (SDSS {\tt fiber2mag\_i}).}
\label{fig:mag:fibermag:i:histograms:classbest}
\end{figure}
\subsubsection{AGN}
Among the AGN, the majority (8216/9622$\sim85\%$) show a spectrum with broad features \citep[emission line widths in excess of 200 km/s,][]{Bolton12}. We classify these as type 1 AGN.
A further 1331 are classified as type 2 AGN.
The remaining few are either blazars or broad absorption line QSOs.
The type 2 AGN category consists of heavily obscured AGN (or candidates).
Among the 1331, 602 (729) have a high (low) existence likelihood in the X-ray (i.e. above or below 10).
For the population with high existence likelihood, the expected spurious fraction is of the order of 7 per cent, that is, about 40 of the 602.
The spurious fraction should be higher (about 50\%) among the 729 with low existence likelihood.
More accurate X-ray observations, deeper optical data, and a detailed emission line analysis are needed to disentangle these cases.
We leave such analysis for future studies, and note that machine-learning algorithms using spectral features may be key in this process \citep[e.g.][]{Zhang2019}.
Following Sec. 5 of \citet{Coffey2019}, we compute the 2RXS (XMMSL2) X-ray luminosities in the bands 0.1-2.4 (0.2-12) keV.
The 2RXS (XMMSL2) flux is modelled with an absorbed power law, \texttt{mod pha*powerlaw}, with a slope of $\Gamma=2.4$ (1.7) and the $n_{\rm H}$ set to that of the Milky Way.
Figure \ref{fig:LX:Z} shows the X-ray luminosity vs. redshift for the 2RXS (XMMSL2) samples.
It compares them to a set of the deep pencil beam surveys referenced in Table \ref{tab:Xagn:census} (red points) and the upcoming eROSITA sample (purple) taken from the mock catalogue of \citet{Comparat19}.
The three data sets are very complementary in sampling the redshift luminosity plane.
The SPIDERS-DR16 sample will contribute to a more quantitative estimate of the evolution with redshift of the bright end of the X-ray AGN luminosity function \citep{Miyaji2000,Miyaji15,Aird15,Georgakakis17}.
Indeed, this sample has a comparable number of sources to all pencil beam surveys together.
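For reference, the luminosity computation can be sketched as below; this is a simplified stand-in for the procedure of Sec. 5 of \citet{Coffey2019}, with an illustrative flat-$\Lambda$CDM cosmology (the parameter values are our assumptions) and a simple power-law K-correction $(1+z)^{\Gamma-2}$:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance_cm(z, h0=67.77, om=0.307, n=10000):
    """Luminosity distance [cm] in a flat LCDM cosmology, via a
    midpoint-rule integration of the comoving distance.
    h0 and om are illustrative assumptions, not the paper's values."""
    edges = np.linspace(0.0, z, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    ez = np.sqrt(om * (1.0 + mid) ** 3 + (1.0 - om))
    d_c = (C_KM_S / h0) * np.sum(1.0 / ez) * (z / n)  # comoving distance [Mpc]
    return (1.0 + z) * d_c * 3.0857e24                # Mpc -> cm

def x_ray_luminosity(flux_cgs, z, gamma=2.4):
    """Rest-frame band luminosity [erg/s] for an observed band flux
    [erg/s/cm^2], with a power-law K-correction (1+z)**(gamma-2)."""
    d_l = luminosity_distance_cm(z)
    return 4.0 * np.pi * d_l ** 2 * flux_cgs * (1.0 + z) ** (gamma - 2.0)
```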
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{PLOTS/LX-z-merged-nogrid.png}
\caption{AGN X-ray luminosity vs. redshift for the 2RXS (blue) and XMMSL2 samples (yellow) and comparison with deep pencil beam surveys listed in Table \ref{tab:Xagn:census} (red crosses) and the prediction for the upcoming eROSITA sample (grey). The samples shown cover the plane in a complementary fashion. 2RXS (XMMSL2) X-ray luminosities are computed in the bands 0.1-2.4 (0.2-12) keV.}
\label{fig:LX:Z}
\end{figure*}
\subsubsection{Clusters members}
We find 503+62 sources in clusters; 374+36 (72\%) feature a galaxy spectrum ({\tt CLASS\_BEST==``GALAXY''}) and the remainder have typical AGN spectra.
Overall, the contamination by galaxies in clusters is small, of the order of 3\% (374/10368).
\subsubsection{Stars}
A complete section on stars is presented in Sec.~\ref{sec:Star}.
\subsection{Redshift distribution}
We find that the redshift distribution observed has the shape expected for an X-ray flux-limited sample with a broad optical magnitude range cut. In Fig. \ref{fig:NZ:dr16}
we show the observed redshift distribution per square degree in the 2RXS and XMMSL2 catalogues for each classification: AGN and cluster.
For XMMSL2, which has the brightest flux limit ($\log_{10}$ flux around $-12$), the number density per unit sky area increases with redshift and reaches its peak in the bin $0.1<z<0.2$.
For 2RXS, which has a fainter flux limit ($\log_{10}$ flux around $-12.5$), the peak in number density occurs in the bin $0.2<z<0.3$.
It compares favourably with predictions from an adaptation of the mock catalogue of \citet{Comparat19}.
To adapt the mock sample, we re-sample the X-ray fluxes and optical magnitudes to match the depths of the 2RXS catalogue and of the SDSS optical photometric survey.
There is a discrepancy at low redshift: a deficit of AGNs in the observed sample compared to the mock.
It is due to the bright magnitude and fiber magnitude cuts applied to the targeted sample; see Fig. \ref{fig:completeness:success:rate:fluxX}.
Indeed, these cuts remove a part of the low-redshift AGNs, but they are difficult to mock properly.
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{PLOTS/MD10_histogram_redshift_01_225_65.png}
\caption{Observed redshift distribution of the SPIDERS-DR16 2RXS and XMMSL2 samples.
The C19 shaded area shows the prediction based on the mock catalogues from \citet{Comparat19}.
}
\label{fig:NZ:dr16}
\end{figure*}
We complemented the SPIDERS-DR16 catalogue with a variety of multi-wavelength information: X-ray \citep[2RXS, XMMSL2,][]{Boller16, Saxton08A}, optical \citep[SDSS, Gaia,][]{SDSS_DR13_2017ApJS23325A, Gaia2018}, infrared \citep[AllWISE,][]{wright10}, and radio \citep[FIRST,][]{White1997}.
As in \cite{Salvato18a}, we plotted in Fig. \ref{fig:wise_X_gal} the W1 magnitude vs the X-ray flux of the sources, adopting the same dividing line that was suggested there to separate AGN and compact objects from stars.
Figure \ref{fig:sdss_col} shows the SDSS g-r vs. r-i colours for all our SPIDERS sources. The vast majority of AGN cluster around a blue locus (g-r$<0.5$ and r-i$<0.5$).
Sources classified as BLAZAR lie in the same blue locus. Some AGN are redder (obscured) and thus extend towards the top right corner of the plot.
The sequence of stars also appears clearly.
Galaxies in clusters are mostly red and QSO in clusters are mostly blue.
A consistent picture also emerges from the analysis of the WISE colour-colour diagram (W1-W2 vs. W2-W3) shown in Fig. \ref{fig:wise_col}.
The interplay of the different classes in color-magnitude space should open a new window to determine optimal priors to select counterparts \citep{Salvato18a}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/color-color-w1-FX-stilts.png}
\caption{W1 magnitude vs X-ray flux for 2RXS sources without (grey) and with (coloured) spectroscopy, as labelled. The dashed line, taken from \citet{Salvato18a}, defines the loci of AGN and compact objects (above) and stars (below). Note that here AGN contains both type 1 and type 2 (and candidate) objects.
}
\label{fig:wise_X_gal}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/color-color-gr-ri-stilts.png}
\caption{r-i vs. g-r colors (from SDSS MODEL MAG) for the counterparts to 2RXS sources, split by their spectroscopic classification, as labeled.}
\label{fig:sdss_col}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/color-color-w1w2-w2w3-stilts.png}
\caption{W1-W2 vs. W2-W3 colors (from WISE) for the counterparts to 2RXS sources, split by their spectroscopic classification, as labeled.}
\label{fig:wise_col}
\end{figure}
\section{X-ray Stars}\label{sec:Star}
Visual screening of all spectra obtained in the SPIDERS programme and of those obtained during earlier phases of the SDSS programme and associated with 2RXS and XMMSL2 sources led to a separation of stellar objects from the large body of extra-galactic objects. The 2RXS and XMMSL2 catalogues list 290 and 37 stellar objects with attribute {\tt CLASS\_BEST==`STAR'}, but 278 and 27 only, when the criteria described in Sect.~\ref{sec:Data} are applied. The number 27 is further reduced to 16, when duplications with the 2RXS catalogue are removed.
Obtaining a spectrum of an object classified as 'STAR' does not entail that the counterpart of the X-ray source has been identified; for this, a second X-ray identification screening step (XID) is needed.
While the initial screening was undertaken by several individuals and a compromise had to be found in case of deviating results (classification, redshift), the XID screening step was performed by just one of the authors (AS) with the potential risk of introducing some biases or errors, but the potential advantage of a more homogeneous way of classifying stars.
Screening for XID was done with the help of a few extra data products: (a) an optical finding chart based on a PanSTARRS \citep{flewelling+16} $g$-band image (with the location of the X-ray centroid, the X-ray positional uncertainty, and the target indicated), (b) an X-ray to optical colour-colour diagram ($\log(f_{\rm X}/f_{\rm opt})$ vs.~$g-r$), and (c) a long-term light curve obtained from the Catalina Real-Time Transient Survey \citep[CRTS,][]{drake+09}. For almost all targets, the `EXPLORE' feature of the SDSS SciServer was used to search for possible other counterparts and for entries in the SIMBAD or NED databases.
Based on the available information, a first decision was made as to whether the object could be confirmed as a star.
This first screening step was performed on the more general {\tt CLASS\_BEST==``STAR''} sample and led to a revision of a number of classifications, which are documented in Table \ref{table:xid}.
We corrected the incorrectly labelled {\tt CLASS\_BEST==``STAR''} sources in appendix Table \ref{table:classbest:correction}.
Then a second decision about the reliability of the target being the counterpart of the X-ray source was made.
An XID-flag was assigned to each spectrum indicating this kind of reliability, ranging from XID=1 to XID=3.
XID=1 means that the object is regarded as the optical counterpart with high confidence. XID=2 means that the object could be the counterpart or at least could contribute to the observed X-ray emission.
This often means that some typical ingredient or hallmark is missing, or that the object seems to be blended or shows other morphological complexities.
An XID=3 object is regarded as likely not being the counterpart of the X-ray source.
Table \ref{table:xid} in the Appendix contains the results and XID values for the objects classified as stars.
All stellar targets were sub-classified into three main classes: coronal emitters (including flare stars), white dwarfs (WD), and compact white-dwarf binaries, either in a detached or a semi-detached configuration. The latter are the cataclysmic binaries, where a white dwarf accretes matter from a main-sequence star via Roche-lobe overflow. The breakdown of stars flagged XID=1 into those three main sub-classes for the 2RXS and the XMMSL2 samples is given in Tab.~\ref{t:starsxid}.
In the star-related Tables, we use the following acronyms to classify the sources:
\begin{itemize}
\item CV: cataclysmic variable with unknown sub-category
\item CV/AM: cataclysmic variable of AM Herculis type
\item CV/DN: cataclysmic variable of dwarf nova type
\item WDMS: detached white dwarf/main sequence binary
\item LARP: low accretion rate polar
\item DB+M: a binary consisting of a white dwarf of spectral type DB and a companion star
\end{itemize}
\begin{table}
\caption{\label{t:starsxid} Breakdown of 2RXS and XMMSL2 objects with high confidence identifications (XID=1) into three main object classes}
\begin{center}
\begin{tabular}{lrrr}
\hline
SUBCLASS & 2RXS & XMMSL2 & not in 2RXS\\
\hline
Coronal emitters & 61 & 3 & 3\\
WDs & 6 & -- & --\\
Compact WD binaries & 35 & 16 & 5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{2RXS}
The distribution of stellar spectra over the three XID bins (1/2/3) is (102/77/99).
Among the 102 XID=1 sources from the 2RXS list, we find 67 single stars (coronal emitters and hot or sufficiently close white dwarfs) and 33 binaries with a compact object, most of them (29) being cataclysmic variables (CVs). Sample spectra of those typical X-ray emitters are displayed in Fig.~\ref{f:stars}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/montage2e.png}
\caption{\label{f:stars}
Sample spectra of XID=1 objects, a hot white dwarf, a flare
star, a non-magnetic cataclysmic variable (dwarf nova), and a strongly
magnetic cataclysmic variable (a polar or AM Herculis star).
The PLATE-MJD-FIBERID combination and the type of X-ray emitter are
indicated in the panels. All spectra were obtained in the current SDSS programme.}
\end{figure}
Interestingly, 75 of the 102 high-confidence (XID=1) counterparts have an NWAY ${\tt p\_any} < 0.5$, illustrating the fact that the Bayesian prior used in the X-ray to IR/optical association seems to disfavour true stellar X-ray emitters. For a stellar survey, a different prior is needed.
\noindent We list the reasons for an XID=2 classification instead of XID=1:
(1) the object appeared optically too faint for the given X-ray flux;
(2) an M-star did not show any obvious sign of activity, such as H$\alpha$ in emission or flares/flickering in the light curve;
(3) large X-ray positional errors cast doubt on the uniqueness of the identification, in particular if the object does not show strong signs of activity, which, together with an atypical optical faintness, casts doubt on the reliability of the X-ray to optical association;
(4) apparent binaries were found, so that the X-ray-WISE-SDSS association chain led to ambiguities (an unresolved double WISE counterpart to the X-ray source was associated with the wrong SDSS object);
(5) the contribution of the WISE-blended source could not be quantified.
An example of such an XID=2 classification is
J002317.1+191028 (7590-56944-674), an M-star showing H$\alpha$ in emission and displaying a variable light curve, hence qualifying as an X-ray emitter, although it was found with an uncomfortably large $f_{\rm X}/f_{\rm opt}$. We found a QSO, SDSS J002319.72+190958.2, at redshift $z=1.504$, at a similar distance from the X-ray position, which could contribute to the X-ray flux or even dominate it. This object was thus put in the XID=2 bin because both objects could contribute to the X-ray emission.
XID=3 sources were classified as such mainly for two reasons:
(a) the targeted object was, with high confidence, too faint to be compatible with a stellar coronal emitter, meaning that it had too high an X-ray flux or too faint an optical brightness to be compatible with the maximum $\log(L_{\rm X}/L_{\rm bol})$, which was assumed to be $\leq -3$;
(b) another, much more typical X-ray emitter was found, often even closer to the X-ray position (e.g. an A0 star, one of the least X-ray active types of star, was targeted (3454-55003-211), but the white dwarf SDSS J155108.25+454313.2 was found to lie closer to the X-ray position).
Indeed most of the discarded objects had QSOs, CVs or WDs as more likely counterparts.
These more likely counterparts already had spectra taken by previous editions of SDSS, so in the SPIDERS programme, they were targeted as possible secondary sources to investigate their hierarchy.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PLOTS/stars_ccd.png}
\caption{X-ray/optical colour-colour diagram highlighting the XID=1 objects on the background of all identified objects of the 2RXS sample. Hot white dwarfs, coronal emitters and close binaries are shown with blue, red and green symbols, respectively.}
\label{f:stars_ccd}
\end{figure}
A further two X-ray sources were associated with M-stars (spectra with PLATE-MJD-FIBERID 693-52254-0599 and 1046-52460-0078) but had unusually large X-ray positional uncertainties.
Inspecting the area around the M-stars revealed many galaxies with concordant redshifts, obvious clusters of galaxies with the BCG rather close to the targeted star.
While the spectrum taken was clearly that of a star, the X-ray source was likely not point-like.
While these two objects were most pronounced and for that reason discussed here separately, there are possibly more of this kind in the larger sample. As stated above, we give in Table \ref{table:classbest:correction} the re-classification of these objects as a correction of the officially published catalogue.
The distribution of the XID=1 objects in an X-ray/optical colour-colour diagram is shown in Fig.~\ref{f:stars_ccd}. The quantity plotted along the ordinate was computed as $\log(\texttt{RXS\_SRC\_FLUX}) + 0.4\times \texttt{SDSS\_MODELMAG\_i} + 5.61425$. The optical colour $g-r$ was built from the SDSS MODELMAG columns. The many objects in grey in the background are all identified objects in the catalogue (10404). The white dwarfs stick out as extreme blue objects with a high X-ray to optical flux ratio. Many of the single stars are likely coronal emitters in late-type stars and are found as red objects with $g-r \simeq 1.5$.
The compact binaries appear on top of the abundant AGN with a median $g-r\simeq 0.2$ and a median $\log (f_{\rm X} / f_{\rm opt}) \simeq 0.9$ but with a large dispersion in both quantities.
Among the compact white dwarf binaries that are not CVs, we find three objects that were previously classified as WDMS objects \citep[detached white dwarf/main sequence objects,][]{heller+09,rebassa+12} and one magnetic pre-cataclysmic binary \citep[a so-called LARP, low accretion rate polar,][]{schwope+02}. The origin of their X-ray emission needs to be addressed separately, as does the extreme X-ray emission of a few of the apparently normal stars around $g-r\sim 0.7$, $\log (f_{\rm X} / f_{\rm opt}) \sim 0.5$. Such a discussion, together with a more thorough presentation of the stellar content of the survey, is foreseen in a subsequent paper.
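The ordinate of the X-ray/optical colour-colour diagram in Fig.~\ref{f:stars_ccd} is straightforward to recompute from the catalogue columns; a minimal sketch (the function name is ours):

```python
import numpy as np

# Illustrative sketch of the ordinate of Fig. f:stars_ccd, as defined
# in the text:
#   log10(fX/fopt) = log10(RXS_SRC_FLUX) + 0.4 * SDSS_MODELMAG_i + 5.61425
def log_fx_over_fopt(rxs_src_flux, modelmag_i):
    """X-ray to optical flux ratio (log10) from the 2RXS source flux
    [erg/s/cm^2] and the SDSS i-band model magnitude."""
    return np.log10(rxs_src_flux) + 0.4 * modelmag_i + 5.61425
```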
\subsection{XMMSL2}
For the SPIDERS-XMMSL2 stellar sources, the emerging picture is slightly different. We find 19/2/6 objects in the XID=1/2/3 bins, a much higher fraction of XID=1 candidates than in 2RXS.
Among the 37 objects with {\tt CLASS\_BEST==``STAR''}, we re-classify two as blazars (still XID=1, although not stars; 4385-55752-614, 8172-57423-839), and one further object, following the arguments given above, as a likely cluster of galaxies, which thus becomes an XID=3 object.
In this case, XMMSL2\,J113224.0+555745 (8170-57131-926), the BCG of the cluster lies even closer to the X-ray position than the M-star whose spectrum was taken.
Other objects classified as XID=3 were F, K or M stars which appeared far too faint given the measured X-ray flux.
Among the XID=1 sources, we find 16 CVs and only three late-type coronal emitters (M5, M6). Interestingly, the majority (11 out of 19) of the XID=1 sources of SPIDERS-XMMSL2 have a likelihood of association ${\tt p\_any} > 0.5$. This confirms that having a reliable X-ray positional error is key to obtaining accurate counterparts. To resolve the ambiguities mentioned in this section, it would appear advisable to additionally visualise X-ray contours on the optical (or infrared) finding charts, instead of just using coordinates.
\section{AGN Spectral Properties}\label{sec:SpectralAnalysis}
A detailed discussion of the optical spectral properties of the SPIDERS sample is beyond the scope of this paper.
We refer the reader to \citet{Coffey2019,Wolf2019} for an exploration of the detailed properties of SPIDERS type 1 AGN with sufficient signal-to-noise ratio in individual spectra.
\citet{Wolf2019} investigated the markers of optical diversity of Type 1 AGN by deriving the principal components of optical and X-ray features for a sample of sources identified in SDSS-IV/SPIDERS and compiled by \citet{Coffey2019}.
Making use of the large redshift and luminosity ranges probed by the SPIDERS sample, they could confirm that the broad $\rm H\beta$ line shape significantly evolves along the main sequence of broad line AGN (for a review see \citealt{Marziani18}). \citet{Wolf2019} report that the scaling of the FeII and the continuum emission strengths strongly depends on the sign of the asymmetry of $\rm H\beta$. The effect is discussed in the light of Broad Line Region outflows.
Instead, we present here a description of the general features of the sample. A benefit of having a large number of spectra is the possibility of stacking similar objects to increase the signal-to-noise ratio per pixel and possibly unveil new features in the spectra \citep[e.g.][]{zhu2015}. In the following, we stack SPIDERS-DR16 spectra to create templates for generic usage, for example, exposure time calculation for spectroscopy, redshift fitting re-simulation, etc. The stacks are made available here\footnote{\url{http://www.mpe.mpg.de/XraySurveys/SPIDERS/}}.
On average, the signal-to-noise ratio per pixel grows with the number of spectra stacked together as follows:
\begin{equation}
\log_{10}(\mathrm{S/N\;per\;pixel}) = 0.45 \left( 1 + \log_{10}(N_{\rm spec\;per\;pixel}) \right)
.\end{equation}
The median signal-to-noise ratio per pixel in the observed spectra is $10^{0.45}=2.81$. By stacking 3000 (1000) spectra one reaches a signal-to-noise ratio on the order of 100 (60).
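This empirical scaling is easy to evaluate; the following small helper (the function name is ours) reproduces the numbers quoted above:

```python
import numpy as np

# Sketch of the empirical S/N scaling quoted in the text:
#   log10(S/N per pixel) = 0.45 * (1 + log10(N_spectra))
def stacked_snr(n_spectra):
    """Expected median S/N per pixel when stacking n_spectra spectra."""
    return 10.0 ** (0.45 * (1.0 + np.log10(n_spectra)))
```

For a single spectrum this gives the median observed S/N of $10^{0.45}\approx2.8$, and stacks of 1000 and 3000 spectra reach S/N of about 60 and 100, respectively.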
\subsection{Spectral stacking method}
\label{sec:stacking_method}
First, we translated each observed spectrum to its rest-frame $\lambda_{RF}=\lambda/(1+z)$.
Then we interpolated each spectrum and its uncertainties on a fixed wavelength grid, logarithmic in wavelength between 800\,\AA\ and 11,000\,\AA\ with $\Delta \log_{10}(\lambda)=0.0001$, using \textsc{spectres} \citep{Carnall2017}.
Finally, we took the median value of all fluxes in each pixel to obtain a stacked spectrum on this wavelength grid.
We estimated the uncertainty on the median flux with a jackknife procedure.
We note that to each spectrum, a normalisation (or a weight, e.g. a luminosity function completeness weight) can be applied, but this feature was not used here.
This stacking procedure was previously applied by \citet{zhu2015,Raichoor17,Huang2019,Zhang2019} to stack spectra of star-forming galaxies.
It is also used to stack the spectra of passive galaxies observed in the SPIDERS-CLUSTERS programme.
These are presented in Clerc et al. (in prep).
Since the redshift accuracy for AGN is lower than that for star-forming galaxies (with narrow lines), some information spanning the width of a few pixels is washed out in the stacks; the broad features remain. We chose redshift bins of width 0.2 (or 0.5) and slid the redshift window by 0.1 to obtain a consistent evolution between the stacks.
If more than 100 spectra were available in a bin, then we computed the stack.
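The three steps of the stacking procedure (rest-frame shift, common log-wavelength grid, median combination) can be sketched as follows; this is an illustrative re-implementation with plain linear interpolation in place of \textsc{spectres}, and without the jackknife error estimate:

```python
import numpy as np

# Fixed rest-frame grid: log10(lambda) from 800 A to 11,000 A in steps
# of Delta log10(lambda) = 1e-4, as described in the text.
LOG_GRID = np.arange(np.log10(800.0), np.log10(11000.0), 1e-4)

def to_rest_frame(wavelength, z):
    """Shift an observed wavelength array to the rest frame."""
    return wavelength / (1.0 + z)

def stack_spectra(spectra):
    """Median-stack a list of (wavelength [A], flux, z) tuples on the
    fixed rest-frame log10-wavelength grid.  Returns (grid, stack)."""
    grid = 10.0 ** LOG_GRID
    resampled = []
    for wave, flux, z in spectra:
        rest = to_rest_frame(np.asarray(wave), z)
        # Outside each spectrum's coverage, pad with NaN so the median
        # only uses pixels that were actually observed.
        resampled.append(np.interp(grid, rest, np.asarray(flux),
                                   left=np.nan, right=np.nan))
    return grid, np.nanmedian(np.array(resampled), axis=0)
```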
\begin{figure*}
\centering
\includegraphics[width=19cm]{PLOTS/AGNT1.png}
\caption{Spectral stacks as a function of redshift for objects classified as type 1 AGN. Vertical displacements between spectra are added for clarity.}
\label{fig:spectral:stacks:qso:0}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=19cm]{PLOTS/AGNT2.png}
\caption{Spectral stacks as a function of redshift for objects classified as type 2 AGN. Vertical displacements between spectra are added for clarity.}
\label{fig:spectral:stacks:type2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=19cm]{PLOTS/CLUAGN.png}
\includegraphics[width=19cm]{PLOTS/CLUGAL.png}
\caption{Spectral stacks as a function of redshift for objects classified as galaxies in clusters. The top panel shows the stack of active galaxies in clusters and the bottom panel shows the stack of passive galaxies in clusters. The bottom panel is accompanied by the stack of passive galaxies in clusters from the SPIDERS-CLUSTER observations, with their evolution as a function of cluster-centric radius (see Clerc et al., in preparation, for full details). Vertical displacements between spectra are added for clarity.}
\label{fig:spectral:stacks:clusterGalaxy}
\end{figure*}
\subsection{Type 1 AGN}
We selected type 1 AGN spectra in the 2RXS sample.
There are enough spectra for the stacks to cover up to redshift 2.5.
Figure \ref{fig:spectral:stacks:qso:0} shows the stacks obtained on a rest-frame wavelength axis in a $f_\lambda$ convention.
The stacks obtained are consistent with the findings of \citet{VandenBerk2001}.
We zoom in on the second and the last spectra to show the variety of features detected in Figs. \ref{fig:spectral:stacks:qso:zoom:a}, \ref{fig:spectral:stacks:qso:zoom:b}.
We compare it to the SDSS DR5 spectral templates of the QSO (DR5 29) and of the luminous QSO (DR5 32) \citep{AdelmanMcCarthy2007}. Emission line features are more marked (higher equivalent widths) in the SPIDERS templates.
\subsection{Type 2 AGN}
In SPIDERS-DR16, the sample of type 2 AGN is large enough, and the spectroscopic data homogeneous enough, that we can create stacks up to redshift 0.7.
Such stacks were previously lacking due to the smaller number of spectra or less homogeneous observations (exposure times, different instruments), which made the stacking procedure tedious.
Fig. \ref{fig:spectral:stacks:type2} shows the stacks obtained.
There, H$\alpha$ appears somewhat broad, meaning that the type 2 classification is not perfect.
\subsection{Galaxies in clusters}
Fig. \ref{fig:spectral:stacks:clusterGalaxy} shows the stacks of sources that are in the vicinity of optically detected clusters.
The stacks show that we can separate (on average) the two populations of active galaxies in clusters and passive galaxies in clusters.
The bottom panel is accompanied by the stack of passive galaxies in clusters from the SPIDERS-CLUSTER observations, with their evolution as a function of cluster-centric radius (see Clerc et al., in preparation, for full details).
The stack of galaxies in clusters `contaminating' the AGN sample looks exactly like stacks of passive galaxies found to be cluster members.
\subsection{Black hole mass and Eddington ratio}
The FWHM of $\rm H\beta$ frequently serves as a virial broadening estimator and is used to estimate black hole masses \citep[e.g.][]{Trakhtenbrot12, Mejia16}.
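Schematically, such virial estimates take the form
\begin{align}
M_{\rm BH} \simeq f\,\frac{R_{\rm BLR}\,{\rm FWHM}^2_{\rm H\beta}}{G},
\end{align}
where $f$ is a geometric virial factor and the broad line region radius $R_{\rm BLR}$ is typically inferred from a radius--luminosity relation; the exact calibration varies between the works cited.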
The flux ratio $\rm r_{FeII}=F(FeII)/F(H\beta)$ is known to correlate with the Eddington ratio \citep{Grupe99,Marziani01,Du16}.
These two parameters were initially among the main correlates of the original Eigenvector 1 (EV1), that is, the vector through optical and X-ray parameter space, which spans the most total variance \citep{Boroson92}.
The plane spanned by $\rm FWHM_{H\beta}$ and $\rm r_{FeII}$ is known as the EV1 plane.
The distribution of Type 1 AGN in this plane has been identified as the main sequence of broad line AGN \citep[e.g.][and references therein]{Marziani18} and has proven of great use in the characterisation of the optical diversity of these sources.
The stacking method described in this work can be applied in this context by using the binning of the Eigenvector 1 plane proposed by \citet{Sulentic02}. Both \citet{Sulentic02} and \citet{Zamfir10} computed median composite spectra to investigate the evolution of the broad $\rm H\beta$ line shape along the EV1 sequence.
The large number of sources available from the SPIDERS programme can be used similarly to uncover the dominating trends in the Balmer line diagnostics with increasing black hole mass and increasing Eddington ratio.
In order to demonstrate the high S/N achieved with our stacks, we made use of the DR16 update of the SDSS-IV/SPIDERS Type 1 AGN catalogue compiled by \citet{Coffey2019}.
$\rm FWHM_{H\beta}$ and $\rm r_{FeII}$ are listed as derived parameters in the catalogue from \citet{Coffey2019} and we identified sources in the following bins:
\begin{itemize}
\item A1: $\rm 0 \, km \, s^{-1}<FWHM_{H\beta} < 4000 \, km \, s^{-1}$ and $\rm 0 <r_{FeII}< 0.5$
\item B1: $ \rm 4000 \, km \, s^{-1} <FWHM_{H\beta} < 8000 \, km \, s^{-1}$ and $\rm 0<r_{FeII}<0.5$
\item B1+: $\rm 8000 \, km \, s^{-1}<FWHM_{H\beta} <12000 \, km \, s^{-1}$ and $\rm 0<r_{FeII}<0.5$
\end{itemize}
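A minimal helper implementing this selection could look as follows (assuming the FWHM bins are contiguous, i.e. that B1+ starts at $\rm 8000\,km\,s^{-1}$; the function name is ours):

```python
def ev1_bin(fwhm_hbeta, r_feii):
    """Assign a source to one of the EV1 bins defined above.

    fwhm_hbeta : FWHM of the broad H-beta line in km/s
    r_feii     : flux ratio F(FeII)/F(H-beta)
    Returns the bin label, or None if the source falls outside all bins.
    """
    if not (0.0 < r_feii < 0.5):
        return None                      # outside the r_FeII range used here
    if 0.0 < fwhm_hbeta < 4000.0:
        return "A1"
    if 4000.0 < fwhm_hbeta < 8000.0:
        return "B1"
    if 8000.0 < fwhm_hbeta < 12000.0:
        return "B1+"
    return None
```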
The spectra of these sources were stacked following the method described in section \ref{sec:stacking_method}. Figure \ref{fig:spectral:stacks:qso:EV1} zooms in on the $\rm H\beta$ line in these stacks. To guide the eye, we overplot the locations of emission lines \citep{VandenBerk2001}.
For increasing FWHM of $\rm H\beta$ one can clearly see the gradual appearance of a distinct very broad, slightly redshifted component in the stacked $\rm H\beta$, confirming the results by \citet{Sulentic02,Zamfir10}.
Finer bins in the EV1 plane or further key optical parameter planes will allow us to probe the physics and geometry of the Broad Line Region in future work.
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{PLOTS/seq1_Hbeta_EVstacks_wrange_4800_5050.png}
\caption{Spectral stacks for objects classified as type 1 AGN.
A zoom on the $\rm H\beta$ spectral region is presented.
The stacks are divided by their median flux to ease comparison.
Stacks are taken in bins along the vertical EV1 sequence \citep{Sulentic02}, with emission lines from \citet{VandenBerk2001} marked.}
\label{fig:spectral:stacks:qso:EV1}
\end{figure*}
\section{Conclusions and outlook}\label{sec:Discussion}
In this work, we present the contents of the optical spectroscopic catalogue associated with X-ray point-like sources in the SPIDERS survey, published as part of the SDSS DR16. The systematic, highly complete follow-up programme assembled over four generations of SDSS delivers the largest spectroscopic redshift sample of an X-ray survey to date and represents a test-bed for large identification programmes for future X-ray surveys, especially with regard to the upcoming eROSITA all-sky survey.
The combination of wide-area X-ray surveys with optical spectroscopy enables a large number of unique scientific applications. As a further example, we discuss below a possible application to cosmology, following the works of \citet{Risaliti2015,Lusso2017}.
\subsection{Future AGN spectroscopic surveys following X-ray selected AGNs}
\label{subsec:future:surveys}
The SRG eROSITA full-sky scans will provide a large number of targets for spectroscopic observation \citep{Merloni12,Predehl16}.
SDSS-IV SPIDERS has demonstrated its ability to observe AGN with high completeness and to unambiguously classify the X-ray sources.
For the eROSITA eRASS8 survey, with a flux limit around $10^{-14}\,\rm erg\,cm^{-2}\,s^{-1}$, the peak of the redshift distribution should be around $z\sim1$ \citep{Merloni12,Comparat19}, compared to $z\sim0.1$-$0.2$ in 2RXS and XMMSL2.
This justifies the need for larger spectroscopic infrastructure to reach a similar completeness.
The next X-ray observation programme lined up is a transition programme linking SDSS-IV and SDSS-V.
This programme, named eFEDS, will consist of 12 eROSITA-dedicated plates covering 60 deg$^2$ within the footprint of the eROSITA Performance Verification programme. The data will be released as part of the next SDSS Data Release.
Later in September 2020, following the completion of the first full-sky scan, SDSS-V \citep{Kollmeier17}, with its telescopes located in both hemispheres, will optimally observe the bright half of the sources.
A couple of years later, using a deeper four-year full-sky scan, 4MOST \citep{Merloni2019} will observe the fainter half of the eROSITA sources.
In the longer term, the Athena \citep{Nandra2013athena} observatory will be well matched to the capabilities of the upcoming optical multi-fiber spectrograph MSE facility \citep{McConnachie16} to be mounted on a 10 meter class telescope.
\subsection{A tentative forecast for the eROSITA era: cosmology with the AGN standard candle}
Recently, \citet{Risaliti2015,Lusso2017} proposed a method to construct quasar standard candles.
It relies on the existence of an intrinsic non-linear relation between the UV emission from the accretion disc and the X-ray emission from the surrounding corona of the AGN.
This relation between X-ray luminosity and UV luminosity has been observed \citep{Lusso2010}.
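Schematically, the relation reads $\log L_{\rm X} = \gamma \log L_{\rm UV} + \beta$ with $\gamma<1$ (the non-linearity). Rewriting it in terms of observed fluxes,
\begin{align}
\log F_{\rm X} = \gamma \log F_{\rm UV} + 2(\gamma - 1)\log D_{\rm L} + \beta',
\end{align}
where $\beta'$ absorbs the constants, a measurement of both fluxes yields the luminosity distance,
\begin{align}
\log D_{\rm L} = \frac{\gamma \log F_{\rm UV} - \log F_{\rm X} + \beta'}{2(1-\gamma)},
\end{align}
which is what makes these sources usable as standard candles.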
Our current understanding of the disc-corona system and of the non-linear scaling between its UV and X-ray luminosity is not yet sufficient to validate this method in detail, and the physical origin of the relation is debated in the literature.
\citet{Kubota2018} and \citet{Panda2019} propose models that are in agreement with the $L_{\rm X}$-$L_{\rm UV}$ relation of \citet{Lusso2017}.
On the contrary, after exploring the physics of the disc and the corona within a radiatively efficient AGN model, \citet{Arcodia2019} could not find a satisfactory explanation for the tight relation observed.
Some authors find $\alpha_{\rm OX}$ to be correlated with the Eddington ratio \citep{Lusso2010}, some do not \citep[e.g.][]{Vasudevan2009}, and others find a correlation with black hole mass. This point therefore remains to be fully established.
SDSS-IV SPIDERS has demonstrated our ability to observe AGN with high completeness and to unambiguously classify the X-ray sources, as required by this method.
Additionally, \citet{Coffey2019} showed our ability to determine accurately the relevant spectral features for such an analysis.
A cosmological analysis of the SPIDERS 2RXS sample is limited by the depth of the X-ray data: the X-ray properties of the AGN are not determined well enough, which impedes an optimal selection of type 1 AGN standard candles.
In the near future, eROSITA will provide the necessary high quality X-ray data, and we estimate below the cosmological constraints achievable via this method, using the eROSITA mock catalogue produced by \citet{Comparat19}.
For all type 1 mock AGN, we simulate the quasar UV\,-\,X-ray relationship and derive distance modulus estimates following the \citet{Risaliti2015} method.
The resulting quasar Hubble diagram is then fit using a standard $\Lambda$CDM cosmological model to place constraints on $\rm \Omega_{M}$ and $\rm \Omega_{\Lambda}$.
We use only sources for which 4MOST will obtain optical spectra at a signal-to-noise greater than or equal to 10.
Among these sources, we conservatively assume that $\sim$10 per cent will have reliable measurements of both the UV and X-ray flux densities. We find a best-fit cosmology compatible with the input cosmology of the simulations.
The uncertainties obtained are 5\% on $\rm \Omega_{M}$ and 10\% on $\rm \Omega_{\Lambda}$.
For comparison, \citet{Risaliti2015} constrained $\rm \Omega_{M}$ and $\rm \Omega_{\Lambda}$ with current samples to the $\sim$40\% and 27\% level, while the Union 2.1 supernova sample of \citet{Suzuki2012} constrained them to the 14\% and 11\% level.
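The essence of such a Hubble-diagram fit can be sketched as follows. This is a deliberately simplified, numpy-only illustration: the Hubble constant, the grid search, and the integration settings are arbitrary choices of ours, not those of the actual analysis:

```python
import numpy as np

C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # assumed Hubble constant [km/s/Mpc]

def distance_modulus(z, omega_m, omega_l):
    """Distance modulus mu(z) in a near-flat LCDM model.

    The comoving distance is integrated with the trapezoidal rule and the
    curvature term in the distance is neglected (adequate for this sketch).
    """
    zp = np.linspace(0.0, z, 500)
    omega_k = 1.0 - omega_m - omega_l
    e = np.sqrt(omega_m*(1+zp)**3 + omega_k*(1+zp)**2 + omega_l)
    inv_e = 1.0 / e
    dc = np.sum((inv_e[1:] + inv_e[:-1]) * np.diff(zp)) / 2.0  # [c/H0]
    dl_mpc = (1.0 + z) * (C_KM_S / H0) * dc
    return 5.0 * np.log10(dl_mpc * 1e6 / 10.0)   # D_L converted to pc

def fit_cosmology(zs, mus, sigma_mu):
    """Brute-force chi^2 fit of (Omega_M, Omega_L) to a Hubble diagram."""
    grid = np.linspace(0.0, 1.0, 21)
    best, best_chi2 = (None, None), np.inf
    for om in grid:
        for ol in grid:
            model = np.array([distance_modulus(z, om, ol) for z in zs])
            chi2 = np.sum(((mus - model) / sigma_mu)**2)
            if chi2 < best_chi2:
                best, best_chi2 = (om, ol), chi2
    return best
```

In a real analysis the distance moduli would come from the quasar $F_{\rm UV}$/$F_{\rm X}$ measurements rather than being simulated, and the grid search would be replaced by a proper likelihood exploration.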
Given the large number of type 1 AGN to be detected by eROSITA and then observed with optical spectroscopy, the combination of eROSITA with SDSS-V and 4MOST should be able to establish whether the method is valid.
If the method is proven right, it would produce competitive and independent constraints on cosmological parameters.
More accurate forecasts, where the photometry and the spectroscopy are simulated jointly (based on the stacks presented here) to populate the Hubble diagram, are foreseen in upcoming studies (PhD thesis of D. Coffey, to be submitted).
\section*{Acknowledgements}
We thank Jan Kurpas and Fabian Emmerich (AIP) for help with data presentation for screening and plotting.
We thank the referee for the constructive feedback.
This paper represents an effort by both the SDSS-III and SDSS-IV collaborations.
Funding for SDSS-III was provided by the Alfred
P. Sloan Foundation, the Participating Institutions, the
National Science Foundation, and the U.S. Department
of Energy Office of Science.
Funding for the Sloan Digital Sky Survey IV has been provided by
the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
Science, and the Participating Institutions. SDSS-IV acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group,
the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrof\'isica de Canarias, The Johns Hopkins University,
Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
National Astronomical Observatory of China, New Mexico State University,
New York University, University of Notre Dame,
Observat\'ario Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Portsmouth,
University of Utah, University of Virginia, University of Washington,
University of Wisconsin,
Vanderbilt University, and Yale University.
This publication makes use of data products from the \textit{Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology.
\textit{WISE} and NEOWISE are funded by the National Aeronautics and Space Administration.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No.~NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
\bibliographystyle{aa}
\section{Introduction}
The emerging area of networked control systems has gained significant attention in recent years due to its potential applications in many areas such as machine-to-machine communication for security, surveillance, production, building management, and traffic control. The idea of controlling dynamical systems over communication networks is supported by the rapid advance of wireless technology and the development of cost-effective and energy efficient devices (sensors), capable of sensing, computing, and transmitting. This paper considers a setup in which a sensor node communicates the observations of a linear dynamical system (plant) over a network of wireless nodes to a remote controller in order to stabilize the system in closed-loop. The wireless nodes have transmit and receive capability and we call them \emph{relays}, as they relay the plant's state information to the remote controller. We assume a transmit power constraint on the sensor and relays, and the wireless links between all agents (sensor, relays, and controller) are modeled as Gaussian channels. The objective is to study stabilizability of the plant over Gaussian networks.
\subsection{Problem Formulation}
Consider a discrete linear time invariant system, whose state equation is given by
\begin{eqnarray}
X_{t+1}&=&A X_t + B U_t + W_t, \label{eq:stateEquation}
\end{eqnarray}
where $X_t\in\mathbb{R}^n$, $U_t\in\mathbb{R}^m$, and
$W_t\in\mathbb{R}^n$ are the state, the control, and the plant noise, respectively.
The initial state $X_0$ is a random variable with bounded differential
entropy $|h(X_0)|<\infty$ and a given covariance matrix
$\lmm_0$. The plant noise $\{W_t\}$ is a zero mean white Gaussian
noise sequence with variance $K_W$ and it is assumed to be independent
of the initial state $X_0$. The matrices $A$ and $B$ are of appropriate dimensions and the
pair $(A,B)$ is controllable. Let $\{\lm_1,\lm_2,\dots,\lm_n\}$ denote
the eigenvalues of the system matrix $A$. Without loss of generality
we assume that all the eigenvalues of the system matrix are outside
the unit disc, i.e., $|\lm_i|\geq 1$. The unstable modes can
be decoupled from the stable modes by a similarity transformation. If
the system in \eqref{eq:stateEquation} is one-dimensional then $A$ is
scalar and we use the notation $A=\lm$. We consider a remote control
setup, where a sensor node observes the state process and transmits it
to a remotely situated controller over a network of relay\footnote{A
relay is a communication device whose sole purpose is to support
communication from the information source to the destination. In our
setup the relay nodes cooperate to communicate the state process
from sensor to the remote controller. If the system design objective
is to replace wired connections, then relaying is a vital approach
to communicate over longer distances.} nodes as shown in
Fig. \ref{fig:SystemDiagram}. The communication links between nodes
are modeled as white Gaussian channels, which is why we refer to it as
a Gaussian network. In order to communicate the observed state value
$X_t$, an encoder $\mathcal{E}$ is lumped with the observer
$\mathcal{O}$ and a decoder $\mathcal{D}$ is lumped with the
controller $\mathcal{C}$. In addition there are $L$ relay nodes
$\{\mathcal{R}_i\}^L_{i=1}$ within the channel to support
communication from $\mathcal{E}$ to $\mathcal{D}$. At any time instant
$t$, $S_{e,t}$ and $R_t$ are the input and the output of the network
and $U_t$ is the control action. Let $f_t$ denote the observer/encoder
policy such that $S_{e,t}=f_t(X_{[0,t]},U_{[0,t-1]})$, where
$X_{[0,t]}:= \{X_0, X_1,\dots,X_t\}$ and we have the following
average transmit power constraint $\lim_{T\rightarrow \infty}\frac{1}{T}\sum^{T-1}_{t=0}\mathbb{E}[S^2_{e,t}]\leq P_S$. Further let $\pi_t$ denote the decoder/controller policy, then
$U_t=\pi_t\left(R_{[0,t]}\right)$. The objective in this paper is to
find conditions on the system matrix $A$ so that the plant in
(\ref{eq:stateEquation}) can be mean square stabilized over a given
Gaussian network.
\begin{definition}
A system is said to be \emph{mean square stable} if there exists a constant $M<\infty$ such that $\ex[\|X_t\|^2]<M$ for all $t$.
\end{definition}
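As a toy illustration of this notion (for the simplest point-to-point case, not the relay setups studied below), the following simulates a scalar plant controlled over a single AWGN link with a classic linear scheme: scale the state to the power budget, form the MMSE estimate at the receiver, and apply certainty-equivalent control. All names and parameter values are illustrative:

```python
import numpy as np

def simulate(lam, P, N, sigma_w=1.0, T=2000, seed=0):
    """Simulate x_{t+1} = lam*x_t + u_t + w_t over an AWGN channel.

    lam     : open-loop pole of the scalar plant
    P, N    : average transmit power and channel noise variance
    Returns the empirical second moment of the state (second half of run).
    """
    rng = np.random.default_rng(seed)
    x, v = rng.normal(), 1.0                  # state and its second moment
    m = []
    for _ in range(T):
        s = np.sqrt(P / v) * x                # transmit: E[s^2] = P
        r = s + np.sqrt(N) * rng.normal()     # AWGN channel output
        xhat = np.sqrt(P * v) / (P + N) * r   # MMSE estimate of x from r
        x = lam * (x - xhat) + sigma_w * rng.normal()  # u_t = -lam*xhat
        v = lam**2 * v * N / (P + N) + sigma_w**2      # moment recursion
        m.append(x**2)
    return float(np.mean(m[T // 2:]))
```

For this scheme the second-moment recursion contracts iff $\lambda^2 N/(P+N) < 1$, i.e. $\log|\lambda| < \frac{1}{2}\log(1+P/N)$, consistent with the known capacity-type conditions for the point-to-point Gaussian channel recalled in the literature review.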
\begin{figure}[!t]
\centering
\psfrag{p}[][][3]{\begin{sideways}Plant\end{sideways}}
\psfrag{oe}[][][3]{$\mathcal{O}/\mathcal{E}$}
\psfrag{dc}[][][3]{$\mathcal{D}/\mathcal{C}$}
\psfrag{rn}[][][2]{$\mathcal{R}_L$}
\psfrag{r1}[][][2]{$\mathcal{R}_1$}
\psfrag{r2}[][][2]{$\mathcal{R}_2$}
\psfrag{r3}[][][2]{$\mathcal{R}_3$}
\psfrag{ch}[][][2]{AWGN Relay Channel}
\psfrag{nfch}[][][2]{\begin{sideways}Noiseless Feedback Communication Channel\end{sideways}}
\psfrag{x}[][][2.5]{$X_t$}
\psfrag{st}[][][2.5]{$S_{e,t}$}
\psfrag{rt}[][][2.5]{$R_t$}
\psfrag{ut}[][][2.5]{$U_t$}
\resizebox{7 cm}{!}{\epsfbox{RelayNetwork}}
\vspace{-.2cm}
\caption{The unstable plant has to be controlled over a Gaussian relay network.}\label{fig:SystemDiagram}
\end{figure}
\vspace{-.3cm}
\subsection{Literature Review}
Important contributions to control over communication channels include \cite{BansalBasar89,Baillieul99,WongBrockett99,NairEvans00,EliaMitter01,NairEvans04,Elia04,MatveevSavkin04,TatikondaMitter04b,TatikondaMitter04c,MartinElia06,SahaiMitter06,MatveevSavkin07,BraslavskyFreudenberg07,MartinsDahleh08,MiddletonBraslavsky09,YukselTatikonda09,YukselBasar10,FarhadiAhmed11}.
The problem of remotely controlling dynamical systems over communication channels is studied with methods from stochastic control theory and information theory. The seminal paper by Bansal and Ba\c{s}ar \cite{BansalBasar89} used fundamental information theoretic arguments to obtain optimal policies for LQG control of a first order plant over a point to point Gaussian channel. Minimum rate requirements for stabilizability of a noiseless scalar plant were first established in \cite{Baillieul99,WongBrockett99} followed by \cite{NairEvans00}. Further rate theorems for stabilization of linear plants over some discrete and continuous alphabet channels can be found in \cite{TatikondaMitter04c,BraslavskyFreudenberg07,MartinsDahleh08,CharalambousFarhadi08,MiddletonBraslavsky09,YiqianChen09,YukselBasar10,FreudenbergSolo10,ShuMiddleton11,YukselIT12,SilvaQuevedo10,VargasSilva12}. The papers \cite{BansalBasar89,TatikondaMitter04b,TatikondaMitter04c,BraslavskyFreudenberg07,MiddletonBraslavsky09,YiqianChen09,FreudenbergSolo10,YukselTatikonda09,ShuMiddleton11,YukselBasar10,SilvaQuevedo10,VargasSilva12} addressing control over Gaussian channels are more relevant to our work. In \cite{BansalBasar89} linear sensing and control policies are shown to be optimal for the LQG control of a first order linear plant over a point-to-point Gaussian channel. A necessary condition for stabilization relating eigenvalues of the plant to the capacity of the Gaussian channel first appeared in \cite{TatikondaMitter04b,TatikondaMitter04c}. Some important contributions on stabilization over Gaussian channels with average transmit power constraints have been made in \cite{BraslavskyFreudenberg07,MiddletonBraslavsky09,FreudenbergSolo10,YiqianChen09,ShuMiddleton11,KumarLaneman11,VargasSilva12}. 
In \cite{BraslavskyFreudenberg07} sufficient conditions for stabilization of both continuous time and discrete time multi-dimensional plants over a scalar white Gaussian channel were obtained using linear time invariant (LTI) sensing and control schemes. It was shown in \cite{BraslavskyFreudenberg07,FreudenbergSolo10} that under some assumptions there is no loss in using LTI schemes for stabilization, that is, the use of non-linear time varying schemes does not allow stabilization over channels with lower signal-to-noise ratio. The stability results were extended to a colored Gaussian channel in \cite{MiddletonBraslavsky09}. In \cite{YukselBasar10} the authors considered noisy communication links between both sensor--controller and controller--actuator and presented necessary and sufficient conditions for mean square stability. Stabilization of noiseless LTI plants over parallel white Gaussian channels subject to a transmit power constraint has been studied in \cite{YiqianChen09,ShuMiddleton11,KumarLaneman11,VargasSilva12}. The paper \cite{YiqianChen09} considers output feedback stabilization and \cite{ShuMiddleton11} considers state feedback stabilization, and they both derive necessary and sufficient conditions for stability under a total transmit power constraint. The necessary condition derived in \cite{ShuMiddleton11} for mean-square stabilization of discrete time LTI plants over parallel Gaussian channels is not tight in general and its achievability is not guaranteed by LTI schemes. The paper \cite{VargasSilva12} focuses on mean-square stabilization of a two-input two-output system over two parallel Gaussian channels. By restricting the study to LTI schemes and assuming an individual power constraint on each channel, the authors derive tight necessary and sufficient conditions for both state feedback and output feedback architectures. Realizing that LTI schemes are not optimal in general for stabilization over parallel channels \cite{ShuMiddleton11},
the paper \cite{KumarLaneman11} proposes a non-linear time invariant scheme for stabilization of a scalar noiseless plant over a parallel Gaussian channel using the idea that independent information should be transmitted on parallel channels \cite{VaishampayanThesis,YukselTatikonda09}. The problem of finding a tight necessary and sufficient condition for stabilization of an $m$-dimensional plant over an $n$-dimensional parallel Gaussian channel is still open, which we investigate in this paper.
As summarized above, the previous works on control over Gaussian channels have mostly focused on situations where there is no intermediate node between the sensor and the remote controller. The problems related to control over Gaussian networks with relay nodes are largely open. Such problems are hard because a relay network can have an arbitrary topology and every node within the network can have memory and can employ any transmit strategy. The papers \cite{Tatikonda03} and \cite{GuptaHassibi09} have derived conditions for stabilization over networks with digital noiseless channels and analog erasure channels respectively, however those results do not apply to noisy networks. In \cite{SahaiMitter06,YukselIT12} moment stability conditions in terms of error exponents have been established. However, even a single letter expression for channel capacity of the basic three-node Gaussian relay channel \cite{InfoBook} is not known in general. In \cite{GastparVetterli05} Gastpar and Vetterli determined capacity of a large Gaussian relay network in the limit as the number of relays tends to infinity. The problem of control over Gaussian relay channels was first introduced in \cite{zaidiICCA10,zaidiReglermote} and further studied in \cite{zaidiACC11,KumarLanemanCDC10}. The papers \cite{zaidiICCA10,zaidiReglermote,zaidiACC11,KumarLanemanCDC10} derived sufficient conditions for mean square stability of a scalar plant by employing linear schemes over Gaussian channels with single relay nodes. In this paper we consider more general setups with multiple relays and multi-dimensional plants. We also derive necessary conditions along with sufficient conditions and further discuss how good linear policies are for various network topologies. In particular this paper makes the following contributions:
\vspace{-.3cm}
\subsection{Main Contributions}
\begin{itemize}
\item In Sec.~\ref{sec:NecConditionGen} we obtain a necessary condition for
mean square stabilization of the linear system in
\eqref{eq:stateEquation} over the general relay network depicted in Fig.~\ref{fig:SystemDiagram}.
\item In Sec.~\ref{sec:Cascade}--\ref{sec:NonOrthogonal} we derive necessary as well as sufficient conditions for
stabilization over some fundamental network topologies such as
cascade network, parallel network, and non-orthogonal network, which serve as building blocks for a large class of
Gaussian networks (see Figures \ref{fig:CascadeNetwork}, \ref{fig:ParallelNetwork}, \ref{fig:HalfDuplexNetwork}, pp. 7, 10, 13). Necessary conditions are obtained using information
theoretic tools. Sufficient conditions are obtained using linear
schemes.
\item Sub-optimality of linear policies is discussed and some insights on optimal schemes are presented. In some cases linear schemes can be
asymptotically optimal and in some cases exactly optimal.
\item A linear time varying scheme is proposed in Sec.~\ref{sec:multiDimension}, which is optimal for stabilization of noisy multi-dimensional plants over the point-to-point scalar Gaussian channels.
\item The minimum rate required for stabilization of multi-dimensional plants over parallel Gaussian channels is established in Sec. \ref{sec:Parallel}, which is achievable by a non-linear time varying scheme for noiseless plants.
\end{itemize}
\vspace{-.3cm}
\section{Necessary Condition for Stabilization}\label{sec:NecConditionGen}
In the literature \cite{Elia04,MartinsDahleh08,SilvaOstergaard11,YukselIT12}, there exist a variety of information rate inequalities characterizing fundamental limits on the performance of linear systems controlled over communication channels. In the following we state a relationship which gives a necessary condition for mean square stabilization over the general network depicted in Fig.~\ref{fig:SystemDiagram}.
\begin{theorem}\label{thm:NecGeneral}
If the linear system in \eqref{eq:stateEquation} is mean square
stable over the Gaussian relay network, then
\begin{align}
\log\left(|\dt\left(A\right)|\right)\leq\liminf_{T\rightarrow\infty}
\frac{1}{T} I(\bar{X}_{[0,T-1]} \rightarrow R_{[0,T-1]}),
\end{align}
where $\{\bar{X}_t\}$ denotes the uncontrolled state process obtained by substituting $U_t=0$ in \eqref{eq:stateEquation}, i.e., $\bar{X}_{t+1}=A\bar{X}_t+W_t$, the notation $|\dt\left(A\right)|$ represents the absolute value of determinant
of matrix $A$ and $$I(\bar{X}_{[0,T-1]} \rightarrow R_{[0,T-1]})=\sum^{T-1}_{t=0} I
\left(\bar{X}_{[0,t]};R_t|R_{[0,t\sm 1]}\right)$$ is the directed
information from the uncontrolled state process
$\{\bar{X}_{[0,T-1]}\}$ to the sequence of variables $\{R_{[0,T-1]}\}$
received by the controller over the network of relay nodes.
\end{theorem}
\begin{proof}
The proof is given in Appendix \ref{apx:NecGeneral}, which essentially follows from the same steps as in the proof of Theorem 4.1 in \cite{YukselIT12}, however, with some differences due to the network structure. Similar constructions can also be found in \cite{MartinsDahleh08,SilvaOstergaard11}.
\end{proof}
\section{Cascade (Serial) Network}\label{sec:Cascade}
In this section we consider a cascade network of half-duplex relay nodes. A node which is capable of transmitting and receiving signals simultaneously using the same frequency band is known as \emph{full-duplex} while a \emph{half-duplex} node cannot simultaneously receive and transmit signals. In practice it is expensive and hard to build a communication device which can transmit and receive signals at the same time using the same frequency, due to the self-interference created by the transmitted signal to the received signal. Therefore half-duplex systems are mostly used in practice. Consider a cascade network comprised of $L-1$ half-duplex relay nodes depicted in Fig.~\ref{fig:CascadeNetwork}, where the state encoder $\e$ observes the state of the system and transmits its signal to the relay node $\r_1$. The relay node $\r_1$ transmits a signal to the relay node $\r_2$ and so on. Finally the state information is received at the remote decoder/controller $\d$ from $\r_{L-1}$. The communication within the network takes place such that only one node is allowed to transmit at every time step. That is, if in a time slot $\r_i$ transmits signal to $\r_{i+1}$, then all the remaining nodes in the network are considered to be silent in that time slot. At any time step $t$, $S_{e,t}$ is the signal transmitted from $\e$ and $S^i_{r,t}$ is the signal transmitted from $\r_i$, which are given by
\begin{align}\label{eq:inOutCascadeRelay1}
&S_{e,t}=f_t\left(X_{[0,t]},U_{[0,t-1]}\right) \quad \forall t:t=1+nL, n\in\mathbb{N}, \qquad S_{e,t}=0 \quad \text{otherwise}, \nonumber \\
&S^i_{r,t}=g^i_t\left(Y^i_{[0,t]}\right) \quad \forall t:t=1+i+nL, n\in\mathbb{N}, \qquad S^i_{r,t}=0 \quad \text{otherwise},
\end{align}
where $\mathbb{N}=\{0,1,2,\dots\}$, $f_t:\mathbb{R}^{2t\sm1}\rightarrow\mathbb{R}$, $g^i_t:\mathbb{R}^t\rightarrow\mathbb{R}$ such that $\ex\left[f^2_t\left(X_{[0,t]},U_{[0,t-1]}\right)\right]=L\ps$, $\ex\left[\left(g^i_t\left(Y_{[0,t]}\right)\right)^2\right]=L\pr^i$, $\sum^{L\sm1}_{i=1}\pr^i\leq\pra$. The signal received by $\r_i$ is
\begin{align}\label{eq:inOutCascadeRelay2}
Y^1_t=S_{e,t}+Z^1_t, \quad Y^i_t=S^{i\sm1}_{r,t}+Z^i_t \quad \forall t:t=nL+i, n\in\mathbb{N}, \qquad Y^i_t=0 \quad \text{otherwise}.
\end{align}
Here $Z^i_t\sim\mathcal{N}(0,N_i)$ denotes mutually independent white Gaussian noise components. Accordingly $\d$ receives $R_t=S^{L\sm1}_{r,t}+Z^L_t$ at $t=nL$ and zero otherwise.
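The timing in \eqref{eq:inOutCascadeRelay1}--\eqref{eq:inOutCascadeRelay2} amounts to a round-robin schedule over the $L$ hops; a small helper (purely illustrative, our own naming) makes the slot assignment explicit:

```python
def active_transmitter(t, L):
    """Transmitting node in slot t >= 1 of the cascade schedule (L >= 2).

    The encoder E transmits when t = 1 + n*L, relay R_i when t = 1 + i + n*L,
    and the decoder/controller receives in slots t = n*L (R_{L-1}'s slot).
    Returns 'E' or 'Ri' for the transmitting node.
    """
    r = t % L
    if r == 1:
        return "E"
    # solve (1 + i) mod L == r for the relay index i in {1, ..., L-1}
    return "R%d" % ((r - 1) % L)
```

Each node is thus active once every $L$ slots, which is why the rate (and the necessary condition below) carries a $1/L$ pre-log factor.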
\begin{figure}[!h]
\centering
\psfrag{p}[][][3.5]{\begin{sideways}Plant\end{sideways}}
\psfrag{e}[][][3.5]{$\mathcal{E}$}
\psfrag{d}[][][3.5]{$\mathcal{D}$}
\psfrag{rn}[][][3.5]{$\mathcal{R}_L$}
\psfrag{r1}[][][3.5]{$\mathcal{R}_1$}
\psfrag{r2}[][][3.5]{$\mathcal{R}_2$}
\psfrag{r3}[][][3.5]{$\mathcal{R}_{L\sm1}$}
\psfrag{ch}[][][2]{AWGN Relay Channel}
\psfrag{nfch}[][][2]{\begin{sideways}Noiseless Feedback Communication Channel\end{sideways}}
\psfrag{z1}[][][2.5]{$Z^1_{t}$}
\psfrag{z2}[][][2.5]{$Z^2_{t}$}
\psfrag{z3}[][][2.5]{$Z^{L}_{t}$}
\psfrag{st}[][][2.5]{$S_t$}
\psfrag{rt}[][][2.5]{$R_t$}
\psfrag{ut}[][][2.5]{$U_t$}
\psfrag{s1}[][][2.5]{$S_{e,t}$}
\psfrag{s2}[][][2.5]{$S^1_{r,t}$}
\psfrag{s3}[][][2.5]{$S^2_{r,t}$}
\psfrag{s4}[][][2.5]{$S^{L\sm1}_{r,t}$}
\psfrag{y1}[][][2.5]{$Y^1_t$}
\psfrag{y2}[][][2.5]{$Y^2_t$}
\psfrag{y3}[][][2.5]{$Y^{L\sm1}_t$}
\psfrag{y4}[][][2.5]{$R_t$}
\resizebox{14 cm}{!}{\epsfbox{cascadeRelay}}
\vspace{-.2cm}
\caption{A cascade Gaussian network model.}\label{fig:CascadeNetwork}
\end{figure}
We now present a necessary condition for mean square stability over the given channel.
\begin{theorem}\label{thm:cascadeNetwork_nec}
If the system (\ref{eq:stateEquation}) is mean square stable over the \emph{cascade network} then
\begin{align}\label{eq:thmCascadenec}
\log\left(\left|\dt\left(A\right)\right|\right)<\frac{1}{2L} \log\left(1+L\min\left\{\frac{\ps}{\n_1},\frac{\pra}{\sum^{L}_{i=2}\n_i}\right\}\right).
\end{align}
\end{theorem}
\begin{proof}
We first derive an outer bound on the directed information $I(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]})$ over the given channel and then use Theorem \ref{thm:NecGeneral} to find the necessary condition \eqref{eq:thmCascadenec}.
\begin{align}\label{eq:boundNec1Cascade}
&I(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]})\stackrel{(a)}{=}I(\bar{X}_{[1,LT]} ; R_{[1,LT]})\stackrel{(b)}{\leq}I(\bar{X}_{[1,LT]}; Y^i_{[1,LT]}, R_{[1,LT]}) \nonumber \\
&= \sum^{LT}_{t=1}I(\bar{X}_{[1,LT]} ; R_t, Y^i_t | R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]})\stackrel{(c)}{=} \sum^{LT}_{t=1} \Big( h(R_t, Y^i_t | R_{[1,t-1]}, Y^i_{[1,t\sm1]}) \nonumber \\
&\quad - h(R_t, Y^i_t | R_{[1,t-1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})\Big)\stackrel{(d)}{=}\sum^{LT}_{t=1}\Big(h(Y^i_t | R_{[1,t-1]}, Y^i_{[1,t-1]}) + h(R_t|R_{[1,t\sm1]}, Y^i_{[1,t]}) \nonumber \\
&\quad - h(Y^i_t | R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})- h(R_t|R_{[1,t\sm1]}, Y^i_{[1,t]},\bar{X}_{[1,LT]}) \Big) \nonumber \\
&\stackrel{(e)}{=}\sum^{LT}_{t=1}\Big(h(Y^i_t | R_{[1,t-1]}, Y^i_{[1,t-1]})- h(Y^i_t | R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})+\underbrace{I(R_t;\bar{X}_{[1,LT]}|R_{[1,t\sm1]}, Y^i_{[1,t]})}_{=0}\Big) \nonumber \\
&\stackrel{(f)}{\leq}\sum^{LT}_{t=1}\left(h(Y^i_t)- h(Y^i_t |R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})\right)\nonumber \\
&\stackrel{(g)}{\leq}\sum^{LT}_{t=1}\left(h(Y^i_t)- h(Y^i_t | S^{i\sm1}_{r,t},R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})\right)\stackrel{(h)}{=}\sum^{LT}_{t=1}I(S^{i\sm1}_{r,t};Y^i_t) \nonumber\\
&\stackrel{(i)}{=}\sum^{T-1}_{t=0}I(S^{i\sm 1}_{r,tL+i};Y^i_{tL+i})\stackrel{(j)}{\leq}\frac{1}{2}\sum^{T-1}_{t=0}\log\left(1+\frac{L\pr^{i\sm 1}}{\n_i}\right)=\frac{T}{2}\log\left(1+\frac{L\pr^{i\sm 1}}{\n_i}\right)
\end{align}
where $(a)$ follows from \cite[Theorem 2]{Massey90}; $(b)$ follows from the fact that adding side information cannot decrease mutual information; $(c)$, $(d)$ and $(e)$ follow from properties of mutual information and differential entropy; $(f)$ follows from the fact that conditioning reduces entropy and the Markov chain $\bar{X}_{[1,LT]}-(Y^i_{[1,t]},R_{[1,t\sm1]})-R_{t}$; $(g)$ follows from the fact that conditioning reduces entropy; $(h)$ follows from the Markov chain $Y^i_t-S^{i\sm1}_{r,t}-(R_{[1,t\sm1]}, Y^i_{[1,t\sm 1]},\bar{X}_{[1,LT]})$ due to the memoryless channel from $S^{i\sm1}_{r,t}$ to $Y^i_t$; $(i)$ follows from \eqref{eq:inOutCascadeRelay1} and \eqref{eq:inOutCascadeRelay2}; and $(j)$ follows from the fact that the mutual information of a Gaussian channel is maximized by the Gaussian input distribution \cite[Theorem 8.6.5]{InfoBook}. If we replace $Y^i_{[1,LT]}$ with $Y^1_{[1,LT]}$ in step $(b)$ of \eqref{eq:boundNec1Cascade} and $S^{i\sm1}_{r,t}$ with $S_{e,t}$ in step $(g)$ of \eqref{eq:boundNec1Cascade}, then we get the following bound:
\begin{align}\label{eq:boundNec2Cascade}
I(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]})\leq\frac{T}{2}\log\left(1+\frac{L\ps}{\n_1}\right).
\end{align}
The directed information $I(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]})$ can also be bounded as
\begin{align}\label{eq:boundNec3Cascade}
&I\left(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]}\right)\stackrel{}{=}\sum^{LT}_{t=1}I\left(\bar{X}_{[1,t]} ; R_t | R_{[1,t\sm1]}\right)\stackrel{(a)}{\leq}\sum^{LT}_{t=1}I\left(S^{L\sm1}_{r,[1,t]} ; R_t| R_{[1,t\sm1]}\right)=I\left(S^{L\sm1}_{r,[1,LT]} \rightarrow R_{[1,LT]}\right) \nonumber \\
&\stackrel{(b)}{\leq}\sum^{LT}_{t=1}I(S^{L\sm 1}_{r,t};R_{t})\stackrel{(c)}{=}\sum^{T-1}_{t=0}I(S^{L\sm 1}_{r,tL+L};R_{tL+L})\stackrel{(d)}{\leq}\frac{T}{2}\log\left(1+\frac{L\pr^{L\sm 1}}{\n_L}\right),
\end{align}
where $(a)$ follows from the Markov chain $\bar{X}_{[1,LT]}-(S^{L\sm1}_{r,[1,t]},R_{[1,t\sm1]})-R_{[1,t]}$; $(b)$ follows from \cite[Theorem 1]{Massey90}; $(c)$ follows from \eqref{eq:inOutCascadeRelay1} and \eqref{eq:inOutCascadeRelay2}; and $(d)$ follows from the fact that the mutual information of a Gaussian channel is maximized by the Gaussian input distribution \cite[Theorem 8.6.5]{InfoBook}. Finally, using \eqref{eq:boundNec1Cascade}, \eqref{eq:boundNec2Cascade}, and \eqref{eq:boundNec3Cascade}, we have the following bound:
\begin{align}\label{eq:directedInfoCascade}
&I(\bar{X}_{[1,LT]} \rightarrow R_{[1,LT]})\stackrel{}{\leq}\frac{T}{2}\min\left\{\log\left(1+\frac{L\ps}{\n_1}\right),\log\left(1+\frac{L\pr^1}{\n_2}\right),\dots,\log\left(1+\frac{L\pr^{L\sm1}}{\n_{L}}\right) \right\} \nonumber \\
&\stackrel{(a)}{=}\frac{T}{2}\log\left(1+L\min\left\{\frac{\ps}{\n_1},\frac{\pr^1}{\n_2},\dots,\frac{\pr^{L\sm1}}{\n_{L}}\right\}\right) \nonumber \\
&\leq\frac{T}{2}\log\left(1+L\min\left\{\frac{\ps}{\n_1},\max_{\pr^i: \sum \pr^i\leq \pra} \min \left\{\frac{\pr^1}{\n_2},\dots,\frac{\pr^{L\sm1}}{\n_{L}}\right\}\right\}\right) \nonumber \\
&\stackrel{(b)}{=}\frac{T}{2}\log\left(1+L \min\left\{\frac{\ps}{\n_1},\frac{\pra}{\sum^{L}_{i=2}\n_i}\right\}\right),
\end{align}
$(a)$ follows from the fact that $\log(1+x)$ is a monotonically increasing function of $x$; and $(b)$ follows from the optimal power allocation choice $\pr^i=\frac{\n_{i+1}\pra}{\sum^{L}_{j=2}\n_j}$. Finally, dividing \eqref{eq:directedInfoCascade} by $LT$ and letting $T\rightarrow \infty$ according to Theorem \ref{thm:NecGeneral}, we get the necessary condition \eqref{eq:thmCascadenec}.
\end{proof}
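As a numerical illustration of Theorem \ref{thm:cascadeNetwork_nec} (a sketch, not part of the formal development), the following Python snippet evaluates the right-hand side of \eqref{eq:thmCascadenec}; the function name, base-2 logarithms, and parameter values are our own choices for illustration.

```python
import math

def cascade_necessary_bound(P_S, P_R, N, L):
    """R.H.S. of the necessary condition:
    (1/2L) * log2(1 + L * min{P_S/N_1, P_R/(N_2+...+N_L)}).

    N = [N_1, ..., N_L] holds the per-hop noise variances."""
    assert len(N) == L and L >= 2
    snr = min(P_S / N[0], P_R / sum(N[1:]))
    return 0.5 / L * math.log2(1 + L * snr)

# Arbitrary example parameters.
P_S, P_R, N, L = 4.0, 6.0, [1.0, 0.5, 1.5], 3

# The allocation P_R^i = N_{i+1} * P_R / (N_2 + ... + N_L) equalizes the
# per-hop ratios P_R^i / N_{i+1}, which is why it is optimal in step (b).
total = sum(N[1:])
ratios = [(N[i] * P_R / total) / N[i] for i in range(1, L)]
assert max(ratios) - min(ratios) < 1e-12

# For fixed powers, the bound vanishes as the number of hops grows,
# in line with the remark following the sufficiency theorem.
bounds = [cascade_necessary_bound(P_S, P_R, [1.0] * l, l) for l in (2, 4, 8, 16)]
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
```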
We now present a sufficient condition for mean-square stability over the given network.
\begin{theorem}\label{thm:cascadeNetwork_suff}
The scalar linear time invariant system in (\ref{eq:stateEquation}) with $A=\lm$ can be mean square stabilized using a linear scheme over a \emph{cascade network} of $L$ relay nodes if
\begin{align}\label{eq:thmCascadesuff}
\log\left(|\lm|\right)<\max_{\pr^i:\sum^{L-1}_{i=1}\pr^i \leq \pra} \frac{1}{2L} \log\left(1+\frac{L\ps}{L\ps+\n_1}\prod^{L-1}_{i=1}\left(\frac{L\pr^i}{L\pr^i+\n_{i+1}}\right)\right),
\end{align}
where the optimal power allocation is given by $\pr^i=\frac{-\n_{i+1}+\sqrt{\n^2_{i+1}-\frac{4\n_{i+1}}{\gamma}}}{2}$ and $\gamma<0$ is chosen such that $\sum^{L-1}_{i=1}\pr^i \leq \pra$. When all $N_i$ are equal, the optimal choice is $\pr^i=\frac{\pra}{L-1}$.
\end{theorem}
\emph{Outline of proof:}
The result can be derived by using a memoryless linear sensing and control scheme. Under linear policies, the overall mapping from the encoder to the controller becomes a scalar Gaussian channel, which has been well studied in the literature (see for example \cite{BansalBasar89}). Due to space constraints, we refer the reader to the proof of Theorem 5.2, which contains a detailed derivation for the non-orthogonal network; the proof for this setting is similar. The optimal power allocation follows from the concavity of $\prod^{L-1}_{i=1}\left(\frac{L\pr^i}{L\pr^i+\n_{i+1}}\right)$ in $\{\pr^i\}^{L-1}_{i=1}$ and the Lagrange multiplier method.
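To make the role of the power allocation concrete, a minimal Python sketch of the right-hand side of \eqref{eq:thmCascadesuff} follows (illustrative only; the function name and parameter values are ours). It checks numerically that, for equal noise variances, the equal split $\pr^i=\pra/(L-1)$ does at least as well as a skewed split.

```python
import math

def cascade_sufficient_rate(P_S, powers, N, L):
    """R.H.S. of the sufficiency bound (logs base 2).

    powers = [P_R^1, ..., P_R^{L-1}] are relay powers, N = [N_1, ..., N_L]."""
    assert len(powers) == L - 1 and len(N) == L
    g = (L * P_S) / (L * P_S + N[0])          # encoder factor
    for i, p in enumerate(powers):             # one factor per relay hop
        g *= (L * p) / (L * p + N[i + 1])
    return 0.5 / L * math.log2(1 + g)

L, P_S, P_R = 3, 4.0, 6.0
N = [1.0] * L                                  # equal noise variances
equal = cascade_sufficient_rate(P_S, [P_R / (L - 1)] * (L - 1), N, L)
skewed = cascade_sufficient_rate(P_S, [P_R - 1.0, 1.0], N, L)
assert equal >= skewed                         # equal split is optimal here
```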
\begin{remark}
For fixed power allocations, as the number of relays $L$ approaches infinity in \eqref{eq:thmCascadenec}, the right hand side converges to zero and stabilization becomes impossible. We also note that the ratio between the sufficiency and necessity bounds converges to zero as the number of relays goes to infinity.
\end{remark}
In the related problem on the transmission of a Gaussian source with minimum mean-square distortion \cite{zaidiCDC11,LipsaMartinsAutomatica2011}, it is shown that linear sensing policies are not globally optimal in general when there are one or more relay nodes in cascade. However, linear policies are shown to be person-by-person optimal in a single-relay setup. According to \cite{LipsaMartinsAutomatica2011,zaidiCDC11}, simple quantizer-based policies can lead to a lower mean-square distortion than the best linear policy. We expect such non-linear policies to also be useful for stabilization over cascade relay channels.
\section{Parallel Network}\label{sec:Parallel}
Consider the network shown in Fig.~\ref{fig:ParallelNetwork}, where the signal transmitted by a node does not interfere with the signals transmitted by other nodes, i.e., there are $L$ parallel channels from $\{\r_i\}^L_{i=1}$ to $\d$. We call this setup a \emph{parallel network}, which models a scenario where the signal spaces of the relay nodes are mutually orthogonal. For example the signals may be transmitted in either disjoint frequency bands or in disjoint time slots. In the first transmission phase, the sensor transmits $S_{e,t}$ with an average power $\ex\left[S^2_{e,t}\right]=2\ps$ to the relays and in the second phase all relays simultaneously transmit to the remote controller with average powers $2\pr^i$ such that $\sum^L_{i=1}\pr^i\leq\pra$. Accordingly, the received signals are given by
\begin{align}\label{eq:inOutCascadeRelay}
&Y^i_t=S_{e,t}+Z^i_{r,t}, \quad R^i_t=S^{i}_{r,t}=0, \quad t=1,3,5,\dots \nonumber\\
&R^i_t=S^{i}_{r,t}+Z^i_{d,t}, \quad Y^i_t=S_{e,t}=0, \quad t=2,4,6,\dots
\end{align}
where $Z^i_{r,t}\sim\mathcal{N}(0,N^i_r)$, $Z^i_{d,t}\sim\mathcal{N}(0,N^i_d)$ denote mutually independent white Gaussian noise variables.
In the following we present conditions for mean square stability of the system in \eqref{eq:stateEquation} over the given parallel network.
\begin{figure}[!tb]
\centering
\psfrag{p}[][][3]{\begin{sideways}Plant\end{sideways}}
\psfrag{e}[][][3]{$\mathcal{E}$}
\psfrag{d}[][][3]{$\mathcal{D}$}
\psfrag{rn}[][][3]{$\mathcal{R}_L$}
\psfrag{r1}[][][3]{$\mathcal{R}_1$}
\psfrag{r2}[][][3]{$\mathcal{R}_2$}
\psfrag{r3}[][][3]{$\mathcal{R}_3$}
\psfrag{ch}[][][2]{AWGN Relay Channel}
\psfrag{nfch}[][][2]{\begin{sideways}Noiseless Feedback Communication Channel\end{sideways}}
\psfrag{z}[][][2]{$Z_{t}$}
\psfrag{z1}[][][2]{$Z^1_{r,t}$}
\psfrag{z2}[][][2]{$Z^2_{r,t}$}
\psfrag{zn}[][][2]{$Z^L_{r,t}$}
\psfrag{v1}[][][2]{$Z^1_{d,t}$}
\psfrag{v2}[][][2]{$Z^2_{d,t}$}
\psfrag{vl}[][][2]{$Z^L_{d,t}$}
\psfrag{y1}[][][2]{$Y^1_{t}$}
\psfrag{y2}[][][2]{$Y^2_{t}$}
\psfrag{ym}[][][2]{$Y^L_{t}$}
\psfrag{y}[][][2]{$Y_{t}$}
\psfrag{se}[][][2]{$S_{e,t}$}
\psfrag{sr1}[][][2]{$S^1_{r,t}$}
\psfrag{sr2}[][][2]{$S^2_{r,t}$}
\psfrag{sr3}[][][2]{$S^L_{r,t}$}
\psfrag{h1}[][][2]{$h_1$}
\psfrag{h2}[][][2]{$h_2$}
\psfrag{hn}[][][2]{$h_L$}
\psfrag{h}[][][2]{$h$}
\psfrag{st}[][][2.5]{$S_t$}
\psfrag{rr1}[][][2]{$R^1_{t}$}
\psfrag{rr2}[][][2]{$R^2_{t}$}
\psfrag{rrn}[][][2]{$R^L_{t}$}
\psfrag{ut}[][][2.5]{$U_t$}
\resizebox{8 cm}{!}{\epsfbox{OrthogonalRelay}}
\vspace{-.3cm}
\caption{Parallel relay network.}\label{fig:ParallelNetwork}
\end{figure}
\begin{theorem}\label{thm:parallelNec}
If the system (\ref{eq:stateEquation}) is mean square stable over the \emph{parallel network} then
\begin{align}\label{eq:thmParallel_nec}
\log\left(\left|\dt\left(A\right)\right|\right)\leq \frac{1}{4}\min\left\{\log\left(1+2\sum^L_{i=1}\frac{\ps}{\nr^i}\right),\sum^L_{i=1}\log\left(1+\frac{2\pr^i}{\nd^i}\right) \right\},
\end{align}
where $\pr^i=\max\{\gamma - \nd^i/2, 0\}$ and $\gamma$ is chosen such that $\sum^L_{i=1}\pr^i=\pra$.
\end{theorem}
\begin{proof}
Following the same steps as in the proof of Theorem \ref{thm:cascadeNetwork_nec}, we can bound the directed information $I(\bar{X}_{[1,2T]} \rightarrow R_{[1,2T]})$ over the \emph{parallel} relay network as
\begin{align}\label{eq:directedInfoParallel}
&I(\bar{X}_{[1,2T]} \rightarrow \{R^i_{[1,2T]}\}^L_{i=1}) \stackrel{(a)}{\leq}\min\left\{\sum^{2T}_{t=1}I\left(S_{e,t};\{Y^i_t\}^L_{i=1}\right), \sum^{2T}_{t=1}I\left(\{S^i_{r,t}\}^L_{i=1};\{R^i_t\}^L_{i=1}\right)\right\} \nonumber\\
&\stackrel{(b)}{=}\min\left\{\sum^{T}_{t=1}I\left(S_{e,2t\sm 1};\{Y^i_{2t\sm1}\}^L_{i=1}\right), \sum^{T}_{t=1}I\left(\{S^i_{r,2t}\}^L_{i=1};\{R^i_{2t}\}^L_{i=1}\right)\right\} \nonumber \\
&\stackrel{(c)}{\leq}\frac{T}{2}\min\left\{\log\left(1+2\sum^L_{i=1}\frac{\ps}{\nr^i}\right),\max_{\pr^i: \pr^i\geq0,\sum_i\pr^i\leq\pra}\sum^L_{i=1}\log\left(1+\frac{2\pr^i}{\nd^i}\right) \right\},
\end{align}
where $(a)$ follows from the same steps as in \eqref{eq:boundNec1Cascade} and \eqref{eq:boundNec3Cascade}; $(b)$ follows from \eqref{eq:inOutCascadeRelay}; and $(c)$ follows from the fact that Gaussian input distribution maximizes mutual information for a Gaussian channel. The function $\sum^L_{i=1}\log\left(1+\frac{2\pr^i}{\nd^i}\right)$ is jointly concave in $\{\pr^i\}^L_{i=1}$. The optimal power allocation is given by $\pr^i=\max\{\gamma - \nd^i/2, 0\}$, where $\gamma$ is chosen such that $\sum^L_{i=1}\pr^i=\pra$, which is the well-known water-filling solution \cite[pp. 204-205]{TseBook}. We obtain \eqref{eq:thmParallel_nec} by using \eqref{eq:directedInfoParallel} in Theorem \ref{thm:NecGeneral}.
\end{proof}
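The water-filling allocation used in the proof can be computed by a simple bisection on the water level $\gamma$. The following Python sketch is illustrative (names and values are ours) and implements the rule $\pr^i=\max\{\gamma-\nd^i/2,0\}$.

```python
def water_fill(P_total, noises):
    """Water-filling over parallel Gaussian links: P_i = max(gamma - N_i/2, 0),
    with the water level gamma found by bisection so the powers sum to P_total."""
    lo, hi = 0.0, P_total + max(noises)
    for _ in range(200):                       # bisection on the water level
        gamma = 0.5 * (lo + hi)
        used = sum(max(gamma - n / 2, 0.0) for n in noises)
        if used > P_total:
            hi = gamma
        else:
            lo = gamma
    return [max(gamma - n / 2, 0.0) for n in noises]

powers = water_fill(3.0, [1.0, 2.0, 4.0])
assert abs(sum(powers) - 3.0) < 1e-6           # budget met with equality
assert powers[0] > powers[1] > powers[2]       # quieter links get more power
```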
We can obtain a sufficient condition for mean square stability over the \emph{parallel network} using linear policies, as in the previously discussed scenarios; it is stated in the following theorem.
\begin{theorem}\label{thm:parallel_suff}
The scalar linear time invariant system in (\ref{eq:stateEquation}) with $A=\lm$ can be mean square stabilized using a linear scheme over the Gaussian \emph{parallel network} if
\begin{align}\label{eq:thmParallel_suff}
\log\left(\left|\lambda\right|\right)<\frac{1}{4} \log\left(1+\sum^L_{i=1}\frac{4\ps \pr^i}{2\ps \nd + 2\pr^i \nr^i +\nd \nr^i}\right).
\end{align}
\end{theorem}
\begin{proof}
The above result can be obtained by using a memoryless linear sensing and control scheme, as discussed in the proofs of Theorem \ref{thm:cascadeNetwork_suff} and Theorem \ref{thm:HalfDup}.
\end{proof}
\begin{proposition}
The gap between the necessary and sufficient conditions for a symmetric parallel network with $\pr^i=\pr, \nr^i=\nr$ is a non-decreasing function of the number of relays $L$ and approaches $\frac{1}{4}\log\left(1+\frac{\nd\left(2\ps+\nr\right)}{2\pr\nr}\right)$ as $L$ goes to infinity.
\end{proposition}
\begin{proof}
For $\pr^i=\pr, \nr^i=\nr$, the R.H.S. of \eqref{eq:thmParallel_suff} is evaluated as $\Gamma_{\text{suf}}:=\frac{1}{4}\log\left(1+\frac{4L\ps\pr}{2\ps\nd+2\pr\nr+\nd\nr}\right)$ and the R.H.S. of \eqref{eq:thmParallel_nec} can be bounded as $\Gamma_{\text{nec}}:=\frac{1}{4}\log\left(1+\frac{2L\ps}{\nr}\right)$. The gap is given by
\begin{align}
\Gamma_{\text{nec}}-\Gamma_{\text{suf}}=\frac{1}{4}\log\left(1+ \frac{2\ps \nd\left(2\ps+\nr\right)}{4\ps\pr\nr+\frac{\nr\left(2\ps\nd+2\pr\nr+\nd\nr\right)}{L}}\right),
\end{align}
which is an increasing function of $L$, approaching $\frac{1}{4}\log\left(1+\frac{\nd\left(2\ps+\nr\right)}{2\pr\nr}\right)$ as $L\rightarrow\infty$.
\end{proof}
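The behaviour of this gap can also be checked numerically. The following Python sketch (illustrative parameter values, base-2 logarithms) verifies monotonicity in $L$ and the stated limit.

```python
import math

def parallel_gap(L, P_S, P_R, N_r, N_d):
    """Gamma_nec - Gamma_suf for the symmetric parallel network."""
    num = 2 * P_S * N_d * (2 * P_S + N_r)
    den = 4 * P_S * P_R * N_r + N_r * (2 * P_S * N_d + 2 * P_R * N_r + N_d * N_r) / L
    return 0.25 * math.log2(1 + num / den)

P_S, P_R, N_r, N_d = 2.0, 3.0, 1.0, 1.0
gaps = [parallel_gap(L, P_S, P_R, N_r, N_d) for L in range(1, 200)]
assert all(g2 >= g1 for g1, g2 in zip(gaps, gaps[1:]))       # non-decreasing in L
limit = 0.25 * math.log2(1 + N_d * (2 * P_S + N_r) / (2 * P_R * N_r))
assert abs(gaps[-1] - limit) < 1e-2                          # approaches the limit
```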
\begin{remark}
If $N^i_d=0$, then $\Gamma_{\text{nec}}-\Gamma_{\text{suf}}=0$ and the linear scheme is exactly optimal. For $\nr^i=0$, $\Gamma_{\text{suf}}:=\frac{1}{4}\log\left(1+\frac{2L\pr}{\nd}\right)$ and $\Gamma_{\text{nec}}:=\frac{L}{4}\log\left(1+\frac{2\pr}{\nd}\right)$ according to \eqref{eq:thmParallel_nec}. Clearly $\lim_{L \rightarrow \infty}\left(\Gamma_{\text{nec}}-\Gamma_{\text{suf}}\right)=\infty$, showing the inefficiency of the LTI scheme for parallel channels.
\end{remark}
It is known that
linear schemes can be sub-optimal for transmission over parallel channels \cite{VaishampayanThesis,WernerssonSkoglund09}.
A distributed joint source--channel code is optimal in
minimizing mean-square distortion if the following two conditions hold
\cite{ShamaiZamir98}: i) All channels from the source to the
destination send independent information; ii) All channels utilize the
capacity, i.e., the source and channel need to be matched. If we use
linear policies at the relay nodes then the first condition is not
fulfilled because all nodes would be transmitting correlated
information. In \cite{YukselTatikonda09} the authors proposed a
non-linear scheme for a parallel network of two sensors without
relays, in which one sensor transmits only the magnitude of the
observed state and the other sensor transmits only the phase of the
observed state. The magnitude and phase of the state are shown to be
independent and thus the scheme fulfills the first condition of
optimality. This nonlinear sensing scheme is shown to outperform the
best linear scheme for the LQG control problem in the absence of
measurement noise, although the second condition of source-channel
matching is not fulfilled. We can use this non-linear scheme together
with the initialization step of the Schalkwijk-Kailath (SK) type scheme described in Appendix \ref{sec:ProofHalfDuplex} for the non-orthogonal network,
which will ensure source-channel matching by
making the outputs of the two sensors Gaussian distributed after the
initial transmissions. In \cite{MattiasZaidi11} it is shown that linear sensing policies may not even be person-by-person optimal for LQG control over a parallel network without relays.
For the special case of the \emph{parallel network} with noiseless $\e-\r_i$ links, we have the following necessary and sufficient condition for mean-square stability.
\begin{theorem}\label{thm:parallelNoiseless}
The system (\ref{eq:stateEquation}) in absence of process noise ($W_t=0$) can be mean square stabilized over the Gaussian \emph{parallel network} with $Z^i_{r,t}=0$ for all $i$, only if
\begin{align}
\log\left(\left|\dt\left(A\right)\right|\right)\leq\frac{1}{4}\max_{\pr^i: \pr^i\geq0,\sum_i\pr^i\leq\pra}\sum^L_{i=1}\log\left(1+\frac{2\pr^i}{\nd^i}\right).
\end{align}
If the inequality is strict, then there exists a non-linear policy leading to mean-square stability.
\end{theorem}
\begin{proof}
The necessity follows from Theorem \ref{thm:parallelNec}. The sufficiency part for scalar systems follows from \cite[Theorem 6]{KumarLaneman11}, which is derived using a non-linear scheme. This scheme can be extended to vector systems using the time-sharing scheme presented in Sec.~\ref{sec:multiDimension}.
\end{proof}
\begin{remark}\label{rem:parallelRelay}
According to Theorem \ref{thm:parallelNoiseless} the minimum rate required for stabilization of a noiseless plant over a parallel Gaussian channel is equal to the channel capacity. It was shown by Shu and Middleton in \cite{ShuMiddleton11} that for some first order noiseless plants, linear time invariant encoders/decoders cannot achieve this minimum rate over parallel Gaussian channels. However, the minimum rate for stabilization can always be achieved by a non-linear time varying scheme, as discussed in the proof of Theorem \ref{thm:parallelNoiseless}.
\end{remark}
\section{Non-orthogonal Network}\label{sec:NonOrthogonal}
A communication network is said to be \emph{non-orthogonal} if all the communicating nodes transmit signals in overlapping time slots using the same frequency bands. A node which is capable of transmitting and receiving signals simultaneously using the same frequency band is known as \emph{full-duplex}, while a \emph{half-duplex} node cannot simultaneously receive and transmit signals. In practice it is expensive and hard to build a communication device which can transmit and receive signals at the same time using the same frequency, due to the self-interference that the transmitted signal creates at the receiver. Therefore half-duplex systems are mostly used in practice. In this section we study both half-duplex and full-duplex configurations.
\subsection{Non-orthogonal Half-duplex Network}\label{sec:HalfDuplex}
\begin{figure}[!t]
\centering
\subfigure[First transmission phase.]{
\psfrag{p}[][][3]{\begin{sideways}Plant\end{sideways}}
\psfrag{e}[][][3]{$\mathcal{E}$}
\psfrag{d}[][][3]{$\mathcal{D}$}
\psfrag{rn}[][][3]{$\mathcal{R}_L$}
\psfrag{r1}[][][3]{$\mathcal{R}_1$}
\psfrag{r2}[][][3]{$\mathcal{R}_2$}
\psfrag{r3}[][][3]{$\mathcal{R}_3$}
\psfrag{ch}[][][2]{AWGN Relay Channel}
\psfrag{nfch}[][][2]{\begin{sideways}Noiseless Feedback Communication Channel\end{sideways}}
\psfrag{z}[][][2]{$Z_{d,t}$}
\psfrag{z1}[][][2]{$Z^1_{r,t}$}
\psfrag{z2}[][][2]{$Z^2_{r,t}$}
\psfrag{zn}[][][2]{$Z^L_{r,t}$}
\psfrag{y1}[][][2]{$Y^1_{t}$}
\psfrag{y2}[][][2]{$Y^2_{t}$}
\psfrag{ym}[][][2]{$Y^L_{t}$}
\psfrag{y}[][][2]{$Y_{t}$}
\psfrag{se}[][][2]{$S_{e,t}$}
\psfrag{sr1}[][][2]{$S^1_{r,t}$}
\psfrag{sr2}[][][2]{$S^2_{r,t}$}
\psfrag{sr3}[][][2]{$S^L_{r,t}$}
\psfrag{h1}[][][2]{$h_1$}
\psfrag{h2}[][][2]{$h_2$}
\psfrag{hn}[][][2]{$h_L$}
\psfrag{h}[][][2]{$h$}
\psfrag{st}[][][2.5]{$S_t$}
\psfrag{r}[][][2]{$R_t$}
\psfrag{ut}[][][2.5]{$U_t$}
\resizebox{7.7 cm}{!}{\epsfbox{NonOrthogonalRelay_half1}}
\label{fig:phaseOne}}
\vspace{1cm}
\subfigure[Second transmission phase.]{
\centering
\psfrag{p}[][][3]{\begin{sideways}Plant\end{sideways}}
\psfrag{e}[][][3]{$\mathcal{E}$}
\psfrag{d}[][][3]{$\mathcal{D}$}
\psfrag{rn}[][][3]{$\mathcal{R}_L$}
\psfrag{r1}[][][3]{$\mathcal{R}_1$}
\psfrag{r2}[][][3]{$\mathcal{R}_2$}
\psfrag{r3}[][][3]{$\mathcal{R}_3$}
\psfrag{ch}[][][2]{AWGN Relay Channel}
\psfrag{nfch}[][][2]{\begin{sideways}Noiseless Feedback Communication Channel\end{sideways}}
\psfrag{z}[][][2]{$Z_{d,t}$}
\psfrag{z1}[][][2]{$Z^1_{r,t}$}
\psfrag{z2}[][][2]{$Z^2_{r,t}$}
\psfrag{zn}[][][2]{$Z^L_{r,t}$}
\psfrag{y1}[][][2]{$Y^1_{t}$}
\psfrag{y2}[][][2]{$Y^2_{t}$}
\psfrag{ym}[][][2]{$Y^L_{t}$}
\psfrag{y}[][][2]{$Y_{t}$}
\psfrag{se}[][][2]{$S_{e,t}$}
\psfrag{sr1}[][][2]{$S^1_{r,t}$}
\psfrag{sr2}[][][2]{$S^2_{r,t}$}
\psfrag{sr3}[][][2]{$S^L_{r,t}$}
\psfrag{h1}[][][2]{$h_1$}
\psfrag{h2}[][][2]{$h_2$}
\psfrag{hn}[][][2]{$h_L$}
\psfrag{h}[][][2]{$h$}
\psfrag{st}[][][2.5]{$S_t$}
\psfrag{r}[][][2]{$R_t$}
\psfrag{ut}[][][2.5]{$U_t$}
\resizebox{7.8 cm}{!}{\epsfbox{NonOrthogonalRelay_half2}}
\label{fig:phaseTwo}}
\vspace{-.8cm}
\caption{A non-orthogonal half-duplex Gaussian network model.}
\label{fig:HalfDuplexNetwork}
\vspace{5mm}
\end{figure}
A non-orthogonal half-duplex Gaussian network with $L$ relay nodes
$\{\r_i\}^L_{i=1}$ is illustrated in
Fig.~\ref{fig:HalfDuplexNetwork}. The variables $S_{e,t}$ and
$S^i_{r,t}$ denote the transmitted signals from the state encoder $\e$
and relay $\r_i$ at any discrete time step $t$. The variables
$Z^i_{r,t}$ and $Z_{d,t}$ denote the mutually independent white
Gaussian noise components at the relay node $i$ and $\d$ of the remote
control unit, with $Z^i_{r,t}\sim\mathcal{N}(0,N^i_r)$ and
$Z_{d,t}\sim\mathcal{N}(0,N_d)$. The noise components
$\{Z^i_{r,t}\}^L_{i=1}$ are independent across the relays, i.e.,
$\ex[Z^k_{r,t}Z^i_{r,t}]=0$ for all $i\neq k$. The information
transmission from the state encoder consists of two phases as shown in
Fig.~\ref{fig:HalfDuplexNetwork}. In the first phase the encoder
$\mathcal{E}$ transmits a signal with an average power $2\beta \ps$,
where $0<\beta\leq1$ is a parameter that adjusts power between the two
transmission phases. In this transmission phase all the relay nodes
listen but remain silent. In the second transmission phase, the
encoder $\e$ and relay nodes $\{\r_i\}^L_{i=1}$ transmit
simultaneously. In this second transmission phase, the encoder
transmits with an average power $2(1-\b)\ps$ and the $i$-th relay node
transmits with an average power $2\pr^i$ such that
$\sum^L_{i=1}\pr^i\leq P_R$. The input and output of the $i$-th relay
are given by,
\begin{align}\label{eq:halfDupRelay_InOutEq}
&Y^i_t=S_{e,t}+Z^i_{r,t}, \quad S^i_{r,t}=0, \qquad &t=1,3,5,\dots \nonumber \\
&Y^i_t=0, \quad S^i_{r,t}=g^i_t\left(Y^i_{[0,t-1]}\right), \qquad &t=2,4,6,\dots
\end{align}
where $g^i_t: \mathbb{R}^{t+1}\rightarrow\mathbb{R}$ is the $i$-th relay encoding policy such that $\ex\left[\left(g^i_t(Y^i_{[0,t-1]})\right)^2\right]=2\pr^i$ and $\sum^L_{i=1}\pr^i\leq P_R$.
The signal received at the decoder/controller is given by
\begin{align}
R_t&=h S_{e,t}+\sum^L_{i=1} h_iS^i_{r,t}+Z_{d,t}, \nonumber
\end{align}
where $h,h_i\in\mathbb{R}$ denote the channel gains of the $\e-\d$ and $\r_i-\d$ links, respectively.
\begin{theorem}\label{thm:NonOrthHalf_Nec}
If the linear system in \eqref{eq:stateEquation} is mean-square stable over the
\emph{non-orthogonal half-duplex} relay network, then
\begin{align}\label{eq:thmNonOrthHalf_Nec}
&\log\left(|\dt\left(A\right)|\right)\leq \frac{1}{4}\min\Bigg\{\max_{\begin{subarray}{c} 0<\b\leq 1\end{subarray}}\left( \log\left(1+\frac{2h^2(1-\b)\ps}{\nd}\right)+\log\left(1+2\b\ps\left(\sum^L_{i=1}\frac{1}{\nr^i}+\frac{h^2}{N_d}\right)\right)\right), \nonumber \\
&\max_{\begin{subarray}{c} 0<\b\leq1 \\ \pr^i: \sum_i \pr^i \leq\pra \end{subarray}}\left( \log\left(1\!+\!\frac{2h^2\b\ps}{\nd}\right)\!+\! \log\left(1\!+\!\frac{1}{\nd}\left(\!\sum^{L+1}_{i=1}\!\delta^2_iP_i\!+\!2\!\sum^{L+1}_{i=1}\sum^{L+1}_{k=i+1}\!\rho^\star_{i,k}\delta_i\delta_k\sqrt{P_i P_k}\right)\right)\right)\Bigg\},
\end{align}
where $\rho^\star_{i,k}:=\frac{2(1-\b)\ps}{\sqrt{(2(1-\b)\ps+N_i)(2(1-\b)\ps+N_k)}}$, $P_{L+1}:=2(1-\b)\ps$, $N_{L+1}:=0$, $\delta_{L+1}:=h$, $P_i:=2\pr^i$, $\delta_i:=h_i$, $N_i:=\nr^i$ for all $i=\{1,2,\dots,L\}$.
\end{theorem}
\begin{proof}
We first derive an outer bound on the directed information $I(\bar{X}_{[1,2T]} \rightarrow R_{[1,2T]})$ over the given channel and then use Theorem \ref{thm:NecGeneral} to find the necessary condition \eqref{eq:thmNonOrthHalf_Nec}.
\begin{align}\label{eq:nonOrthHalf_Bnd1}
&I(\bar{X}_{[1,2T]} \rightarrow R_{[1,2T]})\stackrel{(a)}{=}I\left(\bar{X}_{[1,2T]} ; R_{[1,2T]}\right)\stackrel{(b)}{\leq}I(\bar{X}_{[1,2T]}; \{Y^i_{[1,2T]}\}^L_{i=1}, R_{[1,2T]}) \nonumber \\
&\stackrel{(c)}{=}I\left(\bar{X}_{[1,2T]}; \tilde{R}_{[1,2T]}, \{Y^i_{[1,2T]}\}^L_{i=1}\right)= \sum^{2T}_{t=1}I(\bar{X}_{[1,2T]} ; \tilde{R}_t, \{Y^i_{t}\}^L_{i=1} | \tilde{R}_{[1,t\sm1]}, \{Y^i_{[1,t\sm1]}\}^L_{i=1}) \nonumber \\
&\stackrel{(d)}{\leq}\sum^{2T}_{t=1}I(S_{e,t} ; \tilde{R}_t, \{Y^i_{t}\}^L_{i=1} | \tilde{R}_{[1,t\sm1]}, \{Y^i_{[1,t\sm1]}\}^L_{i=1})\stackrel{(e)}{\leq} \sum^{2T}_{t=1}I\left(S_{e,t}; \tilde{R}_{t}, \{Y^i_{t}\}^L_{i=1}\right) \nonumber \\
&\stackrel{(f)}{=}\sum^{T}_{t=1}I\left(S_{e,2t}; \tilde{R}_{2t}\right)+\sum^{T}_{t=1}I\left(S_{e,2t\sm 1}; \tilde{R}_{2t\sm 1},\{Y^i_{2t\sm 1}\}^L_{i=1}\right) \nonumber \\
&\stackrel{(g)}{\leq} \frac{T}{2} \log\left(1+\frac{2h^2(1-\b)\ps}{\nd}\right) +\frac{T}{2}\log\left(1+2\b\ps\left(\sum^L_{i=1}\frac{1}{\nr^i}+\frac{h^2}{N_d}\right)\right) \nonumber \\
& \stackrel{}{\leq} \frac{T}{2} \max_{\begin{subarray}{c} 0<\b\leq 1\end{subarray}}\left\{ \log\left(1+\frac{2h^2(1-\b)\ps}{\nd}\right)+\log\left(1+2\b\ps\left(\sum^L_{i=1}\frac{1}{\nr^i}+\frac{h^2}{N_d}\right)\right)\right\}
\end{align}
where $(a)$ follows from \cite[Theorem 1]{Massey90}; $(b)$ follows from the fact that adding side information cannot decrease mutual information; $(c)$ follows by defining $\tilde{R}_{t}:=R_t-\sum^L_{i=1} h_iS^i_{r,t}$ and from the fact that $S^i_{r,t}$ is a function of $Y^i_{[1,t\sm1]}$; $(d)$ follows from the Markov chain $\bar{X}_{[1,2T]}-S_{e,t}-(\tilde{R}_{t},\{Y^i_{t}\}^L_{i=1})$, since $\bar{X}_{[0,T]}$ is the uncontrolled state process, and the fact that the channel between $S_{e,[1,2T]}$ and $(\tilde{R}_{[1,2T]}, \{Y^i_{[1,2T]}\}^L_{i=1})$ is memoryless due to $\tilde{R}_{t}=R_t-\sum^L_{i=1} h_iS^i_{r,t}$; $(e)$ follows from the Markov chain $(\tilde{R}_{[1,t\sm1]}, \{Y^i_{[1,t\sm1]}\}^L_{i=1})-S_{e,t}-(\tilde{R}_{t}, \{Y^i_{t}\}^L_{i=1})$ and the fact that conditioning reduces entropy; $(f)$ follows by separating odd- and even-indexed terms and the fact that $Y^i_{2t}=0$ according to \eqref{eq:halfDupRelay_InOutEq}; $(g)$ follows from $Y^i_{2t\sm1}=S_{e,2t\sm1}+Z^i_{r,2t\sm 1}$, $\tilde{R}_t=hS_{e,t}+Z_{d,t}$, $\ex\left[S^2_{e,2t}\right]=2(1-\b)\ps$, $\ex\left[S^2_{e,2t\sm1}\right]=2\b\ps$, and the fact that the mutual information of a Gaussian channel is maximized by a centered Gaussian input distribution \cite{TseBook}. The directed information rate $I(\bar{X}_{[1,2T]} \rightarrow R_{[1,2T]})$ can also be bounded as,
\begin{align}\label{eq:nonOrthHalf_Bnd2}
&I(\bar{X}_{[1,2T]} \rightarrow R_{[1,2T]})=\sum^{2T}_{t=1}I(\bar{X}_{[1,t]} ; R_t| R_{[1,t\sm1]}) \stackrel{(a)}{\leq}\sum^{2T}_{t=1}I(S_{e,t},\{S^i_{r,t}\}^L_{i=1} ; R_t| R_{[1,t\sm1]})\nonumber \\
&\stackrel{(b)}{\leq} \sum^{2T}_{t=1}I\left(S_{e,t},\{S^i_{r,t}\}^L_{i=1}; R_{t}\right)\stackrel{(c)}{=}\sum^{T}_{t=1}I\left(S_{e,2t\sm 1}; R_{2t\sm 1}\right)+ \sum^{T}_{t=1}I\left(S_{e,2t},\{S^i_{r,2t}\}^L_{i=1}; R_{2t}\right)\nonumber \\
&\stackrel{(d)}{\leq} \frac{T}{2} \log\left(1+\frac{2h^2\b\ps}{\nd}\right)+ \frac{T}{2}\log\left(1\!+\!\frac{1}{\nd}\left(\!\sum^{L+1}_{i=1}\!\delta^2_iP_i\!+\!2\!\sum^{L+1}_{i=1}\sum^{L+1}_{k=i+1}\!\rho^\star_{i,k}\delta_i\delta_k\sqrt{P_i P_k}\right)\right)\nonumber \\
&\stackrel{}{\leq} \frac{T}{2} \max_{\begin{subarray}{c} 0<\b\leq1 \\ \pr^i: \sum_i \pr^i \leq\pra \end{subarray}}\left\{ \log\left(1+\frac{2h^2\b\ps}{\nd}\right)+ \log\left(1\!+\!\frac{1}{\nd}\left(\!\sum^{L+1}_{i=1}\!\delta^2_iP_i\!+\!2\!\sum^{L+1}_{i=1}\sum^{L+1}_{k=i+1}\!\rho^\star_{i,k}\delta_i\delta_k\sqrt{P_i P_k}\right)\right)\right\}
\end{align}
where $\rho^\star_{i,k}=\frac{2(1-\b)\ps}{\sqrt{(2(1-\b)\ps+N_i)(2(1-\b)\ps+N_k)}}$, $P_{L+1}=2(1-\b)\ps$, $N_{L+1}=0$, $\delta_{L+1}=h$, $P_i=2\pr^i$, $\delta_i=h_i$, $N_i=\nr^i$ for all $i=\{1,2,\dots,L\}$. The inequality $(a)$ follows from the Markov chain $\bar{X}_{[0,t]}-\left(S_{e,t},\{S^i_{r,t}\}^L_{i=1}\right)-R_t$ due to the memoryless channel between $S_{e,[1,2T]},\{S^i_{r,[1,2T]}\}^L_{i=1}$ and ${R}_{[1,2T]}$; $(b)$ follows from the Markov chain $R_{[1,t\sm1]}-\left(S_{e,t},\{S^i_{r,t}\}^L_{i=1}\right)-R_t$ and the fact that conditioning reduces entropy; $(c)$ follows by separating the odd- and even-indexed terms and the fact that $S^i_{r,2t\sm 1}=0$ according to \eqref{eq:halfDupRelay_InOutEq}; $(d)$ follows from the fact that the first addend on the R.H.S. of $(c)$ is maximized by a centered Gaussian distributed $S_{e,t}$ and the second addend is bounded using a bound presented in \cite{GastparITA07}, where the author studied the problem of transmitting a Gaussian source over a simple sensor network. In order to apply the upper bound given in (48) of \cite{GastparITA07} to our setup, we consider the state encoder $\e$ to be a sensor node with zero observation noise and make the following change of system variables so that our system model becomes equivalent to the one discussed in \cite{GastparITA07}: $\sigma^2_S:=\a_t$, $\delta_i:=h_i$, $M:=L+1$, $P_i:=2\pr^i$, $\sigma^2_Z:=\nd$, $\sigma^2_{W,i}:=\nr^i$, $\a_i=\sqrt{\frac{2(1-\b)\ps}{\a_t}}$ for all $i$. We finally obtain \eqref{eq:thmNonOrthHalf_Nec} by dividing \eqref{eq:nonOrthHalf_Bnd1} and \eqref{eq:nonOrthHalf_Bnd2} by $2T$ and letting $T\rightarrow \infty$ according to Theorem \ref{thm:NecGeneral}.
\end{proof}
We now present a sufficient condition for mean square stability of a scalar plant over the given network, which can be extended to a multi-dimensional plant using the arguments given in Sec.~\ref{sec:multiDimension}.
\begin{theorem}\label{thm:HalfDup} The scalar linear time invariant system in (\ref{eq:stateEquation}) with $A=\lm$ can be mean square stabilized using a linear scheme over the \emph{non-orthogonal half-duplex} network if
\begin{align}
\log\left(|\lm|\right)\!<\!\frac{1}{4}\max_{\begin{subarray}{c} 0<\b\leq1 \\ \pr^i: \sum_i \pr^i \leq\pra \end{subarray}}\left\{\log\left(1+\frac{2h^2\b\ps}{\nd}\right)+\log\left(1+\frac{\tilde{M}\left(\b,\left\{\pr^i\right\}^L_{i=1}\right)}{\nt\left(\b,\left\{\pr^i\right\}^L_{i=1}\right)}\right)\right\},
\label{eq:thmHalfDup}
\end{align}
where $\tilde{M}\left(\b,\left\{\pr^i\right\}^L_{i=1}\right)=\left(\sqrt{2h^2(1-\b)\ps}+\sqrt{\frac{2\b\ps\nd}{\left(2h^2\b\ps+\nd\right)}}
\left(\sum^L_{i=1}\sqrt{\frac{2h^2_i\pr^i}{2\b\ps+\nr^i}}\right)\right)^2$ and $\nt(\b,\left\{\pr^i\right\}^L_{i=1})=\sum^L_{i=1}\frac{2h^2_i\pr^i\nr^i}{2\b\ps+\nr^i}+\nd$ are real-valued functions.
\end{theorem}
\begin{proof}
The proof is given in Appendix \ref{sec:ProofHalfDuplex}.
\end{proof}
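For concreteness, the right-hand side of \eqref{eq:thmHalfDup} can be evaluated numerically for a fixed $\b$ and relay power split. The following Python sketch (our own names and arbitrary example values, base-2 logarithms) implements $\tilde{M}$ and $\nt$ as defined in the theorem.

```python
import math

def halfdup_rate(beta, Pr, P_S, N_r, N_d, h, h_i):
    """R.H.S. of the half-duplex sufficiency bound for fixed beta and relay powers.

    Pr, N_r, h_i are length-L lists: relay powers, relay noises, R_i-D gains."""
    # M_tilde: coherent combining gain of the direct and relayed paths.
    M = math.sqrt(2 * h**2 * (1 - beta) * P_S)
    M += math.sqrt(2 * beta * P_S * N_d / (2 * h**2 * beta * P_S + N_d)) * sum(
        math.sqrt(2 * g**2 * p / (2 * beta * P_S + n))
        for g, p, n in zip(h_i, Pr, N_r))
    M_tilde = M**2
    # N_tilde: forwarded relay noise plus the noise at the decoder.
    N_tilde = sum(2 * g**2 * p * n / (2 * beta * P_S + n)
                  for g, p, n in zip(h_i, Pr, N_r)) + N_d
    return 0.25 * (math.log2(1 + 2 * h**2 * beta * P_S / N_d)
                   + math.log2(1 + M_tilde / N_tilde))

rate = halfdup_rate(0.5, [1.0, 2.0], P_S=4.0, N_r=[1.0, 1.0],
                    N_d=1.0, h=1.0, h_i=[1.0, 1.0])
assert rate > 0.0
```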
\begin{remark}
An optimal choice of the power allocation parameter $\b$ at the state encoder and an optimal power allocation at the relay nodes $\{\pr^i\}^L_{i=1}$ which maximize the term on the right hand side of (\ref{eq:thmHalfDup}) depend on the quality of the $\e-\d$, $\e-\r_i$, and $\r_i-\d$ links. This is a non-convex optimization problem; however, it can be transformed into an equivalent convex problem by using the approach in \cite[Appendix A]{XiaoGoldsmith08}. This equivalent convex problem can be efficiently solved for the optimal $\{\pr^i\}^L_{i=1}$ using the interior point method. For $\b=1$, we can analytically obtain the following optimal power allocation using the Lagrangian method:
\begin{align} \label{eq:optimalPowerNonOrth}
\pr^i=\pra\left(\frac{h^2_i\left(2\ps+\nr^i\right)}{\left(2\ps\nd+\nr^i\nd+\pra h^2_i \nr^i\right)^2}\right)\left[\sum^L_{l=1}\frac{h^2_l\left(2\ps+\nr^l\right)}{\left(2\ps\nd+\nr^l\nd+\pra h^2_l \nr^l\right)^2}\right]^{-1}.
\end{align}
\end{remark}
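The closed-form allocation in \eqref{eq:optimalPowerNonOrth} is straightforward to evaluate. The following Python sketch (our own names, arbitrary example values) checks that the resulting powers are positive and exhaust the budget $\pra$.

```python
def nonorth_power_alloc(P_R, P_S, N_r, h_i, N_d):
    """Closed-form relay power allocation for beta = 1.

    N_r and h_i are length-L lists of relay noise variances and R_i-D gains."""
    # Per-relay weight from the closed-form expression; the allocation is
    # P_R times each weight normalized by the sum of the weights.
    w = [g**2 * (2 * P_S + n) / (2 * P_S * N_d + n * N_d + P_R * g**2 * n)**2
         for g, n in zip(h_i, N_r)]
    s = sum(w)
    return [P_R * wi / s for wi in w]

alloc = nonorth_power_alloc(5.0, 2.0, [1.0, 0.5, 2.0], [1.0, 1.5, 0.8], 1.0)
assert abs(sum(alloc) - 5.0) < 1e-9    # powers sum to the budget P_R
assert all(p > 0 for p in alloc)       # every relay gets positive power
```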
\begin{remark}\label{rem:infoRate}
For channels with feedback, directed information is a useful
quantity \cite{Massey90,TatikondaMitter09}. It is shown in
Appendix~\ref{appendixAchievability} that the term on the right hand
side of (\ref{eq:thmHalfDup}) is the information rate over the
half-duplex network with noiseless feedback, obtained when running
the described closed-loop protocol. Further, we show that the directed
information rate is also equal to the term on the right hand side of
(\ref{eq:thmHalfDup}).
\end{remark}
\subsection{Two-Hop Network}
Consider the half-duplex relay network illustrated in Fig.~\ref{fig:HalfDuplexNetwork} with $h=0$. The state information is communicated to the remote controller only via the relay nodes. We call this setup a \emph{two-hop} relay network, where the communication from the state encoder to the controller takes place in two hops. In the first hop the relay nodes receive the state information from the state encoder, which then communicate the state information to the controller in the second hop. The controller takes action in alternate time steps upon receiving the state information. We can obtain a sufficient condition for stability over this network by substituting $h=0, \b=1$ in Theorem \ref{thm:HalfDup}. Similarly, a necessary condition can be obtained from \eqref{eq:thmNonOrthHalf_Nec}, where $\b=1$ is the maximizer of the first term and $\b=0$ is the maximizer of the second term. In the following, we evaluate the gap between the sufficient and necessary conditions for a symmetric two-hop network.
\begin{proposition}
For a symmetric two-hop network with $\pr^i=\pr, \nr^i=\nr, h_i=c, h=0,\b=1$, the gap between necessary and sufficient conditions approaches zero as the number of relays $L$ goes to infinity. The gap also monotonically approaches zero as $\pr$ goes to infinity.
\end{proposition}
\begin{proof}
For $\pr^i=\pr, \nr^i=\nr, h_i=c, h=0,\b=1$ for all $i$, the R.H.S. of \eqref{eq:thmHalfDup} evaluates to $\Gamma_{\text{suf}}:=\frac{1}{4}\log\left(1+\frac{4L^2c^2\ps\pr}{2Lc^2\pr\nr+\nd\left(2\ps+\nr\right)}\right)$ and the R.H.S. of \eqref{eq:thmNonOrthHalf_Nec} can be bounded from above by $\Gamma_{\text{nec}}:=\frac{1}{4}\log\left(1+\frac{2L\ps}{\nr}\right)$. The gap between $\Gamma_{\text{nec}}$ and $\Gamma_{\text{suf}}$ is given by
\begin{align}
\Gamma_{\text{nec}}-\Gamma_{\text{suf}}=\frac{1}{4}\log\left(1+ \frac{\frac{4\ps^2\nd+2\ps\nr\nd}{L}}{4c^2\ps\pr\nr+\frac{2c^2\pr\nr^2}{L}+\frac{\nd\nr\left(2\ps+\nr\right)}{L^2}}\right),
\end{align}
which approaches zero as $L$ goes to infinity. Moreover, since the numerator is independent of $\pr$ while the denominator grows linearly in $\pr$, the gap also monotonically approaches zero as $\pr$ tends to infinity.
\end{proof}
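The algebra behind the closed-form gap can be checked numerically. The following sketch (hypothetical function names; logarithms base 2, though the identity holds in any base) evaluates $\Gamma_{\text{nec}}-\Gamma_{\text{suf}}$ both from the two rate expressions and from the closed form in the proof, and confirms that the gap shrinks as $L$ grows.

```python
import math

def gap_direct(L, c, Ps, Pr, Nr, Nd):
    """Gamma_nec - Gamma_suf computed directly from the two rate expressions."""
    g_nec = 0.25 * math.log2(1 + 2*L*Ps/Nr)
    g_suf = 0.25 * math.log2(
        1 + 4*L**2 * c**2 * Ps * Pr / (2*L*c**2*Pr*Nr + Nd*(2*Ps + Nr)))
    return g_nec - g_suf

def gap_closed_form(L, c, Ps, Pr, Nr, Nd):
    """The closed-form expression for the gap given in the proof."""
    num = (4*Ps**2*Nd + 2*Ps*Nr*Nd) / L
    den = 4*c**2*Ps*Pr*Nr + 2*c**2*Pr*Nr**2/L + Nd*Nr*(2*Ps + Nr)/L**2
    return 0.25 * math.log2(1 + num/den)
```

The two evaluations agree to machine precision, and the gap is already below $10^{-5}$ bits for very large $L$.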
In Fig.~\ref{fig:nonOrthCompLPr} we plot $\Gamma_{\text{nec}}$ and $\Gamma_{\text{suf}}$ as functions of $L$ and $\pr$. These plots show that linear schemes are quite efficient in some regimes.
\begin{remark}
Linear policies can even be exactly optimal in the following special cases: i) If we fix all relaying policies to be linear, then the channel becomes equivalent to a point-to-point scalar Gaussian channel, for which linear sensing is known to be optimal for LQG control \cite{BansalBasar89}. ii) If we fix the state encoder to be linear and assume noiseless causal feedback links from the controller to the relay nodes, then linear policies are optimal for mean-square stabilization over a symmetric \emph{two-hop} relay network, by the following arguments. Since the control actions are available at the relay nodes via the noiseless feedback links, there is no dual effect of control, i.e., the separation of estimation and control holds. Further, by restricting the state encoder to be linear, the relay network becomes equivalent to the Gaussian network studied in \cite{GastparIT08,GastparITA07}, where it is shown that linear policies are optimal if the network is symmetric.
\end{remark}
\begin{figure}
\centering
\subfigure[$\ps=2\pr^i=10,\nr^i=\nd=1,h_i=1$]{\includegraphics[width=7cm]{comparsionNonOrth_L}}
\subfigure[$L=10,\ps=10,\nr^i=\nd=1,h_i=1$]{\includegraphics[width=7cm]{comparsionNonOrth_Ps}}
\caption{Comparison of necessary and sufficient conditions for a symmetric two-hop relay network.}
\label{fig:nonOrthCompLPr}
\end{figure}
\subsection{Non-orthogonal Full-duplex Network}\label{sec:FullDuplex}
We now consider a non-orthogonal network of $L$ \emph{full-duplex} relay nodes, where all the nodes receive and transmit their signals in every time step, i.e., at any time instant $t\in\mathbb{N}$,
\begin{align}
&S_{e,t}=f_t\left(X_{[0,t]},U_{[0,t-1]}\right), \qquad S^i_{r,t}=g^i_t\left(Y^i_{[0,t-1]}\right), \quad \forall t\in\mathbb{N},\nonumber \\
&Y^i_t=S_{e,t}+Z^i_{r,t}, \qquad R_t=hS_{e,t}+\sum^L_{i=1}S^i_{r,t}+Z_{d,t}, \quad \forall t\in\mathbb{N},
\end{align}
where $\ex\left[(S_{e,t})^2\right]=\ps$, $\ex\left[(S^i_{r,t})^2\right]=\pr^i$, and $\sum^L_{i=1}\pr^i\leq \pra$.
\begin{theorem}\label{thm:NonOrthFull_Nec}
If the linear system in \eqref{eq:stateEquation} is mean-square stable over the
\emph{non-orthogonal full-duplex} relay network, then
\begin{align}\label{eq:thmNonOrthFull_Nec}
\log\left(|\dt\left(A\right)|\right)\leq& \frac{1}{2}\min\Bigg\{\log\left(1+\ps\left(\sum^L_{i=1}\frac{1}{\nr^i}+\frac{h^2}{\nd}\right)\right), \nonumber \\
&\max_{\begin{subarray}{c} \pr^i: \sum_i \pr^i \leq\pra \end{subarray}}\left( \log\left(1\!+\!\frac{1}{\nd}\left(\!\sum^{L+1}_{i=1}\!\delta^2_iP_i\!+\!2\!\sum^{L+1}_{i=1}\sum^{L+1}_{k=i+1}\!\rho^\star_{i,k}\delta_i\delta_k\sqrt{P_i P_k}\right)\right)\right)\Bigg\},
\end{align}
where $\rho^\star_{i,k}=\frac{\ps}{\sqrt{(\ps+N_i)(\ps+N_k)}}$, $P_{L+1}:=\ps$, $N_{L+1}:=0$, $\delta_{L+1}:=h$, $P_i:=\pr^i$, $\delta_i:=h_i$, $N_i:=\nr^i$ for all $i\in\{1,2,\dots,L\}$.
\end{theorem}
\begin{proof}
The proof follows exactly the steps of the proof of Theorem
\ref{thm:NonOrthHalf_Nec}, with the exception that odd- and
even-indexed terms are not treated separately, because
$\ex\left[S^2_{e,t}\right]=\ps$ and
$\ex\left[(S^i_{r,t})^2\right]=\pr^i$ for all $t$.
\end{proof}
\begin{theorem}\label{thm:FullDup}
The scalar linear time-invariant system in (\ref{eq:stateEquation}) with $A=\lm$ and $W_t=0$ can be mean-square stabilized using a linear scheme over the \emph{non-orthogonal full-duplex} Gaussian network if
\begin{align}\label{eq:thmFullDup}
\log\left(|\lambda|\right)\!<\!\frac{1}{2}\max_{\pr^i:\sum^L_{i=1}\pr^i \leq \pra} \left\{\log \left(1\!+\!{\left(\sqrt{h^2\ps}\!+\!\eta^\star \sum^L_{i=1}\sqrt{\frac{h^2_i\ps\pr^i}{\ps\!+\!\nr^i}}\right)^2}\left({\nd\!+\!\sum^L_{i=1}\frac{h^2_i\pr^i \nr^i}{\ps\!+\!\nr^i}}\right)^{\!-\!1}\right)\right\},
\end{align}
where $\eta^\star$ is the unique root in the interval $[0,1]$ of the following fourth-order polynomial:
\begin{align}\label{eq:chan3Poly}
\left(\sum^L_{i=1}\sqrt{\frac{h^2_i\ps\pr^i}{(\ps+\nr^i)}}\right)\eta^4&+\left(2h\ps\sum^L_{i=1}\sqrt{\frac{h^2_i\pr^i}{(\ps+\nr^i)}}\right)\eta^3 \nonumber \\
&+\left(h^2\ps+\nd+\sum^L_{i=1}\frac{h^2_i\pr^i \nr^i}{\ps+\nr^i}\right)\eta^2=\left(\nd+\sum^L_{i=1}\frac{h^2_i\pr^i \nr^i}{\ps+\nr^i}\right).
\end{align}
\end{theorem}
\begin{proof}
The proof can be found in \cite{zaidiICCA10} for a single-relay setup, and it can be easily extended to multiple relays.
\end{proof}
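The root $\eta^\star$ of \eqref{eq:chan3Poly} is easy to locate numerically: for $\eta\geq0$ the left-hand side minus the right-hand side is strictly increasing, negative at $\eta=0$, and positive at $\eta=1$, so bisection converges to the unique root in $[0,1]$. A minimal sketch (hypothetical helper name; per-relay quantities passed as lists):

```python
import math

def eta_star(h, Ps, Nd, gains, Pr, Nr):
    """Bisection for the unique root in [0, 1] of the quartic in
    eq. (chan3Poly).  `gains`, `Pr`, `Nr` are per-relay lists of the
    relay gains h_i, powers Pr^i, and noise variances Nr^i."""
    s1 = sum(math.sqrt(g**2*Ps*p/(Ps + n)) for g, p, n in zip(gains, Pr, Nr))
    s2 = sum(math.sqrt(g**2*p/(Ps + n)) for g, p, n in zip(gains, Pr, Nr))
    s3 = sum(g**2*p*n/(Ps + n) for g, p, n in zip(gains, Pr, Nr))
    a4, a3, a2, rhs = s1, 2*h*Ps*s2, h**2*Ps + Nd + s3, Nd + s3
    f = lambda e: a4*e**4 + a3*e**3 + a2*e**2 - rhs
    lo, hi = 0.0, 1.0          # f(0) = -rhs < 0 and f(1) = s1 + 2*h*Ps*s2 + h^2*Ps > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```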
Although we expect that Theorem \ref{thm:FullDup} also holds in the presence of process noise, as in the other setups, we are not able to show convergence of the second moment of the state process. However, numerical experiments suggest that the result should hold.
\begin{remark} The term on the right hand side of the inequality in (\ref{eq:thmFullDup}) is an achievable rate at which information can be transmitted reliably over the \emph{non-orthogonal full-duplex} relay network. This result is derived for a network with a single relay node in \cite[Theorem 5]{BrossWigger09}; however, it can be easily extended to multiple relays.
\end{remark}
\section{Noisy Multi-dimensional Systems}\label{sec:multiDimension}
In this section we investigate stabilization of multi-dimensional systems over multi-dimensional channels. First we state a result for a scalar Gaussian channel.
\begin{theorem}\label{thm:multiDim}
The $n$-dimensional noisy linear system \eqref{eq:stateEquation} can be mean-square stabilized over a scalar Gaussian channel having information capacity $\mathcal{C}$ if $\log\left(\left|\dt\left(A\right)\right|\right)<\mathcal{C}$. Furthermore, a linear time-varying policy is sufficient through sequential linear encoding
of scalar components.
\end{theorem}
\emph{Proof Outline:} Due to space limitations, we prove Theorem \ref{thm:multiDim} with the help of a simple example.
Consider a two-dimensional plant with system matrix $A=\left[
\begin{array}{cc}
\lm_1 & 1 \\
0 & \lm_2 \\
\end{array}
\right]$
and an invertible input matrix $B$, which has to be stabilized over a Gaussian channel disturbed by a zero-mean Gaussian noise with variance $N$.
We assume that the sensor transmits with an average power $P$. For this channel, we define the information
capacity as $\mathcal{C}:=\frac{1}{2}\log\left(1+\frac{P}{N}\right)$.
We denote the state and the control variables as
$X_t:=[x_{1,t},x_{2,t}]^T$ and $U_t:=[u_{1,t},u_{2,t}]^T$
respectively. Consider the following scheme for stabilization. The
sensor observes the state vector $X_t$ in alternate time steps (that is,
at $t, t+2, t+4, \dots$), and its elements are sequentially
transmitted. The sensor linearly transmits $x_{2,t}$ at time $t$ and
$x_{1,t}$ at time $t+1$ with an average transmit power constraint. The
control actions for the two modes are also taken in alternate time
steps, that is, $u_{1,t}=0$ and $u_{2,t+1}=0$. Accordingly the state
equations for the two modes at time $t+1$ are given by
\begin{align}
x_{2,t+1}&=\lm_2 x_{2,t}+ u_{2,t}+w_{2,t}\stackrel{(a)}{=}\lm_2 \left(x_{2,t}-\hat{x}_{2,t}\right)+w_{2,t}, \label{eq:vectorMode2t1}\\
x_{1,t+1}&\stackrel{(b)}{=}\lm_1 x_{1,t}+ x_{2,t} +w_{1,t}, \label{eq:vectorMode1t1}
\end{align}
where $(a)$ and $(b)$ follow from $u_{2,t}=-\lm_2 \hat{x}_{2,t}$ and $u_{1,t}=0$. The state equations at time $t+2$ are
\begin{align}
x_{2,t+2}&=\lm_2 x_{2,t+1}+w_{2,t+1}=\lm^2_2 \left(x_{2,t}-\hat{x}_{2,t}\right)+\lm_2 w_{2,t}+ w_{2,t+1}, \label{eq:vectorMode2t2}\\
x_{1,t+2}&=\lm_1 x_{1,t+1}\!+\! x_{2,t+1} \!+\! u_{1,t+1} \!+\! w_{1,t+1}\stackrel{(a)}{=}\lm^2_1 x_{1,t}\!+\! (\lm_1\!+\!\lm_2) x_{2,t} \!+\! u_{1,t+1} \!+\! \lm_1 w_{1,t} \!+\! w_{2,t} \!+\! w_{1,t+1} \nonumber \\
&\stackrel{(b)}{=}\lm^2_1\left(x_{1,t}- \hat{x}_{1,t}\right)+ (\lm_1+\lm_2)\left(x_{2,t}-\hat{x}_{2,t}\right) + \lm_1 w_{1,t} +w_{2,t}+ w_{1,t+1}, \label{eq:vectorMode1t2}
\end{align}
where $(a)$ follows \eqref{eq:vectorMode1t1}; and $(b)$ follows from $u_{1,t+1}=\!-\!\lm^2_1 \hat{x}_{1,t}\! -\! (\lm_1 \!+\!\lm_2) \hat{x}_{2,t}$. We first study the stabilization of the lower mode. According to \eqref{eq:vectorMode2t2} the second moment of $x_{2,t}$ is given by
\begin{align}
\ex\left[x^2_{2,t+2}\right]=\lambda^4_2 \ex\left[\left(x_{2,t}- \hat{x}_{2,t}\right)^2\right]+ \tilde{n}_2= \lambda^4_2 2^{-2\mathcal{C}} \ex\left[x^2_{2,t}\right]+\tilde{n}_2,
\end{align}
where the last equality follows from the linear mean-square estimation of a Gaussian variable over a scalar Gaussian channel of capacity $\mathcal{C}$, with $\tilde{n}_2:=(\lm^2_2+1) n_{w,2}$. We observe that the lower mode is stable if and only if $\lambda^4_2 2^{-2\mathcal{C}}<1$, i.e., $\log(|\lm_2|) < \frac{\mathcal{C}}{2}$.
Assuming that $x_{2,t}$ is stable, the second moment of $x_{1,t}$ is given by
\begin{align}
&\ex\left[x^2_{1,t+2}\right]\stackrel{(a)}{=}\lambda^4_1 \ex\left[\left(x_{1,t}- \hat{x}_{1,t}\right)^2\right]+ 2\lambda^2_1 (\lm_1+\lm_2) \ex\left[\left(x_{1,t}- \hat{x}_{1,t}\right)\left(x_{2,t}-\hat{x}_{2,t}\right)\right] \nonumber \\
&\quad +(\lm_1+\lm_2)^2 \ex\left[\left(x_{2,t}-\hat{x}_{2,t}\right)^2\right] + \tilde{n}_1 \nonumber \\
&\stackrel{(b)}{=}\lambda^4_1 2^{-2\mathcal{C}} \ex\left[x^2_{1,t}\right]+ 2\lambda^2_1 (\lm_1+\lm_2) \ex\left[\left(x_{1,t}- \hat{x}_{1,t}\right)\left(x_{2,t}-\hat{x}_{2,t}\right)\right]+(\lm_1+\lm_2)^2 2^{-2\mathcal{C}} \ex\left[x^2_{2,t}\right] + \tilde{n}_1 \nonumber \\
&\stackrel{(c)}{\leq}\lambda^4_1 2^{-2\mathcal{C}} \ex\left[x^2_{1,t}\right]\!+\! 2\lambda^2_1 (\lm_1\!+\!\lm_2) \sqrt{\ex\left[\left(x_{1,t}\!-\! \hat{x}_{1,t}\right)^2\right] \ex\left[\left(x_{2,t}\!-\!\hat{x}_{2,t}\right)^2\right]}\!+\!(\lm_1\!+\!\lm_2)^2 2^{-2\mathcal{C}} \ex\left[x^2_{2,t}\right] \!+\! \tilde{n}_1 \nonumber \\
&=\lambda^4_1 2^{-2\mathcal{C}} \ex\left[x^2_{1,t}\right]+ 2\lambda^2_1 (\lm_1+\lm_2) \sqrt{2^{-2\mathcal{C}} \ex\left[x^2_{1,t}\right]}\sqrt{2^{-2C}\ex\left[x^2_{2,t}\right]}+(\lm_1+\lm_2)^2 2^{-2\mathcal{C}} \ex\left[x^2_{2,t}\right] + \tilde{n}_1 \nonumber \\
&\stackrel{(d)}{\leq} k_1 \ex\left[x^2_{1,t}\right]+ k_2\sqrt{\ex\left[x^2_{1,t}\right]}+k_3,
\end{align}
where $(a)$ follows from \eqref{eq:vectorMode1t2} and
$\tilde{n}_1:=(\lm^2_1+1) n_{w,1}+n_{w,2}$; $(b)$ follows from the
linear mean-square estimation of a Gaussian variable over a scalar
Gaussian channel of capacity $\mathcal{C}$; $(c)$ follows from the
Cauchy--Schwarz inequality; $(d)$ follows from the fact that
$\ex\left[x^2_{2,t}\right]<M$ (assuming that $\lambda^4_2
2^{-2\mathcal{C}}<1$) and by defining $k_1:=\lambda^4_1 2^{-2\mathcal{C}}$,
$k_2:=2\lambda^2_1 (\lm_1+\lm_2) 2^{-2\mathcal{C}} \sqrt{M}$, and
$k_3:=(\lm_1+\lm_2)^2 2^{-2\mathcal{C}} M +\tilde{n}_1$. We now want to find a
condition which ensures convergence of the following sequence:
\begin{align}
\a_{t+1}=k_1 \a_{t}+ k_2\sqrt{\a_{t}}+k_3. \label{eq:vectorSeq1}
\end{align}
In order to show convergence, we make use of the following lemma.
\begin{lemma}\label{lm:vectorConverge}
Let $T:\mathbb{R}\mapsto\mathbb{R}$ be a non-decreasing continuous mapping with a unique fixed point $x^\star\in\mathbb{R}$. If there exists $u\leq x^\star \leq v$ such that $T(u)\geq u$ and $T(v) \leq v$, then the sequence generated by $x_{t+1}=T(x_t)$, $t\in\mathbb{N}$ converges starting from any initial value $x_0\in \mathbb{R}$.
\end{lemma}
\begin{proof}
The proof is given in Appendix \ref{apx:MultiDim}.
\end{proof}
We observe that the mapping $T(\a)=k_1 \a+ k_2\sqrt{\a}+k_3$ with $\a\geq0$ is monotonically increasing since $k_1,k_2>0$. It will have a unique fixed point $\a^\star$ if and only if $k_1<1$, since $k_2,k_3>0$. Assuming that $k_1<1$, there exists $u<\a^\star<v$ such that $T(u)\geq u$ and $T(v) \leq v$. Therefore by Lemma \ref{lm:vectorConverge} the sequence $\{\a_t\}$ is convergent if $k_1=\lambda^4_1 2^{-2\mathcal{C}}<1 \Rightarrow \log(|\lm_1|) < \frac{\mathcal{C}}{2}$.
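The convergence argument can be illustrated numerically: for any $k_1<1$ and $k_2,k_3>0$, iterating $T$ from different starting points drives the sequence to the same fixed point $\a^\star$, as Lemma \ref{lm:vectorConverge} guarantees. A small sketch with arbitrary illustrative constants:

```python
import math

def iterate_T(a0, k1, k2, k3, steps=500):
    """Iterate alpha_{t+1} = k1*alpha_t + k2*sqrt(alpha_t) + k3 from a0 >= 0."""
    a = a0
    for _ in range(steps):
        a = k1*a + k2*math.sqrt(a) + k3
    return a

k1, k2, k3 = 0.5, 1.0, 1.0      # k1 < 1, so T has a unique fixed point
a_lo = iterate_T(0.0, k1, k2, k3)    # start below the fixed point
a_hi = iterate_T(100.0, k1, k2, k3)  # start above the fixed point
```

For these constants the fixed point solves $0.5\a-\sqrt{\a}-1=0$, i.e., $\sqrt{\a^\star}=1+\sqrt{3}$, and both trajectories converge to it.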
The time sharing scheme illustrated above can be generalized to any
$n$-dimensional plant and the stability conditions can be easily
obtained using Lemma \ref{lm:vectorConverge}. We know that any system
matrix $A$ can be written in Jordan form via a similarity
transformation. We can then use the following scheme for
stabilization. The encoder chooses to send only one component of the
observed state vector at each time $t$ over a Gaussian channel of
capacity $\mathcal{C}$. Assume that for a fraction
$\frac{\log(|\lm_m|)}{\sum^K_{i=1}\log(|\lm_i|)}$ of the total
available time the encoder transmits the $m$-th component of the state
vector. Thus the rate available for the transmission of the $m$-th
state component is
$\frac{\log(|\lm_m|)}{\sum^K_{i=1}\log(|\lm_i|)}\mathcal{C}$. The
system will be stable if and only if
$\log(|\lm_m|)<\frac{\log(|\lm_m|)}{\sum^K_{i=1}\log(|\lm_i|)}\mathcal{C}$
for all $m\in \{1,2,\dots,K\}$, which implies
$\sum^K_{i=1}\log(|\lm_i|)=\log\left(\left|\dt\left(A\right)\right|\right)<
\mathcal{C}$. For a multi-dimensional system with a controllable $(A,B)$ pair, any input (control action) can be realized in $n$ time steps. If the encoder has access to the channel output, then it can refine its estimate of the state using the noiseless feedback channel (SK coding scheme) during these $n$ time steps and observe the new state periodically after every $n$ time steps. \hfill $\square$
\begin{remark}The sufficiency results
presented in Sections \ref{sec:NonOrthogonal}--\ref{sec:Parallel} for
scalar systems can be extended to multi-dimensional systems using the proposed time-varying scheme. The
sufficient conditions for vector systems are identical to those for scalar
systems, except that $\log(|\lm|)$ is replaced with
$\log(\left|\dt\left(A\right)\right|)$ everywhere.
\end{remark}
\begin{remark}
In \cite{BraslavskyFreudenberg07} the authors studied stabilization
of a noiseless multi-dimensional system over a point-to-point scalar
Gaussian channel using a linear time-invariant (LTI) scheme; that is, the
state encoder transmits $S_t=E X_t$, where $E$ is a row vector. This
LTI scheme cannot stabilize the system if the pair $(A,E)$ is not
observable. For example, consider a diagonal system matrix $A$ with
two equal eigenvalues. This system cannot be stabilized by any
choice of the encoding matrix $E$, irrespective of how much power
the state encoder is allowed to spend. However, our linear time-varying scheme can always stabilize
the system, even in the presence of process noise.
\end{remark}
\begin{remark}
As mentioned in Theorem \ref{thm:parallelNoiseless} and Remark \ref{rem:parallelRelay}, the proposed time varying scheme can be used with the non-linear scheme of \cite{KumarLaneman11} to achieve the minimum power required for stabilization of noiseless multi-dimensional plants over vector Gaussian channels.
\end{remark}
\vspace{-.3cm}
\section{Conclusions}
The problem of mean-square stabilization of LTI plants over basic Gaussian relay networks has been analyzed. Some necessary and sufficient conditions for stabilization are presented, which reveal relationships between stabilizability and communication parameters. These results can serve as a useful guideline for a system designer. The necessary conditions have been derived using information-theoretic cut-set bounds, which are not tight in general due to the real-time nature of the information transmission. The sufficient conditions for stabilization of scalar plants are obtained by employing time-invariant communication and control schemes. We have shown that time-invariant schemes are not sufficient in general for stabilization of multi-dimensional plants; however, a simple time-varying scheme is shown to always stabilize multi-dimensional plants. In this time-varying scheme, one component of the state vector is transmitted at a time, and the state component corresponding to a more unstable mode is transmitted more often. The sufficient conditions for stabilization of multi-dimensional plants are obtained using this time-varying scheme. We have also established the minimum signal-to-noise ratio required for stabilization of a noiseless multi-dimensional plant over a parallel Gaussian channel. In some network settings, it is observed that the sufficient conditions do not depend on the plant noise, and they may be characterized by the directed information rate from the sequence of channel inputs to the sequence of channel outputs. We have also discussed the optimality of linear policies over the given network topologies; in some very special cases, linear schemes are shown to be optimal.
\vspace{-.3cm}
% arXiv:1401.5875
\section{Introduction}
In their classical paper~\cite{DH}, Davenport and Heilbronn proved the
following theorem.
\begin{theorem}\label{theoremdh}
When quadratic fields are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The average number of $3$-torsion elements in the class
groups of imaginary quadratic fields~is~$2$.
\item[{\rm (b)}]
The average number of $3$-torsion elements in the class
groups of real quadratic fields is~$\textstyle{\frac{4}{3}}$.
\end{itemize}
\end{theorem}
This theorem yields the only two proven cases of the
Cohen-Lenstra heuristics for class groups of quadratic fields.
In their paper~\cite[p.\ 59]{CL}, Cohen and Lenstra raise the
question as to what happens when one looks at class groups over {\it all
orders}, rather than just the maximal orders corresponding to fields.
The heuristics formulated by Cohen and Lenstra for class groups of quadratic
fields are based primarily on the assumption that,
in the absence of any known structure for these abelian groups beyond
genus theory, we may as well
assume that they are ``random'' groups in the appropriate sense.
For orders, however, as pointed out by Cohen and Lenstra
themselves~\cite{CL}, when an imaginary quadratic
order is not maximal there is an
additional arithmetic constraint on the class group coming from the
class number formula. Indeed, if $h(d)$ denotes the class number of the imaginary quadratic order of discriminant $d$, and if $D$ is a (negative) fundamental discriminant,
then the class number formula gives
\begin{equation}\label{cnf}
h(Df^2) =
\Bigl[f \cdot\prod_{{p|f}}\left(1-\frac{(D|p)}{p}\right)\Bigr]
h(D),
\end{equation}
where $(\cdot | \cdot)$ denotes the Kronecker symbol.
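The bracketed conductor factor in \eqref{cnf} is easy to compute directly. The following sketch (hypothetical helper names, exact arithmetic via the standard-library \texttt{fractions} module) evaluates $f\prod_{p\mid f}\left(1-(D|p)/p\right)$ for a fundamental discriminant $D<-4$, where the unit groups of the two orders coincide.

```python
from fractions import Fraction

def kronecker_prime(D, p):
    """Kronecker symbol (D|p) for a prime p."""
    if p == 2:
        if D % 2 == 0:
            return 0
        return 1 if D % 8 in (1, 7) else -1
    r = pow(D % p, (p - 1) // 2, p)   # Euler's criterion for odd p
    return 0 if r == 0 else (1 if r == 1 else -1)

def conductor_factor(D, f):
    """The bracketed factor f * prod_{p | f} (1 - (D|p)/p) from the class
    number formula (D a fundamental discriminant, D < -4)."""
    factor, m, p = Fraction(f), f, 2
    while p * p <= m:                 # trial division over the primes p | f
        if m % p == 0:
            factor *= 1 - Fraction(kronecker_prime(D, p), p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        factor *= 1 - Fraction(kronecker_prime(D, m), m)
    return factor
```

For example, with $D=-23$ (where $h(-23)=3$) and $f=2$: since $-23\equiv 1\pmod 8$ the symbol $(-23|2)$ equals $1$, the factor is $2(1-\tfrac12)=1$, and the formula predicts $h(-92)=3$, matching the count of reduced primitive forms of discriminant $-92$.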
Thus, one would naturally expect that the
{percentage} of quadratic orders having class number divisible by 3
should be strictly larger than the corresponding percentage for
quadratic fields. Similarly, the {average number} of 3-torsion
elements across all quadratic orders would also be expected to be
strictly higher than the corresponding average for quadratic
fields.\footnote{Note that the class number formula does not give
complete information on the number of 3-torsion elements; indeed, extra
factors of 3 in the class number may mean
extra 3-torsion, but they could also mean
extra 9-torsion or 27-torsion, etc.!}
In this article, we begin by proving the latter statement:
\begin{theorem}\label{thmorders}
When orders in quadratic fields are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The average number of $3$-torsion elements in the class
groups of imaginary quadratic orders~is $\displaystyle{1+\frac{\zeta(2)}{\zeta(3)}}$.
\item[{\rm (b)}]
The average number of $3$-torsion elements in the class
groups of real quadratic orders is $\displaystyle{1+\frac{1}{3}\cdot\frac{\zeta(2)}{\zeta(3)}}$.
\end{itemize}
\end{theorem}
Note that $\frac{\zeta(2)}{\zeta(3)}\approx 1.36843>1$.
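Numerically, the constant $\zeta(2)/\zeta(3)$ is easy to verify with partial sums (a quick sanity sketch; the small additive corrections are integral tail estimates):

```python
import math

N = 200000
zeta2 = sum(1.0/n**2 for n in range(1, N)) + 1.0/N          # tail of 1/n^2 ~ 1/N
zeta3 = sum(1.0/n**3 for n in range(1, N)) + 1.0/(2*N**2)   # tail of 1/n^3 ~ 1/(2N^2)
ratio = zeta2 / zeta3
# The two averages of Theorem (thmorders): imaginary and real orders.
averages = (1 + ratio, 1 + ratio/3)
```

This recovers $\zeta(2)/\zeta(3)\approx 1.36843$, so the averages are approximately $2.36843$ and $1.45614$.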
More generally, we may consider the analogue of
Theorem~\ref{thmorders} when the average is taken not over all orders,
but over some subset of orders defined by local conditions. More
precisely, for each prime $p$, let $\Sigma_p$ be any set of
isomorphism classes of orders in \'etale quadratic algebras over
${\mathbb Q}_p.$ We say that the collection $(\Sigma_p)$ of local
specifications is {\it acceptable} if, for all sufficiently large $p$, the
set $\Sigma_p$ contains all the maximal quadratic rings over ${\mathbb Z}_p.$
Let $\Sigma$ denote the set of quadratic orders ${\mathcal O}$, up to
isomorphism, such that ${\mathcal O}\otimes{\mathbb Z}_p\in\Sigma_p$ for all $p$. Then
we may ask what the mean number of 3-torsion elements in the class
groups of imaginary (resp.\ real) quadratic orders in $\Sigma$ is.
To state such a result for general acceptable $\Sigma$, we need a bit of
notation. For an \'etale cubic algebra $K$ over ${\mathbb Q}_p$, we write
$D(K)$ for the unique quadratic algebra over ${\mathbb Z}_p$ satisfying
${\rm Disc}(D(K))={\rm Disc}(K)$. Also, for an order $R$ in an \'etale
quadratic algebra over ${\mathbb Q}_p$, let $C(R)$ denote the weighted number
of \'etale cubic algebras $K$ over ${\mathbb Q}_p$ such that $R\subset D(K)$:
\begin{equation}\label{Cdef}
C(R) := \sum_{{\mbox{\scriptsize $K$ \'etale cubic$/ {\mathbb Q}_p$}}\atop{\mbox{\scriptsize s.t. $R \subset D(K)$}}} \frac1{\#{\rm Aut}(K)}.
\end{equation}
We define the ``cubic mass'' $M_\Sigma$ of the
family $\Sigma$ as a product of local masses:
\begin{equation}\label{massdef}
M_{\Sigma} ~:= \quad
\prod_p\,\frac{{\displaystyle\sum_{R\in\Sigma_p}\frac{C(R)}{{\rm Disc}_p(R)}}}{{\displaystyle \sum_{R\in\Sigma_p}\frac{1}{\#{\rm Aut}(R)}\cdot\frac1{{\rm Disc}_p(R)}}} = \prod_p\, \frac{{\displaystyle \sum_{R\in \Sigma_p}\frac{C(R)}{{\rm Disc}_p(R)}}}{\displaystyle{\sum_{R \in \Sigma_p} {\frac{1}{2\cdot {\rm Disc}_p(R)}}}}\,,
\end{equation}
where ${\rm Disc}_p(R)$ denotes the discriminant of $R$ viewed as a power
of $p$. We then prove the following generalization of~Theorem~\ref{thmorders}.
\begin{theorem}\label{gensigmaord}
Let $(\Sigma_p)$ be any acceptable collection of local specifications
as above, and let $\Sigma$ denote the set
of all isomorphism classes of quadratic orders ${\mathcal O}$ such that
${\mathcal O}\otimes{\mathbb Z}_p\in\Sigma_p$ for all $p$.
Then, when orders in $\Sigma$ are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The average number of $3$-torsion elements in the class
groups of imaginary quadratic orders in $\Sigma$ is
$\displaystyle{1+M_\Sigma}$.
\item[{\rm (b)}]
The average number of $3$-torsion elements in the class
groups of real quadratic orders \linebreak in~$\Sigma$~is~$\displaystyle{1+\frac{1}{3}M_\Sigma}$.
\end{itemize}
\end{theorem}
If $\Sigma$ is
the set of all orders in Theorem~\ref{gensigmaord}, we recover
Theorem~\ref{thmorders}; if $\Sigma$ is the set of all maximal orders,
we recover Theorem~\ref{theoremdh}. As would be expected, the mean number of 3-torsion
elements in class groups of quadratic orders depends on which set of
orders one is taking the average over. However, a remarkable consequence
of Theorem~\ref{gensigmaord} is the following generalization of Theorem~\ref{theoremdh}:
\begin{corollary}\label{maxcase}
Suppose one restricts to just those quadratic fields satisfying any
specified set of local conditions at any finite set of primes. Then,
when these quadratic fields are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The average number of $3$-torsion elements in the class
groups of such imaginary quadratic~fields is~$2$.
\item[{\rm (b)}]
The average number of $3$-torsion elements in the class
groups of such real quadratic fields is~$\textstyle{\frac{4}{3}}$.
\end{itemize}
\end{corollary}
Thus the mean number of 3-torsion elements in class groups of quadratic
{fields} (i.e., of maximal quadratic orders) remains the
same even when one averages over families of quadratic fields
defined by any desired finite set of local conditions.
We turn next to 3-torsion elements in the {\it ideal group} of a
quadratic order ${\mathcal O}$, i.e., the group ${\mathcal I}({\mathcal O})$ of invertible fractional ideals of ${\mathcal O}$, of which the class group
${\rm Cl}({\mathcal O})$ is a quotient. It may come as a surprise that, if a quadratic order
${\mathcal O}$ is not maximal, then it is
possible for an {\it ideal} to have order~$3$; i.e., there can be
a fractional ideal $I$ of
${\mathcal O}$ satisfying $I^3={\mathcal O}$ but $I\neq {\mathcal O}$. We first illustrate this phenomenon with
an example:
\begin{example}\label{ex1}
{\em Let ${\mathcal O}={\mathbb Z}[\sqrt{-11}]$ and let $I=(2,\frac{1-\sqrt{-11}}{2})$. Then
$I\subset{\mathcal O}\otimes{\mathbb Q}$ is a fractional ideal of ${\mathcal O}$ and has norm
one. Since $I^3\subset{\mathcal O}$, and $I$ has norm one, we must
have $I^3={\mathcal O}$, even though clearly $I\neq{\mathcal O}$.
Hence $I$ has order 3 in the ideal
group of ${\mathcal O}$.
It follows, in particular, that
the {\it ideal class} represented by $I$ also has order 3 in the class group of ${\mathcal O}$!
}\end{example}
Example~\ref{ex1} shows that some elements of the ideal class group can have order 3 simply
because there exists a (non-principal) ideal representing them that has
order 3 in the ideal group. This raises
the question as to how many 3-torsion elements exist on average in
the ideal groups of quadratic orders. For maximal orders, it is easy to show that any
3-torsion element (indeed, any torsion element) in the ideal group
must be simply the trivial ideal. For all orders, we have the following theorem.
\begin{theorem}\label{sigmaid}
When orders in quadratic fields are ordered by their absolute discriminants,
the average number of $3$-torsion elements in the ideal groups of either imaginary or real quadratic orders is $\displaystyle\frac{\zeta(2)}{\zeta(3)}$.
\end{theorem}
In the case of general sets
of orders defined by any acceptable set of local conditions, we have
the following generalization of Theorem~\ref{sigmaid}:
\begin{theorem}\label{gensigmaid}
Let $(\Sigma_p)$ be any acceptable collection of local specifications
as above, and let $\Sigma$ denote the set
of all isomorphism classes of quadratic orders ${\mathcal O}$ such that
${\mathcal O}\otimes{\mathbb Z}_p\in\Sigma_p$ for all $p$.
Then, when orders in $\Sigma$ are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The average number of $3$-torsion elements in the ideal
groups of imaginary quadratic orders in $\Sigma$ is~$\displaystyle{M_\Sigma}$.
\item[{\rm (b)}]
The average number of $3$-torsion elements in the ideal
groups of real quadratic orders \linebreak in~$\Sigma$~is~$\displaystyle{M_\Sigma}$.
\end{itemize}
\end{theorem}
In the preceding theorems, we have distinguished between the two
groups ${\rm Cl}_3({\mathcal O})$, the group of {\it $3$-torsion ideal classes},
and ${\mathcal I}_3({\mathcal O})$, the group of {\it $3$-torsion ideals}.
Theorems~\ref{gensigmaord} and \ref{gensigmaid} give the mean values
of $|{\rm Cl}_3({\mathcal O})|$ and $|{\mathcal I}_3({\mathcal O})|$ respectively, as ${\mathcal O}$ ranges over any
family of orders defined by local conditions. In both
Theorems~\ref{gensigmaord} and \ref{gensigmaid}, we have seen that
unless the family consists entirely of maximal orders satisfying a finite number
of local conditions, these averages depend on the particular family
of orders over which the average is taken. However, we see that
these two theorems together imply:
\begin{theorem}\label{diff}
Let $(\Sigma_p)$ be any acceptable collection of local specifications
as above, and let $\Sigma$ denote the set
of all isomorphism classes of quadratic orders ${\mathcal O}$ such that
${\mathcal O}\otimes{\mathbb Z}_p\in\Sigma_p$ for all $p$.
Then, when orders in $\Sigma$ are ordered by their absolute
discriminants:
\begin{itemize}
\item[{\rm (a)}]
The mean size of
$|{\rm Cl}_3({\mathcal O})|-|{\mathcal I}_3({\mathcal O})|$ across imaginary quadratic orders ${\mathcal O}$
in $\Sigma$ is $1$.
\item[{\rm (b)}]
The mean size of $|{\rm Cl}_3({\mathcal O})|-\frac13|{\mathcal I}_3({\mathcal O})|$ across real
quadratic orders ${\mathcal O}$ in $\Sigma$ is $1$.
\end{itemize}
\end{theorem}
It is a remarkable fact, which begs for explanation, that the mean
values in Theorem~\ref{diff} do not depend on the family of orders
that one averages over! In particular, the case of maximal orders
gives Corollary~\ref{maxcase}, because
the only 3-torsion element of the ideal group in a maximal order is the trivial ideal.
We end this introduction by describing the methods used in this paper. Our approach combines the original methods of Davenport--Heilbronn
with techniques that are class-field-theoretically ``dual'' to those methods, which we explain now. First, recall that Davenport and Heilbronn proved Theorem~\ref{theoremdh} in~\cite{DH} by:
\begin{enumerate}
\item[1)] counting appropriate sets of binary cubic forms to compute the number of cubic fields
of bounded discriminant, using a bijection (due to Delone and
Faddeev~\cite{DF}) between irreducible binary cubic forms and cubic
orders;
\item[2)] applying a duality from class field theory between cubic fields and 3-torsion elements of class groups of
quadratic fields.
\end{enumerate}
In Sections 2 and 3, we give a new proof of Theorem~\ref{theoremdh} without class
field theory, by using a direct correspondence between binary cubic
forms and 3-torsion elements of class groups of quadratic orders
proved in~\cite{Bhargava2}, in place of the Delone-Faddeev
correspondence.
We describe a very precise version of this correspondence in Section~2
(cf.~Thm.~\ref{bcfideal}). In Section~3, we then show how
the original counting results of Davenport~\cite{Davenport1,Davenport2}---as
applied in the asymptotic count of cubic fields in
Davenport--Heilbronn~\cite{DH}---can also be used to extract Theorem~\ref{theoremdh}, using the direct
correspondence between integral binary cubic forms and $3$-torsion elements of class groups of quadratic orders.
To fully illustrate the duality between the original strategy of \cite{DH} and our strategy described above, we give two ``dual'' proofs of Theorem~2. In Section~4, we first generalize the proof of Theorem~1 given in Sections~2 and~3, and then in Section~5, we give a second proof of Theorem~2 via
ring class field theory, generalizing the original proof of Davenport--Heilbronn \cite{DH}. Both methods involve counting \emph{irreducible} binary cubic forms
in fundamental domains for the action of either ${\rm SL}_2({\mathbb Z})$ or ${\rm GL}_2({\mathbb Z})$, as developed in the work
of Davenport~\cite{Davenport1,Davenport2}. However, in our direct method described in Section~4, one must also count points in the ``cusps'' of these fundamental regions! The
points in these so-called cusps correspond essentially to reducible cubic forms.
We find that reducible cubic forms correspond to 3-torsion elements of
\emph{ideal groups} of quadratic orders (cf.\ Thm.~\ref{reducible}). In
the case of maximal orders, the only torsion element of the ideal
group is the identity,
and thus the points
in the cusps can be ignored
when proving Theorem~1.
However, in order to prove Theorems~2 and~3 (which do not restrict to
maximal orders), we must include reducible forms in our counts, and
this is the main goal of Section~4. Isolating the count of
reducible forms in the fundamental domain for the action of
${\rm SL}_2({\mathbb Z})$ is also what allows us to deduce Theorem~\ref{sigmaid}.
On the other hand, in Section~5, we describe the duality between
nontrivial 3-torsion elements of class groups of a given quadratic
order and cubic fields whose Galois closure is a ring class field of
the fraction field of the quadratic order (cf.\ Prop.~\ref{rcf}). To then count 3-torsion
elements in the class groups of quadratic orders, we
use the count of cubic fields of bounded discriminant proved by
Davenport--Heilbronn~\cite{DH},
but we allow a
given cubic field to be counted multiple times, as the Galois closure
of a single cubic field can be viewed as the ring class field (of
varying conductor) of multiple quadratic orders (cf.\ \S5.2). This
yields a second proof of Theorem~2; furthermore, it allows us to prove also
Theorem~3 and Corollary~4, using a generalization of
Davenport and Heilbronn's theorem on the density of discriminants of cubic fields
established in \cite[Thm.~8]{BST}, which counts cubic
orders of bounded discriminant satisfying any acceptable collection of
local conditions.
Finally, in Section~6, we generalize the proof of Theorem~2 given in
Section~3 to general acceptable families of quadratic orders,
which in combination with Theorem~\ref{gensigmaord} allows
us to deduce Theorems~\ref{gensigmaid} and~\ref{diff}. We note
that, in order to conclude Theorem~\ref{gensigmaid}, we use both of the
``dual'' perspectives provided in the two proofs of Theorem~2.
\section{Parametrization of order 3 ideal classes in quadratic orders}
In this section we recall the parametrization of elements in the
3-torsion subgroups of ideal class groups of quadratic orders in terms
of (orbits of) certain integer-matrix binary cubic forms as proven in
\cite{Bhargava2}. We also deduce various relevant facts that will
allow us to prove Theorems~1 and~2
in \S3 and \S4, respectively, without using class field theory.
\subsection{Binary cubic forms and 3-torsion elements in class groups}
The key ingredient in the new proofs of Theorems~1 and~2 is a
parametrization of ideal classes of order~3 in quadratic rings by
means of equivalence classes of integer-matrix binary cubic forms,
which was obtained in~\cite{Bhargava2}. We begin by briefly recalling this
parametrization.
Let $V_{\mathbb R}$ denote the four-dimensional real vector space of binary cubic
forms $ax^3+bx^2y+cxy^2+dy^3$ where $a,b,c,d\in{\mathbb R}$, and let $V_{\mathbb Z}$ denote
the lattice of those forms for which $a,b,c,d\in{\mathbb Z}$ (i.e., the {\it
integer-coefficient} binary cubic forms). The group ${\rm GL}_2({\mathbb Z})$
acts on $V_{\mathbb R}$ by the so-called ``twisted action,'' i.e., an element $\gamma \in {\rm GL}_2({\mathbb Z})$ acts on a binary cubic form $f(x,y)$ by
\begin{equation}
(\gamma f)(x,y) := \frac{1}{\det(\gamma)}f((x,y)\gamma).
\end{equation}
Furthermore, the action preserves $V_{\mathbb Z}$.
We will be interested in the sublattice of binary
cubic forms of the form $f(x,y)=ax^3+3bx^2y+3cxy^2+dy^3$, called {\it classically
integral} or {\it integer-matrix} if $a,b,c,d$ are integral. We
denote the lattice of all integer-matrix forms in $V_{\mathbb R}$ by $V_{\mathbb Z}^\ast$.
Note that $V_{\mathbb Z}^\ast$ has index~9 in $V_{\mathbb Z}$ and is also preserved by ${\rm GL}_2({\mathbb Z})$.
We also define the {\it reduced discriminant} ${\rm disc}(\cdot)$ on $V_{\mathbb Z}^\ast$ by
\begin{equation}\label{discdef}
{\rm disc}(f) := -\frac{1}{27}{\rm Disc}(f) = -3b^2c^2 + 4ac^3 + 4b^3d + a^2d^2 - 6abcd
\end{equation}
where ${\rm Disc}(f)$ denotes the usual discriminant of $f$ as an element
of $V_{\mathbb Z}$. It is well-known and easy to check that the action of ${\rm GL}_2({\mathbb Z})$ on binary cubic forms preserves (both definitions of) the discriminant.
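Both the identity ${\rm disc}(f)=-\frac{1}{27}{\rm Disc}(f)$ and the invariance of the discriminant under the twisted action are easy to confirm numerically. The following Python sketch (purely illustrative; the sample form and matrix are ad hoc) writes out the twisted action in the integer-matrix coordinates $(a,b,c,d)$ and checks both facts, as well as the preservation of the lattice $V_{\mathbb Z}^\ast$:

```python
from fractions import Fraction

def disc_reduced(a, b, c, d):
    # reduced discriminant of f = a x^3 + 3b x^2 y + 3c x y^2 + d y^3
    return -3*b*b*c*c + 4*a*c**3 + 4*b**3*d + a*a*d*d - 6*a*b*c*d

def Disc_classical(A, B, C, D):
    # usual discriminant of F = A x^3 + B x^2 y + C x y^2 + D y^3 in V_Z
    return 18*A*B*C*D - 4*B**3*D + B*B*C*C - 4*A*C**3 - 27*A*A*D*D

def act(gamma, f):
    # twisted action: (gamma.f)(x, y) = det(gamma)^(-1) * f((x, y) gamma)
    (p, q), (r, s) = gamma
    det = p*s - q*r
    a, b, c, d = f                    # integer-matrix coordinates of f
    A, B, C, D = a, 3*b, 3*c, d      # actual coefficients of f
    # f((x,y) gamma) = f(p x + r y, q x + s y); collect coefficients:
    A2 = A*p**3 + B*p*p*q + C*p*q*q + D*q**3
    B2 = 3*A*p*p*r + B*(p*p*s + 2*p*q*r) + C*(2*p*q*s + q*q*r) + 3*D*q*q*s
    C2 = 3*A*p*r*r + B*(2*p*r*s + q*r*r) + C*(p*s*s + 2*q*r*s) + 3*D*q*s*s
    D2 = A*r**3 + B*r*r*s + C*r*s*s + D*s**3
    return (Fraction(A2, det), Fraction(B2, 3*det),
            Fraction(C2, 3*det), Fraction(D2, det))

f = (2, 1, -1, 3)                     # f = 2x^3 + 3x^2 y - 3x y^2 + 3y^3
assert -27 * disc_reduced(*f) == Disc_classical(2, 3, -3, 3)
g = act(((2, 1), (1, 1)), f)          # a sample element of SL_2(Z)
assert all(t.denominator == 1 for t in g)     # the lattice V_Z^* is preserved
assert disc_reduced(*g) == disc_reduced(*f)   # disc is GL_2(Z)-invariant
```

Such a check is not used anywhere in the arguments, but it makes the normalization of the twisted action and of ${\rm disc}$ concrete.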
In \cite{Eisenstein}, Eisenstein proved a beautiful correspondence
between certain special ${\rm SL}_2({\mathbb Z})$-classes in $V_{\mathbb Z}^\ast$ and ideal classes
of order 3 in quadratic rings.
We state here a refinement of Eisenstein's correspondence
obtained in~\cite{Bhargava2}, which gives an
exact interpretation for {\it all} ${\rm SL}_2({\mathbb Z})$-classes in $V_{\mathbb Z}^\ast$
in terms of ideal classes in quadratic rings.
To state the theorem, we first require some terminology. We define a
{\it quadratic ring} over~${\mathbb Z}$ (resp.~${\mathbb Z}_p$) to be any commutative
ring with unit that is free of rank 2 as a ${\mathbb Z}$-module
(resp. ${\mathbb Z}_p$-module). An {\it oriented} quadratic ring ${\mathcal O}$ over
${\mathbb Z}$ is then defined to be a quadratic ring along with a specific
choice of isomorphism $\pi: {\mathcal O}/{\mathbb Z} \rightarrow {\mathbb Z}$. Note that an
oriented quadratic ring has no nontrivial automorphisms. Finally, we
say that a quadratic ring (or binary cubic form) is {\it
nondegenerate} if it has nonzero discriminant.
\begin{theorem}[{\bf \cite[Thm.~13]{Bhargava2}}]\label{bcfideal}
There is a natural bijection between the set of nondegenerate
${\rm SL}_2({\mathbb Z})$-orbits on the space $V_{\mathbb Z}^\ast$ of integer-matrix binary
cubic forms and the set of equivalence classes of triples
$({\mathcal O},I,\delta)$, where ${\mathcal O}$ is a nondegenerate oriented quadratic
ring over ${\mathbb Z}$, $I$ is an ideal of ${\mathcal O}$, and $\delta$ is an invertible element
of ${\mathcal O}\otimes{\mathbb Q}$ such that $I^3\subseteq \delta\cdot {\mathcal O}$ and
$N(I)^3=N(\delta)$. $($Here two triples $({\mathcal O},I,\delta)$ and
$({\mathcal O}',I',\delta')$ are equivalent if there is an isomorphism
$\phi:{\mathcal O}\rightarrow {\mathcal O}'$ and an element $\kappa\in {\mathcal O}'\otimes{\mathbb Q}$ such
that $I'=\kappa\phi(I)$ and $\delta'=\kappa^3\phi(\delta)$.$)$ Under
this bijection, the reduced discriminant of a binary cubic form is equal to
the discriminant of the corresponding quadratic ring.
\end{theorem}
The proof of this statement can be found in~\cite[\S3.4]{Bhargava2}; here we simply sketch the map. Given a triple $({\mathcal O}, I, \delta)$, we construct the corresponding binary cubic form as follows: Write ${\mathcal O} = {\mathbb Z} + {\mathbb Z}\tau$, where $\langle 1, \tau\rangle$ is a {\it positively oriented} basis for the oriented quadratic ring ${\mathcal O}$, i.e., $\pi(\tau) = 1$. Furthermore, we can write $I = {\mathbb Z} \alpha + {\mathbb Z} \beta$, where $\langle \alpha, \beta \rangle$ is a {\it positively oriented} basis for a ${\mathbb Z}$-submodule of ${\mathcal O} \otimes {\mathbb Q}$, i.e., the change-of-basis matrix from the positively oriented $\langle 1, \tau\rangle$ to $\langle \alpha, \beta \rangle$ has positive determinant. We can then find integers $e_0$, $e_1$, $e_2$, $e_3$, $a$, $b$, $c$, and $d$ satisfying the following equations:
\begin{equation}\label{bcfdef}
\begin{array}{rcl}
\alpha^3 &=& \delta(e_0 + a \tau), \\
\alpha^2\beta &=& \delta(e_1 + b \tau), \\
\alpha\beta^2 &=& \delta(e_2 + c \tau), \\
\beta^3 &=& \delta(e_3 + d \tau).
\end{array}
\end{equation}
Then the binary cubic form corresponding to the triple $({\mathcal O}, I, \delta)$ is $f(x,y) = ax^3 + 3bx^2y + 3cxy^2 + dy^3$. In basis-free terms, $f$ is the symmetric trilinear form
\begin{equation}\label{cf}
I\times I\times I\to {\mathbb Z} \qquad \qquad (i_1,i_2,i_3) \mapsto \pi(\delta^{-1}\cdot i_1\cdot i_2\cdot i_3)
\end{equation}
given by applying multiplication in ${\mathcal O}$, dividing by $\delta$, and then applying $\pi$.
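For illustration (this worked example is ours, not part of the proof), consider the simplest possible triple: ${\mathcal O}={\mathbb Z}[\sqrt{-5}]$ with $\tau=\sqrt{-5}$, $I={\mathcal O}$, and $\delta=1$. The following Python fragment carries out the multiplications in (\ref{bcfdef}) for the basis $\langle\alpha,\beta\rangle=\langle 1,\tau\rangle$ and reads off the form $f(x,y)=3x^2y-5y^3$, whose reduced discriminant is $-20={\rm Disc}({\mathcal O})$, as the bijection requires:

```python
# Illustrative computation in O = Z[t] with t^2 = -5 (so Disc(O) = -20).
# Elements u + v*t are stored as pairs (u, v).
def mul(z, w):
    (u1, v1), (u2, v2) = z, w
    return (u1*u2 - 5*v1*v2, u1*v2 + u2*v1)

alpha, beta = (1, 0), (0, 1)               # the basis <1, t> of I = O; delta = 1
powers = [mul(mul(alpha, alpha), alpha),   # alpha^3        = e_0 + a*t
          mul(mul(alpha, alpha), beta),    # alpha^2 beta   = e_1 + b*t
          mul(mul(alpha, beta), beta),     # alpha beta^2   = e_2 + c*t
          mul(mul(beta, beta), beta)]      # beta^3         = e_3 + d*t
(e0, a), (e1, b), (e2, c), (e3, d) = powers
assert (e0, e1, e2, e3) == (1, 0, -5, 0)
assert (a, b, c, d) == (0, 1, 0, -5)       # f(x, y) = 3x^2 y - 5y^3
disc = -3*b*b*c*c + 4*a*c**3 + 4*b**3*d + a*a*d*d - 6*a*b*c*d
assert disc == -20                         # reduced disc equals Disc(O)
```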
On the other hand, given a binary cubic form $f(x,y) = ax^3 + 3bx^2y +
3cxy^2 + dy^3$, we can explicitly construct the corresponding triple
as follows. The ring ${\mathcal O}$ is completely determined by having discriminant equal to ${\rm disc}(f)$. Examining the system of equations in (\ref{bcfdef}) shows that a positively oriented basis $\langle \alpha, \beta\rangle$ for $I$ must satisfy
$$\alpha:\beta = (e_1 + b \tau): (e_2 + c \tau)$$
where
\begin{equation}
e_1 = \displaystyle \frac{1}{2}(b^2c - 2ac^2 + abd - \epsilon b), \quad \mbox{and} \quad
e_2 = \displaystyle -\frac{1}{2}(bc^2 - 2b^2d + acd + \epsilon c).
\end{equation}
Here, $\epsilon = 0$ or $1$ in accordance with whether ${\rm Disc}({\mathcal O})
\equiv 0$ or $1$ modulo $4$, respectively. This uniquely determines
$\alpha$ and $\beta$ up to a scalar factor in ${\mathcal O} \otimes {\mathbb Q}$, and
once $\alpha$ and $\beta$ are fixed, the system in (\ref{bcfdef})
determines $\delta$ uniquely. The ${\mathcal O}$-ideal structure of the rank 2
${\mathbb Z}$-module $I$ is given by the following action of $\tau$ on the basis
elements of $I$:
$$\tau \cdot \alpha = \frac{B + \epsilon}{2} \cdot \alpha + A \cdot \beta \qquad \mbox{and} \qquad \tau \cdot \beta = -C\cdot\alpha + \frac{\epsilon - B}{2} \cdot \beta, \quad \mbox{where}$$
\begin{equation}\label{defABC}
A = b^2 - ac, \quad B = ad - bc, \quad C = c^2 - bd.
\end{equation}
This completely (and explicitly) determines the triple $({\mathcal O},I,\delta)$ from the binary cubic form $f(x,y)$. Note that the equivalence defined on triples in the statement of the theorem exactly corresponds to ${\rm SL}_2({\mathbb Z})$-equivalence on the side of binary cubic forms.
We may also deduce from this discussion a description of the stabilizer in ${\rm SL}_2({\mathbb Z})$ of an element in $V_{\mathbb Z}^\ast$ in terms of the corresponding triple $({\mathcal O},I,\delta)$.
\begin{corollary}\label{stab}
The stabilizer in ${\rm SL}_2({\mathbb Z})$ of a
nondegenerate element $v\in V_{\mathbb Z}^\ast$ is naturally isomorphic to~$U_3({\mathcal O}_0)$,
where $({\mathcal O},I,\delta)$ is the triple corresponding
to $v$ as in Theorem~$\ref{bcfideal}$,
${\mathcal O}_0={\rm End}_{\mathcal O}(I)$ is the endomorphism ring of~$I$, and
$U_3({\mathcal O}_0)$ denotes the group of units of ${\mathcal O}_0$ having order
dividing $3$.
\end{corollary}
Indeed, let $v \in V_{\mathbb Z}^\ast$ be associated to the triple $({\mathcal O}, I,
\delta)$ under Theorem~$\ref{bcfideal}$. Then a transformation
$\gamma\in{\rm SL}_2({\mathbb Z})$ of the basis $\langle \alpha, \beta
\rangle$ for $I$ preserves the map in (\ref{cf}) precisely when
$\gamma$ acts by multiplication by a cube root of unity in the
endomorphism ring ${\mathcal O}_0$ of $I$.
We may also similarly describe the orbits of $V_{\mathbb Z}^\ast$ under the action of ${\rm GL}_2({\mathbb Z})$. This simply
removes the orientation of the corresponding ring ${\mathcal O}$, thus
identifying the triple $({\mathcal O},I,\delta)$ with its quadratic conjugate triple $({\mathcal O},\bar I,\bar\delta)$.
\begin{corollary}\label{gl2bijection}
There is a natural bijection between the set of nondegenerate ${\rm GL}_2({\mathbb Z})$-orbits on the space $V_{{\mathbb Z}}^\ast$ of integer-matrix binary cubic forms and the set of equivalence classes of triples $({\mathcal O}, I, \delta)$ where ${\mathcal O}$ is a nondegenerate $($unoriented$)$ quadratic ring, $I$ is an ideal of ${\mathcal O}$, and $\delta$ is an invertible element of ${\mathcal O} \otimes {\mathbb Q}$ such that $I^3 \subseteq \delta \cdot {\mathcal O}$ and $N(I)^3 = N(\delta)$.
Under this bijection, the reduced discriminant of a binary cubic form is equal to
the discriminant of the corresponding quadratic ring.
The stabilizer in ${\rm GL}_2({\mathbb Z})$ of a
nondegenerate element $v\in V_{\mathbb Z}^\ast$ is given by the
semidirect product
\[{\rm Aut}({\mathcal O};I,\delta)\ltimes U_3({\mathcal O}_0),\]
where: $({\mathcal O},I,\delta)$ is the triple corresponding
to $v$ as in Theorem~$\ref{bcfideal}$; ${\rm Aut}({\mathcal O};I,\delta)$
is defined to be $C_2$ if there exists $\kappa\in({\mathcal O}\otimes{\mathbb Q})^\times$ such that
$\bar I=\kappa I$ and $\bar \delta=\kappa^3\delta$, and
is defined to be trivial otherwise;
${\mathcal O}_0={\rm End}_{\mathcal O}(I)$ is the endomorphism ring of $I$; and
$U_3({\mathcal O}_0)$ denotes the group of units of ${\mathcal O}_0$ having order
dividing $3$.
\end{corollary}
\begin{proof}
Given Theorem~\ref{bcfideal}, it remains to check where the
now-combined ${\rm SL}_2({\mathbb Z})$-orbits of an integer-matrix binary cubic
form $f$ and of $\gamma f$ where $\gamma = \left(\begin{smallmatrix}
0 & 1 \\1 & 0\end{smallmatrix}\right)$ map to. If the
${\rm SL}_2({\mathbb Z})$-orbit of $f$ corresponds to a triple $({\mathcal O}, I, \delta)$
under the above bijection, then the ${\rm SL}_2({\mathbb Z})$-orbit of $\gamma f$
corresponds to the triple $({\mathcal O}, \bar{I}, \bar{\delta})$ where
$\bar{\cdot}$ denotes the image under the non-trivial automorphism
of the unoriented quadratic ring ${\mathcal O}$. Thus we obtain a
correspondence between ${\rm GL}_2({\mathbb Z})$-orbits of integer-matrix binary
cubic forms and triples $({\mathcal O}, I, \delta)$ as described above except
that ${\mathcal O}$ is viewed as a quadratic ring without orientation.
For the stabilizer statement, note that an element $g$ of ${\rm GL}_2({\mathbb Z})$ preserving $v$ must have determinant either
$+1$ or $-1$. If $g$ has determinant~1, then when it acts on the basis
$\langle\alpha,\beta\rangle$ of $I$, it preserves the vector
$v=(a,b,c,d)$ in (\ref{bcfdef}) if
and only if $\alpha^3,\alpha^2\beta,\alpha\beta^2,\beta^3$ remain
unchanged; thus $g$ must act by multiplication by a unit $u$ in the unit group
$U({\mathcal O}_0)$ of ${\mathcal O}_0$ whose cube is 1.
If $g$ has determinant $-1$, then the basis element $\tau$
gets replaced by its conjugate $\bar\tau$ in addition to
$\langle\alpha,\beta\rangle$ being transformed by $g$. If this is to
preserve the vector $v=(a,b,c,d)$ in (\ref{bcfdef}), then this means
that conjugation on ${\mathcal O}$ maps $I$ to $\kappa I$
and $\delta$ to $\kappa^3 \delta$ for some $\kappa\in({\mathcal O}\otimes{\mathbb Q})^\times$.
The result follows.
\end{proof}
\begin{remark}\label{rmkzp}{\em
The statements in
Theorem~\ref{bcfideal}, Corollary~\ref{stab}, and Corollary~\ref{gl2bijection} also hold after base change to~${\mathbb Z}_p$, with the same proofs.
In the case of Theorem~\ref{bcfideal}, in the proof, by a {\it
positively oriented} basis $\langle\alpha,\beta\rangle$ of an ideal
$I$ of a quadratic ring over ${\mathbb Z}_p$,
we mean that the change-of-basis matrix
from $\langle 1, \tau\rangle$ to $\langle \alpha, \beta \rangle$ has
determinant equal to a power of $p$; all other details remain identical.
Corollary~\ref{gl2bijection} and its
analogue over ${\mathbb Z}_p$ will be relevant in Section~6, during the proofs of
Theorems~\ref{gensigmaid} and \ref{diff}.
}\end{remark}
\subsection{Composition of cubic forms and 3-class groups}
Let us say that an integer-matrix binary cubic form $f$, or its corresponding triple $({\mathcal O},I,\delta)$ via the correspondence of Theorem~\ref{bcfideal}, is {\it projective} if $I$ is projective as an
${\mathcal O}$-module (i.e., if $I$ is invertible as an ideal of ${\mathcal O}$); in such a
case we have $I^3=(\delta)$. The bijection of Theorem~\ref{bcfideal} allows us to describe a composition law on the set of projective integer-matrix binary cubic forms, up to ${\rm SL}_2({\mathbb Z})$-equivalence, having the same reduced discriminant. This turns the set of all ${\rm SL}_2({\mathbb Z})$-equivalence classes of projective integer-matrix binary cubic forms having given reduced discriminant~$D$ into a group,
which is closely related to the group ${\rm Cl}_3({\mathcal O})$ when ${\mathcal O}$ has discriminant $D$. In this section, we describe this group law and establish some of its relevant properties.
Fix an oriented quadratic ring ${\mathcal O}$.
Given such an ${\mathcal O}$, we obtain a natural law of composition on
equivalence classes of triples $({\mathcal O},I,\delta)$, where $I$ is an invertible ideal of ${\mathcal O}$ and $\delta \in ({\mathcal O} \otimes{\mathbb Q})^\times$ such that $I^3 = \delta \cdot {\mathcal O}$ and $N(I)^3 = N(\delta)$. It is defined by
\[
({\mathcal O},I,\delta)\circ({\mathcal O},I',\delta') = ({\mathcal O},II',\delta\delta').
\]
The equivalence classes of projective triples $({\mathcal O},I,\delta)$ thus
form a group under this composition law, which we denote by $H({\mathcal O})$
(note that two oriented quadratic rings ${\mathcal O}$ and ${\mathcal O}'$ of the same
discriminant are canonically isomorphic, and hence the groups $H({\mathcal O})$ and
$H({\mathcal O}')$ are also canonically isomorphic).
By Theorem~\ref{bcfideal}, we also then obtain a corresponding
composition law on ${\rm SL}_2({\mathbb Z})$-equivalence classes of integer-matrix
cubic forms $f$ having a given reduced discriminant
$D$
(a higher degree analogue of Gauss composition). We say that such a
binary cubic form $f$ is {\it projective} if the corresponding
$({\mathcal O},I,\delta)$ is projective. We will sometimes view $H({\mathcal O})$ as the
group consisting of the ${\rm SL}_2({\mathbb Z})$-equivalence classes of integer-matrix
binary cubic forms having reduced discriminant equal to ${\rm Disc}({\mathcal O})$.
In order to understand the relationship between $H({\mathcal O})$ and ${\rm Cl}_3({\mathcal O})$, we first establish a lemma describing the number of preimages
of an ideal class
under the ``forgetful'' map $H({\mathcal O}) \rightarrow {\rm Cl}_3({\mathcal O})$ defined by
$({\mathcal O},I,\delta) \mapsto [I]$:
\begin{lemma}\label{deltalemma}
Let ${\mathcal O}$ be an order in a quadratic field and $I$ an invertible
ideal of ${\mathcal O}$ whose class has order $3$ in the class group of
${\mathcal O}$. Then the number of elements $\delta\in{\mathcal O}$ $($up to cube
factors in $({\mathcal O}\otimes{\mathbb Q})^\times)$ yielding a valid triple
$({\mathcal O},I,\delta)$ in the sense of Theorem~$\ref{bcfideal}$ is $1$ if
${\rm Disc}({\mathcal O})<-3$, and $3$ otherwise.
\end{lemma}
\begin{proof}
Fix an invertible ideal $I$ of ${\mathcal O}$ that arises in some valid triple.
The number of elements~$\delta$
having norm equal to $N(I)^3$ and yielding distinct elements of $H({\mathcal O})$ is then $|U^+({\mathcal O})/U^+({\mathcal O})^{\times3}|$, where
$U^+({\mathcal O})$ denotes the group of units of ${\mathcal O}$ having norm~1. In fact, we have an exact sequence
\begin{equation}\label{hr}
1 \to \frac{U^{+}({\mathcal O})}{U^{+}({\mathcal O})^{\times 3}} \to H({\mathcal O}) \to {\rm Cl}_3({\mathcal O}) \to 1.
\end{equation}
We see that for all orders ${\mathcal O}$ in imaginary quadratic fields other
than the maximal order ${\mathbb Z}[(1+\sqrt{-3})/2]$ of
${\mathbb Q}(\sqrt{-3})$, the unit group has
cardinality 2 or 4, and therefore $|U^+({\mathcal O})/U^+({\mathcal O})^{\times3}|=1$.
For real quadratic orders ${\mathcal O}$, the unit group has rank one and
torsion equal to $\{\pm 1\}$, and so $|U^+({\mathcal O})/U^+({\mathcal O})^{\times3}|=3$.
Finally, for the maximal order ${\mathcal O}={\mathbb Z}[(1+\sqrt{-3})/2]$, we have $|U^+({\mathcal O})/U^+({\mathcal O})^{\times3}|=3$ as well.
\end{proof}
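The imaginary case of this computation can be made very explicit: every unit of an imaginary quadratic order has norm $1$, so $U^+({\mathcal O})$ is cyclic of order $2$, $4$, or $6$, and the index of the cubes in a cyclic group of order $n$ is $\gcd(3,n)$. As a small illustrative check (not needed for the proof):

```python
from math import gcd

# In a cyclic group of order n (written additively as Z/n), the image of
# the cubing map x -> 3x has index gcd(3, n): trivial for n = 2, 4 and
# of index 3 for n = 6, matching the lemma's dichotomy.
for n, expected in ((2, 1), (4, 1), (6, 3)):
    cubes = {(3 * x) % n for x in range(n)}
    assert n // len(cubes) == expected == gcd(3, n)
```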
Equation (\ref{hr}) thus makes precise the relationship between
$H({\mathcal O})$ and ${\rm Cl}_3({\mathcal O})$. With regard to the sizes of these groups,
we obtain:
\begin{corollary}\label{hr2}
We have $|H({\mathcal O})|= |{\rm Cl}_3({\mathcal O})|$ when ${\mathcal O}$ has discriminant ${\rm Disc}({\mathcal O}) < -3$, and $|H({\mathcal O})|= 3\cdot|{\rm Cl}_3({\mathcal O})|$ otherwise.
\end{corollary}
\subsection{Projective binary cubic forms and invertibility}\label{projsection}
We now wish to explicitly describe the projective binary cubic
forms. Recall that the \emph{quadratic Hessian covariant} of $f(x,y) = ax^3 + 3bx^2y + 3cxy^2 + dy^3$ is given by
$Q(x,y)=Ax^2+Bxy+Cy^2$, where $A$, $B$, $C$ are defined by
(\ref{defABC}); then $Q$ also describes the norm form on $I$ mapping into~${\mathbb Z}$. It is well-known, going back to the work of
Gauss, that $I$ is invertible if and only if $Q(x,y)$ is {\it primitive},
i.e., $(A,B,C)=(b^2-ac,ad-bc,c^2-bd)=1$ (see,
e.g., \cite[Prop.~7.4 \& Thm.~7.7(i)--(ii)]{Cox}). Thus,
\begin{equation}\label{projbcf}
f(x,y)=ax^3+3bx^2y+
3cxy^2+dy^3 \mbox{ is {projective} } \Leftrightarrow
~(b^2-ac,ad-bc,c^2-bd)=1.
\end{equation}
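The criterion (\ref{projbcf}) is immediate to apply in practice. The following Python sketch (the sample forms are ad hoc, chosen only for illustration) tests primitivity of the Hessian, and also verifies on one form the classical fact that the discriminant $B^2-4AC$ of the Hessian $Q$ equals ${\rm disc}(f)$:

```python
from math import gcd

def hessian(a, b, c, d):
    # Q = A x^2 + B x y + C y^2 for f = a x^3 + 3b x^2 y + 3c x y^2 + d y^3
    return b*b - a*c, a*d - b*c, c*c - b*d

def is_projective(a, b, c, d):
    A, B, C = hessian(a, b, c, d)
    return gcd(gcd(A, B), C) == 1        # primitivity of the Hessian

assert is_projective(0, 1, 0, -5)        # Hessian (1, 0, 5) is primitive
assert not is_projective(1, 0, 2, 0)     # Hessian (-2, 0, 4) has content 2

a, b, c, d = 2, 1, -1, 3
A, B, C = hessian(a, b, c, d)
disc_f = -3*b*b*c*c + 4*a*c**3 + 4*b**3*d + a*a*d*d - 6*a*b*c*d
assert B*B - 4*A*C == disc_f == 73       # Disc(Q) agrees with disc(f)
```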
Let $\mathcal{S}$ denote the set of all projective forms
$f(x,y)=ax^3+3bx^2y+3cxy^2+dy^3$ in $V_{\mathbb Z}^\ast$.
Let~$V^\ast_{{\mathbb Z}_p}$ denote the set of all forms
$f(x,y)=ax^3+3bx^2y+3cxy^2+dy^3$ such that $a,b,c,d \in {\mathbb Z}_p$, and let $\mu_p^\ast(\mathcal{S})$ denote the $p$-adic density of the
$p$-adic closure of $\mathcal{S}$ in $V_{{\mathbb Z}_p}^\ast$, where we normalize the
additive measure $\mu_p^\ast$ on $V_{{\mathbb Z}_p}^\ast = {\mathbb Z}_p^4$ so that
$\mu_p^\ast(V_{{\mathbb Z}_p}^\ast)=1$. The following lemma gives the value of~$\mu_p^\ast(\mathcal{S})$:
\begin{lemma}\label{primdensity}
We have $\mu_p^\ast(\mathcal{S})=1-\displaystyle{\frac{1}{p^2}}.$
\end{lemma}
\begin{proof}
Suppose \begin{equation}\label{primeq}
b^2-ac\,\equiv\, bc-ad\,\equiv\,
c^2-bd\,\equiv\, 0 \pmod{p}.
\end{equation}
Then the pair $(a,b)$ can take any value except
$(0,r)$, where $r\not\equiv 0$ (mod $p$). Given any such nonzero
pair $(a,b)$, the variables $c$ and $d$ are then clearly
determined modulo $p$ from $(a,b)$. If
$(a,b)\equiv(0,0)$~(mod~$p$), then $c$ must also vanish
(mod~$p$), while $d$ can be arbitrary (mod~$p$). We conclude that
the total number of solutions (mod~$p$) to (\ref{primeq}) for
the quadruple
$(a,b,c,d)$ is $(p^2-(p-1))+(p-1)=p^2$. Thus $\mu_p^\ast(\mathcal{S})=
(p^4-p^2)/p^4$, as claimed.
\end{proof}
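The count in this proof is easily confirmed by brute force for small primes; the following Python check (illustrative only) enumerates all quadruples modulo $p$ satisfying (\ref{primeq}) and verifies that there are exactly $p^2$ of them:

```python
# Brute-force check: the number of (a, b, c, d) mod p with
# b^2 = ac, bc = ad, c^2 = bd (mod p) is p^2, so the density of
# NON-primitive forms at p is p^2 / p^4 = 1/p^2.
for p in (2, 3, 5, 7):
    count = sum(1
                for a in range(p) for b in range(p)
                for c in range(p) for d in range(p)
                if (b*b - a*c) % p == 0
                and (b*c - a*d) % p == 0
                and (c*c - b*d) % p == 0)
    assert count == p * p
```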
\subsection{Reducible forms}
As summarized in the introduction, the correspondence of
Delone-Faddeev in~\cite{DF} between irreducible binary cubic forms and
orders in cubic fields was used by Davenport--Heilbronn~\cite{DH} to
determine the density of discriminants of cubic fields.
Theorem~\ref{bcfideal}, however, gives a different correspondence from
the one due to Delone-Faddeev~\cite{DF}; in particular, it does
\emph{not} restrict to irreducible forms. The question then arises: which
elements of $H({\mathcal O})$ correspond to the integer-matrix binary cubic
forms that are reducible, i.e., that factor over ${\mathbb Q}$ (equivalently,
${\mathbb Z}$)? We answer this question here, first by establishing which
triples $({\mathcal O},I,\delta)$ correspond to reducible binary cubic
forms.
\begin{lemma}\label{lemma2}
Let $f$ be an element of $V^\ast_{\mathbb Z}$, and let
$({\mathcal O},I,\delta)$ be a representative for the corresponding equivalence class of triples as given by
Theorem~$\ref{bcfideal}$. Then $f$ has a rational zero as a binary
cubic form if and only if
$\delta$ is a cube in $({\mathcal O}\otimes{\mathbb Q})^\times$.
\end{lemma}
\begin{proof}
Suppose $\delta=\xi^3$ for some invertible $\xi\in {\mathcal O}\otimes{\mathbb Q}$.
Then by replacing $I$ by $\xi^{-1}I$ and $\delta$ by~$\xi^{-3}\delta$ if necessary, we may assume that $\delta=1$. Let
$\alpha$ be the smallest positive element in $I\cap {\mathbb Z}$, and extend
to a basis $\langle \alpha,\beta\rangle$ of $I$. Then the binary
cubic form $f$ corresponding to the basis $\langle
\alpha,\beta\rangle$ of $I$ via
Theorem~\ref{bcfideal}
evidently has a zero, since $\alpha\in{\mathbb Z}$, $\delta=1$, and so
$a=0$ in (\ref{bcfdef}).
Conversely, suppose $(x_0,y_0)\in{\mathbb Q}^2$ with $f(x_0,y_0) = 0$. Without loss of generality, we may assume that $(x_0,y_0) \in {\mathbb Z}^2$. If $({\mathcal O},I,\delta)$ is the corresponding triple and $I$ has positively oriented basis $\langle \alpha, \beta\rangle$, then by~(\ref{bcfdef}) or (\ref{cf}) we obtain
$$(x_0 \alpha + y_0 \beta)^3 = n\delta \quad \mbox{for some } n \in {\mathbb Z}.$$
If $\xi = x_0 \alpha + y_0\beta$, then we have $\xi^3 =n\delta$, and taking norms to ${\mathbb Z}$ on both sides reveals that $N(\xi)^3=n^2N(\delta)=n^2N(I)^3$. Hence $n^2=(N(\xi)/N(I))^3$ is a cube, and therefore $n=m^3$ is itself a cube.
This then implies that $\delta$ must be a cube in $({\mathcal O}\otimes{\mathbb Q})^\times$ as well,
namely, $\delta=(\xi/m)^3$, as desired.
\end{proof}
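Lemma~\ref{lemma2} is easy to test in examples. The Python sketch below (illustrative; the sample forms are ad hoc) detects rational zeros of an integer-matrix form via the rational root theorem. The form $3x^2y-5y^3$, which arises from the triple $({\mathbb Z}[\sqrt{-5}],{\mathbb Z}[\sqrt{-5}],1)$ with the cube $\delta=1$, indeed has the zero $(1:0)$:

```python
from fractions import Fraction

def divisors(n):
    return [k for k in range(1, n + 1) if n % k == 0]

def has_rational_zero(a, b, c, d):
    # zeros (x0 : y0) in P^1(Q) of f = a x^3 + 3b x^2 y + 3c x y^2 + d y^3
    if a == 0 or d == 0:
        return True                       # (1:0), resp. (0:1), is a zero
    # rational root theorem applied to f(t, 1) = a t^3 + 3b t^2 + 3c t + d
    return any(a*t**3 + 3*b*t**2 + 3*c*t + d == 0
               for p in divisors(abs(d)) for q in divisors(abs(a))
               for t in (Fraction(p, q), Fraction(-p, q)))

assert has_rational_zero(0, 1, 0, -5)     # 3x^2 y - 5y^3: delta = 1 is a cube
assert has_rational_zero(1, 0, 0, 1)      # x^3 + y^3 = (x + y)(x^2 - xy + y^2)
assert not has_rational_zero(1, 0, 0, 2)  # x^3 + 2y^3 has no rational zero
```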
The reducible forms thus form a subgroup of $H({\mathcal O})$, which we denote
by $H_{{\rm red}}({\mathcal O})$; by the previous lemma, it is the subgroup
consisting of those triples $({\mathcal O},I,\delta)$, up to equivalence, for
which~$\delta$ is a cube. As in the introduction,
let~${\mathcal I}_3({\mathcal O})$ denote the 3-torsion subgroup of the ideal group of
${\mathcal O}$, i.e.~the set of invertible ideals $I$ of ${\mathcal O}$ such that $I^3 =
{\mathcal O}$. We may then define a map
\begin{equation}
\varphi: {\mathcal I}_3({\mathcal O}) \longrightarrow H({\mathcal O}) \qquad \qquad \varphi: I \mapsto ({\mathcal O},I,1).
\end{equation}
It is evident that $\varphi({\mathcal I}_3({\mathcal O})) \subseteq H_{{\rm red}}({\mathcal O})$. In fact, we show that $\varphi$ defines an isomorphism between ${\mathcal I}_3({\mathcal O})$ and $H_{{\rm red}}({\mathcal O})$:
\begin{theorem}\label{reducible}
The map $\varphi$ defines an isomorphism from ${\mathcal I}_3({\mathcal O})$ onto $H_{{\rm red}}({\mathcal O})$.
\end{theorem}
\begin{proof}
The preimage of the identity $({\mathcal O},{\mathcal O},1) \in H({\mathcal O})$ can only
contain 3-torsion ideals of the form $\kappa\cdot{\mathcal O}$ for $\kappa\in
({\mathcal O}\otimes {\mathbb Q})^\times$.
To be a 3-torsion ideal, we must have $(\kappa {\mathcal O})^3 = {\mathcal O}$ which implies that $\kappa^3 \in {\mathcal O}^\times$ and so $\kappa \in {\mathcal O}^\times$.
Therefore, the preimage of the identity is simply the ideal ${\mathcal O}$, and the map is injective.
It remains to show surjectivity onto $H_{{\rm red}}({\mathcal O})$. Assume $({\mathcal O},I,\delta) \in H_{{\rm red}}({\mathcal O})$. Since~$\delta$ is a cube by definition, let $\delta = \xi^3$ and recall that $({\mathcal O},I,\delta) \sim ({\mathcal O},\xi^{-1}I,1)$.
Thus $\xi^{-1}I \in {\mathcal I}_3({\mathcal O})$.
\end{proof}
\begin{corollary}\label{identity} Assume that ${\mathcal O}$ is maximal. Then $H_{{\rm red}}({\mathcal O})$ contains only the identity element of $H({\mathcal O})$, which can be represented by $({\mathcal O},{\mathcal O},1)$. \end{corollary}
\begin{proof} Since maximal orders are Dedekind domains, the only ideal that is 3-torsion in the ideal group is~${\mathcal O}$.
\end{proof}
\section{A proof of Davenport and Heilbronn's theorem on class numbers without class field theory}
Using the direct correspondence of Theorem~\ref{bcfideal}, we can now deduce Theorem~\ref{theoremdh} by counting the relevant binary cubic forms. To do so, we need the following result of Davenport describing the asymptotic behavior of the number of binary cubic forms of bounded reduced discriminant in subsets of $V_{\mathbb Z}^\ast$ defined by finitely many congruence conditions:
\begin{theorem}[{\bf \cite{Davenport1}, \cite{Davenport2}, \cite[\S5]{DH}, \cite[Thm.~26]{BST}}]\label{thmdensity}
Let $S$ denote a set of integer-matrix binary cubic forms
in $V_{{\mathbb Z}}^\ast$ defined by finitely many congruence conditions
modulo prime powers. Let $V_{\mathbb Z}^{\ast (0)}$ denote the set of
elements in~$V_{{\mathbb Z}}^\ast$
having positive reduced discriminant, and $V_{\mathbb Z}^{\ast (1)}$ the
set of elements in~$V_{\mathbb Z}^\ast$ having negative reduced
discriminant. For $i = 0$ or $1$, let $N^\ast(S \cap V_{\mathbb Z}^{\ast
(i)}, X)$ denote the number of {\em irreducible}
${\rm SL}_2({\mathbb Z})$-orbits on $S \cap V_{\mathbb Z}^{\ast (i)}$ having absolute
reduced discriminant $|{\rm disc}|$ less than $X$. Then
\begin{equation}\label{ramanujan}
\lim_{X \rightarrow \infty} \frac{N^\ast(S \cap V_{\mathbb Z}^{\ast (i)},X)}{X} = \frac{\pi^2}{4 \cdot n_i^\ast}\prod_p \mu_p^\ast(S),
\end{equation}
where $\mu_p^\ast(S)$ denotes the $p$-adic density of $S$ in $V_{{\mathbb Z}_p}^\ast$, and $n_i^\ast = 1$ or $3$ for $i = 0$ or $1$, respectively.
\end{theorem}
Note that in both \cite{BST} and \cite{DH}, this theorem is expressed
in terms of ${\rm GL}_2({\mathbb Z})$-orbits of binary cubic forms in $V_{\mathbb Z}$ with
discriminant ${\rm Disc}(\cdot)$ defined by $-27\cdot{\rm disc}(\cdot)$. Here,
we have stated the theorem for ${\rm SL}_2({\mathbb Z})$-orbits of integer-matrix
binary cubic forms, and the $p$-adic measure is normalized so that
$\mu_p^\ast(V^\ast_{{\mathbb Z}_p})=1$. This version is proved in exactly
the same way as the original theorem, but since:
\begin{enumerate}
\item[(a)] $V_{\mathbb Z}^\ast$ has index
9 in $V_{\mathbb Z}$;
\item[(b)] we use the reduced discriminant ${\rm disc}(\cdot)$ instead of ${\rm Disc}(\cdot)$; and
\item[(c)] there are two ${\rm SL}_2({\mathbb Z})$-orbits in every irreducible
${\rm GL}_2({\mathbb Z})$-orbit,
\end{enumerate} the constant on the right hand side of
(\ref{ramanujan}) changes from $\frac{\pi^2}{12n_i}$ as in \cite{BST}
to $\frac{\pi^2}{4n_i^\ast}$, where $n_i = 6$ or $2$ for $i = 0$ or
$1$, respectively.
Our goal then is to count the ${\rm SL}_2({\mathbb Z})$-orbits of forms in $V_{\mathbb Z}^{\ast
(i)}$ that correspond, under the bijection described in
Theorem~\ref{bcfideal}, to equivalence classes of triples $({\mathcal O}, I,
\delta)$ where ${\mathcal O}$ is a maximal quadratic ring and $I$ is
projective. However, if ${\mathcal O}$ is a maximal
quadratic ring, then
all ideals of~${\mathcal O}$ are projective,
and so the only restriction on elements $f \in V_{\mathbb Z}^{\ast (i)}$ is that
${\rm disc}(f)$ be the discriminant of a maximal quadratic ring.
It is well known that a quadratic
ring ${\mathcal O}$ is maximal if and only if the odd part of the discriminant of ${\mathcal O}$ is
squarefree, and ${\rm disc}({\mathcal O}) \equiv 1$, $5$, $8$, $9$, $12$, or $13
\pmod{16}$. We therefore define for every prime $p$:
$${\mathcal V}_p := \begin{cases} \{ f \in V_{{\mathbb Z}}^{\ast} : {\rm disc}(f) \equiv 1,5,8,9,12,13 \pmod{16}\} & \mbox{ if $p = 2$;} \\
\{ f \in V_{{\mathbb Z}}^{\ast} : {\rm disc}_p(f) \mbox{ is squarefree}\} & \mbox{ if $p \neq 2$.}
\end{cases}$$
Here, ${\rm disc}_p(f)$ is the $p$-part of ${\rm disc}(f)$. If we set ${\mathcal V} := \cap_p {\mathcal V}_p$, then ${\mathcal V}$ is the set of forms in $V_{{\mathbb Z}}^\ast$ for which the ring ${\mathcal O}$ in the associated triple $({\mathcal O},I,\delta)$ is a maximal quadratic ring. The following lemma describes the $p$-adic densities of ${\mathcal V}$ (here, we are using the fact that the $p$-adic closure of ${\mathcal V}$ is ${\mathcal V}_p$):
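This maximality criterion is simple to apply in practice. The following Python sketch (for illustration only; it assumes its input is an integer $D\equiv 0$ or $1 \pmod 4$, as any quadratic ring discriminant is) tests whether $D$ is the discriminant of a maximal quadratic ring:

```python
def odd_part(n):
    n = abs(n)
    while n % 2 == 0:
        n //= 2
    return n

def is_squarefree(n):
    k = 2
    while k * k <= n:
        if n % (k * k) == 0:
            return False
        k += 1
    return True

def is_maximal_disc(D):
    # D is the discriminant of a maximal quadratic ring iff its odd part
    # is squarefree and D = 1, 5, 8, 9, 12, or 13 (mod 16)
    return is_squarefree(odd_part(D)) and D % 16 in (1, 5, 8, 9, 12, 13)

assert is_maximal_disc(-20) and is_maximal_disc(5) and is_maximal_disc(-3)
assert not is_maximal_disc(-12)   # Z[sqrt(-3)] has index 2 in its maximal order
assert not is_maximal_disc(45)    # odd part 45 = 9 * 5 is not squarefree
```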
\begin{lemma}[{\bf\cite[Lem. 4]{DH}}]\label{lemdensity} We have $\mu_p^\ast({\mathcal V}_p)= \displaystyle\frac{(p^2 - 1)^2}{p^4}$.
\end{lemma}
We define $N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)$ analogously, as the
number of irreducible orbits in ${\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}$ having
absolute reduced discriminant between $0$ and $X$ (for $i =
0,1$). Since we are restricting to irreducible orbits,
$N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)$ counts
those (equivalence classes of) triples $({\mathcal O},I,\delta)$ where ${\mathcal O}$ is
maximal with $|{\rm Disc}({\mathcal O})| < X$, but by Corollary \ref{identity},
the identity of $H({\mathcal O})$
is {\it not} included
in this count.
We cannot immediately apply Theorem~\ref{thmdensity} to compute
$N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)$, as the set ${\mathcal V}$ is defined by
infinitely many congruence conditions. However, the following
uniformity estimate for the complement of ${\mathcal V}_p$ for all $p$ will
allow us in \S3.1
to strengthen (\ref{ramanujan}) to also hold when $S = {\mathcal V}$:
\begin{proposition}[{\bf\cite[Prop.~1]{DH}}]\label{unifest} Define ${\mathcal W}_p^\ast = V_{{\mathbb Z}}^\ast \backslash {\mathcal V}_p$ for all primes $p$. Then $N({\mathcal W}_p^\ast;X) = O(X/p^2)$ where the implied constant is independent of $p$.
\end{proposition}
\begin{remark}{\em
None of the proofs of the quoted results in this section use class
field theory except for \cite[Prop.~1]{DH}, which invokes one lemma
(namely, \cite[Lem.~7]{DH}) that is proved in \cite{DH} by class field
theory; however, this lemma immediately follows from our
Thms.~\ref{bcfideal} and~\ref{thmdensity}, which
do not appeal to class field theory.}
\end{remark}
\subsection{The mean number of 3-torsion elements in the class groups
of quadratic fields
without class
field theory (Proof of Theorem~\ref{theoremdh})}
\label{thm1pf}
We now complete the proof of Theorem~\ref{theoremdh}. Suppose $Y$ is any
positive integer. It follows from Theorem~\ref{thmdensity} and Lemma~\ref{lemdensity} that
$$\lim_{X \rightarrow \infty} \frac{N^\ast(\cap_{p<Y} {\mathcal V}_p \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} = \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p<Y} \left(1-\frac{1}{p^2}\right)^2.$$
Letting $Y$ tend to $\infty$, we obtain
$$\limsup_{X \rightarrow \infty} \frac{N^\ast({\mathcal V} \cap
V_{\mathbb Z}^{\ast (i)}, X)}{X} \leq
\frac{\pi^2}{4n_i^\ast}\cdot\prod_{p}
\left(1-\frac{1}{p^2}\right)^2 =
\frac{3}{2n_i^\ast\zeta(2)}.$$
To obtain a lower bound for $N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)$, we use the fact that
\begin{equation}\label{above}
\bigcap_{p < Y} {\mathcal V}_p \subset ({\mathcal V} \cup \bigcup_{p \geq Y} {\mathcal W}_p^\ast).
\end{equation}
By Proposition~\ref{unifest} and (\ref{above}), we have
$$\liminf_{X \rightarrow \infty} \frac{N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} \geq \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p} \left(1-\frac{1}{p^2}\right)^2 - O(\sum_{p \geq Y} p^{-2}).$$
Letting $Y$ tend to $\infty$ again, we obtain
$$\liminf_{X \rightarrow \infty} \frac{N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} \geq \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p} \left(1-\frac{1}{p^2}\right)^2 = \frac{3}{2n_i^\ast\zeta(2)}.$$
Thus,
$$\lim_{X \rightarrow \infty} \frac{N^\ast({\mathcal V} \cap
V_{\mathbb Z}^{\ast (i)}, X)}{X} = \frac{9}{n_i^\ast\pi^2}.$$
Finally, we use Corollaries~\ref{hr2} and \ref{identity} to relate $N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (i)}, X)$ and 3-torsion ideal classes in maximal quadratic rings with discriminant less than $X$:
\begin{eqnarray*}
\sum_{{\mbox{\scriptsize $0 <{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} \bigl(3\cdot|{\rm Cl}_3({\mathcal O})|-1\bigr) &=&
N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (0)}, X)
; \\
\sum_{{\mbox{\scriptsize $0 <-{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} \bigl(|{\rm Cl}_3({\mathcal O})|-1\bigr) &=& N^\ast({\mathcal V} \cap V_{\mathbb Z}^{\ast (1)}, X).
\end{eqnarray*}
Since \begin{equation}\label{maxordcount}
\displaystyle{\lim_{X\rightarrow\infty}\frac{\displaystyle{\sum_{{\mbox{\scriptsize $0 <{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}}{X}} = \displaystyle{\frac{3}{\pi^2}} \qquad \mbox{and} \qquad
\displaystyle{\lim_{X\rightarrow\infty}\frac{\displaystyle{\sum_{{\mbox{\scriptsize $0 <-{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}}{X}}
= \displaystyle{\frac{3}{\pi^2}},
\end{equation}
we conclude
\[
\begin{array}{rcccl}
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle{\sum_{{\mbox{\scriptsize $0 <{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} |{\rm Cl}_3({\mathcal O})|}}
{\displaystyle{\sum_{{\mbox{\scriptsize $0 <{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}} }
& = &
\displaystyle{\frac{1}{3}\left(1+\lim_{X\rightarrow\infty}\frac{N^\ast({\mathcal V}\cap V_{\mathbb Z}^{\ast (0)};X)}
{{\displaystyle\sum_{{\mbox{\scriptsize $0 <{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}}\right)}
& = &
\displaystyle{\frac{1}{3}\left(1+\frac{9/n_0^\ast}{3}\right)}=
\displaystyle{\frac{4}{3}},\\[.5in]
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle \sum_{{\mbox{\scriptsize $0 <-{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}}|{\rm Cl}_3({\mathcal O})|}
{\displaystyle{\sum_{{\mbox{\scriptsize $0 <-{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}} }
&=&
\displaystyle{
1+ \lim_{X\rightarrow\infty}\frac{N^\ast({\mathcal V}\cap V_{\mathbb Z}^{\ast (1)};X)}{\displaystyle \sum_{{\mbox{\scriptsize $0 <-{\rm Disc}({\mathcal O}) < X$,}}\atop{\mbox{\scriptsize{${\mathcal O}$ maximal}}}} 1}}
&=&
\displaystyle{1+\frac{9/n_1^\ast}{3}} =
\displaystyle{2}.
\end{array}
\]
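As a quick arithmetic sanity check (illustrative only, not part of the proof), one can verify numerically that the limiting constants above combine as claimed. Here the values $n_0^\ast = 1$ and $n_1^\ast = 3$ are assumed, these being the values under which the displayed evaluations give $4/3$ and $2$:

```python
import math

zeta2 = math.pi ** 2 / 6            # zeta(2)
euler_prod = 1 / zeta2              # prod_p (1 - p^{-2}) = 6/pi^2

def orbit_density(n_star):
    """Limiting value of N*(V cap V^{(i)}, X)/X, i.e.
    pi^2/(4 n_i*) * prod_p (1 - p^{-2})^2 = 9/(n_i* pi^2)."""
    return (math.pi ** 2 / (4 * n_star)) * euler_prod ** 2

# Density 3/pi^2 of maximal discriminants of either sign, from (maxordcount).
max_disc_density = 3 / math.pi ** 2

# Assumed values n_0* = 1 and n_1* = 3 (these make the displayed
# evaluations come out to 4/3 and 2).
avg_real = (1 + orbit_density(1) / max_disc_density) / 3
avg_imag = 1 + orbit_density(3) / max_disc_density
```

Both quantities evaluate to the stated averages $4/3$ and $2$, and one checks likewise that $3/(2\zeta(2)) = 9/\pi^2$.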
\subsection{Generalization to orders}\label{noncftorders}
The above proof of Theorem~\ref{theoremdh} can be generalized to
orders to yield the special case of Theorem~\ref{diff} where we
average over all quadratic orders. This will also explain why the quantities
being averaged in Theorem~\ref{diff} arise naturally. All the
ingredients remain the same as in the previous subsection, except that
we now replace ${\mathcal V} \subset {\mathcal V}^{\ast}_{{\mathbb Z}}$ with the set
$\mathcal{S}$ of all {projective} integer-matrix binary cubic forms as
defined in \S2.3.
Recall that projective forms correspond under
Theorem~\ref{bcfideal} to valid triples with an invertible
ideal. However, since $N^\ast({\mathcal S},X)$ only counts irreducible orbits, by
Corollary~\ref{hr2} and Theorem~\ref{reducible}, we obtain
\begin{equation}\label{irredcountorders}
N^\ast(\mathcal{S}\cap V_{{\mathbb Z}}^{\ast (i)}, X) =
\begin{cases} \displaystyle\sum_{0 < {\rm Disc}({\mathcal O}) < X} 3\cdot|{\rm Cl}_3({\mathcal O})| - \sum_{0 < {\rm Disc}({\mathcal O}) < X} |{\mathcal I}_3({\mathcal O})| & \mbox{if } i = 0, \\[.35in]
\displaystyle\sum_{0 < -{\rm Disc}({\mathcal O}) < X} |{\rm Cl}_3({\mathcal O})| - \sum_{0 < -{\rm Disc}({\mathcal O}) < X} |{\mathcal I}_3({\mathcal O})| & \mbox{if } i = 1.
\end{cases}
\end{equation}
As before, let $Y$ be any positive integer and let ${\mathcal S}_p$ denote the
$p$-adic closure of ${\mathcal S}$ in $V_{{\mathbb Z}_p}^\ast$, so that $\cap_p {\mathcal S}_p =
{\mathcal S}$. It follows from Lemma~\ref{primdensity} and Theorem~\ref{thmdensity} that
$$\lim_{X \rightarrow \infty} \frac{N^\ast(\cap_{p<Y} {\mathcal S}_p \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} = \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p<Y} \left(1-\frac{1}{p^2}\right).$$
Letting $Y$ tend to $\infty$ gives
$$\limsup_{X \rightarrow \infty} \frac{N^\ast({\mathcal S} \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} \leq \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p} \left(1-\frac{1}{p^2}\right) = \frac{3}{2n_i^\ast}.$$
Again letting ${\mathcal W}_p^\ast$ denote $V_{\mathbb Z}^\ast \backslash {\mathcal V}_p$, we still have that
$$\bigcap_{p<Y} {\mathcal S}_p \subset ({\mathcal S} \cup \bigcup_{p\geq Y} {\mathcal W}_p^\ast).$$
Thus, it follows from Proposition~\ref{unifest} that
$$\liminf_{X \rightarrow \infty} \frac{N^\ast({\mathcal S} \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} \geq \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p} \left(1-\frac{1}{p^2}\right) - O(\sum_{p \geq Y} p^{-2}),$$
and letting $Y$ tend to $\infty$ gives
$$\liminf_{X \rightarrow \infty} \frac{N^\ast({\mathcal S} \cap V_{\mathbb Z}^{\ast (i)}, X)}{X} \geq \frac{\pi^2}{4n_i^\ast}\cdot\prod_{p} \left(1-\frac{1}{p^2}\right) = \frac{3}{2n_i^\ast}.$$
Thus
$$\lim_{X\rightarrow\infty} \frac{N^\ast({\mathcal S} \cap V_{{\mathbb Z}}^{\ast (i)},X)}{X} = \frac{3}{2n_i^\ast}.$$
Since
\begin{equation}\label{disc}
\displaystyle{\lim_{X\rightarrow\infty}\frac{\displaystyle \sum_{{0<{\rm Disc}({\mathcal O})<X}} 1}{X}}
= \displaystyle{\frac{1}{2}} \qquad \mbox{and} \qquad
\displaystyle{\lim_{X\rightarrow\infty}\frac{\displaystyle\sum_{{0<-{\rm Disc}({\mathcal O})<X}} 1}{X}} = \displaystyle{\frac{1}{2}},
\end{equation}
by (\ref{irredcountorders}) we conclude that
\begin{equation}\label{difforders}
\begin{array}{ccccl}
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle{\sum_{{0<{\rm Disc}({\mathcal O})<X}} |{\rm Cl}_3({\mathcal O})| - \frac{1}{3}|{\mathcal I}_3({\mathcal O})|}}
{\displaystyle{\sum_{{0<{\rm Disc}({\mathcal O})<X}} 1}} }
&\!\!=\!\!&
\displaystyle{\frac{1}{3}\left(\frac{\displaystyle \frac{3}{2n_0^\ast}}{\displaystyle\frac{1}{2}}\right)}&\!\!=\!\!&
\displaystyle{1}, \quad \mbox{ and}\\[.5in]
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle\sum_{{0<-{\rm Disc}({\mathcal O})<X}}|{\rm Cl}_3({\mathcal O})| - |{\mathcal I}_3({\mathcal O})|}
{\displaystyle\sum_{{0<-{\rm Disc}({\mathcal O})<X}} 1} }
&\!\!=\!\!&
\displaystyle{\frac{\displaystyle\frac{3}{2n_1^\ast}}{\displaystyle\frac{1}{2}}} &\!\!=\!\!&
\displaystyle{1}.
\end{array}
\end{equation}
This proves Theorem~\ref{diff}
in the case that $\Sigma$
is the set of all isomorphism classes of quadratic
orders.
In the next section, we will count also the reducible
${\rm SL}_2({\mathbb Z})$-orbits of ${\mathcal S} \cap V_{{\mathbb Z}}^{\ast (i)}$ having bounded
reduced discriminant, which will establish the mean total number of
3-torsion elements in the class groups of imaginary quadratic
orders and of real quadratic orders, as stated in Theorem~\ref{thmorders}.
\section{The mean number of 3-torsion elements in the ideal
groups of quadratic orders (Proofs of Theorems~\ref{thmorders} and~\ref{sigmaid})}
We have seen in \S\ref{noncftorders} that counting irreducible orbits
of integer-matrix binary cubic forms and using the correspondence described in
Theorem~\ref{bcfideal} is not enough to conclude Theorem~\ref{thmorders}. In addition,
Theorem \ref{reducible} shows that in order to establish
Theorem~\ref{sigmaid}, we must compute the number of {\em reducible}
integer-matrix binary cubic forms, up to the action of ${\rm SL}_2({\mathbb Z})$,
having bounded reduced discriminant. In \cite{Davenport1,Davenport2},
Davenport computed the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of
irreducible integer-coefficient binary cubic forms of bounded
non-reduced discriminant. In this section, we similarly count
reducible integer-matrix forms with bounded reduced discriminant and
establish the following result, from which both
Theorems~\ref{thmorders} and \ref{sigmaid} follow.
\begin{proposition} \label{redprop}
Let $h_{{\rm proj},{\rm red}}(D)$ denote the number of
${\rm SL}_2({\mathbb Z})$-equivalence classes of projective and reducible
integer-matrix binary cubic forms of reduced discriminant $D$. Then
\[
\begin{array}{ccccl}
\displaystyle{\sum_{0 < {\rm Disc}({\mathcal O}) < X}} |H_{\rm red}({\mathcal O})| &=& \displaystyle{\sum_{0<D<X}} h_{{\rm proj},{\rm red}}(D) &=& \displaystyle{\frac{\zeta(2)}{2\zeta(3)}}\cdot X + o(X) \quad \mbox{and} \\
\displaystyle{\sum_{0 < -{\rm Disc}({\mathcal O}) < X}} |H_{\rm red}({\mathcal O})| &=& \displaystyle{\sum_{0<-D<X} h_{{\rm proj},{\rm red}}(D)} &=& \displaystyle{\frac{\zeta(2)}{2\zeta(3)}}\cdot X + o(X).
\end{array}
\]
\end{proposition}
Recall that by definition, if ${\mathcal O}$ is the quadratic ring of discriminant $D$, then $|H_{{\rm red}}({\mathcal O})| = h_{{\rm proj},{\rm red}}(D).$
By Theorem~\ref{reducible}, (\ref{disc}), and
Proposition~\ref{redprop}, we then obtain:
\begin{corollary}[Theorem~\ref{sigmaid}] Let ${\mathcal I}_3({\mathcal O})$ denote the $3$-torsion subgroup of the ideal group of the quadratic order ${\mathcal O}$. Then
\[
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle{\sum_{{0<{\rm Disc}({\mathcal O})<X}} |{\mathcal I}_3({\mathcal O})|}}
{\displaystyle{\sum_{{0<{\rm Disc}({\mathcal O})<X}} 1}} }
=
\displaystyle{\frac{\zeta(2)}{\zeta(3)}} \qquad \mbox{and} \qquad
\displaystyle{
\lim_{X\rightarrow\infty}\frac{\displaystyle\sum_{{0<-{\rm Disc}({\mathcal O})<X}}|{\mathcal I}_3({\mathcal O})|}
{\displaystyle\sum_{{0<-{\rm Disc}({\mathcal O})<X}} 1} }
=
\displaystyle{\frac{\zeta(2)}{\zeta(3)}}.
\]
\end{corollary}
Finally, combining Theorem~\ref{sigmaid} with (\ref{difforders}), we conclude:
\begin{corollary}[Theorem~\ref{thmorders}]
\[
\lim_{X \rightarrow \infty} \frac{\displaystyle\sum_{0<{\rm Disc}({\mathcal O})<X} |{\rm Cl}_3({\mathcal O})|}{\displaystyle\sum_{0<{\rm Disc}({\mathcal O})<X} 1}
= \displaystyle 1 + \frac{1}{3}\cdot\frac{\zeta(2)}{\zeta(3)}, \quad \mbox{and} \quad
\lim_{X \rightarrow \infty} \frac{\displaystyle\sum_{0<-{\rm Disc}({\mathcal O})<X} |{\rm Cl}_3({\mathcal O})|}{\displaystyle\sum_{0<-{\rm Disc}({\mathcal O})<X} 1} = \displaystyle 1 + \frac{\zeta(2)}{\zeta(3)}.
\]
\end{corollary}
We now turn to the proof of Proposition~\ref{redprop}.
\subsection{Counting reducible forms of negative reduced discriminant}
We first consider the case of negative reduced discriminant, when the quadratic
Hessian covariant of a binary cubic form is positive-definite.
Gauss described a fundamental domain for
the action of~${\rm SL}_2({\mathbb Z})$ on positive-definite real binary quadratic forms
in terms of inequalities on their coefficients. This allows us to
describe an analogous fundamental domain for the action of~${\rm SL}_2({\mathbb Z})$ on real binary cubic forms
of negative reduced discriminant. Bounding the reduced discriminant
cuts out a region of the fundamental domain which can be described
via suitable bounds on the coefficients of the binary cubic forms (cf.\ Lemma
\ref{funddomain}). Within this region, we show that the number of
${\rm SL}_2({\mathbb Z})$-classes of reducible integer-matrix cubic forms of
bounded reduced discriminant can be computed, up to a negligible error
term, by counting the number of integer-matrix binary cubic forms $f(x,y)$ in the region whose
$x^3$-coefficient is zero and $x^2y$-coefficient is positive.
We then carry out the latter count explicitly.
\subsubsection{Reduction theory}
Recall that if $f(x,y) = ax^3 + 3bx^2y + 3cxy^2 + dy^3$ is a binary cubic form where $a$, $b$, $c$, $d \in {\mathbb Z}$, then there is a canonically associated quadratic form $Q$, called the {\em quadratic $($Hessian$)$ covariant} of $f$, with coefficients defined by (\ref{defABC}):
\begin{equation}\label{quadcov}
Q(x,y) = Ax^2 + Bxy + Cy^2 \quad \mbox{where} \quad A = b^2 - ac, \quad B = ad - bc, \quad \mbox{and} \quad C = c^2 - bd.
\end{equation}
Note that ${\rm Disc}(Q) = {\rm disc}(f)$, so if ${\rm disc}(f)$ is negative, then its
quadratic covariant is definite.
The group ${\rm SL}_2({\mathbb Z})$ acts on the set of positive-definite real binary quadratic forms, and it is well known that a fundamental domain for this action consists of those quadratic forms whose coefficients satisfy
\begin{equation}\label{reduced}
-A < B \leq A < C \qquad \mbox{or} \qquad 0 \leq B \leq A = C.
\end{equation}
We call a binary quadratic form whose coefficients satisfy (\ref{reduced}) {\em reduced}. Any binary cubic form of negative reduced discriminant is ${\rm SL}_2({\mathbb Z})$-equivalent to one whose quadratic covariant is \emph{reduced}. Furthermore, if two such binary cubic forms are equivalent under ${\rm SL}_2({\mathbb Z})$ and both have quadratic covariants that are reduced, then their quadratic covariants are equal. The automorphism group of a reduced quadratic form always includes the identity matrix ${\rm Id}_2$ and its negation $-1\cdot{\rm Id}_2$. In all but two cases, this is the full automorphism group (the binary quadratic form $x^2 + y^2$ has two more distinct automorphisms, while $x^2 + xy + y^2$ has four more distinct automorphisms).
We now describe bounds on the coefficients of a binary cubic form $f$ with reduced quadratic covariant $Q$ satisfying $0 < -{\rm Disc}(Q) < X$.
\begin{lemma}[{\bf\cite[Lem.~1]{Davenport1}}]\label{funddomain} Let $a$, $b$, $c$, $d$ be real numbers, and let $A$, $B$, $C$ be defined as in $(\ref{quadcov})$. Suppose that
\begin{equation}\label{reduced2}
-A < B \leq A \leq C \qquad \mbox{and} \qquad 0 < 4AC - B^2 < X .
\end{equation}
Then $$|a| < \frac{\sqrt{2}}{\sqrt[4]{3}}\cdot X^{1/4} \qquad |b| < \frac{\sqrt{2}}{\sqrt[4]{3}}\cdot X^{1/4}$$
$$|ad| < \frac{2}{\sqrt{3}}\cdot X^{1/2} \qquad |bc| < \frac{2}{\sqrt{3}}\cdot X^{1/2}$$
$$|ac^3| < \frac{4}{3}\cdot X \qquad |b^3d| < \frac{4}{3}\cdot X$$
$$|c^2(bc - ad)| < X.$$
\end{lemma}
Note that in the previous lemma, we have included some non-reduced quadratic forms, specifically when $A = C$. However, such cases are negligible by the following lemma:
\begin{lemma}[{\bf\cite[Lem.~2]{Davenport1}}]
The number of integral binary cubic forms satisfying
$$-A < B \leq A \leq C \quad \mbox{and} \quad 0 < 4AC - B^2 < X$$ such that $A = C$ is $O(X^{\frac{3}{4}}\log X).$
\end{lemma}
Finally, the following lemma implies that the number of reducible integer-matrix binary cubic forms with reduced quadratic covariant and bounded reduced discriminant is asymptotically the same as the number of binary cubic forms with $a = 0$, reduced quadratic covariant, and bounded reduced discriminant.
\begin{lemma}[{\bf\cite[Lem.~3]{Davenport1}}] The number of reducible integral binary cubic forms $f$ with $a \neq 0$ that satisfy $-A < B \leq A \leq C$ and for which $0 < -{\rm Disc}(Q) < X$ is $O(X^{\frac{3}{4} + \epsilon})$, for any $\epsilon > 0$.
\end{lemma}
Let $h(D)$ denote the number of ${\rm SL}_2({\mathbb Z})$-classes of integer-matrix binary cubic forms of reduced discriminant~$D$, and define~$h'(D)$ to be the number of ${\rm SL}_2({\mathbb Z})$-classes of integer-matrix binary cubic forms of reduced discriminant~$D$ having a representative with $a = 0$ and quadratic covariant that satisfies $-A < B \leq A \leq C$. Then by the previous two lemmas, we see that
\begin{equation}\label{reduction}
\sum_{0<-D<X} h(D) = \sum_{0<-D<X} h'(D) + O(X^{\frac{3}{4} + \epsilon}).
\end{equation}
Thus, we focus our attention on computing $\sum_{0<-D<X} h'(D).$
\subsubsection{The number of binary cubic forms of bounded reduced discriminant with $a = 0$, $b>0$, and reduced quadratic covariant}
If $f(x,y) = 3bx^2y + 3cxy^2 + dy^3$, then the coefficients of the quadratic covariant of $f$ are given by
$$A = b^2, \quad B = -bc, \quad \mbox{and} \quad C = c^2 - bd,$$
and furthermore ${\rm disc}(f) = {\rm Disc}(Q) = -3b^2c^2 + 4b^3d$. We are interested in the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of integer-matrix binary cubic forms with $a = 0$ such that
\begin{equation}\label{reduced3}
-A < B \leq A \leq C \quad \mbox{and} \quad 0 < -{\rm Disc}(Q) < X.
\end{equation}
Note that in order for ${\rm Disc}(Q)$ to be nonzero, we must have $b \neq 0$. Furthermore, the ${\rm SL}_2({\mathbb Z})$-element $-{\rm Id}_2$ acts on a form $f(x,y)$ by negating its coefficients, and thus we can assume that our choice of representative for a given ${\rm SL}_2({\mathbb Z})$-equivalence class has both $a = 0$ and nonnegative $b$. Apart from the cases when $A = B = C$ or $A = C$ and $B = 0$, the restrictions $a = 0$ and $b > 0$ describe a unique representative in each ${\rm SL}_2({\mathbb Z})$-equivalence class of forms satisfying (\ref{reduced3}) and $a = 0$. If $A = B = C$, then the binary cubic form is of the form $3bx^2y - 3bxy^2$. Similarly, if $A = C$ and $B = 0$, then a binary cubic form with such a quadratic covariant is of the form $3bx^2y - by^3$. Thus, by Lemma~\ref{funddomain}, there are $O(X^{1/4})$ such forms in the region described by (\ref{reduced2}) with $a = 0$. If we define $h_1'(D)$ to be the number of integer-matrix binary cubic forms of reduced discriminant $D$ with $a = 0$, $b > 0$, and whose quadratic covariant satisfies $-A < B \leq A \leq C$, then by (\ref{reduction}) we also have
\begin{equation}\label{reduction2}
\sum_{0<-D<X} h(D) = \sum_{0<-D<X} h_1'(D) + O(X^{\frac{3}{4} + \epsilon}).
\end{equation}
To compute $\sum_{0<-D<X} h_1'(D)$, note that the inequalities in (\ref{reduced3}) imply that $-b^2 \leq bc < b^2 \leq c^2 - bd$ and $0 < 3b^2c^2 - 4b^3d <X$ when $a = 0$; hence if $b > 0$, then (replacing the boundary case $c = -b$ by the equivalent form with $c = b$)
$$-b < c \leq b \quad \mbox{and} \quad d < \frac{3}{4}\cdot b.$$
Also, since $B^2 \leq AC$, we have $bd \leq 0$, so $d \leq 0$. Using the upper bound on the reduced discriminant of $f$ and the inequality $A \leq C$ from (\ref{reduced3}), we conclude that
$$\frac{3c^2}{4b} - \frac{X}{4b^3} < d \leq \frac{c^2}{b} - b .$$
The number of integer-matrix binary cubic forms with $a = 0$ and $b > 0$ satisfying (\ref{reduced3}) is therefore
\begin{eqnarray*}
\sum_{0<-D<X} h_1'(D) &=&\sum_{0<b<\frac{\sqrt{2}}{\sqrt[4]{3}}X^{1/4}}\, \sum_{-b < c\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b}\#\{d \in {\mathbb Z} : \frac{3c^2}{4b} - \frac{X}{4b^3} < d \leq \frac{c^2}{b} - b\} \\
&=&\sum_{0<b<\frac{\sqrt{2}}{\sqrt[4]{3}}X^{1/4}}\,\sum_{-b < c\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b} \left(\left(\frac{c^2}{b} - b\right) - \left(\frac{3c^2}{4b}-\frac{X}{4b^3}\right) + O(1)\right) \\
&=&\sum_{0<b<\frac{\sqrt{2}}{\sqrt[4]{3}}X^{1/4}} \left(2b\cdot\frac{X}{4b^3} + O(b^2)\right) \\
&=& \frac{\zeta(2)}{2}X + O(X^{3/4}).
\end{eqnarray*}
Thus, by (\ref{reduction2}) the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of reducible integer-matrix binary cubic forms having bounded negative reduced discriminant is given by
$$\sum_{0<-D<X} h(D) = \frac{\zeta(2)}{2}\cdot X + O(X^{\frac34+\epsilon}).$$
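The lattice-point count above can be corroborated numerically. The following brute-force sketch (illustrative only; the function name is ours) enumerates the triples $(b,c,d)$ allowed by (\ref{reduced3}) with $a = 0$ and $b > 0$, using the strictly reduced boundary convention $-b \leq c < b$, which yields the same count:

```python
import math

def count_neg_disc(X):
    """Count forms 3b x^2 y + 3c x y^2 + d y^3 (so a = 0) with b > 0 whose
    Hessian (A, B, C) = (b^2, -bc, c^2 - bd) satisfies -A < B <= A <= C and
    0 < -disc < X, where disc = -3 b^2 c^2 + 4 b^3 d."""
    total = 0
    bmax = int(2 * X ** 0.25) + 2          # generous; the lemma gives b = O(X^{1/4})
    for b in range(1, bmax):
        for c in range(-b, b):             # -A < B <= A  <=>  -b <= c < b
            d_hi = (c * c - b * b) // b    # A <= C  <=>  d <= (c^2 - b^2)/b
            # -disc < X  <=>  d > (3 b^2 c^2 - X)/(4 b^3);  -disc > 0 is
            # then automatic, since d <= (c^2 - b^2)/b and |c| <= b.
            d_lo = (3 * b * b * c * c - X) // (4 * b ** 3) + 1
            total += max(0, d_hi - d_lo + 1)
    return total
```

For moderate $X$ (say $X = 10^8$) the ratio of this count to $X$ is already within a few percent of $\zeta(2)/2 \approx 0.822$, consistent with the $O(X^{3/4+\epsilon})$ error term.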
\pagebreak
\subsubsection{Restriction to projective forms}
We now complete the proof of Proposition \ref{redprop} in the case of negative reduced
discriminant by further restricting to projective forms. Let
$h_{1,{\rm proj}}'(D)$ be the number of projective integer-matrix binary cubic
forms of reduced discriminant $D$ with $a = 0$, $b > 0$, and reduced
quadratic covariant. By~(\ref{projbcf}), we know that such a form is
projective if and only if $(b^2,bc,c^2-bd) = 1$, or equivalently if
and only if $(b,c) = 1$. Thus $h_{1,{\rm proj}}'(D)$ counts those integer-matrix binary cubic forms having reduced
discriminant~$D$, $a = 0$, $b > 0$, $(b,c) = 1$, and reduced
quadratic covariant. Define $h_{1,n}'(D)$ to be the number of integer-matrix binary
cubic forms of reduced discriminant~$D$ with $a = 0$, $b > 0$, $n \mid (b,c)$,
and reduced quadratic covariant. Note that $h_{1,1}'(D) = h_{1,{\rm proj}}'(D).$ We
compute $\sum_{0<-D<X} h_{1,{\rm proj}}'(D)$ by using the
inclusion-exclusion principle:
$$\sum_{0<-D<X} h_{1,{\rm proj}}'(D) = \sum_{0<-D<X} \sum_{n = 1}^\infty \mu(n)h_{1,n}'(D) = \sum_{n=1}^\infty \mu(n)\cdot\left( \sum_{0<-D<X} h_{1,n}'(D)\right),$$
where $\mu(\cdot)$ denotes the M\"obius function.
Fix a positive integer $n$, and let $3bx^2y + 3cxy^2 + dy^3$ have reduced discriminant $D = -3b^2c^2 + 4b^3d$, $\,b > 0$, and $n \mid (b,c)$. Let $b = n\cdot b_1$ and $c = n \cdot c_1$. Assume that $A = b^2$, $B = -bc$, $C = c^2-bd$ satisfy (\ref{reduced3}). Then
$$-b_1 < c_1 \leq b_1 \quad \mbox{ and } \quad d < \frac{3}{4}nb_1.$$ Furthermore, $d \leq 0$ and $d$ satisfies
$$\frac{3nc_1^2}{4b_1} - \frac{X}{4n^3b_1^3} < d \leq \frac{nc_1^2}{b_1} - nb_1.$$
Therefore, the number of integer-matrix binary cubic forms with $a = 0$, $b > 0$, and $n \mid (b,c)$ satisfying~(\ref{reduced3}) is:
\begin{eqnarray*}
\sum_{0<-D<X} h_{1,n}'(D)
&=& \sum_{0<b_1<\frac{\sqrt{2}}{\sqrt[4]{3}n}X^{1/4}}\, \sum_{-b_1 < c_1\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b_1} \#\{d : \frac{3nc_1^2}{4b_1} - \frac{X}{4n^3b_1^3} < d \leq \frac{nc_1^2}{b_1} - nb_1\} \\
&=& \sum_{0<b_1<\frac{\sqrt{2}}{\sqrt[4]{3}n}X^{1/4}}\, \sum_{-b_1 < c_1\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b_1} \left(\left(\frac{nc_1^2}{b_1} - nb_1\right) - \left(\frac{3nc_1^2}{4b_1}-\frac{X}{4n^3b_1^3}\right) + O(1)\right) \\
&=& \sum_{0<b_1<\frac{\sqrt{2}}{\sqrt[4]{3}n}X^{1/4}} \left(2b_1\cdot\frac{X}{4n^3b_1^3} + O(nb_1^2)\right) \\
&=& \frac{\zeta(2)}{2n^3}X + O\left(\frac{X^{3/4}}{n^2}\right),
\end{eqnarray*}
where the implied constants are independent of $n$.
We conclude that
\begin{eqnarray*}
\sum_{0<-D<X} h_{1,{\rm proj}}'(D) &=& \sum_{n=1}^\infty \mu(n) \cdot \left(\frac{\zeta(2)}{2n^3}X + O\left(\frac{X^{3/4}}{n^2}\right)\right) \\
&=& \frac{\zeta(2)}{2\zeta(3)}\cdot X + O(X^{3/4}).
\end{eqnarray*}
If we now let $h_{{\rm proj},{\rm red}}(D)$ denote the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of projective reducible integer-matrix cubic forms of reduced discriminant $D$, then by the analogous reduction formula as in (\ref{reduction2}), we obtain
$$\sum_{0<-D<X} h_{{\rm proj},{\rm red}}(D) = \frac{\zeta(2)}{2\zeta(3)}\cdot X + o(X).$$
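Restricting the same enumeration to coprime $(b,c)$ gives a numerical check of the projective count: the density drops by the expected factor $1/\zeta(3)$, from $\zeta(2)/2 \approx 0.822$ to $\zeta(2)/(2\zeta(3)) \approx 0.684$. A sketch (illustrative, same conventions as before):

```python
import math

def count_neg_disc_projective(X):
    """Brute-force count as in the negative-reduced-discriminant enumeration
    (a = 0, b > 0, Hessian satisfying -A < B <= A <= C, 0 < -disc < X),
    restricted to projective forms: gcd(b, c) = 1, which is equivalent
    to (b^2, bc, c^2 - bd) = 1."""
    total = 0
    bmax = int(2 * X ** 0.25) + 2
    for b in range(1, bmax):
        for c in range(-b, b):
            if math.gcd(b, c) != 1:        # the Moebius sieve, done exactly
                continue
            d_hi = (c * c - b * b) // b
            d_lo = (3 * b * b * c * c - X) // (4 * b ** 3) + 1
            total += max(0, d_hi - d_lo + 1)
    return total
```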
\subsection{Counting reducible forms of positive reduced discriminant}
Recall that implicit in our study of reducible binary cubic forms of
negative reduced discriminant was the fact that their quadratic covariants
were definite, and thus the fundamental domain for positive definite
quadratic forms allowed us to make a well-defined choice for a
representative for each ${\rm SL}_2({\mathbb Z})$-class of binary cubic forms we
were counting. If $f(x,y) = ax^3 + 3bx^2y + 3cxy^2 + dy^3$ has
positive reduced discriminant, then its quadratic covariant as defined in
(\ref{quadcov}) is indefinite. However, if we can associate a
different ${\rm SL}_2({\mathbb Z})$-covariant quadratic form that is positive
definite to each binary cubic form of positive reduced discriminant, then we can
carry out the analogous count. Again, we follow Davenport
\cite{Davenport2} and note that a binary cubic form of the form
$f(x,y) = ax^3 + 3bx^2y + 3cxy^2 + dy^3$ with positive reduced discriminant
has one real root and two complex roots. Thus, if $\alpha$ denotes the
real root, then we can write
\begin{equation*}
f(x,y) = (y-\alpha\cdot x)(Px^2 + Qxy + Ry^2) \quad \mbox{where} \quad
P = 3b + 3c\alpha + d\alpha^2, \quad Q = 3c + d\alpha, \quad R = d.
\end{equation*}
We call the binary quadratic form with coefficients $P$, $Q$, and $R$ the \emph{$($definite$)$ quadratic factor} of the binary cubic form $f$.
\subsubsection{Reduction theory}
As in the case of reduced negative discriminant, a fundamental domain for the action of ${\rm SL}_2({\mathbb Z})$ consists of those real quadratic forms $Px^2 + Qxy + Ry^2$ whose coefficients satisfy
\begin{equation}\label{reduced4}
-P < Q \leq P < R \quad \mbox{or} \quad 0 \leq Q \leq P = R.
\end{equation}
It is clear that any real binary cubic form having positive reduced discriminant is properly equivalent to one with quadratic factor satisfying the inequalities in (\ref{reduced4}). If there are two such binary cubic forms that are equivalent under ${\rm SL}_2({\mathbb Z})$ and both quadratic factors satisfy (\ref{reduced4}), then the element of ${\rm SL}_2({\mathbb Z})$ taking one cubic form to another must preserve the quadratic factor up to scaling. Thus, it must be an element of the automorphism group of the quadratic factor, hence either ${\rm Id}_2$ or $-{\rm Id}_2$ when the quadratic factor is not a scalar multiple of $x^2 + y^2$ or $x^2 + xy + y^2$. Apart from these two exceptional cases, in each such ${\rm SL}_2({\mathbb Z})$-equivalence class there is one binary cubic form with reduced quadratic factor and $b > 0$.
Furthermore, using the fact that the non-reduced discriminant of a binary form is the product of the pairwise differences of the roots, one can show that
$${\rm disc}(f) = \frac{1}{27}(4PR - Q^2)(P + Q\alpha + R\alpha^2)^2$$
if $\alpha$, $P$, $Q$, $R$ are defined as above. We now state the analogues of Lemma~\ref{funddomain} and the two lemmas that follow it.
\pagebreak
\begin{lemma}\label{funddomain2} {\bf \cite[Lem.~1]{Davenport2}} Let $\alpha$, $P$, $Q$, $R$ be real numbers satisfying
\begin{equation}\label{reduced5}
-P < Q \leq P \leq R \quad \mbox{ and } \quad 0 < \frac{1}{27}(4PR-Q^2)(P + Q\alpha + R\alpha^2)^2 <X.
\end{equation}
If $a$, $b$, $c$, and $d$ are given by the formulas
$$a = -P\alpha, \quad b = \frac{P-Q\alpha}{3}, \quad c = \frac{Q - R\alpha}{3}, \quad d = R,$$
then
$$|a| < \sqrt{6} X^{1/4} \quad |b| < 2\sqrt[4]{\frac{2}{9}}X^{1/4}$$
$$|ad| < 3\sqrt{2} X^{1/2} \quad |bc| < \frac{4\sqrt{2}}{3}X^{1/2}$$
$$|ac^3| < \frac{20}{3}X \quad |b^3d| < \frac{20}{3}X$$
$$c^2|9bc-ad| < 432X.$$
\end{lemma}
\begin{lemma}[{\bf\cite[Lem.~2]{Davenport2}}] The number of integral binary cubic forms $f$ satisfying
$$-P < Q \leq P \leq R \quad \mbox{and} \quad 0 < {\rm disc}(f) < X$$
such that $P = R$ is $O(X^{\frac{3}{4}}\log X)$.
\end{lemma}
\begin{lemma}[{\bf\cite[Lem.~3]{Davenport2}}] The number of reducible integral binary cubic forms $f$ with $a \neq 0$ that satisfy $-P < Q \leq P \leq R$ and for which $0 < {\rm disc}(f) < X$ is $O(X^{\frac{3}{4} + \epsilon})$, for any $\epsilon > 0$.
\end{lemma}
Define $h'(D)$ to be the number of ${\rm SL}_2({\mathbb Z})$-classes of integer-matrix binary cubic forms having reduced discriminant $D$ with $a = 0$ and whose quadratic factor satisfies $$-P < Q \leq P \leq R.$$ Then by the previous two lemmas, we see that
\begin{equation}\label{reduction3}
\sum_{0<D<X} h(D) = \sum_{0<D<X} h'(D) + O(X^{\frac{3}{4}+\epsilon}).
\end{equation}
Thus, we focus our attention on computing $\sum_{0<D<X} h'(D)$.
\subsubsection{The number of binary cubic forms of bounded reduced discriminant with $a = 0$, $b > 0$ and reduced quadratic factor}
If $f(x,y) = 3bx^2y + 3cxy^2 + dy^3$, then the coefficients of its quadratic factor are given by
$$P = 3b, \quad Q = 3c, \quad R = d.$$
Furthermore, ${\rm disc}(f) = -\frac{1}{27}{\rm Disc}(Px^2 + Qxy + Ry^2)P^2 = -3b^2c^2 + 4b^3d$. We are interested in the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of integer-matrix binary cubic forms $f$ with $a = 0$ such that
$$-P < Q \leq P \leq R \quad \mbox{and} \quad 0 < {\rm disc}(f) < X.$$
Note that in order for the discriminant of $f$ to be nonzero, we must have $b \neq 0$. Thus, we can assume that our choice of representative for a given ${\rm SL}_2({\mathbb Z})$-equivalence class has both $a = 0$ and positive $b$. Apart from the cases when $P = Q = R$ or $P = R$ and $Q = 0$, the restrictions $a = 0$ and $b > 0$ describe a unique representative in each ${\rm SL}_2({\mathbb Z})$-equivalence class of forms satisfying (\ref{reduced4}) and~$a = 0$. If $P = Q = R$, then the binary cubic form is of the form $3bx^2y + 3bxy^2 + 3by^3$. Similarly, if $P = R$ and $Q = 0$, then the binary cubic form is of the form $3bx^2y + 3by^3$. Thus, by Lemma~\ref{funddomain2}, there are $O(X^{1/4})$ such forms in the region described by (\ref{reduced5}) with $a = 0$. If we define $h_1'(D)$ to be the number of integer-matrix binary cubic forms having reduced discriminant $D$ with $a = 0$ and $b > 0$ and whose quadratic factor satisfies $-P < Q \leq P \leq R$, then by (\ref{reduction3}) we have
\begin{equation}\label{reduction4}
\sum_{0<D<X} h(D) = \sum_{0<D<X} h_1'(D) + O(X^{\frac{3}{4} + \epsilon}).
\end{equation}
To compute $\sum_{0<D<X} h_1'(D)$, note that the inequalities in (\ref{reduced5}) imply that $-3b < 3c \leq 3b \leq d$ and $0 < -3b^2c^2 + 4b^3d < X$ when $a = 0$; hence if $b > 0$, then
$$-b < c \leq b \quad \mbox{and} \quad 3b < d.$$
Thus $d > 0$. Using the upper bound on the reduced discriminant of $f$, we conclude that
$$3b < d < \frac{X}{4b^3} + \frac{3c^2}{4b}.$$
Therefore, the number of integer-matrix binary cubic forms with $a = 0$ and $b > 0$ satisfying (\ref{reduced5}) is
\begin{eqnarray*}
\sum_{0<D<X} h_1'(D) &=& \sum_{0<b<\sqrt[4]{\frac{32}{9}}X^{1/4}}\, \sum_{-b < c\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b} \#\{d : 3b < d < \frac{X}{4b^3} + \frac{3c^2}{4b}\} \\
&=& \sum_{0<b<\sqrt[4]{\frac{32}{9}}X^{1/4}}\, \sum_{-b < c\phantom{\frac{\sqrt{2}}{\sqrt[4]{3}}}\hspace{-.18in}\leq b} \left(\frac{X}{4b^3} + \frac{3c^2}{4b} - 3b + O(1)\right) \\
&=& \sum_{0<b<\sqrt[4]{\frac{32}{9}}X^{1/4}} \left(2b\cdot\frac{X}{4b^3} + O(b^2)\right) \\
&=& \frac{\zeta(2)}{2}X + O(X^{3/4}).
\end{eqnarray*}
Hence, by (\ref{reduction4}), the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of reducible integer-matrix binary cubic forms of bounded positive reduced discriminant is given by
$$\sum_{0<D<X} h(D) = \frac{\zeta(2)}{2}\cdot X + O(X^{3/4 + \epsilon}).$$
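As a quick numerical sanity check (not part of the proof), one may enumerate the representatives $3bx^2y + 3cxy^2 + dy^3$ with $b > 0$, $-b < c \leq b$, $3b < d$, and $0 < -3b^2c^2 + 4b^3d < X$ directly; the count should approach $\frac{\zeta(2)}{2}X \approx 0.8225\,X$, with the $O(X^{3/4})$ error still visible at moderate $X$ (a brute-force sketch; the helper name is ours):

```python
def count_reducible(X):
    """Count triples (b, c, d) with b > 0, -b < c <= b, 3b < d, and
    0 < disc = -3*b**2*c**2 + 4*b**3*d < X (one triple per form).

    Positivity of disc is automatic: d > 3b >= 3*c**2/(4b)."""
    total = 0
    b = 1
    while 9 * b**4 <= X:                 # for larger b no admissible d remains
        den = 4 * b**3
        for c in range(-b + 1, b + 1):
            num = X + 3 * b * b * c * c  # disc < X  <=>  d < num/den
            d_max = (num - 1) // den     # largest integer d strictly below num/den
            total += max(0, d_max - 3 * b)
        b += 1
    return total
```

At $X = 10^8$ the ratio to $X$ is still a few percent below $\zeta(2)/2$, consistent with the stated error term.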
\subsubsection{Restriction to projective forms}
We have seen that the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of
reducible integer-matrix binary cubic forms with positive reduced discriminant
less than $X$ is $\frac{\zeta(2)}{2}X+o(X)$. We complete the proof
of Proposition~\ref{redprop} by further restricting to projective
forms. Let $h_{1,{\rm proj}}'(D)$ be the number of projective integer-matrix
binary cubic forms having reduced discriminant $D$, $a = 0$, $b > 0$, and
reduced definite quadratic factor. By~(\ref{projbcf}), we know that
such a form is projective if and only if $(b^2,bc,c^2-bd) = 1$, or
equivalently if and only if $(b,c) = 1$. Thus, $h_{1,{\rm proj}}'(D)$ counts those integer-matrix forms having reduced discriminant $D$, $a = 0$, $b > 0$, $(b,c) = 1$, and reduced quadratic factor. Define $h_{1,n}'(D)$ to be the number of ${\rm SL}_2({\mathbb Z})$-classes of integer-matrix binary cubic forms having reduced discriminant $D$, $a = 0$, $b > 0$, $n \mid (b,c)$, and reduced quadratic factor. Then we have $h_{1,1}'(D) = h_{1,{\rm proj}}'(D)$. As before, we compute $\sum_{0<D<X} h_{1,{\rm proj}}'(D)$ by using the inclusion-exclusion principle:
$$\sum_{0<D<X} h_{1,{\rm proj}}'(D) = \sum_{0<D<X} \sum_{n = 1}^\infty \mu(n)h_{1,n}'(D) = \sum_{n=1}^\infty \mu(n)\cdot\left( \sum_{0<D<X} h_{1,n}'(D)\right).$$
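The inclusion-exclusion step above is the usual M\"obius sieve: summing $\mu(n)$ against the counts with $n \mid (b,c)$ isolates the pairs with $(b,c) = 1$. A minimal self-contained check of this identity on a box of pairs (the helper names are ours):

```python
from math import gcd

def mobius(n):
    """Naive Moebius function via trial division."""
    if n == 1:
        return 1
    sign, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # n is not squarefree
            sign = -sign
        p += 1
    if m > 1:
        sign = -sign
    return sign

B = 60
# direct count of coprime pairs in a B x B box ...
direct = sum(1 for b in range(1, B + 1) for c in range(1, B + 1)
             if gcd(b, c) == 1)
# ... versus the Moebius-sieved count of pairs with n | (b, c)
sieved = sum(mobius(n) * (B // n) ** 2 for n in range(1, B + 1))
```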
Fix $n \in {\mathbb Z}$, and let $3bx^2y + 3cxy^2 + dy^3$ have reduced discriminant $D = -3b^2c^2 + 4b^3d$, $b > 0$, and $n \mid (b,c)$. Let $b = n\cdot b_1$ and $c = n \cdot c_1$. Assume that $P$, $Q$, $R$ satisfy (\ref{reduced4}); then
$$-b_1 < c_1 \leq b_1 \quad \mbox{ and } \quad 3nb_1 < d.$$ Furthermore, $d > 0$ and $d$ satisfies
$$3nb_1 < d < \frac{X}{4n^3b_1^3} + \frac{3nc_1^2}{4b_1}.$$
Therefore, the number of integer-matrix binary cubic forms with $a = 0$, $b > 0$, and $n \mid (b,c)$ satisfying (\ref{reduced4}) is:
\begin{eqnarray*}
\sum_{0<D<X} h_{1,n}'(D) &=&\sum_{0<b_1<\sqrt[4]{\frac{32}{9n}}X^{1/4}}\, \sum_{-b_1 < c_1\phantom{\sqrt[4]{\frac{32}{9n}}X^{1/4}} \hspace{-.555in}\leq b_1} \#\{d : 3nb_1 < d < \frac{X}{4n^3b_1^3} + \frac{3nc_1^2}{4b_1}\} \\
&=&\sum_{0<b_1<\sqrt[4]{\frac{32}{9n}}X^{1/4}}\, \sum_{-b_1 < c_1\phantom{\sqrt[4]{\frac{32}{9n}}X^{1/4}}\hspace{-.555in}\leq b_1} \left(\frac{X}{4n^3b_1^3} + \frac{3nc_1^2}{4b_1} - 3nb_1 + O(1)\right) \\
&=&\sum_{0<b_1<\sqrt[4]{\frac{32}{9n}}X^{1/4}} \left(2b_1\cdot\frac{X}{4n^3b_1^3} + O(nb_1^2)\right) \\
&=& \frac{\zeta(2)}{2n^3}X + O\left(\frac{X^{3/4}}{n^2}\right),
\end{eqnarray*}
where the implied constants are again independent of $n$. We conclude that
\begin{eqnarray*}
\sum_{0<D<X} h_{1,{\rm proj}}'(D) &=& \sum_{n=1}^\infty \mu(n) \cdot \left(\frac{\zeta(2)}{2n^3}X + O\left(\frac{X^{3/4}}{n^2}\right)\right) \\
&=& \frac{\zeta(2)}{2\zeta(3)}X + O(X^{3/4}).
\end{eqnarray*}
If we let $h_{{\rm proj},{\rm red}}(D)$ denote the number of ${\rm SL}_2({\mathbb Z})$-equivalence classes of projective reducible integer-matrix binary cubic forms of reduced discriminant $D$, then by the analogous reduction formula as in (\ref{reduction2}), we obtain
$$\sum_{0<D<X} h_{{\rm proj},{\rm red}}(D) = \frac{\zeta(2)}{2\zeta(3)}\cdot X + o(X).$$
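As with the full count, the constant $\frac{\zeta(2)}{2\zeta(3)} \approx 0.6842$ can be checked by brute force, now imposing the projectivity condition $(b,c) = 1$ on the representatives $3bx^2y + 3cxy^2 + dy^3$ (a sketch with our own helper name; the lower-order terms are still visible at this range):

```python
import math

def count_projective_reducible(X):
    """Count triples (b, c, d) with b > 0, -b < c <= b, gcd(b, c) = 1,
    3b < d, and 0 < -3*b**2*c**2 + 4*b**3*d < X."""
    total = 0
    b = 1
    while 9 * b**4 <= X:                 # for larger b no admissible d remains
        den = 4 * b**3
        for c in range(-b + 1, b + 1):
            if math.gcd(b, abs(c)) != 1:
                continue                 # the projectivity condition (b, c) = 1
            num = X + 3 * b * b * c * c  # disc < X  <=>  d < num/den
            d_max = (num - 1) // den
            total += max(0, d_max - 3 * b)
        b += 1
    return total
```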
\section{The mean number of 3-torsion elements in class groups of quadratic orders via ring class field theory
(Proofs of Theorems~\ref{thmorders} and~\ref{gensigmaord})}
In the previous sections, we have proven
Theorems~\ref{theoremdh},~\ref{thmorders} and~\ref{sigmaid} without
appealing to class field theory. To prove Theorem~\ref{gensigmaord}
and Corollary~\ref{maxcase}, we use a generalization of the
class-field-theory argument originally due to Davenport and
Heilbronn. In particular, we show that the elements of ${\rm Cl}_3({\mathcal O})$
for a quadratic order ${\mathcal O}$ can be enumerated via certain non-Galois cubic
fields. This involves the theory of ring class fields (see \cite[\S9]{Cox}), together with the theorem of Davenport and Heilbronn on the density
of discriminants of cubic fields:
\begin{theorem}[\cite{DH}]\label{cubics} Let $N_3(\xi,\eta)$ denote the number of cubic fields $K$ up to isomorphism that satisfy $\xi < {\rm Disc}(K) < \eta$. Then
$$N_3(0,X) = \frac{1}{12\zeta(3)}X + o(X) \quad \mbox{and} \quad N_3(-X,0) = \frac{1}{4\zeta(3)}X + o(X).$$
\end{theorem}
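For reference, the two density constants in Theorem~\ref{cubics} can be evaluated numerically (a small sketch; $\zeta(3)$ is approximated by direct summation):

```python
# zeta(3) via direct summation; the tail sum_{n > N} n^-3 is below 1/(2*N**2)
N = 10**6
zeta3 = sum(n**-3 for n in range(1, N + 1))

c_pos = 1 / (12 * zeta3)  # constant for totally real cubic fields, ~0.0693
c_neg = 1 / (4 * zeta3)   # constant for complex cubic fields, ~0.2080
```

In particular complex cubic fields are exactly three times as dense (by discriminant) as totally real ones.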
Note that, using class field theory, Davenport and Heilbronn were able to deduce Theorem~\ref{theoremdh} from this count. The new contribution of this section is to extend their argument to all orders and acceptable sets of orders (cf.\ Theorems~\ref{thmorders} and~\ref{gensigmaord}).
\subsection{Ring class fields associated to quadratic orders}
For a fixed quadratic order ${\mathcal O}$, denote the field ${\mathcal O} \otimes {\mathbb Q}$ by $k$, and let ${\mathcal O}_k$ denote the maximal order in $k$. If $[{\mathcal O}_k:{\mathcal O}] = f$, then we say that the \emph{conductor} of ${\mathcal O}$ is equal to $f$ (or sometimes, the ideal generated by $f$ in ${\mathcal O}_k$).
We begin with a well-known description of ${\rm Cl}({\mathcal O})$ in terms of ideal classes of~${\mathcal O}_k$:
\begin{lemma}[{\bf \cite[Prop.\ 7.22]{Cox}}]\label{coxlemma} Let $I_{k}(f)$ denote the subgroup of the group of invertible ideals of~${\mathcal O}_k$ consisting of ideals that are prime to $f$, and let $P_{k,{\mathbb Z}}(f)$ denote the subgroup of the group of principal ideals of ${\mathcal O}_k$ consisting of those $(\alpha)$ such that $\alpha \equiv a \pmod{f{\mathcal O}_k}$ for some integer $a$ that is coprime to $f$. Then
$${\rm Cl}({\mathcal O}) \cong I_k(f)/P_{k,{\mathbb Z}}(f).$$
\end{lemma}
Recall that the {\it ray class group} of $k$ of conductor $f$ is defined as the quotient ${\rm Cl}_k(f) := I_k(f)/P_{k,1}(f)$ where $P_{k,1}(f)$ is the subgroup of principal ideals of ${\mathcal O}_k$ consisting of those $(\alpha)$ such that $\alpha \equiv 1 \pmod {f{\mathcal O}_k}$.
By Lemma~\ref{coxlemma}, we have the following exact sequence:
\begin{equation}\label{clses}
1 \rightarrow P_{k,{\mathbb Z}}(f)/P_{k,1}(f) \rightarrow {\rm Cl}_k(f) \rightarrow {\rm Cl}({\mathcal O}) \rightarrow 1.
\end{equation}
Let $\sigma$ denote the nontrivial automorphism of ${\rm Gal}(k/{\mathbb Q})$. For a finite group $G$, let $G[3]$ denote its $3$-Sylow subgroup, and if $G$ is a finite ${\rm Gal}(k/{\mathbb Q})$-module, then we can decompose $G[3] = G[3]^+ \oplus G[3]^-$ where $G[3]^{\pm} := \{ g \in G: \sigma(g) = g^{\pm 1}\}$.
\begin{lemma}[{\bf \cite[Lem.~1.10]{Nakagawa}}]
If ${\mathcal O}$ is an order of conductor $f$, $k$ is the quadratic field ${\mathcal O} \otimes {\mathbb Q}$, and ${\rm Cl}_k(f)$ is the ray class group of $k$ of conductor $f$, then ${\rm Cl}_k(f)[3]^- \cong {\rm Cl}({\mathcal O})[3]$.
\end{lemma}
\begin{proof}
It is clear that the exact sequence in (\ref{clses}) is a sequence of
finite ${\rm Gal}(k/{\mathbb Q})$-modules, and implies the exactness of the
following sequences:
$$1 \rightarrow \left(P_{k,{\mathbb Z}}(f)/P_{k,1}(f)\right)[3]^+ \rightarrow {\rm Cl}_k(f)[3]^+ \rightarrow {\rm Cl}({\mathcal O})[3]^+ \rightarrow 1,$$
$$1 \rightarrow \left(P_{k,{\mathbb Z}}(f)/P_{k,1}(f)\right)[3]^- \rightarrow {\rm Cl}_k(f)[3]^- \rightarrow {\rm Cl}({\mathcal O})[3]^- \rightarrow 1.$$
We claim that $\left(P_{k,{\mathbb Z}}(f)/P_{k,1}(f)\right)[3]^-$ is trivial. Indeed, any element $[(\alpha)]$ with $\alpha \equiv a \pmod{f{\mathcal O}_k}$ can also be represented by $a{\mathcal O}_k$, and since $a \in {\mathbb Z}$ is fixed by $\sigma$, any $[a{\mathcal O}_k] \in \left(P_{k,{\mathbb Z}}(f)/P_{k,1}(f)\right)[3]^-$ must satisfy
$$[a{\mathcal O}_k] = [\sigma(a{\mathcal O}_k)] = [a{\mathcal O}_k]^{-1};$$
hence $[a{\mathcal O}_k]$ has order dividing~2 and~3, and so must be trivial.
Similarly, ${\rm Cl}({\mathcal O})[3]^+$ is trivial since if $[I] \in {\rm Cl}({\mathcal O})[3]^+$, then $[I] = [\sigma(I)]$. Since $N(I) = \sigma(I)I \in {\mathbb Z}$, $[I]$ has order dividing both~2 and~3 in ${\rm Cl}({\mathcal O})$ and is therefore trivial.
\end{proof}
\begin{proposition}\label{rcf}
Let ${\mathcal O}$ be a quadratic order. The number of isomorphism classes of cubic fields~$K$ such that
${\rm Disc}({\mathcal O})=c^2{\rm Disc}(K)$ for some integer $c$ is $\bigl(|{\rm Cl}_3({\mathcal O})|-1\bigr)/2$.
\end{proposition}
\begin{proof}
Let $K$ be a non-Galois cubic field. Then the normal closure
$\widetilde{K}$ of $K$ over ${\mathbb Q}$ contains a unique quadratic field
$k$. One checks that the discriminants of $K$ and $k$ satisfy
${\rm Disc}(K) = {\rm Disc}(k)f^2$, where $f$ is the conductor of the cubic
extension $\widetilde{K}/k$. By class field theory,
$\widetilde{K}/k$ corresponds to a subgroup $H$ of ${\rm Cl}_k(f)[3]$ of
index $3$. Since $\widetilde{K}/{\mathbb Q}$ is a Galois extension, $H$ is a
${\rm Gal}(k/{\mathbb Q})$-module. If~$\sigma$ denotes the nontrivial
automorphism in ${\rm Gal}(k/{\mathbb Q})$, we see that $\widetilde{K}$ is Galois
over ${\mathbb Q}$ if and only if $\sigma(\widetilde{K}) =
\widetilde{K}$. Artin reciprocity implies that the subgroup of
${\rm Cl}_k(f)[3]$ corresponding to $\sigma(\widetilde{K})$ is the image
of $H$ under the action of $\sigma$ on ${\rm Cl}_k(f)[3]$. Thus, since
$\widetilde{K}$ is Galois, we conclude that $H$ is stable under
$\sigma$, and we can write $H = H^+ \oplus H^-$ where $H^\pm := H
\cap {\rm Cl}_k(f)[3]^{\pm}$.
We now show that $H^+ = {\rm Cl}_k(f)[3]^+$. Consider the exact sequence
$$1 \rightarrow {\rm Gal}(\widetilde{K}/k) \rightarrow {\rm Gal}(\widetilde{K}/{\mathbb Q}) \rightarrow {\rm Gal}(k/{\mathbb Q}) \rightarrow 1.$$
Note that, by definition, ${\rm Gal}(\widetilde{K}/k) \cong
{\rm Cl}_k(f)[3]/H$. For any lift of $\sigma$ to $\tilde{\sigma}
\in {\rm Gal}(\widetilde{K}/{\mathbb Q})$, Artin reciprocity implies that
the action of conjugation on ${\rm Gal}(\widetilde{K}/k)$ by
$\tilde{\sigma}$ corresponds to acting by $\sigma$ on
${\rm Cl}_k(f)[3]/H$. Since ${\rm Gal}(\widetilde{K}/{\mathbb Q})$ is isomorphic
to the symmetric group $S_3$ (and is not a direct product),
conjugation by $\tilde{\sigma}$ acts on ${\rm Gal}(\widetilde{K}/k)$ as inversion. Since
the index of $H$ in ${\rm Cl}_k(f)[3]$ is the odd prime $3$,
either $H^+ = {\rm Cl}_k(f)[3]^+$ or $H^- = {\rm Cl}_k(f)[3]^-$. For
$\tilde{\sigma}$ to act as inversion on ${\rm Cl}_k(f)[3]/H$, we
must have ${\rm Cl}_k(f)[3]/H \cong {\rm Cl}_k(f)[3]^{-}/H^-$. By the
previous lemma, $H$ corresponds to a subgroup of ${\rm Cl}({\mathcal O})[3]$
of index $3$, where ${\mathcal O}$ is the unique quadratic order of index~$f$ in the ring of integers ${\mathcal O}_k$.
Conversely, let $H$ be a subgroup of ${\rm Cl}({\mathcal O})[3] \cong
{\rm Cl}_k(f)[3]^-$ of index $3$ where ${\mathcal O}$ has index~$f$ in ${\mathcal O}_k$. Then $H$ corresponds to a cubic extension of
conductor~$d \mid f$, and the action of $\sigma$ is by
inversion. The above exact sequence is therefore not a direct product,
and ${\rm Gal}(\widetilde{K}/{\mathbb Q}) = {\rm Gal}(\widetilde{K}/k) \rtimes
{\rm Gal}(k/{\mathbb Q})$ is a nontrivial semidirect product. Using Pontryagin duality, we then have
\begin{equation}\label{pont}
\begin{array}{rcl}
& & \displaystyle \sum_{d \mid f} \#\{\mbox{non-Galois cubic fields with discriminant equal to } {\rm Disc}(k)\cdot d^2\} \\
&=& \displaystyle \#\{\mbox{subgroups of ${\rm Cl}({\mathcal O})$ of index $3$}\} = \frac{1}{2}\left(\left|{\rm Cl}_3({\mathcal O})\right| - 1\right),
\end{array}
\end{equation}
where ${\mathcal O}$ is the quadratic order of index $f$ in the maximal order
of $k$; equivalently, ${\mathcal O}$ is the unique order with discriminant
equal to ${\rm Disc}(k)\cdot f^2$. Note that conjugate cubic fields are
only counted once, and thus we obtain the desired statement (with $c =
\frac{f}{d}$).
\end{proof}
The integer $c$ corresponding to a cubic field $K$ in
Proposition~\ref{rcf} is called the {\it conductor}~of~$K$ relative to
${\mathcal O}$. In particular, we see that the conductor~$c$ of $K$ relative to
${\mathcal O}$ must divide the conductor~$f$ of ${\mathcal O}$.
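Since ${\rm Cl}_3({\mathcal O})$ is an elementary abelian $3$-group, the count $\frac{1}{2}\bigl(|{\rm Cl}_3({\mathcal O})| - 1\bigr)$ in (\ref{pont}) is the number of index-$3$ subgroups of $({\mathbb Z}/3{\mathbb Z})^r$, where $3^r = |{\rm Cl}_3({\mathcal O})|$. A brute-force sketch confirming that this number is $(3^r - 1)/2$ (the helper name is ours):

```python
from itertools import product

def num_index3_subgroups(r):
    """Count index-3 subgroups of (Z/3Z)^r by listing kernels of the
    nonzero linear functionals; two functionals have the same kernel
    exactly when they are proportional."""
    vectors = list(product(range(3), repeat=r))
    kernels = set()
    for phi in vectors:
        if any(phi):
            ker = frozenset(v for v in vectors
                            if sum(a * b for a, b in zip(phi, v)) % 3 == 0)
            kernels.add(ker)
    return len(kernels)
```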
\subsection{A second proof of the mean number of 3-torsion elements
in the class groups of quadratic orders (Proof of Theorem~\ref{thmorders} via class field theory)}
Proposition~\ref{rcf} shows that Theorem~\ref{thmorders} may be proved by summing,
over all quadratic orders ${\mathcal O}$ of absolute discriminant less than $X$,
the number of cubic fields $K$ such that ${\rm Disc}({\mathcal O})/{\rm Disc}(K)$ is a
square. However, in this sum, a single cubic field $K$ may be counted
a number of times since there are many ${\mathcal O}$ for which
${\rm Disc}({\mathcal O})/{\rm Disc}(K)$ is a square, and one must control the asymptotic
behavior of this sum as $X\to\infty$.
To accomplish this, we rearrange the sum as a sum over the conductor $f$ of ${\mathcal O}$, and then sum
over ${\mathcal O}$ in the interior of this main sum. This allows us to use a uniformity estimate for large $f$, yielding the desired asymptotic formulae. More precisely, for $X$ large and $i = 0,1$, we are interested in evaluating
\begin{equation}\label{ni}
N^{(i)}(X) :=\sum_{0<(-1)^i{\rm Disc}({\mathcal O})<X} \#\{\mbox{cubic fields $K$ such that $\frac{{\rm Disc}({\mathcal O})}{{\rm Disc}(K)}=c^2$, $c \in {\mathbb Z}$}\}.
\end{equation}
We rearrange this as a sum over $c$ and subsequently over cubic fields:
\begin{equation}\label{theabove}
\begin{array}{rcl}
N^{(i)}(X) &=& \displaystyle{\sum_{c=1}^\infty \sum_{0<(-1)^i{\rm Disc}({\mathcal O})<X} \#\{\mbox{cubic fields $K$ such that ${\rm Disc}(K) = {\rm Disc}({\mathcal O})/c^2$}\}} \\
&=& \displaystyle{\sum_{c=1}^\infty \sum_{{\mbox{\scriptsize non-Galois $K$ s.t. }}\atop{\mbox{\scriptsize $0 < (-1)^i {\rm Disc}(K) < X/c^2$}}} 1}. \\
\end{array}
\end{equation}
Let $Y$ be an arbitrary positive integer. From (\ref{theabove}) and Theorem~\ref{cubics}, we obtain
\begin{eqnarray*}
N^{(i)}(X) &=& \displaystyle\sum_{c=1}^{Y-1}\frac{1}{2 n_i \zeta(3)}\cdot\frac{X}{c^2} + o(X) + O\left(\displaystyle\sum_{c=Y}^\infty X/c^2\right)\\
&=& \displaystyle\sum_{c=1}^{Y-1} \frac{1}{2n_i \zeta(3)}\cdot \frac{X}{c^2} + o(X) + O(X/Y),
\end{eqnarray*}
where $n_0 = 6$ and $n_1 = 2$ (i.e., $n_i$ is the size of the automorphism group of the cubic ${\mathbb R}$-algebra ${\mathbb R}^3$ if $i = 0$ and of ${\mathbb R}\oplus{\mathbb C}$ if $i = 1$).
Thus,
$$\lim_{X \rightarrow \infty} \frac{N^{(i)}(X)}{X} = \sum_{c=1}^{Y-1} \frac{1}{2n_i \zeta(3)}\cdot \frac{1}{c^2} + O(1/Y).$$
Letting $Y$ tend to $\infty$, we conclude that
$$\lim_{X \rightarrow \infty} \frac{N^{(i)}(X)}{X} = \sum_{c=1}^{\infty} \frac{1}{2n_i \zeta(3)}\cdot\frac{1}{c^2} = \frac{\zeta(2)}{2n_i \zeta(3)}.$$
Finally, using Proposition~\ref{rcf} and (\ref{disc}), we obtain
$$\lim_{X \rightarrow \infty} \frac{\displaystyle\sum_{0<(-1)^i{\rm Disc}({\mathcal O})<X} |{\rm Cl}_3({\mathcal O})|}{\displaystyle\sum_{0<(-1)^i{\rm Disc}({\mathcal O})<X} 1} = 1 + \lim_{X\rightarrow \infty} \frac{4\cdot N^{(i)}(X)}{X} =
\begin{cases} \displaystyle 1 + \frac{\zeta(2)}{3\zeta(3)} & \mbox{if } i = 0, \\[.25in]
\displaystyle 1 + \frac{\zeta(2)}{\zeta(3)} & \mbox{if } i = 1.
\end{cases}$$
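Numerically, the two limiting averages above evaluate as follows (a sketch; $\zeta(3)$ is approximated by direct summation):

```python
import math

# zeta(2) has a closed form; zeta(3) is approximated by direct summation
# (the tail beyond 10**6 terms is below 10**-12)
zeta2 = math.pi**2 / 6
zeta3 = sum(n**-3 for n in range(1, 10**6 + 1))

avg_real = 1 + zeta2 / (3 * zeta3)  # i = 0: approximately 1.4561
avg_imaginary = 1 + zeta2 / zeta3   # i = 1: approximately 2.3684
```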
\subsection{The mean number of 3-torsion elements in the class groups
of quadratic orders in acceptable families (Proof of Theorem~\ref{gensigmaord})}
We now determine the mean number of 3-torsion elements in the class
groups of quadratic orders satisfying any acceptable set of local
conditions. As described in the introduction, for each prime~$p$, let
$\Sigma_p$ be a set of isomorphism classes of nondegenerate quadratic
rings over ${\mathbb Z}_p$. Recall that a collection $\Sigma = (\Sigma_p)$ is
\emph{acceptable} if for all sufficiently large $p$, the set
$\Sigma_p$ contains the maximal quadratic rings over ${\mathbb Z}_p$. We
denote by $\Sigma$ the set of quadratic orders ${\mathcal O}$ over ${\mathbb Z}$, up to
isomorphism, such that ${\mathcal O} \otimes {\mathbb Z}_p \in \Sigma_p$ for all $p$.
For a quadratic order~${\mathcal O}$, we write ``${\mathcal O} \in \Sigma$'' (or say
that ``${\mathcal O}$ is a $\Sigma$-order'') if ${\mathcal O} \otimes {\mathbb Z}_p \in
\Sigma_p$ for all primes $p$.
Let us fix an acceptable collection $\Sigma = (\Sigma_p)$ of local specifications. We first recall a necessary generalization of Theorem~\ref{cubics}:
\begin{theorem}[{\bf \cite[Thm.~8]{BST}}]\label{gencubics} Let
$(\Sigma^{(3)}_p) \cup \Sigma^{(3)}_\infty$ be an acceptable
collection of local specifications {\em for cubic orders}, i.e., for
all sufficiently large primes $p$, $\Sigma^{(3)}_p$ contains all
maximal cubic rings over ${\mathbb Z}_p$ that are not totally ramified. Let
$\Sigma^{(3)}$ denote the set of all isomorphism classes of orders
${\mathcal O}_3$ in cubic fields for which ${\mathcal O}_3 \otimes {\mathbb Q}_p \in
\Sigma^{(3)}_p$ for all $p$ and ${\mathcal O}_3 \otimes {\mathbb R} \in
\Sigma^{(3)}_\infty$, and denote by $N_3(\Sigma^{(3)},X)$ the number
of cubic orders ${\mathcal O}_3 \in \Sigma^{(3)}$ that satisfy
$|{\rm Disc}({\mathcal O}_3)| < X$. Then
$$N_3(\Sigma^{(3)},X) = \left(\frac{1}{2}\sum_{R_3 \in \Sigma^{(3)}_\infty} \frac{1}{|{\rm Aut}(R_3)|}\right)\cdot\prod_p \left(\frac{p-1}{p}\cdot\sum_{R_3 \in \Sigma^{(3)}_p} \frac{1}{{\rm Disc}_p(R_3)}\cdot \frac{1}{|{\rm Aut}(R_3)|}\right)\cdot X + o(X),$$
where ${\rm Disc}_p(\cdot)$ denotes the $p$-part of ${\rm Disc}(\cdot)$, i.e., the largest power of $p$ dividing the discriminant.
\end{theorem}
We can use the above theorem to prove Theorem~\ref{gensigmaord} by
comparing the number of 3-torsion elements in the class groups of
quadratic $\Sigma$-orders ${\mathcal O}$ of absolute discriminant less than $X$
and the number of cubic fields corresponding to such class group
elements of ${\mathcal O} \in \Sigma$ with absolute discriminant less than $X$
via Proposition~\ref{rcf}. Analogous to (\ref{ni}), we define
\begin{equation}\label{nisigma}
\begin{array}{rcl}
N^{(i)}(X, \Sigma) &:=&\displaystyle \sum_{{\mbox{\scriptsize ${\mathcal O} \in \Sigma$ s.t.}}\atop{\mbox{\scriptsize $0<(-1)^i{\rm Disc}({\mathcal O})<X$}}} \#\{\mbox{cubic fields $K$ such that $\frac{{\rm Disc}({\mathcal O})}{{\rm Disc}(K)}=c^2$, $c \in {\mathbb Z}$}\} \\
&=& \displaystyle{\sum_{c=1}^\infty \sum_{{\mbox{\scriptsize ${\mathcal O} \in \Sigma$ s.t.}}\atop{\mbox{\scriptsize $0<(-1)^i{\rm Disc}({\mathcal O})<X$}}} \#\{\mbox{cubic fields $K$ such that ${\rm Disc}(K) = \frac{{\rm Disc}({\mathcal O})}{c^2}$}\}}.
\end{array}
\end{equation}
For any $c \in {\mathbb Z}$, let $c^{-2}\Sigma$ denote the set of quadratic
orders ${\mathcal O}$ that contain a $\Sigma$-order of index $c$. We can
decompose $c^{-2}\Sigma$ into the following local specifications:
for all $p$, if $p^{c_p} \mid\mid c$ where $c_p \in {\mathbb Z}_{\geq 0}$,
let $p^{-2c_p}\Sigma_p$ denote the set of nondegenerate quadratic
rings over ${\mathbb Z}_p$ which contain an index $p^{c_p}$ subring that
lies in $\Sigma_p$. It is then clear that $(p^{-2c_p}\Sigma_p) =
c^{-2}\Sigma$ is acceptable since $\Sigma$
is.
Finally, let $\Sigma^{(3),c}$ be the set of cubic fields $K$ such
that there exists a quadratic order ${\mathcal O} \in c^{-2}\Sigma$ with
${\rm Disc}(K) = {\rm Disc}({\mathcal O})$. This is the set of cubic fields $K$ such
that their {\em quadratic resolvent ring} contains a $\Sigma$-order
with index $c$, or equivalently, is a $c^{-2}\Sigma$-order. Let
$D(K)$ denote the quadratic resolvent ring of the cubic field $K$,
i.e., $D(K)$ is the unique quadratic order with discriminant equal to
that of $K$. The local specifications for $\Sigma^{(3),c}$ are as
follows: for all $p$ and with~$c_p$ defined as above,
$\,\Sigma^{(3),c_p}_p$ is the set of
\'etale cubic algebras $K_p$ over ${\mathbb Q}_p$ such
that the quadratic resolvent ring $D(K_p)$ over ${\mathbb Z}_p$ is a
$p^{-2c_p}\Sigma_p$-order. Meanwhile, $\Sigma^{(3),c}_\infty$ has
one cubic ring over ${\mathbb R}$ specified by the choice $i = 0$ or $1$: it
contains ${\mathbb R}^3$ if $i = 0$ and ${\mathbb R} \oplus {\mathbb C}$ if $i = 1$. Then
$\Sigma^{(3),c} = (\Sigma_p^{(3),c_p}) \cup \Sigma^{(3),c}_\infty$,
and in order to use Theorem~\ref{gencubics}, it remains to show that
$\Sigma^{(3),c}$ is acceptable.
To show the acceptability of $\Sigma^{(3),c}$, consider any $p>2$
large enough so that $\Sigma_p$ contains all maximal quadratic
rings and $c_p = 0$, i.e., $p \nmid c$. Let $K_p$ be an \'etale cubic
algebra over ${\mathbb Q}_p$ that is not totally ramified. This implies that
$p^2 \nmid {\rm Disc}(K_p)$, and so $p^2 \nmid {\rm Disc}(D(K_p))$; therefore,
$D(K_p)$ must be maximal. By our choice of $p$, we have $D(K_p) \in
\Sigma_p$, and so $K_p \in \Sigma^{(3),c_p}_p$. Hence $\Sigma^{(3),c}$
is acceptable.
Using these definitions, we can rewrite $N^{(i)}(X,\Sigma)$ as
\begin{equation}\label{theabove2}
\begin{array}{rcl}
N^{(i)}(X, \Sigma) &=& \displaystyle{\sum_{c=1}^\infty \sum_{{\mbox{\scriptsize ${\mathcal O} \in \Sigma$ s.t.}}\atop{\mbox{\scriptsize $0<(-1)^i{\rm Disc}({\mathcal O})<X$}}} \#\{\mbox{cubic fields $K$ such that ${\rm Disc}(K) = \frac{{\rm Disc}({\mathcal O})}{c^2}$}\}} \\
&=& \displaystyle{\sum_{c=1}^\infty \sum_{{\mbox{\scriptsize ${\mathcal O} \in c^{-2}\Sigma$ s.t.}}\atop{\mbox{\scriptsize $0<(-1)^i{\rm Disc}({\mathcal O})<\frac{X}{c^2}$}}} \#\{\mbox{cubic fields $K$ such that ${\rm Disc}(K) = {\rm Disc}({\mathcal O})$}\}} \\
&=& \displaystyle{\sum_{c=1}^\infty \sum_{{\mbox{\scriptsize $K \in \Sigma^{(3),c}$ s.t. }}\atop{\mbox{\scriptsize $0 < (-1)^i {\rm Disc}(K) < \frac{X}{c^2}$}}} 1} \\
&=& \displaystyle{\sum_{c=1}^\infty ~ ~N_3\left(\Sigma^{(3),c},\frac{X}{c^2}\right)}.
\end{array}
\end{equation}
Again, let $Y$ be an arbitrary positive integer. From (\ref{theabove2}), Theorem~\ref{gencubics}, and Theorem~\ref{cubics}, we obtain
\begin{equation*}
\begin{array}{rcl}
N^{(i)}(X, \Sigma) &=& \displaystyle{\sum_{c=1}^{Y-1} \frac{1}{2n_i} \cdot \prod_p \left(\frac{p-1}{p}\cdot\sum_{K_p \in \Sigma^{(3),c_p}_p} \frac{1}{{\rm Disc}_p(K_p)}\cdot \frac{1}{|{\rm Aut}(K_p)|}\right)\cdot \frac{X}{c^2}} \\
&& + \,\,\displaystyle{O\left(\sum_{c=Y}^\infty X/c^2\right)} + \,o(X),
\end{array}
\end{equation*}
where $n_0 = 6$ and $n_1 = 2$ as before. Thus, since $O\left(\sum_{c=Y}^\infty X/c^2\right) = O\left(X/Y\right)$, we have
$$\lim_{X\rightarrow \infty} \frac{N^{(i)}(X,\Sigma)}{X} = \displaystyle{\sum_{c=1}^{Y-1} \frac{1}{2n_i} \cdot \prod_p \left(\frac{p-1}{p}\cdot\sum_{K_p \in \Sigma^{(3),c_p}_p} \frac{1}{{\rm Disc}_p(K_p)}\cdot \frac{1}{|{\rm Aut}(K_p)|}\right)\cdot \frac{1}{c^2}} + \displaystyle{O\left(1/Y\right)}.$$
Letting $Y$ tend to $\infty$, we conclude that
$$\lim_{X\rightarrow \infty} \frac{N^{(i)}(X,\Sigma)}{X} = \displaystyle{\sum_{c=1}^{\infty} \frac{1}{2n_i} \cdot \prod_p \left(\frac{p-1}{p}\cdot\sum_{K_p \in \Sigma^{(3),c_p}_p} \frac{1}{{\rm Disc}_p(K_p)}\cdot \frac{1}{|{\rm Aut}(K_p)|}\right)\cdot \frac{1}{c^2}}.$$
Let $M_\Sigma$ be defined as in (\ref{massdef}) and let $M^{{\rm eq}}_\Sigma$ be the following product of local masses:
\begin{equation}\label{massdef2}
M^{{\rm eq}}_{\Sigma} :=
\prod_p\,
\frac{{\displaystyle\sum_{R\in\Sigma_p}\frac
{C^{{\rm eq}}(R)}
{{\rm Disc}_p(R)}}}
{{\displaystyle \sum_{R\in\Sigma_p}\frac1{{\rm Disc}_p(R)}\cdot\frac{1}{\#{\rm Aut}(R)}}}
= \prod_p\, \frac{{\displaystyle \sum_{R\in \Sigma_p}\frac{C^{{\rm eq}}(R)}{{\rm Disc}_p(R)}}}{\displaystyle{\sum_{R \in \Sigma_p} {\frac{1}{2\cdot {\rm Disc}_p(R)}}}},
\end{equation}
where $C^{{\rm eq}}(R)$ is defined for an \'etale quadratic algebra $R$ over ${\mathbb Z}_p$ as the (weighted) number of \'etale cubic algebras $K_p$ over ${\mathbb Q}_p$ such that $R = D(K_p)$:
$$C^{{\rm eq}}(R) := \sum_{{\mbox{\scriptsize $K_p$ \'etale cubic $/{\mathbb Q}_p$}}\atop{\mbox{\scriptsize s.t. $R = D(K_p)$}}} \frac1{\#{\rm Aut}(K_p)}.$$
Then
\begin{equation}\label{star2}
\begin{array}{rcl}
\displaystyle{ \lim_{X\rightarrow \infty} \frac{N^{(i)}(X, \Sigma)}{X}} &=& \displaystyle{\frac{1}{2n_i}\cdot \displaystyle\sum_{c=1}^\infty \frac{1}{c^2} \cdot \prod_p \left(\frac{p-1}{p}\cdot\sum_{K_p \in \Sigma^{(3),c_p}_p} \frac{1}{{\rm Disc}_p(K_p)}\cdot \frac{1}{|{\rm Aut}(K_p)|}\right)} \\
&=& \displaystyle{\frac{1}{2n_i}\cdot \displaystyle\sum_{c=1}^\infty \frac{1}{c^2} \cdot \prod_p \left(\frac{p-1}{p}\cdot\sum_{R \in p^{-2c_p}\Sigma_p} \frac{1}{{\rm Disc}_p(R)} \cdot C^{{\rm eq}}(R)\right)} \\
&=& \displaystyle{\frac{1}{2n_i}\cdot \displaystyle \prod_p \left(\frac{p-1}{p}\cdot\sum_{m=0}^\infty \sum_{{\mbox{\scriptsize $R \in \Sigma_p$ s.t. }}\atop{\mbox{\scriptsize $\exists R'$ s.t. $ [R':R] = p^{m}$}}} \frac{1}{{\rm Disc}_p(R)} \cdot C^{{\rm eq}}(R')\right)} \\
&=& \displaystyle{\frac{1}{2n_i}\cdot \displaystyle \prod_p \left(\frac{p-1}{p}\cdot\sum_{{R \in \Sigma_p }} \frac{1}{{\rm Disc}_p(R)} \cdot C(R)\right)} .
\end{array}
\end{equation}
Recall that $C(R)$ is defined as the (weighted) number of \'etale cubic algebras $K_p$ over ${\mathbb Q}_p$ such that $R \subset D(K_p)$ (cf.\ Equation (\ref{Cdef})). The final equality follows from the fact that if we fix $R \in \Sigma_p$, the cubic algebras $K_p$ with discriminant $p^{2m}\cdot{\rm Disc}_p(R)$ are disjoint for distinct choices of $m$. (The penultimate equality follows from unique factorization of integers.)
Using (\ref{pont}) and (\ref{nisigma}), we see that
\begin{equation}\label{star}
2\cdot N^{(i)}(X,\Sigma) = \sum_{{\mbox{\scriptsize ${\mathcal O} \in \Sigma$ s.t.}}\atop{\mbox{\scriptsize $0<(-1)^i{\rm Disc}({\mathcal O})<X$}}} (\#{\rm Cl}_3({\mathcal O})-1).
\end{equation}
We now have the following elementary lemma counting quadratic orders:
\begin{lemma}[{\bf \cite[\S4]{Bhamass1}}]\label{dht}
\hfill
\begin{itemize}
\item[$($a$)$] The number of real $\Sigma$-orders ${\mathcal O}$ with
$|{\rm Disc}({\mathcal O})|<X$ is asymptotically
$$\frac{1}{2}\cdot\prod_p\Bigl(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{{\rm Disc}_p(R)}\cdot\frac{1}{\#{\rm Aut}(R)}\Bigr)\cdot X.$$
\item[$($b$)$] The number of complex $\Sigma$-orders ${\mathcal O}$ with
$|{\rm Disc}({\mathcal O})|<X$ is asymptotically
$$\frac{1}{2} \cdot \prod_p\Bigl(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{{\rm Disc}_p(R)}\cdot\frac{1}{\#{\rm Aut}(R)}\Bigr)\cdot X.$$
\end{itemize}
\end{lemma}
By (\ref{star2}), (\ref{star}), and Lemma~\ref{dht}, we then obtain
\begin{equation}
\begin{array}{rcl}
\displaystyle{\lim_{X \rightarrow \infty}} \displaystyle{\frac{\displaystyle{\sum_{{{\mathcal O} \in \Sigma,}\atop{0<(-1)^i{\rm Disc}({\mathcal O})<X}}} \#{\rm Cl}_3({\mathcal O})}{\displaystyle{\sum_{{{\mathcal O} \in \Sigma,}\atop{0<(-1)^i{\rm Disc}({\mathcal O})<X}} 1 }}}
&=& 1 + \displaystyle{\lim_{X \rightarrow \infty}} \frac{\displaystyle{2\cdot N^{(i)}(X,\Sigma)}}{\displaystyle{\sum_{{{\mathcal O} \in \Sigma,}\atop{0<(-1)^i{\rm Disc}({\mathcal O})<X}}} 1 } \\[.35in]
&=& 1 + \frac{\displaystyle{\frac{1}{n_i}\cdot \displaystyle \prod_p \left(\frac{p-1}{p}\cdot\sum_{{R \in \Sigma_p }} \frac{1}{{\rm Disc}_p(R)} \cdot C(R)\right)}}{\displaystyle{\frac{1}{2} \cdot \prod_p \left(\frac{p-1}{p} \cdot \sum_{R \in \Sigma_p} \frac{1}{{\rm Disc}_p(R)} \cdot \frac{1}{\#{\rm Aut}(R)}\right)}} \\[.6in]
&=& 1 + \displaystyle{\frac{2}{n_i}\cdot \prod_p \frac{\displaystyle{\sum_{{R \in \Sigma_p }} \frac{C(R)}{{\rm Disc}_p(R)}}}{\displaystyle{\sum_{R \in \Sigma_p} \frac{1}{2\cdot {\rm Disc}_p(R)}}}} \\
&=& 1 + \displaystyle{\frac{2}{n_i} \cdot M_\Sigma}.
\end{array}
\end{equation}
As $n_0 = 6$ and $n_1 = 2$, this proves Theorem~\ref{gensigmaord}.
\subsection{Families of quadratic fields defined by finitely many
local conditions always have the
same average number of 3-torsion elements in their class groups (Proof of Corollary 4)}
We now consider the special case of Theorem~\ref{gensigmaord} where
$(\Sigma_p)$ is any acceptable collection of local specifications of
\emph{maximal} quadratic rings over ${\mathbb Z}_p$. If $\Sigma$ denotes
the set of all isomorphism classes of quadratic orders ${\mathcal O}$ such that
${\mathcal O} \otimes {\mathbb Z}_p \in \Sigma_p$ for all $p$, then $\Sigma$ is a set
of maximal orders satisfying a specified set of local conditions at
some finite set of primes. We prove in this section that, regardless of
which acceptable set of maximal orders $\Sigma$ is chosen, the average size of
the 3-torsion subgroup in the class groups of imaginary (resp.\ real)
quadratic orders in $\Sigma$ is always $2$ (resp.\ $\frac{4}{3}$). To do so,
we use Theorem~\ref{gensigmaord} and show that $M_\Sigma = 1$ in these
cases.
\begin{lemma} For any maximal quadratic ring $R$ over ${\mathbb Z}_p$, we have $C(R) = \frac{1}{2}$, where $C(R)$ denotes the weighted number of \'etale cubic algebras $K_p$ over ${\mathbb Q}_p$ such that $R$ is contained in the unique quadratic algebra over ${\mathbb Z}_p$ with the same discriminant as $K_p$ $($cf.\ Equation $(\ref{Cdef}))$.
\end{lemma}
\begin{proof} For all primes $p \neq 2$, there are 4 maximal quadratic rings over ${\mathbb Z}_p$ (up to isomorphism), namely ${\mathbb Z}_p \oplus {\mathbb Z}_p$, ${\mathbb Z}_p[\sqrt{p}]$, ${\mathbb Z}_p[\sqrt{\epsilon}]$, and ${\mathbb Z}_p[\sqrt{\epsilon\cdot p}]$, where $\epsilon$ is an integer which is not a square mod $p$. For each choice of $R$, we compute $C(R)$:
\begin{eqnarray*}
C({\mathbb Z}_p \oplus {\mathbb Z}_p) &=& \frac{1}{\#{\rm Aut}({\mathbb Q}_p \oplus {\mathbb Q}_p \oplus {\mathbb Q}_p)} + \frac{1}{\#{\rm Aut}({\mathbb Q}_{p^3})} = \frac{1}{6} + \frac{1}{3} = \frac{1}{2},\\
C({\mathbb Z}_p[\sqrt{\alpha}]) &=& \frac{1}{\#{\rm Aut}({\mathbb Q}_p \oplus {\mathbb Q}_p[\sqrt{\alpha}])} = \frac{1}{2} \quad \mbox{for $\alpha = p$, $\epsilon$ and $p\cdot\epsilon.$}
\end{eqnarray*}
Here, ${\mathbb Q}_{p^3}$ denotes the unique unramified cubic extension of ${\mathbb Q}_p$. Note that any ramified cubic field extension $K_p$ of ${\mathbb Q}_p$ has discriminant divisible by $p^2$ (since $p$ will have ramification index~$3$ in $K_p$). This implies that $D(K_p)$ is not maximal for ramified $K_p$, and so no maximal quadratic ring is contained in $D(K_p)$.
When $p = 2$, there are 8 maximal quadratic rings over ${\mathbb Z}_2$ (up to isomorphism), namely, ${\mathbb Z}_2 \oplus {\mathbb Z}_2$ and ${\mathbb Z}_2[\sqrt{\alpha}]$ where $\alpha = 2$, $3$, $5$, $6$, $7$, $10$, or $14$. As above, we have that $C({\mathbb Z}_2 \oplus {\mathbb Z}_2) = \frac{1}{2}$. Finally, it is easy to see that for each possible value of $\alpha$,
$$C({\mathbb Z}_2[\sqrt{\alpha}]) = \frac{1}{\#{\rm Aut}({\mathbb Q}_2 \oplus {\mathbb Q}_2[\sqrt{\alpha}])} = \frac{1}{2}.$$
Again, any ramified cubic extension $K_2$ of ${\mathbb Q}_2$ has discriminant divisible by $4$, which implies that $D(K_2)$ does not contain any maximal orders.
\end{proof}
By the above lemma, we see that if $(\Sigma_p)$ is any acceptable collection of local specifications of maximal quadratic rings over ${\mathbb Z}_p$, then
$$\sum_{R \in \Sigma_p} \frac{C(R)}{{\rm Disc}_p(R)} = \sum_{R \in \Sigma_p} \frac{1}{2 \cdot {\rm Disc}_p(R)}.$$
Thus $M_\Sigma = 1$, and so by Theorem~\ref{gensigmaord} we obtain Corollary~\ref{maxcase}.
\section{The mean number of 3-torsion elements in the ideal groups of
quadratic orders in acceptable families (Proof of Theorems~\ref{gensigmaid} and \ref{diff})}
Finally, we prove Theorems~\ref{gensigmaid} and \ref{diff}, which generalize Theorem \ref{sigmaid} and the work of \S3.2 by determining the mean number of 3-torsion elements
in the ideal groups of quadratic orders satisfying quite general sets
of local conditions.
To this end, fix an acceptable collection $(\Sigma_p)$ of local
specifications for quadratic orders, and fix any $i\in\{0,1\}$. Let $S = S({\Sigma,i})$
denote the set of all irreducible elements $v\in
V_{\mathbb Z}^{\ast (i)}$ such that, in the corresponding triple $({\mathcal O},I,\delta)$, we
have that ${\mathcal O}\in\Sigma$ and $I$ is invertible as an ideal of ${\mathcal O}$
(implying that $I\otimes{\mathbb Z}_p$ is the trivial ideal class of
${\mathcal O}\otimes{\mathbb Z}_p$ for all $p$).
\begin{proposition}[{\bf \cite[Thm.~31]{BST}}] Let $S_p(\Sigma,i)$
denote the closure of $S(\Sigma,i)$ in $V^\ast_{{\mathbb Z}_p}$.
Then
\begin{equation}\label{ramanujan22}
\lim_{X\to\infty} \frac{N^\ast(S(\Sigma,i);X)}X
\,\,=\,\,
\frac{1}{2n_i^\ast}\cdot
\prod_p\Bigl(\frac{p-1}{p}\cdot \sum_{x\in S_p(\Sigma,i)/{\rm GL}_2({{\mathbb Z}_p})}
\frac{1}{{\rm disc}_p(x)}\cdot\frac{1}{|{\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)|}\Bigr),
\end{equation}
where ${\rm disc}_p(x)$ denotes the reduced discriminant of $x\in V_{{\mathbb Z}_p}^\ast$ as a
power of $p$ and ${\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)$ denotes the stabilizer of $x$ in
${\rm GL}_2({\mathbb Z}_p)$. \end{proposition}
\begin{proof} First, note that although $S(\Sigma,i)$ might be defined by infinitely
many congruence conditions, the estimate provided
in Proposition~\ref{unifest} (and the fact that $\Sigma$ is
acceptable) shows that equation (\ref{ramanujan}) continues to hold
for the set $S(\Sigma,i)$, i.e.,
\begin{equation*}
\lim_{X \rightarrow \infty} \frac{N^\ast(S(\Sigma,i),X)}{X} = \frac{\pi^2}{4 \cdot n_i^\ast}\prod_p \mu_p^\ast(S(\Sigma,i)).
\end{equation*}
The argument is identical to that in \S\ref{thm1pf} or \S\ref{noncftorders}.
If $\mu_p(S)$ denotes the $p$-adic density of $S$ in $V_{\mathbb Z}$, where $\mu_p$ is normalized so that $\mu_p(V_{\mathbb Z}) = 1$, then $\mu_p^\ast(S) = \mu_p(S)$ for $p \neq 3$ and $\mu_3^\ast(S) = 9 \mu_3(S)$. (This is just a reformulation of the fact that $[V_{\mathbb Z}:V_{\mathbb Z}^\ast] = 9$.) Thus,
\begin{equation*}
\lim_{X \rightarrow \infty} \frac{N^\ast(S(\Sigma,i),X)}{X} = \frac{9\cdot\pi^2}{4 \cdot n_i^\ast}\prod_p \mu_p(S(\Sigma,i)).
\end{equation*}
By \cite[Lemma~32]{BST}, we have that
$$\mu_p(S(\Sigma,i)) = \frac{\#{\rm GL}_2({\mathbb F}_p)}{p^4}\cdot \sum_{x \in S_p/{\rm GL}_2({\mathbb Z}_p)} \frac{1}{{\rm Disc}_p(x)}\cdot \frac{1}{|{\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)|},$$
where ${\rm Disc}_p(x)$ denotes the discriminant of $x \in V_{{\mathbb Z}_p}^\ast$ as a power of $p$.
Note that since ${\rm Disc}_p(x) = {\rm disc}_p(x)$ for all $p \neq 3$ and ${\rm Disc}_3(x) = 27\cdot{\rm disc}_3(x)$, we have that
\begin{eqnarray*}
\lim_{X \rightarrow \infty} \frac{N^\ast(S(\Sigma,i),X)}{X}
&=& \frac{9\cdot\pi^2}{4 \cdot n_i^\ast}\cdot \frac{1}{27}\cdot \prod_p \frac{\#{\rm GL}_2({\mathbb F}_p)}{p^4}\cdot \sum_{x \in S_p(\Sigma,i)/{\rm GL}_2({\mathbb Z}_p)} \frac{1}{{\rm disc}_p(x)}\cdot \frac{1}{|{\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)|} \\
&=& \frac{1}{2n_i^\ast} \cdot \prod_p\left(\frac{p-1}{p}\right) \sum_{x \in S_p(\Sigma,i)/{\rm GL}_2({\mathbb Z}_p)} \frac{1}{{\rm disc}_p(x)}\cdot \frac{1}{|{\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)|}.
\end{eqnarray*}
\end{proof}
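The passage between the two displayed lines above rests on $\#{\rm GL}_2({\mathbb F}_p)=(p^2-1)(p^2-p)$, so that $\#{\rm GL}_2({\mathbb F}_p)/p^4=(1-p^{-2})\cdot\frac{p-1}{p}$, together with $\prod_p(1-p^{-2})=1/\zeta(2)=6/\pi^2$, giving the constant $\frac{9\pi^2}{4\cdot 27}\cdot\frac{6}{\pi^2}=\frac12$. A quick numerical sketch (not part of the proof) confirms this constant:

```python
import math

# Numerically check that (9*pi^2 / (4*27)) * prod_p (1 - p^{-2}) -> 1/2,
# i.e. the factor left over after extracting (p-1)/p from #GL_2(F_p)/p^4,
# using prod_p (1 - p^{-2}) = 1/zeta(2) = 6/pi^2.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

partial = 1.0
for p in primes_up_to(100_000):
    partial *= 1.0 - p ** -2

constant = (9 * math.pi ** 2 / (4 * 27)) * partial
print(constant)  # very close to 1/2
```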
Now, if we set
\begin{equation}\label{padicmassdef}
M_p(S(\Sigma,i)) := \sum_{x \in S_p(\Sigma,i)/{\rm GL}_2({\mathbb Z}_p)} \frac{1}{{\rm disc}_p(x)}\cdot\frac{1}{|{\rm Stab}_{{\rm GL}_2({\mathbb Z}_p)}(x)|},
\end{equation}
then the description of the stabilizer in Corollary~\ref{gl2bijection} (in its form over ${\mathbb Z}_p$; see Remark~\ref{rmkzp})
allows us to express $M_p(S(\Sigma,i))$ in another way.
Namely, if $R\in\Sigma_p$ is a nondegenerate
quadratic ring over~${\mathbb Z}_p$, then in a corresponding triple
$(R,I,\delta)$ we can always choose $I=R$, since $I$ is a principal
ideal (recall that invertible means locally principal).
Let $\tau(R)$ denote the number of elements $\delta$, modulo cubes, yielding a
valid triple $(R,R,\delta)$ over ${\mathbb Z}_p$. Then
$\tau(R)=|U^+(R)/U^+(R)^{\times3}|$, where $U^+(R)$ denotes the group of units
of $R$ having norm~1. Since $(R,R,\delta)$ is
${\rm GL}_2({\mathbb Z}_p)$-equivalent to the triple $(R,R,\bar\delta)$, and $\bar\delta =
\kappa^3\delta$ for some $\kappa\in R\otimes{\mathbb Q}$ if and only if
$\delta$ is itself a cube (since $\bar\delta=N(I)^3/\delta$), we see
that
\begin{equation}\label{massdef3}
M_p(S(\Sigma,i)) = \sum \frac{|U^+(R)/U^+(R)^{\times3}|}
{{\rm Disc}_p(R)\cdot|{\rm Aut}(R;R,\delta)|
\cdot|U_3^+(R)|},
\end{equation}
where the sum is over all isomorphism classes of quadratic rings $R$ over
${\mathbb Z}_p$ lying in $\Sigma_p$, and where $U_3^+(R)$ denotes the subgroup of 3-torsion elements of $U^+(R)$.
We have the following lemma:
\begin{lemma}\label{weird}
Let $R$ be a nondegenerate quadratic ring over ${\mathbb Z}_p$. Then
\[\frac{|U^+(R)/U^+(R)^{\times 3}|}{|U_3^+(R)|}
\]
is $1$ if $p\neq 3$, and is $3$ if $p=3$.
\end{lemma}
\begin{proof}
The unit group of $R$, as a multiplicative group, is a finitely
generated, rank 2 ${\mathbb Z}_p$-module. Hence the submodule $U^+(R)$,
consisting of those units having norm 1, is a finitely generated
rank~1 ${\mathbb Z}_p$-module. It follows that there is an isomorphism $U^+(R)\cong F \times
{\mathbb Z}_p$ as ${\mathbb Z}_p$-modules, where $F$ is a finite abelian $p$-group.
Let $F_3$ denote the 3-torsion subgroup of $F$. Since $F_3$ is the
kernel of the multiplication-by-3 map on $F$, it is clear that
$|F/(3\cdot F)|/|F_3|=1$. Therefore, it suffices to check the lemma on the
``free'' part of $U^+(R)$, namely, the ${\mathbb Z}_p$-module ${\mathbb Z}_p$,
where the result is clear. (The case $p=3$ differs because
$3\cdot{\mathbb Z}_p$ equals ${\mathbb Z}_p$ for $p\neq 3$, while
$3\cdot{\mathbb Z}_3$ has index 3 in ${\mathbb Z}_3$.)
\end{proof}
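The counting identity $|F/(3\cdot F)|=|F_3|$ used in the proof holds for any finite abelian group and can be checked by brute force; a minimal sketch for one choice of group (the particular cyclic factors below are illustrative):

```python
from itertools import product

# Brute-force check that |F/(3F)| = |F_3| for a finite abelian group,
# here F = Z/9 x Z/2 x Z/3.
mods = (9, 2, 3)
F = list(product(*[range(m) for m in mods]))

# image of multiplication by 3, and its 3-torsion kernel
three_F = {tuple((3 * x) % m for x, m in zip(f, mods)) for f in F}
F3 = [f for f in F if all((3 * x) % m == 0 for x, m in zip(f, mods))]

# |F/3F| = |F| / |3F| since multiplication by 3 is a homomorphism
assert len(F) // len(three_F) == len(F3)
print(len(F3))  # 9: the 3-part of F is Z/9 x Z/3
```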
Combining (\ref{ramanujan22}), (\ref{padicmassdef}), (\ref{massdef3}), and
Lemma~\ref{weird}, we obtain
\begin{equation}\label{ramanujan2}
\lim_{X\to\infty} \frac{N^\ast(S(\Sigma,i), X)}X
=\frac{3}{2n_i^\ast}\cdot
\prod_p\Bigl(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{2\cdot{\rm Disc}_p(R)}\Bigr).
\end{equation}
By Corollary~\ref{hr2} and Theorem~\ref{reducible}, we have
\begin{equation}\label{irredcountgen}
N^\ast(S(\Sigma,i), X) =
\begin{cases} \displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0<{\rm Disc}({\mathcal O})<X}} \left(3\cdot|{\rm Cl}_3({\mathcal O})| - |{\mathcal I}_3({\mathcal O})|\right) & \mbox{if } i = 0, \\[.35in]
\displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0 < -{\rm Disc}({\mathcal O}) < X}} \left(|{\rm Cl}_3({\mathcal O})| - |{\mathcal I}_3({\mathcal O})|\right) & \mbox{if } i = 1.
\end{cases}
\end{equation}
Hence we conclude using Lemma~\ref{dht} that
\begin{eqnarray*}
\frac{\displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0<{\rm Disc}({\mathcal O})<X}} \left(|{\rm Cl}_3({\mathcal O})| - \frac{1}{3}\cdot |{\mathcal I}_3({\mathcal O})|\right)}{\displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0<{\rm Disc}({\mathcal O})<X}} 1} &=& \frac{1}{3}\cdot\frac{\displaystyle\frac{3}{2n_0^\ast} \cdot\prod_p\left(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{2 \cdot {\rm Disc}_p(R)}\right)}{\displaystyle\frac{1}{2}\cdot\prod_p\left(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{2 \cdot {\rm Disc}_p(R)}\right)} = 1, \quad \mbox{and} \\\frac{\displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0<-{\rm Disc}({\mathcal O})<X}} \Bigl(|{\rm Cl}_3({\mathcal O})| - |{\mathcal I}_3({\mathcal O})|\Bigr)}{\displaystyle\sum_{{{\mathcal O} \in \Sigma,}\atop{0<-{\rm Disc}({\mathcal O})<X}} 1} &=& \frac{\displaystyle\frac{3}{2n_1^\ast} \cdot\prod_p\left(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{2 \cdot {\rm Disc}_p(R)}\right)}{\displaystyle\frac{1}{2}\cdot\prod_p\left(\frac{p-1}{p}\cdot \sum_{R\in\Sigma_p}
\frac{1}{2 \cdot {\rm Disc}_p(R)}\right)} = 1,
\end{eqnarray*}
yielding Theorem~\ref{diff}. In conjunction with
Theorem~\ref{gensigmaord}, we also then obtain Theorem~\ref{gensigmaid}.
\subsection*{Acknowledgments}
We are very grateful to Henri Cohen, Benedict Gross, Hendrik Lenstra,
Jay Pottharst, Arul Shankar, Peter Stevenhagen, and Jacob Tsimerman
for all their help and for many valuable
discussions. The first author was supported by the Packard and Simons
Foundations and NSF Grant DMS-1001828. The second author was supported
by a National Defense Science \& Engineering Fellowship and an NSF
Graduate Research Fellowship.
|
1401.5832
|
\section{Effective surface Hamiltonian}
\end{document}
|
2212.08449
|
\section*{Supplementary Materials}
\subsection*{S1: AFM indentation measurements on fabricated ALD resonators}
We fit the measured curves of $F$ versus $\delta$ to extract the pretension $n_0$ and Young's modulus $E$ of the fabricated resonators. The applied force $F$ equals the product of the cantilever stiffness $k_c$ and its deflection $\Delta z_c$. We use a cantilever with $k_c=$ 53.7 $\pm$ \SI{0.1}{N/m}, and repeat the indentation measurement three times for each device.
The classical relation for the bending rigidity, $D=Et^3/(12(1-\nu^2))$, is in general not valid for multilayer 2D materials, where the interlayer shear interactions are weak and slippage is inevitable. To describe this interaction, a calibration factor $\gamma$ is introduced, giving $D=\gamma Et^3/(12(1-\nu^2))$. Since the numbers of graphene and MoS$_2$ layers in the fabricated heterostructures are roughly 40 and 13, respectively, we adopt $\gamma_g=0.1$ and $\gamma_m=0.4$ from the literature \cite{wang2019bending}. We also assume $\gamma_n=\gamma_m$ due to their similar lattice structures. For NbS$_2$ resonators, we can directly fit the measured $F$ versus $\delta$ curves to Eq. 1 to obtain $n_0$ and $E_n$. However, for heterostructure resonators, given the different mechanical properties of the graphene and MoS$_2$ layers, the effective Young's modulus $E_h$ and effective bending rigidity $D_h$ are given by \cite{ye2017atomic,vsivskins2022nanomechanical}
\setcounter{equation}{0}
\begin{equation}
\label{eq:S1}
\begin{array}{l}
E_ht_h = E_gt_g+E_mt_m\quad \text{and}\quad D_ht_h = D_gt_g+D_mt_m,
\end{array}
\tag{S1}
\end{equation}
respectively, where $t_h=t_m+t_g$. As a result, the relation of $F$ versus $\delta$ for heterostructure resonators is expressed as
\setcounter{equation}{1}
\begin{equation}
\label{eq:S2}
F = \left[\frac{4\pi}{3r^2}\cdot \left(\frac{\gamma_g E_gt_g^4}{1-\nu_g^2}\cdot \frac{1}{t_h}+\frac{\gamma_m E_mt_m^4}{1-\nu_m^2}\cdot \frac{1}{t_h}\right) \right]\delta+n_0\pi\delta+(E_gt_g+E_mt_m)q^3 \left(\frac{\delta^3}{r^2}\right).
\tag{S2}
\end{equation}
Using the values $t_g=13.3$ \SI{}{nm}, $t_m=7.8$ \SI{}{nm}, $\nu_g=0.165$, $\nu_m=0.25$, $\gamma_g=0.1$ and $\gamma_m=0.4$, the part inside the parentheses of the first term in Eq. \ref{eq:S2} can be rewritten as $(E_gt_g\cdot11.46+E_mt_m\cdot9.60)\times10^{-18}$. This part is then replaced by $E_ht_h\cdot9.60\times10^{-18}$ and $E_ht_h\cdot11.46\times10^{-18}$ separately in the fit, causing a deviation of less than $0.7\%$ in the extracted $n_0$.
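The two prefactors quoted above can be recomputed directly from the stated thicknesses, Poisson ratios, and calibration factors. A minimal sketch (all numerical inputs taken from the text; factoring $E_gt_g$ and $E_mt_m$ out of the bracket in Eq.~\ref{eq:S2} leaves $\gamma t^3/((1-\nu^2)t_h)$ for each layer):

```python
# Recompute the bending-rigidity prefactors of Eq. (S2):
# after factoring out E*t for each layer, the remaining coefficient is
#   gamma * t^3 / ((1 - nu^2) * t_h).
t_g, t_m = 13.3e-9, 7.8e-9        # layer thicknesses (m)
nu_g, nu_m = 0.165, 0.25          # Poisson ratios
gamma_g, gamma_m = 0.1, 0.4       # interlayer-shear calibration factors
t_h = t_g + t_m

c_g = gamma_g * t_g ** 3 / ((1 - nu_g ** 2) * t_h)
c_m = gamma_m * t_m ** 3 / ((1 - nu_m ** 2) * t_h)
print(round(c_g * 1e18, 2), round(c_m * 1e18, 2))  # 11.46 and 9.6
```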
\subsection*{S2: Experimental results of nanomechanical characteristics for all fabricated ALD resonators in this work}
TABLE \ref{table: database1} gives the measured parameters of all ALD heterostructure devices, including the radius $r$ of the drums, Young's modulus $E_h$, pretension $n_0$, fundamental resonance frequencies $f_0^h$ and $f_0^g$ (the latter corresponding to the graphene membrane before ALD), quality factors $Q_h$ and $Q_g$, mode ratio $f_1^h/f_0^h$, and the calibration factor $\eta$ with respect to the mass of the membrane. TABLE \ref{table: database2} gives the measured parameters of all ALD NbS$_2$ devices. We cannot extract a precise value of $E_n$ for device D8 due to its small size, which would require a very large load $F$ to reach the cubic indentation regime. The second resonance frequency $f_1$ of device D8 is also missing, since we use a low-pass filter (up to \SI{60}{MHz}) on the VNA during the dynamic measurements.
Note that in the calculation of $\eta$ for heterostructure resonators, we have used the relation $\sigma =\eta(\rho_g t_g + \rho_m t_m)$, where the thickness $t_m$ of the ALD MoS$_2$ is determined by scanning across a scratch in the MoS$_2$ layer on the Si/SiO$_2$ substrate. We assume that the ALD MoS$_2$ grows mainly on top of the graphene membrane, since the average value of $\eta$ in TABLE~\ref{table: database1} is close to 1, indicating that the thickness of the ALD MoS$_2$ layer in the heterostructure is roughly equal to that on the substrate. For heterostructure devices D4 and D5, $\eta$ is larger than 1, which might result from a small quantity of ALD MoS$_2$ deposited on the bottom side of the graphene.
Figure~\ref{fig:SI1} shows AFM scans of ALD NbS$_2$ devices D1 and D2, as well as of all ALD heterostructure devices. Visible polymer residues, crumples, and wrinkles can be observed on these devices, which significantly affect their static and dynamic properties. More details on the TEM images of ALD MoS$_2$ can be found in our previous work \cite{sharma2020large}.
\begin{table} [h]
\centering
\caption{Nanomechanical properties of graphene/MoS$_2$(ALD) heterostructure resonators.}
\label{table: database1}
\begin{tabular}{lllllllll}
\hline
Device & $E_h$ (\SI{}{GPa}) & $n_0$ (\SI{}{N/m}) & $f^g_0$ (\SI{}{MHz}) & $Q_g$ & $f^h_0$ (\SI{}{MHz}) & $Q_h$ & $f^h_1/f^h_0$ & $\eta$ \\
\hline
D1 & 951.7 & 1.418 & 8.3 & 100.6 & 23.5 & 84.2 & 1.41 & 1.16 \\
\hline
D2 & 771.4 & 1.547 & 13.6 & 99.1 & 30.7 & 94.0 & 1.49 & 0.61 \\
\hline
D4 & 825.5 & 1.074 & 8.5 & 63.2 & 16.2 & 33.9 & 1.58 & 2.03 \\
\hline
D5 & 1095.7 & 1.293 & 9.0 & 57.5 & 14.6 & 34.0 & 1.71 & 3.23 \\
\hline
D6 & 1182.6 & 1.462 & 7.6 & 125.9 & 22.9 & 72.8 & 1.53 & 1.44 \\
\hline
D7 & 950.7 & 1.318 & 8.4 & 99.4 & 28.3 & 63.0 & 1.67 & 0.78 \\
\hline
D8 & 1162.9 & 0.892 & 10.2 & 33.6 & 25.8 & 26.4 & 1.92 & 1.01 \\
\hline
D10 & 820.9 & 1.010 & 12.4 & 43.0 & 35.0 & 50.3 & 1.45 & 0.43 \\
\hline
\end{tabular}
\end{table}
\begin{table} [h]
\centering
\caption{Nanomechanical properties of ALD NbS$_2$ resonators.}
\label{table: database2}
\begin{tabular}{lllllllll}
\hline
Device & $r$ (\SI{}{\micro\meter}) & $E_n$ (\SI{}{GPa}) & $n_0$ (\SI{}{N/m}) & $f_0$ (\SI{}{MHz}) & $Q_n$ & $f_1/f_0$ & $\eta$ \\
\hline
D1 & 4 & 116.3 & 0.899 & 12.4 & 31.2 & 1.81 & 2.05\\
\hline
D2 & 4 & 101.8 & 0.447 & 11.2 & 28.6 & 2.26 & 2.12\\
\hline
D3 & 4 & 96.5 & 0.593 & 10.8 & 26.7 & 1.58 & 2.20\\
\hline
D4 & 3 & 87.5 & 1.320 & 15.2 & 25.9 & 1.67 & 3.30\\
\hline
D5 & 3 & 106.7 & 1.077 & 15.8 & 28.5 & 1.72 & 3.60\\
\hline
D6 & 3 & 117.5 & 1.005 & 16.0 & 28.9 & 1.57 & 3.79\\
\hline
D7 & 3 & 83.2 & 0.797 & 15.1 & 25.1 & 3.10 & 3.05\\
\hline
D8 & 2 & $-$ & $-$ & 21.0 & 32.7 & $-$ & $-$ \\
\hline
\end{tabular}
\end{table}
\setcounter{figure}{0}
\begin{figure}[H]
\centering
\renewcommand{\thefigure}{S\arabic{figure}}
\includegraphics[width=13cm]{4.pdf}
\caption{AFM scanning results of our fabricated ALD devices. (a) and (b) ALD NbS$_2$ devices D1 and D2, respectively. (c)$-$(j) All measured ALD heterostructure devices D1 to D10, except the broken ones D3 and D9. Scale bar is \SI{5}{\micro\meter}.}
\label{fig:SI1}
\end{figure}
Figures~\ref{fig:SI2}a and \ref{fig:SI2}b show the obtained $E_h$ versus $Q_h$ for the heterostructure resonators and $n_0$ versus $Q_n$ for the NbS$_2$ resonators, respectively. Unlike the proportional relations shown in Figs.~\ref{fig: dynamics}b and \ref{fig: dynamics}c in the main text, $E_h$ versus $Q_h$ and $n_0$ versus $Q_n$ show no clear correlation here.
\setcounter{figure}{1}
\begin{figure}[H]
\centering
\renewcommand{\thefigure}{S\arabic{figure}}
\includegraphics[width=10cm]{5.pdf}
\caption{Discussion on mechanical properties of our fabricated ALD devices. (a) Effective Young's modulus $E_h$ versus quality factor $Q_h$ for heterostructure resonators. (b) Pretension $n_0$ versus quality factor $Q_n$ for NbS$_2$ resonators.}
\label{fig:SI2}
\end{figure}
\subsection*{S3: Experimental results of quality factors for purely exfoliated graphene/MoS$_2$ heterostructure resonators}
To shed light on the energy dissipation $Q_{m}^{-1}$ of the MoS$_2$ layer in the heterostructure, purely exfoliated graphene/MoS$_2$ heterostructure devices were fabricated (Fig.~\ref{fig:SI}a) and measured in the interferometry setup. As plotted in Fig.~\ref{fig:SI}b, the measured $Q_h^{-1}$ versus $Q_g^{-1}$ is fitted with Eq. \ref{Q}, yielding $\alpha=1.1\pm0.2$ and $1/Q_{m}=(17.6\pm1.9)\times 10^{-3}$.
\setcounter{figure}{2}
\begin{figure}[H]
\centering
\renewcommand{\thefigure}{S\arabic{figure}}
\includegraphics[width=10cm]{6.pdf}
\caption{Discussion on the dissipation of the MoS$_2$ layer in purely exfoliated graphene/MoS$_2$ heterostructure resonators. (a) Optical images of some of the fabricated devices (marked by dotted circles), where the red and blue frames mark the graphene (bottom) and MoS$_2$ (top) flakes, respectively. Scale bar is \SI{20}{\micro\meter}. (b) Measured $Q_h^{-1}$ versus $Q_g^{-1}$ (green points) and the fit with Eq. \ref{Q} in the main text (black line and shadow).}
\label{fig:SI}
\end{figure}
\end{document}
|
1909.08355
|
\section{Introduction and main result}
Historically, advances in measurement techniques have often driven
progress in physics. Over time, \emph{metrology} has developed
into a subject of its own, especially in the context of defining standard
units of measurement for physical quantities.
Quantum theory provides new perspectives on measurements, ranging
from fundamental limitations on measurements \cite{heisenberg27}
to new opportunities \cite{gio11}, as well as technical challenges
and even philosophical quagmires \cite{busch+2016}. From a practical
point of view, quantum information science requires ever better control
of microscopic systems and, hence, measurements which are as accurate
as possible. More specifically, quantum metrology \cite{Nawrocki19}
aims at finding bounds on the achievable measurement precision and
at identifying states which would be optimal for quantum measurements or other specific tasks. The optimal transmission of a Cartesian frame~\cite{Per01} or the efficient detection of inhomogeneous magnetic fields~\cite{Hak20} are typical examples. While the classical Cram\'er-Rao theorem \cite{Rao45, Cra46} provides
a lower bound on the variance of random estimators by means of the Fisher
information, its quantum-mechanical counterpart provides bounds for quantum
parameter estimation theory \cite{Hel76}. The quantum Cram\'er-Rao
bound is expressed as the inverse of the quantum Fisher information, which can be geometrically interpreted as the (Bures) distance between two quantum states differing by an infinitesimal amount in their parameter \cite{Hub92, Hub93}. It provides lower bounds on the variance of any quantum operator whose measurement aims at estimating the parameter. Optimal measurement is achieved by maximizing the quantum Fisher information over parameter-dependent states.
The quantum Cram\'er-Rao bound was calculated for instance in the reference
frame alignment problem \cite{Kol08}. This problem involves estimating
rotations about unknown axes. It has been shown in~\cite{Goldberg18} that spin states with vanishing spin expectation value and isotropic variances of the spin components are valuable for estimating such rotations, as they saturate the quantum Cram\'er-Rao bound for \emph{any} axis. Also, recently, the problem of characterizing a rotation about an unknown direction encoded into a spin-$j$ state has been considered in~\cite{MoC19}.
In this paper, we are interested in determining whether a quantum system
has undergone a rotation $R_{\mathbf{n}}(\eta)$ by a \emph{known}
angle $\eta$ about an \emph{unknown} axis $\mathbf{n}$. Suppose
first that we apply the rotation by $\eta$ to an initial state $|\psi\rangle$
about a \emph{known} axis and perform a measurement of the projector
$|\psi\rangle\langle\psi|$ in the rotated state $R_{\mathbf{n}}(\eta)\ket{\psi}$.
The expectation value of the observable $|\psi\rangle\langle\psi|$
is given by
\begin{equation}
F_{|\psi\rangle}(\eta,\mathbf{n})=|\bra\psi R_{\mathbf{n}}(\eta)|\psi\rangle|^{2}\,,\label{eq: fidelity}
\end{equation}
i.e.\ by the fidelity between the initial state and the final state.
The fidelity $F_{|\psi\rangle}(\eta,\mathbf{n})$ equals the probability of finding the quantum system in the initial state after the rotation. Thus, the probability of detecting that the rotation has occurred
is given by the quantity $1-F_{|\psi\rangle}(\eta,\mathbf{n})$. Therefore,
the measurement will be most sensitive if the rotation is applied
to states $|\psi\rangle$ which \emph{minimize} the expression \eqref{eq: fidelity}
for given angle and rotation axis.
Next, suppose that only the rotation angle $\eta$ is well-defined
while the rotation axis is not known, as described in \cite{ChrHer17}.
This situation occurs, for example, when spins prepared in the state $\ket\psi$ are---during the measurement sequence---subjected to a magnetic field whose direction randomly fluctuates on a time scale
much larger than the Larmor period. Measuring the observable $|\psi\rangle\langle\psi|$
on an ensemble of identically prepared systems will now produce a
value of the fidelity \eqref{eq: fidelity} averaged over all possible
spatial directions $\mathbf{n}$. Then, the most suitable quantum
states $\ket\psi$---called \emph{optimal} \emph{quantum rotosensors}
in \cite{ChrHer17}---are determined by the requirement that the
\emph{average fidelity}
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\frac{1}{4\pi}\int_{\mathcal{S}^{2}}F_{|\psi\rangle}(\eta,\mathbf{n})\,d\mathbf{n}\,,\label{eq: probability}
\end{equation}
achieves its minimum, for a given value of the parameter $\eta$.
The fidelity \eqref{eq: fidelity} and its average \eqref{eq: probability} also play a role when setting
up experiments which aim to determine an unknown rotation angle as accurately as possible. This is explained in more detail in Appendix~\ref{Appendix_param}.
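For the simplest case $j=1/2$ with $\ket\psi=\ket{1/2,1/2}$, the average \eqref{eq: probability} can be evaluated in closed form: writing $R_{\mathbf n}(\eta)=\cos(\eta/2)\,\mathbb{1}-i\sin(\eta/2)\,\mathbf n\boldsymbol{\cdot}\boldsymbol\sigma$ gives $F=\cos^2(\eta/2)+\sin^2(\eta/2)\,n_z^2$, and averaging $n_z^2$ over the sphere (mean $1/3$) yields ${\cal F}(\eta)=\cos^2(\eta/2)+\tfrac13\sin^2(\eta/2)$. A Monte Carlo sanity check of the definition (a sketch, not part of the paper's derivation):

```python
import numpy as np

# Monte Carlo check of the direction-averaged fidelity for j = 1/2,
# |psi> = |1/2, 1/2>, against the closed form
#   F_avg(eta) = cos^2(eta/2) + sin^2(eta/2) / 3.
rng = np.random.default_rng(0)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
psi = np.array([1, 0], dtype=complex)

def avg_fidelity(eta, samples=20000):
    total = 0.0
    for _ in range(samples):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)                    # uniform direction on S^2
        ns = sum(c * s for c, s in zip(n, sigma))  # n . sigma
        R = np.cos(eta / 2) * np.eye(2) - 1j * np.sin(eta / 2) * ns
        total += abs(psi.conj() @ R @ psi) ** 2
    return total / samples

eta = 1.0
exact = np.cos(eta / 2) ** 2 + np.sin(eta / 2) ** 2 / 3
assert abs(avg_fidelity(eta) - exact) < 5e-3
# symmetry check: F(eta) = F(2*pi - eta)
assert abs(avg_fidelity(eta) - avg_fidelity(2 * np.pi - eta)) < 1e-2
```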
For the spin values $j=1/2,1,3/2,2$, optimal quantum rotosensors
have been identified \cite{ChrHer17}, using an approach which combines
analytical and numerical methods. For rotation angles $\eta$ close
to $\pi$, the average fidelity is minimized systematically by \emph{coherent}
spin states. Coherent spin states are strongly localized in phase space and entirely specified by a spatial direction into which they point on the Bloch
sphere \cite{Are72}. For small rotation angles $\eta$, the
average fidelity is minimized by \emph{anticoherent} states, which
are characterized by the fact that they do not manifest any privileged
direction; in this respect, they are as distinct as possible from
coherent states \cite{Zimba06}. The role of anticoherent states
for optimal detection of rotations has also been observed and was subsequently quantified in terms of quantum Fisher information in~\cite{Goldberg18}.
Between these two extreme cases of $\eta\sim0$ and $\eta\sim\pi$, optimal states are neither coherent nor anticoherent
in general. From an experimental point of view, anticoherent and other non-classical spin states have been created
using a variety of physical systems. For instance, anticoherent states of quantum
light fields have been generated using orbital angular momentum states
of single photons with their usefulness for quantum metrology being
established in~\cite{Bou17}. Non-classical spin states---including
Schrödinger cat states (c.f.\ Sec.~\ref{sec: Optimal-quantum-rotosensors})---of
highly magnetic dysprosium atoms with spin quantum number $j=8$ have
been created in order to enhance the precision of a magnetometer \cite{Cha18}.
The main result of the present paper is a closed-form expression
of the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$, valid for
arbitrary values of $j$. A rather general argument, based solely
on the symmetries of the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$,
shows that it must be a linear combination of the form
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\varphi_{0}^{(j)}(\eta)+\sum_{t=1}^{\lfloor j\rfloor}\varphi_{t}^{(j)}(\eta)\,\mathcal{A}_{t}(|\psi\rangle),\label{PexpansionAC}
\end{equation}
as explained in detail in Sec.~\ref{sec: Tools and concepts}.
In this expression, the ${\cal A}_{t}(\ket\psi)$ are the anticoherence
measures of a state $\ket\psi$, introduced in \cite{Bag17} and given
explicitly in Eq.~\eqref{ACR}, while the real-valued functions $\varphi_{t}^{(j)}(\eta)$
are trigonometric polynomials independent of $\ket{\psi}$, and $\lfloor j\rfloor$
is the largest integer smaller than or equal to $j$. The main challenge is to
calculate the $\eta$-dependent coefficients $\varphi_{t}^{(j)}(\eta)$,
which we do in Sec.~\ref{sec: Closed-form}.
In earlier works, the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$
had been expressed as a sum of functions of $\eta$ weighted by \emph{state-dependent}
coefficients, upon representing the state in the polarization-tensor
basis \cite{ChrHer17}. The advantage of relation \eqref{PexpansionAC}
is that the average fidelity depends on the state under consideration
only through its measures of anticoherence, and thus it directly relates
to the degree of coherence or anticoherence of the state. Expression
\eqref{PexpansionAC} allows us to identify optimal quantum rotosensors
for spin quantum numbers up to $j=5,$ thereby confirming the role
played by coherent and anticoherent states beyond $j=2$. Readers
mainly interested in the optimal quantum rotosensors may want to directly
consult Sec.~\ref{sec: Optimal-quantum-rotosensors}.
Let us outline the overall argument leading to the expression of the
average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ in \eqref{PexpansionAC}.
In Sec.~\ref{sec: Tools and concepts}, we introduce a number of
tools and concepts feeding into the derivation of \eqref{PexpansionAC}:
first, we discuss the symmetries built into the average fidelity ${\cal F}_{\ket\psi}(\eta)$,
followed by a brief summary of the Majorana representation which enables
us to interpret spin-$j$ states as completely symmetric states of
$N=2j$ qubits. This perspective allows us to introduce, for $1\leqslant t \leqslant \lfloor j\rfloor$, the anticoherence
measure $\mathcal{A}_{t}(\ket\psi)$, defined as the linear entropy of the $t$-qubit reduced density matrix of $\ket\psi\bra\psi$. To actually carry
out the integration in Eq.~\eqref{eq: probability}, we will use a
tensor representation (see Sec.~\ref{subsec:Tensor-representation})
of mixed spin-$j$ states generalizing the Bloch representation.
In addition, this representation also enables us to exploit the symmetries
of the average fidelity which can only depend on expressions invariant
under $\mathrm{SU}(2)$ rotations. As shown in Sec.~\ref{subsec:-Invariants},
it is then possible to establish a linear relation between these invariants
and the anticoherence measures $\mathcal{A}_{t}(\ket\psi)$, which finally leads
to \eqref{PexpansionAC}.
Section~\ref{sec: Closed-form} is dedicated to deriving explicit
expressions for the functions $\varphi_{t}^{(j)}(\eta)$. This will
be done in two ways: the first one is based on the fact that anticoherence
measures are explicitly known for certain states, so that the functions
$\varphi_{t}^{(j)}(\eta)$ appear as solutions of a linear system
of equations. The second approach makes use of representations of
the Lorentz group and allows us to obtain a general closed expression.
In Sec.~\ref{sec: Optimal-quantum-rotosensors} we make use of this
closed-form expression to identify the optimal quantum rotosensors.
We conclude with a brief summary given in Sec. \ref{sec:Conclusion}.
\section{Concepts and tools\label{sec: Tools and concepts}}
In this section, we introduce the tools that will be needed to address
the optimality problem described in the Introduction.
\subsection{Notation}
Quantum systems with integer or half-integer spin $j$ are described
by states $\ket{\psi}$ of the Hilbert space $\mathbb{C}^{N+1}$ with $N=2j$,
which carries an $(N+1)$-dimensional representation of the group
SU$(2)$. The components of the angular momentum operator ${\bf J}$
satisfy $[J_{k},J_{\ell}]=i\varepsilon_{k\ell m}J_{m}$, $k,\ell,m\in\{x,y,z\}$, where $\varepsilon_{k\ell m}$ is the Levi-Civita symbol.
Denoting unit vectors in $\mathbb{R}^{3}$ by
\begin{equation}
\mathbf{n}=\begin{pmatrix}\sin\theta\cos\phi\\
\sin\theta\sin\phi\\
\cos\theta
\end{pmatrix}\,,\quad\theta\in[0,\pi]\,,\quad\phi\in[0,2\pi[\,,\label{eq: unit vector}
\end{equation}
the operator
\begin{equation}
R_{\mathbf{n}}(\eta)=e^{-i\eta\mathbf{J}\boldsymbol{\cdot}\mathbf{n}}\label{rot}
\end{equation}
describes a rotation by an angle $\eta\in[0,4\pi[$ about the direction
$\mathbf{n}$.
\subsection{Symmetries}
By definition, the average fidelity in \eqref{eq: probability} is
a positive function of the angle $\eta$ and of the state $\ket{\psi}$ and possesses three symmetries:
it is $2\pi$-periodic in $\eta$, symmetric about $\eta=\pi$, and invariant under rotation of $\ket{\psi}$.
Periodicity with period $2\pi$ comes from the fact that $R_{\mathbf{n}}(2\pi)=(-1)^{N}$.
Symmetry about $\eta=\pi$ is equivalent to
\begin{equation}
{\cal F}_{\ket{\psi}}(\eta)={\cal F}_{\ket{\psi}}(2\pi-\eta)\,,\label{eq: reflection symmetry for angle eta}
\end{equation}
which can be shown using $R_{\mathbf{n}}(2\pi-\eta)=(-1)^{N}R_{-\mathbf{n}}(\eta)$
and the fact that the set of directions averaged over in \eqref{eq: probability}
is the same irrespective of the sign of the unit vector ${\bf n}$
since the fidelity \eqref{eq: fidelity} is given by the \emph{squared}
modulus of the overlap between the states $|\psi\rangle$ and $R_{\mathbf{n}}(\eta)\ket{\psi}$.
Invariance under rotation of $\ket{\psi}$ can be understood in the following way.
Let $R_{\mathbf{m}}(\chi)=e^{-i\chi\mathbf{J}\boldsymbol{\cdot}\mathbf{m}}$
be a unitary operator
representing a rotation in $\mathbb{R}^{3}$ by an angle $\chi\in[0,4\pi[$
about the direction $\mathbf{m}$, acting on a state $|\psi\rangle\in\mathbb{C}^{N+1}$.
Then the average fidelities ${\cal F}$ associated with the states $|\psi\rangle$
and $|\psi^R\rangle\equiv R_{\mathbf{m}}(\chi)|\psi\rangle$ are equal. Indeed,
we have
\begin{equation}
F_{|\psi^R\rangle}(\eta,\mathbf{n})=|\bra{\psi}R_{\mathbf{m}}(\chi)^{\dagger}R_{\mathbf{n}}(\eta)R_{\mathbf{m}}(\chi)|\psi\rangle|^{2}\label{eq: trf of F under rotations}
\end{equation}
and
\begin{align}
R_{\mathbf{m}}(\chi)^{\dagger}R_{\mathbf{n}}(\eta)R_{\mathbf{m}}(\chi) & =e^{-i\eta (R_{\mathbf{m}}(\chi)^{\dagger}\mathbf{J}R_{\mathbf{m}}(\chi))\boldsymbol{\cdot}\mathbf{n}}\nonumber \\
& =e^{-i\eta(R\mathbf{J})\boldsymbol{\cdot}\mathbf{n}}=e^{-i\eta\mathbf{J}\boldsymbol{\cdot}\mathbf{n}^R}\,,\label{eq: trf of R under rotation}
\end{align}
with ${\bf n}^R\equiv R^{T}\mathbf{n}$ the vector obtained by the rotation $R\in$ SO$(3)$ associated with $R_{\mathbf{m}}(\chi)$.
Due to the rotational invariance of the unit sphere $\mathcal{S}^{2}$
appearing in \eqref{eq: probability} (i.e., the rotational invariance
of the measure used), the result of the integration is the
same, leading to
\begin{align}
{\cal F}_{|\psi^R\rangle}(\eta) & =\frac{1}{4\pi}\int_{\mathcal{S}^{2}}F_{|\psi^R\rangle}(\eta,\mathbf{n})\,d\mathbf{n}\nonumber \\
& =\frac{1}{4\pi}\int_{\mathcal{S}^{2}}F_{|\psi\rangle}(\eta,\mathbf{n})\,d\mathbf{n}={\cal F}_{|\psi\rangle}(\eta)\,.\label{eq: invariance of av fid}
\end{align}
This invariance of the fidelity can be seen in a geometrically appealing way by use of the Majorana representation,
which we consider now.
\subsection{Majorana representation of pure spin states \label{subsec:Majorana-representation}}
The Majorana representation establishes a one-to-one correspondence
between spin-$j$ states and states of $N=2j$ qubits that
are invariant under permutations of their constituent qubits (see e.g.~\cite{Bie81,Zyczkowski_book,Coe98}). It allows one
to visualise geometrically a pure spin-$j$ state as $N$ points on
the unit sphere, associated with the Bloch vectors of the $N$ qubits.
The Majorana points are often referred to as stars, and the
whole set of Majorana points of a given state as its Majorana constellation.
Considering a spin-$j$ state $\ket{\psi}$ as an $N$-qubit state,
any local unitary (LU) operation $U=u^{\otimes N}$ with $u\in \mathrm{SU}(2)$
transforms $\ket{\psi}$ into a state whose Majorana
constellation is obtained by the constellation of $\ket{\psi}$ rotated
by the SO$(3)$ rotation associated with $u$. Spin-coherent states take a very simple form in the Majorana representation, as they can be seen as the tensor product $\ket{\phi}^{\otimes N}$ of some spin-$1/2$ state $\ket{\phi}$. Their constellation thus reduces to an $N$-fold degenerate point.
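As an illustration, the Majorana constellation can be computed numerically from the roots of the Majorana polynomial. The following Python sketch is not part of the derivation; it assumes numpy and one common choice of sign and stereographic-projection convention. It recovers the $N$-fold degenerate star of a spin-coherent state, as well as the regular tetrahedron of the $2$-anticoherent state discussed in Sec.~\ref{subsec:jupto2}:

```python
import numpy as np
from math import comb
from itertools import combinations

def majorana_stars(c):
    """Majorana stars of a spin-j state with Dicke coefficients
    c = [c_j, c_{j-1}, ..., c_{-j}].  One common convention: take the
    roots z of p(z) = sum_k (-1)^k sqrt(binom(N,k)) c_{j-k} z^{N-k}
    and map them to unit Bloch vectors by inverse stereographic
    projection (assumes c_j != 0, i.e. no stars at the south pole)."""
    N = len(c) - 1
    coeffs = [(-1)**k * np.sqrt(comb(N, k)) * c[k] for k in range(N + 1)]
    roots = np.roots(coeffs)
    return [np.array([2*z.real, 2*z.imag, 1 - abs(z)**2]) / (1 + abs(z)**2)
            for z in roots]

# Spin-coherent state |2,2>: a single 4-fold degenerate star
coherent = majorana_stars([1, 0, 0, 0, 0])
# Tetrahedron state (1/2)(|2,-2> + i sqrt(2)|2,0> + |2,2>)
tet = majorana_stars([0.5, 0, 1j/np.sqrt(2), 0, 0.5])
dots = [u @ v for u, v in combinations(tet, 2)]
print(np.round(dots, 6))   # all six pairwise overlaps equal -1/3
```

The pairwise dot products $-1/3$, characteristic of a regular tetrahedron, are invariant under rigid rotations, so this check does not depend on the orientation convention of the projection.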
The fidelity \eqref{eq: fidelity} is given by the squared modulus of the overlap between $|\psi\rangle$
and $R_{\mathbf{n}}(\eta)\ket{\psi}$. Since the Majorana constellation of
$R_{\mathbf{n}}(\eta)\ket{\psi}$ is obtained by rigidly rotating that of $|\psi\rangle$, the fidelity \eqref{eq: fidelity} only depends on the relative positions of
these two sets of points. The \emph{average} transition probability
${\cal F}_{|\psi\rangle}(\eta)$ is obtained by integrating over all constellations generated
by rigid rotations of the Majorana constellation of
$|\psi\rangle$, and therefore it must be invariant under LU.
In other words, the equality \eqref{eq: invariance of av fid}
takes the form ${\cal F}_{|\psi\rangle}(\eta)={\cal F}_{u^{\otimes N}|\psi\rangle}(\eta)$.
\subsection{Anticoherence measures \label{subsec:Anticoherence-measures}}
An order-$t$ \emph{anticoherent} state $\ket\chi$
is defined by the property that $\langle\chi|(\mathbf{J}\boldsymbol{\cdot}\mathbf{n})^{k}|\chi\rangle$
is independent of the vector $\mathbf{n}$ for all $k=1,\ldots,t$. In the Majorana representation, it is characterized by the fact that its $t$-qubit reduced density matrix is the maximally mixed state in the symmetric sector~\cite{prl}.
The degree of coherence or $t$-anticoherence of a spin-$j$ pure
state $|\psi\rangle$ can be measured by the quantities $\mathcal{A}_{t}(|\psi\rangle)$,
which are positive-valued functions of $|\psi\rangle$ \cite{Bag17}.
Let $\rho_{t}=\mathrm{tr}_{\neg t}\left[|\psi\rangle\langle\psi|\right]$ be
the $t$-qubit reduced density matrix of the state $|\psi\rangle$
interpreted as an $N$-qubit symmetric state with $N=2j$; it is obtained by taking
the partial trace over all but $t$ qubits (it does not matter which
qubits are traced over since $|\psi\rangle$ is a symmetric state).
The measures $\mathcal{A}_{t}(|\psi\rangle)$ are defined as the rescaled
linear entropies
\begin{equation}
\mathcal{A}_{t}(|\psi\rangle)=\frac{t+1}{t}\left(1-\mathrm{tr}\left[\rho_{t}^{2}\right]\right)\,,\label{ACR}
\end{equation}
where $\mathrm{tr}\left[\rho_{t}^{2}\right]$ is the purity of $\rho_{t}$.
Thus, anticoherence measures are quartic in the state $|\psi\rangle$,
range from $0$ to $1$, and are invariant
under SU$(2)$ rotations. Spin-coherent states are characterized by
pure reduced states and thus are the only states such that $\mathcal{A}_{t}=0$. Anticoherent
states to order $t$ are characterized by $\rho_t=\mathbb{1}/(t+1)$ and thus are the only states such that $\mathcal{A}_{t}=1$. In particular, if a state $|\psi\rangle$
is anticoherent to some order $t$,
then it is necessarily anticoherent to all lower orders $t'=1,\ldots,t$ since reductions of the maximally mixed state are maximally mixed.
While for any state we have $0\leqslant\mathcal{A}_{t}\leqslant1$,
not all possible tuples $(\mathcal{A}_{1},\mathcal{A}_{2},\ldots)$
are realised by a physical state $|\psi\rangle$. For instance, since
$\mathcal{A}_{t}=1$ implies that $\mathcal{A}_{t'}=1$ for all $t'\leqslant t$,
the choice $\mathcal{A}_{2}=1$ and $\mathcal{A}_{1}<1$ cannot correspond
to any state. We denote the domain of admissible values of the measures
$\mathcal{A}_{t}$ by $\Omega$.
\subsection{Tensor representation of mixed states \label{subsec:Tensor-representation}}
We now introduce a tensor representation of an arbitrary (possibly
mixed) spin-$j$ state $\rho$ acting on an $(N+1)$-dimensional Hilbert
space with $N=2j$, following~\cite{prl}. Any state can be expanded as
\begin{equation}
\rho=\frac{1}{2^{N}}\,x_{\mu_{1}\mu_{2}\ldots\mu_{N}}S_{\mu_{1}\mu_{2}\ldots\mu_{N}}.\label{rhoarbitrary}
\end{equation}
Here and in what follows, we use Einstein summation convention
for repeated indices, with Greek indices running from $0$ to $3$ and Latin indices running from $1$ to $3$.
Here, the $S_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ are $(N+1)\times(N+1)$
Hermitian matrices invariant under permutation of the indices.
The $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ are real coefficients also invariant
under permutation of their indices, which enjoy what we call the tracelessness
property
\begin{equation}
\sum_{a=1}^{3}x_{aa\mu_{3}\ldots\mu_{N}}=x_{00\mu_{3}\ldots\mu_{N}}\,,\quad\forall\;\mu_{3},\ldots,\mu_{N}.\label{traceless}
\end{equation}
Whenever $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ has some indices equal
to $0$, we take the liberty of omitting them, so that e.g.~for a spin-$3$
state $x_{110200}$ may be written $x_{112}$ (recall that the order
of the indices does not matter). In the case of a spin-coherent state
given by its unit Bloch vector $\mathbf{n}=(n_{1},n_{2},n_{3})$, the coefficients
in \eqref{rhoarbitrary} are simply given by $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}=n_{\mu_{1}}n_{\mu_{2}}\ldots n_{\mu_{N}}$,
with $n_{0}=1$.
In the following, we will make use of two essential properties of
the tensor representation. Namely, let us consider a state $\rho$
with coordinates $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ in the expansion
\eqref{rhoarbitrary}. Then, the tensor coordinates of the $t$-qubit
reduced state $\rho_{t}$ in the expansion \eqref{rhoarbitrary} are
simply given by $x_{\mu_{1}\mu_{2}\ldots\mu_{t}}=x_{\mu_{1}\mu_{2}\ldots\mu_{t}0\ldots0}$.
Thus, since we omit the zeros in the string $\mu_{1}\mu_{2}\ldots\mu_{N}$,
the tensor coordinates of $\rho_{t}$ and $\rho$ coincide for any
string of $k\leq t$ nonzero indices.
The second property we use is that for states $\rho$ and $\rho'$
in the form \eqref{rhoarbitrary} with tensor coordinates respectively
$x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ and $x'_{\mu_{1}\mu_{2}\ldots\mu_{N}}$
we have
\begin{equation}
\mathrm{tr}\left[\rho\rho'\right]=\frac{1}{2^{N}}\sum_{\mu_{1},\mu_{2},\ldots,\mu_{N}}x_{\mu_{1}\mu_{2}\ldots\mu_{N}}x'_{\mu_{1}\mu_{2}\ldots\mu_{N}}.
\end{equation}
Note that this equality holds despite the fact that the $S_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ are not orthogonal; this property follows from the fact that these matrices form a $2^N$-tight frame, see~\cite{prl}.
In particular, for a pure state $\rho=\ket{\psi}\bra{\psi}$, the
equality $\mathrm{tr}\rho^{2}=1$ translates into
\begin{equation}
\sum_{\mu_{1},\mu_{2},\ldots,\mu_{N}}x_{\mu_{1}\mu_{2}\ldots\mu_{N}}^{2}=2^{N},\label{forpure}
\end{equation}
while the purity of the reduced density matrix $\rho_{t}$ reads
\begin{equation}
\mathrm{tr}\left[\rho_{t}^{2}\right]=\frac{1}{2^{t}}\sum_{\mu_{1},\mu_{2},\ldots,\mu_{t}}x_{\mu_{1}\mu_{2}\ldots\mu_{t}}^{2}\,.\label{trrt2}
\end{equation}
The normalization condition $\mathrm{tr}\left[\rho\right]=1$ imposes $x_{00\ldots0}=1$.
A consequence of \eqref{traceless} is then that $\sum_{a=1}^3x_{aa}=1$.
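As a quick numerical sanity check of this representation, one can build the tensor coordinates of a spin-coherent state, $x_{\mu_{1}\ldots\mu_{N}}=n_{\mu_{1}}\ldots n_{\mu_{N}}$, and verify the tracelessness property \eqref{traceless} together with the purity condition \eqref{forpure}. A minimal Python sketch (numpy assumed; not part of the derivation):

```python
import numpy as np
from itertools import product

N = 4                                    # N = 2j
v = np.array([0.3, -0.5, 1.0])
v /= np.linalg.norm(v)                   # Bloch vector of the coherent state
n = np.concatenate(([1.0], v))           # n_mu with n_0 = 1

# tensor coordinates x_{mu1...muN} = n_{mu1} ... n_{muN}
x = {mu: np.prod(n[list(mu)]) for mu in product(range(4), repeat=N)}

# tracelessness property: sum_a x_{a a mu3...muN} = x_{0 0 mu3...muN}
for rest in product(range(4), repeat=N - 2):
    assert np.isclose(sum(x[(a, a) + rest] for a in (1, 2, 3)), x[(0, 0) + rest])

# purity condition for a pure state: sum over all indices of x^2 equals 2^N
print(np.isclose(sum(val**2 for val in x.values()), 2.0**N))   # True
```

Both checks reduce to $\sum_{\mu}n_{\mu}^{2}=2$ and $|\mathbf{n}|=1$, which is why they hold exactly.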
\subsection{SU$(2)$-Invariants \label{subsec:-Invariants}}
If $u\in$ SU(2) and $R\in$ SO(3) is the corresponding rotation matrix,
then the tensor coordinates of $U\rho U^{\dagger}$ with $U=u^{\otimes N}$
are the $\mathsf{R}_{\mu_{1}\nu_{1}}\ldots\mathsf{R}_{\mu_{N}\nu_{N}}x_{\nu_{1}\ldots\nu_{N}}$
where $\mathsf{R}$ is the $4\times4$ orthogonal matrix
\begin{equation}
\mathsf{R}=\left(\begin{array}{c|c}
\begin{array}{c}
1\end{array} & \begin{array}{c}
0\end{array}\\
\hline \begin{array}{c}
0\end{array} & \begin{array}{c}
R\end{array}
\end{array}\right).\label{matrixR}
\end{equation}
That is, $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$ transforms as a tensor.
Under such transformations, $x_{\mu}x_{\mu}$ goes into $\mathsf{R}_{\mu\nu}\mathsf{R}_{\mu\nu'}x_{\nu}x_{\nu'}=(\mathsf{R}^{T}\mathsf{R})_{\nu'\nu}x_{\nu}x_{\nu'}=x_{\nu}x_{\nu}$,
where the last equality comes from orthogonality of $\mathsf{R}$.
Thus $x_{\mu}x_{\mu}$ is an SU(2) invariant. Similarly, $x_{\mu}x_{\mu\nu}x_{\nu}$
and, more generally, any product of the $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$
such that all indices are contracted (i.e.\ summed from $0$ to $3$),
are invariant under SU(2) action on $\rho$. One can then show by
induction that products of terms $x_{a_{1}a_{2}\ldots a_{k}}$ with
$k\leqslant N$ where all indices appear in pairs and are summed from
$1$ to $3$ are also SU(2) invariant. For instance, $x_{a}x_{a}$,
$x_{ab}x_{ab}$, $x_{ab}x_{bc}x_{ca}$, $x_{a}x_{ab}x_{b}$ are such
invariants.
Invariants of degree 1 in $x$ are of the form $x_{a_{1}a_{2}\ldots a_{2k}}$,
where the $a_{i}$ appear in pairs. Since the order of indices is
not relevant, these invariants are in fact of the form $x_{a_{1}a_{1}a_{2}a_{2}\ldots a_{k}a_{k}}$.
Because of Eq.~\eqref{traceless}, each pair can be replaced by zeros
in the string, so that $x_{a_{1}a_{1}a_{2}a_{2}\ldots a_{k}a_{k}}=x_{00\ldots0}=1$.
Therefore, there is no nontrivial invariant of degree 1. The invariants of degree
2 are products of the form $x_{a_{1}a_{2}\ldots a_{k}}x_{b_{1}b_{2}\ldots b_{k'}}$
where indices appear in pairs and are summed from $1$ to $3$. If
the two indices of a pair appear in the same index string ($a_{1}a_{2}\ldots a_{k}$
or $b_{1}b_{2}\ldots b_{k'}$), then from Eq.~\eqref{traceless},
they can again be replaced by zeros and discarded. Thus the invariants
of degree 2 are $\kappa_{1}=x_{a}x_{a}$, $\kappa_{2}=x_{ab}x_{ab}$,
and more generally, for $1\leqslant r\leqslant N$,
\begin{equation}
\kappa_{r}=x_{a_{1}a_{2}...a_{r}}x_{a_{1}a_{2}...a_{r}}.\label{defkappa}
\end{equation}
Using \eqref{ACR} and \eqref{trrt2} one can express the invariants
$\kappa_{r}$ in terms of a linear combination of the $\mathcal{A}_{t}$.
Indeed, grouping together terms with the same number of nonzero indices
in \eqref{trrt2} yields
\begin{equation}
\mathrm{tr}\left[\rho_{t}^{2}\right]=\frac{1}{2^{t}}\sum_{\mu_{1},\mu_{2},\ldots,\mu_{t}}x_{\mu_{1}\mu_{2}\ldots\mu_{t}}^{2}=\frac{1}{2^{t}}\sum_{r=0}^{t}\binom{t}{r}\kappa_{r}\,.\label{trbis}
\end{equation}
Inverting that relation via the binomial inversion formula, we obtain
\begin{equation}
\kappa_{r}=\sum_{t=0}^{r}(-1)^{t+r}\,2^{t}\binom{r}{t}\mathrm{tr}\left[\rho_{t}^{2}\right]\,,\label{invrel}
\end{equation}
and by use of \eqref{ACR} we finally can express the $\mathrm{SU}(2)$-invariants
in terms of anticoherence measures,
\begin{equation}
\kappa_{r}=\sum_{t=0}^{r}(-1)^{t+r}\,2^{t}\binom{r}{t}\left(1-\frac{t}{t+1}\mathcal{A}_{t}\right)\label{kappar}
\end{equation}
for $r=1,\ldots,N$.
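The pair of relations \eqref{trbis} and \eqref{invrel} is a standard binomial inversion. A minimal numerical sketch (Python with numpy assumed) checking that they are mutually inverse for arbitrary values of the $\kappa_{r}$:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
N = 6
kappa = np.concatenate(([1.0], rng.uniform(0.0, 2.0, N)))   # kappa_0 = 1

# purities from eq. (trbis): tr[rho_t^2] = 2^{-t} sum_r binom(t,r) kappa_r
purity = [sum(comb(t, r) * kappa[r] for r in range(t + 1)) / 2**t
          for t in range(N + 1)]

# invert via eq. (invrel): kappa_r = sum_t (-1)^{t+r} 2^t binom(r,t) tr[rho_t^2]
kappa_back = [sum((-1)**(t + r) * 2**t * comb(r, t) * purity[t]
                  for t in range(r + 1)) for r in range(N + 1)]

print(np.allclose(kappa, kappa_back))   # True
```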
\subsection{General form of the average fidelity}
Let us now explain why the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$
given in Eq.~\eqref{PexpansionAC} is a linear combination of the
lowest $\left\lfloor j\right\rfloor$ anticoherence measures $\mathcal{A}_{t}$.
Due to its rotational symmetry, the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$---when
considered as a function of the tensor coordinates $x_{\mu_{1}\mu_{2}\ldots\mu_{N}}$---can
only involve invariants constructed from these coordinates. With ${\cal F}_{|\psi\rangle}(\eta)$
being quadratic in $\rho=\ket{\psi}\bra{\psi}$, it must also be quadratic
in $x$. As there is no invariant of degree 1, the only invariants
that can appear in the expression of ${\cal F}_{|\psi\rangle}(\eta)$
are the invariants $\kappa_{r}$ defined in \eqref{defkappa}. Since
the quantity ${\cal F}_{|\psi\rangle}(\eta)$ is quadratic, it must
be a linear combination of the coefficients $\kappa_{r}$, which, according
to Eq.~\eqref{kappar}, implies that ${\cal F}_{|\psi\rangle}(\eta)$
is also a linear combination of the $\mathcal{A}_{t}$. Furthermore,
the identity
\begin{equation}
\mathrm{tr}\left[\rho_{t}^{2}\right]=\mathrm{tr}\left[\rho_{N-t}^{2}\right]\,,\label{eq: t vs N-t symmetry}
\end{equation}
which holds for any pure state, means that the anticoherence measures
$\mathcal{A}_{t}$ for $t>N/2$ can be expressed in terms of the measures
$\mathcal{A}_{t}$ for $t< N/2$. Therefore, \eqref{PexpansionAC}
is the most general form the fidelity ${\cal F}_{|\psi\rangle}(\eta)$
can take, with the dependence in $\eta$ being only in the coefficients
of the measures $\mathcal{A}_{t}$.
\subsection{Generalizations}
It is worth stressing that the form \eqref{PexpansionAC} for the average fidelity also holds for more general types of average fidelity
\begin{equation}\label{genfid}
\frac{1}{4\pi}\int_{\mathcal{S}^{2}}|\langle\psi|U_{\mathbf{n}}(\eta)|\psi\rangle|^{2}\,d\mathbf{n}
\end{equation}
between a state $|\psi\rangle$ and its image under the unitary
\begin{equation}
U_{\mathbf{n}}(\eta)=e^{-i\eta\, f(\mathbf{J}\boldsymbol{\cdot}\mathbf{n})},
\end{equation}
where $f$ is an arbitrary real analytic function, ensuring that $f(\mathbf{J}\boldsymbol{\cdot}\mathbf{n})$ is a Hermitian operator. Indeed, by an argument similar to that of Sec.~\ref{subsec:-Invariants}, the generalized fidelity \eqref{genfid} can be expressed as a function of the $\kappa_r$, and hence of the $\mathcal{A}_t$. An interesting case is when $U_{\mathbf{n}}(\eta)$ is a spin-squeezing operator, which corresponds to choosing $f(\mathbf{J}\boldsymbol{\cdot}\mathbf{n})=(\mathbf{J}\boldsymbol{\cdot}\mathbf{n})^2$.
Moreover, if we now consider the quantities
\begin{equation}\label{genfid2}
\frac{1}{4\pi}\int_{\mathcal{S}^{2}}|\langle\psi|U_{\mathbf{n}}(\eta)|\psi\rangle|^{2k}\,d\mathbf{n}
\end{equation}
with integer $k\geqslant 2$, the same arguments show that they are linear combinations of higher-order invariants, leading to generalizations of the relation \eqref{kappar}.
\section{Closed form of the average fidelity \label{sec: Closed-form}}
In this section we derive the angular functions $\varphi_{t}^{(j)}(\eta)$,
which characterize the fidelity through \eqref{PexpansionAC}, in
two different ways. The first method (subsection \ref{closed1})
is based on the fact that anticoherence measures can be evaluated explicitly
for Dicke states. The second method (subsection \ref{closed2}) exploits
a tensor representation of spin states \cite{prl} which uses Feynman
rules from relativistic spin theory. These approaches are independent, and we checked, for all integer and half-integer values of $j$ up to $26$, that they yield the same angular functions, as expected. Technical details are relegated to appendices in both cases.
\subsection{Derivation based on anticoherence measures for Dicke states \label{closed1} }
In the following, we will work in the standard angular momentum basis
of $\mathbb{C}^{N+1}$, for a positive integer or half-integer value of $j=N/2$. It consists of the
Dicke states $\left\{ \ket{j,m},\left|m\right|\leq j\right\} $ given
by the common eigenstates of $\mathbf{J}^{2}$, the square of the
angular momentum operator $\mathbf{J}$, and of its $z$-component
$J_{z}$. In this basis, any spin-$j$ state $|\psi\rangle$ can
be expanded as
\begin{equation}
|\psi\rangle=\sum_{m=-j}^{j}c_{m}\,\ket{j,m}\,,\label{eq:jm_decomp}
\end{equation}
with $c_{m}\in\mathbb{C}$ and $\sum_{m=-j}^{j}|c_{m}|^{2}=1$.
The first derivation is based on the fact that both the measures of
$t$-anticoherence $\mathcal{A}_{t}(|j,m\rangle)$ and the average
fidelities ${\cal F}_{|j,m\rangle}(\eta)$ can be determined explicitly
for Dicke states. Their measures of $t$-anticoherence are given by
\begin{equation}
\mathcal{A}_{t}(|j,m\rangle)=\frac{t+1}{t}\left[1-\frac{\sum_{\ell=0}^{t}\binom{j+m}{t-\ell}^{2}\binom{j-m}{j-m-\ell}^{2}}{\binom{2j}{t}^{2}}\right].
\label{ACRDicke}
\end{equation}
They can readily be obtained from the purities $\mathrm{tr}\left[\rho_{t}^{2}\right]$ for a state of the form \eqref{eq:jm_decomp}, which were calculated in \cite{Bag17} in terms of the coefficients $c_{m}$ and read
\begin{equation}
\mathrm{tr}\left[\rho_{t}^{2}\right]=\sum_{q,\ell=0}^t\left| \sum_{k=0}^{2j-t} c_{j-k-\ell}^* \, c_{j-k-q} \,\Gamma_k^{\ell q} \right|^2
\label{puritiescm}
\end{equation}
with
\begin{equation}
\Gamma_k^{\ell q}=\frac{\sqrt{\binom{2j-k-q}{t-q}\binom{2j-k-\ell}{t-\ell}\binom{k+q}{k}\binom{k+\ell}{k}}}{\binom{2j}{t}}.
\end{equation}
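As a consistency check, the closed form \eqref{ACRDicke} can be compared with the purity formula \eqref{puritiescm} evaluated for Dicke states, for which $c_{m'}=\delta_{mm'}$. A Python sketch (numpy assumed; the array-index convention $c[i]=c_{j-i}$ is ours):

```python
import numpy as np
from math import comb, sqrt

def purity_dicke(j, m, t):
    """tr[rho_t^2] for a Dicke state |j,m>, via the closed form (ACRDicke)."""
    N = round(2 * j)
    num = sum(comb(round(j + m), t - l)**2 * comb(round(j - m), round(j - m) - l)**2
              for l in range(t + 1) if round(j - m) - l >= 0)
    return num / comb(N, t)**2

def purity_general(j, t, c):
    """tr[rho_t^2] from eq. (puritiescm); index convention c[i] = c_{j-i}."""
    N = round(2 * j)
    total = 0.0
    for q in range(t + 1):
        for l in range(t + 1):
            z = 0.0 + 0.0j
            for k in range(N - t + 1):
                g = sqrt(comb(N - k - q, t - q) * comb(N - k - l, t - l)
                         * comb(k + q, k) * comb(k + l, k)) / comb(N, t)
                z += np.conj(c[k + l]) * c[k + q] * g
            total += abs(z)**2
    return total

j = 2.5
for t in (1, 2):
    for i in range(round(2 * j) + 1):            # Dicke state |j, j-i>
        c = np.zeros(round(2 * j) + 1, dtype=complex)
        c[i] = 1.0
        assert np.isclose(purity_dicke(j, j - i, t), purity_general(j, t, c))
print("both formulas agree for all Dicke states")
```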
As for the
fidelity, the calculation is done in Appendix \ref{sec: appendix C (Dicke)}
and yields
\begin{equation}
{\cal F}_{|j,m\rangle}(\eta)=\frac{1}{(2j+1)^{2}}\sum_{\ell=0}^{2j}(2\ell+1)(C_{jm\ell0}^{jm}\,\chi_{\ell}^{j}(\eta))^{2}\,,\label{PDicke}
\end{equation}
with Clebsch-Gordan coefficients $C_{jm\ell0}^{jm}$ and the functions
$\chi_{\ell}^{j}(\eta)$ defined in Eqs.~\eqref{chilj}--\eqref{chij}.
The angular functions $\varphi_{t}^{(j)}(\eta)$ are then solutions
of the system of linear equations
\begin{equation}
\left\{ \begin{array}{l}
{\cal F}_{|j,m\rangle}(\eta)=\varphi_{0}^{(j)}(\eta)+\sum_{t=1}^{\lfloor j\rfloor}\varphi_{t}^{(j)}(\eta)\,\mathcal{A}_{t}(|j,m\rangle)\\[8pt]
\mathrm{for}\;\,m=j,j-1,\ldots,j-\lfloor j\rfloor.
\end{array}\right.\label{syseq}
\end{equation}
This system can easily be solved for the lowest values of $j$; a
general (but formal) solution for arbitrary $j$ is obtained by inverting the
system \eqref{syseq}.
\subsection{Derivation based on relativistic Feynman rules and tensor representation
of spin states \label{closed2}}
The second approach allows us to derive a closed-form expression for
the functions $\varphi_{t}^{(j)}(\eta)$. It is based on an expansion
of the operator
\begin{equation}
\Pi^{(j)}(q)\equiv(q_{0}^{2}-|{\mathbf q}|^{2})^{j}\,e^{-2\theta_{q}\,\hat{{\bf q}}\boldsymbol{\cdot}{\bf J}},\label{lorentzboost}
\end{equation}
with $\tanh\theta_{q}=-|{\mathbf q}|/q_{0}$ and $\hat{{\bf q}}={\mathbf q}/|{\mathbf q}|$, as a multivariate
polynomial in the variables $q_{0},q_{1},q_{2},q_{3}$. This operator is an
$(N+1)$-dimensional representation (with $N=2j$) of a Lorentz boost in the direction
of the 4-vector $q=(q_{0},{\mathbf q})=(q_{0},q_{1},q_{2},q_{3})$. As shown in \cite{Wei64},
it can be written as
\begin{equation}
\Pi^{(j)}(q)=(-1)^{2j}q_{\mu_{1}}q_{\mu_{2}}\ldots q_{\mu_{2j}}S_{\mu_{1}\mu_{2}\ldots\mu_{2j}}.\label{egaliteweinberg}
\end{equation}
The identification of Eqs.~\eqref{lorentzboost} and \eqref{egaliteweinberg}
defines the $(N+1)\times(N+1)$ matrices $S_{\mu_{1}\ldots\mu_{N}}$
appearing in \eqref{rhoarbitrary} (see~\cite{prl} for details).
Taking
\begin{equation}
q_{0}=i\cot(\eta/2)\quad\mbox{and}\quad q_{i}=n_{i}\,,\quad i=1,2,3\,,\label{eq: q0 + qi}
\end{equation}
in \eqref{lorentzboost}, we see that $\Pi^{(j)}(q)$ reduces to a
rotation operator,
\begin{equation}
R_{\mathbf{n}}(\eta)=e^{-i\eta\mathbf{J}\boldsymbol{\cdot}\mathbf{n}}=\frac{\Pi^{(j)}(q)}{m^{N}}\label{eq: rot as Pi}
\end{equation}
with
\begin{equation}
m^{2}=q_{0}^{2}-|{\mathbf q}|^{2}=-\frac{1}{\sin^{2}(\eta/2)}.\label{eqm}
\end{equation}
Moreover, for a state $\rho$ given by \eqref{rhoarbitrary} we have
\begin{equation}
\mathrm{tr}\left[\rho\,\Pi^{(j)}(q)\right]=(-1)^{N}x_{\mu_{1}\mu_{2}\ldots\mu_{N}}q_{\mu_{1}}\ldots q_{\mu_{N}},\label{eq:xq-1}
\end{equation}
according to Eq.~(24) of \cite{prl}, which holds for any 4-vector
$q$. Thus, with $\rho=\ket{\psi}\bra{\psi}$, using the identity
\eqref{eq: rot as Pi} and the expansion \eqref{egaliteweinberg}
for the rotation operator in \eqref{eq: fidelity} allows us to
explicitly perform the integral in Eq.~\eqref{eq: probability}, resulting
in
\begin{equation}
\begin{aligned}
{\cal F}_{|\psi\rangle}(\eta) ={}& \frac{1}{4\pi}\int_{\mathcal{S}^{2}}|\bra{\psi}R_{\mathbf{n}}(\eta)\ket{\psi}|^{2}\,d\mathbf{n}\\
={}& \frac{1}{4\pi}\int_{\mathcal{S}^{2}}\left|\textrm{tr}\left[\rho\frac{\Pi^{(j)}(q)}{m^{N}}\right]\right|^{2}d\mathbf{n}\\
={}& (-1)^{N}\frac{x_{\mu_{1}\ldots\mu_{N}}x_{\nu_{1}\ldots\nu_{N}}}{4\pi}\\
& \times \int_{\mathcal{S}^{2}}\frac{q_{\mu_{1}}\ldots q_{\mu_{N}}q_{\nu_{1}}^{*}\ldots q_{\nu_{N}}^{*}}{m^{2N}}\,d\mathbf{n},
\end{aligned}
\label{integxx-1}
\end{equation}
where $*$ denotes complex
conjugation, which acts on $q_{0}$ only because of the choice \eqref{eq: q0 + qi},
and where we used $|m|^{2}=-m^{2}$. Each term $q_{\mu_{1}}\ldots q_{\nu_{N}}^{*}$
with $2(N-k)$ indices equal to 0 is proportional to
\begin{equation}
\frac{q_{0}^{2(N-k)}}{m^{2N}}=(-1)^{k}\sin^{2k}\left(\frac{\eta}{2}\right)\cos^{2(N-k)}\left(\frac{\eta}{2}\right).\label{q0k-1}
\end{equation}
For the remaining $2k$ nonzero indices, we have from \eqref{eq: q0 + qi}
that $q_{i}=n_{i}$, so that \eqref{integxx-1} involves an integral
of the form
\begin{equation}
\frac{1}{4\pi}\int_{\mathcal{S}^{2}}n_{a_{1}}n_{a_{2}}\ldots n_{a_{2k}}\,d\mathbf{n}\,,\qquad1\leqslant a_{i}\leqslant3 \,.\label{ints-1}
\end{equation}
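The lowest of these moments are elementary, e.g.\ $\frac{1}{4\pi}\int_{\mathcal{S}^{2}}n_{a}n_{b}\,d\mathbf{n}=\delta_{ab}/3$. They can be checked by a small quadrature sketch in Python (numpy assumed; Gauss--Legendre nodes in $\cos\theta$ and a uniform grid in $\phi$, which is exact for polynomial moments of this degree):

```python
import numpy as np

def sphere_average(f, nu=12, nphi=24):
    """(1/4pi) * integral of f(n) over the unit sphere, via Gauss-Legendre
    nodes in u = cos(theta) and a uniform (trapezoidal) grid in phi."""
    u, wu = np.polynomial.legendre.leggauss(nu)
    phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
    total = 0.0
    for ui, wi in zip(u, wu):
        s = np.sqrt(1.0 - ui**2)
        n = np.stack([s * np.cos(phi), s * np.sin(phi), np.full_like(phi, ui)])
        total += wi * np.mean(f(n))        # phi-average on this latitude ring
    return total / 2.0                     # the u-weights sum to 2

# second moments: (1/4pi) int n_a n_b dn = delta_ab / 3
second = np.array([[sphere_average(lambda n, a=a, b=b: n[a] * n[b])
                    for b in range(3)] for a in range(3)])
print(np.round(second, 10))
```

The same routine reproduces the fourth moments, e.g.\ $1/5$ for $n_{z}^{4}$ and $1/15$ for $n_{x}^{2}n_{y}^{2}$.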
These integrals
are performed in Appendix \ref{appexplicit}. The integrals \eqref{ints-1} are in fact precisely
given by the tensor coordinates $x_{a_{1}a_{2}\ldots a_{2k}}^{(0)}$
of the maximally mixed state, whose expression is
explicitly known. One can therefore rewrite \eqref{integxx-1} as
\begin{equation}
\begin{aligned} & {\cal F}_{|\psi\rangle}(\eta)=\sum_{k=0}^{N}(-1)^{N}\frac{q_{0}^{2(N-k)}}{m^{2N}}\\
& \times\hspace{-0.5cm}\sum_{\genfrac{}{}{0pt}{1}{\boldsymbol{\mu}{,}\boldsymbol{\nu}}{2(N-k)\ \textrm{zeros}}{}}\hspace{-0.5cm}(-1)^{\textrm{number of 0s in }\boldsymbol{\nu}}\,x_{\mu_{1}\ldots\mu_{N}\nu_{1}\ldots\nu_{N}}^{(0)}x_{\mu_{1}\ldots\mu_{N}}x_{\nu_{1}\ldots\nu_{N}}\,,
\end{aligned}
\label{Ptot-1}
\end{equation}
where the sum over $\boldsymbol{\mu}{,}\boldsymbol{\nu}$ runs over all strings of indices
(between 0 and 3) containing $2(N-k)$ zeros. An explicit expression
for this sum is derived in Appendix~\ref{appexplicit}, leading to
the compact expression
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\sum_{k=0}^{N}\sin^{2k}\left(\frac{\eta}{2}\right)\cos^{2(N-k)}\left(\frac{\eta}{2}\right)\sum_{t=0}^{N}a_{t,k}^{(j)}\;\mathrm{tr}\left[\rho_{t}^{2}\right]\,,\label{Ptotmain}
\end{equation}
with numbers
\begin{equation}
a_{t,k}^{(j)}=\frac{4^{t}(-1)^{k+t}\binom{2N}{2k}\binom{k}{t}\binom{2N-2t}{N-t}}{(2k+1)\binom{2N}{N}}.\label{ajtk}
\end{equation}
Note that the sum over $k$ in \eqref{Ptotmain} can start at $k=t$ because
the factor $\binom{k}{t}$ in $a_{t,k}^{(j)}$ implies that $a_{t,k}^{(j)}=0$
for $t>k$. Using the symmetry $\mathrm{tr}\left[\rho_{t}^{2}\right]=\mathrm{tr}\left[\rho_{N-t}^{2}\right]$ we may rewrite \eqref{Ptotmain}
as
\begin{equation}
\begin{aligned}{\cal F}_{|\psi\rangle}(\eta)={} & \sum_{t=0}^{\lfloor j\rfloor}\left(1-\frac{\delta_{jt}}{2}\right)\mathrm{tr}\left[\rho_{t}^{2}\right]\\
 & \times\sum_{k=t}^{N}\left(a_{t,k}^{(j)}+a_{N-t,k}^{(j)}\right)\sin^{2k}\left(\frac{\eta}{2}\right)\cos^{2(N-k)}\left(\frac{\eta}{2}\right).
\end{aligned}
\label{Ptotmainsym}
\end{equation}
From \eqref{ACR} we obtain a relation between $\mathcal{A}_{t}$
and $\mathrm{tr}\left[\rho_{t}^{2}\right]$, namely $\mathrm{tr}\left[\rho_{t}^{2}\right]=1-\frac{t}{t+1}\mathcal{A}_{t}$, which
yields the explicit expression of the polynomials $\varphi_{t}^{(j)}(\eta)$
in Eq.~\eqref{PexpansionAC} as
\begin{equation}
\varphi_{t}^{(j)}(\eta)=\sum_{k=t}^{N}b_{t,k}^{(j)}\,\sin^{2k}\left(\frac{\eta}{2}\right)\cos^{2(N-k)}\left(\frac{\eta}{2}\right),\label{Phimain}
\end{equation}
with coefficients
\begin{equation}
b_{t,k}^{(j)}=\left\{ \begin{array}{ll}
{\displaystyle -\frac{t}{t+1}\left(a_{t,k}^{(j)}+a_{N-t,k}^{(j)}\right)\left(1-\frac{\delta_{jt}}{2}\right)} & t\neq0\\
{\displaystyle \frac{\binom{N}{k}}{2k+1}} & t=0\,.
\end{array}\right.\label{btk}
\end{equation}
Note that although $q_{0}$ and $m$ are not well defined for $\eta=0$,
the ratio in \eqref{q0k-1} always is, so that the expression above
is valid over the whole range of values of $\eta$. For spin-coherent
states, all $\mathcal{A}_{t}$ vanish and thus ${\cal F}_{|\psi\rangle}(\eta)=\varphi_{0}^{(j)}(\eta)$
from Eq.~\eqref{PexpansionAC}, which coincides with the expression
obtained in~\cite{ChrHer17}. For the smallest values of $j$, we
recover the functions obtained in Section \ref{closed1}. In the following
section, we will use the functions $\varphi_{t}^{(j)}(\eta)$ given
in \eqref{Phimain} to identify optimal quantum rotosensors.
\section{Optimal quantum rotosensors \label{sec: Optimal-quantum-rotosensors}}
\subsection{Preliminary remarks}
We now address the question of finding the states $|\psi\rangle$
which minimize the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$
for fixed rotation angles $\eta$. According to Eq.~\eqref{PexpansionAC}, the fidelity is a \emph{linear} function of the anticoherence measures
$\mathcal{A}_{t}$ with $1\leqslant t\leqslant\lfloor j\rfloor$. Linearity, combined with the fact that the domain $\Omega$ over which the measures $\mathcal{A}_{t}$ vary is \emph{bounded}, implies that the fidelity must attain its minimum on the boundary of $\Omega$. The minimization
problem thus amounts to characterizing this domain $\Omega$. Unfortunately,
even for the smallest values of $j$, no simple descriptions of this
domain are known.
We will first determine the states minimizing the $2\pi$-periodic
average fidelity for values of $j$ up to $j=7/2$, with the rotation
angle taking values in the interval $\eta\in[0,\pi]$ (which is sufficient
due to the symmetry \eqref{eq: reflection symmetry for angle eta}).
Then we will examine the limiting case of angles $\eta$ close to
$0$ for arbitrary values of the quantum number $j$. Throughout this
section, we will expand arbitrary states with spin $j$ in terms of
the Dicke states, as shown in Eq.~\eqref{eq:jm_decomp}.
For spins up to $j=2$ the states minimizing the average fidelity
${\cal F}_{\ket\psi}(\eta)$ are known \cite{ChrHer17}. In Sec.~\ref{subsec:jupto2},
we show that our approach based on the expression \eqref{PexpansionAC}
correctly reproduces these results. Then, in Sec.~\ref{subsec:ju5o2pto7o2},
we consider the minimization problem for spin quantum numbers up to
$j=7/2$, mainly identifying the optimal rotosensors within various
ranges of the rotation angle $\eta$ by numerical techniques. More
specifically, for a fixed angle $\eta$, ${\cal F}_{\ket\psi}(\eta)$
is a function of the $\mathcal{A}_{t}$ which can be parametrized
by the complex coefficients $c_{m}$ entering the expansion \eqref{eq:jm_decomp}
of the state $|\psi\rangle$ in the Dicke basis (see Eq.~\eqref{puritiescm}). We search numerically for the minimum value of ${\cal F}_{\ket\psi}(\eta)$
with respect to the $c_{m}$, taking into account the normalization
condition $\sum_{m}|c_{m}|^{2}=1$. In most cases this numerical search
converges to states with simple analytic expressions,
which are the ones we give. For each value of $j$, we performed this search at about 1000 evenly spaced values of $\eta$ in order to explore the whole range of rotation angles. Whenever we find a region of values of $\eta$ in which $|\psi_{1}\rangle$ is the optimal state adjacent to a region where $|\psi_{2}\rangle$ is optimal, at the critical angle separating these two regions one must have ${\cal F}_{|\psi_{1}\rangle}(\eta)={\cal F}_{|\psi_{2}\rangle}(\eta)$, because the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ is a continuous function of $|\psi\rangle$. Therefore, the critical angle is a solution of the equation
\begin{equation}\label{eqcritical}
\sum_{t=1}^{\lfloor j\rfloor}\varphi_{t}^{(j)}(\eta)\,\mathcal{A}_{t}(|\psi_{1}\rangle)=\sum_{t=1}^{\lfloor j\rfloor}\varphi_{t}^{(j)}(\eta)\,\mathcal{A}_{t}(|\psi_{2}\rangle).
\end{equation}
\subsection{Rotosensors for arbitrary rotation angles $\eta$ and $j\protect\leq2$
\label{subsec:jupto2}}
\subsubsection{$j=1/2$}
For a spin $1/2$, all pure states are coherent: each state $\ket\psi$ can be obtained by a suitable rotation of the state $|\tfrac{1}{2},\tfrac{1}{2}\rangle$. Since the fidelity is invariant under rotations, all states are equally sensitive for detecting rotations, whatever the angle $\eta$.
\subsubsection{$j=1$}
For $j=1$, the expansion \eqref{PexpansionAC} takes the form
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\varphi_{0}^{(1)}(\eta)+\varphi_{1}^{(1)}(\eta)\,\mathcal{A}_{1}\,,
\end{equation}
with
\begin{equation}
\begin{aligned}\varphi_{0}^{(1)}(\eta)={} & \frac{1}{15}\big(6\cos(\eta)+\cos(2\eta)+8\big),\\
\varphi_{1}^{(1)}(\eta)={} & -\frac{1}{15}\big(2\cos(\eta)-3\cos(2\eta)+1\big).
\end{aligned}
\end{equation}
The first strictly positive zero of $\varphi_{1}^{(1)}(\eta)$ is
given by $\eta_{0}=\arccos(-2/3)$. In the interval $\eta\in[0,\eta_{0}[$,
where $\varphi_{1}^{(1)}(\eta)$ is negative, the fidelity ${\cal F}_{|\psi\rangle}(\eta)$
is minimized by states with $\mathcal{A}_{1}=1$, i.e. by $1$-anticoherent
states. For $\eta=\eta_{0}$, the fidelity takes the same value for
all states $|\psi\rangle$, namely ${\cal F}_{|\psi\rangle}(\eta_{0})=\varphi_{0}^{(1)}(\eta_{0})=7/27$.
For rotation angles in the remaining interval, $\eta\in]\eta_{0},\pi]$,
where $\varphi_{1}^{(1)}(\eta)$ is positive, ${\cal F}_{|\psi\rangle}(\eta)$
is minimized for states with $\mathcal{A}_{1}=0$, i.e.\ coherent
states. Thus, we indeed recover the results obtained in \cite{ChrHer17}.
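These $j=1$ expressions offer a convenient cross-check of the closed form \eqref{Phimain}--\eqref{btk}. A Python sketch (numpy assumed; not part of the derivation) evaluates the general formula and compares it with the trigonometric expressions above, including the zero at $\eta_{0}=\arccos(-2/3)$:

```python
import numpy as np
from math import comb

def a_coef(N, t, k):
    """Coefficients a_{t,k}^{(j)} of eq. (ajtk), with N = 2j."""
    return (4**t * (-1)**(k + t) * comb(2*N, 2*k) * comb(k, t)
            * comb(2*N - 2*t, N - t)) / ((2*k + 1) * comb(2*N, N))

def phi(N, t, eta):
    """Angular functions phi_t^{(j)}(eta) from eqs. (Phimain)-(btk)."""
    s2, c2 = np.sin(eta / 2)**2, np.cos(eta / 2)**2
    if t == 0:
        return sum(comb(N, k) / (2*k + 1) * s2**k * c2**(N - k)
                   for k in range(N + 1))
    pref = -(t / (t + 1)) * (1 - 0.5 * (N == 2 * t))
    return sum(pref * (a_coef(N, t, k) + a_coef(N, N - t, k)) * s2**k * c2**(N - k)
               for k in range(t, N + 1))

eta = np.linspace(0.0, np.pi, 200)
# closed forms for j = 1 quoted in the text
assert np.allclose(phi(2, 0, eta), (6*np.cos(eta) + np.cos(2*eta) + 8) / 15)
assert np.allclose(phi(2, 1, eta), -(2*np.cos(eta) - 3*np.cos(2*eta) + 1) / 15)
print(abs(phi(2, 1, np.arccos(-2/3))) < 1e-12)   # True: zero at eta_0
```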
\subsubsection{$j=3/2$}
In this case, the average fidelity \eqref{PexpansionAC} reads
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\varphi_{0}^{(3/2)}(\eta)+\varphi_{1}^{(3/2)}(\eta)\,\mathcal{A}_{1}\,,
\end{equation}
with
\begin{equation}
\begin{aligned}\varphi_{0}^{(3/2)}(\eta)={} & \frac{1}{70}\big(\cos(3\eta)+8\cos(2\eta)+29\cos(\eta)+32\big),\\
\varphi_{1}^{(3/2)}(\eta)={} & \frac{3}{70}\big(3\cos(3\eta)+3\cos(2\eta)-4\cos(\eta)-2\big).
\end{aligned}
\end{equation}
The situation is essentially the same as for $j=1$. The first strictly
positive zero of the coefficient $\varphi_{1}^{(3/2)}(\eta)$ is found
to be $\eta_{0}=\arccos(\frac{-9+\sqrt{21}}{12})$. Hence, in the
interval $\eta\in[0,\eta_{0}[$ where $\varphi_{1}^{(3/2)}(\eta)$
is negative, the fidelity ${\cal F}_{|\psi\rangle}(\eta)$ is minimal
for $1$-anticoherent states. At the value $\eta=\eta_{0}$,
the fidelity takes the same value for all states $|\psi\rangle$,
namely, ${\cal F}_{|\psi\rangle}(\eta_{0})=\varphi_{0}^{(3/2)}(\eta_{0})=(33+2\sqrt{21})/80$.
Otherwise, ${\cal F}_{|\psi\rangle}(\eta)$ is minimized for coherent
states, thereby reproducing earlier results \cite{ChrHer17}.
\subsubsection{$j=2$}
For $j=2$, the fidelity \eqref{PexpansionAC} is a linear combination
of three terms,
\begin{equation}
{\cal F}_{|\psi\rangle}(\eta)=\varphi_{0}^{(2)}(\eta)+\varphi_{1}^{(2)}(\eta)\,\mathcal{A}_{1}+\varphi_{2}^{(2)}(\eta)\,\mathcal{A}_{2}\,,\label{Petaj2}
\end{equation}
with the angular functions $\varphi_{k}^{(2)},k=0,1,2$, displayed
in Appendix~\ref{Appendix_phi}. Both $\varphi_{1}^{(2)}$ and $\varphi_{2}^{(2)}$ take negative values in
the interval $\eta\in[0,\eta_{0}]$, with $\eta_{0}\approx1.2122$
the first strictly positive zero of $\varphi_{1}^{(2)}(\eta)$. The tetrahedron state
\begin{equation}
\ket{\psi^{\mathrm{tet}}}=\frac{1}{2}\left(\ket{2,-2}+i\sqrt{2}\,\ket{2,0}+\ket{2,2}\right), \label{s2}
\end{equation}
whose Majorana points lie at the vertices of a regular tetrahedron,
is $2$-anticoherent, and for $j=2$ it is the only state (up to LU)
with $\mathcal{A}_{1}=\mathcal{A}_{2}=1$~\cite{Bag14}; hence it
provides the optimal rotosensor for angles in the interval $\eta\in[0,\eta_{0}]$. For larger rotation angles, between $1.68374$ and $2.44264$,
we find numerically that an optimal state is the Schrödinger cat state
\begin{equation}
\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{2,-2}+\ket{2,2}\right)\,,\label{s2GHZ}
\end{equation}
which is only 1-anticoherent, with $\mathcal{A}_{1}=1$ and $\mathcal{A}_{2}=3/4$.
For values $\eta\gtrsim 2.44264$, the optimal state is a coherent state.
We thus obtain numerically three intervals with three distinct optimal states corresponding to $(\mathcal{A}_1,\mathcal{A}_2)=(1,1), (1,3/4)$, and $(0,0)$, respectively. In order to find the critical angles, we solve Eq.~\eqref{eqcritical}. The angle $\eta_{1}$ separating the first two regions is a solution of $\varphi_{2}^{(2)}(\eta)=0$. The first positive zero of $\varphi_{2}^{(2)}(\eta)$ is $\eta_{1}=2\arctan(\sqrt{9-2\sqrt{15}})\approx 1.68374$, which coincides with the numerically obtained value. The angle $\eta_{2}$ at which the second and third regions meet is a zero of $\varphi_{1}^{(2)}(\eta)+\tfrac{3}{4}\,\varphi_{2}^{(2)}(\eta)$. Its first strictly positive zero is given by
\begin{equation}
\eta_{2}=2\arctan\left(\sqrt{-\frac{a+102b}{a-38b}}\right)\,,
\end{equation}
with $a=19\cdot 6^{2/3}+\sqrt[3]{6}\left(223-35\sqrt{7}\right)^{2/3}$
and $b=\sqrt[3]{223-35\sqrt{7}}$, and we have indeed $\eta_{2}\approx 2.44264$.
The results we obtained are summarized
in Fig.~\ref{figj2}; they agree with the findings of~\cite{ChrHer17}.
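As a quick sanity check (standard-library Python only, not part of the derivation), the two closed forms indeed reproduce the quoted numerical values:

```python
from math import atan, sqrt

# eta_1: first strictly positive zero of phi_2^(2), closed form quoted above
eta1 = 2 * atan(sqrt(9 - 2 * sqrt(15)))

# eta_2: first strictly positive zero of phi_1^(2) + (3/4) phi_2^(2)
a = 19 * 6 ** (2 / 3) + 6 ** (1 / 3) * (223 - 35 * sqrt(7)) ** (2 / 3)
b = (223 - 35 * sqrt(7)) ** (1 / 3)
eta2 = 2 * atan(sqrt(-(a + 102 * b) / (a - 38 * b)))

print(round(eta1, 5), round(eta2, 5))  # 1.68374 2.44264
```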
It is noteworthy that the state \eqref{s2GHZ} is not the only state with anticoherence
measures $\mathcal{A}_{1}=1$ and $\mathcal{A}_{2}=3/4$. For instance, any state of the form
\begin{equation}
\ket{\psi}=\frac{c_1\ket{2,-1}+c_2\ket{2,0}-c_1^*\ket{2,1}}{\sqrt{2|c_1|^2+|c_2|^2}}\,
\end{equation}
with $c_1\in\mathbb{C}$ and $c_2\in\mathbb{R}$ comes with the same measures of anticoherence, as readily follows from Eq.~\eqref{puritiescm}. These states are thus also optimal in the interval $\eta\in[\eta_{1},\eta_{2}]$, so that the uniqueness of optimal rotosensors observed for $j=1$ and $j=3/2$ is lost.
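These measures can be reproduced by computing purities of reduced states of the $2j$ constituent spin-$1/2$. A minimal numpy sketch, assuming the purity-based normalization $\mathcal{A}_t=\tfrac{t+1}{t}\left(1-\mathrm{tr}\,\rho_t^2\right)$ (which is consistent with all values quoted in the text) and the identification of $\ket{j,m}$ with the Dicke state $|D_{2j}^{j+m}\rangle$:

```python
import itertools
import numpy as np

def dicke(n, k):
    # Symmetric n-qubit Dicke state with k excitations, i.e. |j, m> with j = n/2, m = k - j
    v = np.zeros(2**n, dtype=complex)
    for ones in itertools.combinations(range(n), k):
        v[sum(1 << q for q in ones)] = 1.0
    return v / np.linalg.norm(v)

def anticoherence(psi, n, t):
    # A_t = (t+1)/t * (1 - tr[rho_t^2]); rho_t = reduced state of t of the n spin-1/2
    m = psi.reshape(2**t, 2**(n - t))
    rho = m @ m.conj().T
    return (t + 1) / t * (1 - np.trace(rho @ rho).real)

n = 4  # 2j constituent qubits for j = 2
cat = (dicke(n, 0) + dicke(n, 4)) / np.sqrt(2)                    # (|2,-2> + |2,2>)/sqrt(2)
tet = (dicke(n, 0) + 1j * np.sqrt(2) * dicke(n, 2) + dicke(n, 4)) / 2

# a member of the one-parameter family with the same measures as the cat state
c1, c2 = 0.3 + 0.4j, 0.5
fam = c1 * dicke(n, 1) + c2 * dicke(n, 2) - np.conj(c1) * dicke(n, 3)
fam /= np.linalg.norm(fam)

print([round(anticoherence(tet, n, t), 6) for t in (1, 2)])   # [1.0, 1.0]
print([round(anticoherence(cat, n, t), 6) for t in (1, 2)])   # [1.0, 0.75]
print([round(anticoherence(fam, n, t), 6) for t in (1, 2)])   # [1.0, 0.75]
```

In this representation the $j=2$ cat state is simply the four-qubit GHZ state $(\ket{0000}+\ket{1111})/\sqrt{2}$, whose one- and two-qubit marginals have purities $1/2$, giving $\mathcal{A}_1=1$ and $\mathcal{A}_2=3/4$ directly.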
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig1.pdf}
\par\end{centering}
\caption{Average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ (top, red solid curve) and measures of anticoherence ${\cal A}_{t}$ (bottom)
for optimal states with $j=2$, as functions
of the rotation angle $\eta$; the values of the measures $\mathcal{A}_{t}$ for the optimal states are discontinuous at the values $\eta_{1}\approx 1.68374$ and $\eta_{2}\approx 2.44264$
(see text for details). The dashed curve on top shows the average fidelity $\varphi_{0}^{(2)}(\eta)$ for coherent states. The blue (red) shaded area shows the range of rotation angles for which anticoherent states to order $\lfloor j\rfloor$ (coherent states) are optimal. \label{figj2}}
\end{figure}
\subsection{Rotosensors for $5/2\protect\leq j\protect\leq7/2$ \label{subsec:ju5o2pto7o2}}
\subsubsection{$j=5/2$}
For $j=5/2$, there is no anticoherent state of order $2$ but only
of order $1$~\cite{Kol08}. Numerical optimization shows that the
optimal state for small angles of rotation is the $1$-anticoherent
state with the largest measure of $2$-anticoherence, which is given by
\begin{equation}
\ket{\psi}=\frac{1}{\sqrt{2}}\left(\ket{\tfrac{5}{2},-\tfrac{3}{2}}+\ket{\tfrac{5}{2},\tfrac{3}{2}}\right),\label{s52}
\end{equation}
and has $\mathcal{A}_{1}=1$ and $\mathcal{A}_{2}=99/100$. This state
is found to be optimal up to a critical angle $\eta_{1}\approx1.49697$, which is obtained from Eq.~\eqref{eqcritical} and coincides with the first strictly positive zero of $\varphi_{2}^{(5/2)}(\eta)$.
It is worth noting that the optimal state \eqref{s52} was also found
to be the most non-classical spin state for $j=5/2$, both in the sense that it maximizes the quantumness~\cite{Gir10} and in the sense that it minimizes the cumulative multipole distribution~\cite{Bjo15,BjoGra15}. The Majorana constellation of this state defines a triangular bipyramid, which is a spherical $1$-design~\cite{Del77,sloane}; it thus corresponds to the arrangement of point charges on the surface of a sphere that minimizes the Coulomb electrostatic potential energy (the solution to Thomson's problem for 5 point charges, see~\cite{Sch13}).
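The measures $\mathcal{A}_{1}=1$ and $\mathcal{A}_{2}=99/100$ quoted for the state \eqref{s52} can be reproduced from reduced-state purities; a minimal numpy sketch, assuming the normalization $\mathcal{A}_t=\tfrac{t+1}{t}\left(1-\mathrm{tr}\,\rho_t^2\right)$ (consistent with the values in the text) and the identification of $\ket{j,m}$ with the Dicke state $|D_{2j}^{j+m}\rangle$:

```python
import itertools
import numpy as np

def dicke(n, k):
    # Symmetric n-qubit Dicke state with k excitations, i.e. |j, m> with j = n/2, m = k - j
    v = np.zeros(2**n, dtype=complex)
    for ones in itertools.combinations(range(n), k):
        v[sum(1 << q for q in ones)] = 1.0
    return v / np.linalg.norm(v)

def anticoherence(psi, n, t):
    # A_t = (t+1)/t * (1 - tr[rho_t^2]); rho_t = reduced state of t of the n spin-1/2
    m = psi.reshape(2**t, 2**(n - t))
    rho = m @ m.conj().T
    return (t + 1) / t * (1 - np.trace(rho @ rho).real)

n = 5  # 2j constituent qubits for j = 5/2
psi = (dicke(n, 1) + dicke(n, 4)) / np.sqrt(2)   # (|5/2,-3/2> + |5/2,3/2>)/sqrt(2)
print(round(anticoherence(psi, n, 1), 6),
      round(anticoherence(psi, n, 2), 6))        # 1.0 0.99
```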
For larger angles of rotation ranging between $\eta_{1}$ and $\eta_{2}\approx2.2521$,
we find that an optimal state is
\begin{equation}
\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{\tfrac{5}{2},-\tfrac{5}{2}}+\ket{\tfrac{5}{2},\tfrac{5}{2}}\right)\,;\label{s5/2GHZ}
\end{equation}
unlike in the case $j=2$, we found this state for $j=5/2$ to be
the only state (up to LU) with $\mathcal{A}_{1}=1$ and $\mathcal{A}_{2}=3/4$.
For $\eta\in[\eta_{2},\pi]$, we find that coherent states are optimal.
The transition occurs at the first strictly
positive zero $\eta_{2}$ of $\varphi_{1}^{(5/2)}(\eta)+\tfrac{3}{4}\,\varphi_{2}^{(5/2)}(\eta)$.
Our results are summarized in Fig.~\ref{fig5o2}.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig2.pdf}
\par\end{centering}
\caption{Average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ (top, red solid curve) and measures
of anticoherence ${\cal A}_{t}$ (bottom)
for optimal states with $j=5/2$, as functions
of the rotation angle $\eta$; the values of the measures $\mathcal{A}_{t}$ for the optimal states are discontinuous at the values $\eta_{1}\approx1.49697$ and $\eta_{2}\approx2.2521$
(see text for details). The dashed curve on top shows the average fidelity $\varphi_{0}^{(5/2)}(\eta)$ for coherent states. Shaded areas are defined as in Fig.~\ref{figj2}. \label{fig5o2}}
\end{figure}
\subsubsection{$j=3$}
Anticoherent states of order $3$ do exist for $j=3$. They are all
connected by rotation to the octahedron state
\begin{equation}
\ket{\psi^{\mathrm{oct}}}=\frac{1}{\sqrt{2}}\left(\ket{3,-2}+\ket{3,2}\right),\label{s3}
\end{equation}
whose Majorana points lie at the vertices of a regular octahedron.
Therefore, the state \eqref{s3} is, at small $\eta$, the unique
optimal quantum rotosensor (up to LU) for $j=3$. Numerical optimization
shows that the octahedron state is optimal up to a critical angle $\eta_{1}\approx1.3635$
coinciding with the first strictly positive zero of $\tfrac{1}{4}\,\varphi_{2}^{(3)}(\eta)+\tfrac{1}{3}\,\varphi_{3}^{(3)}(\eta)$,
and that, for larger angles, the state
\begin{equation}
\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{3,-3}+\ket{3,3}\right)\label{s3GHZ}
\end{equation}
with $\mathcal{A}_{1}=1$, $\mathcal{A}_{2}=3/4$ and $\mathcal{A}_{3}=2/3$
is optimal up to a critical angle $\eta_{2}\approx2.04367$ coinciding with
the first strictly positive zero of $\varphi_{1}^{(3)}(\eta)+\tfrac{3}{4}\,\varphi_{2}^{(3)}(\eta)+\tfrac{2}{3}\,\varphi_{3}^{(3)}(\eta)$.
We found that this is the only spin-$3$ state (up to LU) with $\mathcal{A}_{1}=1$,
$\mathcal{A}_{2}=3/4$ and $\mathcal{A}_{3}=2/3$. Coherent states
are found to be optimal for angles of rotation in the ranges $[\eta_{2},\eta_{3}]$
and $[\eta_{4},\pi]$ with $\eta_{3}\approx2.35881$ and $\eta_{4}\approx 2.65576$
coinciding with the second and third strictly positive zeros of $\varphi_{1}^{(3)}(\eta)+\varphi_{2}^{(3)}(\eta)+\varphi_{3}^{(3)}(\eta)$.
In the range $[\eta_{3},\eta_{4}]$, the octahedron state \eqref{s3}
becomes again optimal (although the three functions $\varphi_k^{(3)}$ for $k=1,2,3$ are not simultaneously negative in that range). Our results are displayed in Fig.~\ref{figj3}.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig3.pdf}
\par\end{centering}
\caption{Average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ (top, red solid curve) and measures
of anticoherence ${\cal A}_{t}$ (bottom)
for optimal states with $j=3$, as
functions of the rotation angle $\eta$; the values of the measures $\mathcal{A}_{t}$ for the optimal states are discontinuous at the values $\eta_{1}\approx 1.3635$, $\eta_{2}\approx 2.04367$,
$\eta_{3}\approx 2.35881$ and $\eta_{4}\approx 2.65576$ (see text
for details). The dashed curve on top shows the average fidelity $\varphi_{0}^{(3)}(\eta)$ for coherent states. Shaded areas are defined as in Fig.~\ref{figj2}. \label{figj3}}
\end{figure}
\subsubsection{$j=7/2$}
This is the smallest spin quantum number for which a smooth variation
of the optimal state with $\eta$ is observed, resulting in the complex
behaviour displayed in Figs.~\ref{figj72} and \ref{figj72bis}.
There are no anticoherent states to order $3$ for $j=7/2$, but there
exist anticoherent states to order $2$. The optimal state for small
angles of rotation (by which we mean here $\eta\to0$) turns out to
be one of those. Numerical optimization yields the state
\begin{equation}
\ket{\psi}=\sqrt{\tfrac{2}{9}}\,\ket{\tfrac{7}{2},-\tfrac{7}{2}}-\sqrt{\tfrac{7}{18}}\,\ket{\tfrac{7}{2},-\tfrac{1}{2}}-\sqrt{\tfrac{7}{18}}\,\ket{\tfrac{7}{2},\tfrac{5}{2}}\label{s72AC}
\end{equation}
with measures of anticoherence $\mathcal{A}_{1}=\mathcal{A}_{2}=1$
and $\mathcal{A}_{3}=1198/1215$. This is not the state with the highest
measure of $3$-anticoherence, as the state
\begin{equation}
\ket{\psi}=\frac{1}{\sqrt{2}}\left(\ket{\tfrac{7}{2},-\tfrac{5}{2}}+\ket{\tfrac{7}{2},\tfrac{5}{2}}\right)
\end{equation}
has measures of anticoherence $\mathcal{A}_{1}=1$, $\mathcal{A}_{2}=195/196$
and $\mathcal{A}_{3}=146/147>1198/1215$. The latter state is found
to be optimal for $\eta\in[\eta_{1},\eta_{2}]$ with $\eta_{1}\approx0.71718$
(which we could not identify analytically) and $\eta_{2}\approx1.24169$ coinciding with the
first strictly positive zero of $\tfrac{12}{49}\,\varphi_{2}^{(7/2)}(\eta)+\tfrac{16}{49}\,\varphi_{3}^{(7/2)}(\eta)$.
The state
\begin{equation}
\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{\tfrac{7}{2},-\tfrac{7}{2}}+\ket{\tfrac{7}{2},\tfrac{7}{2}}\right)\label{s72GHZ}
\end{equation}
with $\mathcal{A}_{1}=1$, $\mathcal{A}_{2}=3/4$ and $\mathcal{A}_{3}=2/3$
is found to be optimal for $\eta\in[\eta_{2},\eta_{3}]$ and $\eta\in[\eta_{4},\eta_{5}]$
with $\eta_{3}\approx1.60141$ and $\eta_{4}\approx1.88334$ coinciding
with the third and fourth strictly positive zeros of $\varphi_{1}^{(7/2)}(\eta)$
and $\eta_{5}\approx2.41684$ with the first strictly positive zero
of $\varphi_{1}^{(7/2)}(\eta)+\tfrac{3}{4}\,\varphi_{2}^{(7/2)}(\eta)+\tfrac{2}{3}\,\varphi_{3}^{(7/2)}(\eta)$.
In the interval $[\eta_{5},\pi]$, coherent states are found to be
optimal.
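Both sets of measures quoted above can be reproduced numerically from reduced-state purities; a self-contained numpy sketch, assuming the normalization $\mathcal{A}_t=\tfrac{t+1}{t}\left(1-\mathrm{tr}\,\rho_t^2\right)$ (consistent with the values in the text) and the identification of $\ket{j,m}$ with the Dicke state $|D_{2j}^{j+m}\rangle$:

```python
import itertools
import numpy as np

def dicke(n, k):
    # Symmetric n-qubit Dicke state with k excitations, i.e. |j, m> with j = n/2, m = k - j
    v = np.zeros(2**n, dtype=complex)
    for ones in itertools.combinations(range(n), k):
        v[sum(1 << q for q in ones)] = 1.0
    return v / np.linalg.norm(v)

def anticoherence(psi, n, t):
    # A_t = (t+1)/t * (1 - tr[rho_t^2]); rho_t = reduced state of t of the n spin-1/2
    m = psi.reshape(2**t, 2**(n - t))
    rho = m @ m.conj().T
    return (t + 1) / t * (1 - np.trace(rho @ rho).real)

n = 7  # 2j constituent qubits for j = 7/2
s1 = (np.sqrt(2 / 9) * dicke(n, 0) - np.sqrt(7 / 18) * dicke(n, 3)
      - np.sqrt(7 / 18) * dicke(n, 6))              # the 2-anticoherent state above
s2 = (dicke(n, 1) + dicke(n, 6)) / np.sqrt(2)       # (|7/2,-5/2> + |7/2,5/2>)/sqrt(2)
for s in (s1, s2):
    print([round(anticoherence(s, n, t), 6) for t in (1, 2, 3)])
# s1: [1.0, 1.0, 0.986008]      with 1198/1215 = 0.986008...
# s2: [1.0, 0.994898, 0.993197] with 195/196 and 146/147
```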
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig4.pdf}
\par\end{centering}
\caption{Average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ (top, red solid curve) and measures
of anticoherence ${\cal A}_{t}$ (bottom)
for optimal states with $j=7/2$, as functions of the rotation angle $\eta$. The dashed curve on top shows the average fidelity $\varphi_{0}^{(7/2)}(\eta)$ for coherent states. Shaded areas are defined as in Fig.~\ref{figj2}. \label{figj72}}
\end{figure}
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig5.pdf}
\par\end{centering}
\caption{Measures of anticoherence ${\cal A}_{t}$ for optimal states with $j=7/2$,
as functions of the rotation angle $\eta\in[0,0.8]$. \label{figj72bis}}
\end{figure}
\subsection{Rotosensors for small rotation angles $\eta$ and arbitrary values
of $j$ \label{subsec: Rotosensors small eta any j}}
\subsubsection{Angular functions at small angles}
According to Secs.~\ref{subsec:jupto2}
and \ref{subsec:ju5o2pto7o2} optimal
rotosensors for integer values of spin ($j=1,2,3$) are given by $j$-anticoherent
states, while for half-integer spin ($j=3/2,5/2,7/2$) the fidelity
is optimized by states which are anticoherent of order $t=1,1,2$,
respectively, and possess large anticoherence measures ${\cal A}_{t}$
for values of $t$ up to $t=\left\lfloor j\right\rfloor $. This fact
can be understood quite generally through the behaviour of the functions $\varphi_{t}^{(j)}(\eta)$ at small $\eta$ for arbitrary values of $j$. In the vicinity of $\eta=0$, the functions $\varphi_{t}^{(j)}(\eta)$
given in Eq.~\eqref{Phimain} take the form
\begin{equation}
\varphi_{t}^{(j)}(\eta)=\frac{b_{t,t}^{(j)}}{2^{2t}}\,\eta^{2t}+\mathcal{O}(\eta^{2t+2}),\label{phiseries}
\end{equation}
with coefficients $b_{t,t}^{(j)}$ given by Eq.~\eqref{btk}. These
coefficients are strictly negative for all $t\geqslant 1$ and all $j=N/2$, since $a_{t,t}^{(j)}>0$
and $a_{N-t,t}^{(j)}$ is either $0$ for $t<N/2$ or positive for
$t=N/2$. This implies that all functions $\varphi_{t}^{(j)}(\eta)$
are negative in some interval around $\eta=0$. Thus, the fidelity ${\cal F}_{|\psi\rangle}(\eta)$
is a linear combination of the $\mathcal{A}_{t}$ with negative coefficients
in that interval. Since $0\leq\mathcal{A}_{t}\leq1$, it follows that
if there exists a state with $\mathcal{A}_{t}=1$ for all $t\leq\lfloor j\rfloor$---that
is, an anticoherent state to order $\lfloor j\rfloor$---then this
state provides an optimal quantum rotosensor for $\eta$ in that interval.
This interval can be made more specific, at least for the lowest values of $j$. Let $\eta_{0}$ denote the first zero of $\varphi_{1}^{(j)}(\eta)$. Numerical results up to $j=85$ indicate
that all functions $\varphi_{t}^{(j)}(\eta)$ for $t=1,\ldots,\lfloor j\rfloor$
are negative for $\eta\in[0,\eta_{0}]$, so that an anticoherent state to order $\lfloor j\rfloor$ (if it exists) is optimal in the whole interval $[0,\eta_{0}]$. As shown in Fig.~\ref{etamin},
$\eta_{0}$ is found to scale as $3\pi/(4j)$ for large $j$. A simple explanation for this is that the expansion of the function $\varphi_{1}^{(j)}(\eta)$ as $\sum_{k}a_{k}\cos(k\eta)$ is dominated by the term $a_{2j}\cos(2j\eta)$ (note however that $\eta_0$ is even better approximated by $9/(4j)$).
Conversely, the states maximizing ${\cal F}_{|\psi\rangle}(\eta)$
for small angles of rotation are the states with $\mathcal{A}_{t}=0$
for all $t$, i.e.~coherent states.
\begin{figure}[!h]
\begin{centering}
\includegraphics[width=0.475\textwidth]{Fig6.pdf}
\par\end{centering}
\caption{First zero $\eta_{0}$ of the functions $\varphi_{1}^{(j)}(\eta)$
(blue dots) as a function of $j$: for $j=1$ and for $j\protect\geq5/2$,
the values are well approximated by $\eta_{0}\approx3\pi/(4j)$
(pink dashes).\label{etamin}}
\end{figure}
To see whether any general pattern emerges, we now identify optimal
small-angle rotosensors for the next few values of the spin quantum
numbers.
\subsubsection{$j=4$}
For $j=4$, there is no anticoherent state to order $t=4$. We find
that the optimal state for small angles of rotation is the $3$-anticoherent
state
\begin{equation}
\ket{\psi}=\sqrt{\tfrac{5}{24}}\,\ket{4,-4}-\sqrt{\tfrac{7}{12}}\,\ket{4,0}-\sqrt{\tfrac{5}{24}}\,\ket{4,4},
\end{equation}
with $\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}_{3}=1$ and $\mathcal{A}_{4}=281/288$.
\subsubsection{$j=9/2$}
For $j=9/2$, there is no anticoherent state to order $t\geqslant3$.
The anticoherent states of order $t=2$ with the largest $\mathcal{A}_{3}$
are found to be of the form
\begin{equation}
\ket{\psi}=\tfrac{\sqrt{13}}{8}\,\ket{\tfrac{9}{2},-\tfrac{9}{2}}+e^{i\chi}\sqrt{\tfrac{15}{32}}\,\ket{\tfrac{9}{2},-\tfrac{1}{2}}-\tfrac{\sqrt{21}}{8}\,\ket{\tfrac{9}{2},\tfrac{7}{2}},
\end{equation}
with $\chi\in[0,\pi/2]$. Their measures of anticoherence are $\mathcal{A}_{1}=\mathcal{A}_{2}=1$,
$\mathcal{A}_{3}=2347/2352$ and $\mathcal{A}_{4}=5\left(355609+175\sqrt{273}\cos(2\chi)\right)/1806336$.
Among these states, the one with $\chi=0$ has the largest value of
$\mathcal{A}_{4}$ and numerical results suggest that this is the
optimal state for small angles of rotation.
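The closed-form expression for $\mathcal{A}_{4}$ as a function of $\chi$ can be checked numerically against reduced-state purities; a self-contained numpy sketch, assuming the normalization $\mathcal{A}_t=\tfrac{t+1}{t}\left(1-\mathrm{tr}\,\rho_t^2\right)$ (consistent with the values in the text) and the identification of $\ket{j,m}$ with the Dicke state $|D_{2j}^{j+m}\rangle$:

```python
import itertools
import numpy as np

def dicke(n, k):
    # Symmetric n-qubit Dicke state with k excitations, i.e. |j, m> with j = n/2, m = k - j
    v = np.zeros(2**n, dtype=complex)
    for ones in itertools.combinations(range(n), k):
        v[sum(1 << q for q in ones)] = 1.0
    return v / np.linalg.norm(v)

def anticoherence(psi, n, t):
    # A_t = (t+1)/t * (1 - tr[rho_t^2]); rho_t = reduced state of t of the n spin-1/2
    m = psi.reshape(2**t, 2**(n - t))
    rho = m @ m.conj().T
    return (t + 1) / t * (1 - np.trace(rho @ rho).real)

n = 9  # 2j constituent qubits for j = 9/2
for chi in (0.0, 0.7):
    psi = (np.sqrt(13) / 8 * dicke(n, 0)
           + np.exp(1j * chi) * np.sqrt(15 / 32) * dicke(n, 4)
           - np.sqrt(21) / 8 * dicke(n, 8))
    closed_form = 5 * (355609 + 175 * np.sqrt(273) * np.cos(2 * chi)) / 1806336
    print(round(abs(anticoherence(psi, n, 4) - closed_form), 12),
          round(anticoherence(psi, n, 3), 6))
# prints 0.0 together with A_3 = 2347/2352 = 0.997874 for each chi
```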
\subsubsection{$j=5$}
For $j=5$, there is no anticoherent state to order $t\geqslant4$.
We find that the optimal state for small angles is the $3$-anticoherent
state
\begin{equation}
\ket{\psi}=\sqrt{\tfrac{5}{16}}\,\ket{5,-4}+\sqrt{\tfrac{3}{8}}\,\ket{5,0}-\sqrt{\tfrac{5}{16}}\,\ket{5,4},
\end{equation}
with $\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}_{3}=1$, $\mathcal{A}_{4}=895/896$
and $\mathcal{A}_{5}=1097/1120$.
\begin{table*}
\begin{centering}
\begin{tabular}{|c|c|c|c|}
\hline
$j$ & $\ket{\psi^{\mathrm{optimal}}}$ & $\mathcal{A}_{t}$ & Interval \tabularnewline
\hline
\hline
$1$ & $\begin{array}{c}
\ket{\psi^{\mathrm{cat}}}\\
\mathrm{any~state}\\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=1\\
0\leqslant\mathcal{A}_{1}\leqslant 1\\
\mathcal{A}_{1}=0
\end{array}$ & $\begin{array}{c}
\eta\in [0,\eta_{0}[ \\
\eta=\eta_{0} \\
\eta\in [\eta_{0},\pi]
\end{array}$ \tabularnewline
\hline
$3/2$ & $\begin{array}{c}
\ket{\psi^{\mathrm{cat}}}\\
\mathrm{any~state}\\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=1\\
0\leqslant\mathcal{A}_{1}\leqslant 1\\
\mathcal{A}_{1}=0
\end{array}$ & $\begin{array}{c}
\eta\in [0,\eta_{0}[ \\
\eta=\eta_{0} \\
\eta\in [\eta_{0},\pi]
\end{array}$ \tabularnewline
\hline
$2$ & $\begin{array}{c}
\ket{\psi^{\mathrm{tet}}}\\
\ket{\psi^{\mathrm{cat}}}\\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=\mathcal{A}_{2}=1 \\
\mathcal{A}_{1}=1,\mathcal{A}_{2}=3/4\\
\mathcal{A}_{1}=\mathcal{A}_{2}=0
\end{array}$ & $\begin{array}{c}
\eta\in [0,\eta_{1}], \eta_{1}\approx 1.68374\\
\eta\in [\eta_{1},\eta_{2}]\\
\eta\in [\eta_{2},\pi], \eta_{2}\approx 2.44264
\end{array}$\tabularnewline
\hline
$5/2$ & $\begin{array}{c}
\mathrm{Eq}.~\eqref{s52}\\
\ket{\psi^{\mathrm{cat}}}\\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=1, \mathcal{A}_{2}=99/100 \\
\mathcal{A}_{1}=1,\mathcal{A}_{2}=3/4\\
\mathcal{A}_{1}=\mathcal{A}_{2}=0
\end{array}$ & $\begin{array}{c}
\eta\in [0,\eta_{1}], \eta_{1}\approx 1.49697\\
\eta\in [\eta_{1},\eta_{2}]\\
\eta\in [\eta_{2},\pi], \eta_{2}\approx 2.2521
\end{array}$\tabularnewline
\hline
$3$ & $\begin{array}{c}
\ket{\psi^{\mathrm{oct}}} \\
\ket{\psi^{\mathrm{cat}}}\\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}_{3}=1 \\
\mathcal{A}_{1}=1,\mathcal{A}_{2}=3/4 ,\mathcal{A}_{3}=2/3\\
\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}_{3}=0
\end{array}$ & $\begin{array}{c}
\eta\in [0,\eta_{1}]\cup [\eta_{3},\eta_{4}], \eta_{3}\approx 2.35881\\
\eta\in [\eta_{1},\eta_{2}], \eta_{1}\approx 1.3635, \eta_{2}\approx 2.04367\\
\eta\in [\eta_{2},\eta_{3}]\cup [\eta_{4},\pi], \eta_{4}\approx 2.65576
\end{array}$\tabularnewline
\hline
$7/2$ & $\begin{array}{c}
\mathrm{Eq}.~\eqref{s72AC} \\
- \\
\ket{\psi^{\mathrm{cat}}}\\
- \\
|j,j\rangle
\end{array}$ & $\begin{array}{c}
\mathcal{A}_{1}=\mathcal{A}_{2}=1, \mathcal{A}_{3}=1198/1215 \\
\tfrac{195}{196}\leqslant\mathcal{A}_{2}\leqslant 1, \tfrac{1198}{1215}\leqslant\mathcal{A}_{3}\leqslant \tfrac{146}{147},\, \mathrm{see~Fig.}~\ref{figj72bis} \\
\mathcal{A}_{1}=1,\mathcal{A}_{2}=3/4 ,\mathcal{A}_{3}=2/3\\
\mathrm{see~Fig.}~\ref{figj72} \\
\mathcal{A}_{1}=\mathcal{A}_{2}=\mathcal{A}_{3}=0
\end{array}$ & $\begin{array}{c}
\eta \to 0 \\
\eta\in [0,\eta_{1}],\eta_{1}\approx 0.71718\\
\eta\in [\eta_{2},\eta_{3}]\cup [\eta_{4},\eta_{5}], \eta_{2}\approx 1.24169\\
\eta\in [\eta_{3},\eta_{4}], \eta_{3}\approx 1.60141, \eta_{4}\approx 1.88334\\
\eta\in [\eta_{5},\pi], \eta_{5}\approx 2.41684
\end{array}$\tabularnewline
\hline
\end{tabular}
\caption{Summary of the results of Secs.~\ref{subsec:jupto2} and \ref{subsec:ju5o2pto7o2} on optimal states for $1\leq j \leq 7/2$. Here, $\eta_{0}$ denotes the first strictly positive zero of $\varphi_{1}^{(j)}(\eta)$, $\ket{\psi^{\mathrm{tet}}}$ defined for $j=2$ is given by Eq.~\eqref{s2}, $\ket{\psi^{\mathrm{oct}}}$ defined for $j=3$ is given by Eq.~\eqref{s3}, and $\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{j,-j}+\ket{j,j}\right)$ for any $j$. The state $|j,j\rangle$ has been taken as an example of coherent state. Note that optimal states given here are not necessarily unique (states not related by a rotation can have the same $\mathcal{A}_t$). \label{tab}}
\par\end{centering}
\end{table*}
\subsubsection{Arbitrary values of $j$}
As was mentioned earlier, if an anticoherent state to order $\lfloor j\rfloor$
exists for a given $j$, then this state gives rise to an optimal quantum
rotosensor for $\eta\in[0,\eta_{0}]$. This applies to values $j=1,3/2,2$
and $j=3$, which are the only cases where existence of anticoherent
states to order $t=\lfloor j\rfloor$ has been established (see e.g.~\cite{Bag15,Bag17}).
The situation is less straightforward if such a state is not known to exist from the outset. The only general conclusion one can draw is that minimizing the average fidelity ${\cal F}_{|\psi\rangle}(\eta)$ for a fixed angle $\eta\in[0,\eta_{0}]$ corresponds to maximizing the measures $\mathcal{A}_{t}$ within the domain $\Omega$ (by definition, $\Omega$ is the set of all reachable tuples of measures $\mathcal{A}_{t}$, so that varying $|\psi\rangle$ keeps us within $\Omega$). In this sense, the more anticoherent a state is, the more sensitive it is as a quantum rotosensor. In general, varying $|\psi\rangle$ changes all anticoherence measures simultaneously. The challenge is to determine whether a state with given values of the measures $\mathcal{A}_{t}$ exists and, if it does, to identify it.
The maximal order of anticoherence that a spin-$j$ state can display
is generally much smaller than $\lfloor j\rfloor$, typically $t\sim2\sqrt{j}$
for large spins $j$~\cite{Bag15}. Numerical results for $j\lesssim100$
seem to suggest that the pairs $(t,j)$ for which a $t$-anticoherent
spin-$j$ state exists coincide with those for which a $2j$-point
spherical $t$-design exists in three dimensions~\cite{Grashttp}.
The latter have been tabulated up to $j=50$ \cite{sloane}. For example,
the first pairs $(t,j)$ for $j\leq4$ are given by $(1,1)$, $(1,3/2)$,
$(2,2)$, $(1,5/2)$, $(3,3)$, $(2,7/2)$, $(3,4)$.
\section{Summary and conclusions \label{sec:Conclusion}}
The main result of this work is a closed-form expression
\eqref{PexpansionAC} for the fidelity ${\cal F}_{|\psi\rangle}(\eta)$
between a state and its image under a rotation by an angle $\eta$
about an axis ${\bf n}$, averaged over all rotation axes. The expression
takes the form of a linear combination of anticoherence measures $\mathcal{A}_{t}$, with explicit $\eta$-dependent coefficients. It follows that not
only do spin-$j$ states related by a global rotation of the axes have the same average fidelity, but, more generally, so do all states with identical purities of their reduced density matrices (calculated for any subset of their $2j$ constituent spin-$1/2$ in the Majorana representation). This explains the observation of~\cite{ChrHer17} that optimal states are not necessarily unique. Moreover, since the fidelity is linear in the anticoherence measures, optimal states correspond to values of $\mathcal{A}_{t}$ on the boundary of the domain $\Omega$ of admissible values. This shows the relevance of characterizing the domain $\Omega$.
The expression \eqref{PexpansionAC} allows us to
characterize states which optimally detect rotations by their degree
of coherence or anticoherence. At small angles $\eta\leq\eta_{0}$,
where the coefficients of the measures $\mathcal{A}_{t}$ are all
negative, optimality of detection of rotations goes hand in hand with
high degrees of anticoherence. For angles close to $\eta=\pi$, however,
numerical results support the claim that optimality is achieved throughout
by spin coherent states.
We also performed a systematic investigation of
states minimizing the average fidelity for small values of $j$, for
all integers and half-integers from $j=1/2$ to $j=5$. Table~\ref{tab} summarizes our findings for the lowest values of $j$. At small rotation
angles, all optimal states were found to maximize the lowest
anticoherence measure, i.e.~$\mathcal{A}_{1}=1$. These states, which are
anticoherent to order $1$, exist for any value of $j$, and one may
conjecture that they should, in fact, be optimal for arbitrary values
of $j$. More generally, for all values of $j$ investigated and for
$\eta\leq\eta_{0}$, the optimal states turned out to have, for each
$t>1$, the largest admissible anticoherence measure $\mathcal{A}_{t}$
compatible with fixed values of the lower measures $\mathcal{A}_{1},\mathcal{A}_{2},\ldots,\mathcal{A}_{t-1}$.
Whether this property holds in general remains an open question.
Note that natural generalizations of this problem, such as maximization of the average fidelity, can also be addressed by our approach. For instance, for small rotation angles $\eta\in [0,\eta_0]$, where all $\varphi_t^{(j)}(\eta)$ with $t\geqslant 1$ are negative, the average fidelity is maximal for coherent states. For rotation angles close to $\eta=\pi$, numerical results indicate that the $1$-anticoherent state $\ket{\psi^{\mathrm{cat}}}=\frac{1}{\sqrt{2}}\left(\ket{j,-j}+\ket{j,j}\right)$ is optimal for all $j$ up to $17/2$.
\begin{acknowledgments}
OG and SW thank the University of Liège, where this work was initiated, for its hospitality.
\end{acknowledgments}
2010.12796
\section{Introduction}
Visual localization provides accurate orientation and position information for many applications such as Augmented Reality~\cite{Klein2007Parallel, middelberg2014scalable,qin2018vins} and mobile robots~\cite{tang2019topological,sattler2018benchmarking}. State-of-the-art visual localization systems usually contain four sequential modules: image retrieval~\cite{arandjelovic2016netvlad, torii201524}, feature extraction and description~\cite{rublee2011orb, detone2018superpoint}, feature matching~\cite{sarlin2020superglue} and pose estimation~\cite{lepetit2009epnp, jiao20202}. Each module in these modular pipeline based localization ({MPL}) methods has been investigated for many years, with solutions ranging from traditional to learning based. Judging from current research in the community, learning based methods often show better performance in the first three modules, while for pose estimation traditional geometry based methods under RANSAC frameworks~\cite{lepetit2009epnp, jiao20202, sarlin2019coarse} still hold the advantage in terms of generalization and precision.
To localize in environments with little appearance change and sufficient texture, geometry based MPL can demonstrate superior localization performance. However, if the appearance changes significantly, or a majority of the view is textureless, these methods are prone to failure due to insufficient matching inliers. Much research has been devoted to improving the precision and robustness of detecting pixel-level correspondences between images~\cite{truong2020glu, sarlin2020superglue}.
But usually in these situations only coarse correspondences can be determined, and it is hard to find precise matches at high resolution even with human annotation.
However, is accurate pixel-to-pixel image correspondence really necessary for pose estimation? End-to-end learning based visual localization methods offer a promising solution that could bypass it. Depending on the coordinate frame of the estimated pose, end-to-end localization can be categorized into absolute pose regression (APR) and relative pose regression (RPR) methods. APR methods directly regress the global pose of the query image~\cite{kendall2015posenet, kendall2017geometric} or global 3D points for pixels on the query image~\cite{brachmann2017dsac, brachmann2018learning, brachmann2019expert, li2020hierarchical}. Though some of these methods achieve higher localization accuracy than geometry based MPL solutions~\cite{brachmann2017dsac,brachmann2018learning, brachmann2019expert, li2020hierarchical}, APR methods cannot generalize to unseen scenes, as the scene-specific information is encoded within the models.
In contrast to regressing variables in the global coordinate frame directly, relative pose regression (RPR) based methods regress the relative pose between two images~\cite{laskar2017camera, balntas2018relocnet, ding2019camnet, zhou2020learn}, which can be considered as the combination of the last three modules in MPL. Combined with image retrieval, RPR methods can thus achieve global localization without encoding scene-specific geographic information within the network, and therefore possess the potential to generalize.
Unfortunately, many current RPR based methods show poor generalization to unseen scenes, as demonstrated experimentally in~\cite{zhou2020learn}. Compared with the MPL pipeline, the matching and pose estimation processes are coupled in many RPR networks, which regress the relative pose directly from the concatenation of the input image feature pair~\cite{laskar2017camera, balntas2018relocnet, ding2019camnet}; this ties the regression results to the enormous and scene-specific feature space. The work of \cite{zhou2020learn} can be considered as explicitly including the matching process within the RPR pipeline by adding a learnable Neighborhood Consensus (NC) matching layer~\cite{rocco2018neighbourhood} before regressing the pose. The matching layer outputs a score map that contains all pairwise feature correlation scores between the two input images. In this way the regression result depends on the correlation between image features, which isolates the regression layer from the absolute feature values that vary across scenes. However, the generalization performance is still not acceptable, leading the authors to infer that implicit feature matching cannot be correctly learned within the RPR network~\cite{zhou2020learn}.
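For concreteness, such a matching layer consumes a dense 4D volume of correlation scores between all pairs of feature locations in the two images. A simplified numpy stand-in (cosine scores only, without the learned neighbourhood-consensus filtering of \cite{rocco2018neighbourhood}; shapes and names are illustrative):

```python
import numpy as np

def correlation_volume(fa, fb):
    """All-pairs correlation between two dense feature maps fa, fb of shape
    (C, H, W). Returns c of shape (H, W, H, W) with
    c[i, j, k, l] = <fa[:, i, j], fb[:, k, l]>, after L2-normalizing each
    feature vector so that scores lie in [-1, 1]."""
    C, H, W = fa.shape
    a = fa.reshape(C, H * W)
    b = fb.reshape(C, H * W)
    a = a / np.linalg.norm(a, axis=0, keepdims=True)
    b = b / np.linalg.norm(b, axis=0, keepdims=True)
    return (a.T @ b).reshape(H, W, H, W)

rng = np.random.default_rng(0)
f = rng.normal(size=(16, 4, 5))          # a toy (C=16, H=4, W=5) feature map
c = correlation_volume(f, f)
print(c.shape)                           # (4, 5, 4, 5)
print(round(c[2, 3, 2, 3], 6))           # 1.0 -- each location matches itself perfectly
```

Note the volume grows as $(HW)^2$, which is one motivation for regularizing its dimensionality before regression.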
In this paper, we argue that the implicit feature matching can be handled within the RPR network to boost generalization, and propose a novel framework to improve the performance of RPR methods.
As pose information is the only supervision during the training process, it is difficult to apply sufficient constraints for the network to learn both matching and pose regression given the huge dimensionality of the input data. We perform regularization to reduce the dimensions of both the image scale and the feature correlations, aiming to reasonably apply additional constraints on the network. To do this, besides adding a matching layer~\cite{rocco2018neighbourhood} to explicitly calculate correlation information, we add a convolutional neural network (CNN) with a bottleneck structure to regularize the feature correlations, and implement this new structure within a two-layer pyramid based framework to regress the relative pose from coarse to fine at low resolution with a large receptive field, which further reduces the input dimension for regression.
Moreover, the depth image is concatenated with the regularized feature correlation to recover the absolute scale of the regressed pose, as shown in Fig.~\ref{framework}.
We implement RPR networks with different regression structures within this two-layer framework and compare their performance on public indoor RGBD datasets. Through experiments we validate the effectiveness of both the implicit matching layer and the dimension regularization in terms of generalization improvement, as well as of the depth fusion in terms of scale recovery. Besides, the structure with correlation feature regularization shows superior performance in situations with large viewpoint changes.
The experimental results also demonstrate that in challenging changing environments learning based methods possess more potential than state-of-the-art MPL methods with geometry solvers.
\section{Related Work}
In this section we review the works of visual localization that are related to geometry and learning based solutions. For a more complete review of this area, we recommend the survey \cite{chen2020survey}.
\begin{figure*}[]
\begin{center}
\includegraphics[width=0.9\textwidth]{fig/framework-new.pdf}
\caption{This figure demonstrates the whole pipeline of our visual localization framework. The left part of the figure shows our proposed two-layer relative pose regression network. We detail the regularization based pose regression layer (MotionNet) within the blue dotted box on the right.}
\label{framework}
\end{center}
\end{figure*}
\subsection{Geometry based visual localization}
Geometry based visual localization usually solves for the image pose given matches between 2D keypoints and 3D map points, using a RANSAC~\cite{fischler1981random} based Perspective-n-Point (PnP) solver~\cite{gao2003complete,lepetit2009epnp,jiao20202}. The matches can be computed by nearest neighbor search according to the distance between feature descriptors on the query image and the image retrieved from the map. Recently, some learning based methods have also been proposed to compute the matches using CNNs~\cite{rocco2018neighbourhood,rocco2020efficient} or Graph Neural Networks (GNN)~\cite{sarlin2020superglue}.
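The RANSAC scheme underlying these solvers can be stated independently of the particular minimal solver: repeatedly fit a model hypothesis on a random minimal sample, keep the hypothesis with the most inliers, and refit on the inlier set. A hedged sketch of the generic loop, with a toy 2D-translation model standing in for the PnP solver (all names here are illustrative, not from the cited works):

```python
import numpy as np

def ransac(data, fit, score, n_min, iters=200, thresh=1.0, rng=None):
    """Generic RANSAC loop: `fit` maps a minimal sample to a model,
    `score` maps (model, data) to per-point residuals."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        sample = data[rng.choice(len(data), n_min, replace=False)]
        inliers = score(fit(sample), data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit(data[best_inliers]), best_inliers   # refit on all inliers

# Toy stand-in for PnP: estimate a 2D translation from noisy matches with outliers.
rng = np.random.default_rng(1)
t_true = np.array([3.0, -2.0])
pts = rng.normal(size=(100, 2))
matches = np.hstack([pts, pts + t_true + 0.01 * rng.normal(size=(100, 2))])
matches[:20, 2:] += rng.uniform(-10, 10, size=(20, 2))   # gross outliers

fit = lambda s: (s[:, 2:] - s[:, :2]).mean(axis=0)
score = lambda t, d: np.linalg.norm(d[:, 2:] - d[:, :2] - t, axis=1)
t_est, inliers = ransac(matches, fit, score, n_min=2, thresh=0.1)
print(np.round(t_est, 2))   # close to [ 3. -2.]
```

A real localization pipeline would plug a PnP minimal solver and a reprojection-error score into the same loop.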
\subsection{Learning for absolute pose estimation}
Learning based methods for absolute pose estimation encode the environmental information within the network parameters during the training process. Given a query image, some of these works directly output the global pose w.r.t.~the map. PoseNet~\cite{kendall2015posenet} is the first end-to-end network, modified from GoogleNet~\cite{szegedy2015going}, to regress the translation and the rotation (represented by a quaternion) of the input image, and many following works~\cite{kendall2017geometric, naseer2017deep, walch2017image} are designed to improve its performance.
Regressing the pose directly end-to-end is efficient but less accurate. Scene coordinate regression based visual localization~\cite{brachmann2017dsac, li2020hierarchical} chooses another way for pose estimation. Instead of directly regressing the image pose, these methods regress the global 3D location of each pixel on the query image. \cite{brachmann2017dsac} utilizes a CNN for scene coordinate regression, and the obtained 3D-2D matches are forwarded into a novel differentiable RANSAC to achieve end-to-end training. This method can exceed traditional geometry based methods in localization accuracy, but has only been shown to be effective in small environments. Many following methods are designed to improve its performance by adding reprojection~\cite{brachmann2018learning} and multi-view geometry constraints~\cite{cai2019camera}. To extend these methods to large and ambiguous environments, some approaches leverage a hierarchical network structure to regress the 3D scene coordinates from coarse to fine~\cite{li2020hierarchical}, or integrate DSAC within a Mixture of Experts (MoE) framework~\cite{brachmann2019expert}. However, as the map information is encoded within the parameters, these methods cannot be generalized to unseen scenes.
\subsection{Learning for relative pose estimation}
Learning the relative pose between two images is a more general solution to global localization. The reference map images are retrieved by pre-trained networks \cite{zhou2020learn} or networks jointly trained with the subsequent RPR parts \cite{balntas2018relocnet, ding2019camnet}. The RPR problem is also studied in some visual odometry works \cite{yin2018geonet, wang2020flow}, but in the context of localization the photometric consistency assumption is usually violated.
In many RPR networks the depth information is not utilized for localization, so the estimated translation is only up to scale, and absolute localization results have to be recovered by RANSAC based triangulation \cite{zhou2020learn,laskar2017camera}. In this paper we fuse the depth information into the regression and show its ability to recover pose with scale.
\section{Method}
Our goal is to achieve robust visual localization given the current query image $I_q\in{\mathbb{R}^{H\times W\times 3}}$ and the map $\mathcal{M}$ constructed with RGB images $\{I_r\in{\mathbb{R}^{H\times W\times 3}}\}$ with corresponding depth $\{D_r\in\mathbb{R}^{H\times W\times1}\}$. To achieve this, a two-stage visual localization pipeline is utilized to first retrieve top-$N$ ranked images from the map, then estimate the relative poses between $I_q$ and each retrieved RGBD image for global localization.
\subsection{Image Retrieval from the Map}
We take advantage of the success in visual place recognition technologies \cite{torii201524,arandjelovic2016netvlad, tang2020adversarial} and utilize NetVLAD \cite{arandjelovic2016netvlad} to extract the global image descriptor for each map image offline. During localization, we extract the global image descriptor for the query image $I_q$ and find the nearest $N$ map images $\{I_{r_i}|i=1,\dots,N\}$ according to the Euclidean distance between image descriptors. As the global pose $T^M_{r_i}$ is known for each retrieved map image $I_{r_i}$, the global pose of $I_q$ can be calculated based on the estimated relative transformation $T^{q}_{r_i}$ between $I_{r_i}$ and $I_q$:
\begin{equation}
T^M_{q_i}=T^M_{r_i} \cdot (T^{q}_{r_i})^{-1}
\end{equation}
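As an illustrative sketch (not the paper's implementation), the retrieval step and the pose composition above can be written with NumPy as follows; poses are $4\times4$ homogeneous matrices and the function names are ours:

```python
import numpy as np

def retrieve_top_n(q_desc, map_descs, n=5):
    """Return indices of the n map images whose global descriptors are
    closest to the query descriptor in Euclidean distance."""
    dists = np.linalg.norm(map_descs - q_desc, axis=1)
    return np.argsort(dists)[:n]

def global_pose(T_M_r, T_q_r):
    """Compose the global query pose: T^M_q = T^M_r @ inv(T^q_r)."""
    return T_M_r @ np.linalg.inv(T_q_r)
```
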
In the following sections we introduce our regularization based network designed to calculate the relative transformation and a validation method used to select the best regressed result out of the $N$ estimated poses $\{T^M_{q_i}|i=1,\dots,N\}$.
\subsection{Regularization based Relative Pose Regression}
\subsubsection{\textbf{Motivation}}
Many RPR methods utilize the concatenation of two CNN features as the input for regression in a following Fully Connected (FC) layer \cite{laskar2017camera,balntas2018relocnet, ding2019camnet}. In this way, the output of the FC layer is related to the absolute values of the feature pair. Imagine two images of the same content: if some patches of both images are changed simultaneously, the feature pair changes accordingly while the regressed pose should not. This makes network learning difficult, as enormously many input features correspond to the same output. Furthermore, the values of the image features are scene-specific, making it difficult for the network to localize in unseen environments.
On the other hand, the correlation score between two features only depends on their difference, which stays stable as long as the feature descriptors are consistent.
Thus, explicitly adding a matching layer within RPR networks can largely reduce the complexity of the pose regression problem and endow the network with better generalization ability.
Traditionally, the correlation volume contains the entire matching information between pixel pairs from two images. When dynamic content dominates or the viewpoints differ greatly, the valid overlap between the two images is limited and the correlation volume is dominated by confusing information.
Different from traditional methods that extract pixel-to-pixel correspondences to feed geometry based pose solvers \cite{rocco2018neighbourhood,zhou2020learn,rocco2020efficient}, we extract the matching information implicitly by regularizing the correlation volume with a bottleneck-structured CNN for dimension reduction, as shown in Fig. \ref{framework}. Circumventing pixel-level correspondence for pose estimation brings two benefits: i) no pixel-level supervision is required, so the training data is easier to obtain; ii) correspondences need not be found at high resolution as in geometry based pose solvers, so effective global correlation becomes affordable at a restricted image resolution with a large receptive field, leading to more robust patch-wise matching.
\subsubsection{\textbf{Network Architecture}}
The details of the regularization based RPR network are shown in Fig. \ref{framework}.
We utilize the pre-trained VGG16 \cite{simonyan2014very} network for feature extraction and truncate it at the last pooling layer \cite{melekhov2019dgc,truong2020glu}. The input image is resized to $256\times256$ before being fed into the network. We only use the last two feature maps computed by VGG16, with resolution 16$\times$16 (feature $\bm{f}_1\in\mathbb{R}^{16\times16\times C_1}$) and 32$\times$32 (feature $\bm{f}_2\in\mathbb{R}^{32\times32\times C_2}$), for the following two-stage pose estimation.
In the first layer $\mathcal{L}_1$, the global correlation $\bm{c}^1\in \mathbb{R}^{16\times16\times16\times16}$ between the features $\bm{f}^{r_i}_1$ and $\bm{f}^q_1$ of $I_{r_i}$ and $I_q$ is first computed as the scalar product between the feature at each pixel $\bm{u}\in\mathbb{Z}^2$ of $I_{r_i}$ and that at each pixel $\bm{u}'$ of $I_q$
\begin{equation}
\bm{c}^1(\bm{u},\bm{u}')={\bm{f}^{r_i}_1(\bm{u})}^T\bm{f}^{q}_1(\bm{u}')
\end{equation}
which is then forwarded into the NC matching layer \cite{rocco2018neighbourhood} to enforce geometric consistency. The following MotionNet module takes the output score map to regress the initial relative pose $T^{q_1}_{r_i}$. Different from other works that represent the pose with a 3D translation and a 3D/4D rotation vector \cite{laskar2017camera, ding2019camnet},
we represent the pose with a 9D vector $\bm{\xi} = [\bm{r}\in \mathbb{R}^6, \bm{t}\in \mathbb{R}^3]$, where $\bm{t}$ denotes the translation and $\bm{r}$ the rotation \cite{zhou2019continuity}. A mapping $\Phi$ is adopted to transform our rotation vector into a conventional rotation matrix,
\begin{equation}
\Phi:\mathbb{R}^6\rightarrow \mathbb{SO}(3), \bm{r}\mapsto R = \Phi(\bm{r})
\end{equation}
Combined with $\bm{t}$, $\Phi$ induces a mapping $\tilde{\Phi}$ from our 9D vector to the standard Euclidean transformation
\begin{equation}
\tilde{\Phi}: \mathbb{R}^6\times \mathbb{R}^3 \rightarrow \mathbb{SE}(3), (\bm{r}, \bm{t})\mapsto T = \tilde{\Phi}(\bm{r}, \bm{t})
\end{equation}
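The mapping $\Phi$ of \cite{zhou2019continuity} amounts to a Gram-Schmidt orthogonalization of the two 3D halves of $\bm{r}$. A minimal NumPy sketch of $\Phi$ and $\tilde{\Phi}$ (our illustration, with hypothetical names) is:

```python
import numpy as np

def phi(r6):
    """Map a 6D rotation vector to a rotation matrix via Gram-Schmidt
    (the mapping Phi). r6 = [a1, a2] with a1, a2 in R^3."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - (b1 @ a2) * b1          # remove the component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)             # completes a right-handed frame
    return np.stack([b1, b2, b3], axis=1)  # columns b1, b2, b3

def phi_tilde(r6, t):
    """Map the 9D vector (r, t) to a 4x4 SE(3) matrix (the mapping Phi~)."""
    T = np.eye(4)
    T[:3, :3] = phi(r6)
    T[:3, 3] = t
    return T
```
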
We utilize depth image $D_{r_i}$ to calculate rigid flow $\bm{w}$ between $I_q$ and $I_{r_i}$ at the same resolution as $\bm{f}_2$ according to $T^{q_1}_{r_i}$
\begin{equation}
\begin{split}
\left[\begin{matrix}
\bm{p}^{q_1}_k\\
1
\end{matrix}\right]&=T^{q_1}_{r_i}\left[\begin{matrix}
K^{-1} \left[\begin{matrix}
\bm{u}_k\\
1
\end{matrix}\right] \cdot z_k\\
1
\end{matrix}\right]\\
\bm{w}_k&=\pi(\bm{p}^{q_1}_k)-\bm{u}_k
\end{split}
\end{equation}
in which $ T^{q_1}_{r_i}=\tilde{\Phi}(\bm{\xi^{q_1}_{r_i}}) $, $\bm{u}_k$ denotes the $k$th pixel on $I_{r_i}$, and $z_k$ and $\bm{p}^{q_1}_k$ are the corresponding depth value and the 3D point transformed by $T^{q_1}_{r_i}$. $K$ denotes the intrinsic matrix and $\pi(\cdot)$ denotes the projection function.
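A dense NumPy sketch of the rigid-flow computation above (illustrative only; the network applies it at the resolution of $\bm{f}_2$): every pixel is back-projected with its depth, transformed by $T^{q_1}_{r_i}$, and re-projected with the pinhole model.

```python
import numpy as np

def rigid_flow(depth, K, T):
    """Rigid flow from the reference image to the query.
    depth: (H, W) depth map; K: 3x3 intrinsics; T: 4x4 relative pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # back-projected 3D points
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_q = (T @ pts_h)[:3]                                         # points in the query frame
    proj = K @ pts_q
    proj = proj[:2] / proj[2]                                       # pinhole projection pi(.)
    return (proj - pix[:2]).reshape(2, H, W)                        # flow w = pi(p) - u
```
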
In the second layer $\mathcal{L}_2$, we utilize $\bm{w}$ to warp $\bm{f}^q_2$ and compute its correlation with $\bm{f}^{r_i}_2$. In this layer only the local correlation $\bm{c}^2\in\mathbb{R}^{32\times32\times(2n+1)}$ within an $n$-pixel neighborhood is searched, refining the matching information computed in $\mathcal{L}_1$. This correlation result is forwarded to regress a relative pose $\delta T=\tilde{\Phi}(\bm{\xi}) $ as a refinement. The final estimated pose in $\mathcal{L}_2$ is
\begin{equation}
T^{q_2}_{r_i}=T^{q_1}_{r_i} \delta T
\end{equation}
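The two correlation operations can be illustrated as follows; this is a plain NumPy sketch with hypothetical names, and the local search is shown with a square $(2n+1)\times(2n+1)$ window:

```python
import numpy as np

def global_correlation(f_r, f_q):
    """Global correlation of layer L1: scalar product between the feature at
    every pixel of the reference map and every pixel of the query.
    f_r, f_q: (H, W, C) feature maps -> (H, W, H, W) correlation volume."""
    return np.einsum('ijc,klc->ijkl', f_r, f_q)

def local_correlation(f_r, f_q_warped, n=4):
    """Local correlation of layer L2: scores are only searched within an
    n-pixel neighborhood around each position of the warped query feature."""
    H, W, C = f_r.shape
    pad = np.pad(f_q_warped, ((n, n), (n, n), (0, 0)))
    out = np.empty((H, W, (2 * n + 1) ** 2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * n + 1, j:j + 2 * n + 1]  # (2n+1, 2n+1, C)
            out[i, j] = (patch * f_r[i, j]).reshape(-1, C).sum(-1)
    return out
```
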
The only supervision during training is the groundtruth relative pose $T^{q_{gt}}_{r_i}\in \mathbb{SE}(3)$ between $I_q$ and $I_{r_i}$. The pose estimation error in each layer $\mathcal{L}_l (l=1,2)$ is calculated as
\begin{equation}
\Delta T_l = \left[
\begin{matrix}
\Delta R_l & \Delta t_l\\
\mathbf{0_{1\times 3}} & {1}
\end{matrix} \right]=T^{q_l}_{r_i} {T^{q_{gt}}_{r_i}}^{-1}
\end{equation}
We convert the rotational error into the angular value $\Delta\theta$ and the total loss is defined as
\begin{equation}
Loss = Loss_1+\beta \cdot Loss_2
\end{equation}
where
\begin{equation}
\begin{split}
\Delta \theta_l &= \arccos\left(\frac{\mathrm{tr}(\Delta R_l)-1}{2}\right) \\
Loss_l &= \|\Delta \theta_l\|+ \gamma_l \|\Delta t_l\|
\end{split}
\end{equation}
in which $\beta$ and $\gamma_l$ represent the corresponding weights.
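A NumPy sketch of the training loss above (names are illustrative; the real implementation is differentiable in PyTorch):

```python
import numpy as np

def pose_loss(T_pred, T_gt, gamma):
    """Per-layer loss: the pose error Delta T = T_pred @ inv(T_gt), with the
    rotational part converted to an angle and the translation weighted by gamma."""
    dT = T_pred @ np.linalg.inv(T_gt)
    dR, dt = dT[:3, :3], dT[:3, 3]
    cos = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)  # clamp for numerical safety
    dtheta = np.arccos(cos)
    return abs(dtheta) + gamma * np.linalg.norm(dt)

def total_loss(T1, T2, T_gt, beta=4.0, gamma1=3.0, gamma2=2.0):
    """Total loss combining the two layers of the pyramid."""
    return pose_loss(T1, T_gt, gamma1) + beta * pose_loss(T2, T_gt, gamma2)
```
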
\subsubsection{\textbf{Detail of MotionNet}} In this module the correlation information is regularized for pose regression.
It takes the correlation volume from the global or local correlation module as input and utilizes a CNN with a bottleneck structure along the feature dimension, followed by an FC layer, to regress the corresponding relative pose, as shown in Fig. \ref{framework}. The feature dimension of the score map is regularized into a compact form, onto which the depth image at the same resolution is concatenated for scale recovery.
\subsection{Correlation based Pose Selection}
After calculating the relative poses between $I_q$ and the $N$ retrieved map images, we evaluate the $N$ results according to the correlation between $\bm{f}^{r_i}_1$ and the warped $\bm{f}^q_1$ based on the rigid flow computed from $T^{q_2}_{r_i}$. We apply softmax along the channels of the correlation results and count the number of vectors whose highest correlation score is larger than a threshold $\alpha$. Only valid correlations, whose warped positions fall within the image, are counted. The pose of the image pair with the maximum count is selected as the best regressed result.
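The selection procedure can be sketched as follows, assuming each correlation volume is flattened to an $(HW, HW)$ matrix; names and shapes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def count_confident(corr, valid, alpha=0.007):
    """Count pixels whose best softmax correlation score exceeds alpha.
    corr: (HW, HW) correlation between the reference feature and the warped
    query feature; valid: (HW,) mask of warped positions inside the image."""
    p = softmax(corr, axis=-1)
    return int(((p.max(axis=-1) > alpha) & valid).sum())

def select_best(corrs, valids, alpha=0.007):
    """Pick the retrieved image whose regressed pose yields the most
    confident correlations; its pose is the final localization result."""
    counts = [count_confident(c, v, alpha) for c, v in zip(corrs, valids)]
    return int(np.argmax(counts))
```
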
\section{Experiments }
In this section, we assess the performance of our proposed regularization based RPR framework \footnote{https://github.com/syywh/RRPR} on the standard 7Scenes \cite{shotton2013scene} dataset for comparison with other networks, and also utilize a challenging public indoor dataset, OpenLoris-Scene \cite{shi2020we}, to investigate the potential of regression based methods in complex real-world environments compared with the geometry based MPL method \cite{jiao20202}.
\subsection{Datasets and implementation details}
\textbf{OpenLoris-Scene} \cite{shi2020we} is a public indoor dataset designed to evaluate the performance of lifelong visual SLAM methods.
It collects RGBD image sequences with a mobile robot in five scenes, each along various trajectories and under various situations. The data includes significant appearance, illumination and viewpoint changes as well as textureless areas and blur, which is valuable for assessing the performance of vision based methods in real-world situations.
We utilize the first three sequences in both the ``Home'' and ``Office'' scenes for training. Any two images from the same or different sequences are selected as a training image pair if their translational and rotational distances are within the set thresholds. The translational threshold is set to $1.5m$ and the rotational threshold to $30^{\circ}$. We test the localization performance on the other sequences in the ``Home'' and ``Office'' scenes. To evaluate generalization, we also utilize the sequences in the ``Cafe'' scene for testing, whose environmental appearance is entirely unseen in the training data.
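The pair selection rule above can be sketched as follows (illustrative NumPy, with global poses as $4\times4$ matrices):

```python
import numpy as np

def is_training_pair(T_a, T_b, max_trans=1.5, max_rot_deg=30.0):
    """Check whether two images form a valid training pair: their relative
    translation is within 1.5 m and relative rotation within 30 degrees."""
    T_rel = np.linalg.inv(T_a) @ T_b
    trans = np.linalg.norm(T_rel[:3, 3])
    cos = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos))
    return trans <= max_trans and rot_deg <= max_rot_deg
```
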
\textbf{7Scenes} \cite{shotton2013scene} contains RGBD images collected from 7 indoor rooms. We utilize the training data listed in 7 scenes together for training and compare our testing results with the other learning based pose estimation methods. We also evaluate the cross-dataset generalization performance with 7Scenes dataset for comparison.
\textbf{TUM-RGBD} \cite{sturm2012benchmark} dataset contains sequential RGBD images collected in different scenes. We want to evaluate cross-dataset generalization based on the models trained on OpenLoris-Scene and 7Scenes, but the training data in OpenLoris-Scene only contains planar motion. We therefore finetune the models on TUM-RGBD to supplement the degrees of freedom of motion missing from the OpenLoris-Scene training data.
\textbf{Implementation details: }The network is implemented in PyTorch \cite{paszke2017automatic} and trained for at most 50 epochs with the weights of the VGG16 feature extraction layers fixed. Training is stopped early if the training loss stops decreasing. All the models are trained using the Adam optimizer \cite{kingma2014adam} with an initial learning rate of $10^{-4}$ and a decay ratio of $0.7$ every 10 epochs.
The batch size is set to 6 and $\beta=4, \gamma_1=3, \gamma_2=2, n=4, \alpha=0.007$.
\textbf{Network structure details:}
To validate the effectiveness of the matching layer and the correlation regularization process, we implement three types of RPR networks within our proposed two-layer framework, differing in the input feature for pose regression:
\begin{enumerate}
\item image feature concatenation (``feature-cat'' in tables)
\item correlation volume output from the NC matching layer (``score-map'' in tables)
\item correlation volume with dimension regularization (``score-map-dr$\bm{x}$'' in tables, $\bm{x}$ denotes the channel of the compact regularized feature)
\end{enumerate}
For a fair comparison, all three networks have CNN modules between the input features and the pooling layer within each MotionNet.
\subsection{Localization performance}
\begin{table*}[htbp]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\centering
\caption{Visual Localization results evaluated on 7Scenes datasets.}
\setlength{\tabcolsep}{0.6mm}{
\begin{threeparttable}
\begin{tabular}{ccccccccccccc}
\toprule
& \multicolumn{1}{c|}{MPL} & \multicolumn{3}{c|}{APR} & \multicolumn{8}{c}{RPR} \\
& \multicolumn{1}{c|}{sift+5p}& PoseNet & MapNet & \multicolumn{1}{c|}{DSAC++} & RelativePN& RelocNet & \multicolumn{1}{c}{NC-EssNet}& {score-map}$\ $ & {score-map} & {score-map} & \multirow{2}{*}{score-map} & \multirow{2}{*}{feature-cat} \\
& \multicolumn{1}{c|}{\cite{zhou2020learn}}& \cite{kendall2015posenet} & \cite{brahmbhatt2018geometry} & \multicolumn{1}{c|}{ \cite{brachmann2018learning}} & \cite{laskar2017camera} & \cite{balntas2018relocnet} & \multicolumn{1}{c}{\cite{zhou2020learn}} &-dr2 &-dr3 &-dr4 & & \\
\midrule
median error & \multirow{2}{*}{1.99/0.08} & \multirow{2}{*}{10.44/0.44} & \multirow{2}{*}{7.78/0.21} & \multirow{2}{*}{1.10/0.04} & \multirow{2}{*}{9.30/0.21} & \multirow{2}{*}{6.74/0.21} & \multirow{2}{*}{7.50/0.21} & L1: 3.85/0.14 & 3.79/0.15&3.99/0.15 & 4.34/0.16 & 9.55/0.37 \\
(deg/m) & & & & & & & &L2: 3.22/0.11 & 3.37/0.12 & 3.26/0.11 & 3.35/0.12 & 8.85/0.36 \\
\bottomrule
\end{tabular}%
\begin{tablenotes}
\item ``L1'' denotes the outputs from the first layer $\mathcal{L}_1$ and ``L2'' denotes the outputs from layer $\mathcal{L}_2$ in our framework
\end{tablenotes}
\end{threeparttable}
}
\label{tab:7Scenes}%
\end{table*}%
\begin{table*}[htbp]
\centering
\caption{Visual localization results evaluated on OpenLoris-Scenes datasets. Each column lists the percentage of localization results satisfying the translational and rotational error thresholds ($0.25m,5\deg$)/($0.5m,5\deg$)/($1m,5\deg$). }
\begin{threeparttable}
\begin{tabular}{cccccccc}
\toprule
& Home1-4 & Home1-5 & Office1-4 & Office1-5 & Office1-6 & Office1-7 & Cafe2-1 \\
\midrule
\midrule
score-map-dr2 (gt select) &53.8/61.0/63.7 &90.2/\textcolor{blue}{92.4/92.8} &12.0/20.1/24.7 & 29.3/31.1/33.0 & 91.4/96.5/96.8 &0/0.1/5.7 &72.1/78.7/\textcolor{blue}{83.0} \\
score-map-dr3 (gt select) &53.2/59.6/62.8 &89.7/91.3/91.5 &13.8/23.4/{34.5} &29.5/31.7/35.8 &89.6/98.5/98.5 &\textcolor{blue}{9.1}/19.1/28.7 &70.7/76.2/82.2 \\
score-map-dr4 (gt select) &57.3/60.9/62.7 & 90.0/91.5/91.5 &{18.3/29.7}/32.4 &{30.5/34.9/37.4} &91.4/96.8/97.2 &3.2/\textcolor{blue}{20.7/30.6} & \textcolor{blue}{74.5/81.8}/82.2 \\
score-map (gt select) & \textcolor{blue}{58.0/63.6/64.4} & \textcolor{blue}{91.8}/92.3/92.3 & 2.1/7.4/13.9 & 27.6/28.9/30.5 & 90.5/99.4/99.6 & 0/0/0.1 & 72.2/80.4/82.4 \\
feature-cat (gt select) & 48.7/55.7/62.1 & 78.7/84.2/85.5 & \textcolor{blue}{26.4/32.6/36.3} & 26.4/32.4/37.7 & 84.4/97.5/98.6 & 0/0.1/11.3 & 57.6/64.5/67.4 \\
\midrule
score-map-dr2 (corr select) &43.1/49.9/51.6 &82.7/85.8/85.9 & 7.5/15.2/18.0 &26.2/28.4/30.0 &88.2/91.6/91.6 &0/0/4.7 &65.6/76.1/82.3 \\
score-map-dr3 (corr select) &43.4/49.6/52.2 &84.1/85.6/85.6 &10.1/18.7/22.1 &28.0/28.8/30.7 &87.3/93.9/94.0 &\textcolor{red}{5.2/11.7/20.2} &66.9/73.4/81.3 \\
score-map-dr4 (corr select) &47.3/52.9/53.5 &85.8/86.8/86.8 &10.5/\textcolor{red}{21.1/24.0} &27.9/29.3/30.7 &89.4/95.0/95.0 &1.6/7.7/23.0 &\textcolor{red}{71.3/80.4}/80.6 \\
score-map (corr select) &\textcolor{red}{50.1/57.2/58.1} &\textcolor{red}{86.6/86.9/86.9} &1.1/4.4/8.3 &27.3/27.7/28.4 &87.0/91.3/91.3 &0/0/0 &69.4/78.0/81.9 \\
feature-cat (corr select) &40.5/49.7/55.5 &69.7/74.1/74.1 &{13.6}/20.0/22.4 &23.7/28.4/32.9 &73.4/90.5/91.4 &0/0/6.7 & 52.9/61.4/62.1 \\
\midrule
SuperPoint+2p \cite{jiao20202} & 38.3/48.5/53.6 & 84.2/86.0/86.0 & \textcolor{red}{15.3}/20.9/20.9 & \textcolor{red}{39.2/39.2/39.3} & \textcolor{red}{100/100/100} & 0.1/0.5/0.5 & 55.3/79.6/\textcolor{red}{85.9} \\
\bottomrule
\end{tabular}%
\begin{tablenotes}
\item The best results are marked as red, and the best results selected by groundtruth are marked as blue.
\end{tablenotes}
\end{threeparttable}
\label{tab:OpenLoris}%
\end{table*}%
\begin{table*}[htbp]
\centering
\caption{Generalization results of different RPR methods evaluated on 7Scenes.}
\begin{tabular}{cccccccc}
\toprule
method& score-map-dr2 & score-map-dr3 & score-map-dr4 & {score-map} & {feature-cat} & RelativePN \cite{laskar2017camera} & {RelocNet}\cite{balntas2018relocnet} \\
training dataset & TUM\cite{sturm2012benchmark} &TUM\cite{sturm2012benchmark} &TUM\cite{sturm2012benchmark} & TUM\cite{sturm2012benchmark} & TUM\cite{sturm2012benchmark} & University\cite{laskar2017camera} & ScanNet\cite{dai2017scannet} \\
\midrule
median error(deg/m) & 6.37/0.24 &5.84/0.23 &5.89/0.23 & 5.71/0.21 & 10.5/0.30 & 18.37/0.36 & 11.29/0.29 \\
\bottomrule
\end{tabular}%
\label{tab:generalization}%
\end{table*}%
We first compare our methods with state-of-the-art methods in terms of localization accuracy by training and testing the models on the OpenLoris-Scenes and 7Scenes datasets separately. As there is no visual localization benchmark on OpenLoris-Scenes, we implement the 2p-RANSAC based MPL solution \cite{jiao20202} with SuperPoint features \cite{detone2018superpoint} as the baseline. Note that as there is only planar motion in OpenLoris-Scenes, the 2p-RANSAC based solution should outperform traditional EPnP \cite{lepetit2009epnp} or 5p \cite{nister2004efficient} based RANSAC solvers. We set the first sequence in ``Home'' and ``Office'' as the map for each scene, and for images in the query sequences we utilize NetVLAD \cite{arandjelovic2016netvlad} to retrieve the top-5 images as references. For the 2p-RANSAC based solution we merge the features in all 5 retrieved images for pose estimation for better robustness. For our RPR methods we regress the relative pose with each retrieved image separately and select one result according to the evaluation method in III-C (``corr select'' in TABLE \ref{tab:OpenLoris}). We also list the results selected by the groundtruth (``gt select'' in TABLE \ref{tab:OpenLoris}), i.e. the result with the smallest localization error, which can be considered the best performance the RPR networks can achieve. For the 7Scenes dataset, we retrieve the top-1 image for RPR evaluation and compare the results with other MPL \cite{zhou2020learn} and learning based localization methods \cite{zhou2020learn,kendall2015posenet,laskar2017camera,brahmbhatt2018geometry,balntas2018relocnet,brachmann2018learning}. The localization results are shown in TABLEs \ref{tab:7Scenes} and \ref{tab:OpenLoris}.
From the evaluation results in TABLE \ref{tab:7Scenes}, we can see that our proposed models implemented with matching layers (``score-map-dr2, score-map-dr3, score-map-dr4, score-map'') outperform the other listed RPR based methods and part of the APR based methods. Note that we also outperform the result in \cite{zhou2020learn}, which likewise leverages a matching layer for pose regression; we attribute this to our pyramid structure based image scale regularization and the combination with depth information. As there is little appearance change or dynamics within these environments, MPL and scene coordinate based methods \cite{brachmann2018learning} achieve superior localization precision because adequate pixel-level correspondences can be found, but our methods show comparable performance. In TABLE \ref{tab:7Scenes} we list the outputs from the two layers of our framework to demonstrate the effectiveness of the second layer in improving precision. In the other experiments we only list the output of the second layer as the regression result.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig/bettercase.pdf}
\caption{Two cases where the SuperPoint+2p-RANSAC method fails but our proposed RPR methods succeed in pose estimation. In the second row we draw the matching results computed from SuperPoint descriptors with yellow lines.}
\label{bettercase}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{fig/office.pdf}
\caption{Rotational errors of different methods evaluated in ``Office1-7''. For better visualization we set the results of the image pairs with overlapped ratio less than 0.1 as 0. The two images are related to the data indicated by the green arrow.}
\label{office-error}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig/cafe-new.pdf}
\caption{The generalization results tested on ``Cafe2-1'' with different methods. The image pairs related to the red arrows are listed in the second row as references to indicate the environment.}
\label{cafe-error}
\end{figure}
TABLE \ref{tab:OpenLoris} lists the localization results on the OpenLoris-Scenes dataset. As we only use one sequence per scene as the map, some query images cannot find matched reference images; thus for some sequences even the best performance cannot reach 100$\%$. We can see that in most of the localization sequences our RPR based methods outperform the SuperPoint+2p-RANSAC method.
We analyze the failure cases in scenes with large performance differences to find the strengths and weaknesses of each method. In the ``Home1-4'' scene, where RPR based methods substantially outperform the RANSAC based method, we find that many failure cases occur in places where the appearance changes significantly or many dynamic objects exist.
Fig. \ref{bettercase} shows two cases where the SuperPoint+2p-RANSAC method fails but the RPR methods give good localization results. Many matches are inaccurate, and some matched points belong to dynamic objects such as the curtain, which further degrades the pose estimation results. In the proposed RPR methods, as the utilized image features have a large receptive field and global correlation information is leveraged, accurate pose estimation can be achieved even with many dynamic objects in view.
In ``Office1-4'' and ``Office1-7'', we find that the RPR method without correlation regularization shows largely decreased performance compared with the regularized variants. In these two cases, the trajectories of the query sequences are almost opposite to the mapping trajectories, and some objects are removed across scenes. Thus many retrieved images have only little overlap with the query ones. We draw the rotational errors of ``score-map'' and ``score-map-dr4'' tested on ``Office1-7'' along with the overlap ratio in Fig. \ref{office-error}. At the beginning of the trajectory, where the environment can be inferred from the figures in the first row of Fig. \ref{office-error}, ``score-map-dr4'' shows superior performance under large viewpoint changes, which validates the effectiveness of our proposed correlation reduction process.
\subsection{Generalization Study}
\begin{table}[]
\centering
\caption{Generalization results evaluated on OpenLoris datasets based on the models trained on 7Scenes. Each column lists the percentage of localization results satisfying the translational and rotational error thresholds ($0.25m,5\deg$)/($0.5m,5\deg$)/($1m,5\deg$). }
\begin{threeparttable}
\begin{tabular}{lcc}
\toprule
& Office1-6 & Cafe2-1 \\
\midrule
score-map-dr2 (7S, gt select) &\textcolor{blue}{87.7/90.5/90.8} &64.6/74.6/77.2 \\
score-map-dr3 (7S, gt select) & 78.8/84.3/85.1& \textcolor{blue}{71.1/82.4/83.5}\\
score-map-dr4 (7S, gt select) &78.8/84.3/85.1 &65.5/70.3/71.0 \\
score-map (7S, gt select) &74.2/81.5/81.5 &{67.7}/72.5/73.2\\
feature-cat(7S, gt select) &55.8/75.3/77.3 &22.2/41.0/47.2 \\
\midrule
score-map-dr2 (7S, corr select) &\textcolor{red}{78.9/79.9/79.9} &55.9/64.5/65.3 \\
score-map-dr3 (7S, corr select) &64.0/67.9/67.9 &58.0/\textcolor{red}{76.2/77.3} \\
score-map-dr4 (7S, corr select) &64.0/67.9/67.9 &58.1/62.4/62.7 \\
score-map (7S, corr select) &52.9/60.5/60.5 & \textcolor{red}{59.0}/66.0/65.0\\
feature-cat(7S, corr select) &34.8/50.6/50.9 & 11.1/24.9/27.9\\
\midrule
SuperPoint+2p \cite{jiao20202} & {100/100/100} & 55.3/ {79.6/85.9} \\
\bottomrule
\end{tabular}%
\begin{tablenotes}
\footnotesize
\item The best results of RPR methods are marked as red and the best results selected by groundtruth are marked as blue.
\end{tablenotes}
\end{threeparttable}
\label{tab:OpenLoris-general}%
\end{table}%
In this subsection we conduct a generalization study of our networks and evaluate the performance of the different input feature structures for the regression layer. We reuse the models trained on the ``Home'' and ``Office'' scenes to test cross-scene generalization on the ``Cafe'' scene; the results are listed in the last column of TABLE \ref{tab:OpenLoris}. Then we finetune these models on the TUM-RGBD dataset for 5 epochs to supplement the degrees of freedom of motion before applying them to the 7Scenes dataset to test cross-dataset generalization. Our results, along with generalization results from state-of-the-art methods, are shown in TABLE \ref{tab:generalization}.
From the generalization results in TABLE \ref{tab:OpenLoris}, we find that our RPR networks with a matching layer still outperform the SuperPoint+2p-RANSAC method at the first two error thresholds, which validates the generalization ability of our RPR networks. In this case the result of the ``feature-cat'' network degrades significantly, which reflects the importance of the matching layer for generalization. We draw the localization errors of different methods in Fig. \ref{cafe-error}; the results show that most failure cases of the SuperPoint+2p-RANSAC method are due to large translational error. We select two places from these failure cases and show the image pairs in Fig. \ref{cafe-error}. In these cases, though the appearance change is not as significant as in ``Home'', there are many dynamic objects and people in the scene, making it hard to find accurate pixel-level correspondences with reliable depth for pose estimation. This again reveals the advantage of regression based methods, which relax the requirement for pixel-to-pixel correspondences.
TABLE \ref{tab:generalization} shows the generalization performance of different methods tested on 7Scenes; the precision of our proposed methods with a matching layer even outperforms some RPR methods listed in TABLE \ref{tab:7Scenes}. We also test the generalization of the models trained on 7Scenes to OpenLoris-Scenes. As there is no appearance change in the 7Scenes datasets, we only show the generalization results on ``Office1-6'' and ``Cafe2-1'', in which the query and mapping trajectories are almost the same and only a small part of the scene is changed. The results are listed in TABLE \ref{tab:OpenLoris-general}. They show that even though both the sensors and the environments are changed, the degradation is not severe.
\subsection{Ablation Study}
Here we evaluate the effectiveness of depth fusion in terms of scale recovery.
To this end, we train the network without depth concatenation on the regularized feature and test its generalization performance on the ``Cafe'' dataset. Besides listing the results according to the thresholds in TABLE \ref{tab:OpenLoris}, we also list the percentage of results with angular error smaller than $5^{\circ}$ in TABLE \ref{tab:depth_ablation}. The rotational estimation performance is similar with and without depth concatenation, while the performance taking translational error into account differs greatly, which demonstrates that the depth concatenation within the model successfully contributes to scale recovery.
\begin{table}[]
\centering
\caption{Ablation results to validate the effectiveness of scale recovery tested on Cafe2-1 dataset.}
\setlength{\tabcolsep}{0.6mm}{
\begin{tabular}{ccccc}
\toprule
& score-map-dr2 & score-map & feature-cat & dr2-noDepth \\
\midrule
recall at thresholds & 72.1/78.7/83.0 & 72.2/80.4/82.4 & 57.6/64.5/67.4 & 56.4/63.9/66.6 \\
$\Delta\theta_2<5^{\circ}$ & 84.1 & 83.8 & 68.3 & 83.8 \\
\bottomrule
\end{tabular}%
}
\label{tab:depth_ablation}%
\end{table}%
\section{CONCLUSIONS}
In this paper we propose a novel relative pose regression framework for visual localization. To improve the network's generalization to unseen scenes, we explicitly add a matching layer and utilize the correlation volume for pose regression. Besides, we design a pyramid based structure to regress the pose from coarse to fine at restricted resolutions, and apply dimension reduction along the correlation channel to improve robustness under large viewpoint changes. The experiments validate that our network achieves state-of-the-art localization performance and demonstrates comparable results even in generalization tests on unseen scenes. Experiments also show that regression based visual localization methods possess large potential in complicated real-world environments compared with methods that require pixel-level correspondences for pose estimation. In the future we plan to investigate fusing multiple map images to improve the robustness of visual localization.
\addtolength{\textheight}{-3cm}
\bibliographystyle{ieeetr}
\section{Introduction}\label{sec:intro}
The separation of speech audio containing multiple speakers has been recognized as one of the core problems in speech processing over the last few decades \cite{cherry1953some,brown1994computational}. Speaker extraction \cite{wang2018voicefilter} is a special case of speech separation, in which the system is expected to regenerate the speech of one particular target speaker from the input audio. Specifically, given a reference audio from the target speaker and a mixed audio with different speakers, the algorithm extracts vocal features from the reference audio and outputs a new audio clip based on the mixed audio, containing speech from the target speaker only.
Traditional approaches \cite{hershey2016deep,chen2017deep} to speech separation mostly target the spectrogram of the mixed audio computed by the Short-Time Fourier Transform (STFT). The key is to build a mask over the 2-dimensional spectrogram image such that information irrelevant to the target speaker is filtered out. The magnitude information in spectrograms is incomplete for speech separation tasks, since the phase information of the signals is discarded during STFT. The performance of mask-based approaches over spectrograms is known to be bounded by the performance of \emph{optimal} masks derived from the ground truth, such as the Ideal Binary Mask (IBM) \cite{wang2005ideal}, the Ideal Ratio Mask (IRM) \cite{li2009optimality}, and the Wiener Filter-like Mask (WFM) \cite{erdogan2015phase}. A natural alternative is therefore to apply masking strategies to the raw time-domain waveform instead of the time-frequency domain spectrogram. TaSNet \cite{luo2018tasnet,luo2019conv}, for example, is one of the most successful neural networks for time-domain speech separation, which generates masks over the original signals to split the waveform given the number of speakers. Despite the huge success of TaSNet on the speaker separation task, it is unfortunately challenging to directly extend it to the speaker extraction task, due to the following limitations.
\begin{figure*}[t]
\centering
\includegraphics[width=6in]{images/modelstructure.pdf}
\vspace{-5pt}
\caption{The overall workflow of X-TaSNet.}
\label{fig:workflow}
\vspace{-10pt}
\end{figure*}
Firstly, TaSNet does not attempt to identify the speakers during the separation process, and is consequently unable to focus on a specified speaker.
Secondly, TaSNet unrealistically assumes that the model knows the number of speakers as prior knowledge.
In order to maximize the power of such a time-domain approach for the speaker extraction task, one simple solution to the \emph{speaker identification} problem above is to add an extra speaker verification module over the outputs of the speech separation network, which is expected to recognize the output speech from the target speaker. This strategy, however, has difficulties when the target speaker is not present in the mixed audio. Moreover, TaSNet still needs to know the \emph{number of speakers} in advance, which is rarely available in real-world application scenarios.
In this paper, we propose X-TaSNet. It aims to transfer the knowledge of a speaker verification model to the speech separation model, such that separation and speaker verification are accomplished with a single shot.
It obtains the speaker identification capability from a pre-trained speaker verification model, and includes it in the separation network in order to perform speech extraction without knowing the exact number of speakers, even on noisy audio clips.
As a short summary, the core contributions of the work include:
\begin{enumerate}
\item
We present the first time-domain speaker extraction approach, which is seamlessly combined with a speaker verification model. The model exhibits state-of-the-art performance.
\item
We incorporate a novel loss function and a corresponding alternating training scheme to fully exploit the power of the time-domain neural network.
\item
We propose new metrics to better measure the accuracy of extracting the voices from the correct target speaker.
\item
We explore a new scenario where the target speaker is not present in the input speech, and propose a new training scheme for this scenario and new metrics for measuring the performance in this case.
\end{enumerate}
The rest of the paper is organized as follows. Section \ref{sec:model} introduces the workflow and the neural network model used in our speaker extraction method. Section \ref{sec:train} discusses our new training scheme to fully exploit the power of the model. Section \ref{sec:experiment} presents the empirical results of our approach over different speech datasets. Section \ref{sec:related} reviews the existing studies and finally Section \ref{sec:conclusion} concludes the paper and discusses the future research directions.
\section{Model}\label{sec:model}
Given a single channel audio clip, denoted by a sequence $x(t)$ over time index $t$, and a reference speech audio $r_i(t)$ from a known speaker $i$, the goal of speech extraction is to generate a new audio clip $s_i(t)$ from $x(t)$, such that $s_i(t)$ contains pure speech audio from speaker $i$.
In this section, we present the details of our X-TaSNet model, which is built on top of the convolutional implementation of TaSNet, i.e., Conv-TaSNet \cite{luo2019conv}. Different from the original Conv-TaSNet, our X-TaSNet revises the original encoder, extractor, and decoder, and adds an additional \textit{Speaker Encoder} to the model.
Following \cite{luo2019conv}, the input mixture audio signal is firstly split into overlapping segments of dimension $L$.
These segments are then transformed to vectors of dimension $N$. This transformation $\mathbb{R}^L\xrightarrow{}\mathbb{R}^N$ is learnable, and the mask prediction is performed in $\mathbb{R}^N$ space instead.
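As a rough, non-learnable illustration of this front-end segmentation (the 50\% overlap default is our assumption; Conv-TaSNet implements the $\mathbb{R}^L\xrightarrow{}\mathbb{R}^N$ map itself as a learned 1-D convolution, which is not shown here):

```python
import numpy as np

def segment(x, L, stride=None):
    """Frame a 1-D waveform into overlapping segments of length L.
    Leftover samples at the end that do not fill a segment are dropped."""
    stride = stride or L // 2          # 50% overlap by default (assumption)
    n = 1 + max(0, len(x) - L) // stride
    return np.stack([x[i * stride : i * stride + L] for i in range(n)])
```

Each row of the resulting matrix is one length-$L$ segment, which the learnable \textit{Encoder} would then map to an $N$-dimensional vector.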
After the model completes the transformation, the outputs from \textit{Encoder} are concatenated with the voiceprint of the target speaker on each segment.
The voiceprint is produced from the target speaker's reference speech audio $r(t)$ by a pre-trained speaker verification model, usually called the \textit{Speaker Encoder}. The concatenation is then fed to the \textit{Extractor} to infer the mask for the target speaker's speech.
By applying the mask over the output of the \textit{Encoder}, the masked vectors are the internal representation of the target speaker's speech within each segment in the space $\mathbb{R}^N$. The \textit{Decoder} is then invoked to convert it to the space of $\mathbb{R}^L$ and finally produce the output waveform.
The \textit{Extractor} employs Temporal Convolutional Network (TCN), following \cite{luo2019conv}.
Speaker embedding from \textit{Speaker Encoder} is generated by using Generalized End-to-End (GE2E) speaker verification model \cite{wan2018generalized}.
We skip the details of TCN and GE2E due to space limitation.
\section{Training}\label{sec:train}
To fully unleash the power of the proposed neural network structure in X-TaSNet, we design several novel techniques in the training scheme based on the model structure.
\noindent\textbf{Additional Loss on Distortion}
While speech separation models, e.g., TaSNet, are clearly designed to minimize the loss over all output speakers when the number of speakers is known, the problem becomes tricky when the speech extraction model targets one particular speaker only. The speakers present in the audio other than the target speaker are called \emph{distortion speakers} in the rest of this section.
Instead of optimizing the quality of speech from the target speaker only, we find it is equally helpful to minimize the error on distortion speakers.
The core challenge here is how to define the loss function when the number of distortion speakers is unknown to the speech extraction model.
Our solution in X-TaSNet is to adopt a new Loss on Distortion (LoD) configuration. Under this configuration, X-TaSNet generates two outputs. The first output is expected to contain the target speaker's voice only, while the second output is the mixture of all the distortion speakers' voices.
The LoD is defined as the reconstruction error over the mixture of speech audio from all distortion speakers. This strategy encourages the model to generate a clean separation more than a pure extraction.
\begin{table}[h!]
\centering
\caption{Performance in SI-SNRi and NSR with and without LoD. SI-SNRi stands for scale-invariant signal-to-noise ratio improvement, and NSR stands for the Negative SI-SNRi Rate. Both metrics are explained in Section~\ref{sec:experiment}.}
\label{tab:LoDmetric}
\begin{tabular}{l|c|c}
\hline
& SI-SNRi & NSR \\
\hline
X-TaSNet w.o. LoD & 12.7 & 6.3\% \\
X-TaSNet w. LoD & 12.8 & 7.0\% \\
\hline
\end{tabular}
\end{table}
In Table~\ref{tab:LoDmetric}, we report the effectiveness of the LoD strategy on output speech quality in the metrics of SI-SNRi and negative SI-SNRi rate (NSR). SI-SNRi is the scale-invariant signal-to-noise ratio improvement, and NSR is the rate at which SI-SNRi is negative, indicating the likelihood of extremely poor performance, usually caused by a wrong speaker identity in the output audio. The results imply that LoD does not enhance the average quality of the output speech, and the reason is the degraded capability of speaker identification.
We believe the additional term of loss on the distortion speakers, which adopts the speech separation losses, is helpful for improving the purity of speech output from the extraction. However, this additional loss may also \emph{distract} the optimization of the model, and confuse the model on extracting the correct speaker. More discussions and empirical evaluations on the robustness issue are available in Section~\ref{sec:experiment}.
\noindent\textbf{Alternating training} According to the observations in Table \ref{tab:LoDmetric}, direct optimization over SI-SNRi may not be the right choice for model training, if we seek to build a robust speech extraction model.
This motivates us to deploy a different training scheme, which targets improving the accuracy of speaker extraction, namely the rate of extracting the correct speakers. This leads to the design of an \emph{alternating training scheme} to replace the standard training in the original design of TaSNet.
Each training tuple in the dataset is formulated as $\langle x(t),\; r_i(t),\; s_i(t),\; m_i(t) \rangle$ where $x(t)$ is the input audio, $s_i(t)$ is the ground truth of pure speech from target speaker $i$, $m_i(t)$ is the ground truth of the mixture of distortion speakers except speaker $i$, and $r_i(t)$ is the reference speech audio of the target speaker $i$.
In our new alternating training scheme, the extracting target of one tuple is expanded to all the speakers in the mixed speech.
To be precise, the expanded training tuple is reformulated as $\langle x(t), \left(r_1(t),s_1(t), m_1(t)\right),\ldots,\left(r_n(t),s_n(t),m_n(t)\right)\rangle$ where $n$ is the total number of speakers in the mixed audio clip $x(t)$.
For each training tuple, the model extracts $n$ voices based on the reference speech audio clips $\left(r_1(t),\ldots,r_n(t)\right)$ of all speakers. The corresponding target voices are $\left(s_1(t), \ldots, s_{n}(t)\right)$.
The distortion voice is calculated by summing up all speech signals from other speakers.
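A minimal sketch of this expansion, with hypothetical helper names; each speaker entry is assumed to be a `(reference, clean_speech)` pair sampled at the same length as the mixture:

```python
def expand_alternating(mixed, speakers):
    """Expand one n-speaker mixture into n training examples, each
    targeting a different speaker; the distortion target is the sum of
    all other speakers' clean signals."""
    out = []
    for i, (ref_i, s_i) in enumerate(speakers):
        distortion = [sum(s[j] for k, (_, s) in enumerate(speakers) if k != i)
                      for j in range(len(mixed))]
        out.append((mixed, ref_i, s_i, distortion))
    return out
```

With this expansion, one mixture contributes $n$ gradient steps, one per speaker embedding, instead of a single randomly chosen target.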
Alternating training may look similar to the traditional data augmentation strategies commonly used in machine learning. We believe it unveils a deeper insight into a trade-off in training efficiency:
whether to spend more steps exploring different mixed speeches (without alternating training), or to take $n$ steps for each mixture to achieve higher accuracy in pairing up each speaker embedding with the corresponding audio (with alternating training).
Given that speaker matching accuracy is the bottleneck under LoD, alternating training is expected to help it to achieve higher performance by increasing the accuracy of extracting the correct speaker's voice.
\noindent\textbf{Speaker Presence Invariant Training (SPIT)}
During the construction of the training dataset, for each mixed audio, we ensure there is at least one reference audio from a speaker not present in the mixed audio $x(t)$. This helps to force the model to consider cases when the target speaker is not present in the audio.
This additional reference audio is denoted as $\hat{r}$.
The target speech for this absent speaker is a segment of silent audio. In order to control the influence of this silent training target, only a small portion of the training tuples include such an absent target speaker.
A special training loss is designed in Section~\ref{subsec:trainloss} in order to penalize the outputs when they are far from silence.
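A hedged sketch of this dataset construction; the helper name, the all-zero silent target, and the 10\% default for the absent-speaker portion are our assumptions, not values stated in the paper:

```python
import random

def make_spit_tuple(mixed, present, ref_absent, absent_prob=0.1, rng=random):
    """With a small probability, pair the mixture with a reference audio
    from an absent speaker and a silent (all-zero) target; otherwise pick
    a present speaker's (reference, clean_target) pair."""
    if rng.random() < absent_prob:
        return mixed, ref_absent, [0.0] * len(mixed)   # silent target
    ref, target = rng.choice(present)
    return mixed, ref, target
```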
\noindent\textbf{Training loss}\label{subsec:trainloss}
Regarding the training loss, the Scale-Invariant Signal-to-Noise ratio (SI-SNR) is used in the proposed model. It is similar to the Signal-to-Distortion Ratio (SDR), which is employed by the original version of TaSNet. It is formulated as:
\begin{equation}\label{eq:1}
SI\text{-}SNR:=
10\log_{10}
\frac
{
\left\lVert
\frac
{
\langle \hat{s},s \rangle s
}
{
\left\lVert s \right\rVert^2
}
\right\rVert^2
}
{
\left\lVert
\hat{s}-
\frac
{
\langle \hat{s},s \rangle s
}
{
\left\lVert s \right\rVert^2
}
\right\rVert^2
}
\end{equation}
where $s,\hat{s} \in \mathbb{R}^T$ are the target speaker's voice and the estimated voice, both normalized to zero mean, and $T$ is the audio length.
The loss function is thus defined as:
\begin{equation}\label{eq:2}
L:=-SI\text{-}SNR
\end{equation}
When \emph{Loss on Distortion} is activated, the loss function is revised as follows:
\begin{equation}\label{eq:3}
L':=-(SI\text{-}SNR_{target}+SI\text{-}SNR_{distortion})
\end{equation}
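The losses above can be sketched numerically as follows; the helper names are our own, and the small `eps` guard is an implementation convenience not stated in the paper:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-Invariant SNR in dB, as in Eq. (1); inputs are zero-meaned."""
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = np.dot(est, ref) / (np.dot(ref, ref) + eps) * ref
    e_noise = est - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps)
                           / (np.dot(e_noise, e_noise) + eps))

def loss(est, ref):
    # Eq. (2): the training loss is the negative SI-SNR.
    return -si_snr(est, ref)

def loss_with_lod(est_t, ref_t, est_d, ref_d):
    # Eq. (3): with Loss on Distortion, the SI-SNR of the distortion
    # output (mixture of all distortion speakers) is added.
    return -(si_snr(est_t, ref_t) + si_snr(est_d, ref_d))
```

Scale invariance means multiplying the estimate by any nonzero constant leaves the loss unchanged, which is why a separate energy-based measure (Eq.~\ref{eq:5}) is needed for silent targets.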
If alternating training is employed, the generation of the training dataset follows the strategy introduced above: each random combination of speech audios renders $n$ different target speech audios, based on the number of speakers engaged in the mixed audio.
Finally, when \textit{Speaker Presence Invariant Training} (SPIT) is used, one more speaker $\hat{s}$ is added into the target speakers for each mixed audio, such that $\hat{s}$ is not among the $n$ present speakers.
Note that the target speech audio of the absent speaker is a silent audio.
Applying the loss in Equation~\ref{eq:1} directly to silent audios is not feasible, since their norm is zero.
To solve this problem, the loss function is revised when the target speech audio is silent, as:
\begin{equation}\label{eq:5}
Decibel:=
10\log_{10}
(\left\lVert \hat{s} \right\rVert^2)
\end{equation}
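A minimal sketch of this energy measure; the `eps` floor is our own addition to avoid $\log 0$ for exactly silent outputs:

```python
import numpy as np

def energy_db(est, eps=1e-10):
    # Eq. (5): output energy in dB; for an absent target speaker the
    # model is trained to push this toward silence (large negative dB).
    return 10.0 * np.log10(np.dot(est, est) + eps)
```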
\section{Experiments}\label{sec:experiment}
\noindent\textbf{Experimental Setup} We train the speaker encoder model GE2E over LibriSpeech \cite{panayotov2015librispeech}, VoxCeleb1 \cite{nagrani2017voxceleb}, and VoxCeleb2 \cite{chung2018voxceleb2} datasets.
For the speech extraction model, the training dataset is taken from LibriSpeech. The mixture is produced by randomly mixing speeches from two different speakers, following the setting in the experiments of VoiceFilter \cite{wang2018voicefilter}. We use the same mixture dataset from Google's release\footnote{\url{https://github.com/google/speaker-id/tree/master/publications/VoiceFilter/dataset/LibriSpeech}}. Due to limited computation resources, we only use the \emph{clean} subset of LibriSpeech.
The audios are clipped to 3 seconds each for more efficient training.
None of the reference audios appear in the mixed audios.
For GE2E and TCN, we follow the settings of the models and training as proposed in \cite{wan2018generalized} and \cite{luo2019conv}.
\noindent\textbf{Evaluation Metrics} Scale-Invariant Signal-to-Noise ratio improvement (SI-SNRi) and Signal-to-Distortion Ratio improvement (SDRi) are used to measure the quality of the extracted speech.
Some models may render a higher SDRi than another model but achieve lower accuracy in extracting the correct target speaker. To better address the robustness of the extraction output, we measure speech extraction accuracy using two metrics: \textit{Negative SI-SNRi Rate} (NSR) as an objective metric and \textit{Speaker Error Rate} (SpkER) as a subjective metric. Specifically, NSR is the portion of the extracted speeches that have negative SI-SNRi, meaning the extraction does not improve the quality of the given noisy speeches. Our subjective observations imply that once the model extracts a wrong speaker's voice, SI-SNRi is almost always a large negative value. NSR is thus a good approximation to the speaker extraction error rate.
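NSR as described above reduces to a simple counting statistic over the evaluation set; a sketch:

```python
import numpy as np

def negative_si_snri_rate(si_snri_values):
    """NSR: the fraction of extracted utterances with negative SI-SNRi,
    used as an objective proxy for wrong-speaker extraction."""
    v = np.asarray(si_snri_values, dtype=float)
    return float((v < 0).mean())
```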
Finally, we also report SpkER, which is the error rate of speakers directly evaluated by humans, by listening to the extracted speech audio from the algorithms.
\noindent\textbf{Quality Evaluation}
In Table~\ref{tab:metrics}, LoD stands for \textit{Loss on Distortion}, AT stands for \textit{Alternating Training}. Voicefilter is adopted as our baseline approach in this group of experiments\footnote{\url{https://github.com/mindslab-ai/voicefilter}}.
Both X-TaSNet and VoiceFilter are trained with the same training and testing data.
Our results show that X-TaSNet outperforms VoiceFilter by a large margin, when both are tested using the same speaker verification model.\footnote{Demos can be found at \url{https://speech-ai.github.io/xtasnet/}}
The SDRi and SI-SNRi of X-TaSNet are about twice those of VoiceFilter. LoD and alternating training together give the best performance.
Alternating training, as is shown in Table \ref{tab:metrics}, helps LoD to improve the SDRi and SI-SNRi, as well as the speaker extraction accuracy.
Alternating training without LoD does not generate equally good results, because of the poor data efficiency as discussed in Section \ref{sec:train}.
Table~\ref{tab:metrics} also presents similar NSR and SpkER scores, where SpkER is the subjective metric that measures the speaker extraction error rate.
This verifies that NSR is a reasonable indicator of speaker extraction error without human efforts on annotations.
\begin{table}[t]
\centering
\caption{The performance of VoiceFilter and our proposed models on the mean SI-SNRi (dB), mean SDRi (dB), NSR and SpkER.}
\label{tab:metrics}
\begin{tabular}{p{3.5cm}|p{.7cm}|p{.7cm}|p{0.6cm}|p{0.6cm}}
\hline
Model & SDRi & SI-SNRi & NSR & SpkER \\
\hline
VoiceFilter & 7.4 & 6.4 & 9.2\% & 9.5\% \\
\hline
X-TaSNet w.o. LoD AT & 13.8 & 12.7 & 6.3\% & 6.6\% \\
X-TaSNet w.o. AT & 13.8 & 12.8 & 7.0\% & 7.0\% \\
X-TaSNet w.o. LoD & 14.0 & 13.1 & 5.2\% & 5.5\% \\
X-TaSNet & \textbf{14.7} & \textbf{13.8} & \textbf{4.3\%} & \textbf{4.6\%} \\
\hline
\end{tabular}
\end{table}
\noindent\textbf{Effects of Speaker Presence Invariant Training}
To better understand the robustness of the models with respect to an absent target speaker, we further investigate the energy measurement in dB, as defined in Equation~\ref{eq:5}, over the outputs of the models on arbitrary audio input without the presence of the target speaker. The distribution of the output energy of the models is plotted in Figure~\ref{subfig:dist-db}. It is clear that X-TaSNet using Speaker Presence Invariant Training (SPIT) is able to detect the target speaker's absence and output silent audios with energy around $-100$ dB.
To have a quantitative evaluation of the speaker's presence dependence property, we propose a new metric Negative Energy Rate (NER), which indicates speaker absence detection accuracy. The results in Figure~\ref{subfig:dist-db} imply that \emph{zero point} could be a good separation boundary for analysis of absence detection accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=.94\linewidth]{images/db.png}
\vspace{-5pt}
\caption{The distribution of extracted absent speakers' voice energy in dB. VoiceFilter and X-TaSNet differ in scale since our measure is not scale-invariant. X-TaSNet returns silent audio for most of the arbitrary audios.}
\label{subfig:dist-db}
\vspace{-10pt}
\end{figure}
With SPIT, X-TaSNet achieves a higher NER. But its SI-SNRi of \textit{12.9} is below that of the current best model without SPIT. This is partially because of the higher speaker extraction error of 8.3\%. Another important reason is the SI-SNRi distribution.
For X-TaSNet, once it extracts the wrong speaker, there is still some voice in the extracted speech, but for X-TaSNet-SPIT, it may mistakenly decide that the target speaker is not present, and output silent audios.
Silent audios make SI-SNRi a negative number with a large absolute value. Thus, in the speaker absence scenario, SI-SNRi is not a fair metric.
Based on the discussion above, we propose a new metric, the Silence-Invariant Scale-Invariant Signal-to-Noise Ratio improvement (SISI-SNRi). When SI-SNRi turns negative, it is highly likely that the extraction output is on the wrong speaker.
Because we are not interested in the output speech audio quality when the target speaker is wrong in the first place, we combine SISI-SNRi and NSR and build a new metric, which is expected to better reflect the actual usefulness of the speech extraction in real-world application scenarios.
Specifically, the new metric SISI-SNRi denotes the cleanness of the output speech at SI-SNRi when the model targets the correct speaker.
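One concrete way to compute SISI-SNRi, using the zero-point boundary suggested by the NSR discussion above, is to average SI-SNRi only over outputs judged to target the correct speaker; this form is our interpretation, not a formula given in the paper:

```python
import numpy as np

def sisi_snri(si_snri_values):
    # Average SI-SNRi restricted to outputs with SI-SNRi >= 0,
    # i.e. excluding likely wrong-speaker extractions (our interpretation).
    v = np.asarray(si_snri_values, dtype=float)
    return float(v[v >= 0].mean())
```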
The results of SISI-SNRi are summarized in Table~\ref{tab:allmetric}.
When compared against X-TaSNet, X-TaSNet-SPIT achieves comparable SISI-SNRi but worse NSR. Although the speaker extraction accuracy degrades, X-TaSNet-SPIT gains the ability to detect the target speaker's absence. Both VoiceFilter and X-TaSNet have 0\% NER, while X-TaSNet-SPIT hits 72.4\% accuracy on absence detection.
\begin{table}[h!]
\centering
\caption{Performance comparison between Voicefilter, X-TaSNet and X-TaSNet-SPIT}
\label{tab:allmetric}
\begin{tabular}{c|c|c|c}
\hline
Model & SISI-SNRi & NSR & NER \\
\hline
Voicefilter & 7.31 & 9.2\% & 0\% \\
X-TaSNet & \textbf{15.57} & \textbf{4.3\%} & 0\% \\
X-TaSNet-SPIT & 14.50 & 8.3\% & \textbf{72.4\%} \\
\hline
\end{tabular}
\end{table}
\section{Related Work}\label{sec:related}
Speech extraction is a task closely related to speech separation.
Due to the limitation that speech separation is unaware of the output speaker order, as well as the progressive improvements of neural speaker encoders, researchers have attempted to use speaker embeddings in order to target the speech audio from a specific speaker.
\cite{vzmolikova2017learning,zmolikova2017speaker} are models extracting the target speaker's voice from an array of microphones. They use mask-based methods, and train the speaker information encoder jointly with the model.
\cite{delcroix2018single} uses a similar method, and proves that the method is feasible for the single-channel scenario.
\cite{wang2018deep} also addresses single-channel speaker extraction, but with short reference utterances. It achieves this by creating embeddings of the mixture and the reference utterance separately and combining them into a single input to the mask prediction network.
\cite{zmolikova2019speakerbeam} proposed SpeakerBeam, which discusses different ways of utilizing the reference audio information,
and \cite{delcroix2019compact} is a simpler version of SpeakerBeam with fewer parameters but comparable performance.
\cite{xiao2019single} uses an attention network, which is different from the above-mentioned models. However, it is only used in the scenario where the inventory of speakers is given.
\cite{wang2018voicefilter} is our baseline model. It uses a pretrained speaker verification model to extract the reference speaker's voiceprint, so its speaker identification ability depends highly on this separate model. The benefit is that we can use a separate large corpus to train the speaker verification model to increase its generalization ability on speaker identification, independently of the speech extraction model.
All the methods described above process the speech signal on the time-frequency domain, instead of the time domain as X-TaSNet does.
\section{Conclusion and Future Work}\label{sec:conclusion}
In this paper, we present X-TaSNet, a new speech extraction approach based on a time-domain speech separation neural network. By employing new loss functions and training scheme, X-TaSNet demonstrates significant performance enhancement over the state-of-the-art solution, on both output speech audio quality and speaker identity robustness. In the future, we would like to explore two research directions.
Firstly, the NER of X-TaSNet-SPIT is below 80\%, which may not be sufficient for serious scenarios. We believe improving NER is an important direction. Secondly, the current design and testing of X-TaSNet targets the speech separation task, which could be extended to handle other types of noise, such as background music, for other speech enhancement tasks.
\bibliographystyle{IEEEtran}
\section{Introduction: holographic order parameter}
Strong correlation is a property of a phase of matter in general, not of a few special materials, because even a weakly interacting material can become strongly interacting in some parameter region. This happens when the Fermi surface (FS) is tuned to be small, or when the conduction band is designed to be flat. The Coulomb interaction in a metal is small only because the charge is screened by the particle-hole pairs which are abundantly created when the FS is large. In fact, any Dirac material is strongly correlated as long as its FS is near the tip of the Dirac cone. This was demonstrated in clean graphene \cite{pkim,Lucas:2015sya} and at the surface of a topological insulator \cite{liu2012crossover,zhang2012interplay,bao2013quantum} through anomalous transports that could be quantitatively explained by a holographic theory \cite{Seo:2016vks,Seo:2017oyh,Seo:2017yux}.
In the cuprates and other transition metal oxides, the hopping of the electrons in the 3d shells is much slowed down because the outermost 4s-electrons are taken by the oxygen. In disordered systems, electrons are slowed down by the Kondo physics \cite{Coleman:2015uma}.
In twisted bi-layered graphene\cite{cao2018unconventional,cao2018correlated} flat band appears due to the formation of larger size effective lattice system called Moire lattice.
In short, strong correlation phenomena are ubiquitous, the traditional methods do not work very well for them, and therefore a new method has been longed for over many decades.
When the system is strongly interacting, it is hard to characterize the system in terms of its basic building blocks, and one faces the question of how to handle the huge number of degrees of freedom to build a physical description that involves just a few parameters.
Recently, much interest has been given to holography as a possible tool for strongly interacting systems (SIS), by applying the idea to describe the quantum critical point (QCP) that characterizes, for example, the normal phase of unconventional superconductivity.
Notice however that the QCP is often surrounded by an ordered phase. A physical system can be identified by information from the nearby phase as well as from the QCP itself.
For an ordinary finite-temperature critical point, the Ginzburg-Landau (GL) theory is introduced precisely for that purpose. As is well known, it describes the transition between the ordered and disordered states near the critical point.
It works for weakly interacting theories, and when it works it is simple but powerful. The order parameter depends on the symmetry of the system, and the phase transition is due to the symmetry breaking.
The tantalizing question is whether there is a {\it working} GL theory for strongly interacting systems.
The GL theory also works because of the universality coming from the vast amount of information lost at the critical point, which resembles a black hole.
For the quantum critical point,
we need one more dimension to encode the evolution of physical quantities along the probe energy scale \cite{wilson1971renormalization,wilson1975renormalization}. Therefore it is natural to interpret AdS/CFT \cite{Maldacena:1997re, Witten:1998qj,Gubser:1998bc} as a GL theory for the strongly interacting system, where the radial coordinate describes the dependence on the renormalization scale \cite{alvarez1998geometric,balasubramanian1999spacetime,de2000holographic,heemskerk2011holographic}.
For this reason we call it the Ginzburg-Landau-Wilson theory.
The transport and the spectral function (SF) have been calculated in various gravity backgrounds using the holographic method. However, in general it has been less clear to what physical system such results correspond.
For this we believe that the information on the ordered phase is as important as the information on the QCP itself. Clarifying this point will be the first step for more serious condensed matter physics application of the holography idea and this is the purpose of this paper.
The idea is to introduce holographic order parameters of various symmetry types and calculate the spectral function in the presence of the order. The resulting features of the fermion spectrum should be compared with Angle-Resolved Photoemission Spectroscopy (ARPES) data, which is the most important fingerprint of the materials.
Notice that both the magnetization and the gap of superconductor can be understood as the expectation value of fermion
bi-linears\cite{fradkin2013field}
$\vev{\chi^\dagger \vec{\sigma} \chi}$ and $\vev{\chi \chi}$ of the fermion $\chi$.
In fact, the expectation value of any fermion bilinears can play the role of leading order parameters.
When two or more of them are non-zero, they can compete or coexist according to details of dynamics.
Then, the most natural order parameter in the holographic theory should be {\it the bulk dual field of the fermion bilinear} because it contains the usual order parameter as the coefficient of its sub-leading term in the near boundary expansion.
The presence of the order parameter actually characterizes the physical system off but near the critical point.
We will calculate spectral functions \cite{sslee,Liu:2009dm,Iqbal:2009fd,Cubrovic:2009ye} in the presence of the order parameter.
Our prescription for them is to add the Yukawa type interaction between the order parameter and the fermion bilinear in the bulk and see its effect on the spectrum.
\vskip .3cm
To be more specific, let $\psi_{0}$ be the source field of the fermion $\chi$ at the boundary and $\Phi_{0I}$ be the source of the fermion bilinear ${ \bar \chi}\Gamma^{I}\chi $ where $I=\{\mu_{1}\mu_{2}\cdots \mu_{n}\}$ represent different tensor types of Gamma matrix.
The extension of source fields $\psi_{0}$ and $\Phi_{0I}$ to the AdS bulk is the bulk dual field $\psi$ and the order parameter field $\Phi_{I}$.
We calculate the fermion spectral function by considering
the Yukawa type interaction of the form
\begin{equation}
\Phi_I\cdot{\bar \psi}\Gamma^I\psi.
\ee
For example, the complex scalar can be associated with the superconductivity, and the neutral scalar to a magnetic order.
We will classify 16 types of interactions into a few class of scalars, vectors and two-tensors and calculate the spectral functions. With such tabulated results, one may identify the order parameter of a physical system by comparing the ARPES data with the spectral functions.
Some of the idea has been explored for scalar \cite{Faulkner:2009am} and tensors \cite{benini2011holographic,Vegh:2010fc} to discuss the spectral gap of the superconductivity. But in our paper, we will see much more variety of spectral features like flat band, pseudo gap, surface states, split cones and nodal line etc.
The most studied feature of the fermion spectral function is the gap. The authors of \cite{Edalati:2010ww,Edalati:2010ge} considered the dipole term ${\bar\psi}F_{rt}\Gamma^{rt}\psi$ to discuss the Mott gap. However, if we define the gap as a vanishing density of states over a finite width of energy around the Fermi level, the dipole term does not generate such a spectrum, because the band created by the dipole interaction approaches the Fermi level at large momentum. In \cite{Vegh:2010fc} the author reported the observation of a Fermi arc in the sense of an incomplete Fermi surface. Our Fermi arc is in the sense of a surface state in the presence of various different types of vector order.
One of the surprising aspects of our result is that the usual scalar interaction creates not a gap but vivid zero modes, which were absent without the order parameter coupling.
We find that the pseudo scalar interaction $ \Phi_{5} {\bar\psi}\Gamma^{5}\psi$ generates a gap as it was discovered in \cite{Faulkner:2009am}. We found that the parity symmetry controls the presence of the zero mode.
Another interesting aspect is that
some of the order parameters in holographic theory, especially those of tensors with radial index do not have direct symmetry breaking interpretation in the boundary theory, and this opens the possibility of 'an order without symmetry breaking'.
\section{Flat spacetime spectrum for various Yukawa interactions}
To learn the effect of the each type of interaction,
we first study the spectral functions (SF) of flat-space fermions and classify them.
The spectral functions will be delta-function sharp. This will help us by suggesting what to expect in curved space if there is a correspondence, because the AdS version will be a deformed and blurred version of the flat-space SF, with interaction effects transformed into geometric effects.
However, AdS$_{4}$ and its boundary differ in the number of independent gamma matrices; therefore there are interaction terms in the bulk which do not have analogues in the boundary fermion theory.
We now consider boundary fermion $\chi_{1}$, $\chi_{2}$ whose action is given by
\begin{eqnarray}
&S=S_{ \chi}+S_{\Phi}+S_{int}, \quad {\rm where} \\
& S_{ \chi} =\int d^{3}x\; \sum_{j=1}^{2}i\bar{ \chi}_j
\gamma^\mu\mathcal{D}_\mu \chi_j \\
& S_{\Phi}=\int d^{3}x\left( (D_{\mu}\Phi_{I})^2 -m^{2}_{\Phi}\, \Phi_{I}\Phi^{I} \right), \\
& S_{int}=p_1\int d^{3}x\left(\bar{ \chi_1}\,\Phi\cdot\gamma\, \chi_1+h.c. \right) + p_{2}\int d^{3}x\left( \bar{ \chi_1}\, \Phi\cdot\gamma\, \chi_2+h.c.\right) ,
\end{eqnarray}
where $\Phi\cdot\gamma=\gamma^{\mu_{1}\mu_{2}\cdots\mu_{I}}\Phi_{\mu_{1}\mu_{2}\cdots\mu_{I}}$ and $I$ is the number of the indices.
For the one flavor case we set $p_{1}=1, p_{2}=0$, and
for the two flavor case we set $p_1=0, p_{2}=1$.
Each two-component fermion in 2+1 dimensions has definite helicity, and the spin is locked with the momentum. Therefore with one flavor we cannot have Pauli paramagnetism.
We list the $2\times 2$ gamma matrices of 2+1 dimensions:
\begin{align}
& \gamma^t=i\sigma_{2},\quad \gamma^x=\sigma_{1}, \quad \gamma^y=\sigma_{3},
\\
& \gamma^{\mu\nu}=\frac{1}{2}\left[\gamma^\mu,\gamma^\nu\right], \quad \gamma^{tx}=\sigma_{3}, \quad \gamma^{ty}=-\sigma_{1}, \quad \gamma^{xy}=-i\sigma_{2}
\end{align}
The following identities are necessary and useful for constructing the Lagrangian:
\begin{equation}
\gamma^{\mu\dagger}=\gamma^{0}\gamma^{\mu}\gamma^{0} ,\quad
\hbox{ and }\quad
\gamma^{\mu\nu}=\epsilon_{\mu\nu\lambda}\gamma_{\lambda},
\label{2p1}
\ee
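These identities can be checked numerically. The following Python/numpy sketch (ours, for illustration only) verifies them in the representation listed above, with the signature $\eta=\mathrm{diag}(-1,1,1)$ implied by $(\gamma^{t})^{2}=-1$, the convention $\epsilon_{txy}=1$, and $\gamma_{\lambda}=\eta_{\lambda\lambda}\gamma^{\lambda}$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# 2+1 dimensional gamma matrices in the text's convention
gt, gx, gy = 1j * s2, s1, s3
gamma = {'t': gt, 'x': gx, 'y': gy}

# signature (-,+,+): (gamma^t)^2 = -1, (gamma^x)^2 = (gamma^y)^2 = +1
eta = {'t': -1.0, 'x': 1.0, 'y': 1.0}

def comm(a, b):
    """gamma^{mu nu} = [gamma^mu, gamma^nu] / 2"""
    return 0.5 * (a @ b - b @ a)

# identity 1: gamma^{mu,dagger} = gamma^0 gamma^mu gamma^0
for g in gamma.values():
    assert np.allclose(g.conj().T, gt @ g @ gt)

# identity 2: gamma^{mu nu} = eps_{mu nu lambda} gamma_lambda,
# with eps_{txy} = +1 and gamma_lambda = eta_{lambda lambda} gamma^lambda
assert np.allclose(comm(gt, gx), eta['y'] * gy)   # gamma^{tx} = +gamma_y
assert np.allclose(comm(gt, gy), -eta['x'] * gx)  # gamma^{ty} = -gamma_x
assert np.allclose(comm(gx, gy), eta['t'] * gt)   # gamma^{xy} = +gamma_t
```

All assertions pass in this representation, confirming the closure of the antisymmetric products onto the vector gamma matrices that is used below to map tensor orders onto vector orders.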
\subsection{Spectrum in flat space}
Because we did not introduce a lattice structure, there is no periodic structure in momentum space. Instead
we focus on the band structure near zero momentum. If we include only one flavor, only two bands appear in the spectrum. For zero mass, the left and right modes can be split, while they cannot be for the massive case.
For two flavors, the number of bands is simply doubled.
\subsubsection{One flavor case: $\bar{ \chi_1}\,\Phi\cdot\gamma\, \chi_1$}
\subsubsection*{Scalar : $\Phi\cdot\gamma=\Phi$}
In flat space, there is not much difference between the scalar interaction and the mass term. A gap is generated, as one can see from the equation of motion; see also figure \ref{flatspec}(a). The mass term, if it exists, violates the parity symmetry.
\subsubsection*{Vector : $\Phi\cdot\gamma=\boldmath{B_{\mu}} \gamma^{\mu}$}
Its effect is to shift the spectral cone in the $x^{\mu}$ direction. See figure \ref{flatspec}(b,c).
\subsubsection*{Antisymmetric Tensor : $\Phi\cdot\gamma=\boldmath{B_{\mu\nu}}\gamma^{\mu\nu}$}
In 2+1 dimensions,
the role of $B_{\mu\nu}$ is the same as that of $\epsilon_{\mu\nu\lambda}B^{\lambda} $ due to the second identity of eq.(\ref{2p1}). Therefore no new spectrum is generated.
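The flat-space statements above can be made concrete with a small numerical sketch (ours, not part of the holographic computation). Assuming the hermitian scalar combination $i\Phi\,{\bar\chi}\chi$ and the vector coupling $B_{\mu}{\bar\chi}\gamma^{\mu}\chi$, the plane-wave Dirac equation in the gamma convention above gives $\omega = -B_{t} \pm \sqrt{(k_x-B_x)^2+(k_y-B_y)^2+\Phi^2}$, up to sign conventions for the direction of the shift:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(kx, ky, phi=0.0, Bt=0.0, Bx=0.0, By=0.0):
    """Two bands of one 2+1d flavor with scalar (i*phi) and vector (B_mu)
    couplings.  From the plane-wave Dirac equation one gets
    (omega + B_t) chi = gamma^t [ (B_j - k_j) gamma^j + i phi ] chi,
    and gamma^t gamma^x = s3, gamma^t gamma^y = -s1, i gamma^t = -s2."""
    H = (Bx - kx) * s3 - (By - ky) * s1 - phi * s2
    return np.sort(np.linalg.eigvalsh(H).real) - Bt

# massless cone: omega = +/- |k|
assert np.allclose(bands(1.0, 0.0), [-1.0, 1.0])
# the scalar coupling opens a gap of size 2*phi at k = 0
assert np.allclose(bands(0.0, 0.0, phi=2.0), [-2.0, 2.0])
# B_t rigidly shifts the cone in omega; B_x moves its tip in k_x
assert np.allclose(bands(0.0, 0.0, Bt=1.5), [-1.5, -1.5])
assert np.allclose(bands(2.0, 0.0, Bx=2.0), [0.0, 0.0])
```

For the two-flavor couplings discussed next, these bands are simply doubled, with the vector shift acting with opposite sign on the two flavors.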
%
\begin{figure}[ht!]
\centering
\subfigure[Int. with $\Phi$]
{\includegraphics[width=3cm]{./fig/figfs/M/1flavor/kx/M_2_.jpeg}}
\subfigure[ Int.with $B_{t}$]
{\includegraphics[width=3cm]{./fig/figfs/Bt/1flavor/kx/Bt_2_.jpeg}}
\subfigure[ Int.with $B_{i}$]
{\includegraphics[width=3cm]{./fig/figfs/Bx/1flavor/kx/Bx_2_.jpeg}}
\caption{SF for one flavor. (a) The scalar interaction generates a gap. (b) $B_t$ shifts the spectrum along the $\omega$ direction; $B_t=2$. (c) $B_i$ shifts the spectral cone in the $k_{i}$ direction; $B_i=2$.
}\label{flatspec}
\end{figure}
\begin{figure}[ht!]
\centering
\subfigure[Int.with $\Phi$]
{\includegraphics[width=3cm]{./fig/figfs/M/1flavor/kx/M_2_.jpeg}}
\subfigure[Int.with $B_{t}$]
{\includegraphics[width=3cm]{./fig/figfs/B5t/2flavor/kx/B5t_2_.jpeg}}
\subfigure[Int.with $B_{i}$]
{\includegraphics[width=3cm]{./fig/figfs/B5x/2flavor/kx/B5x_2_.jpeg}}
\caption{SF for two flavors. (a) The scalar interaction generates a gap.
(b) $B_t$ shifts the spectrum along the $\omega$ direction. The configuration has rotational symmetry in $k_{x},k_{y}$ space.
(c) $B_i$ shifts the spectral cone along the $k_{i}$ direction.
Different flavors shift in opposite directions.
}
\end{figure}
Comparing Figure 1 and Figure 2, we see that the spectral doubling of the two flavor case is manifest as a doubling of the bands.
\subsubsection{Two flavor case: $\bar{ \chi_1}\,\Phi\cdot\gamma\, \chi_2+h.c$}
Here, for convenience, we consider parity-invariant combinations of interaction terms:
\begin{itemize}
\item Scalar : ${\cal L}_{int}= i\Phi({\bar\chi}_{1}\chi_{2}+{\bar\chi}_{2}\chi_{1})$ or $ \Phi({\bar\chi}_{1}\chi_{2}-{\bar\chi}_{2}\chi_{1})$ .
\item{Vector : ${\cal L}_{int}= \boldmath{B_{\mu}} ({\bar\chi}_{1}\gamma^{\mu}\chi_{2}+{\bar\chi}_{2}\gamma^{\mu}\chi_{1})$, or $\boldmath{iB_{\mu}} ({\bar\chi}_{1}\gamma^{\mu}\chi_{2}-{\bar\chi}_{2}\gamma^{\mu}\chi_{1})$ }
\item{Antisymmetric Tensor : ${\cal L}_{int}= \boldmath{B_{\mu\nu}} ({\bar\chi}_{1}\gamma^{\mu\nu}\chi_{2}+{\bar\chi}_{2}\gamma^{\mu\nu}\chi_{1})$, or $\boldmath{iB_{\mu\nu}} ({\bar\chi}_{1}\gamma^{\mu\nu}\chi_{2}-{\bar\chi}_{2}\gamma^{\mu\nu}\chi_{1})$
}
\end{itemize}
In each case, the two forms of the interaction are equivalent because
the second form is just a unitary transform of the first by $\chi_{1}\to -i\chi_{1}, \chi_{2}\to \chi_{2}$. The point is that when the order parameter field has a non-zero vacuum expectation value, the invariance of the interaction depends on how the operation acts on the fluctuating fields. Notice that when the equation of motion does not involve the $i$ in the interaction term, the flavors shift in opposite directions for the vector and anti-symmetric tensor cases, while the two flavors share the same spectrum for the scalar interaction.
The spectrum of the two flavor system is a doubling of the one flavor case.
For the scalar, a gap is generated and the spectrum is degenerate because the two flavors have identical gapped spectra.
For the vector interaction, the spectral cone of each flavor is shifted in the opposite direction.
Therefore in 2+1 dimensional flat space the anti-symmetric tensor sector
can be mapped to the vector sector,
because the role of $B_{\mu\nu}$ is that of $\epsilon_{\mu\nu\lambda} B^{\lambda}$.
However, in anti-de Sitter space, the two sectors can be different.
\section{The fermions in $AdS_{4}$ }
\subsection{ Dirac fermions in flat 2+1 space and in $AdS_{4}$ }
For the massless case, the spin-orbit coupling locks the spin direction to that of the momentum, so that for fixed momentum
only one helicity is allowed for one flavor. In AdS$_{4}$, half of the fermion components are projected out depending on the choice of the boundary terms \cite{laia2011holographic}. The spectrum of a fermion with an AdS bulk mass term is still gapless unless an interaction creates a gap, because the AdS bulk mass is a measure of the scaling dimension, not a gap. Therefore a 4-component AdS$_{4}$ fermion suffers the same problem as a 2-component massless fermion in 2+1 dimensions. For example, such a spin-momentum locked fermion system does not have Pauli paramagnetism \cite{Alexandrov:2012xe}. One way to avoid this problem is to introduce two flavors and create a gap in the spectrum by coupling with a non-zero scalar field $\Phi$, as we will show later.
A Dirac fermion in a real system is a 3+1 dimensional one, even when the system is arranged into a two dimensional array of atoms. Therefore it should be described by two flavors of two-component fermions, which correspond to two flavors of 4-component fermions in AdS. The spectrum of a massless Dirac fermion in a condensed matter system should then be described by degenerate Dirac cones. To describe the sublattice structure of graphene, we need another doubling of the flavor.
Therefore we consider only the two flavor case in the main text, and provide the spectrum of the one flavor case in the appendix for the curious reader.
Notice that in 2+1 dimensions a fermion field has two components, while in AdS$_{4}$ it has 4 components, of which only half are physical \cite{laia2011holographic}. Therefore, the degrees of freedom of the bulk match those of the boundary in the $AdS_{4}$ theory if the number of flavors on each side is the same. However, in $AdS_{5}$ we need to double the number of fields, because 4 components on the boundary correspond to 8 components in the AdS bulk.
To avoid too many cases, we will consider only $AdS_{4}$ here, and treat AdS$_{5}$ separately in the future if necessary.
The boundary action must be chosen such that it respects the parity symmetry, as we have done in eq. (\ref{bdryaction}); otherwise the flat space and curved space results do not have a correspondence, especially for the scalar order.
\subsection{ Fermion action and equation of motion}
We consider the action of a bulk fermion $\psi$ which is dual to the boundary fermion $\chi$.
Let $\Phi^{I}$ be the bulk field dual to the operator ${\bar \chi}\Gamma^I\chi$.
The question is how $\Phi^{I}$ couples to the bulk fermion $\psi$.
When $\Phi^{I}$ is a complex field, it describes a charged order like superconductivity, which has already been studied in the holographic context \cite{Hartnoll:2008vx,Gubser:2008px}.
If it is real, it describes a magnetic order like anti-ferromagnetism, or a gapped singlet order.
The main difference is the absence or presence of the coupling of the
order parameter to the vector field $A_{\mu}$, which is dual to the electric current $J^{\mu}$. We will consider both cases simultaneously and summarize them simply as ``without or with chemical potential'', $\mu=A_{t}(r)|_{r=\infty}$.
The action is given by the sum $S=S_{ \psi}+S_{bdry}+S_{\Phi}+S_{int},$ where
\begin{align}
& S_{ \psi} =\int d^{4}x\; \sum_{j=1}^{2}i\bar{ \psi}_j
\gamma^\mu\mathcal{D}_\mu \psi_j
-im({\bar\psi}_{1}\psi_{1}-{\bar\psi}_{2}\psi_{2}), \label{action}\\
& S_{ bdry}=\frac{1}{2} \int_{bdry} d^{3}x\; i({\bar\psi}_{1}\psi_{1}+{\bar\psi}_{2}\psi_{2})\label{bdryaction},\\
& S_{\Phi}=\int d^{4}x\sqrt{-g}\left(|D_{\mu}\Phi_{I}|^2 -m^{2}_{\Phi} \Phi^{*}_{I}\Phi^{I}\right), \\
& S_{int}=p_{2f}\int d^{4}x\left( \bar{ \psi_1}\, \Phi\cdot\gamma\, \psi_2+h.c\right)+ p_{1f}\int d^{4}x\left(\bar{ \psi_1}\,\Phi\cdot\gamma\, \psi_1\right) ,
\end{align}
where $\Phi\cdot\gamma=\gamma^{\mu_{1}\mu_{2}\cdots\mu_{I}}\Phi_{\mu_{1}\mu_{2}\cdots\mu_{I}}$ and it is important to remember that for scalar $\gamma\cdot\Phi=i\Phi$.
For one flavor, $p_{2f}=0$, and
for two flavors, $p_{1f}=0$.
Depending on whether $\Phi^{I}$ is real or complex, the covariant derivative $D_{\mu}=\partial_{\mu}-igA_{\mu}$ has $g=0$ or $1$, and
we use the AdS Schwarzschild or Reissner-Nordstrom metric:
\begin{align}
d s^2&=-\frac{r^2}{L^2}f(r)d t^2+\frac{L^2}{r^2f(r)}d r^2+\frac{r^2}{L^2}d x^2\nonumber
\\
f(r)&=1-\frac{{r_H}^3}{r^3}-\frac{r_H \mu^2}{r^3}+\frac{r_H^2 \mu^2}{r^4}
\end{align}
where the horizon radius is $r_H=\frac{1}{3}(2 \pi T + \sqrt{4\pi^2 T^2+3 \mu^2})$ and $\mu$ is the chemical potential.
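As a consistency check of the metric data (in the conventions above, with $L=1$), one can verify numerically that $f(r_H)=0$ and that the quoted $r_H(T,\mu)$ inverts the Hawking temperature $T=r_H^2 f'(r_H)/4\pi=(3r_H^2-\mu^2)/(4\pi r_H)$. A minimal sketch:

```python
import numpy as np

def f(r, rH, mu):
    """Blackening factor of the AdS4 Reissner-Nordstrom metric (L = 1)."""
    return 1 - rH**3 / r**3 - rH * mu**2 / r**3 + rH**2 * mu**2 / r**4

def horizon_radius(T, mu):
    """r_H as quoted in the text, obtained from f(r_H) = 0 together with
    T = r_H^2 f'(r_H) / (4 pi)."""
    return (2 * np.pi * T + np.sqrt(4 * np.pi**2 * T**2 + 3 * mu**2)) / 3

def temperature(rH, mu):
    # r_H^2 f'(r_H) / (4 pi) evaluated in closed form
    return (3 * rH**2 - mu**2) / (4 * np.pi * rH)

T, mu = 0.1, 2.0
rH = horizon_radius(T, mu)
assert abs(f(rH, rH, mu)) < 1e-10            # r_H is indeed a zero of f
assert abs(temperature(rH, mu) - T) < 1e-10  # r_H(T, mu) inverts T(r_H)
```

At $T=0$ this reduces to the extremal relation $\mu=\sqrt{3}\,r_H$, which is the normalization used in the figures below.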
Following the standard dictionary of AdS/CFT for the $p$-form bulk field $\Phi$ dual to the operator $O$ with dimension $\Delta$, its mass is related to the operator dimension by
\begin{equation}
m^{2}_{\Phi}=-(\Delta-p)(d-\Delta-p),
\ee
and asymptotic form near the boundary is
\begin{equation}
\Phi=\Phi_{0}z^{d-\Delta-p}+ \vev{O_{\Delta}}z^{\Delta-p} .
\ee
For the $AdS_{4}$, $d=3, p=2, \Delta=2[\psi]=2$, we should set
\begin{equation}
m_\Phi^2=0, \quad {\rm and }\quad B_{\mu\nu}=B_{\mu\nu}^{(-1)} z^{-1} +B^{(0)}_{\mu\nu} .
\ee
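A small helper (ours, for illustration) makes this dictionary entry explicit: given $d$, $p$ and $m^{2}_{\Phi}$, it solves $m^{2}_{\Phi}=-(\Delta-p)(d-\Delta-p)$ for the larger root of $\Delta$ and returns the two near-boundary exponents. It reproduces both the two-form case above and the scalar case $m_{\Phi}^2=-2$, $\Phi=\Phi_{0}z+\vev{O}z^2$, used later:

```python
from math import sqrt

def falloffs(d, p, m2):
    """Solve m2 = -(Delta - p)(d - Delta - p) for the larger root Delta and
    return the near-boundary exponents (d - Delta - p, Delta - p) and Delta.
    Assumes the square root is real (m2 above the BF-type bound)."""
    x = ((d - 2 * p) + sqrt((d - 2 * p) ** 2 + 4 * m2)) / 2  # x = Delta - p
    Delta = p + x
    return d - Delta - p, Delta - p, Delta

# two-form in AdS4 dual to the dimension-2 operator: B ~ z^{-1} and z^0
assert falloffs(3, 2, 0.0) == (-1.0, 0.0, 2.0)
# scalar with m2 = -2 used later: Phi = Phi_0 z + <O> z^2
assert falloffs(3, 0, -2.0) == (1.0, 2.0, 2.0)
```
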
Here we used the coordinate $z=1/r$, in which the expansion is simpler due to the homogeneity of the AdS metric; the expression in the $r$ coordinate follows from the tensorial transformation property.
Throughout this paper we use the probe solution for $\Phi$, i.e. the solution in the pure AdS background. This approximation gives qualitatively the same behavior of the fermion spectral function because, at finite temperature, the horizon of the black hole cuts out the inner region where the true solution of $\Phi$ deviates significantly from the probe solution.
Following \cite{Liu:2009dm}, we introduce $\phi_{\pm}$ by
\begin{eqnarray}
\psi_\pm = {(-gg^{rr})}^{-\frac{1}{4}}
\phi_\pm, \quad \phi_\pm =
\left(
y_\pm ,
z_\pm
\right)^T. \label{Eq:psi_pm}
\end{eqnarray}
Then the equation of motion for one flavor, with all possible terms turned on, can be written as
\begin{align}
(\partial_r+\mathbf{U_K})\phi +\mathbf{U_I}\phi=0, \quad
\phi=\left(
y_{+},
z_{+},
y_{-},
z_{-} \right)^T
\end{align}
where the matrix $\mathbf{U_K}$ comes from the kinetic terms and $\mathbf{U_I}$ from the interaction terms. If all types of interaction terms are turned on, they are given by
\begin{align}
\hskip -2cm \mathbf{U_K}=& -i \frac{\omega}{r^2f}\Gamma^{rt}+i \frac{k_{x}}{r^2\sqrt{f}}\Gamma^{rx}+i \frac{k_{y}}{r^2\sqrt{f}}\Gamma^{ry}
-i g \frac{A_{t}}{r^{2}f}\Gamma^{rt}
- \frac{m}{r\sqrt{f}}\Gamma^{r} , \qquad {\rm and} \\
\mathbf{U_I}=&- \frac{\Phi}{r\sqrt{f}}\Gamma^{r}-i \frac{\Phi_5}{r\sqrt{f}}\Gamma^{r5} +\frac{B_{xy}}{r^{3}\sqrt{f}}\Gamma^{t5} +i \frac{B_{rt}}{r\sqrt{f}}\Gamma^{t}+i \frac{B_{rx}}{r}\Gamma^{x}\nonumber\\ \nonumber
&+i \frac{B_{ry}}{r}\Gamma^{y}-\frac{B_{tx}}{r^{3}f}\Gamma^{y5} +\frac{B_{ty}}{r^{3}f}\Gamma^{x5} -i \frac{B_{x}}{r^{2}\sqrt{f}}\Gamma^{rx} \nonumber
\\&-i \frac{B_{y}}{r^{2}\sqrt{f}}\Gamma^{ry}-i \frac{B_{t}}{r^{2}f}\Gamma^{rt}-i B_{r}\mathbb{1}- \frac{B_{5x}}{r^{2}\sqrt{f}}\Gamma^{ty}+\frac{B_{5y}}{r^{2}\sqrt{f}}\Gamma^{tx} \nonumber
\\&-\frac{B_{5t}}{r^{2}f}\Gamma^{xy}-i B_{5r}\Gamma^{5},
\end{align}
where
\begin{eqnarray}
\Phi &=\frac{\Phi_{(s)}}{r}+\frac{\Phi_{(c)}}{r^2}, \quad \Phi_5&=\frac{\Phi_{5(s)}}{r}+\frac{\Phi_{5(c)}}{r^2} \nonumber\\
B_{\mu\nu}&=r B_{\mu\nu(s)}+B_{\mu\nu(c)},\quad
B_{r\mu}&=\frac{B_{r\mu(s)}}{r}+\frac{B_{r\mu(c)}}{r^2} \nonumber
\\
B_{\mu}&=B_{\mu(s)}+\frac{B_{\mu(c)}}{r}, \quad
B_{5\mu}&=B_{5\mu(s)}+\frac{B_{5\mu(c)}}{r},
\nonumber \\
B_{r}&=\frac{B_{r(s)}}{r^2}+\frac{B_{r(c)}}{r^3},\quad
B_{5r}&=\frac{B_{5r(s)}}{r^2}+\frac{B_{5r(c)}}{r^3}, \nonumber
\end{eqnarray}
where the index $i$ runs over $t,x,y$ and $f$ is the blackening factor of the metric.
For the AdS Schwarzschild case, $f=1-r_{H}^{3}/r^{3}$.
For two flavors, the equation of motion changes minimally:
\begin{align}
(\partial_r+\mathbf{U_K})\phi_{1} +\mathbf{U_I}\phi_{2}&=0, \quad \\
(\partial_r+\mathbf{U_K})\phi_{2} +\mathbf{U_I}\phi_{1}&=0, \quad
\end{align}
with the same $U_{K}$ and $U_{I}$ given above.
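To illustrate how such first-order radial systems are handled numerically, consider the free one-flavor case ($\mathbf{U_I}=0$, $m=g=k_y=0$). The components then pair up as $(y_+,z_-)$ with $y_+'=i(u-v)z_-$, $z_-'=i(u+v)y_+$, where $u=\omega/(r^2 f)$ and $v=k_x/(r^2\sqrt{f})$, so the ratio $\xi=y_+/z_-$ obeys a Riccati flow $\xi'=i(u-v)-i(u+v)\xi^2$, in the spirit of \cite{Liu:2009dm}. The sketch below (ours; it omits the infalling horizon condition and all interaction terms, so it is a consistency check of the reduced equation rather than a spectral computation) verifies that in pure AdS ($f=1$) the exact constant solution $\xi^2=(\omega-k)/(\omega+k)$ is preserved by the integrator:

```python
import numpy as np

def flow_rhs(r, xi, w, k, f):
    """Riccati flow for xi = y+/z- derived from the text's U_K with
    U_I = 0, m = 0, g = 0, k_y = 0."""
    fr = f(r)
    u = w / (r**2 * fr)
    v = k / (r**2 * np.sqrt(fr))
    return 1j * (u - v) - 1j * (u + v) * xi**2

def integrate(xi0, r0, r1, w, k, f, n=4000):
    """Fixed-step RK4 integration of the flow from r0 to r1."""
    h = (r1 - r0) / n
    r, xi = r0, xi0
    for _ in range(n):
        k1 = flow_rhs(r, xi, w, k, f)
        k2 = flow_rhs(r + h / 2, xi + h * k1 / 2, w, k, f)
        k3 = flow_rhs(r + h / 2, xi + h * k2 / 2, w, k, f)
        k4 = flow_rhs(r + h, xi + h * k3, w, k, f)
        xi = xi + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += h
    return xi

# pure AdS (f = 1): xi^2 = (w - k)/(w + k) is an exact constant solution
w, k = 3.0, 1.0
xi_star = np.sqrt((w - k) / (w + k))
xi_end = integrate(xi_star, 0.5, 50.0, w, k, lambda r: 1.0)
assert abs(xi_end - xi_star) < 1e-8
```

In a black hole background one would instead start the integration just outside the horizon with the infalling boundary condition and read off the boundary Green function from $\xi$ at large $r$; that is the procedure used for the spectral functions below.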
For clarity of the physics, we turn on just one field $\Phi_I$ at a time and calculate the corresponding spectral function.
$\Phi^{I}$ is the order parameter field that couples to the spinor bilinear in the bulk. In this paper, we treat it at the probe level in the AdS background. Although the probe solution for $\Phi_{I}$ does not respect all the requirements at the horizon, the IR region where the probe solution blows up as $\sim 1/r^{\Delta}$ is removed by the presence of the horizon. Therefore it is a good approximation unless the temperature is excessively small.
We will separately consider the case where the order parameter field
has condensation only and the case with source only, in order to understand the effect of each.
%
\subsection{Discrete symmetries in AdS$_{4}$ }
To discuss the discrete symmetry, we first list the explicit forms of the Gamma Matrices we use.
\begin{eqnarray}
&&\Gamma^t=\sigma_{1}\otimes i \sigma_{2},
\quad\Gamma^x=\sigma_{1}\otimes \sigma_{1},
\quad\; \;\Gamma^y=\sigma_{1}\otimes \sigma_{3},
\quad\Gamma^r=\sigma_{3}\otimes \mathbb{1}, \\
&&\Gamma^5=i\Gamma^{0123}=\sigma_{2}\otimes \mathbb{1} ,
\;\;\Gamma^{\mu\nu}=\frac{1}{2}\left[\Gamma^\mu, \Gamma^\nu\right],
\Gamma^{tx}=\mathbb{1} \otimes \sigma_{3},
\;\Gamma^{ty}=\mathbb{1} \otimes -\sigma_{1}, \\
&&\Gamma^{xy}=\mathbb{1} \otimes -i \sigma_{2},
\Gamma^{rt}=i\sigma_{2} \otimes i\sigma_{2},
\;\;\Gamma^{rx}=i\sigma_{2} \otimes \sigma_{1},
\;\Gamma^{ry} =i\sigma_{2} \otimes \sigma_{3},\\
&&\Gamma^{t5}=i\sigma_{3} \otimes i\sigma_{2},
\;\Gamma^{x5}=i\sigma_{3} \otimes \sigma_{1},
\;\;\Gamma^{y5}=i\sigma_{3} \otimes \sigma_{3},
\;\Gamma^{r5}=-i\sigma_{1} \otimes \mathbb{1}
\end{eqnarray}
Our convention for the tensor product is that the second factor is embedded into each component of the first factor.
Notice that the construction is based on $\Gamma^{\mu}=\sigma_{1} \otimes \gamma^{\mu}$, for $\mu=0,1,2$, and $\Gamma^{r}$ was chosen to satisfy the Clifford algebra $\{\Gamma^{\mu},\Gamma^{\nu}\}=2\eta^{\mu\nu}$. $\frac{1}{2}(1\pm\Gamma^{r})$ are projections onto the upper (lower) two components of the 4-component Dirac spinor. In AdS space, the bulk mass of a field does not
play the role of a gap. Therefore, without interaction, the fermion spectrum is basically massless, and helicity is a good quantum number. The upper two components have positive helicity while the lower two components have negative helicity. Depending on the boundary term, some of the components are projected out.
In this paper we will choose the upper two components of
the first flavor and lower two of the second flavor.
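As a sanity check (ours, illustrative), the representation above can be rebuilt with Kronecker products and tested against the Clifford algebra and the definition of $\Gamma^5$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# "second factor embedded into each component of the first" = np.kron
G = {
    't': np.kron(s1, 1j * s2),
    'x': np.kron(s1, s1),
    'y': np.kron(s1, s3),
    'r': np.kron(s3, id2),
}
G5 = np.kron(s2, id2)
eta = {'t': -1.0, 'x': 1.0, 'y': 1.0, 'r': 1.0}

# Clifford algebra {Gamma^a, Gamma^b} = 2 eta^{ab}
for a in G:
    for b in G:
        anti = G[a] @ G[b] + G[b] @ G[a]
        assert np.allclose(anti, 2 * eta[a] * (a == b) * np.eye(4))

# Gamma^5 = i Gamma^t Gamma^x Gamma^y Gamma^r, squares to 1,
# and anticommutes with every Gamma^a
assert np.allclose(G5, 1j * G['t'] @ G['x'] @ G['y'] @ G['r'])
assert np.allclose(G5 @ G5, np.eye(4))
for a in G:
    assert np.allclose(G5 @ G[a] + G[a] @ G5, np.zeros((4, 4)))
```
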
The bulk gamma matrix is $4\times 4$ and we can decompose it into irreducible representations of Lorentz group:
\begin{equation} \rm
{\bf 16}={\bf 1}(scalar)+{\bf 4}(vector)+{\bf 6}(tensor)+{\bf 4}(axial~ vector)+{\bf 1}(pseudo ~scalar), \label{class}\ee
and we will consider each type of the interaction in detail.
From the boundary point of view, we have scalar and vector interaction.
What happens to the correspondence between the bulk and the boundary?
We can reclassify the 16 AdS$_{4}$ tensors in terms of
2+1 dimensional tensors.
\begin{itemize}
\item 4 scalars: $1, \Gamma^{5}, \Gamma^{r}, \Gamma^{r5}=\sigma^{A}\otimes \mathbb{1}$ with $\sigma^{A}=( \mathbb{1}, \sigma^{2},\sigma^{3}, -i\sigma^{1})$ .
\item 3 types of vectors
$\Gamma^{\mu}= \sigma^{1}\otimes\gamma^{\mu}$,
$\Gamma^{\mu 5}=i\sigma^{3}\otimes\gamma^{\mu}$,
$\Gamma^{r\mu}=i\sigma^{2}\otimes\gamma^{\mu}$,
\item 3 tensors $\Gamma^{\mu\nu}=\epsilon^{\mu\nu\alpha}\mathbb{1}\otimes\gamma_{\alpha} $, where the index runs over 0, 1, 2.
\end{itemize}
We will see the similarities within each class.
Below, we discuss the discrete symmetries ${\cal T},{\cal P},{\cal C}$, and a chiral operation, acting on the Dirac spinors and their bilinears in our gamma matrix convention.
Note that the hermitian form of the interaction lagrangian is given by
\begin{equation} {\cal L}_{int} =
\Phi_{I}{\bar\psi}_{1}\Gamma^{I}\psi_{2} +
\Phi_{I}^{*}{\bar\psi}_{2}\Gamma^{I}\psi_{1}
\ee
for all $\Gamma^{I}=i\mathbb{1},\Gamma^{\mu},\Gamma^{5\mu},\Gamma^{\mu\nu}$ with $\mu,\nu=t,x,y,r$.
\begin{itemize}
\item
The time reversal operation is given by $ {\cal T}=T K$, where $K$ is complex conjugation and $T$ is a unitary matrix. From the invariance of the Dirac equation, we have $T\Gamma^{0*}T^{-1}= -\Gamma^{0}$ and $T\Gamma^{i*}T^{-1}= +\Gamma^{i}$. Since the $\Gamma^{\mu}$ ($\mu=t,x,y,r$) are all real in our gamma matrix convention, we should have $T=\Gamma^{1}\Gamma^{2}\Gamma^{3}$.
Under the $\psi(t)\to\psi'(t') ={\cal T}\psi(t)=T\psi^{*}(-t)$,
\begin{equation}
{\bar\psi}_{1}\Gamma^{I}\psi_{2} \to {\bar\psi}_{2}\Gamma^{5}\Gamma^{I\dagger}\Gamma^{5}\psi_{1} .
\ee
Therefore the invariant Hermitian bilinears correspond to the following 8 matrices:
\begin{equation}
\Gamma^{I}= \Gamma^{5}, \Gamma^{5r}, \Gamma^{t}, \Gamma^{5i}, \Gamma^{ti}, \Gamma^{tr}.
\ee
On the other hand, the other half,
\begin{equation}
\Gamma^{I}=i\mathbb{1}, \Gamma^{r}, \Gamma^{5t}, \Gamma^{i}, \Gamma^{ri}, \Gamma^{xy},
\ee
change sign under the time reversal operation.
\item The parity symmetry $(t,x,y,z)\to (t,-x,-y,-z)$ with $z=1/r$. For this, one should imagine that two AdS spaces with $z>0$ and $z<0$ are patched together along the hyperplane at $z=0$. Notice that vierbeins are even function of $z$ because $e^{\mu}_{a}=\delta^{\mu}_{a}\sqrt{g^{\mu\mu}}$ and the horizon of the mirror geometry is located at $- z_{H}$. The operation $P: \psi(t,x,y,r)\to \Gamma^{0}\psi(t,-x,-y,-z)$ realizes the symmetry, under which a fermion bilinear transforms
\begin{equation}
{\bar\psi}_{1}\Gamma^{I}\psi_{2} \to - {\bar\psi}_{1}\Gamma^{0}\Gamma^{I} \Gamma^{0}\psi_{2}.
\ee
Then the invariant Hermitian quadratic forms correspond to the following 8 gamma matrices:
\begin{equation}
\Gamma^{I}=i\mathbb{1}, \Gamma^{5r}, \Gamma^{t}, \Gamma^{5i}, \Gamma^{ri}, \Gamma^{xy}.\ee
On the other hand, the other half,
\begin{equation}
\Gamma^{I}= \Gamma^{5}, \Gamma^{r}, \Gamma^{5t}, \Gamma^{i}, \Gamma^{ti}, \Gamma^{tr},
\ee
change sign under the parity operation. Later we will see that fermions with parity invariant interactions
will have zero modes, which would be interpreted as surface modes if there were an edge at the boundary of the AdS.
\item The charge conjugation in our Gamma matrix convention is given by ${\cal C}=CK$ with $C=\mathbb{1}$. This is due to the reality of the $\Gamma^{a}$ with $a=t,x,y,r,5$. Under this symmetry,
\begin{equation}
{\bar\psi}_{1}\Gamma^{I}\psi_{2} \to {\bar\psi}_{2} \Gamma^{0}\Gamma^{I\dagger}\Gamma^{0}\psi_{1} = {\bar\psi}_{2}\Gamma^{I}\psi_{1}. \ee
Therefore the bilinear term is invariant if the interaction is invariant under the $ 1\leftrightarrow 2$ and the order parameter is real.
\item Next, we define the chiral symmetry, for which
we combine the time reversal and the sublattice symmetry ${\cal S}: 1\leftrightarrow 2$,
\begin{equation}
\psi_{1}(t,x,y,r)\to \Gamma^{0}\psi^{*}_{2}(-t,x,y,r).
\ee
It can be realized
by ${\cal X}=\Gamma^{0}K{\cal S}$, so that
\begin{equation}
{\bar\psi}_{1}\Gamma^{I}\psi_{2} \to - {\bar\psi}_{1}\Gamma^{I\dagger}\psi_{2}.
\ee
Therefore the quadratic forms corresponding to the following 8 hermitian gamma matrices change sign,
\begin{equation}
\Gamma^{I}= \Gamma^{5}, \Gamma^{r}, \Gamma^{5t}, \Gamma^{i}, \Gamma^{ti}, \Gamma^{tr} ,
\ee
while the other half,
\begin{equation}
\Gamma^{I}=i\mathbb{1}, \Gamma^{5r}, \Gamma^{t}, \Gamma^{5i}, \Gamma^{ri}, \Gamma^{xy}, \ee
which are anti-hermitian matrices,
do not change sign under this symmetry operation.
Then the kinetic term effectively reverses its sign while the mass term is invariant, and
the equation of motion, hence {\it the spectrum, is invariant as long as the order parameter is real and $\Gamma^{I}$ is hermitian}.
Notice that the set of spectral symmetries of ${\cal X}$ is precisely the complement of that of the parity symmetry.
Notice also that $\cal X$ could be a symmetry of the system because the bulk mass terms were chosen as
$-im({\bar \psi}_{1}\psi_{1}-{\bar \psi}_{2}\psi_{2} ) $ instead of
$-im({\bar \psi}_{1}\psi_{1}+{\bar \psi}_{2}\psi_{2} ) $, which explains our choice of opposite signs in the mass terms.
However, such a change of the mass term is a unitary operation and cannot change the spectrum. On the other hand,
the parity is a symmetry regardless of such sign choices.
As we will see, the spectrum of our theory follows $\cal P$ while its dual follows $\cal X$.
\end{itemize}
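The parity classification above can be cross-checked numerically. In the sketch below (ours), each $\Gamma^{ab}$ is built as $\frac{1}{2}[\Gamma^a,\Gamma^b]$, so some entries differ from the text's $\Gamma^{5\mu}$, $\Gamma^{5r}$ by an overall sign, which does not affect the $\pm$ classification of the bilinear under $\Gamma^{I}\to-\Gamma^{0}\Gamma^{I}\Gamma^{0}$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
kron = np.kron

Gt, Gx, Gy, Gr = kron(s1, 1j * s2), kron(s1, s1), kron(s1, s3), kron(s3, id2)
G5 = kron(s2, id2)

def c(a, b):  # Gamma^{ab} = [Gamma^a, Gamma^b] / 2
    return (a @ b - b @ a) / 2

basis = {
    'i1': 1j * np.eye(4), 'G5': G5, 'Gr': Gr, 'Gr5': c(Gr, G5),
    'Gt': Gt, 'Gx': Gx, 'Gy': Gy,
    'G5t': c(G5, Gt), 'G5x': c(G5, Gx), 'G5y': c(G5, Gy),
    'Gtx': c(Gt, Gx), 'Gty': c(Gt, Gy), 'Gxy': c(Gx, Gy),
    'Grt': c(Gr, Gt), 'Grx': c(Gr, Gx), 'Gry': c(Gr, Gy),
}

def parity_sign(M):
    """Bilinear transforms as psibar G psi -> -psibar G^0 G G^0 psi."""
    PM = -Gt @ M @ Gt
    if np.allclose(PM, M):
        return +1
    assert np.allclose(PM, -M)
    return -1

even = {name for name, M in basis.items() if parity_sign(M) == +1}
# the 8 parity-invariant bilinears listed in the text
assert even == {'i1', 'Gr5', 'Gt', 'G5x', 'G5y', 'Grx', 'Gry', 'Gxy'}
```

The same loop with the time-reversal or chiral transformation reproduces the other tables above.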
\section{Classifying the spectrum by the order parameter for two flavors}
We classify the spectrum into scalar, vector, and tensor along the lines discussed above. In all the figures below, the horizontal axis is $k_{x}$ and the vertical axis is $\omega$, except for the fixed-$\omega$ slices, where the vertical axis is $k_{y}$.
\subsection{Summary of spectral features }
Here, we classify, summarize and tabulate some essential spectral features.
\begin{description}
\item[Spectral Classification] Following the discussion below eq.~(\ref{class}), we classify the spectrum according to the 2+1 Lorentz tensors.
\\$\bullet $ There are 4 scalars: $\mathbb{1}, \Gamma^{5},\Gamma^{r},\Gamma^{5r}$. The first two were described above. For gauge invariant fields $B_{\mu}$, we should set $B_{r}=B_{5r}=0$. In fact, even in the non-gauge-invariant case, the last two are identical to zero Yukawa coupling in our gamma matrix representation.
For scalar interactions, the roles of source and condensation are qualitatively the same.
\\
$\bullet $ There are three classes of vectors:
$B_{\mu}, B_{r\mu}$ and $B_{5\mu}$.
The source creates split cones and the condensation creates just an asymmetry.
The first two are invariant under the parity symmetry, showing zero-mode related features like the Fermi-arc and surface states (ribbon band).
\\
$\bullet $ There are 3 rank-2 tensor terms: $\Gamma^{xy}, \Gamma^{tx},\Gamma^{ty}$. The first one is parity invariant and has zero modes.
\item [Gap vs zero modes with scalar order] Out of the 16 interaction types, only the parity symmetry breaking scalar interaction (with $\Gamma=\Gamma^{5}$)
creates a gap without ambiguity. Both the source and the condensation create gaps. \\
On the other hand, the parity invariant scalar with $\Gamma=i\mathbb{1}$ has a zero-mode Dirac cone in the spectrum, which is much sharper than in the non-interacting case due to the transfer of spectral weight to the zero mode by the interaction. The genuine physical system with a full gap will be described by this coupling.
\item[Pseudo gap]
When the interaction is not parity invariant,
the spectrum has a pseudo gap, apart from $\Gamma^{5}$, which produces a real gap. The seven interactions corresponding to
$$
\Gamma^{I}= \Gamma^{r}, \Gamma^{5t}, \Gamma^{i}, \Gamma^{ti}, \Gamma^{tr}
$$
have pseudo gaps. Therefore the pseudo gap is a typical phenomenon rather than an exception for a general interaction in this theory, while the true gap is a rare phenomenon.
\item [Fermi-arc] For the vectors $B_{\mu}$, $B_{5\mu}$ and $B_{r\mu}$, the role of the source term is to generate split Dirac cones along the $k_{\mu}$ direction, $\mu=0,1,2$, while that of the condensation is to generate an anisotropy. This is just like the flat space case.
However, there is a very interesting phenomenon when the interaction term is invariant under parity: for $B^{5}_{i},B_{ri}$, $i=x,y$,
there exists a spectral line connecting the tips of the two Dirac cones in the $\omega=0$ plane. This resembles
the ``Fermi-arc'' in the study of Dirac or Weyl semi-metals.
In the plane $\omega=\epsilon \neq 0$ there are also spectral line(s) connecting the surfaces of the cones. This is precisely the same as the surface modes of topological materials \cite{Armitage:2017cjs}.
If we collect all the slices at different $\epsilon$, the surface modes form a ribbon shaped band. See Figure \ref{Ribbon}.
One should notice that we did not introduce an edge of the 2+1 dimensional system. In our case, the ``surface states'' are properties of the bulk zero modes, which do not depend on the presence of the edge. Perhaps this is a general phenomenon
and is responsible for the bulk-edge correspondence.
\footnote{This is analogous to Faraday's law
$\oint_{C} {\bf dl}\cdot{\bf E} =-\frac{d}{dt}\int_{S} {\bf B}\cdot d{\bf S} $,
whose left hand side is non-zero regardless of the presence of a real circuit along the curve, if there is a time dependent magnetic flux.}
\begin{figure}[ht!]
\centering
{\includegraphics[width=6cm]{./Ribbon.pdf}}
\caption{Split Dirac cones, Fermi-arc and surface modes. Our analysis indicates that the so-called ``surface states'' are bulk zero modes which exist regardless of the presence of an edge of the physical system. The figure is taken from \cite{Armitage:2017cjs}.
}
\label{Ribbon}
\end{figure}
\item [Flat band] The $B_{xy}$ interaction introduces a flat band, which is a disk-like isolated band at the Fermi level $\omega=0$. If a chemical potential is applied, the disk bends like a bowl and the Fermi level shifts.
\item [Zero mode and parity symmetry]
In the presence of the background field $B_{I}$ with coupling
$B_{I}{\bar\psi}\Gamma^{I}\psi$,
the spectrum shows zero modes if the quadratic form is parity invariant.
\item[Duality] If we change the boundary term to
$ S_{ bdry}=\frac{1}{2} \int_{bdry} d^{3}x\; i({\bar\psi}_{1}\psi_{1}-{\bar\psi}_{2}\psi_{2}),$
then the spectra of dual pairs are exchanged.
By a dual pair, we mean one of the following pairs:
$$(\Phi, \Phi_{5}), (B_{\mu},B_{5\mu}), (B_{\mu\nu}, \epsilon_{\mu\nu\alpha\beta}B^{\alpha\beta}), $$
with indices running over $t,x,y,r$.
We found that, in this case, the presence of the zero modes is protected by the chiral symmetry $\cal X$ we defined earlier.
\item[Order without rotational symmetry breaking]
The presence of a non-vanishing order parameter field means the breaking of some rotational or Lorentz symmetry from the bulk point of view. However, from the boundary point of view, some order parameters involving the $r$-index, like $B_{rt}$, do not have an obvious symmetry breaking interpretation, and therefore they can be interpreted as `orders without symmetry breaking'.
\end{description}
\begin{table}[hbt!]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Order p./ Fig.\#} & Gap & zero mode & spectral feature & possible dual system \\
\hline \hline
\multirow{2}{*}{$\Phi_{5}$} & s/4(a) & $\ocircle$ & \multirow{2}{*}{$\times$} & \multirow{2}{*}{ gap } & { RS(real $\Phi$)} \\ \cline{2-3}
& c/4(b) & $\ocircle$ & & & SC(complex $\Phi$) \\
\hline
\multirow{2}{*}{$i\Phi$} & s/4(c) & $\times$ & \multirow{2}{*}{$\ocircle$} & \multirow{2}{*}{ Dirac cone } & \multirow{2}{*}{ Majorana Fermion in SC} \\
\cline{2-3}
& c/4(d) & $\times$ & & &\\
\hline
\multirow{2}{*}{$B^{5}_{r}$} & s/4e & $\times$ & \multirow{2}{*}{$\ocircle$} & \multirow{2}{*}{Non-coupling} & \multirow{2}{*}{NA } \\ \cline{2-3}
& c/4e & $\times$ & & &\\
\hline
\multirow{2}{*}{$B_{r}$} & s/4e & $\times$ & \multirow{2}{*}{$\times$} & \multirow{2}{*}{Non-coupling} & \multirow{2}{*}{NA}\\ \cline{2-3}
& c/4e & $\times$ & & &\\
\hline\hline
\multirow{2}{*}{$B^{5}_{i}$} & s/6abc & $\times$ & \multirow{2}{*}{$\ocircle$} & {Split cones } & \multirow{2}{*}{ Top. semi-metal} \\
\cline{2-3}
& c/6def & $\times$ & & Fermi arc &\\
\hline
\multirow{2}{*}{$B_{i}$} & s/6ghi & $\times$ & \multirow{2}{*}{$\times$} & {Split cones} & \multirow{2}{*}{ NA}\\ \cline{2-3}
& c/6jkl & $\times$ & & pseudo gap &\\
\hline
\multirow{2}{*}{$B^{5}_{t}$} & s/7abc & $\times$ & \multirow{2}{*}{$\times$} & \multirow{2}{*}{Rot. Sym } & \multirow{2}{*}{NA }\\ \cline{2-3}
& c/7ghi & $\times$ & & &\\
\hline
\multirow{2}{*}{$B_{t}$} & s/7def & $\times$ & \multirow{2}{*}{$\ocircle$} & \multirow{2}{*}{Nodal line} & \multirow{2}{*}{Top. semi-metal }\\ \cline{2-3}
& c/7jkl & $\times$ & & &\\
\hline
\multirow{2}{*}{$B_{rt}$} & s/8c & $\times$ & \multirow{2}{*}{$\times$} & \multirow{2}{*}{Marginal gap} & \multirow{2}{*}{NA}\\ \cline{2-3}
& c/8d & $\times$ & & &\\
\hline
\multirow{2}{*}{$B_{ri}$} & s/9ghi & $\times$ & \multirow{2}{*}{$\ocircle$} & {Split cones} & \multirow{2}{*}{Top. Ins.}\\ \cline{2-3}
& c/9jkl & $\times$ & & Fermi-arc &\\
\hline\hline
\multirow{2}{*}{$B_{xy}$} & s/8a & $\triangle$ & \multirow{2}{*}{$\ocircle$} & \multirow{2}{*}{Disk flat band } & {twisted bi-layer graphene }\\ \cline{2-3}
& c/8b & $\triangle$ & & &Kondo lattice\\
\hline
\multirow{2}{*}{$B_{ti}$} & s/9abc & $\times$ & \multirow{2}{*}{$\times$} & \multirow{2}{*}{Split cones, Fermi-arc} & \multirow{2}{*}{Top. Ins.}\\ \cline{2-3}
& c/9def & $\times$ & & &\\
\hline
\end{tabular}
\label{tab:class2}
\caption{In the ``Gap'' column, $\ocircle$ denotes a gap at the Fermi level, $\triangle$ represents a gap off the Fermi level, and $\times$ means gapless. SC=superconductivity, RS=random singlet.
$A(k_x,\omega)$ means we consider the spectral function $A$ as a function of $k_x$ and $\omega$. Under $k_{x}\leftrightarrow k_{y}$, the orders with one spatial index are asymmetric; all others are symmetric.
NA=not available.
}
\end{table}
Table 1 summarizes all the features we found.
We attribute the presence of the zero modes to the protection by parity invariance. The zero mode is of course the key to the surface states. The presence of the zero mode results in a bright crossing of the Dirac cone with the Fermi level. This means that the zero modes create a sharp Fermi surface, which was originally fuzzy due to the strong interaction at the boundary. This is one of the most interesting observations made in this paper: a parity invariant interaction can make a strongly interacting system
Fermi-liquid like. Below, in figure \ref{fig:kondo}, we compare the spectrum with coupling $B_{xy}{\bar \psi}\Gamma^{xy}\psi$
in the presence of a chemical potential with that of the heavy fermion in a Kondo lattice.
A more explicit comparison with experimental data is left as a future project.
\begin{figure}[ht!]
\centering
\subfigure[ $B_{xy,c}=0$]
{\includegraphics[width=3.5cm]{./fig/Bxy_mu_2_c_0__kx-w_.jpg}}
\subfigure[ $B_{xy,c}=5$]
{\includegraphics[width=3.5cm]{./fig/Bxy_mu_2_c_5__kx-w_.jpg}}
\subfigure[ $B_{xy,c}=10$]
{\includegraphics[width=3.5cm]{./fig/Bxy_mu_2_c_10__kx-w_.jpg}}
\subfigure[ Kondo lattice]
{\includegraphics[width=4.5cm]{./fig/kondo2.png}}
\caption{(a-c) Formation process of the bent flat band as we change the strength of the coupling. From left to right, $B_{xy,c}=0,5,10$. The chemical potential is fixed to $\mu=2\sqrt3$. (d) Formation of a flat band by hybridization of a localized state and a conducting state.
}
\label{fig:kondo}
\end{figure}
\subsection{Spectral function (SF) with scalar interaction}
\subsubsection{Parity symmetry breaking case:
${\cal L}_{int}= \Phi_{5}({\bar\psi}_{1}\Gamma^{5}\psi_{2}+{\bar\psi}_{2}\Gamma^{5}\psi_{1})$ }
We begin with the simplest case, where the order parameter is a scalar field. We choose $m_{\Phi}^2=-2$ in $AdS_4$ for simplicity. Then \cite{erlich2005qcd,oh2019holographic}
\begin{equation}
\Phi_{5}=M_{05} z + M_{5} z^2,
\end{equation}
in the probe limit. We consider the source-only and condensation-only cases separately.
\paragraph{Scalar source: $M_{05}$}
The scalar source is usually interpreted as a mass of the boundary fermion. Indeed, our result in Figure \ref{fig:scalar}(a), where we draw the spectral function (SF) in the presence of a scalar with the source term only, fulfills this expectation.
\begin{figure}[ht!]
\centering
\subfigure[ $\Phi_{5}$, s]
{\includegraphics[width=3cm]{./fig/2f/fig/M5_s_4__kx-w_.jpg}}
\subfigure[ $\Phi_{5}$, c]
{\includegraphics[width=3cm]{./fig/2f/fig/M5_c_4__kx-w_.jpg}}
\subfigure[$\Phi$, s]
{\includegraphics[width=3cm]{./fig/2f/fig/M_s_4__kx-w_.jpg}}
\subfigure[ $\Phi$, c]
{\includegraphics[width=3cm]{./fig/2f/fig/M_c_4__kx-w_.jpg}}
\subfigure[ $B_{r},B_{5r}$, s, c]
{\includegraphics[width=3cm]{./fig/2f/fig/Br_s_4__kx-w_.jpg}}
\caption{Spectral function (SF) (a,b) with the parity
breaking scalar: (a) source only, gap $\Delta\sim M_{05}$; (b) condensation only, $\Delta\sim \sqrt{M_{5}}$.
(c,d) With the parity invariant scalar; notice the zero modes: (c) source only, (d) condensation only. (e) $B_{r},B_{rt}$ shows the spectrum of zero coupling due to the gamma matrix structure. }
\label{fig:scalar}
\end{figure}
\paragraph{Scalar condensation: $M_{5}$}
This case describes spontaneous scalar condensation. For complex $\Phi$ with nonzero $M$ it describes Cooper pair condensation, while for a real field it may describe a chiral condensation or a random singlet condensation, in which lattice spins pair up into singlets (dimers) in random directions so that there is no net magnetic ordering. In fact, in lattice models with antiferromagnetic coupling, the ground state is antiferromagnetically ordered if frustration and randomness are small enough.
On the other hand, the system is in a random singlet (RS) state \cite{bhatt1982scaling,paalanen1988thermodynamic,PhysRevLett.48.344, guo1994quantum} if there is randomness, i.e.\ a distribution of next-nearest site couplings. Whether an RS-like state has a gap or not depends on the details of the lattice symmetry as well as on the size of the randomness \cite{PhysRevX.8.031028,PhysRevB.98.054422,Uematsu_2018,Liu_2018,PhysRevLett.123.087201,Kawamura_2019,Im}.
Our philosophy is to bypass all such details and characterize the system only by a few order parameters, assuming this is possible at least near the critical points.
From our calculation, an RS state with a gap is described by a scalar order.
Notice that the dipole-type interaction $F_{rt}{\bar\psi}\gamma^{rt}\psi$, which was used to study Mott physics \cite{Edalati:2010ww,Edalati:2010ge,Seo:2018hrc}, does not generate a true gap: although its spectral function has gap-like features in the small-momentum region, its density of states does not really have a gap, because the spectral function shows a band that approaches the Fermi level at large momentum.
\paragraph{Spectrum in the potential picture}
A more characteristic feature is the appearance of Kaluza-Klein (KK) modes in Figure \ref{fig:scalar}, which is due to the effective Schr\"odinger potential generated by the condensation part,
$V\sim \Phi_{5}^{2}/z^{2}$, which grows like $M_{5}^{2}z^{2}$ for large $z$
\cite{oh2019holographic}.
Comparing the effect of the scalar condensation with that of the scalar source, the gap generated by the condensation is smaller than that generated by the source, as shown in Figure \ref{fig:scalar}(a,b).
In the presence of a chemical potential or temperature, the effect of the $z^2$ term is suppressed: both $T$ and $\mu$ increase the horizon size $r_{0}$, and the region `inside' the black hole, $z>z_{H} =1/r_{0}$, is cut out. The rising $z^2$ potential then disappears, and the potential near the horizon collapses to $-\infty$, since near the horizon
\begin{align}
V_{eff}(z_H)\sim - \frac{4+w^2 {z_{H}}^2}{16(z-z_H)^2}.
\end{align}
Furthermore, the solution should satisfy the infalling boundary condition, so that
instead of infinitely many cleanly quantized eigenvalues (KK modes), only finitely many imaginary eigenvalues, due to tunneling into the horizon, appear.
See Figure \ref{fig:esp}.
This explains the fuzziness and disappearance of the KK modes in Figure \ref{fig:scalar}(d) in the presence of the chemical potential.
\begin{figure}[ht!]
\centering
{\includegraphics[width=5cm]{./fig/potentialtemp.pdf}}
\caption{Shape of the potential near the horizon. Dashed lines are the event horizons at a few temperatures. As $T$ increases, the horizon moves out in the $z$-coordinate.} \label{fig:esp}
\end{figure}
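The claim that a quadratically rising potential produces an evenly spaced tower of bound states (the KK modes) can be checked with a standard finite-difference Schr\"odinger solver. The sketch below is illustrative only: the pure $V=M^{2}z^{2}$ profile and the value of $M$ are simplifying assumptions standing in for the full holographic effective potential.

```python
import numpy as np

# Illustrative sketch: bound states of -psi'' + V(z) psi = E psi with the
# large-z confining profile V = M^2 z^2 (a simplifying assumption, not the
# full effective potential).  Evenly spaced levels = a KK-like tower.
M = 4.0                        # condensation scale, arbitrary illustrative value
N, zmax = 1500, 10.0
z = np.linspace(zmax / N, zmax, N)
h = z[1] - z[0]

V = M**2 * z**2
# tridiagonal finite-difference Hamiltonian, Dirichlet walls at z=0 and z=zmax
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(N - 1) / h**2, 1)
     - np.diag(np.ones(N - 1) / h**2, -1))
E = np.sort(np.linalg.eigvalsh(H))[:5]
gaps = np.diff(E)
print("levels:", E)
print("spacings:", gaps)       # nearly constant: evenly spaced KK modes
```

With a Dirichlet wall at $z=0$ these are the odd levels of a harmonic oscillator, so the spacing is constant, consistent with the evenly spaced KK modes visible in the spectral functions.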
For the vector and tensor cases, there can be a pole between the horizon and the boundary.
We emphasize that this case is not related to rotational symmetry breaking, and $Z_{2}$ symmetry is not encoded in this model either.
One natural candidate is therefore a gapped spin liquid \cite{Fu_2015,Oh:2018wfn}.
This case may also be useful to describe the coupling between a localized (lattice) spin network and itinerant electrons, namely Kondo physics.
\subsubsection{Parity preserving scalar interaction:
${\cal L}_{int}=i \Phi({\bar\psi}_{1} \psi_{2}+{\bar\psi}_{2} \psi_{1})$ }
For consistency with the scalar model, we choose $m_{\Phi}^2=-2$, so that we still have $\Phi=M_{0} z + M z^2$. The spectrum is gapped for both source and condensation.
For the latter, the zero chemical potential case shows sharp KK modes; compared with the scalar case, the spectrum is sharper.
See Figure \ref{fig:scalar}(c,d)
for the spectral functions with pseudoscalar source and condensation, respectively. The system is similar to the scalar case but with parity symmetry broken. The most famous example is pion condensation in nuclear physics.
\subsection{Vectors}
From the 2+1 dimensional boundary point of view there are
three classes of vectors:
$B_{\mu}$, $B_{r\mu}$ and $B_{5\mu}$.
In each class,
the source shifts the two degenerate Dirac cones, one in the negative and the other in the positive $k_{\mu}$ direction.
The last two classes are invariant under parity, showing zero-mode related features like Fermi arcs and surface states (Ribbon bands). See Figures \ref{fig:Bx} and \ref{fig:Bx2_slice}.
\subsubsection{Polar vector: ${\cal L}_{int}= \boldmath{iB_{\mu}} ({\bar\psi}_{1}\Gamma^{\mu}\psi_{2}-{\bar\psi}_{2}\Gamma^{\mu}\psi_{1})$ }
$B_\mu$ is the extension of the source field that couples to the boundary fermion current ${\bar \chi }\Gamma^\mu\chi$.
We can set the masses of the vector and axial-vector order parameter fields to zero. Then
$B_{i}=B^{(0)}_{i}+B^{(1)}_{i} z$.
Notice that there is an asymmetry between the $x$ and $y$ directions:
when $B_{x} \rightarrow B_{y}$, it has to be accompanied by $k_x\to k_y$ at the same time.
\paragraph{Source} The SF for the $B_{x}$ coupling with source only is just a superposition of two zero-coupling SFs shifted along $k_{x}$. Different flavors shift in opposite directions.
\paragraph{Condensation}
Anisotropy, instead of cone splitting, is created.
See Figures \ref{fig:Bx2_slice} and \ref{fig:Bx}.
\subsubsection{Pseudo vector: ${\cal L}_{int}= \boldmath{iB_{5\mu}} ({\bar\psi}_{1}\Gamma^{5\mu}\psi_{2}-{\bar\psi}_{2}\Gamma^{5\mu}\psi_{1})$ }
Pseudo vectors mostly follow the pattern of polar vectors. One important point to emphasize is the line-shaped zero mode in the $\omega=0$ plane. At the higher slice $\omega=2$, there are also lines connecting the two circles representing the two shifted Dirac cones; see Figure \ref{fig:Bx2_slice}(b,c). The full three-dimensional picture is therefore like Figure \ref{Ribbon} with the surface modes doubled, which we call Ribbon bands.
\subsubsection{Radial vector: ${\cal L}_{int}= \boldmath{iB_{r\mu}} ({\bar\psi}_{1}\Gamma^{r\mu}\psi_{2}-{\bar\psi}_{2}\Gamma^{r\mu}\psi_{1})$ }
Radial vectors follow the pattern of polar vectors, including the zero modes. See Figures \ref{fig:Bx} and \ref{fig:Bx2_slice}.
\begin{figure}[ht!]
\centering
\subfigure[$B_{x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bx_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bx_s_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$, $B_{x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bx_s_4__kx-ky-_2__.jpg}}
\subfigure[$B_{x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bx_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bx_c_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$, $B_{x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bx_c_4__kx-ky-_2__.jpg}}\\
\subfigure[$B_{5x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5x_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{5x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5x_s_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$, $B_{5x}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5x_s_4__kx-ky-_2__.jpg}}
\subfigure[$B_{5x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5x_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{5x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5x_c_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$, $B_{5x}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5x_c_4__kx-ky-_2__.jpg}}
\caption{(a,d,g,j) Spectral functions of $B_{x}$ and $B_{5x}$,
with additional sliced views in the $k_{x}$-$k_{y}$ plane at $\omega=0,2$. Notice that the source splits the degenerate Dirac cones. $B_{x}$ has zero modes but $B_{5x}$ does not. In all figures, we used $B_{*} =4 $.
} \label{fig:Bx2_slice}
\end{figure}
\begin{figure}[ht!]
\centering
\subfigure[s, $B_{5t(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5t_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{5t} s$]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5t_s_4__kx-ky-_0__.jpg}}
\subfigure[s, $B_{5y(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5y_s_4__kx-w_.jpg}}
\subfigure[s, $B_{t(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bt_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{t} s$]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bt_s_4__kx-ky-_0__.jpg}}
\subfigure[s, $B_{y(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/By_s_4__kx-w_.jpg}}
\\
\subfigure[c, $B_{5t(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5t_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{5t} c$]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/B5t_c_4__kx-ky-_0__.jpg}}
\subfigure[c, $B_{5y(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/B5y_c_4__kx-w_.jpg}}
\subfigure[c, $B_{t(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bt_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$, $B_{t} c$]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bt_c_4__kx-ky-_0__.jpg}}
\subfigure[c, $B_{y(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/By_c_4__kx-w_.jpg}}
\caption{Spectral functions with (pseudo) vector source interactions (a-f), and with (pseudo) vector
condensation (g-l). Here `s' denotes source and `c' condensation.
} \label{fig:Bx}
\end{figure}
\newpage
\subsection{Antisymmetric 2-tensor}
The six antisymmetric rank-2 tensors can be decomposed into
the three $B_{\mu r}$, $\mu=0,1,2$, and the rest, $B_{tx}, B_{ty},B_{xy}$. The former were already described above.
Notice the manifest zero-mode disk in $B_{tr}$ in Figure \ref{fig:Bdd2_s}. There is rotational symmetry in $B_{tr}$ and $B_{xy}$, but not in $B_{xr}$.
The spectra of $B_{ty}$ and $B_{xr}$ are ambiguous without
the views in the $k_{x}$-$k_{y}$ plane at various $\omega$ slices, which we provide in Figure \ref{fig:Bx22_slice}. Both have split cones and zero modes, and both have Ribbon bands connecting the two cones.
\begin{figure}[ht!]
\centering
\subfigure[$B_{xy(-1)}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bxy_s_4__kx-w_.jpg}}
\subfigure[$B_{xy(-1)}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bxy_s_4__kx-ky-_0__.jpg}}
\subfigure[$B_{xy(0)}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bxy_c_4__kx-w_.jpg}}
\subfigure[ $B_{tr(-1)}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Btr_s_4__kx-w_.jpg}}
\subfigure[$B_{tr(0)}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Btr_s_4__kx-ky-_0__.jpg}}
\subfigure[$B_{tr(0)}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Btr_c_4__kx-w_.jpg}}
\caption{Spectral functions for various types of tensor interaction, decomposed into the 2+1 radial vectors $B_{\mu r}$ and the 2-tensor $B_{xy}$. Notice the zero-mode disk in $B_{xy}$. There is rotational symmetry in $B_{tr}$ and $B_{xy}$. }
\label{fig:Bdd2_s}
\end{figure}
\begin{figure}[ht!]
\centering
\subfigure[$B_{ty(0)}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bty_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$ $B_{ty}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bty_s_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$ $B_{ty}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bty_s_4__kx-ky-_3__.jpg}}
\subfigure[c, $B_{ty(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bty_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$ $B_{ty}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bty_c_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$ $B_{ty}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bty_c_4__kx-ky-_3__.jpg}}\\
\subfigure[$B_{rx(0)}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bxr_s_4__kx-w_.jpg}}
\subfigure[$\omega=0$ $B_{rx}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bxr_s_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$ $B_{rx}$ s]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bxr_s_4__kx-ky-_3__.jpg}}
\subfigure[c, $B_{rx(0)}$]
{\includegraphics[width=2.5cm]{./fig/2f/fig/Bxr_c_4__kx-w_.jpg}}
\subfigure[$\omega=0$ $B_{rx}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bxr_c_4__kx-ky-_0__.jpg}}
\subfigure[$\omega=2$ $B_{rx}$ c]
{\includegraphics[width=2.5cm]{./fig/2f/fig0/Bxr_c_4__kx-ky-_3__.jpg}}
\caption{Spectra of $B_{rx}$ and $B_{ty}$ with sliced views in the $k_{x}$-$k_{y}$ plane at $\omega=0,2$, without which these spectra are ambiguous. Notice the zero modes and the Ribbons connecting the two split cones.
} \label{fig:Bx22_slice}
\end{figure}
\section{Conclusion}
We classified the Yukawa type interactions according to the Lorentz symmetry of the boundary theory and calculated their spectral functions.
We met many interesting features that appear in strongly correlated systems: gap, pseudogap, flat band and Fermi-arc structures.
Out of the 16 interaction types, only the parity breaking scalar interaction ($\Gamma=\mathbb{1}$) creates a gap without ambiguity; both the source and the condensation create gaps. However, the parity conserving scalar ($\Gamma=\Gamma^{5}$) has a zero-mode Dirac cone in its spectrum, which is much sharper than in the non-interacting case due to the transfer of spectral weight to the zero mode by the interaction. A genuine physical system with a full gap will be described by this coupling.
There are three classes of vectors, $B_{\mu}$, $B_{r\mu}$ and $B_{5\mu}$. All of them show split cones if the order parameter field has a source part, while the condensation creates only anisotropy. The classes $B_{r\mu}$ and $B_{5\mu}$ are invariant under parity, showing zero-mode related features such as Fermi arcs and surface states (Ribbon bands).
Another interesting feature is the flat disk band of $B_{xy}$, which might be useful to describe twisted bilayer graphene. If a chemical potential is applied, the disk bends like a bowl and the Fermi level shifts, resembling the band of the Kondo lattice.
There are three tensor types, $\Gamma^{xy},\Gamma^{tx},\Gamma^{ty}$. These respect parity and have zero modes.
Since the spectral data is the fingerprint of a matter, we should be able to determine the order of a strongly interacting system by comparing the spectral functions calculated in the presence of these orders with experimental data.
We expect that our results will give insight into the magnetic orders of strongly interacting materials.
A final remark is that quartic or higher terms do not contribute to the spectral functions. Therefore, it is enough to discuss the Yukawa coupling to calculate the leading-order effects of the order parameter fields on the fermion spectrum; in fact, the Yukawa couplings are the most relevant ones at low energy. Here we considered a gravity background that is asymptotically AdS$_4$ with Lorentz invariance.
In the future, we will study the AdS$_{5}$ version of this paper, which is related to Weyl semimetals instead of Dirac semimetals.
It will also be interesting to extend our work to higher quantum critical points.
There are ten classes of topological insulators/superconductors depending on the discrete symmetries. It would be interesting to realize all of this 10-fold way in terms of explicit Lagrangians.
One more possibility is to study the effect of combinations of the Yukawa interactions to create different types of spectral features.
Studies in these directions are in progress.
\section{Orthogonal polynomials} \label{sec:OP}
\subsection{Orthogonal polynomials on the real line}
Let $\mu$ be a positive measure on the real line for which all the moments $m_n$ exist. Then the orthonormal polynomials are given by the
orthogonality relations
\[ \int_{\mathbb{R}} p_n(x)p_m(x)\, d\mu(x) = \delta_{m,n}, \]
with $p_n(x) = \gamma_n x^n +\cdots$ and $\gamma_n > 0$.
One of their most remarkable features is that they always satisfy a three term recurrence relation
\begin{equation} \label{3trr}
xp_n(x) = a_{n+1}p_{n+1}(x) + b_n p_n(x) + a_n p_{n-1}(x), \qquad n \geq 0
\end{equation}
with initial values $p_0(x)=1/\sqrt{m_0}$ and $p_{-1} = 0$. The recurrence coefficients are given by
\[ a_n = \int_{\mathbb{R}} xp_n(x)p_{n-1}(x)\, d\mu(x), \quad b_n = \int_{\mathbb{R}} xp_n^2(x)\, d\mu(x), \]
and comparing the coefficient of $x^{n+1}$ in \eqref{3trr} one also finds
\begin{equation} \label{a-gamma}
a_{n+1} = \frac{\gamma_n}{\gamma_{n+1}}.
\end{equation}
The monic orthogonal polynomials are $P_n(x) = p_n(x)/\gamma_n$ and they satisfy the recurrence relation
\begin{equation} \label{3TRR}
xP_n(x) = P_{n+1}(x) + b_n P_n(x) + a_n^2 P_{n-1}(x),
\end{equation}
with $P_0=1$ and $P_{-1}=0$. The square of the norm of the monic polynomial is
\begin{equation} \label{Pnorm}
\int_{\mathbb{R}} P_n^2(x) \, d\mu(x) = \frac{1}{\gamma_n^2}.
\end{equation}
Relevant literature for orthogonal polynomials on the real line is Szeg\H{o} \cite{Szego}, the more recent book by Ismail \cite{Ismail} and \cite{AB}.
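As a concrete illustration of how the recurrence coefficients arise from the measure, the Stieltjes procedure evaluates the integrals for $b_n$ and $a_n^2$ with a quadrature rule and builds the monic polynomials on the fly. The sketch below is illustrative; the Hermite weight $e^{-x^2}$, for which $b_n=0$ and $a_n^2=n/2$ are classical, is our choice of example.

```python
import numpy as np

# Illustrative sketch: the Stieltjes procedure computes the recurrence
# coefficients b_n and a_n^2 of x P_n = P_{n+1} + b_n P_n + a_n^2 P_{n-1}
# from the measure, here the Hermite weight e^{-x^2} (our choice), for
# which the exact answer is b_n = 0 and a_n^2 = n/2.
x, w = np.polynomial.hermite.hermgauss(80)   # nodes/weights for e^{-x^2} dx

def stieltjes(nmax):
    b, a2 = [], [0.0]                        # a2[0] is the unused a_0^2
    Pm1, P = np.zeros_like(x), np.ones_like(x)
    for n in range(nmax):
        nrm = np.sum(w * P * P)              # ||P_n||^2 = 1/gamma_n^2
        b.append(np.sum(w * x * P * P) / nrm)
        Pnext = (x - b[-1]) * P - a2[-1] * Pm1
        a2.append(np.sum(w * Pnext * Pnext) / nrm)  # a_{n+1}^2 = ||P_{n+1}||^2/||P_n||^2
        Pm1, P = P, Pnext
    return np.array(b), np.array(a2[1:])

b, a2 = stieltjes(10)
print(b)     # ~ 0
print(a2)    # ~ [0.5, 1.0, ..., 5.0], i.e. a_n^2 = n/2
```

The ratio $a_{n+1}^2 = \|P_{n+1}\|^2/\|P_n\|^2$ used above follows from combining the norm relation with $a_{n+1}=\gamma_n/\gamma_{n+1}$.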
\subsection{Orthogonal polynomials on the unit circle}
Let $\nu$ be a positive measure on the unit circle $\{ z \in \mathbb{C} : |z| = 1 \}$. The orthonormal polynomials on the unit circle are given by
the orthogonality relations
\[ \int_0^{2\pi} \varphi_n(z) \overline{\varphi_m(z)}\, d\nu(\theta) = \delta_{m,n}, \qquad z=e^{i\theta}, \]
with $\varphi_n(z) = \kappa_n z^n + \cdots$ and $\kappa_n >0$.
The Szeg\H{o}--Levinson recursion relation is
\[ \kappa_{n+1} z\varphi_{n}(z) = \kappa_n \varphi_{n+1}(z) - \varphi_{n+1}(0) \varphi_n^*(z), \]
where $\varphi_n^*(z) = z^n \bar{\varphi}_n(1/z)$ is the reversed polynomial, with $\bar{\varphi}_n$ the polynomial with complex conjugated coefficients.
The reversed polynomials satisfy the orthogonality relations
\[ \int_0^{2\pi} \varphi_n^*(z) z^{-k}\, d\nu(\theta) = 0, \qquad 1 \leq k \leq n. \]
The monic orthogonal polynomials $\Phi_n(z) = \varphi_n(z)/\kappa_n$ satisfy
\begin{equation} \label{RecSz}
z \Phi_n(z) = \Phi_{n+1}(z) + \overline{\alpha_n} \Phi_n^*(z),
\end{equation}
with recurrence coefficients $\alpha_n = - \overline{\Phi_{n+1}(0)}$ which are nowadays known as Verblunsky coefficients.
The square of the norm of the monic orthogonal polynomial is
\[ \int_0^{2\pi} |\Phi_n(z)|^2 \, d\nu(\theta) = \frac{1}{\kappa_n^2}. \]
Observe that $|z|=1$ on the unit circle so that $\Phi_n^*(z) = z^n\overline{\Phi_n(z)}$ on the unit circle, hence taking the norm on both sides of \eqref{RecSz} gives
\[ \frac{1}{\kappa_n^2} = \frac{1}{\kappa_{n+1}^2} + \frac{|\alpha_n|^2}{\kappa_n^2}, \]
from which one finds
\begin{equation} \label{kappa-alpha}
\frac{\kappa_n^2}{\kappa_{n+1}^2} = 1 - |\alpha_n|^2,
\end{equation}
which implies that $|\alpha_n| < 1$ for all $n$. One usually takes $\alpha_{-1}=-1$ which is compatible with $\Phi_0 = 1$.
The recurrence relation \eqref{RecSz} and its $*$-companion can be written in matrix form as
\begin{equation} \label{Phitransfer}
\begin{pmatrix} \Phi_{n+1}(z) \\ \Phi_{n+1}^*(z) \end{pmatrix}
= \begin{pmatrix} z & -\overline{\alpha_n} \\ -\alpha_n z & 1 \end{pmatrix}
\begin{pmatrix} \Phi_n(z) \\ \Phi_n^*(z) \end{pmatrix}
\end{equation}
and the matrix in this expression then serves as a transfer matrix with determinant $z(1-|\alpha_n|^2)$.
Orthogonal polynomials on the unit circle already appear in Szeg\H{o}'s book \cite[Ch. XI]{Szego}. The two books by Simon \cite{Simon} are
strongly recommended. See also Ismail \cite[Ch. 8]{Ismail} and \cite[Ch. 9]{AB}.
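The identity \eqref{kappa-alpha} can be checked numerically: build the monic polynomials $\Phi_n$ by Gram--Schmidt for a discretized measure on the circle, read off $\alpha_n=-\overline{\Phi_{n+1}(0)}$, and compare the ratio of norms $\kappa_n^2/\kappa_{n+1}^2$ with $1-|\alpha_n|^2$. The weight $e^{\cos\theta}$ below is an arbitrary smooth choice made for illustration.

```python
import numpy as np

# Illustrative sketch: monic OPUC for the weight e^{cos(theta)} d(theta)
# (an arbitrary smooth choice), built by Gram-Schmidt on a theta grid.
Ntheta = 4096
theta = 2 * np.pi * np.arange(Ntheta) / Ntheta
z = np.exp(1j * theta)
w = np.exp(np.cos(theta)) / Ntheta          # quadrature weights for d(nu)

def ip(f, g):                               # <f, g> = int f conj(g) d(nu)
    return np.sum(w * f * np.conj(g))

nmax = 8
Phi = [np.ones(Ntheta, dtype=complex)]      # Phi_0 = 1 sampled on the grid
for n in range(nmax):
    f = z * Phi[n]                          # monic, degree n+1
    for k in range(n + 1):                  # project out degrees <= n
        f = f - ip(f, Phi[k]) / ip(Phi[k], Phi[k]) * Phi[k]
    Phi.append(f)

# Phi_n(0) is the constant Fourier coefficient of Phi_n(e^{i theta})
alpha = [-np.conj(np.mean(Phi[n + 1])) for n in range(nmax)]
norms = [ip(p, p).real for p in Phi]        # ||Phi_n||^2 = 1/kappa_n^2
for n in range(nmax):
    print(n, norms[n + 1] / norms[n], 1 - abs(alpha[n]) ** 2)  # equal columns
```

The discrete measure is itself a genuine positive measure on the circle, so the Verblunsky relations hold for it exactly (up to roundoff).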
\section{Toda lattices} \label{sec:Toda}
The Toda lattice is a system of differential equations for particles with an exponential interaction
\[ x_n''(t) = e^{x_{n-1}-x_n} - e^{x_n-x_{n+1}}, \qquad n \in \mathbb{Z}, \]
which was introduced by Morikazu Toda in 1967 (Toda \cite{Toda}).
We will consider the semi-infinite system (with $x_{-1}=-\infty$) or the finite system (with $x_{-1}=-\infty$ and $x_{n+1}=+\infty$)
for the connection with orthogonal polynomials on the real line.
The change of variables (Flaschka variables)
\[ a_k^2 = \exp(x_{k-1}-x_k), \quad b_k=-x_k', \]
gives the system of equations
\begin{eqnarray}
(a_k^2)' &=& a_k^2(b_k-b_{k-1}), \qquad 1 \leq k \leq n, \label{Toda-a} \\
b_k' &=& a_{k+1}^2 - a_k^2, \qquad 0 \leq k \leq n, \quad \label{Toda-b}
\end{eqnarray}
with $a_0^2 = 0 = a_{n+1}^2$ for the finite system and $a_0^2=0$ for the semi-infinite system.
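The change of variables can be verified numerically: integrate the finite chain in the original variables $x_k$ and check that $a_k^2$ and $b_k$ satisfy \eqref{Toda-a}--\eqref{Toda-b} along the trajectory. The initial data below are arbitrary and chosen only for illustration.

```python
import numpy as np

# Illustrative check with arbitrary initial data: integrate the finite Toda
# chain x_k'' = e^{x_{k-1}-x_k} - e^{x_k-x_{k+1}} (velocity Verlet) and
# verify that a_k^2 = exp(x_{k-1}-x_k), b_k = -x_k' obey the Flaschka system.
rng = np.random.default_rng(1)
npart = 6                                   # particles x_0 .. x_5
x = rng.normal(size=npart)
v = rng.normal(size=npart)

def a2_of(x):                               # a_k^2 for k = 1..npart-1
    return np.exp(x[:-1] - x[1:])

def accel(x):                               # x_k'' = a_k^2 - a_{k+1}^2
    a2 = np.concatenate(([0.0], a2_of(x), [0.0]))
    return a2[:-1] - a2[1:]

dt, nsteps = 1e-3, 200
bs, a2s = [], []
for _ in range(nsteps):                     # velocity Verlet integrator
    bs.append(-v.copy())
    a2s.append(a2_of(x))
    vh = v + 0.5 * dt * accel(x)
    x = x + dt * vh
    v = vh + 0.5 * dt * accel(x)
bs, a2s = np.array(bs), np.array(a2s)

# central-difference time derivatives vs the Flaschka right-hand sides
dbdt = (bs[2:] - bs[:-2]) / (2 * dt)
a2e = np.hstack([np.zeros((nsteps, 1)), a2s, np.zeros((nsteps, 1))])
err_b = np.max(np.abs(dbdt - (a2e[:, 1:] - a2e[:, :-1])[1:-1]))

da2dt = (a2s[2:] - a2s[:-2]) / (2 * dt)
err_a = np.max(np.abs(da2dt - (a2s * (bs[:, 1:] - bs[:, :-1]))[1:-1]))
print(err_b, err_a)                         # both ~ O(dt^2)
```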
\subsection{The Toda lattice}
The semi-infinite Toda lattice is related to orthogonal polynomials for an exponential modification of a positive measure $\mu$ on the real line
with a factor $e^{xt}$:
\[ \int_{\mathbb{R}} p_n(x,t) p_m(x,t) e^{xt} \, d\mu(x) = \delta_{m,n}, \]
whenever all the moments of the measure $e^{xt}\, d \mu(x)$ exist. The Toda equations express the compatibility between the recurrence
relation \eqref{3TRR} and the dynamics of these polynomials as a function of $t$.
\begin{lemma}
Let $P_n(x,t)$ be the monic orthogonal polynomials for the measure $d\mu_t(x) = e^{xt}\, d\mu(x)$. If all the moments of $\mu_t$ exist,
then
\begin{equation} \label{dPdt}
\frac{d}{dt} P_n(x,t) = -a_n^2(t) P_{n-1}(x,t).
\end{equation}
\end{lemma}
\begin{proof}
Since $P_{n}(x,t) = x^n + \cdots$, the derivative $dP_n/dt$ is a polynomial of degree $n-1$. Differentiating the orthogonality relations
\[ \int_{\mathbb{R}} P_n(x,t) x^k e^{xt}\, d\mu(x) = 0, \qquad 0 \leq k \leq n-1, \]
gives
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} x^k e^{xt}\, d\mu(x) + \int_{\mathbb{R}} P_n(x,t) x^{k+1} e^{xt}\, d\mu(x) = 0, \qquad
0 \leq k \leq n-1, \]
hence the orthogonality of the polynomial $P_n$ implies that
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} x^k e^{xt}\, d\mu(x) = 0, \qquad 0 \leq k \leq n-2, \]
so that $dP_n(x,t)/dt$ is indeed $c_n(t) P_{n-1}(x,t)$ for some $c_n(t)$. This $c_n(t)$ satisfies
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} x^{n-1} e^{xt}\, d\mu(x) = c_n(t) \int_{\mathbb{R}} P_{n-1}(x,t) x^{n-1} e^{xt}\, d\mu(x) = \frac{c_n(t)}{\gamma_{n-1}^2} . \]
For $k=n-1$ we have
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} x^{n-1} e^{xt}\, d\mu(x) + \int_{\mathbb{R}} P_n(x,t) x^{n} e^{xt}\, d\mu(x) = 0, \]
and since
\[ \int_{\mathbb{R}} P_n(x,t) x^n e^{xt}\, d\mu(x) = \int_{\mathbb{R}} P_n^2(x,t) e^{xt}\, d\mu(x) = \frac{1}{\gamma_n^2} \]
we find
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} x^{n-1} e^{xt}\, d\mu(x) = - \frac{1}{\gamma_n^2} = \frac{c_n(t)}{\gamma_{n-1}^2}, \]
so that
\[ c_n(t) = - \frac{\gamma_{n-1}^2}{\gamma_n^2} = -a_n^2(t). \]
\end{proof}
If we now differentiate the recurrence relation \eqref{3TRR} with respect to the variable $t$, then
\begin{multline*}
x \frac{d}{dt} P_n(x,t) = \frac{d}{dt} P_{n+1}(x,t) + \frac{db_n(t)}{dt} P_n(x,t) + b_n(t) \frac{d}{dt}P_n(x,t) \\
+ \frac{da_n^2}{dt} P_{n-1}(x,t) + a_n^2 \frac{d}{dt} P_{n-1}(x,t).
\end{multline*}
Now use \eqref{dPdt} to find
\begin{multline*}
- a_n^2 xP_{n-1}(x,t) = -a_{n+1}^2 P_n(x,t) + \frac{db_n(t)}{dt} P_n(x,t) - b_n(t) a_n^2 P_{n-1}(x,t) \\
+ \frac{da_n^2}{dt} P_{n-1}(x,t)
- a_n^2 a_{n-1}^2 P_{n-2}(x,t).
\end{multline*}
For $xP_{n-1}(x,t)$ we use the three term recurrence relation \eqref{3TRR} to find
\begin{multline} \label{eq:2.4}
-a_n^2 \bigl( P_n(x,t) + b_{n-1}P_{n-1}(x,t) + a_{n-1}^2 P_{n-2}(x,t) \bigr) \\
= -a_{n+1}^2 P_n(x,t) + \frac{db_n(t)}{dt} P_n(x,t) - b_n(t) a_n^2 P_{n-1}(x,t) \\ + \frac{da_n^2}{dt} P_{n-1}(x,t)
- a_n^2 a_{n-1}^2 P_{n-2}(x,t).
\end{multline}
Comparing the coefficients of $P_n(x,t)$ in \eqref{eq:2.4} gives
\[ \frac{db_n}{dt} = a_{n+1}^2 - a_n^2, \]
and the coefficients of $P_{n-1}(x,t)$ give
\[ \frac{da_n^2}{dt} = a_n^2 (b_n-b_{n-1}) . \]
These are the Toda lattice equations \eqref{Toda-a}--\eqref{Toda-b} for $n \geq 0$ and $a_0^2=0$. The initial conditions $a_n^2(0)$ and $b_n(0)$ are given by the recurrence coefficients of the orthogonal polynomials for the measure $\mu$.
A method for solving the semi-infinite Toda lattice equations is:
\begin{enumerate}
\item Use the initial conditions $a_n^2(0)$ and $b_n(0)$ to determine the orthogonality measure $\mu$ for the orthogonal polynomials $P_n(x,0)$.
\item Consider the modified measure $d\mu_t(x) = e^{xt} \, d\mu(x)$ to describe the dynamics as a function of $t$.
\item Find the recurrence coefficients for the orthogonal polynomials for the measure $d\mu_t(x) = e^{xt} \, d\mu(x)$.
\end{enumerate}
Step 3 is known as the direct problem for orthogonal polynomials: find the recurrence coefficients if one knows the orthogonality measure.
Step 1 is the inverse problem for orthogonal polynomials: find the orthogonality measure if one knows the recurrence coefficients.
If one looks at the three term recurrence relation as a discrete version of the Schr\"odinger equation, then the recurrence coefficients correspond
to a discrete potential. Finding the potential from spectral data is known as the inverse problem in scattering theory, and hence the direct problem
for orthogonal polynomials corresponds to the inverse problem in discrete scattering.
\subsection{Lax pair for the Toda lattice}
One can consider the recurrence relation \eqref{3TRR} and the dynamics \eqref{dPdt} as the Lax pair for the Toda lattice, since the Toda lattice
equations express the compatibility between those two difference and differential equations. It is however somewhat more practical and handy
to use the corresponding Jacobi operator $J$ which has the matrix representation
\begin{equation} \label{Jacobi}
J = \begin{pmatrix} b_0 & a_1 & 0 & 0 & 0 & 0 & 0 & 0 &\cdots \\
a_1 & b_1 & a_2 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & a_2 & b_2 & a_3 & 0 & 0 & 0 & 0 &\cdots\\
\vdots & & \ddots & \ddots & \ddots & & & &\vdots \\
0 & \cdots & 0 & a_{n-1} & b_{n-1} & a_{n} & 0 & 0 & \cdots\\
0 & \cdots & 0 & 0 & a_{n} & b_{n} & a_{n+1} & 0 &\cdots\\
0 & \cdots & 0 & 0 & 0 & a_{n+1} & b_{n+1} & a_{n+2} & \\
\vdots & \cdots & \vdots & \vdots & \vdots & & \ddots & \ddots & \ddots
\end{pmatrix} .
\end{equation}
Let $\Lambda$ be the diagonal matrix $\textup{diag}(\gamma_0,\gamma_1,\gamma_2,\ldots,\gamma_n, \ldots)$, then
the recurrence relation \eqref{3trr} can be written as
\begin{equation} \label{JP}
J \Lambda \mathbf{P} = x \Lambda \mathbf{P}
\end{equation}
and \eqref{dPdt} becomes
\begin{equation} \label{dPA}
\frac{d}{dt} \mathbf{P} = - \Lambda^{-1} J_- \Lambda \mathbf{P},
\end{equation}
where $J_-$ is the lower triangular part of $J$, and
\[ \mathbf{P} = \begin{pmatrix} P_0(x,t) \\ P_1(x,t) \\ P_2(x,t) \\ \vdots \\ P_n(x,t) \\ \vdots \end{pmatrix} . \]
\begin{theorem} \label{thm:Toda-Lax}
The Toda lattice equations are
\[ J' = [J,A], \]
where $[J,A]=JA-AJ$ and $A= \frac12 (J_ - - J_+)$, with $J_-$ the lower triangular part of $J$ and $J_+$ the upper triangular part of $J$.
\end{theorem}
\begin{proof}
If we use the orthonormal polynomials $\mathbf{p} = \Lambda \mathbf{P}$, then \eqref{JP} becomes $J\mathbf{p} = x\mathbf{p}$. Differentiating with respect to $t$ gives
\[ J' \mathbf{p} + J \mathbf{p}' = x\mathbf{p}'. \]
Equation \eqref{dPA} in terms of orthonormal polynomials is
\[ \mathbf{p}' = L \mathbf{p} - J_- \mathbf{p}, \]
with $L = -\Lambda (\Lambda^{-1})' = \textup{diag}(\gamma_0'/\gamma_0,\gamma_1'/\gamma_1,\ldots,\gamma_n'/\gamma_n,\ldots)$.
We thus find
\[ J' \mathbf{p} + J (L \mathbf{p} - J_- \mathbf{p}) = x (L \mathbf{p} - J_- \mathbf{p}). \]
The term $x \mathbf{p}$ can be replaced by $J \mathbf{p}$ and one has
\[ J' \mathbf{p} + J (L \mathbf{p} - J_- \mathbf{p}) = L J \mathbf{p} - J_- J \mathbf{p} . \]
The orthogonality of the orthonormal polynomials implies
\[ \int_{\mathbb{R}} \mathbf{p} \mathbf{p}^T e^{xt} \, d\mu(x) = I \]
where $I$ is the identity matrix, hence we get
\[ J' = -JL + J J_- + L J - J_- J . \]
The matrix $J$ is symmetric, so taking the transpose gives
\[ J' = -LJ + J_+ J + JL - J J_+ . \]
Summing both equations gives
\[ 2 J' = J ( J_- - J_+) - (J_- - J_+) J, \]
which is the desired form of the Toda equations.
\end{proof}
The pair of infinite matrices $(J,A)$ is known as the Lax pair for the Toda equations. They imply that the Toda lattice is an integrable system.
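Isospectrality, the hallmark of the Lax form, is easy to check numerically for a finite Jacobi matrix: evolving $J'=[J,A]$ with $A=\frac12(J_--J_+)$ leaves the eigenvalues of $J$ fixed. The sketch below uses arbitrary initial recurrence coefficients and a generic Runge--Kutta integrator.

```python
import numpy as np

# Illustrative check of the Lax form J' = [J, A], A = (J_- - J_+)/2, for a
# finite Jacobi matrix with arbitrary initial a_n, b_n: integrate with RK4
# and verify that the eigenvalues (the spectral data) do not move.
rng = np.random.default_rng(2)
n = 6

def jacobi(a, b):
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

a = np.abs(rng.normal(size=n - 1)) + 0.1
b = rng.normal(size=n)
J = jacobi(a, b)
eig0 = np.sort(np.linalg.eigvalsh(J))

def rhs(J):                                 # [J, A], strict triangular parts
    A = (np.tril(J, -1) - np.triu(J, 1)) / 2
    return J @ A - A @ J

dt = 1e-3
for _ in range(2000):                       # evolve to t = 2
    k1 = rhs(J)
    k2 = rhs(J + 0.5 * dt * k1)
    k3 = rhs(J + 0.5 * dt * k2)
    k4 = rhs(J + dt * k3)
    J = J + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

eig1 = np.sort(np.linalg.eigvalsh(J))
print(np.max(np.abs(eig1 - eig0)))          # ~ 0: the flow is isospectral
```

Note that $A$ is antisymmetric, so $[J,A]$ stays symmetric, and the commutator of tridiagonal matrices remains tridiagonal: the flow moves only the recurrence coefficients.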
\subsection{The Toda hierarchy}
The orthogonal polynomials for the measure $e^{x^kt}\, d\mu(x)$, with $k$ a positive integer,
\[ \int_{\mathbb{R}} p_n(x,t) p_m(x,t) e^{x^kt}\, d\mu(x) = \delta_{m,n}, \]
are related to the Toda hierarchy. The Toda lattice corresponds to $k=1$. The dynamics of these polynomials as a function of the parameter $t$ is:
\begin{lemma}
Let $P_n(x,t)$ be the monic orthogonal polynomials for the measure $d\mu_t(x) = e^{x^kt}\, d\mu(x)$. If all the moments of $\mu_t$ exist, then
\begin{equation} \label{dPdt-k}
\frac{d}{dt} P_n(x,t) = - \sum_{j=1}^k (J^k)_{n,n-j} \frac{\gamma_{n-j}}{\gamma_n} P_{n-j},
\end{equation}
where $J$ is the Jacobi matrix \eqref{Jacobi}.
\end{lemma}
\begin{proof}
Clearly the polynomial $dP_n(x,t)/dt$ is a polynomial of degree $n-1$. We can express it in terms of the monic orthogonal polynomials as
\[ \frac{d}{dt} P_n(x,t) = \sum_{j=1}^n c_j P_{n-j}(x,t), \]
and the coefficients $c_j$ satisfy
\[ c_j \int_{\mathbb{R}} P_{n-j}^2(x,t) e^{x^kt}\, d\mu(x) = \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} P_{n-j}(x,t) e^{x^kt}\, d\mu(x). \]
One has
\[ \int_{\mathbb{R}} P_{n-j}^2(x,t) e^{x^kt}\, d\mu(x) = \frac{1}{\gamma_{n-j}^2}, \]
and if we differentiate
\[ \int_{\mathbb{R}} P_n(x,t) P_{n-j}(x,t) e^{x^kt}\, d\mu(x) = 0, \qquad 1 \leq j \leq n \]
with respect to $t$, then the orthogonality implies
\[ \int_{\mathbb{R}} \frac{dP_n(x,t)}{dt} P_{n-j}(x,t) e^{x^kt}\, d\mu(x) = - \int_{\mathbb{R}} x^k P_n(x,t) P_{n-j}(x,t) e^{x^kt}\, d\mu(x). \]
The last integral is zero whenever $j > k$, so we only need the terms $1 \leq j \leq k$. The Jacobi matrix has the property
\[ (J)_{m,n} = \int_{\mathbb{R}} x p_m(x,t) p_n(x,t) e^{x^kt}\, d\mu(x), \]
and in general
\[ (J^k)_{m,n} = \int_{\mathbb{R}} x^k p_m(x,t)p_n(x,t) e^{x^kt}\, d\mu(x). \]
If we use $p_n(x,t) = \gamma_n P_n(x,t)$, then we find
\[ \int_{\mathbb{R}} x^k P_n(x,t) P_{n-j}(x,t) e^{x^kt}\, d\mu(x) = \frac{(J^k)_{n,n-j}}{\gamma_n\gamma_{n-j}}, \]
so that
\[ c_j = - (J^k)_{n,n-j} \frac{\gamma_{n-j}}{\gamma_n}, \]
from which \eqref{dPdt-k} follows.
\end{proof}
The expression \eqref{dPdt-k} can be written in matrix form as
\[ \Lambda \mathbf{P}' = - (J^k)_- \Lambda \mathbf{P}, \]
where $(J^k)_-$ is the lower triangular part of $J^k$. This is equivalent with
\[ \mathbf{p}' = L \mathbf{p} - (J^k)_- \mathbf{p}, \]
with $L$ as in the proof of Theorem \ref{thm:Toda-Lax}.
Following the same calculations as before, we then find:
\begin{theorem} \label{thm:Toda-k}
The Toda hierarchy corresponds to the differential equations
\[ J' = [J,\frac12 \bigl((J^k)_- - (J^k)_+\bigr)] . \]
\end{theorem}
As a special case, one can start with a symmetric measure $\mu$, so that $b_n=0$ for all $n \geq 0$. When $k=2$ we then have
\[ (J^2)_{n,n} = a_n^2+a_{n+1}^2, \quad (J^2)_{n,n-2} = a_n a_{n-1}, \quad (J^2)_{n,n+2} = a_{n+1}a_{n+2} , \]
and $(J^2)_{n,m} = 0$ elsewhere, so that
\[ a_n' = \frac12 a_n (a_{n+1}^2 - a_{n-1}^2), \qquad n \geq 1, \]
with $a_0=0$, or
\[ (a_n^2)' = a_n^2 (a_{n+1}^2 - a_{n-1}^2) , \qquad n \geq 1, \]
and this is known as the Langmuir lattice or the Kac-van Moerbeke equations \cite[\S 2.4]{WVA}.
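A quick numerical illustration of the Langmuir lattice (an added sketch, not part of the original derivation): truncating the lattice and setting $x_n = a_n^2$ with boundary values $x_0 = x_{N+1} = 0$, the right-hand sides $x_n(x_{n+1}-x_{n-1})$ telescope, so $\sum_n x_n$ is conserved along the flow, which gives an easy correctness check for an integrator.

```python
import numpy as np

def kvm_rhs(x):
    """Langmuir (Kac-van Moerbeke) lattice x_n' = x_n (x_{n+1} - x_{n-1})
    for x_n = a_n^2, with the boundary values x_0 = x_{N+1} = 0
    supplied by zero padding."""
    xp = np.concatenate(([0.0], x, [0.0]))
    return x * (xp[2:] - xp[:-2])

def rk4_step(f, y, dt):
    # one classical Runge-Kutta step
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Since every stage of the integrator has zero total sum, $\sum_n x_n$ is preserved up to roundoff, and positivity of the $x_n = a_n^2$ is maintained for moderate step sizes.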
\subsection{Ablowitz-Ladik lattice}
The Ablowitz-Ladik lattice (or the Schur flow) is related to orthogonal polynomials on the unit circle, obtained from a positive
measure $\nu$ on the unit circle by an exponential modification with a factor $e^{t\cos \theta}$:
\[ \int_0^{2\pi} \varphi_n(z,t) \overline{\varphi_m(z,t)} e^{t\cos \theta}\, d\nu(\theta) = \delta_{m,n}. \]
The Ablowitz-Ladik equations express the compatibility between the recurrence relation \eqref{RecSz} and the dynamics of the orthogonal polynomials
as a function of $t$. See Golinskii \cite{Gol2006} and Nenciu \cite{Nenciu} for a detailed analysis.
\begin{lemma}
Let $\Phi_n(z,t)$ be the monic orthogonal polynomials for the measure $d\nu_t(\theta) = e^{t\cos \theta}\, d\nu(\theta)$ on the unit circle.
Then
\begin{equation} \label{dPhidt}
\frac{d}{dt} \Phi_n(z,t) = - \frac{\kappa_{n-1}^2}{2 \kappa_{n}^2} \left( \Phi_{n-1}(z,t) + \overline{\alpha_n} \Phi_{n-1}^*(z,t) \right).
\end{equation}
\end{lemma}
\begin{proof}
Differentiating the orthogonality relations
\[ \int_0^{2\pi} \Phi_n(z,t) z^{-k} e^{t\cos \theta} \, d\nu(\theta) = 0, \qquad 0 \leq k \leq n-1 \]
with respect to $t$ gives for $0 \leq k \leq n-1$
\[ \int_0^{2\pi} \frac{d}{dt} \Phi_n(z,t) z^{-k} e^{t\cos \theta} \, d\nu(\theta) + \int_0^{2\pi} \Phi_n(z,t) z^{-k} \frac{z+1/z}{2} e^{t\cos \theta} \, d\nu(\theta) = 0. \]
The orthogonality relations for $\Phi_n$ then imply
\[ \int_0^{2\pi} \frac{d}{dt} \Phi_n(z,t) z^{-k} e^{t\cos \theta} \, d\nu(\theta) = 0, \qquad 1 \leq k \leq n-2. \]
The polynomial $d\Phi_n(z,t)/dt$ is of degree $n-1$, and the above orthogonality relations imply that
\[ \frac{d}{dt} \Phi_n(z,t) = c_1(t) \Phi_{n-1}(z,t) + c_2 \Phi_{n-1}^*(z,t). \]
The coefficients $c_1$ and $c_2$ can be obtained from
\[ c_1 \int_0^{2\pi} \Phi_{n-1}(z,t) z^{-n+1} e^{t\cos \theta}\, d\nu(\theta)
= \int_0^{2\pi} \frac{d\Phi_n(z,t)}{dt} z^{-n+1} e^{t\cos\theta}\, d\nu(\theta), \]
and
\[ c_2 \int_0^{2\pi} \Phi_{n-1}^*(z,t) e^{t\cos \theta}\, d\nu(\theta)
= \int_0^{2\pi} \frac{d\Phi_n(z,t)}{dt} e^{t\cos\theta}\, d\nu(\theta).\]
One has
\[ \int_0^{2\pi} \Phi_{n-1}(z,t) z^{-n+1} e^{t\cos \theta}\, d\nu(\theta)
= \int_0^{2\pi} |\Phi_{n-1}^*(z,t)|^2 e^{t\cos \theta}\, d\nu(\theta) = \frac{1}{\kappa_{n-1}^2}, \]
and
\[ \int_0^{2\pi} \Phi_{n-1}^*(z,t) e^{t\cos \theta}\, d\nu(\theta)
= \int_0^{2\pi} z^{n-1} \overline{\Phi_{n-1}(z,t)} e^{t\cos \theta} \, d\nu(\theta) =\frac{1}{\kappa_{n-1}^2}, \]
and one also has
\begin{align*}
\int_0^{2\pi} \frac{d}{dt} \Phi_n(z,t) z^{-n+1} e^{t\cos \theta} \, d\nu(\theta)
&= - \int_0^{2\pi} \Phi_n(z,t) z^{-n+1} \frac{z+1/z}{2} e^{t\cos \theta} \, d\nu(\theta) \\
&= - \frac{1}{2\kappa_n^2},
\end{align*}
and
\begin{align*}
\int_0^{2\pi} \frac{d}{dt} \Phi_n(z,t) e^{t\cos \theta} \, d\nu(\theta)
&= - \int_0^{2\pi} \Phi_n(z,t) \frac{z+1/z}{2} e^{t\cos \theta} \, d\nu(\theta) \\
&= - \frac12 \int_0^{2\pi} z\Phi_n(z) e^{t\cos \theta}\, d\nu(\theta).
\end{align*}
By using the recurrence relation \eqref{RecSz} we find
\[ \int_0^{2\pi} z\Phi_n(z) e^{t\cos \theta}\, d\nu(\theta) = \overline{\alpha_n} \int_0^{2\pi} \Phi_n^*(z,t) e^{t\cos \theta} \, d\nu(\theta)
= \frac{\overline{\alpha_n}}{\kappa_n^2}. \]
Hence
\[ c_1(t) = - \frac{\kappa_{n-1}^2}{2\kappa_n^2}, \quad c_2(t) = - \overline{\alpha_n} \frac{\kappa_{n-1}^2}{2\kappa_n^2} , \]
from which the result follows.
\end{proof}
The expression \eqref{dPhidt} and its $*$-companion can be written in matrix form as
\begin{equation} \label{dPhidtmatrix}
\frac{d}{dt} \begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix}
= - \frac{\kappa_{n-1}^2}{2\kappa_n^2} \begin{pmatrix} 1 & \overline{\alpha_n(t)} \\ z \alpha_n(t) & z \end{pmatrix}
\begin{pmatrix} \Phi_{n-1}(z,t) \\ \Phi_{n-1}^*(z,t) \end{pmatrix},
\end{equation}
where the matrix is $z(1-|\alpha_n|^2)$ times the inverse of the transfer matrix in \eqref{Phitransfer}.
The compatibility between the recurrence relation \eqref{Phitransfer} and the dynamics \eqref{dPhidtmatrix} is as follows: differentiate
\eqref{Phitransfer} to find
\begin{multline*} \frac{d}{dt} \begin{pmatrix} \Phi_{n+1}(z,t) \\ \Phi_{n+1}^*(z,t) \end{pmatrix}
= \begin{pmatrix} 0 & -\overline{\alpha_n'(t)} \\ - z \alpha_n'(t) & 0 \end{pmatrix}
\begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix} \\
+ \begin{pmatrix} z & -\overline{\alpha_n(t)} \\ -z\alpha_n(t) & 1 \end{pmatrix}
\frac{d}{dt} \begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix}
\end{multline*}
and using \eqref{dPhidtmatrix} this gives
\begin{multline*}
- \frac{\kappa_n^2}{2\kappa_{n+1}^2} \begin{pmatrix} 1 & \overline{\alpha_{n+1}(t)} \\ z \alpha_{n+1}(t) & z \end{pmatrix}
\begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix}
= \begin{pmatrix} 0 & -\overline{\alpha_n'(t)} \\ - z \alpha_n'(t) & 0 \end{pmatrix}
\begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix} \\
- \frac{\kappa_{n-1}^2}{2\kappa_n^2} z (1-|\alpha_n|^2) \begin{pmatrix} \Phi_{n-1}(z,t) \\ \Phi_{n-1}^*(z,t) \end{pmatrix}.
\end{multline*}
Multiply both sides by the transfer matrix in \eqref{Phitransfer} with $n \mapsto n-1$ to get
\begin{multline*}
- \frac{\kappa_n^2}{2\kappa_{n+1}^2} \begin{pmatrix} z & - \overline{\alpha_{n-1}(t)} \\ -z \alpha_{n-1}(t) & 1 \end{pmatrix}
\begin{pmatrix} 1 & \overline{\alpha_{n+1}(t)} \\ z \alpha_{n+1}(t) & z \end{pmatrix}
\begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix} \\
= \begin{pmatrix} z & - \overline{\alpha_{n-1}(t)} \\ -z \alpha_{n-1}(t) & 1 \end{pmatrix}
\begin{pmatrix} 0 & -\overline{\alpha_n'(t)} \\ - z \alpha_n'(t) & 0 \end{pmatrix}
\begin{pmatrix} \Phi_n(z,t) \\ \Phi_n^*(z,t) \end{pmatrix} \\
- \frac{\kappa_{n-1}^2}{2\kappa_n^2} z (1-|\alpha_n|^2) \begin{pmatrix} \Phi_{n}(z,t) \\ \Phi_{n}^*(z,t) \end{pmatrix}.
\end{multline*}
If we work out the matrix products and divide out the common factor $z$, then this identity is true if and only if
\begin{multline*} - \frac{\kappa_n^2}{2\kappa_{n+1}^2} \begin{pmatrix} 1 - \overline{\alpha_{n-1}}\alpha_{n+1} & \overline{\alpha_{n+1}}-\overline{\alpha_{n-1}} \\
-\alpha_{n-1}+\alpha_{n+1} & 1 - \alpha_{n-1}\overline{\alpha_{n+1}} \end{pmatrix} \\
= \begin{pmatrix} \alpha_n' \overline{\alpha_{n-1}} & - \overline{\alpha_n'} \\ - \alpha_n' & \overline{\alpha_n'} \alpha_{n-1} \end{pmatrix}
- \frac{\kappa_{n-1}^2}{2\kappa_n^2} (1-|\alpha_n|^2) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .
\end{multline*}
If one uses \eqref{kappa-alpha} then one gets the equation
\begin{equation} \label{Ablowitz-Ladik}
\alpha_n'(t) = \frac12 (1-|\alpha_n|^2) (\alpha_{n+1}-\alpha_{n-1}) , \qquad n=0,1,2,\ldots
\end{equation}
which are the equations for the Ablowitz-Ladik lattice (or the Schur flow).
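For real initial data the Ablowitz-Ladik flow \eqref{Ablowitz-Ladik} is straightforward to integrate numerically. The sketch below (an added illustration) starts from $\alpha_n(0)=0$, i.e.\ normalized Lebesgue measure, with the boundary value $\alpha_{-1}=-1$, and checks $\alpha_0(t)$ against the ratio $I_1(t)/I_0(t)$ of modified Bessel functions; this closed form for $\alpha_0$ follows from the first moment of the weight $e^{t\cos\theta}$ and is an assumption of this sketch, not a statement from the text.

```python
import numpy as np
from math import factorial

def al_rhs(alpha):
    """Schur flow alpha_n' = (1/2)(1 - alpha_n^2)(alpha_{n+1} - alpha_{n-1})
    for real Verblunsky coefficients, with alpha_{-1} = -1 and a zero
    value at the truncation point."""
    a = np.concatenate(([-1.0], alpha, [0.0]))
    return 0.5 * (1.0 - a[1:-1]**2) * (a[2:] - a[:-2])

def rk4(f, y, dt, steps):
    # classical Runge-Kutta integration
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def bessel_I(n, x, terms=30):
    # modified Bessel function I_n(x) from its power series
    return sum((x / 2.0)**(2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))
```

The truncation length is harmless here because $\alpha_n(t)$ decays very rapidly in $n$ for moderate $t$.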
\subsection{Lax pair for the Ablowitz-Ladik lattice}
For orthogonal polynomials on the unit circle, there are two matrix representations that replace the Jacobi matrix: an infinite Hessenberg matrix
known as the GGT matrix (Geronimus-Gragg-Teplyaev) and the CMV matrix (Cantero-Moral-Vel\'azquez \cite{CMV}), see \cite[Chapter 4]{Simon}.
The CMV matrix is the most convenient for a Lax pair formulation. Even though CMV is named after the authors of \cite{CMV}, the
matrix representation was known earlier in numerical linear algebra, where Ammar, Gragg and Reichel and later Bunse-Gerstner and Elsner
obtained essentially the same results (see Watkins \cite{Watkins} for a survey of their work, and Simon \cite{SimonCMV}).
The CMV matrix appears if one orthogonalizes $1, z, z^{-1}, z^2, z^{-2}, z^3, z^{-3}, \cdots$. Recall that one needs all integer powers of $z$
for completeness on the unit circle. The orthonormal basis that results is $\{ \chi_n, n=0,1,2,\ldots \}$ with
\[ \chi_{2n}(z) = z^{-n} \varphi_{2n}^*(z), \qquad \chi_{2n+1}(z) = z^{-n} \varphi_{2n+1}(z), \qquad n \geq 0. \]
The CMV matrix $C = \bigl(c_{m,n}\bigr)_{m,n=0}^\infty$ then has the entries
\[ c_{m,n} = \int_{0}^{2\pi} z \chi_n(z) \overline{\chi_m(z)} \, d\nu(\theta). \]
It is a pentadiagonal infinite matrix, and it has a nice factorization $C=LM$, where
\[ L = \begin{pmatrix} \Theta_0 & 0 & 0 & 0 &\cdots \\
0 & \Theta_2 & 0 & 0 & \cdots \\
0 & 0 & \Theta_4 & 0 & \cdots \\
\vdots & & & \ddots & \vdots \end{pmatrix},
\quad
M = \begin{pmatrix} 1 & 0 & 0 & 0 &\cdots \\
0 & \Theta_1 & 0 & 0 & \cdots \\
0 & 0 & \Theta_3 & 0 & \cdots \\
\vdots & & & \ddots & \vdots \end{pmatrix}, \]
where $\Theta_n$ are the $2\times 2$ matrices
\[ \Theta_n = \begin{pmatrix} \overline{\alpha_n} & \rho_n \\ \rho_n & - \alpha_n \end{pmatrix}, \qquad \rho_n = \sqrt{1-|\alpha_n|^2}. \]
\begin{theorem}
Let $\varphi_n(z,t)$ be the orthonormal polynomials for the measure $d\nu_t(\theta) = e^{t\cos\theta} \, d\nu(\theta)$ on the unit circle,
and $C(t)$ the corresponding CMV matrix. Then the Ablowitz-Ladik lattice (Schur flow) has the Lax pair representation
\[ \frac{d}{dt} C(t) = \frac12 [B,C], \]
with
\begin{align*} B &= \frac{(C+C^*)_+-(C+C^*)_-}{2} \\
&= \frac12 \begin{pmatrix} 0 & \rho_0\overline{\Delta}_0 & \rho_0\rho_1 & & \\
-\rho_0\Delta_0 & 0 & \rho_1\Delta_1 & \rho_1\rho_2 & \\
-\rho_0\rho_1 & -\rho_1\overline{\Delta}_1 & 0 & \rho_2 \overline{\Delta}_2 & \rho_2\rho_3 \\
& \ddots & \ddots & \ddots &\ddots & \ddots
\end{pmatrix} = -B^*,
\end{align*}
with $\Delta_n = \alpha_{n+1}-\alpha_{n-1}$ and $\alpha_{-1}=-1$.
\end{theorem}
See Golinskii \cite[Thm. 1]{Gol2006}, \cite[Thm. 9.8.9]{AB}, Simon \cite[\S 42 in Part 2]{Simon}, Nenciu \cite{Nenciu} for a proof of this theorem.
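A finite section of the factorization $C = LM$ can be assembled directly from the $\Theta_n$ blocks. The sketch below (added; it assumes real Verblunsky coefficients, and the trailing $1\times 1$ identity block in $M$ is a boundary choice made so that the truncation stays orthogonal) checks that the resulting matrix is pentadiagonal and orthogonal.

```python
import numpy as np

def theta(alpha):
    # Theta_n block for a real Verblunsky coefficient (conj(alpha) = alpha)
    rho = np.sqrt(1.0 - alpha**2)
    return np.array([[alpha, rho], [rho, -alpha]])

def cmv(alphas):
    """Finite section C = L M of the CMV matrix for real alphas.
    L carries the blocks Theta_0, Theta_2, ... and M the blocks
    Theta_1, Theta_3, ...; the trailing 1x1 identity block in M is a
    boundary choice (an assumption) that keeps the truncation orthogonal."""
    m = len(alphas) // 2
    n = 2 * m
    L = np.zeros((n, n))
    M = np.zeros((n, n))
    for j in range(m):
        L[2*j:2*j+2, 2*j:2*j+2] = theta(alphas[2*j])
    M[0, 0] = 1.0
    for j in range(m - 1):
        M[2*j+1:2*j+3, 2*j+1:2*j+3] = theta(alphas[2*j+1])
    M[n-1, n-1] = 1.0
    return L @ M
```

Each $\Theta_n$ is symmetric with $\Theta_n^2 = I$, so $L$ and $M$ are orthogonal and hence so is $C$; the block offsets make $C$ pentadiagonal.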
\section{Discrete Painlev\'e equation} \label{sec:DP}
So far we have looked at orthogonal polynomials $p_n(x,t)$ for a measure $d\mu_t(x) = e^{xt}\, d\mu(x)$ on the real line as functions of two variables
$x$ and $t$. However the degree $n$ is also an important parameter and the three term recurrence relation \eqref{3trr} gives the dynamics in this discrete
variable. So one must really consider $p_n(x,t)$ as a function of three variables $n,x,t$, with $n$ a variable taking values in the discrete set $\mathbb{N}$.
Of course, one can also consider the monic polynomials $P_n(x,t)$ and the three term recurrence relation \eqref{3TRR}.
Compatibility between the recurrence relation (which acts as a difference equation for the variable $n$) and the behavior in the variable $x$ (differential or difference equation in $x$) will give difference equations for the coefficients $a_n$ and $b_n$ in the recurrence relation \eqref{3trr} or \eqref{3TRR}.
One of the earliest examples are the orthogonal polynomials for the weight $w(x,t)= e^{-x^4+tx^2}$ on $\mathbb{R}$.
In 1976 G\'eza Freud found that $b_n=0$ (because the weight is symmetric around $0$) and that
\begin{equation} \label{dPI}
4a_n^2 (a_{n+1}^2+a_n^2+a_{n-1}^2 - \frac{t}{2}) = n,
\end{equation}
for the case $t=0$. Shohat had already obtained this recurrence in 1939, but did not pursue it further.
Fokas, Its and Kitaev \cite{FIK} realized that \eqref{dPI} should be thought of as a discrete Painlev\'e equation and called it discrete Painlev\'e I
(d-P$_{\scriptstyle\textup{I}}$).
The equation \eqref{dPI} follows from compatibility between the recurrence relation
\[ xp_n(x) = a_{n+1} p_{n+1}(x) + a_n p_{n-1}(x), \]
and the structure relation
\[ p_n'(x) = A_np_{n-1}(x) + C_n p_{n-3}(x), \]
which follows from the fact that $w'(x) = (-4x^3 + 2tx) w(x)$ and integration by parts (see, e.g., \cite[\S 2.1]{WVA}).
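Freud's relation \eqref{dPI} at $t=0$ can be tested from the moments alone: $m_{2k} = \int_{\mathbb{R}} x^{2k} e^{-x^4}\,dx = \Gamma((2k+1)/4)/2$, the odd moments vanish, and the monic recurrence coefficients are given by the Hankel determinant formula $a_n^2 = D_{n-1}D_{n+1}/D_n^2$. The following numerical sketch is an added illustration.

```python
import numpy as np
from math import gamma

def moment(j):
    # m_j = int_R x^j e^{-x^4} dx; odd moments vanish by symmetry
    return 0.0 if j % 2 else gamma((j + 1) / 4.0) / 2.0

def hankel_det(n):
    # D_n = det (m_{i+j})_{i,j=0}^{n-1}, with D_0 = 1
    if n == 0:
        return 1.0
    H = np.array([[moment(i + j) for j in range(n)] for i in range(n)])
    return float(np.linalg.det(H))

def a2(n):
    # a_n^2 = D_{n-1} D_{n+1} / D_n^2 for the monic three term recurrence
    if n == 0:
        return 0.0
    return hankel_det(n - 1) * hankel_det(n + 1) / hankel_det(n)**2
```

For the first few $n$ one finds $4a_n^2(a_{n+1}^2+a_n^2+a_{n-1}^2) = n$ to machine accuracy, in agreement with \eqref{dPI} at $t=0$.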
Many other situations were investigated and they all have the same features: the measure is $d\mu(x) = w(x)\, dx$ and the weight
$w(x)$ satisfies a simple differential equation
\begin{equation} \label{Pearson}
[\sigma(x) w(x)]' = \tau(x) w(x),
\end{equation}
where $\sigma$ and $\tau$ are polynomials, and this gives a structure relation
\begin{equation} \label{structure}
\sigma(x) p_n'(x) = \sum_{k=n-t}^{n+s-1} A_{n,k} p_k(x),
\end{equation}
where $s = \deg \sigma$ and $t=\max \{ \deg \tau, \deg \sigma -1 \}$. The differential equation \eqref{Pearson} is known as the Pearson equation
and the corresponding orthogonal polynomials are known as semi-classical orthogonal polynomials. The classical orthogonal polynomials
of Hermite, Laguerre and Jacobi correspond to the case when $s \leq 2$ and $t=1$. Compatibility between the three term recurrence
relation \eqref{3trr} and the structure relation \eqref{structure} then gives a system of equations for the unknown recurrence coefficients $a_n, b_n, A_{n,k}$, and eliminating the $A_{n,k}$ then gives a system of two non-linear recurrence relations for the $a_n, b_n$, which can usually be identified
as some discrete Painlev\'e equation.
The classification of discrete Painlev\'e equations is more complicated than for the Painlev\'e differential equations, which we will encounter in the
next section. Originally one gave names to discrete Painlev\'e equations in a somewhat chronological order, taking into account the
limiting differential equation that appears as a continuum limit (taking $x=nh$ and $h \to 0$), but nowadays one identifies the discrete Painlev\'e
equation using symmetry and geometry, following the work of Okamoto \cite{Okamoto} and Sakai \cite{Sakai}. A very good survey was given by
Kajiwara, Noumi and Yamada \cite{KNY} and is strongly recommended. So the discrete Painlev\'e equation d-P$_{\scriptstyle\textup{I}}$ in
\eqref{dPI} corresponds to d-P$(A_2^{(1)}/E_6^{(1)})$, with symmetry $A_2^{(1)}$ and surface type $E_6^{(1)}$,
where $A_2^{(1)}$ and $E_6^{(1)}$ are affine Weyl groups, see \cite[Eq. (8.25)]{KNY}.
The structure relation can be replaced by using a lowering and raising operator for the orthogonal polynomials, as was worked out
by Chen and Ismail (see, e.g., \cite[\S 3.2]{Ismail} or \cite[Ch. 4]{WVA}). If we denote the weight function by $w(x)= \exp(-V(x))$, with $V$ twice differentiable,
and define
\begin{eqnarray*}
A_n(x) &=& a_n \int \frac{V'(x)-V'(y)}{x-y} \ p_n^2(y) w(y)\, dy, \\
B_n(x) &=& a_n \int \frac{V'(x)-V'(y)}{x-y}\ p_n(y)p_{n-1}(y) w(y)\, dy ,
\end{eqnarray*}
then the lowering operator $L_{1,n}$ and the raising operator $L_{2,n}$ are given by
\[ L_{1,n} = \frac{d}{dx} + B_n(x), \quad L_{2,n} = - \frac{d}{dx} + B_n(x) + V'(x), \]
and one has
\[ L_{1,n} p_n(x) = A_n(x) p_{n-1}(x), \quad L_{2,n} p_{n-1}(x) = \frac{a_n}{a_{n-1}} A_{n-1}(x) p_n(x). \]
Combining both operators gives a second order differential equation
\[ L_{2,n} \Bigl( \frac{1}{A_n(x)} \bigl(L_{1,n} p_n(x) \bigr) \Bigr) = \frac{a_n}{a_{n-1}} A_{n-1}(x) p_n(x). \]
The compatibility between this differential equation (in the variable $x$) and the three term recurrence relation \eqref{3trr} (in the variable $n$)
then gives a system of non-linear difference equations for the recurrence coefficients $a_n,b_n$.
Observe that the functions $A_n$ and $B_n$ not only depend directly on the weight function $w$ and the potential $V$, but also on the polynomials
$p_n$ and $p_{n-1}$, and hence it is not always possible to compute $A_n$ and $B_n$ explicitly without knowing the orthogonal polynomials.
However, one can usually figure out that they are rational functions with known poles and then introduce some unknown parameters. The compatibility
relations then connect these parameters with the recurrence coefficients, and eliminating the parameters gives the required
difference equations for the recurrence coefficients $a_n,b_n$. Let us give some examples supplementing the earlier example $w(x)=\exp(-x^4+tx^2)$
on $(-\infty,\infty)$.
\begin{itemize}
\item (Chen and Its \cite{ChenIts}) if $w(x) = x^\alpha e^{-x-t/x}$ for $x \in [0,\infty)$, then
\[ A_n(x) = a_n \left(\frac{1}{x} + \frac{c_n}{x^2}\right) , \quad B_n(x) = -\frac{n}{x} + \frac{d_n}{x^2} . \]
The recurrence coefficients can be expressed in terms of $c_n$ and $d_n$ as
\[ b_n = 2n+\alpha + 1 + c_n, \quad a_n^2 = n(n+\alpha) +d_n + \sum_{j=0}^{n-1} c_j. \]
If we put $x_n=1/c_n$ and $y_n=d_n$ then
\begin{align*}
x_n+x_{n-1} &= \frac{nt-(2n+\alpha)y_n}{y_n(y_n-t)}, \\
y_n+y_{n+1} &= t - \frac{2n+\alpha+1}{x_n} - \frac{1}{x_n^2},
\end{align*}
which can be identified as d-P$((2A_1)^{(1)}/D_6^{(1)})$.
\item (Basor, Chen, Ehrhardt \cite{BCE}) the Toda evolution of the Jacobi weight is $w(x)=(1-x)^\alpha(1+x)^\beta e^{-tx}$ on $[-1,1]$.
The $A_n$ and $B_n$ are rational functions with poles at $\pm 1$:
\[ A_n(x) = a_n \left( \frac{R_n}{1-x} + \frac{t+R_n}{1+x} \right), \quad B_n(x) = \frac{r_n}{1-x} + \frac{r_n-n}{1+x}. \]
The recurrence coefficients can be expressed in terms of $r_n$ and $R_n$ as
\begin{align*}
tb_n &= 2n+1+\alpha+\beta - t - 2R_n, \\
t(t+R_n) a_n^2 &= n(n+\beta) -(2n+\alpha+\beta)r_n - \frac{tr_n(r_n+\alpha)}{R_n},
\end{align*}
and $(r_n,R_n)$ satisfy
\begin{align*}
2t(r_n+r_{n+1}) &= 4R_n^2-2R_n(2n+1+\alpha+\beta-t) -2\alpha t, \\
\left( \frac{t}{R_n} +1 \right) \left(\frac{t}{R_{n-1}}+1 \right) &= 1+ \frac{n(n+\beta)-(2n+\alpha+\beta)r_n}{r_n(r_n+\alpha)}.
\end{align*}
An identification as a discrete Painlev\'e equation was not made, but after an appropriate transformation $r_n \to q_n$ and $t/R_n \to f_n-1$ one can recognize d-P$(D_4^{(1)}/D_4^{(1)})$.
\item (Boelen-Van Assche \cite{BVA}, Clarkson-Jordaan \cite{ClarkJor}) for the modified Laguerre weight
$w(x)=x^\alpha e^{-x^2+tx}$ on $[0,\infty)$ the recurrence coefficients can be written as
\[ 2a_n^2 = y_n+n+\alpha/2, \quad 2b_n = t - \frac{\sqrt{2}}{x_n}, \]
and the $(x_n,y_n)$ satisfy the system
\begin{align*}
x_nx_{n-1} &= \frac{y_n + n+\alpha/2}{y_n^2-\alpha^2/4}, \\
y_n+y_{n+1} &= \frac{1}{x_n} \left( \frac{t}{\sqrt{2}} - \frac{1}{x_n} \right).
\end{align*}
Using a rational transformation $(x_n,y_n) \to (p_n,q_n)$, Dzhamay et al. \cite[Eq. (3.86)]{DFS1} were able to identify the resulting equations
\begin{align*}
q_n + q_{n+1} &= p_n-t- \frac{n+\alpha+2}{p_n}, \\
p_n+p_{n-1} &= q_n + t - \frac{n+1}{q_n},
\end{align*}
as the discrete Painlev\'e equation d-P$(A_2^{(1)}/E_6^{(1)})$.
\end{itemize}
This also works for discrete orthogonal polynomials, i.e., when the orthogonality is in terms of a discrete measure on a lattice or a $q$-lattice.
Some examples of orthogonal polynomials on the lattice $\mathbb{N}$ are
\begin{itemize}
\item Generalized Charlier polynomials: the orthogonality relations are
\[ \sum_{k=0}^\infty P_n(k)P_m(k) \frac{a^k}{k! (\beta)_k} = 0, \qquad m \neq n, \]
with $a, \beta >0$. The discrete weight $w_k=a^k/\bigl((\beta)_k\, k!\bigr)$ satisfies the discrete Pearson equation
\[ w_{k-1}=\frac{k(\beta+k-1)}{a} w_k, \]
and the orthonormal polynomials have the structure relation
\[ p_n(x+1)-p_n(x) = A_n p_{n-1}(x) + B_n p_{n-2}(x). \]
Compatibility of this structure relation with the three term recurrence relation \eqref{3trr} gives the discrete Painlev\'e equations \cite[Thm. 3.7]{WVA}
\begin{align*}
(a_{n+1}^2-a)(a_n^2-a) &= ad_n(d_n+\beta-1), \\
d_{n} + d_{n-1} &= -n-\beta+1+\frac{an}{a_n^2},
\end{align*}
with $d_n=b_n-n$, which is a limiting case of d-P$(D_4^{(1)}/D_4^{(1)})$. Observe that for $a=e^t$ the recurrence coefficients $a_n^2(t)$ and $b_n(t)$ satisfy
the Toda-lattice equations \eqref{Toda-a}--\eqref{Toda-b}.
\item Generalized Meixner polynomials: the orthogonality relations are
\[ \sum_{k=0}^\infty P_n(k) P_m(k) \frac{(\gamma)_k a^k}{(\beta)_k k!} = 0, \qquad m \neq n, \]
with $a,\beta,\gamma >0$. Here it is more convenient to use the ladder operators to obtain the discrete Painlev\'e equation \cite[\S 5.3]{WVA}
\begin{align*}
(u_n+v_n)(u_{n+1}+v_n) &= \frac{\gamma-1}{a^2} v_n(v_n-a) \left( v_n - a \frac{\gamma-\beta}{\gamma-1} \right), \\
(u_n+v_n)(u_n+v_{n-1}) &= \frac{u_n}{u_n-\frac{an}{\gamma-1}} (u_n+a) \left( u_n + a \frac{\gamma-\beta}{\gamma-1} \right),
\end{align*}
and the recurrence coefficients are given by
\[ a_n^2 = na - (\gamma-1)u_n, \quad b_n = n+\gamma-\beta+a - \frac{\gamma-1}{a} v_n. \]
These discrete Painlev\'e equations correspond to d-P$(E_6^{(1)}/A_2^{(1)})$ in \cite{KNY}.
Again, for $a=e^t$ the recurrence coefficients satisfy the Toda lattice equations \eqref{Toda-a}--\eqref{Toda-b}.
\item Hypergeometric weights: the orthogonality relations are
\[ \sum_{k=0}^\infty P_m(k)P_n(k) \frac{(\alpha)_k(\beta)_k a^k}{(\gamma)_k k!} = 0, \qquad m \neq n, \]
where $\alpha,\beta,\gamma >0$ and $0 < a < 1$. These orthogonal polynomials were investigated in \cite{FWVA}, where the discrete Painlev\'e equations were obtained; they were later simplified in \cite[Thm. 1]{DFS0}. The discrete Painlev\'e equations
are
\begin{align*}
f_{n+1}f_n &= \frac{g_n(g_n- \gamma+\beta)}{c(g_n+n+\beta-\gamma+1)(g_n+n+\alpha+\beta-\gamma)}, \\
g_n+g_{n-1} &= -2n+\gamma-\alpha -2\beta+\gamma - \frac{n+\beta}{f_n-1} -\frac{n+\alpha-\gamma}{cf_n-1},
\end{align*}
which corresponds to d-P$(D_4^{(1)}/D_4^{(1)})$, where
{\small
\[ f_n = \frac{(x_n - \beta)(x_n - \gamma)}{c \bigl((x_n - \alpha)(x_n - \beta) - nx_n - y_n\bigr)}, \]
\[ \rule{-20pt}{0pt} g_n = - \frac{(x_n - \gamma)\Bigl( \bigl( (x_n - \alpha)(x_n - \beta) - nx_n - y_n \bigr) - t(x_n - \beta)(x_n - \gamma + \beta + n) \Bigr)}{\bigl((x_n - \alpha)(x_n - \beta) - nx_n - y_n\bigr) - t(x_n - \beta)(x_n - \gamma)}, \]}
where the recurrence coefficients are given by
\begin{align*}
\frac{1 - c}{c} a_n^2 &= y_n + \sum_{k=0}^{n-1} x_k + \frac{n(n + \alpha + \beta - \gamma - 1)}{1-c} , \\
b_n &= x_n + \frac{n + (n + \alpha + \beta)c - \gamma}{1-c}.
\end{align*}
Again the recurrence coefficients satisfy the Toda lattice equations if $c=e^t$.
\end{itemize}
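The Toda statements in the discrete examples above can be checked numerically. In the monic normalization the Toda equations read $(a_n^2)' = a_n^2(b_n - b_{n-1})$ and $b_n' = a_{n+1}^2 - a_n^2$ (this explicit form is an assumption of the sketch, since the normalization in \eqref{Toda-a}--\eqref{Toda-b} may differ). The sketch below computes the recurrence coefficients of the generalized Charlier weight on a truncated lattice and compares central differences in $t = \log a$ with the Toda right-hand sides.

```python
import numpy as np
from math import factorial

def poch(b, k):
    # Pochhammer symbol (b)_k
    out = 1.0
    for i in range(k):
        out *= b + i
    return out

def rec_coeffs(a, beta, nmax, K=60):
    """Recurrence coefficients a_n^2 and b_n of the generalized Charlier
    weight w_k = a^k / ((beta)_k k!) on the truncated lattice k = 0..K,
    computed by the Stieltjes procedure."""
    k = np.arange(K + 1, dtype=float)
    w = np.array([a**j / (poch(beta, j) * factorial(j)) for j in range(K + 1)])
    a2, b = [0.0], []
    Pm, P = np.zeros_like(k), np.ones_like(k)
    h = np.sum(w * P * P)
    for n in range(nmax + 1):
        bn = np.sum(w * k * P * P) / h
        b.append(bn)
        Pm, P = P, (k - bn) * P - a2[n] * Pm
        hn = np.sum(w * P * P)
        a2.append(hn / h)     # a_{n+1}^2 = h_{n+1}/h_n
        h = hn
    return np.array(a2[:nmax + 1]), np.array(b)
```

The lattice truncation is harmless because the weights decay faster than exponentially in $k$.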
The prototype example for orthogonal polynomials on the unit circle is the weight function $v(\theta) = e^{t\cos \theta}$,
where $z=e^{i\theta}$. The orthogonal polynomials
for this weight on the unit circle were first investigated by Periwal and Shevitz \cite{PerShe1990} and they are related to unitary random matrices.
The Verblunsky coefficients for these orthogonal polynomials $\alpha_n(t)$ are real and satisfy \cite[Thm. 3.2]{WVA}
\[ -\frac{t}{2} (1-\alpha_n^2)(\alpha_{n+1}+\alpha_{n-1}) = (n+1)\alpha_n, \]
which corresponds to discrete Painlev\'e II (d-P$_{\scriptstyle\textup{II}}$). As a function of $t$ they also satisfy the Ablowitz-Ladik equation \eqref{Ablowitz-Ladik}
\[ \alpha_n'(t) = \frac12 (1-\alpha_n^2) (\alpha_{n+1}-\alpha_{n-1}) , \]
with initial values $\alpha_n(0)=0$ for $n\geq 0$ and $\alpha_{-1}=-1$, since the orthonormal polynomials for $t=0$ are $\varphi_n(z,0) = z^n$.
\section{Painlev\'e differential equations} \label{sec:Pain}
The orthogonal polynomials depend on three variables $n,x,t$ and in each of these variables they satisfy differential or difference equations.
The compatibility between these relations for the variables $n$ and $t$ results in the Toda lattice equations or related lattice equations such as
Ablowitz-Ladik (see \S \ref{sec:Toda}). The compatibility between the relations for the variables $n$ and $x$ give discrete Painlev\'e equations
(see \S \ref{sec:DP}). A combination of the Toda equations and the discrete Painlev\'e equations then gives the compatibility between all three
variables $n,x,t$ of the orthogonal polynomials $P_n(x,t)$. In many cases this results in a second order differential equation that (after some transformation)
can be identified as one of the six Painlev\'e equations. For the examples that we have considered before, one arrives at the following equations.
\begin{itemize}
\item For the weight function $w(x,t) = e^{-x^4+tx^2}$ on $(-\infty,\infty)$ one finds $\textup{P}_{\scriptstyle\textup{IV}}$
\[ x_n''(t) = \frac{(x_n')^2}{2x_n} + \frac{3x_n^3}{2} -tx_n^2 + x_n \left( \frac{n}{4} + \frac{t^2}{8} \right) - \frac{n^2}{32x_n} , \]
where $x_n = a_n^2$ (Magnus \cite{Magnus}). The transformation $2x_n(t) = y(-t/2)$ gives P$_{\scriptstyle \textup{IV}}$ in its usual form.
\item For the weight function $v(\theta) = e^{t\cos \theta}$ on the unit circle, the Verblunsky coefficients $\alpha_n(t)$ satisfy
\[ \alpha_n'' = - \frac{\alpha_n}{1-\alpha_n^2} (\alpha_n')^2 - \frac{\alpha_n'}{t} - \alpha_n(1-\alpha_n^2) + \frac{(n+1)^2}{t^2} \frac{\alpha_n}{1-\alpha_n^2}, \]
(Periwal and Shevitz \cite{PerShe1990})
and if we put $\alpha_n = (1+y)/(1-y)$ then $y$ satisfies P$_{\scriptstyle\textup{V}}$ but with one of the parameters $\gamma=0$. In that case the
Painlev\'e equation can be transformed to P$_{\scriptstyle\textup{III}}$ \cite[\S 3.1.3]{WVA}. If one puts $w_n=\alpha_n/\alpha_{n-1}$, then
\[ w_n'' = \frac{(w_n')^2}{w_n} - \frac{w_n'}{t} + \frac{2n}{t} w_n^2 - \frac{2(n+1)}{t} + w_n^3 - \frac{1}{w_n}, \]
which is Painlev\'e III.
\item For the weight function $w(x,t) = x^\alpha e^{-x-t/x}$ on $[0,\infty)$ one has
\[ c_n'' = \frac{(c_n')^2}{c_n} - \frac{c_n'}{t} + (2n+\alpha+1) \frac{c_n^2}{t} + \frac{c_n^3}{t^2} + \frac{\alpha}{t} - \frac{1}{c_n} ,\]
which after a transformation is P$_{\scriptstyle\textup{III}}$ (Chen and Its \cite{ChenIts}).
This shows that the same Painlev\'e equation may appear for the recurrence coefficients
of very different families of orthogonal polynomials. What these families have in common is that the moments can be expressed in terms of Bessel functions,
and the solution of the Painlev\'e equation that we want is the special function solution in terms of Bessel functions.
\item For the Jacobi-Toda weight function $w(x,t) = (1-x)^\alpha(1+x)^\beta e^{-tx}$ on $[-1,1]$ one uses the transformation $y(t) = 1+t/R_n(t)$ to find
\begin{multline*}
y'' = \frac{2y-1}{2y(y-1)} (y')^2 - \frac{y'}{t} + 2(2n+\alpha+\beta+1) \frac{y}{t} - \frac{2y(y+1)}{y-1} \\
+ \frac{(y-1)^2}{t^2} \left( \frac{\alpha^2y}{2}-\frac{\beta^2}{2y} \right),
\end{multline*}
which corresponds to P$_{\scriptstyle\textup{V}}$ (Basor, Chen and Ehrhardt \cite{BCE}).
\item The modified Laguerre weight $w(x,t) = x^\alpha e^{-x^2+tx}$ on $[0,\infty)$ gives a Painlev\'e IV equation for $x_n$
\[ x_n'' = \frac{3}{2} \frac{(x_n')^2}{x_n} + \frac{\alpha^2}{4} x_n^3 - \frac{x_n}{8} (t^2-4-8n-4\alpha) + \frac{t}{\sqrt{2}} - \frac{3}{4x_n}, \]
and in \cite{DFS1} the identification with the Hamiltonian form of P$_{\scriptstyle\textup{IV}}$ was made.
\item For the generalized Charlier polynomials one puts $x_n(a) = \displaystyle \frac{a}{1-y(a)}$ to find
\[ y'' = \left( \frac{1}{2y} + \frac{1}{y-1} \right) (y')^2 - \frac{y'}{a} + \frac{(y-1)^2}{a^2} \left( \frac{n^2}{2}y - \frac{(\beta-1)^2}{2y} \right) - \frac{2y}{a}, \]
which is P$_{\scriptstyle\textup{V}}$ but with one of the parameters $\delta=0$. This can be transformed to P$_{\scriptstyle\textup{III}}$, see
\cite[\S 3.2.3]{WVA}.
The moments of the generalized Charlier polynomials are in terms of modified Bessel functions.
\item The generalized Meixner polynomials have a genuine P$_{\scriptstyle\textup{V}}$ with all the parameters different from $0$.
One has \cite[\S 5.3]{WVA}
\[ y'' = \left( \frac{1}{2y} + \frac{1}{y-1} \right) (y')^2 - \frac{y'}{a} + \frac{(y-1)^2}{a^2} \left(Ay+ \frac{B}{y}\right) + \frac{Cy}{a} +
\frac{Dy(y+1)}{y-1}, \]
with
\[ A= \frac{(\beta-1)^2}{2}, \quad B= -\frac{n^2}{2}, \quad C = n-\beta+2\gamma, \quad D = -\frac12. \]
Here $y(a)$ is related to $v_n(a)$ by the rational transformation
\[ v_n(a) = \frac{a\bigl(ay'-(1+\beta-2\gamma)y^2+(1+n-a+\beta-2\gamma)y - n\bigr)}{2(\gamma-1)(y-1)y}. \]
The moments are in terms of the confluent hypergeometric function $M(a,b,z)$ and the solution we need is the special function solution
of P$_{\scriptstyle\textup{V}}$.
\item For the hypergeometric weights, Dzhamay et al.\ \cite{DFS1} found the appropriate transformation that leads to Painlev\'e VI:
\begin{multline*}
f'' = \frac12 \left( \frac{1}{f} + \frac{1}{f-1} + \frac{1}{f-t} \right) (f')^2 - \left( \frac{1}{t} + \frac{1}{t-1} + \frac{1}{f-t} \right) f' \\
+ \frac{f(f-1)(f-t)}{t^2(t-1)^2} \left( A + B \frac{t}{f^2} + C \frac{t-1}{(f-1)^2} + D \frac{t(t-1)}{(f-t)^2} \right),
\end{multline*}
with $ct=1$ and parameters
{\small
\[ A = \frac{(\alpha-1)^2}{2}, \quad B=- \frac{(\beta-\gamma)^2}{2}, \quad C= \frac{(n+\beta)^2}{2}, \quad D= \frac12 - \frac{(n+\alpha-\gamma)^2}{2}. \] }
The moments of these discrete weights are in terms of Gauss hypergeometric functions, and the solution that we need is a special function
solution of P$_{\scriptstyle\textup{VI}}$.
\end{itemize}
\section{Conclusion}
Orthogonal polynomials on the real line and the unit circle for semi-classical weights have a remarkable underlying integrable structure. One needs to
look at orthogonal polynomials $P_n(x,t)$ as functions of three variables $n,x,t$, where $n$ is discrete and denotes the degree, $x$ is the usual variable for the polynomial, which may be continuous or discrete, and $t$ is the time variable for the Toda evolution. The three term recurrence relation for the
orthogonal polynomials plays a crucial role and the corresponding Jacobi operator is part of the Lax pair for the Toda lattice. On the unit circle one uses the CMV matrix instead of the Jacobi matrix. Differential equations or difference equations for the orthogonal polynomials (in the variable $x$) lead to non-linear
difference equations for the recurrence coefficients which can be identified as discrete Painlev\'e equations. This identification is not straightforward but
recently a lot of progress has been made using the geometric theory behind (discrete) Painlev\'e equations (Kajiwara et al. \cite{KNY}, Dzhamay et al. \cite{DFS0,DFS1}). A combination of the discrete Painlev\'e equations and the Toda equations finally leads to Painlev\'e equations for the
recurrence coefficients. In all the cases considered one needs the special function solutions of the Painlev\'e equations, and the special functions
are already visible in the first few moments of the weights under consideration. This simplifies the identification because one knows which special functions
can appear in special function solutions of Painlev\'e equations, see e.g., Clarkson \cite{Clarkson}.
\section*{\refname}}
\def\note#1{\marginpar{\textbf{\footnotesize Note:}\\ {\footnotesize #1}}}
\biboptions{sort&compress}
\journal{Journal of Sound and Vibration}
\begin{document}
\begin{frontmatter}
\title{Deep learning surrogate interacting Markov chain Monte Carlo based full wave inversion scheme for properties of materials quantification}
\author[label1]{Reza Rashetnia$^{*}$}
\author[label2]{Mohammad Pour-Ghaz}
\address[label1]{$^{*}$InstroTek Inc., 1 Triangle Dr, Research Triangle Park, Durham, NC 27709, USA,
(Corresponding author) Tel.:+1-979-402-0417. E-mail:reza.rashetnia@gmail.com}
\address[label2]{Department of Civil Construction and Environmental Engineering, North Carolina State University, Raleigh, NC 27695, USA.}
\begin{abstract}
The Full Wave Inversion (FWI) imaging scheme has many applications in engineering, geoscience and medical sciences.
In this paper, a surrogate deep learning FWI approach is presented to quantify properties of materials using stress waves.
Such inverse problems, in general, are ill-posed and nonconvex, especially in cases where the solutions exhibit shocks, heterogeneity, discontinuities, or large gradients.
The proposed approach is shown to obtain global minimum responses efficiently in these cases.
The approach is trained on a randomly sampled set of material properties and on sampled trials around local minima; it therefore requires a forward simulation that can handle high heterogeneity, discontinuities and large gradients.
The high-resolution Kurganov-Tadmor (KT) central finite volume method is used as the forward wave propagation operator.
Using the proposed framework, material properties of 2D media are quantified for several different situations.
The results demonstrate the feasibility of the proposed method for estimating mechanical properties of materials with high accuracy using deep learning approaches.
\end{abstract}
\begin{keyword}
Deep Learning, Full Wave Inversion, Inverse problems, Kurganov-Tadmor, Markov chain Monte Carlo, Surrogate Model, Wave propagation.
\end{keyword}
\end{frontmatter}
\section{Introduction}
Tomography techniques are valuable and widely used in engineering, geoscience and medical sciences.
Examples of such tomography based techniques include Infrared Thermography \cite{Bagavathiappan2013, Lahiri2012}, Electrical Impedance Tomography \cite{Rashetnia2018a, Rashetnia2017, Smyl2017, Rashetnia2017a, Brown2009, Davalos2004, Borcea2002},
Electrical Capacitance Tomography \cite{Voss2018,Voss2017},
radar acoustics and Radiography \cite{Buyukozturk1998, Topczewski2007}, X-Ray Computed Tomography \cite{Balazs2018, Mees2003, Ketcham2001}, and stress wave based tomography \cite{Choi2015, Lin2018, Krause2001, Schickert2005, Haza2013, Beniwal2016, Liu2010, Liu2011, Zhu2007, Yu2019, Beniwal2015, Liu2019, Kawashima2006, Rashetnia2018, Law2005, Lu2006, Rashetnia2016}.
Among these methods, stress wave-based methods are attractive since they can provide information about the mechanical properties of materials and structures.
Stress wave methods are mostly refraction and reflection tomography techniques, which use only the travel time kinematics of the transducer data.
Full Wave Inversion (FWI) is a type of stress wave tomography which uses complete waveforms and derives high resolution velocity models by minimizing the difference between observed and modeled waveforms.
FWI goes beyond refraction and reflection tomography techniques, which use only the travel time kinematics of the signals, by using additional information provided by the amplitude and phase of the stress waveform.
The highly detailed models provided by FWI can be used to resolve complex mechanical features both in time and frequency domains\cite{Guitton2012,Hu2009,Lin2015a,Lin2015b,Vigh2008}.
Only a few studies have applied deep learning to solving FWI for mechanical behavior reconstruction \cite{Lewis2017,Richardson2018}.
This paper provides a computational inverse framework for stress wave FWI tomography to quantify distribution of elastic moduli and densities at the same time or separately.
For numerical inverse solutions of stress wave tomography, nonlinear least-squares, Newton's method or other data-fitting methods are natural choices, commonly used for finding the coefficients in the systems of partial differential equations chosen to model the wave physics.
However, they may be cumbersome and computationally expensive to implement for the stress wave differential equation, especially for finer mesh discretizations.
Also, gradient-based methods require proper objective functions and constraints for optimal convergence, and they can fail to reach global minima in cases of nonconvexity, high ill-posedness and nonlinearity.
The numerical implementation of FWI is a highly nonlinear, ill-posed and often nonconvex inverse problem \cite{Virieux2009}.
This becomes worse in the case of materials with large property gradients, nonlinearity or shock waves.
To resolve these issues, we employ a surrogate deep learning interacting Markov chain Monte Carlo based optimization method to solve the FWI inverse problem and estimate properties of materials.
In this paper, a surrogate deep-learning optimization approach is presented to minimize the difference between observed and modeled responses.
The surrogate deep learning random search itself naturally produces trial property fields with large gradients and high heterogeneity.
Therefore, this method requires a wave propagation forward model which can properly handle heterogeneity, large property gradients, nonlinearity and shock waves.
The Kurganov-Tadmor (KT) high-resolution central finite volume scheme is used for solving stress waves in two-dimensional heterogeneous media.
KT is highly accurate for the quantification of material properties in the presence of large property gradients, nonlinearity or shock waves.
This scheme is non-oscillatory and enjoys the main advantage of Godunov-type central schemes: simplicity, i.e., it employs neither characteristic decomposition nor approximate Riemann solvers.
This makes it a universal method that can be applied to a wide variety of physical problems, including hyperbolic systems of conservation laws.
The KT central scheme has a numerical dissipation of amplitude $O(\Delta x^{3}/\Delta t)$ \cite{Kurganov2000}.
Besides the good resolution obtained by KT, it admits a semi-discrete formulation coupled with an appropriate ODE solver, retaining simplicity and high resolution with a lower numerical viscosity, proportional to the vanishing size of the time step $\Delta t$ \cite{Kurganov2000}.
This semi-discrete central scheme builds on the ideas of Rusanov's method, using more precise information about the local speeds of wave propagation computed at each Riemann problem in two space dimensions \cite{Rusanov1961}.
In this paper, KT is used as forward-wave propagation operator $f$ to map the stress wave velocity to ultrasonic stress wave signals.
In the following, we present KT scheme as the forward model used in this work.
Then, deep learning model used in the inverse problem is described.
This is followed by deep learning surrogate interacting Markov chain Monte Carlo inverse algorithm.
Then, results and discussions are provided and proposed method is investigated on three different examples.
Finally, we summarize our findings in conclusions.
\section{Forward model: Kurganov-Tadmor scheme}
\label{sec.KT}
The Kurganov-Tadmor (KT) high-resolution central scheme was recently developed by Kurganov and Tadmor \cite{Kurganov2000} and belongs to the family of Monotonic Upwind Schemes for Conservation Laws (MUSCL) \cite{Leer1979}.
MUSCL schemes are finite volume schemes for solving partial differential equations that can provide highly accurate numerical solutions, even in cases where the solutions exhibit shocks, discontinuities, or large gradients.
Examples of pioneering works in this direction include the first-order Lax-Friedrichs scheme \cite{Lax1954} and the Nessyahu-Tadmor (NT) scheme \cite{Nessyahu1990}, which offers higher resolution than the former.
Both the Lax-Friedrichs and NT schemes have a numerical viscosity of order $O((\Delta x)^{2r}/\Delta t)$ and suffer from high numerical viscosity when sufficiently small time steps are used ($\Delta t \rightarrow 0$) \cite{Kurganov2000}.
KT has a much smaller numerical viscosity, of order $O((\Delta x)^{2r-1})$, which is independent of $1/\Delta t$.
KT is independent of the eigenstructure of the problem and approximates the solutions of nonlinear conservation laws and convection-diffusion equations with less effort \cite{Kurganov2000}.
It also requires a much smaller number of mesh points across the wave compared with first-order methods of similar accuracy.
KT provides a simple semi-discrete formulation and also uses more precise information on the local propagation speeds, which in turn increases the accuracy of the solution.
For these reasons, KT is chosen for this study, which lets us achieve more accurate FWI results.
In this study, KT is implemented to solve the stress wave propagation equation,
\begin{equation}
\frac{\partial^{2} u}{\partial t^{2}} = c^{2}\nabla^{2} u
\label{PDE}
\end{equation}
where $u(x,t)$ is the displacement and $c$ is the wave speed.
To use the semi-discrete KT scheme, Equation \ref{PDE} needs to be expressed as a first-order hyperbolic system.
Therefore, we apply the following change of variables to Equation \ref{PDE}.
\begin{equation}
\frac{\partial}{\partial t} \varepsilon - \frac{\partial}{\partial x} v = 0
\label{1DPDEs_1}
\end{equation}
\begin{equation}
\frac{\partial}{\partial t}(\rho(x) v)-\frac{\partial}{\partial x} \sigma = 0
\label{1DPDEs_2}
\end{equation}
where $v$, $\varepsilon$, $\sigma$, and $\rho$ are velocity, strain, stress, and density respectively. In Equation \ref{1DPDEs_1}, $\varepsilon$ is the state variable, and $v$ is the flux. Similarly, in Equation \ref{1DPDEs_2}, $\rho(x) v$ is the state variable, and $\sigma$ is the flux.
In KT scheme, linear piecewise approximation of state variable ($\hat{u}$) is shown by Equation \ref{Piecewise} within each cell. Equation \ref{Piecewise} is used to discretize Equations \ref{1DPDEs_1} and \ref{1DPDEs_2} using slope-limited, left and right extrapolated state variable.
Hence, high resolution total variation diminishing discretization can be written as Equation \ref{TVD}
\begin{equation}
\hat{u}(x) = \hat{u}_{i} + \frac{x-x_{i}}{x_{i+1}-x_{i}} (\hat{u}_{i+1}-\hat{u}_{i}) \hspace{3 mm} \forall x \in (x_{i},x_{i+1}]
\label{Piecewise}
\end{equation}
\begin{equation}
\frac{d\hat{u}_{i}}{dt} + \frac{1}{\Delta x_{i}} [f^{*}(\hat{u}_{i+1/2})-f^{*}(\hat{u}_{i-1/2})] = 0
\label{TVD}
\end{equation}
where $i$ is the cell center index.
The fluxes $f^{*}(\hat{u}_{i\pm1/2})$ are nonlinear combinations of the first- and second-order approximations of the continuous flux function at the cell edges. The fluxes, $f^{*}(\hat{u}_{i\pm1/2})$, are calculated from Equations \ref{Fneg} and \ref{Fpos}.
\begin{equation}
f^{*}(\hat{u}_{i-\frac{1}{2}}) = \frac{1}{2} \{[f(\hat{u}^{R}_{i-\frac{1}{2}})+f(\hat{u}^{L}_{i-\frac{1}{2}})]-a_{i-\frac{1}{2}}[\hat{u}^{R}_{i-\frac{1}{2}}-\hat{u}^{L}_{i-\frac{1}{2}}]\}
\label{Fneg}
\end{equation}
\begin{equation}
f^{*}(\hat{u}_{i+\frac{1}{2}}) = \frac{1}{2} \{[f(\hat{u}^{R}_{i+\frac{1}{2}})+f(\hat{u}^{L}_{i+\frac{1}{2}})]-a_{i+\frac{1}{2}}[\hat{u}^{R}_{i+\frac{1}{2}}-\hat{u}^{L}_{i+\frac{1}{2}}]\}
\label{Fpos}
\end{equation}
where $R$ and $L$ denote the right and left cells at the $i\pm\frac{1}{2}$ edges. The local propagation speed at each cell edge, $a_{i\pm\frac{1}{2}}$, is the maximum absolute value of the eigenvalues of the Jacobian of $f(\hat{u}(x,t))$ over cells $i$ and $i\pm1$:
\begin{equation}
a_{i\pm\frac{1}{2}}(t)=max[\textrm{abs}(\rho(\frac{\partial f(\hat{u}_{i}(t))}{\partial \hat{u}})),\textrm{abs}(\rho(\frac{\partial f(\hat{u}_{i\pm 1}(t))}{\partial \hat{u}}))]
\label{F21}
\end{equation}
where $\rho(\cdot)$ denotes the spectral radius of $\frac{\partial f (\hat{u} (t))}{\partial \hat{u}}$ (not to be confused with the density $\rho(x)$).
We use a second-order Runge-Kutta method for the time integration of $\frac{d\hat{u}_{i}}{dt}$ after it is obtained from Equation \ref{TVD} \cite{Kurganov2000}.
Figure \ref{FC1} presents the flowchart of KT implementation.
\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{Flowchart1.pdf}
\caption{Flowchart of Kurganov-Tadmor central scheme}
\label{FC1}
\end{figure}
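To make the discretization concrete, the following is a minimal sketch (not the authors' implementation) of the semi-discrete KT scheme for a scalar one-dimensional conservation law: a minmod-limited piecewise-linear reconstruction as in Equation \ref{Piecewise}, the interface fluxes of Equations \ref{Fneg} and \ref{Fpos}, and second-order Runge-Kutta time stepping. The linear advection flux $f(u)=cu$ and the periodic boundaries are illustrative assumptions.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter (zero where the one-sided slopes disagree in sign)."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kt_rhs(u, c, dx):
    """Semi-discrete KT right-hand side for u_t + c u_x = 0, periodic BCs."""
    # limited cell slopes (piecewise-linear reconstruction, Eq. (Piecewise))
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u) / dx
    # left/right extrapolated states at the i+1/2 interfaces
    uL = u + 0.5 * dx * s                              # from cell i
    uR = np.roll(u, -1) - 0.5 * dx * np.roll(s, -1)    # from cell i+1
    a = abs(c)                                         # local speed: |f'(u)| = |c| here
    F = 0.5 * (c * uR + c * uL) - 0.5 * a * (uR - uL)  # KT flux, Eqs. (Fneg)/(Fpos)
    return -(F - np.roll(F, 1)) / dx                   # Eq. (TVD)

def kt_step(u, c, dx, dt):
    """One second-order (Heun) Runge-Kutta time step."""
    k1 = kt_rhs(u, c, dx)
    k2 = kt_rhs(u + dt * k1, c, dx)
    return u + 0.5 * dt * (k1 + k2)
```

For the linear flux the local speed $a_{i\pm 1/2}$ reduces to $|c|$; for a nonlinear system it would be the spectral-radius bound of Equation \ref{F21}.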
\section{Deep learning model}
\label{DLM}
In this paper, a deep neural network with fully connected layers is used, taking nodal elastic moduli and densities as inputs and producing nodal strains over certain boundaries as outputs.
Figure \ref{DLM_1} presents the deep learning architecture used in this study.
The architecture includes an input layer ($L_{0}$) of nodal KT material characteristics, hidden layers ($L_{1}$--$L_{H-1}$) and an output layer ($L_{H}$) of nodal strain values over certain boundaries.
For these networks, a large number of hidden layers is considered to account for the potentially sophisticated nonlinear function mapping the inputs to the outputs; the number of layers and the units per layer are chosen as hyperparameters based on the network's performance for each example.
All hidden layers use the tanh activation to model the nonlinearity, and the output layer has a linear activation to reconstruct the strain values.
The inverse problem of the wave propagation equation is non-convex.
In order to deal with the non-convexity of the deep network optimization, the logistic regression cost function was used.
The Stochastic Gradient Descent (SGD) approach was used for network optimization.
To improve model performance, $L_{2}$ and dropout regularization approaches were used.
The $L_{2}$ regularization parameters were chosen using an initial batch of random parameter vectors as training and development sets to avoid overfitting (high variance) and underfitting (high bias).
To better capture randomness, the dropout method was used; the dropout probability is chosen for each example as a regularization hyperparameter.
For the weight initialization of the deep network, the Xavier initialization approach is used for faster convergence of the optimization.
Mini-batch gradient descent with momentum and RMSprop (the Adam method) is also used to speed up optimization.
The mini-batch sizes, momentum terms, learning rate and its decay coefficient are tuned as hyperparameters.
In order to tune all deep-learning hyperparameters such as number of hidden layers, layers' units, regularization parameters, mini-batch sizes, Adam hyperparameters and learning rates, initial deep learning network is trained using KT forward models.
\begin{figure}[ht!]
\centering
\includegraphics[width=3.5in]{Deeplearning.jpg}
\caption{Deep learning architecture.}
\label{DLM_1}
\end{figure}
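As a concrete illustration of the architecture just described (tanh hidden layers, a linear output layer, Xavier-style initialization), a minimal NumPy sketch follows; the layer sizes are placeholders, not the networks used in the examples, and training, dropout and Adam are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(sizes):
    """Xavier-style initialized weights/biases for a fully connected network.
    `sizes` lists the unit counts from input layer L0 to output layer LH."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_out, n_in))
        params.append((W, np.zeros(n_out)))
    return params

def forward(params, x):
    """tanh hidden layers; linear output layer for strain reconstruction."""
    a = x
    for W, b in params[:-1]:
        a = np.tanh(W @ a + b)
    W, b = params[-1]
    return W @ a + b
```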
\section{Inverse model: deep learning surrogate interacting Markov chain Monte Carlo search}
\label{subsect.SMARS}
The inverse problem is posed as the estimation of the mechanical properties of media using stress wave propagation, which has a non-convex error surface with multiple local minima due to data sparsity.
Therefore, gradient-based optimization algorithms may converge to local minima and become computationally very expensive.
Further, the high ill-posedness of this inverse problem increases the variance of gradient-based algorithms, which then require more complex regularization techniques without guaranteeing better reconstructions.
In this study, the Interacting Markov chain Monte Carlo (IMCMC) algorithm is used as the search algorithm in order to find global minima and handle the data sparsity.
Unlike most current MCMC methods, which ignore the previous trials, here an optimized deep learning model is used to speed up the Markov chain Monte Carlo algorithm by an order of magnitude \cite{Tahmasebi2016}.
In this method, the IMCMC algorithm iteratively draws random samples from a sequence of probability distributions and estimates their errors with respect to the waveform measurements, which increases the level of sampling complexity \cite{DelMoral2013}.
Then, using the derived data set, a deep learning model is trained on the samplings.
Finally, the deep learning model is optimized to estimate the best unknown parameters, i.e., those providing the lowest errors with respect to the measurements.
This sequence is repeated iteratively until the global minimum is acquired.
The IMCMC method samples from a sequence of probability distributions and avoids convergence to local minima.
The objective is to minimize the model error function, $e^{FM}(\vec{p})$, per Equation \ref{error}
\begin{equation}
e^{FM}(\vec{p})=\sum^{m}_{i=1}\|f_{i}^{FM}(\vec{p},\vec{q})-f_{i}^{mes}(\vec{q})\|_{L_{2}(Q)}
\label{error}
\end{equation}
\noindent
where $\vec{p} \in \mathbb{R}^{n}$ is a vector of $n$ unknown material parameters,
$\vec{q} \in Q$ is the vector of variables in the measurement domain (e.g., time, boundary conditions), $f_{i}^{FM}(\vec{p},\vec{q})$ is the strain response from the forward model (i.e., KT),
$f_{i}^{mes}(\vec{q})$ is the measured strain response, and $m$ is the number of response components.
Similarly, the deep learning model error function is defined per Equation \ref{error2}
\begin{equation}
e^{ML}(\vec{p})=\sum^{m}_{i=1}\|f_{i}^{ML}(\vec{p},\vec{q})-f_{i}^{mes}(\vec{q})\|_{L_{2}(Q)}
\label{error2}
\end{equation}
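Assuming the responses are sampled on a discrete measurement grid with quadrature weight `dq` (an assumption made here for illustration), the misfits of Equations \ref{error} and \ref{error2} can be sketched as:

```python
import numpy as np

def misfit(f_model, f_mes, dq):
    """Discrete version of the data misfit in Eqs. (error)/(error2): the sum over
    the m response components of the L2(Q) norm of (model - measured).
    f_model, f_mes: arrays of shape (m, n_q) sampled on the measurement grid;
    dq: quadrature weight of the grid (an assumption of this sketch)."""
    comp_norms = np.sqrt(np.sum((f_model - f_mes) ** 2, axis=1) * dq)
    return float(np.sum(comp_norms))
```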
The solution to the optimization problem is then defined by Equation \ref{min}
\begin{equation}
\vec{p^{*}}=\arg\min_{\vec{p} \in S^{n}}\{e^{FM}(\vec{p})\}
\label{min}
\end{equation}
\noindent
where, $\vec{p^{*}}$ is the best parameters vector and $S^{n}$ is constrained searching domain.
The iterative search algorithm continues until either the convergence tolerance, $\varepsilon$, is met or the maximum number of iterations exceeds the specified limit.
The search domain is specified for all $n$ parameters, $S^{n}=[\vec{s}^{1} \hspace{3 mm} \vec{s}^{2}]$.
First, the IMCMC algorithm randomly samples and stores $k$ parameter sets, $\vec{p}^{i} \in S^{n}; i=1,2,...,k$.
Then, the model error functions, $e^{FM}(\vec{p}^{i})$, are calculated and stored in the error function vector, $\vec{\mathcal{E}}_{1\times k}=\{e^{FM}(\vec{p}^{1}),e^{FM}(\vec{p}^{2}),...,e^{FM}(\vec{p}^{k})\}$.
Then, from the current error function vector, the lowest error value, $\textrm{min}(\vec{\mathcal{E}})=e^{FM}(\vec{p}^{*})$, and the corresponding set of parameters, $\vec{p}^{*}$, are identified.
$\vec{p}^{*}$ is taken as the closest approximation of the solution at the global minimum.
Using IMCMC in complex forward problems with a wide search range generally results in a slow convergence rate.
Therefore, surrogate models are used to accelerate the entire process by identifying locally optimal regions.
For this purpose, all the stored parameter sets and the corresponding error sets are used to train the surrogate model.
We use a fully connected deep network \cite{Chow2007}, trained on all parameter sets and their corresponding error functions.
Equation \ref{NN} presents the mapping system of the trained surrogate-model.
\begin{equation}
\vec{f}^{sm}:\vec{p}\longrightarrow e^{FM}(\vec{p})
\label{NN}
\end{equation}
Once trained, the surrogate model provides the surrogate error function, $e^{sm}(\vec{p})$, without the computational cost of the forward model.
To find the minimum of $e^{sm}(\vec{p})$, we use a genetic algorithm (GA).
Using the GA, the local minimum of the surrogate model, $\vec{p}^{sm}$, is found per Equation \ref{Addmin}
\begin{equation}
\vec{p}^{sm} = \arg\min_{\vec{p}}\{e^{sm}(\vec{p})\}
\label{Addmin}
\end{equation}
The local minimum, $\vec{p}^{sm}$, is added to the set of parameter vectors, $\vec{p}$.
Then the model error function, $e^{FM}(\vec{p}^{sm})$, is calculated using the forward model and added to the error function vector, $\vec{\mathcal{E}}$.
If $e^{FM}(\vec{p}^{sm})$ is lower than the convergence tolerance, the iterations are terminated. Otherwise, the best parameter set, $\vec{p}^{*}$, is updated and the following random search is executed.
New sets of parameter vectors are generated from normal distributions in the neighborhoods of the poles, and their corresponding model error functions are evaluated.
Using poles helps the search for the global minimum around all possible local minima.
A total of $j$ poles are chosen from the latest parameter vectors, $\vec{p}$, to cover the most probable area.
To identify the most probable area, all $\vec{p}$ are sorted by the magnitude of their model error function (lowest to highest error).
The poles are then selected within the lowest-error range.
Since $\vec{p}^{*}$ is the best answer, it forms the first pole.
The subsequent poles are chosen from the top-ranked population (e.g., the top $10\%$ and $30\%$), $\vec{\mathfrak{P}}=\{\vec{p}^{*},\vec{p}^{2},...,\vec{p}^{j}\}$.
A total of $g$ random parameter vectors are generated around each pole using a normal distribution.
The generated parameter vectors are added to the full set of parameter vectors.
The forward model is then used to evaluate the model error function for all newly added parameter vectors, and $\vec{p}^{*}$ is updated.
Finally, the search domain is updated around $\vec{p}^{*}$ per Equation \ref{update}
\begin{equation}
S^{n}[\vec{s}^{1} \hspace{3 mm} \vec{s}^{2}] = [a\times\vec{p}^{*} \hspace{3 mm} b\times\vec{p}^{*}]
\label{update}
\end{equation}
where $a$ and $b$ are coefficients adjusting the range.
The surrogate model is trained again, and the iteration continues until the convergence criterion is met or the maximum number of iterations is exceeded.
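A compact sketch of the overall loop described above follows, with hedged simplifications: a nearest-neighbor lookup stands in for the trained deep-network surrogate, random candidate screening replaces the GA step, and a single pole (the current best) is resampled rather than the full set of $j$ poles.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_search(e_fm, fit_surrogate, bounds, n_init=200, n_pole=20,
                     n_iter=10, tol=1e-3, a=0.8, b=1.2):
    """Skeleton of the surrogate-assisted search: sample, evaluate the
    forward-model error e_fm, fit a surrogate e_sm to the (p, error) pairs,
    minimize the surrogate, resample around the best pole, shrink the domain."""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(n_init, len(lo)))      # initial random batch
    E = np.array([e_fm(p) for p in P])
    for _ in range(n_iter):
        e_sm = fit_surrogate(P, E)                       # train surrogate e^sm(p)
        # cheap surrogate minimization (stand-in for the GA step)
        cand = rng.uniform(lo, hi, size=(2000, len(lo)))
        p_sm = cand[np.argmin([e_sm(p) for p in cand])]
        P = np.vstack([P, p_sm]); E = np.append(E, e_fm(p_sm))
        best = P[np.argmin(E)]
        if E.min() < tol:
            break
        # resample around the best pole with a normal distribution
        new = np.clip(rng.normal(best, 0.05 * (hi - lo),
                                 size=(n_pole, len(lo))), lo, hi)
        P = np.vstack([P, new]); E = np.append(E, [e_fm(p) for p in new])
        lo, hi = a * best, b * best                      # shrink domain, Eq. (update)
    return P[np.argmin(E)], float(E.min())
```

In the paper, the surrogate is the deep network of Section \ref{DLM} and the surrogate minimization uses a GA with $j$ poles; the skeleton only mirrors the sample/evaluate/train/shrink structure.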
\section{Results and discussion}
\label{subsect.RD}
In this section, we solve three examples to demonstrate the application of the proposed method for estimating the mechanical properties of solid materials.
All three examples involve the propagation of stress waves in heterogeneous two-dimensional media.
In all examples, the mechanical properties are estimated by sending stress waves from two sides and capturing the strain response at the other two sides.
Figure \ref{SCHE} schematically illustrates all three examples.
In all examples, the experimental wave propagations were simulated by solving the KT forward model and adding $0.1\%$ Gaussian noise.
As measurements, the nodal strain vector is provided over certain boundaries.
As boundary conditions for all examples shown, velocity-controlled stress waves are injected over the left and bottom edges, $\Gamma$, and nodal strains are recorded as measurements over the right and top edges, $\Psi$.
The velocity of the excitation wave over $\Gamma$ is defined by Equation \ref{BC1}.
It should be noted that the discretizations of the simulated forward models and the inverse models were different.
\begin{eqnarray}
v(\eta,t) = \sin(2\pi t); \hspace{2 mm} \eta \in \Gamma \hspace{25 mm} t \leq 1
\nonumber \\
v(\eta,t) = 0; \hspace{2 mm} \eta \in \Gamma \hspace{40 mm} t > 1
\label{BC1}
\end{eqnarray}
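For reference, the excitation of Equation \ref{BC1} and the $0.1\%$ Gaussian measurement noise can be sketched as follows; scaling the noise by the peak absolute strain is our assumption, as the reference amplitude is not specified above.

```python
import numpy as np

def excitation(t):
    """Velocity excitation on Gamma, Eq. (BC1): one cycle of sin(2*pi*t), zero afterwards."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 1.0, np.sin(2.0 * np.pi * t), 0.0)

def add_noise(strain, level=0.001, rng=np.random.default_rng(0)):
    """Add 0.1% Gaussian noise to simulated strain measurements
    (scaled by the peak absolute strain; this scaling is an assumption)."""
    return strain + level * np.max(np.abs(strain)) * rng.standard_normal(strain.shape)
```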
\begin{figure}[ht!]
\centering
\subfloat[]{{\includegraphics[width=3.5cm]{1.jpg} }}%
\quad
\subfloat[]{{\includegraphics[width=3.5cm]{2.jpg} }}%
\quad
\subfloat[]{{\includegraphics[width=3.5cm]{3.jpg} }}%
\caption{Schematic illustrations of all three simulated experiments.}
\label{SCHE}
\end{figure}
\subsection{Example 1}
\label{subsec.E1}
In the first example, Figure \ref{SCHE}a, we assume prior knowledge of the density and Poisson's ratio, equal to 1 $g/cm^{3}$ and 0.2, respectively.
The objective is to estimate the nodal elastic moduli distribution in the domain.
In most tomography problems, the main objective is anomaly and damage detection.
To achieve this objective, the most common approach is to calculate static or dynamic elastic moduli distributions over the domain.
Example 1, therefore, focuses on the reconstruction of the elastic moduli distribution over a domain using stress wave propagation.
To do so, the deep learning network is first trained to connect the nodal elastic moduli to the nodal strain measurement vector.
Then, using the deep learning surrogate IMCMC algorithm, the distribution of elastic moduli is reconstructed.
For this example, the deep learning input layer ($L_{0}$) is a vector of 10000 units using normalized nodal elastic modulus values.
The output layer ($L_{H}$) is a vector of 202 units which uses normalized nodal strain values at $\Psi_{L}$ and $\Psi_{T}$ boundaries as is shown by Figure \ref{SCHE}a.
A total of 30 hidden layers was used in the deep learning network in this example, where the number of units started from 10000 ($L_{1}$) and decreased gradually to 300 at the final hidden layer ($L_{H-1}$).
$L_{2}$ regularization parameters were chosen using an initial batch of random parameter vectors as train and development sets to avoid overfitting.
$25 \%$ dropout probability was chosen as the optimum regularization parameter.
The momentum terms, learning rate and learning rate decay all were tuned by training the network based on initial random search batch.
The parameters estimated at each iteration of the inverse problem are divided into 10 mini-batches, and each set of parameters generated statistically around the poles is treated as a new mini-batch.
The deep learning based IMCMC algorithm starts with an initial random batch of 100000 parameter vectors.
The search domain was initialized for the elastic modulus, $E$, between $[1 \hspace{3 mm} 20 ]$.
The training progress window was $a=0.8$ and $b=1.2$, which means that after surrogate model training, the search domain includes trial solutions within $\pm20\%$ of the optimal solution.
A total of $j=10$ statistical search poles were used: around the best parameter vector and around the best $5, 10, 15, 20, 25, 30, 35, 40, 45\%$ parameter vectors.
A total of $g=2000$ random parameter vectors were generated at each pole.
The convergence tolerance for this example was set to $10\%$.
The deep learning search algorithm iterated while the lowest error function remained higher than the convergence tolerance.
Figure \ref{E1_1}a and \ref{E1_1}b compare the true and the reconstructed elastic modulus distributions.
\begin{figure}[ht!]
\centering
\subfloat[True E]{{\includegraphics[width=4cm]{ARD.jpg} }}%
\quad
\subfloat[FWI reconstructed E]{{\includegraphics[width=4cm]{ARC.jpg} }}%
\caption{Nodal elastic moduli distribution of the domain shown by Figure \ref{SCHE}a: (a) true elastic moduli, (b) reconstructed elastic moduli.}
\label{E1_1}
\end{figure}
The results indicate that the reconstructed elastic moduli distribution is in good agreement with the true elastic moduli.
Comparing the true and reconstructed results, an average distributed error of $5 \%$ is measured between the true and reconstructed elastic moduli distributions.
Comparing the two images, the reconstructed image provides a proper distinction between the different material blocks.
Figure \ref{E1_2}a presents the measured nodal strain values at the $\Psi_{T}$ and $\Psi_{L}$ boundaries over time, which were considered as the optimization objectives between the real and simulated cases.
A total time of $30$ seconds was simulated, since it was sufficient to capture almost all propagation events inside the domain for the FWI reconstruction.
Figure \ref{E1_2}b presents the corresponding results for the reconstructed elastic moduli distribution.
The convergence norm between the true and estimated measurements at the final iteration was $5.77 \%$, which means the true and estimated measurements differed by an average of $\pm 5.77 \%$ over the time and space dimensions.
\begin{figure}[ht!]
\centering
\subfloat[]{{\includegraphics[width=5.5cm]{ARM.jpg} }}%
\quad
\subfloat[]{{\includegraphics[width=5.5cm]{AReM.jpg} }}%
\caption{Nodal strain values at the $\Psi_{T}$ and $\Psi_{L}$ boundaries with respect to time: (a) true strain measurements, and (b) reconstructed strain values.}
\label{E1_2}
\end{figure}
Finally, Figure \ref{E1_3} presents a comparison of the wave velocity distributions between the real and reconstructed domains at 1, 5, 10, 15 and 30 seconds.
The top row of Figure \ref{E1_3} presents wave velocity distribution inside real domain at 1, 5, 10, 15 and 30 seconds respectively from left to right.
The bottom row of Figure \ref{E1_3} presents wave velocity distribution inside reconstructed domain at 1, 5, 10, 15 and 30 seconds respectively from left to right.
Similarly, Figure \ref{E1_3} suggests very good agreement between the two cases.
\begin{figure}[ht!]
\centering
\subfloat{{\includegraphics[width=2.5cm]{AR1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{AR2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{AR3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{AR4.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{AR5.jpg} }}%
\\
\subfloat{{\includegraphics[width=2.5cm]{ARe1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{ARe2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{ARe3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{ARe4.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{ARe5.jpg} }}%
\caption{Wave velocity distribution inside domain: (top row) inside real domain at 1, 5, 10, 15 and 30 seconds respectively from left to right, (bottom row) inside FWI reconstructed domain at 1, 5, 10, 15 and 30 seconds respectively from left to right.}
\label{E1_3}
\end{figure}
\subsection{Example 2}
\label{subsec.E2}
In the second example, Figure \ref{SCHE}b, we only assume prior information that Poisson's ratio equals 0.2.
Therefore, the objective is to estimate distributions of elastic moduli and densities over the domain.
Often, the objective of tomography problems extends to estimating the actual material properties.
To achieve such an objective, the most common approach is to estimate wave velocity distributions over the domain, which contain the combined effects of elastic modulus and density.
Using common approaches (wave travel time, gradient-based inverse methods, and generally qualitative tomographic methods), it is very cumbersome to estimate the elastic moduli and density distributions separately at the same time.
Also, due to the probable high ill-posedness of the inverse problem, convergence to the optimal response is not guaranteed.
To achieve this objective, here, the deep learning network was first trained to connect the vector of nodal elastic moduli and densities to the nodal strain measurement vector.
Then, using the optimization of the deep learning model, the distributions of elastic moduli and densities are estimated at the same time.
In this example, the deep learning input layer ($L_{0}$) was a vector of 20000 units using normalized nodal elastic moduli and densities at the same time.
The output layer ($L_{H}$) is a vector of 202 units which uses normalized nodal strain values at $\Psi_{L}$ and $\Psi_{T}$ boundaries as is shown by Figure \ref{SCHE}b.
A total of 45 hidden layers was used in the deep learning network in this example, where the number of units started from 20000 ($L_{1}$) and decreased gradually to 202 at the final hidden layer ($L_{H-1}$).
Similarly, $L_{2}$ regularization and dropout method were used.
$35 \%$ dropout probability was chosen as the optimum regularization parameter.
The dropout probability here was chosen higher than in the first example to introduce more randomness and to mitigate the vanishing gradient issue in this example.
Similar momentum terms, learning rate and learning rate decay were used in this example as example 1.
After each iteration of the inverse problem, the new parameters are divided into 10 mini-batches, and each set of parameters generated statistically around the poles is treated as a new mini-batch.
The deep learning based IMCMC starts with 300000 initial random parameter vectors batch as random batch.
The search domain was initialized for the elastic modulus, $E$, between $[1 \hspace{3 mm} 20 ]$, and for the density, $\rho$, between $[0.001 \hspace{3 mm} 2 ]$ (the lower bound is above zero to avoid singularity).
The training window was $a=0.8$ and $b=1.2$, so that parameters are searched within $\pm20\%$ of the optimal solution in every iteration.
10 statistical search poles were used, $j=10$, around best parameter vector and best $5, 10, 15, 20, 25, 30, 35, 40, 45\%$ parameter vectors.
A total of $g=4000$ random parameter vectors were generated at each pole.
The convergence tolerance for this example was set to $10\%$; however, the inverse problem had not converged to less than $10\%$ before the number of iterations exceeded the iteration limit.
Figure \ref{E2_1} compares the true and reconstructed elastic moduli and density distributions.
\begin{figure}[ht!]
\centering
\subfloat[True E]{{\includegraphics[width=4cm]{BReal_E.jpg} }}%
\quad
\subfloat[FWI reconstructed E]{{\includegraphics[width=4cm]{BRec_E.jpg} }}%
\\
\subfloat[True $\rho$]{{\includegraphics[width=4cm]{BReal_rho.jpg} }}%
\quad
\subfloat[FWI reconstructed $\rho$]{{\includegraphics[width=4cm]{BRec_rho.jpg} }}%
\caption{(a) True elastic moduli distribution, (b) FWI reconstructed elastic moduli distribution, (c) True densities distribution, (d) FWI reconstructed densities distribution.}
\label{E2_1}
\end{figure}
According to Figure \ref{E2_1}, the results indicate that the reconstructed elastic moduli and density distributions are in agreement with the true ones.
Comparing the true and reconstructed $E$ results, an average distributed error of $1.3 \%$ is measured between the true and reconstructed elastic modulus distributions.
Comparing the true and reconstructed $\rho$ results, an average distributed error of $1 \%$ is measured between the true and reconstructed density distributions.
Comparing the two columns, the proposed FWI deep learning method succeeded in reconstructing both the $E$ and $\rho$ distributions simultaneously.
\begin{figure}[ht!]
\centering
\subfloat{{\includegraphics[width=5.5cm]{BRM.jpg} }}%
\quad
\subfloat{{\includegraphics[width=5.5cm]{BReM.jpg} }}%
\caption{Nodal strain values at $\Psi_{T}$ and $\Psi_{L}$ boundaries with respect to time: (a) true strain measurements, and (b) FWI reconstructed measurements.}
\label{E2_2}
\end{figure}
Figure \ref{E2_2} presents the measured nodal strain values over $\Psi_{T}$ and $\Psi_{L}$ across the whole 15-second time range; these measurements were the optimization objectives matching the real and simulated cases.
The convergence norm between the true and the estimated measurements at the final iteration was $14.99\%$, meaning the estimated measurements vary within $\pm 14.99\%$ of the true measurements.
In this example, the convergence condition of $10\%$ was not met.
Therefore, once the iteration count exceeded the iteration limit, the inverse problem was terminated.
Reconstructing both densities and elastic moduli at the same time introduces greater non-uniqueness and non-convexity into the inverse problem, which is why it was not able to converge to the $10\%$ $L_{2}$-norm tolerance.
Nevertheless, as Figure \ref{E2_2} shows, the two sets of measurements remain comparable.
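The convergence norm quoted above appears to be a relative $L_{2}$ misfit between the measurement sets; a minimal sketch of such a measure (our formula, not the authors' code) is:

```python
import numpy as np

def convergence_norm(true_meas, est_meas):
    """Percent relative L2 misfit between two equally shaped
    measurement arrays (e.g. nodal strain histories)."""
    true_meas = np.asarray(true_meas, dtype=float)
    est_meas = np.asarray(est_meas, dtype=float)
    return 100.0 * np.linalg.norm(est_meas - true_meas) / np.linalg.norm(true_meas)

# A uniform 10% deviation yields a 10% convergence norm.
print(convergence_norm([1.0, 2.0, 2.0, 1.0], [1.1, 2.2, 2.2, 1.1]))
```

Under this reading, a final norm of $14.99\%$ means the estimated strain histories deviate from the true ones by about $15\%$ in the $L_2$ sense.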
\begin{figure}[ht!]
\centering
\subfloat{{\includegraphics[width=3cm]{BR1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BR2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BR3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BR4.jpg} }}%
\\
\subfloat{{\includegraphics[width=3cm]{BRe1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BRe2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BRe3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=3cm]{BRe4.jpg} }}%
\caption{Wave velocity distribution inside domain: (top row) inside real domain at 1, 5, 10 and 15 seconds respectively from left to right, (bottom row) inside FWI reconstructed domain at 1, 5, 10 and 15 seconds respectively from left to right.}
\label{E2_3}
\end{figure}
Finally, Figure \ref{E2_3} compares the wave velocity distributions in the real and reconstructed domains at 1, 5, 10 and 15 seconds.
The top row of Figure \ref{E2_3} presents the wave velocity distribution inside the real domain at 1, 5, 10 and 15 seconds, respectively, from left to right; the bottom row presents the corresponding distributions inside the reconstructed domain.
Reconstructing both densities and elastic moduli at the same time using transient wave propagation can be cumbersome for two main reasons.
First, simulating wave propagation through realistic, detailed volumetric representations of heterogeneous materials is demanding because of the large computational requirements of higher-order simulation methods and the high chance of instability of lower-order methods.
Numerical methods to accurately simulate the wave equation are constantly being developed and applied with increasing levels of sophistication; the Kurganov-Tadmor (KT) scheme has been shown to be an efficient method for simulating nonlinear conservation differential equations with high resolution, and it can handle strong scattering and shock waves.
Second, varying density and elastic modulus together can introduce a large amount of non-convexity.
Here, the comparison of results shows that the proposed deep-learning-based inverse model together with the KT forward model was able to handle these complexities.
\subsection{Example 3}
\label{subsec.E3}
In the third example, Figure \ref{SCHE}c, we consider a Poisson's ratio equal to 0.2 and known nodal elastic moduli; the objective is therefore to estimate the density distribution of the domain.
This can be helpful in cases where different materials with different densities have similar elastic moduli.
For this example, a deep learning model similar to that of example 1 was used.
However, wave propagation simulation is more sensitive to variations in density than to variations in elastic modulus, and varying the densities causes greater ill-posedness of the inverse problem.
Therefore, a deeper learning network was trained here to connect the vector of nodal densities to the nodal strain measurements.
As in example 1, the deep learning input layer ($L_{0}$) was a vector of 10000 units using the normalized nodal densities.
The output layer ($L_{H}$) is a vector of 202 units holding the normalized nodal strain values at the $\Psi_{L}$ and $\Psi_{T}$ boundaries, as shown in Figure \ref{SCHE}b.
A total of 30 hidden layers were used in the deep learning network in this example, with the number of units starting from 10000 ($L_{1}$) and decreasing gradually to 202 at the final hidden layer ($L_{H-1}$).
As before, $L_{2}$ regularization and the dropout method were used, and a $30\%$ dropout probability was chosen as the optimum regularization parameter; this probability was chosen higher than in the first example to introduce more randomness.
The same momentum terms, learning rate and learning rate decay were used as in example 1.
The RS-based parameters are divided into 10 mini-batches, and each set of parameters generated statistically around the poles is treated as a new mini-batch.
The deep-learning-based IMCMC algorithm starts with an initial batch of 100000 random parameter vectors.
The search domain was initialized for the density $\rho$ over $[0.001 \hspace{3 mm} 5]$ (the lower bound is kept above zero to avoid singularity).
The training window parameters were $a=0.8$ and $b=1.2$, so that RS parameters were searched within $\pm20\%$ of the optimal solution in every iteration.
Ten statistical search poles were used, $j=10$, placed around the best parameter vector and the best $5, 10, 15, 20, 25, 30, 35, 40$ and $45\%$ parameter vectors, and a total of $g=4000$ random parameter vectors were generated at each pole.
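The layer-width schedule just described, 30 hidden layers tapering from 10000 units down to 202, can be sketched as follows (the linear interpolation rule and the weight-decay value are our assumptions; the paper only states the endpoints, the layer count, and the dropout probability):

```python
import numpy as np

n_hidden = 30   # hidden layers L_1 ... L_{H-1}
# Taper the widths gradually from 10000 (L_1) down to 202 (L_{H-1}),
# matching the 202 boundary strain measurements of the output layer.
widths = np.linspace(10000, 202, n_hidden).round().astype(int)

dropout_p = 0.30    # dropout probability quoted in the text
l2_weight = 1e-4    # L2 regularization strength (placeholder value)

print(widths[:3], widths[-3:])
```

Any standard deep learning framework could realize this schedule as a stack of dense layers, each followed by dropout with probability `dropout_p` and penalized with `l2_weight`.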
Figure \ref{E3_1} compares the true and the reconstructed density distributions.
\begin{figure}[ht!]
\centering
\subfloat[True $\rho$]{{\includegraphics[width=4cm]{CRD.jpg} }}%
\quad
\subfloat[FWI reconstructed $\rho$]{{\includegraphics[width=4cm]{CRC.jpg} }}%
\caption{(a) True density distribution, (b) FWI reconstructed density distribution.}
\label{E3_1}
\end{figure}
According to Figure \ref{E3_1}, the reconstructed density distribution is in agreement with the true one.
An average distributed error of $3.2\%$ is measured between the true and reconstructed density distributions.
The proposed method thus succeeded in reconstructing the $\rho$ distribution in good agreement with the real case.
\begin{figure}[ht!]
\centering
\subfloat{{\includegraphics[width=5.5cm]{CRM.jpg} }}%
\quad
\subfloat{{\includegraphics[width=5.5cm]{CReM.jpg} }}%
\caption{Nodal strain values at $\Psi_{T}$ and $\Psi_{L}$ boundaries with respect to time: (a) true strain measurements, and (b) FWI reconstructed measurements.}
\label{E3_2}
\end{figure}
Figure \ref{E3_2} presents the measured nodal strain values over $\Psi_{T}$ and $\Psi_{L}$ across the whole 30-second time range; these measurements were the optimization objectives matching the real and simulated cases.
The convergence norm between the true and the estimated measurements at the final iteration was $8.7\%$, meaning the estimated measurements vary within $\pm 8.7\%$ of the true measurements.
\begin{figure}[ht!]
\centering
\subfloat{{\includegraphics[width=2.5cm]{CR1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CR2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CR3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CR4.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CR5.jpg} }}%
\\
\subfloat{{\includegraphics[width=2.5cm]{CRe1.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CRe2.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CRe3.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CRe4.jpg} }}%
\smallskip
\subfloat{{\includegraphics[width=2.5cm]{CRe5.jpg} }}%
\caption{Wave velocity distribution inside domain: (top row) inside real domain at 1, 5, 10, 15 and 30 seconds respectively from left to right, (bottom row) inside FWI reconstructed domain at 1, 5, 10, 15 and 30 seconds respectively from left to right.}
\label{E3_3}
\end{figure}
Finally, Figure \ref{E3_3} compares the wave velocity distributions in the real and reconstructed domains at 1, 5, 10, 15 and 30 seconds.
The top row of Figure \ref{E3_3} presents the wave velocity distribution inside the real domain at these times, respectively, from left to right; the bottom row presents the corresponding distributions inside the reconstructed domain.
As discussed for example 2, transient-wave-based reconstruction is demanding both because of the computational cost and instability risks of the forward simulation and because of the non-convexity introduced by varying the material properties; here, too, the comparison of results shows that the proposed deep-learning-based inverse model together with the Kurganov-Tadmor (KT) forward model was able to handle these complexities.
\section{Conclusions}
\label{sect.Conclusions}
We proposed a deep learning surrogate IMCMC algorithm for quantitatively estimating the mechanical properties of heterogeneous media.
In this inverse problem, we utilized a high-resolution central scheme, Kurganov-Tadmor (KT), as the forward model for stress wave propagation; this forward model handles material heterogeneity and gradients well.
Although the inverse problem is ill-posed and non-convex, the algorithm successfully estimates the mechanical properties.
This approach was used instead of an end-to-end deep learning model connecting nodal strain measurements to nodal elastic moduli, because the latter suffers from high ill-posedness and lower reconstruction accuracy.
Comparing the reconstructions with the real values suggests that the deep learning surrogate IMCMC method is able to quantify material characteristics properly.
\section{Introduction}
Given a group endomorphism $\varphi:\pi \to \pi$, consider the (left) action of $\pi$ on $\pi$ via $\sigma \cdot \alpha \mapsto \sigma \alpha \varphi(\sigma)^{-1}$. The set of orbits of this action, denoted by $\mathcal R(\varphi)$, is the set of $\varphi$-twisted conjugacy classes or the set of {\it Reidemeister classes}. The cardinality of $\mathcal R(\varphi)$ is called the {\it Reidemeister number} $R(\varphi)$ of $\varphi$. The study of Reidemeister classes arises naturally in the classical Nielsen-Reidemeister fixed point theory (see e.g. \cite{J}). More precisely, for any selfmap $f:M\to M$ of a compact connected manifold $M$ with $\dim M\ge 3$, the minimum number of fixed points among all maps homotopic to $f$ is equal to the Nielsen number $N(f)$ which is bounded above by the Reidemeister number $R(f)=R(\varphi)$ where $\varphi$ is the induced homomorphism by $f$ on $\pi_1(M)$. While $N(f)$ is an important homotopy invariant, its computation is notoriously difficult. When $M$ is a Jiang-type space, then either $N(f)=0$ or $N(f)=R(f)$. While $N(f)$ is always finite, $R(f)$ need not be. Thus, when $R(f)=\infty$ we have $N(f)=0$ which implies that $f$ is deformable to be fixed point free. As a consequence of the $R_{\infty}$ property, it is shown in \cite{GW2} that for any $n\ge 5$, there exists a compact $n$-dimensional nilmanifold on which {\it every} self homeomorphism is isotopic to a fixed point free homeomorphism.
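As a simple illustration of these notions (a standard computation added here for the reader, not a result of this paper), consider $G=\mathbb Z$ and $\varphi(n)=dn$: in additive notation the twisted action reads $\sigma\cdot\alpha=\alpha+(1-d)\sigma$, so two integers are $\varphi$-conjugate if and only if they differ by a multiple of $1-d$, giving

```latex
\[
  \mathcal R(\varphi)\;\cong\;\mathbb Z/(1-d)\mathbb Z,
  \qquad
  R(\varphi)=
  \begin{cases}
    |1-d|, & d\neq 1,\\
    \infty, & d=1.
  \end{cases}
\]
```

In particular $\mathbb Z$ does not have property $R_{\infty}$: for instance $d=-1$ yields $R(\varphi)=2$.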
In \cite{LL}, it is shown that if $\varphi$ is an automorphism of a finitely generated non-elementary word hyperbolic group then $R(\varphi)=\infty$.
Since then many classes of groups have been shown to possess property $R_{\infty}$. However, most of the methods employed in these works have been ad hoc and specific to the classes of groups in question. On the other hand, $\Sigma$-theory, i.e., the Bieri-Neumann-Strebel invariant \cite{BNS}, has been used in \cite{GK} to prove property $R_{\infty}$ under certain conditions on $\Sigma^1$. Subsequent work in \cite{GS, GSS, KW2, SgW} further explore the use of $\Sigma$-theory in connection with property $R_{\infty}$. From the point of view of geometric group theory, it is natural to ask whether property $R_{\infty}$ is geometric, i.e., invariant up to quasi-isometry. In general, $R_{\infty}$ is not even invariant under commensurability and hence not invariant under quasi-isometry. The simplest example is that of $\mathbb Z$ as an index $2$ subgroup of the infinite dihedral group $D_{\infty}$ (see e.g. \cite{GW2}) where the former does not have $R_{\infty}$ while the latter does.
Since being non-elementary and word hyperbolic is geometric, the work of \cite{LL} implies that $R_{\infty}$ is invariant under quasi-isometry for the family of finitely generated non-elementary word hyperbolic groups (see also \cite{fel} in which a sketch of proof was given for non-elementary relative hyperbolic groups). Another family is that of the amenable or solvable Baumslag-Solitar groups $BS(1,n)$ for $n>1$. These groups have been completely classified in \cite{FM} up to quasi-isometry. For higher $BS(m,n)$ where $m\ge 2$ and $n>m$, it turns out that they are all quasi-isometric to each other as shown in \cite{Wh}. These Baumslag-Solitar groups (the fundamental group of the torus, $BS(1,1)$, is excluded here) have been shown in \cite{FeG} to have property $R_{\infty}$. More generally, the family of generalized Baumslag-Solitar (GBS) groups \cite{L} and any groups quasi-isometric to them also have property $R_{\infty}$ \cite{TW2}. Moreover, $R_{\infty}$ is also invariant under quasi-isometry for a certain solvable generalization of the $BS(1,n)$ \cite{TW1}.
As another class of examples, let $\Lambda$ be an irreducible lattice in a connected semisimple non-compact real Lie group $G$ with finite centre. It is known that any finitely generated group $\Gamma$ quasi-isometric to $\Lambda$ has the $R_\infty$-property \cite{ms}.
Despite the success in \cite{LL,TW1,TW2,ms}, there have been no new examples of groups for which property $R_{\infty}$ is geometric. One difficulty is the determination of the group of quasi-isometries in general. As a first step, we ask
\begin{question}
For what class of groups is $R_{\infty}$ a commensurability property? Equivalently, if $G$ has property $R_{\infty}$ and $\Gamma$ is commensurable to $G$, (i.e., there exist subgroups $H < G, \bar H < \Gamma$ so that $H\cong \bar H, [G:H]<\infty, [\Gamma:\bar H]<\infty$) when does $\Gamma$ also have property $R_{\infty}$?
\end{question}
One of the main results of this paper is the following theorem; for the definition of the class of groups $\mathcal S$, as well as the proof of the theorem, see \S 3.
\begin{theorem}\label{comm1}
Let $G$ be a finitely generated group. Suppose every finite index subgroup $H$ has the property that $b_1(H)=b_1(G)$. If $G\in \mathcal S$ then every group $\hat G$ commensurable to $G$ also has property $R_{\infty}$.
\end{theorem}
The objective of this paper is to begin a systematic approach to studying $R_{\infty}$. We give conditions under which, when employing $\Sigma$ - theory, property $R_{\infty}$ is invariant under commensurability. In doing so, we introduce a stronger notion of $R_{\infty}$, namely $R^{\chi}_{\infty}$ in section 2. When the complement $(\Sigma^1)^c$ is a finite spherical polytope lying inside an open hemisphere, we can find a point $[\chi]\in S(G)$ that is fixed by all automorphisms of $G$. If $[\chi]$ is {\it rigid} then $G$ has property $R^{\chi}_{\infty}$ (Theorem \ref{SO-R}) and hence $R_{\infty}$.
In section 4, we investigate situations when the $\Sigma$ - invariants of $G$ are preserved under automorphisms of a finite index subgroup $H$. In section 5, we construct new families of groups that are direct products and free products with property $R_{\infty}$.
\section{Background on BNS invariants and $R_{\infty}$}
\subsection{Sigma invariants}
Let $G$ be a finitely generated group. The set ${\rm Hom}(G,\mathbb R)$ of homomorphisms from $G$ to the additive group $\mathbb R$ is a real vector space with dimension equal to $m$, the ${\mathbb Z}$-rank of the abelianization $G^{\rm ab}$ of $G$. Denote by $\partial_{\infty}{\rm Hom}(G,\mathbb R)$ the boundary at infinity of $\mathbb R^m$ (i.e.\ the set of geodesic rays in $\mathbb R^m$ initiating from the origin). This is homeomorphic to the {\it character sphere of $G$} defined as the set of equivalence classes $S(G) := \{ [\chi] | \chi \in {\rm Hom}(G,\mathbb{R}) - \{ 0 \} \}$ where $\chi_1 \sim \chi_2$ if and only if $\chi_1 = r\chi_2$ for some $r > 0$. Let $\Gamma$ denote the Cayley graph of $G$ with respect to a fixed generating set $S$. Given $[\chi] \in S(G)$, define $\Gamma_\chi$ to be the subgraph of $\Gamma$ generated by the vertices $\{ g \in G \vert \chi(g) \geq 0 \}$. We say $[\chi] \in \Sigma^1(G)$ if $\Gamma_{\chi}$ is path connected.
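For orientation, we record some standard computations of $\Sigma^1$ (well known in the BNS literature; stated here as a reader's aid, not as results of this paper):

```latex
% Three standard examples of the invariant:
\[
  \Sigma^1(\mathbb Z^n)=S(\mathbb Z^n)\cong \mathbb S^{\,n-1},
  \qquad
  \Sigma^1(F_n)=\emptyset \quad (F_n \text{ free of rank } n\ge 2),
\]
% while for the solvable Baumslag-Solitar groups the complement
% $\Sigma^1(BS(1,n))^c$ is a single point of the $0$-sphere $S(BS(1,n))$.
```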
For $n>1$, there are higher order $\Sigma$ - invariants $\Sigma^n$ introduced in \cite{BR}.
The following are some well-known and useful facts (see e.g. \cite{Strebel}). The notation $\Sigma^1(G)^c$ represents the complement of $\Sigma^1(G)$ in $S(G)$.
\begin{proposition}\label{Factor}
Suppose $\phi:G \to H$ is an epimorphism, and $\chi \in {\rm Hom}(H,\mathbb{R})$. If $[\chi \circ \phi] \in \Sigma^1(G)$, then $[\chi] \in \Sigma^1(H)$.
\end{proposition}
\begin{proposition}\label{Sigma-Product}
For finitely generated groups $G$ and $H$, $\Sigma^1(G \times H)^c = (\Sigma^1(G)^c \circledast \emptyset) \cup (\emptyset \circledast \Sigma^1(H)^c)$ where $\circledast$ denotes the spherical join on the character sphere $S(G\times H)$.
\end{proposition}
Consider a group extension given by the following short exact sequence
$$
1\to H\to G\to K \to 1
$$
where $H$ and $G$ are finitely generated and $K$ is finite. Since $K$ is finite, the restriction homomorphism $\textrm{Hom}(G,\mathbb R)\to \textrm{Hom}(H,\mathbb R)$ is a monomorphism so that $S(G)$ can be regarded as a subsphere of $S(H)$. The following expression relates the $\Sigma$ - invariants of $G$ with those of $H$ (see \cite[Cor. 3.2]{KW2} or \cite[Thm. 9.3]{mmv}).
\begin{proposition}\label{sigma-finite}
For $n\ge 1$,
$$
\Sigma^n(G)=\Sigma^n(H)\cap \partial_{\infty}{\rm Fix} (\hat \nu)
$$
where $\nu: K\to G$ is any left transversal such that $\nu(1_K)=1_G$, and ${\rm Fix} (\hat \nu)=\{\phi \in {\rm Hom}(H,\mathbb R)\mid \phi(\nu(q)^{-1}h\nu(q))=\phi(h) \text{~for all $h\in H, q\in K$}\}$ is a subspace of ${\rm Hom}(H,\mathbb R)$ .
\end{proposition}
\subsection{Property $R^{\chi}_{\infty}$}
Recall from \cite{GK} that the role $\Sigma$-theory plays is that the $\Sigma$-invariant can be used to obtain a rational point on the character sphere that is fixed by all automorphisms. In fact, the underlying principle is the existence of a character $\chi:G\to \mathbb R$ such that $\chi\circ \varphi=\chi$ for {\it all} $\varphi\in {\rm Aut}(G)$. In this case, the image ${\rm Im}(\chi)$ is a finitely generated abelian subgroup of $\mathbb R$ and is isomorphic to $\mathbb Z^r$ for some positive integer $r$. The equality $\chi\circ \varphi=\chi$ implies that $\varphi$ induces the identity on ${\rm Im}(\chi)$, which implies that $R(\varphi)=\infty$ since ${\rm Ker}(\chi)$ is characteristic.
\begin{definition}\label{rigid} (Cf. \cite[Definition 1.6, \S6.E]{GSS}.)
Let $G$ be a finitely generated group and let $\chi: G\to \mathbb R$ be a non-trivial character. The character $\chi$ is said to be {\it rigid} if for any $r\in \mathbb R$, $r\cdot {\rm Im}(\chi)={\rm Im}(\chi)$ implies $r=\pm 1$. We say the character class $[\chi]$ is {\it rigid}
if for any $s>0$, the character $s\cdot \chi$ is rigid.
\end{definition}
Note that a character $\chi$ is rigid if and only if its class $[\chi]$ is.
Thus, if for all $\varphi \in {\rm Aut}(G)$, $[\chi \circ \varphi]=\varphi^*([\chi])=[\chi]$ and $[\chi]$ is rigid then $\chi \circ \varphi=\chi$ for all $\varphi\in {\rm Aut}(G)$. Evidently, if $[\chi]$ is rational (i.e., $\chi(G)\cong \mathbb Z$) then $[\chi]$ is rigid.
Recall from \cite[\S6E]{GSS} that a character $\chi$ as well as the class $[\chi]$ are called {\it transcendental} if
$\mathrm{Im}(\chi)\subset \mathbb R$ has the property that if $a, b\in {\rm Im}\chi$ are non-zero, then $a/b$ is either rational or transcendental. It follows that if $[\chi]$ is transcendental then it is also rigid. It is easily seen that if $\chi:G\to \mathbb R$ has image $\mathbb Z+2^{1/3} \mathbb Z$, then $[\chi]$ is rigid;
evidently it is not transcendental.
Suppose that $\chi:G\to \mathbb R$ is a transcendental character and $\chi':G\to \mathbb R$ is a character such that ${\rm Im}(\chi')\subset {\rm Im}(\chi)$. Then $\chi'$ is also transcendental. This is in general not true of rigid characters. For example, if $G=\mathbb Z^3$ and $\chi,\chi' :G\to \mathbb R$ are defined as $\chi(a,b,c)=a+b\sqrt 2+c\pi$, $\chi'(a,b,c)=a+b\sqrt 2$, then ${\rm Im}(\chi')\subset {\rm Im}(\chi)$ but $\chi'$ is not rigid, even though $\chi$ is.
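The rigidity of a class with image $\Lambda=\mathbb Z+2^{1/3}\,\mathbb Z$, asserted above, can be checked directly (our verification, supplied for the reader): if $r\Lambda=\Lambda$ with $r\neq 0$, then

```latex
\[
  r = r\cdot 1 = a + b\,2^{1/3}\ \ (a,b\in\mathbb Z)
  \;\Longrightarrow\;
  r\cdot 2^{1/3} = a\,2^{1/3} + b\,2^{2/3}\in\Lambda
  \;\Longrightarrow\; b=0,
\]
% since $1, 2^{1/3}, 2^{2/3}$ are linearly independent over $\mathbb Q$.
% Hence $r=a\in\mathbb Z$; the same argument applied to $r^{-1}$ (which
% also satisfies $r^{-1}\Lambda=\Lambda$) gives $r^{-1}\in\mathbb Z$,
% so $r=\pm 1$.
```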
\begin{remark} Suppose a finitely generated group $G$ has a character sphere $S(G)$ of dimension $n=\dim S(G)$. Then for any automorphism $\varphi \in {\rm Aut}(G)$, the induced homeomorphism $\varphi^*: S(G)\to S(G)$ has topological degree $\pm 1$. The Lefschetz number $L(\varphi^*)=1+(-1)^n\cdot \deg \varphi^*$. Thus, if $n$ is even and $\deg \varphi^*=1$ then the Lefschetz Fixed Point Theorem asserts that $\varphi^*([\chi])=[\chi]$ for some $[\chi]$. However, there is no guarantee that $[\chi]$ is rigid. Similarly, if $\Sigma^1(G)^c$ is topologically a disk, then the Brouwer Fixed Point Theorem asserts every $\varphi^*$ has a fixed point but again such a fixed point need not be rigid. In fact, there exists a group $G$ \cite{GSS} where $S(G)$ has a point $[\chi]$ that is fixed by $\varphi^*$ for all $\varphi\in {\rm Aut}(G)$ but $[\chi]$ is {\it not} rigid.
\end{remark}
The existence of such a globally {\it fixed} character {\it that is witnessed by $\Sigma$-theory} leads us to the following stronger notion of property $R_{\infty}$.
\begin{definition}\label{witness}
A group $G$, not necessarily finitely generated, is said to have property $R^{\chi}_{\infty}$ if there exists a non-trivial character $\chi: G \to \mathbb R$ such that $\chi \circ \varphi=\chi$ for all $\varphi\in {\rm Aut}(G)$. Note that if $G$ has property $R^{\chi}_{\infty}$, it necessarily must have property $R_{\infty}$.
\end{definition}
\begin{example}\label{witness-ex}
Take $G=F_r\times BS(1,2)\times BS(1,2)$ where $F_r$ is the free group of rank $r\ge 2$. It is easy to see that the complement $[\Sigma^1(G)]^c=\mathbb S^{r-1}\cup \{+\infty\} \cup \{ +\infty'\}$ is an infinite set, where $+\infty$ and $+\infty'$ denote the north poles of the two distinct copies of $S(BS(1,2))$ and $\mathbb S^{r-1}$ is an $(r-1)$-dimensional sphere disjoint from $+\infty$ and $+\infty'$. It follows that either each of the points $+\infty$ and $+\infty'$ is fixed, in which case one of these endpoints yields a character that is fixed by all automorphisms, or $[\bar{\chi}]$, which corresponds to the point on the arc obtained by taking the {\it average} of the characters $\chi, \chi \circ \varphi$ associated to those two points, is fixed by $\varphi^*$ for all $\varphi \in {\rm Aut}(G)$. Here $\varphi^*$ is the homeomorphism of $S(G)$ induced by $\varphi$. Since the points $+\infty$ and $+\infty'$ are rational, it follows that $[\bar \chi]$ is also rational and hence rigid. Again, we conclude that $\bar \chi$ is fixed by all automorphisms. Hence, $G$ has property $R^{\chi}_{\infty}$.
\end{example}
On the contrary, there are examples of groups for which the property $R_\infty$ does not imply the property
$R_\infty ^\chi$.
\begin{example}\label{non-witness-ex}
By analyzing the automorphisms of the fundamental group of the Klein Bottle $K$ as in \cite[Lemma 2.1, Theorem 2.2]{GW2}, it is straightforward to see that there is no $[\chi]\in S(\pi_1(K))=\Sigma^1(\pi_1(K))\cong \mathbb S^0$ that is fixed by all automorphisms. Thus, $\pi_1(K)$ has property $R_{\infty}$ but not $R^{\chi}_{\infty}$.
\end{example}
\section{Conditions on the first Betti number}\label{Betti_1}
Consider a group extension
\begin{equation}\label{f-ext}
1\to H\to G\to K\to 1
\end{equation}
where $H$ and $G$ are finitely generated and $K$ is finite. Let $\nu :K\to G$ be a left transversal with
$\nu(1_K)=1_G$.
The following simple relation between property $R_{\infty}$ for $G$ and that for $H$ is straightforward (see e.g. \cite{GW1}).
\begin{lemma}\label{Rinfty_for_finite}
Given the extension \eqref{f-ext},
if $H$ is characteristic and has property $R_{\infty}$ then $G$ has property $R_{\infty}$.
\end{lemma}
Recall that a subset $P \subset S=\mathbb R^n\setminus 0/\!\sim$ is a {\it spherical polytope} if there exist $v_1,\ldots,v_m\in
\mathbb R^n$ such that (i) all the $v_j$ are in a half-space, that is,
there exists a linear map $f:\mathbb R^n\to \mathbb R$ such that $f(v_j)>0~\forall j$, (ii)
$P=
\{[v]\in S\mid v=\sum_{1\le j\le m} a_jv_j,~a_j\ge 0, v\ne 0\}$ and (iii) $v_1,v_2,\ldots, v_m$
is a minimal set of such
vectors; equivalently, no $v_i$ is a positive linear combination of a subset of $v_j, j\ne i$.
Then $[v_1],\ldots,[v_m]$ are the {\it vertices} of $P$. If $\alpha$ is any homeomorphism of $S$ induced by an
automorphism of the vector space $\mathbb R^n$ such that $\alpha(P)=P$, then $\alpha$ permutes the vertices of $P$.
\begin{definition}\label{SO}
Let $\mathcal S$ denote the class of all finitely generated groups which satisfy the following two conditions:
\begin{enumerate}
\item $\Sigma^1(G)^c$ is non-empty and
lies inside an open hemisphere of the character sphere $S(G)$;
\item the connected components of $\Sigma^1(G)^c$ are finite
spherical polytopes with transcendental vertices.
\end{enumerate}
\end{definition}
Now let $G$ be any group, not necessarily finitely generated. Suppose that the abelianization $G^{\textrm{ab}}\cong \mathbb Z^n\oplus T$ where $T\le G^{\textrm{ab}}$ is the torsion subgroup. Thus $G^\textrm{ab}/T$ is free abelian of finite rank.
One may still define the character sphere $S(G)$ exactly as for finitely generated groups and
we note that it is homeomorphic to $S(G^{\textrm{ab}})\cong \mathbb S^{n-1}$. Moreover, if $\phi:G\to G$ is any automorphism, then $\phi$ induces
a homeomorphism of $S(G)$. A {\it hemisphere} in $S(G)$ is defined as follows: Suppose that $g\in G$ is not
in $[G,G]$. The hemispheres defined by $g$ are:
$H_g^+:=\{[\chi ]\in S(G)\mid \chi(g)>0\}$ and $H^-_g=\{[\chi]\in S(G)\mid \chi(g)<0\}$.
Clearly $H^\pm_g=H^\mp_{g^{-1}}$.
The hemispheres are homeomorphic to the open balls in $S(G)$. Moreover,
if $\phi:G\to G$ is any automorphism, then $\phi^*:S(G)\to S(G)$ preserves the collection of hemispheres.
These statements follow from the observation that any automorphism of $G$ induces an $\mathbb R$-linear automorphism of the ($n$-dimensional) vector space $\textrm{Hom}(G,\mathbb R)$. In fact, automorphisms of $G$ preserve the $\mathbb Q$-structure
$\textrm{Hom}(G;\mathbb Q)\subset \textrm{Hom}(G,\mathbb R)$. It is readily seen that if $\chi$ is transcendental (resp. rigid), so is $\phi^*(\chi)=\chi\circ \phi$.
When $G$ is not finitely generated, the $\Sigma$-invariant $\Sigma^1(G)$ as in \cite{BNS} is not available.
Although the definition due to K. S. Brown \cite{Br} is applicable, we will not need it for our present purposes.
Let $K\lhd G$ be a {\it characteristic} subgroup of a group $G$ such that $\bar G:=G/K$ is a finitely
generated infinite group. We do not assume that $G$ is finitely generated.
Then any automorphism $\phi$ of $G$ induces an automorphism $\bar \phi:\bar G\to \bar G$.
This leads to a homomorphism $\Psi: \textrm{Aut}(G)\to \textrm{Aut}(\bar G)$. Since $\bar G$ is finitely generated, one can apply $\Sigma$-theory to obtain $\Sigma^1(\bar G)\subset S(\bar G).$
Note that $\textrm{Aut}(G)$ acts on $S(\bar G)$ (via $\Psi$) preserving the sets $\Sigma^1(\bar G)$ and $\Sigma^1(\bar G)^c$ in $S(\bar G)$. Observe that $\Sigma^1(\bar G)^c$ depends not only on $G$ but also
on the choice of $K$ and is therefore not intrinsic to $G$.
\begin{definition}\label{snotfg} Let $\widetilde{\mathcal S}$ denote the class of all groups $G$ having a characteristic subgroup $K$ with quotient $G/K$ in $ \mathcal S$.
\end{definition}
\begin{lemma}\label{average}
Let $G$ be any group such that $G^{\rm ab}$ has finite rank.
Suppose $\chi: G\to \mathbb R$ is a transcendental
character and $\phi:G\to G$ is an automorphism such that the
$\phi^*$-orbit of $[\chi]$ is finite. Let $\chi_j=\chi\circ \phi^j, 0\le j<r$
where $r>0$ is the least positive integer so that $[\chi_r]=[\chi]$.
Suppose that the $[\chi_j]$ are in an open hemisphere of $S(G)$. Let
$\eta=\sum_{0\le j<r} \chi_j.$ Then $\eta$ is transcendental.
\end{lemma}
\begin{proof} We note that ${\rm Im}(\eta)\subset {\rm Im}(\chi)$. So it suffices to
show that $\eta$ is non-zero. But this follows from our hypothesis
that the $[\chi_j]$ are in an open hemisphere. \end{proof}
\begin{theorem}\label{SO-R}
If $G\in \widetilde{\mathcal S}$, then $G$ has property $R^{\chi}_{\infty}$.
\end{theorem}
\begin{proof} Let $K\lhd G$ be a characteristic subgroup such that $\bar G:=G/K$ is in $\mathcal S.$
As noted above, $\textrm{Aut}(G)$ acts on $S(\bar G)$ leaving $\Sigma^1(\bar G)^c$ invariant.
Since $\bar G\in \mathcal S$, $\Sigma^1(\bar G)^c$ is a non-empty (finite) union of spherical polytopes. Moreover, the group
$\textrm{Aut}(G)$ acts on the
set of
vertices of $\Sigma^1(\bar G)^c$.
Since these vertices are transcendental and all of them are contained in an open half-space, by Lemma \ref{average}, we can find a transcendental character $\chi: \bar G \to
\mathbb R$ that is fixed by all automorphisms of $G$. It follows that the character $G\to \bar G\stackrel{\chi}{\longrightarrow} \mathbb R$ is fixed by $\textrm{Aut}(G)$. Hence $G$ has property $R^{\chi}_{\infty}$.
\end{proof}
Denote by $b_1(\Gamma)$ the first Betti number of a group $\Gamma$.
\begin{lemma}\label{betti}
Given the extension \eqref{f-ext}, if $b_1(H)=b_1(G)$ and $G\in \mathcal S$ then $H\in \mathcal S$ and hence has property $R^{\chi}_{\infty}$.
\end{lemma}
\begin{proof} Since $b_1(H)=b_1(G)$, we conclude that the character sphere of $G$ coincides with the character sphere of $H$, that is, $S(G)=S(H)$. By \cite[Prop. 2.1, 2.3]{KW2},
$\partial_{\infty}{{\rm Fix} (\hat \nu)}=\partial_{\infty}{\rm Hom}(H,\mathbb R)$. It follows from Prop. \ref{sigma-finite} that $G$ and $H$ have the same $\Sigma$ invariants. Since $G\in \mathcal S$ it follows that $H\in \mathcal S$ and the last assertion follows from Theorem \ref{SO-R}.
\end{proof}
\begin{remark} It should be emphasized that if $G$ (and hence $H$ under the assumption $b_1(H)=b_1(G)$) has empty or symmetric (e.g. $\Sigma^1(\pi_1(M))=-\Sigma^1(\pi_1(M))$ where $M$ is a closed orientable $3$-manifold \cite{BNS}) $\Sigma$ - invariants then we simply cannot deduce any information regarding property $R_{\infty}$. For example, consider the classical lamplighter groups $L_n=\mathbb Z_n\wr \mathbb Z$. It is known \cite{GW1} that $L_n$ has property $R_{\infty}$ iff $\mathrm{gcd} (n,6)>1$. However, $\Sigma^1(L_n)=\emptyset$ for any $n\in \mathbb N$. Another such example is the fundamental group $\Gamma$ of a non-prime $3$-manifold where $\Gamma$ has property $R_{\infty}$ \cite{GSW2} but $\Sigma^1(\Gamma)=\emptyset$. Furthermore, if $M$ is a closed orientable $3$-manifold with $\mathbb H^2\times \mathbb R$ geometry then $\pi_1(M)$ has property $R_{\infty}$ \cite{GSW1} while the fundamental group of the $3$-torus does not. Here, both fundamental groups have non-empty symmetric $\Sigma^1$.
\end{remark}
We shall now prove Theorem \ref{comm1}.
\begin{proof} Let $\hat G$ be commensurable to $G$ so that there exist $H\le G, \hat H\le \hat G$ such that $[G:H]<\infty, [\hat G:\hat H]<\infty$ and $\hat H\cong H$. Let $C_H$ be the core of $H$ in $G$, so that $C_H\le H$ and $C_H\unlhd G$. Since $H$ is of finite index in $G$, so is $C_H$. Now $b_1(C_H)=b_1(H)=b_1(G)$, so by Lemma \ref{betti} we conclude that $C_H\in \mathcal S$. Furthermore, $H$ has the same $\Sigma$-invariants as $G$, so we conclude that $H\in \mathcal S$. Since $\hat H \cong H$, $\hat H\in \mathcal S$. Now $\Gamma_{\hat H}:=\bigcap_{\varphi \in {\rm Aut}(\hat G)} \varphi(C_{\hat H})$ also has finite index in $\hat G$ and is characteristic in $\hat G$. Note that $\Gamma_{\hat H}$ is isomorphic to a subgroup $\bar H\le H$ of finite index in $H$. Since $b_1(\bar H)=b_1(G)$ by assumption, the subgroup $\bar H\in \mathcal S$, and hence $\Gamma_{\hat H}$ has property $R^{\chi}_{\infty}$. Applying Lemma \ref{Rinfty_for_finite}, we conclude that $\hat G$ has property $R_{\infty}$.
\end{proof}
\begin{remark}
Lemma \ref{Rinfty_for_finite} does not necessarily imply $R^{\chi}_{\infty}$ for the extension unless it has the same $\Sigma$ - invariants as the kernel. Thus, in the proof of Theorem \ref{comm1}, if we know for instance that $b_1(\Gamma_{\hat H})=b_1(\hat G)$ then we can conclude that $\hat G$ also has property $R^{\chi}_{\infty}$.
\end{remark}
\begin{example}\label{BS}
Recall that property $R_{\infty}$ is a quasi-isometric invariant for the class of solvable Baumslag-Solitar groups (and their solvable generalizations) \cite{TW1}. It is known (see e.g., \cite{KW2})
that $\Sigma^1(BS(1,n))=\{-\infty\}$ contains exactly one rational point and $b_1(BS(1,n))=1$. Furthermore, if $H$ is a finite index subgroup of $BS(1,n)$, then $H$ itself is a $BS(1,n^m)$ (see e.g. \cite{Bo}), so that $b_1(H)=1$. Thus Theorem \ref{comm1} gives a different proof of the fact that $R_{\infty}$ is invariant under commensurability for the class of solvable Baumslag-Solitar groups.
\end{example}
\begin{example}\label{gamma-n}
For any $n\ge 2$, write $n=p_1^{y_1}...p_r^{y_r}$ as its prime decomposition. Define a solvable generalization of the solvable Baumslag-Solitar groups by
$$\Gamma_n=\langle a,t_1,...,t_r \mid t_it_j=t_jt_i, t_iat_i^{-1}=a^{p_i^{y_i}}, i=1,...,r.\rangle.$$
Evidently, when $r=1$, $\Gamma_n=BS(1,n)$. In \cite{SgW}, it has been shown that $\Sigma^1(\Gamma_n)^c$ is a finite set of rational points all lying inside an open hemisphere so that $\Gamma_n\in \mathcal S$. Moreover, a presentation is also found for any finite index subgroup $H$ of $\Gamma_n$. Using this presentation, one can show that $b_1(H)=b_1(\Gamma_n)=r$.
Thus Theorem \ref{comm1} gives a different proof of the fact that $R_{\infty}$ is invariant under commensurability for this class of generalized solvable Baumslag-Solitar groups.
\end{example}
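To make the presentation above concrete, here is the case $n=12=2^2\cdot 3$ (so $r=2$) written out, together with the direct check of the claim $b_1(\Gamma_n)=r$:

```latex
% The case n = 12 = 2^2 * 3, so r = 2, p_1^{y_1} = 4, p_2^{y_2} = 3:
\Gamma_{12}=\langle a,t_1,t_2 \mid t_1t_2=t_2t_1,\ t_1at_1^{-1}=a^{4},\ t_2at_2^{-1}=a^{3}\rangle.
% Abelianizing (written additively), the last two relations become
% 3a = 0 and 2a = 0, hence a = 0, and the abelianization is Z^2,
% freely generated by the images of t_1 and t_2:
b_1(\Gamma_{12})={\rm rk}_{\mathbb Z}\,H_1(\Gamma_{12})=2=r.
```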
Next, we exhibit more examples for which $b_1(H)=b_1(G)$. Note that in general, $b_1(G)\le b_1(H)$.
\begin{example}\label{semi-simple}
Let $G$ be a connected semi-simple Lie group
having real rank at least $2$ and $\Gamma$ be an irreducible lattice in $G$. For every finite index subgroup $H$ in $\Gamma$, $b_1(H)=b_1(\Gamma)=0$. However, in this case, $S(H)=\emptyset=S(\Gamma)$ and hence both $H$ and $\Gamma$ have empty $\Sigma$ - invariants.
\end{example}
\begin{example}\label{PL}
Certain subgroups of ${\rm PL}_{o}([0,1])$ (the group of orientation-preserving PL-homeomorphisms of $[0,1]$) possess this property \cite[section 6]{GSS}.
\end{example}
\begin{example}\label{inner}
Suppose $G=H\rtimes_{\theta}K$ where $K$ is finite and
$\theta:K\to {\rm Aut}(H)$ is the action. If $\theta(K)\subset {\rm Inn}(H)$ then $b_1(H)=b_1(G)$. From Stallings' $5$-term exact sequence, we have the following exact sequence
$$
H_2(K) \to H/[G,H] \to H_1(G) \to H_1(K)\to 0.
$$
Since $K$ is finite, both $H_2(K)$ and $H_1(K)$ are finite. It follows that
$${\rm rk}_{\mathbb Z}\left(H/[G,H]\right)={\rm rk}_{\mathbb Z}(H_1(G))=b_1(G).
$$
Since $[H,H]\le [G,H]$, it suffices to show that $[H,H]=[G,H]$ under our assumptions. Any $g\in G$ can be uniquely written as $g=\hat h \bar k$, where $\hat h\in H$ and $\bar k$ is the image of $k\in K$ under the section given by the splitting. For any $h\in H$,
\begin{equation*}
\begin{aligned}
ghg^{-1}h^{-1}&=\hat h\bar kh{\bar k}^{-1}{\hat h}^{-1}h^{-1} \\
&=\hat h \theta(k)(h){\hat h}^{-1}h^{-1} \\
&=\hat h \eta h\eta^{-1}{\hat h}^{-1}h^{-1} \qquad \text{for some $\eta\in H$ since $\theta(k)\in {\rm Inn}(H)$}\\
&=(\hat h\eta)h(\hat h\eta)^{-1}h^{-1} \in [H,H]
\end{aligned}
\end{equation*}
It follows that $[G,H]=[H,H]$ and hence we have $b_1(H)=b_1(G)$.
\end{example}
\begin{lemma}\label{simple_subgp}
Let $G$ be any group. Suppose the commutator subgroup $[G,G]$ contains an infinite simple group $K$ with $[[G,G]:K]<\infty$. Then for any finite index subgroup $H$ of $G$, $b_1(H)=b_1(G)$.
\end{lemma}
\begin{proof} To see this, first note that for every finite index subgroup $H$, its core ${\rm core}_G(H)=C_H\le H$ is normal and has finite index in $G$. Now, $K\cap C_H$ has finite index in $K$ so $K\cap C_H$ is non-trivial. Since $K$ is simple and ${\rm core}_K(K\cap C_H)\le K\cap C_H \le K$, it follows that $K\cap C_H=K$ so $K\le H$. Again, $K$ being simple means that $K=[K,K]$. Since $K=[K,K]\le [H,H] \le [G,G]$ and $K$ has finite index in $[G,G]$, we conclude that $[H,H]$ has finite index in $[G,G]$. It follows from Stallings' $5$-term exact sequence that $b_1(H)=b_1(G)$. Note that the argument above shows that {\it every} finite index subgroup of $G$ contains the simple group $K$.
\end{proof}
\begin{remark}
The hypotheses of Lemma \ref{simple_subgp} are satisfied by a large class of groups. In particular, for $n\ge 2$, the Houghton groups $H_n$ satisfy the conditions of Lemma \ref{simple_subgp} with $K=A_{\infty}$. Furthermore, let $S$ be a self-similar group and $G=V(S)$ be the associated Nekrashevych group. Then $[V(S), V(S)]$ is simple (see for instance \cite{N}). In fact, under certain conditions, $[G,G]$ can be of finite index in $G$ (\cite[Theorem 3.3]{SWZ}). Thus, by Lemma \ref{simple_subgp}, these aforementioned groups $G$ have the property that $b_1(H)=b_1(G)$ for all finite index subgroups $H$ in $G$.
\end{remark}
R. Thompson's group $F$ is known to have property $R_{\infty}$ \cite{BFG}. A different proof, using $\Sigma$-theory, has been given in \cite{GK}. In fact, one can conclude that $F\in \mathcal S$, so $F$ has property $R^{\chi}_{\infty}$. The next result now follows from Lemma \ref{simple_subgp}, the fact that $[F,F]$ is simple, and Theorem \ref{comm1}.
\begin{theorem}\label{Thompson_F}
Consider R. Thompson's group $F$. Then any group commensurable to $F$ also has property $R_{\infty}$.
\hfill $\Box$
\end{theorem}
\begin{remark}
Example \cite[5.5]{KW2} follows immediately from Theorem \ref{Thompson_F}. The generalized Thompson's groups $F_{0,n}$ have property $R_{\infty}$ and every group commensurable to one such also has property $R_{\infty}$. This result, including Theorem \ref{Thompson_F}, has been proven in \cite{GSS} using different methods.
\end{remark}
Another large class of interesting groups for which finite index subgroups have the same first Betti numbers is the class of lamplighter groups of the form $H\wr \mathbb Z$ where $H$ is a finite group. Since lamplighter groups have empty $\Sigma^1$, these groups exhibit different behavior as we illustrate in the next example.
\begin{example}\label{strange-lamplighter}
Let $p\ge 5$ be an odd prime. It follows from \cite{GW1} that $G=\mathbb Z_p\wr \mathbb Z$ does not have property $R_{\infty}$. Moreover, no finite index subgroup of $G$ has property $R_{\infty}$: since every subgroup of finite index in $G$ is of the form $(\mathbb Z_p)^k\wr \mathbb Z$ for some $k\in \mathbb N$, it follows from the main theorem of \cite{GW1} that such a subgroup does not have property $R_{\infty}$. (See Remark 2.)
\end{example}
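The numerical criterion quoted from \cite{GW1} is mechanical enough to tabulate. The sketch below (illustrative only; the function name is ours) encodes the statement of the criterion, not its proof:

```python
from math import gcd

def lamplighter_has_R_infty(n: int) -> bool:
    """Criterion of [GW1]: L_n = Z_n wr Z has property R_infty
    iff gcd(n, 6) > 1, i.e. iff n is divisible by 2 or by 3."""
    if n < 2:
        raise ValueError("n must be at least 2")
    return gcd(n, 6) > 1

# For an odd prime p >= 5 (as in the example above), gcd(p, 6) = 1,
# so L_p fails property R_infty; the same arithmetic holds for any
# power p^k, since gcd(p^k, 6) = 1 as well.
```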
\section{Invariance under ${\rm Aut}(H)$}
Consider the Artin braid group $B_3$ (on the disk) and its pure braid group $P_3$ on $3$ strands. The group $P_3$ is a normal subgroup of index $6$ in $B_3$. Moreover, $P_3\cong F_2\times \mathbb Z$ where $F_2$ is the free group on $2$ generators and $\mathbb Z$ is generated by the central element $\Delta$ which is the full-twist of the $3$ strands. It follows that $b_1(P_3)=3$ and $b_1(B_3)=1$. By Prop. \ref{sigma-finite}, $\Sigma^1(B_3)=\Sigma^1(P_3)\cap \partial_{\infty} {\rm Fix} (\hat \nu)$. Since $[B_3,B_3]$ is finitely generated, $\Sigma^1(B_3)=\{\pm \infty\}$. Furthermore, $P_3$ has property $R_{\infty}$ (see e.g. \cite{FGW}). Although $B_3$ and $P_3$ both have property $R_{\infty}$, neither of them belongs to $\mathcal S$. Observe that every automorphism of $B_3$ restricts to an automorphism of $P_3$. This leads us to investigate when $\Sigma^n(G)$ is invariant under ${\rm Aut}(H)$.
Based on Prop. \ref{sigma-finite}, one should seek conditions under which $\partial_{\infty}{\rm Fix} (\hat \nu)$ is invariant under automorphisms of $H$. Recall that for any left transversal $\nu:K\to G$ such that $\nu(1_K)=1_G$,
$$
{\rm Fix} (\hat \nu)=\{\phi\in {\rm Hom}(H,\mathbb R)\mid \phi(\nu(q)^{-1}h\nu(q))=\phi(h), \forall h\in H, \forall q\in K\}.
$$
For every $q\in K$, define $\alpha_q\in {\rm Aut}(H)$ by $\alpha_q(h)=\nu(q)^{-1}h\nu(q)$. It follows that ${\rm Fix} (\hat \nu)=\{\phi\in {\rm Hom}(H,\mathbb R)\mid \phi \circ \alpha_q=\phi, \forall q\in K\}$. Denote by
$\overline{\alpha_q}\in {\rm Out}(H)$ the image of $\alpha_q$ in ${\rm Out}(H)$.
\begin{proposition}\label{central-out}
Given a short exact sequence
$$
1\to H\to G\to K\to 1
$$
and a left transversal $\nu:K\to G$ with $\nu(1_K)=1_G$, if for every $q\in K$, $\overline{\alpha_q}\in Z({\rm Out}(H))$, the center of ${\rm Out}(H)$, then for any $\varphi \in {\rm Aut}(H)$ we have $\varphi(\Sigma^n(G))=\Sigma^n(G)$. Furthermore, if $G\in \mathcal S$ and if $\varphi^*(S(G))=S(G)$ for some $\varphi \in {\rm Aut}(H)$, then $R(\varphi)={\infty}$.
\end{proposition}
\begin{proof}
Given any $\varphi\in {\rm Aut}(H)$, there is an induced isomorphism $\hat \varphi$ on ${\rm Hom}(H,\mathbb R)$ given by $\hat \varphi(\phi)=\phi\circ \varphi$ for any $\phi\in {\rm Hom}(H,\mathbb R)$. Suppose $\phi \in {\rm Fix} (\hat \nu)$. For $\hat \varphi(\phi)\in {\rm Fix} (\hat \nu)$, we must have $\hat \varphi(\phi) \circ \alpha_q=\hat \varphi(\phi)$ for every $q\in K$. It follows that
$$
\phi\circ \varphi \circ \alpha_q=\phi\circ \varphi=\phi \circ \alpha_q\circ \varphi
$$
must hold for all $q\in K$. This equality holds if the automorphisms $\varphi \circ \alpha_q$ and $\alpha_q\circ \varphi$ differ by an inner automorphism. This holds under the assumption that $\overline{\alpha_q}$ lies in the center $Z({\rm Out}(H))$ for every $q\in K$. Now the invariance of $\Sigma^n(G)$ under ${\rm Aut}(H)$ follows from Prop. \ref{sigma-finite}. Since $G\in \mathcal S$ there exists a rigid character $\chi$ that
is fixed by all automorphisms of $G$. Since this character is obtained from the $\Sigma$ - invariants of $G$ which are invariant under ${\rm Aut}(H)$ and the subsphere $S(G)\subset S(H)$ is invariant under $\varphi$, we conclude that $\chi$ is also fixed by $\varphi^*$. It follows that $R(\varphi)={\infty}$.
\end{proof}
\section{More groups with $R^{\chi}_{\infty}$ or $R_{\infty}$}
Recall from Definition \ref{snotfg} the class of groups $\widetilde {\mathcal S}$. A group $G$ is in $\widetilde{\mathcal S}$
if there exists a characteristic subgroup $K\triangleleft G$ such that $G/K$ belongs to $\mathcal S$. It is not
assumed that $G$ is finitely generated.
In this section
we construct many families of groups (not necessarily finitely generated) that are direct products or free products of $G,H$ with $G\in \mathcal S$ where $H$ is a group belonging to certain families of groups described below.
(i) {\bf Divisible groups.} Recall that a group $G$ is divisible if given any element $g\in G$ and an integer $n>1$, there exists an $h\in G$
such that $g=h^n$. Examples of divisible abelian groups are $\mathbb Q^m\times (\mathbb Q/\mathbb Z)^n, m,n\in \mathbb N$.
It is known that there exist $2^{\aleph_0}$-many pairwise non-isomorphic groups which are generated by two elements and divisible.
(See \cite{lyndon-schupp}.)
These groups do not have any proper finite index subgroups. This family is closed under finite direct products. We shall
denote this class of groups by $\mathcal D$.
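As a trivial sanity check of the definition for the abelian example $\mathbb Q$ (written additively, so $g=h^n$ reads $g=nh$), exact rational arithmetic suffices; the helper name here is our own:

```python
from fractions import Fraction

def divide_by(g: Fraction, n: int) -> Fraction:
    """In (Q, +) every element g has an n-th 'root' h with n*h = g,
    namely h = g/n -- so the additive group Q is divisible."""
    if n <= 1:
        raise ValueError("n must exceed 1")
    return g / n

# Every element is divisible by every n > 1, with no remainder:
g = Fraction(7, 3)
for n in range(2, 12):
    assert n * divide_by(g, n) == g
```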
(ii) {\bf Torsion groups.} All torsion groups have vanishing $b_1$. This follows easily from the basic fact that homology commutes
with direct limit. This family of groups is huge and includes many interesting groups such as Grigorchuk groups, the group of finitary permutations
of $\mathbb N$, etc. Elementary (abelian)
examples include $A(\mathcal P):=\oplus_{p\in \mathcal P} \mathbb Z_p$, as $\mathcal P$ varies in the set of all (infinite) subsets of primes.
Denote this class of groups by $\mathcal T$.
(iii) {\bf Acyclic groups.} A group is said to be acyclic if its reduced homology with trivial $\mathbb Z$ coefficients vanishes.
This class includes the Higman four-group \cite{dyer-vasquez}
and binate towers \cite{berrick}.
It is known that
any finitely generated group admits an embedding into a finitely generated acyclic group \cite{baumslag-heller-dyer}. The class of finitely generated acyclic groups, denoted $\mathcal A$, is closed under finite direct products and finite free products.
(iv) {\bf Higher rank lattices.} Let $ G$ be a connected semisimple (real) linear Lie group which has no compact factors. Suppose that
the real rank of $G$ is at least $2$. (The real rank of a linear Lie group is the dimension of the largest diagonalizable subgroup
isomorphic to $\mathbb R_{>0}^\times$.) Let $L\subset G$ be an irreducible lattice in $G$.
Then it is a deep result of Margulis that any normal subgroup of $L$ is either finite or has finite index in $L$. Since $L$ itself
is not virtually abelian, it follows that $b_1(L)=0$ and that the same is true of any finite index subgroup of $L$. (This is not true
in the case of rank-$1$-lattices.) Again, if $L_i\subset G_i, 1\le i\le n,$
are irreducible higher rank lattices, then the product $L:=\prod_{1\le i\le n} L_i $ also has trivial abelianization.
Any finite index subgroup $\Lambda$ of $L$ admits a finite index subgroup $\Gamma$ which is a product $\prod \Gamma_i$ where each
$\Gamma_i\subset L_i$
is a sublattice, i.e., finite index subgroup of $L_i$. It follows that $b_1(\Gamma)=0$ and hence $b_1(\Lambda)=0$. Let us
denote by $\mathcal L$ the class of all finite index subgroups $\Lambda$ of direct products $L=\prod_{1\le i\le n}L_i$ as above.
We now construct new examples of groups with property $R^{\chi}_{\infty}$.
\begin{proposition} \label{newgroups} We keep the above notation. Let $\mathcal C$ denote $\mathcal D\cup \mathcal T\cup
\mathcal A\cup \mathcal L.$ Let $G$ be a group belonging to $\widetilde {\mathcal S}$ and $H$ a group in $ \mathcal C$. Suppose that
every homomorphism $H\to G$ is trivial. Then:
(i) $G\times H$ belongs to $\widetilde{\mathcal S}$ and hence has property $R^{\chi}_{\infty}$.
If $G$ is finitely generated, then $G*H\in \widetilde{ \mathcal S}$ and so has property $R^\chi_\infty$.
(ii) Let $K$ be a finite index subgroup of $G$ such that $b_1(K)=b_1(G)$. Then $K\times H, K*H \in \widetilde{\mathcal S}$ and so have property $R^{\chi}_{\infty}$.
\end{proposition}
\begin{proof}
(i) Since any homomorphism $H\to G$ is trivial, the subgroup $H=1\times H\subset G\times H$ is
characteristic in $G\times H$. Since $G\in \widetilde{\mathcal S}$, there exists a characteristic subgroup
$K\triangleleft G$ such that $\bar G:=G/K$ is in $\mathcal S$. Then $K\times H$ is characteristic
in $G\times H$. This is because any automorphism of $G\times H$ maps $H$ onto itself and hence induces an automorphism of $(G\times H)/H\cong G$, which maps $K$ isomorphically onto $K$ since $K$ is characteristic in $G$. Hence $G\times H\in \widetilde{\mathcal S}$. By Theorem \ref{SO}, $G\times H$ has property $R_\infty^\chi$.
Next we consider $G*H$ where $G\in \widetilde {\mathcal S}$ is finitely generated.
Let $K\triangleleft G$ be a characteristic subgroup of $G$ such that $\bar G:=G/K\in \mathcal S$.
If $H\in \mathcal D \cup \mathcal T \cup \mathcal L$, then it is easy to see that $H$ is freely indecomposable. If $H\in \mathcal A$ is acyclic then $H$ is {\it finitely generated}. Thus, for any $H\in \mathcal C$, we
have a free product decomposition $H=H_1* \cdots *H_n$ where each $H_j$ is indecomposable as a free product.
Since $G$ is finitely generated, we also have
$G=G_1*\cdots*G_m$ where each $G_j$ is indecomposable as a free product. By our assumption,
none of the $H_j$ are isomorphic to any of the $G_i$.
A result of Fouxe-Rabinowitsch \cite{F-R} describes a set of generators
of $\textrm{Aut}(G*H)$, where $G*H=C_1*\cdots *C_{m+n}$ with $C_i=G_i$ for $i\le m$ and $C_i=H_{i-m}$ for $i>m$. These are of three types, namely: (i) permutation automorphisms $\pi$ which permute the factors (having fixed, once and for all, isomorphisms between
any two factors $C_i\to C_j, i<j,$ whenever one exists); (ii) {\it factor automorphisms} $\phi$ which map each $C_i$ to itself; and
(iii) $FR$-automorphisms $\sigma=\sigma(i,y), y\in C_j, j\ne i,$ where $\sigma|C_i$ equals conjugation by $y$ and $\sigma|C_k=id$
if $k\ne i$. By our above observation, any permutation automorphism preserves the $H_j, 1\le j\le n.$
It follows that the normal subgroup $N$ of $G*H$ generated by $H$ is invariant under each of these generators of $\textrm{Aut}(G*H)$. So $N$ is characteristic in $G*H$. It follows that any automorphism $\theta\in \textrm{Aut}(G*H)$ induces an automorphism $\theta_0\in \textrm{Aut}(G)$ via the natural projection
$\eta: G*H\to G$. Now $\theta_0\in \textrm{Aut}(G)$ induces an automorphism $\bar\theta\in \textrm{Aut}(\bar G)$.
Denoting by $q: G*H\to \bar G$ the composition of the natural quotients, we have $\bar \theta\circ q=q\circ \theta$,
and ${\rm Ker}(q)$ is characteristic in $G*H$. So $G*H\in \widetilde{\mathcal S}$ and so has the property $R_\infty^\chi$ by
Theorem \ref{SO}.
(ii) By Lemma \ref{betti}, $K$ belongs to $\mathcal S$. Since any homomorphism $H\to G$ is trivial, the same is
true if $G$ is replaced by $K$. Thus the hypotheses of the statement of the theorem are valid when $G$ is
replaced by $K$. Therefore (ii) follows from (i).
\end{proof}
\begin{remark}
There are groups with property $R^{\chi}_{\infty}$ but with empty $\Sigma$ - invariants. For example, the group $BS(2,3)$ has property $R_{\infty}$. A close inspection of the proof in \cite{FeG} shows that $BS(2,3)$ has property $R^{\chi}_{\infty}$ while it has empty $\Sigma^1$ so that $BS(2,3) \notin \mathcal S$.
\end{remark}
In general the requirement that any homomorphism $H\to G$ is trivial is hard to verify. However, in certain contexts this
is easily verified or known. Examples of such situations are: (a) $H$ is a torsion group and $G$ is torsion free. (b) $H$ admits
no finite dimensional linear representation and $G$ is linear. For example we may take $G$ to be an irreducible lattice in
a semisimple linear Lie group and $H$ to be a binate group (\cite[Theorem 3.1]{berrick-1994},\cite{berrick-1995}). (c) If $G$ is a group such that
any nontrivial element in $G$ has at most finitely many roots in $G$ and $H$ is divisible. For example, take $G$ to be a non-elementary
hyperbolic group or a subgroup of $GL(n,\mathbb Z)$ for some $n$. Note that if $G$ is the fundamental group of a closed orientable hyperbolic $3$-manifold then by \cite{BNS}, $\Sigma^1(G)$ is symmetric, so $G\notin \mathcal S$. In view of this, (i) of Proposition \ref{newgroups} can be generalized as follows, using the same arguments as in the proof of Proposition \ref{newgroups}.
\begin{proposition} \label{newgroups2} Let $\mathcal C$ denote $\mathcal D\cup \mathcal T\cup
\mathcal A\cup \mathcal L$.
Let $G$ be a group with property $R_{\infty}$ and $H$ be in $ \mathcal C$. Suppose that
every homomorphism $H\to G$ is trivial. Then:\\
(i) $G\times H$ has property $R_{\infty}$, \\
(ii) if $G$ is freely indecomposable or is a finite free product
then $G*H$ has property $R_{\infty}$.
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{newgroups}(i), for (i), $H=1\times H$ is characteristic in $G\times H$ with quotient $G$. Since $G$ has property $R_{\infty}$, it follows (e.g. \cite[Lemma 1.2(1)]{GW1}) that $G\times H$ has property $R_{\infty}$. Similarly, for (ii), there is a characteristic subgroup $N$ in $G\ast H$ with quotient $G$, so we can conclude that $G\ast H$ must also have property $R_{\infty}$.
\end{proof}
\subsection*{Concluding remarks}
Although the notion of $R^{\chi}_{\infty}$ is inspired by the use of $\Sigma$-theory,
there are groups with such property but $\Sigma^1$ is empty (for instance, any free product has empty $\Sigma^1$). In the last section, we construct certain free products $G*H$ with property $R_{\infty}$. In particular, when $H\in \mathcal D$ is divisible, $H$ does not contain any proper subgroup of finite index. Yet, if $G$ has property $R_{\infty}$ (or $G\in \widetilde{\mathcal S}$) and every homomorphism $H\to G$ is trivial, then $G*H$ has property $R_{\infty}$ (or $R^{\chi}_{\infty}$). On the other hand, it has been shown in \cite{GSW2} that $G*H$ has property $R_{\infty}$ provided both $G$ and $H$ are freely indecomposable and each contains proper characteristic finite index subgroups. We ask the following
\begin{question}
Let $G=G_1*\cdots *G_k$ be a finite free product of freely indecomposable (not necessarily finitely generated) groups $G_i$. Does $G$ necessarily have property $R_{\infty}$?
\end{question}
\noindent
{\bf Acknowledgments:} We thank the referee for his/her careful reading of the paper and for his/her comments,
which resulted in an improved presentation.
\section{Introduction}
The study of dwarf spheroidal galaxies (dSphs) within the Local Group provides a unique opportunity to characterize in detail the structure of dark matter (DM) subhaloes (e.g., \citealp{Mateo1993, Walker2009, Lokas2009, Boylan2013, Hui2016}). When compared to other galaxies, dSphs are relatively simple to both measure and simulate, which allows us to make reliable inferences on their parameters, and therefore, better constrain DM properties. They are close enough to provide individual dynamical tracers (i.e. individually resolved stars), and they currently have relatively insignificant contribution from baryons (again, when compared to other galaxies), allowing for a more robust estimation of their dynamical properties and their DM profiles. It is because of this simplicity that some of the most stringent tests of DM properties determined from galaxies come from studies of dSphs. For example, \citet{Strigari2008} showed that within a radius of 300 pc all Milky Way dSphs have dynamical masses of $M_{300}\sim 10^7$$M_{\odot}$, despite the fact that they span four orders of magnitude in luminosity. This implies that there is a characteristic inner mass scale for DM halos, below which, depending on the nature of DM, they either cannot form stars or just simply do not exist.
The nature of DM has also been called into question when pure $\Lambda$CDM simulations failed to agree with observations of DM halo abundances and of the DM halo profiles of dSphs.
These differences can be categorized into the ``missing satellites problem'' (\citealp{Klypin:1999uc}), i.e. the overabundance of DM subhaloes in pure $\Lambda$CDM simulations with respect to observations, and the ``core/cusp problem''(e.g. \citealp{Moore:1994yx,Flores:1994gz,Battaglia:2008jz,Walker:2011zu,Amorisco:2011hb,Agnello:2012uc,Adams2014,Oh:2015xoa}), or the cored nature of central DM density profiles in observations in discrepancy with the cuspier nature of the same in pure $\Lambda$CDM simulations.
These two discrepancies left room for a variety of exotic forms of dark matter to better fit observations (e.g. \citealp{ Colin:2000dn,Colin:2007bk,Lovell2012}).
Most recent work shows that including baryonic physics in the simulations flattens the otherwise cuspy cores, without the need to invoke self-interacting DM (\citealp{Pontzen2012,Battaglia2013,DiCintio2014,DelPopolo2017}).
Furthermore, observational results for systems mostly dominated by DM are not decisive as to the shape of the central profile. Most studies show cored central densities (e.g., \citealp{Walker2009,Breddels2013,Leung2020, Read_2016}), whereas more recent work shows some cusps (\citealp{Jardel2013, Adams2014,Hayashi2020}). It has also been suggested that dwarf spheroidal galaxies might contain little or no DM (\citealp{Hammer2019}).
An interesting possibility is that dwarf spheroidal galaxies contain a central black hole. \citet{Volonteri2005, Silk2017, Kormendy2013, AmaroSeoane2014} and others have suggested that black hole formation may be a natural consequence in these systems in the early universe. There is an empirical correlation between the mass of a black hole and its bulge's velocity dispersion known as the black hole-sigma relation (\citealp{Gebhardt2000, FerrareseMerrit2000, Saglia2016}). These empirical correlations have not been probed in dSphs, but if they are extrapolated to this mass regime one might expect to have black holes with masses of $(1-10)\times10^4$ $M_{\odot}$. \citet{AmaroSeoane2014} further argue that some dwarf systems may in fact have a larger-than-expected black hole. In this work we analyze such a system, Leo I, using the same rigorous dynamical models applied to larger galaxies.
Being among the brightest and furthest of the Milky Way dSphs has made Leo I the subject of many in-depth dynamical studies. At $257 \pm 13$ kpc (\citealp{sohn2012}) and with a half-light-radius of $3\farcm40 \pm 0\farcm30$ (\citealp{McConnachie_2012}), it provides an important tracer to explore the Milky Way mass model. Metallicity studies (\citealp{Bosler2006}), proper motion (\citealp{gaia2018,sohn2012}) and radial velocity measurements (\citealp{Koch2007, sohn2007}) have accrued a significant amount of data on the galaxy. These data have been extensively analyzed by various groups, both by statistical comparison with simulations and by direct comparison with several Jeans spherical dynamical models, yet often times contradictory pictures emerge, with \citet{mateo2008} finding evidence for an extended DM halo from observing a flat rotation curve at large radii, and \citet{lokas2008} finding a DM profile similar to the stellar profile. All recent analyses find a $V$-band $M/L$ value in the range $8-15$, which is on the low end for Milky Way dSph satellites.
In this paper, we measure the stellar light profile and explore the central dynamics using new integrated-light kinematic measurements and orbit-based modeling that allows for a generic stellar velocity anisotropy.
We include kinematics from \citet{mateo2008} (hereafter referred to as M08), the largest data set available in the literature, to sample the outer part of the galaxy. We further consider different aspects of the tidal effects from the Milky Way. Section \ref{sec:DATA} presents the new kinematic observations using VIRUS-W on the McDonald Observatory 2.7 m telescope. Section \ref{sec:LUMINOSITY} provides a measure of the projected number density profile and the corresponding 3D density profile. Section \ref{sec:KINEMATICS} presents the integrated-light measurements and all kinematic tracers used in the dynamical models. An important aspect of the kinematic extractions is to understand the effect that crowding from neighboring stars has on the measured spectra; this effect is described in Section \ref{sec:CROWDING}. Section \ref{sec:TIDAL_EFFECTS} examines the implications of tidal effects using various models. Section \ref{sec:DYNMOD} presents the dynamical models. In Section \ref{sec:DISCUSSION} we discuss the implications of our work.
\section{OBSERVATIONS AND DATA REDUCTION}
\label{sec:DATA}
VIRUS-W (\citealp{Fabricius2012}) is an integral field unit (IFU) spectrograph on the McDonald Observatory 2.7 m Harlan J. Smith Telescope and an ideal instrument for low velocity dispersion systems like dSphs due to its high spectral resolving power. At $R \sim 8600$ and wavelength coverage of 4855--5475 \r{A}, VIRUS-W is suited to measure stellar velocities accurate to around 1~km~s$^{-1}$\ and integrated velocity dispersions to about 10~km~s$^{-1}$. VIRUS-W is composed of 267 3\farcs2\ diameter fibers with a total field-of-view of 105$\arcsec$\ $\times$ 55$\arcsec$\ (wide side aligned east-west) and a 1/3 fill factor.
The locations of the fibers are known to about 0\farcs2, after calibration of the field center. We use stars within the IFU and stars within the guider field to calibrate the IFU center. With 3\farcs2 diameter fibers, imaging FWHM ranging from 1$\arcsec$ to 2$\arcsec$, and the fact that we will sum spectra over a large radial and angular extent, the pointing error is negligible.
The VIRUS-W observations in this paper were carried out over 2 nights in January 2017 (Pointing 1) and 4 nights in February 2017 (Pointing 2) (see \Cref{fig:pointings}).
The conditions were mostly clear and were monitored using the guider stars' FWHM and photometric magnitude. Each night we obtained twilight flats, Hg and Ne arc lamps, dome flats, and bias exposures for calibration. We employed an observing strategy which included two 3600 s target exposures with two adjacent 1800 s sky nod exposures, amounting to a total observed time of 30 hr.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{DATA/POINTINGS.png}
\caption{The VIRUS-W and M08 footprints overlaid on an SDSS Leo I image. The VIRUS-W footprint is 105$\arcsec$ $\times$
55$\arcsec$. We show January-February 2017 pointings 1 (blue) and 2 (red), as well as M08 targets in green. The lower panel shows a zoomed image, with the photometric center of the galaxy marked with a yellow plus sign.}
\label{fig:pointings}
\end{figure}
For the reductions, we use \texttt{Panacea}\footnote{https://github.com/grzeimann/Panacea}, a general IFU reduction package from McDonald Observatory. Many of the routines employed in this package are similar to those in standard IFU reduction codes, but we describe them in detail here. Starting from the raw images, we subtract the overscan average value and trim the overscan region. We construct a single master bias frame over many nights and subtract the resultant image from our science and twilight frames. Using the twilight frames, we build a wavelength solution and a trace model for each of our fibers. Twilights are taken at least once a night, typically in the evening and morning. As the ambient temperature changes, there are slight shifts in the trace of the fibers, which are measurable in each of our science frames. We use these derived shifts to correct our trace model for individual science frames. For the shorter-exposure (lower-signal) 1800 s sky frames, we interpolate between the two closest science or twilight frames to get an adjusted trace model.
In order to properly extract the amount of flux deposited in each pixel of the CCD by a given fiber, we use the normalized fiber profile weights given by our twilights. For each individual fiber, CCD column by CCD column, we solve a system of linear equations defined as $A\ x = b$, where $A$ is a matrix whose entries are the normalized fiber profile weights for the given column for each fiber, $b$ is the column of CCD data from a given science or sky frame, and $x$ is the vector for which we solve that gives the spectrum for each fiber for the given column. The spectral extraction is not done in rectified coordinates (coordinates curved to follow the trace of the diffraction pattern on the CCD) because the curvature of the trace is small enough that the error in not rectifying coordinates is less than $1\%$ for the whole CCD.
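The column-by-column extraction can be sketched with a small linear-algebra routine. The array shapes and the function name here are our own illustration, not \texttt{Panacea}'s actual interface:

```python
import numpy as np

def extract_spectra(profiles: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Solve A x = b for each CCD column: A (n_rows x n_fibers) holds the
    normalized fiber profile weights for that column, b is the observed
    CCD column, and x is the extracted flux of each fiber there.
    profiles: (n_rows, n_fibers, n_cols); frame: (n_rows, n_cols)."""
    n_rows, n_fibers, n_cols = profiles.shape
    spectra = np.empty((n_fibers, n_cols))
    for j in range(n_cols):
        # least-squares solution of the overdetermined column system
        x, *_ = np.linalg.lstsq(profiles[:, :, j], frame[:, j], rcond=None)
        spectra[:, j] = x
    return spectra
```

With well-separated fiber profiles the per-column system is overdetermined and well-conditioned, so least squares recovers the per-fiber fluxes directly.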
The need for long science observations due to the low luminosity of the object carries along the problem of non-linear variations between the two bracketing sky observations.
To further extract these non-linear variations we use an iterative sky subtraction algorithm for the science observations that takes advantage of the fact that all fibers contain starlight from Leo~I. We first build a master sky frame from each individual sky frame, by applying an illumination model built from the twilight frames, which corrects for fiber-to-fiber variations. We then use time-interpolated master sky spectra to first subtract the sky from each science frame. Co-adding the individual science spectra for each VIRUS-W pointing gives us two master science models. We subtract the master science frames from the individual science frames and use this difference to build new master sky frames. We iterate this procedure twice for convergence. The final master sky frames are subtracted from the individual science exposures and we co-add each pointing to get our final 534 fiber measurements.
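The iteration can be condensed into the sketch below; this is a deliberately simplified stand-in (simple means instead of per-pointing co-adds, illumination correction, and time interpolation), with invented names:

```python
import numpy as np

def iterative_sky_subtract(science, sky0, n_iter=2):
    """Sketch of the iterative scheme: `science` is a list of 1-D
    spectra (galaxy light plus sky) and `sky0` an initial master sky.
    Each pass co-adds the sky-subtracted frames into a science model,
    then re-estimates the sky from the frame-minus-model residuals."""
    sky = np.asarray(sky0, dtype=float).copy()
    for _ in range(n_iter):
        master_sci = np.mean([s - sky for s in science], axis=0)
        sky = np.mean([s - master_sci for s in science], axis=0)
    return [s - sky for s in science], sky

# Toy check: with a consistent initial sky the object is recovered.
true_obj = np.ones(50)
true_sky = np.linspace(1.0, 2.0, 50)
frames = [true_obj + true_sky for _ in range(3)]
cleaned, final_sky = iterative_sky_subtract(frames, true_sky)
```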
As explained in detail in the kinematics section (Section \ref{sec:KINEMATICS}), we bin the individual spectra into a polar grid in order to measure the integrated-light kinematics. \Cref{fig:spectra} shows our binned spectra.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{DATA/leospec.pdf}
\caption{Spectra in different regions for the VIRUS-W data. The black lines show the extracted spectra centered on the Mg region; the data extend further to both the blue and the red, but we highlight this region. The red lines show the best fit of the template star convolved with the velocity profile. There are 23 spatial bins used for the VIRUS-W spectra, and here we show six representative spectra. The spatial locations are noted at the bottom of the plots.}
\label{fig:spectra}
\end{figure}
\section{LUMINOSITY DENSITY PROFILE}
\label{sec:LUMINOSITY}
Dynamical models require a spatial profile of the mass density in the object of study as an input. The first step is to estimate the projected surface brightness of the object. Previous modeling for Leo I used profiles obtained via star counts from photographic plates (\citealp{Irwin1995}). As discussed in \citet{Noyola2006}, star counts within dense regions suffer from incompleteness due to crowding effects. This underscores the need to obtain updated density profiles from more recent data sets.
\begin{deluxetable}{cccc|cccc}
\tabletypesize{\normalsize}
\tablecolumns{8}
\tablecaption{Leo I density profile}
\tablewidth{0pt}
\tablehead{
\colhead{log $r$} & \colhead{SB} & \colhead{err} & \colhead{smooth} &
\colhead{log $r$} & \colhead{SB} & \colhead{err} & \colhead{smooth}
}
\startdata
0.297 & 21.58 & 0.35 & 21.56 & 2.106 & 22.45 & 0.03 & 22.48\\
0.632 & 21.56 & 0.22 & 21.63 & 2.178 & 22.63 & 0.04 & 22.66\\
0.844 & 21.66 & 0.15 & 21.70 & 2.250 & 22.82 & 0.04 & 22.89\\
1.005 & 21.82 & 0.13 & 21.76 & 2.321 & 23.12 & 0.05 & 23.11\\
1.140 & 21.88 & 0.12 & 21.80 & 2.391 & 23.38 & 0.07 & 23.40\\
1.257 & 21.80 & 0.09 & 21.82 & 2.461 & 23.64 & 0.09 & 23.66\\
1.363 & 21.82 & 0.07 & 21.83 & 2.530 & 23.94 & 0.13 & 23.96\\
1.460 & 21.84 & 0.06 & 21.85 & 2.599 & 24.28 & 0.22 & 24.28\\
1.552 & 21.86 & 0.05 & 21.88 & 2.668 & 24.65 & 0.38 & 24.57\\
1.639 & 21.96 & 0.05 & 21.92 & 2.736 & 24.97 & 0.60 & 24.94\\
1.722 & 22.00 & 0.05 & 21.98 & 2.804 & 25.33 & 0.95 & 25.26\\
1.803 & 22.04 & 0.04 & 22.03 & 2.948 & N/A & N/A & 25.96\\
1.881 & 22.13 & 0.04 & 22.08 & 3.034 & N/A & N/A & 26.39\\
1.957 & 22.19 & 0.04 & 22.22 & 3.092 & N/A & N/A & 26.68\\
2.032 & 22.31 & 0.03 & 22.33 & 3.150 & N/A & N/A & 26.98\\
\enddata
\tablecomments{Column (1): log radius in arcseconds; Column (2): measured photometric points from the SDSS image in $V$ magnitudes; Column (3): photometric error; Column (4): smooth profile in $V$ magnitudes, including a de Vaucouleurs extrapolation at large radii.}
\label{tab:lum}
\end{deluxetable}
Due to the necessity for wide spatial coverage, we use publicly available Sloan Digital Sky Survey (SDSS) g-band imaging (DR12; \citealp{Alam2015}) for Leo I. We have to restrict ourselves to a field of $20^\prime$ in size owing to the presence of a bright foreground star to the south of the galaxy.
A careful determination of the galaxy's center is crucial, especially if one suspects the existence of a cusp, since miscentering can turn a cuspy profile into a flat one (but not the other way around). A photometric catalog was obtained from the SDSS image using \texttt{daophot} (\citealp{Stetson1987}). We then used the method described in detail in \citet{Noyola2006} to determine the center from this catalog. Briefly, the method assumes axisymmetry and counts stars in angular slices around a tentative center, searching over a grid of slightly offset trial centers for the one that minimizes the variation between slices.
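A minimal version of this slice-count centering can be sketched as below; the brute-force grid search, the slice count, and the toy catalog are our own illustrative choices, not the exact implementation of the published method:

```python
import numpy as np

def slice_count_variance(x, y, cx, cy, n_slices=8):
    """Variance of star counts in angular slices about a trial center;
    for an axisymmetric system viewed from its true center the slices
    hold (nearly) equal counts."""
    theta = np.arctan2(y - cy, x - cx)
    counts, _ = np.histogram(theta, bins=n_slices, range=(-np.pi, np.pi))
    return counts.var()

def find_center(x, y, guess, half_width=1.0, n_grid=21):
    """Brute-force search over a grid of trial centers offset from the
    initial guess, returning the one minimizing the slice variance."""
    offsets = np.linspace(-half_width, half_width, n_grid)
    best, best_var = guess, np.inf
    for ox in offsets:
        for oy in offsets:
            v = slice_count_variance(x, y, guess[0] + ox, guess[1] + oy)
            if v < best_var:
                best, best_var = (guess[0] + ox, guess[1] + oy), v
    return best

# Toy catalog: a circular uniform "galaxy" of unit radius at (5, 5).
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0, 1, 5000))
th = rng.uniform(-np.pi, np.pi, 5000)
xs, ys = 5 + r * np.cos(th), 5 + r * np.sin(th)
center = find_center(xs, ys, (5.4, 4.7))
```

Even with an initial guess offset by roughly half the system radius, the minimum-variance trial center lands on the true center to within a grid step.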
Our new derived center is 10h08m26.7s +12d18m27.8s, differing only by 2\farcs3 from that of M08.
Using the new central location, we measure the ellipticity and position angle of the galaxy. We do this by smoothing the SDSS image with a boxcar filter using the biweight estimator (\citealp{Beers:1990fw}). On the smoothed image we find best-fit global values for the ellipticity and position angle of 0.2 and 80$^\circ$ (measured from north through east to the major axis), respectively, consistent with the results of \citet{Irwin1995}. As M08 already noted, the image shows variations in ellipticity and position angle (PA) with radius, particularly in the central regions.
Since these regions are the most affected by crowding and therefore shot noise, we opt for using a global fit with constant ellipticity and PA, which is adequate for our axisymmetric models.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{LUMINOSITY/densitylog.pdf}
\caption{Radial surface brightness profiles, both from star counts and from integrated light. Green: photometric points from SDSS. Cyan: photometric points from the Hubble Advanced Camera for Surveys (ACS) image. Red: smooth line based solely on the SDSS photometric points. Blue dashed: star count profile from \citet{Irwin1995}. Magenta: density profile from M08. Every profile is normalized directly to the M08 line except for the ACS profile, which, owing to its limited radial coverage, is normalized indirectly via SDSS. The smooth red line, which shows a shallow cusp towards the center, is the one used as input for the dynamical models.}
\label{fig:luminosity}
\end{figure}
The last step to obtain a density profile uses a biweight estimator to measure the integrated-light density in concentric elliptical annuli. This method is used in \citet{Noyola2006} for Galactic globular clusters. We use the profile from M08 to find the photometric normalization by matching our profiles to theirs at large radius, where there is excellent agreement between our photometric points and previous measurements. It is worth mentioning that even though M08 call their profile ``surface brightness" and report it in the corresponding units, the actual measurement is done by repeating the procedure of \citet{Irwin1995} on their new data set. By using isopleths of star counts, their profile will suffer from the same crowding effects described in \citet{Noyola2006}. As can be seen in \Cref{fig:luminosity}, all profiles agree quite well within the $3^\prime-10^\prime$ radial range, while a large difference is seen inside $3^\prime$. Our surface brightness profile departs from previous estimates by showing a shallow cusp, while the star count profiles are flat in the center. As a test, we performed the same measurement on the higher resolution Hubble Space Telescope (HST) images (\citealp{Bosler2006}) used for the crowding estimates (see Section \ref{sec:CROWDING}). Despite their higher resolution, the spatial coverage of these images is incomplete (only extending to $1^\prime$), which is why we only use the profile from the SDSS image for our analysis. The higher resolution of the ACS image resolves many more individual stars, which causes a larger scatter in the photometric points, as can be seen in the central region of \Cref{fig:luminosity}. The ACS- and SDSS-based profiles match well down to 4$\arcsec$\ radius, both tracing a shallow cusp. Inside 4$\arcsec$\ there is a small difference, but the amount of light in that difference, and hence stellar mass, is insignificant, especially given that our first kinematic bin includes spectra out to 30$\arcsec$.
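The biweight location estimator behind the annular measurements (Tukey's biweight; \citealp{Beers:1990fw}) can be sketched as follows; the tuning constant and the toy data are illustrative choices of ours:

```python
import numpy as np

def biweight_location(data, c=6.0):
    """Tukey's biweight estimate of central location: a robust mean
    that down-weights points far from the median in units of the
    median absolute deviation (MAD)."""
    data = np.asarray(data, dtype=float)
    med = np.median(data)
    mad = np.median(np.abs(data - med))
    if mad == 0:
        return med
    u = (data - med) / (c * mad)
    w = (1.0 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0          # reject points beyond c*MAD
    return med + np.sum(w * (data - med)) / np.sum(w)

# Pixel values in an annulus contaminated by two extreme outliers.
rng = np.random.default_rng(1)
vals = np.concatenate([10 + 0.1 * rng.standard_normal(100),
                       [1000.0, -500.0]])
loc = biweight_location(vals)
```

The outliers shift the plain mean by several units but leave the biweight location essentially at the true level, which is why the estimator is well suited to integrated-light measurements in star-contaminated annuli.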
The fact that the analysis of the two independent images yields the same shape for the central profiles provides confidence that the detected cusp is real. The detection of this cusp also gives us confidence in our new center.
As a final step, we smooth the density profile using a smoothing spline (\citealp{Wahba1990}). We also extrapolate the profile with an $r^{1/4}$ law in order to accommodate stars in the dynamical modeling that go beyond the measured region. This profile is the one used as input in the dynamical models. The data are shown in Table \ref{tab:lum}.
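The smoothing-plus-extrapolation step can be sketched as below, assuming SciPy is available; the fake profile, smoothing factor, and effective radius are invented, and the $r^{1/4}$ branch is matched to the spline at the last measured point:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Fake surface-brightness profile (mag arcsec^-2) vs. log radius.
logr = np.linspace(0.3, 2.8, 30)
sb = 21.5 + 0.5 * (logr - 0.3) ** 2
spl = UnivariateSpline(logr, sb, s=0.5)      # smoothing spline

def profile(logr_query, logr_max=2.8, r_e=500.0):
    """Smoothed profile inside logr_max; a de Vaucouleurs r^{1/4} law,
    mu(r) = mu_e + 8.3268 ((r/r_e)^{1/4} - 1), beyond it."""
    lq = np.asarray(logr_query, dtype=float)
    r = 10.0 ** lq
    inside = lq <= logr_max
    out = np.empty_like(r)
    out[inside] = spl(lq[inside])
    # Choose mu_e so the extrapolation is continuous at logr_max.
    mu_e = float(spl(logr_max)) - 8.3268 * ((10.0**logr_max / r_e) ** 0.25 - 1.0)
    out[~inside] = mu_e + 8.3268 * ((r[~inside] / r_e) ** 0.25 - 1.0)
    return out

vals = profile(np.array([2.8, 2.8000001, 3.0]))
```

Matching the extrapolation amplitude at the boundary keeps the profile continuous, and the $r^{1/4}$ branch keeps fading monotonically at larger radii.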
\section{KINEMATICS}
\label{sec:KINEMATICS}
We use integrated-light kinematics derived from the VIRUS-W data, and we also include velocities from individual stars from M08. Each kinematic data set requires different analysis techniques due to the different spatial densities involved.
Traditional methods for the kinematic study of local dSphs rely on them being sufficiently close and sparse that one can measure the velocity of one star at a time. The VIRUS-W data, unlike M08's, are concentrated in the densest central regions of Leo I, which, compounded with the fact that VIRUS-W fibers have twice the diameter of M08's, makes the analysis of individual stellar spectra virtually impossible.
As discussed in Section \ref{sec:CROWDING}, most VIRUS-W fibers contain multiple stars. Typically, the brightest star contributes at most 24\% of the total light. At this level, extracting individual stellar velocities with VIRUS-W in the central region of Leo~I would be biased. Instead, we use integrated-light measurements when dealing with the VIRUS-W data (a common method in the study of denser or more distant galaxies, as well as Galactic globular clusters). To make sure we reach a sufficient number of stars for our analysis, we bin fibers for our kinematic measurements. The spatial binning of the fibers is defined by the dynamical modeling grid, and our minimum is five fibers per bin.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{KINEMATICS/intfiber.pdf}
\caption{On-sky spatial and angular binning of the VIRUS-W data. Axes are aligned with the galaxy's major and minor axes as projected in the sky, are in arcseconds, and represent the distance from the galaxy's center. The number overlaying each bin indicates the number of fibers within it. Archival data, covering the outermost radial bins, have been omitted for clarity.
}
\label{fig:VWbinning}
\end{figure}
The spatial and angular binning on sky for the VIRUS-W data is given in \Cref{fig:VWbinning}. The number of spectra in each spatial VIRUS-W bin ranges from 5 to 41. For each spectrum we first fit the continuum and divide by it, which normalizes each spectrum so that its continuum is unity. The continuum fitting is done in fixed wavelength bins, using only regions that avoid absorption lines. We then linearly interpolate across the bin centers to make a smooth continuum. The co-addition uses a biweight of the continuum-divided spectra.
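The continuum normalization can be sketched as follows; the bin count, the median estimator, and the toy spectrum with a single absorption line are our own illustrative choices:

```python
import numpy as np

def continuum_divide(wave, flux, n_bins=10, masks=()):
    """Estimate the continuum in fixed wavelength bins (skipping
    masked absorption-line regions), interpolate linearly across the
    bin centers, and divide it out."""
    ok = np.ones_like(wave, dtype=bool)
    for lo, hi in masks:                     # exclude absorption lines
        ok &= ~((wave >= lo) & (wave <= hi))
    edges = np.linspace(wave.min(), wave.max(), n_bins + 1)
    centers, levels = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = ok & (wave >= lo) & (wave < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            levels.append(np.median(flux[sel]))
    cont = np.interp(wave, centers, levels)
    return flux / cont

# Toy spectrum: flat continuum at 2 with one absorption line.
wave = np.linspace(5100.0, 5300.0, 200)
flux = 2.0 - 1.5 * np.exp(-0.5 * ((wave - 5175.0) / 3.0) ** 2)
norm = continuum_divide(wave, flux, masks=((5160.0, 5190.0),))
```

Away from the masked line the normalized spectrum sits at unity, while the absorption feature itself is preserved for the kinematic fit.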
We rely on the maximum penalized likelihood method described in \citet{Gebhardt:1999vca} for our kinematical measurements. The principle behind this method is to simultaneously fit the line-of-sight velocity distribution (LOSVD) and the relative weights on the stellar templates. We find the best fit to the integrated-light spectra by fitting to the velocity bins of the LOSVD nonparametrically and adjusting the weights on template stars observed with VIRUS-W. We further constrain the LOSVD with a smoothing parameter. As a sanity check, we compare our results to those obtained using pPXF (\citealp{Cappellari2016}) and find close agreement.
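The core idea, fitting the LOSVD bins directly with a smoothness constraint, can be sketched as a penalized linear least-squares problem. This is a toy stand-in for the full maximum penalized likelihood scheme (one template, integer-pixel velocity bins, a quadratic curvature penalty), with all names and values invented:

```python
import numpy as np

def fit_losvd(template, galaxy, shifts, smooth=0.01):
    """Model the galaxy spectrum as the template convolved with an
    LOSVD sampled at integer-pixel `shifts`; solve for the LOSVD
    weights by least squares with a second-difference (curvature)
    penalty controlled by `smooth`."""
    A = np.column_stack([np.roll(template, k) for k in shifts])
    m = len(shifts)
    D = np.zeros((m - 2, m))
    for i in range(m - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]     # second-difference operator
    return np.linalg.solve(A.T @ A + smooth * (D.T @ D), A.T @ galaxy)

# Toy problem: galaxy = template shifted by +2 pixels.
x = np.arange(100, dtype=float)
template = np.exp(-0.5 * ((x - 50.0) / 1.0) ** 2)
shifts = list(range(-5, 6))
galaxy = np.roll(template, 2)
w = fit_losvd(template, galaxy, shifts)
```

The recovered weights peak at (or immediately next to) a shift of +2 pixels and sum to roughly unity; the penalty trades a slight spreading of the LOSVD for stability against noise.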
We use five template stars for the kinematic extractions. These five templates were observed with VIRUS-W and have been down-selected from a parent sample of 32 stars. We experimented with a variety of templates, and the five chosen are representative. We find insignificant differences in the final kinematics when using different templates. The templates are processed in the same way as the science spectra for continuum removal.
Finally, we use a subset of the M08 data. M08 performed individual spectral analysis on each of their fibers via normalized cross-correlations in order to derive individual stellar velocities. Although most of their fibers were not concentrated in the central regions, they do target some stars all the way to the center of Leo I. Further analysis performed in this paper (Section \ref{sec:CROWDING}) using HST imaging and simulations shows that this very likely biased their central individual stellar velocities towards the average velocity of the galaxy. Due to this bias, we decided against using M08 data where we could use our own kinematics. We binned the individual data points from M08 in concentric radial bins for the outer regions to achieve sufficient numbers for the LOSVDs.
\cite{Koch2007} present a kinematic analysis of Leo~I and show a velocity dispersion profile similar to what we are using at large radii. Their innermost data point at 1\farcm3 has a dispersion around 10~km~s$^{-1}$, consistent with measurements presented here of 11~km~s$^{-1}$. They use additional data inside of that radius from the study of \cite{mateo1998} that shows a significantly lower dispersion. The data from the older work is incorporated within M08. As shown in the following discussion, we demonstrate why using individual velocities to measure a dispersion can be biased in crowded regions.
The final LOSVDs result from VIRUS-W measurements in the center bins and radially binned M08 measurements for the outer regions. These LOSVDs and the uncertainties are input into the dynamical models. Table \ref{tab:losvds} shows the velocity dispersions fitted to the LOSVDs, where we report the values integrated over the annuli. We note that the dynamical models fit directly to the LOSVDs, and we only provide the velocity dispersions in radial bins for comparison with the models.
\begin{deluxetable}{c|c}[h!]
\tabletypesize{\normalsize}
\tablecaption{Leo I Velocity Dispersion Bins}
\tablecolumns{2}
\tablewidth{1.0\columnwidth}
\tablehead{
\colhead{Radius (arcsec)} &\colhead{Velocity Dispersion (km~s$^{-1}$)}
}
\startdata
15.35 & 12.0 $\pm$ 0.2 \\
38.65 & 11.5 $\pm$ 0.5 \\
53.25 & 11.8 $\pm$ 1.0 \\
75.10 & 11.0 $\pm$ 0.6 \\
\hline
200 & 9.1 $\pm$ 0.8 \\
250 & 9.3 $\pm$ 0.9 \\
315 & 8.6 $\pm$ 1.0 \\
390 & 11.4 $\pm$ 1.7 \\
540 & 10.0 $\pm$ 0.8 \\
\enddata
\tablecomments{Leo I bins as a function of radius. The horizontal line marks the division between the two datasets, M08 (below) and our VIRUS-W data (above).}
\label{tab:losvds}
\end{deluxetable}
\section{THE PROBLEM OF CROWDING}
\label{sec:CROWDING}
When measuring velocities for individual stars, the flux contribution of nearby faint stars to a given measured spectrum, whether they are resolved or unresolved, will bias the measured velocity of that spectrum towards the mean velocity of the galaxy, even if the spectrum has a large contribution from a single star. This effect has to be taken into account when measuring the velocity dispersion $\sigma_v$ from individual stars in crowded stellar fields, since the measured dispersions will be biased low if the contribution of unresolved light is meaningful.
The story is different for kinematic measurements that use template stars to extract the velocity dispersion from the line broadening of added spectra. In this case the unresolved light, or the small contribution from resolved faint stars, makes a valuable contribution to the line profile. The analysis must be careful when dealing with added spectra whose light is dominated by a single star or a few stars; it is in that case that $\sigma_v$ can be artificially biased low or high.
These effects have been previously discussed in crowded stellar fields when using IFUs. \citet{Luetzgendorf2015} perform simulations which show that the contribution of faint stars can bias the velocity dispersion measurements to lower values in crowded fields by large amounts within normal observing conditions.
This bias occurs by influencing the measured individual star velocities towards the cluster mean due to the diffuse background light. For sparse fields, the bias can go either way if a bright star is nearby.
\citet{Bianchini2015} perform detailed simulations on globular clusters to study this bias as well.
The bias they find on the velocity dispersion is stochastic, with a trend towards low values in the most central regions.
For Leo I, we are in a regime of high crowding and run simulations tuned to the specific observations and distribution of stars within the fibers.
The crowding problem is partially mitigated by the PAMPELMUSE code (\citealp{Kamann2018}) used in globular clusters surveyed by MUSE (\citealp{AlfaroCuello2019,Kamann2020}). The advantage that PAMPELMUSE has is that the spatial resolution elements of MUSE are quite small, so the number of stars within any spaxel is also small. Therefore, with accurate positions of stars from HST, one can then better extract individual velocities by limiting the crowding bias. In our case, with 3\farcs2\ fibers and the large point-spread function (PSF), the number of stars that contribute light to each spaxel is high, making it essentially impossible to provide unbiased individual velocities.
In this section, we investigate the effects of crowding on the measurements of radial velocities from individual fibers in Leo I's central regions. We use two sets of deep, high resolution Hubble Space Telescope (HST) imaging to quantify the number of stars and their relative flux contribution to each fiber. The first set consists of F435W imaging from the Advanced Camera for Surveys (ACS) taken in February 2006 with an exposure time of 6800 s (GO-10520; PI: Smecker Hane). The second set contains F555W imaging from the Wide Field Camera 3 (WFC3) taken in January 2011 with an exposure time of 880 s (GO-12304; PI: Holtzman). We create catalogs from both sets of imaging using \texttt{daophot} (\citealp{Stetson1987}).
As seen in \Cref{fig:fibercont}, within our $3\farcs2$\ diameter fibers we find a median of 22 stars in our deep HST catalogs. The brightest star in each fiber has a median contribution of $24\%$ of the total flux. Thus, no single star dominates the light of our fibers; rather, we are capturing the light from small populations of order tens of stars each. In addition, the point-spread function of the observations ranges from 1\farcs5-3\farcs0, blending diffuse starlight even more and making the extraction of individual velocities yet more biased.
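The per-fiber flux bookkeeping can be sketched as a Gaussian PSF integrated over a circular fiber aperture on a grid; the fiber radius, seeing, and star lists below are invented for the example:

```python
import numpy as np

def star_flux_in_fiber(mag, mu, fiber_r=1.6, fwhm=2.0, n=201):
    """Flux a star of magnitude `mag`, offset by `mu` (arcsec) from the
    fiber center, deposits in the fiber: a Gaussian PSF integrated on
    a grid over the circular aperture."""
    s = fwhm / 2.355
    g = np.linspace(-fiber_r, fiber_r, n)
    x, y = np.meshgrid(g, g)
    inside = x**2 + y**2 <= fiber_r**2
    psf = np.exp(-((x - mu[0])**2 + (y - mu[1])**2) / (2 * s**2))
    psf /= 2 * np.pi * s**2
    cell = (g[1] - g[0]) ** 2
    return 10 ** (-0.4 * mag) * psf[inside].sum() * cell

def brightest_fraction(mags, positions, **kw):
    """Fraction of the total in-fiber flux contributed by the single
    brightest contributor (cf. the ~24% median quoted in the text)."""
    fluxes = np.array([star_flux_in_fiber(m, p, **kw)
                       for m, p in zip(mags, positions)])
    return fluxes.max() / fluxes.sum()
```

A lone centered star gives a fraction of 1 and two equal stars placed symmetrically give 0.5; a realistic fiber with tens of faint neighbors drives the fraction down toward the values in \Cref{fig:fibercont}.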
\begin{figure}
\hspace{-0.5cm}\includegraphics[width=0.5\textwidth]{CROWDING/FIBER_CONTAMINATION.pdf}
\caption{Box plot of the maximum percentage of flux from the brightest star in an individual VIRUS-W fiber vs. radius. The boxes extend from the first quartile to the third with the median marked as an orange line. The whiskers extend from the 5th percentile to the 95th. The flux contribution is calculated using HST's deeper catalog. Due to the low flux contribution of the brightest star, using VIRUS-W to extract individual stellar velocities in the central region of Leo I would be biased. Instead, we rely on integrated-light measurements.}
\label{fig:fibercont}
\end{figure}
In light of this crowding issue, we choose not to measure radial velocities from individual VIRUS-W fibers but rather stack fibers within radial bins and infer the velocity dispersion from the Mg b triplet absorption features. Our kinematic measurements are discussed
in more detail in Section \ref{sec:KINEMATICS}.
We also use an archival data set from M08. This program was able to target individual, bright RGB stars which dominated the light in their fiber with some exceptions. In the central 150 pc of Leo I, HST catalogs show that the HECTOCHELLE fibers captured the light from a median of 5 stars not including the central RGB target with a median contribution of $10.4\%$ from the unresolved population. \Cref{fig:fibercomparison} demonstrates the size of the fibers from VIRUS-W and HECTOCHELLE compared to the HST imaging. \Cref{fig:M08fiberflux} uses the information for the size and location of the VIRUS-W fibers to show the effect from crowding.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{CROWDING/FIBER_CROWDING.png}
\caption{Comparison of a VIRUS-W fiber (black circle of 1\farcs6\ radius) and a HECTOCHELLE fiber (white circle of 0\farcs7\ radius) placed at the center of the galaxy in the HST deep image, with cataloged stars overlaid.}
\label{fig:fibercomparison}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{CROWDING/VELvsCROWDING_MATEO.png}
\caption{M08 fibers arranged in terms of maximum flux from one star within a fiber. The blue points are those covered by ACS (deep catalog), while the orange points are covered by WFC3 (shallow catalog). One can see a noticeable trend of smaller velocity dispersion for lower flux percentage, and thus the need for taking into account crowding effects.}
\label{fig:M08fiberflux}
\end{figure}
Motivated to understand the effect crowding might have on the kinematic measurements of M08 in the central regions of Leo I, we devise a simple model. The idea is to use the star positions and magnitudes from HST to simulate the crowding in each spectral element (i.e., individual fiber) and measure the inferred bias on the velocity of the target star. This correction varies across the field depending on the crowding, and once we have that field-dependent correction, we combine over all spectral elements to generate the bias on the velocity dispersion. Below we list the six steps of the process to determine the local correction needed for each spectral element, for convenience defining the normal distribution
\begin{align}
N(\vec{x},\vec{\mu},\sigma) \equiv \frac{1}{(2\pi \sigma^2)^{\dim \vec{x}/2}} e^{-(\vec{x}-\vec{\mu})^2/(2\sigma^2)},
\end{align}
where $\dim \vec{x}$ is the number of components in the vector $\vec{x}$.
\begin{enumerate}
\item Input star catalog: For each star, we use their position relative to the center, the point-spread function (PSF) of the observations, and the magnitude in order to derive that star's contribution to the total flux in the fiber
\begin{align}
\begin{aligned}
F_j(&M_j,\mu_x,\mu_y) = A \,10^{-0.4 \times M_j}\times \\
&\int_{-r}^{r} \int_{-\sqrt{r^2-x^2}}^{\sqrt{r^2-x^2}}
N(\vec{x},\vec{\mu},\sigma_s)\, dy\, dx,
\end{aligned}
\end{align}
where $\sigma_s = \mathrm{FWHM}/2.35$ for the observations; $\vec{\mu} = (\mu_x, \mu_y)$ are the coordinates of the star with respect to the fiber center,
$r$ is the radius of the fiber, $M_j$ is the star's apparent magnitude, $\vec{x} = (x,y)$ are Cartesian coordinates centered on the fiber ($r^2 = x^2+y^2$),
and $A$ is a normalization constant.
\item Spectral profile: Instead of relying on a simple measure of the velocity shift using a weighted average of the velocities, we generate an absorption line in order to be more realistic. The absorption features in the spectrum of the $j$th star
in the fiber are modeled with a Gaussian of width $\sigma_w$, the average sigma of the Mg b absorption lines of a template star
\begin{equation}
g_j(u) = F_j N(u,v_j,\sigma_w).
\end{equation}
\item Velocity sampling: We then generate random samplings of the velocities for each star. The position of the $j$th Gaussian in spectral space ($v_j(\sigma)$, the $j$th star's velocity) is randomly drawn from a normal distribution whose standard deviation (i.e., its velocity dispersion) is selected from the range 0.5--60~km~s$^{-1}$. We use a range of input velocity dispersions in order to check for effects as a function of velocity dispersion:
\begin{align}
\begin{split}
P(v_j,\sigma) = N(v_j,0,\sigma) \\
\sigma \in [0.5,60]
\end{split}
\end{align}
\item Summed fiber profile: We then generate the final summed spectrum that will be used for the velocity measurement. For each of the fibers, the Gaussians are scaled by their relative flux contributions $F_j$ computed above and stacked in spectral space.
\begin{equation}
G(u|\sigma) = \sum_{j \in \text{stars}}{g_j(u)}
\end{equation}
where we explicitly denote the random variable nature of the dependence on $\sigma$ to avoid confusion.
\item Velocity centroid: For each fiber, the simulated spectrum of stacked Gaussians is then cross-correlated with a normalized Gaussian of width $\sigma_w$; the location of the peak of the cross-correlation gives the radial velocity $v_m$:
\begin{equation}
v_m = \operatorname*{arg\,max}_u \left[\, G(u|\sigma) \star N(u,0,\sigma_w) \,\right].
\label{eq:cross-correlate-gaussians}
\end{equation}
\item Crowding bias: Finally, we use simulations to measure a bias for each particular star in the specified fiber or slit. For each fiber, we draw a $\sigma$ and the $v_j$ 10,000 times over all stars in the fiber. Combining these via Equation \eqref{eq:cross-correlate-gaussians}, we obtain 10,000 estimates of the measured velocity in that fiber, which are then used to compute the range of velocities measured from that fiber (i.e., the standard deviation). Thus, the measured velocity in a fiber, $v_i$, is drawn from this output distribution. These simulations provide a map from input velocity dispersion to output velocity dispersion for a given flux distribution of contributing stars. This map, $\sigma_i(\bar{F}_i,\sigma)$, is produced for each fiber $i$ in M08 with HST imaging and is shown in \Cref{fig:fibermod}.
\end{enumerate}
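Steps 2-6 above can be condensed into a short Monte Carlo. The sketch below uses the analytic fact that cross-correlating a sum of Gaussians of width $\sigma_w$ with a Gaussian of width $\sigma_w$ yields a sum of Gaussians of width $\sqrt{2}\,\sigma_w$; the line width, velocity grid, trial count, and flux vectors are illustrative values of ours, not the paper's:

```python
import numpy as np

VGRID = np.linspace(-150.0, 150.0, 3001)   # km/s search grid

def measured_velocity(fluxes, velocities, sigma_w=15.0):
    """Steps 2-5: stack flux-weighted Gaussian line profiles and
    return the peak location of the cross-correlation with a Gaussian
    of width sigma_w (each term is a Gaussian of width sqrt(2)*sigma_w
    centered on that star's velocity)."""
    ccf = np.zeros_like(VGRID)
    for f, v in zip(fluxes, velocities):
        ccf += f * np.exp(-((VGRID - v) ** 2) / (4.0 * sigma_w**2))
    return VGRID[np.argmax(ccf)]

def output_dispersion(fluxes, sigma_true, n_trials=2000, seed=1):
    """Step 6: Monte Carlo the measured fiber velocity for star
    velocities drawn from N(0, sigma_true); the scatter of the result
    is the (biased) output dispersion sigma_i."""
    rng = np.random.default_rng(seed)
    vms = [measured_velocity(fluxes,
                             rng.normal(0.0, sigma_true, len(fluxes)))
           for _ in range(n_trials)]
    return float(np.std(vms))

s_dom = output_dispersion([0.9, 0.05, 0.05], 10.0)   # one dominant star
s_even = output_dispersion([0.2] * 5, 10.0)          # even crowding
```

A fiber dominated by one star nearly preserves the input dispersion, while even flux contributions pull the measured velocity toward the flux-weighted mean and shrink the recovered dispersion, which is the behavior seen in \Cref{fig:fibermod}.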
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{CROWDING/M08FIBERMOD.pdf}
\caption{Monte Carlo simulation of the fractional change in velocity dispersion due to crowding vs. maximum flux percentage contribution of one star, per fiber. In the notation of the equations in this section: $(\sigma -\sigma_i(\bar{F}_i,\sigma))/\sigma$ vs $\bar{F}_i $. Each circle represents a fiber. The black vertical line shows the median maximum flux for VIRUS-W points. Due to crowding, for any given intrinsic velocity dispersion $\sigma$, the velocity one would measure from that fiber would be one extracted from a distribution with velocity dispersion $\sigma_i(\bar{F}_i,\sigma)$. In general, more crowding produces a lower maximum flux per star in fiber and correspondingly, a bigger fractional change in velocity dispersion.}
\label{fig:fibermod}
\end{figure}
At this point, we have determined the local correction for each fiber, and we combine these local corrections to determine the effect on the integrated velocity dispersion of the co-added fibers within a bin or annulus.
Using our simple model and mapping allows us to construct a maximum likelihood estimator (MLE) for the velocity dispersion of individual radial velocity measurements from fibers in M08 with crowding issues. The construction of such an MLE is only applicable to the fibers with HST coverage, so we restrict our calculations and adjustments to fibers which satisfy this criterion.
The strength of our estimates relies on the ability of the catalogs to detect fainter stars. By doing so, we can estimate the maximum flux percentage of the brightest star within the fiber and make the appropriate corrections; the shallower the catalog, the smaller the corrections. We measure the difference in the maximum flux percentage of the brightest star between the different catalogs. Using the shallow catalog from HST, we obtain $91\%\pm 8\%$ for M08 and $40\% \pm 17\%$ for VIRUS-W; using the deep catalog, we obtain $79\% \pm 16\%$ for M08 and $27\% \pm 13\%$ for VIRUS-W. Given that the effect of using the deep versus the shallow catalog is about a $10\%$ difference, we use the shallow catalog only when lacking deeper imaging. This small difference will not impact the conclusions.
In practice, velocity dispersions are estimated by radial bins of fibers.
For a given radial bin, we calculate the probability of measuring a particular velocity $v_{m_i}$,
\begin{align}
P(v_{m_i}|\sigma_i,\sigma_e)= N(v_{m_i},\mu,\sigma_e) \otimes N(v_{m_i},\mu,\sigma_i)
\end{align}
where $\otimes$ denotes convolution,
\begin{align}
\mu = \mu(\sigma_i,\sigma_e) &=
\sum_i \frac{v_{m_i}}{\sigma_i^2+\sigma_e^2}{\Bigg/} \sum_i \frac{1}{\sigma_i^2+\sigma_e^2}
,
\end{align}
and
\begin{align}
\sigma_i &= \sigma_i(\bar{F}_i,\sigma).
\end{align}
The mapping, $\sigma_i$, is from our simple model above, the index $i$ represents the $i$th fiber in the bin, $\bar{F}_i$ is the flux vector of stars in the $i$th fiber, $\sigma_e$ is the radial velocity measurement error, and $\sigma$ is the velocity dispersion in the bin without any crowding correction.
Thus, we maximize the likelihood $\mathcal{L}(\sigma)$ for a given bin to get our best estimate of the true velocity dispersion,
\begin{align}
\begin{split}
\mathcal{L}(\sigma) &= \prod^N_i P\left(v_{m_i}|\sigma_i,\sigma_e\right),\\
\hat{\sigma} &= \operatorname*{arg\,max}_\sigma \, \mathcal{L}(\sigma).
\end{split}
\end{align}
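The maximization can be sketched as a grid search over trial dispersions. In the sketch below we replace the simulated map $\sigma_i(\bar{F}_i,\sigma)$ by a simple linear bias $\sigma_i = b_i\,\sigma$; the bias factors, measurement error, and simulated bin are all invented for the example:

```python
import numpy as np

def corrected_dispersion(v_m, bias, sigma_e,
                         sig_grid=np.linspace(0.5, 60.0, 400)):
    """Maximize the bin likelihood over a grid of trial dispersions:
    each measured velocity is modeled as N(mu, sqrt(sigma_i^2 +
    sigma_e^2)) with sigma_i = bias_i * sigma standing in for the
    simulated crowding map, and mu the inverse-variance-weighted mean."""
    v_m = np.asarray(v_m, dtype=float)
    bias = np.asarray(bias, dtype=float)
    best_sig, best_ll = sig_grid[0], -np.inf
    for sig in sig_grid:
        var = (bias * sig) ** 2 + sigma_e**2
        mu = np.sum(v_m / var) / np.sum(1.0 / var)
        ll = -0.5 * np.sum(np.log(var) + (v_m - mu) ** 2 / var)
        if ll > best_ll:
            best_sig, best_ll = sig, ll
    return best_sig

# Toy bin: crowding suppresses the per-fiber scatter to 0.6 sigma.
rng = np.random.default_rng(3)
v_sim = rng.normal(0.0, np.hypot(0.6 * 10.0, 2.0), 500)
sig_hat = corrected_dispersion(v_sim, np.full(500, 0.6), 2.0)
```

In this toy bin the estimator approximately recovers the true 10~km~s$^{-1}$ from velocities whose raw scatter is only about 6.3~km~s$^{-1}$, mirroring the upward corrections of \Cref{fig:mateocorr}.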
Our corrections to the innermost M08 bins are shown in \Cref{fig:mateocorr}. Even though the coverage is not complete, one can extrapolate the results to the other fibers making use of the surface density profile. These crowding corrections agree well with the velocities we measured by binning the spectra of 10-12 fibers and performing integrated-light measurements (cf.\ Table \ref{tab:losvds}).
Given the above issues with having to deal with crowding when using individual velocities, we rely instead on integrated light measurements in the inner regions for the dynamical models. As we have shown, the kinematics from both are in agreement when applying the bias corrections based on the simulations.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{CROWDING/mateo_corrections.png}
\caption{Estimated corrections for the velocity dispersion measurements from individual stars in M08 when crowding is taken into account. The corrections are only for stars that fall within existing HST images. The orange points are uncorrected velocities, the blue ones are corrected. The error bars increase in size, but at every radius the corrected velocity dispersion increases.}
\label{fig:mateocorr}
\end{figure}
\section{TIDAL EFFECTS}
\label{sec:TIDAL_EFFECTS}
It is important to consider tidal effects from the Milky Way on Leo I. Since our models assume dynamical equilibrium, if stars are actively being stripped out of the gravitational potential, we must include their effect on the kinematics, or at least quantify the magnitude of tidally stripped stars. Tidal effects are key to measuring the DM contribution at large radii, but are not expected to alter the central kinematics. We try a few tidal models to explore their effects.
Early measurements of Leo I's tidal radius \citep{Irwin1995} fit a truncated isothermal sphere to the luminosity under the assumption that mass follows light, estimating a tidal radius of $0.9 \pm 0.1$ kpc.
\citet{sohn2012} use HST imaging to study the proper motion of Leo I and infer possible pericentric approaches. With a combination of orbital analysis and Monte Carlo simulations they predict a fairly elliptical orbit, with a pericentric distance of $101.0 \pm 34.4$ kpc. For different Milky Way models, they find a tidal radius of 3--4 kpc. This result coincides with the Gaia DR2 measurement \citep{gaia2018}. Their three different Galactic models give pericentric distances of $89.5^{+55.9}_{-47.5}$ kpc, $112.6^{+58.4}_{-60.6}$ kpc, and $86.9^{+59.2}_{-44.4}$ kpc.
This result seems at odds with earlier indirect orbit estimates by \citet{sohn2007} and M08, based mainly on a ``break'' radius at $400$ pc; beyond this radius, the kinematics show deviations from axisymmetry in the angular distribution of the velocities and a rise in the velocity dispersion. $N$-body simulations show that such a rise typically appears as an artifact of the inclusion of tidally unbound stars \citep{read2006,Klimentowski2007,Munoz2008}. Consistent with the kinematics, as pointed out by M08, there is a significant stellar population change at this break radius, with younger stars concentrated within it. Indeed, hybrid hydro/$N$-body simulations (as in \citealt{Mayer2005}) predict that tidal stirring in dwarfs causes strong gaseous inflows leading to enhanced central star formation.
Interpreting this break radius in both stellar composition and kinematics as the effect of tides at Leo I's closest approach, both papers \citep{sohn2007,mateo2008} estimate pericentric distances smaller than 30 kpc.
In a later paper, \citet{sen2009} run several significance tests on M08's kinematic sample to determine the robustness of the break radius. They conclude that even though a modest streaming motion of at most 6.2~km~s$^{-1}$\ is significant at the 5\% confidence level, the break radius, with a best-fit value of 447 pc, remains essentially unconstrained (from 115.2 pc to 928.9 pc) with that data set.
\citet{lokas2008} use an ``interloper removal'' method to remove unbound stars. Although the authors show the method to be satisfactory for a galaxy in which mass follows light, if the mass-to-light ratio increases strongly with radius (as for a galaxy embedded in a cuspy or cored DM halo; \citealp{Battaglia2013}), it can remove an important fraction of genuine members and artificially produce a declining velocity dispersion.
In light of these discrepancies, we sample different tidal radii around the best-fit radius of \citet{sen2009}. Our approach uses a dynamical model for the bound stars and an input number density for stars that have been tidally removed. For each assumed tidal radius, we modify the 3D luminosity density accordingly and calculate the projected dispersion profile for that tidal model. We then compare the dispersion profile of the tidal model to the one with no tidal effects and hence determine the relative corrections to the projected kinematics. These relative corrections are applied to the as-measured dispersion profiles, and the tidally corrected dispersion and projected density profiles are then the inputs for the orbit-based modeling. This procedure approximates the effects of tidal forces on the inputs to the dynamical models.
The input dynamical model is an isotropic model fitted to the measured kinematics, assuming all radii are in dynamical equilibrium. This model provides 3D number density and projected kinematic profiles. Thus, our first assumption considers no tidal effects. Our second assumption has a tidal radius of 505$\arcsec$. For this case we measure the light in the 3D density that falls outside of 505$\arcsec$, remove that light from all inner radii, and then re-measure the projected kinematics and surface brightness profile. The third assumption has a tidal radius of 430$\arcsec$, for which we perform the same operation. The ratios of the radial profiles, both surface brightness and dispersion, between each tidal model and the input dynamical model are then applied to the measured quantities.
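The final rescaling step can be sketched as follows; this is a minimal illustration (the function and variable names are ours, not from the modeling code), in which the tidal-to-full model dispersion ratio is interpolated onto the measured radii:

```python
import numpy as np

def tidal_correction(r_meas, sigma_meas, r_model, sigma_tidal, sigma_full):
    """Scale measured dispersions by the tidal-model / full-model ratio.

    r_meas, sigma_meas : radii and dispersions of the measured profile
    r_model            : radii at which the two model profiles are evaluated
    sigma_tidal        : dispersion profile of the tidally truncated model
    sigma_full         : dispersion profile of the untruncated input model
    """
    ratio = np.interp(r_meas, r_model,
                      np.asarray(sigma_tidal) / np.asarray(sigma_full))
    return np.asarray(sigma_meas) * ratio
```

The same operation applies to the surface brightness profile.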
Figure \ref{fig:tidal} shows the resulting surface brightness and velocity dispersion profiles for two selected tidal radii at 430$\arcsec$\ and 505$\arcsec$. As is evident from the figure and the calculations, a smaller assumed tidal radius lowers the velocity dispersions and increases their uncertainties. The effects are most drastic for the outer bins, since a higher fraction of the light there would be tidal light. We use these modified profiles as part of our modeling, as explained in Section \ref{sec:DYNMOD}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{TIDAL_EFFECTS/TIDAL.png}
\caption{Velocity dispersion (upper) and luminosity density (lower) as a function of radius. The black lines are the measured values. Yellow and green lines correspond to the modified profiles when assuming tidal radii of 430$\arcsec$ and 505$\arcsec$ respectively. Removing different ``sheets'' of light has different effects on the measured velocity dispersion. }
\label{fig:tidal}
\end{figure}
As a sanity check on the approach taken above, we can use a modification to the orbit-based models. We modify the best-fit orbit model of the full kinematic data set in a way consistent with the assumptions of tidal effects. This model includes a dark halo component. Our approach assumes that orbits extending beyond the tidal radius are not in dynamical equilibrium and that their contribution to the projected velocity dispersion in the inner bins needs to be removed; we therefore remove orbits whose energies place them outside of the specified tidal radius by forcing their weights to zero. The resulting orbital distribution should then produce the modified kinematic radial profiles that we assume.
A limitation of this procedure is that setting orbital weights to zero creates an internally inconsistent model. We remove orbits based on both tidal radii of 430$\arcsec$ and 505$\arcsec$, and then re-calculate the projected kinematics. In both cases, the velocity dispersion profiles show similar, though not identical, behavior to that in \cref{fig:tidal}. The slopes of the velocity dispersion profiles are nearly the same, with a small (1~km~s$^{-1}$) offset in normalization. This offset is expected since we are removing mass from the system but not re-calculating the mass within the orbit model. Since the shape of the velocity dispersion profile is nearly the same as our approximated one, we expect the same result for the underlying potential in the central regions.
We stress that our approach to including tidal effects serves as an aid to understanding how tidally-perturbed stars could modify our conclusions. A proper model would include the Milky Way's actual effect on the stellar orbital distribution and on the surface brightness profile of the stripped stars. Given the check we performed and the already large uncertainties on the dark halo, any refinement in the modeling of tidal effects would have a minimal effect on our conclusions.
\begin{figure*}[!h]
\centering
\subfloat{
\includegraphics[width=\columnwidth]{MODELS/0.0_3.png}
}
\subfloat{
\includegraphics[width=\columnwidth]{MODELS/0.0_4.png}
}\\
\subfloat{
\includegraphics[width=\columnwidth]{MODELS/10.3.png}
}
\subfloat{
\includegraphics[width=\columnwidth]{MODELS/7.4.png}
}
\caption{$\chi^2$ distributions for every model. MODEL 1: every velocity data point included. MODEL 2: all but the central and last M08 velocity bins included. MODEL 3: removed a sheet of ``tidal light'', considered to be the stars outside a 430$\arcsec$\ tidal radius. MODEL 4: removed a sheet of ``tidal light'', considered to be the stars outside a 505$\arcsec$\ tidal radius. The first panels show that none of the best-fit models require a stellar $M_*/L$ larger than $\sim 5.0$. The second panels show that every model requires a central black hole of mass $\sim$ \mbhApproximate $\times 10^6$ \Msun. The circular velocity is not well constrained in any of the models, although there is a slight preference for values above 40~km~s$^{-1}$\ in the ones that are not tidally corrected. The core radius $r_c$ is also poorly constrained for all except Model 1, where a value of $\sim$1.7 kpc is favored, implying a flat central core. We sample $v_{cir}$ down to 10~km~s$^{-1}$, and for these plots we restrict the plot range around the most probable values in order to show the variations better.
}
\label{fig:models}
\end{figure*}
\begin{figure*}
\centering
\subfloat{
\includegraphics[width=0.9\textwidth]{MODELS/conf_fill0.0_3.pdf}
}\\\vspace{-0.5cm}
\subfloat{
\includegraphics[width=0.9\textwidth]{MODELS/conf_fill0.0_4.pdf}
}\\\vspace{-0.5cm}
\subfloat{
\includegraphics[width=0.9\textwidth]{MODELS/conf_fill10.3.pdf}
}\\\vspace{-0.5cm}
\subfloat{
\includegraphics[width=0.9\textwidth]{MODELS/conf_fill7.4.pdf}}
\caption{MODEL 1: every velocity data point included. MODEL 2: all but the central and last M08 velocity bins included. MODEL 3: removed a sheet of ``tidal light'', considered to be the stars outside a 430$\arcsec$\ tidal radius. MODEL 4: removed a sheet of ``tidal light'', considered to be the stars outside a 505$\arcsec$\ tidal radius. Left: enclosed mass vs. radius for black hole (green), total (black), stellar (orange) and DM (blue). All the models with $\Delta\chi^2<1$ are displayed in their respective color's lighter shade. Middle: velocity dispersion tensor anisotropy vs. radius. All the models with $\Delta\chi^2<1$ lie within the shaded region. Right: velocity dispersion as a function of radius. All the models with $\Delta\chi^2<1$ are in blue.}
\label{fig:derpar}
\end{figure*}
\section{DYNAMICAL MODELS}
\label{sec:DYNMOD}
We aim to determine the mass distribution of the galaxy that best fits both our spectroscopic and photometric data. The distribution, in general, can be characterized by the following equation:
\begin{equation}
\rho(r)=\frac{M_*}{L}\nu(r)+\rho_{DM}(r)+M_\text{BH}\,\delta^3(\mathbf{r})
\end{equation}
where:
\begin{itemize}
\item $\rho(r)$ is the matter density profile.
\item $\frac{M_*}{L}$ is the stellar mass-to-light ratio, assumed constant with radius.
\item $\nu(r)$ is the deprojected luminosity density profile.
\item $\rho_{DM}(r)$ is the dark matter density profile.
\item $M_\text{BH}$ is the black hole mass.
\item $\delta^3(\mathbf{r})$ is a three-dimensional Dirac delta function centered at $r=0$.
\end{itemize}
For the DM profile, we use a cored logarithmic potential, which translates to a dark matter density $\rho_{DM}(r)$:
\begin{equation}
\rho_{DM}(r)=\frac{v_c^2}{4 \pi G} \frac{3r_c^2+r^2}{(r_c^2+r^2)^2}
\end{equation}
where $v_c$ is the asymptotic circular speed at $r=\infty$ and $r_c$ the core radius. These profiles have a flat central core of density $\rho_c = 3 v_c^2 / 4 \pi G r_c^2$ for $r \lesssim r_c$ and an $r^{-2}$ profile for $r>r_c$.
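For reference, this density integrates to the closed-form enclosed mass $M(r) = v_c^2\,r^3/[G\,(r_c^2+r^2)]$, equivalent to the logarithmic-potential circular speed $v^2(r) = v_c^2\,r^2/(r_c^2+r^2)$. A minimal numerical sketch (the parameter values are illustrative, not fitted values from this work; $G$ is in kpc\,(km\,s$^{-1}$)$^2\,M_\odot^{-1}$):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def rho_dm(r, v_c, r_c):
    """Cored logarithmic DM density in Msun/kpc^3 (r, r_c in kpc; v_c in km/s)."""
    return v_c**2 / (4.0 * np.pi * G) * (3.0 * r_c**2 + r**2) / (r_c**2 + r**2)**2

def m_enclosed(r, v_c, r_c):
    """DM mass enclosed within r, from integrating rho_dm analytically."""
    return v_c**2 * r**3 / (G * (r_c**2 + r**2))
```

At $r \ll r_c$ the density approaches the flat core value $3v_c^2/(4\pi G r_c^2)$, and at $r \gg r_c$ it falls as $r^{-2}$, as stated above.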
We use orbit-based models for the dynamical analysis. These orbit-based dynamical models were first used by \citet{Schwarzschild1979}, with details in \citet{Gebhardt2000,Thomas2004,Siopis2009}.
The input to the models are the stellar luminosity density profile (which we describe in Section \ref{sec:LUMINOSITY}) and the kinematics.
The variables are the DM density profile's parameters, the stellar mass-to-light ratio and the black hole mass.
With a defined potential, we generate representative orbits and then find the orbital weights that maximize the likelihood of the fit to the kinematics.
We then modify the variables, re-run the orbit library and fit, providing a $\chi^2$ for each parameter set. After sampling the variables, we produce a distribution of fit quality. For each model, we run approximately 20,000 stellar orbits, fitted to the 28 velocity profiles: the 23 VIRUS-W LOSVDs and the 5 LOSVDs generated from the archival individual velocities at large radii. We sample the black hole mass from 0 to $10^7\,M_{\odot}$, the stellar $M/L$ from 0.01 to 5, the DM scale radius from 0 to 3 kpc, and the DM circular velocity from 10 to 60~km~s$^{-1}$. The exact sampling within those ranges is discussed below. For the light profile, we deproject the surface brightness profile presented above. Since Leo I is flattened with an axis ratio of 0.8, we assume an edge-on configuration. This deprojection is then unique, and we use the algorithm given in \citet{Gebhardt_1996}. All models are run on the Texas Advanced Computing Center; we run approximately 30,000 models, each taking about an hour on a single CPU.
\subsection{DATA BINNING AND PARAMETER SPACE SAMPLING}
Due to the tidal and crowding issues raised in prior sections we use both the VIRUS-W and M08 datasets, replacing M08 with VIRUS-W data within 178$\arcsec$, where crowding is significant. The data are binned in a polar grid of 20 radial bins and 5 angular bins. The VIRUS-W data extend to $80\arcsec$, filling the polar bins as available, whereas M08 data are divided into 5 radial bins (from 178$\arcsec$\ to 668$\arcsec$) to include sufficient fibers in each bin.
To deal with the possible effects of tidal disruption, we arrange the data into four different configurations:
\begin{itemize}
\item MODEL 1: Considering all data above
\item MODEL 2: Removing the last kinematic radial bin
\item MODEL 3: Removing a sheet of stars assuming the tidal radius is at 430$\arcsec$
\item MODEL 4: Removing a sheet of stars assuming the tidal radius is at 505$\arcsec$
\end{itemize}
We will refer to these models in terms of their number from here on.
When searching for parameters that minimize $\chi^2$, we start with a sparse grid search, which we combine with a Latin hypercube for the initial sampling of the space. A Latin hypercube randomly samples the space while ensuring that each sample is the only one in each axis-aligned hyperplane containing it, a method whose number of samples is independent of the dimensionality of the space. Subsequent minimization is carried out by constrained optimization using response surfaces, with radial basis functions as the response function. Briefly, the algorithm uses radial basis functions to interpolate over the samples; it then selects a new point that both minimizes the interpolant and is farther than a given distance from any other point. With each iteration, that distance is modified to avoid getting stuck in local minima. Details of the original algorithm can be found in \citet{blackbox}.
\subsection{RESULTS}
\label{sec:RESULTS}
\Cref{fig:models} shows the $\chi^2$ distributions for all models and parameter selections. In each figure, each point represents an individual parameter selection. For reference, the red curve is a convex hull enveloping the lower bound of the $\chi^2$ values. To define the best-fit parameters and their uncertainties, we use this red curve. We find where the curve rises by $\Delta\chi^2 = 1$ from the minimum on either side to define the 1-$\sigma$ range, and take the mean of the two crossing points to represent the best-fit value. Table \ref{tab:results} presents these values for each parameter and model.
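The interval extraction just described can be sketched as follows (a simplified stand-in for the convex-hull envelope construction, assuming a roughly convex $\chi^2$ curve; the helper name is ours):

```python
import numpy as np

def delta_chi2_interval(param, chi2, delta=1.0):
    """Best-fit value and 1-sigma range from crossings of min(chi2) + delta."""
    order = np.argsort(param)
    p, c = np.asarray(param)[order], np.asarray(chi2)[order]
    i_min = int(np.argmin(c))
    target = c[i_min] + delta
    # interpolate the crossing point on each side of the minimum
    lo = np.interp(target, c[:i_min + 1][::-1], p[:i_min + 1][::-1])
    hi = np.interp(target, c[i_min:], p[i_min:])
    return 0.5 * (lo + hi), lo, hi
```

For a quadratic $\chi^2$ curve this reduces to the usual symmetric 1-$\sigma$ interval about the minimum.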
All models prefer a black hole of $\sim$ \mbhApproximate $\times 10^6$ \Msun. Stellar mass-to-light ratios are poorly constrained, with upper limits of $\sim 2$ for the models without any tidal corrections and $\sim 5$ for the others, implying poorly constrained dynamical estimates for the stellar contribution. These are consistent with external constraints on the total mass of Leo I from surface brightness and star formation history studies \citep{McConnachie_2012}.
The DM halo parameters are the least constrained, with a tendency toward high circular velocities ($\sim 50 \,\mathrm{km/s}$) and core radii closer to $1 \,\mathrm{kpc}$.
\begin{deluxetable*}{lccccccccc}
\tabletypesize{\normalsize}
\tablecaption{Leo I best fit density parameters}
\tablewidth{0pt}
\tablehead{
\colhead{Model} &\colhead{Description}&\colhead{$\frac{M_*}{L} ~ $} & \colhead{$M_\text{BH}$} & \colhead{$v_{\text{cir}}$}& \colhead{$r_c $} & \colhead{$\frac{M_\text{dyn}^{300}}{L} ~$} & \colhead{$\Delta \chi_\text{NO BH}^2$} & \colhead{$M_*^{300}$} & \colhead{$M_\text{DM}^{300} $} \\
& &
$[\frac{M_{\odot}}{L_{\odot}}]$ & $[10^6 M_{\odot}]$ & $[\frac{\text{km}}{\text{s}}]$&$[\text{kpc}]$ & $[\frac{M_{\odot}}{L_{\odot}}]$ & & $[10^6 M_{\odot}]$ & $[10^6 M_{\odot}]$\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10)
}
\startdata
1 & All data & $<1.8$ & $3.3 \pm 1.8$ & $>34.5$ & $1.5 \pm 0.6$ & $2.6 \pm 0.3$ & $ 13.8 $ & $4.2\pm4.1 $ & $7.5\pm 1.9$\\
2 & R$<500$\arcsec & $<1.8$ & $3.5 \pm 1.9$ & $>34.8$ & $1.4 \pm 0.5$ & $2.6 \pm 0.3$ & $ 10.3 $ & $ 3.3 \pm 3.3$ & $7.9 \pm 1.4$ \\
3 & $r_{tidal}=430$\arcsec & $<4.1$ & $3.2 \pm 2.2$ & $-$ & $ >0.6 $ & $3.2 \pm 0.4$ & $ 7.1 $ & $9.9 \pm 9.9$ & $ 5.9 \pm 5.6$ \\
4 & $r_{tidal}=505$\arcsec & $<4.7$ & $3.8 \pm 3.0$ & $-$ & $ >0.48 $ & $4.0 \pm 0.4$ & $ 6.4 $ & $12.0 \pm 12.0$ & $ 7.7 \pm 7.3$\\
& & & & $c$ & $r_s [\text{kpc}]$ & & & & \\\hline
5 & NFW & $0.7 \pm 0.5$ & $2.1 \pm 1.6$ & $7.0 \pm 3.2$ & $>5.6$ & $2.3 \pm 0.3$ & $ 14.3 $ & $ 3.0 \pm 1.9$ & $13.8 \pm 1.0$ \\
\enddata
\tablecomments{The best fit values extracted from \cref{fig:models}. Errors are from a $\Delta\chi^2 = 1$. When the errors extended further than the surveyed range we included inequality signs. Unconstrained parameters are indicated with a dash. Column (1): model name. Column (2): model description. Column (3): stellar mass-to-light ratio. Column (4): black hole mass. Columns (5) and (6): circular velocity and core radius for Models 1 through 4 and the NFW concentration parameter and scale radius for Model 5. Column (7): dynamical $M/L$ inside 300 pc. Column (8): difference in $\chi^2$ between the best solution including a black hole and that without one. Columns (9) and (10): the enclosed stellar mass and DM mass at 300 pc.}
\label{tab:results}
\end{deluxetable*}
\Cref{fig:derpar} shows derived quantities for all parameter selections within $1\sigma$. Within these figures, the panels represent total and stellar enclosed mass, anisotropy in the velocity dispersion, and projected velocity dispersion as a function of radius.
We quantify the anisotropy in the velocity dispersion tensor with $\sigma_r/\sigma_t$, the ratio of the radial to the tangential velocity dispersion. The tangential dispersion $\sigma_t$ is defined as
\begin{equation}
\sigma_t \equiv \sqrt{\frac{1}{2}(\sigma^2_\theta+\sigma^2_\phi+\nu^2_\phi)}
\end{equation}
where $\nu_\phi$ is the rotational velocity (streaming motions in $r$ and $\theta$ are assumed to be zero).
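A direct transcription of this definition (the inputs are the spherical dispersion components and the rotation velocity in common units; the function name is ours):

```python
import numpy as np

def sigma_ratio(sigma_r, sigma_theta, sigma_phi, v_phi=0.0):
    """Radial-to-tangential ratio, with sigma_t as defined above."""
    sigma_t = np.sqrt(0.5 * (sigma_theta**2 + sigma_phi**2 + v_phi**2))
    return sigma_r / sigma_t
```

An isotropic, non-rotating point has $\sigma_r/\sigma_t = 1$; values above 1 indicate radially biased orbits.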
Overall, the truncated models contain close to half of the enclosed mass of the non-truncated ones, the latter having an enclosed mass of $\sim 10^8$ $M_{\odot}$.
For Models 1 and 2, which do not take into account tidal effects, the anisotropy shows large variations, such that the stars arrange themselves on orbits that start isotropic, then become extremely radial, and end up isotropic as a function of radius.
In contrast, models including tidal effects have smoothly varying anisotropy profiles.
The model with the most modest tidal effects, Model~4, is nearly isotropic throughout the galaxy.
The rightmost panels in \Cref{fig:derpar} show the projected velocity dispersion profiles as a function of radius for both data and models. For the blue points that represent the models, we include all models within $\Delta\chi^2=1$ of the best fit. The variation in projected dispersion is very small for this set, as can be seen in the figure. The models represent the kinematic data well in nearly all regions.
\Cref{fig:ml} shows the dynamical $M/L$ ratio vs. logarithmic radius for the four assumed models. It is clear that the central kinematics are dominated by the black hole, and the variation between models is minimal in that region. The largest variation between models occurs between 100$\arcsec$ and 200$\arcsec$, the region where the stellar component dominates. At larger radii, the $M/L$ values rise due to the inclusion of the dark halo when using the full data sets (Models 1 and 2). For the models with a tidal radius, the $M/L$ at large radii is consistent with being constant.
Overall, the most robust prediction is that of the black hole mass.
For the adopted cored logarithmic DM profile, the various assumptions for the tidal effects do not significantly influence the estimates, which range from \mbhModelThree $\times 10^6$ \Msun\ to \mbhModelFour $\times 10^6$ \Msun. Table \ref{tab:results} summarizes all the results for the different models discussed above.
\begin{figure}[h]
\centering
\includegraphics[trim= 0 0 0 700 ,clip,width=0.45\textwidth]{MODELS/ml.pdf}
\caption{Derived dynamical $M/L$ vs radius for the various models in log scale. Regardless of tidal radius and kinematical assumptions, the kinematics of the central region of the galaxy are dominated by the black hole. The presence of a relatively low amount of DM is required only in the outer regions of the galaxy. In the $1^\prime-2^\prime$ radial region, a typical stellar population $M/L$ is adequate to fit the models. }
\label{fig:ml}
\end{figure}
\subsection{Central Mass Density Profile}
To investigate whether the black hole mass is influenced by the assumed DM potential, we also run dark matter models that have a central density increase.
A standard model of the dark matter profile, especially for systems without significant influence from baryonic processes, is one defined by \citet{Navarro_1997} (NFW). This model has a central density rise compared to the cored profile that we use as default. The NFW profile is defined by two parameters, the concentration $c$ and the scale radius $R_s$.
We run NFW models for Model 1, where we include all kinematic observations. This comparison will be representative of potential changes. We sample concentration indices from 3 to 60 and scale radii from 0.1 to 20~kpc, with ranges in black hole mass and stellar $M/L$ similar to those used for the cored logarithmic potential. The best-fit black hole mass is $(2.1\pm1.6)\times 10^6$ $M_{\odot}$, the stellar $M/L$ is $0.7\pm0.5$, $c$ is $7.0\pm3.2$, and the scale radius is $\gtrsim 5.6$ kpc. \Cref{fig:nfw_profile} presents the results.
The overall fit to the kinematic data is worse for the NFW profile compared to the cored logarithmic profile. The best-fit black hole mass from the NFW profile is lower but consistent with the value obtained from the logarithmic profile.
The change in $\chi^2$ between the model with no black hole and the best-fit black hole is 14, a larger difference than that measured in the other models.
Thus, the black hole mass is preferred at a stronger level with the NFW profile.
The stellar $M/L$ is even smaller, making the contribution to the potential from the stars even less important.
Since the overall fit is worse and the stellar $M/L$ is pushed into an even more unrealistic regime, the NFW profile does not appear to be a good representation of the density profile for Leo I.
We adopt the cored logarithmic profile as the model that best represents the dark matter contribution for Leo I.
A value of 7 for the concentration parameter preferred by the NFW models is atypical for a dwarf spheroidal galaxy, with a value $\sim 15$ being more typical in the literature \citep{Dutton2014,Ludlow2013}. If we restrict to only these higher concentration parameters ($c > 12$), the inferred black hole mass remains roughly constant at $2\times 10^6$ $M_{\odot}$, and the $\chi^2$ increases. In the extreme NFW case of $c > 50$, the black hole mass uncertainty increases to include low masses, but the model becomes unrealistic with a large $\chi^2$. For comparison, restricting to a smaller core radius in the cored logarithmic profile $r_c < 1$ kpc, changes $\Delta \chi^2$ by a small amount and similarly leaves the black hole mass unaffected. Thus, using either a more standard NFW profile or the one found here, the significance of requiring a black hole remains the same.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{MODELS/0.0_3nfw.png}
\caption{$\chi^2$ distributions for a model taking an NFW dark matter profile. The top-left panel is the stellar M/L. Top-right is the black hole mass. Bottom-left is the NFW concentration parameter. Bottom-right is the NFW scale radius.}
\label{fig:nfw_profile}
\end{figure}
\section{DISCUSSION}
\label{sec:DISCUSSION}
The two main conclusions of this work are that we find a central black hole mass of $\sim$ \mbhApproximate $\times 10^6$ \Msun, and a dark halo profile that is much more uncertain than in previous studies. A black hole mass this large in Leo~I is significant in many respects: it is the first detection of a black hole in a dwarf spheroidal using spatially-resolved kinematics, its mass is similar to the total stellar mass of the system, and its mass is comparable to that of the black hole in the center of the Milky Way. While the uncertainty on the mass is large, the no-black-hole models are significantly ruled out. Given the implications of such a result, we discuss caveats and potential concerns regarding the robustness of the black hole mass measurement.
A first caveat is that these results necessarily depend on the various mass profiles we have assumed. The dynamical models rely on a tracer population (i.e., the stars) that responds only to the overall gravitational potential. The most robust method for determining the mass density profile is a technique that does not rely on parametric models. One example is given in \cite{Jardel2013}, where the mass density profile is determined in radial bins. With the mass density profile measured, one can then fit various models to that profile to split it into individual components.
This brute force method is impractical given our compute resources.
For this paper, we apply the more traditional approach of parameterizing the mass density profile. We use four parameters characterizing a black hole, a two-component dark halo and a stellar component, under two different suites of models. Thus, the robust result is the underlying sum of these three components. For the individual components there can be degeneracies among the parameters. An obvious degeneracy is between the stellar mass-to-light ratio and the DM profile. We have explored this specific degeneracy by studying the models with mass-to-light ratios typical for these stars, where stellar population models suggest values around 2. When restricting the range of the stellar mass-to-light ratio in this way, the results for the black hole mass change little, staying well within the 68\% confidence band.
Thus, even if we force the stellar mass-to-light ratio to a standard value, we would measure the same black hole mass.
For the black hole mass, the results are the most robust since our kinematics are well measured in the central regions.
If either the stars or the DM were to mimic the effect of a black hole, it would require an unrealistically steep central density profile.
The nearly constant velocity dispersion within 200$\arcsec$ at around 12 km~s$^{-1}$, and in particular the central value at 15$\arcsec$, is the primary driver for a model preference of a central black hole.
The central kinematic point will have the most influence, and all models we have run include that point.
Since all of the VIRUS-W data are treated in the same manner through this analysis and have similar signal-to-noise ratio, we always include all VIRUS-W data.
Furthermore, the large sphere of influence due to this large black hole mass includes all of the VIRUS-W kinematics.
Since we have a full suite of parameters for the DM profile, we can explore those parameter regions that are more typical for dSphs and study potential changes in the black hole mass with different dark halo properties. We find very consistent values for the black hole mass under a large variety of dark halo properties, including those parameters that have traditionally been used for dwarf spheroidals. Only when going to extreme models where the NFW concentration is above 50 do the $\chi^2$ contours extend to lower masses, at the expense of a very poor fit compared to our preferred model. The reason is that we have such high signal-to-noise kinematics in the central region, where the black hole has the most influence. The sphere of influence of the black hole is commonly used as an approximate radius that one needs to spatially resolve for a robust estimate. This radius is where the mass of the black hole equals the enclosed stellar or dark matter mass, which over our models ranges from 125 to 250 pc (100$\arcsec$--200$\arcsec$). Within these radii, we have our highest quality data.
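As a rough illustration of this criterion, the radius at which the cored logarithmic halo's enclosed mass equals the black hole mass can be found with a simple root solve (the parameter values below are illustrative only, and the actual calculation also involves the stellar component):

```python
from scipy.optimize import brentq

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_dm(r, v_c=40.0, r_c=1.5):
    """Enclosed DM mass (Msun) for the cored logarithmic profile."""
    return v_c**2 * r**3 / (G * (r_c**2 + r**2))

m_bh = 3.3e6  # Msun; approximate best-fit black hole mass
r_infl = brentq(lambda r: m_dm(r) - m_bh, 1e-4, 2.0)  # influence radius in kpc
```

For these illustrative halo parameters the solution is a few hundred parsecs, the same order as the 125--250 pc range quoted above.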
The expectation from a model with no black hole and the light profile of Leo I is that the velocity dispersion decreases towards the center. The kinematics published by \citet{mateo2008} do in fact show a decrease, and we argue in Section \ref{sec:CROWDING} that the decrease they measure is due to not considering the effects of crowding. Crowding effects are easily understood and must be present at some level. When we correct for crowding, we recover an increase in the central velocity dispersion. The additional kinematic information comes from our VIRUS-W observations using integrated light, which naturally do not suffer from crowding effects. Obtaining consistent velocity dispersions from either correcting the individual stellar kinematics or from the integrated-light kinematics provides greater confidence in the stellar velocity dispersion profile. This profile is clearly the key to determining the central kinematics. It is important that all dwarf spheroidal kinematics be re-evaluated for crowding effects in the central regions.
The no-black-hole models have $\Delta\chi^2$ values relative to the best-fit model that range from 6.4 to 13.8, formally excluding the no-black-hole case at over 95\% confidence.
These $\Delta\chi^2$ values are similar to many of the published black hole models in normal galaxies \citep[e.g.][]{Erwin:2017hsx}. While we use dynamical models with a nonparametric orbital distribution, additional tests could include triaxiality and nonequilibrium dynamical effects. These additional considerations should have a small effect on the mass of the black hole, given the large $\Delta\chi^2$. Large ratios of black hole mass to host-galaxy mass have been detected previously in other galaxies, with 5--25\% of the galaxy mass being in the black hole for some systems: \citet{Ahn2017} find a black hole of 13\% of the galaxy mass in VUCD3 and 18\% in M59cO; \citet{Yildirim2015} find 5\% in NGC1277 and \citet{Walsh2017} find 1\% in Mrk1216, while \citet{denBrok2015} find 10\% in NGC4395. The typical value from the full sample of published masses is 0.15\% \citep{Viero2014}. For Leo I, the relative mass of the black hole to the host depends on the radial extent one uses to define Leo I. Over the four models presented, within 300~pc (a standard unit used for these systems), the black hole mass ranges from 16--22\% of the total mass. This percentage decreases as one considers the full mass of the system, going down to 3\% for the most extended model. Thus, the black hole mass relative to the host mass is consistent with the other extreme systems reported.
A black hole mass this large in Leo~I is not expected from extrapolation of any of the standard black-hole to host-galaxy correlations. Of course, these small systems do not necessarily need to follow the trends seen in normal galaxies, but the black hole mass reported here does stand out. \cite{Luetzgendorf2015} explore extrapolations of black hole correlations down to globular cluster scales; using a velocity dispersion of 12~km~s$^{-1}$, Leo~I has a black hole mass a factor of 100 larger than the extrapolated trends. On the numerical side, \citet{vanWassenhove2010} consider different scenarios for the formation of black holes in Milky Way satellites and place the likelihood of any satellite hosting a black hole of the size found here below $1\%$, although this result also depends on the initial seed mass (see also \citealp{Bellovary2021}).
Runaway mergers of stellar-mass black holes are unlikely to produce such a black hole in such a small galaxy, since the initial mass function required to reach the ratios seen in the models might be more top-heavy than what chemical abundances and star formation history studies suggest. An alternative explanation for the abnormally large central black hole may come from the recent study of Leo I's star formation history by \citet{Ruiz_Lara_2020}. The authors identify a period of quenching from $z=1$--$2$ followed by re-ignition until almost the present day, when ram-pressure stripping may have shut it down as the galaxy fell into the Milky Way. While the authors speculate that this re-ignition at intermediate redshifts could be due to a past merger with a smaller dwarf, it could also be consistent with gas accretion and potential active galactic nuclei feedback, lending support to the high $M_{BH}$ values presented here.
\citet{AmaroSeoane2014} also suggest that dwarf systems may in fact have significantly larger black holes than the host-galaxy black-hole relationships imply. A larger sample of black hole masses and limits measured in dwarf galaxies will be important for exploring this possibility.
There are alternatives to a central black hole as an explanation of the observed kinematics. The first is a collection of dark remnants rather than a single black hole \citep{Zocchi_2018,Aros_2020}. In this case, a central concentration of remnants requires two unlikely ingredients: (a) an extremely top-heavy initial mass function, in order to produce the number of remnants necessary to match the detected dark mass, and (b) a short two-body relaxation time, in order to get enough remnants into the central region. For Leo~I, the very low stellar density makes both unlikely, although detailed evolutionary models are warranted. The second is a significant number of binary stars that inflate the measured velocity dispersion. As shown in \citet{Spencer_2017b}, the change in measured dispersion is small even for a binary fraction as large as 50\%. Thus, the change in the measured central dispersion of Leo I of 12 km~s$^{-1}$ would be minimal, especially out to 100$\arcsec$, and would have a negligible effect on the measured black hole mass.
Regarding the dark halo, the best-fit logarithmic profile models have circular velocities that range from 30--60~km~s$^{-1}$, but are unconstrained at the upper limit. This range is larger than previous uncertainty estimates; it reflects both the additional freedom allowed in the dynamical models and the range of tidal assumptions included.
Having a circular velocity at the higher limit of 60~km~s$^{-1}$\ will significantly help with the ``too big to fail" problem \citep{Boylan2011}, as it implies these systems actually exist.
Having a circular velocity at the lower limit of 30~km~s$^{-1}$\ makes the dark halo barely significant.
Furthermore, the internal velocity anisotropies (as presented in \Cref{fig:models}) show large radial variation for the non-truncated models. The anisotropy profiles for the truncated models are much smoother, and in particular Model~4 is consistent with an isotropic distribution. Among dispersion-supported systems, most show nearly isotropic orbits at large radii \citep{Gebhardt2003}, implying that the truncated models are more realistic than the non-truncated models. The relative amount of dark matter in the truncated models is much less than in the non-truncated models. Thus, the need for dark matter is even weaker given the anisotropy profiles.
The assumption of a cored or NFW dark matter model has little effect on the black hole's presence, although the NFW profile does prefer a lower mass of $(2\pm1)\times 10^6$ \Msun, rather than \mbhApproximate $\times 10^6$ \Msun. The change in $\chi^2$ between the model with no black hole and the best-fit model is actually larger for the NFW profile; thus, the NFW model provides slightly stronger evidence for the presence of a black hole. The NFW model is, however, a slightly worse fit overall, and we prefer the cored logarithmic profiles.
The strongest statement we can make from this analysis regarding the dark halo in Leo~I is that it is very uncertain and can accommodate large differences in interpretation. In fact, the most realistic models (i.e., the truncated models) have a very weak need for a dark matter halo. Regarding the black hole mass, we have explored the models and assumptions in a variety of ways, and the significance of the black hole mass remains strong. It is worthwhile to continue studying dwarf spheroidals using robust and general dynamical models.
\section{ACKNOWLEDGMENTS}
We are grateful for the excellent and extensive comments from the referee, which significantly improved this work. EN and KG were supported by the National Science Foundation under Grant No. 1616452. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST).
\newpage
\bibliographystyle{aa}
2210.13569
\section{For every submission}
\subsection{Did you discuss the \textit{limitations} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{mainClaims}{Yes,No,N/A}{}\\[0.2cm]
\tf[0.85]{mainClaimsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss any potential \textit{risks} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{risks}{Yes,No,N/A}{}\\[0.2cm]
\tf[0.85]{risksJustification}{}
\end{tabular}
\end{Form}
\subsection{Do the abstract and introduction summarize the paper’s main claims?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{abstractIntro}{Yes,No,N/A}{}\\[0.2cm]
\tf[0.85]{abstractIntroJustification}{}
\end{tabular}
\end{Form}
\section{Did you use or create \textit{scientific artifacts}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{createArtifacts}{Yes,No}{}\\[0.2cm]
\end{tabular}
\end{Form}
If yes:
\subsection{Did you cite the creators of artifacts you used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{citeCreators}{Yes,No,N/A}{}\\[0.2cm]
\tf{citeCreatorsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the \textit{license or terms} for use and/or distribution of any artifacts?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{legalGrounds}{Yes,No,N/A}{}\\[0.2cm]
\tf{legalGroundsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss if your use of existing artifact(s) was consistent with their \textit{intended use}, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{intendedUse}{Yes,No,N/A}{}\\[0.2cm]
\tf{intendedUseJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the steps taken to check whether the data that was collected/used contains any \textit{information that names or uniquely identifies individual people} or \textit{offensive content}, and the steps taken to protect / anonymize it?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{personallyIdentifiableInformationOrOffensiveContent}{Yes,No,N/A}{}\\[0.2cm]
\tf{personallyIdentifiableInformationOrOffensiveContentJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{documentation}{Yes,No,N/A}{}\\[0.2cm]
\tf{documentationJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{relevantStatistics}{Yes,No,N/A}{}\\[0.2cm]
\tf{relevantStatisticsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you run \textit{computational experiments}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{computationalExperiments}{Yes,No}{}
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the \textit{number of parameters} in the models used, the \textit{total computational budget} (e.g., GPU hours), and \textit{computing infrastructure} used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{reportReproducibility}{Yes,No,N/A}{}\\[0.2cm]
\tf{reportReproducibilityJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the experimental setup, including \textit{hyperparameter search} and \textit{best-found hyperparameter} values?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{bestFoundHyperparameter}{Yes,No,N/A}{}\\[0.2cm]
\tf{bestFoundHyperparameterJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report \textit{descriptive statistics} about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{descriptiveStatistics}{Yes,No,N/A}{}\\[0.2cm]
\tf{descriptiveStatisticsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{existingPackages}{Yes,No,N/A}{}\\[0.2cm]
\tf{existingPackagesJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you use \textit{human annotators} (e.g., crowdworkers) or \textit{research with human subjects}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{hummanAnnotators}{Yes,No}{}\\
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{fullTextInstructions}{Yes,No,N/A}{}\\[0.2cm]
\tf{fullTextInstructionsJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such \textit{payment is adequate} given the participants’ demographic (e.g., country of residence)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{payment}{Yes,No,N/A}{}\\[0.2cm]
\tf{paymentJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss whether and how \textit{consent} was obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{consent}{Yes,No,N/A}{}\\[0.2cm]
\tf{consentJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Was the data collection protocol \textit{approved (or determined exempt)} by an ethics review board?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{ethicsAmountSpent}{Yes,No,N/A}{}\\[0.2cm]
\tf{ethicsAmountSpentJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report the basic demographic and geographic characteristics of the \textit{annotator} population that is the source of the data?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{annotator}{Yes,No,N/A}{}\\[0.2cm]
\tf{annotatorJustification}{}
\end{tabular}
\end{Form} \\[0.3cm]
\end{document}
\section{Introduction}
Language models (LMs) are computational systems trained to predict upcoming tokens based on past context.
To perform this task well, they must construct a coherent representation of the text, which requires establishing relationships between words that occur at non-adjacent time points.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{figures/design.pdf}
\caption{Characterizing verbatim memory retrieval in neural language models. In our paradigm, language models processed English text in which a list of nouns occurred twice.
We operationalized retrieval as the reduction in surprisal from the first to the second list presentation.
We measured retrieval while varying: a) set size, b) the structure of the second list, c) the length of the intervening text, and d) the content and structure of the intervening text.}
\label{fig:design}
\end{figure}
Despite their simple learning objective, LMs based on contemporary artificial neural network architectures perform well in contexts that require maintenance and retrieval of dependencies spanning multiple words.
For example, LMs learn to correctly match the grammatical number of the subject and a corresponding verb across intervening words; for example, they prefer the correct \emph{The \textbf{girls} standing at the desk \textbf{are} tall}, to the incorrect \emph{The \textbf{girls} standing at the desk \textbf{is} tall} \citep{linzen_assessing_2016, marvin_targeted_2018, gulordava_colorless_2018, futrell_rnns_2018}.
The ability to maintain context across multiple words is likely to be a central factor explaining the success of these models, potentially following fine-tuning, in natural language processing tasks \citep{devlin_bert_2019, brown_language_2020}.
The work discussed above has shown that LMs extract linguistically meaningful signals and that, over the course of learning, they develop a short-term memory capacity: the ability to store and access recent past context for processing, possibly akin to the working memory systems thought to enable flexible human cognitive capacities \citep{baddeley_working_2003}. What is the nature of the memory processes that LMs learn? Are these memory processes able to access individual tokens from the recent past \textit{verbatim}, or is the memory system more implicit, so that only an aggregate \textit{gist} of the prior context is available to subsequent processing?
Here, we introduce a paradigm (Fig.~\ref{fig:design}), inspired by benchmark tasks for models of human short-term memory \citep{oberauer_benchmarks_2018}, for characterizing short-term memory abilities of LMs.
We apply it to two particular neural LM architectures that possess the architectural ingredients to hold past items in memory: attention-based transformers \citep{vaswani_attention_2017} and long short-term memory networks \citep[LSTM;][]{hochreiter_long_1997}.
Whereas LSTMs incorporate the past by reusing the results of processing from previous time steps through dedicated memory cells, transformers use the internal representations of each of the previous tokens as input.
These architectural ingredients alone, however, are not sufficient for a model to have memory.
We hypothesize that whether or not the model puts this memory capacity to \textit{use} depends on whether the training task (next word prediction) requires it --- the parameters controlling the activation of context representations and subsequent retrieval computations are in both cases \textit{learned}.
Our goal is to determine whether and when the LMs we study maintain and retrieve verbatim representations of individual prior tokens.
First, we measure the \textit{detail} of the context representation: does the LM maintain a verbatim representation of all prior tokens and their order, or does it instead combine multiple prior tokens into a summary representation, like a semantic gist?
Second, we consider the \textit{resilience} of the memory to interference: after how many intervening tokens do the representations of prior context become inaccessible?
Third, we consider the \textit{content-invariance} of the context representations: does the resilience of prior context depend on semantic coherence of the prior information, or can arbitrary and unrelated information sequences be retrieved?
\section{Related Work}
Previous studies examined how properties of linguistic context influenced next-word prediction accuracy in transformer and LSTM LMs trained on text in English.
\citet{khandelwal_sharp_2018} showed that LSTM LMs use a window of approximately 200 tokens of past context and word order information of the past 50 words, in the service of predicting the next token in natural language sequences. \citet{subramanian_multi-scale_2020} applied a similar analysis to a transformer LM and showed that LM loss on test-set sequences was not sensitive to context perturbations beyond 50 tokens.
\citet{oconnor_what_2021} investigated whether fine-grained lexical and sentential features of context are used for next-word prediction in transformer LMs.
They showed that transformers rely predominantly on local word co-occurrence statistics (e.g. trigram ordering) and the presence of open class parts of speech (e.g. nouns), and less on the global structure of context (e.g. sentence ordering) and the presence of closed class parts of speech (e.g. function words).
In contrast with these studies, which focused on how specific features of past context affect LM performance on novel input at test time, our paradigm tests for the ability of LMs to retrieve nouns that are exactly repeated from prior context.
In a separate line of work bearing on memory maintenance in LSTMs, \citet{lakretz_emergence_2019, lakretz_mechanisms_2021} studied an LSTM's capacity to track subject-verb agreement dependencies.
They showed that LSTM LMs relied on a small number of hidden units and the gating mechanisms that control memory contents.
Here, we are similarly concerned with memory characteristics that support LM performance, but---akin to behavioral tests in cognitive science---we infer the \textit{functional properties} of LM memory by manipulating properties of repeated noun lists and observing the effects these manipulations have on the behavior (surprisal) of the LM rather than on its internal representation.
A third related area of research proposes \textit{architectural} innovations that augment RNNs and LSTMs with dedicated memory components \citep[e.g.][]{weston_memory_2015, yogatama_memory_2018} or improve the handling of context and memory in transformers \citep[see][for review]{tay_efficient_2020}.
Here, we are not concerned with improving architectures, but with developing a paradigm that allows us to study how LMs put to use their memory systems, whether those are implicit or explicit.
\section{Methods}\label{sec:methods}
\subsection{Paradigm: Lists of Nouns in Context}\label{sec:paradigms}
Noun lists were embedded in brief vignettes (Figure \ref{fig:design}, A and B).
Each vignette opened with a \emph{preface string} (e.g. ``Before the meeting, Mary wrote down the following list of words:'').
This string was followed by a list of nouns (the \emph{first list}), which were separated by commas; the list-final noun was followed by a full stop (e.g. ``county, muscle, vapor.'').
The first list was followed by an \emph{intervening text}, which continued the narrative established by the preface string (``After the meeting, she took a break and had a cup of coffee.'').
The intervening text was followed by a short \emph{prompt} string (e.g. ``After she got back, she read the list again:'') after which another list of nouns, either identical to the first list or different from it, was presented (we refer to this list as the \emph{second list}).
The full vignettes are provided in Section \ref{sec:vignettes} of the Appendix.
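As a concrete illustration, the assembly of such a vignette can be sketched as follows (the helper function and the exact strings are illustrative, not from the released code):

```python
def build_vignette(preface, first_list, intervening, prompt, second_list):
    """Assemble one vignette: preface string, comma-separated first list
    ending in a full stop, intervening text, prompt string, second list."""
    first = ", ".join(first_list) + "."
    second = ", ".join(second_list) + "."
    return " ".join([preface, first, intervening, prompt, second])

vignette = build_vignette(
    "Before the meeting, Mary wrote down the following list of words:",
    ["county", "muscle", "vapor"],
    "After the meeting, she took a break and had a cup of coffee.",
    "After she got back, she read the list again:",
    ["county", "muscle", "vapor"],  # repeat condition: identical second list
)
```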
\subsection{Semantic Coherence of Noun Lists}\label{sec:noun_lists}
We used two types of word lists: arbitrary and semantically coherent.
Arbitrary word lists (e.g. ``device, singer, picture'') were composed of randomly sampled nouns from the Toronto word pool.\footnote{\url{http://memory.psych.upenn.edu/files/wordpools/nouns.txt}}
Semantically coherent word lists were sampled from the categorized noun word pool,\footnote{\url{http://memory.psych.upenn.edu/files/wordpools/catwpool.txt}}
which contains 32 lists, each of which contains 32 semantically related nouns (e.g. ``robin, sparrow, heron, ...'').
All noun lists used in experiments are reported in Tables \ref{tab:arbitrary_nouns} and \ref{tab:semantic_nouns} of the Appendix.
After ensuring there were at least 10 valid, in-vocabulary nouns per semantic set (as this was the maximal list length we considered), we were able to construct $23$ noun lists.
Finally, to reduce the variance attributable to tokens occurring in specific positions, we generated 10 ``folds'' of each list by circularly shifting the tokens in the first list 10 times.
In this way, each noun in each list was tested in all possible ordinal positions.
This procedure resulted in a total of $23 \times 10 = 230$ noun lists.
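The circular-shift procedure can be sketched in a few lines (a minimal illustration; the function name is ours, not from the released code):

```python
def circular_folds(nouns, n_folds):
    """Return n_folds rotations of `nouns`, so that each noun appears in
    every ordinal position across the folds."""
    n = len(nouns)
    return [[nouns[(i + shift) % n] for i in range(n)]
            for shift in range(n_folds)]

folds = circular_folds(["robin", "sparrow", "heron"], n_folds=3)
# each fold is a circular shift of the original list
```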
\subsection{Language Models}\label{sec:models}
\paragraph{LSTM} We used an ASGD weight-dropped (AWD) LSTM released by \citet{merity_regularizing_2018}\footnote{Our code is available at: \url{https://github.com/KristijanArmeni/verbatim-memory-in-NLMs}. Our experiment data are available at: \url{https://doi.org/10.17605/OSF.IO/5GY7X}}, which had 3 hidden layers with 400-dimensional input embeddings, 1840-dimensional hidden states, and a vocabulary size of 267,735.
The model contained 182.3 million trainable parameters.
It was trained on the Wikitext-103 corpus \citep[][]{merity_pointer_2016} and achieved a test-set perplexity of 41.8. Full training hyperparameters are reported in Section \ref{sec:lstm_training_details} of the Appendix.
\paragraph{Transformer} We trained a transformer LM on an approximately 40-million-token subset of the Wikitext-103 benchmark.\footnote{After retokenization with the BPE tokenizer, the training corpus contained 44,824,396 subword tokens.}
We retrained the BPE tokenizer on the concatenated Wikitext-103 training, evaluation, and test sets.
The vocabulary had 28,439 entries.
We trained both the 12-layer GPT-2 architecture (known as ``GPT-2 small'', 107.7 million trainable parameters) and, as a point of comparison, smaller, 1-, 3-, and 6-layer transformers (29.7, 43.9, and 65.2 million trainable parameters, respectively).
The context window was set to 1024 tokens and the embedding dimension was kept at 768 across the architectures.
The perplexities for the 12-, 6-, 3- and 1-layer models on the Wikitext-103 test set were 40.3, 46.7, 60.1, and 93.2, respectively. The full transformer training details are reported in Section \ref{sec:transformer_training_details} of the Appendix.
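As a sanity check on the reported parameter counts, a back-of-the-envelope calculation assuming the standard GPT-2 layout (tied input/output embeddings, learned positional embeddings, biases on all projections) reproduces them closely:

```python
def gpt2_param_count(n_layers, d_model, vocab_size, n_positions=1024):
    """Approximate trainable parameters of a GPT-2-style decoder with tied
    input/output embeddings and learned positional embeddings."""
    embeddings = vocab_size * d_model + n_positions * d_model
    per_layer = (
        3 * d_model * d_model + 3 * d_model    # Q, K, V projections + biases
        + d_model * d_model + d_model          # attention output projection
        + 4 * d_model * d_model + 4 * d_model  # MLP up-projection
        + 4 * d_model * d_model + d_model      # MLP down-projection
        + 2 * 2 * d_model                      # two layer norms (scale, bias)
    )
    final_layer_norm = 2 * d_model
    return embeddings + n_layers * per_layer + final_layer_norm

# 12-, 6-, 3-, and 1-layer models with d=768 and the 28,439-entry vocabulary:
counts = [round(gpt2_param_count(L, 768, 28439) / 1e6, 1) for L in (12, 6, 3, 1)]
print(counts)  # [107.7, 65.2, 43.9, 29.7]
```

The same formula with GPT-2's 50,257-entry vocabulary gives roughly 124 million parameters, matching the pretrained checkpoint described below.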
We also evaluated the transformer LM pretrained by \citet{radford_language_2019}, accessed through the Hugging Face Transformers library \citep{wolf_transformers_2020}. We refer to this model simply as \mbox{GPT-2}.
It was trained on the WebText corpus, which consists of approximately 8 million online documents.
We used the GPT-2-small checkpoint, which has 12 attention layers and a 768-dimensional embedding layer.
The model contains 124 million parameters and has a vocabulary of 50,257 entries.
We used the maximum context size of 1024 tokens.
\subsection{Surprisal}\label{sec:surprisal}
For each token $w_t$ in our sequence, we computed the negative log likelihood (surprisal): $\texttt{surprisal}(w_t) = -\log_{2} P(w_t | w_{1}, \ldots, w_{t-1}) \label{eq:surprisal}$.
In cases when the transformer byte-pair encoding tokenizer split a noun into multiple tokens---e.g. ``sparrow'' might be split into ``sp'' and ``arrow''---we summed the surprisals of the resulting tokens.
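A minimal sketch of these two computations (the sub-token probabilities below are made up for illustration):

```python
import math

def surprisal(prob):
    """Surprisal in bits: -log2 P(w_t | w_1, ..., w_{t-1})."""
    return -math.log2(prob)

def word_surprisal(subtoken_probs):
    """If BPE splits a noun into sub-tokens (e.g. 'sparrow' -> 'sp' +
    'arrow'), sum the sub-token surprisals; by the chain rule this equals
    the surprisal of the whole word given the context."""
    return sum(surprisal(p) for p in subtoken_probs)

# made-up probabilities: P('sp') = 0.5, P('arrow' | context, 'sp') = 0.25
# word surprisal = -log2(0.5) + -log2(0.25) = 1 + 2 = 3 bits
```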
\paragraph{Quantifying retrieval: repeat surprisal}
To quantify how the memory trace of the first list affected the model's expectations on the second list, we measured the ratio between the surprisal on the second list and the surprisal on the first list: $\texttt{repeat surprisal} = \frac{\bar{s}(L_2)}{\bar{s}(L_1)}\times 100 \label{eq:relative_surprisal}$, where $\bar{s}(L_1)$ refers to mean surprisal across non-initial nouns in the first list and $\bar{s}(L_2)$ to mean surprisal across all non-initial nouns in the second list.
We take a \textit{reduction} in surprisal on second lists to indicate the extent to which an LM has retrieved tokens from the first list.
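The metric can be sketched as follows (surprisal values are illustrative):

```python
from statistics import fmean

def repeat_surprisal(s_first, s_second):
    """Mean surprisal over non-initial nouns of the second list as a
    percentage of the same quantity for the first list; values below
    100 indicate retrieval of first-list tokens."""
    return fmean(s_second[1:]) / fmean(s_first[1:]) * 100

# illustrative values: second-list nouns far less surprising than the first
ratio = repeat_surprisal([12.0, 10.0, 10.0], [6.0, 2.0, 2.0])
```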
\begin{figure*}[ht]
\centering
\includegraphics[width=0.7\linewidth]{figures/per-token.pdf}
\caption{Median surprisal (over $N^{\textit{list}} = 230$) broken down per token position in second lists of arbitrary nouns and semantically coherent nouns.
Negative values on the x-axis represent the 4 tokens of the prompt string that introduced the second list: ``(she) read the list again''.
The 0-index marks the first noun in the list.
Line style and hue denote manipulation of the second list relative to the first list.
Error bands denote 95\% confidence interval around the median (bootstrap estimate).}
\label{fig:per-token}
\end{figure*}
\section{Transformer Results} \label{experiments}
We first describe the results of our experiments with the two largest transformer models, the off-the-shelf GPT-2 and the 12-layer transformer we trained; LSTM results are discussed in Section~\ref{sec:lstm}, and results with smaller transformers are discussed towards the end of this section.
\paragraph{The transformers retrieved prior nouns and their order; this capacity improved when the model was trained on a larger corpus.} \label{ex:word_order}
We tested whether the transformers could retrieve the identity and order of 10-token noun lists (arbitrary or semantically coherent).
To this end, we constructed vignettes in which the second list was either (a) identical to the first list, (b) a permutation of the first list, or (c) a list of novel nouns not present in the first list.\footnote{Novel nouns in the string were introduced by randomly selecting a list of nouns from one of the 22 remaining lists in the noun pool. In semantically coherent lists, novel nouns were drawn from a different semantic category than the nouns in the first list.}
We then measured retrieval as reduction in surprisal from first to second list.
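The three second-list conditions can be sketched as follows (a simplified illustration; the function name and sampling details are ours, not from the released code):

```python
import random

def make_second_list(first_list, condition, noun_pool, seed=0):
    """Build the second list for one of three conditions: 'repeat'
    (identical), 'permute' (same nouns, shuffled order), or 'novel'
    (nouns from the pool that do not occur in the first list)."""
    rng = random.Random(seed)
    if condition == "repeat":
        return list(first_list)
    if condition == "permute":
        permuted = list(first_list)
        rng.shuffle(permuted)
        return permuted
    if condition == "novel":
        candidates = [n for n in noun_pool if n not in first_list]
        return rng.sample(candidates, len(first_list))
    raise ValueError(f"unknown condition: {condition}")
```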
When the two transformers were presented with second lists that were repeated versions of the first ones (blue in Fig.~\ref{fig:per-token}, B and C), token-by-token surprisal decreased compared to novel tokens, suggesting that the transformers were able to access verbatim representations of past nouns from context.
When the second list was a permutation of the first one, surprisal was higher compared to when it was repeated, indicating that the transformers expected the nouns to be ordered as in the first list.
Training set size played an important role in supporting verbatim recall: surprisal differences were considerably smaller for the transformer trained on the 44-million-token Wikitext-103 corpus (Fig.~\ref{fig:per-token},~B) compared to GPT-2 (Fig.~\ref{fig:per-token},~C).
In order to contextualize the magnitude of these retrieval effects, we computed the repeat surprisal across all non-initial tokens in the lists (Fig.~\ref{fig:set-size}).
When the first and second lists were identical (e.g. with $N=10$ arbitrary nouns), the Wikitext-103 transformer's median repeat surprisal was $81\%$ of the first list, compared to $87\%$ for the permuted lists and $101\%$ for the novel lists.
In GPT-2, repeat surprisal was only $2\%$ of the first list, much lower than the $58\%$ for the permuted lists and $96\%$ for the novel lists.
Retrieval in GPT-2 was robust to the exact phrasing of the text that introduced the lists.
Replacing the subject `Mary' with `John' in the vignette, replacing the colon with a comma or randomly permuting the preface or the prompt strings did not affect the results (Fig.~\ref{fig:vignettes-ctrl}, bottom, Appendix \ref{sec:appendix}).
By contrast, the same perturbations reduced retrieval effects for Wikitext-103 (Fig.~\ref{fig:vignettes-ctrl}, top, Appendix \ref{sec:appendix}), supporting the conclusion that larger training corpus size contributes to robustness of transformer retrieval.
\paragraph{Transformer retrieval was robust to the number of items being retrieved.} \label{ex:set-size}
In studies of human short-term memory, performance degrades as the number of items that need to be retained increases (``set-size effects'', \citealt{oberauer_benchmarks_2018}).
Is our LMs' short-term memory similarly taxed by increasing the set size?
We varied the number of tokens to be held in memory with $N^{\textit{tokens}} \in \{3, 5, 7, 10\}$.
For this comparison, the length of the intervening text was kept at 26 tokens.
Results reported in Fig.~\ref{fig:set-size} show that for both the smaller Wikitext-103 transformer and the larger GPT-2, verbatim recall was, for the most part, consistent across the different set sizes.
For GPT-2, repeat surprisal increased monotonically with set size only when the order of nouns in the second list, whether semantically coherent or arbitrary, was permuted.\footnote{This increase in surprisal with set size for permuted sequences is to be expected, of course, because, if the model has perfect memory of the list of tokens, but cannot predict the order in which they will reoccur, then its probability of guessing the next item in a permuted list where $k$ items have yet to be observed will be $1/k$, and the mean value of $k$ is larger for larger set sizes.}
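This intuition can be made precise with an idealized back-of-the-envelope calculation (ours, for illustration). If a model remembers the list contents perfectly but not their order, the next noun of a permuted list with $k$ items still unobserved has probability $1/k$, so the expected mean surprisal over a permuted list of $N$ items is
\[
\bar{s}_{\mathrm{perm}}(N) = \frac{1}{N}\sum_{k=1}^{N}\log_2 k = \frac{\log_2 N!}{N},
\]
which increases monotonically with $N$ (roughly $0.9$ bits for $N=3$ versus $2.2$ bits for $N=10$), matching the direction of the observed set-size effect for permuted lists.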
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{figures/set-size.pdf}
\caption{Verbatim token retrieval for varying number of tokens being retrieved (left) and the length of the intervening text (right).
Reported is the proportion of list-averaged surprisal on the second relative to the first list of nouns.
Points show group median (over $N^{\textit{list}} = 230$).
Error bars denote 95\% confidence interval around the median (bootstrap estimate).
For set size manipulation, intervening text is fixed at 26 tokens. For intervening text manipulation, set size is fixed at 10 tokens.}
\label{fig:set-size}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{figures/filler-type.pdf}
\caption{LM memory retrieval for different intervening texts.
We plot relative list-averaged surprisal over all non-initial tokens in lists.
Points show group median (over $N^{\textit{list}}$ = 230).
Error bars denote 95\% confidence interval around the median (bootstrap estimate).
Note that in the top-row plots y-axis starts at 60\%.}
\label{fig:inter-text-type}
\end{figure*}
\paragraph{Transformer retrieval was robust to the length and content of intervening text, but scrambling the intervening text reduced retrieval of order information.}
For how long are individual items retained in the memory of the LM?
We tested this by varying the length of the intervening text for $N^{\textit{tokens}} \in \{26, 99, 194, 435\}$ (see Fig.~\ref{fig:design}, panel~B).
To generate longer intervening text samples, we continued the narrative established by the initial preface string (``Before the meeting, Mary wrote down the following list of words:'').
All intervening text strings ended with the same prompt string (``When she got back, she read the list again:'') which introduced the second list.
Memory retrieval in the transformer models, whether trained on Wikitext-103 or a much larger corpus size, was largely invariant to the size of the intervening text between the first and second lists (Fig.~\ref{fig:set-size}, B and C, respectively).
The results suggest that the two transformers were retrieving prior nouns using a form of direct indexing of the relevant words from the input buffer, rather than implementing a generic memory heuristic, such as predicting that the nouns that have occurred in the most recent 20 tokens will recur.
Increasing the length of \textit{well-formed, semantically coherent} intervening text does not, then, interfere with memory retrieval in the transformer.
In models of human memory, current context, such as immediately preceding text, can indeed be used as a cue for recalling the encoded items \citep[]{kahana_computational_2020}.
Does the transformers' capacity to retrieve copies of past nouns rely on the content and structure of the intervening text?
We tested this by creating incongruent and scrambled versions of the longest intervening text (435 tokens).
An incongruent condition was created by using intervening text that was syntactically well-formed but semantically incongruent with respect to the preface.
The scrambled version was created by randomly permuting the tokens of the intervening text.
The transformers' retrieval of past tokens was largely unaffected by the specific content of the intervening text, as long as the intervening text was coherent/well-formed (Fig.~\ref{fig:inter-text-type}).
However, in GPT-2, median surprisal across permuted arbitrary lists of nouns increased by $8\%$ when the intervening text was scrambled (Fig.~\ref{fig:inter-text-type}, bottom) compared to well-formed text.
This suggests that GPT-2 relied on narrative coherence of the intervening text, rather than its aggregate semantic content alone, as a cue for retrieving the ordering information of arbitrary word lists.
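As a concrete illustration, the vignettes above can be assembled as follows. The preface and prompt strings are quoted from the text; the function names and the whitespace-level scrambling are assumptions of this sketch:

```python
import random

# Quoted from the stimulus design described in the text.
PREFACE = "Before the meeting, Mary wrote down the following list of words:"
PROMPT = "When she got back, she read the list again:"

def scramble_tokens(text, seed=0):
    """Randomly permute whitespace tokens (the 'scrambled' control)."""
    toks = text.split()
    random.Random(seed).shuffle(toks)
    return " ".join(toks)

def build_vignette(nouns, intervening, scramble=False, seed=0):
    """Assemble one vignette: preface, first list, intervening text,
    prompt, and a verbatim repeat of the list."""
    if scramble:
        intervening = scramble_tokens(intervening, seed)
    first = ", ".join(nouns)
    return f"{PREFACE} {first}. {intervening} {PROMPT} {first}."

print(build_vignette(["patience", "notion", "moment"],
                     "She then left for the meeting.", scramble=True))
```

Scrambling preserves the aggregate lexical content of the intervening text while destroying its narrative coherence, which is what isolates the order-information effect reported above.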
\paragraph{Transformer verbatim recall is learned, guided by attention, and requires sufficient model size.}
Having shown that the transformer LMs could flexibly and robustly retrieve words and their ordering verbatim from short-term memory (Figs. \ref{fig:set-size} and \ref{fig:inter-text-type}), we next asked: is this ability learned, or does it derive directly from the architecture? To address this question, we re-ran the experiment with a varying number of tokens in lists, using a randomly initialized transformer model (architecture as in Section \ref{sec:models}).
This random-weights model was unable to retrieve words or their order: for example, repeat surprisal remained at $100\%$ relative to first lists regardless of whether or not the nouns in the second list had appeared before (Fig.~\ref{fig:trf-ctrl}, top, Appendix \ref{sec:appendix}).
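For reference, the repeat-surprisal measure reported throughout can be sketched as follows; this is a minimal version, and which token positions enter the average varies by analysis, as noted in the figure captions:

```python
from statistics import mean

def repeat_surprisal(first_list, second_list):
    """Repeat surprisal in %: list-averaged surprisal on the second list
    relative to the first, given per-token surprisal values."""
    return 100.0 * mean(second_list) / mean(first_list)

# ~100% indicates no memory benefit; values near 0% indicate strong
# verbatim retrieval.
print(repeat_surprisal([8.1, 7.9, 8.3], [8.0, 8.2, 7.8]))
print(repeat_surprisal([8.1, 7.9, 8.3], [0.2, 0.3, 0.25]))
```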
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\linewidth]{figures/trf-layer.pdf}
\caption{LM memory retrieval for models of different sizes.
Reported is relative list-averaged surprisal over all non-initial tokens in lists.
Points show group median (over $N^{\textit{list}}$ = 230).
Error bars denote 95\% confidence interval around the median (bootstrap estimate).
Note that in these plots y-axis starts at 70\%.}
\label{fig:trf-layer}
\end{figure*}
Next we tested whether the transformers' ability to recall past tokens depended on the attention mechanism \citep[]{bahdanau_neural_2014, vaswani_attention_2017} which allows it, in principle, to use all past words, weighted according to their relevance, for next word prediction.
To test for the role of attention in verbatim retrieval, we randomly permuted the rows of the key and query matrices in each of the 12 attention layers of GPT-2 and re-ran the experiment with a varying number of tokens in lists.
The shuffled-attention model retained some capacity to retrieve past nouns (Fig.~\ref{fig:trf-ctrl}, bottom, Appendix~\ref{sec:appendix}), but the effect was greatly reduced. For example, repeat surprisal for lists of $N=10$ semantically coherent nouns was at $90\%$ relative to first lists for shuffled-attention, compared with $3\%$ for the intact model.
Intriguingly, this shuffled-attention model showed the same surprisal for repeated and permuted lists, indicating that it was no longer accessing word order information from the original list.
Thus, the attention mechanism is necessary for transformers to index past nouns and their order from memory.
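The attention-shuffling control can be illustrated schematically. Matrices are represented here as lists of rows, and the names `W_q`, `W_k` are placeholders rather than GPT-2's actual parameter names:

```python
import random

def shuffle_rows(matrix, seed):
    """Return a copy of the matrix (a list of rows) with its rows permuted."""
    rows = list(matrix)
    random.Random(seed).shuffle(rows)
    return rows

def shuffle_qk(W_q, W_k, seed=0):
    """Schematic version of the ablation: independently permute the rows
    of one layer's query and key projection matrices."""
    return shuffle_rows(W_q, seed), shuffle_rows(W_k, seed + 1)
```

The permutation preserves each matrix's entries but destroys the learned pairing between query and key directions, which is the point of the control.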
Finally, a deep layered architecture is a key characteristic of transformers and performance typically scales with model size \citep[][]{radford_language_2019, kaplan_scaling_2020}.
Does the capacity to perform verbatim recall depend on model size?
To address this question, we trained transformers with 1, 3, 6 and 12 layers on our 40-million subset of Wikitext-103.
Consistent with the hypothesis that size -- in addition to architecture -- is crucial, the smaller 1- and 3-layer models showed a modest verbatim recall capacity, but were not sensitive to order (e.g. the 3-layer model shows $90\%$ repeat surprisal for repeated and permuted lists of $N=10$ tokens, Fig.~\ref{fig:trf-layer}).
Sensitivity to order progressively emerged in 6- and 12-layer models, where in the 12-layer model repeat surprisal levels were $5\%$ and $7\%$ lower for repeated relative to permuted 10-token lists (Fig.~\ref{fig:trf-layer}).
While this result confirms that even transformers trained on smaller amounts of text can exhibit short-term memory given a sufficient increase in model complexity, it remains unclear whether it is the increased depth or the parameter count alone that contributes to this improvement in performance.
\section{LSTM Results}
\label{sec:lstm}
\paragraph{The LSTM retrieves gist-like memories over short intervening distances, facilitated by semantic coherence.} \label{sec:lstm_no_ordering}
The LSTM language model expected nouns in the second list to belong to the same semantic category as the first list, and especially to the category of the earliest nouns in the first list.
If the intervening text was no longer than 26 tokens, LSTM repeat surprisal across non-initial token positions (Fig.~\ref{fig:set-size}, A) showed a modest decrease ($5\%$) relative to first list, but only when the nouns in the first and second lists came from the same semantic category.
Examining surprisal values broken down by token position in the list (Fig.~\ref{fig:per-token}, top) shows that in semantically coherent lists of nouns, surprisal was higher for novel lists than for repeated or permuted lists, but this memory effect was only present for tokens near the beginning of the list.
In light of this limited evidence for retrieval in the LSTM across 26 intervening tokens, we examined whether the LSTM retrieves more successfully over shorter intervals. We reduced the intervening text to 4 tokens of coherent text (``Before the meeting, Mary wrote down the following lists of words. One was: <first list> \textbf{And the other:} <second list>'').
In this short-range retrieval setting, we now observed a small reduction of relative repeat surprisal of $5\%$ and $4\%$ for arbitrary lists of 3 or 5 nouns, respectively, as well as stronger reductions ranging from $12\%$ (3-token list) to $5\%$ (10-token list) for semantically coherent lists (Fig.~\ref{fig:set-size_filler-short_awd-lstm}).
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/set-size_sce3_awd-lstm.pdf}
\caption{LSTM verbatim token retrieval for varying number of tokens being retrieved at short (4-token) intervening text.
Reported is proportion of list-averaged surprisal on second relative to first list of nouns (repeat surprisal).
Points show group median (over $N^{\textit{list}}$ = 230).
Error bars denote 95\% confidence interval around the median (bootstrap estimate).}
\label{fig:set-size_filler-short_awd-lstm}
\end{figure}
Overall, the reduction in surprisal was comparable for repeated and permuted lists, indicating that the LSTM did not predict that words would occur in their original order.
Taken together, the experiments described in this section suggest that the LSTM retrieves a semantic gist of the prior list, rather than individual tokens in order.
Consistent with this notion of an aggregate semantic memory, we found that retrieval was stronger for semantically coherent lists, for which an aggregated semantic representation would be closer to each of the individual words in the list.
\section{Discussion}
Short-term memory---the capacity to temporarily store and access recent context for current processing---is a crucial component of language prediction.
In this paper, we introduced a paradigm for characterizing a language model's short-term memory capabilities, based on retrieval of verbatim content (sequences of nouns) from prior context, and used this paradigm to analyze LMs with transformer and LSTM architectures.
The transformers we tested were able to access verbatim information -- individual tokens and their order -- from past context.
Furthermore, this verbatim retrieval was learned and largely \textit{resilient} to interference from intervening context.
This indicates that the models (especially those trained on the largest corpora) implemented, via learning, a high-resolution memory system.
The ability to access individual tokens may in turn support functions that rely on token indexing, akin to the functionality of the general-purpose working memory (WM) buffer proposed in cognitive science \citep{baddeley_working_2003}.
Such flexible WM could subserve the reported ability of transformers to rapidly generalize to new tasks at runtime \citep{brown_language_2020}, also known as ``in-context learning''.
Indeed, in concurrent work to ours, \citet{olsson_context_2022} observed that small (2 or 3-layer) attention-only transformers developed attention heads that functioned as so-called ``induction heads''. These effectively performed pattern matching by looking over the past context for any occurrences of the current token and predicting the same (or similar) sequence completions.
Attention heads that learned this basic inductive computation were also shown to perform more general in-context learning for complex tasks such as language translation.
Similarly, it has been suggested that in standard RNNs such meta-learning requires a short-term memory mechanism known as fast weights \citep{schmidhuber_learning_1992, ba_using_2016} which can be thought of as analogous to self-attention in transformers \citep{schlag_linear_2021}.
However, a highly resilient verbatim memory system could also be disadvantageous if it causes the LM to place too much confidence in verbatim features of prior context for next-word prediction. Indeed, text generated from a transformer LM's predictions can be highly repetitive \citep{holtzman_curious_2020} -- it is possible that an over-reliance on short-term memory access underlies this tendency.
In contrast to the transformers, the LSTM model only retrieved a coarse semantic category of previous lists, without fine-grained information about word order, and was only able to do so when the intervening text was short. This is in spite of the fact that the LSTM had a larger parameter count than the transformer models and obtained comparable perplexity on Wikitext-103 (Table \ref{tab:model_architectures}).
The tendency of LSTMs to rely on the fuzzy representation of past context for next-word prediction has been reported previously \citep{khandelwal_sharp_2018}.
While recurrent neural networks provide a good model of human short-term memory in sequence-to-sequence tasks requiring recall of short lists of pseudowords \citep{botvinick_short-term_2006}, later research has shown that the copying capacity of LSTMs does not generalize to longer sequences of symbols \citep{grefenstette_learning_2015}.
Is tracking a shallow representation of context always a limitation?
Not necessarily.
Humans frequently maintain a ``good-enough'' (i.e. gist-like) representation of context \citep{ferreira_good_2007}.
When the potential for memory capacity is limited (e.g. when context must be compressed to a single hidden state as in an RNN) maintaining a broad, gist-like -- as opposed to token-specific -- memory of context may be more \textit{efficient} overall.
The memory paradigm and the measure of repeat surprisal introduced here allowed us to pinpoint computational differences in how neural LMs put their architectural capacities to use for storing and accessing context in short-term memory when processing English text.
While our decision to use autoregressive (left-to-right) LMs was ultimately based on our initial cognitive psycholinguistic motivation, it may be fruitful to apply our paradigm to other classes of transformer models, for example, bidirectional encoder-only transformers such as BERT \citep{devlin_bert_2019} and encoder-decoder models such as T5 \citep{raffel_exploring_2020}. These architectures have gained traction in applied NLP settings and it would be informative to test whether this paradigm can provide diagnostic value for LM performance on other benchmarks. Similarly, if the compressed context representation in LSTMs serves as a short-term memory bottleneck, it would be instructive to test LSTM LM architectures when explicitly augmented with attention \citep{bahdanau_neural_2014} or a copy-mechanism \citep{gu_incorporating_2016}. Finally, our attention-ablation experiment in the transformer was performed uniformly across layers; future studies could focus on targeted ablations of specific attention heads to pinpoint the mechanistic locus of short-term memory \cite{olsson_context_2022}.
\section{Conclusions}
Pretrained language models, and self-supervised predictive learning broadly, have received increased attention in terms of their (in)sufficiency as a framework for achieving feats of human-like language processing \citep{kaplan_scaling_2020, linzen_syntactic_2021}.
Here, akin to the line of work evaluating cognitive linguistic capacities of neural LMs \citep{futrell_neural_2019, ritter_cognitive_2017}, we tested the ability of language models to perform an important aspect of human intelligence for natural language --- flexibly accessing items from short-term memory --- and showed that the transformer model, even though not trained with a short-term memory objective, retrieved remarkably detailed representations of past context. This capacity emerged from training: a transformer trained on a small amount of data showed more modest retrieval abilities. The LSTM LMs, by contrast, maintained only a summary representation of the list, which was not sensitive to word order. We conclude that our paradigm can illuminate the memory systems that arise in neural language models.
\section{Broader Impact}
The research reported here addresses a specific, basic research question about the functional organization of short-term memory in contemporary language processing algorithms. Although from a broader perspective, the nature of (working) memory is likely an important question in developing human-like artificial intelligence systems deployed in real-life scenarios, it is, in our opinion, unlikely that the results reported here could pose or lead to novel societal risks, as we are primarily trying to improve the understanding of already-developed systems.
\section*{Acknowledgements}
The authors gratefully acknowledge the support of the National Institutes of Mental Health (grant R01MH119099). The research presented here also benefited from the discussions and feedback during the research visit (by KA) in the context of the collaborative grant ``Working Memory Based Assessment of Large Language Models'' at the Department of Language Technologies, Institute Jozef Stefan. The visit was in part financially supported by the Slovenian Research Agency (grant BI-US/22-24-170).
Finally, this work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise.
2205.10888
\section{Introduction}\label{sec-intro} Gromov developed the h-principle \cite{Gromov_PDR} as a soft topological approach to finding solutions to partial differential relations, and this was refined subsequently by several others, notably Eliashberg-Mishachev \cite{em-book}. These two references \cite{Gromov_PDR,em-book} emphasize somewhat different points of view. While \cite{em-book} uses the standard terminology of differential topology in terms of jets,
\cite{Gromov_PDR} uses a more abstract, formal sheaf-theoretic framework. The main application of both these approaches is to solve partial differential relations on \emph{smooth manifolds}. The aim of this paper is to extend the domain of applicability of the h-principle to \emph{smooth stratified spaces} (cf.\ \cite{GM_SMT}).
There is one immediate difficulty that we face. Let $X$ be a smooth stratified space. Then, any natural notion of a tangent bundle $TX$ (cf.\ Definition \ref{str-tbl}) to $X$ gives a structure that is not a bundle in the usual sense of a topological bundle.
This leads us to the notion of \emph{stratified bundles} (Definition \ref{def-strbdl}) which consist of bundles along strata of $X$ with appropriate gluing conditions across strata. It is precisely this ``gluing data'' that distinguishes the manifold framework from the stratified spaces framework.
A prototypical example of a stratified bundle
to keep in mind is that of $P: M\longrightarrow M/G$, where $M$ is a smooth manifold, and $G$ is a compact Lie group
with a not necessarily free smooth action on $M$. Then $M$ admits a stratification by orbit type, where the strata $M_{(H)}$ are indexed by closed subgroups $H \leq G$, consisting of points with isotropy group conjugate to $H$. This stratification descends to a stratification of $M/G$, and $P : M \to M/G$ becomes a stratified fiber bundle. Let $x \in M$, $G_x$ be the isotropy group, and $[x]=P(x)$. Then the fiber of $P$ over $[x]$ is $G/G_x$.
We found the sheaf-theoretic formalism of Gromov \cite[Ch. 2]{Gromov_PDR} more convenient to address the algebraic topology issues. We refer the reader to \cite[Ch. 2]{Gromov_PDR} for details on sheaves of quasitopological spaces, i.e.\ continuous sheaves.
Note however that the sheaf of sections of a stratified fiber bundle $E$ over a stratified space $X$ forms something more involved than just a sheaf over a topological space, as one may restrict $E$ to any stratum-closure $\overline{L} \subset X$ and take sections thereof. We therefore extend
the sheaf-theoretic formalism of Gromov to \emph{stratified sites} (Definition \ref{def-stratfdsite}) and \emph{stratified sheaves} (Definition \ref{def-sssheaf}) over these. In cases of interest in this paper, a stratified sheaf typically assigns a topological space to an open subset $U$ of a stratum closure $\overline L$. Here, $L$ ranges over
strata of $X$.
Following Gromov \cite{Gromov_PDR}, we call such a stratified sheaf a \emph{stratified continuous sheaf}. Thus, a
stratified continuous sheaf is a collection of continuous sheaves $\{\FF_{\overline{L}}\}$, one for every stratum $L$ of $X$. In this sense, stratified continuous sheaves are analogous to constructible sheaves.
Next, for every pair $S<L$, the data of a stratified continuous sheaf gives a restriction map $$\operatorname{res}^L_S: i_{\overline{S}}^* \FF_{\overline{L}} \to \FF_{\overline{S}},$$ where $ i_{\overline{S}}^* \FF_{\overline{L}}$ denotes the pullback of $ \FF_{\overline{L}}$ to ${\overline{S}}$. A homotopy theoretic construction that arises naturally is that of the homotopy fiber ($\operatorname{hofib}$): $$\overline{\HH}^L_S= \operatorname{hofib} (\operatorname{res}^L_S: i_{\overline{S}}^* \FF_{\overline{L}} \to \FF_{\overline{S}}).$$ Set $\HH^L_S= \overline{\HH}^L_S|S$, the restriction of $\overline{\HH}^L_S$ to the stratum $S$ of ${\overline{S}} \subset X$. Thus,
$$\HH^L_S= \operatorname{hofib} (\operatorname{res}^L_S: i_{S}^* \FF_{\overline{L}} \to \FF_{{S}}).$$ We shall refer to $\overline{\HH}^L_S$
and $\HH^L_S$ as the closed and open homotopy fiber sheaves respectively for the pair of strata $S<L$.
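For concreteness, the pointwise model underlying both constructions is the standard homotopy fiber of a map over a basepoint $b_0$ (stated here for spaces; the sheaf-level construction applies it sectionwise):

```latex
\[
  \operatorname{hofib}_{b_0}\bigl(f\colon A \to B\bigr)
  \;=\;
  \bigl\{\, (a,\gamma) \;:\; a \in A,\
            \gamma\colon [0,1] \to B,\
            \gamma(0) = f(a),\ \gamma(1) = b_0 \,\bigr\}.
\]
```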
One of the aims of this paper is to find conditions on the homotopy fiber sheaves that guarantee the $h-$principle for the stratified sheaf $\FF$.
The sheaf of sections of a bundle comes naturally equipped with the compact-open topology, and therefore constitutes a continuous sheaf. A key difference between Gromov's h-principle \cite[Section 2.2]{Gromov_PDR} and the present paper stems from the fact that a stratified sheaf is a collection of sheaves, and not a single sheaf. The difference already shows up when the base space is a simplex equipped with its natural stratification.
This change in setup was motivated by a question due to Sullivan.
In fact, this paper and its companion \cite{ms-gt} arose in part from trying to address the following two questions due to Sullivan \cite{sullivan-pihes-formal} and Gromov \cite{Gromov_PDR}. After discussing smooth forms on simplices in \cite{sullivan-pihes-formal}, Sullivan suggests the following Example/Problem/test-question.
\begin{qn}[stratified spaces] \cite[p. 298]{sullivan-pihes-formal}\label{qn-sullivan0}
``In the $\cdots$ cell-space abstraction we didn't require that cells be contractible.
Thus these notions can be extended to stratified sets-thought of inductively as obtained
by attaching manifolds with boundary with a careful statement about the geometry of
the attaching map.
It would be interesting to carry this out in detail-the basic idea being that a
form should have values only on multivectors tangent to the strata.''
\end{qn}
We interpret Question \ref{qn-sullivan0} as follows:
\begin{qn}\label{qn-sullivan}
Provide an inductive description of forms over stratified spaces.
\end{qn}
Another source of inspiration for this paper comes from the following question due to Gromov.
\begin{qn}\cite[p. 343]{Gromov_PDR}\label{qn-gromov}
Can one define singular symplectic (sub) varieties?
\end{qn}
In this paper, we address Question \ref{qn-sullivan}, by providing an inductive description of sheaves of smooth forms over stratified spaces. In fact, we set up the more general framework of a stratified bundle over a stratified space,
and provide an inductive description of the sheaf of sections of a stratified bundle over a stratified space. We postpone a full treatment of Question \ref{qn-gromov} to the companion paper \cite{ms-gt}, but develop a general conceptual framework in this paper.
Having fixed the framework in terms of stratified sheaves over stratified spaces, the main aim of the paper then boils down to extending some of the basic notions introduced by Gromov in \cite[Ch. 2]{Gromov_PDR} to the stratified context, and proving the stratified h-principle using these generalizations. Two crucial concepts were important in \cite[Ch. 2]{Gromov_PDR} for a sheaf $\FF$ (of topological spaces) over a manifold $V$:
\begin{enumerate}
\item Flexibility of $\FF$: This means that for every pair of compact subsets $K\subset K' \subset V$, the restriction map $\FF(K') \to \FF(K)$ is a (Serre) fibration.
\item $\diff(V)-$invariance of $\FF$: This means that the action of the pseudogroup $\diff(V)$ of partially defined diffeomorphisms of $V$ lifts to an action on $\FF$.
\end{enumerate}
Flexibility of sheaves is generalized to flexibility of stratified sheaves by demanding
two kinds of conditions:
\begin{enumerate}
\item \emph{Stratumwise flexibility:} For every stratum $S <X$, the restricted sheaf $\FF|S$ (assigning $\FF(U)$ only to open subsets $U$ of $S$) is flexible as a sheaf over the manifold $S$. Using Gromov's work \cite{Gromov_PDR}, this hypothesis allows us to conclude
the $h-$principle for the restrictions of $\FF$ to \emph{open strata}.
\item \emph{Flexibility across strata}: For every pair of strata $S < L$ of $X$, the open homotopy fiber sheaf $\HH^L_S$ is flexible on the open stratum $S$.
As pointed out before,
for any pair of strata $S<L$ of $X$, there exist closed and open homotopy fiber sheaves $\overline{\HH}^L_S$ and $\HH^L_S$ respectively. Of these, the open homotopy fiber sheaf $\HH^L_S$ is a more tractable object and is defined on the open (manifold) stratum $S$. \emph{ $\HH^L_S$ is the key new player introduced in this paper in the context of stratified spaces.} No analog
exists in the smooth manifold context. It is the sheaf $\HH^L_S$ that encodes ``gluing data'' across the pair of strata $S, L$.
\end{enumerate}
We establish (Theorem \ref{thm-hofibsflexg}) that if $(1)$ and $(2)$ hold, then $\FF$ satisfies the $h-$principle.\\
The condition of $\diff(V)-$invariance in \cite{Gromov_PDR} is generalized to invariance of the stratified sheaf $\FF$ under stratified diffeomorphisms (Definition \ref{def-stratdiff}). Thus, $\FF$ is $\sdiff-$invariant if for every pair of strata $S<L$ of $X$, $\FF|S$, $i_S^* \FF_{\overline{L}}$ and the restriction map $\operatorname{res}^L_S$ are naturally
$\diff(S)-$invariant. \\
\noindent {\bf Terminology, examples and non-examples:} What we have called ``stratified bundles'' has its origins in work of Thom \cite{Thom_stratmaps}. Mather \cite[p. 500]{mather-notes} defines the notion of a ``Thom map''. In this paper a stratified bundle map is a Thom map satisfying an extra hypothesis (Definition \ref{def-strbdl}, condition (iii)) making the notion closer to a bundle. \\
1) The main class of examples of stratified bundles over stratified spaces, as mentioned earlier, consists of $P: M \to M/G$, where $M$ is a smooth manifold, and $G$ is a compact Lie group. This includes symplectic reductions \cite{SL_stratsympred} \cite[Theorems 1.4.2, 2.4.2]{marsden-lnm}. In fact, if $G_1, G_2$ admit commuting actions on $M$, e.g.\ if the $G_1$ action is on the left, and
the $G_2$ action is on the right, then there exists a stratified bundle $P_1: G_1\backslash M \to G_1\backslash M /G_2$, where $G_1\backslash M$ itself is allowed to be a stratified space. Thus, there are natural examples of stratified bundles of the form $P: X \to X/G$, where $X$ is itself a stratified space.\\
2) \emph{Caveat:} The reader should be warned that in the context of complex analytic spaces, a
genuine topological bundle over a stratified complex analytic space with holomorphic total space, and fiber a complex manifold is sometimes referred to as a stratified bundle (cf.\ \cite{forstneric}). We use stratified bundle in a much more general sense than this. In particular, the fibers over different strata need not be homeomorphic in our context.\\
3) Let $D \subset V$ be a singular divisor in a smooth complex variety $V$. Let
$N_\ep (D)$ be a regular neighborhood, and $\partial N_\ep (D)$ its boundary.
Let $r: N_\ep (D) \to D$ be the retraction map to the divisor.
Then the restriction $r| \partial N_\ep (D): \partial N_\ep (D) \to D$ is \emph{not} an example of
a stratified bundle in our sense. This is because lower dimensional strata in $D$ have higher dimensional fibers under $r| \partial N_\ep (D): \partial N_\ep (D) \to D$. For a stratified bundle in our sense,
the opposite happens: for instance, lower dimensional strata in $M/G$ have lower dimensional
fibers under $P: M \to M/G$.
\subsection{Statement of results}
We are now in a position to state the first main theorem of our paper
(see Theorem \ref{thm-hofibsflexg} for a more precise statement).
\begin{theorem}\label{thm-sheafh-intro}
Let $\FF=\{ \FF_{\overline{L}}: L < X \text{ stratum}\}$ be a stratified continuous sheaf over a stratified space $X$,
such that $\FF$ is stratumwise flexible, i.e.\ $\FF_{\overline{L}}|L$ for each $L < X$ is
flexible.
Further, suppose that $\FF$ is infinitesimally flexible across strata, i.e.\ each open homotopy fiber sheaf $\HH^L_S$ is
flexible.
Then $\FF$ satisfies the parametric $h-$principle.
\end{theorem}
Gromov deduces the h-principle from the homotopy-theoretic condition of flexibility
for sheaves, notably over manifolds \cite[p.76]{Gromov_PDR}. Theorem \ref{thm-sheafh-intro} is the stratified analog of Gromov's theorem: the underlying space is replaced by a stratified space, and sheaves are replaced by stratified sheaves.
The following theorem of Gromov connects flexibility and microflexibility \cite[Ch. 2.2]{Gromov_PDR}.
\begin{theorem}\cite[p. 78]{Gromov_PDR}\label{thm-grmicro2flexintro}
Let $Y = V \times \mathbb{R}$ and let $\Pi: Y \to V$ denote the projection
onto the first factor. Let $\FF$ be a microflexible continuous sheaf over $Y$ invariant under $\diff( V, \Pi)$.
Then the restriction $\FF \vert V \times \{0\}$ is a flexible sheaf over $V (= V \times \{0\})$.
Let $\FF$ be a microflexible $\diff(V)-$invariant continuous sheaf
over a manifold $V$. Then the restriction to an arbitrary piecewise smooth polyhedron
$K \subset V$ of positive codimension, $\FF|K$, is a flexible sheaf over $K$.
\end{theorem}
The stratified analog of Theorem \ref{thm-grmicro2flexintro} is then furnished by the following (see
Theorems \ref{thm-micro2flexs} and \ref{thm-micro2flexs2}):
\begin{theorem}
Let $\FF=\{ \FF_{\overline{L}}: L < X \text{ stratum}\}$ be a stratified continuous sheaf over a stratified space $X$,
such that $\FF$ is $\mathrm{StratDiff}$-invariant. Further, suppose for each stratum $L < X$, $\FF_{\overline{L}}|L$ is microflexible and for each pair of strata $S < L < X$, $\HH^L_S$ is microflexible. Then the restriction $\FF|K$ to a stratified subspace $K \subset X$ of stratumwise positive codimension satisfies the parametric $h-$principle.
\end{theorem}
In Section \ref{sec-formalfnstrat}, we address Sullivan's Question \ref{qn-sullivan} by developing a flag-like structure for jets on stratified spaces. The main results are given by Propositions \ref{prop-formalfnmflds}, \ref{prop-decompcones}, and \ref{prop-formalfnnbhdstrat2}. These results give a stratified analog
of the sheaf of formal $r-$jets in \cite{em-book,em-expo} (see Definition \ref{def-sjr} giving the corresponding sheaf $\sjr^r$). In a sense, Section \ref{sec-formalfnstrat} interpolates between the algebraic topology of Section \ref{sec-hprin} and the differential topology of Section
\ref{sec-hat}.
In Section \ref{sec-hat}, we return to the differential topological setting of jets and jet bundles as a concrete example to which the above sheaf-theoretic theorems may be applied.
Let $p:E \to X$ be a smooth stratified bundle over a smooth stratified space $X$. Then the sheaves of sections of $p:E \to X$ and their jets come with natural control conditions. Let $\FF$ denote the stratified sheaf of controlled sections of $E$ over $X$. Then we have the following
(see Theorem \ref{thm-diffrlnflex}).
\begin{theorem}\label{thm-diffrlnflexintro}
$\FF$ is flexible, in particular it satisfies the parametric $h-$principle.
\end{theorem}
A caveat is in order. The continuous sheaf $\FF$ is strictly smaller than the sheaf of \emph{all} sections. It is rather easy to see that the
sheaf of all sections satisfies the parametric $h-$principle. Theorem \ref{thm-diffrlnflexintro} says that this continues to hold in the presence of control conditions. In fact, $\FF$ can be identified with the sheaf of holonomic stratified $r-$jets (see Definition \ref{def-sjr} for the sheaf $\sjr^r$ of formal stratified $r-$jets), and hence Theorem \ref{thm-diffrlnflexintro} is also true for the sheaf of holonomic stratified jets.
We establish the following stratified holonomic approximation theorem (see Theorem \ref{em-hats}), generalizing Eliashberg-Mishachev's
holonomic approximation theorem \cite[Theorem 1.2.1]{em-expo} for manifolds (Theorem \ref{em-hat}).
\begin{theorem}\label{em-hatsintro}
Let $X$ be {an abstractly stratified space} equipped with a compatible metric, $E \to X$ be a stratified bundle, and $K \subset X$ be a relatively compact
stratified subspace of positive codimension.
Let {$f \in \sjr^r_E(\operatorname{Op} K)$} be a $C^r-$regular formal section. Then for arbitrarily small $\ep > 0$,
$\delta > 0$, there exist a stratified
diffeomorphism $h : X \to X$ with $$||h-{\mathrm{Id}}||_{C^0} < \delta,$$ and a stratified holonomic section {$\til{f} \in \sjr^r_E(\operatorname{Op} K)$} such
that
\begin{enumerate}
\item the image $h(K)$ is contained in the domain of definition of the section $f$,
\item $||\til{f}- f|\operatorname{Op}\, h(K) ||_{C^0} < \ep$, and
\item $\til{f}$ and $f|\operatorname{Op}\, h(K)$ are normally $\ep$ $C^r$-close.
\end{enumerate}
\end{theorem}
We should point out that neither Theorem \ref{thm-diffrlnflexintro} nor Theorem \ref{em-hatsintro} follows from the relative holonomic approximation theorem \cite[Theorem 3.2.1]{em-book}. The basic
issue can be illustrated in the simple case of a pair of strata $S<L$. Let $E_S, E_L$ denote the bundles over $S, L$. In order to prove
either of these theorems, we need to consider \emph{extensions} of a jet
(formal or holonomic) of $E_S$ over $S$ to a jet over a germ of a neighborhood of $S$ in $\overline L$. This echoes the fact mentioned earlier, that in the stratified sheaf context, the open homotopy fiber $\HH^L_S$ is a new player in the game. Alternately, the extension may be thought of as ``gluing data'' that allows us to go between jets over $S$ and jets over $L$.
A host of applications in the manifold context have been enumerated by Eliashberg-Mishachev. All have potential generalizations to the stratified context in the light of Theorem \ref{em-hatsintro}. We give an application of our techniques at the end of Section \ref{sec-emhat} by showing that stratified immersions of positive codimension between stratified spaces satisfy the h-principle (Theorem \ref{thm-immns}): this is the stratified analog of the Smale-Hirsch theorem \cite[Chapter 8.2]{em-book}.
\subsection{Outline of the paper} The aim of Section \ref{sec-prel} is to set up the context of the h-principle for stratified spaces. This section is in the spirit of \cite{em-book}, except that the technology is for stratified spaces in place of manifolds. We start by describing the setup of stratified spaces following \cite{GM_SMT,mather-notes}. Stratified spaces are recalled in Section \ref{sec-stratfdsps} and stratified maps in Section \ref{sec-stratfdmaps}. We define stratified bundles and related notions in Section \ref{sec-stratfdbdl}, where the basic fact we prove is the local structure of bundle maps
(Lemma \ref{lem-strbdl-trivialization} and Corollary \ref{strbdl-cone-link}). It is well-known that locally a stratified space looks like a product $\mathbb R^n \times cA$
of Euclidean space with a cone $cA$ on a compact stratified space $A$. Lemma \ref{lem-strbdl-trivialization} and Corollary \ref{strbdl-cone-link} upgrade this to a statement about the local structure of a stratified bundle over a stratified space.
We then proceed
in Section \ref{sec-stratfdjet} to define
stratified jets, jet bundles,
and formal and holonomic sections in the stratified context.
While Section \ref{sec-prel} ends by setting up the context of the h-principle for stratified spaces by describing jet bundles and their sections and is in the spirit of \cite{em-book}, Section \ref{sec-sheafflexdiff} has more of an algebraic topology flavor, and is in the spirit of
\cite{Gromov_PDR}. Here, we look at sheaves over stratified spaces.
The crucial notion of a \emph{stratified sheaf} over a stratified space, along with the attendant flexibility conditions, is introduced in Section \ref{sec-flexdefs}. A principal condition used in \cite{Gromov_PDR} is
Diff-invariance of sheaves. In Section \ref{sec-diffinv}, we describe the stratified analog, $\sdiff-$invariance, in the context of stratified spaces.
In Section \ref{sec-hprin}, we prove one of the main theorems of the paper, Theorem \ref{thm-hofibsflexg} (or Theorem \ref{thm-sheafh-intro} above), establishing the parametric $h-$principle for stratified sheaves over stratified spaces. The main idea or mnemonic may be summarized as follows: flexibility (a precursor to the h-principle) normal to strata plus flexibility tangential to strata furnishes the $h-$principle for stratified sheaves over stratified spaces. In a sense, this is in the spirit of Goresky-MacPherson's fundamental theorem on Morse data \cite{GM_SMT} on stratified spaces, where
total Morse data can be recovered from normal Morse data and tangential Morse data.
On the way, we establish Theorem \ref{thm-flex2shprin}, spelling out the connection between flexibility and the h-principle in the context of stratified sheaves. A tool we use
in Section \ref{sec-hprin} is Milnor's theory of microbundles. This allows us to simplify Gromov's formalism from
\cite[Chapter 2]{Gromov_PDR}.
In Section \ref{sec-micro}, we establish the connection between flexibility and microflexibility of stratified sheaves (see Theorem \ref{thm-micro2flexs}). It follows (see Theorem \ref{thm-micro2flexs2}) that the restriction of microflexible $\sdiff-$invariant sheaves to positive codimension stratified subspaces satisfies the parametric $h-$principle.
Section \ref{sec-formalfnstrat} is devoted to using microbundles and developing a homotopy model of the Gromov diagonal normal sheaf $\FF^*$
for a sheaf $\FF$ of controlled sections of a stratified bundle $P: E \to X$. In so doing, we answer Sullivan's Question \ref{qn-sullivan} by developing a formalism of flag-like sets.
The description is hybrid in nature. There is a tangential component given by sections along manifold strata $S$ and there is a normal component given by sections over the link of $S$ in $X$. Since the link $A$ of a stratum $S$ is itself a stratified space, the restriction of $P: E \to X$ to $P:P^{-1}(A)\to A$ is again a stratified bundle of lesser complexity; this yields an inductive description.
We return to jets and jet bundles of stratified bundles over stratified spaces in Section \ref{sec-hat}.
Theorem \ref{thm-diffrlnflex} establishes that the
sheaf of sections of a stratified jet bundle satisfies the parametric $h-$principle. We also prove the stratified analog of Eliashberg-Mishachev's holonomic approximation theorem in Theorem \ref{em-hats}. As an application of Theorem \ref{thm-sheafh-intro}, we establish a stratified Smale-Hirsch theorem: stratified immersions of positive codimension between stratified spaces satisfy the h-principle (Theorem \ref{thm-immns}).\\
\noindent {\bf Acknowledgments:} The authors thank Yasha Eliashberg for comments on an earlier draft.
\section{Smoothly stratified objects and maps} \label{sec-prel}
\subsection{Smoothly stratified spaces}\label{sec-stratfdsps}
\begin{defn}\label{def-Idec} Let $X$ be a locally compact second countable metric space and let $(I, \leq)$ be a partially ordered set. An \emph{$I$-decomposition} of $X$ is a locally finite collection $\{S_\alpha\}_{\alpha \in I}$ of disjoint locally closed subsets of $X$ such that
\begin{enumerate}
\item $S_\alpha$ is a topological manifold for all $\alpha \in I$
\item $X = \bigcup_{\alpha \in I} S_\alpha$
\item $S_\alpha \cap \overline{S_\beta} \neq \emptyset \iff S_\alpha \subseteq \overline{S_\beta} \iff \alpha \leq \beta$.
\end{enumerate}\end{defn}
If $X$ is an $I$-decomposed space as above, we shall call $S_\alpha, \, \alpha \in I,$ the \emph{strata} of $X$, and denote by $\Sigma$ the collection of strata of $X$ indexed by $I$. We shall use $\alpha \leq \beta$ and $S_\alpha \leq S_\beta$ interchangeably, partially ordering $\Sigma$ instead. If $S_\alpha \leq S_\beta$ and $S_\alpha \neq S_\beta$ we shall write $\alpha < \beta$ (or $S_\alpha < S_\beta$). Note that $S_\alpha < S_\beta$ is equivalent to saying that $S_\alpha$ lies in the boundary $$\partial S_\beta := \overline{S_\beta} \setminus S_\beta$$ of $S_\beta$. For any stratum $S \in \Sigma$ we define the \emph{depth} of $S$ to be
$$\depth(S): = \sup\{n : \text{there exist } S_i \in \Sigma \text{ such that } S < S_1 < \cdots < S_{n-1}\}.$$
Similarly, define the \emph{height} of $S$ to be
$$\operatorname{height}(S): = \sup\{n : \text{there exist } S_i \in \Sigma \text{ such that } S > S_1 > \cdots > S_{n-1}\}.$$
We shall moreover define the \emph{depth} and \emph{dimension} of $X$ respectively as
$$\depth (X) := \sup_{\alpha \in I}\, \depth (S_\alpha), \;\;\; \dim X := \sup_{\alpha \in I} \,\dim (S_\alpha).$$
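For example, let $X = \overline{D^2}$ be the closed $2$-disk, with $I = \{\alpha, \beta\}$, $\alpha < \beta$, $S_\alpha = \partial D^2$ and $S_\beta = X \setminus \partial D^2$ the open disk. Both strata are manifolds, $X = S_\alpha \sqcup S_\beta$, and $S_\alpha \subset \overline{S_\beta}$, so this is an $I$-decomposition. The only nontrivial chain of strata is $S_\alpha < S_\beta$; reading the suprema above literally (so that the chain consisting of $S$ alone corresponds to $n = 1$), we get
$$\depth(S_\alpha) = 2, \quad \operatorname{height}(S_\alpha) = 1, \quad \depth(S_\beta) = 1, \quad \operatorname{height}(S_\beta) = 2,$$
and $\depth(X) = \dim X = 2$.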
\begin{defn}\label{def-whitneyab}Let $(S, L)$ be a pair of smooth (not necessarily properly) embedded submanifolds of a smooth manifold $M$ such that $S \subset \overline{L}$.
\begin{enumerate}
\item The pair $(S, L)$ is said to satisfy \emph{Whitney condition (a)} if for any sequence $\{x_n\} \subset L$ converging to $x \in S$ such that the sequence of tangent planes $T_{x_n} L$ converge to a plane $\tau \subset T_x M$, the inclusion $T_x S \subset \tau$ holds.
\item The pair $(S, L)$ is said to satisfy \emph{Whitney condition (b)} if the following holds.\\ Let $\{x_n\} \subset L$, $\{y_n\} \subset S$ be any pair of sequences both converging to $x \in S$ such that the tangent planes $T_{x_n} L$ converge to some plane $\tau \subset T_x M$. Further suppose that the secants $\overline{x_n y_n}$ converge to a line $\ell \subset T_x M$. Then $\ell \subset \tau$.
\end{enumerate}
\end{defn}
The notions of convergence of planes and lines mentioned above are defined locally by choosing a coordinate chart around $x$ in $M$, such that the chart contains a tail of the sequences $\{x_n\}, \{y_n\}$. It is straightforward to check that the property of the pair $(S, L)$ satisfying either of the above conditions is independent of the chosen coordinate chart. See \cite{mather-notes} for a coordinate-free restatement of condition $(b)$.
Note that condition $(b)$ implies condition $(a)$: given any sequence $\{x_n\} \subset L$ with $T_{x_n} L \to \tau$ and a line $\ell \subset T_x S$, defined in a local chart $(U, x) \cong (\mathbb R^n, 0)$ around $x$, one can construct a pair of sequences $\{x_n\} \subset L$ and $\{y_n\} \subset S$ such that the secants $\overline{x_n y_n}$ converge to $\ell \subset T_x M$. Since $T_{x_n} L \to \tau$, condition $(b)$ forces $\ell \subset \tau$. But $\ell \subset T_x S$ was arbitrary, so we conclude $T_x S \subset \tau$; that is, condition $(a)$ holds.
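As a simple illustration, consider the cone on a circle: let $M = \mathbb R^3$, $S = \{0\}$ and $L = \{(x, y, z) : z^2 = x^2 + y^2,\ z > 0\}$. For any sequence $\{x_n\} \subset L$ converging to $0$ and any sequence $\{y_n\} \subset S$ (so $y_n = 0$), the secant $\overline{x_n y_n}$ is the line through $0$ and $x_n$, which lies in the tangent plane $T_{x_n} L$ since $L$ is ruled by lines through the origin. Hence any limit line $\ell$ of secants is contained in any limit plane $\tau$ of the tangent planes $T_{x_n} L$, so condition $(b)$, and therefore condition $(a)$, holds for the pair $(S, L)$.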
\begin{defn}\label{def-whitneyss} A \emph{Whitney stratified subset} of a smooth manifold $M$ is a subset $X \subset M$ with an $I$-decomposition $\Sigma$ such that every stratum in $\Sigma$ is a {smoothly} embedded submanifold of $M$, and any pair of strata {$(S, L)$ in $\Sigma$ such that $S < L$} satisfies the Whitney condition $(b)$.\end{defn}
Thom and Mather showed that every Whitney stratified subset of a smooth manifold has a canonical local model akin to manifolds being locally modeled by Euclidean spaces. This gives us an intrinsic definition of a topological Whitney stratified set. The cone on a topological space $A$ is denoted as {$cA$}.
\begin{defn}\cite{friedman-notes}\label{def-cs} A \emph{CS set} is an $I$-decomposed space $(X, \Sigma)$ such that for any stratum $S \in \Sigma$ the following holds: \\ For any point $x \in S$ there exists an open neighborhood $U$ of $x$ in $X$, {a chart $V$} around $x$ in $S$, and a stratified space $(A, \Sigma_A)$ called the \emph{link} of $x$, such that there is a stratum-preserving homeomorphism {$\varphi : V \times cA \to U$} where $U$ {is} given the induced stratification from $X$.\end{defn}
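Two extreme cases may help fix the notation. If $x$ lies in a stratum that is open in $X$ (for instance, any point of a manifold stratified with a single stratum), the link is $A = \emptyset$, the cone $cA$ is a single point, and the local model $V \times cA \cong V$ reduces to an ordinary chart. At the cone point of $X = cA$, for $A$ a compact stratified space, one may take $V$ to be a point and $U \cong cA$ itself.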
\begin{theorem}\cite{mather-notes}\label{mather-cs}
Every Whitney stratified subset of a smooth manifold is a CS set.
\end{theorem}
Given an $I$-decomposed space $(X, \Sigma)$, a \emph{tubular neighborhood system} or simply, a \emph{tube system} $\mathcal{N}$ on $\Sigma$ is a collection of triples $(N_\alpha, \pi_\alpha, \rho_\alpha)_{\alpha \in I}$ consisting of (for every $\alpha \in I$) an open neighborhood $N_\alpha$ of the stratum $S_\alpha$ in $X$, called the \emph{tubular neighborhood} of the stratum, a retraction $\pi_\alpha : N_\alpha \to S_\alpha$, called the \emph{tubular projection}, and a continuous function $\rho_\alpha : N_\alpha \to [0, \infty)$ such that $\rho_\alpha^{-1}(0) = S_\alpha$; the function $\rho_\alpha$ is called the \emph{radial function}.
We now define an \emph{abstract stratification} in the sense of Mather \cite{mather-notes}. It provides a notion of a smooth stratification on an $I$-decomposed space $(X, \Sigma)$ independent of the ambient space it is embedded in. This is analogous to the abstract definition of a smooth manifold using a smooth atlas rather than via an embedding.
\begin{defn}\label{def-abtractstratsp} An $I$-decomposed space $(X, \Sigma)$ equipped with a tube system $\mathcal{N}$ on $\Sigma$, denoted by the triple $(X, \Sigma, \mathcal{N})$, defines an \emph{{abstractly} stratified space} if the following conditions hold:
\begin{enumerate}
\item Each stratum $S_\alpha \in \Sigma$ {is a} smooth manifold.
\item For any pair $\alpha, \beta \in I$ of indices such that $\alpha < \beta$, set $N_{\alpha\beta} := N_\alpha \cap S_\beta$, and denote by $\pi_{\alpha\beta}$ and $\rho_{\alpha\beta}$ the restrictions of $\pi_\alpha$ and $\rho_\alpha$ to $N_{\alpha\beta}$. The map $(\pi_{\alpha\beta}, \rho_{\alpha\beta}) : N_{\alpha\beta} \to S_\alpha \times (0, \infty)$ is a smooth submersion.
\item For all triples $\alpha, \beta, \gamma \in I$ of indices with $\alpha < \beta < \gamma$, the \emph{$\pi$-control condition} $\pi_{\alpha\beta}\pi_{\beta\gamma}(x) = \pi_{\alpha\gamma}(x)$ and the \emph{$\rho$-control condition} $\rho_{\alpha\beta}\pi_{\beta\gamma}(x) = \rho_{\alpha\gamma}(x)$ are satisfied whenever $x \in N_{\beta\gamma} \cap N_{\alpha\gamma} \cap \pi_{\beta\gamma}^{-1}(N_{\alpha\beta})$.
\end{enumerate}
\begin{comment}
If $(X, \Sigma, \mathcal{N})$ satisfies all the above conditions except the $\rho$-control condition, it is called an \emph{{weakly abstractly} stratified space}.
\end{comment}
\end{defn}
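As a simple example, let $X = \{(x, y) \in \mathbb R^2 : y \geq 0\}$ with strata $S_\alpha = \{y = 0\}$ and $S_\beta = \{y > 0\}$, $\alpha < \beta$. Taking $N_\alpha = \{y < 1\}$, $\pi_\alpha(x, y) = (x, 0)$ and $\rho_\alpha(x, y) = y$ (and the trivial tube for the open stratum $S_\beta$), the map
$$(\pi_{\alpha\beta}, \rho_{\alpha\beta})(x, y) = ((x, 0), y)$$
is a diffeomorphism from $N_{\alpha\beta} = \{0 < y < 1\}$ onto $S_\alpha \times (0, 1)$, in particular a submersion. With only two strata, the control conditions in (3) are vacuous, so $(X, \Sigma, \mathcal{N})$ is an abstractly stratified space.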
For simplicity of notation, we will often denote the tubular neighborhood of a stratum $S \in \Sigma$ in an {abstractly} stratified space $(X, \Sigma, \mathcal{N})$ by $N_S$, and the associated tubular projection and radial function will be denoted by $\pi_S$ and $\rho_S$. For two strata $S, L \in \Sigma$, $S < L$, the tubular neighborhood of $S$ in $L$ will be defined as $N_{SL} := N_S \cap L$ and the restrictions of $\pi_S$ and $\rho_S$ to $N_{SL}$ will be denoted by $\pi_{SL}$ and $\rho_{SL}$ respectively. Let
$$N_S^\varepsilon := \{x \in N_S : \rho_S(x) < \varepsilon(\pi_S(x))\} \subset N_S,$$ where $\varepsilon : S \to (0, \infty)$ is a \emph{smooth positive function}. We shall often use the shorthand $\varepsilon > 0$ if there is no scope for confusion. $N_S$ shall usually mean $N_S^1$.
{Here, we use $\varepsilon$ to denote a function on $S$ defining a tubular neighborhood of $S$. However, for most applications, we shall need to consider the function $\varepsilon$ only on relatively compact subsets of $S$, where it may be taken to be a small constant; hence this notation.}
Two tube systems $\mathcal{N} = (N_S, \pi_S, \rho_S)_{S \in \Sigma}$ and $\mathcal{N}' = (N'_S, \pi'_S, \rho'_S)_{S \in \Sigma}$ on an $I$-decomposed space $(X, \Sigma)$ {shall be declared} \emph{equivalent} if for any stratum $S \in \Sigma$, there exists an open neighborhood $S \subset N''_S \subset N_S \cap N'_S$ such that $\pi_S|N''_S = \pi'_S|N''_S$ and $\rho_S|N''_S = \rho'_S|N''_S$. If $(X, \Sigma_X, \mathcal{N}_X)$ and $(Y, \Sigma_Y, \mathcal{N}_Y)$ are two {abstractly} stratified spaces and $f : X \to Y$ is a stratum-preserving homeomorphism such that the pulled back tube system $f^* \mathcal{N}_Y = (f^* N_S, f^{-1} \circ \pi_S \circ f, \rho_S \circ f)_{S \in \Sigma_Y}$ is equivalent to $\mathcal{N}_X$, then $f$ is said to be an \emph{isomorphism} between $X$ and $Y$.
{Examples of stratified spaces include manifolds with corners. A product $X \times Y$ of stratified spaces $X, Y$ is naturally stratified, with strata the products of strata of $X$ and $Y$; however, there is no canonically defined abstract stratification in general. Consider, for instance, $I \times I$, where $I = [0, 1]$ is stratified as a manifold with boundary. If, however, one of $X$ or $Y$ is a manifold, then $X \times Y$ does have a canonical abstract stratification.}
\begin{theorem}\cite{mather-notes}\label{mather} Any Whitney stratified subset $(X, \Sigma) \subset M$ admits {a tubular neighborhood system consisting of (not necessarily properly) embedded tubular neighborhoods $\nu(S)$, one for each stratum $S \in \Sigma$ in $M$}. Further, there exists a projection $\pi_S : \nu(S) \to S$ and radial function $\rho_S : \nu(S) \to [0, \infty)$ such that $N_S = \nu(S) \cap X$. The restrictions of $\pi_S$ and $\rho_S$ to $N_S$ furnish a tube system $\mathcal{N} = (N_S, \pi_S, \rho_S)_{S \in \Sigma}$ which {makes $(X, \Sigma, \mathcal{N})$ an abstractly stratified space}.\end{theorem}
The following theorem is a version of the Whitney embedding theorem for {abstractly} stratified spaces, essentially saying that every {abstractly} stratified space of dimension $n$ can be embedded in $\mathbb{R}^N$ for $N \geq 2n+1$ as a Whitney stratified space, and any two such embeddings are isotopic if $N \geq 2n+2$.
\begin{theorem}\cite{natsume}\label{natsume} Let $(X, \Sigma, \mathcal{N})$ be an {abstractly} stratified space with $\dim X = n$. Then for any $N \geq 2n+1$ there is a \emph{realization} of $X$ in $\mathbb{R}^N$, i.e.\ there exists an embedding $\iota : X \to \mathbb{R}^N$ such that $X' = \iota(X)$ is a Whitney stratified subset of $\mathbb{R}^N$ with a stratification $\Sigma' = \{\iota(S) : S \in \Sigma\}$ and a tube system $\mathcal{N}' = (N_S, \pi_S, \rho_S)$ {as in Theorem \ref{mather}, such that
$$\iota : (X, \Sigma, \mathcal{N}) \to (X', \Sigma', \mathcal{N}')$$
is an isomorphism of {abstractly} stratified spaces}. Moreover if $N \geq 2n+2$ any two such embeddings $\iota_0, \iota_1 : X \to \mathbb{R}^N$ are isotopic in the following sense: there is a realization $H : X \times I \to \mathbb{R}^{N+1}$ such that $H(x, t) = (H_t(x), t)$ where $H_t : X \to \mathbb{R}^N$ is a realization for all $0 \leq t \leq 1$ and $H_0 = \iota_0, H_1 = \iota_1$.\end{theorem}
\begin{theorem}\cite{goresky-triangulation}\label{goresky-triangulation}
Any {abstractly} stratified space admits a triangulation by smoothly embedded simplices compatible with the filtration by stratum-closures.\end{theorem}
\subsection{Stratified maps}\label{sec-stratfdmaps}
\begin{defn}\label{def-controlledmap} A map $f : (X, \Sigma_X) \to (Y, \Sigma_Y)$ of $I$-decomposed spaces is said to be a {\emph{stratum-preserving map}} if for any $S \in \Sigma_X$, there is a unique $L \in \Sigma_Y$ such that $f(S) \subset L$. Equivalently, for every stratum $L \in \Sigma_Y$, $f^{-1}(L)$ is a disjoint union of strata of $\Sigma_X$.
If $(X, \Sigma_X, \mathcal{N}_X)$ and $(Y, \Sigma_Y, \mathcal{N}_Y)$ are {abstractly} stratified spaces, then a stratum-preserving map $f : X \to Y$ of the underlying $I$-decomposed spaces is said to be a \emph{controlled map} if for any stratum $S \in \Sigma_X$ and the corresponding unique stratum $L \in \Sigma_Y$ such that $f(S) \subset L$, the following conditions are satisfied:
{\begin{enumerate}
\item $f|S : S \to L$ is a smooth map.
\item There exists $\varepsilon > 0$ such that $f(N_S^\varepsilon) \subset N_L$.
\item The \emph{$\pi$-control} condition
$$f(\pi_S(x)) = \pi_L(f(x))$$
and the \emph{$\rho$-control} condition
$$\rho_S(x) = \rho_L(f(x))$$
hold for all $x \in N_S^\varepsilon$.
\end{enumerate}}
If all the conditions above except the $\rho$-control condition are satisfied, $f$ is said to be a \emph{weakly controlled map}.\end{defn}
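To see the difference concretely, stratify $\mathbb R^2 = c(S^1)$ with strata $S = \{0\}$ and $L = \mathbb R^2 \setminus \{0\}$, with $\pi_S$ the constant map to $0$ and $\rho_S(x) = |x|$. Any rotation $f(x) = Rx$, $R \in SO(2)$, is a controlled map: it preserves each stratum, commutes with $\pi_S$, and satisfies
$$\rho_S(f(x)) = |Rx| = |x| = \rho_S(x).$$
By contrast, the scaling $f(x) = 2x$ is stratum-preserving and weakly controlled, but fails the $\rho$-control condition, since $\rho_S(f(x)) = 2\rho_S(x)$.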
\begin{comment}\label{rmk-wc} To see what Definition \ref{def-controlledmap} means in local coordinates, it is convenient to equip $X,Y$ with metrics $g_X, g_Y$ such that the radial functions $\rho$ give radial co-ordinates in $g_X, g_Y$. The $\rho-$control condition $\rho_S(x) = \rho_L(f(x))$ ensures that if $f (S) \subset L$, then the radial co-ordinate $\rho_S$ in a small normal neighborhood $N^\ep_S$ is mapped {\bf isometrically} to the radial co-ordinate $\rho_L$.
However, weakly controlled maps are much more flexible. Let $X=\mathbb R^m \times cA${,} $Y= \mathbb R^n \times cB$. Here $cA$ is the cone on $A$, given by $A \times [0,1)/A \times \{0\}$. Equip $cA$ with `polar' co-ordinates given by a radial co-ordinate $\rho_A$ (taking values in $[0, \infty)$) and `angular' co-ordinates $\theta_A \in A$. For convenience we assume that $A$ is a compact stratified space embedded in a sphere. Similarly, $cB$ is the cone on $B$, given by $B \times [0,1)/B \times \{0\}$ equipped with `polar' co-ordinates $\rho_B, \theta_B$.
Consider a stratified map $f: X \to Y$. (This is a frequently occurring local model for such maps, see Corollary \ref{cor-trivialzn} below.)
Equip $X= \mathbb R^m \times cA$ with cylindrical co-ordinates $(z_X, \rho_A, \theta_A)$ and
$Y= \mathbb R^n \times cB$ with cylindrical co-ordinates $(z_Y, \rho_B, \theta_B)$. If $f$ is weakly controlled (with respect to the controlled structures given by the cylindrical coordinates), then
$f(z_X, \rho_A, \theta_A) \subset Y$ is given
in terms of three functions as follows: $$f(z_X, \rho_A, \theta_A)=(f_1(z_X), f_2(z_X, \rho_A, \theta_A), f_3(z_X, \rho_A, \theta_A)).$$
For a strongly controlled map, $f_2(z_X, \rho_A, \theta_A)$ necessarily equals $\rho_A$.
An easy way of reducing the weakly controlled case to the strongly controlled case is given below.
\begin{rmk}\label{rmk-wc2c}
In both the weakly and strongly controlled cases, there is a deformation retraction $\pi_S$ of an open neighborhood $N(S)$ of a stratum $S$ to $S$. The pre-images $\pi_S^{-1} (z_X)$ of points $z_X \in S \subset X$ are homeomorphic (in fact, {isomorphic}) to cones $cA$, where $A$ is the link of $S$ (see Corollary \ref{cor-trivialzn} below). For a weakly controlled map, the radial co-ordinate $\rho_A$ on $N_S$ is then
a function of two variables $z_X$ and $\theta_A$. If we simply replace the radial co-ordinate $\rho_A$ by a new radial co-ordinate given by $\rho_A^\prime=\rho_B \circ f$ (i.e.\ the pullback of the radial co-ordinate on $Y$ under $f$), then $f$ becomes a strongly controlled map. Equivalently,
up to a choice of reparametrization of the radial function $\rho$, weakly and strongly controlled maps coincide.
\end{rmk}
\end{comment}
\begin{defn}\label{def-stratfdsubmimm} A controlled map $f : (X, \Sigma_X, \mathcal{N}_X) \to (Y, \Sigma_Y, \mathcal{N}_Y)$ is a \emph{stratified submersion} if for any stratum $L \in \Sigma_Y$ and any component $S \in \Sigma_X$ of $f^{-1}(L)$, $f|S : S \to L$ is a submersion.
\begin{comment}
A controlled map $f : (X, \Sigma_X) \to (Y, \Sigma_Y)$ is a {\bf stratified immersion} if for every stratum $S \in \Sigma_X$
and the corresponding unique stratum
$L \in \Sigma_Y$ containing $f(S)$,
$f|S : S \to L$ is an immersion. If $f$ is moreover injective, we shall call it a {\bf stratified embedding}.
\end{comment}
\end{defn}
\begin{comment}\label{rmk-term}
Stratified submersions were called stratified maps in \cite{GM_SMT}; however since we shall have need for stratified immersions as well, we have preferred to use this alternate terminology.
\end{comment}
\subsection{Stratified bundles}\label{sec-stratfdbdl}
\begin{defn}\cite{mather-notes}\label{def-strvf} Let $(X, \Sigma, \mathcal{N})$ be an {abstractly} stratified space. A \emph{stratified vector field} $\eta$ on $X$ is a collection $\{\eta_S : S \in \Sigma\}$ where for each stratum $S \in \Sigma$, $\eta_S$ is a smooth vector field on $S$. The
stratified vector field $\eta$ will be called a \emph{controlled vector field} if for any pair $S, L \in \Sigma$ of strata with $S < L$, there exists some $\varepsilon > 0$ such that for any $x \in N^\varepsilon_S \cap L$, the following conditions hold:
\begin{enumerate}
\item $\eta_L \rho_{SL}(x) = 0$.
\item $(\pi_{SL})_*\eta_L(x) = \eta_S(\pi_{SL}(x))$.
\end{enumerate}
If we simply drop the first condition, we obtain a \emph{weakly controlled vector field}.
\end{defn}
Thus, a controlled vector field in the higher dimensional stratum $L$ is {parallel to the lower dimensional stratum $S$}, i.e. it does not change along the radial direction $\rho_{SL}$. This is ensured by the
first condition above. It also projects `isomorphically' to the vector field in the lower dimensional stratum $S$. This is ensured by the
second condition above. A weakly controlled vector field in the higher dimensional stratum $L$ is not necessarily {parallel to the lower dimensional stratum $S$}, i.e.\ it is allowed to have a radial component; however, if the radial component is subtracted from a weakly controlled vector field, we obtain a controlled vector field.
We now define higher-dimensional controlled distributions. These will be useful in defining stratified fiber bundles below. Note that we drop the $\rho$-control condition in this case.
\begin{defn}\label{def-strdist}Let $(X, \Sigma, \mathcal{N})$ be an {abstractly} stratified space. A \emph{stratified distribution} $D$ on $X$ is a collection $\{D_S : S \in \Sigma\}$ where for each stratum $S \in \Sigma$, $D_S$ is a smooth subbundle of the tangent bundle $TS$ of $S$.
The distribution
$D$ will be called a \emph{weakly controlled distribution} if
for any pair $S, L \in \Sigma$ of strata with $S < L$, there exists some $\varepsilon > 0$ such that for any $x \in N^\varepsilon_S \cap L$, $$(\pi_{SL})_* D_L(x) = D_S(\pi_{SL}(x)).$$ \end{defn}
Note that the dimensions of $D_S, D_L$ may differ for $S \neq L$. For the next definition, we shall need the local structure of neighborhoods $N_S$ of strata $S$.
By Thom's first isotopy lemma, $N_S$ is a fiber bundle over $S$ with fiber $cA$, where $A$ denotes the link of $S$ in $X$.
\begin{defn}\label{def-strbdl} A triple $(E, X, p)$ consisting of
\begin{enumerate}
\item {An} {abstractly} stratified {space} $(X, \Sigma, \mathcal{N})$, called the \emph{base space},
\item {An} {abstractly} stratified space $(E, \widetilde{\Sigma}, \widetilde{\mathcal{N}})$, called the \emph{total space},
\item {A} weakly controlled map $p : E \to X$ called the \emph{bundle projection},
\end{enumerate}
will be called a \emph{stratified fiber bundle} if
\begin{enumerate}
\item[(i)] For every stratum $\widetilde{S} \in \widetilde{\Sigma}$ and the corresponding unique stratum $S \in \Sigma$ such that $p(\widetilde{S}) \subseteq S$, the restriction $p : \widetilde{S} \to S$ is a {smooth} fiber bundle, and
\item[(ii)] The stratified distribution $\ker dp := \{\ker d(p|_{\widetilde{S}}) : \widetilde{S} \in \widetilde{\Sigma}\}$ on $E$ is weakly controlled.
\item[(iii)] Let $p|N_{\til{S}}: N_{\til{S}} \to N_{{S}}$ denote the restriction of $p$ to a neighborhood $N_{\til{S}}$ of ${\til{S}}$. Let $B, A$ denote the links of ${\til{S}},S$ respectively, so that $N_{\til{S}}$ (resp.\ $N_{{S}}$) is a bundle over ${\til{S}}$ (resp.\ $S$) with fiber $cB$
(resp.\ $cA$). Identify $\til S$ with the zero-section of $N_{\til{S}}$. We demand that $(p|N_{\til{S}})^{-1} ({S}) = \til{S}$.
\end{enumerate}
Given a stratified fiber bundle $(E, X, p)$, a \emph{(weakly) controlled section} of $p$ is a (weakly) controlled map $s : X \to E$ such that $p \circ s = \mathrm{id}$.\end{defn}
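A basic example is the product bundle: if $(X, \Sigma, \mathcal{N})$ is an abstractly stratified space and $M$ is a smooth manifold, then $E = X \times M$, with strata $\widetilde{S} = S \times M$ for $S \in \Sigma$ and $p : E \to X$ the first projection, is a stratified fiber bundle. Indeed, $p : S \times M \to S$ is a trivial smooth bundle; taking $\widetilde{\pi} = \pi \times \mathrm{id}_M$ as the tubular projection for $\widetilde{S}$, the distribution $\ker dp = \{0 \oplus TM\}$ is weakly controlled; and the link of $\widetilde{S}$ coincides with the link $A$ of $S$, with $(p|N_{\widetilde{S}})^{-1}(S) = S \times M = \widetilde{S}$.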
We should point out that a stratified bundle $(E, X, p)$ as defined above is necessarily locally trivial over strata {by Thom's second isotopy lemma, originally formulated in \cite{Thom_stratmaps} (for a detailed proof, see \cite[Proposition 11.2]{mather-notes})}. Hence, we can think of a stratified bundle as a collection $\{(E_S, S, p): S \in \Sigma\}$, where each $p : E_S \to S$ is a genuine topological bundle over the stratum $S$ with fiber a stratified space. {The conditions of Definition \ref{def-strbdl} ensure that these bundles patch together consistently. Condition (ii) is known as Thom's condition $(a_p)$ in the literature \cite[Section 11]{mather-notes}}. Condition (iii) forces the restriction
$p|cB$ of $p$ to a $cB-$fiber of $N(\til{S})$ to land in a $cA-$fiber of $N(S)$ with the additional condition that the pre-image of the cone-point $c_A$ of $cA$ is exactly the cone-point $c_B$ of $cB$. This will be useful in Corollary \ref{cor-trivialzn} below.
\begin{comment}
\begin{rmk}\label{rmk-allsxnswc}
Given a stratified fiber bundle $(E, X, p)$, $p$ is by definition a weakly controlled map. Hence any section $s : X \to E$ is also necessarily weakly controlled.
\end{rmk}
\end{comment}
We shall sometimes use the suggestive notation $p:E \to X$ for a stratified fiber bundle. The following {lemma} and its consequences (Corollary \ref{strbdl-cone-link} and Corollary \ref{cor-trivialzn}) give the \emph{local structure} of stratified fiber bundles.
\begin{lemma}\label{lem-strbdl-trivialization} Let $(E, X, p)$ be a stratified fiber bundle. For any point $\widetilde{x} \in E$ with $p(\widetilde{x}) = x$, there is an open neighborhood $V$ of $\widetilde{x}$ in $E$, and $U$ of $x$ in $X$, equipped with the respective induced stratifications, such that $p(V) = U$, and
{\begin{enumerate}
\item There exist {abstractly} stratified spaces $(A, \Sigma_A, \mathcal{N}_A)$, $(B, \Sigma_B, \mathcal{N}_B)$ and isomorphisms of {abstractly} stratified spaces $\psi : V \to cB \times \mathbb{R}^n$ and $\varphi : U \to cA \times \mathbb{R}^m$ for some $n \geq m$,
\item There exists a map $f : cB \times \mathbb{R}^n \to cA \times \mathbb{R}^m$ which factors as $f = (g, \mathrm{proj})$ where $\mathrm{proj} : \mathbb{R}^n \to \mathbb{R}^m$ denotes the projection to the first $m$ coordinates,
\end{enumerate}
making the following diagram commute:
$$
\begin{CD}
V @>>> cB \times \mathbb R^n \\
@VVV @VVV \\
U @>>> cA \times \mathbb R^m
\end{CD}
$$}
\end{lemma}
\begin{proof} Suppose $\widetilde{S}$ is the unique stratum of $E$ containing $\widetilde{x}$. Let $S$ be the unique stratum of $X$ containing $x$. So $p(\widetilde{S}) \subseteq S$. Let $\widetilde{\pi}$ be the tubular projection associated to $\widetilde{S}$ in $E$ and let $\pi$ be the tubular projection associated to $S$ in $X$. Since $p|\widetilde{S} : \widetilde{S} \to S$ is a fiber bundle, we can choose charts $\widetilde{O} \cong \mathbb{R}^n$ around $\widetilde{x}$ in $\widetilde{S}$ and $O \cong \mathbb{R}^m$ around $x$ in $S$ such that $p : \widetilde{O} \to O$ is equivalent to the projection $\mathrm{proj} : \mathbb{R}^n \to \mathbb{R}^m$ with respect to local coordinates. Let {$V = \widetilde{\pi}^{-1}(\widetilde{O}) \cap \mathrm{cl}(N^\varepsilon_{\widetilde{S}})$} and $U = \pi^{-1}(O) \cap \mathrm{cl}(N^\varepsilon_S)$ for some appropriate $\varepsilon > 0$
(here $\mathrm{cl}(-)$ denotes closure). Observe that $\pi : U \to O$ is a proper stratified submersion. Choose coordinate vector fields $\partial_1, \cdots, \partial_m$ on $O$ corresponding to local coordinates $t_1, \cdots, t_m$. By \cite[Proposition 9.1]{mather-notes} there exist controlled vector fields $\eta_1, \cdots, \eta_m$ on $U$ which commute stratumwise such that $\pi_* \eta_i = \partial_i$ for all {$1 \leq i \leq m$}. Let $F = \pi^{-1}(x) \cap \mathrm{cl}(N^\varepsilon_S)$.
Let $\Phi_i$ be the local $1$-parameter family of stratum-preserving homeomorphisms on $U$ generated by $\eta_i$, for $1 \leq i \leq m$. For any $u \in U$, there exists a unique $v \in F$ and unique $(t_1, \cdots, t_m)\in O$, such that {$(\Phi_1^{t_1} \circ \Phi_2^{t_2}\cdots \circ \Phi_m^{t_m})(v)=u$}.
Inverting this correspondence gives a homeomorphism $h : U \to F \times O$ defined by {$$h(u) = ((\Phi_m^{-t_m} \circ \cdots \circ \Phi_2^{-t_2} \circ \Phi_1^{-t_1})(u), t_1, \cdots, t_m),$$} where $\pi(u) = (t_1, \cdots, t_m)$ in the chosen coordinates, so that $t_1, \cdots, t_m$ are (implicitly) functions of $u$.
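To see that $h$ inverts the flow correspondence, note that if $u = (\Phi_1^{t_1} \circ \cdots \circ \Phi_m^{t_m})(v)$ with $v \in F$, then the flows cancel in order:
$$(\Phi_m^{-t_m} \circ \cdots \circ \Phi_2^{-t_2} \circ \Phi_1^{-t_1})(u) = (\Phi_m^{-t_m} \circ \cdots \circ \Phi_1^{-t_1} \circ \Phi_1^{t_1} \circ \cdots \circ \Phi_m^{t_m})(v) = v,$$
so that $h(u) = (v, t_1, \cdots, t_m)$.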
Now consider the commutative square
{$$
\begin{CD}
V @>{\widetilde{\pi}}>> \widetilde{O} \\
@V{p}VV @V{p}VV \\
U @>{\pi}>> O
\end{CD}
$$}
This gives us a map to the fibered product $(\widetilde{\pi}, p) : V \to \widetilde{O} \times_O U$. Note that since $\ker dp$ is a weakly controlled distribution on $E$ by hypothesis, $\widetilde{\pi}_* \ker dp_v = \ker dp_{\widetilde{\pi}(v)}$, for any $v \in V$, i.e.\ $d\widetilde{\pi}$ restricts to a surjection $d\widetilde{\pi} : \ker dp_v \to \ker dp_{\widetilde{\pi}(v)}$. We now use a fact from linear algebra:
\begin{claim}\label{claim-linal} Let $\mathbf{W}_1, \mathbf{W}_2, \mathbf{W}_3, \mathbf{W}_4$ be vector spaces occurring in the following commutative diagram
{$$
\begin{CD}
\mathbf{W}_1 @>{f}>> \mathbf{W}_2\\
@V{p}VV @V{q}VV \\
\mathbf{W}_3 @>{g}>> \mathbf{W}_4
\end{CD}
$$}
\noindent where $f, g, p, q$ are all surjective linear maps. {If $f$ restricts to a surjection $f : \ker p \to \ker q$ then the induced map to the fibered product $(f, p) : \mathbf{W}_1 \to \mathbf{W}_2 \times_{\mathbf{W}_4} \mathbf{W}_3$ is surjective.}
\end{claim}
\begin{proof}[Proof of Claim \ref{claim-linal}] For any $u \in \mathbf{W}_2$ and $v \in \mathbf{W}_3$ such that $q(u) = g(v)$, choose lifts $\widetilde{u}, \widetilde{v} \in \mathbf{W}_1$ such that $f(\widetilde{u}) = u$ and $p(\widetilde{v}) = v$ by surjectivity of $f$ and $p$, respectively. Then observe that $(q \circ f)(\widetilde{u} - \widetilde{v}) = q(u) - (q \circ f)(\widetilde{v}) = q(u) - (g \circ p)(\widetilde{v}) = q(u) - g(v) = 0$ by commutativity of the diagram. Therefore, $f(\widetilde{u} - \widetilde{v}) \in \ker q$. As $f : \ker p \to \ker q$ is surjective, there must be some $k \in \ker p$ such that $f(\widetilde{u} - \widetilde{v}) = f(k)$. Therefore there must also be some $\ell \in \ker f$ such that $\widetilde{u} - \widetilde{v} = k + \ell$. Let $w = \widetilde{u} - \ell = \widetilde{v} + k \in \mathbf{W}_1$. Then $f(w) = f(\widetilde{u}) - f(\ell) = u$ and $p(w) = p(\widetilde{v}) + p(k) = v$, so $(f, p)(w) = (u, v)$, as desired.\end{proof}
We now return to the proof of Lemma \ref{lem-strbdl-trivialization}. Note that
$\widetilde{O} \times_O U$ is an abstractly stratified space. Claim \ref{claim-linal} implies $(\widetilde{\pi}, p) : V \to \widetilde{O} \times_O U$ is a stratumwise submersion. {Therefore, there exist controlled vector fields $\widetilde{\eta}_i$ on $V$ {\textit{over}} $\eta_i$ on $U$ for $1 \leq i \leq m$, see \cite[Proposition 11.5]{mather-notes} (we pause here to record a warning that controlled vector fields \textit{over} controlled vector fields are not controlled vector fields in the usual sense of the word, see \cite[Section 11]{mather-notes} for a careful discussion)}. In particular, $\widetilde{\pi}_* \widetilde{\eta_i} = \widetilde{\partial}_i$ for all $1 \leq i \leq m$, where $\widetilde{\partial}_1, \cdots, \widetilde{\partial}_m$ are the first $m$ coordinate vector fields on $\widetilde{O}$ obtained as lifts of $\partial_1, \cdots, \partial_m$ by the projection $p : \widetilde{O} \to O$. Consider the rest of the coordinate vector fields $\widetilde{\partial}_{m+1}, \cdots, \widetilde{\partial}_n$ on $\widetilde{O}$ and let $\widetilde{\eta}_{m+1}, \cdots, \widetilde{\eta}_n$ be controlled vector fields on $V$ such that $\widetilde{\pi}_* \widetilde{\eta}_i = \widetilde{\partial}_i$ for $m+1 \leq i \leq n$.
Let $\widetilde{\Phi}_i$ denote the local $1$-parameter family of stratum-preserving homeomorphisms of $V$ generated by $\widetilde{\eta_i}$ for $1 \leq i \leq n$. We obtain, as before, two homeomorphisms $\widetilde{h}_1 : V \to \widetilde{F} \times \widetilde{O}$ given by {$\widetilde{h}_1(v) = (\widetilde{\Phi}_n^{-t_n} \circ \cdots \circ \widetilde{\Phi}_1^{-t_1}(v), t_1, \cdots, t_n)$ and $\widetilde{h}_2 : V \to F' \times O$ given by $\widetilde{h}_2(v) = (\widetilde{\Phi}_m^{-t_m} \circ \cdots \circ \widetilde{\Phi}_1^{-t_1}(v), t_1, \cdots, t_m)$} by considering the flow generated by all of $\widetilde{\eta}_1, \cdots, \widetilde{\eta}_n$ in the first case, and the flow generated by the first $m$ of these, namely $\widetilde{\eta}_1, \cdots, \widetilde{\eta}_m$, in the second case.
{Here $\widetilde{F} = \widetilde{\pi}^{-1}(\widetilde{x}) \cap N^\varepsilon_{\widetilde{S}}$ and $F' = (p \circ \widetilde{\pi})^{-1}(x) \cap N^\varepsilon_{\widetilde{S}}$. Consider the map
\begin{gather*}\phi : \widetilde{F} \times \widetilde{O} \to F' \times O \\
\phi(z, t_1, \cdots, t_n) = (\widetilde{\Phi}_{m+1}^{t_{m+1}} \circ \cdots \circ \widetilde{\Phi}_n^{t_n}(z), t_1, \cdots, t_m)\end{gather*}}
Then the following diagram commutes:
{$$
\begin{CD}
V @>{\widetilde{h}_1}>> \widetilde{F} \times \widetilde{O}\\
@V{\mathrm{id}}VV @V{\phi}VV \\
V @>{\widetilde{h}_2}>> F' \times O
\end{CD}
$$}
Since $\widetilde{\eta}_1, \cdots, \widetilde{\eta}_m$ are controlled vector fields on $V$ {\textit{over}} $\eta_1, \cdots, \eta_m$ on $U$, by \cite[Proposition 11.6]{mather-notes} there is also a commutative diagram as follows
{$$
\begin{CD}
V @>{\widetilde{h}_2}>> F' \times O\\
@V{p}VV @V{(p, \mathrm{id})}VV \\
U @>{h}>> F \times O
\end{CD}
$$}
By combining the two commutative diagrams above, we obtain (up to change of coordinates) an equivalence of $p : V \to U$ with {a map $\widetilde{F} \times \widetilde{O} \to F \times O$ defined by $(z, \mathbf{t}) \mapsto (g_\mathbf{t}(z), p(\mathbf{t}))$, where $g_{\mathbf{t}} : \widetilde{F} \to F$ is a family of maps parametrized by $\mathbf{t} \in \widetilde{O}$. Now, observe that the map $\widetilde{F} \times \widetilde{O} \to F \times \widetilde{O}$, $(z, \mathbf{t}) \mapsto (g_\mathbf{t}(z), \mathbf{t})$ is also a stratified fiber bundle}. So we can lift coordinate vector fields $\widetilde{\partial}_1, \cdots, \widetilde{\partial}_n$ on $\widetilde{O}$ to controlled vector fields on $F \times \widetilde{O}$ and from there to controlled vector fields on $\widetilde{F} \times \widetilde{O}$ {\textit{over}} the aforementioned controlled vector fields. Once again, using \cite[Proposition 11.6]{mather-notes}, we obtain a commutative diagram as follows
{$$
\begin{CD}
\widetilde{F} \times \widetilde{O} @>{\cong}>> \widetilde{F} \times \widetilde{O}\\
@V{p}VV @V{(g_{\mathbf{0}}, p)}VV \\
F \times \widetilde{O} @= F \times \widetilde{O}
\end{CD}
$$}
\noindent where the diagram is compatible with projections of each of the terms to $\widetilde{O}$. Therefore, by conjugating by the isomorphism on the top horizontal arrow we obtain an equivalence of $\widetilde{F} \times \widetilde{O} \to F$, $(z, \mathbf{t}) \mapsto g_{\mathbf{t}}(z)$ with $\widetilde{F} \times \widetilde{O} \to F$, $(z, \mathbf{t}) \mapsto g_{\mathbf{0}}(z)$. This shows that $p : V \to U$ is equivalent (up to reparametrization) to a genuine product
{$$g_{\mathbf{0}} \times p : \widetilde{F} \times \widetilde{O} \to F \times O$$}
By an application of Thom's first isotopy lemma {\cite[Proposition 11.1]{mather-notes}}, we identify $F \cong cA$ and $\widetilde{F} \cong cB$ where $A = \pi^{-1}(x) \cap \rho^{-1}(r)$ and $B = \widetilde{\pi}^{-1}(\widetilde{x}) \cap \widetilde{\rho}^{-1}(r')$ for some sufficiently small $r, r' > 0$, with the induced abstract stratification from $E$ and $X$ respectively. The lemma follows.\end{proof}
{We refine the conclusions of Lemma \ref{lem-strbdl-trivialization} by elaborating on the structure of the map $p_c : cB \to cA$ between the conical factors in the local trivialization
$$(V, U, p|_V) \cong (cB \times \mathbb R^n, cA \times \mathbb R^m, p_c \times \mathrm{proj})$$
of a stratified fiber bundle $(E, X, p)$ furnished by Lemma \ref{lem-strbdl-trivialization}.}
\begin{cor}\label{strbdl-cone-link} $p_c : cB \to cA$ is a stratified fiber bundle.\end{cor}
\begin{proof}We know
$$p_c \times \mathrm{proj} : cB \times \mathbb{R}^n \to cA \times \mathbb{R}^m$$
is a stratified fiber bundle, since it is equivalent to the stratified fiber bundle $p|V : V \to U$ by a pair of isomorphisms of abstractly stratified spaces. Then for any stratum $S$ of $cA$ and any stratum $L$ of $cB$ such that $p_c(L) \subset S$, the restriction $p_c \times \mathrm{proj} : L \times \mathbb{R}^n \to S \times \mathbb{R}^m$ is a smooth fiber bundle. In particular, $p_c|L : L \to S$ is a surjective smooth submersion.
We can arrange $p_c$ to be a proper map by choosing $r, r' > 0$ appropriately in the last part of Lemma \ref{lem-strbdl-trivialization}. Therefore, $p_c|L$ is a proper surjective submersion and hence a smooth fiber bundle by Ehresmann's fibration theorem. Therefore, $p_c : cB \to cA$ is a weakly controlled map which is a stratumwise smooth fiber bundle. Moreover, it is straightforward to check that $\ker dp_c$ is a controlled distribution on $cB$ since $\ker d(p_c \times \mathrm{proj})$ is a controlled distribution on $cB \times \mathbb{R}^n$ by hypothesis. This establishes that $(cB, cA, p_c)$ satisfies all the hypotheses in Definition \ref{def-strbdl} and is therefore a stratified fiber bundle.
\end{proof}
\begin{cor}\label{cor-trivialzn}
The map $p_c : cB \to cA$ is equivalent to the cone on a map $p_{\ell} : B \to A$ between the links, by a pair of isomorphisms of the abstractly stratified spaces $cA$ and $cB$. That is, there exists a commutative diagram of the form
$$
\begin{CD}
cB @>{\cong}>> cB\\
@VV{p_c}V @VV{c(p_\ell)}V\\
cA @>{\cong}>> cA
\end{CD}
$$
\end{cor}
\begin{proof}
For concreteness, let $cA = A \times [0,1)/A \times \{0\}$ and $cB = B \times [0,1)/B \times \{0\}$, with cone points $c_A$ and $c_B$ respectively. Identify the $[0,1)$ factor in each with the radial co-ordinate on $cA$ and $cB$, so that the radial function $\rho_A : cA \to [0,1)$ is the projection onto the radial co-ordinate, and similarly for $\rho_B : cB \to [0,1)$. Let $\Phi = \rho_A \circ p_c$.
Then $\Phi: cB \to [0,1)$ is a stratified fiber bundle with compact fibers, where the base $[0,1)$ has exactly two strata $\{0\}$ and $(0,1)$. Condition (iii) of Definition \ref{def-strbdl} ensures that $\Phi : cB \setminus \{c_B\} \to (0,1)$ is a stratified fiber bundle where the base is a single stratum, and the fibers are compact. By Thom's first isotopy lemma \cite[Proposition 11.1]{mather-notes}, $\Phi : cB \setminus \{c_B\} \to (0,1)$ is a product fibration, i.e.\ $cB \setminus \{c_B\}$ is isomorphic to $B \times (0,1)$ as abstractly stratified spaces, by an isomorphism which preserves the projection to $(0, 1)$. Reparametrizing the radial co-ordinates furnishes the conclusion.
\end{proof}
{Let $(A, \Sigma_A, \mathcal{N}_A)$ and $(B, \Sigma_B, \mathcal{N}_B)$ be abstractly stratified spaces. The same argument as in Lemma \ref{lem-strbdl-trivialization} and Corollary \ref{cor-trivialzn} can be used to establish lifting of \emph{stratified homotopies} $H: A \times I \to B$, i.e.\ homotopies where, for every stratum $S \in \Sigma_A$, there exists a unique stratum $L \in \Sigma_B$ such that $H(S \times I) \subset L$.}
\begin{prop}\label{strathomotlift}Let $(E, B, p)$ be a stratified bundle, and $H : A \times [0, 1] \to B$ be a stratified homotopy. Let $h_0 := H|A \times \{0\}$ and let $\til{h}_0 : A \to E$ be a lift of $h_0$. Then there exists a lift $\til{H} : A \times [0, 1] \to E$ such that $\til{H}$ is a stratified homotopy, $\til{H}|A \times \{0\} = \til{h}_0$ and $\til{H}$ covers $H$, i.e.\ $p \circ \til{H} = H$.\end{prop}
\begin{proof}We modify the homotopy by enlarging $[0, 1]$ slightly to $(-\ep, 1+\ep)$ and defining $H : A \times (-\ep, 1+\ep) \to B$ by declaring $H$ to be constant on $A \times (-\ep, 0]$ and $A \times [1, 1+\ep)$. It is now possible to choose $\ep > 0$ such that $H$ is a stratified mapping, where $A \times (-\ep, 1+\ep)$ is stratified by $S \times (-\ep, 1+\ep)$; $S \in \Sigma_A$ being the strata of $A$.
Let $H^* E \subset A \times (-\ep, 1+\ep) \times E$ denote the pullback of $E$ over $A \times (-\ep, 1+\ep)$. Since $H$ is a stratified map, $H^* E$ is a stratified bundle over $A \times (-\ep, 1+\ep)$. By projecting first to $A \times (-\ep,1+\ep)$ and then to $(-\ep,1+\ep)$ as in Corollary \ref{cor-trivialzn}, we obtain a commutative diagram
$$
\begin{CD}
H^* E @>{\cong}>> E_0 \times (-\ep, 1+\ep)\\
@VVV @VVV \\
A \times (-\ep,1+\ep) @= A \times (-\ep,1+\ep)
\end{CD}
$$
where $E_0 = h_0^* E$ is the pullback of the stratified fiber bundle $(E, B, p)$ over $A$ under $h_0 : A \to B$. The map $\til{h}_0 : A \to E$ induces a map to the fibered product $H^*(\til{h}_0) : A \to H^* E$.
Let $\Phi : H^* E \to E_0 \times (-\ep, 1+\ep)$ denote the isomorphism above. Then, the product homotopy with coordinates changed by $\Phi$, i.e.
$$\til{H} = \Phi^{-1} \circ (\Phi \circ H^*(\til{h}_0), t)$$ gives the required lift.
\end{proof}
A slight generalization of stratified fiber bundles is sometimes useful:
\begin{defn}\label{def-stratumwisebdl}
A \emph{stratumwise bundle} $P: E \to B$ consists of
\begin{enumerate}
\item an abstractly stratified space $E$, the \emph{total space},
\item an abstractly stratified space $B$, the \emph{base space},
\item a stratum-preserving map $P$, such that for every stratum $S$ of $B$,
$$P|P^{-1}(S) \, : P^{-1}(S) \to S$$ is a topological fiber bundle, with fiber
a stratified space $F_S$.
\end{enumerate}
\end{defn}
\begin{eg}\label{eg-pdkt}
A product of stratified spaces is a stratumwise bundle, but not necessarily a
stratified fiber bundle.
\end{eg}
\begin{defn}\label{str-tbl}Let $(X, \Sigma, \mathcal{N})$ be an abstract $C^\infty$-stratified space. By Theorem \ref{natsume} there is a realization $X' \subset \mathbb{R}^N$ such that $(X', \Sigma', \mathcal{N}')$ is an abstract stratified set, where $\mathcal{N}'$ is induced from a tubular neighborhood system $(\nu(S), \pi_S, \rho_S)_{S \in \Sigma'}$ on the Whitney stratified set $X' \subset \mathbb{R}^N$. We define the {\bf tangent bundle} $TX$ to be the union $\bigcup_{S \in \Sigma'} TS \subset T\mathbb{R}^N$ of the tangent bundles of the strata of $X'$. This inherits a topology from $T\mathbb{R}^N = \mathbb{R}^{2N}$ and an $I$-decomposition $\Sigma^{(1)} = \{TS_\alpha : S_\alpha \in \Sigma'\}_{\alpha \in I}$.
Let $p : TX \to X$ be the {\bf projection} obtained by restricting the projection $T\mathbb{R}^N \to \mathbb{R}^N$ to $TX$ and composing with the inverse of the realization homeomorphism $X \to X'$. Define $$N^{(1)}_\alpha := T\nu(S_\alpha) \cap TX$$ to be the tube around the stratum $TS_\alpha$ of $TX$, i.e.\ $N^{(1)}_\alpha$ is the intersection of $TX$ with the tangent bundle to the tubular neighborhood $\nu(S_\alpha) $.
The associated tubular projection $\pi^{(1)}_\alpha := d\pi_\alpha$ is defined by the restriction of the differential $$d\pi_\alpha : T\nu(S_\alpha) \to TS_\alpha$$ to $N^{(1)}_\alpha.$
Finally, consider the differential of the radial function $d\rho_\alpha : T\nu(S_\alpha) \to \mathbb{R}$ as a map to the fibers of $T[0, \infty) \cong [0, \infty) \times \mathbb R$. Then we define the radial function associated to the stratum $TS_\alpha$ as
$$\rho^{(1)}_\alpha := \rho_\alpha \circ p + (d\rho_\alpha)^2.$$
We denote the tube system defined by these functions as $\mathcal{N}^{(1)} = (N^{(1)}_\alpha, \pi^{(1)}_\alpha, \rho^{(1)}_\alpha)_{\alpha \in I}$.\end{defn}
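For instance, along the zero section of $p$ the new radial functions restrict to the old ones: since $d\rho_\alpha(x, v)$ is linear in $v$, we have
$$\rho^{(1)}_\alpha(x, 0) = \rho_\alpha(x) + (d\rho_\alpha(x, 0))^2 = \rho_\alpha(x),$$
so the zero-section embedding $X \hookrightarrow TX$ is compatible with the radial data.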
\begin{lemma}\label{lem-txstratfd} The triple $(TX, \Sigma^{(1)}, \mathcal{N}^{(1)})$ is an abstract $C^\infty$-stratified space.\end{lemma}
\begin{proof}For any pair of indices $\alpha, \beta \in I$ with $\alpha < \beta$, $N_{\alpha\beta} = N_\alpha \cap S_\beta$ is a submanifold of $S_\beta$. Hence it inherits a $C^\infty$-structure. Consider $\pi_{\alpha\beta} : N_{\alpha\beta} \to S_\alpha$ and $\rho_{\alpha\beta} : N_{\alpha\beta} \to (0, \infty)$, both of which are $C^\infty$-maps. If $\pi^{(1)}_{\alpha\beta}$ and $\rho^{(1)}_{\alpha\beta}$ denote the restrictions of $\pi^{(1)}_\alpha$ and $\rho^{(1)}_\alpha$ to $$N^{(1)}_{\alpha\beta} = N^{(1)}_\alpha \cap TS_\beta = TN_\alpha \cap TS_\beta = T(N_\alpha \cap S_\beta) = TN_{\alpha\beta},$$
then observe that $\pi^{(1)}_{\alpha\beta} = d\pi_{\alpha\beta}$ and $\rho^{(1)}_{\alpha\beta} = \rho_{\alpha\beta} \circ p + (d\rho_{\alpha\beta})^2$.
We know that for any triple of indices $\alpha, \beta, \gamma \in I$, the following control conditions hold:
\begin{equation}
\pi_{\alpha\beta} \circ \pi_{\beta\gamma} = \pi_{\alpha\gamma} \label{control-1}\end{equation}
\begin{equation}
\rho_{\alpha\beta} \circ \pi_{\beta\gamma} = \rho_{\alpha\gamma} \label{control-2}
\end{equation}
Differentiating Equations \ref{control-1} and \ref{control-2} and applying the chain rule, we have
\begin{equation}
d\pi_{\alpha\beta} \circ d\pi_{\beta\gamma} = d\pi_{\alpha\gamma}\label{control-1'}
\end{equation}
\begin{equation}d\rho_{\alpha\beta} \circ d\pi_{\beta\gamma} = d\rho_{\alpha\gamma}\label{control-2'}
\end{equation}
whenever both sides of the equations are defined. Equation
\ref{control-1'} implies that $$\pi^{(1)}_{\alpha\beta} \circ \pi^{(1)}_{\beta\gamma} = \pi^{(1)}_{\alpha\gamma},$$ hence $(TX, \Sigma^{(1)}, \mathcal{N}^{(1)})$ has $\pi$-control.
From Equation \ref{control-2'} we also obtain
\begin{equation}
(d\rho_{\alpha\beta})^2 \circ d\pi_{\beta\gamma} = (d\rho_{\alpha\gamma})^2\label{rho-1}
\end{equation}
Since $p \circ d\pi_{\beta\gamma} = \pi_{\beta\gamma} \circ p$, we see that Equation \ref{control-2} also implies
\begin{equation}(\rho_{\alpha\beta} \circ p) \circ d\pi_{\beta\gamma} = \rho_{\alpha\gamma} \circ p\label{rho-2}
\end{equation}
Adding Equations \ref{rho-1} and \ref{rho-2}, we see that $\rho^{(1)}_{\alpha\beta} \circ \pi^{(1)}_{\beta\gamma} = \rho^{(1)}_{\alpha\gamma}$. This is the desired $\rho$-control.
Note that $\rho^{(1)}_\alpha(x, v) = 0$ if and only if $\rho_\alpha(x) = d\rho_\alpha(x, v) = 0$. Since $\rho_\alpha(x) = 0$, $x \in S_\alpha$. Next write $v = u + w \in T\mathbb{R}^N$ where $u$ is the orthogonal projection of $v$ to $T_x S_\alpha$ under a fiberwise inner product defined on $\nu(S_\alpha)$. Then
$$0 = d\rho_\alpha(x, v) = d\rho_\alpha(x, u) + d\rho_\alpha(x, w).$$
But $d\rho_\alpha(x, u) = 0$ since $\rho_\alpha \equiv 0$ on $S_\alpha$. This forces $d\rho_\alpha(x, w) = 0$. Since $\rho_\alpha$ is the radial function on $\nu(S_\alpha)$, it is strictly increasing in any direction orthogonal to $S_\alpha$, forcing $w = 0$. Hence $v \in T_xS_\alpha$. Therefore $(x, v) \in TS_\alpha$, i.e., $(\rho^{(1)}_\alpha)^{-1}(0) = TS_\alpha$. Finally it is straightforward to check that $(\pi_{\alpha\beta}^{(1)}, \rho_{\alpha\beta}^{(1)}) : N^{(1)}_{\alpha\beta} \to S_\beta \times (0, \infty)$ is a submersion.
We have checked that $\pi^{(1)}$ and $\rho^{(1)}$ are valid projection and radial functions, and $(TX, \Sigma^{(1)}, \mathcal{N}^{(1)})$ satisfies both the control conditions. The lemma follows.\end{proof}
\begin{lemma}\label{lem-cvf} (Weakly) controlled sections of the projection $p : TX \to X$ are (weakly) controlled vector fields on $X$ (cf.\ Definition \ref{def-strvf}). \end{lemma}
\begin{proof} For concreteness, we prove the lemma for controlled sections and controlled vector fields. The same proof works for weakly controlled sections and weakly controlled vector fields.
Let $\eta : X \to TX$ be a controlled section of $p$. Then for any stratum $S \subset X$, $\eta|S : S \to TS$ is a $C^\infty$-section of the tangent bundle of $S$; let us denote this vector field as $\eta_S$. Then $\{\eta_S : S \in \Sigma\}$ is a stratified vector field on $X$. Note that since $\eta$ is $\pi$-controlled, it follows that for any pair of strata $S, L \subset X$ with $S < L$, we have $\pi^{(1)}_{SL}(\eta(x)) = \eta(\pi_{SL}(x))$. Hence $(\pi_{SL})_*(\eta_L(x)) = \eta_S(\pi_{SL}(x))$. Moreover $\eta$ is $\rho$-controlled, hence $\rho^{(1)}_{SL}(\eta(x)) = \rho_{SL}(x)$. Now $\rho^{(1)}_{SL}(\eta(x)) = \rho_{SL}(x) + d\rho_{SL}(\eta(x))^2$, therefore $d\rho_{SL}(\eta(x)) = 0$. Equivalently, $(\eta_L)_*\rho_{SL} = 0$. This verifies that $\{\eta_S : S \in \Sigma\}$ is indeed a controlled vector field on $X$.\end{proof}
\begin{prop}\label{str-derv}Let $(X, \Sigma_X, \mathcal{N}_X)$ and $(Y, \Sigma_Y, \mathcal{N}_Y)$ be abstract stratified spaces and $f : X \to Y$ be a controlled (resp.\ weakly controlled) map. Then the stratum-wise differential induces a controlled (resp.\ weakly controlled) map $df : TX \to TY$.\end{prop}
\begin{proof}Suppose $\alpha, \beta \in I$ are a pair of indices such that $\alpha < \beta$, and let $S_\alpha, S_\beta \in \Sigma_X$ be the corresponding pair of strata of $X$. Let $L_\alpha, L_\beta \in \Sigma_Y$ be the unique strata of $Y$ such that $f(S_\alpha) \subset L_\alpha$ and $f(S_\beta) \subset L_\beta$. Let us denote the tube system on $X$ associated to the pair of strata $(S_\alpha, S_\beta)$ by $(N^X_{\alpha\beta}, \pi^X_{\alpha\beta}, \rho^X_{\alpha\beta})$ and similarly the tube system on $Y$ associated to the pair of strata $(L_\alpha, L_\beta)$ by $(N^Y_{\alpha\beta}, \pi^Y_{\alpha\beta}, \rho^Y_{\alpha\beta})$. As $f$ is a controlled mapping, we have
\begin{equation}f \circ \pi^X_{\alpha\beta} = \pi^Y_{\alpha\beta} \circ f\label{pi-control}\end{equation}
\begin{equation}\rho^X_{\alpha\beta} = \rho^Y_{\alpha\beta} \circ f \label{rho-control}\end{equation}
on $N^X_{\alpha\beta} \cap f^{-1}(N^Y_{\alpha\beta})$. Differentiating Equation \ref{pi-control}, it is immediate that $df \circ (\pi^X)^{(1)}_{\alpha\beta} = (\pi^Y)^{(1)}_{\alpha\beta} \circ df$. Differentiating Equation \ref{rho-control} we obtain
\begin{equation}d\rho^X_{\alpha\beta} = d\rho^Y_{\alpha\beta} \circ df\label{rho-control'}\end{equation}
Since $(p \circ df)(x, v) = f(x)$ for all $(x, v) \in TX$, we can rewrite Equation \ref{rho-control} as
\begin{equation}\rho^X_{\alpha\beta} \circ p = \rho^Y_{\alpha\beta} \circ p \circ df \label{rho-control''}\end{equation}
Squaring both sides of Equation \ref{rho-control'} and adding to Equation \ref{rho-control''} gives
$$(\rho^X)^{(1)}_{\alpha\beta} = \rho^X_{\alpha\beta} \circ p + (d\rho^X_{\alpha\beta})^2 = \rho^Y_{\alpha\beta} \circ p \circ df + (d\rho^Y_{\alpha\beta} \circ df)^2 = (\rho^Y)^{(1)}_{\alpha\beta} \circ df.$$
This verifies that $df : (TX, \Sigma_X^{(1)}, \mathcal{N}_X^{(1)}) \to (TY, \Sigma_Y^{(1)}, \mathcal{N}_Y^{(1)})$ is a controlled map. It is clear from the proof that if $f$ is only weakly controlled, then $df$ is also weakly controlled.\end{proof}
\subsection{Stratified jets}\label{sec-stratfdjet}
Let $(X, \Sigma, \mathcal{N})$ be an abstract stratified space. Then the tangent bundle $(TX, \Sigma^{(1)}, \mathcal{N}^{(1)})$ is also an abstractly stratified space by Lemma \ref{lem-txstratfd}. We can now iterate this construction to define the {\bf $k$-fold iterated tangent bundle} $T^{(k)} X$. However, the radial functions become rather unwieldy to work with. We redefine the control structure on $T^{(k)} X$ as follows:
\begin{defn}[Iterated tangent bundle]\label{def-itdtb} Let $(X, \Sigma, \mathcal{N})$ be an abstract stratified space and choose a realization $X' \subset \mathbb{R}^N$ such that $(X', \Sigma', \mathcal{N}')$ is an abstract stratified set. Here, $\mathcal{N}'$ is induced from a tubular neighborhood system $(\nu(S), \pi_S, \rho_S)_{S \in \Sigma'}$ on the Whitney stratified set $X' \subset \mathbb{R}^N$.
Let $T^{(k)} X$ be the union $\bigcup_{S \in \Sigma'} T^{(k)} S \subset T^{(k)} \mathbb{R}^N$ of the $k$-fold iterated tangent bundles to each stratum of $X'$. Then $T^{(k)} X$
inherits a topology from $T^{(k)}\mathbb{R}^N = \mathbb{R}^{2^k N}$ and an $I$-decomposition $\Sigma^{(k)} = \{T^{(k)} S_\alpha : S_\alpha \in \Sigma'\}_{\alpha \in I}$. Let $$p^{(k)}: T^{(k)} X \to X$$
be the projection obtained from restricting $T^{(k)}\mathbb R^N \to \mathbb{R}^N$ to $T^{(k)} X$ and composing with the inverse of the realization homeomorphism $X \to X'$.
Define $$N^{(k)}_\alpha := T^{(k)} \nu(S_\alpha) \cap T^{(k)} X$$ to be the tube around the stratum $T^{(k)} S_\alpha$ of $T^{(k)} X$. Let the restriction of the $k$-th derivative
$$d^{(k)} \pi_\alpha : T^{(k)}\nu(S_\alpha) \to T^{(k)}S_\alpha$$ to $N^{(k)}_\alpha$ be the associated tubular projection $\pi^{(k)}_\alpha := d^{(k)}\pi_\alpha$. Let $d^{(i)}\rho_\alpha : T^{(i)} \nu(S_\alpha) \to \mathbb{R}$ be the $i$-th derivative of the radial function $\rho_\alpha$. We define the radial function associated to the stratum $T^{(k)} S_\alpha$ of $T^{(k)} X$ to be $$\rho^{(k)}_\alpha := \rho_\alpha \circ p^{(k)} + (d\rho_\alpha)^2 + \cdots + (d^{(k)}\rho_\alpha)^2$$ restricted to $N^{(k)}_\alpha$. This defines a tube system $\mathcal{N}^{(k)} = (N^{(k)}_\alpha, \pi^{(k)}_\alpha, \rho^{(k)}_\alpha)_{\alpha \in I}$ and $(T^{(k)} X, \Sigma^{(k)}, \mathcal{N}^{(k)})$ is abstractly stratified. \end{defn}
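Note that for $k = 1$ this recovers the control data of Definition \ref{str-tbl}: in that case $p^{(1)} = p$, $\pi^{(1)}_\alpha = d\pi_\alpha$, and
$$\rho^{(1)}_\alpha = \rho_\alpha \circ p + (d\rho_\alpha)^2,$$
which is exactly the radial function defined there.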
\begin{defn}\label{def-contEvaldjet} Let $(E, X, p)$ be a stratified fiber bundle and $U \subset X$ be an open subset with the canonical abstract stratification inherited from $X$. We define a {\bf formal (weakly) controlled $E$-valued $r$-jet} over $U$ to be an $(r+1)$-tuple $(s_0, s_1, \cdots, s_r)$ such that $s_k : T^{(k)} U \to T^{(k)} E$ is a (weakly) controlled section of $(T^{(k)} E, T^{(k)} X, d^{(k)} p)$ over $T^{(k)} U \subset T^{(k)} X$ for all $0 \leq k \leq r$ and each $s_{k+1}$ {\bf covers} $s_k$ in the sense that the following diagram commutes:
$$\begin{array}{ccccccccc}
E & \xla{} & TE & \xla{} & T^{(2)}E & \xla{} & \cdots & \xla{} & T^{(r)} E \\
\;\;\xua{s_0} & & \xua{s_1} & & \xua{s_2} & & \cdots & & \xua{s_r} \\
U & \xla{} & TU & \xla{} & T^{(2)}U & \xla{} & \cdots & \xla{} & T^{(r)} U \\
\end{array}$$\end{defn}
An extended and modified notion of stratified jets will be given later in Definition \ref{def-sjr} before we prove the $h$-principle for jet sheaves.
The {\bf sheaf of formal (weakly) controlled $E$-valued $r$-jets}, denoted $\mathcal{J}^r_E$ ($\JJ^r_{E,w}$), assigns to each open subset $U \subset X$ the set $\mathcal{J}^r_E(U)$ ($\JJ^r_{E,w}(U)$) of all formal (weakly) controlled $E$-valued $r$-jets over $U$. Let $\Gamma_E$ ($\Gamma_{E,w}$) denote the sheaf of (weakly) controlled local sections of $(E, X, p)$. Then there is a morphism of sheaves
\begin{equation}\label{eqn-jr}
J^r : \Gamma_E \to \mathcal{J}^r_E
\end{equation}
\begin{equation*}
\big(J^r_w : \Gamma_{E,w} \to \mathcal{J}^r_{E,w}\big)
\end{equation*}
which sends a (weakly) controlled local section $s \in \Gamma_E(U)$ of $(E, X, p)$ over an open subset $U \subset X$ to the formal (weakly) controlled $E$-valued $r$-jet $$J^r s:= (s, ds, d^{(2)}s, \cdots, d^{(r)} s) \in \mathcal{J}_E^r(U)$$
$$\big(J^r_w s:= (s, ds, d^{(2)}s, \cdots, d^{(r)} s) \in \mathcal{J}^r_{E,w}(U)\big).$$ The image of this morphism is a subsheaf
$\mathcal{H}^r_{E}$ of $\mathcal{J}^r_{E}$
($\mathcal{H}^r_{E,w}$ of $\mathcal{J}^r_{E,w}$) which we shall call the {\bf (weakly) controlled sheaf of holonomic $E$-valued $r$-jets on $X$}.
Let $T^{(k)} E$ be equipped with metrics $\mathrm{dist}^{(k)}_E$ respecting the topology for all $0 \leq k \leq r$. Then for any open subset $U \subset X$ we can equip $\mathcal{J}^r_E(U)$ (or $\mathcal{J}^r_{E,w}(U)$) with a metric topology: we say two (weakly) controlled $E$-valued $r$-jets $J = (s_0, s_1, \cdots, s_r)$ and $J' = (s_0', s_1', \cdots, s_r')$ over $U$ are {\bf $\varepsilon$-close} if
$$\sup_{x \in U} \mathrm{dist}^{(k)}_E(s_k(x), s_k'(x)) < \varepsilon \; \text{for all} \; 0 \leq k \leq r.$$
\section{Flexibility, Diff-invariance}\label{sec-sheafflexdiff} Two crucial notions that come into play in Gromov's sheaf-theoretic $h$-principle \cite[Section 2.2]{Gromov_PDR} over manifolds are
\begin{enumerate}
\item flexibility,
\item $\mathrm{Diff}$-invariance.
\end{enumerate}
The purpose of this section is to extend these two notions both at the level of the base space as well as
that of the nature of the sheaf. Thus, we shall
\begin{enumerate}
\item replace the base manifold by a stratified space,
\item replace the sheaf of quasi-topological spaces in \cite{Gromov_PDR} by stratified sheaves,
\end{enumerate}
and extend Gromov's notions of flexibility and $\mathrm{Diff}$-invariance to this setup.
\subsection{Flexibility of sheaves}\label{sec-flexsimp}
Following Gromov \cite[Ch. 2]{Gromov_PDR}, we shall refer to sheaves of quasitopological spaces as \emph{continuous sheaves}. In this subsection we collect some basic notions and facts about continuous sheaves from \cite[Ch. 2]{Gromov_PDR}.
\begin{defn}\label{def-micfibn}\cite[p. 40]{Gromov_PDR}
Let $\alpha:A \to A'$ be a continuous map of quasitopological spaces. Consider a continuous map $\phi: P \to A$ of a compact polyhedron $P$ into $A$. Let $\phi'=\alpha \circ \phi$. Let $\Phi': P \times [0,1] \to A'$ be such that
$\Phi'| P \times \{0\}=\phi'$.
The map $\alpha$ is called a \emph{{(Serre)} fibration} if {for all such polyhedra $P$, maps $\phi: P \to A$ and homotopies $\Phi'$ of $\phi'$}, $\Phi'$ lifts to a map $\Phi: P \times [0,1] \to A$ such that $\Phi| P \times \{0\}= \phi$ and $\alpha \circ \Phi = \Phi'$.
{The map $\alpha$ is called a} \emph{{(Serre)} microfibration} if {for all such polyhedra $P$, maps $\phi: P \to A$ and homotopies $\Phi'$ of $\phi'$}, there exists ${0 < \ep \leq 1}$ (where $\ep$ may depend on $P, \phi, \Phi'$) and a map $\Phi : P \times [0,\ep] \to A$, such that $\Phi|P \times \{0\} = \phi$ and $\alpha \circ \Phi = \Phi'|P \times [0,\ep]$.
\end{defn}
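To illustrate the difference between the two notions, consider the following standard toy example (included for illustration only, and not needed in the sequel): let
$$A = \{(t,y) \in [0,1) \times \mathbb{R} \,:\, |y| < 1-t\}, \qquad A' = [0,1],$$
and let $\alpha : A \to A'$ be the projection $(t,y) \mapsto t$. Given a compact polyhedron $P$ and a map $\phi : P \to A$, write $\phi(p) = (t(p), y(p))$; by compactness, $\sup_{p} |y(p)|/(1-t(p)) < 1$ and $\sup_p t(p) < 1$. Hence any homotopy $\Phi'$ in $A'$ satisfies $\Phi'(p,s) < 1$ for $s \leq \ep$ with $\ep$ small enough, and
$$\Phi(p,s) = \Big(\Phi'(p,s),\; y(p)\,\frac{1-\Phi'(p,s)}{1-t(p)}\Big)$$
defines a lift on $P \times [0,\ep]$ with $\Phi|P \times \{0\} = \phi$. Thus $\alpha$ is a microfibration. It is not a fibration: a homotopy reaching $t=1$ admits no lift at all, since the fiber of $\alpha$ over $1$ is empty.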
Henceforth, by fibration (resp.\ microfibration), we shall mean a Serre fibration (resp.\ microfibration) of quasitopological spaces.
\begin{defn}\label{def-kan}
Let $X$ be locally compact Hausdorff.
A continuous sheaf $\FF$ on $X$ is
\emph{flexible} (resp.\ \emph{microflexible}) if for all compact
$K\subset K'$, $\FF(K') \to \FF(K)$ is a fibration (resp. microfibration).
\end{defn}
\begin{eg}\label{lem-surj}
Let $f: Y \to X$ be surjective, and $\FF$ be the continuous sheaf of sections associated to $f$, {equipped with the quasitopology on mapping spaces}. Then $\FF$ is flexible.
\end{eg}
\begin{theorem}\label{thm-gromov-whe2he}\cite[Theorem B, p. 77]{Gromov_PDR} Let ${\Phi : \FF \to \GG}$ be a morphism of flexible sheaves over a finite dimensional locally compact Hausdorff space $X$. Then $\Phi$ is a local weak homotopy equivalence (i.e.\ $\Phi_x: \FF_x \to \GG_x$ is a weak homotopy equivalence for all $x \in X$) if and only if $\Phi$ is a weak homotopy equivalence (i.e.\ ${\Phi_U : \FF(U) \to \GG(U)}$ is a weak homotopy equivalence for every open $U \subset X$).
\end{theorem}
\subsection{Stratified spaces and flexibility conditions}\label{sec-flexdefs}
Let $(X, \Sigma)$ be a Whitney stratified space in the sense of Definition \ref{def-whitneyss}. Then, by the Whitney conditions (a), (b) of Definition \ref{def-whitneyab} and Thom's isotopy {lemma} \cite[Section 1.5]{GM_SMT} (see also Lemma \ref{lem-strbdl-trivialization}) we have:
\begin{lemma}\label{lem-nbhd}
Let $x \in X$ and let $S$ denote the unique stratum in which $x$ lies. Then there exists an open neighborhood $U$ of $x$ and a stratum-preserving homeomorphism $\phi: U \to \mathbb R^i \times cA$, where
\begin{enumerate}
\item $S$ has dimension $i$
\item $A$ is a compact stratified space admitting a stratum-preserving homeomorphism with the link of $S$
in $X$.
\end{enumerate}
\end{lemma}
We shall now define a notion of sheaves over stratified spaces $(X, \Sigma)$. This is a finer notion than that of a sheaf over the underlying topological space $X$. It associates data to open subsets of each stratum-closure of $X$. To formulate this, we introduce the \emph{stratified site} associated to $(X, \Sigma)$:
\begin{defn}\label{def-stratfdsite}
A stratified space $(X,\Sigma)$ comes equipped with the canonical filtered collection of topological spaces $\{\overline{S}\}$, where
\begin{enumerate}
\item $S$ is a stratum of $(X,\Sigma)$.
\item $\overline{S}$ is equipped with the subspace topology inherited from $X$.
\end{enumerate}
The \emph{stratified site} $\str(X, \Sigma)$ is the category defined as follows:
\begin{enumerate}
\item Objects of $\str(X, \Sigma)$ are open subsets $U \subset \overline{S}$ of some stratum-closure.
\item Morphisms of $\str(X, \Sigma)$ are inclusions $U \hookrightarrow V$ between such subsets.
\end{enumerate}
\end{defn}
\begin{rmk}\label{rmk-site}
The term stratified site in Definition \ref{def-stratfdsite} is borrowed from algebraic geometry. For example, it can be checked that $\str(X,\Sigma)$ comes equipped with a natural Grothendieck topology (namely, sieves in $\str(X, \Sigma)$ are covers consisting of objects in $\str(X, \Sigma)$), and forms an example of a site.
\end{rmk}
\begin{defn}\label{def-sssheaf}
A \emph{stratified continuous sheaf} $\FF$ on $X$ is a collection of {continuous} sheaves $\{\FF_{\overline{L}}\}$, one for every stratum $L$ of $X$, such that for every pair $S<L$, {there is a morphism of sheaves}
$$\operatorname{res}^L_S: i_{\overline{S}}^*\FF_{\overline{L}}\to \FF_{\overline{S}}$$
{that we call the \emph{restriction map from $L$ to $S$}}.
Thus, a stratified sheaf $\FF$ assigns a quasitopological space $\FF(U)$ to every object of $\str(X,\Sigma)$ (Definition \ref{def-stratfdsite}) in such a way that the gluing axiom is satisfied, i.e.\ it is a quasitopological space-valued sheaf on the site $\str(X, \Sigma)$.
\end{defn}
Open subsets of $(X,\Sigma)$ are naturally stratified subsets; hence {they are elements of $\str(X,\Sigma)$. We shall denote by $\FF_S$ the restriction of the sheaf $\FF_{\overline{S}}$ to the open stratum $S$}.
Recall (Definition \ref{def-contEvaldjet} and the discussion in Section \ref{sec-stratfdjet}) that for any stratified fiber bundle $(E, X, p)$, there are natural sheaves $\JJ^r_E, \HH^r_E, \JJ^r_{E,w}, \HH^r_{E,w}$ consisting of controlled (or weakly controlled) formal and holonomic jets.
\begin{defn}\label{def-stratflex}
A stratified sheaf $\FF$ on $(X,\Sigma)$ is \emph{flexible} (resp.\ \emph{microflexible}) if
for any stratum $S$, $\FF_{\overline{S}}$ is
flexible (resp.\ microflexible).
A stratified sheaf $\FF$ on $(X,\Sigma)$ is \emph{stratumwise flexible} (resp.\ \emph{stratumwise microflexible}) if
for any stratum $S$, $\FF_S$ is
flexible (resp.\ microflexible).
\end{defn}
Note that the latter is a condition on the sheaves $\{\FF_{\overline{S}}\}$ comprising the stratified sheaf $\FF$ \emph{after restricting each $\FF_{\overline{S}}$ to the open stratum $S$.}\\
We recall a construction from \cite[p. 77]{Gromov_PDR}. Let $\mathcal{F}, \mathcal{G}$ be continuous sheaves on $X$ and $q : \mathcal{F} \to \mathcal{G}$ be a morphism of continuous sheaves. Consider the continuous sheaf $\widetilde{\mathcal{F}}$ defined by assigning to every open set $U \subset X$ the set
$$\widetilde{\mathcal{F}}(U) := \{(s, \gamma) \in \mathcal{F}(U) \times \mathrm{Maps}(I, \mathcal{G}(U)) : q(s) = \gamma(0)\}.$$
Equip $\widetilde{\mathcal{F}}(U)$ with a quasitopology as follows: for any topological space $W$, a map $W \to \widetilde{\mathcal{F}}(U)$ is continuous if and only if the projections $W \to \mathcal{F}(U)$ and $W \to \mathrm{Maps}(I, \mathcal{G}(U))$ are continuous. There is a morphism of continuous sheaves $\widetilde{q} : \widetilde{\mathcal{F}} \to \mathcal{G}$ given by $\widetilde{q}(s, \gamma) = \gamma(1)$. Then $$\til q:\widetilde{\mathcal{F}}(U) \to \mathcal{G}(U)$$ is a fibration. Let $\psi \in \mathcal{G}(X)$ be a global section.
\begin{defn}\label{def-hofib} We shall call the fiber of $\widetilde{q}$ over $\psi$ the \emph{homotopy fiber} of $q$ over $\psi$:
$$\mathrm{hofib}(q; \psi)(U) := \widetilde{q}^{-1}(\psi|_U) \subset \widetilde{\mathcal{F}}(U).$$
{If the choice of $\psi \in \mathcal{G}(X)$ is understood, we simply denote the homotopy fiber sheaf as $\mathrm{hofib}(q)$.} \end{defn}
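This is the sheaf-level version of the classical mapping path space construction in topology, which we recall for orientation: for a map $q : F \to G$ of topological spaces, the mapping path space $\widetilde{F} = \{(s,\gamma) \in F \times \operatorname{Maps}(I,G) : q(s) = \gamma(0)\}$ deformation retracts onto $F$, the endpoint evaluation
$$\widetilde{q} : \widetilde{F} \to G, \qquad \widetilde{q}(s,\gamma) = \gamma(1)$$
is a fibration, and its fiber over a point $\psi \in G$ is the usual homotopy fiber of $q$ over $\psi$. For instance, if $F$ is a point mapping to the basepoint of $G$ and $\psi$ is that basepoint, then $\mathrm{hofib}(q;\psi)$ is the based loop space $\Omega G$.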
Let $\mathcal{F}$ be a stratified continuous sheaf on $(X, \Sigma)$. For ease of exposition, we assume the existence of and fix a global section $\psi \in \mathcal{F}(X)$.
\begin{defn}\label{def-hh} For $S<L$,
define the \emph{closed homotopy fiber sheaf} of $\FF$ from $L$ to $S$ by $$\overline{\HH}^L_S = \operatorname{hofib}(\mathrm{res}^L_S : i_{\overline{S}}^*\FF_{\overline{L}} \to \FF_{\overline{S}}).$$
The corresponding \emph{open homotopy fiber sheaf} is defined to be $\HH^L_S = i_S^*\overline{\HH}^L_S$.
\end{defn}
Note that $$\HH^L_S = \operatorname{hofib}(i_{{S}}^*\FF_{\overline{L}} \to \FF_{{S}}).$$
\begin{defn}\label{def-infstratflex}
A stratified continuous sheaf on $(X,\Sigma)$ is \emph{infinitesimally flexible across strata} if
for any $S<L$ in $\Sigma$, $\HH^L_S$ is flexible.
\end{defn}
\begin{lemma}\label{lem-mapsI2F}
Let $\FF$ be a continuous sheaf over $X$ and $Z\subset K \subset X$ be compact subsets.
\begin{enumerate}
\item If $\FF(K) \to \FF(Z)$
is a fibration, then so is $\operatorname{Maps}(I^n, \FF(K)) \to \operatorname{Maps}(I^n, \FF(Z))$.
\item If $\FF$ is a stratumwise flexible stratified sheaf, so is $\operatorname{Maps}(I^n,\FF)$.
\item If $\FF$ is a stratified sheaf which is infinitesimally flexible across strata, so is $\operatorname{Maps}(I^n,\FF)$.
\end{enumerate}
\end{lemma}
\begin{proof}
${1)}$ We are given that $\mathcal{F}(K) \to \mathcal{F}(Z)$ is a fibration. Let $P$ be a compact polyhedron. By the exponential law, a lifting problem for $\operatorname{Maps}(I^n, \mathcal{F}(K)) \to \operatorname{Maps}(I^n, \mathcal{F}(Z))$ with respect to $P$, i.e.\ a homotopy $P \times I \to \operatorname{Maps}(I^n, \mathcal{F}(Z))$ together with an initial lift $P \times \{0\} \to \operatorname{Maps}(I^n, \mathcal{F}(K))$, corresponds to a homotopy $(P \times I^n) \times I \to \mathcal{F}(Z)$ together with an initial lift $(P \times I^n) \times \{0\} \to \mathcal{F}(K)$. Since $P \times I^n$ is again a compact polyhedron, the latter problem admits a solution, which transforms back into a lift of the original homotopy. Hence $\operatorname{Maps}(I^n, \mathcal{F}(K)) \to \operatorname{Maps}(I^n, \mathcal{F}(Z))$ is a fibration.\\
${2)}$ This is an immediate corollary of $(1)$.\\
${3)}$ Infinitesimal flexibility across strata is the statement that for all $S, L \in \Sigma$ with $S < L$, $\HH^L_S$ is flexible. Applying $\operatorname{Maps}(I^n, -)$, using $i_S^* \operatorname{Maps}(I^n, \mathcal{F}_{\overline L}) = \operatorname{Maps}(I^n, i_S^* \mathcal{F}_{\overline L})$ and the fact that $\operatorname{Maps}(I^n,-)$ commutes with the homotopy fiber construction, the claim follows from $(1)$.\end{proof}
\subsection{Diff-invariance}\label{sec-diffinv} We shall say that $U \subset S$ is a \emph{relatively compact embedded open ball}, if
$U$ is an open ball and
$\overline{U} \subset S$ is a compact (smoothly) embedded ball in $S$.
Following Gromov \cite{Gromov_PDR}, we shall say that a sheaf $\FF$ over a manifold $V$ is \emph{$\diffc$-invariant} if it is acted on by the pseudogroup of compactly supported diffeomorphisms of $V$ in the following sense: for every pair of relatively compact embedded open balls $U, U' \subset V$ (i.e., $\overline{U}, \overline{U'}$ are embedded compact balls in $V$) and every diffeomorphism $\phi: U' \to U$, there is an isomorphism of sheaves $\psi : \phi^*(\FF\vert_U) \to \FF\vert_{U'}$, functorial in $\phi, U, U'$. Recall that the pseudogroup $\mathrm{Diff}_c(V)$ of diffeomorphisms is the set of all pairs $(U,f)$, where $U \subset V$ is an open set and $f$ is a compactly supported diffeomorphism of $V$ carrying $U$ onto another open set $U' = f(U) \subset V$.
Finally, if $\FF^1, \FF^2$ are $\diffc$-invariant sheaves over a manifold $V$, then a morphism of sheaves $\Phi: \FF^1
\to \FF^2$ is said to be \emph{$\diffc$-invariant} if it is natural with respect to the $\diffc(V)$-action, i.e.\ if
$\psi_i: \phi^*(\FF^i\vert_U)\to\FF^i\vert_{U'}$, for $i=1,2$, denote the isomorphisms above, then the following diagram commutes:
$$
\begin{CD}
\phi^*(\FF^1\vert_U)@>{\psi_1}>>\FF^1\vert_{U'} \\
@V{\Phi}VV @V{\Phi}VV\\
\phi^*(\FF^2\vert_U) @>{\psi_2}>>\FF^2\vert_{U'}
\end{CD}
$$
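A basic example to keep in mind (standard, and recorded here only for illustration): for a fixed smooth manifold $W$, the sheaf $\FF$ on $V$ with $\FF(U) = C^{\infty}(U, W)$ is $\diffc$-invariant, with the isomorphisms $\psi$ given by precomposition:
$$\psi : \phi^*(\FF\vert_U) \to \FF\vert_{U'}, \qquad s \longmapsto s \circ \phi,$$
for any diffeomorphism $\phi : U' \to U$; functoriality in $\phi, U, U'$ is immediate. The same formula exhibits the subsheaf of immersions (or submersions) of $V$ into $W$ as a $\diffc$-invariant subsheaf, since precomposition with a diffeomorphism preserves both conditions.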
\begin{defn}\label{def-stratdiff}
A stratified continuous sheaf on a stratified space $(X,\Sigma)$ is \emph{$\sdiff$-invariant} if
\begin{enumerate}
\item for any $S<L$ in $\Sigma$, $i_S^*\FF_{\overline{L}}$ is $\diffc(S)-$invariant.
\item for any $S\in\Sigma$, $\FF_{S}$ is $\diffc(S)-$invariant.
\item for any $S<L$ in $\Sigma$, $\operatorname{res}^L_S$ is $\diffc(S)-$invariant.
\end{enumerate}
\end{defn}
We observe the following.
\begin{lemma}\label{lem-diffinvconststalk}
Let $\FF$ be a $\diffc-$invariant sheaf over a connected manifold $M$. Then $\FF$ has constant stalks, {i.e.\ for any pair of points $x, y \in M$, $\FF_x \cong \FF_y$.}
\end{lemma}
\begin{proof}
Let $x, y \in M$. Let $\{U_i :i \in {\mathbb N}\}$ be a family of nested open balls around $x$ such that
$\cap_i U_i = \{x\}$. Since $M$ is connected, there exists a diffeomorphism $\phi: U_1 \to V_1$ such that $V_1$ is a neighborhood of $y$ and $\phi(x) = y$. Let $V_i = \phi(U_i)$. Then $\{V_i : i \in {\mathbb N}\}$ is a family of nested open balls around $y$ such that
$\cap_i V_i = \{y\}$. By $\diffc-$invariance, {$\FF\vert_{U_i} \cong \phi^* \FF\vert_{V_i}$}, and hence (by passing to limits), $\FF_x \cong \FF_y$.
\end{proof}
Let $\FF$ be a stratified (continuous) sheaf over a stratified space $(X,\Sigma)$ such that
\begin{enumerate}
\item $\FF$ is infinitesimally flexible across strata.
\item $\FF$ is $\sdiff-$invariant.
\end{enumerate}
Recall that
for any $S<L$ in $\Sigma$, $\operatorname{res}^L_S : i_S^*\FF_{\overline{L}} \to \FF_{S}$ is a morphism of sheaves, and
$\HH^L_S=\operatorname{hofib} (\operatorname{res}^L_S)$ denotes the homotopy fiber sheaf.
\begin{lemma}\label{lem-trivialhofibbdl} $\HH^L_S$ is $\diffc(S)-$invariant. In particular,
$\HH^L_S$ has constant stalks over $S$.
\end{lemma}
\begin{proof}
By $\sdiff-$invariance (Definition \ref{def-stratdiff})
of $\FF$, it follows that
\begin{enumerate}
\item for any $S<L$ in $\Sigma$, $i_S^*\FF_{\overline{L}}$ is $\diffc(S)-$invariant.
\item for any $S\in\Sigma$, $\FF_{S}$ is $\diffc(S)-$invariant.
\item for any $S<L$ in $\Sigma$, $\operatorname{res}^L_S$ is $\diffc(S)-$invariant.
\end{enumerate}
By functoriality of the homotopy fiber construction, $\HH^L_S$ is $\diffc(S)-$invariant.
Hence, by Lemma \ref{lem-diffinvconststalk}, $\HH^L_S$ has constant stalks.
\end{proof}
\section{The sheaf-theoretic h-principle}\label{sec-hprin}
\subsection{The (Gromov) diagonal normal sheaf}\label{sec-gromovformalfn}
Let $\FF$ be a continuous sheaf over a locally compact countable polyhedron $X$ (e.g.\ a stratified space). Define a sheaf $\PP$ over $X \times X$ by {assigning to every basic open set $U \times V \subset X \times X$, the quasitopological space}
$$\PP(U \times V ):= {\mathrm{Maps}(U, \FF(V))}.$$
\begin{defn}\label{def-formalfn}
The (Gromov) diagonal normal sheaf $\FF^*$ associated to $\FF$ is defined by $$\FF^* = \diag^* \PP,$$ where $\diag: X \to X \times X$ is the diagonal embedding.
\end{defn}
When $\FF$ {is a} subsheaf of the sheaf of sections of a surjective map $P: E \to X$ between topological spaces, an alternate description of the (Gromov) diagonal normal sheaf may be given in terms of (a slight relaxation of) Milnor's construction of microbundles \cite{milnor-microb,kister-mic}.
\begin{defn}\label{def-mic} Let $X$ be a topological space.
The \emph{tangent microbundle} $(U_X,X,p)$ to $ X$ is defined to be the germ of a neighborhood $U_X$ of $\diag(X) \subset X \times X$ along with the projection $p : U_X \to X$ to the first coordinate.
\end{defn}
\begin{rmk} In Milnor's definition \cite{milnor-microb}, a microbundle is required to always be locally trivial, whereas we relax this condition.\end{rmk}
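The terminology is motivated by the smooth case, which we recall for context (this is essentially Milnor's motivating example \cite{milnor-microb}, and is not needed in the sequel): if $X$ is a smooth manifold equipped with a Riemannian metric, then the map
$$(x, v) \longmapsto (x, \exp_x v)$$
identifies a neighborhood of the zero section in the tangent bundle $TX$ with a neighborhood $U_X$ of $\diag(X)$ in $X \times X$, carrying the bundle projection $TX \to X$ to the first-coordinate projection $p$. Thus the tangent microbundle of a smooth manifold is equivalent, as a germ, to its tangent bundle.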
Let $P: E \to X$ be surjective and $\Gamma (U, E)$ denote the space of sections over $U \subset X$ equipped with the compact open topology. Let $\FF$ denote a subsheaf of the sheaf of sections {$\Gamma(-, E)$} satisfying some property $\AAA$, i.e.\ $\FF(U)$ consists of sections $s \in \Gamma (U, E)$ satisfying the property $\AAA$.
Then $\PP(U \times V)$ consists of continuous maps from $U$ to {$\FF(V) \subset \Gamma(V, E)$} (where the latter has the inherited compact open topology). The following are two equivalent descriptions:
\begin{enumerate}
\item $\PP(U \times V )$ consists of $U-$parametrized families of sections over $V$ {satisfying property $\mathcal{A}$}
\item $\PP(U \times V )$ consists of continuous maps from $U \times V$ to $E$ such that for each $x\in U$, {it restricts on $\{x\} \times V$ to a section $\sigma_x :V \to E$ satisfying property $\mathcal{A}$}
\end{enumerate}
It is more convenient to think of $\FF^*$ as the restriction of $\PP$ to the {tangent microbundle} $(U_X,X,p)$ (Definition \ref{def-mic}) rather than as the restriction to $\diag (X)$. This is because $(U_X,X,p)$ is defined as a germ of open neighborhoods of ${\diag (X) \subset X \times X}$, and restriction of the sheaf $\PP$ to any representative of $U_X$ makes sense \emph{without passing to limits}. We proceed to describe this in some more detail.
Let $\{U_i \times U_i\}$ be a collection of basic open sets in $X \times X$. We shall say that a collection of elements $\phi_i \in \PP(U_i \times U_i)$ are \emph{consistent}, if for all $i \neq j$,
$$\phi_i=\phi_j \text{ on } {(U_i \cap U_j) \times (U_i \cap U_j)}$$
Let $W \subset \diag (X) \subset X \times X$ be an open subset. Then any element of $\FF^*(W)$ is represented by a family of consistent
elements $\phi_i \in \PP(U_i \times U_i)$, where $\{U_i \times U_i\}$ covers $W$. {Using the equivalence described in the preceding paragraphs, we may treat $\phi_i$ as maps $\phi_i : U_i \times U_i \to E$. The consistency condition therefore allows us to glue these to a well-defined map $\phi : (W \times W) \cap U_X \to E$. We list the properties of this map as a characterization of sections of $\FF^*$ over $W$:}
\begin{rmk}\label{rmk-formalasgerms}\rm{$\phi: (W\times W) \cap U_X \to E$ is an element of $\FF^*(W)$ if and only if
\begin{enumerate}
\item The restriction ${\phi\vert_{\diag(W)}}$ of $\phi$ to $\diag(W)$ is a section of $E$ over $p(\mathrm{diag}(W)) = W \subset X$, and
\item {For any $w \in W$,} the restriction of $\phi$ to $(\{w\}\times W)\cap U_X$ is the germ of a section of $E$ over the open subset $p_2((\{w\}\times W)\cap U_X) \subset W \subset X$ of $w$ {where $p_2: X \times X \to X$ defines the projection of $X \times X$ to the second coordinate.}
\end{enumerate}
Thus, the first coordinate in $X \times X$ defines the base space of the tangent microbundle $(U_X,X,p)$, and the second gives germs of neighborhoods of points $x \in X$. Hence,
an element of $\FF^*(W)$ is given by a section of $E$ over
$W$ (in the first coordinate) decorated with germs of sections $\{s_w: w \in W\}$ ({in} the second coordinate).
A caveat is in order. {The preceding paragraph suggests that elements of $\FF^*(X)$ correspond to maps $U_X \to E$ from the total space of the tangent microbundle $(U_X, X, p)$ to the total space of the surjective map $(E, X, P)$ whose sections define the sheaf $\FF$. However, this map is {\bf not} a fiber-preserving map; in fact, the situation is completely orthogonal. This is because the fibers of the tangent microbundle $p : U_X \to X$ are subspaces of the second factor in the square $X \times X$, which map to germs of \emph{sections} of the surjection $P : E \to X$, and are therefore ``transverse'' to the fibers of $P$.}}
\end{rmk}
\begin{defn}\label{defn-sheafmaps2ff}
Let $\FF$ denote a {continuous} sheaf over $X$. Let $W$ be a fixed topological space.
Define a new sheaf
$\operatorname{Maps}(W,\FF)$ over $X$ as follows. For any $U \subset X$ open, set $$\operatorname{Maps}(W,\FF) (U) = \operatorname{Maps}(W,\FF(U)), $$ where $ \operatorname{Maps}(W,\FF(U))$ is equipped with {the standard quasitopology on mapping spaces.}
\end{defn}
\begin{lemma}\label{lemw2ff*}
For $\FF, X, W$ as in Definition \ref{defn-sheafmaps2ff} above, the Gromov diagonal normal sheaves satisfy
$$(\operatorname{Maps}(W,\FF))^* = \operatorname{Maps}(W,\FF^*). $$
\end{lemma}
\begin{proof} Consider the sheaf $\PP$ on $X \times X$ given by
$$\PP(U \times V) = \operatorname{Maps} (U, \operatorname{Maps}(W, \FF) (V)).$$
Also, let $\PP_1$ be the sheaf on $X \times X$ given by
$$\PP_1(U \times V) = \operatorname{Maps} (U, \FF (V)).$$
Then,
\begin{eqnarray*}
\PP(U \times V) & = & \operatorname{Maps} (U, \operatorname{Maps}(W, \FF(V))) = \operatorname{Maps} (U\times W, \FF(V))\\
& = & \operatorname{Maps} (W, \operatorname{Maps}(U, \FF(V))) = \operatorname{Maps} (W,\PP_1(U \times V)).
\end{eqnarray*}
Hence,
\begin{eqnarray*}
(\operatorname{Maps}(W,\FF))^* & = & \diag^*\PP = \diag^* (\operatorname{Maps} (W, \PP_1)) \\
& = & \operatorname{Maps} (W, \diag^*\PP_1) = \operatorname{Maps} (W, \FF^*).
\end{eqnarray*}
\end{proof}
There exists a sheaf over $W \times X$ closely related to the sheaf $\operatorname{Maps}(W,\FF)$ over $X$ (Definition \ref{defn-sheafmaps2ff}). This is defined below.
\begin{defn}\label{defn-sheafmaps2ff2}
Let $\GG$ denote a sheaf of topological spaces over $X$. Let $W$ be a fixed topological space.
Define a new sheaf of $W-$parametrized sections $\FF=\operatorname{Maps}^p(W,\GG)$ over $W\times X$ as follows. For any $U \subset X$ and $V\subset W$ open, set $$\FF(V \times U)=\operatorname{Maps}^p(W,\GG) (V\times U) = \operatorname{Maps}(V,\GG(U)), $$ where $ \operatorname{Maps}(V,\GG(U))$
is equipped with the compact open topology.
\end{defn}
\begin{eg}\label{eg-surj3}{\rm
A natural example of a sheaf of $W-$parametrized sections may be given by the following.
Let $P: Y \to X$ be a continuous surjective map and let $\GG$ denote the sheaf of continuous sections of $P$. Let $W$ be a fixed topological space. Define
a continuous surjective map $P_W: W \times Y \to W \times X$ such that $P_W(w,y)=(w, P(y))$. Then the sheaf of continuous sections of $P_W$ is given precisely by
$\FF=\operatorname{Maps}^p(W,\GG)$.}
\end{eg}
\begin{lemma}\label{lemw2ff*p}
For $\FF, \GG, X, W$ as in Definition \ref{defn-sheafmaps2ff2} above, the Gromov diagonal normal sheaves satisfy the following for open $V \subset W$ and $U \subset X$:
$$\FF^*(V \times U) = \operatorname{Maps}(V,\GG^*(U)). $$
\end{lemma}
\begin{proof} As in Definition \ref{def-formalfn}, the Gromov diagonal normal sheaf is constructed by first constructing
a sheaf $\PP$ on $(W \times X)\times (W \times X)$ as follows.
\begin{eqnarray*}
\PP((V \times U)\times (V \times U)) & = & \operatorname{Maps}((V \times U), \FF (V \times U)) \\
& = & \operatorname{Maps}((V \times U),\operatorname{Maps}(V,\GG(U))) \\
& = & \operatorname{Maps}((V \times V),\operatorname{Maps}(U,\GG(U))).
\end{eqnarray*}
Restricting $\PP$ to the diagonal, we have the following.
\begin{eqnarray*}
\FF^*(V \times U) & = & \diag^*\PP \,(V \times U)\\
& = & \injlim_{\{\OO:\, (V \times U)\times (V \times U) \supset \OO \supset \diag(V \times U)\}}\PP(\OO)\\
& = & \injlim_{\{\OO: \, (V \times U)\times (V \times U) \supset \OO \supset \diag(V \times U)\}}
\operatorname{Maps}((V \times V),\operatorname{Maps}(U,\GG(U)))\\
& = & \operatorname{Maps} (V, \GG^*(U)).
\end{eqnarray*}
This completes the proof.
\end{proof}
There is a tautological inclusion $$\Delta: \FF \longrightarrow \FF^*$$ sending $s \in \FF(U)$ to
$(s,\{s_x: x \in U\})$, where $s_x$ denotes the germ of the section $s$ at $x \in U$.
\begin{defn}\label{def-sheafh}\cite[p. 76]{Gromov_PDR}
A (continuous) sheaf $\FF$ satisfies the sheaf-theoretic h-principle, if every section
$\phi \in \FF^*(U)$ can be homotoped to $\FF(U) \subset \FF^*(U)$ for all open subsets
$ U \subset X$. Further, $\FF$ satisfies the
parametric sheaf theoretic h-principle if the morphism
$ \Delta_U: \FF(U) \to \FF^*(U)$ is a weak homotopy equivalence for all open $U \subset X$.
\end{defn}
\begin{prop}\label{prop-formalisflex}\cite[p. 76]{Gromov_PDR}
Let $\FF$ be a continuous sheaf over a locally compact finite dimensional Hausdorff space $X$. Then $\FF^*$ is flexible.
\end{prop}
\begin{theorem}\label{thm-fleximphprin}\cite[p.76]{Gromov_PDR}
Let $\FF$ be a continuous
sheaf over a locally compact countable polyhedron $X$ (e.g.\ a manifold or a stratified space). If $\FF$ is flexible, it satisfies the parametric h-principle.
\end{theorem}
\begin{rmk}\label{rmk-hprinvsflex}\rm{
For sheaves, the notion of flexibility is strictly stronger than that of an $h-$principle. Suppose $X$ is locally contractible. By definition of the stalks $\FF_x$ and $\FF^*_x$ of a continuous sheaf $\FF$ and its diagonal normal form $\FF^*$, {respectively, we see that the tautological inclusion $\Delta_x : \FF_x \to \FF^*_x$ is a weak homotopy equivalence at the level of stalks. That is, the germ of a formal section at $x$ is homotopic to the germ of a holonomic section at $x$. Therefore, given a formal section in an open neighborhood $U_x$ of $x$, we may homotope it to a holonomic section, possibly in a small open neighborhood $U_x' \subset U_x$.}
Flexibility allows us to glue these local holonomic sections together to obtain a (global) holonomic section over a large open set. Thus, flexibility may be thought of as an analog of a Mayer-Vietoris principle used to glue homotopy equivalences (see for instance Theorem \ref{thm-bh} below).
The existence of the $h-$principle is invariant under homotopy equivalence of sheaves (essentially by definition), i.e.\ if $\FF$ satisfies the $h-$principle and $\GG$ is
homotopy equivalent to $\FF$, then $\GG$ satisfies the $h-$principle.
The same is not true for flexibility. This is the raison d'\^{e}tre of Section 2.2.7 of \cite{Gromov_PDR}.}
\end{rmk}
\subsection{ Parametric $h$-principle for stratified continuous sheaves}\label{sec-stratfdgromov} We now extend the notion of a sheaf-theoretic $h$-principle (Definition \ref{def-sheafh}) to stratified sheaves.
The essential difference between a stratified continuous sheaf and a continuous sheaf over $X$ is that a stratified sheaf $\FF$ assigns a quasitopological space $\FF(U)$ to an open subset $U \subset \overline{L}$ for \emph{every} stratum $L$ of $X$, whereas an ordinary sheaf does so only for open subsets of $X$. Hence, for every pair $S<L$, and for every $x \in S$, there are two stalks:
\begin{enumerate}
\item $(\FF_S)_x$, which we shall refer to as the \emph{intrinsic stalk}. More generally, for
any $U \subset S$, $(\FF_S)\lvert_{U}$ will be referred to as the \emph{intrinsic sheaf} over $U$.
\item $(i_S^*\FF_{\overline{L}})_x$, which we shall refer to as the \emph{extrinsic stalk}. More generally, for
any $U\subset S $, $(i_S^*\FF_{\overline{L}})\lvert_{U}$ will be referred to as the \emph{extrinsic sheaf} over $U$.
\end{enumerate}
\begin{defn}\label{def-diagstratsheaf}
For a stratified sheaf $\FF =\{\FF_{\overline{S}} \}$, over a stratified space $X$, the Gromov diagonal normal
stratified sheaf is given by $\FF^* =\{\FF_{\overline{S}}^* \}$.
\end{defn}
We check below that the restriction morphisms of $\FF^*$ are the expected ones, and that there is a canonical map from $\FF$ to $\FF^*$.
\begin{lemma}\label{lem-f2f*}
{$\FF^*$ is a stratified sheaf, and} $\Delta: \FF \to \FF^*$ is a morphism of stratified sheaves.
\end{lemma}
\begin{proof}The intrinsic and extrinsic sheaves give two different diagonal normal sheaves (Definition \ref{def-formalfn}) $\FF_S^*$ and $(i_S^*\FF_{\overline{L}})^*$ respectively.
We note also that for open subsets $U$ of $\overline L$ (for any stratum $L$),
the morphism $\Delta: \FF|U \to \FF^*|U$ is a morphism of sheaves. Hence, it suffices to show that for $S<L$, and $V \subset S$, $\operatorname{res}^{L^*}_S: i^*(\FF_{\overline{L}})^* \to \FF_S^*$ is a morphism of (continuous) sheaves (for the purposes of this proof we use $i$ in place of $i_S$ as no other strata are involved).
It will immediately follow from the definition of $\FF^*$, that the following diagram commutes for $S<L$:
\begin{center}
$
\begin{CD}
i^* \FF_L|V @>\Delta>>(i^* \FF_L)^*|V \\
@V{\operatorname{res}^{L}_S}VV @VV{\operatorname{res}^{L*}_S}V\\
\FF_S|V @>\Delta>>\FF_S^*|V
\end{CD}
$
\end{center}
In other words, we need to show that the restriction map ${\operatorname{res}^{L}_S}$ induces a natural
restriction map $\operatorname{res}^{L^*}_S$ between the Gromov diagonal normal extrinsic sheaf $(i^* \FF_L)^*$ and the Gromov diagonal normal intrinsic sheaf
$\FF_S^*$. This is an exercise in unwinding definitions.
On $X \times X$ define $\PP_L(U \times V) = \operatorname{Maps}(U, \FF(V))$, where $U, V$ are any pair of open subsets in some stratum-closure $\overline L$. The collection
$$\{\PP_L : L \;\text{a stratum of}\; X\}$$
defines a stratified sheaf $\PP$ on $X \times X$.
Restricting $\PP$ to $\diag(X)$ we get the Gromov normal sheaf $\FF^*$. Note that $\overline{L} \times \overline{L}\subset X \times X$ comes with the product stratification, and $\diag(S) \subset \diag(\overline{L})$
gives the diagonally embedded stratum $S$ in the diagonally embedded $\overline{L}$.
It remains to show that the diagonal restriction of $\PP$ is a \emph{stratified sheaf}.
We have four strata $S \times S, S \times L, L \times S, L \times L$ in $\overline{L} \times \overline{L}$ that need to be considered in defining the restriction of $\PP$ to any subset that inherits its stratification from the product stratification of $S<L$.
\begin{obs}\label{obs-diagstratsheaf}
The definition of $ \FF^*$ only considers the strata $S \times S, L \times L$. The strata $S \times L,L \times S$ are irrelevant.
\end{obs}
\begin{proof}[Proof of Observation \ref{obs-diagstratsheaf}]
We explain this observation in some detail. For $U, V \subset S$, we shall denote open neighborhoods in $\overline L$ by $U_L, V_L$ respectively. For $K \subset S$,
$$\FF^*_S (K) = \injlim_{U \supset K, V \supset K} \PP_S (U \times V) = \injlim_{U \supset K, V \supset K} \operatorname{Maps} (U, \FF(V)) ,$$
whereas
$$\FF^*_L (K) = \injlim_{U_L \supset K, V_L \supset K} \PP_L (U_L \times V_L) = \injlim_{U_L \supset K, V_L \supset K} \operatorname{Maps} (U_L, \FF(V_L)) .$$
Thus, while there are, in general, four limits to be considered (corresponding to the four strata $S \times S, S \times L,L \times S,L \times L \subset \overline{L} \times \overline{L}$), the
definition of $ \FF^*$ only needs $S \times S, L \times L$.
\end{proof}
Finally, we can assume without loss of generality that $U = U_L \cap S$ and $V=V_L \cap S$.
This gives restriction maps from $\PP_L (U_L \times V_L) \to \PP_S (U \times V)$.
Passing to direct limits furnishes $\operatorname{res}^{L^*}_S$, concluding the proof.
\end{proof}
We now have the following analog of Definition \ref{def-sheafh}:
\begin{defn}\label{def-ssheafh}
A (continuous) stratified sheaf $\FF$ satisfies the stratified sheaf theoretic h-principle if every stratified section
$\phi \in \FF^*(U)$ can be homotoped through stratified sections to $\FF(U) \subset \FF^*(U)$ for all open subsets
$ U \subset X$.
Further, $\FF$ satisfies the
parametric stratified sheaf theoretic h-principle if the morphism
$ \Delta_U: \FF(U) \to \FF^*(U)$ of stratified sheaves (furnished by Lemma \ref{lem-f2f*}) is a weak homotopy equivalence for all open $U \subset X$ equipped with the inherited stratification, i.e.\ the morphism
$ \Delta_U: \FF_{\overline{L} }(U\cap \overline{L}) \to \FF^*_{\overline{L} }(U \cap \overline{L})$ given by Lemma \ref{lem-f2f*} is a weak homotopy equivalence for every stratum $L$.
\end{defn}
The following is an analog of Proposition \ref{prop-formalisflex} for stratified sheaves:
\begin{prop}\label{prop-formalisflexs}
Let $\FF$ be a stratified continuous sheaf over a stratified space $X$. Then $\FF^*$ is flexible.
\end{prop}
\begin{proof}
Flexibility of $\FF^*_{\overline{L}}$ for every stratum $L$ follows from Proposition \ref{prop-formalisflex} and naturality of the restriction maps from Lemma \ref{lem-f2f*}.
\end{proof}
We are now in a position to state the stratified analog of Theorem \ref{thm-fleximphprin}.
\begin{theorem}\label{thm-flex2shprin}
Let $\FF$ be a stratified (continuous)
sheaf over a stratified space $X$. If $\FF$ is flexible, it satisfies the parametric sheaf-theoretic stratified h-principle.
\end{theorem}
\begin{proof}
By Lemma \ref{lem-f2f*}, $\Delta$ is a morphism of stratified sheaves.
The weak homotopy equivalence property for $\Delta: \FF_{\overline{L}} \to \FF_{\overline{L}}^*$
for every stratum $L$ follows from Theorem \ref{thm-fleximphprin}.
\end{proof}
\subsection{Topological properties}\label{sec-topprop} This subsection is rather general in flavor and sets up some basic homotopy theoretic properties of {continuous sheaves} that will be useful later. All topological spaces in this subsection are locally compact, $\sigma$-compact, finite-dimensional, and locally contractible.
\begin{defn}\label{def-ses}
Let $\mathcal{F}_1, \mathcal{F}_2, \mathcal{F}_3$ be continuous sheaves on a topological space $X$. We say that $$\mathcal{F}_1 \stackrel{p}{\to} \mathcal{F}_2 \stackrel{q}{\to} \mathcal{F}_3$$ is a \emph{homotopy fiber sequence} if
\begin{enumerate}
\item there exists some $\psi \in \mathcal{F}_3(X)$ such that $q_U \circ p_U : \mathcal{F}_1(U) \to \mathcal{F}_3(U)$ is the constant map to $\psi_U$, for all $U \subset X$, and
\item $\mathcal{F}_1 \to \mathrm{hofib}(q; \psi)$ is a weak homotopy equivalence.
\end{enumerate}
\end{defn}
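Concretely, in the usual mapping path space model (used here only for orientation), the homotopy fiber sheaf evaluates on an open set $U \subset X$ as
$$\mathrm{hofib}(q; \psi)(U) = \left\{ (\sigma, \gamma) \ : \ \sigma \in \mathcal{F}_2(U), \ \gamma : [0,1] \to \mathcal{F}_3(U), \ \gamma(0) = q_U(\sigma), \ \gamma(1) = \psi_U \right\},$$
so that condition (1) furnishes a canonical map $\mathcal{F}_1(U) \to \mathrm{hofib}(q; \psi)(U)$ sending $\sigma \mapsto (p_U(\sigma), \operatorname{const}_{\psi_U})$, and condition (2) asks that this map be a weak homotopy equivalence.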
The following was observed by Gromov \cite[p.77]{Gromov_PDR} (see the paragraph preceding Theorem $B'$ there).
\begin{rmk}\label{lem-sessheaffibn}
Let $\mathcal{F}, \mathcal{G}$ be continuous sheaves on $X$ and $q : \mathcal{F} \to \mathcal{G}$ be a morphism of continuous sheaves. If $\mathcal{F}, \mathcal{G}$ are flexible, then for any $\psi \in \mathcal{G}(X)$, $\mathrm{hofib}(q; \psi)$ is flexible. \end{rmk}
\begin{lemma}\label{lem-sessheafflex}
Let
$$\mathcal{F}_1 \stackrel{p}{\to} \mathcal{F}_2 \stackrel{q}{\to} \mathcal{F}_3$$
be a homotopy fiber sequence as in Definition \ref{def-ses}.
If $\mathcal{F}_1, \mathcal{F}_3$ satisfy the parametric $h$-principle, then so does $\mathcal{F}_2$. \end{lemma}
\begin{proof} From the homotopy fiber sequence,
we obtain a sequence of morphisms $\mathcal{P}_1 \to \mathcal{P}_2 \to \mathcal{P}_3$ of continuous sheaves over $X \times X$ (see Definition \ref{def-formalfn} and the preceding discussion for notation). Restricting to $\diag(X) \subset X \times X$, we obtain a sequence of morphisms $\mathcal{F}_1^* \to \mathcal{F}_2^* \to \mathcal{F}_3^*$ of sheaves over $X$.
By functoriality of the diagonal normal construction (see for instance Lemma \ref{lem-f2f*}), we obtain a commutative diagram
\begin{center}
$\begin{CD}
\mathcal{F}_1 @>>> \mathcal{F}_2 @>>> \mathcal{F}_3\\
@VVV @VVV @VVV\\
\mathcal{F}_1^* @>>> \mathcal{F}_2^* @>>> \mathcal{F}_3^*
\end{CD}$
\end{center}
As $\mathcal{F}_1, \mathcal{F}_3$ satisfy the parametric $h$-principle, the first and third vertical arrows are weak homotopy equivalences. We may evaluate the diagram of sheaves on any open set $U \subset X$ and use naturality of the homotopy long exact sequences corresponding to the rows to conclude, by an application of the 5-lemma, that $\mathcal{F}_2(U) \to \mathcal{F}_2^*(U)$ is a weak homotopy equivalence. Thus, $\mathcal{F}_2 \to \mathcal{F}_2^*$ is a weak homotopy equivalence of continuous sheaves. This demonstrates the parametric $h$-principle for $\mathcal{F}_2$.\end{proof}
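For definiteness, the 5-lemma in the proof above is applied (at any basepoint, and with the usual care in low degrees) to the ladder of homotopy long exact sequences
\begin{center}
$\begin{CD}
\pi_{n+1}(\mathcal{F}_3(U)) @>>> \pi_n(\mathcal{F}_1(U)) @>>> \pi_n(\mathcal{F}_2(U)) @>>> \pi_n(\mathcal{F}_3(U)) @>>> \pi_{n-1}(\mathcal{F}_1(U))\\
@VV{\cong}V @VV{\cong}V @VVV @VV{\cong}V @VV{\cong}V\\
\pi_{n+1}(\mathcal{F}_3^*(U)) @>>> \pi_n(\mathcal{F}_1^*(U)) @>>> \pi_n(\mathcal{F}_2^*(U)) @>>> \pi_n(\mathcal{F}_3^*(U)) @>>> \pi_{n-1}(\mathcal{F}_1^*(U))
\end{CD}$
\end{center}
in which the four outer vertical arrows are isomorphisms, whence so is the middle one.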
\begin{rmk}\label{rmk-sessheafflex}
The proof of Lemma \ref{lem-sessheafflex} goes through mutatis mutandis to show that if any two of $\mathcal{F}_1,\mathcal{F}_2, \mathcal{F}_3$ satisfy the parametric $h$-principle, then so does the third.
\end{rmk}
\begin{conv}
Henceforth, we adopt Gromov's convention \cite[Section 1.4.1]{Gromov_PDR} of referring to an arbitrarily small but non-specified neighborhood of a set $K \subset X$
by $\operatorname{Op}_X K$, or simply $\operatorname{Op} K$ if there is no scope for confusion. Thus, $\operatorname{Op} K$ refers to a small neighborhood of $K$
which may become even smaller in the course of the argument \cite[p. 35]{Gromov_PDR}
(see the table on \cite[p. 36]{Gromov_PDR} for further details about this convention/notation).
\end{conv}
\begin{lemma}\label{lem-formalfnrestcomm}
Let $\mathcal{F}$ be a continuous sheaf on a topological space $X$, and $Z \subset X$ be a closed subspace. Then the diagonal normal construction commutes with restriction, i.e.\ there is a weak homotopy equivalence of continuous sheaves $\iota_Z^* (\mathcal{F}^*) \to (\iota_Z^* \mathcal{F})^*$.\end{lemma}
\begin{proof} Recall that the diagonal normal construction applied to any sheaf yields a flexible sheaf (Proposition \ref{prop-formalisflex}). Therefore $\mathcal{G} := \mathcal{F}^*$ is a flexible sheaf on $X$. For $K \subset Z$ compact, we have the following:
$$(\iota_Z^* \mathcal{G})(K) = \injlim_{\substack{V \supset K \\ V \subset Z \text{ open}}} (\iota_Z^*\mathcal{G})(V) = \injlim_{\substack{V \supset K \\ V \subset Z \text{ open}}} \ \injlim_{\substack{U \supset V \\ U \subset X \text{ open}}} \mathcal{G}(U) = \injlim_{\substack{U \supset K \\ U \subset X \text{ open}}} \mathcal{G}(U) = \mathcal{G}(K).$$
Thus, flexibility of $\mathcal{G}$ implies flexibility of $\iota_Z^* \mathcal{G} = \iota_Z^*(\mathcal{F}^*)$. Moreover, $(\iota_Z^* \mathcal{F})^*$ is flexible by flexibility of the diagonal normal construction as mentioned above. Consider the sheaf morphism
$$(-)|_{Z} : \iota_Z^* (\mathcal{F}^*) \to (\iota_Z^* \mathcal{F})^*$$
defined on the stalk over $z \in Z$ by sending a germ of a mapping $\psi : \operatorname{Op}_X(z) \to \mathcal{F}_z$ to its restriction $\psi|\operatorname{Op}_Z(z) : \operatorname{Op}_Z(z) \to \mathcal{F}_z$. We check that $(-)|_{Z}$ is a sheaf morphism. Indeed, given any open set $U \subset Z$ consider an open cover $\{V_i\}$ of $U$ in $X$, and a collection $\{\phi_i : V_i \to \mathcal{F}(V_i)\}$ which is \emph{consistent}, i.e.,
$$\mathrm{res}_{V_i\cap V_j, V_i} \circ \phi_i|_{V_i \cap V_j} \equiv \mathrm{res}_{V_i\cap V_j, V_j} \circ \phi_j|_{V_i \cap V_j}.$$
Thus $\{\phi_i : V_i \to \mathcal{F}(V_i)\}$ represents an element $\phi \in \iota_Z^*(\mathcal{F}^*)(U)$. The restrictions $\{\phi_i : V_i \cap Z \to \mathcal{F}(V_i)\}$ are also consistent, simply by restricting the above equality to $Z$. Therefore $\{\phi_i : V_i \cap Z \to \mathcal{F}(V_i)\}$ represents an element $\phi|_Z \in (\iota_Z^*\mathcal{F})^*(U)$. Observe that $(-)|_{Z}$ is a \emph{stalkwise} weak homotopy equivalence as $X, Z$ are locally contractible. As both the domain and target sheaves are flexible, we conclude that $(-)|_{Z}$ is a weak homotopy equivalence by appealing to a theorem of Gromov \cite[Theorem B, p. 77]{Gromov_PDR} which says that local weak homotopy equivalence implies weak homotopy equivalence for flexible sheaves. \end{proof}
\begin{rmk}\label{rmk-formalfnrestcomm}\rm{
Suppose $Z \subset X$ is a neighborhood deformation retract, and let $\pi : N_Z \to Z$ be a choice of such a retract. Then we can write down an explicit homotopy-inverse
$$(-) \circ \pi : (\iota_Z^* \mathcal{F})^* \to \iota_Z^* (\mathcal{F}^*)$$
defined on the stalk over $z \in Z$ by sending a germ of a mapping $\psi : \operatorname{Op}_Z(z) \to \mathcal{F}_z$ to $\psi \circ \pi : \operatorname{Op}_X(z) \to \mathcal{F}_z$. Let $\phi \in (\iota_Z^*\mathcal{F})^*(U)$ be a section represented by a consistent family $\{\phi_i : V_i \cap Z \to \mathcal{F}(V_i)\}$ for a $\pi$-saturated open cover $\{V_i\}$ (i.e., $V_i = \pi^{-1}(V_i \cap Z)$) of $U$ in $N_Z$, satisfying the consistency relations:
$$\mathrm{res}_{V_i\cap V_j, V_i} \circ \phi_i|_{V_i \cap V_j \cap Z} \equiv \mathrm{res}_{V_i\cap V_j, V_j} \circ \phi_j|_{V_i \cap V_j \cap Z}$$
By taking a sufficiently fine open cover, we may assume $V_i \subset N_Z$. Composing with $\pi$ on both sides, we obtain a consistent family $\{\phi_i \circ \pi : V_i \to \mathcal{F}(V_i)\}$ representing $\phi \circ \pi \in \iota_Z^*(\mathcal{F}^*)(U)$, proving that $(-) \circ \pi$ is a well-defined sheaf homomorphism. Gromov's theorem \cite[Theorem B, p. 77]{Gromov_PDR} once again demonstrates that it is a weak homotopy equivalence.}\end{rmk}
\begin{defn}\label{def-deldgerm}
Let $(A, B)$ be a pair of topological spaces where $B \subset A$ is closed. Let $\mathcal{F}$ be a continuous sheaf on $A$. We define the space of sections on a \emph{deleted germinal neighborhood of $B$ in $A$} as
$$\mathcal{F}(\operatorname{Op}(B) \setminus B) := \injlim_{U \supset B} \mathcal{F}(U \setminus B)$$
There is a restriction map $\mathcal{F}(U) \to \mathcal{F}(U \setminus B)$ for every open neighborhood $U$ of $B$ in $A$, compatible with the associated directed system indexed by the poset of open neighborhoods $\{U \subset A : B \subset U\}$. Hence we get a restriction map $\mathcal{F}(B) \to \mathcal{F}(\operatorname{Op}(B) \setminus B)$ by applying the direct limit $\varinjlim_U$ to both sides. Moreover, we have a restriction map $\mathcal{F}(A \setminus B) \to \mathcal{F}(\operatorname{Op}(B) \setminus B)$ by restricting a section on $A \setminus B$ to a deleted germinal neighborhood of $B$ in $A$.\end{defn}
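For instance (a standard special case, recorded only for orientation), if $(A, B) = (\mathbb{R}^n, \{0\})$, then
$$\mathcal{F}(\operatorname{Op}(\{0\}) \setminus \{0\}) = \injlim_{\epsilon > 0} \mathcal{F}(B_\epsilon(0) \setminus \{0\}),$$
the space of germs at the origin of sections defined on small punctured balls, since the balls $B_\epsilon(0)$ are cofinal among open neighborhoods of $0$. The two restriction maps above then restrict a section defined near $0$, respectively a section on $\mathbb{R}^n \setminus \{0\}$, to such a punctured ball.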
\begin{lemma}\label{lem-pastedeld}
Let $(A, B)$ be a pair of topological spaces where $B \subset A$ is closed. Let $\mathcal{F}$ be a continuous sheaf on $A$. Then the following is a fiber square of quasitopological spaces:
\begin{center}
$\begin{CD}
\mathcal{F}(A) @>>> \mathcal{F}(B)\\
@VVV @VVV\\
\mathcal{F}(A \setminus B) @>>> \mathcal{F}(\operatorname{Op}(B) \setminus B)
\end{CD}$
\end{center}
\end{lemma}
\begin{proof} Suppose $\psi_1 \in \mathcal{F}(B)$ and $\psi_2 \in \mathcal{F}(A \setminus B)$ are such that $\psi_1|({\operatorname{Op}(B) \setminus B}) = \psi_2|({\operatorname{Op}(B) \setminus B})$. Pick a representative $\widetilde{\psi}_1 \in \mathcal{F}(U)$ for some open neighborhood $U \supset B$. Then we must have $\widetilde{\psi}_1|({V\setminus B}) = \psi_2|({V\setminus B})$ for some open neighborhood $V$ of $B$ with $V \subset U$. Next, we know that the following is a fiber square by the gluing axiom:
\begin{center}
$\begin{CD}
\mathcal{F}(A) @>>> \mathcal{F}(V)\\
@VVV @VVV\\
\mathcal{F}(A \setminus B) @>>> \mathcal{F}(V \setminus B)
\end{CD}$
\end{center}
We may then glue $\widetilde{\psi}_1|V$ and $\psi_2$ to obtain a $\psi \in \mathcal{F}(A)$. The element $\psi$ is independent of the choice of $\widetilde{\psi}_1$. We therefore obtain a well-defined map
$$\Psi : \mathcal{F}(A \setminus B)\times_{\mathcal{F}(\operatorname{Op}(B) \setminus B)} \mathcal{F}(B) \to \mathcal{F}(A),$$ given by $\Psi (\psi_1, \psi_2) = \psi.$
For any topological space $X$, consider a continuous map $f : X \to \mathcal{F}(A \setminus B) \times_{\mathcal{F}(\operatorname{Op}(B)\setminus B)} \mathcal{F}(B)$ with respect to the quasitopology on the codomain. Then $f$ is equivalent to a pair of continuous maps $f_1 : X \to \mathcal{F}(A \setminus B)$ and $f_2 : X \to \mathcal{F}(B)$ which agree when composed with the restriction to $\mathcal{F}(\operatorname{Op}(B) \setminus B)$, by definition of the quasitopology of fiber products. By definition of the quasitopology on limits, there exists an open neighborhood $U$ and a deleted open neighborhood $V \setminus B$ of $B$ contained in $U$, such that $f_2$ factors through a continuous map $\widetilde{f}_2 : X \to \mathcal{F}(U)$ and $f_1, \widetilde{f}_2$ agree when restricted to $\mathcal{F}(V \setminus B)$. Therefore, we may paste $\widetilde{f}_2|V$ and $f_1$ to a continuous map $g : X \to \mathcal{F}(A)$. We see that $g = \Psi \circ f$; therefore $\Psi$ preserves the quasitopologies on the domain and codomain, i.e., $\Psi$ is continuous.
Finally, $\Psi$ is inverse to the natural continuous map going in the opposite direction obtained from the universal property of fiber products. Thus, $\Psi$ is a homeomorphism of quasitopological spaces.\end{proof}
\begin{lemma}\label{lem-pastecpts} Let $\mathcal{F}$ be a flexible sheaf on a topological space $X$. Let $A, K \subset X$ be a pair of subsets such that $K, A\cap K$ are both compact. Then $\mathcal{F}(A \cup K) \to \mathcal{F}(A)$ is a fibration.
\end{lemma}
\begin{proof} The following is a fiber square of quasitopological spaces:
\begin{center}
$\begin{CD}
\mathcal{F}(A \cup K) @>{\mathrm{res}_{K, A\cup K}}>> \mathcal{F}(K)\\
@V{\mathrm{res}_{A, A\cup K}}VV @V{\mathrm{res}_{A\cap K, K}}VV\\
\mathcal{F}(A) @>{\mathrm{res}_{A\cap K, A}}>> \mathcal{F}(A \cap K)
\end{CD}$
\end{center}
Suppose $\psi : W \times I \to \mathcal{F}(A)$ is a homotopy with an initial lift $\widetilde{\psi}_0 : W \times 0 \to \mathcal{F}(A \cup K)$. As $A\cap K, K$ are compact, $\mathrm{res}_{A\cap K, K} : \mathcal{F}(K) \to \mathcal{F}(A \cap K)$ is a fibration. Therefore, we may lift $\mathrm{res}_{A \cap K, A} \circ \psi : W \times I \to \mathcal{F}(A \cap K)$ with initial condition $\mathrm{res}_{K, A\cup K} \circ \widetilde{\psi}_0 : W \times 0 \to \mathcal{F}(K)$ to a homotopy $\varphi : W \times I \to \mathcal{F}(K)$. Finally, using the fact that the diagram is a fiber square, $\psi, \varphi$ provide a lift $\widetilde{\psi} : W \times I \to \mathcal{F}(A \cup K)$ of $\psi$, finishing the proof.
\end{proof}
\begin{lemma}\label{lem-ltsfibns} Let $\{X_n\}$ be an inverse system and $\{Y_n\}$ be a directed system of quasitopological spaces. Let $Z, W$ be quasitopological spaces. Let $\{f_n : X_n \to Z\}$ and $\{g_n : W \to Y_n\}$ be a collection of maps compatible with the systems $\{X_n\}$ and $\{Y_n\}$ respectively. Let $X = \varprojlim X_n$, $Y = \varinjlim Y_n$, and $f : X \to Z$, $g : W \to Y$ be the canonical maps from, and to, the respective limits.
\begin{enumerate}
\item If $f_n$ are fibrations and the structure maps in $\{X_n\}$ are also fibrations, then $f$ is a fibration.
\item If $g_n$ are fibrations, then $g$ is a fibration.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $Q$ be an auxiliary topological space.
Let $\psi : Q \times I \to Z$ be a homotopy with an initial lift $\widetilde{\psi}_0 : Q \times \{0\} \to X$ of $\psi_0 := \psi|Q \times \{0\}$. Choose a lift of $\psi$ to a homotopy $Q \times I \to X_1$ along the fibration $f_1 : X_1 \to Z$, given the initial condition $\pi_1 \circ \widetilde{\psi}_0 : Q \times \{0\} \to X_1$. Then, since the structure maps of the inverse system are fibrations, we may inductively lift the homotopy $Q \times I \to X_{n-1}$ to a homotopy $Q \times I \to X_n$ using as initial condition the map $\pi_n \circ \widetilde{\psi}_0$, for all $n \geq 2$. This gives a collection of homotopies $\{Q \times I \to X_n\}$ compatible with the structure maps of the inverse system. By the universal property of inverse limits, this provides a homotopy $\widetilde{\psi} : Q \times I \to X$, such that $\widetilde{\psi}$ is a lift of $\psi$ with initial condition $\widetilde{\psi}_0$, as desired. This proves (1).
For (2), let $\psi : Q \times I \to Y$ be a homotopy with an initial lift $\widetilde{\psi}_0 : Q \times \{0\} \to W$ of $\psi_0 := \psi|Q \times \{0\}$. By definition of the quasitopology of the direct limit, $\psi$ factors through a homotopy $Q \times I \to Y_n$ for some $n$. Since $g_n : W \to Y_n$ is a fibration, we may lift $\psi$ to $Q \times I \to W$, using $\widetilde{\psi}_0 : Q \times \{0\} \to W$ as the initial condition. This is a lift of $\psi$ with initial condition $\widetilde{\psi}_0$, as desired.
This proves (2).
\end{proof}
We shall need the following theorem to `coglue' weak homotopy equivalences. We refer the reader to \cite{brown-heath} by Brown and Heath, and the notes \cite{frankland-notes,francis-notes} for a proof.
\begin{theorem}\label{thm-bh}
Consider a commutative diagram of maps of {quasitopological spaces} as in Figure \ref{homotopyco}, where
\begin{enumerate}
\item the front and back squares are \emph{{(strict)}} pullback diagrams (equivalently, $Q$ and $P$ are fiber-products),
\item $p, q$ are fibrations.
\end{enumerate}
If the diagonal arrows, labeled $\phi_1,\phi_2, \phi$, are {weak} homotopy equivalences, then so is $\Phi$.
\end{theorem}
\begin{figure}[H]
\begin{tikzcd}
Q \arrow{rr}{\til{g}} \arrow{dd}{\til{q}} \arrow{rrrrd}[near end]{ \Phi} & & E \arrow{dd}{q} \arrow{rrrrd}{\phi_1} & & & & \\
& & & & P \arrow{rr}[pos=0.3]{\til{f}} \arrow{dd}{\til{p}}& & D \arrow{dd}{p} \\
Y \arrow{rr}{g} \arrow{rrrrd}[pos=0.6]{ \phi_2 } & & B \arrow{rrrrd}[near start]{\phi } & & & & \\
& & & & X \arrow{rr}[near start]{f} & & A
\end{tikzcd}
\caption{Homotopy Co-gluing}
\label{homotopyco}
\end{figure}
We note here, for later use, a fact about homotopy fibers of fiber products:
\begin{lemma}\label{lem-hofib-fiberpdkt}
Let $f: X \to Z$, and $g: Y \to Z$ be continuous maps, furnishing the following pullback diagram:
\begin{center}
$
\begin{CD}
X \times_Z Y @>F>>Y \\
@VVV @VgVV\\
X@>f>>Z \\
\end{CD}
$
\end{center}
Then, the homotopy fibers of
$X \times_Z Y \to Y$ and $f:X\to Z$ are homotopy equivalent.
\end{lemma}
\begin{proof}
There are two fibrations over $Y$ that can be constructed as follows:
\begin{enumerate}
\item Let $\PP( X \times_Z Y,F,Y)\to Y$ denote the path space fibration construction applied to $F: X \times_Z Y \to Y$, and let $P_1:E_1 \to Y$ denote the resulting fibration. Then $\operatorname{hofib} (P_1)$ is homotopy equivalent to $\operatorname{hofib} (F)$.
\item Let $\PP(X,f,Z)\to Z$ denote the path space fibration construction applied to $f: X \to Z$, and let $P_2: E_2 \to Y$ denote the pullback fibration (under $g$) of $\PP(X,f,Z)$.
Then $\operatorname{hofib} (P_2)$ is homotopy equivalent to $\operatorname{hofib} (f)$.
\end{enumerate}
Then there exists
a homotopy equivalence $E_1 \to X \times_Z Y$ covering the identity map over $Y$.
The same holds for $E_2$. Hence, there exists a homotopy equivalence
$\phi:E_1 \to E_2$ of total spaces covering the identity map over $Y$. Since
$P_1: E_1\to Y$, and $P_2: E_2 \to Y$ are fibrations, $\operatorname{hofib} (P_1)$
is homotopy equivalent to $\operatorname{hofib} (P_2)$. The Lemma follows.
\end{proof}
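At the level of strict fibers, the pullback already furnishes an identification: for $y \in Y$,
$$F^{-1}(y) = \{(x, y) \in X \times_Z Y : f(x) = g(y)\} \cong f^{-1}(g(y)),$$
and the path space constructions in the proof above promote this identification of strict fibers to one of homotopy fibers.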
\subsection{Sheaf-theoretic $h$-principle for stratified spaces}\label{sec-sheafh}
Let $(X, \Sigma, {\mathcal{N}})$ be an abstractly stratified set, and $\mathcal{F}$ be a stratified continuous sheaf on $(X, \Sigma)$.
For every stratum $S \in \Sigma$, we denote the associated sheaf on the closure $\overline{S} \subset X$ by $\mathcal{F}_{\overline{S}}$, so that $\mathcal{F}_S := \iota_S^*\mathcal{F}_{\overline{S}}$. For every pair of strata $S, L \in \Sigma$ with $S < L$, we also have the restriction map from the sheaf on $\overline{L}$ to that on $\overline{S} \subset \overline{L}$, given by ${\operatorname{res}}^L_S: \iota_{\overline{S}}^*\mathcal{F}_{\overline{L}} \to \mathcal{F}_{\overline{S}}$; by a mild abuse of notation, we denote by the same symbol the restriction map ${\operatorname{res}}^L_S : \iota_S^* \mathcal{F}_{\overline{L}} \to \mathcal{F}_S$.
\subsubsection{Limits and preliminary gluing}\label{sec-prellts}
\begin{lemma}\label{lem-topstrat}
Let $U \subset \overline{L}$ be an open subset and $U' := U \cap S$.
Let $\mathcal{G} = \mathcal{F}_{\overline{L}},$ or $ \mathcal{F}_{\overline{L}}^*$. Suppose that $\GG$ is flexible. Then $\mathcal{G}(U \setminus U') \to \mathcal{G}(\operatorname{Op}_{U}(U') \setminus U')$ is a fibration.
\end{lemma}
\begin{proof}
Let $\{K_n\}$ be an ascending sequence of compact subsets of $U \setminus U'$ exhausting $U \setminus U'$. Let $\{C_n\}$ be closures (in $U$) of a descending sequence of open neighborhoods of $U'$ in $U$ such that $\cap_n C_n = U'$, i.e.\
\begin{gather*}
U' \subset \cdots \subset C_3 \subset C_2 \subset C_1 \subset U \\
K_1 \subset K_2 \subset K_3 \subset \cdots \subset U \setminus U'
\end{gather*}
Consider restriction maps $r_{m, n} : \mathcal{G}(K_m \cup (C_n \setminus U')) \to \mathcal{G}(C_n \setminus U')$. Since both $K_m$ and $K_m \cap (C_n \setminus U') = K_m \cap C_n$ are compact, flexibility of $\mathcal{G}|L$ together with Lemma \ref{lem-pastecpts} shows that $r_{m, n}$ is a fibration for all $m, n$. We observe now that:
\begin{claim}\label{claim1}
$\{\mathcal{G}(K_m \cup (C_n \setminus U'))\}_m$ is an inverse system with structure maps given by fibrations.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim1}:]
Observe that for all $m \geq 1$, $K_{m+1} \cup (C_n \setminus U') = (K_m \cup (C_n \setminus U')) \cup K_{m+1}$. Moreover, $(K_m \cup (C_n \setminus U')) \cap K_{m+1} = K_m \cup (K_{m+1} \cap C_n)$ and $K_{m+1}$ are both compact. Therefore, Lemma \ref{lem-pastecpts} applies, and we obtain
$$\mathcal{G}(K_{m+1} \cup (C_n \setminus U')) \to \mathcal{G}(K_m \cup (C_n \setminus U'))$$
is a fibration, for all $m \geq 1$.\end{proof}
\begin{claim}\label{claim2} $\varprojlim_m \mathcal{G}(K_m \cup (C_n \setminus U')) \cong \mathcal{G}(U \setminus U')$. \end{claim}
\begin{proof}[Proof of Claim \ref{claim2}:] This is true in complete generality. Let $V$ be an arbitrary open set (here $V=U \setminus U'$). Let $\{C_n\}$ (resp.\ $\{U_n\}$) be a sequence of closed (resp.\ open) subsets of $V$ such that
\begin{enumerate}
\item $C_n \subset U_n \subset C_{n+1}$,
\item $\cup_n C_n = V = \cup_n U_n$.
\end{enumerate}
There exist restriction maps $\mathcal{G}(V) \to \mathcal{G}(C_n)$ which are compatible with the inverse system $\{\mathcal{G}(C_n)\}$. Hence by the universal property, there exists a continuous map $\mathcal{G}(V) \to \varprojlim \mathcal{G}(C_n)$. On the other hand, we have a continuous map $\Theta: \varprojlim \mathcal{G}(C_n) \to \varprojlim \mathcal{G}(U_{n-1})$ given by restriction. Then $\Theta$ is a homeomorphism of quasitopological spaces, with inverse $\Theta^{-1}: \varprojlim \mathcal{G}(U_{n-1}) \to \varprojlim \mathcal{G}(C_{n-1})$ given by restriction again.
The composition $\mathcal{G}(V) \to \varprojlim \mathcal{G}(C_n) \to \varprojlim \mathcal{G}(U_{n-1})$ is also a homeomorphism, by the gluing property of continuous sheaves applied to the open cover $\{U_{n-1}\}$ of $V$. Hence, $\mathcal{G}(V) \cong \varprojlim \mathcal{G}(C_n)$, as desired. \end{proof}
\begin{claim}\label{claim3} $\varinjlim_n \mathcal{G}(C_n \setminus U') \cong \mathcal{G}(\operatorname{Op}_U(U') \setminus U')$. \end{claim}
\begin{proof}[Proof of Claim \ref{claim3}:] Consider the following chain of homeomorphisms
$$\varinjlim_n \mathcal{G}(C_n \setminus U') \cong \varinjlim_n \varinjlim_{V \supset C_n} \mathcal{G}(V \setminus U') \cong \varinjlim_{V \supset U'} \mathcal{G}(V \setminus U') \cong \mathcal{G}(\operatorname{Op}_U(U') \setminus U')$$
where $V \supset C_n$ varies over all open neighborhoods of $C_n$ in $U$.
\end{proof}
We now proceed to take limits of $r_{m, n}$, first as $m \to \infty$ and then as $n \to \infty$. By Lemma \ref{lem-ltsfibns}, this gives that $\mathcal{G}(U \setminus U') \to \mathcal{G}(\operatorname{Op}_{U}(U') \setminus U')$ is also a fibration.
\end{proof}
\begin{prop}\label{prop-sheafh} Let $(X, \Sigma_X)$ be a stratified space with exactly two strata, $\Sigma_X = \{S, L\}$, with $S<L$, so that $X = \overline{L}$. Let $\mathcal{F}$ be a stratified continuous sheaf on $X$. Assume the following:
\begin{enumerate}
\item {$\mathcal{F}_S=i_S^* \FF_{\overline{S}}$} satisfies the parametric $h$-principle,
\item {$\mathcal{F}_L=i_L^* \FF_{\overline{L}}$} is flexible, and
\item for {some} $\psi \in \mathcal{F}_S(S)$, $\HH^L_S=\operatorname{hofib}(\operatorname{res}^L_{S}; \psi)$ satisfies the parametric $h$-principle.
\end{enumerate}
Then $\mathcal{F} = \mathcal{F}_{\overline{L}}$ satisfies the parametric $h$-principle.
\end{prop}
\begin{proof} Let $\psi \in \mathcal{F}_S(S)$ be as in hypothesis (3). Then there exists a homotopy fiber sequence of continuous sheaves over $S$ (in the sense of Definition \ref{def-ses}), given by
$$\mathrm{hofib}(\operatorname{res}^L_{S}; \psi) \to \iota_S^* \mathcal{F}_{\overline{L}} \to \mathcal{F}_S.$$
By hypothesis, $\mathcal{F}_S$ and $\mathrm{hofib}(\operatorname{res}^L_{S}; \psi)$ satisfy the parametric $h$-principle. Hence, by Lemma \ref{lem-sessheafflex} so does $\iota_S^*\mathcal{F}_{\overline{L}}$, i.e.\ the natural inclusion map $\iota_S^* \mathcal{F}_{\overline{L}} \to (\iota_S^* \mathcal{F}_{\overline{L}})^*$ is a weak homotopy equivalence. Combining this with Lemma \ref{lem-formalfnrestcomm}, we conclude that the map $(\iota_S^* \mathcal{F}_{\overline{L}})^* \to \iota_S^* (\mathcal{F}_{\overline{L}}^*)$ is a weak homotopy equivalence. By Remark \ref{rmk-formalfnrestcomm} we can in fact choose the homotopy inverse to be $(-) \circ \pi_S$. This implies that the resulting composition map
$$\iota_S^* \mathcal{F}_{\overline{L}} \to (\iota_S^* \mathcal{F}_{\overline{L}})^* \to \iota_S^* (\mathcal{F}_{\overline{L}}^*)$$
is simply the restriction of the diagonal normal construction $\mathcal{F}_{\overline{L}} \to \mathcal{F}_{\overline{L}}^*$ to $S$.
Let $\Phi : \mathcal{F}_{\overline{L}} \to \mathcal{F}_{\overline{L}}^*$ denote the natural morphism in the diagonal normal construction. Then $\Phi|L : \mathcal{F}_L \to \mathcal{F}_L^*$ and $\Phi|S : \iota_S^* \mathcal{F}_{\overline{L}} \to \iota_S^* (\mathcal{F}_{\overline{L}}^*)$ are both weak homotopy equivalences. We would like to ``glue'' these to a {weak} homotopy equivalence $\Phi$. To this end, let $U \subset \overline{L}$ be an open subset and $U' := U \cap S$. Using Lemma \ref{lem-pastedeld}, we have fiber squares:
$$
\begin{CD}
\mathcal{F}_{\overline{L}}(U) @>>> \mathcal{F}_{\overline{L}}(U')\\
@VVV @VVV\\
\mathcal{F}_{\overline{L}}(U \setminus U') @>>> \mathcal{F}_{\overline{L}}(\operatorname{Op}_{U}(U') \setminus U')
\end{CD}
\qquad
\begin{CD}
\mathcal{F}^*_{\overline{L}}(U) @>>> \mathcal{F}^*_{\overline{L}}(U')\\
@VVV @VVV\\
\mathcal{F}^*_{\overline{L}}(U \setminus U') @>>> \mathcal{F}^*_{\overline{L}}(\operatorname{Op}_{U}(U') \setminus U')
\end{CD}
$$
The diagonal normal construction map $\Phi$ gives natural maps from each corner of the first diagram to the corresponding corner of the second diagram. Note first that $\Phi$ is a weak homotopy equivalence on the top-right and bottom-left corners as $\Phi|L$ and $\Phi|S$ are weak homotopy equivalences of continuous sheaves.
Next, note that
$$\Phi : \mathcal{F}_{\overline{L}}(\operatorname{Op}_U(U')\setminus U') \to \mathcal{F}^*_{\overline{L}}(\operatorname{Op}_U(U') \setminus U')$$
is a direct limit of the weak homotopy equivalences $\Phi|(V \setminus U') : \mathcal{F}_{\overline{L}}(V \setminus U') \to \mathcal{F}^*_{\overline{L}}(V \setminus U')$ indexed by open neighborhoods $V \subset U$ of $U'$ in $U$. Since homotopy groups commute with direct limits of quasitopological spaces, the limiting map is also a weak homotopy equivalence. Thus, $\Phi$ is a weak homotopy equivalence on the bottom-right corner as well.
We shall use Theorem \ref{thm-bh} to conclude the proof. To set up the situation such that the hypotheses of Theorem \ref{thm-bh} are satisfied, we need to establish that
\begin{enumerate}
\item $\Phi$ is a weak homotopy equivalence on the top-right, bottom-left and bottom-right corners. The above paragraph did precisely this.
\item The right vertical arrows in both commutative diagrams above are fibrations. Lemma \ref{lem-topstrat} gives this.
\end{enumerate}
Finally, we apply Theorem \ref{thm-bh} to ``coglue'' to a weak homotopy equivalence $\Phi : \mathcal{F}_{\overline{L}}(U) \to \mathcal{F}^*_{\overline{L}}(U)$. This proves that $\mathcal{F}_{\overline{L}}$ satisfies the parametric $h$-principle.\end{proof}
{The above proof also gives us a criterion for checking the parametric $h$-principle for a \emph{continuous} sheaf on the underlying topological space of a stratified space.}
\begin{lemma}\label{lem-exstratfleximplieshprin}Let $X$ be a stratified space, and let $\FF$ be a continuous sheaf on $X$.
If $i_{{S}}^* \FF$ is flexible for all (open) strata $S$, then $\FF$ satisfies the parametric $h$-principle.
\end{lemma}
\begin{proof} We argue by induction on the depth of $X$. If the stratified space $X$ has depth one, i.e.\ it is a manifold, then the result follows from
Gromov's Theorem \ref{thm-fleximphprin}. Let $L$ denote the disjoint union of maximal strata of $X$, i.e.\ strata that do not lie on the boundary of other strata. Let $\partial L=X\setminus L$.
Applying the inductive hypothesis to $\partial L$, we conclude that $ i_{\partial L}^*\FF$ satisfies the parametric $h-$principle.
By hypothesis, we have flexibility of {$i_L^*\FF$}, since $L$ is {an open stratum} in $X$. As in the proof of Proposition \ref{prop-sheafh}, we use Theorem \ref{thm-bh} to glue {the weak homotopy equivalences $i_{\partial L}^*\FF \to (i_{\partial L}^*\FF)^*$ and $\iota_L^* \FF \to (\iota_L^* \FF)^*$ to obtain a weak homotopy equivalence $\FF \to \FF^*$. Therefore, $\FF$ satisfies the parametric $h$-principle}.
\end{proof}
\begin{rmk}\label{rmk-stratflexextvsint}{It is important to distinguish the hypothesis of Lemma \ref{lem-exstratfleximplieshprin} from the hypothesis of stratumwise flexibility in Definition \ref{def-stratflex} which applies to \emph{stratified sheaves}. Stratumwise flexibility is a condition on the \emph{intrinsic sheaves} $\FF_S$ of a stratified sheaf, whereas the hypotheses above are related to flexibility of the \emph{extrinsic sheaves} $\iota_S^* \FF_{\overline{L}}$, for $S < L$. The difference between these sheaves lies in the homotopy fiber sheaf $\HH^L_S := \operatorname{hofib}(\operatorname{res}^L_S;\psi)$.}
\end{rmk}
\begin{lemma}\label{lem-bendright}
{Let $(X, \Sigma)$ be a stratified space, and $\mathcal{F}$ be a stratified sheaf on $X$. For any pair of strata $S < L$ in $X$, recall the associated closed and open homotopy fiber sheaves $\overline{\HH}^L_S$ and $\HH^L_S$ from Definition \ref{def-hh}. For any triad of strata $P < S < L$ in $X$, there exist homotopy equivalences of sheaves}
\begin{gather*}i_{\overline{P}}^*\overline{\HH}^L_S \simeq \operatorname{hofib}\left(\overline{\HH}^L_{P} \to \overline{\HH}^S_{P}\right), \\
i_P^*\overline{\HH}^L_S \simeq \operatorname{hofib}\left(\HH^L_P \to \HH^S_P\right).\end{gather*}
\end{lemma}
\begin{proof} Let $f: X\to Y$ and $g: Y\to Z$ be continuous maps. Then there exists a homotopy fiber sequence $$\operatorname{hofib}(f)\to \operatorname{hofib} (g \circ f) \to \operatorname{hofib} (g)$$ by \cite[Lemma 1.2.7]{may-ponto}.
To translate this into the context of continuous sheaves, consider the following diagram:
\medskip
\begin{tikzcd}
& & & \operatorname{hofib}(i_{\overline{P}}^*{\mathcal{F}}_{\overline{L}} \to {\mathcal{F}}_{\overline{P}}) \arrow{dd}{3} \arrow{rr}{2} & & \operatorname{hofib}(i_{\overline{P}}^*{\mathcal{F}}_{\overline{S}} \to {\mathcal{F}}_{\overline{P}}) \arrow{dd}{4} \\
& & & & & \\
\operatorname{hofib}(i_{\overline{P}}^*{\mathcal{F}}_{\overline{L}} \to i_{\overline{P}}^*{\mathcal{F}}_{\overline{S}}) \arrow{rrr}{5} \arrow{rrruu}[bend left]{1} & & & i_{\overline{P}}^*{\mathcal{F}}_{\overline{L}} \arrow{rr}{6} \arrow{dd}{7} & & i_{\overline{P}}^*{\mathcal{F}}_{\overline{S}} \arrow{lldd}{8} \\
& & & & & \\
& & & {\mathcal{F}}_{\overline{P}} & &
\end{tikzcd}
\bigskip
Here,
the arrows 6,7,8 take the place of $f, g\circ f, g$ respectively.
The above homotopy theoretic fact (\cite[Lemma 1.2.7]{may-ponto}) then shows that the arrows 1,2 give a homotopy fiber sequence
$$\operatorname{hofib}(i_{\overline{P}}^*\FF_{\overline{L}} \to i_{\overline{P}}^*\FF_{\overline{S}})\stackrel{1}\longrightarrow \operatorname{hofib}(i_{\overline{P}}^*\FF_{\overline{L}} \to \FF_{\overline{P}}) \stackrel{2}\longrightarrow \operatorname{hofib}(i_{\overline{P}}^*\FF_{\overline{S}} \to \FF_{\overline{P}}).$$
The first term in the sequence above can be identified with $i_{\overline{P}}^* \overline{\HH}^L_S$. Indeed,
$$\operatorname{hofib}(i_{\overline{P}}^*\FF_{\overline{L}} \to i_{\overline{P}}^*\FF_{\overline{S}}) = \operatorname{hofib}(i_{\overline{P}}^* i_{\overline{S}}^* \FF_{\overline{L}} \to i_{\overline{P}}^* \FF_{\overline{S}}) = i_{\overline{P}}^* \operatorname{hofib}(i_{\overline{S}}^*\FF_{\overline{L}} \to \FF_{\overline{S}}) = i_{\overline{P}}^* \overline{\HH}^L_S$$
The first statement of the Lemma follows. The second statement follows by replacing $\overline{P}$ by $P$ throughout.
\end{proof}
\subsubsection{Gluing sheaves across strata whose closures intersect}\label{sec-glueacross}
In the proof of Theorem \ref{thm-hofibsflexg} below, we shall need to glue sheaves in different strata $S_1, S_2$ to obtain a sheaf over $\overline{S_1} \cup \overline{S_2}$ when $\overline{S_1} \cap \overline{S_2} \neq \emptyset$. Note that Definition \ref{def-sssheaf} does not directly furnish such a sheaf. We proceed by induction on
height (Definition
\ref{def-Idec}). For strata of height zero, there is nothing new to construct.
For concreteness, and to illustrate the construction, suppose
$S_1, S_2$ have height one, and $\overline{S_1} \cap \overline{S_2} = P$, where $P$ has height zero.
Then there are two extrinsic sheaves $i_P^* \FF_{\overline{S_1}}$ and $i_P^* \FF_{\overline{S_2}}$, and restriction maps $\operatorname{res}^{S_1}_P, \operatorname{res}^{S_2}_P$ to $\FF_P =
\FF_{\overline{P}}$. Then there exists a natural sheaf $ \FF_{\overline{S_1} \cup \overline{S_2}}$ given as follows:
\begin{enumerate}
\item On a germinal neighborhood of $P$ in $\overline{S_1} \cup \overline{S_2}$ (where the latter is equipped with the subspace topology inherited from $X$),
$ \FF_{\overline{S_1} \cup \overline{S_2}}$ is given by the fiber product
$i_P^* \FF_{\overline{S_1}} \times_{\FF_P} i_P^* \FF_{\overline{S_2}}$,
\item On $S_1$ (resp.\ $S_2$), $ \FF_{\overline{S_1} \cup \overline{S_2}}$ equals
$\FF_{{S_1}}$ (resp.\ $\FF_{{S_2}}$).
\end{enumerate}
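For concreteness, consider the following example: let $X \subset \mathbb R^2$ be the union of the two coordinate axes, stratified by the height-zero stratum $P = \{0\}$ and the height-one strata $S_1, S_2$ given by the punctured $x-$ and $y-$axes, and let $\FF_{\overline{S_i}}$ be the sheaf of continuous real-valued functions on $\overline{S_i}$. Then on a germinal neighborhood of $P$ in $X$, a section of $\FF_{\overline{S_1} \cup \overline{S_2}}$ is an element of
$$i_P^* \FF_{\overline{S_1}} \times_{\FF_P} i_P^* \FF_{\overline{S_2}},$$
i.e.\ a pair of germs of functions, one on each axis, agreeing at the origin.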
The general construction now follows by induction.
Assume therefore that for any finite union $X_m$ of strata of height at most $m$,
we have a sheaf $\FF_m$ such that
\begin{enumerate}
\item $i_S^* \FF_m = \FF_S$ for all strata $S$ of height $m$.
\item For any stratum closure $\overline P $ of height less than $m$, $\FF_m$ equals the fiber product of extrinsic sheaves of the form $i_{\overline{P}}^* \FF_{\overline{S}}$, where $\overline{P} \subset \overline{S}$.
\end{enumerate}
Then the gluing construction described above for two strata $S_1, S_2$ can be repeated for strata of height $m+1$, and the same argument goes through for any finite collection $S_1, \cdots, S_k$ of height $m+1$. In particular, note that
this furnishes a well-defined sheaf $\FF_Y$ for any closed subset of $X$ that is a union of strata.
We note, for use below, the following corollary of the proof of Proposition \ref{prop-sheafh}.
\begin{cor}\label{cor-3glue}
Let $(X, \Sigma)$ be a stratified space with {unique top} dimensional stratum $L$. Let $Y=\partial L$. Let ${\FF= \{\FF_{\overline{S}}: S \in \Sigma\}}$ be a stratified sheaf on $X$. Suppose that
\begin{enumerate}
\item $\FF_Y$ satisfies the parametric $h-$principle.
\item ${\overline{\HH}^X_Y := \operatorname{hofib}(i_Y^* \FF_X \to \FF_Y)}$ satisfies the parametric $h-$principle.
\item $\FF_X\vert_L$ is flexible.
\end{enumerate}
Then $\FF_X$ satisfies the parametric $h-$principle.
\end{cor}
\begin{proof}
Lemma \ref{lem-sessheafflex} along with the first two conditions ensure that $i_{\overline{Y}}^* \FF_X$
satisfies the parametric $h-$principle. Now, flexibility of $\FF_X\vert_L$ allows us to co-glue
$\FF_X\vert_L$ and $i_{\overline{Y}}^* \FF_X$ as in the proof of Proposition \ref{prop-sheafh} to conclude that $\FF_X$ satisfies the parametric $h-$principle.
\end{proof}
\subsubsection{$h-$principle from stratumwise conditions}\label{sec-main}
We are now in a position to prove the main theorem of this Section. It says roughly that flexibility of homotopy fibers for pairs of strata guarantees parametric $h-$principle for the stratified sheaf provided the latter is stratumwise flexible. We shall first prove this assuming a total order of strata.
\begin{theorem}\label{thm-hofibsflexg}
Let $(X, \Sigma)$ be a stratified space and $\FF$ be a stratified sheaf on $X$ such that
\begin{enumerate}
\item $\FF$ is stratumwise flexible, i.e.\ for every stratum $S \in \Sigma$, $\FF_S := i^*_{S}\FF_{\overline{S}}$ is flexible on the stratum $S$.
\item $\FF$ is infinitesimally flexible across strata, i.e.\ for all $S<L$, the open homotopy fiber sheaf $ {\HH_S^L}$ is flexible.
\end{enumerate}
Then the stratified sheaf $ \FF$ satisfies the parametric $h-$principle.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm-hofibsflexg} under the assumption of total ordering:] We first assume that the strata of $X$ are totally-ordered. Thus,
\begin{enumerate}
\item $(X, \Sigma)$ is a stratified space with strata $\Sigma = \{S_1 > \cdots > S_n\}$ ordered so that $S_i < S_j$ if $i > j$.
\item $\FF = \{\FF_{\overline{i}} : 1 \leq i \leq n\}$ is a stratified continuous sheaf on $X$, where $\mathcal{F}_{\overline{i}}$ is a continuous sheaf on the stratum-closure $\overline{S}_i$.
\end{enumerate}
For any pair of indices $i > j$, let ${\overline{\HH}_i^j} = \operatorname{hofib} (i_{\overline{S}_i}^*\FF_{\overline{j}}\to \FF_{\overline{i}})$, and $\HH_i^j = i^*_{S_i} \overline{\HH}_i^j$ be the closed and open homotopy fibers, respectively, as in Definition \ref{def-hh}.
Then the hypotheses of the theorem translate to the following:
\begin{enumerate}
\item For all $j$, the sheaf $\FF_j := i^*_{S_j}\FF_{\overline{j}}$ is flexible on the stratum $S_j$.
\item For all $i > j$, the open homotopy fiber sheaf $ {\HH_i^j}$ is flexible.
\end{enumerate}
We shall show that the sheaves $ \FF_{\overline{j}} $ satisfy the parametric $h-$principle.
We proceed by induction on $n$. For $n=1$, i.e.\ for manifolds, this is due to Gromov (see the Main Theorem on pg.\ 76 of \cite{Gromov_PDR}).
By induction, we can assume that $\FF_{\overline{2}}$ satisfies the parametric $h-$principle (using the chain of $n-1$ strata $S_2>\cdots>S_n$). We first prove that the closed homotopy fibers $\overline{\HH}^j_i$, $i > j$, satisfy the parametric $h$-principle.
\begin{claim}\label{claim-bhhprin}
With notation and hypothesis as in Theorem \ref{thm-hofibsflexg}, $\overline{\HH}^j_i$
satisfies the parametric $h-$principle for all $i > j$.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim-bhhprin}] We prove this by downward induction on $i$. First, let $i=n$.
Since $S_n$ is the deepest stratum and therefore $S_n =\overline{S}_n$, $\overline{\HH}^j_n = \HH^j_n$, and $\HH^j_n$ is flexible by hypothesis. Hence, by \cite[p. 76]{Gromov_PDR}, $\overline{\HH}^j_n =\HH^j_n$ satisfies the parametric $h-$principle for all $j <n$.
Suppose now that the claim holds for $i = k+1 \leq n$ and all $j < k+1$. We shall show that $\overline{\HH}^j_k$ satisfies the parametric $h$-principle as a sheaf on $\overline{S}_k$, for all $j < k$. Observe that $i^*_{S_k} \overline{\HH}^j_k = \HH^j_k$ is an open homotopy fiber, hence flexible by hypothesis. Next, by Lemma \ref{lem-bendright},
$$i^*_{\overline{S}_{k+1}} \overline{\HH}^j_k \simeq \operatorname{hofib} \left ( \overline{\HH}^j_{k+1} \to \overline{\HH}^k_{k+1} \right ).$$
By the induction hypothesis, $\overline{\HH}^j_{k+1}$ and $\overline{\HH}^k_{k+1}$ satisfy the parametric $h$-principle. Therefore, by Lemma \ref{lem-sessheafflex} and Remark \ref{rmk-sessheafflex}, $i^*_{\overline{S}_{k+1}} \overline{\HH}^j_k$ satisfies the parametric $h$-principle as well. Thus, by Corollary \ref{cor-3glue}, $\overline{\HH}^j_k$ satisfies the parametric $h$-principle. \end{proof}
In particular $\overline{\HH}^1_2$ satisfies the parametric $h$-principle. By Corollary \ref{cor-3glue}, we conclude that $\mathcal{F}_{\overline{1}}$ satisfies the parametric $h$-principle, as required.\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-hofibsflexg}, the general case:]
We now show how to remove the hypothesis of total orderability of strata in the proof of Theorem \ref{thm-hofibsflexg}. We shall proceed by induction on height and apply Corollary \ref{cor-3glue}.
We use the notation of Corollary \ref{cor-3glue}. The proof of Corollary \ref{cor-3glue} in fact shows that $\FF_X$ satisfies the parametric $h-$principle
provided that for \emph{any} maximal (with respect to height) stratum $L$, and $Y=\partial L$, we can show that $\FF_Y$, and $ \overline{\HH}^X_Y$ satisfy the parametric $h-$principle. The gluing needs flexibility of $\FF_L$ in Corollary \ref{cor-3glue},
but here our concern will be with the parametric $h-$principle. Let $m+1$ denote the height of $L$, so that $Y$ has height $m$. Assume by induction that
\begin{enumerate}
\item $\FF_{A}$ satisfies the parametric $h-$principle for any substratified space $A \subset X$ of height $<m$.
\item Further, for any substratified space $B \subset X$ with $B>A$, $ \overline{\HH}^B_A$ satisfies the parametric $h-$principle.
\end{enumerate}
Set $Y = \cup_1^k Y_i$, where $Y_i$'s denote the \emph{closures} of the maximal (with respect to height) strata of $Y$. By induction on $k$, it suffices to prove
the theorem for $k=2$. This is because the proof in the totally-ordered case (coupled with the above inductive hypothesis) allows us to conclude that the statement is true for $k=1$, and we can take the union $ \cup_1^{k-1} Y_i$ as a single stratified space in the inductive step. Hence, assume that $Y=Y_1 \cup Y_2$. Also,
note that $Z=Y_1 \cap Y_2$ has smaller height than (at least one of) $Y_1, Y_2$.
We refer to the sheaf $\FF_Y$ (constructed from $\FF_{Y_1}$ and $\FF_{Y_2}$ as in Section \ref{sec-glueacross}) as the \emph{intrinsic sheaf on $Y$}.
Similarly, we refer to $i_Y^* \FF_X$ as the \emph{extrinsic sheaf on $Y$}. It suffices, by Corollary \ref{cor-3glue}, to prove that
\begin{enumerate}
\item The intrinsic sheaf $\FF_Y$ satisfies the parametric $h-$principle, and
\item $ \overline{\HH}^X_Y=\operatorname{hofib}(i_Y^* \FF_X \to \FF_Y)$ satisfies the parametric $h-$principle.\\
\end{enumerate}
\noindent {\bf $\FF_Y$ satisfies the parametric $h-$principle:}\\ To prove that $\FF_Y$ satisfies the parametric $h-$principle,
it suffices by stratumwise flexibility of $\FF$ to show that
$$\AAA = i_Z^* \FF_{Y_1} \times_{\FF_Z} i_Z^*\FF_{Y_2}$$
satisfies the parametric $h-$principle.
(Recall that $Z=Y_1 \cap Y_2$. This reduction is exactly as in the proof of Proposition \ref{prop-sheafh}.)
By Lemma \ref{lem-hofib-fiberpdkt}, $\operatorname{hofib}(\AAA \to i_Z^* \FF_{Y_1})$ is homotopy equivalent to
$\operatorname{hofib}( i_Z^* \FF_{Y_2} \to \FF_Z)$. By the inductive hypothesis on height (of $Z$),
$\operatorname{hofib}( i_Z^* \FF_{Y_2} \to \FF_Z)$
satisfies the parametric $h-$principle. Hence, so does
$\operatorname{hofib}(\AAA \to i_Z^* \FF_{Y_1})$. Again, by
the inductive hypothesis on height (of $Z$),
$i_Z^* \FF_{Y_1}$ satisfies the parametric $h-$principle.
Hence, by Lemma \ref{lem-sessheafflex}, $\AAA$ satisfies the parametric $h-$principle. \\
\noindent {\bf $ \overline{\HH}^X_Y$ satisfies the parametric $h-$principle:}\\ We know that the following sheaves satisfy the parametric $h-$principle:
\begin{enumerate}
\item $\FF_Z$ and $\operatorname{hofib}(i_Z^* \FF_X \to \FF_Z)$ (by the inductive hypothesis applied to closures of strata of lower height, and to closed homotopy fibers over such closures)
\item $\operatorname{hofib}(i_{Y_1}^* \FF_X \to \FF_{Y_1})$ (from the proof in the totally-ordered case)
\item $\operatorname{hofib}(i_{Y_2}^* \FF_X \to \FF_{Y_2})$ (from the proof in the totally-ordered case)
\end{enumerate}
It suffices (as in the proof of Proposition \ref{prop-sheafh}) to show that $\operatorname{hofib}(i_Z^* \FF_X \to \AAA)$ satisfies the parametric $h-$principle.
Note now that by Lemma \ref{lem-sessheafflex}, $\operatorname{hofib}(\AAA\to \FF_Z)$
satisfies the parametric $h-$principle (since we have already shown that $\AAA$ does, and the inductive hypothesis gives that $\FF_Z$ does).
Further, by the inductive hypothesis applied to the stratum closure $Z$ of lower height,
the homotopy fiber $\operatorname{hofib}(i_Z^* \FF_X \to \FF_Z)$ satisfies the parametric $h-$principle. By Lemma \ref{lem-bendright},
$\operatorname{hofib}(i_Z^* \FF_X \to \AAA)$ is homotopy equivalent to
$$\operatorname{hofib}\big(\operatorname{hofib}(i_Z^* \FF_X \to \FF_Z) \to \operatorname{hofib}(\AAA\to \FF_Z)\big).$$
By Lemma \ref{lem-sessheafflex}, this satisfies the parametric $h-$principle.
Hence, so does $\operatorname{hofib}(i_Z^* \FF_X \to \AAA)$.
\end{proof}
\begin{comment}\label{rmk-hofibsflexg2} {\rm
We adopt a bottom-up approach to prove the
analog of Claim \ref{claim-bhhprin}.
The stratum $n$ is replaced by the (necessarily disjoint) union of deepest strata (see Definition \ref{def-Idec}). Note only that $n=\overline{n}$ is now a disjoint union of manifolds of possibly different dimensions. We label the disjoint union of strata that lie above only the deepest strata by $(n-1)$, those that lie just above strata in
$(n-1)$ by $(n-2)$, and so on.
Then the first step of the proof of Claim \ref{claim-bhhprin} shows that $\overline{\HH}_{i,n} =\HH_{i,n}$ satisfies the parametric $h-$principle for all $i<n$.
Fix $i, p$ as in the inductive step of the proof of Claim \ref{claim-bhhprin}. Also,
let $S_p$ be a stratum with label $p$.
It suffices to prove for any such $S_p$, that $i_{\partial S_p}^* \overline{\HH}_{i,p}$ satisfies the parametric $h-$principle.
By Lemma \ref{lem-bendright},
$$i_{\partial S_p}^* \overline{\HH}_{i,p}\cong \operatorname{hofib} (\overline{\HH}_{i,\partial S_p} \to \overline{\HH}_{p,\partial S_p}).$$
The inductive hypothesis applied to closed stratified subspaces with label $p+1$ guarantees that $\overline{\HH}_{i,\partial S_p}$ and $\overline{\HH}_{p,\partial S_p}$
satisfy the parametric $h-$principle. Hence, as before,
$i_{\partial S_p}^* \overline{\HH}_{i,p}$ satisfies the parametric $h-$principle, and
an application of Corollary \ref{cor-3glue} completes the proof that $\overline{\HH}_{i,j} $ satisfies the parametric $h-$principle for all $i<j$. (Note that the hypotheses of Theorem \ref{thm-bh} and the proof of Corollary \ref{cor-3glue} uses neither connectedness of ${\overline{n}}$ nor that it is a manifold.)
Finally, let $L$ be a maximal stratum. Then, we conclude from the above that
$$\operatorname{hofib} \big(i_{\partial L}^* \FF_{\overline{L}}\to \FF_{\partial {L}} \big),$$
and hence $i_{\partial L}^* \FF_{\overline{L}}$
satisfy the parametric $h-$principle.
The proof of Theorem \ref{thm-hofibsflexg} in the general case is now completed by applying Corollary \ref{cor-3glue} to glue the flexible sheaf $\FF_L$ to
$i_{\partial L}^* \FF_{\overline{L}}$.
}
\end{comment}
\subsubsection{Flexibility versus h-principle}\label{sec-ctreg} The aim of the example below is to illustrate the necessity of flexibility of $\FF$ on the top stratum
in Theorem \ref{thm-hofibsflexg} and Proposition \ref{prop-sheafh}. It emphasizes that
it is not enough to assume that $\FF$ satisfies the parametric $h-$principle on the top stratum.
Let $X=M$ be an orientable 3-manifold, and $F \subset M$ be an embedded orientable surface. Stratify $X$ with two strata:
$L= M \setminus F$ and $S=F$, so that $S=\overline{S}$. Let $N$ be another 3-manifold not covered by $M$.
The stratified sheaf $\FF$ is defined as:
\begin{enumerate}
\item $\FF_{{L}}(U) = \operatorname{Imm} (U,N)$ for $U \subset \overline{L}$ open,
\item $\FF_{{S}}(V) = \operatorname{Imm} (V,N)$ for $V \subset S$ open.
\end{enumerate}
Then the restriction map $\operatorname{res}: i_S^* \FF_{{L}} \to \FF_{{S}}$ simply forgets the normal bundle to $S$ in $M$.
We note the following:
\begin{enumerate}
\item $\FF_{{L}}|L$ satisfies the parametric $h-$principle \cite[p. 79]{Gromov_PDR} as $L$ is open.
\item $\FF_{{L}}|L$ is not flexible (since the dimensions of $M, N$ coincide, and both are compact).
\item $\FF_{{S}}$ is flexible \cite[p. 79]{Gromov_PDR}.
\item $\operatorname{hofib}(\operatorname{res})$ is flexible.
\end{enumerate}
\begin{prop}\label{prop-3mfld} With $M, \FF, L, S$ as above,
$\FF$ does not satisfy the parametric $h-$principle.
\end{prop}
\begin{proof}
The Gromov diagonal construction applied to $\FF$ gives $\FF^*$ homotopy equivalent to a sheaf $\GG$ given
as follows. $\GG(U)$ consists of bundle maps $\psi:TU\to TN$ covering smooth maps $\psi_0: U \to N$
such that $\psi|T_xU$ is an isomorphism on every tangent space $T_xU$. In particular,
$\GG(M)$ consists of bundle maps $\psi:TM\to TN$ covering smooth maps $\psi_0: M \to N$
such that $\psi|T_xM$ is an isomorphism on every tangent space $T_xM$. Since $M$ is an orientable 3-manifold, it is parallelizable, so that $TM=M\times \mathbb R^3$.
Therefore $\GG(M)$ contains $\psi$ covering a \emph{constant map} $\psi_0$. In particular, $\GG(M)$ is non-empty.
On the other hand, $\FF(M)$ consists of immersions from $M$ to $N$. Since $M, N$ have the same dimension and are compact, every element of $\FF(M)$ is a covering map. Since $N$ is not covered by $M$, $\FF(M)$ is empty.
Since $\GG(M)$, and hence $\FF^*(M)$, is non-empty while $\FF(M)$ is empty, the map $\FF(M) \to \FF^*(M)$ cannot be a weak homotopy equivalence. Hence, $\FF$ does not satisfy the parametric $h-$principle.
\end{proof}
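For instance, one may take $M = T^3$ and $N = S^3$ in Proposition \ref{prop-3mfld} (with $F$ any embedded orientable surface in $T^3$, e.g.\ a coordinate $2-$torus): the $3-$torus is parallelizable, while $S^3$, being simply connected, is covered only by itself, so $T^3$ does not cover $S^3$.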
\subsection{Microflexibility of stratified sheaves}\label{sec-micro} For the purposes of this subsection
$X$ will denote a stratified space, and $V$ a manifold. $\FF$ will be a (stratified) continuous sheaf.
The aim of this subsection is to generalize the following theorem of Gromov and its consequence below to stratified spaces and stratified sheaves over them.
\begin{theorem}\cite[p. 78]{Gromov_PDR}\label{thm-gromovmicro2flex}
Let $Y = V \times \mathbb R$, and let $\Pi: Y \to V$ denote the projection
onto the first factor. Let $\diff( V, \Pi)$ be the group of diffeomorphisms of $Y$ commuting with $\Pi$, where $V$ is identified with $V \times \{0\}$.
Let $\FF$ be a microflexible (continuous) sheaf over $Y$ invariant under $\diff( V, \Pi)$.
Then the restriction $\FF \vert V (= V \times \{0\})$ is a flexible sheaf over $V (= V \times \{0\})$.
\end{theorem}
The principal consequence is the following
flexibility theorem for $\diff-$invariant sheaves.
\begin{theorem}\cite[p. 78-79]{Gromov_PDR}\label{thm-gromovmicro2flex2}
Let $\FF$ be a microflexible $\diff(V)-$invariant (continuous) sheaf
over a manifold $V$. Then the restriction to an arbitrary piecewise smooth polyhedron
$K \subset V$ of positive codimension, $\FF|K$, is a flexible sheaf over $K$.
\end{theorem}
Recall that a stratified (continuous) sheaf $\FF$ over a stratified space $X$ is \emph{stratumwise microflexible} if
for every stratum $S$ of $X$, $\FF|S$ is microflexible.
The main theorems of this section are now given below.
\begin{theorem}\label{thm-micro2flexs}
Let $Y = X \times \mathbb R$ equipped with the product stratification, and let $\Pi: Y \to X$ denote the projection
onto the first factor. Let $\FF$ be a stratumwise microflexible continuous sheaf over $Y$ invariant under $\sdiff( X, \Pi)$.
Further, suppose that $\FF$ is infinitesimally microflexible across strata, i.e.\ for all strata $S<L$ of $Y$, $\HH^L_S$ (cf.\ Definition \ref{def-hh}) is microflexible.
Then the restriction $\FF \vert X (= X \times \{0\})$ is a stratified sheaf over $X (= X \times \{0\})$ satisfying the parametric $h-$principle.
\end{theorem}
\begin{proof} Note first that $\HH^L_S$ is invariant under $\sdiff( X, \Pi)$ by Lemma \ref{lem-trivialhofibbdl}.
By hypothesis,
\begin{enumerate}
\item $\FF\vert L \times \mathbb R$ is microflexible.
\item $\HH^L_S$ is microflexible.
\end{enumerate}
The strata of $Y$ are of the form $L = L_X \times \mathbb R$, where $L_X = L\cap X$ is a stratum of $X$; in particular, each stratum $L$ is a manifold. Invariance of $\FF$ and $\HH^L_S$ under $\sdiff( X, \Pi)$ implies $\diff (L_X \times \mathbb R, \Pi)-$invariance of $\FF\vert L_X \times \mathbb R$ and $\diff (S_X \times \mathbb R, \Pi)-$invariance of $\HH^L_S \vert S_X \times \mathbb R$. It follows from Theorem \ref{thm-gromovmicro2flex} that $\FF\vert L_X$ and $\HH^L_S \vert S_X$ are flexible.
Hence, by Theorem \ref{thm-hofibsflexg}, $\FF \vert X $ satisfies the parametric $h-$principle.
\end{proof}
\begin{defn}
A stratified subspace $K<X$ is said to be of positive codimension if for every stratum $S$ of $X$,
$K\cap S$ has positive codimension in $S$.
\end{defn}
\begin{theorem}\label{thm-micro2flexs2}
Let $\FF$ be a stratumwise microflexible $\sdiff-$invariant stratified (continuous) sheaf
over $X$. Further, suppose that $\FF$ is infinitesimally microflexible across strata, i.e.\ the open homotopy fiber sheaves $\HH^L_S$ (Definition \ref{def-hh}) are
microflexible.
Then the restriction $\FF|K$ to a stratified subspace
$K \subset X$ of positive codimension satisfies the parametric $h-$principle.
\end{theorem}
\begin{proof}
Since $K$ is of positive codimension in $X$, for every $k\in K$, there is an open neighborhood $U_k$ of $k$ in $K$ such that $U_k \times (-1,1)$ embeds in $X$. The theorem now follows
from Theorem \ref{thm-micro2flexs} since locally flexible sheaves are flexible (by Gromov's localization lemma \cite[p. 79]{Gromov_PDR}).
\end{proof}
\begin{rmk}
Theorems \ref{thm-micro2flexs} and \ref{thm-micro2flexs2} provide examples of how to translate a positive codimension stratumwise microflexibility hypothesis into an
$h-$principle conclusion.
\end{rmk}
\section{The Gromov diagonal normal construction for smooth stratified spaces}\label{sec-formalfnstrat}
We specialize the Gromov diagonal normal sheaf construction of $\FF^*$ in Definition \ref{def-formalfn} to the sheaf of sections of a stratified bundle $E$ over a smooth stratified space $X$. Even in the manifold setup, an explicit connection between Gromov's $\FF^*$ construction and the use of jets in \cite{em-book} is a little difficult to find. Hence, we provide a detailed treatment below. Remark \ref{rmk-formalasgerms}, which gives an explicit description of $\FF^*$ will allow us to formalize this. It will turn out that
in the stratified context, $\FF^*$ admits an inductive description up to homotopy in terms of two constituent sheaves:
\begin{enumerate}
\item A purely topological germ of sections (see Definition \ref{def-conegerms} below).
\item A smooth jet $\JJ^r_E$ when $E$ is a smooth bundle over a manifold (see Proposition \ref{prop-formalfnmflds} below).
\end{enumerate}
The aim of this section is to describe this structure of $\FF^*$. In the process we answer Sullivan's question \ref{qn-sullivan}.
\subsection{Tangent microbundles on stratified spaces}\label{sec-microb}
The stratified tangent bundle (Definition \ref{str-tbl}) will turn out to be a stratified subbundle of the tangent microbundle to $X$ (Definition \ref{def-mic}).
Note that for $X$ a manifold, the tangent microbundle is {germinally} equivalent to the tangent bundle $TX$, as it coincides with the normal bundle to the diagonal $\diag(X) \subset X \times X$.
For each stratum $S$ of $X$, $TS$ will thus refer to the tangent microbundle of the manifold $S$, i.e.\ it may be identified canonically with the germ of the zero-section from $S$ to the usual (manifold) tangent bundle of $S$.
The tangent microbundle to a stratified space $X$, denoted by $tX$ henceforth, turns out to be a {stratumwise} bundle (see Definition \ref{def-stratumwisebdl}).
We provide an explicit
description of $tX$ in terms of {the} local structure {of $X$}.
Let ${tX :=(U, X, p)}$ be the tangent microbundle of $X$. For $x \in X$, consider the fiber $p^{-1}(x)$. This is a germ $U_x$ of a neighborhood of $x$ in $X$. Let $S$ be the unique stratum of $X$ containing $x$. Then there is {an} identification of $U_x$ with $W \times cA$ where $W$ is the germ of a ball around $x$ in $S$ and $cA$ is the germ of a cone on the link $A$ of $S$ in $X$, {with cone point $x$}. Thus,
$${(p^{-1}(x), \{(x,x)\}) \cong (U_x, \{x\}) \cong (W, \{x\}) \times (cA, \{x\})}$$
as {germs of spaces}.
The bundle over $S$ whose fibers are (germs of) the cones $cA$ will be denoted as $NS$, and referred to as the normal cone microbundle of $S$ in $X$.
Hence the restriction of $(U, p)$ to $S$, i.e.\ $(p^{-1}(S) \cap U, S, p)$ is germinally equivalent to the direct sum, {i.e.\ fiberwise product} of the microbundles:
$${(p^{-1}(S) \cap U, S, p) \cong (TS, S, p) \oplus (NS, S, p),}$$
where $TS$ is the tangent microbundle of $S$ and $(NS, S, p)$ is the normal cone microbundle of $S$ in $X$. {This demonstrates that $tX$ is a stratumwise fiber bundle over $X$ according to Definition \ref{def-stratumwisebdl}, where the fiber over a point $x \in S$ of any particular stratum $S$ of $X$ is given by $T_x S \oplus cA_S(x)$ where $A_S$ denotes the link of $S$ in $X$, and $cA_S(x)$ denotes the normal cone of $S$ in $X$ at the point $x$. We next describe a filtration of $tX$, which induces the canonical filtration of any normal cone $cA_S$ by stratum-closures.}
\noindent {\bf Relative tangent microbundle:}
Next, suppose that $L$ is a stratum of $X$ and let $Y= \overline{L}$ be the stratum closure of $L$. Then $Y$ is stratified naturally by the strata of $X$ given by the union of $L$ and those strata of $X$ that lie on the boundary of $L$.
The tangent microbundle for $Y$ can be constructed as above, replacing $X$ by $Y$. We denote the tangent microbundle $tY$ of $Y$ by $t(L; X)$ and call it the \emph{relative tangent microbundle (relative to $L$)}. Note that $t(L; X)$ is a microbundle over $Y$. If, moreover, $L$ is a dense stratum in $X$, then $t(L; X) = tX$.
\noindent {\bf Filtering the tangent microbundle:}
Observe that $tX$ admits a filtration by $t(L; X)$ for $L$ varying over strata of $X$.
Thus, $t(L; X)$ is a sub(micro)bundle of $tX$ restricted to any stratum $S < L$, as
$${t(L; X)\vert_{S} =TS \oplus N_{\bar{L}}(S)}$$
and
$${tX|_S =TS \oplus N_X(S)},$$
where $N_{\bar{L}}(S)$ and $N_X(S)$ denote the normal cone microbundles of the stratum $S$ in $\bar{L}$ and $X$, respectively. So $t(L; X)\vert_{S} \subset tX\vert_{S}$. For any particular stratum $S$ of $X$, the collection $\{t(L; X) : S < L\}$ induces a filtration of the normal bundle $N_X(S)$ by the subbundles $\{N_{\bar{L}}(S) : S < L\}$. These in turn induce the filtration by stratum-closures $\{cA^L_S : S < L\}$ on any particular conical fiber $cA_S$. Here $A^L_S$ denotes the link of $S$ in $\bar{L}$.
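For instance, stratify $X = \mathbb R^2$ by the stratum $S$ given by the $x-$axis and the two open half-planes $L_{\pm}$. Then the link of $S$ in $X$ is the two-point space $A_S = \{+,-\}$, so that the fiber of $N_X(S)$ is the cone $cA_S$, a germ of a line at $0$, while the fiber of $N_{\overline{L_{\pm}}}(S)$ is the cone on the single point $\{\pm\}$, a germ of a closed half-line. Accordingly, $t(L_{\pm}; X)\vert_{S} = TS \oplus N_{\overline{L_{\pm}}}(S)$ is the sub(micro)bundle of $tX\vert_S = TS \oplus N_X(S)$ corresponding to the half-line $c\{\pm\} \subset cA_S$.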
\subsection{Gromov diagonal normal construction for manifolds}\label{sec-formalfnmflds} We detail some of the points made in Remark \ref{rmk-formalasgerms} and
briefly recall Gromov's diagonal normal sheaf construction in the manifold context before generalizing to stratified spaces. Recall (Definition \ref{def-formalfn}) that if $\FF$ is a {continuous sheaf} over a manifold $M$, then the sheaf $\PP$ over $M \times M$ is given by $\PP(U \times V) = \operatorname{Maps}(U, \FF(V))$. Further, the Gromov diagonal normal sheaf $\FF^*$ is obtained by restricting $\PP$ to the diagonal $\diag(M) \subset M\times M$.
Let $p:E \to M$ be a smooth fiber bundle over $M$ and $\FF$ be the sheaf of sections of $E$, i.e.\ $\FF(U) = \Gamma(U; E)$, where the space of sections $\Gamma(U; E)$ is equipped with {the quasitopology inherited from $\mathrm{Maps}(U, E)$}. Then
$$\PP(U \times V) = \operatorname{Maps}(U, \FF(V)) = \operatorname{Maps}(U, \Gamma(V; E)).$$
{Therefore,} these consist of those maps $U \times V \to E$ for which the restriction to {$\{u\} \times V$} gives a section of $E$ over $V$, {for any $u \in U$}.
Recall that a collection of elements $\phi_i \in \PP(U_i \times U_i), i=1, \cdots, m$ is consistent if for all $i \neq j$, $\phi_i=\phi_j$ on ${(U_i \cap U_j) \times (U_i \cap U_j)}$.
Then $\FF^*$ can be described as follows. For any open set $W \subset \diag(M)$, an element of $\FF^*(W)$ consists of a collection of consistent elements ${\phi_i \in \PP(U_i \times U_i)}$ where $\{U_i \times U_i\}$ is a basic open cover of $W$ in $M \times M$. Consistency of ${\phi_i}$ ensures that they define a well-defined germ of a map $(TW, W_0) \to {E}$ at the zero-section $W_0$ of the tangent bundle $TW \subset TM$.
Let $M_0$ denote the zero-section of $TM$. Then, (global) sections of $\FF^*$ may be viewed as certain germs of maps ${\psi} : (TM, M_0) \to E$. The fact that the maps $U_i \times U_i \to E$ are sections when restricted to the second factor implies the following two facts about the germ $\psi: (TM, M_0) \to E$:
\begin{enumerate}
\item $\psi$ is a section $M \to E$ when restricted to the 0-section $M_0$ of $TM$.
\item $\psi$ is a germ of a section $(T_p M, 0) \to E$ when restricted to the tangent space $T_p M \subset TM$ at $p \in M$.
\end{enumerate}
Replacing $M$ by $U$, $\FF^*(U)$ consists of germs of mappings $\psi_U: (TU, U_0) \to E$ such that $\psi_U$ {is} a section of $E$ when restricted to the 0-section $U_0 \subset TU$ and is a germ of a section of $E$ {over a neighborhood of $p$} when restricted to any tangent space $T_p U$ for $p \in U$. Thus, any element of $\FF^*(U)$ consists of a section $s$ of $E$ over $U$ {decorated with a collection of} germs ${g_p : (U_p, p) \to (E, s(p))}$ of sections of $E$. Here,
\begin{enumerate}
\item $g_p$ is defined on some neighborhood $U_p$ of $p \in U$, and sends $p$ to $s(p)$.
\item $p$ ranges over all $U$.
\end{enumerate}
{Therefore, we shall henceforth denote an element of $\FF^*(U)$ as a tuple $(s, \{g_p : p \in U\})$. It is convenient to imagine this data as a \emph{base} section $s : U \to E$ of $E$ with the image $s(U) \subset E$ decorated by a \emph{field} of section-germs $\{g_p : p \in U\}$}.
There exists a natural {morphism of sheaves} $\Psi_r: \FF^*\to \JJ^r_E$ from $\FF^*$ to the {sheaf} of $r-$jets of sections of $E$ over $M$, {essentially given by setting $\Psi_r(s, \{g_p\})$ equal to the family of $r$-th order Taylor polynomials of $g_p$ at $p$, as $p$ varies over $U$. We state a precise definition below:}
\begin{defn}\label{def-psi}
{For any $(s, \{g_p : p \in U\}) \in \FF^*(U)$, define $\Psi_r(s, \{g_p\}) \in \JJ^r_E(U)$ to be the section of the $r$-jet bundle of $E$ such that at the point $p \in U$, the section takes the value $J^r_p g_p$. This defines a morphism of sheaves $\Psi_r : \FF^* \to \JJ^r_E$.}
\end{defn}
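For instance, if $E = M \times \mathbb R$ is the trivial line bundle, then an element of $\FF^*(U)$ is a smooth function $s: U \to \mathbb R$ together with, for each $p \in U$, a germ of a smooth function $g_p: (U_p, p) \to \mathbb R$ satisfying $g_p(p) = s(p)$; the morphism $\Psi_r$ sends $(s, \{g_p\})$ to the section of the $r-$jet bundle recording, at each $p \in U$, the degree $r$ Taylor polynomial of $g_p$ at $p$.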
\begin{prop}\label{prop-formalfnmflds}
$\Psi_r: \FF^* \to \JJ^r_E$ is a {weak} homotopy equivalence of sheaves. Equivalently, for any $r$, the Gromov diagonal normal sheaf $\FF^*$ is naturally homotopy equivalent to the sheaf $\JJ^r_E$ of $r-$jets of sections of $E$.
\end{prop}
\begin{proof}
Consider the {space $C^\infty_0(\mathbb R^n, \mathbb R^m)$ of germs of smooth maps from $\mathbb R^n$ to $\mathbb R^m$ at the origin}. Also, let ${P^r_0(\mathbb R^n, \mathbb R^m) \subset C^\infty_0(\mathbb R^n, \mathbb R^m)}$ denote the {subspace of} germs of polynomials (in $n$ variables) of degree at most $r$ at $0$. Let $T_r(f)$ denote the Taylor expansion of $f \in C^\infty_0(\mathbb R^n, \mathbb R^m)$ {at 0}, truncated at degree $r$. Then
$$f_t := T_r(f) + t\,(f - T_r(f)), \qquad t \in [0,1],$$
furnishes a {deformation retraction of $C^\infty_0(\mathbb R^n, \mathbb R^m)$ onto $P^r_0(\mathbb R^n, \mathbb R^m)$. As locally $\FF^*$ and $\JJ^r_E$ are isomorphic, respectively, to the sheaves $\mathrm{Maps}(-, C^\infty_0(\mathbb R^n, \mathbb R^m))$ and $\mathrm{Maps}(-, P^r_0(\mathbb R^n, \mathbb R^m))$, we conclude $\Psi_r$ is a weak homotopy equivalence.}
\end{proof}
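To illustrate the retraction in the simplest case $n = m = r = 1$: for the germ of $f(x) = \sin x$ at $0$, we have $T_1(f)(x) = x$, and the homotopy
$$f_t(x) = x + t\,(\sin x - x), \qquad t \in [0,1],$$
deforms $f$ (at $t = 1$) to its first-order Taylor polynomial (at $t = 0$), with every $f_t$ having the same $1$-jet at $0$.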
\begin{rmk}\label{rmk-formalfn}
In fact, $\Psi_\infty: \FF^* \to \JJ^\infty_E$, sending ${(s, \{g_p : p \in U\}) \in \FF^*(U)}$ to the family of infinite jets $\{J^\infty_p g_p : p \in U\}$, is also surjective. If we restrict to analytic sections, then $\Psi_\infty$ is {moreover} injective.
$\Psi_1: \FF^* \to \JJ^1_E$ given by $${\Psi_1(s, \{g_p\})= (s, \{dg_p\})}$$ is of particular significance. It replaces the {germ-field $\{g_p : p \in U\}$ decorating the base section $s$ by the tangent plane fields $\{dg_p : p \in U\}$}.
\end{rmk}
\subsection{Gromov diagonal normal construction for cones}\label{sec-formalfncone}
We would like to extend the linearized notion of formal $r-$jets of sections ensured by Definition \ref{def-psi} and Proposition \ref{prop-formalfnmflds} from the manifold context to the context of a stratified bundle $P:E \to X$ over a stratified space. However, a full linearization is not possible,
and we shall instead provide a hybrid construction, combining
\begin{enumerate}
\item the linear structure within manifold strata provided by Definition \ref{def-psi} and Proposition \ref{prop-formalfnmflds},
\item the germ construction in Remark \ref{rmk-formalasgerms} for cones on links using the local structure given by Corollary \ref{cor-trivialzn}.
\end{enumerate}
In this subsection, we shall focus on the second ingredient, and in the next subsection indicate how to assemble these two together. Let $P:E \to X$ be a stratified bundle.
For the purposes of this subsection, $E=cB, X=cA$, where $E=B\times [0,1)/B \times \{0\}$, and
$X=A\times [0,1)/A \times \{0\}$ and $A, B$ are compact {abstractly} stratified spaces.
Let $c_A$ (resp.\ $c_B)$ denote the cone-point of $cA$ (resp.\ $cB$). Let $cB^0$ (resp. $cA^0$) denote the deleted cone ${cB \setminus \{c_B\}}$ (resp. ${cA \setminus \{c_A\}}$).
By Corollary \ref{cor-trivialzn}, there exists a stratified bundle $p: B \to A$ such that
(after reparametrization if necessary), $P(b, t) = (p(b), t)$. Hence, there exists a natural stratified bundle $P^0: cB^0\to cA^0$ induced by $P$.
Let $\FF$ (resp. $\FF_w$) denote the sheaf of controlled (resp.\ weakly controlled) sections of $P:E \to X$ in this case.
Note that any section of $P:cB(=E) \to cA(=X)$ necessarily sends $c_A$ to $c_B$. Let $\FF^0$ (resp. $\FF_w^0$) denote the induced sheaf of controlled (resp.\ weakly controlled) sections of $P^0: cB^0\to cA^0$.
We shall first inductively describe the Gromov diagonal normal sheaf for $\FF^0$ (resp. $\FF_w^0$)
using the sheaf $\LL$ (resp. $\LL_w$) of controlled {(resp.\ weakly controlled)} sections of $p:B \to A$ and
Lemma \ref{lem-mapsI2F}. Let $\LL^*$ (resp. $\LL_w^*$) denote the Gromov diagonal normal sheaves of $\LL$ (resp. $\LL_w$).
In particular, when $A, B$ are manifolds, then (see Section \ref{sec-formalfnmflds}):
\begin{enumerate}
\item $p:B \to A$ is a smooth bundle map,
\item $\LL=\LL_w$ is the sheaf of smooth sections given by $\LL(U) =\Gamma (U, B)$,
\item $\LL^*$ is homotopy equivalent to the sheaf $\JJ^r_B$ of $r-$jets (Proposition \ref{prop-formalfnmflds}).
\end{enumerate}
Then controlled sections of $P^0: cB^0\to cA^0$ are given by maps $\sigma : cA^0 \to cB^0$ of the form $\sigma (a,t) =(s_t(a),t)$, where $s_t: A \to B$ is a controlled section of $p: B\to A$; i.e.\ $\sigma$ is a continuous $(0,1)-$parametrized family of sections
from $A$ to $B$. The same holds for controlled sections of $P^0: (P^0)^{-1}(U)\to U$ for all open $U$ in $cA^0$. {Therefore, we have an isomorphism of sheaves $\mathcal{F}^0 \cong \operatorname{Maps}^p((0, 1), \mathcal{L})$, where $\operatorname{Maps}^p$ denotes the parametric sheaf as defined in Definition \ref{defn-sheafmaps2ff2}.}
The following is now an immediate consequence of Lemma \ref{lemw2ff*p}:
\begin{lemma}\label{lem-deletedconesheaf} With $\FF^0, \LL$ as above, and open subsets $V \subset A$, and $W \subset (0,1)$, we have:
$$(\FF^0)^*(W \times V) = \operatorname{Maps}(W,\LL^*(V)). $$
\end{lemma}
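For instance, when $A, B$ are manifolds, combining Lemma \ref{lem-deletedconesheaf} with Proposition \ref{prop-formalfnmflds} yields, up to homotopy,
$$(\FF^0)^*(W \times V) \simeq \operatorname{Maps}(W, \JJ^r_B(V)),$$
i.e.\ an element of the Gromov diagonal normal sheaf over the deleted cone is, up to homotopy, a $W$-parametrized family of $r$-jets of sections of $p : B \to A$ over $V$.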
Next, we describe the relationship between the sheaf of controlled sections and the sheaf of
weakly controlled sections. Suppose $X=cA$ is equipped with a control structure (i.e.\ both a projection $\pi$ and a radial function $\rho$ in a neighborhood of $c_A$). Let $\rho_A$ denote the radial function on a small neighborhood of $c_A$ in $cA$. Without loss of generality, by shrinking
$cA$ if necessary, we may assume that $\rho_A: cA^0 \to (0,1)$ is a fiber bundle with fiber $A$.
Pulling back $\rho_A$ under $P$ we obtain a radial function $\rho_B=\rho_A \circ P$ on $cB$.
\begin{assume}\label{assume-control}
Thus, without loss of generality, we assume that $\rho_B=\rho_A \circ P$, i.e.\ $P$ is a
controlled stratified bundle map from $cB$ to $cA$ near the cone points. Further, $P^0: cB^0 \to cA^0$ is a bundle map such that
$$\rho_B(t,b)= t = \rho_A (P^0(t,b)).$$
\end{assume}
We shall say that a section $s : cA^0 \to cB^0$ is \emph{levelwise weakly controlled} if $s(\{t\} \times A) \subseteq \{t\} \times B$ for all $t \in (0, 1)$ and the restriction $s|(\{t\} \times A)$ is a weakly controlled map to $\{t\} \times B$.
With the control structure {on the domain and target of} $P : cB\to cA$ in place, the space of weakly controlled sections $\Gamma_w (cA^0, cB^0)$ fibers over the space of levelwise weakly controlled sections $\Gamma_\ell(cA^0, cB^0)$. {The fibers of this fibration} are given by reparametrizing the $(0,1)-$direction in $cA^0$. More precisely, there exists a
surjection $\Theta: \Gamma_w(cA^0, cB^0) \to \Gamma_\ell(cA^0, cB^0)$, such that for any $\sigma \in \Gamma_\ell (cA^0, cB^0)$, $\Theta^{-1} (\sigma) \cong \operatorname{Maps} (A, \diff^+((0,1)))$, where $\diff^+((0,1))$ denotes the group of orientation-preserving diffeomorphisms of $(0,1)$. Thus, there is a natural product fibration:
\begin{equation}\label{eqn-strong2weakcontrol}
\Gamma_w (cA^0, cB^0) = \Gamma_\ell (cA^0, cB^0) \times \operatorname{Maps} (A, \diff^+((0,1)))
\end{equation}
This is because $ \Gamma_w (cA^0, cB^0)$ is a principal $\operatorname{Maps} (A, \diff^+((0,1)))-$bundle
over $\Gamma_\ell (cA^0, cB^0)$, equipped with a natural section given by the inclusion $\Gamma_\ell(cA^0, cB^0) \hookrightarrow \Gamma_w(cA^0, cB^0)$ of levelwise weakly controlled sections into the weakly controlled sections.
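We note, for completeness, an explicit contraction of $\diff^+((0,1))$ onto $\{\mathrm{id}\}$: the straight-line homotopy
$$H_t(\varphi)(x) = (1-t)\,\varphi(x) + t\,x, \qquad t \in [0,1],$$
stays within $\diff^+((0,1))$, since each $H_t(\varphi)$ is increasing, with derivative $(1-t)\varphi'(x) + t > 0$, and maps $(0,1)$ onto $(0,1)$.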
Since $\diff^+((0,1))$ is contractible, $\Gamma_w(cA^0, cB^0)$ and $\Gamma_\ell(cA^0, cB^0)$ are homotopy equivalent. Consequently, we obtain:
\begin{cor}\label{cor-deletedconesheaf}
With $\FF^0_w, \LL_w$ as above, and open subsets $V \subset A$, and $W \subset (0, 1)$, we have
$$(\FF^0_w)^*(W \times V) = \operatorname{Maps}(W, (\LL_w)^*(V)).$$
\end{cor}
\begin{defn}\label{def-conegerms}
The space of \emph{germs}
of controlled (resp.\ weakly controlled) sections of $P:cB(=E) \to cA(=X)$ will be denoted by
$\Gamma_c(cA,cB)$ (resp.\ $\Gamma_{c,w}(cA,cB)$).
\end{defn}
We are now in a position to note the following Proposition which allows us to assemble the descriptions in Lemma \ref{lem-deletedconesheaf} and Definition \ref{def-conegerms}.
This is useful in providing an inductive description of the Gromov diagonal normal sheaf of sections of a stratified bundle.
\begin{prop} \label{prop-decompcones}
Any element of the sheaf $\FF^*(U)$ (resp. $\FF_w^*(U)$) for an open $U \subset cA$ determines and is determined by the following:
\begin{enumerate}
\item a controlled (resp. weakly controlled) section $s$ over $U$. In particular, if $U=cA$, $s: cA \to cB$ is a global controlled (resp. weakly controlled) section,
\item a germ at $c_A$ {given} by an element of $\Gamma_c(cA,cB)$ (resp.\ $\Gamma_{c,w}(cA,cB)$) if
$c_A \in U$,
\item an element of {$(\FF^0)^*$ (resp. $(\FF^0_w)^*$)} whose first coordinate (as in Remark \ref{rmk-formalasgerms}) coincides with the restriction ${s\vert_{U \setminus \{c_A\}}}$.
\end{enumerate}
\end{prop}
\begin{proof} For concreteness, we work with $\FF$ and $\FF^*$; the same argument works for $\FF_w$ and $\FF_w^*$. Indeed,
Equation \ref{eqn-strong2weakcontrol} ensures that, up to homotopy, these sheaves agree, by applying induction (on depth) to weakly controlled
sections from $A$ to $B$.
We use the description of the Gromov diagonal normal sheaf from Remark \ref{rmk-formalasgerms}: any element of $\FF^*(U)$ consists of a controlled section
$s$ over $U$ decorated with germs of sections ${\{g_w: w\in U\}}$. The controlled section
$s$ over $U$ contributes item 1 in the statement.
Next,
\begin{enumerate}
\item For $w=c_A$, ${g_w}$ is given by an element as in item 2 in the statement.
\item For $U'=U \setminus \{c_A\}$, ${\{g_w: w \in U'\}}$ {constitutes an} element as in item 3 in the statement.
\end{enumerate}
Finally, we observe that the choices in items 2, 3 are independent. Hence any choice as in items 2, 3 subject to the choice of a section $s$ as in item 1 furnishes an element of $\FF^*$.
\end{proof}
Proposition \ref{prop-decompcones} allows us to decompose elements of $\FF^*$ into two independent components.
Item 2 provides a purely topological component of $\FF^*$; this component cannot in general be linearized to germs. Item 3, on the other hand, is defined inductively and is decomposable, albeit implicitly: it is again a hybrid of objects as in Items 2 and 3, with the latter of strictly lower `complexity of nonlinear objects'. The case where $A, B$ are manifolds is the lowest-complexity case. In this case, Proposition \ref{prop-formalfnmflds}
provides a completely linear description as a sheaf of jets (linearized germs). We note, however, that this last linear description is true only up to homotopy equivalence.
Note also that for manifolds, elements in $\FF^*(U)$ may be identified with $U$-parametrized families of sections of $E$ over the tangent spaces $T_pU$, provided there is a way (e.g.\ a connection) of identifying $T_pU$ and $T_qU$ for all $p, q \in U$. Regarding the tangent bundle $TU$ as
the germ of a neighborhood of the diagonal $\diag U \subset U \times U$, elements in $\FF^*(U)$ are thus equivalent to $U-$parametrized sections of $E$ over the normal space $N_{(p,p)} (\diag U)$ to the diagonal at some point $(p,p) \in \diag U$.
Suppose $\FF$ is a sheaf of topological spaces over a manifold $M$ such that the inclusion $\FF \subset \FF^*$ has an inverse given by a retraction of sheaves $r: \FF^* \to \FF$ so that $r$ is a fibration. Let $\PP$ denote the sheaf over $M \times M$ given in Section \ref{sec-formalfnmflds}. Define a stratification of $X=M \times M$ with strata
$S= \diag M$ and $L= (M \times M)\setminus S$. Then we define a stratified sheaf $\RR$ over
$X$ so that
\begin{enumerate}
\item $\RR|\overline{L} = \PP$
\item $\RR|S = \FF$
\item the restriction map from $\RR|\overline{L} $ to $\RR|S$ is given by first restricting
$\PP$ to $S$, to obtain $\FF^*$, and then composing with the fibration $r$.
\end{enumerate}
Then (Definition \ref{def-infstratflex}) $\RR$ is infinitesimally flexible across strata.
\subsection{Gromov diagonal normal construction: general case}\label{sec-gdn}
For the purposes of this subsection,
$(E, \Sigma_E, \mathcal{N}_E)$ and $(X, \Sigma_X, \mathcal{N}_X)$ are {abstractly} stratified spaces (see Definition \ref{def-abtractstratsp} for notation) and $P : E \to X$ is a stratified fiber bundle. Then, Lemma \ref{lem-strbdl-trivialization} and Corollary \ref{cor-trivialzn} give us the following commutative diagram.
\begin{center}
$
\begin{CD}
cB@>>>\til{N}@>\til{\pi}>>\til{S} \\
@VPVV @VPVV @VPVV\\
c{A}@>>>{N}@>{\pi}>>{S} \\
\end{CD}
$,
\end{center}
where the horizontal rows are fiber bundles.
Recall (Remark \ref{rmk-formalasgerms}) that a formal section of $P: E \to X$ on an open subset $U \subset X$ is a germ of a continuous map $s^* : \operatorname{Op}_{U \times U}(\diag(U)) \to E$ from the germ of an open neighborhood $\operatorname{Op}_{U \times U}(\diag(U))$ of the diagonal $\diag(U) \subset U \times U$ such that
\begin{enumerate}
\item $s : U \to E$ defined by $s(u) := s^*(u, u)$ is a section of $E$ over $U$.
\item For every $u \in U$, $g_u : \operatorname{Op}_U(u) \to E$ defined by ${g_u}(v) := s^*(u, v)$ is a germ of a section of $E$ over $\operatorname{Op}_U(u) \subset U$.
\item For every stratum $S \in \Sigma_X$ of $X$ intersecting $U$, $s$ is smooth on $S' = S \cap U$, i.e.\ $s^*|\operatorname{Op}_{S' \times S'}(\diag({S'}))$ is a smooth germ of a map to the unique manifold stratum $\widetilde{S} \subset E$ containing $s(S)$.
\end{enumerate}
Henceforth, in this subsection,
we shall refer to $s$ as the \emph{base} of the formal section.
\begin{defn}\label{def-holosxn} A formal section
$s^*$ is called a \emph{holonomic section} of $P: E \to X$ over $U$ if $$s|\operatorname{Op} (u) = {g_u}$$ for all $u \in U$, i.e.\ if each germ $g_u$ is the germ of the base section $s$ at $u$.
\end{defn}
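A minimal illustration in the case of a single stratum: let $U \subset \mathbb{R}$ be open and $E = U \times \mathbb{R}$ the trivial line bundle. Any smooth $f : U \to \mathbb{R}$ gives the holonomic formal section $s^*(u,v) := f(v)$, whose base is $s = f$ and whose germs $g_u$ are the germs of $f$. By contrast, for smooth $a : U \to \mathbb{R}$,
$$\tilde{s}^*(u, v) := f(u) + a(u)\,(v - u)$$
has the same base $f$, but its germs $g_u(v) = f(u) + a(u)(v - u)$ agree with the germs of $f$ only where $f$ is affine; thus $\tilde{s}^*$ is in general formal but not holonomic, even when $a = f'$.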
Let ${s^*}$ be a formal section of $P: E \to X$ over $X$. For every stratum $S$, we can restrict ${s^*}$ to $\operatorname{Op}_{S \times X}(\diag (S))$. Note that $\operatorname{Op}_{S \times X}(\diag (S))$ is isomorphic as a microbundle to $$(S \times S, p_1, \diag (S)) \oplus (N_S, \pi_S, 0_S),$$ where $N_S$ is the normal neighborhood of $S$ (cf.\ Definition \ref{def-abtractstratsp}) and $0_S$ is naturally identified with $S\subset N_S$. Here, $N_S$ is thought of as the (micro)normal bundle to $S$ in $X$ with fiber $cA$, where $A$ is the link of $S$ (see, for instance, the commutative diagram above).
\begin{defn}\label{def-tangnormformal} Using the microbundle-isomorphism
$$\operatorname{Op}_{S \times X}(\diag (S))\cong(S \times S, p_1, \diag (S)) \oplus (N_S, \pi_S, 0_S),$$
\begin{enumerate}
\item restricting $s^*$ to the first component, we obtain the \emph{tangential formal section}\\ ${\ss_{S, t}^*} : \operatorname{Op}_{S \times S}(\diag (S)) \to E$, with base section ${\ss_{S, t}}$;
\item restricting $s^*$ to the second component, we obtain the \emph{normal formal section}\\ ${\ss_{S, n}^*} : \operatorname{Op}_{N_S}(0_S) \to E$, with base section $\ss_{S, n}$.
\item Note that $\ss_{S, n}^*$ and $\ss_{S, t}^*$ have the same base section: $\ss_{S, n} = \ss_{S, n}^*|0_S = \ss_{S, t}^*|\diag(S) = \ss_{S, t} =: \ss_S$.
\end{enumerate}
\end{defn}
\begin{rmk}[Normal formal is holonomic]\label{rmk-nfisholo}
\rm{Restriction of the normal formal section $\ss_{S, n}^* : \operatorname{Op}_{N_S}(0_S) \to E$ to the fiber $c_xA=cA(x) \subset N_S$ of the normal bundle over a point $x \in S$ is a germ of a controlled section $\ss_{S, n}^*|\operatorname{Op}_{cA(x)}(x)$ of the conical component of the stratified bundle $(\mathbf{I}, P_2) : cB \to cA$ near the cone point $\{c_x\} \subset cA$. Thus, for an open chart $V \subset S$ around $x$, we obtain a map:
$$\phi_V : V \to \Gamma_c(cA, cB),\quad \phi_V(x) := \ss_{S, n}^*|\operatorname{Op}_{cA(x)}(x).$$
On the other hand, the restriction of the normal formal section $\ss_{S, n}^*$ to the zero section $0_S \subset N_S$ of the normal bundle returns the base ${\ss}_S$ of the formal section. Therefore, $\ss_{S, n}^*\vert_{V}$ is a germ of a \emph{holonomic section} around $V \times cA \cong \pi_S^{-1}(V) \subset N_S$,
$$({\ss_S}\vert_{V}, \phi_V) : \operatorname{Op}_{V \times cA}(V \times 0) \to E.$$
We pause to emphasize that while this says that $\ss^*_{S, n}$ is globally a (germ of a) holonomic section of $E$ over $N_S$, there is no meaningful way to write $\ss^*_{S, n}$ as a section of $E$ over $S$ (namely, ${\ss_S}$) together with an $S-$parametrized family of sections in $\Gamma_c(cA, cB)$. For one, observe that the bundle homomorphism $P : P^{-1}(N_S) \to N_S$ does not induce a unique map $(\mathbf{I}, P_2) : cB \to cA$ between the normal conical fibers, but rather a unique equivalence class of such maps under $\mathrm{Homeo}_c(cA)-$ and $\mathrm{Homeo}_c(cB)-$valued cocycles acting on the domain and range, respectively.
}\end{rmk}
In general it is not possible to recover the germ ${s^*}|\operatorname{Op}_{S \times X}(\diag (S))$ from the tangential and normal formal sections. However, by continuity, for any $\varepsilon > 0$, ${s^*}|\operatorname{Op}_{S \times X}(\diag (S))$ is $\varepsilon$-close to ${s^*}_{S, t} \oplus {s^*}_{S, n}$ in the $C^0$-norm on a sufficiently small neighborhood of $\diag(S)$.
Here, $$({s^*}_{S,t} \oplus {s^*}_{S, n})(x, y, z) = ({s^*}_{S, t}(x, y), {s^*}_{S, n}(x, z)) \in \widetilde{N}_S \subset E$$ for $x \in S$, $y \in \operatorname{Op}_S(x)$, $z \in \operatorname{Op}_{cA(x)}(x)$, where $cA(x) =\pi^{-1}(x)$ (cf.\ commutative diagram above). For Definition \ref{def-crformal} below,
we assume that $X, E$ are equipped with a metric as at the end of Section \ref{sec-stratfdjet}.
Further, when we say that two formal sections are $\varepsilon$-close, it is in the sense of closeness with respect to such a metric.
\begin{defn}\label{def-crformal} Let $\ss^* : \operatorname{Op}_{X \times X}(\diag (X)) \to E$ be a formal section of $P: E \to X$ over $X$. Let $S < L$ be a pair of strata in $X$.
The $\delta-$neighborhood of $S$ in $ L$ will be denoted as $N_\delta({S, L})$.
We shall say that $\ss^*$ is of \emph{regularity $C^r$} if for all pairs $S < L$ and $\varepsilon > 0$, there exists $\delta > 0$ such that
\begin{enumerate}
\item $\ss^*_{S, n}|N_\delta({S, L}) : N_\delta({S, L}) \to E$ is smooth on the open stratum $L$,
\item $\ss^*_{S, t} \oplus (\ss^*_{S, n}|N_\delta(S, L))$ is $\varepsilon$-close to $\ss^*_{L, t}$ in the $C^r$ norm. We shall summarize this condition by saying that $\ss^*_{L, t}$ is \emph{$C^r-$asymptotic} to $\ss^*_{S, t} \oplus ( \ss^*_{S, n}|N_\delta({S, L}) )$.
\end{enumerate}
\end{defn}
The sheaf of $C^r-$regular holonomic (resp.\ formal) sections over $X$ will be denoted as $\FF_r$ (resp.\ $\FF_r^*$).
Let $W \subset X$ be open equipped with the inherited stratification.
\begin{defn}\label{def-sjr}
For every stratum $S \subset W$ of $W$, and a section $s : W \to E$,
\begin{enumerate}
\item Let $A_S$ be the link of $S$ in $X$,
\item Let $\widetilde{S}$ be the unique stratum of $E$ containing $s(S)$,
\item Let $B_S$ be the link of $\widetilde{S}$ in $E$,
\item Let $p = (\mathbf{I}, P_2) : cB_S \to cA_S$ be the restriction of $P : E \to X$.
\end{enumerate}
Let $r \geq 1$. An element of $ \sjr^r(W)$ consists of a section $s : W \to E$ decorated by the following data corresponding to every stratum $S \subset W$:
\begin{enumerate}
\item A normal formal section $s^*_{S, n} : \operatorname{Op}_{N_S}(0_S) \to E$ with base $s$,
\item A formal $r-$jet $\sigma_S \in \mathcal{J}^r_E(S)$ of the fiber bundle $P : P^{-1}(S) \to S$,
\end{enumerate}
such that the following compatibility condition is satisfied. For every stratum $S \subset W$, consider $\sigma_S$ as an element of the sheaf of formal sections of $E$ over $S$. Then, for any pair of strata $S < L$ of $W$,
\begin{equation*} \text{$\sigma_L$ is $C^r-$asymptotic to $\sigma_S \oplus s^*_{S, n}$}\end{equation*}
We summarize the condition by saying $\{\sigma_S\}$ is \emph{normally $C^r-$compatible}.
\end{defn}
\begin{prop}\label{prop-formalfnnbhdstrat2}
For any $r\geq 1$, the sheaf $\FF_r^*$ is homotopy equivalent to $\sjr^r$.
\end{prop}
\begin{proof}
Consider the homomorphism of sheaves $\Phi : \FF_r^* \to \sjr^r$ given on an open set $W \subset X$ by $\Phi(W) : \FF_r^*(W) \to \sjr^r(W)$, where
$$\Phi(W)(s^*) = (s, \{s^*_{S, n}\}, \{J^r(s^*_{S, t})\})$$
Here, for every stratum $S \subset W$, $s^*_{S, n}$ and $s^*_{S, t}$ denote respectively the normal formal and tangential formal components of $s^*$ along $S$. Also, $J^r(s^*_{S, t})$ denotes $(s_{S, t}, \{J^r g_p : p \in S\})$ where we use the description $s^*_{S, t} = (s_S, \{g_p : p \in S\})$ of the tangential formal section as a base section on $S$ decorated by a germ-field of sections, as in Section \ref{sec-formalfnmflds}. This is a well-defined map, by normal $C^r$-compatibility (Definition \ref{def-sjr}).
The candidate for a homotopy inverse is given by the inclusion $\iota : \sjr^r \hookrightarrow \FF_r^*$ as a subsheaf, by considering a formal $r$-jet as a formal section. Observe that $\Phi \circ \iota = \mathrm{Id}$. To demonstrate that $\iota \circ \Phi(W)$ is homotopic to the identity map, we follow the proof of Proposition \ref{prop-formalfnmflds}. For every stratum $S$ of $W$, consider the straightline homotopy
$$F_t(s^*) = t\, s^* + (1- t)\, (J^r(s^*_{S, t}) \oplus s^*_{S, n}), \qquad t \in [0, 1].$$
This establishes the desired deformation retraction on every stratum.
\end{proof}
\begin{rmk}\label{rmk-sullivan}
Specializing the constructions of this entire section to the case of a manifold with corners, or even more specifically, to a simplex, the inductively defined structure given by Propositions \ref{prop-formalfnnbhdstrat2}, \ref{prop-decompcones} and \ref{prop-formalfnmflds} simplifies considerably, giving families of flags of tangent spaces. This answers Sullivan's question \ref{qn-sullivan}.
\end{rmk}
\section{Holonomic approximation theorem and other consequences}\label{sec-hat}
\subsection{Flexibility of jet sheaves}\label{sec-jbdlflex}
Let $P: E \to X$ be a stratified bundle (Definition \ref{def-strbdl}).
Let $\FF$ and $\FF_w$ denote respectively the stratified continuous sheaves of controlled and weakly controlled sections of $P$. Let
$\HH^r_E, \HH^r_{E, w}, \JJ^r_E, \JJ^r_{E, w}$ denote the stratified continuous sheaves of controlled holonomic, weakly controlled holonomic, controlled formal and weakly controlled formal sections of $P$ as in Section \ref{sec-stratfdjet}.
Also, let $\sjr^r$ denote the stratified sheaf given by Definition \ref{def-sjr}.
If $P : E \to X$ is a stratified fiber bundle with \emph{manifold fibers}, the sheaf of all stratified $r-$jets will be denoted as $\JJ^r_0$. (The sheaf $\JJ^r_0$ is of relevance in the example of a compact Lie group acting on a manifold $E$ with quotient a stratified space $X$.)
For any stratum $L<X$, let $E_{L}$ denote the induced stratified bundle $P_L: P^{-1} (\overline{L}) \to \overline{L}$. Replacing $X$ by $\overline{L}$, we have induced sheaves
$\FF|\overline{L}, \FF_w|\overline{L}, \JJ^r_E|\overline{L}, \JJ^r_{E,w}|\overline{L}, \HH^r_E|\overline{L}, \HH^r_{E, w}|\overline{L}, \JJ^r_0|\overline{L}, \sjr^r|\overline{L}$.
We shall abuse notation slightly and refer to the stratified sheaf given by the collections of induced sheaves
$$\{\FF|\overline{L}, \FF_w|\overline{L}, \JJ^r_E|\overline{L}, \JJ^r_{E,w}|\overline{L}, \HH^r_E|\overline{L}, \HH^r_{E, w}|\overline{L}, \JJ^r_0|\overline{L}, \sjr^r|\overline{L} : L < X\}$$
also by $\FF, \FF_w, \JJ^r_E, \JJ^r_{E,w}, \HH^r_E, \HH^r_{E, w}, \JJ^r_0, \sjr^r$. It will be clear from the context whether we are referring to the sheaf or the stratified sheaf over $X$.
We record the following observation for concreteness:
\begin{obs}\label{obs-holo=holojet}
$\FF$ and $\FF_w$ are isomorphic to $\HH^r$ and $\HH^r_w$ respectively.
\end{obs}
The morphism from $\FF$ to $\HH^r$ is obtained by adjoining $r-$jets of holonomic sections, and that from $\HH^r$ to $\FF$ forgets the decoration.
\begin{defn}\label{def-sdr} If $X$ is a manifold, a \emph{differential relation $\OO$ (of order $r$)} is a subsheaf of $\JJ^r_0$.
For a stratified space $X$, a \emph{stratified differential relation} $\{\OO_{\overline{L}} : L < X\}$ (of order $r$) is a stratified subsheaf of $\sjr^r$.
\end{defn}
We shall need an auxiliary combinatorial organizational tool.
\begin{defn}\label{def-config}
Let $(X, \Sigma)$, $(Y, \Sigma')$ be abstractly stratified spaces. A \emph{configuration of indexing sets}, or simply, a configuration, is a set-map $\mathfrak{c} : \Sigma \to \Sigma'$
between the indexing sets $\Sigma, \Sigma'$. We shall say that a stratum-preserving map $f : X \to Y$ is of configuration $\mathfrak{c}$ if $f(S) \subset \mathfrak{c}(S)$ for all $S \in \Sigma$.
\end{defn}
Let $(X, \Sigma)$, $(E, \Sigma')$ be the stratifications of $X, E$.
We assume, henceforth, in this subsection, that a configuration $\mathfrak{c} : \Sigma \to \Sigma'$ is fixed, and that the sheaves $\FF$, $\FF_w$ are \emph{implicitly decorated with an arbitrary, but fixed configuration $\mathfrak{c}$.} For $E$ a manifold, $G$ a compact group, and $X=E/G$, the configuration $\mathfrak{c}$ (used to determine $\FF$ or $\FF_w$) is uniquely determined by $P$: the fibers of $P:E \to X$ are manifolds, so each fiber has a single stratum, and
$\mathfrak{c} : \Sigma \to \Sigma'$ is automatically fixed.
\begin{theorem}\label{thm-diffrlnflex} With the assumptions above,
the stratified sheaf $\FF$ is flexible. In particular, it satisfies the parametric $h-$principle, i.e.\ $\FF \hookrightarrow (\FF)^* \simeq \sjr^r$ is a weak homotopy equivalence. (Hence, by Observation \ref{obs-holo=holojet} $\HH^r$ is flexible).
\end{theorem}
The proof we give below also shows, mutatis mutandis, that $\FF_w$ is flexible.
We first prove $\FF$ is stratumwise flexible. We begin by proving two general lemmas pertaining to flexibility.
\begin{lemma}\label{lem-fixedF}Let $F$ be a fixed quasitopological space. Let $\operatorname{Maps}(-,F)$ be the sheaf over a locally compact topological space $X$ given by
$$\operatorname{Maps}(-,F)(U) :=\operatorname{Maps}(U,F).$$
Then, $\operatorname{Maps}(-,F)$ is flexible.
\end{lemma}
\begin{proof}
Let $K_1 \subset K_2$ be a pair of compact subsets of $X$, and $W$ be a topological space. Suppose $\phi : W \times I \to \operatorname{Maps}(-, F)(K_1)$ is a homotopy with a given initial lift $\psi_0 : W \times \{0\} \to \operatorname{Maps}(-, F)(K_2)$ of $\phi|W \times \{0\}$. By local compactness of $X$,
\begin{align*}\operatorname{Maps}(-, F)(K_i) &= \varinjlim_{U \supset K_i} \operatorname{Maps}(U, F) \\
&= \varinjlim_{K' \supset K_i} \varinjlim_{U \supset K'} \operatorname{Maps}(U, F) \\
&= \varinjlim_{K' \supset K_i} \operatorname{Maps}(K', F)\end{align*}
where $K'$ varies over compact neighborhoods of $K_i, i = 1, 2$. Therefore, we may find compact neighborhoods $K'_1 \supset K_1$, $K'_2 \supset K_2$ such that $\phi$ factors through $\phi' : W \times I \to \operatorname{Maps}(K'_1, F)$ and $\psi_0$ factors through $\psi'_0 : W \times \{0\} \to \operatorname{Maps}(K'_2, F)$. By further shrinking $K'_1, K'_2$ if necessary we may ensure $K'_1 \subset K'_2$ and $\psi'_0$ is a lift of $\phi'|W \times \{0\}$ to $K'_2$. \\
Since $K_1' \subset K_2'$ is a compact inclusion and hence a cofibration, by \cite[p. 50]{may}, $\operatorname{Maps}(K_2', F) \to \operatorname{Maps}(K_1', F)$ is a fibration. Therefore, $\phi'$ admits a lift $\psi' : W \times I \to \operatorname{Maps}(K_2', F)$ such that $\psi'|W \times \{0\} = \psi'_0$. Let $\psi : W \times I \to \operatorname{Maps}(-, F)(K_2)$ denote the germ of $\psi'$ around $K_2$. Then $\psi : W \times I \to \operatorname{Maps}(-, F)(K_2)$ is a lift of $\phi$, with $\psi|W \times \{0\} = \psi_0$. This proves $\operatorname{Maps}(-, F)(K_2) \to \operatorname{Maps}(-, F)(K_1)$ is a fibration, establishing flexibility of $\operatorname{Maps}(-, F)$.
\end{proof}
\begin{lemma}
\label{lem-flexpdkt}Let $\mathcal{F}, \mathcal{G}$ be flexible sheaves on a locally compact topological space $X$. Then $\mathcal{F} \times \mathcal{G}$ is flexible.
\end{lemma}
\begin{proof}
Let $K_1 \subset K_2$ be a pair of compact subsets of $X$. By hypothesis, the restriction maps $\mathcal{F}(K_2) \to \mathcal{F}(K_1)$ and $\mathcal{G}(K_2) \to \mathcal{G}(K_1)$ are fibrations. Therefore, as a product of fibrations is a fibration,
$$(\mathcal{F} \times \mathcal{G})(K_2) = \mathcal{F}(K_2) \times \mathcal{G}(K_2) \to \mathcal{F}(K_1) \times \mathcal{G}(K_1) = (\mathcal{F} \times \mathcal{G})(K_1)$$
is a fibration. This proves the lemma.
\end{proof}
\begin{cor}\label{cor-jetsflex}
${\mathcal{F}}$ is stratumwise flexible.
\end{cor}
\begin{proof}
Each stratum $S$ is a manifold and ${E|_S}$ is a genuine smooth bundle; let $F_S$ denote the fiber of $E|S$. Note that $\FF_S = i^*\FF_{\bar{S}}$ is the sheaf of sections of $E|S$ over $S$. {Given any $x \in S$, we may choose an open neighborhood $U$ of $x$ over which $E|S$ trivializes. Therefore, $i^*_U\FF_S \cong i^*_U\operatorname{Maps} (-,F_S)$}. Hence, $\FF_S$ is locally flexible by Lemma \ref{lem-fixedF}. Global flexibility now follows from Gromov's localization lemma \cite[p. 79]{Gromov_PDR}.
Alternatively, a partition of unity directly allows us to glue families of sections over a family of sets, and hence to extend families of sections, establishing that $\FF_S$ is flexible.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-diffrlnflex}] Let $x \in X$ be a point, and $S$ be the unique stratum containing $x$. Let us choose a neighborhood $W \subset X$ of $x$ such that $W \cong V \times cA$, where $V = W \cap S$ and $cA$ is the normal cone of $S$ in $X$. Then $\mathcal{F}(W)$ is the quasitopological space of sections of $E$ over $W$. We obtain a map
$$\mathrm{res}^W_V : \mathcal{F}(W) \to \mathcal{F}_S(V)$$
by restricting a section of $E$ over $W$ to a section of $E|_S$ over $V \subset S$. Explicitly, let $s : V \times cA \cong W \to E$ be a section in $\mathcal{F}(W)$. Let $\widetilde{S}$ be the unique stratum of $E$ containing $s(V)$, and $cB$ be the normal cone of $\widetilde{S}$ in $E$. Then by the local structure of stratified bundles, $s(v, a) = (t(v), f(v, a))$ for all $(v, a) \in V \times cA$ where $t : V \to \widetilde{S}$ is a section and $f : V \to \Gamma(cA, cB)$ is a $V-$parametrized family of sections of $E$ over $cA$. The map above is given by $\mathrm{res}^W_V(s) = t$. Consequently, $\mathrm{res}^W_V$ is equivalent to the following product fibration, given by projection to the first factor
$$\mathcal{F}_S(V) \times \operatorname{Maps}(V, \Gamma(cA, cB)) \to \mathcal{F}_S(V).$$
As a corollary, we obtain $\mathcal{F}(W) \cong \mathcal{F}_S(V) \times \operatorname{Maps}(V, \Gamma(cA, cB))$. As this isomorphism is natural under restrictions to open subsets $W' \subset W$, $V' = W' \cap S \subset V$, it establishes an isomorphism of sheaves
$$i^*_V\mathcal{F} \cong i^*_V\operatorname{Maps}(-, \Gamma(cA, cB)) \times i^*_V\mathcal{F}_S$$
By Lemma \ref{lem-fixedF}, $\operatorname{Maps}(-, \Gamma(cA, cB))$ is flexible and by Corollary \ref{cor-jetsflex}, $\mathcal{F}_S$ is flexible. As restrictions and products of flexible sheaves are flexible, $i^*_V\mathcal{F}$ is flexible. Therefore, $\mathcal{F}$ is locally flexible and hence, by Gromov's localization lemma \cite[p. 79]{Gromov_PDR}, $\mathcal{F}$ is flexible.\end{proof}
\medskip
\noindent {\it A description of $\HH^L_S$:}\\
Let $S<L$ denote strata of $X$. Let $\HH^L_S$ denote the restriction of $$\overline{\HH}^L_S:=\operatorname{hofib} (i_{\overline{S}}^*\FF_{\overline{L}}\to \FF_{\overline{S}} )$$ to the topmost stratum of definition of $\overline{\HH}^L_S$, i.e.\ to the (open) stratum $S$. Equivalently,
$$\HH^L_S=\operatorname{hofib} (i_{{S}}^*\FF_{\overline{L}}\to \FF_{{S}} ),$$
where we assume that a section
$\psi_S \in {\FF_{{S}}(S)}$ has been fixed, and homotopy fibers are computed with respect to $\psi_S$.
Let $U\subset S$ be a local (Euclidean) chart. Note that a small normal neighborhood of $U$ in $\overline L$ is of the form $U \times cA^L_S$,
where $A^L_S$ is the link of $S$ in $\overline{L}$.
Then a neighborhood $N_{SL}$ of $S$ in $\overline L$ is a $cA^L_S-$bundle over $S$. {Let $S'$ be the unique stratum in $E$ containing $\psi_S(S)$}. Let ${B_{S'}}$ be the link of $S'$ in ${E}$.
Then $\HH^L_S (U)=\HH^L_S (U,\psi_S)$ consists of two components:
\begin{enumerate}
\item A section of $E$ over $N_{SL}(U)$ {restricting to $\psi_S\vert_{U}$ over the zero section $U \subset N_{SL}(U)$}. Germinally, this is equivalent to a map from $U$ to
$\Gamma_c (cA^L_S, c{B_{S'}})$ as in the proof of Corollary \ref{cor-jetsflex} above. Note that by Lemma \ref{lem-fixedF}, the sheaf $\operatorname{Maps}(-, \Gamma_c (cA^L_S, c{B_{S'}}))$ is flexible. We shall refer to this
as the {\it germinal $L-$component}.
\item A path of sections over $U$ in $\FF_{{S}}$ starting at $\psi_S|U$, i.e.\ a continuous map $h:[0,1] \to \FF_{{S}} (U)$, such that $h(0) = \psi_S|U$. Let $P_{\psi}(U)$ denote the collection of such maps.
\end{enumerate}
Let $\GG$ be a sheaf on $S$ defined by $$\GG(U) = P_\psi(U).$$
\begin{lemma}\label{lem-psapaceflex}
$\GG$ is flexible.
\end{lemma}
\begin{proof}
We note that for $U$
a local chart, the restriction $\GG_U$ of $\GG$ to $U$ is given by
$$\GG_U(V) = \operatorname{Maps} \big((I\times V , \{0\}\times V), (F_S,\psi_S)\big).$$
The homotopy extension property from subcomplexes of $S$ to the fiber $F_S$ then gives the lemma.
{Alternatively, we may use Lemma \ref{lem-mapsI2F} to deduce the lemma.}
\end{proof}
{Using the proof of Theorem \ref{thm-diffrlnflex}, we can explicitly compute the homotopy fiber $\HH^L_S$ for the sheaf $\FF$ of sections of $P : E \to X$. Indeed, observe that for quasitopological spaces $X, Y$, the homotopy fiber of the product fibration $X \times Y \to Y$ over a point $y \in Y$ is \emph{homeomorphic} to $X \times P_y Y$, where $P_y Y \subset \operatorname{Maps}(I, Y)$ consists of the collection of maps $\gamma : I \to Y$ with $\gamma(0) = y$, with the inherited quasitopology. Therefore,
$$\HH^L_S \cong \operatorname{Maps}(-, \Gamma_c (cA^L_S, c{B_{S'}})) \times \GG.$$
}
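For completeness, here is a sketch of the point-set identification used above; it is standard and not specific to the stratified setting. For the projection $\mathrm{pr}_2 : X \times Y \to Y$ and $y \in Y$,
$$\operatorname{hofib}_y(\mathrm{pr}_2) = \left\{ \big((x, y'), \gamma\big) \in (X \times Y) \times \operatorname{Maps}(I, Y) \, : \, \gamma(0) = y, \ \gamma(1) = y' \right\} \cong X \times P_y Y,$$
where the homeomorphism sends $((x, y'), \gamma) \mapsto (x, \gamma)$; the endpoint $y' = \gamma(1)$ is redundant data, so this is indeed a homeomorphism of quasitopological spaces.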
\begin{prop}\label{prop-hlsflex}
$\HH^L_S$ is flexible.
\end{prop}
\begin{proof}{By Lemma \ref{lem-fixedF}, $\operatorname{Maps}(-, \Gamma_c(cA^L_S, cB_{S'}))$ is flexible. By Lemma \ref{lem-psapaceflex}, $\GG$ is flexible. Therefore, using Lemma \ref{lem-flexpdkt}, we conclude $\HH^L_S$ is flexible.}\end{proof}
Recall Gromov's convention \cite[Section 1.4.1]{Gromov_PDR} of referring to an arbitrarily small but non-specified neighborhood of a set $K \subset X$
by $\operatorname{Op} K$.
The following are direct adaptations of Gromov's definitions of the smooth h-principle for manifolds from \cite[p. 37]{Gromov_PDR} to the stratified context.
We spell these out for completeness.
\begin{defn}\label{def-hprin}
A stratified differential relation $\RR$ is said to satisfy the
\begin{enumerate}
\item \emph{stratified h-principle near a
subset} $K \subset X$ if for every section $\phi: U(K) \to \RR$ on a neighborhood $U(K)$ of $\overline{K}$, there exists an open neighborhood $U'$ of $\overline{K}$, such that $\phi\vert_{U'}$ is homotopic to a holonomic section.
\item \emph{stratified parametric h-principle} near $K$ if the map $$f \mapsto J^r_f$$
from the
space of solutions of $\RR$ on $\operatorname{Op} K$ to the space of sections $\operatorname{Op} K \to \RR$ is a weak
homotopy equivalence.
\item \emph{stratified h-principle for extensions of
$\RR$,
from $K_1$ to $K_2 \supset K_1$} if for every
section $\phi_0: \operatorname{Op} K_2\to \RR$
which is holonomic on $K_1$, there exists a
homotopy to a holonomic
section
$\phi_1$ by a homotopy of sections $\phi_t: \operatorname{Op} K_2\to \RR$, $t \in [0, 1]$, such that
$\phi_t \vert\operatorname{Op} K_1$ is constant
in $t$.
\item \emph{parametric stratified h-principle for extensions of
$\RR$,
from $K_1$ to $K_2 \supset K_1$} if the map $f \mapsto J^r_f$
from the space of solutions of $\RR$ on $\operatorname{Op} K_2$ to the space of sections $\operatorname{Op} K_2 \to \RR$
which are holonomic on $K_1$, is a weak
homotopy equivalence.
\end{enumerate}
\end{defn}
\subsection{Holonomic approximation for jet sheaves}\label{sec-emhat}
We turn now to generalizing
the smooth versions of the h-principle due to Eliashberg-Mishachev \cite{em-expo,em-book} to stratified spaces. The following is the \emph{holonomic approximation theorem} for smooth bundles over smooth manifolds.
\begin{theorem}\label{em-hat}\cite[Theorem 1.2.1]{em-expo}\cite[Theorem 3.1.1, p.20]{em-book}
Let $V$ be a manifold, $E \to V$ be a smooth bundle, and $K \subset V$ be a polyhedron of positive codimension.
Let ${f \in \JJ^r_E(\operatorname{Op}\, K)}$ be a formal section. Then for {any} $\ep > 0$,
{$\delta > 0$, there exist a}
diffeomorphism $h : V \to V$ with $$||h-{\mathrm{Id}}||_{C^0} < \delta,$$ and a holonomic section ${\til{f} \in \JJ^r_E(\operatorname{Op}\, K)}$ such
that
\begin{enumerate}
\item the image $h(K)$ is contained in the domain of definition of the section $f$, and
\item $||\til{f}- f|\operatorname{Op} h(K) ||_{C^0} < \ep$.
\end{enumerate}
In fact, $h$ may be chosen as the time-one value of a diffeotopy $\{h_t : t \in [0,1]\}$, with
$h_0$ equal to the identity, $h_1=h$, and
for all $t \in [0,1]$,
\begin{enumerate}
\item the image ${h_t(K)}$ is contained in the domain of definition of the section $f$, and
\item $||h_t-{\mathrm{Id}}||_{C^0} < \delta$.
\end{enumerate}
\end{theorem}
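The hypothesis that $K$ have positive codimension cannot be dropped. The following standard example (sketched here for the reader; it is not specific to the stratified theory developed below) already exhibits the failure for $V = \mathbb{R}$, $E = V \times \mathbb{R}$, $r = 1$ and $K = [0,1]$. Consider the formal section $F$ with $0-$jet component identically $0$ and derivative component identically $1$. If $\til{f} = (g, g')$ were a holonomic section with $||\til{f} - F||_{C^0} < \ep$ near $h(K)$ for some $C^0$-small diffeomorphism $h$, then $h(K)$ contains an interval $[a,b]$ of length close to $1$, and
$$g(b) - g(a) = \int_a^b g'(x)\, dx \geq (1 - \ep)(b - a),$$
while $|g| < \ep$ forces $g(b) - g(a) < 2\ep$, a contradiction for small $\ep$.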
Recall that diffeotopies are smooth 1-parameter families of diffeomorphisms
\cite[p. 37]{Gromov_PDR}.
\begin{defn}\label{def-crsmallisotopynormal} Let $S$ be a stratum of a stratified space $X$, with link $A$, and $N_S$ a normal neighborhood of $S$ in $X$; hence $N_S$ is a $cA-$bundle over $S$. Fix local trivializations $\{U_i\}$ and compatible
local product metrics $(g_S^i, g_{cA}^i)$ on $U_i \times cA \subset N_S$.
A stratified diffeotopy ${\{h_t: t \in [0,1]\}}$ of $X$ supported in $N_S$ is said to be normally $\ep-$small in the $C^r$ norm
if the following hold.
\begin{enumerate}
\item $\operatorname{diam} \{h_t(s): t \in [0,1]\} < \ep$ for all $s \in U_i \times cA \subset N_S$ and all $i$.
\item Let ${\phi_{(t,x)} : cA(x) \to cA(h_t(x))}$ denote the map induced by $h_t$ from the cone ${cA(x)}$
at the point $x \in S$ to the cone ${cA(h_t(x))}$
at the point ${h_t(x)} \in S$. We demand that
for all strata $J$ of $cA \setminus \{c_A\}$, the $C^r-$norms of $\phi_{(t,x)}\vert_J$ are bounded above by $\ep$ for all $x\in S$ and $t \in [0,1]$.
\end{enumerate}
\end{defn}
Note that Theorem \ref{em-hat} only ensures $C^0-$closeness in its conclusion.
However, Definition \ref{def-crformal} allows us to introduce the notion of $C^r-$closeness of formal sections \emph{in the normal direction}.
\begin{defn}\label{def-ncrclose}
{We shall say that a pair of $C^r$-regular formal sections $\phi, \psi$ of $P: E \to X$ over $U \subset X$ are \emph{normally $\varepsilon$ $C^r$-close} if
\begin{enumerate}
\item The continuous maps $\phi|\diag(U)$ and $\psi|\diag(U)$ are $\varepsilon$-close in the $C^0$-norm.
\item For every pair of strata $S < L$ intersecting $U$, and $S' = S \cap U, L' = L \cap U$, the germs
$\phi_{S', n}, \psi_{S', n}$ are $C^r-$asymptotic (cf.\ Definition \ref{def-crformal}), in the following sense: for every $\varepsilon_1 > 0$, there exists $\delta > 0$ such that $\phi_{S', n}|N_\delta(S', L')$ and $\psi_{S',n}|N_\delta(S', L')$ are $\varepsilon_1$-close in the $C^r$-norm.
\end{enumerate}}
\end{defn}
{\begin{rmk}\label{rmk-ncrclose}If for every stratum $S$ of $X$ intersecting $U$, $\phi|_S, \psi|_S$ are $C^r$-close, then $\phi, \psi$ are normally $C^r$-close as well. The converse need not be true.\end{rmk}}
To extend Theorem \ref{em-hat} to stratified spaces, we need some basic differential topology facts about stratified spaces. The following allows us to extend diffeotopies across strata.
\begin{lemma}\label{lem-isotopyext} Let $(X, \Sigma, \mathcal{N})$ be an abstractly stratified space and $S \in \Sigma$ be a stratum. Let $h : S \times I \to S$ be a diffeotopy of $S$ supported on a {compact} set $K \subset S$. Then there exists an extension {of $h$} to a stratified diffeotopy $H : X \times I \to X$.
If $h$ is $C^0$-small, the extension $H$ is normally $C^r-$small for any $r>0$.
Moreover, if $h$ is $C^r$-small, $H$ is stratumwise $C^r-$small as well.
\end{lemma}
As a prerequisite to the proof and for later use as well, we will state and prove a general result regarding fiber bundles. The main content of this result is a fiber-preserving analogue of the homotopy extension property. To set it up, let $p : E \to X$, $p' : E' \to X'$ be fiber bundles with fibers $Y$ and $Y'$, respectively. Fix basepoints $y \in Y$, $y' \in Y'$ and suppose furthermore that $p$, $p'$ have as their structure groups the groups of basepoint-preserving homeomorphisms $\mathrm{Homeo}(Y, y)$ and $\mathrm{Homeo}(Y', y')$, respectively. Let $s : X \to E, s' : X' \to E'$ denote the canonical sections of $p$ and $p'$ parametrizing the fiberwise basepoints. Note that $p \times \mathrm{id} : E \times I \to X \times I$ is also a fiber bundle with fiber space $Y$ and structure group $\mathrm{Homeo}(Y, y)$, with a canonical section $s \times \mathrm{id} : X \times I \to E \times I$ traced out by the fiberwise basepoints, as before. Further, suppose $X, X', E, E'$ are equipped with metrics compatible with their topology.
\begin{lemma}\label{lem-fiberhep}
Suppose $f : X \to X'$ and $g : E \to E'$ are maps such that
\begin{enumerate}
\item $g$ covers $f$, i.e.\ $p' \circ g = f \circ p$, and
\item $g$ preserves the sections $s, s'$, i.e.\ $g \circ s = s' \circ f$.
\end{enumerate}
Let $F : X \times I \to X'$ be a homotopy such that $F|X \times \{0\} = f$. Then, there exists a map $G : E \times I \to E'$ such that
\begin{enumerate}
\item $G$ covers $F$, i.e.\ $p' \circ G = F \circ (p \times \mathrm{id})$, and
\item $G$ preserves the sections $s \times \mathrm{id}, s'$, i.e.\ $G \circ (s \times \mathrm{id}) = s' \circ F$.
\end{enumerate}
Moreover, if $\mathrm{diam}(F(\{x\} \times I)) < \varepsilon$ uniformly for all $x \in X$, then we may choose $G$ such that $\mathrm{diam}(G(\{e\} \times I)) < \varepsilon$ uniformly for all $e \in E$.
\end{lemma}
\begin{proof} Consider the fiber bundle $F^* E'$ over $X \times I$. This is a principal $\mathrm{Homeo}(Y', y')$-bundle over $X \times I$ which restricts to $f^* E'$ over $X \times \{0\}$, therefore there is an isomorphism of principal $\mathrm{Homeo}(Y', y')$-bundles $(f^* E') \times I \to F^* E'$ over $X \times I$. The map $g : E \to E'$ covering $f : X \to X'$ furnishes a fiber-preserving map $h : E \to f^* E'$ covering the identity map on $X$. We collect all these maps in the following commutative diagram:
$$
\begin{CD}
E \times I @>{h \times \mathrm{id}}>> (f^*E') \times I @>{\cong}>> F^*E' @>>> E' \\
@V{p\times \mathrm{id}}VV @VVV @VVV @V{p'}VV\\
X \times I @= X \times I @= X \times I @>{F}>> X'
\end{CD}
$$
Let $G : E \times I \to E'$ be the composition of the three horizontal arrows on the top. Then by commutativity of the outer rectangle, $G$ satisfies Condition $(1)$. Next, observe that the horizontal maps in the leftmost and the rightmost squares preserve the canonical sections, as required for Condition $(2)$. Indeed, the canonical sections of $F^* E'$ and $(f^* E') \times I$ are furnished by the basepoint $y'$ as their structure groups are in $\mathrm{Homeo}(Y', y')$. The middle square is an isomorphism of principal $\mathrm{Homeo}(Y', y')$-bundles; hence it must necessarily preserve the relevant canonical sections. This proves that $G$ satisfies Condition $(2)$ as well.
For the final assertion, observe that as $F(\cdot, t)$ stays uniformly $\varepsilon$-close to $f$ for all $t \in I$, the cocycles of $F^* E'$ are $\varepsilon$-close to those of $(f^*E') \times I$. By the proof of Proposition 1.7 in \cite[p. 20]{hatcher-notes}, we see that this implies that the fiber-preserving homeomorphism $(f^* E') \times I \to F^* E'$ of the top horizontal arrow in the middle square can be chosen to be uniformly $\varepsilon$-close to the identity, where both sides are considered as metric subspaces of $E' \times X \times I$. Thus, the composition of the last two top horizontal arrows $(f^* E') \times I \to E'$ is uniformly $\varepsilon$-close to the map induced by the constant homotopy $X \times I \to X'$, $(x, t) \mapsto f(x)$. The image of $\{z\} \times I \subset (f^* E') \times I \to E'$ under the latter has diameter $0$. Therefore, under the former, it has diameter uniformly bounded by $\varepsilon$. This shows that the composition $G : E \times I \to E'$ satisfies the desired property $\operatorname{diam} G(\{e\} \times I) < \varepsilon$ for all $e \in E$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem-isotopyext}] Let $A$ denote the link of $S$ in $X$.
A tubular neighborhood $N_S$ of $S$ in $X$ is a $cA$-bundle over $S$, i.e.\ a fiber bundle over $S$ with fiber $cA$ (cf.\ Corollary \ref{cor-trivialzn}). The structure group of this bundle is $\mathrm{Homeo}(cA, \{c_A\})$ where $\{c_A\} \subset cA$ is the cone point. By Lemma \ref{lem-fiberhep}, we can extend $h : S \times I \to S$ to a homotopy $f : N_S \times I \to N_S$. Moreover, since $h(\cdot, t)$ is a diffeomorphism, $f(\cdot, t)$ is an isomorphism of bundles, for all $t \in I$.
Next, we construct a stratified diffeotopy {$\tilde{h} : N_S \times [0, 1] \to N_S$ by defining $\tilde{h}(x, t) = f(x, 3t)$ for $(x, t) \in N_S \times [0, 1/3]$, $\tilde{h}(x, t) = f(x, 2-3t)$ for $(x, t) \in N_S \times [1/3, 2/3]$, and $\tilde{h}(x, t) = x$ for $(x, t) \in N_S \times [2/3, 1]$, smoothing at $N_S \times \{1/3\}$ and $N_S \times \{2/3\}$ if required. We extend to a diffeotopy $H : X \times I \to X$ by defining $H(x, t) = x$ for all $x \in X \setminus K$ and $t \in [0,1]$.}
We now prove the second assertion. Note first that $N_S$ is equipped with a continuous metric that is stratumwise smooth. We can change the metric to an equivalent metric that is locally a product metric on $N_S|K$, thinking of $N_S$ as a $cA-$bundle over $S$, to satisfy the conditions of Definition \ref{def-crsmallisotopynormal}. Then, locally on any $U \times cA$ we extend $h$ by the identity on the second coordinate. The resulting extension is then normally trivial, in particular, normally $C^r-$small on $N_S|K$. Completing the extension to a diffeotopy $H : X \times I \to X$ as above, we see that $H$ is normally $C^r-$small for any $r>0$.
Finally note that for any stratum $L$ such that $S < L$, the $C^r$-distance of $H(\cdot, t)$ and $\mathrm{id}$ on $L$ is comparable to the sum of the $C^r$-distance restricted to $S$ and the normal $C^r$-distance.
The last assertion follows. \end{proof}
Combining Theorem \ref{em-hat} and Lemma \ref{lem-isotopyext} we have:
\begin{cor}\label{cor-isotopyext} Let $X, S$ be as in Lemma \ref{lem-isotopyext} above,
$P: E \to X$ be a {stratified} bundle, and $K \subset X$ be a substratified space of positive codimension. Let $A$ denote the link of $S$ and $N_S$ a normal neighborhood of $S$ given by a {$cA-$bundle} over $S$.
Let $K_S = K \cap S$, and let $N(K_S) $ denote the restriction of the bundle $N_S$ to $K_S$.
Let ${f \in \sjr^r_E(\operatorname{Op}\, K)}$ be a $C^r-$regular formal section. Then for any $\varepsilon > 0,\delta > 0$, there exist
\begin{enumerate}
\item a
diffeotopy $h_t : S \to S$ with $h=h_1$ and $||h_t- {\mathrm{Id}}||_{C^0} < \delta$ for all $t \in [0,1]$,
\item an extension $H_t: X \to X $ of $h_t$ supported in a small neighborhood of $N(K_S) $,
\end{enumerate}
and a holonomic section ${\til{f} \in \sjr^r_E(\operatorname{Op}\, K_S)}$ such
that
\begin{enumerate}
\item the image $h(K_S)$ is contained in the domain of definition of the section $f$,
\item $||\til{f}- f|\operatorname{Op}_X h(K_S) ||_{C^0} < \ep$,
\item $\til{f}$ and $f|\operatorname{Op}_X h(K_S)$ are normally $C^r-$close on $N_S$.
\end{enumerate}
\end{cor}
\begin{proof}
The existence of a diffeotopy $h_t : S \to S$ with $||h_t-{\mathrm{Id}}||_{C^0} < \delta$ such that
\begin{enumerate}
\item for all $t\in [0,1]$, the image $ h_t(K_S)$ is contained in the domain of definition of the section $f$, and
\item there exists a holonomic section $\til{f}_S : \operatorname{Op}_S h(K_S) \to E$ such that $||{\til{f}_S}- f|\operatorname{Op}_S h(K_S) ||_{C^0} < \ep$,
\end{enumerate}
is guaranteed by Theorem \ref{em-hat}.
Note that so far ${\til{f}_S}$ is defined only on $S$, and the neighborhood $\operatorname{Op}_S h(K_S)$ is only taken within $S$.
It remains to extend ${\til{f}_S}$ into $N_S$ and extend the neighborhood $\operatorname{Op}_S h(K_S)$ to an open neighborhood $\operatorname{Op}_X h(K_S)$. We first apply Lemma \ref{lem-isotopyext} to $h_t$ to obtain a stratified diffeotopy $H_t$ such that
\begin{enumerate}
\item $H_t$ is supported in a small neighborhood of $N(K_S) \subset N_S$.
\item $H_t$ is normally $C^r-$small.
\end{enumerate}
Let $H=H_1$. Then the first item above guarantees that $H$ is supported in a small neighborhood of $N(K_S)$, and may differ from the identity on $N(K_S)$ itself.
Let $cA(x)$ denote the normal cone of $S$ in $X$ at $x$. Note that ${H(cA(x)) = cA(h(x))}$ (in fact, the proof of Lemma \ref{lem-isotopyext} shows that $H$ may be chosen to be the identity in the normal coordinate).
To extend ${\til{f}_S}$ into $N_S$ holonomically, it suffices to define a holonomic {extension}
of ${\til{f}_S}$ on the restriction of $N_S$ to ${K_S}$. By Remark \ref{rmk-nfisholo}, the normal formal section $f_{S, n}$ associated to $f$ is a holonomic section of $E$ over $N_S$. Our main strategy here is to ``graft the conical component of $f_{S, n}$ with $\til{f}_S$" to build a section $\til{f} : \operatorname{Op}_X h(K_S) \to E$. The associated holonomic stratified $r$-jet would be the desired element $\til{f} \in \sjr^r(\operatorname{Op} K_S)$. We accomplish this in an indirect manner in light of the warning at the end of Remark \ref{rmk-nfisholo}.
As $\widetilde{f}_S$ and $f|S$ are $\varepsilon$-close as formal $r$-jets, the sections $\widetilde{f}_S$ and $ f_{S, n} $ from $ \operatorname{Op}_S(K) \to E$ are $\varepsilon-$close as well. Let $\widetilde{S}$ denote the unique stratum containing the image of $S$ under $\widetilde{f}_S$ and $ f_{S, n} $. (The existence of such an $\widetilde{S}$ is guaranteed if $\varepsilon > 0$ is sufficiently small.) Let us moreover fix an open neighborhood $U \subset S$ of $K_S$ in $S$ which is contained in the domains of definition of both $\widetilde{f}_S$ and $f_{S, n}$. Let $\sigma : U \times I \to \widetilde{S}$ be a homotopy through sections $\sigma_t : U \to \widetilde{S}$, between $\sigma_0 = f_{S, n}|U$ and $\sigma_1 = \widetilde{f}_S|U$. By Lemma \ref{lem-fiberhep}, there exists an extension
$$\widetilde{\sigma} : N_S(U) \times I \to N_{\widetilde{S}} \subset E$$
where $N_{\widetilde{S}}$ is the normal cone bundle of $\widetilde{S}$ in $E$, and $N_S(U)$ denotes restriction of $N_S$ to $U$. If $\varepsilon > 0$ is sufficiently small, we may ensure that $h(K_S) \subset U$. Let $\widetilde{f} := \widetilde{\sigma}_1|\operatorname{Op}_X h(K_S)$. Then $\widetilde{f}$ is a holonomic extension of $\widetilde{f}_S|\operatorname{Op}_X h(K_S)$ into a neighborhood of $K_S$ in $X$. Therefore,
\begin{enumerate}
\item $\til f$ is holonomic on $\operatorname{Op}_X h(K_S)$ by construction.
\item Since $f$ is $C^r-$regular by hypothesis and equals $\til f$ on germs of normal cones on $S$, $f$ and $\til f$ are normally $C^r-$close on $N_S$.
\end{enumerate}
This completes the proof.
\end{proof}
It is a well-known fact that
for a smooth bundle $P: E \to M$ over a compact manifold $M$, any two sufficiently close sections $s_1, s_2:
M \to E$ are smoothly isotopic through sections. To see this, fix $s_1(M) = M_1 \subset E$, and let $N_\ep M_1 \subset E$ be a regular normal neighborhood obtained, for instance, by equipping $E$ with a Riemannian metric, and using the exponential map $\exp$ to exponentiate the normal bundle $N_EM_1$ of $M_1$ in $E$ down to $E$. Let $H_t$ denote the linear homotopy on $N_EM_1$ that collapses all the linear fibers down to $M_1$ (identified with the zero section of $N_EM_1$).
If $s_2(M) \subset N_\ep M_1$, then $\exp \circ H_t \circ \exp^{-1}$ gives a smooth isotopy of $s_2$ to $s_1$.
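Concretely, with the identifications above, $H_t$ is fiberwise scaling in the normal bundle:
$$H_t(v) = (1 - t)\, v, \qquad v \in N_EM_1, \ t \in [0,1],$$
so that $H_0 = \mathrm{id}$ and $H_1$ is the projection to the zero section; consequently $\exp \circ H_t \circ \exp^{-1}$ deforms the graph $s_2(M) \subset N_\ep M_1$ onto $M_1 = s_1(M)$.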
The same proof generalizes in a straightforward way to stratified spaces.
Let $P: E \to Y$ be a stratified fiber bundle over a compact stratified space $Y$ and $s_1: Y \to E$ be a stratified section. Let $s_1(Y) = Y_1$. Embedding $E$ in a smooth manifold $E'$ by Theorem \ref{natsume}, equipping $E'$ with a Riemannian
metric $g$, and restricting $g$ to $E$, we obtain {a} stratumwise
Riemannian metric $g_s$ on $E$.
Let $N_\ep Y_1 \subset E$ be a regular normal neighborhood of $Y_1$ in $E$ obtained by exponentiating the stratumwise normal bundle with respect to the stratumwise
Riemannian metric $g_s$. Then a fiberwise linear homotopy exists as in the manifold case.
This establishes the following.
\begin{lemma}\label{lem-closebyisotopic}
Let $P: E \to Y$ be a stratified fiber bundle over a compact stratified space $Y$ and $s_1: Y \to E$ be a smooth stratified section. Further, assume that $E$ is equipped with a metric $d_s$ that is stratumwise smooth Riemannian. Let $s_2: Y \to E$ be a smooth stratified section. Then for all $\ep >0$ there exists $\delta >0$ such that if $d_s(s_1(y), s_2(y))<\delta$ for all $y \in Y$, then there exists a stratified isotopy
$H: Y \times [0,1] \to E$ such that
\begin{enumerate}
\item $H(y,0) = s_1(y)$
\item $H(y,1) = s_2(y)$
\item $d_s(s_1(y), H(y,t))<\ep$ for all $y \in Y$ and $t \in [0,1]$.
\end{enumerate}
\end{lemma}
\begin{rmk}\label{rmk-closebyisotopic}
Compactness is not essential in the proof of Lemma \ref{lem-closebyisotopic}. All that was required was the existence of a normal neighborhood as the image of an open neighborhood of the zero-section. This goes through for $Y$ non-compact as well provided we allow the thickness of the open neighborhood to be non-constant.
\end{rmk}
As a consequence of Lemma \ref{lem-closebyisotopic} we have the following:
\begin{cor}\label{cor-interpolate}
Let $P: E \times (-\ep,1+\ep) \to Y\times (-\ep,1+\ep)$ be a stratified fiber bundle over a stratified space $Y \times (-\ep,1+\ep)$, where $Y$ is a compact stratified space.
Let $s_1, s_2: Y\times (-\ep,1+\ep) \to E \times (-\ep,1+\ep) $ be two stratified smooth sections
that are $\delta-$close in the $C^r-$norm. Then there exists a section $s_3: Y\times (-\ep,1+\ep) \to E \times (-\ep,1+\ep) $ such that $s_3$ interpolates smoothly between $s_1, s_2$, i.e.\
\begin{enumerate}
\item $s_3|(-\ep,0] =s_1|(-\ep,0]$
\item $s_3|[1,1+\ep) =s_2|[1,1+\ep)$
\item $s_3$ is $\delta-$close to both $s_1, s_2$ in the $C^r-$norm.
\end{enumerate}
More generally, if there exists a stratum $S$ of $Y$ and a submanifold $S'$ of $S$ such that
$s_1|S' \times (-\ep,1+\ep)$ and $s_2|S' \times (-\ep,1+\ep)$ are $\delta-$close in the $C^r-$norm, then there exists a section $s_3: Y\times (-\ep,1+\ep) \to E \times (-\ep,1+\ep) $ such that $s_3$ interpolates smoothly between $s_1, s_2$ with $C^r-$closeness along $S'$, i.e.\
\begin{enumerate}
\item $s_3|(-\ep,0] =s_1|(-\ep,0]$
\item $s_3|[1,1+\ep) =s_2|[1,1+\ep)$
\item $s_3|S' \times (-\ep,1+\ep)$ is $\delta-$close to both $s_1|S' \times (-\ep,1+\ep), s_2|S' \times (-\ep,1+\ep)$ in the $C^r-$norm.
\end{enumerate}
\end{cor}
\begin{proof}
Let $H_t$ be the homotopy between $s_1, s_2$ in the discussion preceding Lemma \ref{lem-closebyisotopic}. Setting $s_3'(x,r) =H_r(x,r)$ for $r \in [0,1]$ furnishes a linear interpolation. Smoothing slightly at the end-points, e.g.\ by choosing a smooth monotonically increasing bijection $g: [0,1]\to [0,1]$ all of whose derivatives vanish at the end-points, and setting
$s_3(x,r) =H_{g(r)}(x,r)$ for $r \in [0,1]$ gives the required $s_3$.
\end{proof}
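For definiteness, one admissible choice of such a function $g$ (a standard construction, recorded here as a sketch) is
$$g(r) = \frac{\int_0^r \beta(u)\, du}{\int_0^1 \beta(u)\, du}, \qquad \beta(u) = \begin{cases} e^{-1/(u(1-u))}, & 0 < u < 1,\\ 0, & \text{otherwise}, \end{cases}$$
a smooth increasing bijection of $[0,1]$ all of whose derivatives vanish at the end-points, so that the resulting $s_3$ agrees with $s_1$ (resp.\ $s_2$) to infinite order at the gluing locus.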
We are now in a position to prove the stratified version of Theorem \ref{em-hat}. All substratified spaces $K \subset X$ below will be assumed to be tamely embedded, i.e.\ if $K$ is non-compact, then the closure $\overline{K}$ is a deformation retract of a small regular neighborhood.
\begin{theorem}\label{em-hats}
Let $X$ be {an abstractly stratified space} equipped with a compatible metric, $E \to X$ be a stratified bundle, and $K \subset X$ be a relatively compact
stratified subspace of positive codimension.
Let {$f \in \sjr^r_E(\operatorname{Op} K)$} be a $C^r-$regular formal section. Then for arbitrarily small $\ep > 0$,
$\delta > 0$, there exist a stratified
diffeomorphism $h : X \to X$ with $$||h-{\mathrm{Id}}||_{C^0} < \delta,$$ and a stratified holonomic section {$\til{f} \in \sjr^r_E(\operatorname{Op} K)$} such
that
\begin{enumerate}
\item the image $h(K)$ is contained in the domain of definition of the section $f$,
\item $||\til{f}- f|\operatorname{Op}\, h(K) ||_{C^0} < \ep$.
\item $\til{f}, f|\operatorname{Op}\, h(K)$ are normally $\varepsilon$ $C^r$-close.
\end{enumerate}
The same applies for $\sjr^r_{E,w}$ in place of $\sjr^r_E$.
\end{theorem}
\begin{proof} The proof proceeds by induction on the depth (cf.\ Definition \ref{def-Idec})
of $X$.
If $X$ has depth one, it is a manifold, and Theorem \ref{em-hat} furnishes the result.
Let $S$ be the lowest stratum (i.e.\ the stratum of greatest depth) that $K$ intersects. We note that there might be more than one such minimal stratum $S_i$ of possibly varying depths with $K \cap S_i \neq \emptyset$, in which case we shall repeat the argument below for each of these. For convenience of exposition, we assume there is a unique such $S$. Let $K_S = K \cap S$. Then $K_S$ is compact; else $K$ would intersect a stratum lower than $S$ (i.e.\ of depth greater than that of $S$), but there is none such.
Theorem \ref{em-hat} ensures the existence of a {self-}diffeomorphism {$h_S$} of $S$ supported in a neighborhood of $K_S = K \cap S$ and a holonomic section {$\til{f_S} \in \sjr^r_E(\operatorname{Op} h_S(K_S))$} satisfying the conclusions of the theorem, but only on $S$. Further, Corollary \ref{cor-isotopyext}
allows us to
\begin{enumerate}
\item extend {$h_S$} to a stratified {self-diffeomorphism $h_e$ of all of $X$} supported in
$N_S \cap \operatorname{Op}_X (K_S)$.
\item extend {$\til{f_S} \in \sjr^r_E(\operatorname{Op} h_S(K_S))$ to a stratified holonomic section $\til{f_e} \in \sjr^r_E(\operatorname{Op}_X h_S(K_S))$} defined on an open neighborhood $ \operatorname{Op}_X\, h_S(K_S)$ of $h_S(K_S)$ in $X$.
\end{enumerate}
We may assume without loss of generality that $\operatorname{Op}_X\, h_S(K_S)$ is the restriction of the normal bundle $N_S$ to $\operatorname{Op}_S\, h_S(K_S)$.
Next,
delete a (very) small closed neighborhood $N_\eta(S)$ of $S$ in $X$ to obtain $X_1$, and let $K_1 = K \cap X_1 \subset X_1$. Then the depth of $X_1$ is strictly less than $X$, and induction may be applied to obtain
\begin{enumerate}
\item A stratified {self-diffeomorphism $h_1$ of $X_1$}, supported in $ \operatorname{Op}_{X_1} K_1$ such that $h_1$ is $\delta-$close to the identity in the {$C^0$} norm.
\item a stratified holonomic section {$\til{f_1} \in \sjr^r_E(\operatorname{Op}_{X_1} K_1)$} defined on an open neighborhood of $h_1(K_1)$ in $X_1$.
\end{enumerate}
such that
\begin{enumerate}
\item the image $ h_1(K_1)$ is contained in the domain of definition of the section $f$,
\item $||\til{f_1}- f|\operatorname{Op}\, h_1(K_1) ||_{C^0} < \ep$.
\item $\til{f_1}, f|\operatorname{Op}_{X_1}\, h_1(K_1)$ are normally $\varepsilon$ $C^r$-close on $X_1$.
\end{enumerate}
Composing $h_1$ with a further {$C^0-$small} stratified diffeomorphism $h_2$ supported on ${\AAA = N_{2\eta}(S)\setminus \overline{N_{\eta}(S)}}$, we may assume that $h_2 \circ h_1$ and $h_e$ agree on $\AAA$. Let $h$ be the stratified diffeomorphism obtained by pasting these two diffeomorphisms along $\AAA$. Note that $h$ is $C^0-$small, and hence lifts to give a bundle map from $E|\operatorname{Op}_X K$ to $E|\operatorname{Op}_X h(K)$ covering $h: \operatorname{Op}_X K \to \operatorname{Op}_X h(K)$.
Choosing $\eta$ in $N_\eta(S)$ sufficiently small, we may assume that the domain of the extension $\til{f_e}$ given by the normal bundle to $\operatorname{Op}_S h_S(K_S)$ constructed earlier in the proof has normal fibers of diameter at least $2\eta$. Thus, on the stratified ``annulus'' ${\AAA'} = (N_{2\eta}(S)\setminus N_{\eta}(S)) \cap \operatorname{Op}_X \, h(K)$, we have two holonomic sections $\til{f_e}$ and $\til{f_1}$. (We are assuming here that the sections have been composed with the bundle map covering $h: \operatorname{Op}_X K \to \operatorname{Op}_X h(K)$ described in the previous paragraph.)
Since $\eta$ is small, and since both $\til{f_e}$ and $\til{f_1}$ are normally $C^r-$close to $f$ on {$\AAA'$}, they are close to each other.
Hence, using Corollary \ref{cor-interpolate}, we can interpolate between the sections $\til{f_e}|N_\eta(S)$ and $\til{f_1}|(X\setminus N_{2\eta}(S)) $ to obtain
a holonomic section $\til f$ on all of $\operatorname{Op}_X K$. Since $\eta$ is small, all three conclusions of the theorem are satisfied by $h$ and $\til f$.
\end{proof}
\subsection{Application: Immersions}\label{sec-immn}
A host of examples of open $\diff-$invariant relations have been enumerated by Eliashberg-Mishachev \cite{em-expo} in the manifold context, with the corresponding h-principles deduced via the holonomic approximation theorem:
\begin{enumerate}
\item immersions of open manifolds,
\item submersions of open manifolds,
\item $k-$mersions (i.e.\ mappings of rank at least $k$) of open manifolds,
\item mappings with nondegenerate higher-order osculating spaces,
\item mappings transversal to foliations, or more generally, to arbitrary distributions,
\item construction of a generating system of exact differential $k-$forms,
\item symplectic and contact structures on open manifolds, etc.
\end{enumerate}
Most of these (in particular, immersions, submersions and $k-$mersions of open manifolds) have natural potential generalizations to the stratified context, by replacing the use of Theorem \ref{em-hat} by Theorem \ref{em-hats}. \\
\noindent {\bf Stratified Immersions:}
We illustrate the extra ingredient necessary over and above \cite{em-book,Gromov_PDR} by studying the sheaf of stratified immersions in this paper. We postpone a more detailed treatment of applications to subsequent work. Here, we shall prove a stratified analog of the Smale-Hirsch theorem \cite[Chapter 8.2]{em-book} (see Theorem \ref{thm-immns} below).
\begin{defn}\label{def-immns}
Let $X, Y$ be abstractly stratified spaces, and let $tX$ and $tY$ denote the tangent microbundles to $X$ and $Y$ respectively.
A \emph{stratified immersion} of $X$ into $Y$ is a weakly controlled map $i : X \to Y$ such that the induced
map $i_*: tX \to tY$ is a fiberwise stratified embedding of stratified spaces.
\end{defn}
A stratified immersion $i : X \to Y$ will be said to be of \emph{positive codimension} if the following holds: for any stratum $S$ of $X$, letting $S'$ denote the unique stratum of $Y$ such that $i(S) \subset S'$, the image $i(S) \subset S'$ is an immersed submanifold of positive codimension. Let $\operatorname{Imm}^\mathfrak{c}(X,Y)$ denote the quasitopological space of \emph{positive codimension stratified immersions} of a stratified space $X$ in a stratified space $Y$ such that the stratified immersions are of configuration $\mathfrak{c}$.
Given stratified spaces $(Y, \Sigma')$ and $(X, \Sigma)$,
and a configuration $\mathfrak{c}$ (cf.\ Definition \ref{def-config}), we may construct a sheaf $\operatorname{Imm}^{\mathfrak{c}}(-, Y)$ on $X$ by defining, for each element $U \in \mathrm{Str}(X, \Sigma)$ of the stratified site (Definition \ref{def-stratfdsite}),
$$\operatorname{Imm}^\mathfrak{c}(-, Y)(U) = \operatorname{Imm}^\mathfrak{c}(U, Y)$$
where $U$, being an open subset in a stratum-closure of $X$, is equipped with the induced stratification from $X$. The restriction maps are defined simply by restriction of the underlying map of an immersion to a smaller subset.
For any stratum $S \in \Sigma$, let $\operatorname{Imm}_{\overline{S}}^\mathfrak{c}(-, Y)$ denote the sheaf of positive codimension stratified immersions of elements of the stratified site of the stratified space $(\overline{S}, \overline{S} \cap \Sigma)$ as in Definition \ref{def-sssheaf}. Let $\operatorname{Imm}_S^\mathfrak{c}(-, Y) := i^*_S \operatorname{Imm}_{\overline{S}}^\mathfrak{c}(-, Y)$ denote its restriction to the open stratum $S$.
Note that $\operatorname{Imm}^\mathfrak{c}(-, Y)$ is a stratified subsheaf of $\operatorname{Maps}(-, Y)$, where $\operatorname{Maps}(-, Y)$ is identified with the sheaf of sections of the surjective map $P : X \times Y \to X$. $P$ is an example of a stratumwise bundle (Definition \ref{def-stratumwisebdl}), but not a stratified bundle (Example \ref{eg-pdkt}). Nevertheless, by Remark \ref{rmk-formalasgerms}, we may describe sections of the Gromov diagonal construction $\operatorname{Maps}(-, Y)^*$ over an element of the site $U \in \str(X, \Sigma)$ in terms of germs of maps $\sigma : (t(U), U) \to Y$. For every $p \in U$, we may consider $\sigma\vert {t_p U} : t_p U \to Y$ as a map $t_p U \to t_{\sigma(p)} Y$. In this process, we can identify sections of $\operatorname{Maps}(-, Y)^*$ over $U$ as microbundle morphisms $t U \to t Y$ covering a map $U \to Y$.
\begin{defn}A \emph{stratified formal immersion} of $X$ into $Y$ is a pair $(F, f)$ consisting of
\begin{enumerate}
\item A weakly controlled map $f : X \to Y$,
\item A fiber-preserving microbundle morphism $F : tX \to tY$,
\end{enumerate}
such that $F$ covers $f$, and $F$ is a fiberwise stratified embedding.
\end{defn}
It follows from the discussion above that sections of the Gromov diagonal construction $\operatorname{Imm}^\mathfrak{c}(-, Y)^*$ over $U \in \str(X, \Sigma)$ can be identified with stratified formal immersions $(F, f)$ of $U$ into $Y$, where $f : X \to Y$ is of configuration $\mathfrak{c}$. The canonical inclusion $\operatorname{Imm}^\mathfrak{c}(-, Y) \hookrightarrow \operatorname{Imm}^\mathfrak{c}(-, Y)^*$ over $U$ is given by sending a stratified (holonomic) immersion $i : X \to Y$ to the stratified formal immersion $(i_*, i)$ where $i_* : tX \to tY$ is given by the restriction of $i \times i : X \times X \to Y \times Y$ to the diagonal.
\begin{comment}
Note that in order to study $\operatorname{Imm}(X,Y)$, we need to look at sections of $P: X\times Y \to X$
and $tP: tX\times tY \to tX$. Both are examples of stratumwise bundles (Definition \ref{def-stratumwisebdl}), but not examples of stratified bundles (Example \ref{eg-pdkt}).
We, nevertheless, prove the following:
\end{comment}
\begin{theorem}\label{thm-immns} For any stratified spaces $(X, \Sigma)$ and $(Y, \Sigma')$, and configuration $\mathfrak{c}$ as above,
the stratified continuous sheaf $\operatorname{Imm}^\mathfrak{c}(-,Y)$ on $X$ satisfies the parametric $h-$principle.
\end{theorem}
\begin{proof}
By Theorem \ref{thm-hofibsflexg}, it suffices to check flexibility of $\HH^L_S$.
Let $S < L$ be strata in $X$ and let $S' = \mathfrak{c}(S), L' = \mathfrak{c}(L)$ be the strata in $Y$ corresponding to $S, L$ through the configuration $\mathfrak{c}$. Let $t_{S,L}$ denote the microtangent bundle to $S$ in $\overline L$. Let $A_L$ (resp.\ $B_{L'}$) denote the link of $S$ (resp.\ $S'$) in $\overline{L}$ (resp.\ $\overline{L'}$), and $\{c\} \subset cA_L$ (resp.\ $\{c'\} \subset cB_{L'}$) denote the cone points of the respective normal cones. Then the microtangent bundle $t_{S,L}$ of $S$ in $L$ is of the form
$$t_{S,L} = TS \oplus N_{S,L},$$
where $TS$ is the tangent bundle to $S$, and $N_{S,L}$ is a $cA_L-$bundle over $S$. We refer to $N_{S,L}$ as the normal cone bundle of $S$ in $L$. The morphism
$$\mathrm{res}^L_S : i^*_S \operatorname{Imm}^{\mathfrak{c}}_{\overline{L}}(-, Y) \to \operatorname{Imm}_S^{\mathfrak{c}}(-, Y)$$
is given by restricting a germ of a stratified immersion defined on a neighborhood $N_{S, L}$ of $S$ in $\overline{L}$, to the zero section $S \subset N_{S, L}$. Let $U \subset S$ be a chart such that $N_{S, L}$ trivializes as $U \times cA_L$ over $U$. Then, elements of $i^*_S\operatorname{Imm}^{\mathfrak{c}}_{\overline{L}}(U, Y)$ consist of germs of stratified immersions
$$\varphi : (U \times cA_L, U \times \{c\}) \to Y$$
such that $\varphi|U \times \{c\}$ is an immersion of $U$ in $S'$. Observe that, for every $p \in U$,
$$\varphi\vert_{\{p\} \times cA_L} : (cA_L, \{c\}) \to (cB_{L'}, \{c'\}) \subset Y$$
is a germ of a stratified immersion at the cone point $c'$. Here, the cone $cB_{L'}$ is determined by the configuration $\mathfrak{c}$. In other words, the configuration $\mathfrak{c}$ induces a germ of a stratified embedding at the cone point. Indeed, the microtangent space $t_c(cA_L)$ to the cone $cA_L$ at the cone point $c$ is a germ of a neighborhood of $\{c\}$ in $cA_L$. The latter is germinally homeomorphic to $cA_L$ itself. Let $\mathrm{Emb}_c(cA_L, cB_{L'})$ denote the quasitopological space of germs of such embeddings. Thus, we have an induced map $\Phi_\varphi$ given by
$$\Phi_\varphi : U \to \mathrm{Emb}_c(cA_L, cB_{L'}), \; \Phi_{\varphi}(p) = \varphi\vert_{\{p\} \times cA_L}.$$
Therefore, $\varphi \mapsto (\varphi\vert_{U \times \{c\}}, \Phi_\varphi)$ furnishes a homeomorphism:
\begin{gather*}i^*_S\operatorname{Imm}^{\mathfrak{c}}_{\overline{L}}(U, Y) \to \mathrm{Imm}(U, S') \times \operatorname{Maps}(U, \mathrm{Emb}_c(cA_L, cB_{L'}))
\end{gather*}
This homeomorphism is natural with respect to passing to smaller open subsets $V \subset U$. It commutes with projections to $\mathrm{Imm}(U, S')$ under restriction maps $\mathrm{res}^L_S$ on the left-hand side. It also commutes with the canonical projection to the first factor on the right-hand side. Therefore, we have an isomorphism of continuous sheaves over $U \subset S$,
$$i^*_S\operatorname{Imm}^{\mathfrak{c}}_{\overline{L}}(-, Y) \cong \mathrm{Imm}(-, S') \times i^*_U\operatorname{Maps}(-, \mathrm{Emb}_c(cA_L, cB_{L'})).$$
Under this isomorphism, $\mathrm{res}^L_S$ is equivalent to the projection to the first factor of the product sheaf on the right-hand side. Therefore, as in Proposition \ref{prop-hlsflex},
$$i^*_U \mathcal{H}^L_S \cong P_{\psi}\mathrm{Imm}(-, S') \times i^*_U \operatorname{Maps}(-, \mathrm{Emb}_c(cA_L, cB_{L'}))$$
By Gromov's Open Extension Theorem \cite[p. 86]{Gromov_PDR}, $\mathrm{Imm}(-, S')$ is a flexible sheaf on $S$ since $\mathrm{dim}(S) < \mathrm{dim}(S')$ by the positive-codimension hypothesis. Thus, the arguments from Proposition \ref{prop-hlsflex} apply, and we conclude that $\mathcal{H}^L_S$ is a flexible sheaf on $S$.
Finally, for every stratum $S$ of $X$, $\mathrm{Imm}^{\mathfrak{c}}_S(-, Y) = \mathrm{Imm}(-, \mathfrak{c}(S))$ is a flexible sheaf on $S$ once again by the Open Extension Theorem \cite[p. 86]{Gromov_PDR}. Therefore, $\operatorname{Imm}^\mathfrak{c}$ is both stratumwise flexible, and infinitesimally flexible across strata. By Theorem \ref{thm-hofibsflexg}, $\operatorname{Imm}^\mathfrak{c}$ satisfies the parametric $h-$principle.\end{proof}
\begin{comment}
content...
\begin{defn}\label{def-immns}
Let $X, Y$ be smooth stratified spaces. The microtangent bundles to $X, Y$ will be denoted by $tX, tY$ respectively.
A stratified immersion of $X$ into $Y$ is a smooth map $i: X \to Y$ such that the induced
map $i_*: tX \to tY$ is a fiberwise stratified embedding of stratified spaces.
\end{defn}
Consider the sheaf $\operatorname{Imm}(X,Y)$ of \emph{positive codimension stratified immersions} of a stratified space $X$ in a stratified space $Y$. Note that in order to study $\operatorname{Imm}(X,Y)$, we need to look at sections of $P: X\times Y \to X$
and $tP: tX\times tY \to tX$. Both are examples of stratumwise bundles (Definition \ref{def-stratumwisebdl}), but not examples of stratified bundles (Example \ref{eg-pdkt}).
We, nevertheless, prove the following:
\begin{theorem}\label{thm-immns}
$\operatorname{Imm}(X,Y)$ satisfies the (stratified) parametric $h-$principle.
\end{theorem}
\begin{proof}
By Theorem \ref{thm-hofibsflexg}, the only additional point to check is flexibility of $\HH^L_S$.
Let $S < L$ (resp.\ $S' < L'$) be strata in $X$ (resp.\ $Y$). Let $\tau_{S,L}$ (resp.\ $\tau_{S',L'}$) denote the microtangent bundles to
$S$ (resp.\ $S'$) in $\overline L$ (resp.\ $\overline L'$). Let $A_L$ (resp.\ $A_L'$) denote the \emph{open} link of $S$ in $L$ (resp.\ open link of $S'$ in $L'$), i.e.\ these are the intersections of the links of $S$ (resp.\ $S'$) with $L$ (resp.\ $L'$). Equivalently, we can simply look at the stratified space given by $S\cup L$ and set $A_L$ to be the link of $S$ in $S\cup L$. Similarly, for $A_L'$. Then the microtangent bundle $T_{S,L}$ (resp.\ $T_{S',L'}$) of $S$ in $L$ (resp.\ $S'$ in $L'$) is of the form $$T_{S,L} = T_S \oplus N_{S,L}$$ (resp.\ $T_{S',L'} = T_S' \oplus N_{S',L'}$), where $T_S $ ($T_S'$) is the tangent bundle to $S$ (resp.\ $S'$),
and $N_{S,L}$ (resp.\ $N_{S',L'}$) is a $cA_L-$ (resp.\ $cA_L'-$) bundle over $S$
(resp.\ $S'$). We refer to $N_{S,L}$ (resp.\ $N_{S',L'}$) as the open normal bundle of $S$ in $L$ (resp.\ $S'$ in $L'$).
Let $i\in \operatorname{Imm}(X,Y)$ with $i(S) \subset S'$ and $i(L) \subset L'$.
Then
\begin{enumerate}
\item $i_S: S \to S'$ is an immersion,
\item $i_{S,L}: T_{S,L} \to T_{S',L'}$ is a fiberwise embedding covering $i_S$ respecting the direct sum decomposition into tangent and normal directions,
\item $i_L: L \to L'$ agrees with $i_{S,L}$ germinally.
\end{enumerate}
Locally $i_{S,L}|U$ embeds $N_{S,L}|U$ in $N_{S',L'}|U'$, where $U$
and $U'$ are local charts in $S$ and $S'$ respectively. In particular, the fibers
$cA_L$ of $N_{S,L}|U$ embed in fibers
$cA_L'$ of $N_{S',L'}|U'$.
The homotopy fiber $\HH^L_S$ is given exactly as in Proposition \ref{prop-hlsflex} by
$$\HH^L_S=\operatorname{Maps}(-, \Gamma_c (cA_L, cA_L') \times \GG,$$ where $\GG$ is a path of sections in the space of (manifold) immersions of $S$ in $S'$. The proof of Proposition \ref{prop-hlsflex} applies without change to establish flexibility of
$\HH^L_S$. Stratumwise flexibility of $\operatorname{Imm}(X,Y)$ follows from Gromov's Open Extension Theorem \cite[p. 86]{Gromov_PDR}, i.e.\ the sheaf of positive codimension immersions of
a manifold in a manifold is flexible.
Hence, by Theorem \ref{thm-hofibsflexg}, $\operatorname{Imm}(X,Y)$ satisfies the parametric $h-$principle.
\end{proof}
\end{comment}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{W}{ith} the explosive growth of mobile applications and cloud synchronization services, and the rapid development of high-quality multimedia services such as high-resolution image and 4K-resolution high dynamic range (HDR) video streaming, the existing 4G mobile communication technology can no longer meet the needs of enterprises and consumers for wireless communication networks. As a result, 5G mobile communication technology \cite{andrews2014will,shafi20175g,rusek2013scaling} has emerged, with higher transmission speed, stronger bearing capacity, and a wider range of applications. Massive multiple-input multiple-output (M-MIMO) is one of the key technologies of 5G \cite{boccardi2014five,larsson2014massive}, owing to its many advantages, such as high spectral efficiency, high power efficiency, and high robustness. The performance of M-MIMO systems relies heavily on the acquisition of the uplink (UL) and downlink (DL) channel state information (CSI). However, the large-scale antenna array of M-MIMO brings new challenges to channel estimation \cite{xie2016overview}:
\begin{itemize}
\item The overhead of training sequences grows with the increasing number of users, while the reuse of training sequences causes pilot contamination \cite{marzetta2010noncooperative,jose2009pilot,fernandes2013inter,you2015pilot}.
\item The growing dimension of channel matrices (CMs) or channel covariance matrices (CCMs) greatly increases the complexity and resource consumption of traditional UL and DL channel estimation algorithms, preventing the M-MIMO system from realizing its full potential.
\item The amount of CSI that users feed back to the base station (BS) during DL channel estimation grows with the number of antennas at the BS, which places a great burden on the feedback channel.
\item Channel reciprocity makes it easy to acquire the DL CSI from the UL CSI in time division duplexing (TDD) systems, whereas the lack of reciprocity means that DL channel estimation in frequency division duplexing (FDD) systems cannot be simplified in the same way, placing a great burden on users' devices.
\end{itemize}
In order to reduce the computational complexity, we need to take advantage of the low-rank properties of the channel, which can significantly reduce the dimension of the CMs and CCMs while retaining the valid information. Many works point out that the directions of arrival (DOA) as well as the directions of departure (DOD) of propagation signals are limited to a narrow region (i.e., the angle spread (AS) is small), because a BS equipped with a large-scale antenna array has to be located on top of high buildings; this is known as the finite scattering model \cite{burr2003capacity}. Another similar scenario is mmWave communication, where the channel is naturally sparse and the AS equals $0$ \cite{gao2016energy}. Meanwhile, due to the large-scale antenna array of M-MIMO, the spatial resolution of the BS is significantly improved, which means the BS can more easily distinguish users from different directions, so that the channel representation can be strongly sparse, with relatively few non-zero elements in the CMs and CCMs. As a result, many new or optimized methods to acquire CSI have been proposed \cite{adhikary2013joint,nguyen2013compressive,rao2014distributed,sun2015beam,fang2017low}, especially for DL channel estimation of FDD systems owing to their lack of reciprocity. \cite{adhikary2013joint} proposed an approach under the joint spatial division and multiplexing (JSDM) scheme for DL channel estimation of FDD systems, where the sparsity of the CCMs is exploited and an eigenvalue decomposition (EVD) is required, which is a challenge for implementation.
\cite{nguyen2013compressive} proposed a low-rank matrix approximation based on compressed sensing (CS), solved via quadratic semidefinite programming (SDP), which is novel but far too complex to implement. \cite{rao2014distributed} deployed a CS-based method with the joint channel sparsity model (JCSM), which utilizes the virtual angular domain representation of the CM and limited local scattering in order to reduce the training and feedback overhead. To this end, we first proposed a transmission strategy based on the spatial basis expansion model (SBEM) in \cite{xie2017unified}, which comes from array signal processing theory. This scheme is also known as the angle-division multiple access (ADMA) scheme. The ADMA scheme has some particular advantages:
\begin{itemize}
\item Due to the increased angle resolution of antenna arrays at the BS, the angular information can be easily obtained by the discrete Fourier transform (DFT) of CMs under ADMA scheme.
\item The angular information corresponds to the real directions of users according to array signal processing theory, while the other methods only have a virtual angular representation.
\item As a result of the reciprocity brought by the DOA and DOD, the complexity and overhead of DL channel estimation and feedback can be reduced.
\item The estimation algorithm mainly contains DFT calculation, matrix multiplication, sorting and grouping, which is convenient for implementation.
\end{itemize}
As shown in \cite{xie2017unified}, the performance of ADMA is better than that of \cite{adhikary2013joint} and \cite{rao2014distributed}, especially at low signal-to-noise ratio (SNR). In addition, there are also blind and semi-blind estimation methods to be explored. These methods have higher transmission efficiency because they need fewer (or no) training sequences, but their results may be inaccurate at the start of transmission because the BS needs some time to accumulate channel statistics. Moreover, the efficient implementation of ADMA is very challenging due to the non-linear computation involved at the algorithm level, which hinders its application to channel estimation.
In order to bridge the aforementioned gap between algorithm and implementation, this paper devotes itself to proposing, for the first time, a hardware architecture for channel estimation under the ADMA scheme. A hardware-aware partition of the algorithm is conducted. Our main technical contributions can be listed as follows:
\begin{itemize}
\item We propose a hardware-efficient channel estimator under ADMA, which takes advantage of the sparsity of M-MIMO systems in order to reduce the complexity, save training sequences, and speed up the channel estimation of a large number of users.
\item We discuss the approximation of the algorithm and transmission strategy as well as the quantization optimization in order to make our channel estimator suitable for hardware implementation.
\item We propose the first channel estimator architecture with the ADMA scheme, achieving higher hardware efficiency and higher processing speed for channel estimation in M-MIMO systems.
\item We develop an optimized architecture to simplify our original channel estimator, with little performance loss but a huge reduction in resources and higher hardware efficiency.
\item We present the FPGA implementation of our ADMA channel estimator on a Xilinx Virtex-$7$ xcvu$440$-flga$2892$-$2$-e device to demonstrate its suitability for $5$G wireless; the advantages have been verified by the FPGA implementation.
\end{itemize}
The remainder of this paper is organized as follows. Section \ref{sec:pre} proposes the implementation-aware partition of ADMA algorithm. The hardware-friendly approximation and the simulation results are presented in Section \ref{sec:sim}. The detailed pipelined hardware architecture is presented in Section \ref{sec:hard}. FPGA implementation is also given in the same section to demonstrate the advantages. Finally, Section \ref{sec:con} concludes the entire paper.
\textbf{\textit{Notations}}. The notations employed in this paper are listed in Table \ref{table:notation} for clearer representation.
\begin{table}[ht]
\centering
\caption{Notations in This Paper}
\begin{tabular}{c|l}
\Xhline{1.0pt}
Symbol& \multicolumn{1}{c}{Definition}\\
\hline
\rowcolor{mygray}
$M$ & number of antennas at the BS, \\
$K$ & number of users that the BS serves,\\
\rowcolor{mygray}
$L$ & length of training sequences,\\
$\tau$ &number of parameters the BS can handle,\\
\rowcolor{mygray}
\hline
$\mathbf{h}$ / $\mathbf{H}$ & vector $\mathbf{h}$ / matrix $\mathbf{H}$,\\
$\left[\mathbf{h}\right]_i$ & the $i$-th element of vector $\mathbf{h}$,\\
\rowcolor{mygray}
$\left[\mathbf{H}\right]_{ij}$ & the $(i,j)$-th element of matrix $\mathbf{H}$,\\
${\mathbf{h}}^T$ / ${\mathbf{H}}^T$ & the transpose of vector $\mathbf{h}$ / matrix $\mathbf{H}$,\\
\rowcolor{mygray}
${\mathbf{h}}^H$ / ${\mathbf{H}}^H$ & the Hermitian of vector $\mathbf{h}$ / matrix $\mathbf{H}$,\\
$\cal B$ & set $\cal B$ of $\tau$ continuous integers,\\
\rowcolor{mygray}
$\left|\cal B\right|$ & the cardinality of the set $\cal B$,\\
$[{\mathbf{h}}]_{\cal B}$ & sub-vector of $\mathbf{h}$ by keeping the elements indexed by $\cal B$,\\
\rowcolor{mygray}
$[{\mathbf{H}}]_{:,{\cal B}}$ & sub-matrix of $\mathbf{H}$ by collecting the columns indexed by $\cal B$,\\
${\rm{diag}\left\{{\mathbf{h}}\right\}}$ & \makecell[lc]{a diagonal matrix with the diagonal elements constructed \\ from vector $\mathbf{h}$,}\\
\rowcolor{mygray}
${\mathbb{E}}\left\{\cdot\right\}$ & the statistical expectation.\\
\Xhline{1.0pt}
\end{tabular}
\label{table:notation}
\end{table}
\section{Implementation-Aware Partition of ADMA Channel Estimation}\label{sec:pre}
Implementation-aware partition of ADMA channel estimation is first conducted in this section.
\subsection{Setting-Up of ADMA}
For ease of illustration, we consider a multiuser M-MIMO system, where the BS is equipped with $M$ ($M \gg 1$) antennas in the form of a uniform linear array (ULA) and serves $K$ users. We assume that the number of parameters which the BS can handle is $\tau$. In addition, since we presume that each user is equipped with only one antenna, the CM of user-$k$ can be described as an $M \times 1$ vector ${{\mathbf{h}}_k}$. From array signal processing theory, the UL channel vector ${{\mathbf{h}}_k}$ of user-$k$ has the form
\begin{equation}\label{equ:aspt}
{{\mathbf{h}}_k} = {\frac{1}{\sqrt{P}}} {\sum_{p=1}^{P}} {\alpha_{kp}} {\mathbf{a}}({\theta_{kp}}),
\end{equation}
where $P$ is the number of propagation rays, ${\alpha_{kp}}$ is the complex gain of the $p$-th ray, and ${\mathbf{a}}({\theta_{kp}})$ is the array manifold vector, which can be expressed as
\begin{equation}
{\mathbf{a}}({\theta_{kp}}) = \left[{1,~e^{j\frac{2{\pi}d}{\lambda}{\sin{\theta_{kp}}}},~...,~e^{j\frac{2{\pi}d}{\lambda}({M-1}){\sin{\theta_{kp}}}}}\right]^T~.
\end{equation}
\begin{Rem}
In this paper, we do not discuss the situation where users are equipped with multiple antennas or where the propagation signal contains multiple subcarriers, as in orthogonal frequency division multiplexing (OFDM) systems. In fact, the same sparsity holds for the vectors obtained by collecting the rows or columns of the channel matrices, and likewise for the channel matrices of different subcarriers. Hence, once the sparsity under the ADMA scheme is obtained, it can be extended to plenty of scenarios.
\end{Rem}
\subsection{Channel Sparsity Revealed by ADMA}
To guarantee the performance of the proposed channel estimator, the sparsity revealed by ADMA must be preserved during the implementation process. ADMA presents a sparse representation of the channel of an M-MIMO system via the discrete Fourier transform (DFT) of the channel vector, i.e., ${{\mathbf{\tilde h}}_k}$, which can be calculated by
\begin{equation}\label{equ:FFT}
{\mathbf{\tilde h}}_k = {\mathbf{F}}{\mathbf{h}}_k,
\end{equation}
where $\mathbf{F}$ is the $M \times M$ DFT matrix whose elements are ${\left[ {\mathbf{F}} \right]_{pq}} = {e^{ - j{\textstyle{\frac{2\pi}{M}}pq}}} / {\sqrt M }$. For ease of description, there are two lemmas which can be proved following \cite{xie2017unified}:
\begin{Lem}
If $P = 1$ (i.e., AS is zero) and $M \to \infty$, there will be only one non-zero element in ${{\mathbf{\tilde h}}_k}$ and the index of this non-zero element is relative to its DOA or DOD.
\end{Lem}
\begin{IEEEproof}
For $P=1$, ${{\mathbf{h}}_k}$ can be simplified to ${{\mathbf{h}}_k} = {\alpha_{k}} {\mathbf{a}}({\theta_{k}})$; then the $b$-th element of ${\mathbf{\tilde h}}_k$ can be calculated as
\begin{equation}\label{equ:lemma_main}
\begin{aligned}
{\left[ {\mathbf{\tilde h}}_k \right]_{b}} =& {\frac{\alpha_k}{\sqrt{M}}} {\sum_{m=0}^{M-1}} e^{-jm({\frac{2\pi}{M}}b-{\frac{2\pi}{\lambda}d{{\sin}{\theta_{k}}}})} \\
=& {\frac{\alpha_k}{\sqrt{M}}} e^{-j{\frac{M-1}{2}}({\frac{2\pi}{M}}b-{\frac{2\pi}{\lambda}d{{\sin{\theta_{k}}}}})} \\
& \cdot{\frac{{\sin[({\frac{2\pi}{M}}b-{\frac{2\pi}{\lambda}d{{\sin{\theta_{k}}}}})\cdot{\frac{M}{2}}]}}{{\sin[({\frac{2\pi}{M}}b-{\frac{2\pi}{\lambda}d{{\sin{\theta_{k}}}}})\cdot{\frac{1}{2}}]}}}.
\end{aligned}
\end{equation}
If $M \to \infty$, we can get that
\begin{equation}\label{equ:lemma_lim}
\lim_{M \to \infty} \left|{\left[ {\tilde{\mathbf{h}}_k} \right]_{b}}\right| = \left|{\alpha_k}\right|\cdot{\sqrt{M}}\cdot\delta\left({\frac{b}{M}-\frac{d{\sin}{\theta_k}}{\lambda}}\right).
\end{equation}
Eq. (\ref{equ:lemma_lim}) reveals the relationship between the index of the non-zero element (i.e., $b_0$) in ${\mathbf{\tilde h}}_k$ and the DOA when $M \to \infty$, which can be described as
\begin{equation}\label{equ:reciprocity}
\left\{
\begin{aligned}
b_0 &= \frac{Md\sin{\theta_k}}{\lambda} \\
\theta_k &= \arcsin (\frac{b_0\lambda}{Md}),
\end{aligned}
\right.
\end{equation}
\end{IEEEproof}
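The peak location predicted by Eq. (\ref{equ:reciprocity}) is easy to verify numerically. The following sketch (our own illustration, not code from the paper; $M$, the half-wavelength spacing $d/\lambda = 0.5$, and $\theta_k = 30^\circ$ are all illustrative choices) builds a single-ray steering vector and confirms that the DFT magnitude peaks at the bin $b_0 = Md\sin\theta_k/\lambda$:

```python
import numpy as np

M = 1024                        # number of BS antennas (illustrative)
d_over_lam = 0.5                # half-wavelength spacing d/lambda
theta = np.deg2rad(30.0)        # DOA of the single ray (illustrative)

# Single-ray channel h_k = a(theta), up to the complex gain alpha_k
m = np.arange(M)
h = np.exp(1j * 2 * np.pi * d_over_lam * m * np.sin(theta))

h_tilde = np.fft.fft(h) / np.sqrt(M)   # unitary DFT of the channel vector
b_peak = int(np.argmax(np.abs(h_tilde)))
b0 = M * d_over_lam * np.sin(theta)    # predicted index M*d*sin(theta)/lambda
print(b_peak, round(b0))
```

For these values $b_0 = 1024 \cdot 0.5 \cdot 0.5 = 256$, an exact bin, so there is no power leakage; a non-integer $b_0$ is precisely the case addressed by the rotation operation introduced below.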
Having discussed the situation with $P = 1$ and $M \to \infty$, we can move on to more complex and realistic schemes:
\begin{itemize}
\item when $P>1$ and $M \to \infty$, each propagation ray corresponds to a non-zero element in ${\mathbf{\tilde h}}_k$. The index of the middle element corresponds to the DOA of user-$k$, while the number of non-zero elements corresponds to the AS of user-$k$.
\item when $P=1$ and $M$ is large but finite, power leakage emerges because the resolution of the BS is limited, since $b_0 = \frac{Md\sin{\theta_k}}{\lambda}$ is not always an integer. However, there are only a few non-zero elements, concentrated around $b_0 = \lfloor \frac{Md\sin{\theta_k}}{\lambda} \rceil$, since $M$ is large. In fact, $M$ denotes the sampling precision in the frequency domain of the discrete-time Fourier transform (DTFT) of ${\mathbf{h}}_k$. Since the indices of the non-zero elements in ${\mathbf{\tilde h}}_k$ correspond to the DOA and AS of user-$k$, $M$ also determines the spatial resolution of the BS.
\item when $P>1$ and $M$ is large but finite, the situation is similar to the case with $P=1$ and $M$ large but finite, except that the number of non-zero elements in ${\mathbf{\tilde h}}_k$ is larger, which is related to the AS of user-$k$.
\end{itemize}
From the above, we can see that a sparse channel representation is obtained simply by applying the DFT to the channel vector and picking the non-zero elements together with their indices. In practical scenarios, since the BS can handle at most $\tau$ ($\tau \ll M$) parameters, we can use $\tau$ non-zero points of the DFT channel vector ${{\mathbf{\tilde h}}_k}$ instead of all $M$ points to represent the CSI, which saves a great deal of computation and feedback overhead.
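As a quick numerical illustration of this sparsity (our own sketch; the numbers of antennas and rays and the value of $\tau$ are arbitrary choices), the following generates a $P$-ray channel with a small AS and measures the fraction of channel energy captured by the best set of $\tau$ continuous DFT bins:

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, tau = 256, 8, 16          # antennas, rays, bins kept (all illustrative)
d_over_lam = 0.5                # half-wavelength spacing d/lambda

# P rays clustered around 20 degrees with a small (+/- 2 degree) angle spread
theta = np.deg2rad(20.0) + np.deg2rad(4.0) * (rng.random(P) - 0.5)
alpha = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
m = np.arange(M)
h = sum(a * np.exp(1j * 2 * np.pi * d_over_lam * m * np.sin(t))
        for a, t in zip(alpha, theta)) / np.sqrt(P)

h_tilde = np.fft.fft(h) / np.sqrt(M)     # sparse angular-domain representation
power = np.abs(h_tilde) ** 2
win = np.convolve(power, np.ones(tau), mode='valid')  # energy of each contiguous window
start = int(np.argmax(win))
captured = win[start] / power.sum()
print(f"energy captured by the best {tau} contiguous bins out of {M}: {captured:.3f}")
```

Because the AS is narrow, a contiguous block of $\tau \ll M$ bins captures the bulk of the channel energy, which is why truncating to $\tau$ points costs little accuracy.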
\subsection{Sparsity Enhancer for ADMA}
To enhance the channel sparsity under ADMA scheme, we define:
\begin{Def}
Define ${\mathbf{\Phi }}(\phi_k )$ as the rotation matrix for user-$k$ which can be expressed as
\begin{equation}
{\mathbf{\Phi }}(\phi_k ) = {\rm{diag}}\left\{ {\left[ {1,{e^{j\phi_k }}, \ldots ,{e^{j(M - 1)\phi_k }}} \right]} \right\},
\end{equation}
where $\phi_k \in \left[ { \text{-} {\textstyle{\frac{\pi}{M}}},{\textstyle{\frac{\pi}{M}}}} \right]$.
Then we can add this rotation operation to the DFT calculation. Define ${{\mathbf{\tilde h}}^{{\rm{ro}}}}_k$ as the new channel representation with rotation, given by
\begin{equation}
{{\mathbf{\tilde h}}^{{\rm{ro}}}}_k = {\mathbf{F}{\Phi}}(\phi_k ){\mathbf{h}}_k.
\end{equation}
\end{Def}
In this way, we can use fewer non-zero elements to represent the channel vector. In practical scenarios, the $\tau$ non-zero elements we pick will contain more of the channel energy, which is of great benefit to the training overhead.
\begin{Rem}
Notice that the rotation is actually a translation of ${{\mathbf{\tilde h}}_k}$ in the frequency domain. Since the spatial resolution of the BS is limited, the rotation operation allows the sample points to be aligned with the middle of the peak of the DTFT of ${\mathbf{h}}_k$ to the greatest extent. Since the sampling interval in the frequency domain is ${\frac{2\pi}{M}}$, it is only necessary to search over $\phi_k \in \left[ { \text{-} {\textstyle{\frac{\pi}{M}}},{\textstyle{\frac{\pi}{M}}}} \right]$.
\end{Rem}
In this case, we can define an index set to describe the signature of the channel vectors with rotation as follows:
\begin{Def}
Define ${\cal B}^{\rm{ro}}_k$ as the spatial signature of user-$k$ which can be determined according to
\begin{equation}\label{twopara}
\max_{{\phi_k},{\cal B}^{\text{ro}}_k} \frac{\left\|\left[{\tilde{{\mathbf{h}}}}^{\text{ro}}_k\right]_{{\cal B}^{\text{ro}}_k}\right\|^2}{\left\|{\tilde{{\mathbf{h}}}}^{\text{ro}}_k\right\|^2}~,~~{\text{subject to}}~\left| {{\cal B}^{\text{ro}}_k} \right|=\tau,
\end{equation}
\end{Def}
Now we have two parameters to be determined for each user under the ADMA scheme: $\phi$ and ${\cal B}^{\rm{ro}}$. The main benefit of this sparse channel representation is that only a few training sequences are needed, because users from different directions whose DOAs do not overlap can share the same training sequence. In practical scenarios, we usually use $\tau$ orthogonal training sequences, which makes full use of the BS.
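The joint search in Eq. (\ref{twopara}) can be sketched as a simple grid search (our own simplified illustration, not the hardware-friendly method of the paper; the grid density is an arbitrary choice): for each candidate $\phi_k \in [-\pi/M, \pi/M]$, rotate, apply the DFT, and score the best contiguous window of $\tau$ bins.

```python
import numpy as np

def spatial_signature(h, tau, n_grid=17):
    """Grid search for the rotation phi and the contiguous tau-bin index set."""
    M = len(h)
    m = np.arange(M)
    best = (-1.0, 0.0, 0)                       # (energy ratio, phi, window start)
    for phi in np.linspace(-np.pi / M, np.pi / M, n_grid):
        # apply the rotation Phi(phi), then the unitary DFT
        h_ro = np.fft.fft(np.exp(1j * m * phi) * h) / np.sqrt(M)
        p = np.abs(h_ro) ** 2
        win = np.convolve(p, np.ones(tau), mode='valid')   # energy per window
        s = int(np.argmax(win))
        if win[s] / p.sum() > best[0]:
            best = (win[s] / p.sum(), phi, s)
    ratio, phi_best, s = best
    return phi_best, np.arange(s, s + tau), ratio

# Hypothetical single-ray channel whose DTFT peak falls halfway between two
# bins -- the worst case for power leakage.
M, tau = 128, 4
m = np.arange(M)
h = np.exp(1j * 2 * np.pi * (30.5 / M) * m)
phi, B, ratio = spatial_signature(h, tau)
print(f"phi = {phi:.5f}, energy ratio = {ratio:.4f}")
```

In this worst-case example the search recovers a rotation of $\pm\pi/M$ that re-aligns the peak with an exact bin, so the $\tau$ selected bins capture essentially all of the energy.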
Meanwhile, the transmission strategy under the ADMA scheme can be divided into three stages: the preamble stage, the UL training stage, and the DL training stage. The aim of the preamble stage is to collect the two parameters of all users and divide the users into different groups according to their spatial signatures. Then, in the UL and DL training stages, we can perform faster estimation than conventional channel estimation methods thanks to the grouping in the preamble stage. The preamble stage is not necessary after each UL and DL training stage; the number of UL and DL training stages after one preamble stage depends on the mobility of the users.
\subsection{Preamble Module}
In the preamble period, we need to find $\phi$ and ${\cal B}^{\rm{ro}}$ for each user so that we can allocate all users into different groups in which the index sets ${\cal B}^{\rm{ro}}$ of the users do not overlap one another.
First we allocate all $K$ users into $G$ groups, each containing $\tau$ users, since the BS can handle up to $\tau$ training sequences. Then we apply conventional UL training for each group, and the received signal matrix of each group at the BS is given by
\begin{equation}
{\mathbf{Y}} = {\mathbf{H}}{{\mathbf{D}}^{1/2}}{{\mathbf{S}}^H} + {\mathbf{N}} = \sum\limits_{i = 1}^\tau {\sqrt {{d_i}} {{\mathbf{h}}_i}{\mathbf{s}}_i^H} + {\mathbf{N}},
\end{equation}
where ${\mathbf{Y}}\in{\mathbb{C}^{M \times L}}$, ${\mathbf{H}}=\left[ {{{\mathbf{h}}_1}, \ldots ,{{\mathbf{h}}_\tau }} \right] \in {\mathbb{C}^{M \times \tau}}$,
${\mathbf{S}} = \left[ {{{\mathbf{s}}_1}, \ldots ,{{\mathbf{s}}_\tau }} \right] \in {\mathbb{C}^{L \times \tau }}$,
${\mathbf{D}} = {\rm{diag}}\left\{\left[ {{{{d}}_1}, \ldots ,{{{d}}_\tau }} \right]\right\}\in {\mathbb{C}^{\tau \times \tau }}$ and ${d_k}={P_k^{\rm{ut}}}/({L\sigma _p^2})$
is used to satisfy the energy constraint (${P_k^{\rm{ut}}}$ is the UL training energy constraint of user-$k$, and $\sigma _p^2$ is the pilot
signal training power), and
${\mathbf{N}}\in{\mathbb{C}^{{M} \times L}}$ is the additive white Gaussian noise matrix. Then ${{\mathbf{h}}_k}$ can be estimated through the least squares (LS) method as
\begin{equation}\label{equ:preamble_2}
{{\mathbf{\hat h}}_k} = \frac{1}{{\sqrt {{d_k}} L\sigma _p^2}}{\mathbf{Y}}{{\mathbf{s}}_k}.
\end{equation}
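The LS estimate in Eq. (\ref{equ:preamble_2}) can be sketched as follows (a minimal noiseless NumPy sketch of ours; the DFT-based pilot construction and the small sizes are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, tau = 16, 8, 4        # illustrative sizes (paper uses M=128, L=64, tau=16)
sigma_p2 = 1.0              # pilot power sigma_p^2

# orthogonal pilots with S^H S = L * sigma_p2 * I (columns of the DFT matrix)
S = np.fft.fft(np.eye(L))[:, :tau] * np.sqrt(sigma_p2)
H = (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau))) / np.sqrt(2)
d = rng.uniform(0.5, 2.0, tau)          # per-user energy scalings d_k

Y = (H * np.sqrt(d)) @ S.conj().T       # noiseless received matrix H D^{1/2} S^H
k = 2
h_hat = Y @ S[:, k] / (np.sqrt(d[k]) * L * sigma_p2)   # Eq. (preamble_2)
```

In the noiseless case the orthogonality of the pilots makes the estimate exact.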
Then we can find $\phi_k$ and ${{\cal B}_k^{\rm{ro}}}$ for each user by adopting Eq. (\ref{twopara}); the specific method is discussed in Section \ref{sec:sim}. After that, we need to allocate all users into $G^{\rm{ul}}$ groups in which the index sets of the users do not overlap, so that users in the same group can share the same training sequence. This can be described as
\begin{equation}\label{equ:grouping}
\left\{
\begin{aligned}
&{\cal B}^{\rm{ro}}_k \cap {\cal B}^{\rm{ro}}_l = \emptyset\\
&\min\left|b_1-b_2\right| \geq \Omega~,~\forall b_1 \in {\cal B}^{\rm{ro}}_k~,~\forall b_2 \in {\cal B}^{\rm{ro}}_l,
\end{aligned}
\right.
\end{equation}
where $\Omega$ is a guard interval that depends on the users' tolerance of the interference caused by pilot reuse. A grouping strategy that is easy for VLSI implementation is presented in Section \ref{sec:hard}.
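The grouping rule in Eq. (\ref{equ:grouping}) can be sketched as a greedy first-fit pass (a behavioral sketch of ours, not the systolic implementation of Section \ref{sec:hard}; note that for $\Omega \geq 1$ the guard-interval condition already implies disjointness):

```python
def group_users(signatures, omega):
    """Greedily pack users into groups so that, within a group, any two
    indices from different users' spatial signatures differ by >= omega
    (which also keeps the index sets disjoint when omega >= 1)."""
    groups = []                       # each group: list of (user, signature)
    for user, sig in enumerate(signatures):
        placed = False
        for g in groups:
            ok = all(abs(b1 - b2) >= omega
                     for _, other in g for b1 in sig for b2 in other)
            if ok:
                g.append((user, sig))
                placed = True
                break
        if not placed:
            groups.append([(user, sig)])
    return [[u for u, _ in g] for g in groups]
```

For example, with signatures $\{0,1\}, \{10,11\}, \{2,3\}, \{20\}$ and $\Omega=3$, users 0, 1 and 3 share a group while user 2 gets its own.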
\subsection{UL Training Module}
In the UL training, all $K$ users send their training sequences to the BS. The received signal matrix at the BS is given by
\begin{equation}
{\mathbf{Y}} = \sum\limits_{i = 1}^{{G^{{\rm{ul}}}}} {\sum\limits_{k \in {{\cal U}}_i^{{\rm{ul}}}} {\sqrt {{d_k}} } } {{\mathbf{h}}_k}{\mathbf{s}}_i^H + {\mathbf{N}}.
\end{equation}
First, we extract the channel vector of group-$g$ through the conventional LS method:
\begin{equation}
{\mathbf{y}}_g = {\frac{1}{{L\sigma_p^2 }}}{\mathbf{Y}}{{\mathbf{s}}_g}.
\end{equation}
Since the two parameters of each user are different, we extract ${{\mathbf{\tilde h}}_k}$ for each user-$k$ through
\begin{equation}\label{equ:UL_1}
{\widehat {\left[ {{{{\mathbf{\tilde h}}}^{{\rm{ro}}}}_k} \right]}_{{{\cal B}}_k^{{\rm{ro}}}}} = {\left[ {{\mathbf{\tilde y}}_{g,k}^{{\rm{ro}}}} \right]_{{{\cal B}}_k^{{\rm{ro}}}}} = {\left[ {\frac{1}{{\sqrt {{d_k}} }}{\mathbf{F\Phi }}({\phi _k}){{\mathbf{y}}_g}}\right]_{{{\cal B}}_k^{{\rm{ro}}}}}.
\end{equation}
Finally, we can recover ${\mathbf{\hat h}}_k$ for user-$k$ by
\begin{equation}\label{equ:UL_2}
{{\mathbf{\hat h}}_k} = \Phi {({\phi _k})^H}{{\mathbf{F}}^H}{\hat{\tilde{\mathbf{h}}}^{{\rm{ro}}}_k} = \Phi {({\phi _k})^H}{\left[ {{{\mathbf{F}}^H}} \right]_{:,{{\cal B}}_k^{{\rm{ro}}}}}{\widehat {\left[ {{{{\mathbf{\tilde h}}}^{{\rm{ro}}}}_k} \right]}_{{{\cal B}}_k^{{\rm{ro}}}}}.
\end{equation}
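Eqs. (\ref{equ:UL_1}) and (\ref{equ:UL_2}) can be sketched together as a rotate--transform--extract--recover round trip (a minimal NumPy sketch of ours for a single user whose rotated channel is exactly $\tau$-sparse; sizes, indices and coefficients are illustrative assumptions):

```python
import numpy as np

M, tau = 32, 4
m = np.arange(M)
phi = np.pi / (3 * M)                      # rotation parameter (illustrative)
F = np.fft.fft(np.eye(M)) / np.sqrt(M)     # unitary DFT matrix
Phi = np.diag(np.exp(-1j * phi * m))       # rotation matrix Phi(phi)

# synthesize a channel that is exactly tau-sparse after rotation + DFT
B = np.array([5, 6, 7, 8])                 # spatial signature B_k^ro
coeff = np.array([1.0, 2.0, 1.5, 0.5]) + 0.3j
h_tilde = np.zeros(M, complex); h_tilde[B] = coeff
h = Phi.conj().T @ F.conj().T @ h_tilde

# Eq. (UL_1): rotate, transform, keep only the entries indexed by B
h_tilde_hat = (F @ Phi @ h)[B]
# Eq. (UL_2): recover h from the tau retained coefficients
h_hat = Phi.conj().T @ F.conj().T[:, B] @ h_tilde_hat
```

Because the channel is supported on $B$ in the rotated DFT domain, the $\tau$ retained coefficients reconstruct it exactly.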
\subsection{DL Training Module and Its Reciprocity}
Based on the reciprocity of ADMA, the DL CSI can be easily obtained from the UL training, as shown in \cite{xie2017unified}. The reciprocity of ADMA comes from the fact that the propagation path of an electromagnetic wave is reciprocal. As a result, the DOA (DOD) of the DL signal is the same as the DOD (DOA) of the UL signal. Assume that the DL spatial signature of user-$k$ is $\overline{{\cal B}^{\rm{ro}}_k}$, which can be determined from the UL spatial signature ${{\cal B}^{\rm{ro}}_k}$ by applying Eq. (\ref{equ:reciprocity}):
\begin{equation}
\sin{\theta_{kp}}=\frac{q\lambda^{\rm{ul}}}{Md}=\frac{\overline{q}\lambda^{\rm{dl}}}{Md},
\end{equation}
where $q$ and $\overline{q}$ are the elements in ${{\cal B}^{\rm{ro}}_k}$ and $\overline{{\cal B}^{\rm{ro}}_k}$, while $\lambda^{\rm{ul}}$ and $\lambda^{\rm{dl}}$ denote the UL and DL carrier wavelengths. Since $\sin{\theta_{kp}}$ is a monotonic function with $\theta_{kp} \in \left[ \textstyle{-}\frac{\pi}{2} , \frac{\pi}{2} \right]$,
the minimum and maximum elements of ${{\cal B}^{\rm{ro}}_k}$ and $\overline{{\cal B}^{\rm{ro}}_k}$ have a one-to-one correspondence, i.e.,
\begin{equation}
\overline{q}_{\rm{min}}=\biggl\lfloor {\frac{\lambda^{\rm{ul}}}{\lambda^{\rm{dl}}}q_{\rm{min}}}\biggr\rfloor,~~~\overline{q}_{\rm{max}}=\biggl\lceil {\frac{\lambda^{\rm{ul}}}{\lambda^{\rm{dl}}}q_{\rm{max}}}\biggr\rceil,
\end{equation}
where $q_{\rm{min}} \leq q \leq q_{\rm{max}},~\forall q \in {{\cal B}^{\rm{ro}}_k}$. Similarly, ${\overline{\phi}}_k$ can be calculated as ${\overline{\phi}}_k = (\lambda^{\rm{ul}}/\lambda^{\rm{dl}}) \phi_k$.
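The floor/ceil mapping above can be sketched as follows (a minimal sketch of ours; the wavelength values in the example are arbitrary, not taken from the paper):

```python
import math

def dl_signature_bounds(q_min, q_max, lam_ul, lam_dl):
    """Map the UL spatial-signature extremes to the DL ones via the
    UL/DL wavelength ratio, with floor/ceil as in the text."""
    r = lam_ul / lam_dl
    return math.floor(r * q_min), math.ceil(r * q_max)
```

For instance, with $\lambda^{\rm ul}=0.158$ m and $\lambda^{\rm dl}=0.150$ m, the UL extremes $(10, 15)$ map to the DL extremes $(10, 16)$.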
The DL training is mostly the same as the UL training except for the grouping strategy. In the DL training module, users with identical spatial signatures can be served by the same beamforming vectors simultaneously, so they can share the same training sequence. Meanwhile, users whose spatial signatures do not overlap can also share the same training sequence, as in the UL grouping strategy. As a result, our DL training strategy has two steps. First we allocate users with identical spatial signatures into the same cluster. Then we allocate these clusters into different groups through Eq. (\ref{equ:grouping}). The rest of the transmission and estimation is the same as in the UL training module.
With the successful algorithm partition, we are now able to carry out the detailed implementation-aware algorithm optimization and module-wise architecture design as follows.
\section{Approximation and Quantization}\label{sec:sim}
For simulations, the mean square error (MSE) is calculated as follows:
\begin{equation}
{\text{MSE}} = \frac{{\mathbb{E}}\{{|| {\mathbf{h}}_k - {\mathbf{\hat h}}_k|| }^2\}}{{\mathbb{E}}\{{|| {\mathbf{h}}_k || }^2\}}.
\end{equation}
For comparison, the system parameters are set as $M=128$, $K=32$, $L=64$, $\tau=16$, ${\theta_k} \in \{-48.59^{\circ}, -14.48^{\circ}, 48.59^{\circ}, 14.48^{\circ}\}$ and $\Delta \theta_k = 2^{\circ}$, which are consistent with those in \cite{xie2017unified}.
\subsection{Approximation for Sliding Window Method}
The authors of \cite{xie2017unified} proposed a basic way to find ${{\cal B}_k^{\rm{ro}}}$ for user-$k$ by adopting a one-dimensional search over $\phi_k \in \left[ -{\textstyle{\frac{\pi}{M}}},{\textstyle{\frac{\pi}{M}}} \right]$ and, for each candidate $\phi_k$,
sliding a window of size $\tau$ over the $M$ elements of ${\mathbf{\tilde h}_k}$ to determine the ${{\cal B}_k^{\rm{ro}}}$ that maximizes the channel power ratio. However, there are two main problems with this method. The first is that searching over $\phi_k \in \left[ -{\textstyle{\frac{\pi}{M}}},{\textstyle{\frac{\pi}{M}}} \right]$ cannot be carried out directly, since it is a continuous interval.
Fig. \ref{fig:epsN} shows the MSE obtained when $N$ separate candidate values are chosen from ${\left[ -{\frac{\pi}{M}}, {\frac{\pi}{M}} \right]}$. As we can see, the MSE for $N = 3$ is nearly the same as that for $N>3$, so $N = 3$ turns out to be suitable for VLSI implementation.
\begin{figure}[htbp]
\centering
\includegraphics[width = .8\linewidth]{figures/N.pdf}
\caption{MSE results of different $N$.}
\label{fig:epsN}
\end{figure}
The second problem is that this method introduces considerable latency and increases the computational complexity, since an accumulator and a divider are needed. We therefore use some approximations to lower the complexity:
\begin{itemize}
\item The first is to find the maximum element of $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ for each candidate $\phi_k$ and determine the best ${b_k^{\rm{ro}}}$ and $\phi_k$ for user-$k$ from the largest of these elements over all candidate $\phi_k$.
\item The second is to find the largest and second largest elements of $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ and compute the sum of squares of these two elements for each candidate $\phi_k$. The best ${b_k^{\rm{ro}}}$ and $\phi_k$ are determined from the largest such sum over all candidate $\phi_k$ (the index ${b_k^{\rm{ro}}}$ is the mean of the indices of the two largest elements of $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ for that $\phi_k$).
\item The third is to find the maximum element of $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ and compute the sum of squares of the $\tau$ consecutive elements centered on the maximum element for each candidate $\phi_k$. The best ${b_k^{\rm{ro}}}$ and $\phi_k$ are determined from the largest such sum over all candidate $\phi_k$ (the index ${b_k^{\rm{ro}}}$ comes from the largest element of $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ for that $\phi_k$).
\end{itemize}
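The window-scoring step of the third approximation can be sketched as follows (a minimal NumPy sketch of ours for one candidate $\phi_k$; array sizes are illustrative, and the candidate-$\phi_k$ loop is omitted):

```python
import numpy as np

def spatial_signature(h_abs, tau):
    """Third approximation: centre a window of tau consecutive elements on
    the largest entry of |h~^ro| and score it by the windowed sum of
    squares; the best phi_k maximizes this score."""
    M = len(h_abs)
    b = int(np.argmax(h_abs))
    idx = (b - tau // 2 + np.arange(tau)) % M   # wrap around the DFT grid
    return b, float(np.sum(h_abs[idx] ** 2))
```

For example, a length-16 vector whose entries at indices $4,5,6$ are $1,3,2$ yields $b_k^{\rm ro}=5$ with score $14$.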
Fig. \ref{fig:epsswvsmax} shows the MSE increase of the approximations above. The performance of the third approximation is nearly the same as that of the basic method, while saving a divider. Meanwhile, the performance loss of the first and second approximations is larger but still acceptable, with only one or two comparators needed.
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.8\linewidth]{figures/slvs.pdf}
\caption{MSE of different methods to determine the spatial signature of user.}
\label{fig:epsswvsmax}
\end{figure}
\subsection{Quantization Scheme}
For quantization, the variables are quantized with $1$ sign bit, $p$ integer bits, and $q$ fractional bits, which is expressed as fixed [$1,p,q$]. The integer width $p$ is usually determined by the probability density function (PDF) of the data. In our algorithm, the largest value must be less than $2^p$, and the largest element appears in $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$, which carries the channel state information. Here we show the statistics of the largest values of ${\mathbf{h}}_k$, ${{\mathbf{\tilde h}}^{{\rm{ro}}}}_k$ and $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$ over a large number of realizations.
According to these statistics, the largest value is less than $2^8$, so the integer width of the variables is set to $8$.
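The fixed [$1,p,q$] format can be sketched as follows (a minimal sketch of ours with saturation; round-to-nearest is an assumption, as the hardware may truncate instead):

```python
def quantize(x, p=8, q=6):
    """Fixed [1, p, q] quantization: 1 sign bit, p integer bits, q
    fractional bits, saturating at the edges of the representable range."""
    step = 2.0 ** (-q)                     # fractional resolution
    lo, hi = -(2.0 ** p), 2.0 ** p - step  # representable range
    v = round(x / step) * step
    return min(max(v, lo), hi)
```

For example, with fixed [$1,8,6$] the value $3.14159$ maps to $3.140625$, and $300$ saturates to $255.984375$.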
\begin{figure}[htbp]
\centering
\subfigure[Statistics of the largest values of ${\mathbf{h}}_k$.]{
\includegraphics[width = 0.46\linewidth]{figures/h_k.pdf}
}
\subfigure[Statistics of the largest values of ${{\mathbf{\tilde h}}^{{\rm{ro}}}}_k$.]{
\label{fig:epsfix1}
\includegraphics[width = 0.46\linewidth]{figures/fft_h_k.pdf}
}
\subfigure[Statistics of the largest values of $\left| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k \right|$.]{
\label{fig:epsfix2}
\includegraphics[width = .8\linewidth]{figures/abs_fft_h_k.pdf}
}
\caption{Statistics of the largest values of ${\mathbf{h}}_k$, ${{\mathbf{\tilde h}}^{{\rm{ro}}}}_k$ and $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$. The maximum value appears in $| {{\mathbf{\tilde h}}^{{\rm{ro}}}}_k |$. }
\label{fig:epsfix:old}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width = .8\linewidth]{figures/fix.pdf}
\caption{MSE results of double-precision floating-point and fixed-point simulations.}
\label{fig:epsfix}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width = \linewidth]{figures/topland.pdf}
\caption{The overall hardware architecture of channel estimation under ADMA scheme. The number of preamble processing module is equal to the number of training sequences $\tau$ and the number of UL estimation module is equal to the number of users in order to achieve the highest processing speed. } \label{fig:pdftop}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width = .9\linewidth]{figures/topwithoutro.pdf}
\caption{The overall hardware architecture without rotation operations. } \label{fig:pdftopwithoutro}
\end{figure*}
In order to determine the fractional width of the variables, the MSE of double-precision floating-point and fixed-point simulations is illustrated in Fig. \ref{fig:epsfix}. From Fig. \ref{fig:epsfix}, the MSE of the fixed [$1,8,6$] and fixed [$1,8,7$] simulations is almost the same, with a slight degradation compared with the double-precision floating-point simulation. However, the MSE of the fixed [$1,8,5$] simulation is relatively far from the floating-point one. As a result, the quantization scheme fixed [$1,8,6$] is preferred for hardware implementation.
\section{Pipelined Architecture}\label{sec:hard}
For channel estimation under the ADMA scheme, the operations are conducted on $M$-dimensional vectors and matrices, where $M$ is large. For low complexity and high processing speed, a pipelined architecture is demonstrated in Fig. \ref{fig:pdftop}. In addition, the quantization scheme fixed [$1,8,6$] is employed, together with $\phi_k \in {\{ -{\textstyle{\frac{\pi}{M}}}, 0, {\textstyle{\frac{\pi}{M}}}\}}$.
Our design has two stages controlled by a 1-to-2 switch. Stage 1 consists of the pre-treatment module, the preamble processing module and the UL-grouping module, corresponding to Eqs. (\ref{equ:preamble_2}) and (\ref{equ:FFT}). Stage 2 comprises the pre-treatment module and the UL-estimation module, corresponding to Eqs. (\ref{equ:UL_1}) and (\ref{equ:UL_2}).
\subsection{Module Design}
\subsubsection{Pre-treatment Module}
The pre-treatment module can be reused since the preamble module and the UL-estimation module are processed in different time slots. The pre-treatment module consists of a data buffer and an LS-based estimation module. The LS-based estimation module, corresponding to Eq. (\ref{equ:preamble_2}), can be implemented by a systolic structure \cite{urquhart1984systolic}, an efficient processing method for matrix-vector multiplication, whose data flow graph is shown in Fig. \ref{fig:systolic}. Each processing element (PE) performs one complex multiplication and one complex addition, and corresponds to one element of the training sequence ${\mathbf{s}}_k$ and one column of the received data matrix ${\mathbf{Y}}$. The data buffer is needed to align the data transmission, because ${\mathbf{Y}}$ is received column by column (i.e., we receive the $M$ elements of one column of ${\mathbf{Y}}$ in one clock period).
\begin{figure}[htbp]
\centering
\includegraphics[width = .95\linewidth]{figures/systolic.pdf}
\caption{Systolic structure of LS-based estimation module. }
\label{fig:systolic}
\end{figure}
\subsubsection{Fast Fourier Transform (FFT) Module}
Eq. (\ref{equ:FFT}) can be divided into two steps. One is a diagonal matrix-vector multiplication, which can be implemented by a complex multiplier and a $\Phi$-generator that outputs the diagonal elements of ${\Phi}(\phi_k)$ in pipeline. The other is the DFT, which can be implemented by Fast Fourier Transform (FFT) processors, reducing the computational complexity to ${\cal O}(M{\log_2}M)$. There are plenty of FFT structures that emphasize either higher processing speed or lower resource overhead \cite{ayinala2012pipelined,cheng2007high,chang2008efficient}. For higher hardware efficiency, the single-path feedback pipelined architecture is employed, as shown in Fig. \ref{fig:fft}, where the number of registers is the smallest thanks to the use of multiplexers.
\subsubsection{Up-link Grouping module}
\begin{figure*}[htbp]
\centering
\includegraphics[width = .95\linewidth]{figures/mergingnetwork.pdf}
\caption{Merging network structure of $N$-element sorting.}
\label{fig:mergingnetwork}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[width = \linewidth]{figures/fft.pdf}
\caption{Feed-back pipelined hardware architecture of FFT module.}
\label{fig:fft}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[Systolic structure of Grouping module.]{
\includegraphics[width = \linewidth]{figures/Grouping.pdf}
}
\subfigure[The structure of Compare PE.]{
\label{fig:Groupingpe}
\includegraphics[width = .65\linewidth]{figures/Groupingpe.pdf}
}
\caption{Systolic structure of Grouping module.}
\label{fig:Grouping}
\end{figure}
In the Up-link Grouping module, there are two main submodules: sorting and grouping. The sorting module is implemented by a pipelined merging network \cite{batcher1968sorting}, shown in Fig. \ref{fig:mergingnetwork}. This sorting network is based on recursion, merging from 2-element comparison up to $N$-element comparison (assuming $\log_2 N$ is a positive integer). The merger-$N$ module is a combination of a symmetric comparing network and two bitonic sorters of $N/2$ elements; each bitonic sorter-$N/2$ can in turn be implemented by a half cleaner-$N/2$ module and two bitonic sorter-$N/4$ modules. The grouping module is implemented by the systolic structure shown in Fig. \ref{fig:Grouping}, where each comparing PE corresponds to a group and decides whether the latest input ${b_k^{\rm{ro}}}$ fits the group by comparing it with the latest ${b_l^{\rm{ro}}}$ in that group.
Since the output of the sorting module is parallel and the input of the grouping module is serial, a parallel-to-serial module is necessary.
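The recursive structure of the merging network can be sketched as follows (a behavioral software sketch of ours mirroring Fig. \ref{fig:mergingnetwork}, not the RTL; it assumes the input length is a power of two):

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sorter: sort the two halves in opposite
    directions, then merge the resulting bitonic sequence."""
    n = len(a)
    if n == 1:
        return a
    first = bitonic_sort(a[: n // 2], True)
    second = bitonic_sort(a[n // 2 :], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    """Half-cleaner stage followed by two half-size merges, as in the
    half cleaner-N/2 + two bitonic sorter-N/4 decomposition."""
    n = len(a)
    if n == 1:
        return a
    for i in range(n // 2):                      # half-cleaner comparators
        if (a[i] > a[i + n // 2]) == ascending:
            a[i], a[i + n // 2] = a[i + n // 2], a[i]
    return (bitonic_merge(a[: n // 2], ascending)
            + bitonic_merge(a[n // 2 :], ascending))
```

Each comparator in the software loop corresponds to one compare-exchange element of the hardware network.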
\begin{Rem}
Notice that the grouping messages are sent to users through an independent feedback channel, which is not contained in our hardware design.
\end{Rem}
\subsubsection{Up-link estimation module}
In the Up-link estimation module, Eq. (\ref{equ:UL_1}) is implemented by a combination of a complex multiplier, an FFT module and an extraction module, while Eq. (\ref{equ:UL_2}) consists of an Inverse Fast Fourier Transform (IFFT) module and a complex multiplier. Due to the sparsity of ${\mathbf{\tilde h}}_k$, the IFFT can be treated as an $M \times \tau$ matrix times $\tau \times 1$ vector multiplication, which is implemented by a systolic structure consisting of $\tau$ PEs for higher efficiency.
\subsection{Optimized Architecture without Rotation}
As we can see from Fig. \ref{fig:pdftop}, the FFT modules of the preamble processing module and the Up-link estimation module could be reused since they are not deployed at the same time. However, the spatial signatures of users in the same group differ, which wastes FFT modules. Here we find that we can simply omit the rotation operations, as in the architecture shown in Fig. \ref{fig:pdftopwithoutro}, which reuses the FFT module and reduces the number of FFT modules from $\tau + K$ to $\tau$, greatly saving resources.
\begin{figure*}[htbp]
\centering
\includegraphics[width = \linewidth]{figures/timeoptimistic.pdf}
\caption{The processing schedule for the system.} \label{fig:timeoptimistic}
\end{figure*}
\begin{table*}[htbp]
\centering
\caption{Resource Cost of the Proposed Estimator}
\begin{tabular}{ l l l l l }
\Xhline{1.0pt}
Modules & Complex Multipliers & Complex Adders & Real Comparators & Registers \\
\hline
\rowcolor{mygray}
LS-based Estimation & $L$ & $L$ & 0 & $L-1$ \\
FFT & ${\log _{_2}}M-1$ & $2{\log _{_2}}M$ & 0 & $M-1$ \\
\rowcolor{mygray}
ABS & 1 & 0 & 0 & 0 \\
Max-selection & 0 & 0 & 1 & 1 \\
\rowcolor{mygray}
Sorting & 0 & 0 & $K{\log _2}K$ & $K{\log_2 K(\log_2 K + 1)}/{2}$ \\
Grouping & 0 & 1 & $\tau$ & $2\tau$ \\
\rowcolor{mygray}
Extraction & 0 & 0 & 1 & 0 \\
IFFT & $\tau$ & $\tau$ & 0 & $\tau-1$ \\
\Xhline{1.0pt}
\end{tabular}
\label{tab:Resource}
\end{table*}
\subsection{Processing Schedule and Overhead Analysis}
For channel estimation under the ADMA scheme, the timing of the entire design is shown in Fig. \ref{fig:timeoptimistic}, where $T_s$ is the clock cycle. We can see that each module is processed in pipeline except the UL-grouping module. The timing of the optimized architecture without rotation is the same as that shown in Fig. \ref{fig:timeoptimistic}. The resource statistics of each module are listed in Table \ref{tab:Resource}. In addition, the latency and processing time of each module are listed in Table \ref{tab:latency}. Here, ``Latency'' is associated with one data package, and ``Processing time'' is associated with $M$ data packages. Notice that $P$ is an integer between $0$ and $M-1$ determined by the spatial signature of each user.
\begin{table}[htbp]
\centering
\caption{Latency and Processing Time}
\begin{tabular}{ l l l }
\Xhline{1.0pt}
Modules & Latency ($T_s$) & Processing time ($T_s$) \\
\hline
\rowcolor{mygray}
LS-based Estimation & $L-1$ & $L+M$ \\
FFT & $M-1$ & $2M-1$ \\
\rowcolor{mygray}
Max-Selection & $M$ & $M$ \\
Sorting & - & $\left.{\log_2 K(\log_2 K + 1)}/{2}\right.$ \\
\rowcolor{mygray}
Grouping & - & $K+\tau$ \\
Extraction & $P$ & $P$ \\
\rowcolor{mygray}
IFFT & $\tau$ & $M+\tau$ \\
\Xhline{1.0pt}
\end{tabular}
\label{tab:latency}
\end{table}
\subsection{FPGA Implementation Results}
In order to demonstrate the advantage of channel estimation under the ADMA scheme, our architectures are implemented on a Xilinx Virtex-7 Ultrascale vu440-flga2892-2-e FPGA. For ease of implementation, the parameters are set as $M=128$, $K=16$, $L=4$, $\tau=4$, ${\theta_k} \in \{-48.59^{\circ}, -14.48^{\circ}, 48.59^{\circ}, 14.48^{\circ}\}$ and $\Delta \theta_k = 2^{\circ}$. The resource overhead and maximum frequency are shown in Table \ref{tab:FPGA}. We can see that omitting the rotation operations brings a $54\%$ reduction in LUTs, $57\%$ in registers, $55\%$ in block RAMs and $60\%$ in DSPs. As for the timing constraints, since the critical path lies in the FFT module, the maximum frequency of both architectures reaches $217.39$ MHz.
\begin{table}[htbp]
\centering
\caption{FPGA Implementation Results}
\begin{tabular}{ l l l }
\Xhline{1.0pt}
Structures & With Rotation & Without Rotation \\
\hline
\rowcolor{mygray}
LUTs & $52,416$ & $24,130$ \\
Registers & $90,191$ & $38,464$ \\
\rowcolor{mygray}
Block RAMs & $220$ & $100$ \\
DSPs & $1,092$ & $432$ \\
\rowcolor{mygray}
Frequency (MHz) & $217.39$ & $217.39$ \\
\Xhline{1.0pt}
\end{tabular}
\label{tab:FPGA}
\end{table}
\section{Conclusions}\label{sec:con}
In this paper, a hardware-efficient channel estimator based on the ADMA scheme has been proposed for the first time. The corresponding optimizations on quantization and approximation are presented as well. To achieve high efficiency and low complexity, pipelining and systolic structures have been employed to tailor the architecture for regularity. Finally, FPGA implementation results are given, together with suggestions on the choice of rotation. Future work will be directed towards its application in our 5G Cloud Testbed.
\footnotesize
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper we study the supremum distribution of a spectrally positive or negative L\'evy process with a piecewise linear drift. We find exact formulas for the distribution of the supremum, expressed in terms of the one-dimensional densities of the given L\'evy process. The results can be applied to find ruin probabilities in the case when two insurance companies (or two branches of the same company) divide between them both claims and premia in some specified proportions (proportional reinsurance).
Moreover, the formulas can be used for a two-node tandem queue (see Lieshout and Mandjes \cite{li:ma:07}). Avram et al. \cite{av:pa:pi:08:a} investigate a spectrally positive L\'evy process with a broken drift (reducing the risk problem to one dimension) and find the double Laplace transform of the infinite time survival probability. As an example they obtain an explicit analytical representation of the infinite time survival probability when the claims are exponentially distributed (a compound Poisson process with exponentially distributed claims).
In Avram et al. \cite{av:pa:pi:08} a related problem is investigated, where the accumulated claim amount is modeled by a L\'evy process that admits negative exponential
moments. They find exact formulas for ruin probabilities expressed by ordinary ruin probabilities when the accumulated claim amount process is spectrally negative or a compound Poisson process with exponential claims. Additionally, they find the asymptotic behavior of ruin probabilities under the Cram\'er assumption. In Foss et al. \cite{fo:ko:pa:ro:17} the same problem as in Avram et al. \cite{av:pa:pi:08} is investigated, but subexponential claims are admitted and the asymptotic behavior of ruin probabilities on finite and infinite time horizons is found.
In the models analyzed in this contribution we assume that the accumulated claim amount process is a spectrally positive or a spectrally negative L\'evy process with one-dimensional density functions.
We find exact formulas for ruin probabilities expressed by the one-dimensional densities of the underlying L\'evy process. The main difference between our models and those of Avram et al. \cite{av:pa:pi:08} is that we admit heavy tailed claims and we provide explicit formulas for ruin probabilities on both finite and infinite time horizons, unlike Avram et al. \cite{av:pa:pi:08:a} and Avram et al. \cite{av:pa:pi:08}, where this is done only on the infinite time horizon.
The layout of the rest of the article is the following. In this section we recall the formulas which will be used in the main results. The next section contains the main results, that is, the distribution of the supremum of a L\'evy process with a broken drift, together with examples. In Section \ref{sec3} we outline how to apply the main results to ruin probabilities for two collaborating insurance companies.
The last section deals with identities for the Laplace transform of the distribution of the supremum of
L\'evy processes with a randomly broken drift and on random intervals.
In Michna et al. \cite{mi:pa:pi:15} the joint distribution of the random variable $Y(T)$
and $\inf_{t< T}Y(t)$ was found, where $Y$ is a spectrally negative L\'evy process (we consider real stochastic processes with time defined on the non-negative half-line).
\begin{theorem}\label{tacneginf}
If $Y$ is a spectrally negative L\'evy process and the one-dimensional distributions of $Y$ are absolutely continuous then
$$
{\rm I\hspace{-0.8mm}P} (\inf_{t< T} Y(t)< -u, \,Y(T)+u\in \td z)=
\td z\int_0^T\frac{z}{T-s}\, p(z, T-s)\,p(-u,s)\td s,
$$
where $T, u>0$, $z\geq 0$ and $p(x,s)$ is a density function of $Y(s)$ for $s>0$.
\end{theorem}
\begin{remark}
We do not expose a linear drift of the process $Y$ but it can be incorporated in the process $Y$.
\end{remark}
If $X$ is a spectrally positive L\'evy process then $X=-Y$ and we get the following corollary.
\begin{corollary}\label{jsp}
If $X$ is a spectrally positive L\'evy process and the one-dimensional distributions of $X$ are absolutely continuous then
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P} (\sup_{t< T} X(t)\leq u, \,X(T)\in \td z)=}\\
&&\left[f(z,T)-\int_0^T\frac{u-z}{T-s}\, f(z-u, T-s)\,f(u,s)\td s\right]\td z,
\end{eqnarray*}
where $T, u>0$, $z\in (-\infty, u]$ and $f(x,s)$ is a density function of $X(s)$ for $s>0$.
\end{corollary}
Integrating the last formula with respect to $z$ we get the following theorem (see Michna et al. \cite{mi:pa:pi:15} and Michna \cite{mi:11}).
\begin{theorem}\label{mi}
If the one-dimensional distributions of $X$ are absolutely continuous then
\begin{equation}\label{mi1}
{\rm I\hspace{-0.8mm}P}(\sup_{t< T} X(t)>u)
={\rm I\hspace{-0.8mm}P}(X(T)>u)+\int_0^T\frac{{\rm I\hspace{-0.8mm}E}(X(T-s))^-}{T-s}\,
f(u,s)\,{\rm d}s\,,
\end{equation}
where $x^-=-\min\{x,0\}$ and $f(u,s)$ is a density function of $X(s)$ for $s>0$.
\end{theorem}
\begin{remark}
The above formula extends the result of Tak\'acs \cite{ta:65} to L\'evy processes with infinite variation.
\end{remark}
Let us now find the joint distribution of supremum and the value of the process for any spectrally negative L\'evy process. It will easily follow from Corollary \ref{jsp} and the duality lemma.
\begin{corollary}\label{refy}
If $Y$ is a spectrally negative L\'evy process and the one-dimensional distributions of $Y$ are absolutely continuous then
$$
{\rm I\hspace{-0.8mm}P} (\sup_{t< T} Y(t)\leq u, \,Y(T)\in \td z)=
\left[p(z,T)-u\int_0^T\frac{p(u, T-s)}{T-s}\,p(z-u,s)\td s\right]\td z\,,
$$
where $T, u>0$, $z\in (-\infty, u]$ and $p(x,s)$ is a density function of $Y(s)$ for $s>0$.
\end{corollary}
\noindent {\bf Proof:} \
By the duality lemma (see e.g. Bertoin \cite{be:96}) we have that\\ $X((T-t)-)-X(T)\stackrel{d}{=}Y(t)$ in the sense of finite dimensional distributions for $t\leq T$ ($X(t-)$ means the left-hand side limit at $t$). Thus we get
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P} (\sup_{t< T} X(t)\leq u, \,X(T)\in \td z)=}\\
&&{\rm I\hspace{-0.8mm}P} (\sup_{t< T} X((T-t)-)\leq u, \,X(T)\in \td z)\\
&&={\rm I\hspace{-0.8mm}P} (\sup_{t< T} (X((T-t)-)-X(T))\leq u-z, \,X(T)\in \td z)\\
&&={\rm I\hspace{-0.8mm}P} (\sup_{t< T} Y(t)\leq u-z, \,-Y(T)\in \td z)\,.
\end{eqnarray*}
Substituting $u'=u-z$ and $z'=-z$ and using Corollary \ref{jsp} we obtain the formula.
\newline\vspace{3mm}\hfill $\Box$
Integrating the last formula with respect to $z$ we could get a result similar to Eq. (\ref{mi1}) for spectrally negative L\'evy processes. However, we obtain a simpler formula from Kendall's identity (see Kendall \cite{ke:57}).
The following theorem can be found in a more general form in Tak\'acs \cite{ta:65} (see also Michna \cite{mi:13} for the distribution of supremum for spectrally negative L\'evy processes).
\begin{theorem}\label{supy}
If $Y$ is a spectrally negative L\'evy process and the one-dimensional distributions of $Y$ are absolutely continuous then
$$
{\rm I\hspace{-0.8mm}P} (\sup_{t< T} Y(t)>u)=u\int_0^T\frac{p(u,s)}{s}\,\td s\,,
$$
where $p(u,s)$ is the density function of $Y(s)$.
\end{theorem}
\noindent {\bf Proof:} \
It follows directly from Kendall's identity (see Kendall \cite{ke:57} or e.g. Sato \cite{sa:99} Th. 46.4).
\newline\vspace{3mm}\hfill $\Box$
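As a quick numerical sanity check (ours, not part of the original argument), the formula of Theorem \ref{supy} can be verified for standard Brownian motion, a spectrally negative L\'evy process with absolutely continuous marginals, where the left-hand side is known in closed form from the reflection principle: ${\rm I\hspace{-0.8mm}P}(\sup_{t<T} B(t)>u)=2(1-\Phi(u/\sqrt{T}))$.

```python
import math

def p(u, s):
    """N(0, s) density of B(s) at u."""
    return math.exp(-u * u / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)

def sup_prob_kendall(u, T, n=20000):
    """Midpoint-rule quadrature of u * int_0^T p(u, s)/s ds (Theorem supy)."""
    h = T / n
    return u * h * sum(p(u, (i + 0.5) * h) / ((i + 0.5) * h) for i in range(n))

def sup_prob_reflection(u, T):
    """Closed form 2 * (1 - Phi(u / sqrt(T))) from the reflection principle."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(u / math.sqrt(2.0 * T))))
```

The two quantities agree up to the quadrature discretization error; the integrand vanishes as $s\to 0$, so the midpoint rule is well behaved.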
\section{Main results and examples}
In this section we analyze the distribution of supremum for both $X(t)-c(t)$ and $Y(t)-c(t)$ where $X$ is a spectrally positive L\'evy process and $Y$ is a spectrally negative L\'evy process and
\begin{equation}\label{d}
c(t)=
\left\{\begin{array}{ll}
c_1 t &\mbox{if }\, t\in[0, T]\\
c_2(t-T)+c_1 T&\mbox{if }\, t\in(T, \infty)\,,
\end{array}
\right.
\end{equation}
where $c_1, c_2\geq 0$.
Since we now expose the drift of the process we will assume that densities of $X(s)$ and $Y(s)$ are
$f(x,s)$ and $p(x,s)$, respectively (unlike the previous section where a linear drift could be incorporated in the processes).
\begin{theorem}\label{main}
If $S>T$ ($S$ is finite or $S=\infty$) and $X(t)$ is absolutely continuous with density $f(x,t)$ then
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P}(\sup_{t<S}(X(t)-c(t))>u)=A+B\coloneqq}\\
&&{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)>u)\\
&&+\,{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)\leq u, \sup_{0<t<S-T}(X(t+T)-X(T)-c_2 t)>u-X(T)+c_1 T)\,,
\end{eqnarray*}
where
$$
A={\rm I\hspace{-0.8mm}P}(X(T)-c_1 T>u)+\int_0^T\frac{{\rm I\hspace{-0.8mm}E}(X(T-s)-c_1 (T-s))^-}{T-s}\,
f(u+c_1 s,s)\,{\rm d}s
$$
and
\begin{eqnarray*}
B&=&\int_{0}^\infty {\rm I\hspace{-0.8mm}P}(\sup_{t< S-T}(X(t)-c_2 t)>z)f(-z+u+c_1 T,T)\td z\\
&&-\int_{0}^\infty z\,{\rm I\hspace{-0.8mm}P}(\sup_{t< S-T}(X(t)-c_2 t)>z)\td z\\
&&\,\,\,\,\,\,\cdot\int_{0}^T\frac{f(u+c_1 s,s)}{T-s}f(-z+c_1(T-s),T-s)\td s\,.
\end{eqnarray*}
\end{theorem}
\noindent {\bf Proof:} \
We obtain the decomposition $A+B$ as follows:
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P}(\sup_{t<S}(X(t)-c(t))>u)=A+B\coloneqq}\\
&&{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)>u)\\
&&+\,{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)\leq u, \sup_{T<t<S}(X(t)-c_2(t-T)-c_1T)>u)\\
&&={\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)>u)\\
&&+\,{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(X(t)-c_1 t)\leq u, \sup_{0<t<S-T}(X(t+T)-X(T)-c_2 t)>u-X(T)+c_1 T)\,.
\end{eqnarray*}
The formula for $A$ follows directly from Theorem \ref{mi}.
Let $F(\td x, \td z)$ be the joint distribution of $(\sup_{t< T}(X(t)-c_1 t), X(T)-c_1 T)$.
Then the formula for $B$ follows from the strong Markov property and Corollary \ref{jsp}; that is,
\begin{eqnarray*}
B&=&\int_0^u\int_{-\infty}^u{\rm I\hspace{-0.8mm}P}(\sup_{t<S-T}(X(t)-c_2 t)>u-z)F(\td x, \td z)\\
&=&\int_{-\infty}^u{\rm I\hspace{-0.8mm}P}(\sup_{t<S-T}(X(t)-c_2 t)>u-z)f(z+c_1 T, T)\td z\\
&&-\int_{-\infty}^u{\rm I\hspace{-0.8mm}P}(\sup_{t<S-T}(X(t)-c_2 t)>u-z)\td z\\
&&\cdot\int_0^T\frac{u-z}{T-s}f(z-u+c_1 (T-s), T-s)f(u+c_1s, s)\td s
\end{eqnarray*}
and substituting $z'=u-z$ we obtain the final formula.
\newline\vspace{3mm}\hfill $\Box$
Similarly, we obtain a formula for spectrally negative L\'evy processes.
\begin{theorem}
If $S>T$ ($S$ is finite or $S=\infty$) and $Y(t)$ is absolutely continuous with density $p(x,t)$ then
${\rm I\hspace{-0.8mm}P}(\sup_{t<S}(Y(t)-c(t))>u)=A+B$
where
$$
A={\rm I\hspace{-0.8mm}P}(\sup_{t< T}(Y(t)-c_1 t)>u)=u\int_0^T\frac{p(u+c_1s,s)}{s}\,\td s
$$
and
\begin{eqnarray*}
B&=&\int_{0}^\infty {\rm I\hspace{-0.8mm}P}(\sup_{t< S-T}(Y(t)-c_2 t)>z)p(-z+u+c_1 T,T)\td z\\
&&-u\,\int_{0}^\infty {\rm I\hspace{-0.8mm}P}(\sup_{t< S-T}(Y(t)-c_2 t)>z)\td z\\
&&\,\,\,\,\,\,\cdot\int_{0}^T\frac{p(-z+c_1s, s)}{T-s}\,p(u+c_1(T-s), T-s)\td s\,.
\end{eqnarray*}
\end{theorem}
\noindent {\bf Proof:} \
Using Corollary \ref{refy} and Th. \ref{supy} we proceed in the same way as in the proof of Th. \ref{main}.
\newline\vspace{3mm}\hfill $\Box$
The application of Th. \ref{main} leads to the following example with Brownian motion (see Mandjes \cite{ma:04} and Lieshout and Mandjes \cite{li:ma:07} or Avram et al. \cite{av:pa:pi:08}).
\begin{example}\label{brexpl}
If $W$ is the standard Brownian motion then
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(W(t)-c(t))>u)=}\\
&&\Phi(-uT^{-1/2}-c_1\sqrt{T})+e^{-2c_1u}\Phi(-uT^{-1/2}+c_1\sqrt{T})\\
&&\,\,\,\,\,\,+e^{-2c_2(u+c_1T-c_2T)}\Phi(uT^{-1/2}+(c_1-2c_2)\sqrt{T})\\
&&\,\,\,\,\,\,-e^{2(c_2-c_1)u+2c_2^2T-2c_1c_2T}\Phi(-uT^{-1/2}+(c_1-2c_2)\sqrt{T})\,.
\end{eqnarray*}
Indeed using Theorem \ref{main} and
$$
{\rm I\hspace{-0.8mm}P}(\sup_{t< T}(W(t)-ct)>u)=\Phi(-uT^{-1/2}-c\sqrt{T})+e^{-2cu}\Phi(-uT^{-1/2}+c\sqrt{T})
$$
and
$$
{\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(W(t)-ct)>u)=e^{-2cu}
$$
for $c\geq 0$ (see e.g. Dębicki and Mandjes \cite{de:ma:15}) we get
\begin{equation}\label{abbr}
{\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(W(t)-c(t))>u)=A+B\,,
\end{equation}
where
\begin{equation}\label{ab}
A=A(c_1, T, u)\coloneqq\Phi(-uT^{-1/2}-c_1\sqrt{T})+e^{-2uc_1}\Phi(-uT^{-1/2}+c_1\sqrt{T})
\end{equation}
and
\begin{eqnarray*}
\lefteqn{B=}\\
&&e^{-2c_2(u+c_1T-c_2T)}\Phi(uT^{-1/2}+(c_1-2c_2)\sqrt{T})\\
&&\,\,\,\,\,-\,\frac{e^{-c_1u-c_1^2T/2}}{2\pi}\int_{0}^\infty ze^{(c_1-2c_2)z}\td z\int_0^T(T-s)^{-3/2}s^{-1/2}e^{-\frac{z^2}{2(T-s)}-\frac{u^2}{2s}}\td s\,.
\end{eqnarray*}
Let us take $c=c_1=c_2\geq 0$ in eq. (\ref{abbr}). Then $A+B=e^{-2uc}$, and the second summand
of $A$ and the first one of $B$ sum to $e^{-2uc}$; thus we get
$$
\frac{e^{-cu-c^2T/2}}{2\pi}\int_{0}^\infty ze^{-cz}\td z\int_0^T(T-s)^{-3/2}s^{-1/2}e^{-\frac{z^2}{2(T-s)}-\frac{u^2}{2s}}\td s=
\Phi(-uT^{-1/2}-c\sqrt{T})\,.
$$
Thus, using the last identity for $c=2c_2-c_1\geq 0$, we get the second term of $B$.
Similarly, let us take $c=c_1$ and $c_2=0$ in eq. (\ref{abbr}). Then $A+B=1$, and the first summand
of $A$ and the first one of $B$ sum to 1; thus we get
$$
\frac{e^{cu-c^2T/2}}{2\pi}\int_{0}^\infty ze^{cz}\td z\int_0^T(T-s)^{-3/2}s^{-1/2}e^{-\frac{z^2}{2(T-s)}-\frac{u^2}{2s}}\td s=
\Phi(-uT^{-1/2}+c\sqrt{T})\,.
$$
Thus, using the last identity for $c=c_1-2c_2> 0$, we get the second term of $B$.
\end{example}
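The two reductions used above ($c_1=c_2=c$ giving $e^{-2uc}$, and $c_2=0$ giving total mass $1$) are easy to confirm numerically. Below is a small Python sketch of the closed form (the naming is ours; $\Phi$ is the standard normal cdf expressed through the error function):

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_sup_broken(u, c1, c2, T):
    """Closed form for P(sup_{t<inf}(W(t) - c(t)) > u) from the example above."""
    a = u / math.sqrt(T)
    b = math.sqrt(T)
    return (Phi(-a - c1 * b)
            + math.exp(-2.0 * c1 * u) * Phi(-a + c1 * b)
            + math.exp(-2.0 * c2 * (u + c1 * T - c2 * T)) * Phi(a + (c1 - 2.0 * c2) * b)
            - math.exp(2.0 * (c2 - c1) * u + 2.0 * c2 * c2 * T - 2.0 * c1 * c2 * T)
              * Phi(-a + (c1 - 2.0 * c2) * b))
```

With $c_1=c_2=c$ the first and fourth terms cancel and the remaining two collapse to $e^{-2uc}$; with $c_2=0$ the four terms collapse to $1$, as used in the derivation.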
\begin{example}
Let $0<T<S<\infty$ and let $W$ be the standard Brownian motion. Then
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}P}(\sup_{t<S}(W(t)-c(t))>u)=}\\
&&A(c_1, T, u)
+\frac{1}{\sqrt{2\pi T}}\int_0^\infty A(c_2, S-T, z) e^{-\frac{(u+c_1T-z)^2}{2T}}\td z\\
&&-\frac{e^{-uc_1-\frac{c_1^2 T}{2}}}{2\pi}\int_0^\infty ze^{c_1 z} A(c_2, S-T, z)\td z
\int_0^T s^{-1/2}(T-s)^{-3/2}\,e^{-\frac{z^2}{2(T-s)}-\frac{u^2}{2s}}\td s\,,
\end{eqnarray*}
where $A(c_1, T, u)$ is defined in eq. (\ref{ab}).
\end{example}
\begin{example}
Let $X(t)$ be a gamma L\'evy process with the density
$$f(x, t)=\frac{\delta^t}{\Gamma(t)}x^{t-1}e^{-\delta x}{1\hspace{-1mm}{\rm I}}_{\{x>0\}}$$
where $\delta>0$ and $c(t)$ is defined in eq. (\ref{d}). Using Th. \ref{main} we give explicit formulas for ${\rm I\hspace{-0.8mm}P}(\sup_{t<S}(X(t)-c(t))>u)=A+B$ for both $T<S<\infty$ and $S=\infty$. For $T<S<\infty$, we have that
\begin{eqnarray*}
A&=&\frac{\delta^T}{\Gamma(T)}\int_{u+c_1T}^\infty x^{T-1}e^{-\delta x}\td x\\
&&+\,\delta^T e^{-\delta u}\int_0^T\frac{(u+c_1s)^{s-1}e^{-c_1\delta s }}{\Gamma(s)\Gamma(T-s+1)}\td s\int_0^{c_1(T-s)}(c_1(T-s)-x)x^{T-s-1}e^{-\delta x} \td x\\
&\eqqcolon&A(c_1,T,u)
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{B=}\\
&&\frac{\delta^S e^{-\delta (u+c_1T)}}{\Gamma(T)\Gamma(S-T)}\int_0^{u+c_1T} (u+c_1T-z)^{T-1}e^{\delta z}\td z\int_{z+c_2(S-T)}^\infty x^{S-T-1}e^{-\delta x}\td x\\
&&+\frac{\delta^S e^{-\delta (u+c_1T)}}{\Gamma(T)}\int_0^{u+c_1T} (u+c_1T-z)^{T-1}\td z\int_0^{S-T}\frac{(z+c_2s)^{s-1}e^{-c_2\delta s}}{\Gamma(s)\Gamma(S-T-s+1)}\td s\\
&&\quad \cdot \int_0^{c_2(S-T-s)}(c_2(S-T-s)-x)x^{S-T-s-1}e^{-\delta x}\td x\\
&&-\delta^T e^{-\delta (u+c_1T)}\int_0^{c_1T} z e^{\delta z}A(c_2,S-T,z)\td z\\
&&\cdot\int_0^{\frac{c_1T-z}{c_1}}\frac{(u+c_1s)^{s-1}(c_1(T-s)-z)^{T-s-1}}{\Gamma(s)\Gamma(T-s+1)}\td s\,.
\end{eqnarray*}
For $S=\infty$, we additionally assume that $c_2\delta >1$. In this case, since $X(t)$ has finite variation, in view of Th. 4 in Tak\'acs \cite{ta:65} we have
$${\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(X(t)-c_2t)>z)=\frac{c_2\delta-1}{\delta}\,e^{-\delta z}\int_0^\infty\frac{\delta^s}{\Gamma(s)}(z+c_2 s)^{s-1}e^{-\delta c_2 s}\td s\,, \,z>0\,.$$
Note that $A$ is the same as in the case $T<S<\infty$, and using the above expression we get
\begin{eqnarray*}
\lefteqn{B=}\\
&&\frac{(c_2\delta-1)\delta^{T-1}e^{-\delta(u+c_1T)}}{\Gamma(T)}\int_0^{u+c_1T}(u+c_1T-z)^{T-1}\td z\\
&&\cdot\int_0^\infty\frac{\delta^s}{\Gamma(s)}(z+c_2s)^{s-1}e^{-\delta c_2 s}\td s\\
&&-\,(c_2\delta-1)\delta^{T-1} e^{-\delta (u+c_1T)}\int_0^{c_1T} z \td z\\
&&\cdot\int_0^{T-\frac{z}{c_1}}\frac{(u+c_1s)^{s-1}(c_1(T-s)-z)^{T-s-1}}{\Gamma(s)\Gamma(T-s+1)}\td s\int_0^\infty\frac{\delta^t}{\Gamma(t)}(z+c_2 t)^{t-1}e^{-\delta c_2 t}\td t.
\end{eqnarray*}
\end{example}
\begin{example}
Let $Z(s)$ be an $\alpha$-stable L\'evy process totally skewed to the right (that is, with $\beta=1$; see e.g. Janicki and Weron \cite{ja:we:94} or Samorodnitsky and Taqqu \cite{sa:ta:94}) with $1<\alpha<2$ and zero expectation. Then its density function is
$$
f(x,s)=\frac{1}{\pi s^{1/\alpha}}
\int_0^\infty e^{-t^\alpha}\cos\left(ts^{-1/\alpha}x-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t
$$
(see e.g. Nolan \cite{no:97}). Then (see Michna et al. \cite{mi:pa:pi:15}) for $c>0$
\begin{eqnarray*}
\lefteqn{A(c,\infty,u)\coloneqq{\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(Z(t)-ct)>u)=}\\
&&\frac{c}{\pi} \int_0^\infty s^{-1/\alpha}\,\td s
\int_0^\infty e^{-t^\alpha}\cos\left(ts^{-1/\alpha}(u+cs)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t
\end{eqnarray*} and for any $c$ and $T>0$
\begin{eqnarray}
\lefteqn{A(c,T,u)\coloneqq {\rm I\hspace{-0.8mm}P}(\sup_{t<T}(Z(t)-ct)>u)=}\label{astable}\\
&&\frac{1}{\pi T^{1/\alpha}} \int_u^\infty\td x
\int_0^\infty e^{-t^\alpha}\cos\left(tT^{-1/\alpha}(x+cT)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t\nonumber\\
&& +\frac{1}{\pi}\int_0^T \frac{{\rm I\hspace{-0.8mm}E}(Z(T-s)-c(T-s))^-}{(T-s)s^{1/\alpha}}\,\td s\nonumber\\
&&\,\,\,\,\,\,\,\,\cdot\int_0^\infty e^{-t^\alpha}\cos\left(ts^{-1/\alpha}(u+cs)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t\nonumber
\end{eqnarray}
where
$$
{\rm I\hspace{-0.8mm}E}(Z(s)-cs)^-=\frac{-1}{\pi s^{1/\alpha}}\int_{-\infty}^0 x\, \td x\int_0^\infty e^{-t^\alpha}\cos\left(ts^{-1/\alpha}(x+cs)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t\,.
$$
Thus using Th. \ref{main} for $S>T>0$ (allowing also $S=\infty$ and putting $\infty-T=\infty$) we get
$$
{\rm I\hspace{-0.8mm}P}(\sup_{t<S}(Z(t)-c(t))>u)=A+B=A+B_1-B_2\,,
$$
where $A=A(c_1, T, u)$ (see eq. (\ref{astable})) and
\begin{eqnarray*}
\lefteqn{B_1=}\\
&&\frac{1}{\pi T^{1/\alpha}}\int_0^\infty A(c_2,S-T,z)\,\td z\\
&&\cdot\int_0^\infty e^{-t^\alpha}\cos\left(tT^{-1/\alpha}(-z+u+c_1T)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{B_2=}\\
&&\frac{1}{\pi^2}\int_0^\infty z\, A(c_2, S-T, z)\,\td z\int_0^T\frac{\td s}{(T-s)^{1/\alpha+1}s^{1/\alpha}}\\
&&\cdot\int_0^\infty e^{-t^\alpha}\cos\left(ts^{-1/\alpha}(u+c_1s)-t^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td t\\
&& \cdot\int_0^\infty e^{-w^\alpha}\cos\left(w(T-s)^{-1/\alpha}(-z+c_1(T-s))-w^\alpha\tan{\frac{\pi\alpha}{2}}\right)\td w\,.
\end{eqnarray*}
\end{example}
\section{Two collaborating insurance companies}\label{sec3}
Let us consider two insurance companies which split the amount they pay out of each claim in
proportions $\delta_1> 0$ and $\delta_2> 0$ where $\delta_1+\delta_2=1$, and receive premiums at rates $p_1>0$ and $p_2>0$, respectively (see Avram et al. \cite{av:pa:pi:08}). Then the corresponding risk processes are
$$
R_i(t)=x_i+p_i t-\delta_i X(t)\,,
$$
where $i=1,2$, $x_i>0$ and $X(t)$ is the accumulated claim amount up to time $t$. One can be interested in the instant when at least one insurance company is ruined
$$
\tau_{or}(x_1, x_2)=\inf\{t>0: R_1(t)<0\,\, \mbox{or}\,\, R_2(t)<0\}
$$
and in the instant when both insurance companies are simultaneously ruined
$$
\tau_{sim}(x_1, x_2)=\inf\{t>0: R_1(t)<0\,\, \mbox{and}\,\, R_2(t)<0\}\,.
$$
Let the ultimate ruin probabilities be
$$
\psi_{or}(x_1, x_2)={\rm I\hspace{-0.8mm}P}(\tau_{or}(x_1, x_2)<\infty)\,,\,\,\,\,\,\,
\psi_{sim}(x_1, x_2)={\rm I\hspace{-0.8mm}P}(\tau_{sim}(x_1, x_2)<\infty)
$$
and
$$
\psi_{1}(x_1)={\rm I\hspace{-0.8mm}P}(\tau_1(x_1)<\infty)\,,\,\,\,\,\,\,
\psi_{2}(x_2)={\rm I\hspace{-0.8mm}P}(\tau_2(x_2)<\infty)\,,
$$
where $\tau_i(x_i)=\inf\{t>0: R_i(t)<0\}$ for $i=1,2$. One can also be interested in the following
ruin probability
$$
\psi_{and}(x_1, x_2)={\rm I\hspace{-0.8mm}P}(\tau_1(x_1)<\infty\,\,\mbox{and}\,\,\tau_2(x_2)<\infty)
$$
and the following relation is easy to verify:
$$
\psi_{and}(x_1, x_2)=\psi_1(x_1)+\psi_2(x_2)-\psi_{or}(x_1,x_2)\,.
$$
Let us put $u_i=x_i/\delta_i$ and $c_i=p_i/\delta_i$ where $i=1,2$. Then we get
$$
\tau_{or}(x_1, x_2)=\inf\{t>0: X(t)>u_1+c_1 t\,\,\mbox{or}\,\,X(t)>u_2+c_2 t\}
$$
and
$$
\tau_{sim}(x_1, x_2)=\inf\{t>0: X(t)>u_1+c_1 t\,\,\mbox{and}\,\,X(t)>u_2+c_2 t\}\,.
$$
If the lines $y=u_1+c_1 t$ and $y=u_2+c_2 t$ do not cross each other in the first quadrant then
the ruin probabilities $\psi_{or}(x_1, x_2)$ and $\psi_{sim}(x_1, x_2)$ reduce to ordinary ruin probabilities of a risk process with a linear drift. If they cross each other in the first quadrant and e.g. $u_1<u_2$ ($c_1>c_2$) then
\begin{equation}\label{orfor}
\psi_{or}(x_1, x_2)={\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(X(t)-c(t))>u_1)\,,
\end{equation}
where $c(t)$ is defined in eq. (\ref{d}) with $T=(u_2-u_1)/(c_1-c_2)$ (we take $c(t)=\min(u_1+c_1 t, u_2+c_2 t)-u_1$).
Similarly, if the lines have a common point in the first quadrant and e.g. $u_2<u_1$ ($c_2>c_1$) then
\begin{equation}\label{simfor}
\psi_{sim}(x_1, x_2)={\rm I\hspace{-0.8mm}P}(\sup_{t<\infty}(X(t)-c(t))>u_1)\,,
\end{equation}
where $c(t)$ is defined in eq. (\ref{d}) with $T=(u_2-u_1)/(c_1-c_2)$ (we take $c(t)=\max(u_1+c_1 t, u_2+c_2 t)-u_1$).
\begin{example}
Let $X(t)$ be the standard Brownian motion. Then using eq. (\ref{orfor}) and Example \ref{brexpl}
we get for $u_1<u_2$ and $c_1>c_2$
\begin{eqnarray*}
\lefteqn{\psi_{or}(x_1, x_2)=}\\
&&\Phi(a(-u_1,-c_1))+e^{-2c_1u_1}\Phi(a(-u_1, c_1))\\
&&\,\,\,+e^{-2c_2u_2}\Phi(a(u_1, c_1-2c_2))-e^{-2(c_1-2c_2)u_1-2c_2u_2}\Phi(a(-u_1, c_1-2c_2))\,,
\end{eqnarray*}
where $a(u,c)=uT^{-1/2}+c\sqrt{T}$, $T=(u_2-u_1)/(c_1-c_2)$, $u_i=x_i/\delta_i$ and $c_i=p_i/\delta_i$ for $i=1,2$. This formula recovers the result of Avram et al. \cite{av:pa:pi:08} Eq. (67).
In the same way we obtain the formula for $\psi_{sim}(x_1, x_2)$.
\end{example}
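As a numerical cross-check (a Python sketch with our own naming, not taken from \cite{av:pa:pi:08}), the expression for $\psi_{or}$ above should coincide with the formula of Example \ref{brexpl} evaluated at $u=u_1$ and $T=(u_2-u_1)/(c_1-c_2)$:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psi_or(u1, u2, c1, c2):
    """The psi_or expression above, valid for u1 < u2 and c1 > c2."""
    T = (u2 - u1) / (c1 - c2)
    a = lambda u, c: u / math.sqrt(T) + c * math.sqrt(T)
    return (Phi(a(-u1, -c1))
            + math.exp(-2.0 * c1 * u1) * Phi(a(-u1, c1))
            + math.exp(-2.0 * c2 * u2) * Phi(a(u1, c1 - 2.0 * c2))
            - math.exp(-2.0 * (c1 - 2.0 * c2) * u1 - 2.0 * c2 * u2)
              * Phi(a(-u1, c1 - 2.0 * c2)))

def p_sup_broken(u, c1, c2, T):
    """Example brexpl: P(sup_{t<inf}(W(t) - c(t)) > u)."""
    x = u / math.sqrt(T)
    b = math.sqrt(T)
    return (Phi(-x - c1 * b)
            + math.exp(-2.0 * c1 * u) * Phi(-x + c1 * b)
            + math.exp(-2.0 * c2 * (u + c1 * T - c2 * T)) * Phi(x + (c1 - 2.0 * c2) * b)
            - math.exp(2.0 * (c2 - c1) * u + 2.0 * c2 * c2 * T - 2.0 * c1 * c2 * T)
              * Phi(-x + (c1 - 2.0 * c2) * b))
```

The agreement is exact: substituting $u_2=u_1+(c_1-c_2)T$ into the exponents of $\psi_{or}$ reproduces those of Example \ref{brexpl} term by term.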
In a similar way we can consider ruin probabilities on a finite time horizon.
\section{Randomly broken drift and random\\ interval}
In fluctuation theory there are many interesting identities for L\'evy processes and an exponentially distributed time, e.g. the distribution of the supremum on an exponentially distributed time interval (see e.g. Bertoin \cite{be:96} Sec. VI.2 and Sec. VII).
Thus let us consider a spectrally positive L\'evy process $X$ with a randomly broken drift; that is, we assume that $T$ (see eq. (\ref{d})) is an exponentially distributed random variable with mean $1/\lambda$, independent of the process $X$.
Moreover, we investigate two cases: $S=\infty$ (see Th. \ref{main}) and $S-T=V$, where $V$ is a positive random variable independent of the process $X$ and of the random variable $T$.
We put
$$\varphi_i(\gamma)=\ln {\rm I\hspace{-0.8mm}E}\exp(-\gamma( X(1)-c_i)),\, i=1,2\,,$$
where $\gamma\geq 0$
and $\overleftarrow{\varphi}_i$, $i=1,2$, denotes the inverse function of $\varphi_i$.
\begin{theorem}\label{mainlap}
If $X$ is a spectrally positive L\'{e}vy process and $T$ is an exponential random variable with mean $1/\lambda>0$ independent of $X$ then for any \\
$\gamma>\overleftarrow{\varphi}_1(\lambda)$
\begin{eqnarray}\label{mainexp}
\lefteqn{{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t< T+V} (X(t)-c(t))}=}\nonumber\\
&&{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_ {t< T} (X(t)-c_1t)}\\
&&+\,\frac{\gamma\lambda}{\varphi_1(\gamma)-\lambda}\left[\frac{1-{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{ t< V}(X(t)-c_2 t)}}{\gamma}-\frac{1-{\rm I\hspace{-0.8mm}E} e^{-\overleftarrow{\varphi}_1(\lambda)\sup_{t<V}(X(t)-c_2 t)}}{\overleftarrow{\varphi}_1(\lambda)}\right]\nonumber
\end{eqnarray}
where $V$ is a positive random variable independent of $X$ and $T$.
\end{theorem}
\noindent {\bf Proof:} \
Observe that for $\gamma>0$
\begin{equation*}{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T+V} (X(t)-c(t))}=1-\gamma\int_{0}^\infty e^{-\gamma u}\pk{\sup_{t<T+V}(X(t)-c(t))>u}\td u
\end{equation*}
and
\begin{eqnarray*}
\lefteqn{\pk{\sup_{t<T+V}(X(t)-c(t))>u}}\\
&&=\pk{\sup_{t<T}(X(t)-c_1 t)>u}\\
&&+\,\pk{\sup_{t<T}(X(t)-c_1 t)\leq u, \sup_{t<V}(X(t+T)-X(T)-c_2 t)>u-X(T)+c_1 T}.
\end{eqnarray*}
Thus we have that
\begin{eqnarray}\label{I123}
\int_{0}^\infty e^{-\gamma u}\pk{\sup_{t<T+V}(X(t)-c(t))>u}\td u= I_1+I_2\,,
\end{eqnarray}
where
$$I_1\coloneqq\int_{0}^\infty e^{-\gamma u}\pk{\sup_{t<T}(X(t)-c_1t)>u}\td u=\frac{1-{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T} (X(t)-c_1t)}}{\gamma}$$
and
\begin{eqnarray*}
\lefteqn{I_2 \coloneqq\int_{0}^\infty e^{-\gamma u}}\\
&&\cdot\pk{\sup_{t<T}(X(t)-c_1 t)\leq u, \sup_{t<V}(X(t+T)-X(T)-c_2 t)>u-X(T)+c_1 T}\td u\,.
\end{eqnarray*}
By the fact that $T$ is exponentially distributed and independent of $X$ and $V$ we have
\begin{eqnarray*}
\lefteqn{I_2=}\\
&&\lambda\int_0^\infty e^{-\lambda s}\td s\,\int_{0}^\infty e^{-\gamma u}\\
&&\cdot\pk{\sup_{t<s}(X(t)-c_1 t)\leq u, \sup_{t<V}(X(t+s)-X(s)-c_2 t)>u-X(s)+c_1 s}\td u\,.
\end{eqnarray*}
Moreover, by the independence of $X(t+s)-X(s)-c_2 t,\, t\geq 0$ and $X(s)-c_1s$ and the fact that
$$\pk{\sup_{t<s}(X(t)-c_1 t)\leq u, u-X(s)+c_1 s\leq z}=0\,, \quad z<0$$
we have
\begin{eqnarray*}
I_2&=&\lambda\int_0^\infty e^{-\lambda s}\,\td s\int_{0}^\infty e^{-\gamma u}\,\td u\int_{0}^\infty \pk{\sup_{t<V}(X(t)-c_2 t)>z}\\
&&\cdot\pk{\sup_{t<s}(X(t)-c_1 t)\leq u, u-X(s)+c_1 s\in \td z}\\
&=&\lambda\int_{0}^\infty e^{-\gamma u}\,\td u\int_{0}^\infty e^{-\lambda s}\,\td s\int_0^\infty\pk{\sup_{t<V}(X(t)-c_2 t)>z}\\
&&\cdot \pk{\inf_{t<s}(u-X(t)+c_1 t)>0, u-X(s)+c_1 s\in \td z}\,.
\end{eqnarray*}
Due to Suprun \cite{su:76} (see also Bertoin \cite{be:97} Lemma 1) we have that
\begin{eqnarray*}
&&\int_0^\infty e^{-\lambda s}\pk{\inf_{t<s}(u-X(t)+c_1 t)>0, u-X(s)+c_1 s\in \td z} \td s\\
&&\quad = \left[e^{-\overleftarrow{\varphi}_1(\lambda)z} W^{(\lambda)}(u)-{1\hspace{-1mm}{\rm I}}(u\geq z)W^{(\lambda)}(u-z)\right]\td z\,,
\end{eqnarray*}
where ${1\hspace{-1mm}{\rm I}}(\cdot)$ is the indicator function and $W^{(\lambda)}: [0,\infty)\rightarrow [0,\infty)$ is a continuous and increasing function such that
$$\int_0^\infty e^{-\gamma x} \,W^{(\lambda)}(x)\td x=\frac{1}{\varphi_1(\gamma)-\lambda}\,, \quad \gamma>\overleftarrow{\varphi}_1(\lambda)\,. $$
Consequently, for $\gamma>\overleftarrow{\varphi}_1(\lambda)$
\begin{eqnarray*}
\lefteqn{I_2=}\\
&&\lambda\int_{0}^\infty\int_{0}^\infty e^{-\gamma u}\pk{\sup_{t<V}(X(t)-c_2 t)>z}\\
&&\cdot\left[e^{-\overleftarrow{\varphi}_1(\lambda)z} W^{(\lambda)}(u)-{1\hspace{-1mm}{\rm I}}(u\geq z)W^{(\lambda)}(u-z)\right]\td z\td u\\
&=&\lambda \int_{0}^\infty e^{-\overleftarrow{\varphi}_1(\lambda)z}\pk{\sup_{t< V}(X(t)-c_2 t)>z}\td z\int_{0}^\infty e^{-\gamma u}W^{(\lambda)}(u)\td u\\
&&-\lambda \int_{0}^\infty \pk{\sup_{t<V}(X(t)-c_2 t)>z}\td z\int_0^\infty {1\hspace{-1mm}{\rm I}}(u\geq z)e^{-\gamma u}W^{(\lambda)}(u-z)\td u \\
&=& \frac{\lambda}{\varphi_1(\gamma)-\lambda}\left[\int_{0}^\infty e^{-\overleftarrow{\varphi}_1(\lambda)z}\pk{\sup_{t<V}(X(t)-c_2 t)>z}\td z\right.\\
&&\left.- \int_{0}^\infty e^{-\gamma z}\pk{\sup_{t<V}(X(t)-c_2 t)>z}\td z\right]\\
&=& \frac{\lambda}{\varphi_1(\gamma)-\lambda}\left[\frac{1-{\rm I\hspace{-0.8mm}E} e^{-\overleftarrow{\varphi}_1(\lambda)\sup_{t<V}(X(t)-c_2 t)}}{\overleftarrow{\varphi}_1(\lambda)}-\frac{1-{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<V}(X(t)-c_2 t)}}{\gamma}\right].
\end{eqnarray*}
\newline\vspace{3mm}\hfill $\Box$
\begin{corollary} Under the assumption of Theorem \ref{mainlap}, if $V=\infty$, then
\begin{equation}\label{laplaceinf}
{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<\infty} (X(t)-c(t))}
=\frac{\gamma\lambda \varphi_2'(0)[\varphi_2(\gamma)-\varphi_2(\overleftarrow{\varphi}_1(\lambda))]}{\varphi_2(\gamma)(\varphi_1(\gamma)-\lambda)\varphi_2(\overleftarrow{\varphi}_1(\lambda))}\,.
\end{equation}
If $V$ is an exponentially distributed random variable with mean $1/\theta>0$ independent of $X$ and $T$ then
\begin{equation}\label{laplaceT}
{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T+V} (X(t)-c(t))}
=
\gamma \lambda\theta\,\frac{\frac{\overleftarrow{\varphi}_2(\theta)-\overleftarrow{\varphi}_1(\lambda)}{\theta-\varphi_2(\overleftarrow{\varphi}_1(\lambda))}
-\frac{\overleftarrow{\varphi}_1(\lambda)[\overleftarrow{\varphi}_2(\theta)-\gamma]}{\gamma[\theta-\varphi_2(\gamma)]}}{\overleftarrow{\varphi}_1(\lambda)\overleftarrow{\varphi}_2(\theta)[\varphi_1(\gamma)-\lambda]}\,.
\end{equation}
\end{corollary}
\noindent {\bf Proof:} \
{\underline{Case $V=\infty$}}. It is well-known that
\begin{eqnarray}\label{laplace1}{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T} (X(t)-c_1t)}=\frac{\lambda}{\lambda-\varphi_1(\gamma)}\left(1-\frac{\gamma}{\overleftarrow{\varphi}_1(\lambda)}\right)\,,
\end{eqnarray}
where $\gamma\geq 0$
(see e.g. Bertoin \cite{be:96} eq. (3) p. 192 or Th. 4.1 in Dębicki and Mandjes \cite{de:ma:15}).
Moreover, by Th. 3.2 in Dębicki and Mandjes \cite{de:ma:15} (or by letting $\lambda\to 0$ in the previous identity), it follows that
$${\rm I\hspace{-0.8mm}E} \exp\left(-\gamma\sup_{t<\infty}(X(t)-c_2 t)\right)=\frac{\gamma\varphi_2'(0)}{\varphi_2(\gamma)}\,.$$
Consequently, by (\ref{mainexp}) for $\gamma>0$
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<\infty} (X(t)-c(t))}=}\\
&&\frac{\lambda}{\lambda-\varphi_1(\gamma)}\left[1-\frac{\gamma}{\overleftarrow{\varphi}_1(\lambda)}\right]
+\frac{\gamma\lambda}{\varphi_1(\gamma)-\lambda}\left[\frac{1-\frac{\gamma\varphi_2'(0)}{\varphi_2(\gamma)}}
{\gamma}-\frac{1-\frac{\overleftarrow{\varphi}_1(\lambda)\varphi_2'(0)}{\varphi_2(\overleftarrow{\varphi}_1(\lambda))}}{\overleftarrow{\varphi}_1(\lambda)}\right]\\
&=&\frac{\gamma \lambda\varphi_2'(0)[\varphi_2(\gamma)-\varphi_2(\overleftarrow{\varphi}_1(\lambda))]}{\varphi_2(\gamma)(\varphi_1(\gamma)-\lambda)\varphi_2(\overleftarrow{\varphi}_1(\lambda))}\,.
\end{eqnarray*}
{\underline{Case $V$ exponentially distributed}}.
Using (\ref{laplace1}), for $\gamma\geq 0$ we have that
$${\rm I\hspace{-0.8mm}E}\exp\left(-\gamma\sup_{t< V}(X(t)-c_2 t)\right)=\frac{\theta}{\theta-\varphi_2(\gamma)}\left(1-\frac{\gamma}{\overleftarrow{\varphi}_2(\theta)}\right).$$
Recalling (\ref{laplace1}), for $\gamma>0$ it follows that
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T+V} (X(t)-c(t))}=}\\
&&\frac{\lambda}{\lambda-\varphi_1(\gamma)}\left(1-\frac{\gamma}{\overleftarrow{\varphi}_1(\lambda)}\right)\\
&&+\frac{\gamma\lambda}{\varphi_1(\gamma)-\lambda}\left[\frac{1-\frac{\theta}{\theta-\varphi_2(\gamma)}\left[1-\frac{\gamma}{\overleftarrow{\varphi}_2(\theta)}\right]}
{\gamma}-\frac{1-\frac{\theta}{\theta-\varphi_2(\overleftarrow{\varphi}_1(\lambda))}\left[1-\frac{\overleftarrow{\varphi}_1(\lambda)}{\overleftarrow{\varphi}_2(\theta)}\right]}
{\overleftarrow{\varphi}_1(\lambda)}\right]\\
&=&
\gamma \lambda\theta\,\frac{\frac{\overleftarrow{\varphi}_2(\theta)-\overleftarrow{\varphi}_1(\lambda)}{\theta-\varphi_2(\overleftarrow{\varphi}_1(\lambda))}
-\frac{\overleftarrow{\varphi}_1(\lambda)[\overleftarrow{\varphi}_2(\theta)-\gamma]}{\gamma[\theta-\varphi_2(\gamma)]}}{\overleftarrow{\varphi}_1(\lambda)\overleftarrow{\varphi}_2(\theta)[\varphi_1(\gamma)-\lambda]}\,.
\end{eqnarray*}
\newline\vspace{3mm}\hfill $\Box$
\begin{corollary}
Let $W$ be the standard Brownian motion. Then $$\varphi_1(\gamma)=\frac{1}{2}\gamma^2+c_1\gamma\,, \quad \varphi_2(\gamma)=\frac{1}{2}\gamma^2+c_2\gamma\,,$$
$$\overleftarrow{\varphi}_1(\lambda)=\sqrt{c_1^2+2\lambda}-c_1\,, \quad \overleftarrow{\varphi}_2(\lambda)=\sqrt{c_2^2+2\lambda}-c_2\,.$$
Consequently, for $\gamma>\sqrt{c_1^2+2\lambda}-c_1$
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<\infty} (W(t)-c(t))}=}\\
&&
\frac{\gamma \lambda c_2 (\frac{1}{2}\gamma^2+c_2\gamma-c_1^2-\lambda -(c_2-c_1)\sqrt{c_1^2+2\lambda}+c_1c_2)}{(\frac{1}{2}\gamma^2+c_1\gamma-\lambda)(\frac{1}{2}\gamma^2+c_2\gamma)(c_1^2+\lambda +(c_2-c_1)\sqrt{c_1^2+2\lambda}-c_1c_2)}\\
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{{\rm I\hspace{-0.8mm}E} e^{-\gamma\sup_{t<T+V} (W(t)-c(t))}=}\\
&&
\gamma\lambda\theta\, \frac{\frac{\sqrt{c_2^2+2\theta}-\sqrt{c_1^2+2\lambda}+c_1-c_2}{\theta-c_1^2-\lambda-(c_2-c_1)\sqrt{c_1^2+2\lambda}+c_1c_2}-\frac{(\sqrt{c_1^2+2\lambda}-c_1)(\sqrt{c_2^2+2\theta}-c_2-\gamma)}{\gamma(\theta-\frac{1}{2}\gamma^2-c_2\gamma)}}{(\sqrt{c_1^2+2\lambda}-c_1)(\sqrt{c_2^2+2\theta}-c_2)(\frac{1}{2}\gamma^2+c_1\gamma-\lambda)}\,.
\end{eqnarray*}
\end{corollary}
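The first display of the corollary can be checked numerically against the general formula (\ref{laplaceinf}) by inserting the Brownian exponents $\varphi_i(\gamma)=\gamma^2/2+c_i\gamma$. A Python sketch (our own naming; the parameter values in the test are illustrative):

```python
import math

def laplace_general(gamma, lam, c1, c2):
    """Right-hand side of eq. (laplaceinf) with phi_i(g) = g^2/2 + c_i*g."""
    phi1 = lambda g: 0.5 * g * g + c1 * g
    phi2 = lambda g: 0.5 * g * g + c2 * g
    beta = math.sqrt(c1 * c1 + 2.0 * lam) - c1   # inverse of phi_1 evaluated at lam
    return (gamma * lam * c2 * (phi2(gamma) - phi2(beta))
            / (phi2(gamma) * (phi1(gamma) - lam) * phi2(beta)))

def laplace_brownian(gamma, lam, c1, c2):
    """Explicit Brownian formula from the corollary (case V = infinity)."""
    r = math.sqrt(c1 * c1 + 2.0 * lam)
    num = gamma * lam * c2 * (0.5 * gamma ** 2 + c2 * gamma
                              - c1 ** 2 - lam - (c2 - c1) * r + c1 * c2)
    den = ((0.5 * gamma ** 2 + c1 * gamma - lam) * (0.5 * gamma ** 2 + c2 * gamma)
           * (c1 ** 2 + lam + (c2 - c1) * r - c1 * c2))
    return num / den
```

The two expressions agree because $\varphi_2(\overleftarrow{\varphi}_1(\lambda))=c_1^2+\lambda+(c_2-c_1)\sqrt{c_1^2+2\lambda}-c_1c_2$, which is exactly the third factor of the denominator above.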
\subsection*{Acknowledgments}
The author would like to express his sincere thanks to Professors Krzysztof Dębicki and Peng Liu for valuable comments and remarks, and especially for pointing out the problems with random time intervals.
\section{Introduction}
\label{intro}
The $\bar K NN$ system has been the subject of much attention, and recent papers converge to bindings of about 20 MeV and large widths of about 80 MeV \cite{Dote:2008hw,Ikeda:2010tk,Barnea:2012qa,Bayar:2011qj,Oset:2012gi,Bayar:2012rk}. A fraction of about 30 MeV of the width of the state comes from absorption of the $\bar K$ on the pair of nucleons, recently evaluated with precision in \cite{melaabsorption}. The large width can be intuitively understood, since the $\bar K N$ merges into a $\Lambda(1405)$ that has a width of about 30 MeV; but since the $\Lambda(1405)$ can be formed with either nucleon, the width can be estimated to be of the order of 60 MeV, to which the absorption \cite{melaabsorption} must be added. It is no wonder that, with a width much larger than the binding, such a state has not been found in spite of searches and claims (see discussion in \cite{Oset:2007vu,Ramos:2007zz}).
The fate of the analogous $DNN$ system could be quite different. Indeed, the analogous resonance, according to studies of the $DN$ interaction with coupled channels \cite{Hofmann:2005sw,Mizutani:2006vq}, is the $\Lambda_c(2595)$, which has a width of 2.6 MeV \cite{pdg}. On the other hand, the binding of the $DN$ system, which by analogy to the $\bar K N$ goes as the relativistic energy of the $D$, should also be bigger than in the case of the $\bar K N$ system. As a consequence we are led to a state that is more bound and has a smaller width, which could be easily observable. This is what is observed in the study done in \cite{dnn}, which looks into the problem from two perspectives: one using the Fixed Center Approximation to the Faddeev equations, the other a variational calculation. Both methods converge to a common answer, providing a state around 3500 MeV with a width of about 30-40 MeV, counting the absorption of the $D$ by two nucleons.
\section{The Fixed Center Approximation}
In this approach we consider that the two nucleons form a cluster and that the $D$ scatters with these nucleons without changing them from their ground state. This is a fair approximation when one has a bound $D$, which has no energy to excite the $NN$ system. Under this assumption one has a set of coupled equations involving partition functions which, for the $D^0 p p$ system, sum up the diagrams shown in Fig. \ref{fig:tpp22}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.90\textwidth]{tpp1parca.eps}\\
\includegraphics[width=0.55\textwidth]{tpp2parca.eps}\\
\includegraphics[width=0.95\textwidth]{tpp3parca.eps}
\caption{Diagrammatic representations of the partition functions for the $D^0 p p \rightarrow D^0 p p $.}
\label{fig:tpp22}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{newdnn.eps}
\caption{Diagrammatic representation of the $D (N N)$ absorption.}
\label{FCAfignew}
\end{figure}
These amplitudes fulfill a set of coupled equations
\begin{eqnarray}
T_{p}&=&t_{p}+t_{p}G_0T_{p}+t_{ex}G_0T_{ex}^{(p)}\nonumber\\
T_{ex}^{(p)}&=&t_{0}^{(p)}G_0T_{ex}^{(n)}\nonumber\\
T_{ex}^{(n)}&=&t_{ex}+t_{ex}G_0T_{p}+t_{0}^{(n)}G_0T_{ex}^{(p)},
\label{Eq:tptexp}
\end{eqnarray}
where the two-body amplitudes are given as
$t_p=t_{D^0 p , D^0 p}$, $t_{ex}=t_{D^0 p , D^+ n}$, $t_{0}^{(p)}=t_{D^+ p , D^+ p}$, and $t_{0}^{(n)}=t_{D^+ n , D^+ n}$. A similar but simpler set of equations is obtained for $DNN$ scattering in isospin $I=3/2$, such that the proper $I=1/2$ amplitude, in which the bound $DNN$ state appears, is given by
\begin{eqnarray}
T^{(1/2)}&=&\frac{\frac{3}{2}t^{(0)}+\frac{1}{2}t^{(1)}+2G_0t^{(0)}t^{(1)}}{1+\frac{1}{2}(t^{(1)}-t^{(0)})G_0-G_0^2t^{(0)}t^{(1)}},
\nonumber
\end{eqnarray}
where $t^{(0)}, ~t^{(1)}$ are the isospin $I=0,1$ $DN$ amplitudes and $G_0$ is the $D$ propagator folded with the $NN$ form factor:
\begin{equation}
G_0=\int\frac{d^3q}{(2\pi)^3}F_{NN}(q)\frac{1}{{q^0}^2-\vec{q}\,^2-m_{D}^2+i\epsilon}\,.
\label{Eq:gzero}
\end{equation}
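Below threshold ($q^0<m_D$) the $i\epsilon$ prescription is immaterial and $G_0$ reduces to a one-dimensional quadrature. The following Python sketch is an illustration only: it assumes a Gaussian form factor $F_{NN}(q)=e^{-b^2q^2}$ and GeV units, neither of which is specified in the text.

```python
import math

def g0(q0, mD, b=1.5, qmax=10.0, n=4000):
    """Folded D propagator G0 of the equation above, for q0 < mD (below threshold).
    Angular integration of d^3q/(2 pi)^3 leaves a factor 1/(2 pi^2) times the
    radial integral of q^2 F_NN(q) / (q0^2 - q^2 - mD^2)."""
    h = qmax / n
    total = 0.0
    for i in range(n):                      # midpoint rule on (0, qmax]
        q = (i + 0.5) * h
        fnn = math.exp(-b * b * q * q)      # assumed Gaussian NN form factor
        total += q * q * fnn / (q0 * q0 - q * q - mD * mD)
    return total * h / (2.0 * math.pi ** 2)
```

Since $q^0<m_D$ makes the denominator negative for every $q$, the resulting $G_0$ is real and negative below threshold.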
The absorption of the $D$ by two nucleons is based on the diagrams of Fig. \ref{FCAfignew}; it is included in a nonperturbative way, where the elementary $DN$ amplitudes are already modified to account for the possibility of the $D$ being absorbed by a second nucleon.
The modulus squared of the $DNN$ amplitude is shown in Fig. \ref{fig:redtmats0}. We can see a clear peak around 3500 MeV with a width of about 20-30 MeV, which indicates the appearance of a state of the $DNN$ system.
\section{Variational calculation}
\label{sec:2}
Here we calculate the energy of the $DNN$ system with a variational approach formulated for the $\bar{K}NN$ system in Refs.~\cite{Dote:2008hw,Dote:2008in}. As in the case of the FCA, we consider the $DNN$ system with total isospin $I=1/2$ and the total spin-parity $J^{P}=0^{-}$. The trial wave function for the state is prepared with two components:
\begin{equation}
\ket{\Psi^{J=0}}
= (\mathcal{N}^{0})^{-1}[\ket{\Phi_{+}^{0}}
+C^{0}\ket{\Phi_{-}^{0}}] ,
\nonumber
\end{equation}
where $\mathcal{N}^{0}$ is a normalization constant and $C^{0}$ is a mixing coefficient. In the main component $\ket{\Phi_{+}^{0}}$, two nucleons are combined into spin $S_{NN}=0$ and isospin $I_{NN}=1$ so all the two-body subsystems can be in $s$ wave. We also allow a mixture of the $\ket{\Phi_{-}^{0}}$ component where both spin and isospin are set to be zero, so the orbital angular momentum between two nucleons is odd.
We consider the following Hamiltonian in this study:
\begin{equation}
\hat{H}
= \hat{T}+\hat{V}_{NN}+Re\hat{V}_{DN}
-\hat{T}_{c.m.} ,
\label{eq:Hamiltonian}
\end{equation}
where $\hat{T}$ is the total kinetic energy, $\hat{V}_{DN}$ is the $DN$ potential term which is the sum of the contributions from two nucleons, and $\hat{T}_{c.m.}$ is the energy of the center-of-mass motion. For the $NN$ potential $\hat{V}_{NN}$, we use three models: HN1R, which is constructed from the Hasegawa-Nagata No.1 potential~\cite{PTP45.1786}, the Minnesota force~\cite{Thompson:1977zz}, and the Gaussian-fitted version of the Argonne v18 potential~\cite{Wiringa:1994wb}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{withdelGreducedTsqs0.eps}
\caption{Modulus squared of the three-body scattering amplitude for $I=1/2$ and $J=0$ including absorption with reduced $NN$ radius.}
\label{fig:redtmats0}
\end{figure}
The $DN$ potential in this approach is obtained by studying the $DN$ scattering in the coupled channels of \cite{Mizutani:2006vq} and eliminating all except for the $DN$ one with an effective potential such as to obtain the same scattering amplitude as with the coupled channels \cite{Hyodo:2007jq}. As in \cite{Mizutani:2006vq}, we consider seven (eight) coupled channels in the isospin $I=0$ ($I=1$) sector, $DN$, $\pi\Sigma_{c}$, $\eta\Lambda_{c}$, $K\Xi_{c}$, $K\Xi_{c}^{\prime}$, $D_{s}\Lambda$, and $\eta^{\prime}\Lambda_{c}$ ($DN$, $\pi\Lambda_{c}$, $\pi\Sigma_{c}$, $\eta\Sigma_{c}$, $K\Xi_{c}$, $K\Xi_{c}^{\prime}$, $D_{s}\Sigma$, and $\eta^{\prime}\Sigma_{c}$).
\begin{table}[tdp]
\caption{Results of the energy compositions in the variational calculation for the ground state of the $DNN$ system with total isospin $I=1/2$ (range parameter $a_{s}=0.4$ fm). Terms ``bound'' and ``unbound'' are defined with respect to the $\Lambda_{c}^{*}N$ threshold. All the numbers are given in MeV.}
\begin{center}
\begin{tabular}{lrrrr}
& HN1R & & Minnesota & Av18 \\
& $J=1$ & $J=0$ & $J=0$ & $J=0$ \\
\hline
& unbound & bound & bound & bound \\
$B$ & 208 & 225 & 251 & 209 \\
$M_{B}$ & 3537 & 3520 & 3494 & 3536 \\
$\Gamma_{\pi Y_{c}N}$ & - & 26 & 38 & 22 \\[5pt]
$E_{kin}$ & 338 & 352 & 438 & 335 \\
$V(NN)$ & 0 & $-2$ & 19 & $-5$\\
$V(DN)$ & $-546$ & $-575$ & $-708$ & $-540$ \\
$T_{nuc}$ & 113 & 126 & 162 & 117 \\
$E_{NN}$ & 113 & 124 & 181 & 113 \\[5pt]
$P(Odd)$ & 75.0 \% & 14.4 \% & 7.4 \% & 18.9 \% \\
\end{tabular}
\end{center}
\label{tab:energy}
\end{table}%
In Table~\ref{tab:energy} we show some of the properties of the state found for different $NN$ potentials. As seen in the Table, the $DNN$ system in the $J=0$ channel is bound below the $\Lambda_{c}^{*}N$ threshold ($B\sim 209$ MeV) for all the $NN$ potentials employed. A large kinetic energy of the deeply bound system is overcome by the strong attraction of the $DN$ potential, while the $NN$ potential adds a small correction. Comparing the results with three different nuclear forces, we find that the binding energy is smaller when the $NN$ potential has a harder repulsive core.
Although we will not discuss it here, we also find in \cite{dnn} a state with $J=1$ but less bound and more uncertain than the $J=0$ that we have exposed.
\section{Possible experiments to produce the $DNN$ state}
To observe this state experimentally, one possibility is the $\bar{p}\,{}^3\mathrm{He} \rightarrow \bar{D}^0 D^0pn\to \bar{D}^{0} [DNN]$ reaction, which could be carried out at FAIR at GSI. With a $\bar{p}$ beam of $15~\mathrm{GeV}/c$ there is plenty of energy available for this reaction, and the momentum mismatch of the $D^0$ with the spectator nucleons of the $^3$He can be relatively small. Estimates made in \cite{dnn} indicate that one would expect several thousand events per day for the background of the proposed reaction. A narrow peak corresponding to the formation of the $DNN$ bound state could be visible on top of this background.
Another possibility is a high-energy $\pi$-induced reaction. An analogous reaction is $\pi^{-} d\to D^{-}D^{+}np \to D^{-} [DNN]$, where the relevant elementary process is $\pi^{-}N\to D^{+}D^{-}N$. Since the $DN$ pair in the $DNN$ system clusters strongly as the $\Lambda_{c}^{*}$, the reaction $\pi^{-} d\to D^{-}\Lambda_{c}^{*}n \to D^{-} [DNN]$ is another candidate; here the relevant elementary reaction is $\pi^{-}p\to D^{-}\Lambda_{c}^{*}$. Such reactions may be realized at the high-momentum beamline project at J-PARC.
\section{Conclusions}
We have studied the $DNN$ system with $I=1/2$ using two independent methods, the Fixed Center Approximation to the Faddeev equations and a variational method, and have found that the system is bound and rather stable, with a width of about 20--40 MeV. We obtained a clear signal of the quasi-bound state in the total spin $J=0$ channel around 3500 MeV.
The small width of the $DNN$ quasi-bound state is advantageous for the experimental identification. The search for the $DNN$ quasi-bound state can be done by $\bar{p}$ induced reaction at FAIR, $\pi^{-}$ induced reaction at J-PARC, and relativistic heavy ion collisions at RHIC and LHC.
\section{Acknowledgments}
This work is partly supported by projects FIS2006-03438 from the Ministerio de Ciencia e Innovaci\'on (Spain), FEDER funds and by the Generalitat Valenciana in the program Prometeo/2009/090.
This research is part of the European
Community-Research Infrastructure Integrating Activity ``Study of
Strongly Interacting Matter'' (acronym HadronPhysics2, Grant
Agreement n. 227431)
under the Seventh Framework Programme of EU.
T.H. thanks the support from the Global Center of Excellence Program by MEXT, Japan, through the Nanoscience and Quantum Physics Project of the Tokyo Institute of Technology.
This work is partly supported by the Grant-in-Aid for Scientific Research from
MEXT and JSPS (Nos.\
24105702
and 24740152).
\section{Introduction}
The theory of graph minors, developed by Robertson and Seymour over more than 20 years, has had a tremendous impact on
the area of graph algorithms.
Arguably, one of the cornerstone contributions is the notion of \emph{treewidth}~\cite{RobertsonS84} and the deep understanding
of obstacles to small treewidth, primarily in the form of the \emph{excluded grid theorem}~\cite{grid-minor-poly,RobertsonS86,RobertsonST94}.
Very tight relations of treewidth and the size of the largest grid as a minor in sparse graph classes, such
as planar graphs or graphs excluding a fixed graph as a minor, led to the rich and fruitful theory of bidimensionality~\cite{DemaineH08}.
In general graphs, fine understanding of the existence of well-behaved highly-connected structures (not necessarily grids)
in graphs of high treewidth has been crucial to the development of efficient approximation algorithms
for the \textsc{Disjoint Paths} problem~\cite{ChuzhoyL16}.
In undirected graphs, one of the first theorems that gave some well-behaved structure in a graph
that is in some sense highly connected is the famous Erd\H{o}s-P\'{o}sa theorem~\cite{ErdosP65} linking the feedback vertex set number
of a graph (the minimum number of vertices one needs to delete to obtain an acyclic graph) and the cycle packing number
(the maximum possible size of a family of vertex-disjoint cycles in a graph).
The Erd\H{o}s-P\'{o}sa theorem states that a graph that does not contain a family of $k$ vertex-disjoint cycles
has feedback vertex set number bounded by $\Oh(k \log k)$.
A similar statement for directed graphs, asserting that a directed graph without a family of $k$ vertex-disjoint cycles
has feedback vertex set number at most $f(k)$, was long known as Younger's conjecture until it was finally proven
by Reed, Robertson, Seymour, and Thomas in 1996~\cite{ReedRST96}.
However, the function $f$ obtained in~\cite{ReedRST96} is not elementary; in particular, the proof relies on the Ramsey theorem
for $\Theta(k)$-regular hypergraphs.
This is in contrast with the (tight) $\Theta(k \log k)$ bound in undirected graphs.
Our main result is that if one compares the feedback vertex set number of a directed graph
to the \emph{quarter-integral} and \emph{half-integral} cycle packing number (i.e., the maximum size of a family of cycles in $G$
such that every vertex lies on at most four, resp.\ two, cycles), one obtains a polynomial bound.
\begin{theorem}\label{thm:ep-all}
Let $G$ be a directed graph that does not contain a family of $k$ cycles such that every vertex in $G$ is contained
in at most $p$ cycles.
\begin{compactenum}[a)]
\item If $p=4$, then there exists a feedback vertex set in $G$ of size $\Oh(k^4)$,
\item If $p=3$, then there exists a feedback vertex set in $G$ of size $\Oh(k^5)$,
\item If $p=2$, then there exists a feedback vertex set in $G$ of size $\Oh(k^6)$.
\end{compactenum}
\end{theorem}
We remark that if one relaxes the condition even further to a \emph{fractional cycle packing},%
\footnote{A \emph{fractional cycle packing} assigns to every cycle $C$ in $G$ a non-negative real weight $w(C)$ such that for every $v \in V(G)$ the total weight of all cycles containing $v$ is at most $1$. The \emph{size} of a fractional cycle packing is the sum of the weights of all cycles in the graph.}
Seymour~\cite{Seymour95} proved that a directed graph without a fractional cycle packing of size at least
$k$ admits a feedback vertex set of size $\Oh(k \log k \log \log k)$.
\medskip
\emph{Directed treewidth} is a directed analog of the successful notion of treewidth, introduced in~\cite{JohnsonRST01,Reed99}.
An analog of the excluded grid theorem for directed graphs has been conjectured by Johnson, Robertson, Seymour,
and Thomas~\cite{JohnsonRST01} in 2001 and finally proven by Kawarabayashi and Kreutzer in 2015~\cite{DBLP:conf/stoc/KawarabayashiK15}.
Similarly as in the case of the directed Erd\H{o}s-P\'{o}sa property, the relation between the directed treewidth
of a graph and a largest directed grid as a minor in~\cite{DBLP:conf/stoc/KawarabayashiK15} is not elementary.
For a directed graph $G$, let $\fvs(G)$, $\dtw(G)$, and $\cp(G)$ denote the feedback vertex set number,
directed treewidth, and the cycle packing number of $G$, respectively. The following lemma is a restatement of the result of Amiri, Kawarabayashi, Kreutzer, and Wollan~\cite[Lemma 4.2]{AmiriKKW16}:
\begin{lemma}[{\cite[Lemma 4.2]{AmiriKKW16}}]
\label{lem:AKKW}
Let $G$ be a directed graph with $\dtw(G) \leq w$. For each strongly connected directed graph $H$, the graph $G$ has either $k$ disjoint copies of $H$ as a topological minor, or contains a set $T$ of at most $k \cdot (w+1)$ vertices such that $H$ is not a topological minor of $G-T$.
\end{lemma}
\noindent Note that the authors of~\cite{AmiriKKW16} prove Lemma~\ref{lem:AKKW} for both topological and butterfly minors, but the previous restatement is sufficient for our purposes.
By taking $H$ to be the directed 2-cycle, it is easy to derive the following bound:
\begin{lemma}\label{lem:dtw-cp-fvs}
For a directed graph $G$ it holds that
\[
\fvs(G)\le(\dtw(G)+1)(\cp(G)+1).
\]
\end{lemma}
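The derivation indicated above can be spelled out as follows (a short sketch, for completeness).
\begin{proof}[Proof sketch]
Apply Lemma~\ref{lem:AKKW} with $H$ the directed $2$-cycle, $w = \dtw(G)$, and $k = \cp(G)+1$. Every directed cycle is a subdivision of the directed $2$-cycle, so $G$ cannot contain $k$ disjoint copies of $H$ as a topological minor, as this would yield $\cp(G)+1$ vertex-disjoint cycles. Hence there is a set $T$ of at most $k(w+1) = (\cp(G)+1)(\dtw(G)+1)$ vertices such that $H$ is not a topological minor of $G-T$; that is, $G-T$ is acyclic and $T$ is a feedback vertex set.
\end{proof}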
In the light of Lemma~\ref{lem:dtw-cp-fvs} and
since a directed grid minor of size $k$ contains $k$ vertex-disjoint cycles, the directed grid theorem
of Kawarabayashi and Kreutzer~\cite{DBLP:conf/stoc/KawarabayashiK15} is a generalization of the directed Erd\H{o}s-P\'{o}sa property
due to Reed, Robertson, Seymour, and Thomas~\cite{ReedRST96}.
Theorem~\ref{thm:ep-all} is a direct corollary of Lemma~\ref{lem:dtw-cp-fvs} and the following statement that we prove.
\begin{theorem}\label{thm:dtw-ep-all}
Let $G$ be a directed graph that does not contain a family of $k$ cycles such that every vertex in $G$ is contained
in at most $p$ cycles.
\begin{compactenum}[a)]
\item If $p=4$, then $\dtw(G) = \Oh(k^3)$,
\item If $p=3$, then $\dtw(G) = \Oh(k^4)$,
\item If $p=2$, then $\dtw(G) = \Oh(k^5)$.
\end{compactenum}
\end{theorem}
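To make the derivation of Theorem~\ref{thm:ep-all} from Theorem~\ref{thm:dtw-ep-all} explicit, consider the case $p=4$: if $G$ contains no family of $k$ cycles with every vertex on at most four of them, then in particular $\cp(G) < k$ and, by Theorem~\ref{thm:dtw-ep-all}, $\dtw(G) = \Oh(k^3)$, so Lemma~\ref{lem:dtw-cp-fvs} yields
\[
\fvs(G) \le (\dtw(G)+1)(\cp(G)+1) = \Oh(k^3) \cdot \Oh(k) = \Oh(k^4).
\]
The cases $p=3$ and $p=2$ are analogous.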
\noindent Furthermore, if one asks not for a cycle packing, but a packing of subgraphs of large directed treewidth,
we prove the following packing result.
\begin{theorem}\label{thm:qp}
There exists an absolute constant $c$ with the following property.
For every pair of positive integers $a$ and $b$, and every directed graph $G$
of directed treewidth at least $c\cdot a^6 \cdot b^8 \cdot \log^2(ab)$, there are directed graphs $G_1,G_2,\ldots,G_a$ with the following properties:
\begin{compactenum}
\item each $G_i$ is a subgraph of $G$,
\item each vertex of $G$ belongs to at most four graphs $G_i$, and
\item each graph $G_i$ has directed treewidth at least $b$.
\end{compactenum}
\end{theorem}
\noindent Note that by setting $b=2$ in Theorem~\ref{thm:qp}, one obtains the case $p=4$ of Theorem~\ref{thm:dtw-ep-all} with a slightly weaker bound of $\Oh(k^6 \log^2 k)$ and,
consequently, case $p=4$ of Theorem~\ref{thm:ep-all} with a weaker bound of $\Oh(k^7 \log^2 k)$.
Theorem~\ref{thm:qp} should be compared to its undirected analog of Chekuri and Chuzhoy~\cite{ChekuriC13} that asserts
that in an undirected graph $G$ of treewidth at least $c \min (ab^2, a^3b)$ one can find $a$ vertex-disjoint subgraphs
of treewidth at least $b$. While we still obtain a polynomial bound, we can only prove the existence of a quarter-integral
(as opposed to integral, i.e., vertex-disjoint) packing of subgraphs of high directed treewidth.
In the \textsc{Disjoint Paths} problem,
given a graph $G$ and a set of terminal pairs $(s_i,t_i)_{i=1}^k$, we
ask to find an as large as possible collection of vertex-disjoint paths such that every path in the collection
connects some $s_i$ with $t_i$.
Let $\mathrm{OPT}$ be the number of paths in the optimum solution; we say that a family $\Pp$ is a \emph{congestion-$c$ polylogarithmic approximation}
if every path in $\Pp$ connects a distinct pair $(s_i,t_i)$, each vertex of $V(G)$ is contained in at most $c$ paths of $\Pp$, and $|\Pp| \geq \mathrm{OPT} / \mathrm{polylog}(\mathrm{OPT})$.
The successful line of research on approximation algorithms for the \textsc{Disjoint Paths} problem in undirected graphs,
leading in particular to the congestion-2 polylogarithmic approximation algorithm of Chuzhoy and Li~\cite{ChuzhoyL16} for the edge-disjoint version, would not have been possible without a
fine understanding of well-behaved, well-connected structures in a graph of high treewidth.
Of central importance to such \emph{routing} algorithms is the notion of a \emph{crossbar}: a crossbar of order $k$ and congestion
$c$ is a subgraph $C$ of $G$ with an \emph{interface} $I \subseteq V(C)$ of size $k$ such that for every matching $M$
on $I$, one can connect the endpoints of the matching edges with paths in $C$ such that every vertex is in at most $c$ paths.
Most of the known approximation algorithms for \textsc{Disjoint Paths} find a crossbar $(C,I)$ with a large set of disjoint paths between $I$ and the set of terminals $s_i$ and $t_i$. While one usually does not control how the paths connect the terminals $s_i$ and $t_i$ to interface vertices of $I$, the ability of the crossbar to connect \emph{any} given matching on the interface leads to a solution.
To obtain a polylogarithmic approximation algorithm, one needs the order of the crossbar to be comparable to the number of terminal pairs, which --- by well-known tools such as \emph{well-linked decompositions}~\cite{ChekuriKS05} --- is of the order of the treewidth of the graph.
At the same time, we usually allow constant congestion (every vertex can appear in a constant number of paths of the solution, instead of just one).
Thus, the milestone graph-theoretic result used in approximation algorithms for \textsc{Disjoint Paths} is the existence of a congestion-2 crossbar
of order $k$ in a graph of treewidth $\Omega(k \mathrm{polylog}(k))$.
While the existence of similar results for the general \textsc{Disjoint Paths} problem in directed graphs
is implausible~\cite{AndrewsCGKT10}, Chekuri and Ene proposed to study the case of \emph{symmetric demands}, where one
asks for a path from $s_i$ to $t_i$ and a path from $t_i$ to $s_i$ for each terminal pair $(s_i,t_i)$.
First, they provided an analog of the well-linked decomposition for this case~\cite{ChekuriE14}, and then with Pilipczuk~\cite{ChekuriEP18}
showed the existence of an analog of a crossbar and a resulting approximation algorithm for \textsc{Disjoint Paths}
with symmetric demands in planar directed graphs.
Later, this result has been lifted to arbitrary proper minor-closed graph classes~\cite{CSS17}.
However, the general case remains widely open.
As discussed above, for applications in approximation algorithms for \textsc{Disjoint Paths}, it is absolutely essential
to squeeze as much as possible from the bound linking directed treewidth of a graph with the order of the crossbar,
while the final congestion is of secondary importance (but we would like it to be a small constant).
We think of Theorem~\ref{thm:qp} as a step in this direction: sacrificing integral packings for quarter-integral ones,
we obtain much stronger bounds than the non-elementary bounds of~\cite{ReedRST96}.
Furthermore, such a step seems necessary, as it is hard to imagine a crossbar of order $k$ that would not contain
a constant-congestion (i.e., every vertex might be used in a constant number of cycles) packing of $\Omega(k)$ directed cycles.
On the technical side, the proof of Theorem~\ref{thm:qp} borrows a number of technical tools from the recent
work of Hatzel, Kawarabayashi, and Kreutzer that proved polynomial bounds for the directed grid minor theorem in planar
graphs~\cite{DBLP:conf/soda/HatzelKK19}.
We follow their general approach to obtain a directed treewidth sparsifier~\cite[Section 5]{DBLP:conf/soda/HatzelKK19} and modify
it in a number of places for our goal. The main novelty comes in different handling of the case
when two linkages intersect a lot. Here we introduce a new partitioning tool (see Section~\ref{sec:sep})
which we use in the crucial moment where we separate subgraphs $G_i$ from each other.
\paragraph{Organization and proof outline.}
After brief preliminaries in Section~\ref{sec:prelim}, we prove Theorem~\ref{thm:qp} in Sections~\ref{sec:sep}--\ref{sec:main}. A brief outline of the proof is as follows.
Assuming that the directed treewidth of the graph~$G$ in the statement of Theorem~\ref{thm:qp} is sufficiently large, we use a known result (Lemma~\ref{lem:path-system}) to obtain a sufficiently large set~$\Pp$ of paths whose endpoints are well-linked.
We then distinguish two cases.
In the first case, the intersection graph of the paths in~$\Pp$ is sparse---the \emph{sparse case}.
Then, by the properties of $\Pp$ guaranteed by Lemma~\ref{lem:path-system}, we can rather directly construct the required graphs~$G_i$: intuitively, there is then a subset of $\Pp$ whose paths are sufficiently independent of each other to allow for a small overlap of the constructed graphs.
In the second case, the intersection graph of the paths in $\Pp$ contains a dense subgraph---the \emph{dense case}.
To treat this case, we need a new partitioning tool which allows us to separate the dense intersection subgraph into sufficiently many subgraphs that all remain sufficiently dense.
We can then look at each of these dense subgraphs individually and, using the density, construct the required subgraph~$G_i$ of sufficiently large directed treewidth.
The organization is as follows. Section~\ref{sec:sep} introduces the new partitioning tool, Section~\ref{sec:dense} handles the dense case in the analysis, while Section~\ref{sec:main} handles the sparse case and wraps up the argument.
In Section~\ref{sec:imp}, we discuss how to modify the arguments of Section~\ref{sec:main} to obtain the improved bounds of Theorem~\ref{thm:dtw-ep-all}.
\section{Preliminaries}\label{sec:prelim}
For brevity, we use $[i] := \{1, 2, \ldots, i\}$, where $i \in \mathbb{N} \setminus \{0\}$.
\subsection{Linkages}
\newcommand{\linkfrom}[1]{\ensuremath{A(#1)}}
\newcommand{\linkto}[1]{\ensuremath{B(#1)}}
\newcommand{\pathfrom}[1]{\ensuremath{\textsf{start}(#1)}}
\newcommand{\pathto}[1]{\ensuremath{\textsf{end}(#1)}}
Let $G=(V(G),E(G))$ be a directed graph and let $A, B$ be subsets of $V(G)$ with $|A| = |B|$. A \emph{linkage} from $A$ to $B$ in~$G$ is a set~$\Ll$ of~$|A|$ pairwise vertex-disjoint paths in~$G$, each with a starting vertex in $A$ and ending vertex in $B$. The \emph{order} of $\Ll$ is $|\Ll|=|A|$.
For $X, Y \subseteq V(G)$ and a linkage $\Ll$ from $X$ to $Y$, we denote $\linkfrom{\Ll} := X$ and
$\linkto{\Ll} := Y$.
For a path or a walk $P$, by $\pathfrom{P}$ and $\pathto{P}$ we denote the starting and ending vertex of $P$, respectively.
Let $\Ll$ and $\Kk$ be linkages. The \emph{intersection graph} of $\Ll$ and $\Kk$, denoted by $I(\Ll, \Kk)$, is the bipartite graph with the vertex set $\Ll \cup \Kk$ and an edge between a vertex in~$\Ll$ and a vertex in~$\Kk$ if the corresponding paths share at least one vertex.
A vertex set~$W \subseteq V(G)$ is \emph{well-linked} if for all subsets $A, B \subseteq W$ with $|A| = |B|$ there is a linkage $\Ll$ of
order $|A|$ from $A$ to $B$ in $G \setminus (W \setminus (A \cup B))$.
Let $\Pp$ be a family of walks in $G$ and let $c$ be a positive integer. We say that $\Pp$ is \emph{of congestion $c$} if for every $v \in V(G)$, the total
number of times the walks in $\Pp$ visit $v$ is at most $c$; here, if a walk $W \in \Pp$ visits $v$ multiple times, we count each visit separately.
A family of paths $\Pp$ is \emph{half-integral} (\emph{quarter-integral}) if it
is of congestion $2$ (resp. $4$).
We call two linkages $\Ll$ and $\Ll^\textsf{back}$
\emph{dual} to each other if $\linkfrom{\Ll} = \linkto{\Ll^\textsf{back}}$ and
$\linkfrom{\Ll^\textsf{back}} = \linkto{\Ll}$.
For two dual linkages $\Ll$ and $\Ll^\textsf{back}$ in a graph $G$, we define
an auxiliary directed graph $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$ as follows.
We take $V(\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})) = \Ll$ and for every path $P \in \Ll^\textsf{back}$
that starts in a vertex $\pathfrom{P} = \pathto{L}$ for some $L \in \Ll$
and ends in a vertex $\pathto{P} = \pathfrom{L'}$ for some $L' \in \Ll$,
we put an arc $(L,L')$ to $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$. Note that
it may happen that $L = L'$.
When the backlinkage $\Ll^\textsf{back}$ is clear from the context, we abbreviate
$\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$ to $\ensuremath{\textsf{Aux}}(\Ll)$.
Observe that in $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$ every node is of in- and out-degree exactly one
and thus this graph is a disjoint union of directed cycles.
With every arc $(L,L')$ of $\ensuremath{\textsf{Aux}}(\Ll, \Ll^\textsf{back})$ we can associate the walk from
$\pathfrom{L}$ to $\pathfrom{L'}$ that first goes along $L$ and then follows the path
$P \in \Ll^\textsf{back}$ that gives rise to the arc $(L,L')$.
Consequently, with every collection of pairwise disjoint paths and cycles in $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$
there is an associated collection of walks (closed walks for cycles) in $G$ that is
of congestion $2$ as it originated from two linkages. Note that the same construction works if $\Ll$ and $\Ll^\textsf{back}$ are half-integral linkages, and then the walks
in $G$ corresponding to a family of paths and cycles in $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$ would
be of congestion $4$.
Furthermore, with a pair of dual linkages $\Ll$ and $\Ll^\textsf{back}$ we can associate
a \emph{backlinkage-induced order} $\Ll = \{L_1,L_2,\ldots,L_{|\Ll|}\}$ as follows.
If $C_1,C_2,\ldots,C_r$ are the cycles of $\ensuremath{\textsf{Aux}}(\Ll,\Ll^\textsf{back})$ in an arbitrary order, then
$L_1,L_2,\ldots,L_{|C_1|}$ are the vertices of $C_1$ in the order of their appearance on $C_1$, and
$L_{|C_1|+1},\ldots,L_{|C_1|+|C_2|}$ are the vertices of $C_2$ in the order of their appearance on $C_2$, etc.
That is, we order the elements of $\Ll$ first according to the cycle of $\ensuremath{\textsf{Aux}}(\Ll)$ they lie on, and then, within
one cycle, according to the order around this cycle.
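As an illustrative aside (this sketch is ours and not part of the formal development), the auxiliary graph and the backlinkage-induced order admit a direct implementation when each path is modeled as a tuple of vertices; the function name \texttt{aux\_order} is our choice.

```python
def aux_order(L, Lback):
    """Return the backlinkage-induced order of the dual linkages L, Lback.

    Each path is a tuple of vertices, with A(L) = B(Lback) and
    A(Lback) = B(L).  Aux(L, Lback) has one arc per path Q in Lback,
    from the L-path ending at start(Q) to the L-path starting at end(Q).
    Every node has in- and out-degree one, so Aux is a disjoint union of
    directed cycles, which we traverse one after another.
    """
    by_start = {P[0]: i for i, P in enumerate(L)}   # A(L) -> index in L
    by_end = {P[-1]: i for i, P in enumerate(L)}    # B(L) -> index in L
    succ = {by_end[Q[0]]: by_start[Q[-1]] for Q in Lback}
    order, seen = [], set()
    for i in range(len(L)):          # walk each cycle of Aux exactly once
        while i not in seen:
            seen.add(i)
            order.append(L[i])
            i = succ[i]
    return order
```

For instance, if every backlinkage path returns to the start of the next path of $\Ll$ in a single cycle, the order simply follows that cycle; if $\ensuremath{\textsf{Aux}}$ has several cycles, the order concatenates them.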
We will also need the following operation on a pair of dual linkages $\Ll$ and $\Ll^\textsf{back}$.
Let $\Pp \subseteq \Ll$ be a sublinkage. For every $P \in \Pp$, construct a walk $Q(P)$ as follows.
Start from the path $Q_0 \in \Ll^\textsf{back}$ with $\pathfrom{Q_0} = \pathto{P}$ and set $Q(P) = Q_0$. Given $Q_i \in \Ll^\textsf{back}$ for $i \geq 0$, proceed as follows.
Let $P_{i+1} \in \Ll$ be the path with $\pathto{Q_i} = \pathfrom{P_{i+1}}$. If $P_{i+1} \in \Pp$, then stop. Otherwise, define $Q_{i+1} \in \Ll^\textsf{back}$
to be the path with $\pathto{P_{i+1}} = \pathfrom{Q_{i+1}}$. Append $P_{i+1}$ and $Q_{i+1}$ at the end of $Q(P)$ and repeat.
Finally, we shortcut $Q(P)$ to a path $Q'(P)$ with the same endpoints.
In this manner, $\Qq := \{Q'(P)~|~P \in \Pp\}$ is a half-integral linkage with $\linkfrom{\Pp} = \linkto{\Qq}$ and $\linkfrom{\Qq} = \linkto{\Pp}$.
We call $\Qq$ the \emph{backlinkage induced by $\Pp$ on $(\Ll, \Ll^\textsf{back})$}.
Furthermore, we can perform the same construction if $\Ll$ and $\Ll^\textsf{back}$ are half-integral linkages, obtaining a quarter-integral linkage $\Qq$.
\subsection{Degeneracy and directed treewidth}
A graph $G$ is \emph{$d$-degenerate} if every subgraph of $G$ contains a vertex of degree at most $d$.
In this paper we do not need the exact definition of directed treewidth. Instead, we rely on the following two results.
\begin{lemma}[\cite{Reed99}]\label{lem:dtw2wl}
Every directed graph $G$ of directed treewidth $k$ contains a
well-linked set of size $\Omega(k)$.
\end{lemma}
\begin{lemma}[\cite{DBLP:journals/corr/KawarabayashiK14,DBLP:conf/stoc/KawarabayashiK15}]\label{lem:path-system}
There is an absolute constant $c'$ with the following property.
Let $\alpha,\beta \geq 1$ be integers and let $G$ be a digraph with $\dtw(G) \geq c' \cdot \alpha^2\beta^2$.
Then there exist $\alpha$ vertex-disjoint paths $P_1,\ldots,P_\alpha$ and sets $A_i,B_i\subseteq V(P_i)$ with $|A_i| = |B_i| = \beta$, such that $A_i$ appears before $B_i$ on $P_i$ and
$\bigcup_{i=1}^{\alpha} A_i\cup B_i$
is well-linked.
\end{lemma}
We also need the following two auxiliary results.
Note that a coloring in Lemma~\ref{lem:degenerate} can be arbitrary and is not necessarily proper.
\begin{lemma}[{\cite[Lemma~4.3]{DBLP:journals/ejc/ReedW12}}]\label{lem:degenerate}
Let $r\ge 2$ be an integer, let $d$ be a real, and let $H$ be an $r$-colored graph with color classes $V_1,\ldots,V_r$, such that $|V_i|\ge 4e(r-1)d$ for every $i$ and the graph $H[V_i\cup V_j]$ is $d$-degenerate for all $i \neq j$. Then there exists an independent set $\{x_1,\ldots,x_r\}$ such that $x_i\in V_i$ for every $i \in [r]$.
\end{lemma}
\begin{lemma}[{\cite[Lemma~5.5]{DBLP:conf/soda/HatzelKK19}}]\label{lem:dtw-bound}
Let $G$ be a digraph and $P_1,\ldots,P_k$ be disjoint paths such that each $P_i$ consists of two subpaths $A_i$ and $B_i$, where $A_i$ precedes $B_i$.
Furthermore, let $\{ L_{i,j} \colon i,j \in [k], i\neq j\}$ be a set of pairwise disjoint paths, such that $L_{i,j}$ starts in $B_i$ and ends in $A_j$.
Then
\[
\dtw\Bigl(\bigcup_i P_i \cup \bigcup_{i\neq j} L_{i,j}\Bigr)\ge \frac{k}{8}.
\]
\end{lemma}
\section{Partitioning lemma}\label{sec:sep}
In this section, we develop a main technical tool that we use in the
proof of Theorem~\ref{thm:qp}. Intuitively, in the dense case of the proof (see the proof of Lemma~\ref{lem:dense} in Section~\ref{sec:dense}),
we will have a bipartite graph of large minimum degree which we
partition into subgraphs induced by pairs of vertex sets~$(U_i, W_i)$.
These subgraphs will define the~$G_i$ from the statement of Theorem~\ref{thm:qp}. To
obtain a lower bound on the directed treewidth of~$G_i$, we need that
the parts~$(U_i, W_i)$ each induce a subgraph of large average degree.
The bipartite graph $G=(X \cup Y,E)$, which will be considered in this section, has a fixed ordering of vertices in each bipartition class: $X=\{x_1,x_2,\ldots,x_a\}$ and $Y=\{y_1,y_2,\ldots,y_b\}$.
A subset $X'$ of $X$ (resp. $Y'$ of $Y$) is called a \emph{segment} if it is of the form $\{x_i,x_{i+1},\ldots,x_j\}$ for some $1 \leq i < j \leq a$ (resp. $\{y_i,y_{i+1},\ldots,y_j\}$ for some $1 \leq i < j \leq b$).
Now we are ready to prove the following lemma.
\begin{lemma} \label{lem:partition}
Let $h \geq 0$ and $n$ be integers, $d$ be a positive real such that $d \cdot 4^{h+1} - 1 > 2$, and let $G$ be a bipartite graph with bipartition classes $X = \{x_1,x_2,\ldots,x_a\}$ and $Y=\{y_1,y_2,\ldots,y_b\}$, such that $a+b \leq n$ and $|E(G)| \geq (d \cdot 4^{h+1} -1) \cdot n$.
Then in $X$ we can find $k:=2^h$ pairwise disjoint sets $I_1,I_2,\ldots,I_k$, and in $Y$ we can find $k$ pairwise disjoint sets $J_1,J_2,\ldots,J_k$, such that:
\begin{compactenum}
\item for every $i \in [k]$ the set $I_i$ is a segment of $X$ and the set $J_i$ is a segment of $Y$,
\item for every $i \in [k]$, the number of edges in $G$ between $\{x_j: j \in I_i\}$ and $\{y_j: j \in J_i\}$ is at least $d \cdot n$.
\end{compactenum}
\end{lemma}
\begin{proof}
For $I \subseteq X$ and $J \subseteq Y$, let $e(I,J)$ denote the number of edges with one endpoint in $I$ and the other in $J$. Observe that $e(X,Y)=|E(G)| > 2n$.
We prove the lemma by induction on $h$. Note that for $h=0$ the claim is trivially satisfied by taking $I_1 = X$ and $J_1 = Y$, as $d \cdot 4^{h+1} - 1 > 2$ and $h \geq 0$ implies $d \cdot 4^{h+1}-1 \geq d$.
So now assume that $h \geq 1$ and that the claim holds for $h-1$. Let $s \in [a]$ be the minimum integer for which $\sum_{i=1}^s \deg x_i \geq e(X,Y)/2$, and let $t \in [b]$ be the minimum integer for which $\sum_{i=1}^t \deg y_i \geq e(X,Y)/2$.
We observe that $d \cdot 4^{h+1} -1 > 2$ implies that $1 < s < a$ and $1 < t < b$.
Define $X^1 := \{x_1,x_2,\ldots,x_{s-1}\}$ and $X^2 := \{x_{s+1},\ldots,x_a\}$, and $Y^1 := \{y_1,y_2,\ldots,y_{t-1}\}$ and $Y^2 := \{y_{t+1},\ldots,y_b\}$.
We aim to show that the number of edges joining $X^1$ and $Y^1$ is roughly the same as the number of edges joining $X^2$ and $Y^2$, and the number of edges joining $X^1$ and $Y^2$ is roughly the same as the number of edges joining $X^2$ and $Y^1$. Since $\deg x_s \leq b < n$ and $\deg y_t \leq a < n$, by the choice of $s$ and $t$ we obtain the following set of inequalities.
\begin{align} \label{eq:boundXiY}
\begin{split}
e(X,Y)/2 - \deg x_s \leq e(X^1,Y) \leq e(X,Y)/2\\
e(X,Y)/2 - \deg x_s \leq e(X^2,Y) \leq e(X,Y)/2\\
e(X,Y)/2 - \deg y_t \leq e(X,Y^1) \leq e(X,Y)/2\\
e(X,Y)/2 - \deg y_t \leq e(X,Y^2) \leq e(X,Y)/2.
\end{split}
\end{align}
Observe that
\begin{align*}
e(X^1,Y^1) + e(X^1,Y^2) &\leq e(X^1,Y) = e(X^1,Y^1) + e(X^1,Y^2) + e(X^1,\{y_t\}) \\
&\leq e(X^1,Y^1) + e(X^1,Y^2) + \deg y_t
\end{align*}
(and analogously for each of the remaining inequalities in \eqref{eq:boundXiY}).
Thus we obtain:
\begin{align} \label{eq:bound-sumXY}
\begin{split}
e(X,Y)/2 - n \leq e(X^1,Y^1)+e(X^1,Y^2) \leq e(X,Y)/2\\
e(X,Y)/2 - n \leq e(X^2,Y^1)+e(X^2,Y^2) \leq e(X,Y)/2\\
e(X,Y)/2 - n \leq e(X^1,Y^1)+e(X^2,Y^1) \leq e(X,Y)/2\\
e(X,Y)/2 - n \leq e(X^1,Y^2)+e(X^2,Y^2) \leq e(X,Y)/2.
\end{split}
\end{align}
By subtracting appropriate pairs of inequalities in \eqref{eq:bound-sumXY}, we obtain the following bounds.
\begin{align} \label{eq:bound-diffXY}
\begin{split}
- n \leq e(X^1,Y^1)-e(X^2,Y^2) \leq n\\
- n \leq e(X^1,Y^2)-e(X^2,Y^1) \leq n\\
\end{split}
\end{align}
Observe that (the edge $x_sy_t$, if present, is counted twice in the last two terms)
\begin{align*}
e(X,Y) &\leq e(X^1,Y^1) + e(X^1,Y^2) + e(X^2,Y^1) + e(X^2,Y^2) + \deg x_s + \deg y_t \\
&\leq e(X^1,Y^1) + e(X^1,Y^2) + e(X^2,Y^1) + e(X^2,Y^2) + n.
\end{align*}
Thus, by the pigeonhole principle, at least one of the following holds:
\begin{align} \label{eq:cases}
\begin{split}
e(X^1,Y^1)+e(X^2,Y^2) \geq e(X,Y)/2 -n/2\\
e(X^1,Y^2)+e(X^2,Y^1) \geq e(X,Y)/2 -n/2.
\end{split}
\end{align}
Suppose that the first case holds. Define $G^1 := G[X^1 \cup Y^1]$ and $G^2 := G[X^2 \cup Y^2]$. Combining \eqref{eq:bound-diffXY} and \eqref{eq:cases}, we obtain that
\begin{align}
\begin{split}
|E(G^1)| &=e(X^1,Y^1) \geq e(X,Y)/4 -3n/4 \geq (d \cdot 4^{h+1}-1)n/4 - 3n/4 \ge (d\cdot 4^h-1)n\\
|E(G^2)| &=e(X^2,Y^2) \geq e(X,Y)/4 -3n/4 \geq (d\cdot 4^h-1)n.
\end{split}
\end{align}
We observe that the graphs $G^1,G^2$ satisfy the inductive assumption (for $h-1$), so in the vertex set of $G^1$ we can find two families of $k/2$ pairwise corresponding segments $I^1_1,I^1_2,\ldots,I^1_{k/2}$ and $J^1_1,J^1_2,\ldots,J^1_{k/2}$, and in the vertex set of $G^2$ we can find two families of $k/2$ pairwise corresponding segments $I^2_1,I^2_2,\ldots,I^2_{k/2}$ and $J^2_1,J^2_2,\ldots,J^2_{k/2}$. We obtain the desired subsegments of $X$ and $Y$ by setting:
\begin{equation*}
\begin{aligned}[c]
I_i =
\begin{cases}
I^1_i & \text{ if } i \leq k/2,\\
I^2_{i-k/2} & \text{ if } i > k/2,
\end{cases}
\end{aligned}
\qquad
\begin{aligned}[c]
J_i =
\begin{cases}
J^1_i & \text{ if } i \leq k/2,\\
J^2_{i-k/2} & \text{ if } i > k/2.
\end{cases}
\end{aligned}
\end{equation*}
If the second case in \eqref{eq:cases} holds, we take $G^1 := G[X^1 \cup Y^2]$ and $G^2 := G[X^2 \cup Y^1]$, and the rest of the proof is analogous.
\end{proof}
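As an illustrative aside (this sketch is ours and not part of the formal development), the recursion in the proof above is effectively an algorithm: cut each side at the minimal prefix carrying half of the edges and recurse into the denser of the two ``diagonal'' pairings. The function name and the toy input below are our choices, and the sketch makes no attempt to certify the edge-count preconditions of the lemma.

```python
from itertools import accumulate

def partition_bipartite(xs, ys, edges, h):
    """Recursively split the ordered sides xs, ys into 2**h segment pairs,
    following the divide-and-conquer scheme of the partitioning lemma.
    `edges` is a set of (x, y) pairs; xs and ys are ordered vertex lists."""
    if h == 0:
        return [(xs, ys)]

    def cnt(A, B):  # number of edges between vertex lists A and B
        return sum((x, y) in edges for x in A for y in B)

    deg_x = [cnt([x], ys) for x in xs]
    deg_y = [cnt(xs, [y]) for y in ys]
    e = sum(deg_x)
    # s, t: minimal prefixes carrying at least half of the edges.
    s = next(i for i, c in enumerate(accumulate(deg_x), 1) if 2 * c >= e)
    t = next(j for j, c in enumerate(accumulate(deg_y), 1) if 2 * c >= e)
    X1, X2 = xs[:s - 1], xs[s:]   # x_s itself is dropped, as in the proof
    Y1, Y2 = ys[:t - 1], ys[t:]
    # Recurse into the denser "diagonal" pairing.
    if cnt(X1, Y1) + cnt(X2, Y2) >= cnt(X1, Y2) + cnt(X2, Y1):
        return (partition_bipartite(X1, Y1, edges, h - 1)
                + partition_bipartite(X2, Y2, edges, h - 1))
    return (partition_bipartite(X1, Y2, edges, h - 1)
            + partition_bipartite(X2, Y1, edges, h - 1))
```

On the complete bipartite graph $K_{64,64}$ with $h=1$ and $d=2$ (so $|E| = 4096 \geq (d\cdot 4^{h+1}-1)n = 3968$), the two returned pairs are disjoint segments carrying at least $d\cdot n = 256$ edges each, as the lemma guarantees.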
The following statement brings the technical statement of
Lemma~\ref{lem:partition} into a more easily applicable form.
\begin{lemma} \label{lem:disjointPairs}
Let $k, r \geq 1$ be two integers and let $G$ be a bipartite graph with bipartition classes $X = \{x_1,x_2,\ldots,x_a\}$ and $Y=\{y_1,y_2,\ldots,y_b\}$ and minimum degree at least $2^9 \cdot r \cdot k$.
Then there are $k$ sets $U_1,U_2,\ldots,U_k$, and $k$ sets $W_1,W_2,\ldots,W_k$, such that:
\begin{compactenum}
\item for each $i \in [k]$ the set $U_i$ is a segment of $X$ and the set $W_i$ is a segment of $Y$,
\item for each distinct $i,j \in [k]$ we have $U_i \cap U_j = \emptyset$ and $W_i \cap W_j = \emptyset$,
\item for every $i \in [k]$, the average degree of the graph $G[U_i \cup W_i]$ is at least $r$.
\end{compactenum}
\end{lemma}
\begin{proof}
Let $h$ be the minimum integer such that $k' :=2^h \geq 2k$; note that $k' < 4k$. Also, define $d := 2r/k$ and $n := a+b$.
We have
$$d \cdot 4^{h+1} -1 = 4d(k')^2 -1 \geq \frac{8r}{k} \cdot (2k)^2 - 1 = 32 \cdot r \cdot k - 1 > 2.$$
Observe that the number of edges in $G$ is at least
$$n \cdot r \cdot 2^8\cdot k = (16r/k \cdot (4k)^2)n > (4d (k')^2)n > (d \cdot 4^{h+1} -1)n.$$
Thus $G$ satisfies the assumptions of Lemma \ref{lem:partition} for $h$, $n$, and $d$. Let $I_1,I_2,\ldots,I_{k'}$ be the disjoint segments in $X$, and $J_1,J_2,\ldots,J_{k'}$ be the disjoint segments in $Y$, whose existence is guaranteed by Lemma \ref{lem:partition}.
A segment $I_i$ ($J_i$, resp.) is called \textit{large} if $|I_i| \geq 2n/k'$ ($|J_i| \geq 2n/k'$, resp.).
A pair $(I_i,J_i)$ is \textit{large} if at least one of $I_i, J_i$ is large, otherwise the pair is \textit{small}.
Note that there are at most $n / (2n/k') = k'/2$ large segments in total.
Thus the number of small pairs is at least $k'/2 \geq k$. We obtain the segments $(U_i,W_i)$ by taking the first $k$ small pairs $(I_i,J_i)$. Clearly these segments satisfy conditions 1.\ and 2.\ of the lemma.
Now take any $i \in [k]$ and let us compute the average degree of the graph $G_i:=G[U_i \cup W_i]$. By Lemma~\ref{lem:partition}, $|E(G_i)| \geq d \cdot n$. On the other hand, since $(U_i,W_i)$ is a small pair, we have that $|V(G_i)| = |U_i \cup W_i| < 4n/k'$. Thus we obtain that the average degree of $G_i$ is
\[
\frac{2\cdot |E(G_i)|}{|V(G_i)|} > \frac{d \cdot n}{4n/k'} = \frac{dk'}{4} \geq d \; \frac{2k}{4} = \frac{2r}{k} \cdot \frac{k}{2} = r.
\]
This completes the proof.
\end{proof}
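As a quick sanity check of the constants, take $k=3$ and $r=1$: then $h = 3$ and $k' = 2^3 = 8$, so indeed $2k = 6 \leq k' < 12 = 4k$, and $d = 2r/k = 2/3$. The edge bound used above becomes
\[
|E(G)| \ \geq\ 2^8 \cdot r \cdot k \cdot n \ =\ 768\,n \ >\ \bigl(d \cdot 4^{h+1} - 1\bigr) n \ =\ \Bigl(\tfrac{2}{3} \cdot 4^{4} - 1\Bigr) n \ \approx\ 170.3\,n,
\]
so the hypothesis of Lemma~\ref{lem:partition} holds with room to spare.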
\section{The dense case}\label{sec:dense}
In this section, we prove Theorem~\ref{thm:qp} roughly in the case
when there are two linkages $\Ll$ and $\Kk$ such that their set
$\linkfrom{\Ll} \cup \linkfrom{\Kk} \cup \linkto{\Ll} \cup
\linkto{\Kk}$ of endpoints is well linked and such that the paths in
$\Ll$ and $\Kk$ intersect a lot. The formal statement proved in this
section is as follows.
\begin{lemma}\label{lem:dense}
Let $a, b \in \mathbb{N}^+$. Let $D$ be a directed graph and $\Ll$
and $\Kk$ be two linkages in~$D$ such that
$\linkfrom{\Ll} \cup \linkto{\Ll} \cup \linkfrom{\Kk} \cup
\linkto{\Kk}$ is well-linked in~$D$. Suppose that the intersection
graph $I(\Ll, \Kk)$ has degeneracy more
than~$327\,680 \cdot a \cdot b \cdot \log_2(|\Ll|/b)$. Then there are
directed graphs $D_1, D_2, \ldots, D_a$ with the following
properties:
\begin{compactenum}[(i)]
\item each $D_i$ is a subgraph of~$D$,
\item each vertex of $D$ belongs to at most four graphs~$D_i$, and
\item each graph~$D_i$ has directed treewidth at least~$b$.
\end{compactenum}
\end{lemma}
\paragraph{Proof outline} The basic idea of the proof of
Lemma~\ref{lem:dense} is as follows. We first fix a pair of linkages
$\Ll^\textsf{back}$ and $\Kk^\textsf{back}$ which are dual to $\Ll$ and $\Kk$,
respectively. (This is possible because of well-linkedness of the
endpoints.) The subgraphs $D_i$ that we construct will subpartition
the vertex set of each of the four linkages~$\Ll, \Ll^\textsf{back}, \Kk, \Kk^\textsf{back}$ and
hence each vertex of $D$ is in at most four subgraphs~$D_i$. To
construct the desired subgraphs $D_i$, we consider the
backlinkage-induced order~$\Pi_\Ll$ on $\Ll$
and $\Pi_\Kk$ on $\Kk$.
Using these orderings of the
paths of~$\Ll$ and~$\Kk$, we can apply the partitioning lemma
(Lemma~\ref{lem:disjointPairs}) to the intersection graph of~$\Ll$
and~$\Kk$, obtaining a subpartition $I_1, \ldots, I_k$ of~$\Ll$ and a
subpartition~$J_1, \ldots, J_k$ of~$\Kk$. These subpartitions have the
nice property that each intersection graph~$I(I_i, J_i)$ induced by a
pair $I_i, J_i$ contains many edges (representing intersections
between the corresponding paths) and that only a constant number of
cycles of~$\ensuremath{\textsf{Aux}}(\Ll)$ and $\ensuremath{\textsf{Aux}}(\Kk)$ cross $I_i$ or $J_i$. By closing
each of these crossing cycles by introducing an artificial new path,
we obtain a pair of dual linkages $I_i, I_i'$ and a pair of dual
linkages $J_i, J_i'$. Then, using
Lemma~\ref{lem:high-degree-linkages-dtw} below, we will obtain a lower
bound on the directed treewidth of the graph induced by $I_i \cup J_i \cup I_i' \cup J_i'$,
which constitute our desired subgraph $D_i$.
\paragraph{Treewidth lower bound}
For technical reasons, we will have to work with half-integral
linkages. The intersection graph for a pair of
half-integral linkages is defined in the same way as for ordinary
linkages.
\begin{lemma}\label{lem:high-degree-linkages-dtw}
Let $k, d \in \mathbb{N}^+$
and
$\Pp, \Pp^\textsf{back}, \Qq, \Qq^\textsf{back}$ be four half-integral linkages in a
directed graph such that $\Pp$ and $\Pp^\textsf{back}$ are dual to each other
and $\Qq$ and $\Qq^\textsf{back}$ are dual to each other. Let the intersection
graph $I(\Pp, \Qq)$ have minimum degree at least~$d$ where
$d\ge 8k\log_{4\over 3} {({{|\Pp|}\over 24k})}+24k+4$.
Then the graph
$\bigcup(\Pp \cup \Pp^\textsf{back} \cup \Qq \cup \Qq^\textsf{back})$ has directed treewidth at least~$k$.
\end{lemma}
\noindent The proof of Lemma~\ref{lem:high-degree-linkages-dtw} is
inspired by the proof of Lemma 5.4 in~\cite{DBLP:conf/soda/HatzelKK19}.
We could use Lemma~5.4 here as well, but its proof, unfortunately, contains errors.
Nevertheless, we derive an incomparable bound which is better for our use since the lower bound on the degree that we need depends only linearly on $k$ whereas the lower bound claimed in Lemma~5.4~\cite{DBLP:conf/soda/HatzelKK19} is~$k^2$.
Also, we adapt the constants in the lemma for half-integral linkages.
The proof of Lemma~\ref{lem:high-degree-linkages-dtw} is based on the following
Lemma~\ref{lem:balanced-separation}. Herein, we use the following
definition. Let $D$ be a directed graph. A \emph{separation} in $D$ is
a pair $(X, Y)$ of two vertex subsets~$X, Y \subseteq V(D)$ with
$X \cup Y = V(D)$ such that there are no edges from $Y \setminus X$ to
$X \setminus Y$ in~$D$. The \emph{order} of $(X, Y)$ is $|X \cap Y|$.
\begin{lemma}[\cite{KO14}]\label{lem:balanced-separation}
Let $w \in \mathbb{N}$. Let $G$ be a directed graph of directed
treewidth at most $w$ and let $W \subseteq V(G)$ such that
$|W| \geq 2w + 2$. Then there is a separation $(X, Y)$ in~$G$ of
order at most~$w$ such that $X$ and $Y$ each contain at least
$|W|/4$ elements of~$W$.
\end{lemma}
\begin{proof}
The statement follows easily from Lemma~6.4.10 in~\cite{KO14}.
We provide a proof for completeness. By Lemma~6.4.10 in~\cite{KO14} there exist three pairwise disjoint vertex sets
$A, B, S \subseteq V(G)$ such that the following properties hold.
\begin{itemize}
\item[(i)] $W = A \cup (S \cap W) \cup B$.
\item[(ii)] There is no directed path from $B$ to $A$ in $G - S$.
\item[(iii)] Both $A$ and $B$ contain at most $3|W|/4$ elements
of~$W$.
\item[(iv)] $|S|\leq w$.
\end{itemize}
Based on the sets $A, B, S$, we define the desired
separation~$(X, Y)$. Let $R(B)$ be the set of vertices in
$V(G) \setminus B$ reachable from $B$, that is, a vertex
$v \in V(G)$ is in $R(B)$ if it is not in $B$ and there is a
directed path in $G$ to~$v$ from a vertex in $B$. Note that
$R(B) \cap A = \emptyset$ by Property~(ii). Define
$Y = S \cup B \cup R(B)$ and $X = (V(G) \setminus Y) \cup S$. Note
that $X \cap Y = S$. We claim that $(X, Y)$ is a separation for~$G$
with the desired properties.
Clearly, $X \cup Y = V(G)$. Thus, to show that $(X, Y)$ is a
separation, it remains to show that there is no edge from
$Y \setminus X$ to $X \setminus Y$. For the sake of contradiction,
assume that there is such an edge~$(y, x) \in E(G)$ with
$y \in Y \setminus X$ and $x \in X \setminus Y$. Observe that
$y \in Y \setminus S = B \cup R(B)$ and thus $x \in B \cup R(B)$.
Then, $x \in Y$ by definition, a contradiction. Hence, $(X, Y)$ is a
separation. Recall that $X \cap Y = S$ and thus $(X, Y)$ is of order at
most~$w$, as required.
It remains to show the balancedness property. Clearly,
$B \subseteq Y \setminus X$. Furthermore, since
$A \cap (S \cup B \cup R(B)) = \emptyset$, we have
$A \subseteq X \setminus Y$. Thus,
\begin{align*}
|W \cap (Y \setminus X)| &= |W \cap B| \leq 3|W|/4 \text{, and}\\
|W \cap (X \setminus Y)| &= |W \cap A| \leq 3|W|/4.
\end{align*}
Hence,
\begin{align*}
|W \cap X| &\geq |W| - |W \cap (Y \setminus X)| \geq |W|/4 \text{, and}\\
|W \cap Y| &\geq |W| - |W \cap (X \setminus Y)| \geq |W|/4.
\end{align*}
This completes the proof.
\end{proof}
We are now ready to prove that two pairs of half-integral linkages
whose paths intersect a lot induce a graph with large directed
treewidth.
\begin{proof}[Proof of Lemma~\ref{lem:high-degree-linkages-dtw}]
Let $D$ be the graph containing~$\Pp, \Pp^\textsf{back}, \Qq$, and $\Qq^\textsf{back}$, and let $H = \bigcup(\Pp \cup \Pp^\textsf{back} \cup \Qq \cup \Qq^\textsf{back})$.
Assume for the sake of contradiction that $H$ has directed treewidth less than $k$.
The basic idea is to iteratively separate the paths in
$\Pp$ and $\Qq$ using a balanced separation of small order while
maintaining that those paths which do not intersect any of the used
separators still intersect a lot among themselves.
By balancedness, this will shrink the number of paths quickly, but by high intersection, there will always
be many paths left, giving a contradiction.
Define $q\df \lceil \log_{4\over 3} {\bigl({{|\Pp|}\over 24k}\bigr)}\rceil $.
We inductively define two sequences of linkages $\Pp=\Pp_0\speq \Pp_1\speq\cdots\speq\Pp_q$ and $\Qq=\Qq_0\speq \Qq_1\speq\cdots\speq\Qq_q$ and prove that they satisfy the following conditions for each $i \in \{0,\ldots,q\}$.
\begin{itemize}
\item[(i)] If $i > 0$, then $|\Pp_i|\le{3\over4}|\Pp_{i-1}|$.
\item[(ii)] There exist quarter-integral linkages
$\Pp_i^\textsf{back}, \Qq_i^\textsf{back}$ which are dual to $\Pp_i$ and
$\Qq_i$, respectively.
\item[(iii)] The minimum degree of $I(\Pp_i,\Qq_i)$ is at least $d-8ik$.
\end{itemize}
For the induction beginning, we define $\Pp_0 \df \Pp$ and
$\Qq_0 \df \Qq$. By the preconditions of the lemma, it is clear that
the above conditions are satisfied; for Condition~(ii), observe
that $\Pp^\textsf{back}$ and $\Qq^\textsf{back}$ represent the required dual
linkages $\Pp^\textsf{back}_0$ and $\Qq^\textsf{back}_0$.
Now suppose that $i > 0$ and that $\Pp_{i - 1}$ and $\Qq_{i - 1}$
have already been defined and that they satisfy the conditions. Let
$A_i$ be the starting set of linkage $\Pp_{i-1}$, that is,
$A_i = \linkfrom{\Pp_{i - 1}}$. We use
Lemma~\ref{lem:balanced-separation} with $W = A_i$ to get a separation $(X_i, Y_i)$
and a corresponding separator $S_i \df X_i \cap Y_i$ of size at most
$k$ such that $X_i$ and $Y_i$ both contain at least $|A_i|/4$ elements of $A_i$. To see that Lemma~\ref{lem:balanced-separation} is applicable, recall that $d \geq 8kq + 24k + 4$ and thus \[|A_i| = |\Pp_{i-1}| \geq d-8k(i-1) \geq 8kq + 24k + 4 - 8k(i - 1) \geq 2k+2.\]
Recall that there is no directed path from $Y_i$ to $X_i$ avoiding $S_i$.
We define
\[
\Pp_i \df \{P\in\Pp_{i-1}\mid P\cap X_i=\emptyset\}\text{\qquad and \qquad} \Qq_i\df \{Q\in\Qq_{i-1}\mid Q\cap X_i=\emptyset\}.
\]
Clearly, we have $\Pp_i \subseteq \Pp_{i - 1}$ and $\Qq_i \subseteq \Qq_{i - 1}$. We claim that Conditions~(i) to~(iii) are satisfied. Condition~(i)
is straightforward since at least ${1\over 4}$ of the paths in $\Pp_{i-1}$ start in $X_i$ and hence do not belong to $\Pp_i$.
Now consider Condition~(ii).
We define $\Pp_i^\textsf{back}$ to be the backlinkage induced by $\Pp_i$ on $(\Pp,\Pp^\textsf{back})$ and
$\Qq_i^\textsf{back}$ to be a backlinkage induced by $\Qq_i$ on $(\Qq,\Qq^\textsf{back})$.
Since $\Pp$, $\Pp^\textsf{back}$, $\Qq$, and $\Qq^\textsf{back}$ are half-integral, $\Pp_i^\textsf{back}$ and $\Qq_i^\textsf{back}$ are quarter-integral.
It remains to show Condition~(iii). The condition is trivial if $i = 0$. If $i > 0$, we first prove the following claim:
\begin{claim}\label{cla:cutSize}
At most $8k$ paths from linkage $\Dd \in \{\Pp_{i - 1},\Qq_{i - 1}\}$ with corresponding dual linkage $\Dd^\textsf{back}$ can intersect both $Y_i$ and $X_i$.
\end{claim}
\begin{claimproof}
Say that a path on which a vertex of $Y_i$ precedes a vertex of $X_i$ is of the \emph{first type}; every such path has to pass through~$S_i$. Since $\Dd$ is half-integral and $|S_i| \leq k$, there are at most $2k$ paths of the first type in $\Dd$.
Next, we bound the number of paths $P \in \Dd$ that go from a vertex in $X_i$ to a vertex in $Y_i$ and are not of the first type; say that such paths $P$ are of the \emph{second type}.
We claim that there is an injective mapping~$M$, mapping each path~$P$ of the second type to some path $Q \in \Dd \cup \Dd^\textsf{back}$ such that $Q$ has nonempty intersection with~$S_i$.
First, observe that $P$ has to start in $X_i$, because otherwise it is also of the first type.
Denote by $s \df \pathfrom{P} \in X_i$ the starting vertex of $P$.
Since $\Dd^\textsf{back}$ is dual to $\Dd$, there is a path $Q_1 \in \Dd^\textsf{back}$ that ends in~$s$.
Either $Q_1$ intersects $S_i$, in which case we put $M(P) \df Q_1$, or not.
In the second case, there is a path $Q_2 \in \Dd$ with $\pathto{Q_2} = \pathfrom{Q_1}$.
Again, either $Q_2$ intersects $S_i$, in which case we put $M(P) \df Q_2$, or not.
Continuing in this way, we eventually find a path~$Q_t \in \Dd \cup \Dd^\textsf{back}$ that intersects~$S_i$, and we put $M(P) \df Q_t$. The process indeed terminates: in each step in which the current path does not intersect~$Y_i$, the number of not yet considered paths in $\Dd \cup \Dd^\textsf{back}$ decreases, while at least one such path does intersect~$Y_i$, namely the path $R \in \Dd^\textsf{back}$ with $\pathto{P} = \pathfrom{R}$. Furthermore, by construction, no path in $\Dd \cup \Dd^\textsf{back}$ is chosen as the image of two different paths~$P$. Thus, the mapping~$M$ that we construct is injective.
Let $\mathcal{R}$ be the set of paths of the second type. Observe that
$|M(\mathcal{R}) \cap \Dd^\textsf{back}| \leq 4k$ since $\Dd^\textsf{back}$ is
quarter-integral by Condition~(ii). Furthermore,
$|M(\mathcal{R}) \cap \Dd| \leq 2k$ since $\Dd$ is half-integral.
Thus, overall there are at most $8k$ paths in $\Dd$ that intersect
both $X_i$ and $Y_i$.
\end{claimproof}
Now we can prove Condition~(iii) when $i > 0$. We first show that there is at least one path $P$ in $\Pp_i$.
Let $\Pp_{i - 1}^Y$ be the set of paths in~$\Pp_{i - 1}$ that start in~$Y_i$.
Note that $\Pp_i \subseteq \Pp_{i - 1}^Y$.
By choice of the separation~$(X_i, Y_i)$, we have $|\Pp_{i - 1}^Y| \geq |\Pp_{i - 1}|/4$.
By Condition~(iii) of the induction assumption we have $|\Pp_{i - 1}| \geq d - 8(i - 1)k$ and thus $|\Pp_{i - 1}^Y| \geq (d - 8(i - 1)k)/4$.
Since each path in $\Pp_{i - 1}^Y$ intersects~$Y_i$, Claim~\ref{cla:cutSize} shows that at most $8k$ paths in~$\Pp_{i - 1}^Y$ intersect~$X_i$.
Thus, the number of paths in $\Pp_i$ is at least $|\Pp_{i - 1}^Y| - 8k \geq (d - 8(i - 1)k)/4 - 8k$. Since $d\ge 8kq+24k+4$ by precondition of the lemma, we have
\[{1\over 4} (d-8k(i-1)) - 8k \ge {1\over 4} (d-8ki+8k -32k) \ge {1\over 4} (d-8ki-24k)
\ge 1.\]
Thus, indeed, there is a path $P \in \Pp_i$.
By the induction assumption, the path $P$ intersects at least $d-8k(i-1)$ paths of $\Qq_{i-1}$.
At most $8k$ of them intersect $X_i$, so $|\Qq_i|\ge d-8ki$.
In particular, there are paths in $\Qq_i$ avoiding $X_i$, and applying the previous argument symmetrically to one of them yields $|\Pp_i| \ge d-8ki$.
To conclude the proof of Condition (iii), observe that the same argument applies to every path in $\Pp_i$ and every path in $\Qq_i$, so the minimum degree of $I(\Pp_i,\Qq_i)$ is at least $d-8ki$.
We finish the proof of the lemma by showing that Conditions (i) and (iii) are in contradiction for some~$i \in [q]$.
Observe that these two conditions imply $d - 8ki \le |\Pp_i| \le ({3\over 4})^i|\Pp_0|$.
We show that $d - 8kq > ({3\over 4})^q|\Pp_0|$.
Since the conditions hold for $i = 0$, there is thus some smallest $i \in [q]$ for which $\Pp_i$ and $\Qq_i$ are well defined but the Conditions (i) and (iii) contradict each other.
Since $d\ge 8kq+24k+4$ by the precondition of the lemma, we have $d - 8kq \ge 24k + 4 > 24k$. By the definition of $q$, on the other hand, \[\left({3\over 4}\right)^q|\Pp_0| = \frac{|\Pp|}{\left({4 \over 3}\right)^{\lceil \log_{4 \over 3}(|\Pp|/24k)\rceil}} \leq \frac{|\Pp|}{\left({4 \over 3}\right)^{\log_{4 \over 3}(|\Pp|/24k)}} = 24k.\] Thus, indeed $d - 8kq > \left({3\over 4}\right)^q|\Pp_0|$, giving the desired contradiction.
\end{proof}
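To illustrate the quantitative behaviour of the degree bound, consider for example $k = 2$ and $|\Pp| = 192$, so that $|\Pp|/24k = 4$ and $q = \lceil\log_{4/3} 4\rceil = 5$. Then any $d \geq 130 > 8k\log_{4/3} 4 + 24k + 4 \approx 129.1$ satisfies the precondition, and the concluding contradiction indeed materializes:
\[
d - 8kq \ \geq\ 130 - 80 \ =\ 50 \ >\ 48 \ =\ 24k \ \geq\ \Bigl(\tfrac{3}{4}\Bigr)^{q} |\Pp_0| \ \approx\ 45.6.
\]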
\paragraph{Main proof of the dense case}
We are now ready to prove the main lemma of this section.
\begin{proof}[Proof of Lemma~\ref{lem:dense}]
Let $d = 327\,680\cdot a \cdot b \cdot \log_2(|\Ll|/b)$. Since
$I(\Ll, \Kk)$ is not $d$-degenerate, it contains an induced
subgraph~$I'$ of minimum degree larger than~$d$. Redefine $\Ll$ and
$\Kk$ to be the sublinkages of $\Ll$ and $\Kk$ contained in this
subgraph~$I'$, that is, $\Ll \df \Ll \cap V(I')$ and
$\Kk \df \Kk \cap V(I')$. Note that $|\Ll| > d$ and $|\Kk| > d$. Moreover, since the
size of $\Ll$ only decreases, it remains true that $d \geq 327\,680\cdot a \cdot b \cdot \log_2(|\Ll|/b)$, and
$\linkfrom{\Ll} \cup \linkto{\Ll} \cup \linkfrom{\Kk} \cup
\linkto{\Kk}$ remains well-linked.
Let $\Ll^\textsf{back}$ be
a linkage in~$D$ from $\linkto{\Ll}$ to $\linkfrom{\Ll}$ and let
$\Kk^\textsf{back}$ be a linkage in~$D$ from $\linkto{\Kk}$ to
$\linkfrom{\Kk}$. Note that $\Ll^\textsf{back}$ and $\Kk^\textsf{back}$
exist because
$\linkfrom{\Ll} \cup \linkto{\Ll} \cup \linkfrom{\Kk} \cup
\linkto{\Kk}$ is well linked.
We focus on $\ensuremath{\textsf{Aux}}(\Ll)$ and $\ensuremath{\textsf{Aux}}(\Kk)$.
Take backlinkage-induced orderings $(L_1, \ldots, L_{|\Ll|})$
of $\Ll$
and $(K_1, \ldots, K_{|\Kk|})$ of $\Kk$.
Apply Lemma~\ref{lem:disjointPairs} with $k = a$, $r = 640b\log_2(|\Ll|/b)$,
$G = I(\Ll, \Kk)$, $X = \{L_1, \ldots, L_{|\Ll|}\}$, and
$Y = \{K_1, \ldots, K_{|\Kk|}\}$, obtaining $a$ sets
$U_1, \ldots, U_a$ and $a$ sets
$W_1, \ldots, W_a$ with the corresponding properties. To see that Lemma~\ref{lem:disjointPairs} is applicable, observe that $I(\Ll, \Kk)$ has minimum degree at least $327\,680 \cdot a \cdot b \log_2(|\Ll|/b) = 2^9 \cdot 640b\log_2(|\Ll|/b) \cdot a = 2^9 \cdot r \cdot k$.
Observe for later
on that, for each $i \in [a]$, the intersection graph $I(U_i, W_i)$ of the two linkages $U_i$ and $W_i$ has average degree at least $640b\log_2(|\Ll|/b)$ by property 3.\ of Lemma~\ref{lem:disjointPairs}.
Now define,
for each $i \in [a]$, a graph $D_i$ as follows. Initially, take the
union of all paths in $U_i$ and $W_i$. Then, for each edge~$(L, L')$
of $\ensuremath{\textsf{Aux}}(\Ll)$ such that $L, L' \in U_i$, add to $D_i$ the unique
path $P \in \Ll^\textsf{back}$ that connects $L$ and $L'$, that is,
$\pathto{L} = \pathfrom{P}$ and $\pathto{P} = \pathfrom{L'}$.
Similarly, for each edge~$(K, K')$ of $\ensuremath{\textsf{Aux}}(\Kk)$ such that
$K, K' \in W_i$, add to $D_i$ the unique path $Q \in \Kk^\textsf{back}$ with
$\pathto{K} = \pathfrom{Q}$ and $\pathto{Q} = \pathfrom{K'}$. In
formulas:
\begin{multline*}
U'_i \ \df\ \{P \in \Ll^\textsf{back} \mid \exists (L, L') \in E(\ensuremath{\textsf{Aux}}(\Ll))
\colon \\
L, L' \in U_i \wedge \pathto{L} = \pathfrom{P}\wedge
\pathto{P} = \pathfrom{L'}\}
\end{multline*}
and
\begin{multline*}
W'_i \ \df\ \{Q \in \Kk^\textsf{back} \mid \exists (K, K') \in E(\ensuremath{\textsf{Aux}}(\Kk))
\colon \\
K, K' \in W_i \wedge \pathto{K} = \pathfrom{Q}\wedge
\pathto{Q} = \pathfrom{K'}\}.
\end{multline*}
We set
\[ D_i \ \df\ \bigcup(U_i \cup W_i \cup U'_i \cup W'_i).\]
We claim that $D_i$ satisfies the required properties. Clearly,
$D_i$ is a subgraph of~$D$, giving property~(i). To see
property~(ii), consider a
linkage~$\Pp \in \{\Ll, \Ll^\textsf{back}, \Kk, \Kk^\textsf{back}\}$. We claim that no two
subgraphs $D_i$, $D_j$ contain the same path of~$\Pp$. This claim
follows from property 2.\ of Lemma~\ref{lem:disjointPairs},
stating that $U_i \cap U_j = \emptyset$ and
$W_i \cap W_j = \emptyset$ and inspecting the definition of~$D_i$
and~$D_j$. Thus, $\{V(D_i) \mid i \in [a]\}$ is a partition of a
subset of the vertex set~$V(\Pp)$ of the paths in~$\Pp$. Thus, each
vertex $v \in V(D)$ occurs in at most four subgraphs~$D_i$, showing
property~(ii).
It remains to show property~(iii), the lower bound on the directed
treewidth of~$D_i$. We aim to modify~$D_i$, increasing the directed
treewidth by at most a constant, to obtain a graph~$D^{(2)}_i$ which
is the union of two pairs of dual half-integral linkages such that two linkages contained in distinct pairs intersect a lot. Then we can apply
Lemma~\ref{lem:high-degree-linkages-dtw}, giving a lower bound
on the directed treewidth of~$D^{(2)}_i$ which then implies a lower
bound on the directed treewidth of~$D_i$.
\newcommand{\zeroin}{\textsf{in}} We first modify~$D_i$ to obtain a
graph~$D^{(1)}_i$ which is the union of two pairs of dual linkages.
Recall the orderings $\vec{\Ll} \df (L_1, \ldots, L_{|\Ll|})$ and
$\vec{\Kk} \df (K_1, \ldots, K_{|\Kk|})$ on $\Ll$ and $\Kk$,
respectively, which we have defined above. By property~1.\ of
Lemma~\ref{lem:disjointPairs}, $U_i$ is a segment of~$\vec{\Ll}$ and
$W_i$ is a segment of~$\vec{\Kk}$. Hence, by the way we have defined
$\vec{\Ll}$, there are at most two
cycles~$C$ in $\ensuremath{\textsf{Aux}}(\Ll)$ which are neither contained in $U_i$ nor disjoint from $U_i$, that is,
$V(C) \setminus U_i \neq \emptyset$ and $V(C) \cap U_i \neq \emptyset$. Call such a cycle
\emph{broken}. Similarly, there are at most two cycles $C$ in
$\ensuremath{\textsf{Aux}}(\Kk)$ such that $V(C) \setminus W_i \neq \emptyset$ and $V(C) \cap W_i \neq \emptyset$.
Call
such a cycle \emph{broken} as well. For each broken cycle~$C$, do
the following operation on~$D_i$ to obtain~$D^{(1)}_i$. If $C$ is in
$\ensuremath{\textsf{Aux}}(\Ll)$, let $L^C_\textsf{out}$ be the vertex of outdegree zero in
the subgraph $\ensuremath{\textsf{Aux}}(\Ll)[V(C) \cap U_i]$ and let $L^C_\zeroin$
be the vertex of indegree zero. Add the directed
edge~$(\pathto{L^C_\textsf{out}}, \pathfrom{L^C_\zeroin})$ to~$D_i$.
Proceed analogously if $C$ is in $\ensuremath{\textsf{Aux}}(\Kk)$: Let $K^C_\textsf{out}$ be
the vertex of outdegree zero in the subgraph $\ensuremath{\textsf{Aux}}(\Kk)[V(C) \cap W_i]$
and let $K^C_\zeroin$ be the vertex of indegree zero,
and add the directed
edge~$(\pathto{K^C_\textsf{out}}, \pathfrom{K^C_\zeroin})$ to~$D_i$. In
this way, we add at most four edges to~$D_i$, obtaining~$D^{(1)}_i$.
Note that adding an edge increases the directed treewidth by at most
one\footnote{In the corresponding cops-and-robber game (see~\cite{JohnsonRST01}), we can always guard the new edge with
an additional cop.}, and hence
$\dtw(D^{(1)}_i) \leq \dtw(D_i) + 4$.
We claim that $D^{(1)}_i$ is the union of two pairs of dual
linkages. To see this, note first that $U_i$ and $W_i$ are linkages
in~$D^{(1)}_i$. Now consider
\[U^b_i \ \df\ U'_i \cup \{(\pathto{L^C_\textsf{out}}, \pathfrom{L^C_\zeroin}) \mid C
\text{ a broken cycle in }\ensuremath{\textsf{Aux}}(\Ll)\}\]
and
\[W^b_i \df W'_i \cup \{(\pathto{K^C_\textsf{out}},
\pathfrom{K^C_\zeroin}) \mid C \text{ a broken cycle in
}\ensuremath{\textsf{Aux}}(\Kk)\},\] wherein $L^C_\zeroin, L^C_\textsf{out}, K^C_\zeroin$,
and $K^C_\textsf{out}$ are defined as above. Clearly,
$D^{(1)}_i = \bigcup (U_i \cup W_i \cup U^b_i \cup W^b_i)$.
Moreover, both $U^b_i$ and $W^b_i$ are linkages because $U'_i$ and
$W'_i$ are linkages and because
$L^C_\zeroin, L^C_\textsf{out}, K^C_\zeroin$, and $K^C_\textsf{out}$ have
indegree or outdegree zero in $\ensuremath{\textsf{Aux}}(\Ll)[V(C)]$ or
$\ensuremath{\textsf{Aux}}(\Kk)[V(C)]$, respectively. Finally, by definition, $U_i$ and
$U^b_i$ are dual to each other and $W_i$ and $W^b_i$ are dual to
each other. Thus, $D^{(1)}_i$ is the union of two pairs of dual
linkages, as claimed.
In order to apply Lemma~\ref{lem:high-degree-linkages-dtw}, we need
a pair of linkages whose intersection graph has a large minimum
degree. So far, the linkages which define~$D^{(1)}_i$ guarantee only
large average degree (via property 3.\ of
Lemma~\ref{lem:disjointPairs}). We now derive a subgraph $D^{(2)}_i$ of $D^{(1)}_i$ such that $D^{(2)}_i$ is the union of
two pairs of dual half-integral linkages
$(\Pp, \Pp^\textsf{back}), (\Qq, \Qq^\textsf{back})$ and $I(\Pp, \Qq)$ has large minimum
degree.
To achieve this, recall
that the intersection graph $I(U_i, W_i)$ of the two linkages $U_i$,
$W_i$ in $D^{(1)}_i$ has average degree at least $640b\log_2(|\Ll|/b)$.
Hence, there is a subgraph~$I'$ of $I(U_i, W_i)$ with minimum degree
at least~$320b\log_2(|\Ll|/b)$.
Let $\Pp \subseteq U_i$ be the sublinkage of $U_i$
contained in $I'$, that is $\Pp = U_i \cap V(I')$. Similarly, let
$\Qq = W_i \cap V(I')$.
We define $\Pp^\textsf{back}$ to be the backlinkage induced by $\Pp$
on $(U_i, U^b_i)$ and $\Qq^\textsf{back}$ to be the backlinkage induced
by $\Qq$ on $(W_i,W^b_i)$. Note that $\Pp^\textsf{back}$ and $\Qq^\textsf{back}$
are half-integral and dual to $\Pp$ and $\Qq$, respectively.
Take now the subgraph~$D^{(2)}_i$ to be the union
$\bigcup(\Pp \cup \Pp^\textsf{back} \cup \Qq \cup \Qq^\textsf{back})$. Then
apply Lemma~\ref{lem:high-degree-linkages-dtw} to
$\Pp, \Pp^\textsf{back}, \Qq, \Qq^\textsf{back}$ with $k = b + 4$ and
$d = 320b\log_2(|\Ll|/b)$. To see that the preconditions of
Lemma~\ref{lem:high-degree-linkages-dtw} are satisfied, first recall
that the intersection graph $I(\Pp, \Qq)$ has minimum degree at
least $320b\log_2(|\Ll|/b)$. Furthermore,
\begin{multline*}
d = 320b\log_2\frac{|\Ll|}{b} \geq 200b \log_2\frac{|\Ll|}{b} + 120b + 4 \geq \frac{5 \cdot 40b}{2} \log_2\frac{|\Ll|}{b} + 120b + 4 \geq {}\\
\frac{8 \cdot 5b}{\log_2(4/3)}\log_2\frac{|\Ll|}{b} + 24(5b) + 4 \geq 8
\cdot (b + 4) \log_{4/3}\frac{|\Ll|}{24(b + 4)} + 24(b + 4) + 4 =\\
8k\log_{4/3}\frac{|\Ll|}{24k} + 24k + 4,
\end{multline*}
and thus indeed the preconditions of
Lemma~\ref{lem:high-degree-linkages-dtw} are satisfied. Thus, the
directed treewidth of $D^{(2)}_i$ is at least $b + 4$. Since
$D^{(2)}_i$ is a subgraph of $D^{(1)}_i$ and
$\dtw(D_i) \geq \dtw(D^{(1)}_i) - 4$, we have $\dtw(D_i) \geq b$, as
required.
\end{proof}
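As a sanity check of the chain of inequalities above, take $b = 1$ (so $k = 5$): the first step amounts to $320\log_2|\Ll| \geq 200\log_2|\Ll| + 124$, that is,
\[
\log_2 |\Ll| \ \geq\ \frac{124}{120} \ =\ \frac{31}{30},
\]
which certainly holds for the linkage sizes in question.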
\section{Wrapping up the proof of Theorem~\ref{thm:qp}}\label{sec:main}
\begin{proof}[Proof of Theorem~\ref{thm:qp}]
Let $G$ be a directed graph with $\dtw(G) \geq c \cdot a^6b^{8}\log^2(ab)$, where $c$ is a large constant whose value will follow from the reasoning below. First, we invoke Lemma~\ref{lem:path-system} with $\beta=2^{37}a^2b^3\log(ab)$ and $\alpha=8ab$ (here we assume that $c$ is sufficiently large so that the assumption of the lemma is satisfied). We obtain a set of vertex-disjoint paths $P_1,\ldots,P_{8ab}$ and sets $A_i,B_i\subseteq V(P_i)$ such that $A_i$ appears before $B_i$ on $P_i$, $|A_i|= |B_i|= 2^{37}a^2b^3\log(ab)$, and the set
$
\bigcup_{i=1}^{8ab} A_i\cup B_i
$
is well-linked.
Denote by $\Ll_{i,j}$ a linkage from $B_i$ to $A_j$; such a linkage exists by well-linkedness.
We split the $8ab$ paths $P_i$ into $a$ segments,
each consisting of $8b$ paths.
Formally, for every $\iota \in [a]$ we define $I_\iota = \{j~|~8(\iota-1)b < j \leq 8\iota b\}$.
Now we set $r = 64ab^2$ and create an auxiliary $r$-colored graph $H$, whose vertices will be paths of appropriately chosen linkages $\Ll_{i,j}$. More specifically, for every $\iota \in [a]$, and every $i,j \in I_\iota$, we introduce a vertex for every path in $\Ll_{i,j}$ and color it $(i,j)$. Two vertices of $H$ are adjacent if and only if their corresponding paths share a vertex in $G$. Note that for two linkages $\Ll_{i,j}$ and $\Ll_{i',j'}$, the graph $H[\Ll_{i,j} \cup \Ll_{i',j'}]$ is precisely the intersection graph $I(\Ll_{i,j},\Ll_{i',j'})$.
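The number of colour classes indeed matches $r$: each of the $a$ index sets $I_\iota$ contributes one class for every ordered pair $(i,j) \in I_\iota \times I_\iota$, so in total
\[
\sum_{\iota=1}^{a} |I_\iota|^2 \ =\ a \cdot (8b)^2 \ =\ 64ab^2 \ =\ r.
\]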
We set $d\df2^{27} ab\log(ab)$ and consider two cases:
\begin{enumerate}
\item[(i)] for all $i,j,i',j'$ the graph $I(\Ll_{i,j},\Ll_{i',j'})$ is $d$-degenerate.
\item[(ii)] there exist $i,j,i',j'$, for which the graph $I(\Ll_{i,j},\Ll_{i',j'})$ is not $d$-degenerate.
\end{enumerate}
The intuition behind case (i) is that, by degeneracy, every subgraph of $H$ contains a vertex whose path (in $G$) shares a vertex with at most $d$ of the paths corresponding to the remaining vertices.
\smallskip
{\bf Case (i)}~~We use Lemma~\ref{lem:degenerate} on $H$.
Graph $H$ has $64ab^2$ color classes such that for each $(i,j)\neq (i',j')$ the graph $H[\Ll_{i,j}\cup\Ll_{i',j'}]$ is $d$-degenerate.
Note that $|\Ll_{i,j}|=2^{37}a^2b^3\log(ab)\ge 4e(r-1)d$ is sufficiently large to satisfy the last assumption of the lemma.
We obtain an independent set $x_1,\ldots, x_{64ab^2}$ in $H$ that represents pairwise vertex-disjoint paths $L_{i,j}$ from $B_i$ to $A_j$ for all $\iota \in [a]$ and $i,j \in I_\iota$.
We also recall that $A_i$ and $B_i$ lie on $P_i$ and all $P_i$'s are pairwise disjoint.
Let $G_\iota$ consist of all paths $P_i$ for $i \in I_\iota$ and $L_{i,j}$ for $i,j \in I_\iota$.
By Lemma~\ref{lem:dtw-bound} for $k=8b$ we obtain $\dtw(G_\iota)\ge b$ while each vertex is in at most 2 such subgraphs.
Indeed, each vertex can appear only once on some $P_i$ and once on some $L_{i,j}$.
\smallskip
{\bf Case (ii)}~~The claim follows from Lemma~\ref{lem:dense}: its degeneracy assumption is satisfied because, since $|\Ll|=2^{37}a^2b^3\log(ab)$, we have
$d=2^{27} ab\log(ab)>2^{19}ab\log(2^{37}a^2b^2\log(ab))$.
\end{proof}
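Assuming, as elsewhere in these bounds, that the logarithms are base $2$, the inequality in Case (ii) can be spot-checked at $a = b = 2$:
\[
d \ =\ 2^{27} \cdot ab \cdot \log(ab) \ =\ 2^{30} \ >\ 2^{21} \cdot 42 \ =\ 2^{19} \cdot ab \cdot \log\bigl(2^{37} a^2 b^2 \log(ab)\bigr),
\]
and it persists for larger $a, b$, since $2^{8}\log(ab) \geq 37 + 2\log(ab) + \log\log(ab)$ for all $ab \geq 2$.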
\section{Improved bound for cycles: Proof of Theorem~\ref{thm:dtw-ep-all}}\label{sec:imp}
This section is devoted to the proof of Theorem~\ref{thm:dtw-ep-all}.
We follow the outline of Section~\ref{sec:main}, but circumvent the usage of Lemma~\ref{lem:path-system} to avoid the quadratic blow-up stemming from it.
The proofs of the cases $p=4$, $p=3$, and $p=2$ differ only in minor details. We first present the proof for the case $p=4$ in Section~\ref{ss:imp1},
abstracting the common parts of the proofs as independent lemmas, and then continue with the proof of the case $p=2$ in Section~\ref{ss:imp2}.
A simple mixture of the tricks used for the cases $p=4$ and $p=2$ yields the proof for the case $p=3$ and is discussed in Section~\ref{ss:imp3}.
\subsection{Case $p=4$}\label{ss:imp1}
The crucial replacement of Lemma~\ref{lem:path-system} is the following.
\begin{lemma}\label{lem:walk-system}
Let $G$ be a directed graph, $a, b, k \geq 1$ be integers, and let $D$ be a well-linked set in $G$ of size $4(a+k)b$.
If $G$ does not contain a family of $k$ cycles such that every vertex of $G$ is in at most two of the cycles, then
there exists a family $\Pp = \{P_1,P_2,\ldots,P_a\}$ of walks in $G$ and sets $A_i, B_i \subseteq V(P_i)$ for every $1 \leq i \leq a$ such that
\begin{enumerate}
\item $\Pp$ is of congestion $2$,
\item the sets $A_i$ and $B_i$ are of size $b$ each and are pairwise disjoint,
\item for every $1 \leq i \leq a$, all vertices of $A_i$ appear on $P_i$ before all vertices of $B_i$, and
\item $\bigcup_{i=1}^a A_i \cup B_i$ is well-linked in $G$.
\end{enumerate}
\end{lemma}
Lemma~\ref{lem:walk-system} differs from Lemma~\ref{lem:path-system} in a number of ways.
First, it avoids the quadratic blow-up in the size of the well-linked set (which is linearly lower bounded by directed treewidth by Lemma~\ref{lem:dtw2wl}).
Second, $\Pp$ is no longer a linkage but a family of walks of congestion $2$.
Third, there is another assumption that $G$ does not contain a large half-integral packing of cycles;
we do not know how to avoid this assumption and this assumption is the reason the improvement described here works only in the setting of Theorem~\ref{thm:dtw-ep-all},
not in the general setting of Theorem~\ref{thm:qp}.
\begin{proof}[Proof of Lemma~\ref{lem:walk-system}]
Partition $D$ into two sets $D_1$ and $D_2$ of size $2(a+k)b$ each.
By well-linkedness, there exists a linkage $\Ll$ from $D_1$ to $D_2$ and a linkage $\Ll^\textsf{back}$ from $D_2$ to $D_1$.
We focus on the auxiliary graph $\ensuremath{\textsf{Aux}}(\Ll)$ and a backlinkage-induced
order $\Ll = \{L_1,L_2,\ldots,L_{|\Ll|}\}$.
Note that $\ensuremath{\textsf{Aux}}(\Ll)$ has fewer than $k$ connected components, since the closed
walks in $G$ corresponding to the cycles of $\ensuremath{\textsf{Aux}}(\Ll)$ give rise to a half-integral
packing of cycles in $G$.
We say that an index~$i \in \{1, 2, \ldots, a+k\}$ is \emph{good} if all vertices $L_j$ for $(i-1)\cdot 2b < j \leq i \cdot 2b$ lie on the same cycle of $\ensuremath{\textsf{Aux}}(\Ll)$, and \emph{bad} otherwise.
Note that fewer than $k$ indices are bad, so at least $a+1$ indices are good. Let $I$ be a family of exactly $a$ good indices.
For every $i \in I$, we define $P_i$ to be the walk in $G$ that corresponds to the path $\{ L_j~|~(i-1) \cdot 2b < j \leq i \cdot 2b \}$ in $\ensuremath{\textsf{Aux}}(\Ll)$.
Furthermore, let $A_i = \{\pathfrom{L_j}~|~(i-1) \cdot 2b < j \leq i \cdot 2b - b\}$ and $B_i = \{\pathfrom{L_j}~|~i \cdot 2b - b < j \leq i \cdot 2b\}$.
Then clearly $\Pp = \{P_i~|~i \in I\}$ is of congestion $2$; the other required properties are straightforward to verify.
\end{proof}
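For completeness, the count of good indices works out as required: $\Ll$ consists of $|D_1| = 2(a+k)b$ paths, which the backlinkage-induced order splits into $a+k$ blocks of $2b$ consecutive paths each, and since fewer than $k$ indices are bad,
\[
\#\{\text{good indices}\} \ \geq\ (a+k) - (k-1) \ =\ a + 1 \ >\ a,
\]
so a family $I$ of exactly $a$ good indices indeed exists.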
With Lemma~\ref{lem:walk-system} in hand, we can closely follow the reasoning of Section~\ref{sec:main}.
We first formulate and prove two lemmas which we will reuse in the next section.
We start with the sparse scenario.
\begin{lemma}\label{lem:sparse-win}
Let $a,b,d,\alpha$ be positive integers with $a$ even and $b \geq 4e \cdot a\cdot d$, and let $G$ be a directed graph.
Let $\Pp=\{P_1,\ldots,P_{a}\}$ be a set of walks in $G$ of congestion $\alpha$ such that there exist pairwise disjoint sets $A_i,B_i \subseteq V(P_i)$, $i=1,2,\ldots,a$.
Furthermore, assume that each set $A_i$ and $B_i$ is of size $b$ and that,
for every $1 \leq i \leq a$, all vertices of $A_i$ appear on $P_i$ before all vertices of $B_i$.
Let $\mathcal{I} = \{(1,2), (2,1), (3,4), (4,3), \ldots, (a-1,a), (a,a-1)\}$.
For every $(i,j) \in \mathcal{I}$, let $\mathcal{L}_{i,j}$ be a linkage from $B_i$ to $A_j$.
If for every $(i,j),(i',j') \in \mathcal{I}$, $(i,j) \neq (i',j')$, the intersection graph $I(\Ll_{i,j}, \Ll_{i',j'})$ is $d$-degenerate,
then there exists a family of ${a\over 2}$ directed cycles of congestion $\alpha +1$.
\end{lemma}
\begin{proof}
Create an auxiliary $a$-partite graph $H$ with vertex sets of color classes equal to $\Ll_{i,j}$ for $(i,j) \in \mathcal{I}$.
Between $\Ll_{i,j}$ and $\Ll_{i',j'}$ put the graph $I(\Ll_{i,j},\Ll_{i',j'})$.
By Lemma~\ref{lem:degenerate} and our choice of $b$, there
exist paths $L_{i,j} \in \Ll_{i,j}$, one for each $(i,j) \in \mathcal{I}$,
that form an independent set in $H$.
By the construction of the graph $H$, the paths $L_{i,j}$ for $(i,j) \in \mathcal{I}$ are
pairwise vertex-disjoint.
Fix $\iota \in \{1, 2, \ldots, {a\over 2}\}$ and consider the union $U_\iota$ of $P_{2\iota-1}$, $P_{2\iota}$,
$L_{2\iota-1,2\iota}$, and $L_{2\iota,2\iota-1}$. Observe
that this union contains a closed walk: from the ending vertex of $L_{2\iota,2\iota-1}$
follow $P_{2\iota-1}$ to the starting vertex of $L_{2\iota-1,2\iota}$, then
follow $L_{2\iota-1,2\iota}$ to the end, then follow $P_{2\iota}$ to the starting
vertex of $L_{2\iota,2\iota-1}$, and follow this path to the end.
Thus, $U_\iota$ contains a cycle $C_\iota$.
Furthermore, since every vertex can appear at most $\alpha$ times on walks $P_i$ and at most once on paths $L_{i,j}$, every vertex can appear at most $\alpha +1$ times on the cycles $\{ C_\iota~|~1 \leq \iota \leq {a\over 2}\}$.
\end{proof}
For the core of the complementary (dense) situation, we derive the following lemma.
Consider backlinkage-induced order $\Ll = \{L_1,L_2,\ldots,L_{|\Ll|}\}$ for linkage $\Ll$ and the corresponding backlinkage $\Ll^\textsf{back}$.
We say that a walk (path) is an \emph{($\Ll$,$\Ll^\textsf{back}$)-interlaced walk (path) of size $q\ge 1$} if it starts at $\pathfrom{L_j}$ for some $L_j\in \Ll$ and then it has the following structure:
\[
L_{j},L_{j}^\textsf{back}, L_{j+1},L_{j+1}^\textsf{back},\ldots,L_{j+q-1}.
\]
We may omit the size when it only matters whether such a walk exists.
\begin{lemma}\label{lem:dense-win}
Let $\Ll$ and $\Kk$ be two linkages in a directed graph $G$.
Let $U_1,\ldots,U_k$ be $k$ walks such that the congestion of $(U_i)_{i=1}^k$ is $\alpha$, and let $\mathcal{U}_i$ be the family of paths of $\Ll$ that are subpaths of $U_i$.
Similarly, let $W_1,\ldots,W_k$ be $k$ walks such that the congestion of $(W_i)_{i=1}^k$ is $\beta$, and let $\mathcal{W}_i$ be the family of paths of $\Kk$ that are subpaths of $W_i$.
If for every $1 \le i \leq k$ the average degree of $I(\Ll,\Kk)[\mathcal{U}_i,\mathcal{W}_i]$ is at least 2, then there exists in $G$ a family of $k$ cycles of congestion $\alpha +\beta$.
\end{lemma}
\begin{proof}
Fix $i \in \{1,\ldots,k\}$.
Let $L_1,L_2,\ldots$ be the paths of $\mathcal{U}_i$ in the order of their appearance on $U_i$
and let $K_1,K_2,\ldots$ be the paths of $\mathcal{W}_i$ in the order of their appearance on $W_i$.
Since the average degree of $I(\Ll,\Kk)[\mathcal{U}_i,\mathcal{W}_i]$ is at least $2$, this graph is not a forest.
Consequently, there are indices
$s < t$ and
$u < v$ such that
$L_s K_v \in E(I(\Ll,\Kk)[\mathcal{U}_i,\mathcal{W}_i])$ and $L_t K_u \in E(I(\Ll,\Kk)[\mathcal{U}_i,\mathcal{W}_i])$.
Consider the following closed walk $Q_i$ in $G$:
starting from the intersection of $L_s$ and $K_v$, we follow $U_i$
up to the intersection with $K_u$. Then we follow $W_i$
up to the intersection with $L_s$, where we started the walk.
Let $Q_i'$ be any cycle inside the closed walk $Q_i$.
Thus we obtained $k$ cycles.
Observe that since we build the cycles using only vertices in $\bigcup_{i=1}^k (U_i\cup W_i)$, every vertex of $G$ is used at most $\alpha+\beta$ times.
\end{proof}
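The crossing pattern of indices used in the proof above can be located by brute force on a small intersection graph. The following sketch is purely illustrative (the function name and edge encoding are ours, not the paper's): it searches for indices $s<t$ and $u<v$ joined by the two required edges.

```python
from itertools import combinations

def find_crossing(edges):
    """Search a bipartite intersection graph for indices s < t and u < v
    such that both (L_s, K_v) and (L_t, K_u) are edges, as in the proof of
    the dense-win lemma.  `edges` is a set of pairs (i, j) meaning L_i K_j."""
    for e1, e2 in combinations(edges, 2):
        for (s, v), (t, u) in [(e1, e2), (e2, e1)]:
            if s < t and u < v:
                return (s, t, u, v)
    return None

# A 4-cycle L_1-K_1-L_2-K_2-L_1 is not a forest, so a crossing must exist:
print(find_crossing({(1, 1), (1, 2), (2, 1), (2, 2)}))  # -> (1, 2, 1, 2)
```

On a forest, e.g. the star $\{L_1K_1, L_2K_1\}$, no such crossing pair exists, which matches the average-degree condition in the lemma.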
\medskip
We conclude the proof of case $p=4$ in Theorem~\ref{thm:dtw-ep-all} by a combination of Lemmas~\ref{lem:walk-system},~\ref{lem:sparse-win}, and~\ref{lem:dense-win}.
Let $k$ be an integer and $G$ be a directed graph of $\dtw(G) = \Omega(k^3)$ and suppose, for a contradiction, that no family of $k$ cycles exists such that every vertex of $G$ is in at most four of the cycles.
Let
\begin{align*}
d & := 2^{10} \cdot k, & a & := 2k, & b & := \lceil 4ead \rceil = \Theta(k^2).
\end{align*}
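As a quick sanity check of this parameter arithmetic (an illustration of ours, not part of the proof), one can verify that $b \geq 4ead$, as required by Lemma~\ref{lem:sparse-win}, and that the well-linked set consumed by Lemma~\ref{lem:walk-system}, which is split into two halves of size $2(a+k)b$ each, has size $\Theta(k^3)$:

```python
import math

def params_p4(k):
    """Parameter choices of the p = 4 case: d, a, b, and the size of the
    well-linked set D used by the walk-system lemma (two halves of
    size 2(a+k)b each)."""
    d = 2 ** 10 * k
    a = 2 * k
    b = math.ceil(4 * math.e * a * d)   # b = Theta(k^2)
    return d, a, b, 4 * (a + k) * b     # |D| = Theta(k^3)

d, a, b, size_D = params_p4(8)
assert b >= 4 * math.e * a * d          # hypothesis of the sparse-win lemma
```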
By Lemma~\ref{lem:dtw2wl}, $G$ contains a well-linked set of size $\Omega(k^3)$.
We apply Lemma~\ref{lem:walk-system} to $G$ with parameters $a$ and $b$, obtaining
a family $\Pp = \{P_1,P_2,\ldots,P_a\}$ and sets $A_i$, $B_i$ of size $b$ each.
Let $\mathcal{I} = \{(1,2), (2,1), (3,4), (4,3), \ldots, (a-1,a), (a,a-1)\}$.
For every $(i,j) \in \mathcal{I}$, let $\mathcal{L}_{i,j}$ be a linkage from $B_i$ to $A_j$.
We consider two cases.
In the case where, for every $(i,j),(i',j') \in \mathcal{I}$ with $(i,j) \neq (i',j')$, the intersection graph $I(\Ll_{i,j}, \Ll_{i',j'})$ is $d$-degenerate, we get a contradiction by Lemma~\ref{lem:sparse-win}.
In the remaining case, fix two distinct $(i,j),(i',j') \in \mathcal{I}$ such that $I(\Ll_{i,j},\Ll_{i',j'})$ is not $d$-degenerate; then
there exist a linkage $\Ll \subseteq \Ll_{i,j}$ and a linkage
$\Kk \subseteq \Ll_{i',j'}$ such that $I(\Ll,\Kk)$ has minimum degree more than $d$.
Furthermore, since $\bigcup_{i=1}^a A_i \cup B_i$ is well-linked, there exist a linkage $\Ll^\textsf{back}$ from $B(\Ll)$ to $A(\Ll)$
and an analogous linkage $\Kk^\textsf{back}$ from $B(\Kk)$ to $A(\Kk)$.
We focus on the auxiliary graphs $\ensuremath{\textsf{Aux}}(\Ll)$ and $\ensuremath{\textsf{Aux}}(\Kk)$.
Let $\Ll = \{L_1,L_2,\ldots,L_{|\Ll|}\}$ and $\Kk = \{K_1,K_2,\ldots,K_{|\Kk|}\}$
be backlinkage-induced orders of $\Ll$ and $\Kk$.
Let $L_j^\textsf{back}$ be the path of $\Ll^\textsf{back}$ that starts at $\pathto{L_j}$
and similarly define $K_j^\textsf{back}$.
Since $G$ does not admit a quarter-integral packing of cycles of size $k$, we infer that $\ensuremath{\textsf{Aux}}(\Ll)$ and $\ensuremath{\textsf{Aux}}(\Kk)$ each have fewer than $k$ connected components.
We now apply Lemma~\ref{lem:disjointPairs} to $I(\Ll,\Kk)$ with the aforementioned backlinkage-induced
orders of $\Ll$ and $\Kk$, aiming at $3k$ sets
$U_1,\ldots,U_{3k}$ and $3k$ sets $W_1,\ldots,W_{3k}$ such that
$I(\Ll,\Kk)[U_i,W_i]$ has average degree at least $2$
for every $1 \leq i \leq 3k$.
An index $1 \leq i \leq 3k$ is bad if either $U_i$ is not contained in a single cycle of $\ensuremath{\textsf{Aux}}(\Ll)$
or $W_i$ is not contained in a single cycle of $\ensuremath{\textsf{Aux}}(\Kk)$.
By our orderings of $\Ll$ and $\Kk$, there are fewer than $2k$ bad indices.
Let $I \subseteq [3k]$ be a family of exactly $k$ indices that are not bad.
We can now use Lemma~\ref{lem:dense-win}.
For each $i\in I$, $U_i$ can be turned into an ($\Ll$,$\Ll^\textsf{back}$)-interlaced walk $U_i'$.
Similarly, each $W_i$ can be turned into a ($\Kk$,$\Kk^\textsf{back}$)-interlaced walk $W_i'$.
The congestion of $(U'_i)_{i \in I}$ is two, as it is composed of two linkages, and similarly for $(W'_i)_{i \in I}$.
Therefore we obtain a quarter-integral cycle packing of size $k$, a contradiction.
This finishes the proof of case $p=4$ in Theorem~\ref{thm:dtw-ep-all}.
\subsection{Case $p=2$}\label{ss:imp2}
First, we prove a lemma that serves as a key technique to lower the congestion.
\begin{lemma}[Untangling Lemma]\label{lem:untangle}
Let $G$ be a directed graph, let $q, k \geq 1$ be integers, and let $D_1,D_2$ be two vertex sets of size $q(2k-2)+1$ each.
Let $\Ll$ be a linkage from $D_1$ to $D_2$ of size $q(2k-2)+1$ in the graph $G$ and let $\Ll^\textsf{back}$ be a corresponding back-linkage, also of size $q(2k-2)+1$.
If $G$ does not admit a half-integral packing of $k$ cycles, then
$G$ contains an ($\Ll$,$\Ll^\textsf{back}$)-interlaced path of size $q$.
\end{lemma}
\begin{proof}
We iteratively define subgraphs $H_1,H_2,\ldots$ of $G$ using the following greedy process.
Let $\Ll = \{L_1,L_2,\ldots \}$ be the backlinkage-induced order of $\Ll$.
Fix $i \geq 1$ and assume that all $H_{i'}$ for $1 \leq i' < i$ have been defined.
Let $j$ be the smallest index such that $L_j$ was not used in the construction of $H_{i'}$ for any $i'$ with $1\le i'<i$.
The subgraph $H_i$ is defined as the following walk.
Starting in $\pathfrom{L_j}$, we follow $L_j$, the path of $\Ll^\textsf{back}$ from $\pathto{L_j}$ to $\pathfrom{L_{j+1}}$,
$L_{j+1}$, etc., until we reach either an end of a cycle of $\ensuremath{\textsf{Aux}}(\Ll)$ or a self-intersection of the walk.
In the latter case, let $H_i$ be the walk from $\pathfrom{L_j}$ up to and including the last arc leading to the self-intersection.
We measure the \emph{size of $H_i$} as the number of paths $L_{j'}$ for which we passed $\pathfrom{L_{j'}}$ in the construction.
Now, observe that $\mathcal{H} := \{H_1,H_2,\ldots\}$ is created using $\Ll$ and $\Ll^\textsf{back}$ only, so it has congestion $2$.
Furthermore, every $H_i$ whose greedy process ended because of a self-intersection contains a cycle.
Since $G$ does not contain a half-integral packing of $k$ cycles, $\ensuremath{\textsf{Aux}}(\Ll)$ has fewer than $k$ cycles, and thus for fewer than $k$ walks $H_i$ the greedy process ended because
of a self-intersection. Consequently, $|\mathcal H| \leq 2k-2$.
Hence, there exists $H_i\in \mathcal H$ of size at least $q+1$.
It follows that $H_i$ contains the desired ($\Ll$,$\Ll^\textsf{back}$)-interlaced path of size $q$.
\end{proof}
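The pigeonhole step at the end of this proof can be checked mechanically; the helper below (its name is ours, for illustration only) confirms that $q(2k-2)+1$ paths spread over at most $2k-2$ walks force a walk of size at least $q+1$.

```python
def untangle_bound(q, k):
    """Pigeonhole from the Untangling Lemma: |L| = q(2k-2)+1 paths are
    consumed by at most 2k-2 greedy walks, so the largest walk has size
    at least the ceiling of the average, i.e. q + 1.  Requires k >= 2."""
    paths = q * (2 * k - 2) + 1
    walks = 2 * k - 2
    return -(-paths // walks)   # ceiling division

print(untangle_bound(5, 3))     # -> 6, i.e. q + 1
```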
Second, we give an analog of Lemma~\ref{lem:walk-system} that serves as a replacement of Lemma~\ref{lem:path-system} in this section.
This time, we trade linear blow-up in the exponent for no congestion.
\begin{lemma}\label{lem:disj-walk-system}
Let $G$ be a directed graph, let $a, b, k \geq 1$ be integers, and let $D$ be a well-linked set in $G$ of size $2(ab(2k-2)+1)$.
If $G$ does not contain a family of $k$ cycles such that every vertex of $G$ is in at most two of the cycles, then
there exists a family $\Pp = \{P_1,P_2,\ldots,P_a\}$ of paths in $G$ and sets $A_i, B_i \subseteq V(P_i)$ for every $1 \leq i \leq a$ such that
\begin{enumerate}
\item the paths in $\Pp$ are mutually disjoint,
\item the sets $A_i$ and $B_i$ are of size $b$ each and are pairwise disjoint,
\item for every $1 \leq i \leq a$, all vertices of $A_i$ appear on $P_i$ before all vertices of $B_i$, and
\item $\bigcup_{i=1}^a A_i \cup B_i$ is well-linked in $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
We partition $D$ into two equal sets $D_1$ and $D_2$ of size $ab(2k-2)+1$ each.
By well-linkedness, there exists a linkage $\Ll$ from $D_1$ to $D_2$ and a backlinkage $\Ll^\textsf{back}$ from $D_2$ to $D_1$.
This gives us the backlinkage-induced order $\Ll = \{L_1,L_2,\ldots,L_{|\Ll|}\}$.
We immediately use Lemma~\ref{lem:untangle} with $q=ab$.
As $G$ does not contain a half-integral packing of $k$ cycles, we obtain an $(\Ll,\Ll^\textsf{back})$-interlaced path $P$
that contains at least $2ab$ vertices in $D=\{\pathfrom{L_j} \mid L_j\in\Ll\}\cup\{\pathto{L_j} \mid L_j\in\Ll\}$.
For every $j \in \{1, 2, \ldots, a\}$, we define $P_j$ to be the $j$-th subpath of $P$ containing exactly $2b$ consecutive vertices from the set $D$; we define $A_j$ to be the set of the first $b$
of these vertices and $B_j$ to be the set of the last $b$ of these vertices.
Then it is straightforward to verify that $\Pp$ satisfies the required properties.
\end{proof}
\medskip
We conclude the proof of case $p=2$ in Theorem~\ref{thm:dtw-ep-all} by combination of Lemmas~\ref{lem:untangle},~\ref{lem:disj-walk-system},~\ref{lem:sparse-win}, and~\ref{lem:dense-win}.
Let $k$ be an integer and $G$ be a directed graph of $\dtw(G) = \Omega(k^5)$ and suppose, for a contradiction, that no family of $k$ cycles exists such that every vertex is in at most two of the cycles.
Let
\begin{align*}
d & := 3 \cdot 2^{10}\cdot k, & a & := 2k, & q & := \lceil 4ead \rceil, & b & := 2(q(2k-2)+1) = \Theta(k^3).
\end{align*}
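Again purely as an illustration of ours (not part of the argument), the parameter arithmetic can be verified directly: $q = \Theta(k^2)$, hence $b = \Theta(k^3)$, and the well-linked set demanded by Lemma~\ref{lem:disj-walk-system} has size $2(ab(2k-2)+1) = \Theta(k^5)$.

```python
import math

def params_p2(k):
    """Parameter choices of the p = 2 case and the size of the well-linked
    set required by the disjoint-walk-system lemma."""
    d = 3 * 2 ** 10 * k
    a = 2 * k
    q = math.ceil(4 * math.e * a * d)          # q = Theta(k^2)
    b = 2 * (q * (2 * k - 2) + 1)              # b = Theta(k^3)
    return d, a, q, b, 2 * (a * b * (2 * k - 2) + 1)   # Theta(k^5)

d, a, q, b, size_D = params_p2(6)
assert q >= 4 * math.e * a * d                 # hypothesis of the sparse-win lemma
```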
By Lemma~\ref{lem:dtw2wl}, $G$ contains a well-linked set of size $\Omega(k^5)$.
We apply Lemma~\ref{lem:disj-walk-system} to $G$ with parameters $a$ and $b$, obtaining
a family $\Pp = \{P_1,P_2,\ldots,P_a\}$ and sets $A_i$, $B_i$ of size $b$ each.
Let $\mathcal{I} = \{(1,2), (2,1), (3,4), (4,3), \ldots, (a-1,a), (a,a-1)\}$.
For every $(i,j) \in \mathcal{I}$, let $\mathcal{L}_{i,j}$ be a linkage from $B_i$ to $A_j$ and let $\mathcal{L}^\textsf{back}_{i,j}$ be the corresponding backlinkage (which exists due to the well-linkedness of $\bigcup_{i=1}^a A_i \cup B_i$).
Now we will untangle all such linkages using Lemma~\ref{lem:untangle}.
We apply Lemma~\ref{lem:untangle} on each $(i,j)\in \mathcal{I}$ separately with the parameter $q$,
obtaining an $(\Ll_{i,j},\Ll_{i,j}^\textsf{back})$\nobreakdash-interlaced path $Q_{i,j}$ containing at least $q$ vertices in $B_i$ and at least $q$ vertices in $A_j$.
Let $\mathcal{Q}_{i,j}$ be the sublinkage of $\Ll_{i,j}$ consisting of $q$ paths contained in $Q_{i,j}$.
We consider two cases.
In the case where, for every $(i,j),(i',j') \in \mathcal{I}$ with $(i,j) \neq (i',j')$, the intersection graph $I(\mathcal{Q}_{i,j}, \mathcal{Q}_{i',j'})$ is $d$-degenerate, we get a contradiction by Lemma~\ref{lem:sparse-win}, as $\Pp$ has congestion one.
In the remaining case, fix two distinct $(i,j),(i',j') \in \mathcal{I}$ such that $I(\mathcal{Q}_{i,j},\mathcal{Q}_{i',j'})$ is not $d$-degenerate.
We have a linkage $\mathcal{Q}_1 \subseteq \mathcal{Q}_{i,j}$ and a linkage
$\mathcal{Q}_2 \subseteq \mathcal{Q}_{i',j'}$ such that $I(\mathcal{Q}_1,\mathcal{Q}_2)$ has minimum degree more than~$d$.
We now apply Lemma~\ref{lem:disjointPairs} to $I(\mathcal{Q}_1,\mathcal{Q}_2)$, aiming at $k$ sets $U_1,\ldots,U_k$ and $k$ sets $W_1,\ldots,W_k$ such that $I(\mathcal{Q}_1,\mathcal{Q}_2)[U_\iota,W_\iota]$ has average degree at least $2$ for every $\iota \in \{1, 2, \ldots, k\}$.
In the application of Lemma~\ref{lem:disjointPairs}, the paths in $\mathcal{Q}_1$ and $\mathcal{Q}_2$ are ordered as in $Q_{i,j}$ and $Q_{i',j'}$, respectively.
Hence, all paths in $U_\iota$ are contained in a subpath $U_{\iota}'$ of $Q_{i,j}$ and the paths $(U_\iota')_{\iota = 1}^k$ are vertex-disjoint.
Similarly,
all paths in $W_\iota$ are contained in a subpath $W_{\iota}'$ of $Q_{i',j'}$ and the paths $(W_\iota')_{\iota = 1}^k$ are vertex-disjoint.
We can now use Lemma~\ref{lem:dense-win} to get a contradiction. Thus case $p=2$ of Theorem~\ref{thm:dtw-ep-all} holds.
\subsection{Case $p=3$}\label{ss:imp3}
We conclude with a remark that we can combine both approaches.
If we use Lemma~\ref{lem:walk-system} instead of Lemma~\ref{lem:disj-walk-system} in the proof of case $p=2$ of Theorem~\ref{thm:dtw-ep-all}, we are guaranteed only $k$ one-third-integral cycles (as $\Pp$ has congestion $2$ instead of $1$ when Lemma~\ref{lem:sparse-win} is applied), but we save a factor of $k$ in the bound on the directed treewidth.
Hence, we obtain the statement of Theorem~\ref{thm:dtw-ep-all} for $p=3$.
\section{Conclusions}
We have shown that if one relaxes the disjointness constraint to half- or quarter-integral packing (i.e., every vertex used at most two or four times, respectively), then
the Erd\H{o}s-P\'{o}sa property in directed graphs admits a polynomial bound between
the cycle packing number and the feedback vertex set number.
The obtained bound for quarter-integral packing is smaller than the one for half-integral packing.
A natural question would be to decrease the dependency further, even at the cost of higher
congestion (but still a constant).
More precisely, we pose the following question: Does there exist a constant $c$ and a polynomial $p$ such that
for every integer $k$ if a directed graph $G$ does not contain a family of $k$ cycles
such that every vertex of $G$ is in at most $c$ of the cycles, then
the directed treewidth of $G$ is at most $k p(\log k)$?
One of the sources of polynomial blow-up in the proof of Theorem~\ref{thm:qp}
is the quadratic blow-up in Lemma~\ref{lem:path-system}.
Lemma~\ref{lem:path-system} is a direct corollary of another result
of~\cite{DBLP:journals/corr/KawarabayashiK14} that asserts that a directed graph $G$
of directed treewidth $\Omega(k^2)$ contains a path $P$ and a set $A \subseteq V(P)$
that is well-linked and of size $k$.
Is this quadratic blow-up necessary? Can we improve it, even at the cost of some constant
congestion in the path $P$ (i.e., allow $P$ to visit every vertex a constant number of times)?
We remark that the essence of the improvement from $\Oh(k^6 \log^2 k)$ (obtained by setting $b=2$ in Theorem~\ref{thm:qp}) to $\Oh(k^3)$ asserted by Theorem~\ref{thm:dtw-ep-all} for $p=4$ is to avoid the usage
of Lemma~\ref{lem:path-system} and to replace it with a simple well-linkedness trick.
However, this trick fails in the general setting of Theorem~\ref{thm:qp}.
\paragraph{Acknowledgments.} We thank Stephan Kreutzer (TU Berlin) for
interesting discussions on the topic and for pointing out
Lemma~\ref{lem:dtw-cp-fvs}.
\marginpar{\includegraphics[height=25px]{logo-eu}\hspace{.5cm}\includegraphics[height=25px]{logo-erc}}
This research is part of projects that have received funding from the
European Research Council (ERC) under the European Union's Horizon
2020 research and innovation programme Grant Agreements 648527 (Irene
Muzi) and 714704 (all authors).
Tom\'{a}\v{s} Masa\v{r}\'{i}k was also supported by student grant
number SVV–2017–260452 of Charles University, Prague, Czech Republic.
\bibliographystyle{abbrv}
% arXiv:1907.02637
\section{Introduction}
In the early '80s, the widespread use of the sampler revolutionized the way music is produced: besides hiring professional musicians, music producers have since been able to compose with sampled sounds. This has brought much flexibility for both drum and melody production, thanks to the various offline edition possibilities offered by such systems like pitch shifting, time stretching, looping and others.
Nowadays, many producers still rely on samplers for drums production, mainly due to the always-increasing amount of sample libraries available for download. This has helped music production become increasingly accessible, even to newcomers with little or no background in sound design. However, relying on samples also has some drawbacks. Indeed, producers now have to browse their vast collection of samples in order to find the "right sound". This process is often inefficient and time-consuming. Kick drum datasets are usually unorganized with, for instance, samples gathered in a single folder, regardless of whether they sound "bright" or "dark". As a result, many producers rely on only a limited selection of their favourite sounds, which can hamper creativity.
Hence, a method allowing a comfortable and rich exploration of sounds becomes an essential requirement in music production, especially for non-expert users. Numerous research efforts have been made in the domain of user experience in order to provide interfaces that enhance the fluidity of human-machine interactions. As an example, synthesizer interfaces now often feature "macro" controls that allow one to quickly tune a sound to one's liking.
Another approach to tackle this problem is the use of Music Information Retrieval (MIR) to deal more efficiently with vast libraries of audio samples. MIR is an approach based on feature extraction: by computing many audio features \cite{peeters2004large} over a dataset, one can define a perceptual similarity measure between sounds. Indeed, audio features are related to perceptual characteristics, and a distance over a combination of features is more relevant than a squared error between two waveforms. The combination of MIR with machine learning techniques appears natural in order to organize such audio libraries by allowing, for example, clustering or classification based on audio content. We can cite software such as AudioHelper's Samplism, Sononym and Algonaut's Atlas.
While such software only allows one to organize an existing database, we propose to use artificial intelligence to intuitively generate sounds, thus also tackling the problem of sound exploration. Only very recently, some machine learning models have been developed specifically for the problem of audio generation. These \textit{generative models} perform what we could define as \textit{synthesis by learning}. They rely on generative modelling, which allows performing audio synthesis by learning while tackling the question of intuitive parameter control \cite{esling2018generative,engel2017neural}.
\begin{figure*}[!ht]
\centering
\includegraphics[width =0.9\textwidth]{images/Full_workflow.pdf}
\caption{This diagram presents our end-to-end system for drum sounds synthesis. The generative model (1) learns how to reconstruct spectrograms from a parameters' space. Then, the second part of the system (2) is dedicated to spectrogram inversion, to generate some signal from a Mel spectrogram. Finally, the software interface (3) allows a user to interact with the model and to generate sound from the parameters' space.}
\label{fig:workflow}
\end{figure*}
\textit{Generative models} are a flourishing class of machine learning approaches whose purpose is to generate novel data based on the observation of existing examples \cite{bishop2014pattern}. The learning process consists of modelling the underlying (and unknown) probability distribution of the data based on samples.
Once the model is trained, it is then possible for a user to generate new samples at will. However, for the user to be active during the synthesis process and not merely browse the outputs of the system passively, we find it crucial that the system provide intuitive controls. To this end, we need a model that extracts a compact high-level representation of the data. Then, by providing these simple high-level controls to a user, the synthesis process can be guided by perceptual characteristics. A user would just have to explore a continuous and well-organized parameter space to synthesize an infinite variety of sounds.
\subsection{Our proposal}
\label{sec:proposal}
In this work, we describe a system that allows us to create a controllable audio synthesis space and use it to synthesize novel sounds in an intuitive manner. This system can be split into three components (Fig.~\ref{fig:workflow}):
\begin{itemize}
\item A Conditional Wasserstein Auto-Encoder (CWAE) which generates Mel-scaled spectrograms.
\item An extension of the Multi-Head Convolutional Neural Network (MCNN) which reconstructs signal from Mel-scaled spectrograms.
\item A Max4Live plugin allowing users to interact with the model in a music production environment.
\end{itemize}
In the remainder of this document, we first provide a state of the art on Wasserstein auto-encoders and MCNN. Then we describe our model and the data we used to train it. We discuss reconstruction and generation results. Finally, we showcase the associated plugin and explain how it could change the way drum tracks are produced.
\section{Related work}
\subsection{Generative models on audio waveforms}
A few systems based on generative models have been recently proposed to address the learning of latent spaces for audio data.
The Wavenet auto-encoder \cite{engel2017neural} combines Wavenet \cite{oord2016wavenet} with auto-encoders and uses dilated convolutions to learn waveforms of musical instruments. By conditioning the generation on the pitch, such a system is capable of synthesizing musical notes with various timbres. The WaveGAN \cite{donahue2018adversarial} uses Generative Adversarial Networks (GANs) to generate drum sounds or bird vocalizations by directly learning on waveform. However, the GAN approach provides little control over the generation because it is still difficult to structure their latent space.
\subsection{Generative models on spectral representations}
Other works have focused on generating sound as spectrograms, a time-frequency representation of sound. This visual representation of sound intensity through time allows us to treat sounds like images, but it has to be converted back to the signal domain to produce sound.
The work in \cite{esling2018generative} uses VAEs to learn a generative space where instrumental sounds are organized with respect to their timbre. However, because the model is trained on spectral frames, it lacks temporal modeling. This hampers the capacity of the model to let users easily generate evolving, structured temporal sequences such as drum sounds.
The approach introduced in \cite{donahue2018adversarial} takes these temporal dependencies into account by proposing SpecGAN, a generative model that uses GANs to generate spectrograms as if they were images.
\subsection{Spectrogram inversion}
Working with neural networks often forces us to discard the phase information of a spectrogram. Therefore, one cannot use the inverse Fourier transform to retrieve the signal it originates from. With classic STFT, a common workaround is to use the Griffin-Lim Algorithm (GLA) \cite{griffin1984signal} which allows to estimate the missing phase information.
Also, the Multi-Head Convolutional Neural Network (MCNN) \cite{arik2019fast} is a model that inverts STFTs using neural networks.
However, STFT is not the best transform for our purpose. Indeed, Mel-scaled spectrograms are known to be more suitable for training convolutional neural networks \cite{huzaifah2017comparison}. Mel-scaled spectrograms are computed with filters based on the Mel scale, a perceptual frequency scale that tries to mimic the human perception of pitches.
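For concreteness, one standard realization of the Mel scale (the paper does not pin down a specific variant, so the constants below are one common choice, not necessarily the one used here) is $m = 2595\log_{10}(1+f/700)$:

```python
import math

def hz_to_mel(f):
    """One common Mel-scale formula: maps frequency in Hz to Mels so that
    equal Mel steps approximate equal perceived pitch steps."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping, from Mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# By construction, ~1000 Hz maps to ~1000 Mel, and low frequencies are
# resolved more finely than high ones:
print(round(hz_to_mel(1000.0)))   # -> 1000
```

The filters of a Mel spectrogram are spaced uniformly on this scale, hence densely at low frequencies and sparsely at high ones.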
Despite being better suited for training, Mel-scaled spectrograms introduce a problem: they are not invertible, and GLA cannot be used.
Therefore, some deep learning based models have been developed in order to estimate signal from non-invertible spectrograms. In \cite{prenger2018waveglow}, the authors present WaveGlow, a flow-based network capable of generating high quality speech from Mel spectrograms. Also, in \cite{huang2018timbretron}, the authors use a conditioned Wavenet to estimate signal from Constant-Q Transforms, another non-invertible transform.
\section{Proposed model}
Our model is composed of two components: a generative model on spectrograms, whose role is to learn a latent space from our dataset and to generate meaningful spectrograms from this space, and a spectrogram inversion model, whose role is to reconstruct waveforms from our generated spectrograms.
\subsection{Preliminaries on variational autoencoders}
To formalize our problem, we rely on a set of data $\{\mathbf{x}_{n}\}_{n\in[1,N]}$ lying in a high-dimensional space $\mathbf{x}_i\in\mathbb{R}^{d_{x}}$. We assume that these examples follow an underlying probability distribution $p\left(\mathbf{x}\right)$ that is unknown.
Our goal is to train a generative model able to sample from this distribution.
We consider a parametrized \textit{latent variable model}
\begin{equation*}
p_\theta(\mathbf{x}, \mathbf{z}) = p_\theta(\mathbf{x} | \mathbf{z})\pi(\mathbf{z}).
\end{equation*}
by introducing latent variables $\mathbf{z}\in\mathbb{R}^{d_{z}}$ lying in a space of smaller dimensionality than $\mathbf{x}$ ($d_{z} \ll d_{x}$) and distributed according to the prior $\pi(\mathbf{z})$. We are interested in finding the parameter $\theta$ that maximizes the log-likelihood $\sum_i \log p_\theta(\mathbf{x}_i)$ of the dataset. However, for usual choices of the conditional probability distribution $p_\theta(\mathbf{x|z})$ (typically parametrized by a deep neural network), this quantity is intractable.
The variational autoencoder (VAE) \cite{Kingma2013Auto-EncodingBayes} is a model that introduces a variational approximation $q_\phi(\mathbf{z|x})$ to the intractable posterior $p_\theta(\mathbf{z|x})$ (the approximate posterior $q_\phi(\mathbf{z|x})$ is often chosen as a parametrized family of diagonal Gaussian distributions). The network $q_\phi(\mathbf{z|x})$ is called the \emph{encoder}, whose aim is to produce latent codes given $\mathbf{x}$, while the network $p_\theta(\mathbf{x|z})$ is called the \emph{decoder}, which tries to reconstruct $\mathbf{x}$ given a latent code $\mathbf{z}$.
The introduction of the variational approximation of the posterior allows us to obtain the following lower bound $\mathcal{L}(\boldsymbol{\theta, \phi})$ (called ELBO for Evidence Lower BOund) over the intractable likelihood:
\begin{multline}
\label{eq:param-obj}
\mathcal{L}(\boldsymbol{\theta, \phi}) = \mathbb{E}_{\mathbf{x} \sim p(\mathbf{x})} \big[ \underbrace{\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z|x})} \big[ \log{ p_\theta (\mathbf{x|z}) } \big]}_{\text{reconstruction}} \\
- \underbrace{D_\mathrm{KL} \big[ q_\phi(\mathbf{z|x}) \parallel \pi(\mathbf{z}) \big]}_{\text{regularization}} \big],
\end{multline}
where $D_\mathrm{KL}$ denotes the Kullback-Leibler divergence \cite{cover2012elements}.
\begin{itemize}
\item The first term $\mathbb{E}_{\mathbf{z} \sim q_\phi(\mathbf{z|x})} \big[ \log{ p_\theta (\mathbf{x|z}) } \big]$ is the likelihood of the data $\mathbf{x}$ generated from latent variables $\mathbf{z} \sim q_\phi(\mathbf{z|x})$ drawn from the approximate posterior. Maximizing this quantity can be seen as minimizing a \textit{reconstruction error}.
\item The second term $D_\mathrm{KL} \big[ q_\phi(\mathbf{z|x}) \parallel \pi(\mathbf{z}) \big]$ is the divergence between $q_\phi(\mathbf{z|x})$ and $\pi(\mathbf{z})$ and can be interpreted as a regularization term.
\end{itemize}
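For the usual choice of a diagonal-Gaussian $q_\phi(\mathbf{z|x})$ and a standard normal prior, both terms are easy to compute in closed form. The numpy sketch below is our illustration (with a squared-error stand-in for the Gaussian log-likelihood), not the paper's code:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    the regularization term of the ELBO for a diagonal Gaussian encoder."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def elbo(x, x_hat, mu, log_var):
    """ELBO up to constants: negative squared reconstruction error
    (standing in for the log-likelihood) minus the KL term."""
    recon = -np.sum((x - x_hat) ** 2)
    return recon - gaussian_kl(mu, log_var)

# The KL term vanishes exactly when the posterior equals the prior:
print(gaussian_kl(np.zeros(4), np.zeros(4)))   # -> 0.0
```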
In \cite{sohn2015learning}, the authors add a conditioning mechanism to the original VAE which consists in conditioning all three networks $p_\theta(x|z)$, $q_\phi(z|x)$ and $\pi(z)$ on some metadata $m$ (in most cases, the prior $\pi(z)$ does not depend on $m$).
However, a known problem of VAEs is that they tend to generate blurry samples and reconstructions \cite{chen2016variational}. This becomes a major hindrance in the context of spectrogram reconstructions.
Fortunately, this problem can be overcome by using Wasserstein Auto-Encoders (WAEs) instead of VAEs. The main difference consists in replacing the $D_\mathrm{KL}$ term in (\ref{eq:param-obj}) by another divergence between the prior $\pi$ and the \textit{aggregated posterior} $q_Z(\mathbf{z}):= \ee{x\sim p_X}{q(\mathbf{z|x})}$.
In particular, the MMD-WAE considers a Maximum Mean Discrepancy (MMD) \cite{berlinet2011reproducing} distance defined as follows:
\begin{equation}
\label{eq:mmdk}
\mathrm{MMD}_k^2(p, q) = \bigl\| \int_{Z} k(z, \cdot) p(z)dz - \int_{Z} k(z, \cdot) q(z)dz\bigr\|^2_{\mathcal{H}_k},
\end{equation}
where $k: \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ is a positive-definite reproducing kernel and $\mathcal{H}_k$ is the associated Reproducing Kernel Hilbert Space (RKHS) \cite{berlinet2011reproducing}.
MMD is known to perform well when matching high-dimensional standard normal distributions \cite{tolstikhin2017wasserstein,gretton2012kernel}.
Since the MMD distance is not available in closed form, we use the following unbiased U-statistic estimator \cite{gretton2012kernel} for a batch size $n$ and a kernel $k$:
\begin{multline}
\label{eq:discrete-MMD}
\mathrm{MMD}_{k, n}^2(\pi, q_z) := \frac{1}{n(n-1)}\sum_{l \neq j} k(z_l, z_j) \\
+\frac{1}{n(n-1)}\sum_{l \neq j} k(\tilde{z}_l, \tilde{z}_j)
- \frac{2}{n^2}\sum_{l,j}k(z_l,\tilde{z}_j),
\end{multline}
with $\tilde{z} := \{\tilde{z}_1, \dots, \tilde{z}_n\}$ where $\tilde{z_i} \sim \pi$ and $z := \{z_1, \dots, z_n\}$ where $z_i \sim q_z$.
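The estimator in~(\ref{eq:discrete-MMD}) is straightforward to implement. The sketch below is ours, using an inverse multi-quadratics kernel in the spirit of the WAE paper; the bandwidth `c` is an arbitrary illustrative choice:

```python
import numpy as np

def imq_kernel(x, y, c=2.0):
    """Inverse multi-quadratics kernel c / (c + ||x - y||^2), computed
    between all row pairs of x and y; c is an illustrative bandwidth."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return c / (c + d2)

def mmd2_unbiased(z, z_tilde, kernel=imq_kernel):
    """Unbiased U-statistic estimator of MMD^2 between an encoded batch z
    (~ q_Z) and a prior batch z_tilde (~ pi), as in the estimator above."""
    n = z.shape[0]
    off = 1.0 - np.eye(n)                      # drop the diagonal l = j
    k_zz = (kernel(z, z) * off).sum() / (n * (n - 1))
    k_tt = (kernel(z_tilde, z_tilde) * off).sum() / (n * (n - 1))
    k_zt = kernel(z, z_tilde).sum() / n ** 2
    return k_zz + k_tt - 2.0 * k_zt

rng = np.random.default_rng(0)
near = mmd2_unbiased(rng.normal(size=(64, 2)), rng.normal(size=(64, 2)))
far = mmd2_unbiased(rng.normal(size=(64, 2)), rng.normal(size=(64, 2)) + 5.0)
assert near < far    # matched distributions score lower than shifted ones
```

Note that the unbiased estimator can be slightly negative for matched distributions; this is expected behaviour of U-statistics.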
\subsection{The Conditional WAE}
We now introduce a Conditional WAE (CWAE) architecture so that we can generate spectrograms depending on additional metadata such as the category of the original sound (e.g. kick drum, snare, clap, etc.).
Our encoder is defined as a Convolutional Neural Network (CNN) with $l$ layers of processing. Each layer is a 2-dimensional convolution followed by conditional batch normalization \cite{perez2017learning,perez2018film} and a ReLU activation. This CNN block is followed by Fully-Connected (FC) layers that map the activations of the convolutional layers to a vector of size $d_{z}$, the dimensionality of the latent space. The decoder network is defined as a mirror of the encoder, so that the two have similar capacity: we move the FC block before the convolutional one and replace each convolution with a convolution-transpose operation. We also slightly adjust the convolution parameters so that the output size matches that of the input.
Our convolutional blocks are made of 3 layers each, with a kernel size of (11,5), a stride of (3,2) and a padding of (5,2). Our FC blocks are made of 3 layers of sizes 1024, 512 and $d_{z} = 64$, so our latent space has 64 dimensions.
In the case of WAEs, the MMD is computed between the prior $\pi$ and the aggregated posterior $q_Z(\mathbf{z}):= \ee{\mathbf{x}\sim p_X}{q(\mathbf{z|x})}$. As a result, the latent spaces obtained with WAEs are often close to Gaussian, which makes them easy to sample. Here, the conditioning mechanism implies that we use a separate Gaussian prior $\pi_c = \mathcal{N}(0,1)$ for each class $c$, so that every class can be sampled as a Gaussian. Indeed, computing a single MMD loss over all classes would force the global aggregated posterior to match the Gaussian prior, and thus restrict the freedom of the per-class latent positions. Therefore, we compute a per-class MMD to backpropagate on.
Let us formalize this by decomposing our dataset $\mathbb{D}$ into $\mathbb{C}$ subsets $\mathbb{D}_c$, $1\le c \le \mathbb{C}$, each containing all elements from a single class. We define $q^c_z(\mathbf{z}) := \ee{x\in \mathbb{D}_c}{q(\mathbf{z|x},m=c)}$. Our regularizer is then computed as follows:
\begin{equation}
\label{eq:mmdkc}
\mathcal{D}_Z(\pi, q_z) = \frac{1}{\mathbb{C}}\sum_{c=1}^{\mathbb{C}} \mathrm{MMD}^2_{k,n}(\pi_c, q_z^c).
\end{equation}
Finally, our loss function is computed as:
\begin{equation}
\mathcal{L}(\boldsymbol{\theta, \phi}) = \sum^n_{i=1} \mathrm{MSE}(x_i, \hat{x_i}) + \beta \mathcal{D}_Z(\pi, q_z),
\end{equation}
where $\beta=10$ and $k$ is the \textit{multi-quadratics kernel} as for CelebA in \cite{tolstikhin2017wasserstein}.
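A minimal sketch of this per-class regularizer: each class's posterior samples are compared against fresh draws from its standard-normal prior, and the per-class MMDs are averaged. The simple (biased) estimator and the kernel constant below are illustrative:

```python
import numpy as np

def mmd2(z, z_tilde, c=2.0):
    """Simple (biased) squared-MMD with an inverse-multiquadratics kernel."""
    def gram(x, y):
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return c / (c + d2)
    return (gram(z, z).mean() + gram(z_tilde, z_tilde).mean()
            - 2.0 * gram(z, z_tilde).mean())

def class_conditional_regularizer(z, labels, n_classes, rng):
    """D_Z: average per-class MMD between q_z^c and its N(0, 1) prior."""
    total = 0.0
    for c_idx in range(n_classes):
        z_c = z[labels == c_idx]
        prior = rng.normal(size=z_c.shape)  # samples from pi_c = N(0, 1)
        total += mmd2(z_c, prior)
    return total / n_classes
```

Latent codes that already follow the per-class priors yield a much smaller penalty than shifted ones.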
\subsection{MCNN inversion}
\begin{figure}[t]
\centering
\includegraphics[width =0.5\textwidth]{model/mcnn.pdf}
\caption{The MCNN for spectrogram inversion. Its multiple heads estimate waveforms that are summed to produce the final waveform. Finally, the loss is computed between the resulting spectrogram and the original one.}
\label{fig:MCNN}
\end{figure}
To invert our Mel-spectrograms back to the signal domain, we use a modified version of the original MCNN. In this section, we first review the original MCNN before detailing how we adapted it to handle Mel-spectrograms of drum samples.
MCNN is composed of multiple heads that process STFTs (Fig.~\ref{fig:MCNN}). Each head is composed of $L$ processing layers combining 1D transposed convolutions and Exponential Linear Units (ELUs). The convolution layers are defined by a set of parameters $(f,s,c)$: respectively the filter width, the stride and the number of output channels. We multiply the output of every head by a trainable scalar $w_i$ to weight these outputs, and we compute the final waveform as their sum. Lastly, we scale the waveform with a non-linearity (scaled softsign). The model is trained to estimate a waveform whose spectrogram matches the original one. For more implementation details, we refer the interested reader to the original article.
We chose this model for three main reasons. First, it performs a fast (300x real-time) and precise estimation of a signal given a spectrogram. Second, it can deal with non-invertible transforms derived from the STFT, such as the Mel-STFT. Finally, its feed-forward architecture takes advantage of GPUs, unlike iterative or auto-regressive models.
In our implementation, we kept most of the parameters suggested in \cite{arik2019fast}. We use a MCNN with 8 heads of $L=8$ layers each where, for each layer $l_i$, $1 \le i \le L$, we have $(f_i,s_i) = (13, 2)$. However, because we have padded our signals with zeros to standardize their length, two problems appear.
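A quick sanity check of these parameters: with no padding, a stack of $L=8$ stride-2 transposed convolutions upsamples its input by a factor of $2^8 = 256$, which matches the hop size of 256 used for our spectrograms. The output-length formula below follows the usual transposed-convolution convention; the exact padding/trimming strategy of the implementation is not reproduced here:

```python
def transposed_conv1d_out_len(l_in, kernel, stride, padding=0):
    """Output length of a 1D transposed convolution (standard convention)."""
    return (l_in - 1) * stride - 2 * padding + kernel

def head_output_len(n_frames, n_layers=8, kernel=13, stride=2):
    """Length of one MCNN head's output for an input of n_frames STFT frames."""
    l = n_frames
    for _ in range(n_layers):
        l = transposed_conv1d_out_len(l, kernel, stride)
    return l  # = 256 * n_frames + 2805 for these parameters
```

So each head maps one spectrogram frame to roughly one hop of audio, plus a margin due to the kernel width — the leakage discussed next.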
First, we observed that the part of the spectrogram corresponding to the padding (made of zeros) was not well reconstructed when the convolution layers include biases. Without biases, zeros stay zeros throughout the kernel multiplications. Therefore, we removed all biases.
Then, we observed a leakage phenomenon: because the convolution filters are quite large (length 13), the reconstructed waveform had more non-zero values than the original one.
The loss is therefore lower-bounded by this effect. To tackle this problem, we apply a mask to the final output of our model, aiming at correcting this leakage. Thus, the number of output channels for layer $i$ is:
\[
c_i =
\begin{cases}
2^{L-i} & \text{if } 2 \le i \le L \\
2 & \text{if } i = 1.
\end{cases}
\]
The output of head $h$ is thus a pair of vectors $(s_h, m_h)$. We estimate the mask $\hat{M}$ as follows:
\begin{equation}
\hat{M} = \sigma \left (\sum_{h=1}^{8} m_h \right).
\end{equation}
The final output waveform $\hat{s}$ is computed as:
\begin{align}
\hat{s}^* & = \sum_{h=1}^{8} w_h \, s_h, \\
\hat{s} & = \hat{s}^* \times \hat{M}.
\end{align}
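The head-combination step above can be sketched as follows (NumPy, with illustrative shapes; the scaled-softsign output non-linearity is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine_heads(s_heads, m_heads, w):
    """Combine per-head (waveform, mask-logit) pairs into the final waveform.

    s_heads, m_heads: (H, T) arrays of per-head waveforms and mask logits
    w:                (H,) trainable per-head scalar weights
    """
    s_star = (w[:, None] * s_heads).sum(axis=0)  # weighted sum of head outputs
    mask = sigmoid(m_heads.sum(axis=0))          # soft mask M-hat
    return s_star * mask, mask
```

With strongly negative logits over the padded tail, the mask drives the leaked samples to (near) zero, which is what binarizing it at generation time enforces exactly.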
To train the mask, we use supervised training and introduce a loss term between the original mask $M$ and the estimated one $\hat{M}$, that we name \textit{mask loss}:
\begin{equation}
\label{eq:ML_eq}
\textrm{L}_{mask} (M,\hat{M}) =
\textrm{BCE}(M,\hat{M}).
\end{equation}
At generation time, the mask is binarized. This solution effectively removes the tail artifacts introduced by the convolutions.
A second change is that we now train the MCNN on Mel-scaled spectrograms rather than on STFTs, whereas the original losses were computed on STFTs. To turn an STFT into a Mel-scaled spectrogram, we compute a filterbank matrix $F$ that combines the 2048 FFT bins into 512 Mel-frequency bins. We then multiply the STFT by this matrix to retrieve a Mel-scaled spectrogram:
\begin{equation}
\label{eq:stft2mel}
\textrm{Mel} = \textrm{STFT} \times F.
\end{equation}
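A minimal sketch of the conversion in Eq.~\ref{eq:stft2mel}. The paper only specifies the bin counts, so the triangular, HTK-style construction of $F$ below is an assumption (a library such as librosa provides equivalent filterbanks):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=22050, n_fft=2048, n_mels=512):
    """Triangular mel filterbank F of shape (n_fft//2 + 1, n_mels)."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    F = np.zeros((n_fft // 2 + 1, n_mels))
    for m in range(1, n_mels + 1):
        lo, ctr, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, ctr):       # rising slope of triangle m
            F[k, m - 1] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):       # falling slope of triangle m
            F[k, m - 1] = (hi - k) / max(hi - ctr, 1)
    return F
```

With $\textrm{STFT}$ of shape (frames, 1025) for a 2048-point FFT, `Mel = np.abs(stft) @ F` reproduces Eq.~\ref{eq:stft2mel}.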
Therefore, we can simply convert all STFTs to Mel-scaled spectrograms before the loss computation. This does not affect the training procedure: back-propagation remains possible since this conversion operation is differentiable.
In addition, we have modified the loss function. When training the original model on our data, we noticed artifacts that we identified as ``checkerboard artifacts'', which are known to appear when using transposed convolutions \cite{odena2016deconvolution}. We tried known workarounds such as NN-Resize Convolutions \cite{aitken2017checkerboard}, but they did not yield better results. We empirically found that, in our particular case, removing the phase-related loss terms helped reduce these artifacts.
Therefore, we removed from \cite{arik2019fast} the instantaneous frequency loss and the weighted phase loss terms while keeping the Spectral Convergence (SC) term:
\begin{equation}
\label{eq:SC_eq}
\textrm{SC} (s,\hat{s}) =
\frac{\| |\textrm{MEL}(s)| - |\textrm{MEL}(\hat{s})| \|_{F}}{\||\textrm{MEL}(s)|\|_{F}},
\end{equation}
where $\|\cdot\|_{F}$ is the Frobenius norm over time and frequency, and the Log-scale MEL-magnitude loss ($\textrm{SC}_{log}$):
\begin{equation}
\label{eq:log_MEL_eq}
\textrm{SC}_{log} (s,\hat{s}) =
\frac{\|\log (|\textrm{MEL}(s)| + \epsilon) -
\log(|\textrm{MEL}(\hat{s})| + \epsilon)\|_1}{\|\log(|\textrm{MEL}(s)| + \epsilon)\|_1} ,
\end{equation}
where $\|\cdot\|_1$ is the $L^1$ norm and $\epsilon$ is a small number.
\vspace{0.01\textwidth}
Finally, our global loss term is:
\begin{equation}
\label{eq:totalloss}
L= \alpha \textrm{SC}(s,\hat{s})+ \beta \textrm{SC}_{log}(s,\hat{s})+ \gamma\textrm{L}_{mask}(M,\hat{M}),
\end{equation}
where $\alpha,\beta$ and $\gamma$ are constants used for weighting loss terms. In our experiments, we set $(\alpha, \beta, \gamma) = (3,10,1) $, which works well in practice.
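The loss terms above can be sketched in NumPy as follows (the BCE implementation of the mask loss and the value of $\epsilon$ are illustrative):

```python
import numpy as np

def spectral_convergence(mel_true, mel_pred):
    """SC term: relative Frobenius-norm error between magnitude spectrograms."""
    num = np.linalg.norm(np.abs(mel_true) - np.abs(mel_pred))
    return num / np.linalg.norm(np.abs(mel_true))

def log_mel_loss(mel_true, mel_pred, eps=1e-6):
    """SC_log term: normalized L1 distance between log-magnitude spectrograms."""
    lt = np.log(np.abs(mel_true) + eps)
    lp = np.log(np.abs(mel_pred) + eps)
    return np.abs(lt - lp).sum() / np.abs(lt).sum()

def bce(m_true, m_pred, eps=1e-7):
    """Binary cross-entropy between the true and estimated masks."""
    p = np.clip(m_pred, eps, 1.0 - eps)
    return -(m_true * np.log(p) + (1.0 - m_true) * np.log(1.0 - p)).mean()

def total_loss(mel_t, mel_p, mask_t, mask_p, alpha=3.0, beta=10.0, gamma=1.0):
    """Weighted sum of Eq. (totalloss), with (alpha, beta, gamma) = (3, 10, 1)."""
    return (alpha * spectral_convergence(mel_t, mel_p)
            + beta * log_mel_loss(mel_t, mel_p)
            + gamma * bce(mask_t, mask_p))
```

A perfect reconstruction drives the first two terms to zero, leaving only the residual mask BCE.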
\section{Experiments}
\subsection{Dataset}
\label{sec:dataset}
We built a dataset of drum samples coming from various sample packs that we bought (Vengeance sample packs, among others). Overall, we collected more than 40,000 samples across 11 drum categories. All sounds are 16-bit PCM WAV audio files sampled at 22050 Hz. Sounds longer than 1 second were removed in order to obtain a homogeneous set of audio samples.
After this preprocessing, the final dataset contains 11 balanced categories (kicks, claps, snares, open and closed hi-hats, tambourines, congas, bongos, shakers, snaps and toms) with 3000 sounds each, for a total of 33000 sounds. All sounds in the dataset have a length between 0.1 and 1 second (mean of 0.46 seconds). In order to validate our models, we perform a class-balanced split into 80\% training and 20\% validation sets. All the results we present are computed on this validation set to ensure generalization.
As mentioned in previous sections, we compute the Mel-scaled spectrograms of these sounds. To do so, we first pad all waveforms with zeros to ensure a constant size across the whole dataset; all audio files are thus 22015 samples long. We also normalize them so that the maximum absolute sample value is 1. Then, we compute STFTs for all sounds with a Hann window of length 1024, a hop size of 256 and an FFT size of 2048. To turn the STFTs into Mel-scaled spectrograms, we multiply them by the filterbank matrix mentioned earlier (Eq.~\ref{eq:stft2mel}).
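The padding and normalization steps can be sketched as:

```python
import numpy as np

TARGET_LEN = 22015  # fixed length used in the paper (about 1 s at 22050 Hz)

def preprocess_waveform(y, target_len=TARGET_LEN):
    """Zero-pad to a fixed length and peak-normalize to max |amplitude| = 1."""
    if len(y) > target_len:
        raise ValueError("sounds longer than the target length were discarded")
    y = np.pad(y, (0, target_len - len(y)))
    peak = np.abs(y).max()
    return y / peak if peak > 0 else y
```

The STFT and mel-filterbank steps then run on the resulting fixed-length, unit-peak waveforms.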
\begin{figure*}[]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\columnwidth]{results/rec/c1s.png}%
\caption{Clap}%
\label{fig:clapsrec}%
\end{subfigure}%
\hspace{0.1\textwidth}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\columnwidth]{results/rec/k1s.png}%
\caption{Kick drum}%
\label{fig:kicksrec}%
\end{subfigure}
\caption{Spectrogram reconstructions of sounds from the evaluation set. From left to right: the original spectrogram, the CWAE reconstruction and the one obtained from the reconstructed waveform (the amplitudes are presented in log-scale for the sake of visibility).}
\label{fig:specrec}
\end{figure*}
\subsection{Experimental setup}
Before assembling the two parts of our model to create an end-to-end system, we pre-train each network separately.
We train our CWAE with an ADAM optimizer \cite{kingma2014adam}. The initial learning rate is set to $\eta = 10^{-3}$ and is annealed by a factor of 0.5 whenever the validation loss has not decreased for 10 epochs. The WAE is trained for 110k iterations. To obtain a good estimation of the MMD between each $q_Z^c$ and its Gaussian prior, we have to compute enough latent codes: as noted in \cite{reddi2015high}, $n$ in Equation~\ref{eq:discrete-MMD} should be of the same order of magnitude as $d_z = 64$. Therefore, at each iteration, we have to ensure that this criterion is satisfied for each class. We thus implemented a balanced sampler so that our data loader yields batches containing 64 samples from each class, which is more stable than a standard random batch sampler. In the end, our final batch size equals $64 \times 11 = 704$.
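A minimal sketch of such a balanced sampler (the shuffling and epoch-handling details are assumptions):

```python
import numpy as np

def balanced_batches(labels, per_class=64, rng=None):
    """Yield index batches containing `per_class` samples from every class.

    With 11 classes and per_class=64 this reproduces the 704-sample batches
    used to keep the per-class MMD estimates stable.
    """
    rng = rng if rng is not None else np.random.default_rng()
    by_class = {c: rng.permutation(np.flatnonzero(labels == c))
                for c in np.unique(labels)}
    n_batches = min(len(v) for v in by_class.values()) // per_class
    for b in range(n_batches):
        batch = np.concatenate(
            [v[b * per_class:(b + 1) * per_class] for v in by_class.values()])
        yield rng.permutation(batch)
```

Each yielded batch then contains exactly 64 indices per class, so every per-class MMD term in the regularizer is estimated from a full 64-sample set.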
When training the CWAE, we perform data processing steps that improve stability and performance. First, we take the log of our spectrograms to reduce the contrast between high and low amplitudes. Then, we compute the per-element means and variances to scale the log-Mel spectrograms so that each element is distributed as a zero-mean unit-variance Gaussian; we noticed that this improves the WAE reconstruction quality.
When training the MCNN, we use the Mel spectrograms without scaling. The initial learning rate is set to $\eta = 10^{-4}$ and is annealed by a scheduler at a rate of 0.2 with a patience of 50 epochs. The MCNN is trained for around 50k iterations with a batch size of 128.
\subsection{Reconstruction}
We first evaluate the reconstruction abilities of each part of our system, and of the system as a whole. In Figure~\ref{fig:specrec}, we compare the original spectrogram with both our CWAE's reconstruction and the spectrogram computed on the final output. In both cases, the reconstruction performed by the CWAE is good, yet a bit blurry. After passing through the MCNN, we can see some stripes corresponding to a checkerboard artifact that periodically affects the waveform and thus appears as a harmonic artifact on the spectrogram. While it looks important on these log-scaled spectrograms, the sound is often clean, as shown by the kick reconstruction in Figure~\ref{fig:waverec}.
\begin{figure}[]
\centering
\begin{subfigure}{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/rec/c1w.png}%
\caption{Clap}%
\label{fig:clapwrec}%
\end{subfigure}%
\hfill
\begin{subfigure}{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/rec/k1w.png}%
\caption{Kick drum}%
\label{fig:kickwrec}%
\end{subfigure}
\caption{Waveform reconstruction of sounds from the evaluation set. The top row shows the original waveform and the bottom row shows the reconstruction after passing the spectrogram through the whole system.}
\label{fig:waverec}
\end{figure}
More examples are available on the companion website\footnote{https://sonycslparis.github.io/NeuralDrumMachine/}, along with audio.
\begin{figure*}[!ht]
\centering
\includegraphics[width =\textwidth]{images/interface.png}
\caption{The Neural Drum Machine interface. First, the XY pad on the left controls values for the two most influential dimensions. The "Fine" knob controls the value for the third most influential dimension and can be seen as fine tuning. The range selector controls the range of values available for these three dimensions.
Then, a selector allows the user to control which type of sound is generated. Finally, the waveform visualizer on the right allows to trim a sample to play only a particular region.}
\label{fig:ndm}
\end{figure*}
\subsection{Sampling the latent space}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.4\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/gen/bongo.png}%
\caption{Bongo}%
\label{fig:genbongo}%
\end{subfigure}%
\hspace{0.1\columnwidth}
\begin{subfigure}{0.4\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/gen/hh.png}%
\caption{Hi-hat}%
\label{fig:genhh}%
\end{subfigure}
\caption{Sounds generated by sampling the latent space. From top to bottom, we have the final waveform, the spectrogram generated by the CWAE and the one corresponding to the waveform (the amplitudes are presented in log-scale for the sake of visibility).}
\label{fig:gen}
\end{figure}
In Figure~\ref{fig:gen}, we show generated sounds. We generate them by first sampling a multivariate Gaussian in the latent space. Then, we decode this latent code, conditioned on a given class label, and obtain a spectrogram. Finally, this spectrogram is passed to the MCNN, which estimates the corresponding waveform. Here, both sounds are quite realistic and artifact-free. However, sampling the latent space in this fashion does not always yield good-sounding results, because our latent distributions do not perfectly match Gaussian distributions. Also, conditioning on a category does not guarantee generating sounds from this category only: some regions of the space will sound close to a hi-hat even if the class label for claps is provided to the CWAE. While this can be seen as a drawback, we think it does not lower the interest of the approach, because it allows synthesizing hybrid sounds. Additional audio examples are available on the companion website.
\section{Creative Applications}
\subsection{Interface}
For our model to be used in a studio production context, we have developed a user interface. This interface is a Max4Live patch, which allows a direct integration into Ableton Live. In this section, we describe how it works and show some screenshots.
To recall, we pass a (latent code, category) couple $(z,c)$ to the decoder of our CWAE to produce a spectrogram $\hat{x}$. Then the MCNN generates a .wav file from this spectrogram. However, the latent code $z$ is high-dimensional (64 dimensions), so choosing a value for each coordinate would be a long and complex process. To facilitate interactivity, we use a Principal Component Analysis (PCA), whose aim is to find the 3 most influential dimensions, thus reducing the complexity of the fine-tuning process while ensuring a good diversity of sounds. From now on, we denote the PCA dimensions $P_1$, $P_2$ and $P_3$.
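The PCA-based control mapping can be sketched with a plain SVD (a library implementation such as sklearn's PCA would work equally well; this NumPy version is illustrative):

```python
import numpy as np

class LatentPCA:
    """Map 64-d latent codes to 3 control dimensions and back."""

    def __init__(self, z, n_components=3):
        self.mean = z.mean(axis=0)
        # Rows of vt are principal directions, ordered by explained variance.
        _, _, vt = np.linalg.svd(z - self.mean, full_matrices=False)
        self.components = vt[:n_components]

    def transform(self, z):
        """Latent codes -> controls (P1, P2, P3)."""
        return (z - self.mean) @ self.components.T

    def inverse_transform(self, p):
        """Inverse PCA: controls (P1, P2, P3) back to a full latent code."""
        return p @ self.components + self.mean
```

The interface only exposes `transform`'s three coordinates; `inverse_transform` is what the generation server applies before decoding.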
To generate sound through the interface, we provide several controllers. First, we provide control over the values of $z$: an XY pad controls $P_1$ and $P_2$, and the 'Fine' knob controls $P_3$; a selector lets the user define the range of both the pad and the knob. Then, a menu allows the user to set a value for $c$, which comes down to selecting the type of sound one wants to generate. Finally, the user can use the waveform visualizer to crop out remaining artifacts, for example.
\subsection{Generation Process}
Every time a parameter value changes, a new sound is generated as follows.
A Python server listening on a UDP port contains the model and is in charge of all the computation. When the user modifies the value of a dimension, the Max client sends a UDP message containing the values of $P_1$, $P_2$, $P_3$ and the category of the sound. When the server receives the message, it creates the associated latent code $z$ by computing the inverse PCA of $(P_1, P_2, P_3)$ and concatenates it with the conditioning vector. The server then passes $(z,c)$ to the CWAE decoder, which feeds a spectrogram to the MCNN. The obtained waveform is exported to a WAV file, and its location is returned to the Max plugin. Finally, our plugin loads its buffer with the content of this file and displays it on the visualizer.
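A minimal sketch of the client/server exchange. The JSON wire format, field names and port number below are assumptions, since only the message contents ($P_1$, $P_2$, $P_3$ and the category) are specified:

```python
import json
import socket

def parse_message(payload):
    """Decode a client message carrying the PCA controls and the category."""
    msg = json.loads(payload.decode("utf-8"))
    return (msg["p1"], msg["p2"], msg["p3"]), msg["category"]

def serve(handle, host="127.0.0.1", port=9001):
    """Minimal UDP loop: receive controls, reply with the path of the WAV file.

    `handle` stands in for the full pipeline:
    inverse PCA -> CWAE decoder -> MCNN -> exported .wav, returning its path.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        payload, addr = sock.recvfrom(1024)
        controls, category = parse_message(payload)
        wav_path = handle(controls, category)
        sock.sendto(wav_path.encode("utf-8"), addr)
```

The Max client then simply fires one such datagram per parameter change and reloads its buffer from the returned path.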
Our system can generate sounds with very low latency on CPU ($<$50ms delay between the change and the sound on a 2.6 GHz Intel Core i7). Once the sound is in the buffer, it can be played without any latency. A demonstration video is available on the companion website.
\subsection{Impact on creativity and music production}
We think that this system is a first approach towards a new way to design and compose drums. Indeed, it is a straightforward and efficient tool for everyone to organize and browse their sample library and design their drum sounds. Despite the parameters being autonomously learnt by the neural network, it is pretty intuitive to navigate in the latent space.
Also, such a tool can be used to humanize programmed drums. It is often claimed that programmed electronic drums lack a human feeling. Indeed, when a real drummer plays, subtle variations give the rhythm a natural groove whereas programmed MIDI drum sequences can sound robotic and repetitive, leaving listeners bored.
There are common techniques to humanize MIDI drums such as varying velocities. By allowing the synthesis parameters to vary in a small given range, our system can be used to slightly modify the sound of a drum element throughout a loop. This could, for example, mimic a drummer who hits a snare at slightly different positions.
\section{Conclusion and Future Work}
We propose a first end-to-end system that allows intuitive drum sound synthesis. The latent space learnt on the data provides intuitive controls over the sound. Our system is capable of real-time sound generation on CPU while ensuring satisfying audio quality. Moreover, the interface we have developed is studio-ready and lets users easily integrate the system into one of the most used DAWs for electronic music.
We identify two axes for improvement. The first concerns the conditioning mechanism, which should be made more precise and powerful so that each category can clearly be distinguished from the others. The second concerns developing novel ways to interact with a large latent space in order to explore its full diversity.
Also, similarly to what has been achieved on symbolic music \cite{latentconstraints2017,hadjeres2019variation}, we will investigate approaches that let users specify the controls they want in order to shape the sounds. This would be an effortless way for novice sound designers to tune their drum sounds and create drum kits on purpose, rather than relying on existing ones. Finally, merging the computation server into the plugin would make the model even more accessible.
\bibliographystyle{iccc}
\section{Introduction}\label{sec_intro}
Next-generation cellular networks are expected to support ultra-high data rates to serve the exponential increase in capacity and traffic demands~\cite{5gtechs}. The millimeter-wave (mmWave) spectrum band is considered a promising solution to achieve the required data rates, due to its large bandwidth~\cite{tedmmwave, mmwavecellular}. However, transmission over the mmWave band suffers from high propagation loss and low signal penetration through buildings and solid materials, which results in regions with limited signal coverage~\cite{mmwaveref}.
To overcome such coverage gaps in mmWave communication, network densification, or the deployment of ultra-dense cellular networks (UDNs), has been recently proposed~\cite{UDCN,UDNmmWave}. However, the deployment cost and the spatio-temporal variability of capacity and coverage demands are among the challenges facing UDNs.
Alternatively, integrating unmanned aerial vehicles (UAVs) into the cellular structure has been recently proposed (e.g.\cite{UAVhetnet,UAVRelay,OBIABUAV}), including the mmWave communication~\cite{uavmmwave}. Essentially, a UAV can relay the received data from the neighboring base station to its surrounding mobile users. Consequently, UAVs can be considered as a feasible, cost-effective and easily scalable network solution, which can be adjusted based on the coverage and capacity demands~\cite{totUAV}. Having a UAV communicating over backhaul links, towards base stations, and access links, towards mobile users, naturally leads to creating a wirelessly backhauled network architecture~\cite{UAVIBIAB}.
In this regard, the integrated access and backhaul (IAB) network architecture is considered as a promising solution to allow for easier deployment of wirelessly backhauled networks. Generally, in an IAB architecture (e.g.~\cite{3GPPIAB,3GPPIAB_att}), the macro base station (MBS) uses the same infrastructure and wireless channel resources to provide access and backhauling functionalities for cellular users and IAB-relays, respectively. Despite the recent studies focusing on the coverage probability and interference mitigation in IAB 5G cellular networks (e.g.~\cite{BWPart, jointIBIAB}), there is no consideration of UAV-assisted IAB mmWave networks, which is the focus of this paper.
UAV-assisted IAB mmWave networks may be studied via conducting \emph{ray tracing} simulations. Ray tracing approach has been recently utilized to characterize the mmWave signal propagation in indoor environments~\cite{ismailray, mmwaveindoor} and outdoor environments at both the 28 and 72 GHz bands~\cite{mmwaveray, airinterray}. However, none of the above studies has jointly studied the integration of UAVs into the mmWave-based IAB scenarios.
In this paper, we aim to characterize the coverage gains in UAV-assisted IAB mmWave network. More precisely, we utilize WinProp software package, which employs a ray tracing approach, to characterize the potential performance improvements of using UAVs as hovering relays in an IAB mmWave network. We focus on the outdoor mmWave channels at $30$ and $60$ GHz frequency bands. In doing so, we consider two relaying modes of UAVs, which are amplify-and-forward (AF) and decode-and-forward (DF). In the AF mode, we study the Out-of-Band IAB (OB-IAB) scenario and propose a power mapping table to determine the transmission power of UAVs at the access links, based on the received signal strength at the backhaul links. Furthermore, we define where to best position the UAVs, based on the generated ray tracing coverage maps.
In the DF mode, we study the in-band IAB (IB-IAB) scenario and define the 3D deployment of UAVs. In doing so, we consider the required received signal-to-interference-plus-noise-ratio (SINR) threshold at the backhaul links along with the signal coverage and interference levels at the access links. To the best of our knowledge, this is the first ray tracing-based study that investigates the use of UAVs in mmWave-based OB and IB scenarios for IAB cellular networks.
The rest of this paper is organized as follows. In Section~\ref{sec_netarch}, we present the network architecture and the environment modeling in WinProp software. In Section~\ref{sec_AF}, we discuss the WinProp implementation of AF relaying mode and demonstrate, with the aid of the ray tracing simulations, the performance gains of using UAVs in IAB mmWave scenarios. Similarly, DF relaying mode is studied in Section~\ref{sec_DF}. Finally, conclusion is drawn in Section~\ref{sec_conc}.
\section{Network Architecture and Ray-tracing Simulated Environment}\label{sec_netarch}
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig1.png}
\caption{UAV-assisted integrated access and backhaul.}\label{fig_sysmod}
\end{center}
\vspace{-0.1 in}
\end{figure}
Fig.~\ref{fig_sysmod} depicts the considered architecture for the UAV-assisted IAB system.
As shown, the IAB-donor, which is the MBS in this case, provides access and wireless backhauling functionalities to terrestrial users (tUEs) and UAVs, respectively. Both backhaul and access links operate on the same spectrum band, centered around the $f_1$ frequency. UAVs are utilized as relaying IAB-nodes to fill the coverage gaps and provide access functionality to aerial users (aUEs). The access links of UAVs operate at the same spectrum resources as backhaul links ($f_1$) in the IB-IAB transmission mode, and at different resources ($f_2$) in the OB-IAB transmission mode.
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig2.png}
\caption{Received Downlink SINR from single IAB-donor.}\label{sc1_covrg_singleIABDon}
\end{center}
\vspace{-0.15 in}
\end{figure}
We utilize the WinProp software package to model a 3-dimensional (3D) urban outdoor scenario, namely the downtown Manhattan area, as shown in Fig.~\ref{sc1_covrg_singleIABDon}.
The 3D map layer allows for precise testing of the high path-loss limitations of mmWave spectrum bands. All structures are high rise buildings composed of concrete and flat glass surfaces. The heights of the buildings vary between $80\mathrm{m}$ and $120\mathrm{m}$. The buildings are uniformly distributed over a four-way intersection geographical area of size $1500\mathrm{m}\times460\mathrm{m}$.
A single IAB-donor is positioned at the $(-700,0)$ coordinates at an altitude of $25\mathrm{m}$.
The high scattering nature of the simulated environment limits signal penetration and results in large coverage gaps. In this paper, we aim to shrink such coverage gaps by finding the best positions for UAVs. The UAV transmitter is implemented similar to a conventional transmitter with varying 3D location. The IAB-donor and UAVs utilize a directional antenna pattern for the ray tracing simulations. The 2D horn antenna radiation patterns are shown in Figs.~\ref{antennapatternA} and \ref{antennapatternE}. The antenna pattern has a beamwidth $\left(3~\mathrm{dB}\right)$ of ${\approx}~30$ degrees. The AMan module in WinProp software is utilized to post process these patterns and create a 3D antenna radiation pattern that is imported into the ray tracing simulations.
\begin{figure}[h]
\vspace{-0.104 in}
\begin{center}
\subfloat[Azimuth plane radiation pattern\label{antennapatternA}]{
\includegraphics[width=7.75cm,height=7.75cm,keepaspectratio]{AP1.png}}
\end{center}
\begin{center}
\subfloat[Elevation plane radiation pattern\label{antennapatternE}]{
\includegraphics[width=7.75cm,height=7.75cm,keepaspectratio]{AP2.jpg}}
\caption{Antenna radiation pattern.}
\end{center}
\vspace{-0.2 in}
\end{figure}
\section{Amplify-and-Forward Relaying in UAV-assisted OB-IAB mmWave Network}\label{sec_AF}
In this section, we address two design aspects related to the AF relaying mode. First, we propose a power mapping table for the AF relaying UAV, which defines its transmission power over the access link, based on its received power from the IAB-donor over the backhaul links. Second, we define the best locations for the group of UAVs to be deployed.
\subsection{AF Power Mapping and UAV's Positioning}\label{subsec_AFpwrmap}
The received SINR at the backhaul link of a UAV depends on its location and distance from the IAB-donor. Hence, we propose an adaptive power transmission scheme, to be utilized at the AF UAVs, to reflect the strength of the backhaul connection of a UAV. In other words, the higher the received power over the backhaul links, the more transmission power will go over the access links.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=8cm,keepaspectratio]{fig3.png}
\caption{Received SINR levels at backhaul link of a single UAV.}\label{sc1_sinr_bh}
\end{center}
\vspace{-0.15 in}
\end{figure}
First, we conduct ray tracing simulation to generate a power map of the received signal strength over the backhaul link at the UAV altitude of $200$~m. Given that this is a single-transmitter simulation environment, there is no interference and the signal strength and SINR are proportional to each other.
Fig.~\ref{sc1_sinr_bh} depicts a sample coverage map with a resolution of $200$m of the received SINR levels at the UAV from a single IAB-donor. Each square represents a potential UAV location.
Second, for a generic UAV location $i$, represented by one of the squares in Fig.~\ref{sc1_sinr_bh}, we propose to utilize a UAV transmission power $P_{i}^{(\mathrm{Tx})}$ equal to
\begin{equation}
P_{i}^{(\mathrm{Tx})}=P^{(\mathrm{max})}\times\left(\frac{\upgamma_{i}^{(\mathrm{BH})}}{\upgamma_{\mathrm{u}}^{(\mathrm{max})}}\right) \;,
\end{equation}
where $P^{(\mathrm{max})}$ and $\upgamma_{i}^{(\mathrm{BH})}$ denote the maximum power capability of the UAV and the received SINR value at the $i^{\mathrm{th}}$ potential UAV location, respectively. Furthermore, $\upgamma_{\mathrm{u}}^{(\mathrm{max})}$ is a normalization factor, equal to the maximum received SINR at the user's altitude level.
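A sketch of this power mapping, interpreting the SINR ratio in the linear (watt) domain and capping the output at $P^{(\mathrm{max})}$; the exact normalization used to produce Table~\ref{tab_pwrmap} may differ, so the cap and the dB-domain conversions here are assumptions:

```python
import math

def af_tx_power_dbm(sinr_bh_db, sinr_max_db, p_max_watt=5.0):
    """AF UAV transmit power: P_tx = P_max * (SINR_bh / SINR_max).

    SINRs are converted from dB to linear before taking the ratio;
    sinr_max_db is the scenario-specific normalization factor.
    """
    ratio = 10.0 ** ((sinr_bh_db - sinr_max_db) / 10.0)
    p_tx_watt = p_max_watt * min(ratio, 1.0)  # never exceed P_max
    return 10.0 * math.log10(p_tx_watt * 1e3)  # watts -> dBm
```

Stronger backhaul links thus map monotonically to higher access-link power, up to the UAV's $5$-watt ($\approx 37$ dBm) capability.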
\begin{table}[h]
\centering
\caption{AF UAV power mapping table.}\label{tab_pwrmap}
\begin{tabular}{|c|c||c|c|}
\hline
Received & UAV Tx & Received & UAV Tx\\
SINR (dB) & power (dBm) & SINR (dB) & power (dBm)\\
\hline
16.95 & 32.28 & 20.37 & 33.10\\
\hline
17.42 & 32.41 & 20.74 & 33.16\\
\hline
17.45 & 32.43 & 20.81 & 33.18\\
\hline
18.56 & 32.67 & 23.47 & 33.69\\
\hline
19.98 & 32.99 & 25.17 & 34.00\\
\hline
20 & 33.01 & 25.21 & 34.01\\
\hline
20.24 & 33.05 & 34.22 & 34.42\\
\hline
20.27 & 33.07 & 34.32 & 34.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab_pwrmap} shows non-repeated entries of the received SINR at the backhaul link obtained from the coverage map, in Fig.~\ref{sc1_sinr_bh}, and the corresponding transmission power calculated according to (1). Third, the best position for the UAV is determined based on 1) its received SINR level from the IAB-donor, as shown previously in Fig.~\ref{sc1_sinr_bh}, and 2) its potential impact on the coverage map at the ground user level, taking into consideration the power mapping table presented in Table~\ref{tab_pwrmap}. Any additional UAV can be deployed similarly in a consecutive manner.
\subsection{Coverage Enhancement}\label{subsec_AFsimres}
In this section, we show the coverage improvement due to deploying two UAVs in the UAV-assisted OB-IAB mmWave network. We consider the same baseline deployment scenario, shown previously in Fig.~\ref{sc1_covrg_singleIABDon}, in which a single IAB-donor is positioned at the $(-700,0)$ coordinates at an altitude of $25\mathrm{m}$. In the considered UAV-assisted OB-IAB scenario, the $30$~GHz frequency band is used for the downlink transmissions of the backhaul and access links of the IAB-donor. Given its \emph{out-of-band} nature, the access links of the UAVs operate at a different frequency, namely the $60$~GHz band. The IAB-donor and each UAV have maximum downlink transmission powers of $10$ and $5$ watts, respectively. Based on the UAV positioning and transmission power mapping approach introduced in Section~\ref{subsec_AFpwrmap}, the best positions for the two UAVs are found to be $(-50, 150)$ and $(-50, -150)$. The two UAVs are positioned almost at the middle of a four-way intersection at an altitude of $200$m. The deployed UAVs leverage their Line-of-Sight (LOS) capabilities and fill the coverage gaps, as shown in the received downlink SINR coverage map in Fig.~\ref{sc1_covg_2uavs}.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{fig4.png
\caption{Ray tracing coverage map after adding two AF UAVs.}\label{sc1_covg_2uavs}
\end{center}
\vspace{-0.15 in}
\end{figure}
\begin{figure}
\begin{center}
\hspace{-.45in}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{AFCDF.png
\caption{AF relaying mode: CDF of downlink received SINR.}\label{sc1_cdf}
\end{center}
\vspace{-0.1 in}
\end{figure}
Fig.~\ref{sc1_cdf} depicts the cumulative distribution function (CDF) of the received downlink SINR of the baseline scenario, in which UAVs are not used, and the UAV-assisted scenario. In both cases, the users are distributed uniformly across the coverage map every $10$~m. Fig.~\ref{sc1_cdf} shows that initially around $70^{\mathrm{th}}$ percentile of users fall in coverage gaps in the baseline scenario. With the deployment of two UAVs, only $30^{\mathrm{th}}$ percentile are still left in coverage gaps. In other words, deploying two UAVs, according to the proposed scheme in this paper, has provided an average of $2.3\times$ gain in downlink coverage
\section{UAV-Assisted Decode-and-Forward Relaying in UAV-Assisted IB-IAB mmWave Network}\label{sec_DF}
In this section, we consider DF relaying nature of the UAVs, and aim to define the best locations of a set of deployed UAVs. In the DF relaying mode, a UAV forwards its received packet, if the received SINR is above a certain threshold. Otherwise, it remains idle. The transmitted signal will be sent with the maximum transmission power of the UAV. In this section, we use IB-IAB transmission mode, as a potential candidate for tighter integration between access and backhaul links. In that, the backhaul and access links of each UAV fully overlap on spectrum resources. The baseline simulation scenario is depicted in Fig.~\ref{sc2_covg_single_IABDon}, in which, the IAB-donor is positioned at the origin coordinates with an altitude of $25\mathrm{m}$.
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig6.png
\caption{Received downlink SINR map at user's altitude.}\label{sc2_covg_single_IABDon}
\vspace{-0.15 in}
\end{center}
\end{figure}
\subsection{DF UAVs Positioning}\label{subsec_DFimp}
We utilize two UAVs to fill in the coverage gaps in Fig.~\ref{sc2_covg_single_IABDon}, taking into consideration the inter-UAV interference to be at low levels. In particular, we conduct ray tracing simulation to generate the coverage map of the received SINR at the backhaul links, as depicted in Fig.~\ref{sc2_covg_bh}. On one hand, each UAV must be positioned in a location where the received backhaul SINR from the IAB-donor is above a specific threshold. On the other hand, the UAVs are positioned such that the signal coverage is maximized at the ground user level while taking into account minimizing the inter-cell interference levels. We set the SINR threshold to $15\,\mathrm{dB}$ in the proposed ray tracing scenario. Hence, the UAVs can be placed anywhere where the received SINR in Fig.~\ref{sc2_covg_bh} is above this threshold. The best positions for the two UAVs are found to be $(-20,200)$ and $(20,-200)$.
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig7.png
\caption{Received downlink SINR map at backhaul link of a single UAV.}\label{sc2_covg_bh}
\end{center}
\vspace{-0.15 in}
\end{figure}
\subsection{Coverage Enhancement}\label{subsec_DFres}
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig8.png
\caption{Ray tracing coverage map after adding two DF UAVs.}\label{sc2_covg_2uavs}
\end{center}
\vspace{-0.15 in}
\end{figure}
Fig.~\ref{sc2_covg_2uavs} shows how UAVs are positioned to fill the coverage gaps in the baseline scenario (see Fig.~\ref{sc1_covg_2uavs}) and demonstrates the improvement in the downlink coverage after deploying the UAVs. The CDF plot of the received downlink SINR is shown in Fig.~\ref{sc2_cdf}. Fig.~\ref{sc2_cdf} shows that $60^{\mathrm{th}}$ percentile of users suffer from coverage gaps before deploying the UAVs, while only $30^{\mathrm{th}}$ percentile do after deploying the UAVs. In other words, the use of two DF UAVs yields an average of $1.75\times$ gain in downlink coverage of the proposed IAB mmWave scenario. It is worth noting that the available spectrum resources are directly proportional to the number of UAVs in the DF relaying mode. Consequently, given that deploying two UAVs has doubled the downlink coverage, Fig.~\ref{sc2_cdf} shows that the downlink capacity achieves $2\times$ gain in the proposed DF relaying mode. Fig.~\ref{sc2_cdf} also reveals that the received downlink SINR of terrestrial users, i.e., cell-center users, is slightly decreased by around $4\,\mathrm{dB}$ to provide coverage to more than $30^{\mathrm{th}}$ percentile of users. This degradation is due to the nature of \emph{in-band} mode, which creates mutual interference between the access links of the UAVs and the IAB-donor.
\begin{figure}
\begin{center}
\hspace{-.45in}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{DFCDF.png
\caption{DF relaying mode: CDF of downlink received SINR.}\label{sc2_cdf}
\end{center}
\vspace{-0.15 in}
\end{figure}
\section{Conclusion}\label{sec_conc}
In this paper, we utilized the ray tracing simulations in WinProp software to investigate the coverage gains of using UAVs, as hovering relays, in IAB mmWave cellular networks. We considered the AF and DF relaying modes of UAVs and analyzed the propagation characteristics of $30$ and $60$ GHz outdoor channels. We used the ray tracing simulation results to define the 3D deployment and the access functionality of the UAVs. The ray tracing simulation results show that using UAV AF and DF relaying modes achieves an average of $2.3\times$ and $1.75\times$ gains in the downlink coverage of IAB mmWave networks, respectively.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec_intro}
Next-generation cellular networks are expected to support ultra-high data rates to serve the exponential increase in capacity and traffic demands~\cite{5gtechs}. The millimeter-wave (mmWave) spectrum band is considered a promising solution to achieve the required data rates, due to its large bandwidth~\cite{tedmmwave, mmwavecellular}. However, transmission over the mmWave band suffers from high propagation loss and low signal penetration through buildings and solid materials, which results in regions with limited signal coverage~\cite{mmwaveref}.
To overcome such coverage gaps in mmWave communication, network densification, or the deployment of ultra-dense cellular networks (UDNs), has been recently proposed~\cite{UDCN,UDNmmWave}. However, the deployment cost and the spatio-temporal variability of capacity and coverage demands are among the challenges facing UDNs.
Alternatively, integrating unmanned aerial vehicles (UAVs) into the cellular structure has recently been proposed (e.g.,~\cite{UAVhetnet,UAVRelay,OBIABUAV}), including in mmWave communication~\cite{uavmmwave}. Essentially, a UAV can relay the data received from a neighboring base station to its surrounding mobile users. Consequently, UAVs can be considered a feasible, cost-effective, and easily scalable network solution, which can be adjusted based on the coverage and capacity demands~\cite{totUAV}. Having a UAV communicate over backhaul links, towards base stations, and access links, towards mobile users, naturally leads to a wirelessly backhauled network architecture~\cite{UAVIBIAB}.
In this regard, the integrated access and backhaul (IAB) network architecture is considered a promising solution that allows for easier deployment of wirelessly backhauled networks. Generally, in an IAB architecture (e.g.,~\cite{3GPPIAB,3GPPIAB_att}), the macro base station (MBS) uses the same infrastructure and wireless channel resources to provide access and backhauling functionalities for cellular users and IAB-relays, respectively. Despite recent studies focusing on coverage probability and interference mitigation in IAB 5G cellular networks (e.g.,~\cite{BWPart, jointIBIAB}), UAV-assisted IAB mmWave networks have not been considered, which is the focus of this paper.
UAV-assisted IAB mmWave networks may be studied via \emph{ray tracing} simulations. The ray tracing approach has recently been utilized to characterize mmWave signal propagation in indoor environments~\cite{ismailray, mmwaveindoor} and in outdoor environments at both the 28 and 72~GHz bands~\cite{mmwaveray, airinterray}. However, none of the above studies has considered the integration of UAVs into mmWave-based IAB scenarios.
In this paper, we aim to characterize the coverage gains in a UAV-assisted IAB mmWave network. More precisely, we utilize the WinProp software package, which employs a ray tracing approach, to characterize the potential performance improvements of using UAVs as hovering relays in an IAB mmWave network. We focus on outdoor mmWave channels at the $30$ and $60$~GHz frequency bands. In doing so, we consider two relaying modes of UAVs: amplify-and-forward (AF) and decode-and-forward (DF). In the AF mode, we study the out-of-band IAB (OB-IAB) scenario and propose a power mapping table to determine the transmission power of UAVs at the access links, based on the received signal strength at the backhaul links. Furthermore, we define where to best position the UAVs, based on the generated ray tracing coverage maps.
In the DF mode, we study the in-band IAB (IB-IAB) scenario and define the 3D deployment of the UAVs. In doing so, we consider the required received signal-to-interference-plus-noise ratio (SINR) threshold at the backhaul links, along with the signal coverage and interference levels at the access links. To the best of our knowledge, this is the first ray tracing-based study that investigates the use of UAVs in mmWave-based OB and IB scenarios for IAB cellular networks.
The rest of this paper is organized as follows. In Section~\ref{sec_netarch}, we present the network architecture and the environment modeling in the WinProp software. In Section~\ref{sec_AF}, we discuss the WinProp implementation of the AF relaying mode and demonstrate, with the aid of ray tracing simulations, the performance gains of using UAVs in IAB mmWave scenarios. Similarly, the DF relaying mode is studied in Section~\ref{sec_DF}. Finally, conclusions are drawn in Section~\ref{sec_conc}.
\section{Network Architecture and Ray-tracing Simulated Environment}\label{sec_netarch}
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig1.png}
\caption{UAV-assisted integrated access and backhaul.}\label{fig_sysmod}
\end{center}
\vspace{-0.1 in}
\end{figure}
Fig.~\ref{fig_sysmod} depicts the considered architecture for the UAV-assisted IAB system.
As shown, the IAB-donor, which is the MBS in this case, provides access and wireless backhauling functionalities to terrestrial users (tUEs) and UAVs, respectively. Both backhaul and access links operate on the same spectrum band, centered around the $f_1$ frequency. UAVs are utilized as relaying IAB-nodes to fill the coverage gaps and provide access functionality to aerial users (aUEs). The access links of UAVs operate at the same spectrum resources as backhaul links ($f_1$) in the IB-IAB transmission mode, and at different resources ($f_2$) in the OB-IAB transmission mode.
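The in-band/out-of-band distinction above reduces to a one-line carrier-selection rule. The sketch below is purely illustrative (the function name and the mode labels are our own); the $30$/$60$~GHz values match the simulation setup used later in the paper.

```python
def uav_access_carrier_ghz(mode, f1_ghz=30.0, f2_ghz=60.0):
    """Carrier selection for the UAV access links.

    In IB-IAB the access links reuse the backhaul band (f1); in
    OB-IAB they move to a separate band (f2).
    """
    if mode not in ("IB", "OB"):
        raise ValueError("mode must be 'IB' or 'OB'")
    return f1_ghz if mode == "IB" else f2_ghz
```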
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig2.png}
\caption{Received Downlink SINR from single IAB-donor.}\label{sc1_covrg_singleIABDon}
\end{center}
\vspace{-0.15 in}
\end{figure}
We utilize the WinProp software package to model a 3-dimensional (3D) urban outdoor scenario, namely the downtown Manhattan area, as shown in Fig.~\ref{sc1_covrg_singleIABDon}.
The 3D map layer allows for precise testing of the high path-loss limitations of mmWave spectrum bands. All structures are high rise buildings composed of concrete and flat glass surfaces. The heights of the buildings vary between $80\mathrm{m}$ and $120\mathrm{m}$. The buildings are uniformly distributed over a four-way intersection geographical area of size $1500\mathrm{m}\times460\mathrm{m}$.
A single IAB-donor is positioned at the $(-700,0)$ coordinates at an altitude of $25\mathrm{m}$.
The high scattering nature of the simulated environment limits signal penetration and results in large coverage gaps. In this paper, we aim to shrink such coverage gaps by finding the best positions for UAVs. The UAV transmitter is implemented similarly to a conventional transmitter with a varying 3D location. The IAB-donor and UAVs utilize a directional antenna pattern for the ray tracing simulations. The 2D horn antenna radiation patterns are shown in Figs.~\ref{antennapatternA} and \ref{antennapatternE}. The antenna pattern has a $3$~dB beamwidth of approximately $30$ degrees. The AMan module in the WinProp software is utilized to post-process these patterns and create a 3D antenna radiation pattern that is imported into the ray tracing simulations.
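The paper imports measured 3D horn patterns via AMan, but the quoted $30$-degree beamwidth can be mimicked by a standard parabolic (in dB) main-lobe model; the peak gain of $20$~dBi below is an assumption of ours, not a value from the paper.

```python
def directional_gain_db(theta_deg, g_max_db=20.0, bw_3db_deg=30.0):
    """Parabolic-in-dB main-lobe model of a directional antenna.

    theta_deg is the angle off boresight. The quadratic attenuation is
    calibrated so the gain is exactly 3 dB below the peak at half the
    3 dB beamwidth (theta = bw_3db_deg / 2), i.e. the full beamwidth
    between the two -3 dB directions equals bw_3db_deg.
    """
    return g_max_db - 12.0 * (theta_deg / bw_3db_deg) ** 2
```

This is the same functional form used in 3GPP antenna models; side lobes and the elevation cut are ignored in this sketch.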
\begin{figure}[h]
\vspace{-0.104 in}
\begin{center}
\subfloat[Azimuth plane radiation pattern\label{antennapatternA}]{
\includegraphics[width=7.75cm,height=7.75cm,keepaspectratio]{AP1.png}}
\end{center}
\begin{center}
\subfloat[Elevation plane radiation pattern\label{antennapatternE}]{
\includegraphics[width=7.75cm,height=7.75cm,keepaspectratio]{AP2.jpg}}
\caption{Antenna radiation pattern.}
\end{center}
\vspace{-0.2 in}
\end{figure}
\section{Amplify-and-Forward Relaying in UAV-assisted OB-IAB mmWave Network}\label{sec_AF}
In this section, we address two design aspects related to the AF relaying mode. First, we propose a power mapping table for the AF relaying UAV, which defines its transmission power over the access link, based on its received power from the IAB-donor over the backhaul links. Second, we define the best locations for the group of UAVs to be deployed.
\subsection{AF Power Mapping and UAV's Positioning}\label{subsec_AFpwrmap}
The received SINR at the backhaul link of a UAV depends on its location and distance from the IAB-donor. Hence, we propose an adaptive power transmission scheme, to be utilized at the AF UAVs, to reflect the strength of the backhaul connection of a UAV. In other words, the higher the received power over the backhaul links, the more transmission power will go over the access links.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=8cm,keepaspectratio]{fig3.png}
\caption{Received SINR levels at backhaul link of a single UAV.}\label{sc1_sinr_bh}
\end{center}
\vspace{-0.15 in}
\end{figure}
First, we conduct ray tracing simulation to generate a power map of the received signal strength over the backhaul link at the UAV altitude of $200$~m. Given that this is a single-transmitter simulation environment, there is no interference and the signal strength and SINR are proportional to each other.
Fig.~\ref{sc1_sinr_bh} depicts a sample coverage map with a resolution of $200$m of the received SINR levels at the UAV from a single IAB-donor. Each square represents a potential UAV location.
Second, for a generic UAV location $i$, represented by one of the squares in Fig.~\ref{sc1_sinr_bh}, we propose to utilize a UAV transmission power, $P_{i}^{(\mathrm{Tx})}$, equal to
\begin{equation}
P_{i}^{(\mathrm{Tx})}=P^{(\mathrm{max})}\times\left(\frac{\upgamma_{i}^{(\mathrm{BH})}}{\upgamma_{\mathrm{u}}^{(\mathrm{max})}}\right) \;,
\end{equation}
where $P^{(\mathrm{max})}$ and $\upgamma_{i}^{(\mathrm{BH})}$ denote the maximum power capability of the UAV and the received SINR value at the $i^{\mathrm{th}}$ potential UAV location, respectively. Furthermore, $\upgamma_{\mathrm{u}}^{(\mathrm{max})}$ represents a normalization factor, which is equal to the maximum received SINR at the user's altitude level.
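A minimal sketch of the power mapping in (1) follows. The $5$~W budget matches the UAV power used in the simulations; the normalization SINR of $30$~dB is a placeholder of ours (the paper reads $\upgamma_{\mathrm{u}}^{(\mathrm{max})}$ off the ray tracing map), and we assume the ratio is evaluated in linear units and clipped so the budget is never exceeded.

```python
import math

def db_to_lin(x_db):
    # dB value -> linear ratio
    return 10.0 ** (x_db / 10.0)

def lin_to_dbm(p_watt):
    # transmit power in watts -> dBm, for comparison with the table
    return 10.0 * math.log10(p_watt * 1e3)

def uav_tx_power(sinr_bh_db, p_max_w=5.0, sinr_norm_db=30.0):
    """Eq. (1): the access-link power is the power budget scaled by the
    ratio of the backhaul SINR to the normalization SINR."""
    ratio = min(db_to_lin(sinr_bh_db) / db_to_lin(sinr_norm_db), 1.0)
    return p_max_w * ratio
```

With these assumed constants a $20$~dB backhaul SINR maps to $0.5$~W; a stronger backhaul link monotonically earns more access-link power, which is the behaviour Table~\ref{tab_pwrmap} tabulates.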
\begin{table}[h]
\centering
\caption{AF UAV power mapping table.}\label{tab_pwrmap}
\begin{tabular}{|c|c||c|c|}
\hline
Received & UAV Tx & Received & UAV Tx\\
SINR (dB) & power (dBm) & SINR (dB) & power (dBm)\\
\hline
16.95 & 32.28 & 20.37 & 33.10\\
\hline
17.42 & 32.41 & 20.74 & 33.16\\
\hline
17.45 & 32.43 & 20.81 & 33.18\\
\hline
18.56 & 32.67 & 23.47 & 33.69\\
\hline
19.98 & 32.99 & 25.17 & 34.00\\
\hline
20 & 33.01 & 25.21 & 34.01\\
\hline
20.24 & 33.05 & 34.22 & 34.42\\
\hline
20.27 & 33.07 & 34.32 & 34.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab_pwrmap} shows non-repeated entries of the received SINR at the backhaul link obtained from the coverage map, in Fig.~\ref{sc1_sinr_bh}, and the corresponding transmission power calculated according to (1). Third, the best position for the UAV is determined based on 1) its received SINR level from the IAB-donor, as shown previously in Fig.~\ref{sc1_sinr_bh}, and 2) its potential impact on the coverage map at the ground user level, taking into consideration the power mapping table presented in Table~\ref{tab_pwrmap}. Any additional UAV can be deployed similarly in a consecutive manner.
\subsection{Coverage Enhancement}\label{subsec_AFsimres}
In this section, we show the coverage improvement due to deploying two UAVs in the UAV-assisted OB-IAB mmWave network. We consider the same baseline deployment scenario, shown previously in Fig.~\ref{sc1_covrg_singleIABDon}, in which a single IAB-donor is positioned at $(-700,0)$ at an altitude of $25\mathrm{m}$. In the considered UAV-assisted OB-IAB scenario, the $30$~GHz frequency band is used for the downlink transmissions of the backhaul and access links of the IAB-donor. Given its \emph{out-of-band} nature, the access links of the UAVs operate at a different frequency, namely the $60$~GHz band. The IAB-donor and each UAV have maximum downlink transmission powers of $10$ and $5$~watts, respectively. Based on the UAV positioning and transmission power mapping approach introduced in Section~\ref{subsec_AFpwrmap}, the best positions for the two UAVs are found to be $(-50, 150)$ and $(-50, -150)$. The two UAVs are positioned almost at the middle of a four-way intersection at an altitude of $200$~m. The deployed UAVs leverage their line-of-sight (LOS) capabilities and fill the coverage gaps, as shown in the received downlink SINR coverage map in Fig.~\ref{sc1_covg_2uavs}.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{fig4.png}
\caption{Ray tracing coverage map after adding two AF UAVs.}\label{sc1_covg_2uavs}
\end{center}
\vspace{-0.15 in}
\end{figure}
\begin{figure}
\begin{center}
\hspace{-.45in}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{AFCDF.png}
\caption{AF relaying mode: CDF of downlink received SINR.}\label{sc1_cdf}
\end{center}
\vspace{-0.1 in}
\end{figure}
Fig.~\ref{sc1_cdf} depicts the cumulative distribution function (CDF) of the received downlink SINR for the baseline scenario, in which UAVs are not used, and for the UAV-assisted scenario. In both cases, the users are distributed uniformly across the coverage map every $10$~m. Fig.~\ref{sc1_cdf} shows that around $70\%$ of users initially fall in coverage gaps in the baseline scenario. With the deployment of two UAVs, only $30\%$ of users are left in coverage gaps. In other words, deploying two UAVs, according to the scheme proposed in this paper, provides an average of $2.3\times$ gain in downlink coverage.
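The coverage-gain figure can be reproduced from per-user SINR samples as sketched below. The $0$~dB outage threshold is an assumption of ours; the paper does not state the exact SINR value used to read the gap fraction off the CDF.

```python
def coverage_gain(sinr_baseline_db, sinr_uav_db, threshold_db=0.0):
    """Covered user fraction before/after UAV deployment, and their ratio.

    A user counts as being in a coverage gap when its downlink SINR is
    below threshold_db.
    """
    cov_before = (sum(s >= threshold_db for s in sinr_baseline_db)
                  / len(sinr_baseline_db))
    cov_after = (sum(s >= threshold_db for s in sinr_uav_db)
                 / len(sinr_uav_db))
    return cov_before, cov_after, cov_after / cov_before
```

With $30\%$ of users covered before and $70\%$ after, the ratio is $7/3 \approx 2.33$, consistent with the reported ${\sim}2.3\times$ gain.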
\section{Decode-and-Forward Relaying in UAV-Assisted IB-IAB mmWave Network}\label{sec_DF}
In this section, we consider the DF relaying mode of the UAVs and aim to define the best locations for a set of deployed UAVs. In the DF relaying mode, a UAV forwards a received packet if the received SINR is above a certain threshold; otherwise, it remains idle. The forwarded signal is transmitted with the maximum transmission power of the UAV. In this section, we use the IB-IAB transmission mode, a potential candidate for tighter integration between access and backhaul links, in which the backhaul and access links of each UAV fully overlap in spectrum resources. The baseline simulation scenario is depicted in Fig.~\ref{sc2_covg_single_IABDon}, in which the IAB-donor is positioned at the origin at an altitude of $25\mathrm{m}$.
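The threshold rule above amounts to an on/off power decision. In the sketch below, the $15$~dB threshold is the value used for UAV positioning later in this section, while the $5$~W budget is carried over from the AF setup as an assumption (this section does not restate the UAV power budget).

```python
def df_relay_tx_power(sinr_bh_db, p_max_w=5.0, threshold_db=15.0):
    """DF rule: decode and forward at full power when the backhaul SINR
    clears the threshold; otherwise stay idle (0 W)."""
    return p_max_w if sinr_bh_db >= threshold_db else 0.0
```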
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig6.png}
\caption{Received downlink SINR map at user's altitude.}\label{sc2_covg_single_IABDon}
\vspace{-0.15 in}
\end{center}
\end{figure}
\subsection{DF UAVs Positioning}\label{subsec_DFimp}
We utilize two UAVs to fill the coverage gaps in Fig.~\ref{sc2_covg_single_IABDon} while keeping the inter-UAV interference at low levels. In particular, we conduct a ray tracing simulation to generate the coverage map of the received SINR at the backhaul links, as depicted in Fig.~\ref{sc2_covg_bh}. On the one hand, each UAV must be positioned in a location where the received backhaul SINR from the IAB-donor is above a specific threshold. On the other hand, the UAVs are positioned such that the signal coverage at the ground user level is maximized while the inter-cell interference is minimized. We set the SINR threshold to $15\,\mathrm{dB}$ in the proposed ray tracing scenario. Hence, the UAVs can be placed anywhere the received SINR in Fig.~\ref{sc2_covg_bh} exceeds this threshold. The best positions for the two UAVs are found to be $(-20,200)$ and $(20,-200)$.
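The positioning procedure can be sketched as a greedy search over candidate map squares. This is our own reading of the steps described above, not an algorithm the paper formalizes; in particular, the minimum-separation parameter is an assumed proxy for "low inter-UAV interference".

```python
import math

def select_uav_positions(candidates, n_uavs=2, sinr_threshold_db=15.0,
                         min_sep_m=100.0):
    """Greedy sketch of the DF UAV positioning procedure.

    candidates: list of (x, y, backhaul_sinr_db, covered_pixels) tuples,
    where covered_pixels is the set of ground-map pixels a UAV at that
    position would cover. Positions below the backhaul SINR threshold
    are discarded; UAVs are then picked one at a time to maximize the
    number of newly covered pixels, subject to a minimum separation.
    """
    feasible = [c for c in candidates if c[2] >= sinr_threshold_db]
    chosen, covered = [], set()
    for _ in range(n_uavs):
        pool = [c for c in feasible
                if c not in chosen
                and all(math.hypot(c[0] - p[0], c[1] - p[1]) >= min_sep_m
                        for p in chosen)]
        if not pool:
            break
        best = max(pool, key=lambda c: len(c[3] - covered))
        chosen.append(best)
        covered |= best[3]
    return [(c[0], c[1]) for c in chosen], covered
```

In the actual study, the candidate squares and their backhaul SINR come from the ray tracing map in Fig.~\ref{sc2_covg_bh}, and the per-position coverage footprints from ground-level coverage maps.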
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig7.png}
\caption{Received downlink SINR map at backhaul link of a single UAV.}\label{sc2_covg_bh}
\end{center}
\vspace{-0.15 in}
\end{figure}
\subsection{Coverage Enhancement}\label{subsec_DFres}
\begin{figure}
\begin{center}
\includegraphics[width=8.75cm,height=8.75cm,keepaspectratio]{fig8.png}
\caption{Ray tracing coverage map after adding two DF UAVs.}\label{sc2_covg_2uavs}
\end{center}
\vspace{-0.15 in}
\end{figure}
Fig.~\ref{sc2_covg_2uavs} shows how the UAVs are positioned to fill the coverage gaps of the baseline scenario (see Fig.~\ref{sc2_covg_single_IABDon}) and demonstrates the improvement in downlink coverage after deploying the UAVs. The CDF of the received downlink SINR is shown in Fig.~\ref{sc2_cdf}. Fig.~\ref{sc2_cdf} shows that $60\%$ of users suffer from coverage gaps before deploying the UAVs, while only $30\%$ do afterwards. In other words, the use of two DF UAVs yields an average of $1.75\times$ gain in the downlink coverage of the proposed IAB mmWave scenario. It is worth noting that the available spectrum resources are directly proportional to the number of UAVs in the DF relaying mode. Consequently, given that deploying two UAVs has doubled the downlink coverage, Fig.~\ref{sc2_cdf} shows that the downlink capacity achieves a $2\times$ gain in the proposed DF relaying mode. Fig.~\ref{sc2_cdf} also reveals that the received downlink SINR of terrestrial users, i.e., cell-center users, is slightly decreased, by around $4\,\mathrm{dB}$, to extend coverage to an additional $30\%$ of users. This degradation is due to the nature of the \emph{in-band} mode, which creates mutual interference between the access links of the UAVs and the IAB-donor.
\begin{figure}
\begin{center}
\hspace{-.45in}
\includegraphics[width=8.5cm,height=8.5cm,keepaspectratio]{DFCDF.png}
\caption{DF relaying mode: CDF of downlink received SINR.}\label{sc2_cdf}
\end{center}
\vspace{-0.15 in}
\end{figure}
\section{Conclusion}\label{sec_conc}
In this paper, we utilized ray tracing simulations in the WinProp software to investigate the coverage gains of using UAVs, as hovering relays, in IAB mmWave cellular networks. We considered the AF and DF relaying modes of UAVs and analyzed the propagation characteristics of $30$ and $60$~GHz outdoor channels. We used the ray tracing simulation results to define the 3D deployment and the access functionality of the UAVs. The results show that the UAV AF and DF relaying modes achieve average gains of $2.3\times$ and $1.75\times$, respectively, in the downlink coverage of IAB mmWave networks.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
\rightline{\begin{minipage}{7.6cm}
{\em ``Because our discourses have to be about the sensible world, and not about a world on paper."}\\
Galileo Galilei, {\em Dialogo sopra i massimi sistemi}
\end{minipage}}
\vspace{4mm}
\noindent
I am not an expert on strings. I follow the results announced as main string achievements, but I have not worked in the field. I have therefore hesitated much before accepting the invitation by \emph{Foundations of Physics} to express a view on the theory. I have eventually decided to accept, in the hope of contributing to the overall debate on the theory, because a central problem addressed by string theory is also addressed by the research direction in which I work.
For a considerable number of years, strings have represented a huge intellectual investment, aiming at a complete theory capable of describing the world at the elementary level, including quantum gravity. Today, the problem is obviously not yet solved. String theory is incomplete, far from describing precisely our real world, and its foundation is poorly understood. But such a task is arduous and advances are necessarily slow. Strings provide tantalizing hints, partial answers, intriguing mathematical tools, and the tentative architecture of a grand overall picture to solve the problem.
In such a situation it is hard to evaluate string theory in isolation. An evaluation can only be made by comparing the theory with alternative research directions. Following the indications of the editors of \emph{Foundations of Physics}, I try here to assess the results of string theory by comparing them with results and methods of my own field of research. I hope that this can contribute to putting the results of string theory in perspective, and to a sober evaluation of the relative merits and the relative potential of different research directions, in a field where the final answer is not yet known.
String theory has recently been evolving into a toolbox, with tentative applications to fields such as QCD, strongly interacting fluids, or pure mathematics. I will not comment on the interest of string theory for these fields. This should be evaluated by QCD theorists, condensed matter physicists, or mathematicians. I focus on the motivating claim of string theory, which is to describe the real world beyond what is well accounted for by the particle-physics Standard Model and classical general relativity, and in particular to concretely describe the regimes where the quantum property of gravity cannot be neglected. This last problem is my own specific field of interest.
\section{What we know, what we do not know, and what is the problem}
Before addressing merits and shortcomings of the tentative \emph{solution} provided by string theory, let me briefly recall the \emph{problem} on the table, which string theory means to solve. The Standard Model and classical general relativity are spectacular theories that have enjoyed an empirical success with few --if any-- equals in the history of science. Today, these theories (with neutrino mass and cosmological constant) seem able to account for virtually anything we can measure, with the notable exception of dark-matter phenomenology. These are the currently established fundamental theories that summarize what we know about the physical world at the most elementary level we can access. But this set of theories does not allow us to compute what happens in all physical regimes; it is patchy and manifestly incomplete.
Specifically, if we want to compute the scattering amplitude of two point-particles interacting gravitationally, as a function of their center of mass energy $E$ and impact parameter $b$, we can do so using (non-renormalizable) perturbative quantum general relativity as an effective theory, but predictivity breaks down when $E$ becomes of the order of $b$ in Planck units. Therefore the currently established theory gives no prediction whatsoever about what happens to particles that scatter at that energy. Such lack of predictivity is particularly relevant in important physical situations, such as early cosmology, some aspects of black hole physics, and our understanding of the short-scale structure of physical space. We need a new theory to understand this physics.
Furthermore, there are major theoretical and conceptual shortcomings of the current theory. Ultraviolet divergences appear to indicate that there is something important we miss at short scale. General relativity is a beautiful theory with a tight formal structure and a minimal number of free parameters, but the Standard Model is a patchwork with a number of free parameters that calls for an explanation. The conceptual structure of the Standard Model includes aspects (fixed spacetime, global Poincar\'e invariance, local energy conservation...) which play a major role in dealing with its quantum properties, but are profoundly different from those of general relativity (dynamical spacetime, no global Poincar\'e invariance, no local energy conservation, general covariance...): if we want a coherent picture, we need a way to combine the two. In particular, if the central physical tenet of general relativity is correct, namely if the geometry of physical space is a physical field, then the quantum character of this field implies that physical geometry is ``quantum geometry''. What is quantum geometry? Can we find a complete and consistent theoretical picture where these issues are resolved?
There are \emph{two} problems raised by this situation. The first is to \emph{complete} the picture and make it \emph{consistent}. This is the problem of \emph{quantum gravity}, since what is clearly missing is the understanding of quantum gravitational physics. A second, distinct problem is \emph{unification}, namely the hope of reducing the full phenomenology to the manifestation of a single entity. (QCD completes the standard model and is consistent with electroweak theory, but is not unified with it.) String theory is an attempt to solve the two problems at once, namely to provide a quantum theory of gravity \emph{within} a unified picture (a ``final theory'').
Below, I discuss the extent to which these problems are solved by the current state of string theory, in comparison with other approaches. I focus on the approach to quantum gravity in which I work, loop quantum gravity. (Recent reviews of string theory to which I am particularly indebted are \cite{Mukhi:2011zz}, and \cite{Blau:1900zza} which focus on strings as a theory of quantum gravity. A recent overview of loop gravity, with relevant references, is \cite{Rovelli:2010bf}; a technical introduction is in \cite{Rovelli:2011eq}.) It is important to stress upfront, however, that the problems addressed by strings and loops do not coincide. Both theories aim at a quantum theory of gravity in order to complete the current theoretical description of the world and make it coherent, but string theory assumes the working hypothesis that this can only be achieved in the context of a unified theory, capable of addressing also questions that are outside the scope of loop gravity (such as: why these matter couplings and not others? why four dimensions? what is the final theory of nature? and so on.)
\section{Ultraviolet finiteness}
A major achievement of string theory is the control of the ultraviolet divergences of conventional quantum field theory. An actual proof of ultraviolet finiteness is still lacking. At least, I have searched and asked around a good deal, but I have not yet been able to find a reference with such a proof. But string theorists appear to be convinced of finiteness, and I believe them.
Indeed string theory provides intuitive ways of seeing how singularities are resolved. When point-particles scatter at very high energy, the stringy degrees of freedom ``open up", effectively spreading over a finite spacetime region and smoothing out the interaction region. From a different perspective, in the perturbation expansion the Riemann surfaces corresponding to high momentum are the ``thin" ones, but a modular transformation relates such surfaces to non-degenerate ones, effectively avoiding the ultraviolet regime. The reason for the stringy ultraviolet finiteness can therefore be traced to the very hypothesis at the basis of string theory, namely the existence of an infinite number of degrees of freedom besides the ones we see, defining extended elementary objects. These can smooth out the standard quantum field theory divergences.
These results can be compared with the ultraviolet finiteness of loop quantum gravity. Here the proof of ultraviolet finiteness is straightforward.\footnote{Infrared finiteness can be proven as well in an appropriate version of the theory. See the review articles quoted above for specific references.} Unlike string theory, the physical ground for the ultraviolet finiteness of loop quantum gravity does not require any additional input besides general relativity and quantum mechanics, and it can be understood as follows. Since the geometry of space is a quantum field, it has quantum properties. In particular, the spectrum of the geometrical quantities can be computed and turns out to be discrete. At short scales, spacetime geometry is therefore effectively discrete, in very much the same manner in which the energy of a harmonic oscillator is discrete. Therefore there is literally ``no room" for ultraviolet divergences in the theory: there is no short-distance regime beyond the Planck scale. This becomes manifest in computing transition amplitudes, where one sees explicitly that high momenta are cut off at the Planck scale.\footnote{Geometry discreteness does not break Lorentz invariance in loop gravity because it is quantum mechanical. Eigenvalues do not transform continuously under a continuous symmetry. (The $\hbar/2$ eigenvalue of the angular-momentum component $L_z$ does not transform continuously under rotation.) A boosted observer does not measure Lorentz-contracted discrete lengths: he measures a continuously-deformed probability distribution for the \emph{same} spectrum of lengths. The die-hard idea that the existence of a minimal length at the Planck scale necessarily breaks Lorentz invariance is plainly wrong.}
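To make the discreteness concrete, one can quote the best-known example of such a spectrum (a standard result of the loop literature, not derived in this essay): the eigenvalues of the area operator for a surface punctured by spin-network links carrying half-integer labels $j_i$,
\begin{equation}
A = 8\pi\gamma\,\ell_{\rm Pl}^2 \sum_i \sqrt{j_i(j_i+1)},
\end{equation}
where $\gamma$ is the Barbero--Immirzi parameter and $\ell_{\rm Pl}$ the Planck length. The gap above zero in this spectrum is the precise sense in which there is ``no room" below the Planck scale.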
Ultraviolet finiteness is therefore a main achievement both for string theory and for loop gravity. But it is realized differently in the two theories. In both cases, it reflects an intrinsic physical limitation in measuring distances shorter than the Planck scale. But in the first, it follows from a novel hypothesis about Nature; in the second, it is a direct consequence of quantum theory and general relativity.
\section{Quantum geometry}
The peculiar features of general relativity, and in particular its large gauge symmetry, have followed a curious fate along the evolution of string theory. To start with, general relativity was treated as a conventional field theory in the string context. It was considered the effective low-energy manifestation of something else, like Fermi's weak-interaction theory, and dealt with using basic quantum field theoretical ``non-general-relativistic" tools: expanding around a fixed background, relying heavily on global Poincar\'e invariance, and so on. That is, relying on notions that are at odds with the symmetry of general relativity. In the hands of theoreticians mostly coming from the particle-physics tradition, general relativity was treated in a way that appeared to general relativists to betray its central physical ideas. But the peculiar features of general relativity and of the quantum aspects of spacetime have eventually resurfaced and are playing an increasingly important role in string theory today.
The main effect of the large symmetry group of gravity, and the main teaching of general relativity, indeed, is that the world is not a given spacetime over which dynamical degrees of freedom evolve. Rather, spacetime itself is a dynamical entity. In a quantum theory, spacetime itself is a quantum entity, whose structure cannot be assigned a priori. A number of developments of string theory aim at coming to terms with these deeply unconventional and novel aspects of the world that are directly implied by the physics of general relativity and quantum theory. For instance, with states that have no natural continuum spacetime description (say, a vacuum which is a tensor product of conformal field theories). Similarly, the difficulties of defining local bulk observables in a general-relativistic context, which have long been discussed in the quantum gravity literature, are now being increasingly discussed in the string literature.
Of course, much physics can be derived by choosing a background and computing around it. A state of a background-independent theory constitutes a background, and physics around that state will be, obviously, background dependent. There is no conflict between this background dependence around a chosen (``vacuum") state and fundamental background independence, any more than there would be between quantum electrodynamics and its expansion about the field of an atomic nucleus. The difficult problem is another one: whether the full definition of the theory, and in particular the characterization of its degrees of freedom, requires a background to start with or not. For instance, if we reinterpret general relativity as the theory of small fluctuations around a fixed spacetime, we lose most of the interesting phenomena it predicts, such as the Schwarzschild solution, the dynamics of the universe, black-hole horizons, and so on. In other words, the problem is whether or not we have a quantum theory with a clear definition of a state space capable of listing all possible background states.
There have been numerous beautiful attempts to find this fully background independent formulation of string theory, such as string field theory, matrix theory, holography... But full background independence of string theory is not yet properly understood.
The way this fundamental issue is addressed in string theory is often indirect. For instance, attempts are made to describe the bulk quantum geometry of spacetime by using the AdS/CFT conjecture, thus trying to describe what we do not know (quantum gravity) in terms of conceptual tools that we control (flat-space quantum field theory on the boundary). Analogously, the string-theory calculations of black-hole entropy exploit the relation between the strong-coupling genuinely-gravitational regime of interest and the weak-coupling regime where conventional flat-space tools can be used and states can be counted. Again, string cosmology often addresses the highly non-Minkowskian geometry of early cosmology via a hypothesis that sounds bizarre to relativists: an overall larger Minkowski space where everything happens.
In all these cases, instead of addressing the real problem, which is to learn how to do physics where background spacetime plays no role, the strategy is to try to circumvent the problem, bringing the calculations back to the familiar pre-general-relativistic conceptual framework. The reason for this, of course, is not lack of imagination or courage on the part of string theorists. String theory gives glimpses and hints of how a genuine theory of quantum geometry could look, with general states having no Riemannian spacetime interpretation at all ---like a general state of a quantum particle is not necessarily similar to a classical localized particle--- but for the moment it is far from providing a complete coherent picture of quantum geometry.
This must be compared with the picture of quantum geometry offered by loop gravity. Contrary to the string case, loop gravity addressed upfront the problem of describing the fundamental degrees of freedom of a theory without a fixed background spacetime. The result is that everything is conceptually clear, fully general relativistic, and well defined. There is a Hilbert space, whose states have a clean interpretation as quantum states of the geometry. These do not live over a background, but themselves build up spacetime. The quanta of the theory are ``quanta of space", quantum bricks that build up spacetime. The mathematics of quantum geometry is clear at the level of mathematical physics, as well as at the conceptual physical level. A formalism for computing well-defined background-independent observables, as well as for perturbing around a given background, is known. These techniques are perhaps unfamiliar to many, and might look strange at first sight, but so did string theory for many years, before becoming fashionable.
It seems to me that the clarity of its picture of quantum geometry is definitely a plus for loop quantum gravity, one that string theory lacks.
\section{Overall picture}
The beauty of string theory, on the other hand, is that it offers a tentative overall picture capable of bringing together in a natural and compelling way so many aspects of the world. It provides an ultraviolet-consistent theory of gravity and at the same time has natural room for gauge symmetry, unification, and holography, all fused in an interrelated net that suggests the existence of a compelling overall architecture. Even without working in the field, one cannot fail to appreciate the tantalizing aspects of the relations unraveled by string research. It is very tempting to believe that behind all these relations there should be a remarkable coherent edifice.
The difficulty is that for the moment we see only bits and pieces of the hypothetical complete edifice. In particular, we do not see the foundations: the basic degrees of freedom and the basic equations. The sentiment that this beautiful underlying theory \emph{should} exist is strong among the people immersed in string theory, and is reinforced by the discovery of the beautiful relations --dualities-- relating the different bits. It is difficult for an outsider to fully appreciate the grounds for this sentiment, but string theorists appear to be convinced of the existence of the underlying theory. They might be right, but the fundamental theory, if it exists, is still outside our control. Until we see it, its beauty and its physical consistency remain hypothetical.
This can be compared with loop quantum gravity. The scope of the theory is much narrower, because the theory does not pretend to be a unified theory, does not select the matter couplings, and does not aim at being the final theory of the world. But the elementary degrees of freedom, which are the quanta of space, or, equivalently, the quanta of the gravitational field, are clearly defined. The basic operators are well defined. The dynamics can be compactly presented with three equations. The overall structure of the theory is complete and simple.
Loops and strings differ in another key respect. Strings are based on a definite physical hypothesis: the elementary constituents of the world are extended objects. The hypothesis might be right. Or wrong. The world might not be supersymmetric and 10-dimensional.
Loop gravity, on the other hand, is grounded in quantum theory and in the symmetry underlying general relativity, a symmetry today generally expected to survive at high energy. Loop gravity is just a \emph{general covariant} quantum field theory, with degrees of freedom reducing to Riemannian geometry at low energy. Loop gravity can very well turn out to be wrong as well, of course. But if the theory is wrong, it must be so for some more subtle reason, which, in any case, would still teach us something about the quantum world at the general covariant quantum level.
Of course, assuming that the basic physical tenets of general relativity and quantum theory remain valid at the Planck scale is an extrapolation. But extrapolation has always been the most spectacularly effective tool in science. Maxwell's equations, found in a lab, work from the atomic to the galactic scale. Barring contrary empirical indications, which are always possible, a good bet is that what we have learned will continue to hold.
\section{Describing \emph{this} world}
Let me now come to what I see as a serious shortcoming of string theory. The interest in the theory exploded around 1985, when the heterotic string with gauge group $E_8\times E_8$ appeared to be the \emph{unique} viable option, and the low-energy field theory of such a string, compactified to 4 dimensions on a suitable class of 6-dimensional manifolds, was shown in a classic paper of Candelas, Horowitz, Strominger and Witten \cite{Candelas:1985en} to yield qualitatively correct phenomenological properties, including parity violation. The central promise of that paper, and the hope it raised, was that a realistic string theory incorporating and generalizing the Standard Model, plus gravity, was around the corner. I think that one can safely say today, in hindsight and despite the defining historical role played by the paper, that the hope grounding it, which sparked all that interest, was misplaced.
String theory would be in a stronger position if it could exhibit a mechanism yielding the $SU(3)\times SU(2)\times U(1)$ gauge group, the particle content of our world, the three generations, no supersymmetry at our scale, and so on. Understanding \emph{this} was its original aim. So far, string theory fails to describe our world as we see it. It describes, instead, lots of worlds, in all sorts of higher dimensions, generally with a cosmological constant of the wrong sign, with ``microscopical" internal spaces of cosmological size, and so on. This is a beautiful theoretical world, with marvels and surprises, but where is \emph{our} world in it? Until the description of our world is found in this immense paper edifice, it seems to me that caution should be maintained.
This can be compared with the situation in loop gravity, or with other approaches that might shed light on some of these issues, like for instance Alain Connes's noncommutative geometrization of the Standard Model \cite{Chamseddine:1991qh}. Again, loop gravity does not pretend to provide a unified picture of nature, to tell us what the matter content of the universe is, or to determine the number of dimensions of spacetime. But the theory is compatible with a description of the world as we see it around us: four dimensions, no supersymmetry, fermions and a certain Yang-Mills gauge group. Like all successful physical theories developed so far (QED, QCD, or general relativity), it is also compatible with unphysical couplings. The ambition of loop gravity is not to solve all problems of physics and provide the final theory. Its ambition is to provide a consistent theory of quantum gravitational phenomena, coupled with the matter that we find (with experiments) in the world.
On the other hand, scattering calculations around the Minkowski background in loop quantum gravity are being developed, but they are at a far more primitive stage than the scattering calculations one can do with string theory. Also, in the last couple of years, loop gravity has seen the development of an explicit formulation of the theory that includes fermions and Yang-Mills fields, and of a technique to compute scattering amplitudes around the Minkowski background. But these developments are recent, and the results are preliminary.
\section{Unification}
There is one issue in which string theory appears to be in a definitely better position than loop quantum gravity. This is the issue of unification, where loop gravity has nothing to offer.
I think that it is important to emphasize the fact that the unification of the forces and the quantization of gravity are two conceptually distinct problems. The first is the old dream of having a single theory explaining everything. The second refers specifically to the present inconsistency between general relativity and the standard formulation of quantum field theory, and is a problem that has to be solved in order to have a coherent theory of the world. Solving the second does not necessarily imply solving the first: the quantum theory of the gravitational field can in principle be found without addressing the unification problem, just as the quantum theory of the strong interactions was found without solving the problem of unifying them with the electroweak forces.
The idea is often put forward that the problem of quantum gravity can \emph{only} be solved together with the unification problem. There are hints that this might be the case. The running of the Standard Model coupling constants appears to converge not too far from the Planck scale, fermion and boson divergences tend to cancel in supersymmetric theories, and so on. In the history of physics, often two major problems have been solved at once, and the temptation to do the trick again is reasonable.
Even more often, however, hopes to solve two problems at once have been disappointed. When I was a student, the idea that the theory of the strong interactions could only be found by getting rid of renormalization theory at the same time was an unquestioned mantra, repeated by everybody. It turned out to be wrong. There are standard arguments against the possibility of finding a consistent quantum theory of general relativity alone. But these arguments hold in the context of standard \emph{local} field theory, where field operators are defined on a spacetime metric manifold. They are all circumvented by loop gravity by moving up to the proper context of a \emph{general-covariant} quantum field theory.
Loop quantum gravity is a theory of quantum gravity that does not address the unification problem. It is like QED, or, more precisely, QCD: a quantum field theory for a certain interaction, which can be coupled to other interactions (affecting them), but is consistent by itself. The philosophy underlying loop gravity is that we are not near the end of physics, we better not dream of a final theory of everything, and we better solve one problem at a time, which is hard enough.
Back to the unification problem: does string theory actually solve it? Closed and open strings describe gravity and gauge theory. More than that, they can even be shown to be two sides of the same physics, under certain conditions. This is very compelling. String theory definitely provides a unified picture in which gauge theory and gravity live together, and the nineteen or so parameters of the Standard Model are replaced by a single fundamental parameter. This is a strong plus.
But the initial objective of unification was far more ambitious. It was to understand what is beyond the Standard Model in order to be able to \emph{compute} the value of the free parameters of the theory, in the same manner in which the Schr\"odinger equation allows us to compute chemical or condensed-matter parameters from fundamental constants. There is no computation of the Standard Model parameters from string theory. Nor is there a solution to the other puzzles in the theory: why is the cosmological constant so small? What is the origin of the three families? Can we give a better account of symmetry breaking? Little concrete physics has emerged from the theory so far. The results expected from a true unification do not seem to me to be there.
\section{Applications}
Black-hole thermodynamics is definitely a success of string theory, and in my opinion the strongest evidence for its physical relevance. A similar success can be claimed by loop gravity. Both successes are partial, in my opinion. The string derivation is still confined to, or around, extreme situations, as far as I know, and since it is based on mapping the physical black-hole solution into a different solution, it fails to give us a direct, concrete understanding of the relevant black-hole degrees of freedom, as far as I can see. The loop derivation of black-hole entropy gives a clear and compelling physical picture of the relevant degrees of freedom contributing to the entropy, but it is based on tuning a free parameter to get the correct Bekenstein-Hawking entropy coefficient, and this does not sound satisfactory to me either.
The crucial application for both strings and loops will probably turn out to be cosmology. This is the most likely domain where a window of opportunity for testing the theories might open. Loop cosmology is the most spectacular success of loop quantum gravity. The theory elegantly resolves the big-bang singularity and predicts a sort of ``bounce" from a previously contracting phase. When a collapsing universe reaches Planck-scale density, its wave function opens up into a genuinely quantum state where classical space and time are ill defined. The quantum equations of the theory continue to hold, and the evolution can be studied across this non-classical region into a new expanding universe.
This is similar to the picture of an electron falling straight into a Coulomb potential: the classical trajectory falls into the singularity, but it becomes ill-defined in the quantum evolution of the corresponding wave packet. Spacetime is ill-defined around the big bang in the same way the classical trajectory of the electron is ill-defined around the center of the Coulomb potential.
The full quantum gravity effects are nicely summarized in an effective Planck-scale term that modifies the Friedmann equation, and an effort is under way to explore possible testable consequences. Furthermore, inflation appears to be generic in this picture. The picture is simple, physically compelling, and based only on standard general relativity and quantum mechanics, empirically well-established physical inputs.
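To be concrete about this effective term (again quoting the standard loop-cosmology result rather than deriving it), the modified Friedmann equation takes the form
\begin{equation}
H^2=\frac{8\pi G}{3}\,\rho\left(1-\frac{\rho}{\rho_c}\right),
\end{equation}
with $\rho_c$ a critical density of the order of the Planck density. The correction is negligible at ordinary densities, while at $\rho=\rho_c$ the Hubble rate vanishes and the contraction turns into an expansion: the bounce.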
String cosmology is developing rapidly as well, in a number of variants. The ability of the string to effectively resolve singularities and the possibility of topology change potentially provide important inputs to cosmology. I might be wrong, and this is vague, but to my general-relativist eyes many concrete scenarios proposed by string theory to describe the big bang, in particular some brane cosmologies with configurations of branes in a background spacetime, do not sound physically very plausible, compared to the clean simplicity of the loop-cosmology scenario.
Finally, string theory techniques may have potential applications to other domains of physics. These are very interesting, but in no way do they testify in favor of the relevance of string theory for the fundamental interactions. Enormous intellectual investments have gone into string theory in the last decades, and it would be strange if all the theoretical technology developed did not turn out to be good for something. Theoretical physics is rather coherent, and techniques developed in one field often turn out to be helpful elsewhere, irrespective of their success in the first place. After all, if string theory turned out to be useful for QCD, it would, in a sense, finally fulfill the aim for which its ideas were conceived at the very beginning, when Gabriele Veneziano wrote the dual amplitudes to describe strong interactions. To some approximation, there certainly are strings in the real world: the flux tubes of a confining gauge interaction.
\section{Predictions}
Finally, although this should probably have been the first section, the main shortcoming of string theory is definitely its failure, so far, to produce any concretely verifiable physical prediction. To be sure, string theory has provided numerous ``predictions", like short-scale modifications of the gravitational force, black holes at CERN, dielectron resonances, or the existence of supersymmetric particles at low energy, but so far all these ``predictions" have been falsified by observation. The theory has survived these failed predictions because they were not solid predictions, but only hints of possibilities: effects compatible with the theory, but not necessary consequences of it. The real problem is that the theory does not appear, so far, to have any verifiable necessary consequence at accessible scales.
A burning difficulty is of course the landscape problem. If there is an accurate string description of the real world, then there are probably so many of them as to make the discovery of the right one virtually impossible, and in any case devoid of predictive power.
In my opinion, this is serious. A physical theory that does not give predictions is not a good theory. We need definite predictions, like those that \emph{all} good physical theories of the past have been able to produce.
Sometimes the strategy of saying ``so is the world, we have to live with this", is put forward. I find this strategy unconvincing. Such a strategy would be questionable even if string theory had already proved itself as a physically correct theory of the world. But concluding that fundamental physics cannot anymore make definite predictions, just because a hypothetical theory turns out to be too weak to be predictive, is mistaking hypotheses for consequences.
As far as clear verifiable predictions are concerned, loop quantum gravity is in no better shape. There are no experiments supporting loops, nor any other quantum theory of gravity. The simple question I have emphasized in the introduction --what is the scattering amplitude for two particles interacting gravitationally with a center-of-mass energy of the order of the impact parameter in Planck units?-- does not have a clear answer yet, neither from strings nor from loops. Therefore the above condemnation of string theory applies equally to all other approaches to the problem of quantum gravity.
The closest to a verified prediction in the domain, as far as I know, comes from the poset approach to quantum gravity, which indicated the correct order of magnitude of the cosmological constant before its measurement \cite{Ahmed:2002mj}. In this particular regard, string theory fares particularly badly: not only did it fail to predict a positive cosmological constant, but the very introduction of a positive cosmological constant appears to be at least problematic for the theory.
\section{``It does not work, therefore let's develop it further"}
I think that the problem of describing our physical world at the elementary level beyond current established theories is open. String theory is one of the research directions among others aiming at solving this problem, with points of strength and weakness. Its main strength is its mathematical construction, in which gauge fields, fermions and the gravitational field can be seen as parts of an overall coherent construct. The theory has not delivered what seemed to be almost within reach twenty years ago: a finite theory where the fundamental degrees of freedom are clearly identified, capable of describing our own world, with three fermion families, the $SU(3)\times SU(2) \times U(1)$ gauge group, computable values of the Standard Model parameters, and (we should add today) a small positive cosmological constant. The tentative predictions of the theory have so far been falsified. The development of the theory has constructed a toolbox that can perhaps be used in other contexts, but for the moment does not appear very effective for producing concrete results for high-energy physics. The picture of quantum geometry offered by the theory is still very unclear.
There has been a tremendous theoretical investment in strings, by far unmatched by alternative research directions; there have been successes, string revolutions and excitement. Seven years ago, I wrote a playful ``dialog" to point out what I saw as the theory's shortcomings at the time \cite{Rovelli:2003wd}; reading the dialog today, it seems to me that those same difficulties are still open. By contrast, a theory like loop gravity has advanced because the key problems open seven years ago have since been solved.
There is a compelling logical evolution that has led from the particle-theory successes to strings. This path has been characterized by a sequence of spectacularly successful predictions (antiparticles, neutral currents, the $W$ and $Z$, various quarks, just to mention some), which at some point turned into a sequence of spectacularly failed predictions (grand unified theories predicted proton decay at $10^{31}$ years, Kaluza-Klein theory predicted an observable scalar field, strings suggested effects of extra dimensions, supersymmetry has been ``on the verge of being seen" year after year, \ldots). I think that we should keep in mind the possibility that a wrong turn was taken at some point along this path.
In recent years, various theories have developed following the logic: ``it does not work, therefore let's develop it further". Perseverance may pay (it worked with Yang-Mills theories), but at a risk: a theory can grow on its own failures, enriching its structure to cover each previous shortcoming.
There is certainly much beauty in strings. But beautiful ideas have turned out to be wrong in science, even ideas developed by large groups of scientists. (In the words of a quote attributed to Thomas Henry Huxley: ``Science is organized common sense where many a beautiful theory was killed by an ugly fact".) The history of quantum gravity is particularly sprinkled with great hopes disappointed. I remember as a young student sitting in a major conference where a world-renowned physicist announced that the definitive theory of quantum-gravity-and-everything had finally been found. There were a few skeptics in the audience, regarded as zombies by the majority. Today most of us do not even remember the name of that ``final theory". Or worse, we can think of more than one possibility \ldots
It is obviously \emph{not} my intention to suggest that research in string theory should not be vigorously pursued.
String theory is a spectacular intellectual achievement, and it might well turn out to be the right track. It is a rich and elaborate theory that deserves to be studied further, with the resolute aim of assessing its physical viability. If all the hopes of the string community are realized, it will be a triumph.
But I think it would be a mistake to consider string theory as an established result about nature and therefore concentrate attention solely on it. If the hopes of other research directions are realized, that too would be a triumph. String theory appears of unmatched beauty to string theorists, but other ideas appear of unmatched beauty to others. What I think is important is to keep in mind that these theories are provisional.
I am not pessimistic. Major problems like the ones we are facing have sometimes resisted for a while in the history of physics, but a solution has generally been found eventually. The issue is open. I think that different paths must be pursued. Completeness, internal consistency, full agreement with known low-energy physics, simplicity, and, ultimately, experience, will tell.
\vskip3mm
\centerline{---}
\vskip1mm
I thank two anonymous referees for very useful criticisms and Matthias Blau for a detailed reading of the paper and a useful conversation. Thanks to Sal, a bit grown up and still without a job, for comments and suggestions.
\section{Introduction}
In the simple constituent quark model, where the proton is made of
two constituent $u$ quarks and one $d$ quark, a good explanation of
static properties, e.g., the magnetic moment, can be achieved. However,
experimental results of the pion-nucleon sigma term value, strange
magnetic moment $\mu_s$, strangeness contribution to nucleon form
factor \cite{vonHarrach:2005at} as well as the apparent violations
in nucleon-antinucleon annihilation reactions involving the $\phi$ meson
\cite{Amsler:1991hs} indicate that the proton might contain a
substantial strange quark-antiquark ($s\bar{s}$) component. The
strangeness sigma term appears to lie somewhere in the range of
$2-7$\% of the nucleon mass \cite{Young:2010aj}. The substantial
Okubo-Zwieg-Iizuka (OZI) rule violations in the $N\bar{N}$
annihilation reactions involving $\phi$ meson may suggest the
presence of an intrinsic $s\bar{s} $ in nucleon wave function
\cite{Ellis:1994ww}, for instance, the presence of a
$q^3s\bar{s}(\bar{q}^3s\bar{s})$ piece in the $N(\bar{N})$ wave
function. With such an assumption, the $\phi $ meson could be
produced in $N\bar{N}$ annihilation reactions via a shake-out or
rearrangement of the strange quarks already stored in the nucleon
without the violation of the OZI rule. There are other explanations
of the OZI rule violation without introducing strange component in
the nucleon such as the resonance interpretation, instanton induced
interaction \cite{Kochelev:1995kc}, and rescattering
\cite{Locher:1994cc}.
The EMC spin experiment \cite{Ashman:1987hv} on deep inelastic
scattering of longitudinally polarised muons by longitudinally
polarised protons revealed for the first time that the polarization of
the strange quark sea may give a significant negative contribution
$\sigma_s$ to the proton spin. This result was confirmed by
subsequent deep inelastic double-polarization experiments. Ref.
\cite{Ellis:1994py} analyzed all the data available at the time in a
systematic way and found $\sigma_s= -0.10 \pm 0.03$. Among a large
number of theoretical works, Cheng and Li applied the chiral quark
model (ChQM) to explain the spin and flavor structure of the proton
\cite{Li:1997kp}. With the fluctuation of the proton into a kaon and
a hyperon, they can explain the negative polarization of the strange
quark sea and obtain other theoretical results consistent with the
DIS experiments.
However, the configuration of strange quarks in the nucleon is still
an open question. The strangeness magnetic moment $\mu_s$ can be
obtained by extrapolating the strange magnetic form factor $G_M^s(Q^2)$,
measured in parity-violating experiments of electron scattering from
a nucleon \cite{Diehl:2007uc}, to momentum transfer $Q^2=0$. Most
experimental measurements suggest a positive value for $\mu_s$, in
contrast to the recent experimental data of Ref. \cite{Baunack:2009gy} and to
most theoretical calculations, which obtain negative values
for this observable~\cite{Beck:2001yx,Lyubovitskij:2002ng}.
A recent work~\cite{An:2005cj}
has proposed a different form for the strangeness
content of the proton, with the strange quark piece in terms of
pentaquark configurations instead of the 5-quark component,
consisting of a $uud$ cluster and an $s\bar{s}$ pair, that was proposed for
solving the puzzle of the OZI-rule violation. The different pentaquark
configurations that may be contained in the proton can yield both
positive and negative values for the strangeness spin and magnetic
moment of the proton.
The experimental results on $\mu_s$, which is extracted from
experimental data on $G_M^s(Q^2)$, are rather uncertain due to the
large uncertainties in $G_M^s(Q^2)$ and in the extrapolation procedure.
It is therefore believed that proton-antiproton reactions involving $\phi$
production may serve as another platform to probe the
possible configuration of strange quarks in the proton. In the
present work we consider the strange content of the proton wave
function in three models, namely, a $uud$ cluster with an
$s\bar{s}$ sea-quark component, kaon-hyperon clusters based on the
chiral quark model, and the pentaquark picture $uuds\bar s$. The
theoretical values of $\sigma_s$ and $\mu_s$ and the branching ratios of the
reactions $p\bar{p}\rightarrow \phi X$ ($X
=\pi^0,\eta,\rho^0,\omega$) will be compared to experimental data.
We resort to the $^3P_0$ quark model \cite{LeYaouanc:1988fx} and the
nearest threshold dominance model \cite{Vandermeulen:1988hh} to
obtain quantitative predictions for the branching ratios of the
annihilation reactions from atomic $p\bar{p}$ states with the
relative orbital angular momentum $L=0$ \cite{Gutsche:1997gy}. The
paper is organized as follows. The proton wave functions are briefly
described in Section 2 while $\sigma_s$ and $\mu_s$ are calculated
and discussed in Section 3 for various strangeness quark
configurations. In Section 4 we evaluate the branching ratios for
the reactions $p\bar{p}\rightarrow \phi X$ for the three forms of
proton wave functions by using the $^3P_0$ quark model. Finally a
summary and conclusion are given in Section 5.
\section{Proton wave functions}\label{sec:1}
The proton wave function in the presence of strange quarks may
include a 5-quark component $qqqs\bar{s}$ in addition to the $uud$
quark component, taking generically the form
\begin{equation}
|p\rangle = A|uud\rangle+B|uuds\bar{s}\rangle
\end{equation}
where $A$ and $B$ are the amplitudes for the 3- and 5-quark
components in the proton, respectively \cite{Dover:1990ic}. The
possible spin-flavor structures of the 5-quark components discussed
in the $N\bar N$ annihilation process are considered in
the next three subsections.
\subsection{Proton wave function with an explicit $s\bar{s}$ sea-quark component}
We consider the idea that strange quarks are present in the form of
an $s\bar{s}$ sea-quark component in the proton state.
This idea was proposed to describe the apparent violation
of the OZI rule in the $\phi NN$ production process~\cite{Henley:1991ge}
and was used in more general form to discuss $\phi$ meson production in $N\bar{N}$
annihilation reactions~\cite{Ellis:1994ww}. The corresponding
5-quark component for this model can be written in Fock space as
\begin{equation}
|uuds\bar{s}\rangle^{s\bar{s}} = a_0|(uud)_{1/2}(s\bar{s})_0\rangle_{1/2}+a_1|(uud)_{1/2}(s\bar{s})_1\rangle_{1/2}
\end{equation}
where the subscripts denote the spin couplings of the quark clusters,
and $a_0$ and $a_1$ are the amplitudes for the spin-0 and spin-1
components of the admixed $s\bar{s}$ pair.
\subsection{Proton wave function based on a chiral quark model}
In the chiral quark model, the dominant process is the fluctuation
of a valence quark $q$ into a quark $q'$ plus a Goldstone boson
(GB) which in turn forms a ($q \bar{q}'$) system
\cite{Eichten:1991mt}. After the fluctuation of the $u$ and $d$
quarks in the proton, one of these quarks turns into a quark plus a
quark-antiquark pair involving a strange quark. This idea was
considered, for example, for calculating the flavor and spin content
of the proton \cite{Li:1997kp}. To obtain the proton wave function
we consider the SU(3) invariant interaction Lagrangian
of the baryon octet with the nonet of pseudoscalar mesons:
\begin{equation}
\mathcal{L}_I=-g_{8}\sqrt{2}\left(\alpha
[\bar B B P]_F +(1-\alpha)[\bar B B P]_D
\right)-g_{1}\frac{1}{\sqrt{3}}[\bar B B P]_S
\end{equation}
where $g_{8}=3.8$ and $g_{1}=2.0$ are coupling constants
\cite{Stoks:1998xv} and $\alpha$ is known as the $F/(F+D)$ ratio
with $F\simeq0.51,~D\simeq0.76$ \cite{Thomas:2001kw}. The square
parentheses denote the SU(3) invariant combinations:
\begin{eqnarray}
{[\bar B B P]}_F&=&
{\rm Tr}(\bar B P B)-{\rm Tr}(\bar B B P)
\,, \\
{[\bar B B P]}_D&=&
{\rm Tr}(\bar B P B)+
{\rm Tr}(\bar B B P)
-\frac{2}{3}{\rm Tr}(\bar B B){\rm Tr}(P)\,, \\
{[\bar B B P]}_S&=&
{\rm Tr}(\bar B B){\rm Tr}(P)\; ,
\end{eqnarray}
where $B$ and $P$ are the baryon octet and pseudoscalar meson nonet
matrices, respectively, given by
\begin{eqnarray}
B&=&\left( \begin{array}{ccc}
\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} & \Sigma^{+} & p \\
\Sigma^{-} & -\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} &n \\
-\Xi^{-} & \Xi^{0} & -\frac{2\Lambda}{\sqrt{6}}
\end{array}
\right) \; , \\
P&=&\left(\begin{array}{ccc}
\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_8}{\sqrt{6}}+\frac{\eta_1}{\sqrt{3}}
& \pi^{+} & K^{+} \\
\pi^{-} & -\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_8}{\sqrt{6}}
+\frac{\eta_1}{\sqrt{3}} & K^{0} \\
K^{-} & \bar{K}^{0} & \frac{-2\eta_8}{\sqrt{6}}+\frac{\eta_1}{\sqrt{3}}
\end{array}
\right) \; .
\end{eqnarray}
The part of the interaction Lagrangian which allows for a fluctuation
of the proton into kaons and hyperons is contained in
\begin{eqnarray}
\mathcal{L}_I&=&-g_1\bar{p}\eta_1p
+g_8\biggl[\bar{p}\pi^0+\frac{1-4\alpha}{\sqrt{3}}\bar{p}\eta_8
+\frac{1+2\alpha}{\sqrt{3}}\bar{\Lambda}K^-
+(2\alpha-1)\bar\Sigma^0K^-
\nonumber\\
&-&\sqrt{2}\bar{n}\pi^-+\sqrt{2}(2\alpha-1)\bar\Sigma^-K^0\biggr]p
+ \cdots
\end{eqnarray}
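As a cross-check of the SU(3) structure, the kaon-hyperon couplings in the Lagrangian above can be extracted symbolically from the trace combinations of Eqs.~(4)--(6). The sketch below uses sympy; it assumes the barred octet is the transposed matrix of barred field symbols, and overall phase conventions for the pion fields may differ from those used here, so only the strangeness-changing couplings are checked.

```python
import sympy as sp

# Commuting placeholder symbols for the octet fields and their barred partners
S0, Sp, Sm, Lam, p, n, Xim, Xi0 = sp.symbols('Sigma0 Sigmap Sigmam Lambda p n Xim Xi0')
S0b, Spb, Smb, Lamb, pb, nb, Ximb, Xi0b = sp.symbols(
    'Sigma0b Sigmapb Sigmamb Lambdab pb nb Ximb Xi0b')
pi0, pip, pim, Kp, K0, Km, K0b, eta8, eta1 = sp.symbols(
    'pi0 pip pim Kp K0 Km K0b eta8 eta1')
alpha, g8 = sp.symbols('alpha g8')
r2, r3, r6 = sp.sqrt(2), sp.sqrt(3), sp.sqrt(6)

B = sp.Matrix([[S0/r2 + Lam/r6, Sp, p],
               [Sm, -S0/r2 + Lam/r6, n],
               [-Xim, Xi0, -2*Lam/r6]])
# Barred octet: same pattern, transposed, with barred fields (assumed convention)
Bb = sp.Matrix([[S0b/r2 + Lamb/r6, Smb, -Ximb],
                [Spb, -S0b/r2 + Lamb/r6, Xi0b],
                [pb, nb, -2*Lamb/r6]])
P = sp.Matrix([[pi0/r2 + eta8/r6 + eta1/r3, pip, Kp],
               [pim, -pi0/r2 + eta8/r6 + eta1/r3, K0],
               [Km, K0b, -2*eta8/r6 + eta1/r3]])

# The F- and D-type invariants and the octet part of the Lagrangian
F = (Bb*P*B).trace() - (Bb*B*P).trace()
D = (Bb*P*B).trace() + (Bb*B*P).trace() - sp.Rational(2, 3)*(Bb*B).trace()*P.trace()
L8 = sp.expand(-g8*r2*(alpha*F + (1 - alpha)*D))

c_LamK = L8.coeff(Lamb*Km*p)   # Lambda-bar K^- p coupling
c_SigK = L8.coeff(S0b*Km*p)    # Sigma0-bar K^- p coupling
print(sp.simplify(c_LamK), sp.simplify(c_SigK))
```

The extracted coefficients reproduce the $g_8(1+2\alpha)/\sqrt{3}$ and $g_8(2\alpha-1)$ factors multiplying $\bar\Lambda K^-$ and $\bar\Sigma^0 K^-$ in the Lagrangian displayed above.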
The final states resulting from pseudoscalar meson emission by the
proton are summarized as
\begin{eqnarray}
|\Psi \rangle &\sim& -g_1|p\eta_1\rangle
+g_8\biggl[\frac{1-4\alpha}{\sqrt{3}}|p\eta_8\rangle
+|p\pi^0\rangle+\frac{1+2\alpha}{\sqrt{3}}|\Lambda K^+\rangle
\nonumber\\
&+&(2\alpha-1)|\Sigma^0 K^+\rangle
-\sqrt{2}|n\pi^+\rangle+\sqrt{2}(2\alpha-1)|\Sigma^+K^0\rangle\biggr] \,.
\label{eq:chiralp}
\end{eqnarray}
In the absence of the fluctuation, the proton is made up of the conventional two $u$ quarks and one $d$ quark.
Thus $|\Psi\rangle$ may be interpreted as the 5-quark component of the proton wave function, which is given by
\begin{eqnarray}\label{5q-ChQM}
|uuds\bar{s}\rangle^{\rm ChQM}= G_{1}|\Sigma^0K^+\rangle
+G_{2}|\Sigma^+K^0\rangle+G_{3}|\Lambda^0K^+\rangle+
G_4|p\eta_1\rangle+G_{5}|p\eta_8\rangle \,,
\end{eqnarray}
where the $G_i$ are the coefficients corresponding to the respective factors
in Eq.~(\ref{eq:chiralp}).
Each component in the last equation can be represented
in terms of quark cluster configurations as
\begin{eqnarray}\label{K-Y}
|p\eta_{1,8}\rangle=|(uud)_{1/2}(s\bar{s})_0\rangle_{1/2}\,, \quad
|\Sigma^0K^+\rangle =|(uds)_{1/2}(u\bar{s})_0\rangle_{1/2}\,,\nonumber \\
|\Sigma^+K^0\rangle
=|(uus)_{1/2}(d\bar{s})_{0}\rangle_{1/2}\,, \quad |\Lambda^0K^+\rangle =
|(usd)_{1/2}(u\bar{s})_{0}\rangle_{1/2} \,.
\end{eqnarray}
\subsection{Proton wave function including general configurations of the
$uuds$ subsystem}
Another, more general form of the 5-quark component was proposed and
analyzed in
Ref.~\cite{An:2005cj}. Instead of first generating a meson coupled to a baryon
cluster, they consider a genuine 5-quark or $q^4\bar{q}$ pentaquark component
in the proton.
In this model the 5-quark component
may be expressed in terms of the $uuds$ and the $\bar s$ wave functions as
\begin{equation}
|uuds\bar{s}\rangle ^{uuds}= |(uuds)\bar{s}\rangle_{1/2}.
\end{equation}
The flavor wave
functions for the $uuds\bar{s}$ components are usually constructed
by coupling the $uuds$ to the $\bar{s}$ flavor wave function.
The configurations studied in~\cite{An:2005cj} include at most one unit of
orbital angular momentum. The favored configurations are connected
to a positive sign for the strangeness magnetic moment and a negative one
for the strangeness contribution to the proton spin.
\section{Strangeness magnetic moment and spin of the proton}\label{sec:2}
In the nonrelativistic quark model the strangeness magnetic moment
operator $\vec{\mu }_s$ and the strangeness contribution to the
proton spin operator $\vec{\sigma}_s$ are defined as
\begin{equation}
\vec{\mu }_s=\frac{e}{2m_s}\underset{i}{ \sum }\widehat{S}_i(\widehat{\ell}_i +\widehat{\sigma}_i)\; ,
\end{equation}
\begin{equation}
\vec{\sigma}_s=\widehat{\sigma}_s + \widehat{\sigma}_{\bar{s}} \;.
\end{equation}
$\widehat{S}_i$ is the strangeness counting operator with
eigenvalue $+1$ for an $s$ quark and $-1$ for an $\bar{s}$ quark, and $m_s$
is the constituent mass of the strange quark. To calculate the matrix
elements of these operators, explicit forms of the
spin-flavor wave functions of the proton, including orbital
angular momentum, are needed.
For the first model the
spin-flavor wave function can be constructed by coupling the
$|s\bar{s}\rangle_{j_s=0,1}$ configuration to the $|uud\rangle
_{1/2}$ cluster. Since the admixed $s\bar s$ carries negative
intrinsic parity, an orbital P-wave ($\ell=1$) has to be introduced
into the nucleon quark cluster wave function. The simplest configuration
(see also Ref.~\cite{Henley:1991ge}) corresponds to a 1S-state of
the $s\bar s$ pair moving in a P-wave relative to the $(uud)$ valence
quark cluster of the nucleon. Then the 5-quark component
with total angular momentum 1/2 can be written in the general form:
\begin{equation}\label{5q-sea-spin+1/2}
|uuds\bar{s} \rangle^{s\bar{s}}_{\frac{1}{2},m_{ps\bar{s}}
=\frac{1}{2}}=\underset{j_s,j_i=0,1}{ \sum
}\alpha_{j_sj_i}|[(s\bar{s})_{j_s}\otimes \ell=1]_{j_i }\otimes
(uud)_{\frac{1}{2} }\rangle_{\frac{1}{2},m_{ps\bar{s}}=\frac{1}{2}}
\end{equation}
with the normalization $\underset{j_s,j_i=0,1}{ \sum
}|\alpha_{j_sj_i}|^2=1$.
Similarly, for the proton wave function in the ChQM, where the sea-quark contributions
are embedded in the pseudoscalar mesons, a relative $P$-wave between the pseudoscalars
and the $uud$ or hyperon clusters has to be included.
The spin-flavor wave function with spin +1/2 for each coupled meson-baryon state of
Eq.~(\ref{K-Y}) may be expressed as
\begin{equation}
|uuds\bar{s}
\rangle^{\rm ChQM}_{\frac{1}{2},\frac{1}{2}}=|[(q\bar{s})_{j_s=0}\otimes
\ell=1]_{j_i }\otimes (qqs)_{s
}\rangle_{\frac{1}{2},m_{ps\bar{s}}=\frac{1}{2}}.
\end{equation}
Wave functions of the pentaquark $uuds\bar{s}$ states employed in the
third model are more complicated because no restrictions are set
concerning the sub-clusters.
One has to carefully consider the coupling of the color, spin,
flavor and spatial parts to
construct the total wave functions~\cite{An:2005cj}. The color part of the
antiquark in the pentaquark states is a $[11]$ antitriplet, denoted
by the Weyl tableau of the SU(3) group. Hence
the color symmetry of all the $uuds$ configurations is limited to
a $[211]$ triplet in order to form a pentaquark color singlet labeled by the
Weyl tableau $[222]$.
Three flavor symmetry patterns exist for the $uuds$ system corresponding to the octet
representation for the proton: $[31]_F ,[22]_F $ and $[211]_F $
characterized by the $S_4$ Young tableaux. However, the $uuds$ part of the
pentaquark wave function should be antisymmetric under any permutation
of the four quarks. If the spatial wave function is symmetric, the
spin-flavor part of the $uuds$ component must be a $[31]$ state in
order to form the antisymmetric color-spin-flavor $uuds$ part of the
pentaquark wave function. For instance, the flavor symmetry
representations $[31]_F$ and $[211]_F$ may combine with the spin
symmetry state $[22]_S $ to form the mixed symmetry spin-flavor
states $[31]_{FS}$ (the explicit forms may be found in
\cite{An:2005cj, Bijker:2004gr, An:2008xk}). In this work we
consider only the case in which the $uuds$ component is in the ground
state with the spin symmetry $[22]_S$, corresponding to spin zero,
and the relative orbital angular momentum between the $uuds$
component and the $\bar{s}$ carries one unit, so as to obtain
positive parity for the proton wave function.
Theoretical results for the strangeness magnetic moment $\mu_s$
of the proton and the strangeness contribution to the proton spin
$\sigma_s$ are listed in Table I. In the first model we have fixed
the configuration parameters as
$\alpha_{1,0}=\alpha_{1,1}=\bar{\alpha}$. The
strangeness magnetic moment $\mu_s$ depends explicitly on
$\alpha_{0,1}$, which is related
to the amplitude for the $s\bar{s}$ quark cluster with spin 0.
Setting $\alpha_{0,1}=0$ is equivalent to
excluding the quantum numbers $J^{PC}=0^{-+}$ for the $s\bar{s}$
admixture in the nucleon wave function, which is connected to the production of $\eta$
and $\eta'$ in $N\bar N$ annihilation as discussed in \cite{Dover:1990ic,Ellis:1999er}.
The chiral quark model always gives negative results for $\mu_s$ and $\sigma_s$;
the size of the strangeness contribution depends
on the coupling $g_8^2$.
For the third model, we show only the results for the cases where the
$uuds$ component is in the ground state with the spin-flavor
configurations $[31]_{FS}[211]_{F}[22]_{S}$ and
$[31]_{FS}[31]_{F}[22]_{S}$ and the relative motion between the
$uuds$ component and the $\bar{s}$ is a $P$-wave.
\begin{table}
\begin{center}\label{spin-table}
\begin{tabular}{c c c c }
\hline
$|uuds\bar{s}\rangle$ & $\mu_s(\frac{eB^2}{2m_s})$ & $\sigma_s(B^2)$&
\\
\hline
&&&\\
$ s\bar{s} $& $-0.55\bar{\alpha}\alpha_{0,1}$ & $-1.22\bar{\alpha}^2$ \cite{Gutsche:1997gy}& \\
&&&\\
$\rm{ChQM}$& $-1.1g_8^2$ & $-0.31g_8^2$& \\
&&&\\
$[31]_{FS}[211]_{F}[22]_{S}$& $ -\frac{1}{3}$\cite{An:2005cj}& $-\frac{1}{3}$\cite{An:2005cj}& \\
&&&\\
$[31]_{FS}[31]_{F}[22]_{S} $& $ -\frac{1}{3}$\cite{An:2005cj}& $-\frac{1}{3}$\cite{An:2005cj}& \\
&&&\\
\hline
\end{tabular}
\normalsize \caption{Strangeness magnetic moment and spin of the
proton for the three models of the 5-quark component.}
\end{center}
\end{table}
All three models yield negative values for the strangeness
contribution to the proton spin, consistent with the present
experimental results \cite{Ashman:1987hv,Ellis:1994py}. Negative
values for the strangeness magnetic moment also result from all three models.
Note that we restricted the considerations of Ref.~\cite{An:2005cj}
to the pentaquark components with the $uuds$ configurations
$[31]_{FS}[211]_{F}[22]_{S}$ and $[31]_{FS}[31]_{F}[22]_{S} $,
respectively.
\section{$N \bar N$ transition amplitude and branching ratios} \label{sec:3}
To describe the annihilation reactions $N\bar N \to X \phi $
$(X=\pi^0 , \eta ,\rho^0, \omega )$ we use an effective transition dynamics,
which is evaluated in the context of a simple constituent quark model.
In this specific process the $\phi $ meson couples to the intrinsic $s\bar s$
component of the nucleon, which is the leading order OZI allowed contribution.
The process $p\bar{p}$ annihilation
into $\phi X$ involving the 5-quark components in the proton wave
function can be described by the quark line diagrams of Fig. 1.
In the hadronic transition the effective quark annihilation operator
is taken with the quantum numbers of the vacuum ($^3P_0$, isospin $I=0$ and
color singlet). Meson decays and $N\bar N$ annihilation into two mesons
are well described phenomenologically using such an effective
quark-antiquark vertex. At least for meson decay, this approximation has been
given a rigorous basis in strong-coupling QCD. The nonperturbative
$q\bar{q}$ $^3P_0$ vertex is defined according to
\cite{Dover:1992vj}
\begin{equation}
V^{ij}=\sum_\mu\sigma^{ij}_{-\mu}Y_{1\mu}(\vec{q}_i-\vec{q}_{j})\delta^{(3)}(\vec{q}_i+\vec{q}_{j})(-1)^{1+\mu}1^{ij}_F1^{ij}_C~,
\end{equation}
where $Y_{1\mu}(\vec{q})=|\vec{q}|\mathcal{Y}_{1\mu}(\widehat{q})$
with $\mathcal{Y}_{1\mu}(\widehat{q})$ being the spherical harmonics
in momentum space, and $1^{ij}_{F}$ and $1^{ij}_{C}$ are unit
operators in flavor and color spaces, respectively. The spin
operator $\sigma^{ij}_{-\mu}$ is part of the $^3P_0$ vertex,
destroying or creating quark-antiquark pairs with spin 1.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=5cm]{diagrams.eps}
\end{center}
\caption{Quark line diagrams for the production of two meson final states in $p\bar{p}$ annihilation.
Small circles refer to the effective vertex of the $^3P_0$ quark dynamics for $q\bar{q}$ annihilation.
The first diagram corresponds to the shake-out of
the intrinsic $s\bar{s}$ component of the proton wave function \cite{Ellis:1994ww,Gutsche:1997gy}.}
\end{figure}
In the momentum space representation the transition amplitudes for
the quark diagrams of Fig. 1 are given by
\begin{eqnarray} T_{A_I} = \int d^3q_1
\cdots d^3q_8 \, d^3q_{1'}\cdots d^3q_{4'}\langle \phi
X|\vec{q}_{1'}\cdots\vec{q}_{4'} \rangle
\langle\vec{q}_{1'}\cdots\vec{q}_{4'}|
\mathcal{O}_{A_I}|\vec{q}_{1}\cdots\vec{q}_{8}\rangle \langle
\vec{q}_{1}\cdots\vec{q}_{8}|(uuds\bar{s})\otimes(\bar{u} \bar{u}
\bar{d})\rangle
\end{eqnarray}
where $(\bar{u} \bar{u} \bar{d})$ stands for the antiproton wave
function and $(uuds\bar{s})$ for the five quark component of the
proton wave function. The effective operators $\mathcal{O}_{A_I}$ take the
form
\begin{equation}
\mathcal{O}_{A_1} = \lambda_{A_1}
\delta^{(3)}(\vec{q}_1-\vec{q}_{1'})
\delta^{(3)}(\vec{q}_2-\vec{q}_{2'})\delta^{(3)}(\vec{q}_3-\vec{q}_{3'})\delta^{(3)}(\vec{q}_8-\vec{q}_{4'})V^{56}V^{47}~,
\end{equation}
\begin{equation}
\mathcal{O}_{A_2} = \lambda_{A_2} \delta^{(3)}(\vec{q}_2-\vec{q}_{1'}) \delta^{(3)}(\vec{q}_3-\vec{q}_{2'})\delta^{(3)}(\vec{q}_1-\vec{q}_{3'})\delta^{(3)}(\vec{q}_8-\vec{q}_{4'})V^{56}V^{47}~,
\end{equation}
\begin{equation}
\mathcal{O}_{A_3} = \lambda_{A_3} \delta^{(3)}(\vec{q}_2-\vec{q}_{1'}) \delta^{(3)}(\vec{q}_3-\vec{q}_{2'})\delta^{(3)}(\vec{q}_4-\vec{q}_{3'})\delta^{(3)}(\vec{q}_8-\vec{q}_{4'})V^{56}V^{17}~.
\end{equation}
The $\delta$-functions represent the noninteracting and continuous
quark-antiquark lines in the diagrams. The constants $\lambda_{A_I}$
describe the effective strength of the transition topology and are
considered to be overall fitting parameters in the
phenomenological description of experimental data. Since the 5-quark
component is treated as a small perturbative admixture in the
proton ($B^2\ll 1$), we neglect the transition amplitude involving the
overlap $ \langle \vec{q}_{1}\cdots\vec{q}_{8}|(uuds\bar{s})\otimes(\bar{u}
\bar{u} \bar{d}\bar{s}s)\rangle$, i.e. the rearrangement process
\cite{Ellis:1994ww}.
In this work the internal spatial wave functions are taken in the
harmonic oscillator approximation. For the mesons $ M $ ($ \phi$ and
$ X $), the wave function can be expressed in terms of the quark
momenta as
\begin{equation}
\langle M |\vec{q}_{i'}
\vec{q}_{j'}\rangle\equiv\varphi_M(\vec{q}_{i'},\vec{q}_{j'})\chi_{M}=N_M{\rm
exp}\left\{-\frac{R^2_M}{8} \Big(\vec{q}_{i'} -\vec{q}_{j'}\Big)^2
\right\}\chi_{M},
\end{equation}
with $N_M = (R_M^2/\pi)^{3/4}$ and $R_M$ is the meson radial
parameter. The spin-color-flavor wave function is denoted by
$\chi_M$. The baryon wave functions are given by
\begin{equation}
\langle B|\vec{q}_i \vec{q}_j \vec{q}_k\rangle\equiv\varphi_B\chi_{B}=
N_B{\rm exp}\left\{-\frac{R^2_B}{4}
\Big[(\vec{q}_j-\vec{q}_k)^2
+\frac{(\vec{q}_j+\vec{q}_k-2\vec{q}_i)^2}{3}\Big] \right\}
\chi_{B} \,,
\end{equation}
where $N_B=(3R^2_B/\pi)^{3/2}$ and $R_B$ is the baryon radial
parameter. For the first and the second model the full 5-quark
component wave function, resulting from the coupling of a meson to
a baryon, is given by
\begin{eqnarray}
\langle \vec{q}_1\cdots\vec{q}_5 |uuds\bar{s}\rangle&=&
\varphi_{uuds\bar{s} }(\vec{q}_1,\cdots,\vec{q}_5)\chi_{uuds\bar{s}}
=
N_{uuds\bar{s}} \, {\rm exp}\left\{-\frac{R^2_{B}}{4}
\Big[(\vec{q}_4-\vec{q}_5)^2
+\frac{(\vec{q}_4+\vec{q}_5-2\vec{q}_3)^2}{3}\Big] \right\}
\nonumber \\
&\times&
{\rm exp}\left\{-\frac{R^2}{8}
(\vec{q}_3+\vec{q}_4+\vec{q}_5-\vec{q}_1-\vec{q}_2)^2
\right\}
Y_{1\mu}(\vec{q}_3+\vec{q}_4+\vec{q}_5-\vec{q}_1-\vec{q}_2)
\nonumber\\
&\times& {\rm exp}\left\{-\frac{R^2_{M}}{8} (\vec{q}_1-\vec{q}_2)^2 \right\}
\ (\chi_{B}\otimes\chi_{M} ).
\end{eqnarray}
The exponential form with the radial parameter $R$ and the spherical
harmonics $Y_{1\mu}$ together represent the internal relative P-wave
between the 3-quark and 2-quark clusters.
For the third model the proton wave function includes a pentaquark
component $uuds\bar{s}$ with the $uuds$ part in the ground state
and a P-wave relative orbital angular momentum between
the $uuds$ cluster and the $\bar s$. One may write the spatial wave
function of the pentaquark component $uuds\bar{s}$ as
\begin{eqnarray}
\varphi_{uuds\bar{s} }(\vec{q}_1,\cdots,\vec{q}_5)&=&
N_{uuds\bar{s}} \,
{\rm exp}\biggl\{-\frac{R^2_{B}}{4}\Big[(\vec{q}_2-\vec{q}_3)^2
+\frac{(\vec{q}_2+\vec{q}_3-2\vec{q}_4)^2}{3} \nonumber\\
&+&\frac{(\vec{q}_2+\vec{q}_3+\vec{q}_4-3\vec{q}_5)^2}{6}+
\frac{(\vec{q}_2+\vec{q}_3+\vec{q}_4+\vec{q}_5-4\vec{q}_1)^2}{10}\Big]\biggr\}
\nonumber \\
&\times&
Y_{1\mu}\biggl(\frac{\vec{q}_2+\vec{q}_3+\vec{q}_4
+\vec{q}_5-4\vec{q}_1}{\sqrt{20}}\biggr) \; .
\end{eqnarray}
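The arguments of the Gaussians above are, up to normalization, the standard Jacobi coordinates for equal constituent masses. As a quick consistency check, the bracket in the exponent of the pentaquark wave function is proportional to the permutation-symmetric sum of squared momentum differences; the following sympy sketch verifies this (one Cartesian component suffices, since the identity holds componentwise):

```python
import sympy as sp

q1, q2, q3, q4, q5 = sp.symbols('q1:6')
q = [q1, q2, q3, q4, q5]

# Bracket appearing in the 5-quark Gaussian (one Cartesian component)
bracket = ((q2 - q3)**2
           + (q2 + q3 - 2*q4)**2 / 3
           + (q2 + q3 + q4 - 3*q5)**2 / 6
           + (q2 + q3 + q4 + q5 - 4*q1)**2 / 10)

# Permutation-invariant sum over all quark pairs
pair_sum = sum((q[i] - q[j])**2 for i in range(5) for j in range(i + 1, 5))

# The bracket equals (2/5) of the pair sum, hence it is fully symmetric
print(sp.simplify(bracket - sp.Rational(2, 5) * pair_sum))
```

The analogous identity for the 3-quark Gaussian reads $(\vec{q}_j-\vec{q}_k)^2 + (\vec{q}_j+\vec{q}_k-2\vec{q}_i)^2/3 = \tfrac{2}{3}\sum_{\rm pairs}(\vec{q}_a-\vec{q}_b)^2$.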
By choosing the plane wave basis for the relative motion of the
proton and antiproton, the initial state wave functions in the
center of momentum system
($\vec{k}=\vec{q}_1+\vec{q}_2+\vec{q}_3+\vec{q}_4+\vec{q}_5$) are
obtained as:
\begin{equation}
\langle \vec{q}_{1}\cdots\vec{q}_{8}|(uuds\bar{s})\otimes
(\bar{u} \bar{u} \bar{d})\rangle=
\varphi_{uuds\bar{s},\bar p}[\chi_{uuds\bar{s}}\otimes\chi_{\bar p}]_{S,S_z}
\end{equation}
with
\begin{equation}
\varphi_{uuds\bar{s},\bar p}=\varphi_{uuds\bar{s}}\varphi_{\bar
p}\delta^{(3)}(\vec{q}_1+\vec{q}_2
+\vec{q}_3+\vec{q}_4+\vec{q}_5-\vec{k})
\delta^{(3)}(\vec{q}_6+\vec{q}_7 +\vec{q}_8+\vec{k})~.
\end{equation}
The spins of the $p\bar p$ system are coupled to the total spin $S$
with projection $S_z$. Similarly, the final state $\phi X $ wave
functions in the center of momentum system are given by
($\vec{q}=\vec{q}_{1'}+\vec{q}_{2'}$):
\begin{equation}
\langle \phi X|\vec{q}_{1'}...\vec{q}_{4'}\rangle=\varphi_{\phi,X}[\chi_{\phi }\otimes\chi_{X}]_{j_i,m_{\epsilon }}
\end{equation}
with
\begin{equation}
\varphi_{\phi,X}=\varphi_{\phi}\varphi_{X}\delta^{(3)}
(\vec{q}-\vec{q}_{1'}-\vec{q}_{2'})
\delta^{(3)}(\vec{q}+
\vec{q}_{3'}+\vec{q}_{4'})~.
\end{equation}
The spins of the two meson states are coupled to $j_i $
with projection $m_{\epsilon } $.
In the low-momentum approximation, the transition amplitude $T_{fi}$
of the annihilation reaction of the $S$-wave $\overline pp$ initial
state $i $ to the $P$-wave two-meson final state $f $ with the quark
line diagrams $A_I$ as shown in Fig. 1 is derived as
\begin{equation}\label{T-1}
T_{fi}(\vec{q},\vec{k})=\lambda_{A_I}F_{L=0,\ell_f=1}q \,
{\rm exp} \left\{ -Q^2_q q^2 -Q^2_k k^2\right\} \langle f | O_{A_I}|i \rangle
\end{equation}
The index $i $ represents the initial state $^{2I+1,2S+1} L_J$ where
$L $ is the orbital angular momentum, $S$ is the total spin, $J $ is
the total angular momentum and $I$ is the total isospin. The final
state $f $ is represented by the set of quantum numbers $f=\{ \ell_f j
J'\}$ where $\ell_f $ is the relative orbital angular momentum. The
constants $F_{0,1}$, $Q^2_q$ and $Q^2_k$ are geometrical constants
depending on the radial parameters. The matrix element $\langle f |
O_{A_I}|i \rangle$ is the spin-flavor weight for a quark line
diagram $A_I$. The detailed evaluation of the expression in
Eq.~(\ref{T-1}) is given in Appendix A. Since in the particle
basis $p\bar p$ and $n\bar n$
give the same spin-flavor weight, $\phi$ production from
nucleon-antinucleon annihilation at rest can be described by the
transition amplitude of Eq.~(\ref{T-1}) multiplied by a factor
$\sqrt{2}$.
As we consider $p\bar{p}$ annihilation at rest, where the strong
interaction between the proton and antiproton may largely distort
the $\overline pp$ hydrogen-like wave function at small distances
\cite{Yan:1997yi}, the effect of the initial-state interaction is in
general not negligible. The inclusion of the initial-state interaction
for the atomic state of the $p\bar{p}$ system results in the
transition amplitude \cite{Kercek:1999sc},
\begin{equation}\label{atomic state}
T_{f,LSJ}(\vec{q})=\int d^3k ~ T_{fi}(\vec{q},\vec{k}) \phi ^I_{LSJ}(\vec{k} ),
\end{equation}
where $\phi ^I_{LSJ}(\vec{k} ) $ is the protonium wave function in
momentum space for fixed isospin $I $. The partial decay width for
the transition of the $p\bar{p} $ state to the two-meson state $\phi
X $ is given by
\begin{equation}
\Gamma_{p\bar{p}\rightarrow \phi X}=\int\frac{d^3p_\phi}{2E_\phi}\frac{d^3p_X}{2E_X}\delta ^{(3)}(\vec{p}_\phi +\vec{p}_X)\delta(E-E_\phi-E_X )|T_{f,LSJ}(\vec{q})|^2
\end{equation}
where $E$ is the total energy ($E=1.876$ GeV) and $E_{\phi ,X}=
\sqrt{m^2_{\phi ,X}+\vec{p}^{\,2}_{\phi ,X}} $ is the energy of the outgoing
meson $\phi$ or $X$ with mass $m_{\phi ,X}$ and momentum
$\vec{p}_{\phi ,X}$. With the explicit form of the transition
amplitude given by Eq.~(\ref{T-1}), the partial decay width
for the S to P transition ($L=0 $, $ \ell_f=1 $) is written as
\begin{equation}\label{decay width}
\Gamma_{p\bar{p}\rightarrow \phi X}=\lambda_{A_I}^2f(\phi ,X)\langle f | O_{A_I}|i \rangle ^2 \gamma(I,J) ,
\end{equation}
with
\begin{equation}
\gamma(I,J)= |F_{0,1} \int d^3 k ~\phi ^I_{LSJ}(\vec{k} ) { \rm exp} \left\{ -Z^2_\gamma k^2\right\}|^2
\end{equation}
and the kinematical phase-space factor defined by
\begin{equation}\label{phase-space factor}
f(\phi ,X)=2\pi\frac{E_\phi E_X}{E }q^3 {\rm exp} \left\{ -2Z^2_\alpha q^2 \right\}.
\end{equation}
The spin-flavor weights $\langle f | O_{A_I}|i \rangle$ for the transitions $N\bar N \to \phi X$
involving the different 5-quark components of the proton wave functions are listed in Table II.
\begin{table}
\caption{Spin-flavor matrix elements $\langle f | O_{A_I}|i \rangle$
for the transitions $ p\bar{p}(L=0)\rightarrow\phi X(\ell_f=1)$
which are described by the quark line diagram $A_I$. Here $\eta_{ud}$
refers to the nonstrange flavor combination
$\eta_{ud}=(u\bar{u}+d\bar{d})/\sqrt{2}$.}\label{SFtable}
\begin{center}
\begin{tabular}{c c c c c}
\hline
Transition & $s\bar{s}_{A_1}$ & ChQM
&$[31][31][22]_{A_1}$ & $[31][211][22]_{A_1}$
\\
\hline
&&&\\
$^{11}S_0$$\rightarrow\omega\phi$& $\frac{5}{9\sqrt{6}}$& -0.097& $\frac{5}{36\sqrt{6}}$& $\frac{5}{36\sqrt{6}}$\\
&&&\\
$^{33}S_1$$\rightarrow\pi^0\phi$& $\frac{5}{27\sqrt{2}}$& 0.031&$\frac{5}{108\sqrt{2}}$& $\frac{5}{108\sqrt{2}}$\\
&&&\\
$^{31}S_0$$\rightarrow\rho^0\phi$& $\frac{13}{27\sqrt{6}}$&0.040& $\frac{13}{108\sqrt{6}}$& $\frac{13}{108\sqrt{6}}$\\
&&&\\
$^{13}S_1$$\rightarrow\eta_{ud}\phi$& $\frac{1}{9\sqrt{2}}$& 0.013& $\frac{1}{36\sqrt{2}}$& $\frac{1}{36\sqrt{2}}$\\
&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
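A feature of Table II worth noting is that the pentaquark weights are exactly $1/4$ of the corresponding $s\bar{s}$-model weights in every channel. Since the partial widths scale as $\lambda_{A_1}^2\langle f|O_{A_1}|i\rangle^2$, this channel-independent factor is absorbed into the normalization of $\lambda_{A_1}$, so the $s\bar s$ and pentaquark models end up with identical branching-ratio predictions. A minimal numerical check (Python, with the weights copied from the table):

```python
import math

# Spin-flavor weights <f|O_{A1}|i> of Table II: s s-bar admixture model
w_ss = {'omega phi':  5 / (9 * math.sqrt(6)),
        'pi0 phi':    5 / (27 * math.sqrt(2)),
        'rho0 phi':   13 / (27 * math.sqrt(6)),
        'eta_ud phi': 1 / (9 * math.sqrt(2))}

# ... and for the two pentaquark configurations (both columns coincide)
w_penta = {'omega phi':  5 / (36 * math.sqrt(6)),
           'pi0 phi':    5 / (108 * math.sqrt(2)),
           'rho0 phi':   13 / (108 * math.sqrt(6)),
           'eta_ud phi': 1 / (36 * math.sqrt(2))}

# Each ratio is exactly 1/4; widths scale as (weight)^2, so the constant
# factor drops out once one channel is normalized to data.
ratios = {ch: w_penta[ch] / w_ss[ch] for ch in w_ss}
print(ratios)
```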
For initial states with total angular momentum $J$, the statistical
weights $1/4$ and $3/4$ have to be included for $J=0$ and $J=1$,
respectively. The branching ratio of $S$-wave $p\bar{p}$
annihilation to the final state $\phi X$ is then given by
\begin{equation}\label{BR}
BR(\phi,X)=\frac{(2J+1 )\Gamma_{p\bar{p}\rightarrow \phi X}}{4\Gamma_{tot}(J)},
\end{equation}
where $\Gamma_{tot}(J) $ is the total annihilation width of the
$p\bar{p} $ atomic state with fixed principal quantum number
\cite{Dover:1991mu}.
The model dependence in Eq.(\ref{decay width}) may be reduced by
choosing a simplified phenomenological approach that has been
applied in studies of two-meson branching ratios in
nucleon-antinucleon \cite{Kercek:1999sc} and radiative protonium
annihilation \cite{Gutsche:1998fc}. Namely, instead of the phase-space
factor in Eq.~(\ref{phase-space factor}), which depends on the
relative momentum and the masses of the $\phi X$ system, we use a
kinematical phase-space factor of the form
\begin{equation}\label{f-function}
f(\phi,X)=q\cdot {\rm exp}\{-a_s\,(s-s_{\phi X})^{1/2}\}
\end{equation}
where $a_s=1.2$ GeV$^{-1}$, $s_{\phi X}=(m_{\phi}+m_{X})^{2}$ and
$\sqrt{s}=(m_{\phi}^2+q^2)^{1/2}+(m_{X}^2+q^2)^{1/2} $. The latter form
is obtained from a fit to the momentum dependence of the cross sections of various
annihilation channels \cite{Vandermeulen:1988hh}. In addition, the
functions $\gamma(I,J) $, depending on the initial-state
interaction, are related to the probability for a protonium state to
have isospin $I$ and spin $J $ with the normalization condition
$\gamma(0,J)+\gamma(1,J)=1 $. Here we adopt for a protonium state
the probability $\gamma(I,J)$ and the total decay width
$\Gamma_{tot}(J)$ obtained in an optical potential calculation
\cite{Carbonell:1989cs}, with explicit values listed
in \cite{Dover:1991mu}.
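For orientation, the kinematics entering these factors can be sketched numerically. The snippet below (PDG-style meson masses are assumed here for illustration, and the threshold value is taken as $s_{\phi X}=(m_\phi+m_X)^2$ for dimensional consistency) computes the two-body momentum $q$ fixed by energy conservation and the corresponding Vandermeulen factor:

```python
import math

# Meson masses in GeV (PDG-style values, assumed for illustration)
m_phi = 1.0195
m_X = {'pi0': 0.1350, 'eta': 0.5479, 'rho0': 0.7755, 'omega': 0.7827}
E = 1.876      # total energy of the p p-bar system at rest
a_s = 1.2      # GeV^-1, slope parameter of the kinematical factor

def cm_momentum(E, m1, m2):
    """Two-body momentum fixed by delta(E - E_1 - E_2) in the c.m. frame."""
    return math.sqrt((E**2 - (m1 + m2)**2) * (E**2 - (m1 - m2)**2)) / (2 * E)

def f_vdm(mX):
    """f = q exp(-a_s sqrt(s - s_th)), with s_th = (m_phi + m_X)^2 (assumed)."""
    q = cm_momentum(E, m_phi, mX)
    return q * math.exp(-a_s * math.sqrt(E**2 - (m_phi + mX)**2))

for name, m in m_X.items():
    q = cm_momentum(E, m_phi, m)
    print(f"phi {name}: q = {q:.3f} GeV, f = {f_vdm(m):.3f}")
```

As expected, the lighter the recoiling meson $X$, the larger the available momentum $q$, with $\phi\pi^0$ the most open channel.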
In Table III we give the theoretical results for the branching ratios
of Eq.~(\ref{BR}) compared with experimental data. The branching
ratios $BR^{s\bar{s}}$, resulting from the first model where the
proton wave function has an explicit $s\bar s$ admixture, have already been
derived and studied in Ref.~\cite{Gutsche:1997gy} by using the same approach.
Annihilation processes in the first and third models are described by the
quark line diagram $A_1$. Since the effective strength parameter
$\lambda_{A_1}$ is a priori unknown, it has to be adjusted to data.
For this purpose one entry (indicated by $\star$) is normalized to the
observed value.
For the second chiral model where the proton wave function contains a
kaon-hyperon or eta-proton cluster component, all three quark line
diagrams may have contributions to the $\overline pp$ annihilation
process. However, the process proceeding by the diagram $A_1$ with the
$|p\eta\rangle$ component in the proton wave function has no
contribution to the transition because of orthogonality to the $\phi $ state.
Therefore, the annihilation process in the second model can only be
described by the quark line diagrams $A_2$ and $A_3$. Since these two
diagrams involve the same annihilation pattern, for simplicity the two
unknown strength parameters are set equal, $\lambda_{A_2}= \lambda_{A_3}$. Model
predictions are also normalized to experimental data (as indicated
by $\star $). For final states with $X=\eta $, the physical $\eta $
meson is produced by its nonstrange component $\eta_{ud} $ with
$\eta = \eta_{ud}(\sqrt{1/3}\cos\theta - \sqrt{2/3}\sin\theta) $
corresponding to a variation of the pseudoscalar mixing angle $\theta $
from $\theta=-10.7^\circ$ to $\theta =-20^\circ$.
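The effect of this variation can be made explicit with a short script (an illustration, not from the paper): the rate through the $\eta_{ud}$ component scales with the square of the mixing coefficient, which grows by roughly 30\% between the two angles, roughly consistent with the spread of the $\eta\phi$ entries in Table III.

```python
import math

def eta_nonstrange_coeff(theta_deg):
    """Coefficient of the nonstrange component eta_ud in the physical eta:
    c(theta) = sqrt(1/3)*cos(theta) - sqrt(2/3)*sin(theta)."""
    th = math.radians(theta_deg)
    return math.sqrt(1.0 / 3.0) * math.cos(th) - math.sqrt(2.0 / 3.0) * math.sin(th)

c1 = eta_nonstrange_coeff(-10.7)
c2 = eta_nonstrange_coeff(-20.0)
# Branching ratios through the eta_ud component scale like c(theta)^2.
print(c1, c2, (c2 / c1) ** 2)
```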
\begin{table}\label{BRtable}
\caption{Branching ratio $BR(\times 10^{4})$ for the transition
$p\bar{p}\rightarrow \phi X$ ($X =\pi^0,\eta,\rho^0,\omega$) in
$p\bar{p}$ annihilation at rest. The results indicated by $\star$
are normalized to the experimental values.}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
Transition & BR$^{\rm exp}$ & BR$^{s\bar{s}}$
& BR$^{\rm ChQM}$ & BR$^{[31][31][22]}$ & BR$^{[31][211][22]}$\\
\hline
&&&\\
$^{11}S_0$$\rightarrow\omega\phi$ & 6.3$\pm$2.3 & 6.3 $\star$ & 6.3 $\star$ & 6.3 $\star$ & 6.3 $\star$\\
&&&\\
$^{33}S_1$$\rightarrow\pi^0\phi$& 5.5 $\pm$ 0.7& 5.4 & 1.6 & 5.4 & 5.4\\
&&&\\
$^{31}S_0$$\rightarrow\rho^0\phi$& 3.4 $\pm$ 1.0 & 3.8 & 0.87 & 3.8 & 3.8\\
&&&\\
$^{13}S_1$$\rightarrow\eta\phi$& 0.9 $\pm$ 0.3& 1.4$-$1.8 & 0.20$-$0.27 & 1.4$-$1.8 & 1.4$-$1.8\\
&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
As shown in Table III, the theoretical results of the first and
third models, where the proton wave function possesses respectively
a small kaon-hyperon component and a pentaquark, are in good
agreement with the experimental data. Note that for these two cases
the annihilation processes $p\bar{p}\rightarrow \phi X$ are
described with the quark line diagram $A_1$.
\section{Summary}
Three models have been studied for the proton involving intrinsic
strangeness in the form of a 5-quark component $qqqs\bar{s}$ in the
wave function. In particular, the proton wave function is made up of
a $uud$ configuration and a $uud$ cluster with a $s\bar{s}$
sea-quark component, kaon-hyperon clusters based on the simple
chiral quark model, or a pentaquark component $uuds\bar s$. We have
calculated the strangeness magnetic moment $\mu_s $ and spin
$\sigma_s $ for the first and second models, obtaining negative
values in line with recent experimental indications. Similarly, for
the third model we select those configurations for which negative values
for $\mu_s$ and $\sigma_s$ result~\cite{An:2005cj}.
We further applied quark line diagrams supplemented by the $^3P_0$ vertex
to study the annihilation reactions $p \bar{p}\rightarrow \phi X$
($X=\pi^0,\eta,\rho^0,\omega$) with the three types of proton wave
functions. Excellent agreement of the predictions of the
first and third models with the experimental data is found for the
branching ratios of the reactions of the $L=0$ atomic $p \bar{p}$
state to $\phi X$ ($X=\pi^0,\eta,\rho^0,\omega$).
{\bf Acknowledgements} {\\ \small This work was supported by the DFG
under Contract No. FA67/31-2. This research is also part of the
European Community-Research Infrastructure Integrating Activity
``Study of Strongly Interacting Matter'' (acronym HadronPhysics2,
Grant Agreement No. 227431) and part of the Federal Targeted Program
``Scientific and scientific-pedagogical personnel of innovative
Russia'' Contract No. 02.740.11.0238. We also acknowledge the
generous help of Chun-Sheng AN for providing us with the proton wave
function with the 5-quark component in $uuds$ subsystem used in this
paper. The stay in T\"ubingen of Sorakrai Srisuphaphon
was supported by the DAAD under PKZ:A/07/98879, and the study at SUT
was supported by Burapha University. }
\numberwithin{equation}{section}
\begin{appendix}
\section{Transition amplitudes of the annihilation processes $p \bar{p}\rightarrow \phi X$}
To describe the annihilation process $p \bar{p}\rightarrow \phi X$
where $X=\pi^0,\eta,\rho^0,\omega$ with the proton wave function
with $s\bar{s}$ sea quark we consider the shake-out of the intrinsic
$s\bar{s}$ component of the proton wave function as indicated in the
diagram $A_1$. With the operator $\mathcal{O}_{A_1} $ and the full
account of the spin-flavor-color-orbital structure of the initial
and final states, the transition amplitude can be written as
\begin{eqnarray}{\label{T-ss}}
T_{if}^{s\bar{s}} =\lambda_{A_1}\langle f| \sum_{\nu,\lambda} (-1)^{\nu+\lambda} \sigma^{56}_{-\nu}\sigma^{47}_{-\lambda}1^{56}_F1^{47}_F1^{56}_C1^{47}_C I_{spatial}^{s\bar{s}} |i\rangle \; ,
\end{eqnarray}
where
\begin{eqnarray}\label{state-i}
|i\rangle=|\{ \chi_{\frac{1}{2},m_{ps\bar{s }}}(uuds\bar{s })\otimes
\chi_{\frac{1}{2},m_{\bar{p }}}
(\bar{u}\bar{u}\bar{d})\}_{S,S_z}\otimes (L,M)\rangle_{J,J_z},
\end{eqnarray}
\begin{eqnarray}\label{state-f}
|f\rangle=| \{\chi_{1,m_\alpha}(\phi)\otimes
\chi_{j_m,m_{3',4'}}(X)\}_{j,m_\epsilon }\otimes
(\ell_f,m_f)\rangle_{J,J_z}.
\end{eqnarray}
The spin-flavor-color content of the clusters is denoted by $\chi
\equiv \chi_{\sigma}\otimes\chi_F\otimes\chi_C$. The 5-quark
component $ \chi_{\frac{1}{2},m_{ps\bar{s }}} (uuds\bar{s })$ is
defined as
\begin{equation}
\chi_{\frac{1}{2},m_{ps\bar{s }}} (uuds\bar{s })=|
\{\chi_{j_s,m_s}(s\bar{s})\otimes (\ell=1,\mu)\}_{j_i,m_i }\otimes \chi_{\frac{1}{2},m_p }(uud) \rangle_{\frac{1}{2},m_{ps\bar{s }}}\;.
\end{equation}
The spatial amplitude $I_{spatial}^{s\bar{s }}$ is explicitly given
by
\begin{equation}
I_{spatial}^{s\bar{s }}=\int d^3q_1 ...d^3q_8 d^3q_{1'}...d^3q_{4'} \varphi_{\phi,X }\mathcal{O}_{A_1}^{spatial } \varphi_{uuds\bar{s },\bar{p}}
\end{equation}
where
\begin{eqnarray}
\mathcal{O}_{A_1}^{spatial } =Y_{1\lambda}(\vec{q}_4-\vec{q}_{7})\delta^{(3)}(\vec{q}_4+\vec{q}_{7})Y_{1\nu}(\vec{q}_5-\vec{q}_{6})\delta^{(3)}(\vec{q}_5+\vec{q}_{6})~~~~~
\nonumber \\ \delta^{(3)}(\vec{q}_1-\vec{q}_{1'}) \delta^{(3)}(\vec{q}_2-\vec{q}_{2'})\delta^{(3)}(\vec{q}_3-\vec{q}_{3'})\delta^{(3)}(\vec{q}_8-\vec{q}_{4'}).
\end{eqnarray}
Partial wave amplitudes can be obtained by projecting the transition
amplitude onto the partial waves, where $L=0$ and $\ell_f=1$
correspond to $\overline pp$ annihilation at rest. In the
low-momentum approximation the integrals can be done analytically,
and the partial wave amplitude in the leading order of the external
momenta $q$ is given by
\begin{eqnarray}\label{I-ss}
I_{spatial,L=0,l_f=1}^{s\bar{s}}=qF_{0,1}^{s\bar{s}}f^{s\bar{s}}_{0,1}(\nu,\lambda,\mu,m_f){\exp}\left\{
-Q^2_q q^2 -Q^2_k k^2\right\} \;.
\end{eqnarray}
The geometrical constant $F_{0,1}^{s\bar{s}} $ and the spin-angular momentum function $f_{0,1}^{s\bar{s}}(\nu,\lambda,\mu,m_f) $ are given by
\begin{eqnarray}\label{I-ss1}
F_{0,1}^{s\bar{s}}=2 N \pi ^2 \left(\frac{1}{Q_{p_2}^2}\right)^{3/2}
\left(\frac{3
\sqrt{\pi }}{\left(Q_{p_4}^2\right){}^{5/2}}-\frac{3 \sqrt{\pi }}{4
\left(Q_{p_3}^2\right){}^{5/2}}\right),
\nonumber \\ f_{0,1}^{s\bar{s}}(\nu,\lambda,\mu,m_f)=(-1)^{\nu}\delta_{\nu,-\lambda}\delta_{\mu,m_f},~~~~~~~~~~~~
\end{eqnarray}
where $N=N_\phi N_X N_{uuds\bar{s} } N_{\bar{p} }$, and the
coefficients in the exponential expression depend on the meson and
baryon size parameters:
\begin{eqnarray}
Q_k^2&=&\frac{4 R_M^2 R_B^2+9 R^2 R_B^2+3 R_M^2
R^2}{24 \left(R_M^2+3 R_B^2\right)},
\nonumber\\Q_q^2&=&\frac{12 R_B^4+5 R_M^2 R_B^2+36 R^2 R_B^2+12 R_M^2
R^2}{24 \left(R_M^2+3 R_B^2\right)},
\nonumber\\Q_{p_2}^2&=&R_M^2,~Q_{p_3}^2=\frac{1}{2} \left(R_M^2+3 R_B^2\right),
~Q_{p_4}^2=2 R_B^2.
\end{eqnarray}
By using the spatial wave amplitude $I_{spatial}^{s\bar{s }}$ we
obtain the transition amplitude $T_{if}^{s\bar{s}} $ taking the form
as in Eq.~(\ref{T-1}) with the spin-color-flavor weight:
\begin{equation}
\langle f | O_{A_1}|i \rangle=\langle f| \sum_{\nu,\lambda} (-1)^{\nu+\lambda} \sigma^{56}_{-\nu}\sigma^{47}_{-\lambda}1^{56}_F1^{47}_F 1^{56}_C1^{47}_C (-1)^{\nu}\delta_{\nu,-\lambda}\delta_{\mu,m_f}|i\rangle.
\end{equation}
According to the $^3P_0$ quark model the matrix element $ \langle f
| O_{A_1}|i \rangle$ can be evaluated by using the two-body matrix
elements for spin, flavor and color given by
\begin{equation}\label{3p0-spin}
\langle 0 |\sigma^{ij }_\upsilon | \chi^{{J_{ij }}}_{m_{ij }}(ij) \rangle=\delta_{J_{ij },1}\delta_{m_{ij },-\upsilon}(-1)^\upsilon\sqrt{2},
\end{equation}
\begin{equation}\label{3p0-flavor}
\langle 0 |1^{ij }_F | \chi^{{T_{ij }}}_{t_{ij }}(ij) \rangle=\delta_{T_{ij },0}\delta_{t_{ij },0}\sqrt{2},
\end{equation}
and
\begin{equation}\label{3p0-color}
\langle0|1^{ij}_{C}|q_{\alpha}^i\bar{q}_\beta^j\rangle=\delta_{\alpha\beta},
\end{equation}
where $\alpha$ and $\beta$ are the color indices. The
spin-color-flavor weights $ \langle f | O_{A_1}|i \rangle$ are
evaluated for various transitions, as listed in Table II.
In the case of the simple chiral quark model the annihilation processes are
described by the quark line diagrams $A_2$ and $A_3$.
Then the transition amplitude is set up as
\begin{equation}\label{T-ChQM-1}
T^{ChQM}_{if}=T^{\rm ChQM}_{if}(\mathcal{O}_{A_2})
+T^{\rm ChQM}_{if}(\mathcal{O}_{A_3}),
\end{equation}
where the corresponding transition amplitudes for the two quark line
diagrams are given by
\begin{eqnarray}\label{T-A2}
T^{\rm ChQM}_{if}(\mathcal{O}_{A_2})
= \lambda_{A_2}\langle f| \sum_{\nu,\lambda} (-1)^{\nu+\lambda}
\sigma^{56}_{-\nu}\sigma^{47}_{-\lambda}1^{56}_F1^{47}_F
1^{56}_C1^{47}_C I_{spatial,A_2}^{\rm ChQM} |i\rangle
\end{eqnarray}
and
\begin{eqnarray}\label{T-A3}
T^{\rm ChQM}_{if}(\mathcal{O}_{A_3})
=\lambda_{A_3}\langle f| \sum_{\nu,\lambda} (-1)^{\nu+\lambda}
\sigma^{56}_{-\nu}\sigma^{17}_{-\lambda}1^{56}_F1^{17}_F 1^{56}_C1^{17}_C
I_{spatial,A_3}^{\rm ChQM} |i\rangle.
\end{eqnarray}
The initial state $|i\rangle$ and the final state $|f\rangle$ take
the same form as defined in Eq.~(\ref{state-i}) and
Eq.~(\ref{state-f}), but the 5-quark component in this case is given
by
\begin{equation}\label{5q-ChQM-chi}
\chi_{\frac{1}{2},m_{KY}}(uuds\bar{s })=\sum^3_{i=1}G_i|
\{\chi^i_{j_s,m_s}(q\bar{s})\otimes (\ell=1,\mu)\}_{j_i,m_i }\otimes
\chi^i_{\frac{1}{2},m_Y}(qqs) \rangle_{\frac{1}{2},m_{KY}},
\end{equation}
where $i=1,2,3$ represent the kaon-hyperon clusters $K^+\Sigma^0$,
$K^0\Sigma^+$ and $K^+\Lambda^0$, respectively, and the coefficients
$G_i$ are as defined in Eq.~(\ref{5q-ChQM}).
In the low-momentum approximation the partial wave amplitude from
each of the quark line diagrams $A_2$ and $A_3$ in leading order
of the external momentum $q$ takes the general form as in
Eq.~(\ref{I-ss}) but with different coefficients. In order to combine
the two transition amplitudes, we choose the radial parameters for
the baryons and mesons as $R_B=3.1$~GeV$^{-1}$, $R_M=4.1$~GeV$^{-1}$
\cite{Gutsche:1997gy} and the size parameter between the two quark
clusters as $R=4.1$~GeV$^{-1}$. Then the total transition
amplitude of Eq.~(\ref{T-ChQM-1}) becomes
\begin{equation}\label{T-ChQM-A2+A3}
T^{\rm ChQM}_{if}=\lambda_{\rm ChQM}F^{\rm ChQM}_{0,1}q~ {\rm exp}
\left\{ -Z^2_q q^2 -Z^2_k k^2\right\} \langle f | O_{\rm ChQM}|i \rangle,
\end{equation}
where $F^{\rm ChQM}_{0,1}=4.9\times10^{-4}$~GeV$^{-11}$, $Z_q\simeq2.3$~GeV$^{-1}$
and $Z_k\simeq1.3$~GeV$^{-1}$, and
$\lambda_{A_2}=\lambda_{A_3}=\lambda_{\rm ChQM}$. The total
spin-color-flavor weight $ \langle f | O_{\rm ChQM}|i \rangle$ is
calculated with the spin-angular momentum wave functions in Eq.
(\ref{5q-ChQM-chi}) and its elements are derived as
\begin{equation}
f_{0,1}^{\rm ChQM}=-(-1)^\nu\delta_{\nu,-\lambda}\delta_{\mu,m_f}+2
(-1)^\mu\delta_{\mu,-\nu}\delta_{\lambda,m_f}
+2(-1)^\lambda\delta_{\mu,-\lambda}\delta_{\nu,m_f}~.
\end{equation}
Finally we discuss the third model where the proton wave function
includes a $5q$ component in the form of a pentaquark configuration.
The $\phi$
production is described by only the quark line diagram $A_1$, and
the transition amplitude takes the same form as Eq.~(\ref{T-ss}) but
the 5-quark component $|uuds\bar{s }\rangle $ is given by
\begin{equation}
\chi_{\frac{1}{2},m_{ps\bar{s }}} (uuds\bar{s })=|
\{\chi_{1/2,m_{\bar{s}}}(\bar{s})\otimes (\ell=1,\mu)\}_{j_i,m_i
}\otimes \chi_{s,s_z}(uuds) \rangle_{\frac{1}{2},m_{ps\bar{s }}}.
\end{equation}
In the low-momentum approximation, the partial wave amplitude
for the transition of the $S$-wave $\overline pp$ state to the
$P$-wave two-meson final states takes the same form as
Eq.~(\ref{I-ss}). The spin-angular momentum function
$f_{0,1}^{s\bar{s}}(\nu,\lambda,\mu,m_f)$ is also the same as the
one in Eq.~(\ref{I-ss1}) but the corresponding geometrical constant
is given by
\begin{eqnarray}
F_{0,1}=-\frac{3}{16} \sqrt{5} N \pi ^4
\left(\frac{1}{Q_{p_2}^2}\right)^{3/2}
\left(\frac{\left(\frac{1}{Q_{p_4}^2}\right)^{3/2}}{\left(Q_{p_3}^
2\right)^{5/2}}-\frac{4
\left(\frac{1}{Q_{p_3}^2}\right)^{3/2}}{\left(Q_{p_4}^2\right)^{
5/2}}\right),
\end{eqnarray}
with the constants depending on the baryon and meson size
parameters:
\begin{eqnarray}
&\;&Q_k^2=\frac{7 R_B^2}{30}-\frac{R_B^4}{2 \left(3 R_B^2+R_M^2\right)},
~Q_q^2=\frac{1}{8} R_B^2 \left(5-\frac{R_B^2}{3 R_B^2+R_M^2}\right),
\nonumber\\ &\;& Q_{p_2}^2=R_B^2+\frac{R_M^2}{2},
~Q_{p_3}^2=\frac{1}{2} \left(3 R_B^2+R_M^2\right),
~Q_{p_4}^2=2 R_B^2.
\end{eqnarray}
\end{appendix}
\section{Introduction}
Hurwitz theory is one of the rapidly developing branches
of modern mathematical physics \cite{Hurfirst}-\cite{I}.
It has its origins in the enumeration problem
of ramified coverings of $CP^1$,
and it was brought into the modern context
by the formula due to Frobenius \cite{Dijk}:
\be
{\rm Cover}_n(\Delta_1,\ldots,\Delta_m)
= \sum_{R} d^2_R \varphi_R(\Delta_1)\ldots\varphi_R(\Delta_m)
\delta_{|R|,n}
\label{Frof}
\ee
expressing the covering multiplicities through the quantities
$\varphi_R(\Delta)$ proportional to the characters of the symmetric group $S_n$
\cite{Mac}.
Here $\Delta_i$ is the Young diagram (integer partition),
characterizing the type (conjugation class) of ramification point,
all sizes of the diagrams being the same and equal to
the number of sheets in the covering, $|\Delta_1| = \ldots =
|\Delta_m|=n$,
and the sum runs over the Young diagrams $R$ of the same size
$|R|=n$, but this time they label representations of the symmetric
group.
The symmetric group characters are among the most important objects
in combinatorics, more sophisticated than the
$GL(\infty)$ characters $\chi_R(t)$, which are very well known and widely used
in physical applications \cite{Ham}.
Still, the two sets of characters are directly related by the Frobenius formula,
which is a sort of Fourier transform \cite{Mac},
\be
\chi_R(t) = \sum_\Delta d_R\varphi_R(\Delta) p(\Delta)
\delta_{|\Delta|,|R|}
\label{chiphitr}
\ee
where $\Delta = [\ldots\geq\delta_2\geq\delta_1] =
[\ldots , 3^{m_3},2^{m_2},1^{m_1}]$, its size
$|\Delta| = \sum_k \delta_k = \sum_k km_k$,
the time monomial
\be
p(\Delta) = \prod_k p_{\delta_k} = \prod_k p_k^{m_k}
\ee
and $p_k = kt_k$.
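The bookkeeping here is elementary but worth fixing conventions on; a tiny sketch (the helper names are hypothetical) checks that the two ways of computing $|\Delta|$ agree:

```python
from collections import Counter

def multiplicities(delta):
    """m_k = number of parts of length k in the partition delta."""
    return Counter(delta)

def size(delta):
    """|Delta| = sum of the parts delta_k."""
    return sum(delta)

delta = [3, 2, 2, 1]
m = multiplicities(delta)
# |Delta| = sum_k delta_k = sum_k k * m_k
assert size(delta) == sum(k * mk for k, mk in m.items())
print(m, size(delta))
```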
These two character sets are further unified through the
concept \cite{GJV} of cut-and-join operators $\hat W(\Delta)$,
commuting
differential operators of degree $|\Delta|$ in $t$-variables,
for which they serve as eigenvectors and eigenvalues respectively
\cite{MMN}:
\be
\hat W(\Delta)\chi_R(t) = \varphi_R(\Delta)\chi_R(t)
\label{WchiI}
\ee
The problem with these operators is that they look rather
complicated (they belong to the class of $W$-operators,
made from powers of the $U(1)$ current).
In \cite{MMN} we explained that the cut-and-join operators acquire
a very simple form if expressed in terms of the matrix Miwa variables
$p_k = \tr\ X^k$, then\footnote{
The combinatorial coefficient $z(\Delta)=
\prod_k \frac{1}{k^{m_k}m_k!}$ used here, which accounts for the order of the
automorphism group of the Young diagram, appears everywhere in
the theory of symmetric functions and of the symmetric group $S(\infty)$.
In particular, the standardly normalized symmetric group characters are
$\hat\chi_R(\Delta) = d_R z(\Delta)\varphi_R(\Delta)$.
They are generated in Maple by the command
${\large Chi(R, \Delta)}$ of the package
{\large combinat}.
}
\be\label{caj}
\hat W_\Delta = \ : \prod_k \frac{1}{k^{m_k}m_k!}\big(\tr \hat D^k\big)^{m_k}\, :
\ee
for the $GL(\infty)$ matrix generator $\hat D = X\frac{\p}{\p X^{\rm tr}}$.
This representation opens a constructive way to evaluation
of the structure constants $C_{\Delta_1\Delta_2}^\Delta$,
which appear in the CAA of cut-and-join operators and,
as a consequence, in the ordinary multiplication algebra
of symmetric group characters (this algebra was earlier considered from
combinatorial point of view in \cite{IK} where it is claimed to be
equivalent to the algebra of shifted symmetric functions of \cite{OO}):
\be
\hat W_{\Delta_1} \hat W_{\Delta_2} =
\sum_\Delta C_{\Delta_1\Delta_2}^\Delta \hat W_\Delta, \nn \\
\varphi_R(\Delta_1)\varphi_R(\Delta_2)
= \sum_\Delta C_{\Delta_1\Delta_2}^\Delta \varphi_R(\Delta)
\label{CAAW}
\ee
It is important in these formulae that the sums over $\Delta$
are not restricted to $|\Delta| = |\Delta_{1,2}|$; moreover,
$|\Delta_1|$ can be different from $|\Delta_2|$.
Note that already in (\ref{WchiI}) there is no restriction to
$|R| = |\Delta|$, and $\varphi_R(\Delta)$'s in this formula
are more general than those in (\ref{chiphitr}). They are defined for
a Young diagram with $r$ unit rows, $[\Delta]=[\tilde\Delta,1^r]$, by adding
additional unit rows until $|R|=|\Delta|$ is reached, in accordance with the rule
\be
\varphi_R([\Delta]) \equiv \left\{\begin{array}{ccc}
0 & {\rm for} & |\Delta|>|R| \\
{(|R|-|\Delta|+r)!\over r!(|R|-|\Delta|)!}\
\varphi_R([\Delta,\underbrace{1,\ldots,1}_{|R|-|\Delta|}])
& {\rm for} & |\Delta|\leq |R|
\end{array}\right.
\label{phiext}
\ee
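The combinatorial factor in this rule is easy to package as a small function; the sketch below (with `extension_factor` a hypothetical helper name, not from the paper) evaluates it:

```python
from math import factorial

def extension_factor(R_size, delta_size, r):
    """Factor (|R|-|Delta|+r)! / ( r! (|R|-|Delta|)! ) relating phi_R([Delta])
    to phi_R([Delta, 1^(|R|-|Delta|)]); returns 0 when |Delta| > |R|."""
    if delta_size > R_size:
        return 0
    k = R_size - delta_size
    return factorial(k + r) // (factorial(r) * factorial(k))

# Example: |R| = 5, [Delta] = [2,1], so |Delta| = 3 with one unit row (r = 1):
# the factor is (5-3+1)! / (1! (5-3)!) = 3.
print(extension_factor(5, 3, 1))
```

Note that for $[\Delta]=[1^p]$ (empty $\tilde\Delta$, $r=p$) the factor reduces to the binomial coefficient $\binom{|R|}{p}$.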
This naturally leads to extension of (\ref{Frof})
by removing the projector $\delta_{|R|,n}$ and lifting the restriction
$|\Delta_1| = \ldots = |\Delta_m| = n$, which defines
what was called {\it generalized} Hurwitz numbers in \cite{MMN}.
Note that the CAA of cut-and-join operators induces the multiplication on the Young diagrams:
\be
\Delta_1 * \Delta_2 =\sum_\Delta C_{\Delta_1\Delta_2}^\Delta \Delta
\ee
This multiplication can be considered as an extension of another, $\circ$-multiplication on the
Young diagrams, given by the composition of permutations and related to the ordinary Hurwitz
numbers. The latter is connected with the $*$-multiplication by
restricting to diagrams of the same size:
\be
\Delta_1 \circ \Delta_2 =\sum_\Delta C_{\Delta_1\Delta_2}^\Delta \Delta\
\delta_{|\Delta_1|,|\Delta|}
\ee
with $|\Delta_1|=|\Delta_2|$. Inversely, one can construct $*$-multiplication from the
$\circ$-one by the procedure described in section 3.
In this paper we discuss one of the immediate implications
of the extension to the generalized Hurwitz numbers: their generating function
is the partition function of a topological field theory
associated with the CAA (\ref{CAAW}) and, hence, satisfies
the WDVV equations of \cite{MMM}.
The ordinary Hurwitz partition functions, based on the ordinary
$\circ$-multiplication in symmetric group $S(n)$, also satisfy
WDVV for each given $n$, but they provide only trivial
solutions (which, however, were not discussed in the
existing literature).
The WDVV equations are imposed on ``quasiclassical $\tau$-functions'',
which are obtained by one or another kind of Whitham
averaging procedure \cite{Whith} from the KP/Toda hierarchy
and a particular set of Riemann surfaces (background) with additional data.
The quasiclassical hierarchies are well studied in the case
when the background is a Riemann sphere, but in the case
of Hurwitz theory it should be different (a Lambert curve,
for example, \cite{mari,mmhk,kaz}), and such hierarchies are not yet described (see,
however, \cite{Takasaki}).
An advantage of the quasiclassical hierarchy would be that
particular equations involve only derivatives
w.r.t. the finite number of time-variables, while the WDVV equations
involve inversion of an infinite size matrix.
Derivation of such reducible equations for Hurwitz partition
functions remains an open problem.
This paper is the second in the series which describes properties of the generating
functions in Hurwitz theory. The first paper, \cite{I}, contains a summary of
integrable properties, which will be described in detail in our next paper,
\cite{III}.
\bigskip
In section 2 we begin by recalling the general construction
of a topological theory for any CAA and explain why its partition
function always satisfies the WDVV equations (their original
form is a little more general than in \cite{MMM}, and far
more general than in \cite{WDVV}, the triple derivative
equations of \cite{MMM} being a direct corollary, but not vice versa!).
Then, in section 3 we discuss the two multiplications: the
$\circ$- and $*$-products. The corresponding multiplication tables can be found in
Appendices I and II.
Knowledge of these tables allows one to examine concrete examples illuminating the
following sections. In section 4 we construct
two types of generating functions which are associated
with two multiplications, and in section 5 the corresponding WDVV equations
satisfied by these
generating functions are discussed.
As a particular example, in sections 6 and 7
we provide details about the $[1^p]$ subring
of the *-algebra, describing cut-and-join operators,
associated with the single row diagrams
(the "complementary" single column operators would instead
generate the entire algebra).
\section{Topological theories and WDVV}
\subsection{Topological theory on sphere (tree level)}
At the tree (string) level (i.e. on the sphere) a topological theory is defined by three
ingredients:
\begin{itemize}
\item a vector space with the basis of ``observables''
$\{\phi_i\}$,
\item an associative and commutative multiplication
\be
\phi_i *
\phi_j = \sum_k C_{ij}^k \phi_k
\ee
\item
and a linear form ($c$-valued function) on this space
$<\phi_i> = K_i$.
\end{itemize}
At the loop (string) level (i.e. on higher genera Riemann surfaces) one also needs to define
traces and impose an additional constraint on the torus $1$-point function (in addition to
associativity and commutativity of the multiplication), but we do not need this in
the present text, which is fully devoted to the tree level topological
theories.
The tree correlators are defined as
\be
K_{i_1\ldots i_n} = <
\phi_{i_1},\ldots, \phi_{i_n}> = < \phi_{i_1}*\ldots *\phi_{i_n} > =
\sum_k C_{i_1\ldots i_n}^k <\phi_k> = \sum_k C_{i_1\ldots i_n}^k K_k
\ee
where the coefficients $C$ are products of the original 3-valent
structure constants $C_{ij}^k$. These correlators are totally
symmetric under permutations of $i_1,\ldots,i_n$.
It is also convenient to introduce ``the bare metric''
\be
G_{ij} \equiv
<\phi_i,\phi_j> = <\phi_i*\phi_j> = \sum_k C_{ij}^k <\phi_k>
\ee
and use it and its inverse $G^{ij}$ to raise and lower indices, in particular, to construct
the totally symmetric tensors
\be
C_{ijk} \equiv <\phi_i,\phi_j,\phi_k> = \sum_m
C_{ij}^mG_{mk} = \sum_m C_{jk}^m G_{mi} = \sum_m C_{ik}^m G_{mj}
\ee
and
\be <\phi_i,\phi_j,\phi_k,\phi_l> = \sum_m C_{ij}^m C_{mkl} =
\sum_m C_{ik}^m C_{mjl} = \sum_m C_{il}^m C_{mjk} = \ldots
\ee
Next one defines the tree partition function
\be
Z[\beta] = \left<
e_*^{\sum_i \beta_i\phi_i} \right> = <E[\beta]> \equiv <<1>>
\ee
where $e_*(\phi) = \sum_n \frac{1}{n!}\underbrace{\phi*\phi*\ldots*\phi}_n$. Then,
\be
C_{ijk} =
\left.\frac{\p^3Z[\beta]}{\p\beta_i\p\beta_j\p\beta_k}\right|_{\beta
= 0}
\ee
\subsection{Deformation by coupling constants $\beta$
and WDVV equations}
One can now introduce deformed, $\beta$-dependent algebra with a
$\beta$-dependent multiplication
\be
\phi_i \hat * \phi_j \equiv
\phi_i *\phi_j *E[\beta] \equiv \sum_k \hat C_{ij}^k \phi_k
\ee
where $E[\beta]$ is a family of elements of the algebra. The new
multiplication
is still commutative and associative:
\be
(\phi_i\hat *\phi_j)\hat *
\phi_k = \phi_i*\phi_j*\phi_k*E[\beta]*E[\beta] = \phi_i \hat *
(\phi_j \hat * \phi_k)
\ee
It is also possible to introduce the deformed
observables $\hat\phi_i = \phi_i * E[\beta]$, then
\be
\hat\phi_i *
\hat\phi_j = \phi_i * \phi_j * E[\beta]*E[\beta] = (\phi_i \hat
*\phi_j) * E[\beta] = \widehat{(\phi_i \hat * \phi_j)} = \sum_k \hat
C_{ij}^k\hat \phi_k
\ee
i.e. the deformed observables also form a commutative associative algebra.
Now one can introduce the $\beta$-dependent correlators:
\be
<<
\phi_{i_1},\ldots,\phi_{i_n} >>\ \equiv\ <\phi_{i_1}*\ldots
*\phi_{i_n}* E[\beta]>
\ee
Then the triple correlator possesses {\it
two} alternative representations (the last two sums in this formula):
\be
\hat C_{ijk} \equiv \frac{\p^3
Z[\beta]}{\p\beta_i\p\beta_j\p\beta_k} =\ <<\phi_i,\phi_j,\phi_k>>\
=\ <\phi_i*\phi_j*\phi_k*E[\beta]>\ = \sum_m \hat C_{ij}^m G_{mk} =
\sum_m C_{ij}^m \hat G_{mk} \label{hatnothat}
\ee
The first representation
is in terms of deformed $\hat C_{ij}^k$ and the {\it bare} metric
$G_{mk} = <\phi_k*\phi_m>$, while the second one is in terms of the
{\it bare} (undeformed) $C_{ij}^k$ and the deformed metric
\be
\hat
G_{mk} \equiv\ <<\phi_m,\phi_k>>\ =\ <\phi_m*\phi_k*E[\beta]>\ =
\frac{\p^2 Z[\beta]}{\p\beta_k\p\beta_m}
\ee
Associativity of
original and deformed algebras is expressed in the commutativity condition
of the structure constants $\left(\check C_i\right)^k_j\equiv C_{ij}^k$
\be\label{assC}
\check C_i\check C_j=\check C_j\check C_i
\ee
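Condition (\ref{assC}) is easy to check numerically in any explicit example. The sketch below (an illustration with an assumed toy algebra, the truncated polynomial ring with basis $\{1, x, x^2\}$ and $x^3=0$, not an algebra from the paper) builds the structure constants and verifies that the matrices $\check C_i$ pairwise commute:

```python
import numpy as np

# Structure constants C[i, j, k] for the basis {1, x, x^2} of C[x]/(x^3):
# basis_i * basis_j = x^(i+j) if i + j < 3, else 0.
dim = 3
C = np.zeros((dim, dim, dim))
for i in range(dim):
    for j in range(dim):
        if i + j < dim:
            C[i, j, i + j] = 1.0

# (check C_i)^k_j = C_ij^k; associativity of this commutative algebra
# is equivalent to pairwise commutativity of these matrices.
check = [C[i] for i in range(dim)]
for A in check:
    for B in check:
        assert np.allclose(A @ B, B @ A)
print("all structure-constant matrices commute")
```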
In its turn, this implies that
\be
\hat C_{ijm} G^{mn}
\hat C_{kln} = \hat C_{ikm} G^{mn} \hat C_{jln}
\ee
and
\be
\hat
C_{ijm} \hat G^{mn} \hat C_{kln} = \hat C_{ikm} \hat G^{mn} \hat
C_{jln} \label{WDVVf}
\ee
which we call the {\it
bare} and {\it full} \cite{MMM,gWDVV} WDVV equations, respectively, for the
partition function $Z[\beta]$ (in the case of the {\it full} WDVV
equation all ingredients are triple derivatives of $Z[\beta]$,
the deformed metric is the triple correlator with the unity operator
$\phi_0 = I$: $\hat G_{mn} = \hat C_{0mn}$).
In some cases the choice of the vector space and observables
$\phi_i$ is not unique. The same algebra may possess different
realizations (representations) and one can ask if the
$\beta$-deformed topological theory respects this freedom. The
problem is very similar to the representation theory of Lie algebras and
to the concept of the universal group elements etc.
\subsection{Hurwitz topological theory}
Consider here an explicit example
when $\phi_i$ has an additional label $R$, which is the variable averaged over in the
mean value $<...>$, and such that the structure constants in the product
\be
\phi_i(R)\phi_j(R) = \sum_k C_{ij}^k \phi_k(R)
\ee
do not depend on $R$.
An example of such
topological theory is provided by the theory of Hurwitz numbers,
and the role of index $R$ can be played by different
structures, for instance, the Young diagrams in the Frobenius formula
(\ref{Frof}).
In this case, we
define the correlators
involving the sum over $R$:
\be\label{cor}
<i_1,\ldots,i_n > = <\phi_{i_1}(R)*\ldots *\phi_{i_n}(R)>=\sum_R d_R^2
\phi_{i_1}(R)\ldots \phi_{i_n}(R)
\ee
Now, it is clear that the equality
\be
\hat C_{ijk} \equiv
<<i,j,k>> = <\phi_i(R)*\phi_j(R)*\phi_k(R)*E[\beta,R]> = \nn \\
= \sum_m C_{ij}^m <\phi_m(R)*\phi_k(R)*E[\beta,R]> =
\sum_m C_{ij}^m \hat G_{mk}=\sum_m \hat C_{ij}^m G_{mk}
\ee
with
\be
E[\beta,R] = e_*^{\sum_i
\beta_i \phi_i(R)} \label{EBR}
\ee
continues to hold in this case.
As follows from the discussion above, the Hurwitz partition function
as a function of $\beta$
satisfies the {\it full} WDVV equations (\ref{WDVVf}). Sometimes, for
restricted sets of $\beta$-variables, it happens also to be a KP
$\tau$-function \cite{I}, but this is beyond our
consideration here. Instead we note that the weight $E[\beta,R]$
can be made more general than (\ref{EBR}), without changing anything
in the content of the previous consideration. Namely, one can change
(\ref{EBR}) for
\be
P_*(\phi(R)) *e_*^{\sum_i \beta_i\phi_i(R)}
\ee
with an arbitrary $*$-polynomial of observables $\phi_i(R)$ with the
same $R$. Sometimes new integrability properties can occur for the
partition function as a function on these additional parameters \cite{I}.
In Hurwitz theory {\it per se}, the role of observables $\phi_i(R)$
is played by the characters of the symmetric group $S_\infty$, denoted
by $\varphi_R(\Delta)$, where the label $i\rightarrow \Delta$ is now
the Young diagram. The most interesting choice for $P_*(\phi(R))$ is a
product of several $GL(\infty)$ characters,
$\chi_R(t)\chi_R(t')\chi_R(t'')\ldots$, where $\chi_R(t)$ is related to
$\varphi_R(\Delta)$ by formula (\ref{chiphitr}). With $E[\beta,R]$ so
modified, the Hurwitz partition function becomes a
function of both $t$- and $\beta$-variables. While in
$\beta$-variables (when considering their complete set, not a subset)
it is usually a "quasiclassical $\tau$-function",
i.e. a solution to the full WDVV equations, in $t$-variables it can be a
KP $\tau$-function. This is, indeed, the case when there is one $(t)$
and two $(t,t')$ sets of $t$-variables, see \cite{GKM2,OkToda,I,III}.
Surprisingly or not, the KP integrability in
$t$ {\it dis}appears for three $(t,t',t'')$ or more sets of
$t$-variables. This peculiar pattern of
(non)-integrability structures is discussed in the next paper of this series \cite{III}.
At variance with our construction in this section,
another, polynomial class of solutions to the WDVV equations
was suggested in \cite{NTWDVV}. In the Hurwitz theory context this
would correspond to a power series instead of polynomial solutions.
\section{Two multiplications of Young diagrams}
As we mentioned above, there are two natural multiplications on the Young diagrams: one,
$\circ$-multiplication given by the composition of permutations, and the other one,
*-multiplication induced by the algebra of cut-and-join operators (\ref{caj}).
\subsection{$\circ$-Multiplication of Young diagrams
from composition of permutations}
The $\circ$-multiplication is defined on the Young
diagrams (integer partitions)
labeling elements of the group algebra of the symmetric group, i.e.
the sum of all permutations\footnote{
The composition of permutations is done in Maple by the command {\large{\it mulperms}}
of the package {\large{\it group}}.}
from the corresponding conjugation class:
$$
[211] = (12) + (13) + (14) + (23) + (24) + (34)
$$
etc.
The number of items is denoted by $||\Delta||$.
The naive $\circ$-multiplication of Young diagrams,
induced by the (non-commutative but associative) composition
of permutations is commutative and associative.
Of course, $||\Delta_1\circ\Delta_2|| = ||\Delta_1||\cdot||\Delta_2||$.
Examples of the multiplication tables for different symmetric groups
can be found in Appendix II.
\noindent
$\bullet$
For any $k$ the $\circ$-multiplication by $[1^k]$ acts like unity:
\be
[1^k]\circ \Delta = \Delta
\ \ \ \ \ \ \forall \Delta: \ |\Delta|=k
\ee
$\bullet$
Multiplication by $[2,1^k]$ can be deduced from the
cut-and-join property.
Namely, if permutations are written in the cyclic notations, then
permutation $(12) \in S_{k+2}$ acts as follows:
\be
(12) \circ (12)K = K, \nn \\
(12) \circ (12C)K = (1C)K, \nn \\
(12) \circ (21C)K = (2C)K, \nn \\
(12) \circ (1C)K = (12C)K, \nn \\
(12) \circ (C)K = (12)(C)K, \nn \\
(12) \circ (1C_1)(2C_2)K = (1C_22C_1)K
\label{12mult}
\ee
where $C$ denotes any set of elements,
and $K$ any set of non-intersecting cycles
(of course, it is assumed that $1,2\notin C,C_1,C_2,K$
and $C,C_1,C_2\notin K$).
If, for a given level $k$, all the ``powers'' $||\Delta||$
are known, this is enough to construct all the entries
$[2,1^k]\circ\Delta$ in the $\circ$-multiplication table:
the coefficient for each line in (\ref{12mult})
is given by the ratio of the number of items of the given type
at the l.h.s. of (\ref{12mult}) to the ``power'' at
the r.h.s., multiplied by $||[2,1^k]||$. These rules are illustrated by explicit
examples of Appendix I.
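For a small symmetric group, the $\circ$-product of class sums can also be verified by brute force on a computer (the paper uses Maple's {\it mulperms} for this; the sketch below is our own illustrative Python version, not part of the original text). It composes every pair of permutations from two conjugacy classes and decomposes the result into cycle types:

```python
# Brute-force sketch of the circle-product of conjugacy-class sums in S_n.
# Diagrams are represented as cycle types (partitions in decreasing order).
from itertools import permutations
from collections import Counter

def cycle_type(p):
    """Cycle type of a permutation given as a tuple p with i -> p[i]."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, ln = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                ln += 1
            lengths.append(ln)
    return tuple(sorted(lengths, reverse=True))

def class_sum(n, diagram):
    """All permutations of S_n with the given cycle type."""
    return [p for p in permutations(range(n)) if cycle_type(p) == diagram]

def circ(n, d1, d2):
    """Compose every pair of permutations and collect the cycle types."""
    out = Counter()
    for a in class_sum(n, d1):
        for b in class_sum(n, d2):
            out[cycle_type(tuple(a[b[i]] for i in range(n)))] += 1
    return out

# In S_3: [2,1] o [2,1] = 3*[1,1,1] + 6*[3], with ||D1 o D2|| = 3*3 = 9
print(circ(3, (2, 1), (2, 1)))
```

In particular, the total number of items always equals $||\Delta_1||\cdot||\Delta_2||$, in accordance with the multiplicativity of the ``power''.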
\subsection{*-multiplication of Young diagrams}
The *-multiplication of Young diagrams is associated with the product of
the differential cut-and-join operators (\ref{caj}): if
\be
\hat W[\Delta_1] \hat W[\Delta_2] = \sum_{\Delta} C_{\Delta_1\Delta_2}^\Delta
\hat W[\Delta],
\ee
then
\be
\Delta_1*\Delta_2 = \sum_{\Delta} C_{\Delta_1\Delta_2}^\Delta \Delta
\label{D*D}
\ee
and thus it is commutative and associative.
The sums are actually finite:
the size of the diagrams $\Delta$ is restricted to
\be
{\rm max}(|\Delta_1|,|\Delta_2|) \leq |\Delta|
\leq |\Delta_1|+|\Delta_2|
\ee
As already mentioned in the Introduction,
these commuting $\hat W$-operators have all the $GL(\infty)$
characters as common eigenfunctions, while $S_\infty$ characters
are the corresponding eigenvalues (\ref{WchiI}).
Representations of $GL(\infty)$ characters through the first
and second Weyl formulas are associated to representations
of the cut-and-join operators in time and matrix variables.
It follows from (\ref{WchiI}) that the symmetric group characters
form the same commutative associative algebra:
\be
\varphi_R(\Delta_1)\varphi_R(\Delta_2)
= \sum_{\Delta} C_{\Delta_1\Delta_2}^\Delta \varphi_R(\Delta)
\ee
with the same $R$-independent structure constants
$C_{\Delta_1\Delta_2}^\Delta$.
\subsection{Connection between the two multiplications\label{twop}}
The *-multiplication (\ref{D*D}) can be expressed through
the $\circ$-multiplication of Young diagrams.
It is a rather long recursive formula,
but actually a very constructive one:
$$
\Delta_1*\Delta_2 =
\sum_{n={\rm max}(|\Delta_1|,|\Delta_2|)}^{|\Delta_1|+|\Delta_2|}
\{\Delta_1,\Delta_2\}_n
$$
\be
\{\Delta_1,\Delta_2\}_n=\sum_{\Delta: |\Delta|=n}
C_{\Delta_1\Delta_2}^\Delta\Delta
= \rho_{n-|\Delta_1|}(\Delta_1)\circ \rho_{n-|\Delta_2|}(\Delta_2)
- \sum_{k={\rm max}(|\Delta_1|,|\Delta_2|)}^{n-1}
\rho_{n-k}\left(\{\Delta_1,\Delta_2\}_k\right)
\label{De*De}
\ee
and $\rho_k$ is a lift of the Young diagram to the size $|\Delta|+k$,
achieved by adding $k$ unit length rows with additional
numeric factor: if $\Delta$ already has $r$ rows of the length $1$,
then
\be
\rho_k([\Delta]) = \frac{(r+k)!}{r!k!}[\Delta,\underbrace{1,\ldots,1}_k],
\ \ \ \ \ \ \ \ \hbox{or}\ \ \ \ \ \ \ \
\rho_k([\tilde\Delta,1^r]) = \frac{(r+k)!}{r!k!}[\tilde\Delta,1^{r+k}]
\ee
where $[\Delta]\equiv[\tilde\Delta,1^r]$ and $\tilde\Delta$ does not contain
unit rows.
According to this definition, $\rho_0(\Delta) = \Delta$.
Note that
\be
\rho_k\left(\rho_l(\Delta)\right) = \frac{(k+l+r)!}{k!l!r!}\,
[\Delta,1^{k+l}] \ \ \ \neq \ \ \
\rho_{k+l}(\Delta) = \frac{(k+l+r)!}{(k+l)!r!}\,[\Delta,1^{k+l}]
\ee
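The lift $\rho_k$ and its non-composition property are easy to check numerically. The following sketch (ours, not from the paper) represents a diagram as a pair of a numeric coefficient and a row list:

```python
# Sketch of the lift rho_k: add k unit rows and multiply by binomial(r+k, k),
# where r is the number of unit rows already present in the diagram.
from math import comb

def rho(k, diagram):
    coef, rows = diagram
    r = rows.count(1)
    return (coef * comb(r + k, k), rows + [1] * k)

# rho_2(rho_1([2])) carries the factor (k+l+r)!/(k!l!r!) = 3,
# while rho_3([2]) carries (k+l+r)!/((k+l)!r!) = 1
print(rho(2, rho(1, (1, [2]))), rho(3, (1, [2])))
```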
The highest term in the product (\ref{De*De}) is
\be
\{\Delta_1,\Delta_2\}_{|\Delta_1|+|\Delta_2|}
= C_{\Delta_1\Delta_2}^{[\Delta_1,\Delta_2]} [\Delta_1,\Delta_2]
\ee
and for $\Delta_1 = [k^{m_k}]$, $\Delta_2 = [k^{n_k}]$,
$[\Delta_1,\Delta_2] = [k^{m_k+n_k}]$
the combinatorial coefficient is
\be
C_{\Delta_1\Delta_2}^{[\Delta_1,\Delta_2]}
= \prod_k \frac{(m_k+n_k)!}{m_k!n_k!}
\ee
This follows from the definition (\ref{caj}) of the $\hat W$ operator \cite{MMN}.
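This coefficient is straightforward to tabulate; a small sketch (ours), with diagrams given as lists of row lengths:

```python
# Coefficient of the highest term [Delta1, Delta2] in Delta1 * Delta2:
# a product of binomials over the row-length multiplicities m_k and n_k.
from math import comb
from collections import Counter

def highest_coeff(d1, d2):
    m, n = Counter(d1), Counter(d2)
    c = 1
    for k in set(m) | set(n):
        c *= comb(m[k] + n[k], m[k])
    return c

print(highest_coeff([1], [1]))   # matches the (r+1) factor in [1]*[Delta]
```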
If $\Delta_1*\Delta_2 = \sum_{\Delta} C_{\Delta_1\Delta_2}^\Delta
\Delta$, formula (\ref{De*De}) can be rewritten as
\be
\rho_{n-|\Delta_1|}
(\Delta_1) \circ \rho_{n-|\Delta_2|} (\Delta_2) = \sum_m
\rho_{n-m}\Big(\{\Delta_1,\Delta_2\}_m\Big)
= \sum_{\Delta}
C_{\Delta_1\Delta_2}^\Delta \rho_{n-|\Delta|} (\Delta)
\ee
Expressed in terms of the generating functions $J_\Delta(u) =
\sum_{m=0}^\infty u^{|\Delta|+m} \rho_m(\Delta)$ this multiplication
formula becomes
\be
\oint J_{\Delta_1}(u) \circ
J_{\Delta_2}\left(\frac{v}{u}\right) \frac{du}{u} = \sum_{\Delta}
C_{\Delta_1\Delta_2}^\Delta J_\Delta(v) = J_{\Delta_1*\Delta_2}(v)
\label{JJJ}
\ee
Note that the contour integral over $u$ at the l.h.s.
selects diagrams of the same weight, so that the operation $\circ$ is
well defined.
Examples of the $*$-multiplication tables can be found in section 6 and in
Appendix II, here we
consider only the case of product $[1]*[\Delta]$ which will be of use for our further
consideration.
\subsection{Example of level $(1,m)$}
For $\Delta$ of size $|\Delta|=m$, which already has $r$ rows of length $1$,
one gets
$$
[1]*[\Delta] = \{1,\Delta\}_m + \{1,\Delta\}_{m+1},
$$
$$
\{1,\Delta\}_m = \rho_{m-1}[1]\circ \Delta = m[1^m]\circ\Delta = m\Delta,
$$
\be
\{1,\Delta\}_{m+1} = \rho_m[1]\circ \rho_1[\Delta] - \rho_1\left(\{1,\Delta\}_m\right)
= (m+1)[1^{m+1}]\circ (r+1)[\Delta,1] - m(r+1)[\Delta,1] = (r+1)[\Delta,1],
\ee
$$
\boxed{
[1]*\Delta = |\Delta|\,\Delta + (r+1)[\Delta,1]
}
$$
In other words, if $\Delta = [\tilde\Delta,1^r]$,
where $\tilde\Delta$ contains no more units, then
\be
[1] * [\tilde\Delta,1^r] = (|\tilde\Delta|+r)[\tilde\Delta,1^r]
+ (r+1)[\tilde\Delta,1^{r+1}]
\ee
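The boxed formula is simple enough to encode directly; the sketch below (ours) returns the coefficients of $[1]*\Delta$ as a dictionary over diagrams, with a diagram given as a list of row lengths:

```python
# [1] * [tilde-Delta, 1^r] = |Delta| * Delta + (r+1) * [Delta, 1],
# returned as a dict {tuple-of-rows: coefficient}.
def star_one(rows):
    r = rows.count(1)       # number of unit rows
    size = sum(rows)        # |Delta|
    return {tuple(rows): size,
            tuple(sorted(rows + [1], reverse=True)): r + 1}

print(star_one([2, 1]))     # [1]*[2,1] = 3*[2,1] + 2*[2,1,1]
```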
\section{Generating functions}
Introduce now a linear form (average) on the Young diagrams:
\be
< \Delta > = \frac{\delta(\Delta,[1^{|\Delta|}])}{|\Delta|!} =
\sum_R d_R^2\varphi_R(\Delta) \delta_{|R|,|\Delta|}
\label{avedef}
\ee
It can be used to construct a variety of generating functions
for averages of Young diagrams products (correlators).
In fact, the projection to $|R|=|\Delta|$ in the sum over $R$
in (\ref{avedef}) can be eliminated, and
the {\it infinite} sum
\be
\sum_R d_R^2\varphi_R(\Delta)
= \sum_{R: \ |R|\geq|\Delta|} d_R^2\varphi_R(\Delta)
= e<\Delta>
\label{eave}
\ee
where $e = 2.718\ldots$
This formula is important for the evaluation of
partition functions in the case of *-products.
\subsection{The standard Hurwitz partition function}
The standard generating function of Hurwitz numbers is
\be
Z_\circ\{\beta_{\tilde\Delta}\}
= \sum_n q^n Z_\circ^{(n)} \{\beta_{\Delta}\} =
\sum_n q^n \left< \exp_\circ \left(\sum_{\tilde\Delta:\
|\tilde\Delta|\leq n} \beta_{\tilde\Delta}
\rho_{n-|\tilde\Delta|}(\tilde\Delta) \right) \right>\ =
\sum_n q^n \left< \exp_\circ \left(\sum_{\tilde\Delta:\
|\tilde\Delta|\leq n} \beta_{\tilde\Delta}
[\tilde\Delta,1^{n-|\tilde\Delta|}]\right) \right>
\label{circHPF}
\ee
where, as before, $\tilde\Delta$ denotes a Young diagram without unit rows, and
the $\circ$-multiplication of Young diagrams of different sizes is defined to
be zero. Actually, $q = e^{\beta_1}$.
Of course, one can also introduce a whole infinite
tower of $\beta$-variables for each $\tilde\Delta$:
$\beta_{\tilde\Delta,p} = \beta_{[\tilde\Delta,1^p]}$,
but we prefer not to do it.
Again, for each given $n$ the component
$Z_\circ^{(n)} \{\beta_{\Delta}\}$ satisfies the WDVV equations,
but after summation over $n$, (\ref{circHPF})
is not an average of any CAA exponential
and does not need to satisfy WDVV equations.
And, indeed, it does not.
With the help of (\ref{chiphitr}) one can rewrite (\ref{circHPF})
in terms of symmetric group characters $\varphi_R(\Delta)$, (\ref{phiext}).
Indeed, because of (\ref{WchiI}) they form a representation of the CAA algebra
with *-product:
\be
\varphi_R(\Delta_1)\varphi_R(\Delta_2)
= \varphi_R(\Delta_1*\Delta_2) =
\sum_{\stackrel{\Delta}{{\rm max}(|\Delta_1|,|\Delta_2|)
\leq |\Delta| \leq |\Delta_1|+|\Delta_2|}}
C^\Delta_{\Delta_1\Delta_2} \varphi_R(\Delta)
\ \ \ \ \ \ \ \ \forall \ R,\Delta_1,\Delta_2
\ee
For $|\Delta_1|=|\Delta_2|=n$ and for $|R|=n$ the property
(\ref{phiext}) implies that also
\be
\varphi_R(\Delta_1\circ\Delta_2) = \varphi_R(\Delta_1)\varphi_R(\Delta_2),
\ \ \ {\rm for}\ |R|=|\Delta_1|=|\Delta_2|
\label{circvarp}
\ee
and this allows one to express (\ref{circHPF}) through $\varphi_R(\Delta)$:
if all the sizes $|\Delta_1| = \ldots = |\Delta_k| = n$ are the same, then
\be
\left< \circ_{i=1}^k \Delta_i \right> =
\sum_\Delta c^\Delta_{\Delta_1,\ldots,\Delta_k} <\Delta > \delta_{|\Delta|,n}
= \sum_\Delta c^\Delta_{\Delta_1,\ldots,\Delta_k} \sum_R d^2_R\varphi_R(\Delta)
\delta_{|\Delta|,n} \delta_{|R|,n}
= \sum_R d^2_R \prod_{i=1}^k \varphi_R(\Delta_i)\delta_{|R|,n}
\label{Wvsphi}
\ee
Here
\be
c^\Delta_{\Delta_1,\ldots,\Delta_k} = \sum_{\Delta'_1,\ldots,\Delta'_{k-2}}
C^\Delta_{\Delta_1\Delta'_1}
C^{\Delta'_1}_{\Delta_2\Delta_2'}\ldots C^{\Delta'_{k-2}}_{\Delta_{k-1}\Delta_k}
\delta_{|\Delta'_1|,n}\ldots\delta_{|\Delta'_{k-2}|,n}
\ee
We denote it by the {\it small} letter $c$ to emphasize that all sums are restricted
(projected) to diagrams of the same size $n$.
The restriction to $|R|=n$ is important for the last transition in (\ref{Wvsphi}),
where the {\it small} $c$ stands at the l.h.s.: the equality still holds
because of (\ref{circvarp}).
From (\ref{Wvsphi}) it follows directly that
partition function (\ref{circHPF}) can be rewritten as
\be
Z_\circ\{\beta_{\tilde\Delta}\} =
\sum_n q^n \sum_{R: \ |R|=n} d_R^2
\exp \left(\sum_{\tilde\Delta: \ |\tilde\Delta|\leq n}
\beta_{\tilde\Delta} \varphi_R(\tilde\Delta,1^{n-|\tilde\Delta|})\right)
\ee
or simply
\be
Z_\circ\{\beta_{\Delta}\} = \sum_n q^n Z_\circ^{(n)} \{\beta_{\Delta}\}
= \sum_n q^n \sum_{R: \ |R|=n} d_R^2
\exp \left(\sum_{\Delta: \ |\Delta| = n}
\beta_{\Delta} \varphi_R(\Delta)\right)
\ee
where, in principle, one can either impose the restriction
\be
\beta_{[\tilde\Delta,1^r]} = \beta_{\tilde\Delta}
\label{redbeta}
\ee
or not.
If (\ref{redbeta}) is not imposed, then,
making use of (\ref{chiphitr}),
one can further perform a Fourier transform of its $m$-th derivative
into $t$-variables:
\be
\sum_n q^n
\sum_{\stackrel{\Delta_1,\ldots,\Delta_m}{|\Delta_1|=\ldots=|\Delta_m|=n}}
\frac{\p^m Z_\circ^{(n)}\{\beta_\Delta\}}
{\p\beta_{\Delta_1}\ldots\p\beta_{\Delta_m}}
p^{(1)}(\Delta_1)\ldots p^{(m)}(\Delta_m) = \nn \\ =
\sum_n q^n
\sum_{R: \ |R|=n} d_R^{2-m}\chi_R(t^{(1)})\ldots\chi_R(t^{(m)})
\exp \left(\sum_{\tilde\Delta: \ |\tilde\Delta|=n}
\beta_{\tilde\Delta} \varphi_R(\tilde\Delta)\right)
\label{mchar}
\ee
\subsection{Extension to *-product}
With the *-product one can associate another, generalized
Hurwitz partition function \cite{MMN}:
\be
Z_*\{\beta_\Delta\} =
\left< \exp_*\left(\sum_\Delta \beta_\Delta \Delta\right)\right>
\ee
In contrast to $Z_\circ\{\beta_\Delta\}$ in (\ref{circHPF}),
it satisfies the WDVV equations,
but, in contrast to the individual components $Z_\circ^{(n)}\{\beta_\Delta\}$
(which also satisfy WDVV), it involves infinitely many time-variables
$\beta_\Delta$.
One can also rewrite $Z_*$ in terms of $\varphi_R(\Delta)$ characters,
but this time, in the case of *-products,
there would be no restriction on the sizes $|\Delta|$
in (\ref{Wvsphi}). Instead, in this case, the sum over $R$
is restricted not to $|R|=n$, but to $|R|=|\Delta|$:
\be
\left< *_{i=1}^k \Delta_i \right> =
\sum_\Delta C^\Delta_{\Delta_1,\ldots,\Delta_k} <\Delta >
= \sum_\Delta C^\Delta_{\Delta_1,\ldots,\Delta_k} \sum_R d^2_R\varphi_R(\Delta)
\delta_{|R|,|\Delta|}
\label{Wvsphi*}
\ee
and, because of the $\Delta$-dependent projector,
in this formula one can not make any direct use of the relation
\be
\sum_\Delta C^\Delta_{\Delta_1,\ldots,\Delta_k}\varphi_R(\Delta)
= \prod_{i=1}^k \varphi_R(\Delta_i)
\label{Ccomb}
\ee
However, one can actually get rid of the projector!
The reason is that (\ref{chiphitr}) has important generalizations:
\be
\sum_\Delta d_R\varphi_R(\Delta)p\,(\Delta)\delta_{|\Delta|,|R|}
= \chi_R(t), \\
\sum_\Delta d_R\varphi_R(\Delta)p\,(\Delta)
= \chi_R(t_k+\delta_{k1}),
\label{chivarphiall} \\
\sum_{\tilde\Delta} d_R\varphi_R(\tilde\Delta)p\,(\tilde\Delta)
= \chi_R(1,t_2,t_3,\ldots), \\
d_R = \chi_R(1,0,0,\ldots) = \chi_R(\delta_{k1})
\label{dRchar}
\ee
Note that because of the property
(\ref{phiext}) all the sums over $\Delta$ are finite,
and these formulas are elementary, not transcendental.
Combining (\ref{chivarphiall}) and (\ref{dRchar}) with the celebrated
Cauchy completeness formula
\be\label{Cauchy}
\sum_R \chi_R(t)\chi_R(t') = \exp \left(\sum_k kt_kt'_k\right)
\ee
one obtains
\be
\sum_{R,\Delta} d_R^2 \varphi_R(\Delta) p(\Delta)
= \sum_R \chi_R(t_k+\delta_{k1})\chi_R(\delta_{k1}) =
e^{1+t_1}
= e \sum_{\Delta} <\Delta> p(\Delta)
= e \sum_{R,\Delta} d_R^2 \varphi_R(\Delta) p(\Delta)\delta_{|R|,|\Delta|}
\ee
In other words, we obtain the already-mentioned statement (\ref{eave}):
\be
<\Delta> = \sum_R d_R^2\varphi_R(\Delta) \delta_{|R|,|\Delta|}
= \frac{1}{e} \sum_R d_R^2\varphi_R(\Delta)
\ee
i.e. one can simply substitute everywhere the average (\ref{avedef}) by
the alternative one,
\be
<<\Delta>> = \sum_R d_R^2\varphi_R(\Delta) = e<\Delta>
\ee
where the sum goes over Young diagrams $R$ of all sizes,
not restricted to $|R|=|\Delta|$.
The difference is actually exhausted by a factor of $e=2.718\ldots$
Coming back to (\ref{Wvsphi*}), one now knows how to eliminate
the unwanted projector from the r.h.s. and apply (\ref{Ccomb}):
\be
\left< *_{i=1}^k \Delta_i \right> = \frac{1}{e}
\left<\left< *_{i=1}^k \Delta_i \right>\right> = \frac{1}{e}
\sum_\Delta C^\Delta_{\Delta_1\ldots\Delta_k} <<\Delta >>
= \frac{1}{e}
\sum_\Delta C^\Delta_{\Delta_1\ldots\Delta_k} \sum_R d^2_R\varphi_R(\Delta)
= \frac{1}{e} \sum_R d^2_R\prod_{i=1}^k\varphi_R(\Delta_i)
\label{Wvsphi**}
\ee
In particular, the pair correlator is equal to
\be
\sum_{\Delta_1,\Delta_2}<\Delta_1*\Delta_2>p^{\Delta_1}\bar p^{\Delta_2}=
\sum_{\Delta_1,\Delta_2}\frac{1}{e} \sum_R d^2_R\varphi_R(\Delta_1)\varphi_R(\Delta_2)
p^{\Delta_1}\bar p^{\Delta_2}
\stackrel{(\ref{chivarphiall})}{=}\\=\frac{1}{e}
\sum_R\chi_R(t_k+\delta_{k1})\chi_R(\bar
t_k+\delta_{k1})\stackrel{(\ref{Cauchy})}{=}
\exp\left((t_1+1)(\bar t_1+1)-1+\sum_{k\ge 2}t_k\bar t_k
\right)\nn
\ee
From formula (\ref{Wvsphi**})
one obtains a character expansion of $Z_*$ and a much better
counterpart of (\ref{mchar}):
\be
Z_*\{\beta_\Delta\} =\
\left< \exp_*\left(\sum_\Delta \beta_\Delta \Delta\right)\right>\
= \frac{1}{e}\sum_R d^2_R \exp\left(\sum_\Delta \beta_\Delta \varphi_R(\Delta)\right)
\ee
and
\be
\sum_{\Delta_1,\ldots,\Delta_m}
\frac{\p^m Z_*\{\beta_\Delta\}}
{\p\beta_{\Delta_1}\ldots\p\beta_{\Delta_m}}
p^{(1)}(\Delta_1)\ldots p^{(m)}(\Delta_m) = \nn \\ =
{1\over e}\sum_n q^n
\sum_{R: \ |R|=n} d_R^{2-m}\chi_R(t^{(1)}_k+\delta_{k1})\ldots\chi_R(t^{(m)}_k+\delta_{k1})
\exp \left(\sum_{\Delta}
\beta_{\Delta} \varphi_R(\Delta)\right)
\label{mchar*}
\ee
We emphasize once again that there are no restrictions
on the sizes of any $\Delta_i$ and of $R$ in the sum.
In (\ref{mchar*}) we do not impose (\ref{redbeta}),
otherwise one should just write correlators instead of
$\beta$-derivatives at the l.h.s.
\section{WDVV equations}
Whenever the generating function is an average
of the CAA exponential, it satisfies the WDVV equations.
This happens, at least, in two cases.
The first one is for the $\circ$-product at the given level $|\Delta|$:
then one gets a trivial WDVV solution in form of
a finite linear combination of ordinary exponentials.
The second case is that of the *-product: then the number of time-variables
is infinite, even in the simplest case of the $[1^p]$-subring,
an interesting open question being to find an adequate
quasiclassical (dispersionless) hierarchy, which is
associated with this particular solution to the WDVV equations:
it is probably related to the KP-Whitham hierarchy over
the Lambert curve \cite{mari,mmhk,kaz,Takasaki}.
We start with the trivial case of the $\circ$-partition function, then comment
on the case of the $*$-partition function, where the WDVV equations are
expected on general grounds but checking them directly is tedious. We postpone until the next
two sections a discussion of the $[1^p]$-subring, where some
less involved direct checks
of the WDVV equations can be done.
\subsection{$\circ$-products at given level}
For a given $n$ one introduces a function of $|S_n|$
variables $\beta_\Delta$:
\be\label{wdvvtr}
Z_n(\beta) = \left< \exp_\circ\left(
\sum_{\Delta:\ |\Delta|=n} \beta_\Delta \Delta\right) \right>
=
\sum_{R: \ |R|=n} d_R^2 \prod_{\Delta:\ |\Delta|=n}
e^{\beta_\Delta\varphi_R(\Delta)}
\ee
Each $Z_n$ satisfies the set of WDVV equations.
In this case this follows not only from the general arguments,
true for any topological field theory, but also
from a much simpler consideration:
one can obtain the separated variables by a linear transformation of $\{\beta_\Delta\}$:
\be
\xi_R = \sum_{\Delta:\ |\Delta|=n} \beta_\Delta \varphi_R(\Delta)
\ee
so that
\be
\tilde Z_n(\xi) = \sum_R d_R^2e^{\xi_R} = Z_n(\beta)
\ee
Then the WDVV equations are trivially satisfied and, since this is no more than a linear
(and non-degenerate) change of the $\beta$-variables,
the original WDVV equations for $Z_n(\beta)$ are also true.
It is also evident that if one inserts into the sum (\ref{wdvvtr}) an arbitrary product of
characters $\chi_R(t^{(1)}_k)\ldots\chi_R(t^{(m)}_k)$, it does not spoil the
argument and, hence, the WDVV equations are still satisfied.
\subsection{The *-product case}
In this case one has to consider the WDVV equations with infinite matrices: there is
no simple finite truncation for them. For instance, the simplest WDVV equation
$\check C_2\check C_3=\check C_3\check C_2$ (\ref{assC}) is
\be
\frac{<112><123>}{<11>} + \frac{<122><223>}{<22>} + \frac{<123><233>}{<33>} =
\frac{<113><122>}{<11>} + \frac{<123><222>}{<22>} + \frac{<133><223>}{<33>}
\ee
and it does {\it not} hold with
\be
<123> = <[1]*[2]*[3]> = 0, \ \ <113> = 0, \nn \\
<122> = 3/2, \ \ <223> = 1, \ \ <133> = 4/3, \nn \\
<11> = 1, \ \ <22>=1/2, \ \ <33> = 1/3
\ee
Indeed, vanishing entries in the first line reduce the equation just to
\be
\frac{<122><223>}{<22>} = \frac{<133><223>}{<33>}
\ee
which is not true.
What is the reason? In fact, $\check C_2\check C_3=\check C_3\check C_2$ holds
in the following way:
\be
<(2*2)*3> = C^3_{22}<3*3> = C^3_{22}C_{33}^{111}<111> = 3\cdot 2\cdot\frac{1}{6}, \nn\\
<2*(2*3)> = C_{23}^{[21]}<2*[21]> = C_{32}^{[21]}C_{[21],2}^{111}<111>
= 3\cdot 2\cdot \frac{1}{6}
\ee
i.e. as $\phantom._2(\check C_2\check C_3=\check C_3\check C_2)^{[111]}$:
\be
C^3_{22}C_{33}^{111} = C_{32}^{[21]}C_{[21],2}^{111}
\ee
However, while
\be
C^3_{22}C_{33}^{111} = \frac{<2*2*3>}{<3*3>}\frac{<3*3*111>}{<111*111>}
= \frac{<3*2*21>}{<21*21>}\frac{<2*21*111>}{<111*111>}= C_{32}^{[21]}C_{[21],2}^{111}
\ee
the symbolic expression neglecting the number of units does not hold:
\be
C_{22}^3C^1_{33} \stackrel{?}{=} \frac{<223>}{<33>}\frac{<133>}{<11>} \neq
\frac{<223>}{<22>}\frac{<122>}{<11>} \stackrel{?}{=} C^2_{32}C_{22}^1
\ee
Thus, the reduction does not take place in the $1$-sector,
and one has to deal with infinite matrices. Though the WDVV equations still
have to be satisfied, checking them directly is a non-trivial problem. This is easier
to do in the simpler case of the $[1^p]$-subring case, which we discuss in the next
two sections.
\section{Sub-ring of $[1^p]$ operators and its action
on entire algebra}
In this section we discuss multiplication by the $[1^p]$ operators,
which form a closed sub-algebra of the entire algebra. Moreover, in
this case, it is possible to write down general formulas.
\subsection{*-subring of $[1^p]$ operators (single-line diagrams)
\label{1ring}}
The multiplication of $[1^p]$ operators is given by the formula
\be
[1^p]*[1^q]
= \sum_{i={\rm max}(0,p-q)}^p \frac{(q+i)!}{i!(p-i)!(q-p+i)!}\,[1^{q+i}]
= \sum_{s={\rm max}(p,q)}^{p+q} \frac{s!}{(s-p)!(s-q)!(p+q-s)!}\,[1^s]
\ee
Introduce $I(x) = \sum_p x^p[1^p]$. Then it follows that
\be
\boxed{
I(x) * I(y) = I(x+y+xy),} \nn \\
I(x)*I(y)*I(z) = I(x+y+xy)*I(z) = I(x+y+z+xy+yz+zx+xyz), \nn \\
\ldots, \nn \\
*_{i=1}^m I(x_i) =
I\left(\sum_i x_i + \sum_{i<j} x_ix_j + \sum_{i<j<k} x_ix_jx_k +\ldots
+ \prod_{i=1}^m x_i\right)
= I\left(-1 + \prod_i(x_i+1)\right)
\ee
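These structure constants and the generating identity $I(x)*I(y)=I(x+y+xy)$ can be cross-checked in a few lines (a sketch of ours, not from the paper):

```python
# [1^p] * [1^q] = sum_s s!/((s-p)!(s-q)!(p+q-s)!) [1^s]; each coefficient
# must coincide with the coefficient of x^p y^q in (x+y+xy)^s.
from math import factorial
from itertools import product

def A(p, q, s):
    """Structure constant of [1^p]*[1^q] on [1^s]."""
    if max(p, q) <= s <= p + q:
        return factorial(s) // (factorial(s - p) * factorial(s - q)
                                * factorial(p + q - s))
    return 0

def coeff_xy(p, q, s):
    """Coefficient of x^p y^q in (x+y+xy)^s by multinomial expansion."""
    total = 0
    for a in range(s + 1):
        for b in range(s + 1 - a):
            c = s - a - b          # exponents of x, y, xy respectively
            if a + c == p and b + c == q:
                total += factorial(s) // (factorial(a) * factorial(b)
                                          * factorial(c))
    return total

print(all(A(p, q, s) == coeff_xy(p, q, s)
          for p, q, s in product(range(6), repeat=3)))
```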
\subsection{Action of $[1^p]$ operators on the entire algebra}
The multiplication of any operator by $[1^p]$ operators is described with the following
formulas.
Let $\Delta = [\tilde\Delta,1^r]$, with $1\notin \tilde\Delta$.
Let first $p\leq m = |\Delta|=|\tilde\Delta|+r$:
\be
[1^p]*[\Delta] = \sum_{i=0}^p\{1^p,\Delta\}_{m+i}, \nn \\
\{1^p,\Delta\}_m = \rho_{m-p}[1^p]\circ \Delta = \frac{m!}{p!(m-p)!}[1^m]\circ\Delta
= C^p_m\Delta, \nn \\
\{1^p,\Delta\}_{m+1} = \rho_{m-p+1}[1^p]\circ \rho_1[\Delta]
- \rho_1\left(\{1^p,\Delta\}_m\right) =\nn\\
= C^p_{m+1}[1^{m+1}]\circ (r+1)[\Delta,1]
- C^p_{m}\cdot(r+1)[\Delta,1] = (r+1)C^{p-1}_m[\Delta,1], \nn \\
\{1^p,\Delta\}_{m+2} = \rho_{m-p+2}[1^p]\circ \rho_2[\Delta]
- \rho_2\left(\{1^p,\Delta\}_m\right) - \rho_1\left(\{1^p,\Delta\}_{m+1}\right)
=\nn\\
= C^p_{m+2}[1^{m+2}]\circ \frac{(r+1)(r+2)}{2}[\Delta,1,1]
- \frac{(r+1)(r+2)}{2}C^p_m [\Delta,1,1]
- (r+2) \cdot(r+1)C^{p-1}_{m}[\Delta,1,1] = \nn \\
= \frac{(r+1)(r+2)}{2}\left(C^p_{m+2} - C^p_{m} - 2C^{p-1}_m\right)[\Delta,1,1]
= C^2_{r+2}C^{p-2}_m[\Delta,1,1], \nn \\
\ldots \nn \\
\boxed{
[1^p]*\Delta = C^p_m\,\Delta + (r+1)C^{p-1}_m[\Delta,1] +
C^2_{r+2}C^{p-2}_m[\Delta,1,1] + \ldots
= \sum_{i=0}^p C_{r+i}^i C^{p-i}_m [\Delta,1^i]
}
\label{1p*}
\ee
In this form the formula holds also for $p>m$,
just the sum actually goes from $i={\rm max}(0,p-m)$.
Eq.(\ref{1p*}) can be rewritten also as
\be
[1^p]*[\tilde\Delta,1^r] =
\sum_{i=0}^p C_{r+i}^i C^{p-i}_{|\tilde\Delta|+r} [\tilde\Delta,1^{r+i}]
\ee
and, further,
\be
I(x)*[\tilde\Delta,1^r] = \sum_{p,i} x^p C^{p-i}_{|\Delta|}C^i_{r+i}
[\tilde\Delta,1^{i+r}] =
(1+x)^{|\Delta|}\sum_i x^iC_{r+i}^i[\tilde\Delta,1^{i+r}]
\ee
If we introduce now a new generating function
$I_{\tilde\Delta}(x) = \sum_p x^p [\tilde\Delta,1^p]$, then
\be
\boxed{
I(x)*I_{\tilde\Delta}(y) = (1+x)^{|\tilde\Delta|}\sum_{i,r}
C_{r+i}^i x^i(1+x)^ry^r[\tilde\Delta,1^{r+i}]
= (1+x)^{|\tilde\Delta|} I_{\tilde\Delta}(x+y+xy)
}
\ee
This formula describes the action of the $[1^p]$-subring
on the entire algebra of cut-and-join operators.
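Formula (\ref{1p*}) can be cross-checked against the subring of section \ref{1ring}: setting $\tilde\Delta=\varnothing$ and $r=q$ must reproduce the structure constants of $[1^p]*[1^q]$. The following sketch (ours, with our own helper names) performs this check:

```python
# [1^p] * [tilde-Delta, 1^r] = sum_i C(r+i, i) C(m, p-i) [tilde-Delta, 1^{r+i}],
# with m = |tilde-Delta| + r; for empty tilde-Delta this must match
# [1^p] * [1^q] = sum_s s!/((s-p)!(s-q)!(p+q-s)!) [1^s] with q = r, s = r + i.
from math import comb, factorial

def star_1p(p, tilde_size, r):
    """Coefficients {i: C(r+i, i) * C(m, p-i)} of [tilde-Delta, 1^{r+i}]."""
    m = tilde_size + r
    return {i: comb(r + i, i) * comb(m, p - i)
            for i in range(p + 1) if comb(m, p - i)}

def A(p, q, s):
    """Structure constant of the pure [1^p]*[1^q] subring."""
    if max(p, q) <= s <= p + q:
        return factorial(s) // (factorial(s - p) * factorial(s - q)
                                * factorial(p + q - s))
    return 0

# p = 1 reproduces [1]*Delta = |Delta|*Delta + (r+1)*[Delta,1]
print(star_1p(1, 2, 1))
```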
\subsection{$\circ$-correlators in the $[1^p]$ subring}
The $\circ$-multiplication is much simpler, and in the
$[1^p]$ sector it can be encoded in two generating functions.
The first one is
\be
\sum_{n=0}^\infty q^n \left< [1^n]\circ [1^n]\right> t_1^n\bar t_1^n
=\sum_{n=0}^\infty q^n \left< [1^n]\right> t_1^n\bar t_1^n
=\sum_{n=0}^\infty \frac{q^n t_1^n\bar t_1^n}{n!} = \exp (qt_1\bar t_1)
\ee
Similarly,
\be
\sum_{n=0}^\infty q^n \left< [1^n]^{\circ m}\right> (t^{(1)}_1\ldots t^{(m)}_1)^n
= \exp \Big(qt^{(1)}_1\ldots t^{(m)}_1\Big)
\ee
The second generating function is
\be
\sum_{n=0}^\infty q^n \Big< \exp_\circ \left( \beta_{1^n}[1^n]\right)\Big>\
= \sum_{n=0}^\infty q^n \left< \delta_{n,0} + \beta_{1^n}[1^n] +
\frac{\beta_{1^n}^2}{2!}[1^n]\circ[1^n] + \ldots \right> =
\sum_{n=0}^\infty \frac{q^ne^{\beta_{1^n}}}{n!}
\label{beta1pcirc}
\ee
While each item in the sum is an average of a $\circ$-exponential,
the sum over $n$ is not, and one can {\it not} expect that {\it such}
partition functions satisfy the WDVV equations.
One can also consider a simplified version of
(\ref{beta1pcirc}), with all $\beta_{1^n}$ equal: $\beta_{1^n} = \beta_1$.
Then (\ref{beta1pcirc}) turns into
\be
\sum_{n=0}^\infty q^n \Big< \exp_\circ \left( \beta_{1}[1^n]\right)\Big>\
= \sum_{n=0}^\infty q^n \Big< \exp_\circ \left( \beta_{1}\rho_{n-1}[1]\right)\Big>\
= e^{\beta_1+q}
\ee
The standard Hurwitz partition function is a direct generalization
of this formula.
\subsection{Connecting two multiplications}
Formula (\ref{JJJ}) connecting the two multiplications can be
further specified for the $[1^p]$ subring. Define in this case one
more generating function
\be
\sum_p x^p J_{[1^p]}(u) =
\sum_{p,m\geq 0} u^{p+m}x^p \rho_m([1^p]) = \sum_{m,p\geq 0}
\frac{(m+p)!}{m!p!} u^{p+m}x^p [1^{p+m}]
\ee
Then, in terms of $I(u) =
\sum_m u^m[1^m]$, one has: $J_{[1^p]}(u) = u^p\partial_u^p I(u)/p!$ and
one can easily relate (\ref{JJJ}) to $I(x)*I(y) = I(x+y +xy)$, so
that
\be
\oint J_{I(x)}(u)\circ J_{I(y)}\left(\frac{v}{u}\right)
\frac{du}{u} = J_{I(x+y+xy)}(v)
\ee
Note that in the generic case it would be
interesting to consider a generating function with the full set of
time variables $\{p_k\}$, $J(u|p) = \sum_\Delta J_\Delta(u)p_\Delta$,
so that
\be \oint J(u|p)\circ J\left(\frac{v}{u}\Big|\bar p\right)
\frac{du}{u} = \sum_{\Delta_1,\Delta_2} p_{\Delta_1}\bar p_{\Delta_2}\, J_{\Delta_1*\Delta_2}(v)
\ee
and to see whether there is an interesting expression for the r.h.s.
\section{Generalized Hurwitz partition function for
the $[1^p]$ subring}
\subsection{*-correlators}
Averaging converts $I(x)$ into the exponential:
\be
\Big< I(x) \Big> = \sum_p \frac{x^p}{p!} = e^x, \nn \\
\Big< *_i I(x_i) \Big> = \exp\left(-1 + \prod_i(x_i+1)\right)
\ee
or
\be
1+ \log \Big< *_i I(x_i) \Big> = \prod_i(1+x_i)
\ee
In particular,
\be
\sum_{k,l=0}^\infty \left< [1^k]*[1^l] \right> t_1^k \bar t_1^l
=\ <I(t_1+\bar t_1 + t_1\bar t_1) > \ = \exp (t_1+\bar t_1 + t_1\bar t_1)
\ee
Similarly
\be
\sum_{k_1,\ldots,k_m =0}^\infty \left< [1^{k_1}]*\ldots
*[1^{k_m}]\right> (t^{(1)}_1)^{k_1}\ldots (t^{(m)}_1)^{k_m} =
\exp\left(-1 + \prod_{i=1}^m \left(1+ t^{(i)}_1\right)\right)
\ee
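The pair correlator identity can be checked numerically, using $\left<[1^s]\right>=1/s!$ and the structure constants of the subring (a sketch of ours):

```python
# <[1^k] * [1^l]> = sum_s A(k,l,s)/s!  must equal the coefficient of
# t1^k tbar1^l in exp(t1 + tbar1 + t1*tbar1), which is
# sum_c 1/(c! (k-c)! (l-c)!) over c = 0..min(k,l).
from math import factorial
from fractions import Fraction

def A(p, q, s):
    """Structure constant of [1^p]*[1^q] on [1^s]."""
    if max(p, q) <= s <= p + q:
        return factorial(s) // (factorial(s - p) * factorial(s - q)
                                * factorial(p + q - s))
    return 0

def corr(k, l):
    return sum(Fraction(A(k, l, s), factorial(s))
               for s in range(max(k, l), k + l + 1))

def coeff_exp(k, l):
    return sum(Fraction(1, factorial(c) * factorial(k - c) * factorial(l - c))
               for c in range(min(k, l) + 1))

print(all(corr(k, l) == coeff_exp(k, l) for k in range(6) for l in range(6)))
```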
\subsection{Correlators of *-exponentials}
It is convenient to introduce a grading of the diagram by its rescaling with a
formal parameter $q$: $[1^p]\to q^p[1^p]$.
The rescaled diagrams
(operators $q^p\hat W[1^p]$) also form a *-ring, but with rescaled
structure constants.
In terms of the generating function
$I_q(x) = \sum_p x^p q^p[1^p] = I(qx)$ one has
$I_q(x) * I_q(y) = I(qx)*I(qy) = I_q(x+y+qxy)$.
Note that for the $\circ$-multiplication $[1^p]\circ [1^p] = [1^p]$, and
the logarithm of the average $\ \log\left<\sum_{p,q} x^py^q[1^p]\circ [1^q]
\delta_{p,q}\right> = xy$ is obtained from
$\ \displaystyle{{1\over q^2} \log <I_q(x)*I_q(y)>
={x+y\over q}+xy}\ $ in the limit of $q\rightarrow\infty$.
In order to check the associativity, one has to define
the partition function
\be
Z_*\{\beta|q\}= \left<
\exp_*\left(\sum_p \beta_{[1^p]}q^p[1^p]\right) \right> \ = 1 +
\sum_p \frac{1}{p!}q^p\beta_p + \frac{1}{2!}\sum_{p_1,p_2}
\beta_{p_1}\beta_{p_2}
\oint\frac{dx}{x^{p_1+1}}\oint\frac{dy}{y^{p_2+1}}e^{x+y+qxy} +
\ldots
\ee
and check if its third derivatives w.r.t. $\beta$'s satisfy the WDVV equations.
This quantity can be studied using the technique
developed above. To check the equations, one can use the perturbative
expansion of $Z_*\{\beta|q\}$ as a power series in $q$. Let us see how this works
in the leading order.
\subsection{WDVV equations for the $[1^p]$ subring}
Even though for this subring the partition function depends on infinitely many
variables and, hence, the matrices of the third derivatives
\be\label{strc}
\left(\hat C_i\right)_{jk}
\equiv \hat C_{ijk}=C_{ijk} =
\frac{\p^3Z_*\{\beta|q\}}{\p\beta_i\p\beta_j\p\beta_k}
\ee
are infinite-dimensional, the associativity equations can be explicitly checked.
In the leading order approximation one has to check the associativity of the
non-deformed structure constants, i.e. (\ref{strc}) calculated at all $\beta_k=0$.
The generating functions for the structure constants $A_{pq}^s$ of the subring,
\be
[1^p]*[1^q] = \sum_{s={\rm max}(p,q)}^{p+q} A_{pq}^s [1^s]
\ee
are given by
\be
A^s(x,y) = \sum_{p,q} A_{pq}^s x^py^q = (x+y+xy)^s
\ee
or
\be
A(x,y;u) = \sum_{p,q,s} A_{pq}^s \frac{x^py^q}{u^{s+1}} = \frac{1}{u-(x+y+xy)}
\ee
The associativity is guaranteed by the symmetry of
\be
\sum_{p,q,t} x^py^qz^t \sum_s A_{pq}^s A_{st}^r =
\oint A(x,y;u)A^r(u,z)du = (x+y+z+xy+yz+xz+xyz)^r
\label{ass}
\ee
w.r.t. $x\leftrightarrow z$ and $y\leftrightarrow z$.
Associativity condition (\ref{ass}) should be complemented by
\be
<[1^p]*[1^q]*[1^r]> = \sum_s A_{pq}^s <[1^s]*[1^r]>, \nn \\
e^{x+y+z+xy+yz+xz+xyz} = \oint A(x,y;u) e^{u+z+uz} du
\ee
which proves the WDVV equations in the leading order.
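The leading-order associativity can also be verified directly on the structure constants: the triple product $\sum_s A_{pq}^s A_{st}^r$ must be totally symmetric in $(p,q,t)$. A sketch of ours:

```python
# Associativity check: sum_s A(p,q,s) A(s,t,r) is totally symmetric in
# (p,q,t), matching the coefficient of x^p y^q z^t in
# (x + y + z + xy + yz + zx + xyz)^r.
from math import factorial
from itertools import permutations, product

def A(p, q, s):
    """Structure constant of [1^p]*[1^q] on [1^s]."""
    if max(p, q) <= s <= p + q:
        return factorial(s) // (factorial(s - p) * factorial(s - q)
                                * factorial(p + q - s))
    return 0

def assoc(p, q, t, r):
    return sum(A(p, q, s) * A(s, t, r) for s in range(max(p, q), p + q + 1))

ok = all(assoc(p, q, t, r) == assoc(*perm, r)
         for p, q, t, r in product(range(5), repeat=4)
         for perm in permutations((p, q, t)))
print(ok)
```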
Further, one can switch on perturbations and check
the WDVV equations explicitly in higher orders. We know that they
hold on general grounds; however, the procedure described here
allows one to check this iteratively.
\section*{Acknowledgements}
Our work is partly supported by Russian Federal Nuclear Energy
Agency under contract H.4e.45.90.11.1059, by Ministry of Education and Science of
the Russian Federation under contract 02.740.11.0608, by Russian government grant
11.G34.31.005, by RFBR
grants 10-02-00509 (A.Mir.), 10-02-00499 (A.Mor.) and 11-01-00289 (S.N.),
by joint grants 11-02-90453-Ukr, 09-02-93105-CNRSL, 09-02-91005-ANF,
10-02-92109-Yaf-a, 11-01-92612-Royal Society and by grant NSh 8462.2010.1.
\section{The {\sc cws-maxclique} algorithm}
\label{sec:alg}
The {\sc cws-maxclique} algorithm is a procedure to search for a
quantum error correction code $\mathcal{Q}=(\mathcal{G},\mathcal{C})$,
given a graph state $\mathcal{G}$ which maps quantum errors ${\mathcal
E}$ in the Pauli group
into binary error patterns, and a classical code ${\mathcal{C}}$,
which corrects the error patterns. We present this algorithm below,
beginning with a review of the basic definitions of CWS codes,
proceeding to the details of the procedure, and concluding with an
evaluation of the computational complexity of the algorithm.
\subsection{Non-degenerate and degenerate CWS codes}
The basic concepts and definitions of CWS codes are described in a
previous paper\cite{CSSZ:07}, and may be summarized as follows. The
{\bf standard form CWS code} is fully characterized by a graph
$\mathcal{G}$ and a classical binary code $\mathcal{C}$, such that the
corresponding CWS code may be denoted by the pair ${\mathcal Q} =
(\mathcal{G},\mathcal{C})$. We define
\begin{equation}
Cl_{\mathcal{G}}({\mathcal E})=\{ Cl_{\mathcal{G}}(E)\ |\ E\in
{\mathcal E}\}
\end{equation}
as the set of classical errors induced by quantum errors ${\mathcal
E}$ acting on the graph $\mathcal{G}$; these are the errors that the
classical code $\mathcal{C}$ must detect. For each quantum error $E$,
it is sufficient to express $E$ in Pauli form as
$E=\pm Z^{\mathbf v}X^{\mathbf u}$ for some bit
strings ${\mathbf u}$ and ${\mathbf v}$. The mapping to classical
error strings is
\begin{equation}
Cl_\mathcal{G}(E=\pm Z^{\mathbf v}X^{\mathbf u})={\mathbf v}\oplus \bigoplus_{l=1}^n ({\mathbf u})_l{\mathbf r}_l\label{pattern}
\,,
\end{equation}
where $\mathbf{r}_l$ is the $l$th row of the adjacency matrix for
$\mathcal{G}$, and $(\mathbf{u})_l$ is the $l$th bit of $\mathbf{u}$.
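The mapping of equation (\ref{pattern}) is simple to implement over $\mathbb{F}_2$; the sketch below (our own illustrative helpers, not from the paper) computes the induced classical error for a single-qubit Pauli error on a ring graph:

```python
# Classical error pattern Cl_G(E) for E = +/- Z^v X^u on a graph state:
# v XOR (the XOR, over all l with u_l = 1, of row l of the adjacency matrix).
def cl_error(u, v, adj):
    n = len(u)
    w = list(v)
    for l in range(n):
        if u[l]:
            for j in range(n):
                w[j] ^= adj[l][j]
    return tuple(w)

def ring_adj(n):
    """Adjacency matrix of the n-qubit ring graph."""
    return [[1 if abs(i - j) % n in (1, n - 1) else 0 for j in range(n)]
            for i in range(n)]

adj = ring_adj(5)
x = cl_error((1, 0, 0, 0, 0), (0, 0, 0, 0, 0), adj)  # X on qubit 0
z = cl_error((0, 0, 0, 0, 0), (1, 0, 0, 0, 0), adj)  # Z on qubit 0
y = cl_error((1, 0, 0, 0, 0), (1, 0, 0, 0, 0), adj)  # Y on qubit 0
print(sum(x), sum(z), sum(y))  # weights of the induced bit-flip patterns
```

On the ring, a single-qubit $Z$ induces a single bit flip, $X$ a double flip (its two neighbors), and $Y$ a triple flip.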
Using these definitions, the main theorem of the CWS code construction
(Theorem 3 of \cite{CSSZ:07}) may be given as:
\begin{theorem}
\label{CSSZTheorem3}
A standard form CWS code, ${\mathcal Q} = (\mathcal{G},\mathcal{C})$
for graph state $\mathcal{G}$ and classical code ${\mathcal{C}}$,
detects errors from $\mathcal{E}$ if and only if $\mathcal{C}$ detects
errors from $Cl_\mathcal{G}(\mathcal{E})$ and in addition, for each $E
\in {\mathcal E}$,
\begin{eqnarray}
\label{clseneq0}{\rm either}~~ Cl_\mathcal{G}(E) & \neq& 0\\
\label{complus} {~\rm or ~~} \forall i\ Z^{{\mathbf c}_i}E&=&EZ^{{\mathbf c}_i}
\,,
\end{eqnarray}
where $Z^{{\mathbf c}_i}$ are codeword operators for ${\mathcal{C}}$
from $\{ Z^{\mathbf c}\}_{{\mathbf c} \in \mathcal{C}}$.
\end{theorem}
The case where $Cl_{\cal G}(E)\neq 0$ for all $E\in {\cal E}$ is the
non-degenerate case. For
degenerate CWS codes, it will be useful to introduce a new set of
classical bitstrings
\begin{align}
D_\mathcal{G}({\mathcal E})
=\{ & \mathbf{c}\in \{0,1\}^n\ |\ Cl_{\cal G}(E)=0\ \textrm{and}\ \\
& {\mathbf c}\cdot{\mathbf u}\neq 0\ \textrm{for some}\ E=\pm Z^{\mathbf v}X^{\mathbf u}\in {\mathcal E}\}
\,.
\end{align}
These bitstrings indicate codewords which are inadmissible, because they
violate the condition given by equations (\ref{clseneq0})
and (\ref{complus}) of Theorem~\ref{CSSZTheorem3}.
Specifically, fix a codeword ${\bf c}$, then for all $E\in {\cal E}$
we must have $Z^{\bf c}E=EZ^{\bf c}$ if $Cl_{\cal G}(E)=0$.
Writing $E=\pm Z^{\mathbf v}X^{\mathbf u}$, ${\bf c}$ is not an
admissible codeword if $Cl_{\cal G}(E)=0$ and
${\mathbf c}\cdot {\mathbf u}\neq 0$.
In other words, if a CWS code is degenerate, some low weight errors act
trivially on the code space (i.e. $Cl_{\cal G}(E)=0$), and these errors
must act trivially on each basis state generated from the graph state
${\mathcal G}$ (i.e. $[Z^{\mathbf c},E]=0$).
$D_\mathcal{G}({\mathcal E})$ describes basis states for which this is not
the case.
\subsection{The {\sc cws-maxclique} algorithm}
Given a graph ${\mathcal G}$, the problem of finding a CWS code
$\mathcal{Q}=(\mathcal{G},\mathcal{C})$, which corrects for quantum
errors ${\cal E}$, is reduced to a search for suitable classical
codes. It is thus natural to ask how such classical codes can be
found. One solution might be to use existing classical codes for this
construction. However, that approach gives sub-optimal code
parameters, because ${\mathcal{C}}$ must detect the induced error
patterns in $Cl_\mathcal{G}({\mathcal E})$, whose weights can be much
higher than the weights of the original quantum errors. This means
that the classical code ${\mathcal{C}}$ must have distance
significantly greater than that of the corresponding quantum code
$(\mathcal{G},\mathcal{C})$, as shown in the following example:
\begin{example}
Let $\mathcal{G}$ be an $n$ qubit ring graph. If ${\mathcal{E}}$ is
the set of single qubit Pauli $X$, $Y$, and $Z$ errors, then the
induced classical errors $Cl_\mathcal{G}({\mathcal E})$ are double,
triple, and single bit flips respectively. Choosing the classical
code $\mathcal{C}$ to be a binary $(n,K,7)$ code results in a CWS
code $(\mathcal{G},\mathcal{C})$ with parameters $((n,K,3))$.
However, $\mathcal{C}$ also detects many additional errors which are
unnecessary for this construction, such as all the one to six bit flip
errors; $Cl_\mathcal{G}({\mathcal E})$ only includes a subset of those
errors.
\label{classicalcode}
\end{example}
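The induced-error weights in this example can be checked directly. The following Python sketch is illustrative only and assumes the standard CWS induced-error map, under which $E=\pm Z^{\mathbf v}X^{\mathbf u}$ induces the classical string ${\mathbf v}\oplus\bigoplus_{l:u_l=1}\Lambda_l$, where $\Lambda_l$ is row $l$ of the adjacency matrix:

```python
def ring_adjacency(n):
    """Adjacency matrix of the n-qubit ring graph."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][(i + 1) % n] = 1
        A[i][(i - 1) % n] = 1
    return A

def induced_error(v, u, Lam):
    """Cl_G(E) for E = +/- Z^v X^u: v XOR the mod-2 sum of the
    adjacency rows indexed by the X-support u."""
    n = len(v)
    out = list(v)
    for l in range(n):
        if u[l]:
            for j in range(n):
                out[j] ^= Lam[l][j]
    return out

n = 6
Lam = ring_adjacency(n)
e = lambda i: [int(j == i) for j in range(n)]
zero = [0] * n
for i in range(n):
    wX = sum(induced_error(zero, e(i), Lam))  # X_i: both neighbours flip
    wY = sum(induced_error(e(i), e(i), Lam))  # Y_i: qubit i and both neighbours
    wZ = sum(induced_error(e(i), zero, Lam))  # Z_i: a single flip
    assert (wX, wY, wZ) == (2, 3, 1)
```

On the ring, every single-qubit $X$, $Y$, and $Z$ thus induces a double, triple, and single bit flip respectively.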
This example motivates a search for specific classical codes which
correct just the relevant errors for the CWS construction. However,
classical coding theory provides no efficient, systematic
constructions for codes that correct the potentially exotic error
patterns involved in the CWS construction. On the other hand, finding
a code with the best $K$ for given $n$ and $d$ is a problem which can
be naturally encoded into an NP-complete problem such as {\sc
maxclique}. This classic approach has been employed, for example, to
show that the $(10,K,3)$ classical code with $K=72$ has optimal
parameters\cite{Ostergard:99a}.
{\sc cws-maxclique} is a mapping onto {\sc maxclique}, of the problem
of finding the CWS code $(\mathcal{G},\mathcal{C})$ with the largest
possible dimension $K$, for given parameters $n$, $d$, and graph
$\mathcal{G}$. The {\sc cws-maxclique} algorithm gives steps to solve
this problem, and is given in detail in the
Algorithm~\ref{alg:CWSMaxClique} box. It proceeds in several simple
steps. The first step, \textbf{Setup}$({\mathcal E},\Lambda)$
(Algorithm~\ref{alg:setup}), finds the elements of
$Cl_\mathcal{G}({\mathcal E})$ and $D_\mathcal{G}({\mathcal E})$. The
second step, \textbf{MakeCWSCliqueGraph}$(\textsc{CL},\textsc{D})$
(Algorithm~\ref{alg:MakeCWSCliqueGraph}), constructs a graph, denoted
as the CWS ``clique graph,'' whose vertices are classical codewords
and whose edges indicate codewords that can be in the same classical
code together. When searching for ordinary classical codes using an
analogous procedure, the usual condition for joining two vertices by
an edge is that the vertices are Hamming distance $d$ apart. In our
situation, vertices are joined by an edge if there is no error induced
by the graph state that maps one codeword to the other. Finally, an
external subroutine
\textbf{findMaxClique}$(V,E)$ is called; this routine employs known
techniques to find the maximum clique in the CWS clique graph. The
clique-finding subroutine is not specified here because many exact and
heuristic techniques are known for solving this classic NP-complete
problem. Note that in the detailed description of the algorithms, two
functions are used: $\text{String}(i)$, which maps an integer $i$ to
its length-$n$ binary string representation, and its inverse
$\text{Integer}(\mathbf{s})$, which maps a length-$n$ binary string
$\mathbf{s}$ back to the corresponding integer. Also, an error
configuration is a list of ordered pairs
$(\textsc{LOC},\textsc{TYPE})$, where $\textsc{LOC}$ is the coordinate of the
affected qubit and $\textsc{TYPE}$ is one of $X$, $Y$, or $Z$.
\begin{algorithm}
\caption{\textbf{Setup}$({\mathcal E},\Lambda)$: Compute $Cl_{\mathcal G}({\mathcal E})$ and $D_{\mathcal G}({\mathcal E})$, where $\mathcal
E$ is a set of Pauli errors
and $\Lambda$ is the adjacency matrix associated with graph
$\mathcal{G}$.}
\label{alg:setup}
\algsetup{indent=2em}
\begin{algorithmic}[1]
\REQUIRE $\Lambda^T=\Lambda$, $\Lambda_{ij}\in\{0,1\}$ and
$\Lambda_{ii}=0$ \ENSURE \textsc{CL}$[i]=\delta(\text{String}(i)\in Cl_{\mathcal{G}}({\mathcal
E}))$ and $\textsc{D}[i]=\delta(\text{String}(i)\in D_{\mathcal{G}}({\mathcal E}))$
\FOR{$i\in\{0,1\}^n$} \STATE $\textsc{CL}[\text{Integer}(i)]\leftarrow 0$ \STATE
$\textsc{D}[\text{Integer}(i)]\leftarrow 0$ \ENDFOR
\FOR{error configuration $E\in {\mathcal E}$}
\STATE \textsc{err}$\leftarrow \text{String}(0)$ \STATE
\textsc{errx}$\leftarrow \text{String}(0)$ \FOR{$(\textsc{loc},\textsc{type})$ in
$E$} \IF{\textsc{type} is X or Y} \STATE \textsc{err} $\leftarrow$
\textsc{err} $\oplus\ (\text{row}\ \textsc{loc}\ \text{of}\ \Lambda)$ \STATE \textsc{errx}
$\leftarrow$ \textsc{errx} $\oplus\ \text{String}(2^{\textsc{loc}})$ \ENDIF
\IF{\textsc{type} is Z or Y} \STATE \textsc{err} $\leftarrow$
\textsc{err} $\oplus\ \text{String}(2^{\textsc{loc}})$ \ENDIF \ENDFOR \STATE
\textsc{CL}[\text{Integer}(\textsc{err})] $\leftarrow 1$
\IF{\text{Integer}(\textsc{err}) is $0$}
\FOR{$i\in\{0,1\}^n$} \IF{$\textsc{errx}\cdot i\neq 0$} \STATE
\textsc{D}[\text{Integer}(i)] $\leftarrow 1$ \ENDIF \ENDFOR \ENDIF \ENDFOR \RETURN
$(\textsc{CL},\textsc{D})$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{\textbf{MakeCWSCliqueGraph}$(\textsc{CL},\textsc{D})$:
Construct a graph whose vertices $V$ are classical codewords and whose
edges $E$ connect codewords that can belong to the same classical code,
according to the error model indicated by $Cl_{\mathcal G}({\mathcal
E})$ and $D_{\mathcal G}({\mathcal E})$.}
\label{alg:MakeCWSCliqueGraph}
\algsetup{indent=2em}
\begin{algorithmic}[1]
\REQUIRE $\textsc{CL}$ and $\textsc{D}$ are binary arrays of length $2^n$
\ENSURE $0^n\in V$, $0^n\neq v\in V\Rightarrow \textsc{D}[v]=0$ and $\textsc{CL}[v]=0$, $(v,w)\in E\Rightarrow \textsc{CL}[v\oplus w]=0$
\STATE $V\leftarrow \{0^n\}$
\STATE $E\leftarrow\emptyset$
\FOR{$s\in \{0,1\}^n$}
\IF{\textsc{D}$[s]=0$ and $\textsc{CL}[s]=0$}
\STATE $V\leftarrow V\cup\{s\}$
\FOR{$v\in V\setminus\{s\}$}
\IF{\textsc{CL}$[v\oplus s]=0$}
\STATE $E\leftarrow E\cup\{(v,s)\}$
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\RETURN $(V,E)$
\end{algorithmic}
\end{algorithm}
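The clique-graph construction can likewise be sketched in Python; this is illustrative only, operating on integer-indexed arrays \textsc{CL} and \textsc{D} of length $2^n$, and the toy error model below (all single bit flips induced) is an arbitrary choice:

```python
def make_cws_clique_graph(CL, D):
    """Vertices are admissible codewords (as integers); an edge joins
    v and w iff no induced classical error maps one codeword to the
    other, i.e. CL[v XOR w] == 0."""
    N = len(CL)                      # N = 2^n
    V = [0]                          # the all-zeros codeword is always kept
    E = set()
    for s in range(1, N):
        if D[s] == 0 and CL[s] == 0:
            for v in V:
                if CL[v ^ s] == 0:
                    E.add((v, s))
            V.append(s)
    return V, E

# toy model on n = 3 bits: every single bit flip is an induced error
CL = [0] * 8
for s in (0b001, 0b010, 0b100):
    CL[s] = 1
V, E = make_cws_clique_graph(CL, [0] * 8)
# the largest clique, {000, 011, 101, 110}, is the even-weight code
```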
\begin{algorithm}
\caption{\textbf{CWS-MAXCLIQUE}$({\mathcal E},\Lambda)$: Find a quantum code
$\mathcal{Q}$ detecting errors in $\mathcal{E}$, and providing the
largest possible dimension $K$ for the given input. The input
$\Lambda$ specifies the adjacency matrix of the graph ${\mathcal G}$.
The output ${\mathcal C}$ is a classical code such that ${\mathcal
Q}=(\mathcal{G},\mathcal{C})$ is a CWS code detecting errors in
$\mathcal{E}$.}
\label{alg:CWSMaxClique}
\algsetup{indent=2em}
\begin{algorithmic}[1]
\REQUIRE $\Lambda^T=\Lambda$, $\Lambda_{ij}\in\{0,1\}$ and $\Lambda_{ii}=0\ \forall i$
\ENSURE $K=|\mathcal{C}|$ is as large as possible for the given input,
$0^n\in\mathcal{C}$, and
$\mathcal{C}$ satisfies the conditions in Theorem 3 of \cite{CSSZ:07}
\STATE $(\textsc{CL},\textsc{D})\leftarrow \textbf{Setup}({\mathcal E},\Lambda)$
\STATE $(V,E)\leftarrow \textbf{MakeCWSCliqueGraph}(\textsc{CL},\textsc{D})$
\STATE $\mathcal{C}\leftarrow \textbf{findMaxClique}(V,E)$
\RETURN $\mathcal{C}$
\end{algorithmic}
\end{algorithm}
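For completeness, the external \textbf{findMaxClique} subroutine can be stood in for by exhaustive search on very small instances; any exact or heuristic solver may be substituted, and the brute force below is exponential and purely illustrative (the vertex and edge sets are a hand-built toy example):

```python
from itertools import combinations

def find_max_clique(V, E):
    """Exhaustive stand-in for findMaxClique: try vertex subsets from
    largest to smallest and return the first fully connected one."""
    adj = set(E) | {(w, v) for (v, w) in E}
    for k in range(len(V), 0, -1):
        for sub in combinations(V, k):
            if all(p in adj for p in combinations(sub, 2)):
                return list(sub)
    return []

# clique graph of a toy single-bit-flip model on 3 bits:
V = [0, 3, 5, 6, 7]
E = {(0, 3), (0, 5), (3, 5), (0, 6), (3, 6), (5, 6), (0, 7)}
C = find_max_clique(V, E)   # recovers the even-weight code {000,011,101,110}
```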
\subsection{The complexity}
{\sc cws-maxclique} is not an efficient algorithm; the run-time is at
least of order $\sim 2^n$, because of the representation of the
bit-string sets $Cl_{\mathcal{G}}({\mathcal E})$ and
$D_{\mathcal{G}}({\mathcal E})$. These are needed to specify the CWS
clique graph, which has $2^n$ nodes. In principle, instead of storing
all this in memory, the vertices and edges of this graph could be
computed on the fly, during execution of the {\bf findMaxClique}
subroutine. However, these inefficiencies are not limiting factors,
because of the even larger size of the search space involved in
typical applications.
Typically, the goal is not to search for an optimal CWS code, given
${\mathcal G}$ and ${\mathcal E}$, but rather, to determine if an $((n,K,d))$ code exists when $n$ and $K$ are fixed.
When $K$ is fixed, finding a maximum clique is not
necessary; rather, a clique of size $K$ is desired. There are ${2^n
\choose K}$ such possible cliques. Checking whether a size $K$
subgraph of a CWS clique graph is a clique just requires checking if
that subgraph is fully connected. Given an adjacency matrix for the
CWS clique graph (and constant time access to the matrix elements),
checking a subgraph takes order $K^2$ steps.
Searching over the space of all possible graphs ${\mathcal G}$ involves
searching a space of graphs with $n$ vertices, with a total of $2^{n
\choose 2}$ possibilities. Therefore, the complexity of searching for an
$((n,K,d))$ CWS code is roughly
\begin{equation}
K^2 2^{n \choose 2}{2^n \choose K}.\label{complexity}
\end{equation}
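To get a feeling for the size of expression~(\ref{complexity}), the count can be evaluated numerically; the following one-liner is an illustrative calculation only:

```python
from math import comb

def cws_search_cost(n, K):
    """Rough operation count K^2 * 2^C(n,2) * C(2^n, K) from the text."""
    return K * K * 2 ** comb(n, 2) * comb(2 ** n, K)

print(cws_search_cost(5, 2))   # 2031616 -- already over two million
print(cws_search_cost(8, 4))   # vastly larger still
```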
However, several practical improvements allow this search space to be
pruned usefully. First, not all graphs ${\mathcal G}$ need be
considered; only those which are inequivalent under local Clifford
(LC) operations need be checked. The LC orbits of graphs are well
understood, and efficient algorithms exist to check for LC equivalence
\cite{Danielsen:05a,Danielsen:05b,VanDenNest:04a}. Therefore, the
factor $2^{n \choose 2}$ can be significantly reduced. A lower bound
on the number of LC inequivalent graphs is given in
\cite{Bahramgiri:06}, based on the number of non-isomorphic tree
graphs, which roughly scales as $3^{n}$. This reduction has played a
key role in allowing us to employ the {\sc cws-maxclique} algorithm on
spaces with parameters up to $n=11$ and $K=32$. However, no suitable
upper bound is presently known, which would give a quantitative
estimate of the extent of the search space reduction due to LC
equivalence.
A second practical improvement comes from intrinsic properties of CWS
codes, which rule out existence of codes of certain $((n,K,d))$
parameters, and relate the existence of certain parameter values with
the existence of others. We will return to discuss these structure
theorems in Section~\ref{sec:struct}.
\section{Boolean Functions and Classical Codes}
\label{sec:bfunc}
The CWS construction unifies all known additive and non-additive
quantum error correction codes of good parameters, including both
degenerate and non-degenerate codes. An alternative
framework (``AC06'') for non-degenerate codes has been presented by
Aggarwal \& Calderbank \cite{Calderbank:06a}, based on a
correspondence between Boolean functions and projection operators.
Because AC06 implies a search algorithm for quantum codes which is in
a sense the reverse of that employed above, in {\sc cws-maxclique}, it
is interesting to consider the differences.
In this section we study the relationship between AC06 and the CWS
construction, by linking the AC06 Boolean function, which we interpret
to specify a certain classical code, to the classical code $\mathcal{C}$
used in the CWS construction. The components of the AC06 construction can
be naturally associated with those of the CWS construction. In this way,
we show that AC06 codes are spanned by a set of stabilizer states
generated from a single state and a set of Pauli operators. Therefore,
AC06 codes can be described completely, and in our opinion more transparently,
as CWS codes.
That this identification between AC06 and CWS is natural was mentioned
previously
\cite{CSSZ:07}, but the transform required has not been presented before.
It is well known that any stabilizer state is
equivalent under some LC transform to a graph state. Thus, supposing
that a local Clifford operation maps the AC06 stabilizer state to a graph
state, it would be nice if this Clifford also described
a transform from the Boolean function $f$ to the binary classical code
${\mathcal C}$ of the CWS construction.
Below, we show this mapping indeed exists, up to a technical subtlety
with regard to the choice of the generating set for the stabilizer.
The AC06 framework is not entirely complete, since degenerate codes cannot
be described as presented in \cite{Calderbank:06a}.
Degenerate codes may, in some cases, outperform the best known nondegenerate
codes. One such example may be the $[[25,1,9]]$ code obtained by
concatenating the $[[5,1,3]]$ code with itself: it is the best known
$[[25,1]]$ code, it is degenerate, no nondegenerate $[[25,1,9]]$ code is
known, and it has the highest possible minimum distance \cite{Grassl:tables}.
We take the constraints given for degenerate codes in the CWS
construction and map these backwards to give new constraints for
degenerate codes in the AC06 framework.
Given a complete AC06 framework which includes both
non-degenerate and degenerate codes, we can then compare and contrast
the computational cost of the CWS and AC06 approaches for seeking
optimal parameter quantum codes.
When the search goal is to find an optimal
$((n,K,d))$ code for fixed $n$ and $K$, the AC06 framework seems at first to
involve a search over possibly $2^{2^n}$ Boolean functions, while {\sc
cws-maxclique} involves a search over $2^{n \choose 2}$ possible
graphs. This appears to give significant advantage to {\sc
cws-maxclique}. However, we find that with careful analysis of AC06,
and by extending it to include degenerate codes, the two
search algorithms have comparable complexity.
\subsection{AC06 quantum error-correcting codes are CWS codes}
An $n$-variable Boolean function is a mapping $f:\{0,1\}^n\rightarrow \{0,1\}$ that maps a binary $n$-vector
${\bf v}=(v_1,\dots,v_n)$ to a bit $f(v_1,\dots,v_n)$. A Boolean function is nonzero if there exists some
${\bf v}$ such that $f({\bf v})=1$. A Boolean function is naturally associated with a classical code
\begin{equation}
{\cal C}_f=\{ {\bf c}\in\{0,1\}^n\ |\ f({\bf c})=1\}.
\end{equation}
A nonzero Boolean function $f$ can be represented as
\begin{equation}
f({\bf v})=\sum_{{\bf c}\in {\cal C}_f} v_1^{c_1}v_2^{c_2}\dots v_n^{c_n},
\end{equation}
where $v_i^1=v_i$ and $v_i^0=\bar{v_i}=v_i\oplus 1$. The summation is taken to be modulo $2$, i.e. XOR.
The weight of a Boolean function $f$ is $|{\cal C}_f|$.
The complementary set of a nonzero $n$-variable Boolean function $f({\bf v})$ is defined by
\begin{equation}
Cset_f=\{{\bf a}\in \{0,1\}^n\ |\ \sum_{{\bf c}\in {\cal C}_f} f({\bf c})f({\bf c}\oplus {\bf a})=0\}.
\end{equation}
The complementary set is simply the set of vectors ${\bf a}$ such that
${\cal C}_f\cap ({\cal C}_f\oplus {\bf a})=\emptyset$, i.e. it is the set of (classical) detectable errors of
${\cal C}_f$, since no codeword is mapped back into the code by ${\bf a}$.
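These definitions are easily made concrete. The following Python sketch computes ${\cal C}_f$ and $Cset_f$ by direct enumeration; it is illustrative only, and the three-variable majority function is an arbitrary choice of $f$:

```python
from itertools import product

def code_of(f, n):
    """C_f: the n-bit strings on which f evaluates to 1."""
    return {c for c in product((0, 1), repeat=n) if f(c)}

def cset_of(f, n):
    """Cset_f: shifts a for which C_f and C_f + a are disjoint,
    i.e. the classical errors detected by C_f."""
    Cf = code_of(f, n)
    shift = lambda a: {tuple(x ^ y for x, y in zip(a, c)) for c in Cf}
    return {a for a in product((0, 1), repeat=n) if not (Cf & shift(a))}

maj = lambda v: v[0] + v[1] + v[2] >= 2   # 3-variable majority function
Cf = code_of(maj, 3)                       # weight wt(maj) = |C_f| = 4
Cs = cset_of(maj, 3)                       # 000 is never in Cset_f
```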
\begin{definition}[Definition 6 of \cite{Calderbank:06a}]
Let $P$ and $Q$ be projection operators on a Hilbert space $H$ with $K=\textrm{image}(P)$ and
$L=\textrm{image}(Q)$. Then
\begin{itemize}
\item $P<Q$ iff $K\subset L$ and $K\neq L$
\item $P\vee Q$ is the projection of $H$ onto the span $K\vee L$, the smallest subspace of $H$ containing both $K$ and $L$
\item $P\wedge Q$ is the projection of $H$ onto $K\cap L$
\item $\bar{P}$ is the projection of $H$ onto $K^\perp$
\item $P\oplus Q=(P\wedge\bar{Q})\vee(\bar{P}\wedge Q)$.
\end{itemize}
\end{definition}
\begin{definition}[Definition 7 of \cite{Calderbank:06a}]
Given an arbitrary Boolean function $f(v_1,\dots,v_n)$, the projection function
$f(P_1,P_2,\dots,P_n)$ is the expression in which $v_i$ in the Boolean function is replaced by the projection
operator $P_i$, multiplication
(AND) in the Boolean logic is replaced by the meet operation $P\wedge Q$ in the projection logic, summation (OR)
in the Boolean logic is replaced by the join operation $P\vee Q$ in the projection logic, and the NOT
operation in the Boolean logic is replaced by the not operation $\bar{P}$ in the projection logic.
Note that summation modulo $2$ (XOR) is replaced by the corresponding operation $P\oplus Q$ in the
projection logic.
\end{definition}
\begin{theorem}[Theorem 1 of \cite{Calderbank:06a}]
If $(P_1,P_2,\dots,P_n)$ are pairwise commutative projection operators of dimension $2^{n-1}$ such that
$(P_1P_2\dots P_n)$, $(P_1P_2\dots \bar{P_n})$, \dots, $(\bar{P_1}\bar{P_2}\dots\bar{P_n})$ are all one-dimensional
projection operators and $H$ is of dimension $2^n$, then $P_f=f(P_1,P_2,\dots,P_n)$ is an orthogonal projection
on a subspace of dimension $K=\textrm{Tr}(P_f)=\textrm{wt}(f)$.
\end{theorem}
Let $({\bf a}|{\bf b})$ denote the concatenation of two $n$-bit binary vectors ${\bf a}$ and ${\bf b}$.
The symplectic inner product of $2n$-bit binary vectors $({\bf a}|{\bf b})$ and $({\bf a}'|{\bf b}')$ is
\begin{align}
({\bf a}|{\bf b})\odot ({\bf a}'|{\bf b}') & = ({\bf a}|{\bf b})\left[\begin{array}{cc} 0 & I\\ I & 0\end{array}\right] ({\bf a}'|{\bf b}')^T \\
& = {\bf a}\cdot {\bf b}'\oplus {\bf a}'\cdot {\bf b}.
\end{align}
The symplectic weight of a vector $({\bf a}|{\bf b})$ is the number of indices $i$ at which either $a_i$ or
$b_i$ is nonzero. $E_{({\bf a}|{\bf b})}$ is defined by $e_1\otimes e_2\otimes \dots \otimes e_n$
where $e_i$ equals $I$ if $(a_i,b_i)=(0,0)$, $X$ if $(a_i,b_i)=(1,0)$,
$Z$ if $(a_i,b_i)=(0,1)$, and $Y$ if $(a_i,b_i)=(1,1)$ and the associated projector is
$P_{({\bf a}|{\bf b})}=\frac{1}{2}(I+E_{({\bf a}|{\bf b})})$.
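These conventions translate directly into code. The following Python sketch (illustrative only) implements the symplectic inner product, the symplectic weight, and the map $({\bf a}|{\bf b})\mapsto E_{({\bf a}|{\bf b})}$; it also checks the standard fact that the symplectic inner product is $1$ exactly when the corresponding Pauli operators anticommute:

```python
def symplectic_ip(ab1, ab2):
    """(a|b) . (a'|b') = a . b' + a' . b (mod 2)."""
    (a1, b1), (a2, b2) = ab1, ab2
    return (sum(x * y for x, y in zip(a1, b2))
            + sum(x * y for x, y in zip(a2, b1))) % 2

def symplectic_weight(ab):
    """Number of positions i where a_i or b_i is nonzero."""
    a, b = ab
    return sum(1 for x, y in zip(a, b) if x or y)

def pauli_string(ab):
    """E_(a|b): I, X, Z, Y for (a_i,b_i) = (0,0),(1,0),(0,1),(1,1)."""
    table = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
    a, b = ab
    return ''.join(table[xy] for xy in zip(a, b))

X1 = ((1, 0, 0), (0, 0, 0))                  # X on qubit 1 of 3
Z1 = ((0, 0, 0), (1, 0, 0))                  # Z on qubit 1 of 3
assert symplectic_ip(X1, Z1) == 1            # X and Z anticommute
assert pauli_string(((1, 1, 0), (0, 1, 1))) == 'XYZ'
assert symplectic_weight(((1, 1, 0), (0, 1, 1))) == 3
```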
The next definition specifies the ingredients of an AC06 quantum error-correcting code (AC06 QECC).
Theorem 1 of \cite{Calderbank:06a} defines a quantum code, but our definition of an AC06 QECC is based instead
on Theorem 2 of \cite{Calderbank:06a}, which provides sufficient conditions for the code
to be an error-correcting code.
\begin{definition}[AC06 QECC]
Let $f$ be an $n$ variable Boolean function and let $x_1,x_2,\dots,x_{2n}$
be a list of the $n$-bit column vectors of an $n\times 2n$ matrix $A_f$. An AC06 QECC
with data $(f,\{x_i\}_{i=1}^{2n})$ is the image of the projector $f(P_1,P_2,\dots,P_n)$,
where (i) the rows of $A_f$ are linearly independent with pairwise symplectic inner product zero
and (ii) $P_i=P_{({\bf a}_i|{\bf b}_i)}$ is associated to the $i$th row of $A_f$.
\end{definition}
\begin{theorem}[Theorem 2 of \cite{Calderbank:06a}]
Let $D_d$ be the set of all $2n$-bit vectors of symplectic weight less than $d$.
An AC06 QECC with data $(f,\{x_i\}_{i=1}^{2n})$ is an $((n,K,d))$ quantum code if
$f$ has weight $K$ and $\{ A_fw^T\ |\ w\in D_d \}\subseteq Cset_f$.
\end{theorem}
The main result of this subsection, stated and proven next, is that AC06 QECCs are CWS codes.
\begin{theorem}\label{thm:AC06eqCWS}
An AC06 quantum error-correcting code is a codeword stabilized quantum code.
\end{theorem}
\begin{IEEEproof}
Consider an AC06 QECC with data $(f,\{x_i\}_{i=1}^{2n})$. The matrix $A_f$, whose $2n$ columns are
$\{x_i\}_{i=1}^{2n}$, has linearly independent rows with pairwise symplectic inner products that are zero.
Therefore, $A_f$ corresponds naturally to a group generated by $n$ pairwise commuting operators $\{g_i\}_{i=1}^n$
from the $n$ qubit Pauli group. Let $|S_{\bf c}\rangle$ be the state stabilized by
$S=\langle (-1)^{c_i}g_i\rangle_{i=1}^n$ for some $n$-bit vector ${\bf c}$.
A nonzero Boolean function $f$ can be represented as
\begin{equation}
f({\bf v})=\sum_{{\bf c}\in {\cal C}_f} v_1^{c_1}v_2^{c_2}\dots v_n^{c_n},
\end{equation}
which corresponds, in this case, to the projector
\begin{equation}
f(P_1,P_2,\dots,P_n) = \sum_{{\bf c}\in {\cal C}_f} P_1^{c_1}P_2^{c_2}\dots P_n^{c_n},
\end{equation}
where $P_i^0=\bar{P_i}=\frac{1}{2}(I-g_i)$ and $P_i^1=P_i=\frac{1}{2}(I+g_i)$.
The term $P_1^{c_1}P_2^{c_2}\dots P_n^{c_n}$ projects onto the state $|S_{\bar{{\bf c}}}\rangle$,
where $\bar{\bf c}=\bar{c_1}\bar{c_2}\dots\bar{c_n}$, therefore
\begin{equation}
f(P_1,P_2,\dots,P_n) = \sum_{{\bf c}\in \bar{\cal C}_f} |S_{{\bf c}}\rangle\langle S_{{\bf c}}|.
\end{equation}
Hence, the AC06 QECC is spanned by a set of eigenstates of a stabilizer $S$, each of which
has a vector of eigenvalues given by a codeword ${\bf b}$ in the inverted code
$\bar{\cal C}_f$, where $b_i=0$ indicates a $+1$ eigenvalue for $g_i$ and $b_i=1$ indicates a $-1$ eigenvalue
for $g_i$.
To establish correspondence with a CWS code, we need to show that there is a mapping
$W$ from $n$-bit strings ${\bf c}$ to Pauli operators $W({\bf c})$ such that
$|S_{\mathbf{c}}\rangle=W(\mathbf{c})|S_{\bf 00\dots 0}\rangle$. Indeed, there is a
Clifford circuit $U$ that encodes $U|\underbrace{00\dots 0}_n\rangle=|S_{\bf 00\dots 0}\rangle$
and acts like $UZ_iU^\dag=g_i$ for $i=1,\dots,n$. Therefore,
$UX_iU^\dag$ anticommutes with $g_i$ and commutes with all $g_j$, $j\neq i$.
By this observation, the map
\begin{equation}
W(\mathbf{c}):=\prod_{i=1}^n \left[ UX_iU^\dag \right]^{c_i}
\end{equation}
has the desired properties, and we obtain the set of CWS word operators
$W(\bar{\mathcal C}_f)$ by applying $W$ to each codeword in $\bar{\mathcal C}_f$.
Therefore, the AC06 QECC with data $(f,\{x_i\}_{i=1}^{2n})$ is associated with
a CWS code (not in standard form) with stabilizer state $|S\rangle$ corresponding to $A_f$, classical
code $\bar{\mathcal C}_f$, and word operators $W(\bar{\mathcal C}_f)$.
\end{IEEEproof}
The mapping can be inverted to obtain data for an AC06 QECC from a CWS code
as well. There is freedom in the choice of generating set for the stabilizer
state in the CWS construction so it may be necessary to conjugate by a Pauli
operator to fix the signs of the stabilizer generators to $+1$ before mapping
them to the column vectors $\{x_i\}_{i=1}^{2n}$.
\begin{example}\label{ex:ac06tocws}
This detailed example demonstrates the mapping given in the proof of Theorem~\ref{thm:AC06eqCWS}
from an AC06 QECC $(f,\{x_i\}_{i=1}^{2n})=(f,A_f)$ to a CWS code $(S_A,{\mathcal C}',W(\bar{\mathcal C}_f))$.
The AC06 $((5,6,2))$ code is given by the Boolean function
\begin{align*}
f(v) & = v_1v_2v_3 + v_3v_4v_5 + v_2v_3v_4 \\
& + v_1v_2v_5 + v_1v_4v_5 + v_2v_3v_4v_5
\end{align*}
and the matrix
\begin{equation*}
A_f = \left[\begin{array}{cccccccccc}
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \end{array}\right].
\end{equation*}
First, consider the Boolean function $f$.
Indeed, $f(v)$ is a function of $n=5$ variables and has weight $K=6$. This can
be seen by writing $f$ in the form
\begin{align*}
f(v) & = \sum_{\mathbf{c}\in \{0,1\}^n} f(\mathbf{c})v_1^{c_1}\dots v_n^{c_n}
= \sum_{\mathbf{c}\in {\mathcal C}_f} v_1^{c_1}\dots v_n^{c_n} \\
& = v_1v_2v_3\bar{v_4}\bar{v_5}+\bar{v_1}\bar{v_2}v_3v_4v_5+\bar{v_1}v_2v_3v_4\bar{v_5}\\
& +v_1v_2\bar{v_3}\bar{v_4}v_5+v_1\bar{v_2}\bar{v_3}v_4v_5+\bar{v_1}v_2v_3v_4v_5
\end{align*}
where $v_i^{c_i}$ equals $v_i$ if $c_i=1$ and $\bar{v_i}$ if $c_i=0$.
The classical code ${\mathcal C}_f$ is the set of $n$-bit strings on which $f$
evaluates to $1$, i.e. $11100$, $00111$, $01110$, $11001$, $10011$, and
$01111$.
Second, observe that the rows of $A_f$ are indeed linearly independent and
pairwise orthogonal in the symplectic inner product. The rows of $A_f$
correspond to stabilizer generators $E_1=IZYYZ$, $E_2=ZYYZI$, $E_3=YYZIZ$,
$E_4=YZIZY$, and $E_5=IZIXX$, respectively. These are the generators of the
stabilizer $S_A$ for the state $|S\rangle$. The AC06 construction uses the fact
that the projectors $P_y=\frac{1}{2}(I+E_y)$, $y=1,\dots,n$, are pairwise
commutative projection operators of dimension $2^{n-1}$ and
$P_1P_2\dots P_n$, $P_1P_2\dots \bar{P_n}$, \dots,
$\bar{P_1}\bar{P_2}\dots\bar{P_n}$ are all $1$-dimensional projection
operators, so that $P_f:=f(P_1,\dots,P_n)$ is a projector onto a subspace
of dimension $\textrm{wt}(f)$ (Theorem 1 of \cite{Calderbank:06a}), where the Boolean operations
are replaced by the operations defined in Definition 6 of
\cite{Calderbank:06a}. Considering just the first term of $P_f$, we see that
\begin{align*}
P_1\wedge & P_2\wedge P_3\wedge \bar{P_4}\wedge\bar{P_5} \\
& = P_1P_2P_3(I-P_4)(I-P_5) \\
& = \frac{1}{2^5} (I+E_1)(I+E_2)(I+E_3)(I-E_4)(I-E_5)
\end{align*}
is a projector onto a stabilizer state $W_1|S\rangle$ where $W_1$ is a
Pauli operator that commutes with $\{E_1,E_2,E_3\}$ and anticommutes with
$\{E_4,E_5\}$, i.e. $W_1=Z_5$. Notice that the partition of the generators
into commuting and anticommuting sets is given by the first codeword
$11100$ of ${\mathcal C}_f$. The terms are combined using
the operation $P\oplus Q=P+Q-2PQ$, which equals $P+Q$ when the projectors
are pairwise orthogonal, as they are when $P$ and $Q$ project onto stabilizer
states. Therefore, $P_f=\sum_{i=1}^K W_i|S\rangle\langle S|W_i^\dag$ where
the $W_i$ are chosen to commute or anticommute with the generators of the
stabilizer of $|S\rangle$ according to the codewords of ${\mathcal C}_f$. We
conclude that the AC06 $((5,6,2))$ code is a CWS code with stabilizer
$\langle IZYYZ, ZYYZI, YYZIZ, YZIZY, IZIXX\rangle$ and word operators
$\{ Z_5,Z_3,Z_4,Z_1,Z_2,X_3X_4X_5\}$ that correspond to the classical code
${\mathcal C}'=\bar{\mathcal C}_f=\{00011,11000,10001,00110,01100,10000\}$
specifying the generators' signs for each basis state of the quantum code.
We can arrange for the all-zeros codeword to be in ${\mathcal C'}$ by
multiplying each word operator by $X_3X_4X_5$ (and, hence, adding $10000$
to each codeword in ${\mathcal C}'$). This is a local operation, so the
code parameters do not change.
\end{example}
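The word operators claimed in Example~\ref{ex:ac06tocws} can be verified mechanically: two Pauli strings anticommute iff they differ, with both non-identity, at an odd number of positions. The following Python check (illustrative) recovers the codewords of ${\mathcal C}'$ from the anticommutation patterns of the word operators against the generators:

```python
def anticommute(p, q):
    """1 if the Pauli strings p and q anticommute, else 0."""
    return sum(a != b and a != 'I' and b != 'I'
               for a, b in zip(p, q)) % 2

gens = ['IZYYZ', 'ZYYZI', 'YYZIZ', 'YZIZY', 'IZIXX']
# Z_5, Z_3, Z_4, Z_1, Z_2, X_3 X_4 X_5 from the example:
words = ['IIIIZ', 'IIZII', 'IIIZI', 'ZIIII', 'IZIII', 'IIXXX']
patterns = [''.join(str(anticommute(w, g)) for g in gens) for w in words]
assert patterns == ['00011', '11000', '10001',
                    '00110', '01100', '10000']   # the codewords of C'
```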
\subsection{Mapping from AC06 to the standard form of CWS}
Three distinct steps may be identified, in building a mapping between
the AC06 $(A_f,f)$ code, and the CWS $(\mathcal{G},\mathcal{C})$ code
in standard form,
\begin{equation}
(A_f,f) \stackrel{Stab}{\longrightarrow} (S_A,{\mathcal C}') %
\stackrel{LC}{\longrightarrow} ({\mathcal G_A},{\mathcal C}') %
\stackrel{Gen}{\longrightarrow} ({\mathcal G},{\mathcal C}) %
\,.
\end{equation}
First, $(A_f,f)$ is re-written as a stabilizer $S_A$ and a classical
code ${\mathcal C}'$, using standard definitions. The subscript $A$
on $S_A$ reminds us that the stabilizer is generated by the generators
$g_A=\langle g_1,\ldots,g_n\rangle$, where each generator $g_k$
corresponds to a row of $A_f$. Second, a (non-unique) local Clifford
transform $L$ turns $S_A$ into $\mathcal{G}_A$, leaving ${\mathcal
C}'$ invariant. $\mathcal{G}_A$ is a graph state with generators
$Lg_AL^{\dagger}$. Third, careful choice of appropriate generators
turn the classical code ${\mathcal C}'$ into the ${\mathcal C}$ used
in the CWS construction. A fourth issue that arises is the
limitation on $f$ needed to allow degenerate codes to be considered.
These three steps and the degeneracy issue are discussed below, one at
a time.
\subsubsection{$(A_f,f) \stackrel{Stab}{\longrightarrow} (S_A,C')$}
We have already accomplished this step by way of Theorem~\ref{thm:AC06eqCWS},
but we review it quickly to show the entire chain of steps to achieve
standard form.
The $n\times 2n$ matrix $A_f$ describes the generators of a quantum
stabilizer state, which we may denote as $S_A$, when the left $n\times
n$ half is interpreted as describing $X$ Pauli terms, and the right
half, $Z$ Pauli terms, following the standard
prescription\cite{Nielsen00b}. Let the generators of this stabilizer
be $g_A=\langle g_1,\ldots,g_n\rangle$; each generator $g_k$
corresponds to a row of $A_f$. Let $|S\rangle$ be the quantum state
stabilized by $S_A$.
\def\>{\rangle}
\def\<{\langle}
The Boolean function $f$ defines a classical code consisting of $K$
bit strings $\mathbf{c}'_j = j_1\ldots j_n$; explicitly,
we may define
\begin{equation}
\mathcal{C}'=\{ \mathbf{c}'_j|f({\bar{\mathbf c}'_j})=1\}
\,,
\label{Cprime}
\end{equation}
where ${\bar{\mathbf c}'_j}$ denotes the complement of ${{\mathbf
c}'_j}$ (needed because of how $f$ is defined in AC06, see
Example~\ref{ex:ac06tocws}).
In the CWS standard form, the all-zeros codeword is in the classical code
${\mathcal C}'$, i.e. the state $|S\rangle$ is in the code. This can be
arranged by choosing one of the states $|S_{\mathbf{c}'_j}\>$ in the
code and applying to the whole code the local Pauli operation that maps
$|S_{\mathbf{c}'_j}\>$ to $|S\>$. Since this has no effect on the stabilizer
$S_A$, and the resulting code is locally equivalent to the original code,
we now assume without loss of generality that ${\mathcal C}'$
contains the all-zeros codeword.
\subsubsection{$(S_A,{\mathcal C}') \stackrel{LC}{\longrightarrow}
({\mathcal G}_A,{\mathcal C}')$}
The second step needed is an intermediate, but simple map,
transforming $S_A$ into graph state form\cite{VanDenNest:04a}. This
can be done using Clifford operations on individual qubits (``LC
transformations''). Importantly, though, we must also keep track of
how ${\mathcal C}'$ transforms when the stabilizer $S_A$ is
transformed, since ${\mathcal C}'$ is partially defined in terms of
$S_A$.
Let $L=\bigotimes\limits_{i=1}^n L_i$ be the $n$-qubit operation given
by the tensor product of single qubit Clifford operations $L_i$.
When transformed by $L$, the generators of the stabilizer $S_A$ map to
become
\begin{equation}
\< g_1,...,g_n\> \rightarrow \< g'_1,...,g'_n\>
\,,
\label{LC}
\end{equation}
where $g'_i=Lg_iL^{\dagger}$. Since $L$ also transforms $w_j$ to
$w'_j=Lw_jL^\dagger$, it follows that the commutation relations of
$w'_j$ with $g'_k$ are the same as between $w_j$ and $g_k$. Thus, LC
transformations leave ${\mathcal C}'$ unchanged, mapping
$(S_A,{\mathcal C}')$ into $({\mathcal G}_A,{\mathcal C}')$. Again,
just as for $S_A$, the subscript $A$ on ${\mathcal G}_A$ reminds us
that the generators of this graph state are $Lg_AL^{\dagger}$, and
originate from $A_f$.
\subsubsection{$({\mathcal G}_A,{\mathcal C}') \stackrel{Gen}{\longrightarrow}
({\mathcal G},{\mathcal C})$}
The final step in transforming the quantum code into CWS form involves
nailing down a degree of freedom which allows ${\mathcal C}$ to be
changed, without changing the stabilizer, or the quantum code
specified. In particular, $\mathcal{C}'$ is dependent on the choice
of generators for ${\mathcal G}_A$. Let $R$ be a binary valued,
invertible $n\times n$ matrix $R_{ji}$, which transforms a generator
set $\< g_1,g_2,\ldots,g_n\>$ into $\< g'_1,g'_2,\ldots,g'_n\>$, where
\begin{equation}
g'_i= \prod_{j=1}^n g_j^{R_{ji}}
\,.
\end{equation}
We may keep track of this transform by rewriting ${\mathcal G}_A$ as
${\mathcal G}$, though, of course, the stabilizer (and thus the
corresponding graph) must be left unchanged when the generator set is
changed. Upon this transformation by $R$, the code ${\mathcal C}'$
must also be transformed, to keep the quantum code invariant.
Specifically, if $\mathcal{C}'$ is written as a $K\times n$ matrix,
then:
\begin{theorem}
The quantum code $({\mathcal G}_A,\mathcal{C}')$ is the same as the
quantum code $({\mathcal G},\mathcal{C}'R)$. That is, if the
stabilizer generators are changed by $R$, the code must also be
transformed by matrix multiplication by $R$.
\end{theorem}
\begin{IEEEproof}
We have $w_jg_kw_j=(-1)^{j_k}g_k$, and we want to calculate $j_t'$ given
by $w_jg_t'w_j=(-1)^{j_t'}g_t'$. Note
\begin{align*}
w_jg_t'w_j & = w_j\prod_{k=1}^n g_k^{R_{kt}}w_j = \prod_{k=1}^n w_jg_k^{R_{kt}}w_j \\
& =\prod_{k=1}^n (w_jg_kw_j)^{R_{kt}}= \prod_{k=1}^n ((-1)^{j_k}g_k)^{R_{kt}} \\
& =\prod_{k=1}^n (-1)^{j_kR_{kt}}g_k^{R_{kt}}=\left(\prod_{k=1}^n (-1)^{j_kR_{kt}}\right)\prod_{k=1}^n g_k^{R_{kt}} \\
& = (-1)^{\oplus_{k=1}^n j_kR_{kt}}\, g_t',
\end{align*}
which gives $j_t'=\oplus_{k=1}^n j_kR_{kt}$.
\end{IEEEproof}
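In matrix form, the theorem says that each codeword row vector of $\mathcal{C}'$ is right-multiplied by $R$ over GF(2). A small Python sketch makes this concrete; it is illustrative only, and the $3\times 3$ matrix $R$ below (encoding the generator change $g'_2=g_1g_2$ with $g'_1=g_1$, $g'_3=g_3$) is an arbitrary invertible choice:

```python
def gf2_right_multiply(C, R):
    """Transform each codeword c (a row of C) to c' with
    c'_t = XOR_k c_k R[k][t], i.e. C -> C R over GF(2)."""
    n = len(R)
    return [[sum(c[k] * R[k][t] for k in range(n)) % 2 for t in range(n)]
            for c in C]

# column t of R lists the exponents R_{kt} in g'_t = prod_k g_k^{R_{kt}}:
R = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]
C = [[0, 0, 0], [1, 0, 1], [0, 1, 1]]
assert gf2_right_multiply(C, R) == [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
```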
Essentially, this equivalence indicates that row reductions in the
symplectic $n\times 2n$ form of the stabilizer can leave the quantum
code invariant, if the same row reduction is done to the binary code.
Moreover, LC equivalence and the choice of generators of the
graph state do not change the error correcting property of the quantum
code. Thus, using a row reduction transform $R$, and letting ${\mathcal
C} = {\mathcal C'}R$, we conclude that $({\mathcal G},{\mathcal C})$
is a CWS code with dimension and distance identical to the original
AC06 code $(A_f,f)$.
It must be noted that the row reduction does change the errors (in
terms of binary strings) detected by the classical code. More
precisely, for a CWS code $({\mathcal G},{\mathcal C})$ in the
standard form that we have obtained from an AC06 code $(A_f,f)$,
we may define a corresponding $(A'_{f'},f')$ in the
language of AC06, by
\begin{eqnarray}
f'(\bar{\mathbf c}_j)&=& 1,\ \forall\ {\mathbf c}_j\in \mathcal{C}
\\ A'_{f'} &=& [I \, \Lambda]
\,,
\end{eqnarray}
where $I$ is the $n\times n$ identity matrix, and $\Lambda$ is the
adjacency matrix of the graph $\mathcal{G}$.
The complementary set $Cset_{f'}$ of the Boolean function $f'$ is no longer the same
as the complementary set $Cset_f$ of the Boolean function $f$, but the two
sets have the same size, due to the linearity of the transform relating
$\mathcal{C'}$ and $\mathcal{C}$. Moreover, given quantum code distance $d$, the set of
induced classical error strings $Cl_{\mathcal{G}}({\cal E})$ for
$({\mathcal G},{\mathcal C})$ is indeed the AC06 error set, specified as
$\{x_1,x_2\ldots x_{2k}\}*w^T$ in Theorem~2 of \cite{Calderbank:06a},
a subset of the complementary set $Cset_{f'}$ of $f'$.
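As a concrete illustration (our choice of graph, not an example from the paper), the following sketch assembles the binary matrix $A'_{f'}=[I \, \Lambda]$ for a hypothetical $5$-vertex ring graph:

```python
import numpy as np

# Sketch (illustrative choice, not from the paper): build A'_{f'} = [I  L]
# for a 5-vertex ring graph, whose adjacency matrix L is symmetric with
# zero diagonal.
n = 5
Lam = np.zeros((n, n), dtype=int)
for v in range(n):
    Lam[v, (v + 1) % n] = Lam[(v + 1) % n, v] = 1   # ring edges

A = np.hstack([np.eye(n, dtype=int), Lam])          # the n x 2n matrix [I L]

assert A.shape == (n, 2 * n)
assert np.array_equal(Lam, Lam.T) and not Lam.diagonal().any()
```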
\subsubsection{Degenerate codes}
The AC06 framework does not discuss how to allow for
degenerate quantum codes, whereas the CWS construction includes these
explicitly. The above mapping of AC06 to the standard form CWS codes
applies only to non-degenerate codes, but the method indicates how
degenerate codes can also be constructed using the AC06 framework, as
follows. Specifically, one must appropriately constrain the Boolean
function $f$ (i.e., $\mathcal{C}'$).
All degenerate quantum codes can be expressed using a certain form for
${\mathcal C}'$, illustrated by the following. Consider a degenerate
code of distance $d$, given stabilizer $S$. Define the set
\begin{eqnarray}
S_d &=& \{E | E\in S ~{\rm and}~\text{wt}(E)<d \}
\nonumber \\
&& \cup\ \{-E | E\in -S ~{\rm and}~\text{wt}(E)<d \}
\,,
\end{eqnarray}
where $\text{wt}(E)$ gives the weight of the Pauli operator $E$. If
the rank of $S_d$ is $r$, then $r$ independent elements $g_1,\ldots
g_r\in S_d$ can be chosen, such that
$\<g_1,\ldots,g_r,g_{r+1},\ldots,g_n\>$ generate $S$, but
$g_{r+1},\ldots g_n$ are not in $S_d$. According to the CWS
construction described in the first step above, these generators imply
a representation of a classical code $\mathcal{C'}$ with each codeword
being $0$ for the first $r$ coordinates. In other words, $\<
g_1,\ldots ,g_r\>$ stabilizes $(A_f,f)$. Due to the one-to-one
correspondence between $f$ and $\mathcal{C}'$, this gives a structure
for the values of $f$, from which a search for degenerate codes can
initiate.
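As an illustration of the set $S_d$ (a toy example of our own, with signs ignored, which does not affect weights), the following sketch enumerates a small stabilizer in symplectic form and computes the rank $r$ of its low-weight elements:

```python
from itertools import product

# Toy example (ours, not from the paper): stabilizer generated by
# ZZII, IIZZ, XXXX in symplectic (x|z) form; collect elements of
# weight < d and compute the GF(2) rank r of that set.
gens = [((0, 0, 0, 0), (1, 1, 0, 0)),   # ZZII
        ((0, 0, 0, 0), (0, 0, 1, 1)),   # IIZZ
        ((1, 1, 1, 1), (0, 0, 0, 0))]   # XXXX
n, d = 4, 3

def weight(x, z):
    # number of qubits on which the Pauli acts nontrivially
    return sum(1 for xi, zi in zip(x, z) if xi or zi)

S_d = []
for bits in product([0, 1], repeat=len(gens)):   # every generator product
    x, z = [0] * n, [0] * n
    for b, (gx, gz) in zip(bits, gens):
        if b:
            x = [u ^ v for u, v in zip(x, gx)]
            z = [u ^ v for u, v in zip(z, gz)]
    if 0 < weight(x, z) < d:
        S_d.append(x + z)

def gf2_rank(vectors):
    # incremental GF(2) "xor basis"
    basis = []
    for v in vectors:
        num = int("".join(map(str, v)), 2)
        for b in basis:
            num = min(num, num ^ b)
        if num:
            basis.append(num)
    return len(basis)

r = gf2_rank(S_d)
assert r == 2   # here S_d contains exactly ZZII and IIZZ
```

Here $r=2$, so for a distance-$3$ search one would take $g_1,g_2$ from $S_d$ and complete them to a full generating set, as described above.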
\subsection{The algorithm \& complexity}
Given the equivalence between AC06 and CWS codes, it is insightful to
compare the algorithms implied by each for finding new codes. Both
approaches construct a quantum code $(\mathcal{G},\mathcal{C})$, but
each analyze and calculate from different starting points. The search
algorithm based on the CWS construction starts from the analysis of
the structure of a given $\mathcal{G}$, takes a specification of the
desired properties of $\mathcal{C}$, and searches for a satisfactory
$\mathcal{C}$, e.g., using the maximum clique algorithm. In contrast,
the search algorithm based on the AC06 framework starts from the
analysis of the structure of a given $f$ (i.e., $\mathcal{C}'$), and
searches for a stabilizer state $A_f$ which is LC equivalent to some
graph state $\mathcal{G}$. In this sense, the two methods are
mirror images of each other.
How do the computational complexities of the two approaches compare?
AC06 implies an algorithm starting from a given classical code $f$ to
find the quantum code $(A_f,f)$. This suggests a need to consider
$2^{2^n}$ different Boolean functions. In contrast, the {\sc
cws-maxclique} algorithm starts from $2^{n \choose 2}$ possible
graphs (or ideally, a smaller set of just the different ones).
However, this comparison is incomplete. In practice, if we really want
to find a particular $((n,K,d))$ code, then there will be
${2^n\choose K}$ classical codes to look at, and for each code the
AC06 algorithm needs to search for $\sim 2^{2n^2}$ possible sets of
strings. For a given classical code, to check whether a particular
string is in the complementary set $Cset_f$ of the code takes $K^2$ steps. And to check
whether a chosen set of $2n$ strings gives a valid stabilizer state
$[A\, B]$ needs $n^2$ steps. Therefore, with the AC06 algorithm, the
complexity of searching for an $((n,K,d))$ code is roughly
\begin{equation}
n^2 K^2 2^{2n^2}{2^n \choose K}
\,.
\end{equation}
This is comparable but slightly worse than the result obtained for the {\sc
cws-maxclique} algorithm, in Eq.~(\ref{complexity}).
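For concreteness, the raw counts above can be evaluated exactly for small parameters. The sketch below (our arithmetic, using the rough estimates quoted in the text) compares the $2^{2^n}$ Boolean functions against the $2^{n\choose 2}$ graphs, and evaluates the AC06 cost estimate for one small case:

```python
import math

# Sketch (our arithmetic): compare the raw search-space sizes quoted in
# the text for small n, and evaluate the rough AC06 cost estimate
# n^2 K^2 2^{2 n^2} C(2^n, K) for one small parameter choice.
for n in (3, 4, 5):
    booleans = 2 ** (2 ** n)                # Boolean functions on n bits
    graphs = 2 ** math.comb(n, 2)           # labeled graphs on n vertices
    assert booleans > graphs                # the AC06 starting set is larger
    print(n, booleans, graphs)

n, K = 5, 2
ac06_cost = n**2 * K**2 * 2**(2 * n**2) * math.comb(2**n, K)
print(f"rough AC06 estimate for (({n},{K},d)): {ac06_cost:.3e}")
```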
Some simplifications used in {\sc cws-maxclique} may also apply to
AC06; in particular, a reduction of the code search space due to LC
invariance should be considered. In practice, in order to find all
quantum codes $(A_f,f)$, we only need to consider the codes
$\mathcal{C}'$ equivalent under column reductions. For $K\geq n$, this
LC equivalence is the same as equivalence classification of all the
$((K,n'))$ binary linear codes, where $n'\leq n$. For fixed $n'$, the
number of such codes is given by the Gaussian binomial factor ${2^K
\choose n'}_{Gaussian}$ \cite{MacWilliams:77}. Note this
classification gives not only all the $((n',K))$ codes $\mathcal{C}'$
we need to start with, but also all the $((n',K'\leq K))$ codes
$\mathcal{C}'$. For instance, the $((K=4,n'=3))$ code
$\{(0,0,0,0),(0,0,0,1),(0,0,1,0)\}$, viewed by column, is an
$((n'=3,K'=3))$ code $\{(0,0,0),(0,0,1),(0,1,0)\}$, but not an
$((n'=3,K=4))$ code.
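The subspace counts can be computed with the standard $q$-binomial formula. In the sketch below (our code) we take the Gaussian binomial above to mean the number of $n'$-dimensional subspaces of $\mathrm{GF}(2)^K$, i.e.\ $\binom{K}{n'}_2=\prod_{i=0}^{n'-1}(2^{K-i}-1)/(2^{n'-i}-1)$; the paper's notation differs slightly, so this identification is our reading:

```python
# Sketch (our code, standard q-binomial formula): [K choose m]_2 counts
# the m-dimensional subspaces of GF(2)^K; we take this to be the count
# intended by the Gaussian binomial notation above.
def gaussian_binomial(K, m, q=2):
    num = den = 1
    for i in range(m):
        num *= q ** (K - i) - 1
        den *= q ** (m - i) - 1
    return num // den   # q-binomials are always integers

assert gaussian_binomial(4, 2) == 35   # 35 planes through 0 in GF(2)^4
assert gaussian_binomial(4, 0) == 1 and gaussian_binomial(4, 4) == 1
print(gaussian_binomial(5, 2))         # -> 155
```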
\section{Discussion}
{\sc cws-maxclique} is an algorithm which may be usefully employed in
the search for new quantum codes, both additive and non-additive, as
described by the CWS construction. Given $n$ and $K$, the algorithm
can be used to search for an $((n,K,d))$ code
$(\mathcal{G},\mathcal{C})$, with a complexity which grows roughly as
$2^{n^2}$. In practice, by employing a number of search space
simplifications, by pruning the set of graphs ${\mathcal G}$ to
explore based on LC equivalences, and by taking guidance from
structural theorems about CWS codes, {\sc cws-maxclique} and
randomized variants of it have been used realistically~\cite{CSSZ:07}
to explore codes with parameters up to $n=11$ and $K=32$.
Many interesting questions arise in the construction of this
algorithm. For example, it is likely that {\sc cws-maxclique} can be
improved with more memory efficient implementations; reductions to
other NP-complete problems may also allow faster exploration of
specific search spaces. Moreover, many of the simplifications used
in {\sc cws-maxclique} should also be applicable to the algorithm
introduced by the AC06 framework; and in return, any code isomorphisms
useful in simplifying AC06 should apply to {\sc cws-maxclique}.
CWS codes present a rich structure, only partially described by the
three structural theorems presented here. We believe that there are
promising strategies for identifying new non-additive quantum codes
based on expanding known additive codes, but such a strategy has to be
executed carefully, because of limitations imposed by the theorems.
Nevertheless, given an optimal $((n,K,d))$ additive code, there is
hope for success with a strategy of adding codewords to $\mathcal{C}$
to search for $((n,K'\!>\!K,d))$ CWS codes, because of potential LU
equivalences with some non-additive code. This hope suggests that it
is worthwhile both to further explore conditions under which two CWS
codes can be linked by an LU transform, and to better understand the
structural properties of CWS codes constructed from nonlinear codes,
so that more new quantum codes can be found. Indeed, one successful
application of this idea results in new CWS codes encoding several
more qubits than the best known codes \cite{Grassl:08b}. It is an open
question to determine if these nonadditive ``quantum Goethals-Preparata
codes'' are LU equivalent to any additive quantum code.
Finally, despite the encompassing success of the CWS construction in
describing all known non-additive codes with good parameters, we point
out that there do exist codes, such as $((7,2,3))$ and $((9,2,3))$
codes, which are outside of the CWS construction. Since these codes
are not LU equivalent to any CWS code, further new ideas will need to
be developed to reach outside the stabilizer framework, for a complete
understanding of quantum error correction codes.
\section*{Acknowledgments}
JAS was supported by ARO contract DAAD19-01-C-0056, and AWC was
supported in part by the JST CREST Urabe Project and an internship
at the IBM T. J. Watson Research Center. We gratefully acknowledge
comments and suggestions from V. Aggarwal and A. R. Calderbank.
\bibliographystyle{IEEEtran}
\section{Introduction}
Quantum error correcting codes play a significant role in quantum
computation and quantum information. While considerable understanding
has now been obtained for a broad class of quantum codes, almost all
of this has focused on stabilizer codes, the quantum analogues of
classical additive codes. Recently, a number of {\em nonadditive}
quantum codes have been discovered, with superior coding parameters
$((n,K,d))$, where $n$ is the number of physical qubits, $K$ is the dimension of
the encoded space, and $d$ is the code distance \cite{CSSZ:07,Yu:07a,Yu:07b}.
These new codes have inspired a search for more high-performance
non-additive quantum codes \cite{Grassl:08b}, a desire to understand how
non-additive codes relate to additive codes, and how these may be
understood through a cohesive set of basic principles.
A systematic construction, providing a unifying approach to both
additive and nonadditive quantum error-correcting codes, has been
obtained \cite{CSSZ:07}. This {\em codeword stabilized quantum codes}
(``CWS'' quantum codes) approach constructs the desired quantum code
based on a binary classical code $\mathcal{C}$, chosen to correct a
certain error pattern induced by a self-dual additive quantum code
which is, without loss of generality, taken to be a graph state
$\mathcal{G}$. The construction thus reduces the problem of finding a
quantum code to that of finding a certain classical code. All
previously known nonadditive codes \cite{Rains:97a,Smolin:07a,Yu:07a,Feng:08}
with good parameters can be constructed within the CWS construction.
The natural challenge in these approaches is efficient identification
of suitable classical codes, from which the desired additive and
non-additive quantum codes can be constructed. It is apparent that,
due to the error pattern induced by the graph state $\mathcal{G}$, the
binary classical code $\mathcal{C}$ does not coincide with the usual
binary classical code, for which the minimum Hamming distance is the more
important code parameter -- although, interestingly, they do coincide in
the special case where $\mathcal{G}$ is an unconnected graph, so the
family of CWS quantum codes includes classical (``bit-flip'') codes
as depicted in Fig. \ref{fig1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.00in]{allcodes}
\caption{The relationship of CWS codes with additive quantum codes and
classical codes: ALL: all quantum codes; CWS: CWS codes; ADD: additive
codes; CLA: classical codes.} \label{fig1}
\end{figure}
The CWS construction, observing that a classical code correcting
certain bit-flip error patterns gives rise to a quantum code, allows a
natural encoding of the problem of finding a quantum code
$\mathcal{Q}=(\mathcal{G},\mathcal{C})$ into an equivalent problem, of finding the
maximum clique of an induced graph, called the CWS clique graph. The
existence of such a mapping is not surprising, since {\sc maxclique}
is an NP-complete problem \cite{Sipser:05,Garey-Johnson:79}, and thus
can be used for a reduction from all unstructured search problems. In
practice, many heuristic and randomized Clique solvers and SAT solvers
have been developed, with reasonable run-times for small problem
sizes. And since the search for CWS codes starts from a graph state
$\mathcal{G}$, prior art in categorizing local Clifford (LC) orbits of
those states \cite{Danielsen:05a,Danielsen:05b} helps simplify the
problem. Nevertheless, without further simplification, a mapping of
the CWS quantum codes search problem to {\sc maxclique} leaves the
problem unsolved, due to the exponential computational cost of solving
{\sc maxclique}. The real situation is even worse. For a general graph
state, the search problem is NP-complete due to the reduction to {\sc
maxclique}. However, to search for all the quantum codes, we need to
search for all graphs of $n$ vertices, which contributes a factor of
order $2^{n^2}$.
Here, we present an algorithm for finding CWS codes, based on a
mapping to {\sc maxclique}. We show that despite the exponential
complexity of solving this {\sc cws-maxclique} problem, the algorithm
can be usefully employed to locate and identify a wide variety of
codes, by taking careful steps to prune the search space. In
particular, we show how the complexity cost can be reduced by using
known graph isomorphisms and LC equivalences of graph states. We also
present simplifying criteria for the search, arising from the
structural properties of CWS codes. We prove three theorems limiting
whether $((n,K,d))$ additive codes with optimal $K$ can be improved,
or not, by the CWS construction. These theorems allow significant
practical reduction of the search space involved in finding CWS codes
using {\sc cws-maxclique}. Furthermore,
these theorems also indicate the existence of quantum codes outside of
the CWS construction, as alluded to in Fig.~\ref{fig1}.
We also compare and contrast the CWS codes with another
framework (``AC06'') which was introduced
independently \cite{Calderbank:06a} and is based on a correspondence
between Boolean functions and projection operators.
We interpret the AC06 framework as using a quantum state and a
classical code to generate the desired quantum code, though in a
sense it works in the reverse direction, starting from the classical
code and obtaining the quantum state.
We show how the AC06
Boolean function $f$ is the analogue of our classical code
$\mathcal{C}$, up to an LC equivalence. This allows us to extend AC06
to degenerate codes, and to show that the AC06 framework can also be
used to construct a search algorithm for new quantum codes, with
comparable complexity to {\sc cws-maxclique}.
\section{The structure theorems}
\label{sec:struct}
The ability to search for CWS codes through solving the {\sc
maxclique} problem is unsurprising; any unstructured search problem
can be reduced to an NP-complete problem. Thus, as it stands, the
{\sc cws-maxclique} algorithm presented in Section~\ref{sec:alg} is
unsatisfactory (at least, for large cases), for the search space grows
exponentially with the problem size $n$. Moreover, as shown in
Section~\ref{sec:bfunc}, the complexity of the AC06 algorithm is
comparably bad, and is thus also unsatisfactory.
Since a major goal of the study of nonadditive codes is identification
of codes with parameters superior to all possible additive codes,
pruning the search space is worthwhile as a first step, before
applying such brute-force search.
Is there hope? All nonadditive quantum codes with good parameters
constructed so far have been CWS codes, as was shown in
\cite{CSSZ:07}. Also, very recently the $((10,24,3))$ CWS code was
enumerated~\cite{Yu:07b}; this code saturates the linear programming
bound on code parameters. It thus seems that we should be optimistic
about finding more CWS codes that outperform additive codes.
We call an $((n,K,d))$ additive quantum code {\it optimal} if there does not
exist any $((n,2K,d))$ additive quantum code.
One might hope that improved codes could be built from optimal $((n,K,d))$
additive codes, using the idea that these
codes could be subcodes of larger (non-additive) CWS codes with
superior parameters. If this were true, then a promising strategy
would be to start with the optimal additive codes and try to increase
the dimension.
This strategy leads to useful knowledge about the structural
properties of CWS codes and reveals relations between codes with
parameters $((n,K,d))$ and $((n,K',d))$, where $K'>K$. These relations are
especially interesting when given extra knowledge about the nature of
the classical code ${\mathcal C}$ employed in the construction.
Surprisingly, we find that the low-dimensional CWS codes are
actually additive. In particular, we find that all $((n,3,d))$ CWS
codes are subcodes of some $((n,4,d))$ additive codes. Furthermore, we
find restrictions on how optimal additive codes can and cannot be
subcodes of larger CWS codes.
Before presenting these structure theorems, we review the
relationship between the linearity of $\mathcal{C}$ and the additivity of $\mathcal{Q}=(\mathcal{G},\mathcal{C})$.
\subsection{Linearity of $\mathcal{C}$ and additivity of $\mathcal{Q}=(\mathcal{G},\mathcal{C})$}
Recall from Theorems~4 and 5 in \cite{CSSZ:07} that the following
facts are true:
\begin{fact}
\label{fact1}
If $\mathcal{C}$ is a linear code (or equivalently, the word operators form a group),
then $\mathcal{Q}=(\mathcal{G},\mathcal{C})$ is an additive code.
\end{fact}
\begin{fact}
\label{fact2}
If $\mathcal{Q}$ is an additive code, then there exists a linear code $\mathcal{C}$
and a graph $\mathcal{G}$, such that $\mathcal{Q}=(\mathcal{G},\mathcal{C})$.
\end{fact}
However, when ${\mathcal C}$ is nonlinear, the question of whether
$({\mathcal G},{\mathcal C})$ is additive or not is completely open,
since it may or may not
be possible that $({\mathcal G},{\mathcal C})$ is local unitary (LU) equivalent to
some additive code.
The following example explicitly illustrates this possibility, by
presenting two CWS codes: $({\mathcal G},{\mathcal C}_2)$ with
nonlinear ${\mathcal C}_2$, and $({\mathcal G},{\mathcal C}_1)$ with
linear ${\mathcal C}_1$. The two codes are LU equivalent to each
other:
\begin{example}
\label{ex:lucode}
Let
\begin{eqnarray}
\mathcal{G}&=&\<XZZZ,ZXII,ZIXI,ZIIX\>
\\ \mathcal{C}_1 &=& \{0000, 0110, 0101, 0011 \}
\\ \mathcal{C}_2 &=& \{0000, 0110, 0101, 1011 \}
\,.
\end{eqnarray}
Note that $(\mathcal{G},\mathcal{C}_1)$ is an additive code since the
codewords of $\mathcal{C}_1$ form a group under binary addition (it is
thus a linear code). In contrast, since $\mathcal{C}_2$ is nonlinear
(its set of codewords is not closed under addition),
$(\mathcal{G},\mathcal{C}_2)$ is not LC equivalent to any additive
code. Nevertheless, we can show that ${\mathcal Q}_1 =
(\mathcal{G},\mathcal{C}_2)$ is LU equivalent to ${\mathcal Q}_2 =
(\mathcal{G},\mathcal{C}_1)$, by giving an explicit LU equivalence
between the projectors into the two quantum code spaces, $P_{1}$ and
$P_{2}$. For this purpose, it is convenient to first transform by
$H_{234}=H_2\otimes H_3\otimes H_4$ and disregard normalization factors,
such that
\begin{eqnarray}
P'_{1}&=& H_{234}P_{1}H_{234}
\nonumber\\ &=& I+XXXX+YYYY+ZZZZ
\\ P'_{2} &=& H_{234}P_{2}H_{234}
\nonumber\\ &=& I+ZZZZ
\nonumber\\ && +\frac{1}{2}(XXXX+YYYY+XXYY+YYXX
\nonumber\\ && -XYYX-YXXY-XYXY-YXYX)
\,.
\end{eqnarray}
From Theorem~4.2 of \cite{Roychowdhury:97}, LU equivalence need only
consider $U=U_1\otimes U_2\otimes U_3\otimes U_4$ where $U_i$ maps $X$
to $aX+bY$ and $Y$ to $bX-aY$. We find that
$UP'_{1}U^{\dagger}=P'_{2}$, if $U$ is defined such that
\begin{eqnarray}
U_i X_i U_i^\dagger &=& [ X_i - (-1)^{\lfloor i/2\rfloor} Y_i ] /\sqrt{2}
\\ U_i Y_i U_i^\dagger &=& [ X_i + (-1)^{\lfloor i/2\rfloor} Y_i ] /\sqrt{2}
\,,
\end{eqnarray}
where $\lfloor i/2\rfloor$ is $0$ for $i<2$ and $1$ otherwise. The
existence of this LU equivalence is unsurprising, since it is
known~\cite{Rains:97c} that any $((4,4,2))$ code is LU equivalent to
the additive $[[4,2,2]]$ code.
\end{example}
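The (non)linearity claims in this example are mechanical to verify; a short sketch (our code), treating the codewords as bit strings:

```python
# Sketch (our code): check closure under bitwise XOR for the classical
# codes of the example above; C1 is linear, C2 is not.
C1 = {0b0000, 0b0110, 0b0101, 0b0011}
C2 = {0b0000, 0b0110, 0b0101, 0b1011}

def is_linear(C):
    # A binary code containing 0 is linear iff it is closed under XOR.
    return 0 in C and all(a ^ b in C for a in C for b in C)

assert is_linear(C1)
assert not is_linear(C2)   # e.g. 0110 ^ 0101 = 0011 is missing from C2
```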
In general, for a CWS code $\mathcal{Q}=(\mathcal{G},\mathcal{C})$ with a
nonlinear $\mathcal{C}$, we cannot directly infer that $\mathcal{Q}$ is
nonadditive. However, for fixed $n$ and $d$, if we seek a code with optimal $K$ and
only find $((n,K'\geq K,d))$ codes $\mathcal{Q}=(\mathcal{G},\mathcal{C})$ with
nonlinear $\mathcal{C}$, then we can conclude that $\mathcal{Q}$ is nonadditive.
Put another way, if we fix $n$ and $d$, do an
exhaustive search over all the graphs and classical codes, and only find
quantum codes with nonlinear classical codes $\mathcal{C}$ for the optimal
$((n,K,d))$ CWS codes, then we can conclude that the optimal $((n,K,d))$
CWS codes we found are indeed nonadditive.
This can be shown by contradiction:
if $\mathcal{Q}=(\mathcal{G},\mathcal{C})$ is additive,
then there exists some local unitary operation
$U=\bigotimes_{i=1}^{n}U_i$, where each $U_i$ is a single qubit operation,
such that $U\mathcal{Q}U^{\dag}=\mathcal{Q}'$ and $\mathcal{Q}'$ is additive.
Then, according to Fact~\ref{fact2}, there exists a linear code
$\mathcal{C}'$ and a graph $\mathcal{G}'$ such that
$\mathcal{Q}'=(\mathcal{G}',\mathcal{C}')$. Since local unitaries preserve the
code parameters, the exhaustive search would then have found the optimal code
$(\mathcal{G}',\mathcal{C}')$ with linear $\mathcal{C}'$, contradicting the
assumption that only nonlinear classical codes were found.
\subsection{Structure theorems}
We now present and prove some structure theorems governing
CWS codes, and provide several useful corollaries. Recall that
we say an additive $((n,K,d))$ quantum code is {\it optimal} if there is no
$((n,2K,d))$ additive quantum code.
Our first theorem concerns CWS codes with dimension $2$:
\begin{theorem}
All $((n,2,d))$ CWS codes are additive.
\label{theorem:cws_dim2}
\end{theorem}
\begin{IEEEproof}
By the CWS construction, an $((n,2,d))$ CWS code is spanned by basis
vectors of the form $\{w_1\ket{S},w_2\ket{S}\}$, with word operators
$w_1=I=Z^{\mathbf{c}_1},w_2=Z^{\mathbf{c}_2}$. Since $w_2^2=I$, the set $\{w_1,w_2\}$
forms a group. So according to Theorem~5 of \cite{CSSZ:07} (or Fact~1), this CWS
code is an additive code.
\end{IEEEproof}
A natural corollary of Theorem~\ref{theorem:cws_dim2} is
\begin{corollary}
If an additive code of parameters $((n,1,d))$ is optimal, then
there do not exist any CWS codes with parameters
$((n,K>1,d))$.
\label{Kgeq1}
\end{corollary}
From corollary~\ref{Kgeq1}, it follows that the $((7,2,3))$ and
$((9,2,3))$ nonadditive codes given in \cite{Pollatsek:03a} and the
$((11,2,3))$ code given in \cite{Roychowdhury:97} are not local
unitary (LU) equivalent to any CWS code, for they are not LU
equivalent to any additive code. This implies that there exist
codes that are outside the CWS construction, as was claimed in
Fig.~\ref{fig1}.
Now we present a theorem concerning CWS codes of dimension $3$:
\begin{theorem}
Any $((n,3,d))$ CWS code is a subcode of some $((n,4,d))$
stabilizer code.
\label{n3d}
\end{theorem}
\begin{IEEEproof}
By the CWS construction, any $((n,3,d))$ CWS code has the form
$(\mathcal{G},\mathcal{C}_1)$ with
$\mathcal{C}_1=\{{\mathbf c}_1\!=\!0,{\mathbf c}_2,{\mathbf c}_3\}$. Consider a new code
$(\mathcal{G},\mathcal{C}_2)$ with
$\mathcal{C}_2=\{{\mathbf c}_1\!=\!0,{\mathbf c}_2,{\mathbf c}_3,{\mathbf c}_2 \oplus {\mathbf c}_3\}$. From
Theorem~\ref{CSSZTheorem3}, it follows that $\mathcal{C}_1$ detects
errors in $Cl_\mathcal{G}(\mathcal{E})$. To prove Theorem~\ref{n3d},
we need to show that $\mathcal{C}_2$ also detects those errors. It is
clear that $\mathcal{C}_2$ is a group with generators ${\mathbf c}_2, {\mathbf c}_3$, and
that ${\mathbf c}_2 \oplus {\mathbf c}_3 \notin Cl_\mathcal{G}(\mathcal{E})$: otherwise, since
${\mathbf c}_2 \oplus ({\mathbf c}_2 \oplus {\mathbf c}_3) = {\mathbf c}_3$, this error would map ${\mathbf c}_2$ to
${\mathbf c}_3$, contradicting the fact that $\mathcal{C}_1$ detects it. Therefore $\mathcal{C}_2$ detects all
of $Cl_\mathcal{G}(\mathcal{E})$. Theorem~\ref{CSSZTheorem3} also
requires that for each $E\in \mathcal{E}$ either $Cl_\mathcal{G}(E)\ne
0$ or, for all $i$, $Z^{{\mathbf c}_i}$ commutes with $E$. The latter constraint
is satisfied by $\mathcal{C}_2$ since $Z^{{\mathbf c}_2 \oplus {\mathbf c}_3} E = Z^{{\mathbf c}_2}
Z^{{\mathbf c}_3} E = E Z^{{\mathbf c}_2} Z^{{\mathbf c}_3}$. Finally, since
$\{I,Z^{{\mathbf c}_2},Z^{{\mathbf c}_3},Z^{{\mathbf c}_2\oplus {\mathbf c}_3}\}$ is a group (and thus a
linear code), according to Theorem~5 in \cite{CSSZ:07} (or Fact~1), this CWS code
is a stabilizer code.
\end{IEEEproof}
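The construction in this proof can be exercised on a toy instance (invented data of our own, not a code from the paper), with codewords and induced classical error strings represented as bit strings:

```python
# Sketch (invented toy data): closing {0, c2, c3} under XOR preserves
# detection of a set E of induced classical error strings, as in the
# proof above.
def detects(C, E):
    # no error string maps one codeword to another
    return all(a ^ b not in E for a in C for b in C if a != b)

c2, c3 = 0b0011, 0b0101
C1 = {0, c2, c3}
E = {0b0001, 0b0010, 0b0100, 0b1000}   # e.g. all weight-1 strings

assert detects(C1, E)
C2 = {0, c2, c3, c2 ^ c3}              # the linear closure
assert detects(C2, E)
```

Detection survives the closure because every pairwise difference in $\mathcal{C}_2$ is again one of ${\mathbf c}_2,{\mathbf c}_3,{\mathbf c}_2\oplus{\mathbf c}_3$, each already excluded from the error set by the detection condition for $\mathcal{C}_1$.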
Two natural corollaries of Theorem~\ref{n3d} are:
\begin{corollary}
If an additive code of parameters $((n,2,d))$ is optimal, then there
do not exist any CWS codes with parameters $((n,K\!>\!2,d))$.
\end{corollary}
\begin{corollary}
There does not exist any $((7,3,3))$ CWS code, even though the linear
programming bound does not rule out this possibility.
\end{corollary}
The two structure theorems above imply that CWS codes with parameters
better than the optimal $((n,K,d))$ additive codes need dimension
$K\geq 4$. Such examples do exist: the $((5,6,2))$ code
\cite{Rains:97a} and the $((5,5,2))$ code \cite{Smolin:07a} beat the
optimal additive code with parameters $((5,4,2))$
\cite{Calderbank97a}.
Theorem~\ref{n3d} says that a CWS code of dimension $3$ is a subcode
of some additive code with higher dimension. This invites a related
question: when might an optimal additive code, of dimension $K$, be a
subcode of some CWS code of higher dimension? Unfortunately, we can
show that in some sense, optimal additive codes cannot be subcodes of
larger CWS codes, though we cannot show the impossibility in the most general
setting, due to the fact that ${\mathcal C}$ may be nonlinear even if a CWS
code is additive.
Motivated by LU equivalences like the one demonstrated in
Example~\ref{ex:lucode}, we show that if $\mathcal{C}_1$
is a linear code, then an optimal additive code
$(\mathcal{G},\mathcal{C}_1)$ cannot be a subcode of any CWS code
$(\mathcal{G},\mathcal{C}_2)$, where
$\mathcal{C}_1\subset\mathcal{C}_2$:
\begin{theorem}
Given a CWS code $(\mathcal{G},\mathcal{C}_1)$ with parameters
$((n,K,d))$, if $\mathcal{B}$ is a linear subcode of $\mathcal{C}_1$
containing $J<K$ codewords, then there exists an additive code
$(\mathcal{G},\mathcal{C}_2)$ with parameters $((n,K'=2J,d))$.
\label{thm:nosupercode}
\end{theorem}
\begin{IEEEproof}
By the CWS construction the classical codewords
$\mathcal{C}_1=\{{\mathbf c}_1,{\mathbf c}_2,\ldots {\mathbf c}_K\}$ of
$(\mathcal{G},\mathcal{C}_1)$ can be arranged such that ${\mathbf c}_1=0$. From
$\mathcal{B}$ construct the linear classical code $\mathcal{C}_2=
\{{\mathbf b}_1,{\mathbf b}_2\ldots {\mathbf b}_J,{\mathbf v} \oplus {\mathbf b}_1, {\mathbf v} \oplus {\mathbf b}_2 \ldots {\mathbf v} \oplus {\mathbf b}_J\}$
where ${\mathbf v} \in \mathcal{C}_1$ but ${\mathbf v} \notin \mathcal{B}$. Then
$(\mathcal{G},\mathcal{C}_2)$ is clearly an $n$-qubit CWS code with
$2J$ codewords. It is an additive (stabilizer) code by Theorem 5 of
\cite{CSSZ:07} since $\mathcal{C}_2$ is a group.
It remains to check the error-correction conditions.
Theorem~\ref{CSSZTheorem3} ensures that $\mathcal{C}_1$ detects errors
in $Cl_\mathcal{G}(\mathcal {E})$, {\em i.e.} no error can turn one
codeword into another:
\begin{equation}
{\mathbf c}_i \oplus {\mathbf c}_j \oplus {\mathbf e} \ne 0\ {\rm for\ all\ }{\mathbf e} \in
Cl_\mathcal{G}(\mathcal{ E})
\,.
\label{cicje}
\end{equation}
The same condition for $\mathcal{C}_2$ is
\begin{equation}
{\mathbf b}_i \oplus {\mathbf v}^k \oplus {\mathbf b}_j \oplus {\mathbf v}^l \oplus {\mathbf e} \ne 0
\,,
\end{equation}
where $k,l \in \{0,1\}$. Since the ${\mathbf b}$s form a group, this reduces to
\begin{equation}
{\mathbf b}_i \oplus {\mathbf v}^k \oplus {\mathbf e} \ne 0
\end{equation}
which is true, due to Eq.(\ref{cicje}), and the fact that ${\mathbf b}_i,0,{\mathbf v} \in
\mathcal{C}_1$ for all $i$.
Theorem~\ref{CSSZTheorem3} also tells us that for all $E\in
\mathcal{E}$ either (a) $Cl_\mathcal{G}(E)\ne 0$ or (b) for
all $i$, $[Z^{{\mathbf c}_i},E]=0$. $(\mathcal{G},\mathcal{C}_2)$ has the same
graph $\mathcal{G}$ as $(\mathcal{G},\mathcal{C}_1)$, so whenever (a)
is satisfied for $(\mathcal{G},\mathcal{C}_1)$ it is also satisfied for
$(\mathcal{G},\mathcal{C}_2)$. For $\mathcal{C}_2$, condition (b) becomes
$[Z^{{\mathbf b}_i} Z^{{\mathbf v}^k}, E]=0$ for all $i=1,\ldots,J$ and $k=0,1$. Again, since ${\mathbf b}_i,{\mathbf v}
\in \mathcal{C}_1$ for all $i$, this condition is met.
\end{IEEEproof}
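The doubling step $\mathcal{C}_2=\mathcal{B}\cup({\mathbf v}\oplus\mathcal{B})$ can likewise be exercised on invented toy data (our sketch, not from the paper):

```python
# Sketch (invented toy data): double a linear subcode B of C1 by an
# element v in C1 \ B, and check that the result is a group of size 2J
# that still detects the error set, as in the theorem above.
def detects(C, E):
    return all(a ^ b not in E for a in C for b in C if a != b)

B = {0b00000, 0b00111, 0b11001, 0b11110}   # linear subcode, J = 4
v = 0b10101                                 # in C1 but not in B
C1 = B | {v}
E = {1 << k for k in range(5)}              # weight-1 error strings

assert detects(C1, E)
C2 = B | {v ^ b for b in B}                 # the doubled code
assert len(C2) == 2 * len(B)
assert all(a ^ b in C2 for a in C2 for b in C2)   # C2 is a group
assert detects(C2, E)
```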
\begin{corollary}
An optimal additive code $(\mathcal{G},\mathcal{C})$ (for which
$\mathcal{C}$ must be linear) cannot be extended to become a larger
CWS code merely by adding codewords to $\mathcal{C}$.
\label{cor:nosupercode}
\end{corollary}
\begin{IEEEproof}
If the code could be extended in this way, by adding even just one
vector, then there would exist an additive code with twice as many vectors
and the same distance as the original code. This contradicts the
statement that the original code is optimal.
\end{IEEEproof}
These structure theorems rule out certain strategies for finding
non-additive codes with parameters superior to additive codes, but
suggest other approaches. Since an additive $((n,K,d))$ code
$(\mathcal{G},\mathcal{C}_1)$ must have linear $\mathcal{C}_1$,
Theorem~\ref{thm:nosupercode} and corollary~\ref{cor:nosupercode} tell
us that in practice we cannot search for an $((n,K'\!>\!K,d))$ CWS code
$(\mathcal{G},\mathcal{C}_2)$ just by adding codewords to
$\mathcal{C}_1$. However, Example~\ref{ex:lucode} hints that we may
be able to shoehorn an optimal $((n,K,d))$ additive code into a CWS
code $(\mathcal{G},\mathcal{C})$ with nonlinear $\mathcal{C}$, via
some LU transform. This gives hope to a strategy of adding codewords
to $\mathcal{C}$ to search for $((n,K'\!>\!K,d))$ CWS codes; such hope
suggests that it is worthwhile both to further explore conditions
under which two CWS codes can be linked by an LU transform, and to
better understand the structural properties of CWS codes constructed
from nonlinear codes.
|
0803.2379
|
\section{The Alexander grading}
In this section, we give a proof of Formula \ref{alexDef} for the Alexander grading of the long oval complex. Let us first formulate a technical lemma.
\begin{lemma}
\label{MinorSum}
The minimal number $l$ such that a permutation $\sigma$ is a product of $l$ transpositions is written $\si(\sigma)$. Let $M$ be an $n\times n$ matrix with all entries in its first row and first column equal to 1. Then, identity (\ref{lemma}) holds:
\begin{equation}
\label{lemma}
\det(M)=\sum_{1\le i,j\le n} (-1)^{i+j}\det(M_{i,j})
\end{equation}
where $M_{i,j}$ denotes the minor of $M$ obtained by removing row number $j$ and column number $i$.
\begin{proof}
Let us denote the entries of $M$ by $m_{i,j}$ for $1\le i,j \le n$. Once we apply
\begin{equation}
\label{detFormula}
\det(A)=\sum_{\sigma \in {\Sigma}_n} a_{1,\sigma (1)}...a_{n,\sigma (n)}(-1)^{\si(\sigma)}
\end{equation}
to both sides of (\ref{lemma}), we get an equality between polynomials.
\begin{equation}
\sum_{\sigma \in {\Sigma}_n} m_{1,\sigma (1)}...m_{n,\sigma (n)}(-1)^{\si(\sigma)}=
\label{e3}
\end{equation}
\begin{equation}
\sum_{1\le i,j\le n} (-1)^{i+j}
\sum_{\sigma \in {\Sigma}_n, \sigma(i)=j} (-1)^{\si(\sigma)+\sharp\{k:0<(k-i)(\sigma(k)-j)\}}\prod_{1\leq k\leq n, k\neq i}m_{k,\sigma(k)}
\label{e4}
\end{equation}
We will prove it by comparing coefficients of monomials on both sides, while setting $m_{1,j}=1$ and $m_{i,1}=1$. We classify the monomials appearing
in (\ref{e3}) into two types: $m_{1,\sigma (1)}...m_{n,\sigma (n)}(-1)^{\si(\sigma)}$ for $\sigma \in \Sigma_n$ with $\sigma(1)=1$, and $m_{1,\sigma (1)}...m_{n,\sigma (n)}(-1)^{\si(\sigma)}$ with $\sigma(1)\neq 1$. The monomials of the first type are exactly the monomials appearing in (\ref{e4}) coming from $\det(M_{1,1})$. The monomials of the second type each appear in (\ref{e4}) exactly three times: twice with the same sign as in (\ref{e3}), and once with the opposite sign (we use $\in$ to denote that a monomial is a summand of a polynomial written in canonical form):
\begin{equation}
m_{2,\sigma(2)}...m_{n,\sigma(n)}(-1)^{\si(\sigma)}\in \det(M_{1,\sigma(1)}),\det(M_{\sigma^{-1}(1),1}),-\det(M_{\sigma^{-1}(1),\sigma(1)}).
\end{equation}
All monomials of (\ref{e3}) are now accounted for in (\ref{e4}). But monomials of the type:
\begin{equation}
\pm m_{1,\sigma(1)}...m_{i-1,\sigma(i-1)}
m_{i+1,\sigma(i+1)}...m_{n,\sigma(n)}\in \det(M_{i,\sigma(i)})
\end{equation}
for some $i\neq 1$ with $\sigma(1)\neq 1$ and $\sigma(i)\neq 1$, remain in (\ref{e4}). Setting $\overline{\sigma}$ to be equal to $\sigma$ composed with the transposition exchanging $i$ and $\sigma^{-1}(1)$, we notice that
\begin{equation}
(-1)^{\si(\sigma)+\sharp\{k:0<(k-i)(\sigma(k)-\sigma(i))\}+i+\sigma(i)}\prod_{1\leq k\leq n, k\neq i}m_{k,\sigma(k)}
(\in \pm\det(M_{i,\sigma(i)}))
\end{equation}
is equal to
\begin{equation}
(-1)^{1+\si(\overline{\sigma})+\sharp\{k:0<(k-\sigma^{-1}(1))(\overline{\sigma}(k)-\sigma(i))\}+\sigma^{-1}(1)+\sigma(i)}
\prod_{1\leq k\leq n, k\neq \sigma^{-1}(1)}m_{k,\overline{\sigma}(k)}
\end{equation}
$$({}\in \pm\det(M_{\sigma^{-1}(1),\sigma(i)})).$$
Therefore, those monomials cancel each other too.
\end{proof}
\end{lemma}
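As a sanity check, the identity of Lemma \ref{MinorSum} is easy to verify numerically. The following sketch (our own illustration, independent of the program described later) compares both sides for a random integer matrix whose first row and first column are all 1:

```python
import random

def det(M):
    # determinant by Laplace expansion along the first row (fine for small n)
    n = len(M)
    if n == 0:
        return 1
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minor(M, i, j):
    # M_{i,j} in the lemma's convention: remove row j and column i (0-based)
    return [row[:i] + row[i + 1:] for k, row in enumerate(M) if k != j]

random.seed(0)
n = 4
# random integer matrix whose first row and first column are all 1
M = [[1 if i == 0 or j == 0 else random.randint(-3, 3) for j in range(n)]
     for i in range(n)]
lhs = det(M)
rhs = sum((-1) ** (i + j) * det(minor(M, i, j))
          for i in range(n) for j in range(n))
print(lhs == rhs)  # the lemma predicts equality
```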
\begin{theorem}
The Alexander grading on generators of the long oval complex on a grid diagram $D$ is given by the same formula as the Alexander grading on generators of the MOS complex:
$$
\label{AGrading2}
A(x)=\sum_{p\in x} a(p)- \frac{1}{2} \Bigl(\sum_{o\in O}
a(o)\Bigr) -
\left(\frac{n-1}{2}\right).
$$
Here, $n$ is the complexity of $D$, $a(p)$ is the winding number of $D$ around $p$, and $O$ is one of the two sets of punctures.
\begin{proof}
It follows from the analytical theory of knot Floer homology that the \textit{relative} Alexander grading on pairs of generators must be defined in the same way in the two complexes (see \cite{oms} and \cite{beliakova}). We prove the theorem by showing that the Euler characteristics of the long oval complex and the MOS complex are equal:
$$\chi(C_{\Long})=\chi(C_{\mathrm{MOS}}).$$
Since the Alexander polynomial is never zero, this fixes the absolute Alexander grading.
Let $M$ be an $n\times n$ matrix with entries $m_{i,j}$ equal to $t$ raised to the winding number of the knot around the intersection point of the grid with coordinates $(i,j)$. The Euler characteristic of the MOS complex can be calculated by taking the determinant of $M$ and multiplying by a constant $f(D)$ depending only on $D$.
\begin{figure}[h]
\mbox{\epsfysize=7cm \epsffile{Alex2.eps}}
\caption{{ A generator on a set of long ovals and the set of nearest intersection points on the grid.}}
\label{AlShift1}
\end{figure}
Drawing the diagram that gives rise to the long oval complex, as in Figure \ref{AlShift1}, we can associate sets of intersection points of the grid with oval complex generators. Working on the torus, we do this, as in Figure \ref{AlShift1}, by replacing each intersection point of ovals with the nearest intersection point of the grid. It is easily checked (by pairing generators with opposite contributions, see Figure \ref{cancellation}) that the total contribution of the generators associated with sets having two points with the same vertical or horizontal coordinate is zero. On the other hand, each set of intersection points of the grid with at most one point on each vertical or horizontal line of the grid is associated with exactly one long oval generator.
\begin{figure}[h]
\mbox{\epsfysize=9cm \epsffile{AlexCancellation.eps}}
\caption{{A set of $n-1$ intersection points with two points on the same column is associated here with two generators of the long oval complex. Since those generators have the same Alexander grading and since their Maslov gradings differ by one, they cancel each other in the Euler characteristic.}}
\label{cancellation}
\end{figure}
Let $S$ be any of those sets of $n-1$ points and $g$ its associated generator. The set $S$ misses exactly one row $j$ and one column $i$ of the grid diagram. The set $S$ can be seen as a monomial of the minor $M_{i,j}$, the point of coordinates $(k,l)$ in $S$ corresponding to the entry $m_{k,l}$ of $M$. The Alexander grading of $g$ is exactly given by the value of this monomial of the minor $M_{i,j}$. Moreover, the sign of the monomial in $(-1)^{i+j}\cdot M_{i,j}$ is given by the parity of the Maslov grading of $g$.
Collecting those facts, we get:
$$
\chi(C_{\Long})=\Bigl(\sum_{m} {(-1)^m\cdot \sum_{a} {\mid C_{\Long}^{m,a}\mid \cdot t^a}}\Bigr)\cdot f(D)
=\sum_{1\le i,j\le n} (-1)^{i+j}\det(M_{i,j})\cdot f(D)
$$
Using Lemma \ref{MinorSum},
$$
=\det(M)\cdot f(D)=\chi(C_{\mathrm{MOS}}).
$$
\end{proof}
\end{theorem}
\section{The implementation}
For reasons of simplicity, we describe our program only for ${\mathbb{Z}}/2{\mathbb{Z}}$ coefficients. Passing to $\mathbb{Z}$ coefficients does not create any new problems.
Our program computes the knot Floer homology $\widehat{HL}(K)$ of a knot $K$ by constructing a short oval complex $(C_{\Short}(D(K)),\partial_{\Short}(D(K)))$ and taking its homology. Our program therefore contains four main parts:
\begin{itemize}
\item{A function taking as input a knot (in braid representation usually) and giving as output a rectangular diagram with a set of ovals. The rectangular diagram and the ovals are chosen in order to minimize the number of generators of the short oval complex constructed from the ovals.}
\item{A function listing the generators of the complex and calculating their Alexander and Maslov gradings.}
\item{A function that computes the boundary map. Since most of the running time of the program is spent by this function, it is heavily optimized.}
\item{A function calculating the homology of the complex. This function gives in fact $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$. Extracting $\widehat{HL}(K)$ from it is easy.}
\end{itemize}
The first three parts are explained in the next three subsections.
The greatest task in the computation is to calculate the differential $\partial_{\Short}$. This differential, being a linear map, could be represented as a huge matrix $M_{\partial}$. However, because the differential preserves the Alexander grading and decreases the Maslov grading by one, the differential can be represented by a set of matrices of much more reasonable size, one for every pair of Alexander and Maslov gradings.
Our short oval complex is a direct sum of complexes with uniform Alexander gradings.
$$
C_{\Short}=\bigoplus_{a} {C_{\Short}^a}
$$
Here, $C_{\Short}^a$ is the subcomplex of $C_{\Short}$ generated by the generators of Alexander grading $a$. Since we compute a homology of the form $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$ with $V$ a vector space spanned by two vectors of different Alexander gradings, we don't even have to compute the whole complex $C_{\Short}$. For any set $A$ of $n-1$ Alexander gradings, the complexes $C_{\Short}^a$ for $a\in A$ can be deduced from the rest ($\bigoplus_{a\notin A} {C_{\Short}^a}$). For the Alexander gradings we choose to ignore, we don't have to compute the generators or the boundary map. Naturally, we choose to ignore the Alexander gradings that would be the most difficult to compute: those with many generators. Since the number of generators is very unevenly distributed among the possible Alexander gradings, a lot of time is saved.
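A minimal sketch of this block decomposition, with hypothetical names (the callbacks \texttt{entry}, \texttt{alexander} and \texttt{maslov} stand in for whatever representation the program actually uses):

```python
from collections import defaultdict

def boundary_blocks(generators, entry, alexander, maslov):
    # Group generators by (Alexander, Maslov) grading and build one small
    # boundary matrix per grading pair, using the fact that the differential
    # preserves A and lowers M by one.  entry(x, y) returns the coefficient
    # of y in the boundary of x.
    buckets = defaultdict(list)
    for g in generators:
        buckets[(alexander(g), maslov(g))].append(g)
    blocks = {}
    for (a, m), xs in buckets.items():
        ys = buckets.get((a, m - 1), [])
        blocks[(a, m)] = [[entry(x, y) % 2 for y in ys] for x in xs]
    return blocks
```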
\subsection{Generating a grid diagram}
\label{rectGen}
\begin{figure}[h]
\mbox{\epsfysize=120mm \epsffile{rect3.eps}}
\caption{{\ A representation by braid closure (top left) of the figure eight knot is transformed in four simple steps into a grid diagram (bottom right).}}
\label{rect}
\end{figure}
Generating a grid diagram representing a knot, either by hand or automatically, is not difficult (see Figure \ref{rect} for an example). However, the grid diagram obtained usually has much higher complexity than necessary. Therefore, our program simplifies knot diagrams before using them to construct complexes. This is done by exploring the set of equivalent rectangular diagrams during a fixed amount of time and picking the diagram of least complexity generated during this period. A set of simple moves (called Cromwell moves, see \cite{cromwell} or \cite{dy}) analogous to Reidemeister moves is used to this effect (see Figure \ref{dyMoves}). A cycling move is a circular permutation of the rows or columns of a grid diagram. A stabilization move is the merging of two rows (or columns) containing adjacent decorated squares, accompanied by the deletion of the column (respectively the row) containing the two squares. A destabilization is simply the inverse of a stabilization. A castling move is an exchange of adjacent rows or columns of the grid diagram. (Castling moves are only possible if the decorations on the adjacent rows or columns are in a certain order.)
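For illustration, cycling moves are easy to express on a coordinate encoding of grid diagrams; the encoding below (the row index of the $X$ and of the $O$ in each column) is our assumption for the sketch, not necessarily the program's actual data structure:

```python
def cycle_columns(xs, os):
    # Column cycling move: the leftmost column moves to the right end.
    # xs[i] / os[i] hold the row index of the X / O in column i.
    return xs[1:] + xs[:1], os[1:] + os[:1]

def cycle_rows(xs, os):
    # Row cycling move: the bottom row moves to the top, so every row
    # index shifts down by one modulo n.
    n = len(xs)
    return [(r - 1) % n for r in xs], [(r - 1) % n for r in os]
```

Applying a cycling move $n$ times returns the original diagram, which is why diagrams related by cycling can be treated as sitting on the torus.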
\begin{figure}[h]
\mbox{\epsfysize=120mm \epsffile{dyMoves.eps}}
\caption{{\ The three Cromwell moves:} (a) a cycling move, (b) a stabilization/destabilization move and (c) a castling move.}
\label{dyMoves}
\end{figure}
The search is optimized in four ways:
\begin{itemize}
\item{By caching (memorizing) already generated diagrams. This enables us to avoid exploring the same diagrams many times.}
\item{By considering as identical diagrams that can be obtained from one another by cycling moves. By reducing the space we have to explore, this makes caching much more beneficial. This is more or less equivalent to having the diagrams sitting on the torus instead of on the plane.}
\item{By exploring only diagrams of minimal complexity. In other words, moves increasing the complexity are not considered, and each time a diagram of smaller complexity is discovered, the search begins anew from this diagram. This not only saves time but also keeps the memory consumption of the caching reasonable. Moreover, searching in a monotonic way probably doesn't miss too many opportunities (see \cite{dy}).}
\item{By generalizing the stabilization move, so that simplifications can be discovered faster. For example, sequences of castlings in the same direction followed by a stabilization are considered as \emph{one} operation.}
\end{itemize}
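The second optimization amounts to caching diagrams under a canonical representative. A sketch, under the hypothetical encoding where \texttt{xs[i]} and \texttt{os[i]} hold the row of the $X$ and of the $O$ in column \texttt{i}:

```python
def canonical_form(xs, os):
    # Smallest representative of the diagram under all cyclic permutations
    # of columns and rows (i.e. all torus translations); two diagrams related
    # by cycling moves get the same representative, so it can serve as a
    # cache key.
    n = len(xs)
    reps = []
    for i in range(n):  # cyclic column shift by i
        cx, co = xs[i:] + xs[:i], os[i:] + os[:i]
        for j in range(n):  # cyclic row shift by j
            reps.append((tuple((r - j) % n for r in cx),
                         tuple((r - j) % n for r in co)))
    return min(reps)
```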
Once we have a minimized grid diagram $D$ of complexity $n$, the next step is to embed this diagram in the plane and to choose $2n-2$ ovals on it. The generators of the complex $(C_{\Short}(D),\partial_{\Short}(D))$ that we want to calculate are sets of intersection points of ovals. In order to minimize the number of generators, we simply generate every possible embedding of the diagram and every choice of (shortened) ovals on them, and pick the set of ovals that has the smallest number of intersection points. Of course, having a smaller number of intersection points of ovals does not guarantee having a smaller number of generators, but counting intersection points is easier than counting generators.
\subsection{Listing generators}
\label{genGen}
Although constructing the generators is not very difficult, some care must be taken to compute their Alexander and Maslov gradings efficiently. To compute the Alexander gradings quickly, we tabulate $J(\{p\}-(\mathbb{O}+\mathbb{X})/2,\mathbb{X}-\mathbb{O})$
beforehand for each intersection point $p$. Then for a given generator $g$, the Alexander grading is simply a sum of $n-1$ tabulated values and a precomputed constant.
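A stdlib-only sketch of this tabulation; the helper names and the coordinates of the small $2\times 2$ example (punctures at half-integer coordinates, grid intersection points at integer coordinates) are our assumptions for illustration:

```python
from fractions import Fraction

def I(S, T):
    # I(S, T): number of pairs (a, b) in S, (c, d) in T with a < c and b < d
    return sum(1 for (a, b) in S for (c, d) in T if a < c and b < d)

def J(S, T):
    return Fraction(I(S, T) + I(T, S), 2)

def alexander_table(points, X, O):
    # per-point contribution J({p}, X - O), using bilinearity of J
    return {p: J([p], X) - J([p], O) for p in points}

def alexander_constant(X, O, n):
    # the remaining terms of A(x) = J(x - (O + X)/2, X - O) - (n - 1)/2
    return (-Fraction(1, 2) * (J(X, X) - J(X, O) + J(O, X) - J(O, O))
            - Fraction(n - 1, 2))

def alexander(gen, table, const):
    # the Alexander grading is a sum of tabulated values plus a constant
    return sum(table[p] for p in gen) + const

# toy 2x2 example (assumed layout of an unknot diagram)
h = Fraction(1, 2)
X = [(h, h), (1 + h, 1 + h)]
O = [(h, 1 + h), (1 + h, h)]
grid = [(i, j) for i in range(2) for j in range(2)]
table = alexander_table(grid, X, O)
const = alexander_constant(X, O, 2)
```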
The case of the Maslov grading is more difficult. The naive algorithm for computing $I(x,x)$ for a set $x\subset \mathbb{R}^2$ takes time $O(\vert x\vert^2)$. By using a divide and conquer approach (dividing $\mathbb{R}^2$ by horizontal and vertical lines that split $x$ in two, for example), we can get an algorithm running in time $O(\vert x\vert \log {\vert x\vert})$. However, in this case, the constant is quite bad. Our solution consists in generating the generators in two stages, each proto-generator\footnote{The proto-generators simply contain a pairing of the vertical ovals with the horizontal ones. They are therefore between $2^{n-1}$ and $4^{n-1}$ times less numerous than the generators.} of the first stage giving rise to many generators in the second stage. This is done in a way that enables the worst part of the calculation, $I(x,x)$, to be done on the proto-generators and to be inherited by the generators. The rest of the calculation ($-I(x,\mathbb{O})-I(\mathbb{O},x)+I(\mathbb{O},\mathbb{O})$) uses the same tabulation method as the Alexander grading.
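One standard way to reach the $O(\vert x\vert \log \vert x\vert)$ bound for $I(x,x)$ is a merge-sort inversion count, since the points of a generator have pairwise distinct coordinates. The sketch below is our own illustration, not the program's actual method:

```python
def count_inversions(seq):
    # merge sort that also counts inversions (pairs i < j with seq[i] > seq[j])
    if len(seq) <= 1:
        return 0, list(seq)
    mid = len(seq) // 2
    li, left = count_inversions(seq[:mid])
    ri, right = count_inversions(seq[mid:])
    inv, merged, i, j = li + ri, [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i  # right[j] jumps over the rest of left
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return inv, merged

def I_self(points):
    # I(x, x): pairs of points increasing in both coordinates.  Sorting by the
    # first coordinate reduces this to counting non-inverted pairs of second
    # coordinates (all coordinates within a generator are distinct).
    bs = [b for (_, b) in sorted(points)]
    inv, _ = count_inversions(bs)
    return len(bs) * (len(bs) - 1) // 2 - inv
```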
Since the fiberedness and the genus of a knot only depend on the part of its homology with high Alexander grading, it is noteworthy that the set of generators $S$ of Alexander grading $>c$ can be listed in time $O(\vert S\vert \cdot n^3)$ (at least when $\vert S\vert\gg n$). This comes from the fact that generators can be seen as matchings in a weighted bipartite graph between vertices representing the horizontal and vertical ovals. The weights can be chosen so that the Alexander grading of a generator is simply (up to a constant) the weight of the matching. Listing matchings with weight bigger than a constant is done by using a backtracking search that tests for cuts with the Hungarian algorithm for bipartite weighted matching (see \cite{kuhn}).
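A simplified version of this backtracking search; for brevity this sketch cuts branches with a cheap column-maximum bound instead of the Hungarian algorithm used by the actual program:

```python
def matchings_above(W, c):
    """List all matchings (permutations) sigma with total weight
    sum_i W[i][sigma(i)] strictly bigger than c.  The cut uses an optimistic
    bound (best remaining entry of every row), a looser but simpler bound
    than the Hungarian algorithm."""
    n = len(W)
    results = []

    def rec(i, used, acc, sigma):
        if i == n:
            results.append(tuple(sigma))
            return
        bound = acc + sum(max(W[k][j] for j in range(n) if j not in used)
                          for k in range(i, n))
        if bound <= c:
            return  # no completion of sigma can exceed c: cut this branch
        for j in range(n):
            if j not in used:
                rec(i + 1, used | {j}, acc + W[i][j], sigma + [j])

    rec(0, set(), 0, [])
    return results
```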
\subsection{The boundary map}
\label{bdmap}
Computing the boundary map means computing the entries $\partial_{x,y}$ of the matrix representing it as a function of the generators $x$ and $y$. When $\partial_{x,y}\neq 0$, we sometimes say that $x$ and $y$ are connected by the boundary map. We do not have a single method that always computes $\partial_{x,y}$ very quickly. However, $\partial_{x,y}$ is zero most of the time and, when $\partial_{x,y}$ is zero, we can usually determine that quickly. We therefore use two methods to compute $\partial_{x,y}$:
\begin{itemize}
\item{A fast method to check if there is a ``domain'' between two generators. The existence of a domain is a necessary condition for $\partial_{x,y}$ to be non-zero (see \ref{domains} below, where the definition of domains is given).}
\item{A method that calculates $\partial_{x,y}$, which is inefficient, but which our program only calls when there is a domain between $x$ and $y$ (see \ref{chaining}).}
\end{itemize}
\subsubsection{Domains between generators}
\label{domains}
Let $D$ be a grid diagram on the plane with sets of vertical and horizontal ovals $\alpha$ and $\beta$. The ovals divide the plane into many bounded connected components and an unbounded component, which we call pieces. The bounded components are bigons, rectangles and triangles. \emph{Domains} are multisets (formal positive integral linear combinations) of pieces. For a domain $D$ and an intersection point $p$ of the ovals, the multiplicities in $D$ of the four pieces adjacent to $p$, listed in counterclockwise order starting from the upper right, will be written $a_1(p),a_2(p),a_3(p),a_4(p)$. The corner index $c(p)$ of an intersection point $p$ is $a_1(p)+a_3(p)-a_2(p)-a_4(p)$. The sum of the pieces contained in an oval is called a \emph{periodic domain}.
\begin{lemma}
\label{periodic}
Given a set of transverse vertical and horizontal thin ovals in the plane, the only domains $D$ such that every intersection point of the curves has corner index zero are the sums of periodic domains and a multiple of the unbounded piece.
\begin{proof}
The result is obvious when the ovals have no intersection points. The lemma is then proved by induction on the number of intersection points.
\end{proof}
\end{lemma}
\begin{theorem}
\label{domTheo}
Given a collection of ovals on a grid diagram, let $(C,\partial)$ be the associated complex and $x,y\in C$ two generators. If $\partial_{x,y}\neq 0$, there is a unique domain $D$ with the following properties:
\begin{itemize}
\item{The corner index at intersection points contained in $x$ but not in $y$ is 1.}
\item{The corner index at intersection points contained in $y$ but not in $x$ is -1.}
\item{All other intersection points have corner index zero.}
\item{Pieces containing punctures have multiplicity zero in $D$.}
\end{itemize}
\begin{proof}
For long oval complexes, the existence and uniqueness of $D$ follow easily from the definition of the boundary map. For short oval complexes, existence is proved by induction on the sequence of homotopies.
Uniqueness of the domain between generators in short oval complexes follows from the four conditions on $D$. Since the unbounded piece contains a puncture, the fourth condition implies that the multiplicity in $D$ of the unbounded piece is null. Therefore, using Lemma
\ref{periodic} and the first three conditions on $D$, we get uniqueness of $D$ up to addition of a linear combination of periodic domains. However, at least one puncture is in only one periodic domain $p_0$, and all the periodic domains can be arranged in a sequence $p_0,\ldots ,p_{2n-2}$ such that $p_i$ and $p_{i+1}$ have one puncture in common. Using the fourth condition on $D$, this sequence enables us to show by induction on $i$ that the multiplicity of $p_i$ in the difference of two domains verifying the four conditions must be zero.
\end{proof}
\end{theorem}
To determine the existence of a domain is therefore equivalent to solving in integers a system of linear equations and inequalities, the unknowns being the multiplicities of the pieces in $D$. Luckily, the equations coming from the conditions on $D$ in Theorem \ref{domTheo} always have a unique solution, even over $\mathbb{Q}$. We can therefore decide efficiently if there is a domain between generators, by first solving a system of linear equations over $\mathbb{Q}$ and then checking that the solution obtained is non-negative and integral.
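This test can be sketched with exact Gaussian elimination over $\mathbb{Q}$; the code below is a generic illustration of ours (assuming the system has been put into square invertible form), not the program's actual routine:

```python
from fractions import Fraction

def solve_exact(A, b):
    # Gaussian elimination over Q for a square invertible system A x = b
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]  # normalize the pivot row
        for r in range(n):
            if r != c and M[r][c] != 0:  # eliminate column c elsewhere
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def is_domain(x):
    # a solution is a genuine domain only if all multiplicities are
    # non-negative integers
    return all(v >= 0 and v.denominator == 1 for v in x)
```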
\subsubsection{Computing $\partial_{x,y}$}
\label{chaining}
The boundary map $\partial$ (for both long and short oval complexes) can be seen as a graph with vertices representing generators and edges representing the non-zero entries of the matrix $\partial$. Given a homotopy between long ovals and short ovals, the graph representing the short oval complex can be constructed by a sequence of modifications on the graph representing the long oval complex. The arguments used in this section are typical of algebraic Morse theory \cite{kozlov}.
Each time a pair of intersection points forming the corners of a bigon disappears (see Figure \ref{bigon2}), the graph is modified in the following way:
\begin{itemize}
\item{All generators containing one of the disappearing intersection points are deleted.}
\item{Let $a$ and $b$ be two generators connected by the bigon that disappears during the homotopy:
$$b\in \bigon_{0}(a).$$
For all generators $x$ and $y$ with $\partial_{x,b}=1$ and $\partial_{a,y}=1$, we toggle the edge between $x$ and $y$: it is added exactly when it was not there before, and removed otherwise (addition modulo 2).}
\end{itemize}
This is a simple translation of Definition \ref{edgeDel} in graph theoretic language. Intuitively, it means that the edges of the graph of a short oval complex correspond to paths in the long oval complex graph.
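Over ${\mathbb{Z}}/2{\mathbb{Z}}$, one such cancellation step can be sketched as a pure edge-set operation; the representation of $\partial$ as a set of pairs $(x,y)$ with $\partial_{x,y}=1$ is our assumption for the sketch:

```python
def cancel_pair(d, a, b):
    # One reduction step over Z/2Z: cancel the generators a and b, where the
    # edge (a, b) is the disappearing bigon.  d is the set of pairs (x, y)
    # with boundary coefficient 1; returns the reduced edge set.
    assert (a, b) in d
    ins = [x for (x, y) in d if y == b and x != a]   # all x with d_{x,b} = 1
    outs = [y for (x, y) in d if x == a and y != b]  # all y with d_{a,y} = 1
    new = {(x, y) for (x, y) in d if a not in (x, y) and b not in (x, y)}
    for x in ins:
        for y in outs:
            new ^= {(x, y)}  # toggle: add the edge exactly if it was absent
    return new
```

On the configuration of Figure \ref{path}, a path of length three $x\to b$, $a\to b$, $a\to y$ collapses to a single edge $x\to y$.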
Thinking in terms of graphs enables us to give a clearer condition for $\partial_{x,y}=1\pmod{2}$ in the short oval complex (see Figure \ref{path} for an illustration of the effect of a homotopy on the graph representing the boundary map).
\begin{figure}[t]
\mbox{\epsfysize=140mm \epsffile{path.eps}}
\caption{{\ The effect of the disappearance of a bigon. The small circles represent generators (a part of the corresponding generator is drawn beside each circle). The bipartite graph represents a small part of a boundary map between two consecutive Maslov gradings. The generators containing a disappearing intersection point disappear. We see that what was a path of length three between two generators, before the homotopy, becomes an edge between those two generators afterwards.}}
\label{path}
\end{figure}
\begin{theorem}
Let $(C_{\Long},\partial_{\Long})$ be a long oval complex. Let $(C_{\Short},\partial_{\Short})$ be a short oval complex obtained by a homotopy $(\alpha,\beta)_t$ for $t \in [0,l+1]$. Let $G$ be a graph representing $(C_{\Long},\partial_{\Long})$. To each edge of $G$ between generators connected by a bigon that disappears during the homotopy, we associate the time at which it disappears. Then $x$ and $y$ are connected by the boundary map of the short oval complex exactly when the number of paths $W=(e_1,...,e_k)$ in $G$ verifying the following conditions is odd.
\begin{itemize}
\item{All the edges of $W$ are between generators of the same two Maslov gradings.}
\item{The path length $k$ is odd.}
\item{Each intermediate vertex on the path is adjacent to an edge of the path that disappears before it in the homotopy.}
\end{itemize}
\end{theorem}
Our program uses this combinatorial characterization of $\partial_{\Short}$ in a very straightforward way. It finds all possible paths in the long oval complex graph, by using a breadth first search, and it counts them. A simple caching of the generators explored is done to avoid exploring dead-ends many times. The point of this method is that the huge graph explored never has to be stored. This makes the method very efficient in terms of memory. Of course, the search is quite slow, but this method is not called often enough for speed to be a problem.
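Ignoring the disappearance-time condition on paths, the parity count at the heart of this search can be sketched with a memoized traversal (the actual program performs a breadth first search and additionally filters paths by the times at which edges disappear):

```python
from functools import lru_cache

def path_parity(adj, x, y):
    # parity of the number of directed paths from x to y in an acyclic graph;
    # adj maps a vertex to the list of its successors
    @lru_cache(maxsize=None)
    def rec(v):
        if v == y:
            return 1
        return sum(rec(w) for w in adj.get(v, ())) % 2
    return rec(x)
```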
\subsection{Technical aspects of the programming}
Our program was written in Python\footnote{http://www.python.org/}. We used Idle and Eclipse\footnote{http://www.eclipse.org/} with PyDev\footnote{http://pydev.sourceforge.net/} as editing environments. Our program would have been much slower without the wonderful program called Psyco\footnote{http://psyco.sourceforge.net/} by Armin Rigo. Psyco is a kind of Just-In-Time compiler for Python which can apparently greatly accelerate (from two to a hundred times faster) almost any Python program. It enables programs in Python, a high level language, to run, even for the most computation intensive tasks, with speed comparable to programs in a low level language like C.
\section{Definitions of the complexes}
\subsection{MOS complex}
A \emph{grid diagram} (or grid diagram on the plane) of complexity $n$ is an $n\times n$ square grid on the plane, with each square decorated by an $X$, an $O$ or nothing, such that each column and each row of the grid contains exactly one $X$ and one $O$. The set of all $O$ (respectively all $X$) is called $\mathbb{O}$ (respectively $\mathbb{X}$). (See Figure \ref{gridDiag}.)
\begin{figure}[h]
\mbox{\epsfysize=50mm \epsffile{trefoil1.eps}}
\caption{{\ A grid diagram for the $3_1$ knot.} The white points represent $\mathbb{X}$ and the black points $\mathbb{O}$.}
\label{gridDiag}
\end{figure}
We call the center of a decorated square a \emph{puncture}. More information about grid diagrams can be found in \cite{dy}.
A grid diagram represents a link in the following way. If we draw a line segment between each pair consisting of an $X$ and an $O$ sitting in the same row or in the same column, we obtain a figure in the plane. Assuming that vertical segments always pass over horizontal segments when they intersect, we can see this figure as a projection of a link. We say that the grid diagram represents the same link as this projection. It is not difficult to see that a knot can always be represented by many different grid diagrams.
Starting with a grid diagram on the plane, by identifying the uppermost and bottommost lines of the grid and the leftmost and rightmost lines, we obtain a decoration of a toroidal grid called a \emph{grid diagram on the torus}. After this operation, the vertical and horizontal lines of the grid become meridional and longitudinal circles. Each grid diagram on the torus of complexity $n$ is associated in a natural way to $n^2$ grid diagrams in the plane.
\begin{definition}
Let $D$ be a grid diagram on the torus of complexity $n$, and $D'$ one of its analogues in the plane.
The \emph{MOS complex} $(C(D),\partial(D))$ is defined in the following way: $C(D)$ is the vector space over ${\mathbb{Z}}/2{\mathbb{Z}}$ which has as basis the unordered $n$-tuples of intersection points of circles of the grid that contain exactly one intersection point on every meridional or longitudinal circle. The boundary map is defined as follows: $$\partial(x)=\sum_{y\in \rect_0(x)} y.$$ We define $\rect_0(x)$ to be the set of generators $y$ such that:
\begin{itemize}
\item{The generators $x$ and $y$ have exactly $n-2$ intersections in common.}
\item{The four intersection points that are either only in $x$ or only in $y$ form the four corners of a rectangle $R$. In $D'$, the upper right corner of $R$ is in $x$.}
\item{The rectangle $R$ is empty in the sense that it doesn't contain a puncture of the grid diagram or any intersection point of $x$.}
\end{itemize}
\end{definition}
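As an illustration, membership in $\rect_0(x)$ can be tested directly on a planar coordinate encoding; the conventions below (generators as lists mapping each column to a row, rows increasing upwards, punctures at half-integer coordinates) are our assumptions, not part of the definition:

```python
def in_rect0(x, y, punctures, n):
    # x and y map each column 0..n-1 to the row of their intersection point
    # on that column (planar version).  Returns True if y is in rect_0(x).
    diff = sorted(c for c in range(n) if x[c] != y[c])
    if len(diff) != 2:
        return False  # x and y must agree on exactly n - 2 columns
    c1, c2 = diff
    if {x[c1], x[c2]} != {y[c1], y[c2]}:
        return False  # the four points must be the corners of a rectangle R
    if x[c2] <= x[c1]:
        return False  # the upper right corner of R must belong to x
    r1, r2 = sorted((x[c1], x[c2]))

    def interior(a, b):
        return c1 < a < c2 and r1 < b < r2

    if any(interior(a, b) for (a, b) in punctures):
        return False  # R must not contain a puncture
    if any(interior(c, x[c]) for c in range(n)):
        return False  # R must not contain another point of x
    return True
```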
We often think of the boundary map as a matrix with entries indexed by the generators of the complex: $$\partial(x)=\sum_{y} \partial_{x,y}\cdot y.$$ In other words, $\partial=(\partial_{x,y})$ for $x,y$ generators.
\begin{definition}
Let $S,T \subset \mathbb{R}^2$ be two finite sets of points in the plane. $I(S,T)$ is defined as the number of pairs $(a,b)\in S,\, (c,d) \in T$ with $a<c$ and $b<d$. We also define $J(S,T)=(I(S,T)+I(T,S))/2$. We consider the natural bilinear extensions of $I$ and $J$ to formal sums of intersection points.
\end{definition}
\begin{definition}
\label{masDef}
A generator $x \in C$ has Maslov grading $M(x)=I(x,x)-I(x,\mathbb{O})-I(\mathbb{O},x)+I(\mathbb{O},\mathbb{O})+1$. It will also sometimes be practical to speak of the Maslov grading of an arbitrary set of points.
\end{definition}
\begin{definition}
\label{alexDef}
A generator $x \in C(D)$ for a grid diagram $D$ of complexity $n$ has Alexander grading $A(x)=J(x-{(\mathbb{O}+\mathbb{X})}/2 ,\mathbb{X}-\mathbb{O})-{(n-1)}/2$.
\end{definition}
It is also possible to write the Alexander grading of a generator $x$ in a diagram $D$ as the sum of the winding number of $D$ around the intersection points $p \in x$ and a constant depending only on $D$. Let $a(p)$ denote the average of the winding number of the knot projection represented by $D$ in a small ball around $p$.
$$
\label{AGrading}
A(x)=\sum_{p\in x} a(p)- \frac{1}{2} \Bigl(\sum_{o\in O}
a(o)\Bigr) -
\left(\frac{n-1}{2}\right)
$$
Despite a small difference in the definition of $a(p)$, the formula above is equivalent to the formula for the Alexander grading given in \cite{oms}.
\begin{theorem}
\label{MOSTh}
(C.~Manolescu, P.~Ozsv\'ath, S.~Sarkar) Let $D$ be a grid diagram of complexity $n$ of the knot $K$, and let $(C(D),\partial(D))$ be the MOS complex constructed from $D$.
\begin{itemize}
\item{The chain complex $(C(D),\partial(D))$ is bigraded by the Alexander and Maslov gradings. The boundary map preserves the Alexander grading and decreases the Maslov grading by 1.}
\item{The homology $H_* (C(D),\partial(D))$ is $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$, where $V$ is a vector space with basis composed of one vector with Alexander and Maslov gradings $-1$ and one vector with Alexander and Maslov gradings zero.}
\end{itemize}
\end{theorem}
\begin{figure}[t]
\mbox{\epsfysize=8cm \epsffile{long.eps}}
\caption{{\ A grid diagram for the $5_2$ knot with a collection of long ovals.}}
\label{longFig}
\end{figure}
\subsection{Beliakova's complexes}
\label{BelComp}
We will now construct two complexes that have the same homology as the MOS complex, and for which a theorem almost identical to Theorem \ref{MOSTh} is proven in \cite{beliakova}.
The construction of the \emph{long oval complex} $(C_{\Long}(D),\partial_{\Long}(D))$ also starts with the grid diagram $D(K)$ of the knot $K$. But this time, the construction is done in the plane. Let us call $n$ the complexity of the grid diagram $D$. We begin by drawing $n-1$ long thin vertical (respectively horizontal) ovals around all but one pair of punctures with identical first (respectively second) coordinates (see Figure \ref{longFig}). We call the vertical ovals $\boldsymbol{\alpha}=\{\alpha_1,\ldots ,\alpha_{n-1} \}$ and the horizontal ovals $\boldsymbol{\beta}=\{\beta_1,\ldots ,\beta_{n-1}\}$. While drawing those ovals, we also require that at least one puncture lies in the unbounded component of $\mathbb{R}^2\setminus(\alpha_1 \cup \ldots \cup \alpha_{n-1}\cup \beta_1 \cup \ldots \cup \beta_{n-1})$.
\begin{definition}
\label{longOvalDef}
Let $D$ be a grid diagram of complexity $n$ in the plane. The vector space over ${\mathbb{Z}}/2{\mathbb{Z}}$ called $C_{\Long}(D)$ has as basis elements the sets of $n-1$ intersection points between vertical and horizontal ovals with exactly one intersection point on each oval. An Alexander grading on those generators is defined by exactly the same formula as in the case of the MOS complex. The Maslov grading $M(x)$ of a generator $x$ is defined to be $I(x,x)-I(x,\mathbb{O})-I(\mathbb{O},x)+I(\mathbb{O},\mathbb{O})$. (The same formula as for the MOS complex minus one).
The boundary map $\partial_{\Long}(D)$ is defined as follows:
$$\partial(x)=\sum_{y\in \rect_0(x)\cup \bigon_0(x)} y.$$
The set $\rect_0(x)$ contains the generators $y$ such that:
\begin{itemize}
\item{The generators $x$ and $y$ have exactly $n-3$ intersection points in common.}
\item{The four intersection points that are either only in $x$ or only in $y$ form the four corners of a rectangle $R$. The sides of $R$ are arcs of ovals. The upper right corner of $R$ is in $x$.}
\item{The rectangle $R$ is empty in the sense that it doesn't contain any puncture of the grid diagram or any intersection point of $x$.}
\end{itemize}
The set $\bigon_0(x)$ contains the generators $y$ such that:
\begin{itemize}
\item{The generators $x$ and $y$ have exactly $n-2$ intersection points in common.}
\item{The two intersection points that are either only in $x$ or only in $y$ constitute the two corners of a bigon $L$. The sides of $L$ are an arc of a horizontal oval and an arc of a vertical oval. A counterclockwise rotation around $L$, along the arc of the horizontal oval, leads from the corner in $x$ to the corner in $y$.}
\item{The bigon $L$ is empty in the sense that it doesn't contain any puncture of the grid diagram.}
\end{itemize}
\end{definition}
\begin{figure}[t]
\mbox{\epsfysize=8cm \epsffile{short.eps}}
\caption{{\ A grid diagram for the $5_2$ knot with a collection of short ovals.} The blue points represent a generator.}
\label{shortFig}
\end{figure}
The long oval complex can be reduced, by a sequence of homotopies $(h_1,...,h_l)$, to a much smaller (in terms of the rank of $C_{\Long}(D)$) complex, which we will call the \emph{short oval complex} $(C_{\Short}(D),\partial_{\Short}(D))$ (see Figure \ref{shortFig}). This sequence of homotopies corresponds to a progressive shortening of the curves $\boldsymbol{\alpha}$ and the curves $\boldsymbol{\beta}$. This shortening is a homotopy written $(\boldsymbol{\alpha},\boldsymbol{\beta})_t$ for $t \in [0,l+1]$ such that there is no $t$ for which a curve meets a puncture. Among such homotopies, we choose one that minimizes the number of intersection points of the curves at $t=l+1$. We choose this progressive shortening such that one pair of intersection points disappears for each $t\in \{1,...,l\}$ (see Figure \ref{bigon2} for an example of the corners of a bigon disappearing). To each $(\boldsymbol{\alpha},\boldsymbol{\beta})_t$ $t\in [0,l+1]$, we associate a complex $(C_{\lfloor t\rfloor}(D),\partial_{\lfloor t\rfloor}(D))$. The complex $C_{k}(D)$ is generated by the sets of $n-1$ intersection points of $\boldsymbol{\alpha}_k$ and $\boldsymbol{\beta}_k$ with exactly one intersection point on each oval. The boundary maps $\partial_k(D)$ are defined inductively.
\begin{figure}[h]
\mbox{\epsfysize=40mm \epsffile{bigon2.eps}}
\caption{{\ The disappearance of a bigon during an homotopy from the long oval complex to the short oval complex.}}
\label{bigon2}
\end{figure}
\begin{definition}
\label{edgeDel}
Let $p_1$ and $p_2$ be the intersection points that disappear at time $t=k$, assume $\{ p_1\}$ has a bigger Maslov grading than $\{ p_2\}$.
For $x$ a generator of $C_{k}(D)$, let $\eta (x)$ be either $0$, if $x$ does not contain $p_2$, or, if $x$ does contain $p_2$, a generator identical to $x$ except that it contains $p_1$ instead of $p_2$.
$$\partial_k=\pi \circ(\partial_{k-1}+\partial_{k-1}\circ \eta \circ \partial_{k-1})\circ \iota$$
where $\pi$ is the natural projection from $C_{k-1}(D)$ to $C_k(D)$ that sends generators containing $p_1$ or $p_2$ to zero and $\iota$ is the natural injection from $C_k(D)$ to $C_{k-1}(D)$.
\end{definition}
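The inductive step of this definition is a standard cancellation (Gaussian elimination) of a differential pair. The following is a minimal matrix-level sketch over ${\mathbb Z}/2{\mathbb Z}$; the function name and the toy complex are our own illustration, not part of any program described in the paper.

```python
import numpy as np

def cancel_pair(D, i, j):
    """Cancel the generator pair (i, j), where D[i, j] = 1 (mod 2).

    D[a, b] is the coefficient of generator a in the boundary of
    generator b.  The reduced boundary on the remaining generators is
    D'[a, b] = D[a, b] + D[a, j] * D[i, b]  (mod 2),
    the matrix form of the elimination pi o (d + d o eta o d) o iota.
    """
    keep = [k for k in range(D.shape[0]) if k not in (i, j)]
    Dp = (D + np.outer(D[:, j], D[i, :])) % 2
    return Dp[np.ix_(keep, keep)]

# Toy complex: d(x) = y1 + y2, d(y1) = d(y2) = z, so d^2(x) = 2z = 0 mod 2.
x, y1, y2, z = 0, 1, 2, 3
D = np.zeros((4, 4), dtype=int)
D[y1, x] = D[y2, x] = D[z, y1] = D[z, y2] = 1
assert (D @ D % 2 == 0).all()

# Cancel the pair (z, y1): the reduced complex on {x, y2} is still a complex.
Dp = cancel_pair(D, z, y1)
assert (Dp @ Dp % 2 == 0).all()
```

The rank drops by two at each step, which is the source of the size reduction from the long to the short oval complex.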
The homotopy $h_k$ is a homotopy between $(C_{k-1}(D),\partial_{k-1}(D))$ and $(C_{k}(D),\partial_{k}(D))$. It corresponds to the "cancellation" of two intersection points. The existence of these homotopies is Lemma 2.1 of \cite{beliakova}.
It is noteworthy that this construction of $(C_{\Short}(D),\partial_{\Short}(D))$ is not canonical. The differential $\partial_{\Short}$ depends on the whole $(\boldsymbol{\alpha},\boldsymbol{\beta})_t$ (for $t\in [0,l+1]$) and not just on $(\boldsymbol{\alpha},\boldsymbol{\beta})_{l+1}$.
\begin{theorem}
\label{BTh}
(Beliakova) Let $D$ be a grid diagram of complexity $n$ of the knot $K$, $(C_{\Short}(D),\partial_{\Short}(D))$ a short oval complex constructed from $D$.
\begin{itemize}
\item{The chain complex $(C_{\Short}(D),\partial_{\Short}(D))$ is bigraded by the Alexander and Maslov gradings. The boundary map preserves the Alexander grading and decreases the Maslov grading by 1.}
\item{The homology $H_* (C_{\Short}(D),\partial_{\Short}(D))$ is $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$, where $V$ is a vector space with basis composed of one vector with Alexander and Maslov gradings $-1$ and one vector with Alexander and Maslov gradings zero.}
\end{itemize}
\end{theorem}
\section{Introduction}
Knot Floer homology $HL$ was introduced in \cite{knots} by Ozsv{\'a}th and Szab{\'o} and independently in \cite{rasmussen} by Rasmussen. In this article, we will restrict ourselves to the study of $\widehat{HL}$, a simplified version of $HL$.
The knot Floer homology $\widehat{HL}(K)$ of a knot $K$ is the homology of a bigraded complex $(C,\partial)$ with Maslov grading $m$ and Alexander grading $a$. $$\widehat{HL}(K)=\bigoplus_{m,a \in \mathbb{Z}} HL_{m} (K,a)$$ We will work with either ${\mathbb Z}$ or ${\mathbb Z}/2{\mathbb Z}$ as coefficient ring. Knot Floer homology, even in its simplest version $\widehat{HL}(K)$ over ${\mathbb Z}/2{\mathbb Z}$, gives much information about a knot: The Seifert genus $g(K)$ of $K$ is
$$\underset{a}\argmax (\, \widehat{HL}(K,a)\neq 0)$$ (see \cite{os}).
The knot $K$ is fibered if and only if
$\mathrm{rank\,} \widehat{HL}(K,g(K))=1 $ (see \cite{Ghiggini} and \cite{YiNi}). It is also possible to extract from $HL$ a bound for the slice genus of the knot, see \cite{4BallGenus}.
The first combinatorial method for computing knot Floer homology was given in \cite{oms} for ${\mathbb Z}/2{\mathbb Z}$ coefficients and was then extended to $\mathbb{Z}$ coefficients in \cite{moz}. Those papers explain how to construct a complex $(C,\partial)$ from a grid diagram (also called a rectangular diagram or arc presentation) of $K$, so that $\widehat{HL}(K)=H_{\ast}(C)\otimes V^{\otimes(n-1)}$ (see Theorem \ref{MOSTh} for the definition of the free module $V$). However, the number of generators in $C$ grows rapidly with the size of the grid diagram, making the practical computation of this complex difficult. A computer program extracting knot Floer homology from this complex was written by Baldwin and Gillam (see \cite{bg}). This program manages to alleviate the problem of the number of generators by taking advantage of the sparseness of the matrix describing the boundary map $\partial$.
In \cite{beliakova}, Beliakova proposed a new complex $(C_{\Long},\partial_{\Long})$ for knot Floer homology, which we will call the long oval complex. She explained how it is homotopy equivalent to a complex $(C_{\Short},\partial_{\Short})$, which we will call the short oval complex. The main interest of the construction is that the short oval complex usually has far fewer generators than the MOS complex. These two complexes are described in Section 2.
In Section 3, we extend Beliakova's complexes, originally defined with ${\mathbb Z}/2{\mathbb Z}$ coefficients, to $\mathbb Z$ coefficients. In Section 4, the formula used to calculate the Alexander grading of generators in oval complexes is proved.
Section 5 explains how our program computes the short oval complex and extracts its homology. It also describes algorithms and optimizations that could have a larger interest. Many could, for example, be used to work with the MOS complex.
\subsection{Main results.} We developed a program that can efficiently determine the $\widehat{HL}$ with $\mathbb{Z}$ or ${\mathbb Z}/2{\mathbb Z}$ coefficients of knots with fewer than 13 crossings and that can be used to determine fiberedness and the Seifert genus of even larger knots\footnote{Our program is accessible on our homepage "http://www.math.unizh.ch/assistenten/jdroz" or via the Knot Atlas "http://katlas.math.toronto.edu/wiki/Main\_Page"}. Using our program, we show that the $\widehat{HL}$ of prime non-alternating knots with fewer than 12 crossings contains no torsion, and we checked the $\widehat{HL}$ computations given in \cite{bg}.
\subsection{Acknowledgements.} I would like, above all, to express my gratitude to Anna Beliakova for the original impetus of the project, numerous interesting discussions, advice and ideas. I also would like to thank Dror Bar-Natan for good advice and his support in making my program a part of the Knot Atlas. The resources of the Knot Atlas\footnote{http://katlas.math.toronto.edu/wiki/Main\_Page} and of the knot data tables of Alexander Stoimenov\footnote{http://www.kurims.kyoto-u.ac.jp/~stoimeno/ptab/index.html} were very useful to me. Some of the figures in this article are courtesy of Anna Beliakova.
\section{Sign Assignment}
We start by defining an extension $(C'_{\Long}(D),\partial '_{\Long}(D))$ of the long oval complex, the homology of which is $\widehat{HL}$ over $\mathbb{Z}$. The complex $(C'_{\Long}(D),\partial '_{\Long}(D))$ will be said to be a sign assignment over the long oval complex.
\begin{definition}
\label{signAssignment}
Let $(C',\partial')$ be a complex with $\mathbb{Z}$ as coefficient ring and basis $\boldsymbol{b'}=\{b'_1,\ldots,b'_n\}$. Let $(C,\partial)$ be a complex with ${\mathbb{Z}}/2{\mathbb{Z}}$ as coefficient ring and basis $\boldsymbol{b}=\{b_1,\ldots,b_n\}$.
Let us assume that the matrices $(\partial'_{i,j})$ and $(\partial_{i,j})$ represent the boundary maps in the basis $\boldsymbol{b'}$ and $\boldsymbol{b}$. We say that $(C',\partial')$ is a \emph{sign assignment} on $(C,\partial)$ if the following two conditions are true for all integers $1\leq i,j\leq n$.
\begin{itemize}
\item{$\partial_{i,j}=0\: \Rightarrow \: \partial'_{i,j}=0$}
\item{$\partial_{i,j}\neq 0\: \Rightarrow \: \partial'_{i,j}=\pm 1$}
\end{itemize}
\end{definition}
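The two conditions of this definition are easy to check mechanically once the boundary matrices are known. A minimal sketch (our own illustration, with invented example matrices, not the paper's program):

```python
import numpy as np

def is_sign_assignment(Dp, D):
    """Check the sign-assignment conditions: Dp over Z refines D over Z/2Z.

    Requires D[i, j] = 0  =>  Dp[i, j] = 0, and
             D[i, j] != 0 =>  Dp[i, j] = +-1.
    """
    zero_ok = np.all(Dp[D % 2 == 0] == 0)
    sign_ok = np.all(np.abs(Dp[D % 2 != 0]) == 1)
    return bool(zero_ok and sign_ok)

# A mod-2 boundary matrix and a candidate integral lift with signs.
D = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 0, 0]])
Dp = np.array([[ 0, 0, 0],
               [ 1, 0, 0],
               [-1, 0, 0]])
assert is_sign_assignment(Dp, D)
# An entry of absolute value 2 violates the second condition.
assert not is_sign_assignment(np.array([[0, 0, 0], [2, 0, 0], [1, 0, 0]]), D)
```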
Our approach is completely analogous to the one taken in \cite{moz} for the same purpose. We assume for the following definitions that the ovals in $\boldsymbol{\alpha}\cup\boldsymbol{\beta}$ are ordered. We say that a point sits positively on an oval if it sits on the right side of an $\alpha$ curve or the upper side of a $\beta$ curve.
\begin{definition}
\label{signDef}
Let $x$ and $y$ be generators of the long oval complex. We define $\sign(x,y)$ in the following way:
\begin{itemize}
\item{If $y \in \rect_0(x)$ (we say that $x$ and $y$ are \emph{connected by a rectangle}):
Let $(a,b)$ (respectively $(c,d)$) denote the coordinates of the leftmost (respectively rightmost) point that is in $x$ but not in $y$. Let $\mathrm{D}(x,y)$ be the number of points $p=(p_1,p_2)$ in $x$ such that $a\leq p_1 \leq c$ and $p_2 \leq b$. In other words, $\mathrm{D}(x,y)$ is the number of points below the rectangle between $x$ and $y$.
$$\sign(x,y)=(-1)^{I(x,\{ (x_1,x_2)\in x\mid x_2 \leq d\})+\mathrm{D}(x,y)\cdot(I(x,\{ (x_1,x_2)\in x\mid b<x_2\leq d\})+1)}$$
}
\item{If $y \in \bigon_0(x)$ (we say that $x$ and $y$ are \emph{connected by a bigon}):
Let $E$ be the oval on which $x$ and $y$ have intersection points on opposite sides (the intersection points are placed symmetrically with respect to the great axis of $E$). Let $\mathrm{pre}(x,y)$ be the number of ovals that come before $E$ in the ordering and on which an intersection point of $x$ sits positively.
$$\sign(x,y)=(-1)^{I(x,x)+\mathrm{pre}(x,y)}$$}
\item{Otherwise, $\sign(x,y)$ is undefined.}
\end{itemize}
\end{definition}
\begin{definition}
The complex $C'_{\Long}(D)$ is a free module with $\mathbb{Z}$ coefficients on the same generators as $C_{\Long}(D)$.
The boundary map $\partial '_{\Long}$ is the linear map equal to $\sum_{y\in \rect_0(x) \cup \bigon_0(x)} \sign(x,y) \cdot y$ for each generator $x$ of $C'$.
\end{definition}
For generators ($x$,$y$) between which there is a rectangle ($y \in \rect_0(x)$), our formula is taken from \cite{moz}.
\begin{lemma}
\label{d2iszero}
The boundary map $\partial '_{\Long}$ verifies the condition ${\partial '}_{\Long}^2=0$ and thus $(C'_{\Long}(D),\partial '_{\Long}(D))$ is a chain complex.
\begin{proof}
Let $x$ and $z$ be two generators of $C'_{\Long}(D)$ with $M(x)-2=M(z)$. The number of generators $y$ such that $y\in \rect_0(x) \cup \bigon_0(x)$ and $z\in \rect_0(y) \cup \bigon_0(y)$ is either 0 or 2. If those two generators exist, we call them $y_1$ and $y_2$.
The lemma is equivalent to the claim that when $y_1$ and $y_2$ exist, $\sign(x,y_1)\cdot\sign(x,y_2)\cdot\sign(y_1,z)\cdot\sign(y_2,z)=-1$.
We must check this in three cases:
\begin{itemize}
\item{First case: $y_1,y_2\in \rect_0(x)$. This implies that $z\in \rect_0(y_1)$ and $z\in \rect_0(y_2)$. Therefore, the claim follows from \cite{moz} Section 4.1, where an analogous claim for the MOS complex is proven.}
\item{Second case: $y_1\in \rect_0(x)$ and $y_2 \in \bigon_0(x)$. This implies $z\in \rect_0(y_2)$ and $z \in\bigon_0(y_1)$. Since the formula for $\sign$ for generators connected by a rectangle only depends on which pairs of ovals have common intersection points, we have $\sign(x,y_1)=\sign(y_2,z)$. Since $\mathrm{pre}(x,y_2)=\mathrm{pre}(y_1,z)$ and $I(x,x)\not\equiv I(y_1,y_1) \pmod{2}$, $\sign(x,y_2)=-\sign(y_1,z)$ and the claim follows.}
\item{Third case: $y_1,y_2 \in \bigon_0(x)$. This implies $z \in\bigon_0(y_1)$ and $z \in\bigon_0(y_2)$. We assume w.l.o.g. that the oval containing the bigons connecting $x$ to $y_1$ and $y_2$ to $z$ comes before the oval containing the bigons connecting the other pairs. We then have $\mathrm{pre}(y_1,z)\not\equiv \mathrm{pre}(x,y_2)\pmod{2}$ and $\mathrm{pre}(y_2,z)= \mathrm{pre}(x,y_1)$. Because, for a generator $g$, $I(g,g)$ does not depend on which sides of the ovals the intersection points of $g$ sit, we have $I(x,x)=I(y_2,y_2)=I(y_1,y_1)$. The claim follows.}
\end{itemize}
\end{proof}
\end{lemma}
\begin{theorem}
Let $K$ be a knot with rectangular diagram $D$. The homology
of $(C'_{\Long}(D),\partial '_{\Long}(D))$ is $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$ (see Theorem \ref{MOSTh} for the definition of $V$).
\begin{proof}
(sketch) We begin by extending our sign assignment for $(C_{\Long}(D),\partial_{\Long}(D))$ to a sign assignment for a modification of the long oval complex called $(C_{\mathrm{full}},\partial_{\mathrm{full}})$ that has the full $HL(K)\otimes V^{\otimes(n-1)}$ (and not $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$) as homology. In \cite{moz} Section 2.3, the complex for $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$ is "extracted" from the more complicated complex for $HL$ in the same way that the long oval complex can be extracted from $(C_{\mathrm{full}},\partial_{\mathrm{full}})$.
We then prove that there is, up to quasi-isomorphism, only one sign assignment $(C'_{\mathrm{full}},\partial'_{\mathrm{full}})$ with ${\partial'_{\mathrm{full}}}^2=0$. This is analogous to \cite{moz} Section 4.1. The existence of a sign assignment for $(C_{\mathrm{full}},\partial_{\mathrm{full}})$ with homology $HL(K)\otimes V^{\otimes(n-1)}$ is a consequence of the analytical theory of link Floer homology (see \cite{knots} for the general theory and \cite{beliakova} for the special case of the long oval complex). Since our (extended) sign assignment verifies the condition ${\partial'_{\mathrm{full}}}^2=0$ (this is proved in the same way as Lemma \ref{d2iszero}), it must be quasi-isomorphic to the one coming from the analytical theory. Quasi-isomorphic complexes have isomorphic homologies. The theorem follows by restricting the sign assignment on $\partial_{\mathrm{full}}$ to a sign assignment on $\partial_{\Long}$ and noticing that, because the homology of $(C'_{\mathrm{full}},\partial'_{\mathrm{full}})$ is $HL(K)\otimes V^{\otimes(n-1)}$, the homology of $(C'_{\Long}(D),\partial '_{\Long}(D))$ will be $\widehat{HL}(K)\otimes V^{\otimes(n-1)}$.
\end{proof}
\end{theorem}
The short oval complex over ${\mathbb{Z}}/2{\mathbb{Z}}$ can now be extended to a complex with $\mathbb{Z}$ coefficients. We get a "signed" short oval complex by applying to the new long oval complex a tiny modification of the inductive construction we used in the $\mathrm{modulo\,}{2}$ case. Instead of Definition \ref{edgeDel}, we take:
\begin{definition}
\label{edgeDelSigned}
Let $p_1$ and $p_2$ be the intersection points that disappear at time $t=k$, assume $\{ p_1\}$ has a bigger Maslov grading than $\{ p_2\}$.
For $x$ a generator of $C_{k}(D)$, let $\eta (x)$ be either $0$, if $x$ does not contain $p_2$, or, if $x$ does contain $p_2$, a generator identical to $x$ except that it contains $p_1$ instead of $p_2$.
$$\partial_k=\pi \circ(\partial_{k-1}-\partial_{k-1}\circ \eta \circ \partial_{k-1})\circ \iota$$
where $\pi$ is the natural projection from $C_{k-1}(D)$ to $C_k(D)$ that sends generators containing $p_1$ or $p_2$ to zero and $\iota$ is the natural injection from $C_k(D)$ to $C_{k-1}(D)$.
\end{definition}
And instead of Lemma 2.1 of \cite{beliakova} we use:
\begin{lemma}
\label{elimStep}
The complexes $C_{k}(D)$ and $C_{k-1}(D)$ of Definition \ref{edgeDelSigned} are homotopy equivalent.
\begin{proof}
This lemma can be proved either by "adding" signs to the proof in \cite{beliakova} or by noticing that the generators containing a corner of the bigon can be matched in the sense of discrete Morse theory and that $C_{k}(D)$ is $C_{k-1}(D)$ with the matching collapsed (see \cite{kozlov}).
\end{proof}
\end{lemma}
Our new short oval complex is not a sign assignment over the one defined in the previous section since some coefficients of the boundary map can be of absolute value greater than one.
\section{Introduction}
The galaxy MCG$-6$-30-15 is a well-known Seyfert type I at redshift $z=0.00775$,
and was one of the first to show evidence in ASCA observations for a broad wing
of emission extending to lower energies from the 6.4\,keV\,Fe\,K$\alpha$ line
\citep{tanaka95}. A common interpretation of this emission is that it arises
as reflection from the inner regions of the accretion disc, where the photon
energy is smeared to low energies by relativistic effects
\citep[e.g.][]{tanaka95,iwasawa96}. The existence of the ``red wing'' has been
subsequently confirmed by more recent observations with {\em XMM-Newton}\ and {\em Suzaku}\
(\citealt{wilms}, \citealt{fabian02}, \citealt{vaughanfabian04},
\citealt{reynolds04}, \citealt{miniutti07}), but
investigations of models for the physical origin of this component have concentrated
on the ``blurred reflection'' hypothesis (see \citealt{brenneman06} for a
summary of the history of analysis of the X-ray spectrum).
However, it has long been known that absorption can conspire to
produce continuum shapes similar to those observed in the
AGN with red wings (e.g. in NGC\,3516, \citealt{turnerea05}
and NGC\,3783, \citealt{reeves04}), and
MCG$-6$-30-15 itself is known to show strong absorption lines
indicating the presence of absorption by zones of gas with a very
broad range of ionisation. In the Fe\,K$\alpha$ region both
\citet{young05} in high-resolution {\em Chandra}\ {\sc heg} data and
\citet{miniutti07} in {\em Suzaku}\ {\sc xis} data detected lines at 6.7 and
7.0\,keV, most likely identified as \ion{Fe}{xxv} and \ion{Fe}{xxvi}
with an outflow velocity $\sim 1800$\,km\,s$^{-1}$. This
identification of a highly ionised outflowing zone is substantiated by
the detection of matching lines from \ion{Si}{xiv} and \ion{S}{xvi} at
2.0 and 2.6\,keV in the {\em Chandra}\ {\sc meg} data \citep{young05}.
High-resolution data at softer energies also reveal lines from a
broad range of ionisation: \citet{lee01} showed that at least two zones
of gas with ionisation parameter $\log\xi \simeq 0.7$ and 2.5 were required
to explain the lines detected in the {\em Chandra}\ {\sc meg} data, and these
lines were further confirmed by detections in {\em XMM-Newton}\ {\sc rgs} data
\citep{turner03, turner04}. \citet{lee01} also interpreted the
strong edge observed at 0.7\,keV as being an edge from \ion{Fe}{i}, possibly
arising in dust grains. To date there has been little published evaluation
of the possible effect of such zones on the observed continuum from this source,
although some analyses have included some of the known absorption zones
\citep[e.g.][]{brenneman06}:
in this paper we shall investigate absorption-dominated models that explain both the
continuum shapes and the absorption lines that are observed.
A further motivation for this work is that
it has been recognised for some time that the red wing component is surprisingly
constant in amplitude \citep{iwasawa96,vaughanfabian04}
(although \citealt{ponti04} did find evidence for additional
variability around 5\,keV in the 2000 {\em XMM-Newton}\ observation).
If the red wing
emission is indeed reflected emission from the accretion disk we would
expect its amplitude to vary in phase with the primary continuum, counter
to what is found in most observations: the only occasion on which it
has been inferred that this expected property did occur was when the source
was in its lowest flux state, as observed with {\em XMM-Newton}\ in 2000
\citep{reynolds04}.
The general lack of red wing
variability has led to the development of a ``light-bending''
model in which the observed primary continuum variations are caused by variations
in height of the illuminating continuum coupled with distortions of photon
geodesics near the black hole, rather than by rest-frame intensity variations,
so that the reflected emission appears more constant than the primary continuum
seen by the observer \citep{fabianvaughan03, miniutti03, miniutti04}.
The same model has also been
invoked to explain the high flux observed at $E>20$\,keV in {\em Suzaku}\ {\sc pin}
data as being the Compton reflection ``hump'' enhanced in amplitude by the
same light-bending effects \citep{miniutti07}.
Other models that seek to explain the constant
red wing amplitude \citep[e.g.][]{merloni06, nayakshin, zycki04}
do not explicitly produce an
explanation for the high energy excess, and some are unlikely to apply to
MCG$-6$-30-15 \citep{zycki01}
or to apply on long timescales \citep[][hereafter M07]{miller07}.
However, light-bending is not the only way
to produce a constant red wing and a high energy excess. The constancy of the
component could be explained if the reflection were distant from the primary
source, so that light travel time smooths out any illumination variations,
or models with a variable covering fraction of absorption can produce a
similar effect (see the discussion in M07
and \citealt{turner07}, hereafter T07).
Furthermore, the high energy excess may be explained either as a combination
of reflection with no light-bending enhancement and a contribution from a
highly absorbed (N$_{\rm H} \ga 10^{23}$\,cm$^{-2}$)
continuum component, or indeed perhaps as an absorbed component alone.
The existence of a high-energy excess on its own does not require light-bending
or relativistic blurring of reflected emission.
Thus our aim in this paper is to investigate the
extent to which the full X-ray spectrum of MCG$-6$-30-15 may be explained by the
effect of absorption of the intrinsic emission. We are aided
in this by two key advances since previous analyses. First, there now exists
a significant body of high-quality data from three independent observatories: {\em Chandra},
{\em XMM-Newton}\ and {\em Suzaku}. The {\em Suzaku}\ data include low-resolution spectral measurements
up to $\sim 45 $\,keV, and in this paper we analyse, for the first time, the entire
set of CCD-resolution data that is available for this source, and we test physical
models against the full energy range available, $\sim 0.5-45$\,keV. We also test
the models for consistency with the high-resolution {\em Chandra}\ {\sc hetgs} and {\em XMM-Newton}\
{\sc rgs} data.
The second advance
is to make as much use as possible of the spectral variability exhibited by this
(and other) AGN: the spectrum appears to change shape in a systematic way as the
total X-ray flux of the source varies, implying that the emission we see is made up
of a number of components whose individual spectra differ. We use a method
of principal components analysis, described below, that retains the full spectral
information content of the data, to first set the basic parameters of models describing
each of those components, and then we test the resulting model against the actual
data.
\section{Analysis methods}
\subsection{Principal components analysis}\label{pca}
The starting point for the physical models developed in this paper is
a model-free principal components analysis (hereafter PCA)
of the spectral variations
of MCG$-6$-30-15. Such an analysis was first applied to this source
with low spectral resolution by \citet{vaughanfabian04}. In this paper
we use the method developed and described by M07, which uses
singular value decomposition to achieve a principal components decomposition
of the spectral variations whilst retaining the full spectral resolution
of {\em XMM-Newton}\ {\sc pn} and {\em Suzaku}\ {\sc xis} data.
In the case of the analysis of Mrk\,766 (M07) this provided significant
additional information, as it revealed the presence of absorption edge and line
features associated with a hard spectral component, and it provided
model-independent evidence for the presence of both ionised absorption associated
with the principal varying component and also ionised Fe emission. We shall see
below that similar features appear in MCG$-6$-30-15.
The optimal-resolution
PCA procedure adopted is identical to that of M07 and we refer to
that paper for full details of the method.
The random errors in the
principal components arising
from photon shot noise are estimated by a Monte Carlo simulation, in which the PCA is
repeated for synthetic spectra that are perturbed by random noise about the true
spectra. The resulting errors are correlated between the principal components but do
allow some estimation of the effect of shot noise.
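The decomposition itself can be sketched generically with an SVD of the mean-subtracted spectra. This is only an illustration of the underlying linear algebra, under our own assumptions about the data layout, not the optimal-resolution procedure of M07:

```python
import numpy as np

def pca_spectra(spectra):
    """PCA of a set of spectra via singular value decomposition.

    spectra: array of shape (n_obs, n_energy_bins), one spectrum per row.
    Returns (mean_spectrum, eigenvectors, weights): each row of
    `eigenvectors` is one additive mode of variation about the mean,
    and weights[k, i] is the amplitude of mode i in observation k.
    """
    mean = spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, Vt, U * s

# Synthetic example: a constant "offset" spectrum plus a single
# varying mode, as in the eigenvector-one picture described above.
rng = np.random.default_rng(0)
energy = np.linspace(0.5, 10.0, 200)
offset = energy ** -1.5
mode = energy ** -2.0
amps = rng.uniform(0.5, 2.0, size=20)
spectra = offset + amps[:, None] * mode[None, :]

mean, eig, w = pca_spectra(spectra)
# Almost all the variance is captured by eigenvector one.
var = (w ** 2).sum(axis=0)
assert var[0] / var.sum() > 0.99
```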
The basic output from the PCA is a set of eigenvectors, each one representing the
spectrum of an additive mode of variation. The generic varying power-law, that is a
ubiquitous feature of AGN X-ray spectra, appears as ``eigenvector one'' in this analysis.
In addition it was found in Mrk\,766 that the bulk of the variation could be
described by a single eigenvector superimposed on a quasi-constant component of emission,
which in that paper was named the ``offset component''.
But there is not a unique
association of physical components with the eigenvectors and offset components. For
example, there could in the source be components with exactly the spectra of eigenvector
one (a varying absorbed power-law) and the offset component (which could
be reflected emission from an extended region where light-travel-time effects dampen any
intrinsic variation in the illumination). Alternatively
there could instead be two variable additive components
whose variations are correlated, such that eigenvector one represents the net overall
effect of their joint variation. A possible physical model that achieves the latter is
if spectral variations are caused by variations in the covering fraction of an
absorber passing across an extended source - in this case the offset component represents the
view of the source when it is most covered, the highest flux state represents the source
when it is most uncovered, and eigenvector one represents the difference between these
states. These interpretations can produce identical PCA results and are indistinguishable:
we must rely on physical models to attempt to discern which interpretation is closest to
reality. In practice, it may be that both a quasi-constant reflection
component and a variable covering-fraction
absorber are present, a hybrid model also mentioned by M07 and T07.
Such an analysis effectively assumes that the spectral variations may be described
by additive spectral components. In the case of Mrk\,766 it was found that although
the chief variations could be modelled in this way, there were still significant
non-additive variations which were found to be caused by absorption variations
(M07, T07). In this parallel analysis of MCG$-6$-30-15 we shall
first investigate the PCA, and fit models to the principal components, and then
also explore directly fitting to the data the models that arise from the PCA.
We shall not, however, explore temporal absorption variations, although we shall see
that there is evidence for these also in MCG$-6$-30-15.
\subsection{Data grouping and spectral fitting}
The principal statistical tool used here will be goodness-of-fit
testing using the binned $\chi^2$ statistic.
As in M07, we adopt an optimum spectral binning of the
data, with spectral bins whose width in energy are equal to half the
energy-dependent instrumental FWHM. Any spectral features arising
in finer binning cannot be intrinsic to the source and we should not
fit models with binning finer than this. It is commonplace in the
literature on X-ray observations for goodness-of-fit statistics to
be quoted with much finer binning than the instrumental resolution:
such reported values can be
misleading, in that the reduced $\chi^2$ in such cases may appear
low, but the goodness-of-fit probability might in fact be rather poor.
The sensitivity to departures
of the model from the data is significantly reduced with binning that is
too fine.
To illustrate this with a simplified example, suppose the data are divided into $n$ bins,
fit by a model with $p$ free parameters
that yields a goodness-of-fit $\chi^2_n = n - p + \Delta\chi^2$,
where $\Delta\chi^2$ is the excess $\chi^2$ over the expectation value $n-p$
for a well-fitting model. If the data
are then more finely divided into $m$ bins, $m>n$,
with the data errors increasing as the
inverse square root of the bin width, then if the model does not
have any variations on this finer scale we can use the properties of the
non-central $\chi^2$ distribution to infer an expected new
goodness-of-fit, $\chi^2_m \simeq m - p + \Delta\chi^2$ where $\Delta\chi^2$
has the same value as in the case of $n$ bins.
For example, for
$\chi^2_n=400$, $n=300$, we would obtain $\chi^2_m=3100$ for $m=3000$
(adopting the simplification $p=0$).
The first case has a reduced $\chi^2=1.33$ and the model would be rejected
at significance level $p=10^{-4}$. The second case has reduced $\chi^2=1.033$
and the model would only be rejected at $p=0.1$.
In this paper we adopt the optimum energy binning for maximum sensitivity,
and hence the goodness-of-fit statistics will indicate worse fits than if we had adopted
finer energy binning. Where relevant, we shall compare goodness-of-fit with
previous published results for MCG$-6$-30-15 by calculating $\chi^2$ with
similar binning to that adopted by other authors.
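The probabilities quoted in the example above can be checked with the $\chi^2$ tail probability (a quick sketch assuming scipy is available; the specific numbers are those of the example):

```python
from scipy.stats import chi2

# Coarse binning: chi^2 = 400 with n = 300 bins (taking p = 0 free parameters).
p_coarse = chi2.sf(400, df=300)

# The same excess Delta chi^2 = 100 spread over m = 3000 finer bins.
p_fine = chi2.sf(3100, df=3000)

print(p_coarse)  # ~1e-4: the model is rejected
print(p_fine)    # ~0.1 : the same model now looks acceptable
assert p_coarse < 5e-4
assert p_fine > 0.05
```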
\subsection{Systematic uncertainties in data and models}
Photon counts are
everywhere sufficiently high for the approximation that the
error in each bin has a normal distribution to be valid.
However,
the datasets being analysed comprise long observations on this
bright AGN, and in fact the goodness-of-fit is no longer
dominated entirely by photon statistics, especially at the
low energy end of the spectra.
Systematic errors in the data exist through uncertainty in the
calibration. These uncertainties have not yet been fully quantified
for {\em Suzaku}\ , but for {\em XMM-Newton}\ it seems that there can be energy-dependent
discrepancies of 5-10\,percent between {\sc pn} and {\sc mos} instruments,
depending on the spectrum of the source, and discrepancies of
up to 20 percent in the {\em XMM-Newton}\ {\sc pn} v. {\em Suzaku}\ {\sc xis} cross-calibration
\citep{stuhlinger}.
There are also systematic uncertainties in the models, in the sense
that, in order to keep the number of free parameters to a minimum,
we inevitably fit simplified models to what in reality must be
a complex set of emission and absorption processes. For example,
it is commonplace to use the {\sc reflion} models of \citet{rossfabian}
to model reflection spectra. However, those models assume
a constant density slab at normal inclination, whereas in practice
there must be a complex geometry. Emission-line fluxes in particular
will be orientation dependent \citep[e.g.][]{george91, zycki94}, and
it has been suggested that a disk atmosphere in hydrostatic pressure
equilibrium would result in a quite different spectrum
\citep{donenayakshin}. Such systematic ``model error'' is
difficult to quantify.
There is no rigorous way of taking such systematic errors into account
in the goodness-of-fit testing. Within {\sc xspec} \citep{arnaud}
it is possible
to treat the systematic error as being a constant fractional error
that is added in quadrature with the random error. In this paper we
adopt as standard a fractional systematic error of 3\,percent, a value
that is around the lowest systematic uncertainty we might expect given
the cross-instrument comparisons that have been made to date. Where
relevant, when comparing our results with previous results in the literature,
we shall quote goodness-of-fit statistics assuming zero systematic error.
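The quadrature combination described above amounts to the following (the 3 percent fraction matches our adopted standard; the flux and error values are purely illustrative):

```python
import numpy as np

def total_error(flux, stat_err, frac_sys=0.03):
    """Statistical error combined in quadrature with a constant
    fractional systematic error (3 percent by default)."""
    return np.sqrt(stat_err ** 2 + (frac_sys * flux) ** 2)

flux = np.array([10.0, 100.0])   # illustrative bin fluxes
stat = np.array([0.4, 0.3])      # illustrative statistical errors
tot = total_error(flux, stat)
# In the bright bin the 3% systematic term (3.0) dominates the
# statistical term (0.3).
assert abs(tot[1] - np.sqrt(0.3 ** 2 + 3.0 ** 2)) < 1e-12
```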
\section{The data}
\subsection{XMM-Newton data}
The {\em XMM-Newton}\ dataset comprises two observations made in 2000,
previously described and analysed by \citet{wilms}, \citet{reynolds04}
and \citet{ponti04},
and three closely-spaced observations made in 2001,
previously described and analysed by
\citet{fabian02}, \citet{vaughanfabian04} and \citet{brenneman06}.
The observation dates, IDs, duration of the observations and on-source
exposure times after screening are given
in Table\,\ref{tabledata}.
\begin{table}
\caption
{Datasets used in this analysis, giving observation date, ID,
observation duration and on-source exposure time
({\sc pn} times are given for {\em XMM-Newton}\ , {\sc xis} times are given for {\em Suzaku}\ ).}
\label{tabledata}
\begin{tabular}{lrlrr}
\hline\hline
Observatory & \multicolumn{1}{c}{date} & \multicolumn{1}{c}{ID}
& dur & exp \\
& & & /ks & /ks \\
\hline
{\em XMM-Newton}\ & 11 Jul 2000 & 0111570101 & 43.2 & 41.1 \\
{\em XMM-Newton}\ & 11-12 Jul 2000 & 0111570201 & 46.0 & 32.3 \\
{\em XMM-Newton}\ & 31 Jul 2001 & 0029740101 & 79.5 & 57.7 \\
& - 1 Aug 2001 & & & \\
{\em XMM-Newton}\ & 2-3 Aug 2001 & 0029740701 & 125.9 & 69.0 \\
{\em XMM-Newton}\ & 4-5 Aug 2001 & 0029740801 & 125.0 & 81.5 \\
{\em Suzaku}\ & 9-14 Jan 2006 & 700007010 & 354.2 & 103.5 \\
{\em Suzaku}\ & 23-26 Jan 2006 & 700007020 & 215.3 & 72.1 \\
{\em Suzaku}\ & 27-30 Jan 2006 & 700007030 & 208.3 & 77.4 \\
{\em Chandra}\ & 19-27 May 2004 & 4759-4762 & 530.0 & 521.8 \\
\hline
\end{tabular}
\end{table}
In this analysis we use data from the {\sc epic pn} CCD detector
\citep{struder} in the energy range $0.4-9.8$\,keV. Data from the
Metal Oxide Semi-Conductor ({\sc mos}) CCDs were not used as they suffer from significant
photon pile-up and have inferior signal-to-noise.
All {\sc pn} observations utilised the medium filter and the small window mode.
Data were processed using {\sc sas v7.0}
using standard criteria with instrument patterns 0--4 and removing periods
of high background (where the rate in the background cell exceeded
0.15\,count\,s$^{-1}$). Source data were extracted
from a circular cell of radius $50''$ centred on the source, and
background data were taken from a source-free region of approximately the same size
within the same {\sc pn} chip.
The total 2000 and 2001 {\sc pn} exposures were 73\,ks and 208\,ks, respectively.
The typical deadtime correction was a factor of 0.7.
During 2000
the mean {\sc pn} count rate was $\sim 3.709 \pm 0.010$\,count\,s$^{-1}$
and during 2001
$\sim 4.908 \pm 0.005$\,count\,s$^{-1}$ in the 2--10\,keV band.
The mean background level in the screened data was $< 1$\% of the
mean source rate in this band.
We also used higher resolution spectra from the Reflection Grating Spectrometer
({\sc rgs}, \citealt{denherder01}) which
were taken from the pipeline processing and were coadded for each {\sc rgs}
grating to yield two spectra for
each of the 2000 and 2001 epochs. As the individual exposures were
taken just days apart and the source was placed at the same detector position for each
observation,
the responses of the coadded parts were indistinguishable, and this
summation did not result in any significant loss of effective resolution.
The total {\sc rgs 1} exposure was 108\,ks for 2000 and 331\,ks for 2001;
the {\sc rgs 2} exposure was 105\,ks
for 2000 and 323\,ks for 2001. During 2000 the first-order
{\sc rgs} data yielded $0.71 \pm 0.003$\,count\,s$^{-1}$
and $0.64 \pm 0.03$\,count\,s$^{-1}$ for the summed
{\sc rgs 1} and {\sc rgs 2} data respectively, and
during 2001, $0.89 \pm 0.002$\,count\,s$^{-1}$
and $0.90 \pm 0.02$\,count\,s$^{-1}$ for {\sc rgs 1} and {\sc rgs 2}, respectively.
The {\sc rgs} background level was $\sim 11\%$ of the total count rate.
\subsection{{\em Suzaku}\ data}
The {\em Suzaku}\ dataset comprises three observations made in 2006 as
tabulated and previously described and analysed by \citet{miniutti07}.
Because of its much shorter duration and the changing instrumental
response, we do not use the shorter observation made on 2005 Aug 17. The
observation dates, IDs, total duration of the {\sc xis} observations
and on-source exposure times after screening are given in
Table\,\ref{tabledata} ({\sc xis 0} typically had a slightly lower
on-source exposure time than the other detectors). We used {\sc xis}
and {\sc hxd pin} events from v2.0.6.13 of the {\em Suzaku}\ pipeline. Both
the {\sc xis} and {\sc hxd pin} data were reduced using v6.3.2 of {\sc
HEAsoft} and screened with {\sc xselect} to exclude data during and
within 436 seconds of entry/exit from the South Atlantic Anomaly
(SAA). Additionally we excluded data with an Earth elevation angle
less than 5$^\circ$ and Earth day-time elevation angles less than
20$^\circ$. A cut-off rigidity (COR) of $>6$\,GeV was applied to
lower particle background. The source was observed at the nominal
centre position for the {\sc xis} for all observations. The {\sc
xis-fi} CCDs were in $3 \times 3$ and $5 \times 5$ edit modes. Data
from the back-illuminated {\sc xis} 1 detector were not used because
its effective area is lower than that of the front-illuminated {\sc xis}
0/2/3 at 6\,keV, while its background rate is larger at high energies.
For the {\sc xis} CCDs we selected good events with grades 0,2,3,4,
and 6 and removed hot and flickering pixels using the {\sc sisclean}
script.
The {\sc xis} products were extracted from circular regions of
2.9\arcmin\ radius while background spectra were extracted from a
region of the same size offset from the source (and avoiding the chip
corners with the calibration sources). The response matrix (rmf) and
ancillary response (arf) files were then created using the tasks {\sc
xisrmfgen} and {\sc xissimarfgen}, respectively. {\sc xissimarfgen}
accounts for the hydrocarbon contamination on the optical blocking
filter. The background was $<1\%$ of the total {\sc xis} count rate
in the full {\sc xis} band for each CCD. The {\sc xis} 0, 2 and 3 spectra were
combined to produce a single {\sc xis-fi} spectrum, and the corresponding
response files were combined with the appropriate (1/3) weighting.
The {\sc pin} background events file was provided by the {\sc hxd}
instrument team and was used in conjunction with the source events file to
create a common set of good time intervals applicable to both the source
and background. The background events file was generated using ten
times the actual background count rate, so we increased the effective
exposure time of the background spectra by a factor of 10. We found
the deadtime correction factor using the {\sc hxddtcor} task with the
extracted source spectra and the unfiltered source events files. The
total on-source exposure time, after screening, was 292\,ks, somewhat
longer than for the {\sc xis}. The contribution of the cosmic X-ray
background \citep{boldt87,gruber99} was taken into account as
described later. The source comprised 26\% of the total counts,
clearly above the $1\sigma$ 3.2\% {\sc hxd pin} background systematic
level. The response file ae\_hxd\_pinxinome1\_20070914.rsp provided by
the instrument team was used in all the spectral fits. In the
subsequent analysis, the {\sc pin} flux has been decreased by a factor
1.09 to be consistent for observations at the {\sc xis} pointing
position with the cross-calibration study of \citet{koyama}, revised
for the {\em Suzaku}\ revision\,2 analysis pipeline.\footnote{The cross
normalisation of the {\sc hxd/pin} vs. {\sc xis-fi} has been
determined to be in the range 1.06--1.09 for {\sc xis} nominal
pointing from observations of the Crab (see
ftp://legacy.gsfc.nasa.gov/suzaku/doc/xrt/suzakumemo-2007-11.pdf).}
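The exposure rescaling works as follows: since the events file is generated with ten times the true background rate, normalising by ten times the exposure recovers the correct rate while the Poisson uncertainty is reduced by $\sqrt{10}$. A minimal numerical sketch (the background rate below is an assumed, purely illustrative value, not a measurement):

```python
import math

# Illustrative sketch only: the background rate here is an assumed number.
# The events file is generated with ten times the real background rate, so
# normalising by ten times the exposure recovers the correct rate while
# the Poisson fractional error shrinks by sqrt(10).
bkg_rate = 0.4           # count/s, hypothetical PIN background rate
exposure = 292e3         # s, on-source exposure quoted in the text
counts_x10 = bkg_rate * 10 * exposure      # counts in the 10x events file
rate_est = counts_x10 / (10 * exposure)    # scaled exposure restores the rate

# Fractional Poisson errors for a 1x and a 10x background file
frac_err_x1 = 1.0 / math.sqrt(bkg_rate * exposure)
frac_err_x10 = 1.0 / math.sqrt(counts_x10)
```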
\subsection{{\em Chandra}\ {\sc hetgs} data}
{\em Chandra}\ {\sc hetgs} exposures for MCG--6-30-15 were taken
during May 2004 yielding 522\,ks of good data
(OBSIDs 4759-62) as detailed by \citet{young05}
(Table\,\ref{tabledata} gives the summed duration of each of the
four OBSIDs). An earlier
observation made in 2000 has been analysed by \citet{lee01} but is not
included here owing to its much shorter duration. Data were reduced
using {\sc ciao} v3.4 and {\sc caldb} v3.4.0 and following standard
procedures for extraction of HETG spectra, except that we
used a narrower extraction strip than the {\sc tgextract} default.
The reason for the default processing cut-off is that the overlap of
the MEG and HEG strips depends on the extraction strip widths: if
the widths are too large, the MEG and HEG strips intersect over a larger
region, cutting off the HEG data prematurely. Specifically,
we used \verb+width_factor_hetg=20+ in the tool \verb+tg_create_mask+,
instead of the default value of 35. As the {\sc hetgs} spectra have a
very low signal-to-noise ratio, it was necessary to coadd the positive
and negative first-order spectra, and to coadd all four OBSIDs, to create
high-quality summed first-order {\sc heg} and {\sc meg} spectra for
fitting. After such co-addition, we binned the spectra to 4096
channels before grouping. The summed {\em Chandra}\ {\sc hetgs}
exposure yielded a 2--7\,keV count rate $\sim 0.1559 \pm
0.0006$\,count\,s$^{-1}$ and $\sim 0.1926 \pm 0.0007$\,count\,s$^{-1}$
in the summed {\sc heg} and {\sc meg} first order spectra,
respectively. The background level in this case is so low as to be
negligible.
\section{Results - the Principal Components}
\subsection{Principal components analysis}
We first analyse the {\em XMM-Newton}\ and {\em Suzaku}\ data using PCA, following the
methods described above and in M07. There are a number of statistical
measures we can employ to test how well the PCA model fits the data,
and how many components are required. Tables\,\ref{table1} and
\ref{table2} show for the two datasets how much of the observed
variance is accounted for by each component, and how well the data are
described by a source model comprising the offset component plus the
first $n$ eigenvectors, measured by the $\chi^2$ statistic. The
latter is the mean $\chi^2$ averaged over all timeslices. It can be
seen that at energies above 2\,keV a single eigenvector is all that is
required in addition to the offset component to adequately describe
the spectral variations. Considering the whole energy range, more
eigenvectors are needed, indicating the effects of variable absorption
as found for Mrk\,766 (M07, T07). In fact, there are some time slices
that have anomalously high $\chi^2$ values, as indicated in Fig.\,\ref{figchisq}
which shows for each timeslice in the {\em Suzaku}\ dataset
the value of $\chi^2$ obtained for the full
energy range when allowing an increasing number of components. Most
time slices are well fit by a single varying component, but there are a
few time slices, especially in the first observation, where even adopting
3 principal components still leads to a high $\chi^2$ value. Some of these
``badly fitting'' events are correlated in time with each other, and we suspect
that they are caused by absorption variations, as found for Mrk\,766 (T07).
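The decomposition used here can be illustrated with a short numerical sketch (synthetic data only; all numbers below are invented for illustration, and the real analysis follows the method of M07): the time-slice spectra are mean-subtracted, the channel covariance matrix is eigen-decomposed, and the data are reconstructed from the offset (mean) spectrum plus the first $n$ eigenvectors, with the mean $\chi^2$ averaged over timeslices measuring the fit quality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 50 time-slice spectra in 20
# energy channels, built as a constant (offset) spectrum plus a single
# variable component of random amplitude, plus Gaussian noise.
n_slices, n_chan = 50, 20
offset = 10.0 + np.linspace(0.0, 5.0, n_chan)         # constant spectrum
shape = np.exp(-np.linspace(0.0, 2.0, n_chan))        # variable component
amps = rng.normal(1.0, 0.3, n_slices)                 # amplitude per slice
noise_sigma = 0.05
spectra = offset + np.outer(amps, shape)
spectra += rng.normal(0.0, noise_sigma, spectra.shape)

# PCA: subtract the mean spectrum, eigen-decompose the covariance matrix
mean_spec = spectra.mean(axis=0)
resid = spectra - mean_spec
cov = resid.T @ resid / n_slices
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                     # descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fractional variance accounted for by each component (cf. Tables 1 and 2)
frac_var = eigvals / eigvals.sum()

def mean_chi2(n_comp):
    """Mean chi^2, averaged over all timeslices, of a model comprising
    the offset (mean) spectrum plus the first n_comp eigenvectors."""
    proj = resid @ eigvecs[:, :n_comp]
    model = mean_spec + proj @ eigvecs[:, :n_comp].T
    return np.mean(np.sum((spectra - model) ** 2 / noise_sigma**2, axis=1))
```

For this toy model a single eigenvector accounts for most of the variance, and the mean $\chi^2$ drops sharply once it is included, mirroring the behaviour seen in the tables.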
\begin{table}
\caption{
Statistics of the Principal Components Analysis of the {\em Suzaku}\ data.
Component 0 indicates
the fit of the mean spectrum alone, subsequent components are eigenvectors
ordered by their eigenvalues (column 2). Column 3 shows the fractional
variance accounted for by each component, columns 4 and 5 give the
goodness-of-fit ($\chi^2$ and number of degrees of freedom) for the entire
range and for $2-50$\,keV alone.}
\label{table1}
\begin{tabular}{crrrrrr}
\hline\hline
comp. & eigenvalue & fractional & $\chi^2$ & /dof & $\chi^2$ & /dof \\
& & variance & \multicolumn{2}{c}{0.4--50 keV} & \multicolumn{2}{c}{2--50 keV}\\
\hline
0 & - & - & 4736 & 171 & 1661 & 114 \\
1 & 0.00167 & 0.645 & 235 & 170 & 113 & 113 \\
2 & 0.00052 & 0.201 & 216 & 169 & 108 & 112 \\
3 & 0.00012 & 0.046 & 205 & 168 & 104 & 111 \\
4 & 0.00005 & 0.019 & 191 & 167 & 99 & 110 \\
5 & 0.00002 & 0.009 & 165 & 166 & 93 & 109 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{
Statistics of the Principal Components Analysis of the {\em XMM-Newton}\ data,
with columns as for Table\,\ref{table1} with appropriate energy ranges.
}
\label{table2}
\begin{tabular}{crrrrrr}
\hline\hline
comp. & eigenvalue & fractional & $\chi^2$ & /dof & $\chi^2$ & /dof \\
& & variance & \multicolumn{2}{c}{0.4--9.8 keV} & \multicolumn{2}{c}{2--9.8 keV}\\
\hline
0 & - & - & 22322 & 167 & 3922 & 125 \\
1 & 0.000977 & 0.928 & 953 & 166 & 123 & 124 \\
2 & 0.000016 & 0.015 & 325 & 165 & 105 & 123 \\
3 & 0.000012 & 0.012 & 208 & 164 & 86 & 122 \\
4 & 0.000009 & 0.008 & 161 & 163 & 75 & 121 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig1.ps}
}}
\caption{$\chi^2$ values for the full energy range for
each time slice in the {\em Suzaku}\ data,
assuming a constant component plus: a single eigenvector (red triangles);
two eigenvectors (green squares); or three eigenvectors (blue circles).
The number of degrees of freedom is 170--168, respectively. Dashed vertical
lines indicate the start of a new observation, separated in time from the
previous observation.
}
\label{figchisq}
\end{figure}
\begin{figure*}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig2a.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig2b.ps}
}}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig2c.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig2d.ps}
}}
}\end{minipage}
\caption{
Principal component spectra for (top) {\em Suzaku}\ data and
(bottom) {\em XMM-Newton}\ data. Left-hand panels show limits on the offset
component, right-hand panels show
eigenvector one, top curve (red), and eigenvector two, lower
curve, falling below the plot boundary (green).
Spectral points in the range $0.4-11$\,keV are from the pn or {\sc xis}
instruments, those in the range $15-45$\,keV are from the
{\sc pin}. See text for further details.
}
\label{figpcaspectra}
\end{figure*}
The resulting component spectra are shown in Fig.\,\ref{figpcaspectra}.
The offset component is not uniquely defined, but exists between the limits
shown: only a systematic shift between those limits is allowed (more specifically,
the offset component can be shifted by linear combinations of whichever
eigenvectors are allowed to describe the variations). As well as the {\sc pin} detector
background, a contribution from the
expected cosmic X-ray background has also been subtracted from the {\sc pin} data, using
the same model and amplitude as \citet{miniutti07}. This component is also plotted,
so it may be seen that this makes an important but not dominant contribution to
the total measured {\sc pin} flux.
Random uncertainties
on the offset component and eigenvector one are
estimated from a Monte-Carlo realisation as described in section\,\ref{pca}
and are shown on the upper curve only.
Only every fifth error bar is plotted, for
clarity. Eigenvector one is plotted with an amplitude equal to the mean value
in each dataset.
For clarity we do not plot further eigenvectors:
although the presence of eigenvector two seems
statistically significant, its amplitude is low and it
has little effect on the spectral shape of the
offset and first eigenvector components.
\subsection{Spectral models}
\subsubsection{Overview}
As in Mrk\,766, the offset component has a hard continuum shape with a
soft excess, a weak
line at the energy of low ionisation Fe\,K$\alpha$, an edge and absorption
lines, also detected by \citet{miniutti07} in the total spectrum.
The {\sc pin} data reveal a high flux in the $15-40$\,keV range as found by
\citet{miniutti07}, although we note that with improved deadtime correction
and background subtraction there is now no indication of any fall-off to
higher energies, consistent with the general lack of cut-offs in AGN spectra
at these energies \citep[e.g.][]{panessa08}.
The first eigenvector has the appearance of an absorbed powerlaw. The
most basic interpretation of these two components is that the offset component
is a reflection component and eigenvector one is a variable-amplitude
absorbed powerlaw, whose variations yield the observed spectral variability.
This model has formed the basis for previous analyses of
the X-ray spectrum of MCG--6-30-15 and other AGN
(e.g. \citealt{fabian02, fabianvaughan03, vaughanfabian04}) and is the one
we adopt here. An alternative would be to consider the possible spectral
variations arising from a thermal comptonising plasma
\citep[e.g.][]{haardt97}, but such models appear to be disfavoured for the
analysis presented here: first, the soft excess is not bright enough
compared with the \citet{haardt97} model; second, in that model it is expected
that the Compton excess should decrease as the $2-10$\,keV flux increases,
counter to what is observed; and finally the PCA presented here provides
strong evidence that the source variability is primarily
well described by the additive variations of a small number of components.
We shall start by fitting basic models to these components. All fits were
made with {\sc xspec}\,v11 \citep{arnaud}. Models of ionised absorption
were created using {\sc xstar} \citep{kallman04} which models the absorption
from a spherically-symmetric shell of gas around a central source.
It is particularly important to include the best possible atomic physics
calculations in the absorber models: \citet{kallman04} have shown how
improved calculations lead to bound-free edges that are significantly
less sharp than would otherwise be obtained, and those authors point out
that this may have a significant effect on the interpretation of the edge
around Fe\,K$\alpha$. In the models made here,
fittable parameters were the absorbing column,
the ionisation parameter $\xi$ at the inner face of the absorbing shell
and the redshift. For simplicity, solar abundances were assumed for all models.
The ionising spectrum was assumed to be a power-law of photon index
$\Gamma=2.2$ between 13.6\,eV and 13.6\,keV: other values of photon index can
produce similar absorption spectra but with a shift in the effective ionisation
parameter. The gas density was assumed to be $10^{10}$\,cm$^{-3}$ for all
zones except the highest ionisation zone, where a value $10^8$\,cm$^{-3}$
was assumed.
This parameter also affects the absorption spectral shape, but its effect
is largely degenerate with that of the ionisation parameter. If the assumed
absorbing shell is sufficiently thick
(as may occur for the combination of high $\xi$ and low density values),
there can be significant variation in $\xi$ through the zone arising
from inverse-square-law dilution, leading to a broader range of ionisation
states than would occur in a denser, thinner shell of gas. All the absorption
models are affected by such uncertainties.
We adopt units of erg\,cm\,s$^{-1}$ for $\xi$.
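Explicitly, $\xi$ follows the usual definition adopted by {\sc xstar},
\[
\xi = \frac{L}{n\,r^{2}},
\]
where $L$ is the ionising luminosity, $n$ the gas density and $r$ the distance of the gas from the ionising source; at fixed $L$ and $n$ the dependence $\xi \propto r^{-2}$ is the inverse-square-law dilution through a thick shell referred to above.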
\subsubsection{Joint fit to eigenvector one and offset components}
Qualitatively, the PCA ``eigenvector one'' in both the {\em Suzaku}\ and the {\em XMM-Newton}\ data
has the appearance of a powerlaw affected by ionised absorption. In reality
the absorbing zones are complex, and their physical parameters are
unlikely to be unambiguously measurable in data with CCD resolution. However,
we know {\em a priori} of the existence of ionised absorbing zones from
the previous analyses of {\em Chandra}\ and {\em XMM-Newton}\ high-resolution grating data,
so we can start by seeing whether a model that includes those zones can
explain the CCD spectra we observe.
We first create a model based on
the simplest interpretation of the PCA, namely that the eigenvector one
represents a variable-amplitude powerlaw, with ionised absorption, and
that the offset component arises from distant reflection, with light
travel-time erasing any reflected amplitude variations. The primary absorbing
layers that have already been identified in the grating data comprise:
\begin{list}{}{\itemsep=3mm \leftmargin=0mm}
\item {\em Zone 1} with $\log\xi \simeq 2$ \citep{lee01}
\item {\em Zone 2} with $\log\xi \simeq 0.5$ \citep{lee01}
\item {\em Zone 3}, a highly ionised, $\log\xi \ga 3.5$, outflow
at line-of-sight velocity $v \simeq 1800$\,km\,s$^{-1}$ \citep{young05}.
The $6.7$ and $6.97$\,keV lines only appear in the offset component in the
PCA, not on eigenvector one, so for now we assume that this zone is only associated
with the offset component (when fitting to the data later, we allow zone 3
to absorb all components).
This zone
also produces velocity-shifted lines of 2.0\,keV\,\ion{Si}{xiv}\,Ly$\alpha$
and 2.62\,keV\,\ion{S}{xvi}\,Ly$\alpha$ which were observed by \citet{young05}
in the {\em Chandra}\ {\sc meg} data.
In this absorption layer the ratio of the 6.7 and 6.97\,keV lines depends on
both the ionisation and on the microturbulent velocity dispersion of the gas,
since the lines are easily saturated, and the relatively high equivalent
width of the lines is most easily achieved by models with line broadening.
Hence the absorption model is broadened by a velocity
dispersion of 500\,km\,s$^{-1}$, consistent with the findings of \citet{young05}
(see section\,\ref{datafitting}), and we fix the ionisation parameter at
$\log\xi = 3.85$ as described later in section\,\ref{models}.
\item {\em Fe\,I edge} at 0.707\,keV.
A further feature apparent in the high-resolution data, but not in data of
CCD resolution, is the complex absorption edge structure at $\sim 0.7$\,keV,
discussed extensively by \citet{branduardi01}, \citet{sako03},
\citet{lee01} and \citet{turner03, turner04}. Here we adopt the model advocated by the
last three authors, in which the edge primarily arises from neutral Fe which
is possibly in the form of dust grains (see \citealt{ballantyne03dust}
for a discussion of the possible location and origin of the dust).
Rather than attempting a
complex model of this region, we simply add a single edge at the systemic redshift
whose rest-frame energy corresponds to the 0.707\,keV {\sc l3} Fe\,I edge
\citep{lee01}, with fixed edge optical depth $\tau = 0.4$, as indicated
by the high-resolution data (in fitting to the grating data later, we allow
$\tau$ to be a free parameter).
\end{list}
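The single-edge parameterisation used for the Fe\,I feature corresponds to the standard multiplicative edge model (as implemented, for example, in {\sc xspec}), with transmitted fraction
\[
M(E) = \exp\left[-\tau \left(E/E_{\rm c}\right)^{-3}\right], \qquad E \ge E_{\rm c},
\]
and $M(E)=1$ below the threshold, where here the threshold energy is $E_{\rm c}=0.707$\,keV and the optical depth at threshold is $\tau=0.4$.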
We initially model the offset component as low-ionisation reflection, adopting
the constant density slab model of \citet{ross99} and \citet{rossfabian} and calculated
by the {\sc xspec} extended {\sc reflionx} model made available by those authors. The
illumination was assumed to be the same powerlaw for both
eigenvector one and the offset component, and absorbing zones 1 \& 2 and the
dust edge were assumed to cover both components.
We assume the 6.4\,keV\,Fe\,K$\alpha$ emission line arises
in the low-ionisation reflection, which in the {\sc reflionx} models restricts the
ionisation to be $\xi \la 120$\,erg\,cm\,s$^{-1}$ for photon index $\Gamma \simeq 2.2$.
Better overall $\chi^2$ values may be obtained by relaxing this constraint, but
such fits are inconsistent with the data in the Fe\,K$\alpha$ regime, so
we apply this constraint throughout. It is possible, of course, that the
6.4\,keV line arises in photoionised gas, and not from reflection, and it is
also possible that there is a wide range of reflection ionisation
(e.g. \citealt{ballantyne03}), but we shall start with simpler models to evaluate
the extent to which they describe the observations.
Further constraints on the model are available from the soft band high resolution
{\em XMM-Newton}\ {\sc rgs} and {\em Chandra}\ {\sc hetgs} grating data. One particular feature
is that there is no evidence in any of the grating data for strong
0.65\,keV\,\ion{O}{viii} emission, whereas the {\sc reflionx} models
lead us to expect significant line emission in the soft band to match the
observed 6.4\,keV\,Fe\,K$\alpha$ emission, if this arises from reflection.
So it must be that either the Fe\,K$\alpha$ line does not have a reflection
origin,
or its ionisation is sufficiently low that no \ion{O}{viii} emission is expected,
or the reflection is affected by absorption in the soft band. In
the models described here, we adopt the latter solution, although we should bear
in mind the former possibilities. This interpretation of the Fe\,line leads to
the need to allow a further layer of absorption associated with the distant
reflection, {\em zone 4}, which may in fact be an atmosphere associated with
the reflection region. This absorption zone hardens the reflection spectrum
and thus for a given $2-10$\,keV flux increases the flux in the {\em Suzaku}\ {\sc pin}
band, and
allowing the reflection to be ionised explains why the equivalent width of
the Fe\,emission is lower than expected from neutral reflection
\citep{george91, zycki94, ross99}.
We should expect some emission from the absorbing gas in zone 4, with an equivalent
width of a few tens of eV \citep{leahy93}; more detailed models could include
this emission component as well.
We also allow a column of cold Galactic gas, as an additional free parameter
but constrained to have a minimum hydrogen column density of
$4\times 10^{20}$\,cm$^{-2}$.
\begin{table}
\caption
{Fit parameters and statistics for the joint fit of Model A to
eigenvector one and the offset components,
for each dataset.
First rows show photon index $\Gamma$, {\sc reflionx} log ionisation parameter
$\xi$, where $\xi$ is in units
of erg\,cm\,s$^{-1}$,
and Galactic absorption
hydrogen column density $N_H$ in units of $10^{22}$\,cm$^{-2}$.
The ionisation parameter and column for each of the four zones discussed
in the text follow, in the same units.
Also shown is goodness-of-fit $\chi^2$ and
number of degrees of freedom,
assuming a systematic fractional error of 0.03.
Brackets indicate parameters that were fixed.
}
\label{table:pcafitsA}
\begin{center}
\begin{tabular}{lrrr}
\hline\hline
& & \multicolumn{2}{c}{Model A} \\
\multicolumn{2}{c}{parameter} & \hspace*{5mm} {\em Suzaku}\ & {\em XMM-Newton}\ \\
\hline
$\Gamma$ & & 2.20 & 2.13 \\
\multicolumn{2}{l}{$\log\xi_{\rm REFLIONX}$} & 2.08 & 2.08 \\
Galactic & $N_H$ & 0.065 & 0.040 \\
\hline
\multicolumn{4}{l}{{\em Chandra}\ {\sc hetgs} \& {\em XMM-Newton}\ {\sc rgs} zones}\\
{\em zone 1} & $\log\xi$ & 2.17 & 2.11 \\
& $N_H$ & 1.44 & 0.46 \\
{\em zone 2} & $\log\xi$ & $-0.05$ & $-0.40$\\
& $N_H$ & 0.01 & 0.02 \\
{\em zone 3} & $\log\xi$ & (3.85) & (3.85) \\
& $N_H$ & 3.00 & 10.1 \\
\hline
\multicolumn{4}{l}{additional offset component absorption zone}\\
{\em zone 4} & $\log\xi$ & 0.42 & 1.33 \\
& $N_H$ & 4.88 & 8.33 \\
\hline
\multicolumn{2}{l}{$\chi^2$/dof} & 460/336 & 643/313\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption
{Fit parameters and statistics for the joint fit of Model B to
eigenvector one and the offset components,
for each dataset, table entries as in Table\,\ref{table:pcafitsA}.
}
\label{table:pcafitsB}
\begin{center}
\begin{tabular}{lrrr}
\hline\hline
& & \multicolumn{2}{c}{Model B} \\
\multicolumn{2}{c}{parameter} & \hspace*{5mm} {\em Suzaku}\ & {\em XMM-Newton}\ \\
\hline
$\Gamma$ & & 2.23 & 2.21 \\
\multicolumn{2}{l}{$\log\xi_{\rm REFLIONX}$} & 2.04 & 1.94 \\
Galactic & $N_H$ & 0.051 & 0.045 \\
\hline
\multicolumn{4}{l}{{\em Chandra}\ {\sc hetgs} \& {\em XMM-Newton}\ {\sc rgs} zones}\\
{\em zone 1} & $\log\xi$ & 2.36 & 2.14 \\
& $N_H$ & 0.45 & 0.41 \\
{\em zone 2} & $\log\xi$ & 0.22 & $-0.42$ \\
& $N_H$ & 0.07 & 0.04 \\
{\em zone 3} & $\log\xi$ & (3.85) & (3.85) \\
& $N_H$ & 2.60 & 2.90 \\
\hline
\multicolumn{4}{l}{additional offset component absorption zones}\\
{\em zone 4} & $\log\xi$ & 1.95 & 2.04 \\
& $N_H$ & 34.0 & 28.1 \\
{\em zone 5} & $\log\xi$ & 1.35 & 1.89 \\
& $N_H$ & 3.59 & 9.81 \\
\hline
\multicolumn{2}{l}{$\chi^2$/dof} & 261/332 & 257/309 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig3a.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig3b.ps}
}}
}\end{minipage}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig3c.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig3d.ps}
}}
}\end{minipage}
\caption
{The fit of model B to the principal component spectra: (left) {\em Suzaku}\ 0.5--45\,keV,
(right) {\em XMM-Newton}\ 0.4--10\,keV; (top) eigenvector one, (bottom) offset component.
The model is shown in units of Ef(E), points with
error bars show the `unfolded' component spectrum values.
}
\label{fig:pcafit}
\end{figure*}
This basic model based on the grating observations provides a
qualitative fit to the principal components describing the variable spectrum of
MCG--6-30-15, and indicates that the absorbed, two-component source model is
a good basis for further investigation (Table\,\ref{table:pcafitsA}, model A).
In the results tabulated,
the {\sc reflionx} ionisation parameter reached its maximum allowed
value in the fit.
Zone 3 is assumed to absorb only
the offset component in fitting to the PCA components.
We do not at this stage quote statistical errors
on parameter values, deferring those until the model fits to the full
dataset.
Eigenvector one is well-fit in both datasets,
although the fit of purely absorbed low-ionisation reflection to the
{\em XMM-Newton}\ offset component is rather poor.
One problem that has already become clear in fitting
reflection models to the {\em Suzaku}\ data is that if the hard
component is purely reflection, its amplitude at $\sim 20$\,keV requires a
high reflected intensity, about three times that of the directly-viewed
power-law \citep{ballantyne03,miniutti07}.
One solution to this problem is to suppose that
the hard offset component is not purely reflection, but that it is actually
composed at least in part of an absorbed component. Such an absorbed
component can appear in the offset component if its covering fraction is
variable (see the discussion in M07 and T07): in this case the offset
component effectively represents the appearance of the source in its lowest,
most covered, state.
If we add an absorbed amount of the
incident powerlaw onto both components (as might be required if this
arises as variable partial covering), with a new absorbing {\em zone 5},
it reduces the number of degrees of freedom by four
and improves $\chi^2$ by 199 and 386 for the {\em Suzaku}\ and {\em XMM-Newton}\ datasets,
respectively (Table\,\ref{table:pcafitsB}, model B): a substantial improvement
over the purely absorbed reflection model.
The amplitude of reflected emission is
reduced accordingly, thereby decreasing the requirement for an unusually
high reflected intensity. If the reflector also has an uninterrupted view
of the full power-law component, this further decreases the ratio of reflected
to incident light. This fit is shown
in Fig.\,\ref{fig:pcafit} for both datasets. However,
eigenvector one becomes less easy to interpret in this
case, as it comprises contributions from both the direct continuum
and the variable-covering absorbed continuum. For a robust determination of
the model parameters and for testing its goodness-of-fit it is therefore
essential that we fit the model directly to the data, as in the next section.
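As a rough indication of the size of these improvements (nominal only, since the improvement statistic for added model components need not follow a $\chi^2$ distribution exactly), the tail probability of a $\chi^2$ distribution with 4 degrees of freedom can be evaluated in closed form:

```python
import math

# Survival function of chi^2 with 4 degrees of freedom:
# P(X > x) = exp(-x/2) * (1 + x/2), a standard closed form for even dof.
def chi2_sf_4dof(x):
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# Nominal tail probabilities for the quoted improvements of Model B over
# Model A (delta chi^2 = 199 and 386 for four extra free parameters)
p_suzaku = chi2_sf_4dof(199.0)
p_xmm = chi2_sf_4dof(386.0)
```

Both nominal probabilities are vanishingly small, consistent with describing the improvement as substantial.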
\section{Results - model fits to data}\label{datafitting}
\subsection{Fitting methodology and initial constraints}\label{models}
Fitting to the PCA components should only be considered as yielding an indication
of the model components that may be required, for two reasons. First, although
the offset component and eigenvector one alone provide a good description of the
variable X-ray spectrum at energies above 2\,keV (Tables\,\ref{table1}, \ref{table2}),
more complex source behaviour is implied at softer energies. Second, there is no
unique interpretation of the offset component: this component essentially describes
the appearance of the source around its lowest possible flux state, and the
offset component spectrum we deduce may either be pure reflection or pure absorption,
for a source with a variable absorption covering fraction,
or some combination of the two, as indicated by the PCA.
Hence we need to test the model against the actual
data, which we do in this section.
Model B is summarised by Fig.\,\ref{model} which shows the model
fitted to the mean {\em Suzaku}\ spectrum as described below and showing the three emission
components of the model
(``direct'' power-law, with ionised absorption; ``partially-covered''
power-law, with higher opacity absorption from zone 5;
and low-ionisation reflection with absorption from zone 4).
We also show the component of cosmic X-ray background emission included in the
fit to {\sc hxd pin} data.
\begin{figure}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig4.ps}
}}
}\end{minipage}
\caption
{
Illustration of the spectral model. The upper curve shows the model fitted to
the mean {\em Suzaku}\ spectrum, with {\sc xis} data below 11\,keV and {\sc pin}
data above 15\,keV. Points with error bars show the unfolded data (see
section \ref{suzakufit} for details).
The three emission components are
shown as (a) primary directly-viewed power-law, absorbed by zones 1 \& 2;
(b) partially-covered power-law, absorbed by zones 1, 2, 3 \& 5;
(c) reflection, absorbed by zones 1, 2, 3 \& 4.
In the fit to the PCA components, zone 3 is excluded from eigenvector one;
in the fit to the actual data, zone 3 is allowed to absorb all components.
Also shown is the
expected contribution of the cosmic X-ray background to the {\em Suzaku}\ {\sc pin} band
(d) included in the model.
}
\label{model}
\end{figure}
\begin{figure}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig5.ps}
}}
}\end{minipage}
\caption
{Simultaneous fits to the {\em XMM-Newton}\ data split into 5 flux states, 0.55--9.7\,keV.
The model is shown in units of Ef(E), points with
error bars show the `unfolded' data.
}
\label{xmmfluxstates}
\end{figure}
The full dataset that we investigate here comprises observations taken at a
number of epochs with a variety of instruments.
In fitting to the data we require the model to fit simultaneously any data
that were obtained simultaneously (e.g. {\sc rgs} and {\sc pn} data for
{\em XMM-Newton}\ or {\sc pin} and {\sc xis} data for {\em Suzaku}). We do allow variations
in model parameter values between datasets taken at different epochs, although
the model components are not changed.
Some model components are better constrained by some datasets than others. A
particular case is the high-ionisation outflowing
zone 3, which is most strongly constrained
by the high-resolution grating data, and in particular by the {\em Chandra}\ {\sc heg}
data around 6.7\,keV \citep{young05}. In order to ensure that the models fitted
to CCD-resolution data are consistent with the {\em Chandra}\ grating dataset, we first
estimate parameters for zone 3 by fitting a simple model to the {\em Chandra}\ data
that reproduces the
outflowing 6.7\,keV and 6.97\,keV line equivalent widths and redshift. Those
model parameters are then fixed in fits to the other datasets. Once the
other datasets have also been fitted, the full model is then also fitted to
the {\em Chandra}\ grating data as a final check on the model.
Because these
lines are easily saturated, the equivalent widths are strongly dependent on the
assumed turbulent velocity dispersion in the absorbing gas.
A low turbulent velocity would require
a high column, close to Compton-thick, in order to obtain the equivalent widths
observed. A higher turbulent velocity dispersion would lead to line widths
inconsistent with those observed (these statements were tested using
{\sc xstar} models created with velocity dispersion
$\sigma = 10, 300, 500, 1000$\,km\,s$^{-1}$).
We use a model with $\sigma = 500$\,km\,s$^{-1}$.
Thus the column and ionisation fit parameters we obtain would need to
be modified if the true $\sigma$ differs from 500\,km\,s$^{-1}$, but
a comparable goodness of fit can be obtained. We find higher velocity
dispersions need lower columns at fixed ionisation parameter to reproduce
the observed line equivalent widths, but that
higher values of ionisation parameter are needed to reproduce the
6.7, 6.97\,keV line ratios.
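The saturation argument can be made concrete with a small numerical sketch (illustrative only; the actual modelling in this paper uses {\sc xstar}). For a Gaussian optical-depth profile, the equivalent width EW $= \int (1-e^{-\tau(E)})\,{\rm d}E$ grows linearly with the velocity dispersion but only logarithmically with optical depth once the line saturates, which is why a small assumed $\sigma$ forces a very large column:

```python
import math

def equivalent_width_ev(tau0, sigma_v, e0=6.7, c=3.0e5, n=4001):
    """Equivalent width (eV) of an absorption line with a Gaussian
    optical-depth profile: EW = integral of (1 - exp(-tau(E))) dE.

    tau0    : line-centre optical depth
    sigma_v : turbulent velocity dispersion (km/s)
    e0      : line energy (keV); 6.7 keV for Fe XXV here
    """
    sigma_e = e0 * sigma_v / c                # Doppler width in keV
    lo = -6.0 * sigma_e
    de = 12.0 * sigma_e / (n - 1)
    ew = 0.0
    for i in range(n):                        # trapezoid-like Riemann sum
        x = lo + i * de
        tau = tau0 * math.exp(-0.5 * (x / sigma_e) ** 2)
        ew += (1.0 - math.exp(-tau)) * de
    return ew * 1000.0                        # keV -> eV
```

At fixed $\tau_0$ the equivalent width scales linearly with $\sigma_v$, while doubling $\tau_0$ of an already saturated line increases the equivalent width by only of order ten percent; matching a given observed equivalent width with a small $\sigma_v$ therefore demands a far larger column.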
The initial absorber parameter values found from the {\em Chandra}\ data were
$N_{\rm H}=2.1\times10^{22}$\,cm$^{-2}$, $\log\xi=3.85$
and outflow velocity 1800\,km\,s$^{-1}$ with respect to the systemic redshift.
The same value of $\xi$ and the outflow velocity were adopted when fitting to
the PCA components.
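For reference, the ionisation parameter $\xi$ quoted throughout follows the usual {\sc xstar} convention (stated here for completeness; the precise ionising band is our assumption):
\[
\xi = \frac{L_{\rm ion}}{n\,r^{2}}\;\;[{\rm erg\,cm\,s^{-1}}],
\]
where $L_{\rm ion}$ is the ionising luminosity (conventionally 1--1000\,Ryd), $n$ the gas density and $r$ the distance of the absorbing gas from the ionising source.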
In fitting to the PCA, we observed that the zone 3 absorption lines
appeared only on the offset component, which implies that either
(i) the absorber is only in front of the region responsible for the offset
component, or (ii) the equivalent widths of the zone 3 lines decrease with
increasing flux, perhaps because of increasing ionisation. The same effect
is seen in Mrk\,766 (T07). In fitting to the data we initially adopt the
same zone 3 column and ionisation for all flux states of the source, but later
we shall investigate fits in which the ionisation of zone 3 is allowed to
vary between different flux states.
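The ionisation interpretation in (ii) carries a simple expected scaling, sketched below (a hypothetical helper, not part of the fitting procedure: it assumes the absorber density and distance are fixed, so that $\xi$ tracks the ionising flux linearly):

```python
import math

def expected_log_xi(log_xi_ref, flux_ratio):
    """Expected log(xi) when the ionising flux changes by flux_ratio,
    assuming the absorber density and distance stay fixed and the gas
    reaches photoionisation equilibrium instantaneously, so that xi
    scales linearly with flux. A hypothetical helper, not a fitted
    relation from this paper."""
    return log_xi_ref + math.log10(flux_ratio)
```

Under this scaling, a doubling of the continuum flux would raise $\log\xi$ by $\simeq 0.3$, enough to noticeably weaken the \ion{Fe}{xxv}/\ion{Fe}{xxvi} equivalent widths in the high flux states.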
\subsection{Fits to {\em XMM-Newton}\ multiple flux states}\label{secfluxstates}
The model is now compared with the variable X-ray spectrum more directly
than in the PCA,
by dividing the data into a number of ``flux states'' each with a different
spectrum, and fitting jointly to those spectra. We start with the {\em XMM-Newton}\ data.
\begin{figure*}
\begin{minipage}{\textwidth}{
\begin{minipage}{0.475\textwidth}{
\resizebox{\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig6a.ps}
}}}\end{minipage}
\hspace*{0.04\textwidth}
\vspace*{0.05\textwidth}
\begin{minipage}{0.475\textwidth}{
\resizebox{\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig6b.ps}
}}}\end{minipage}
}\end{minipage}
\caption
{
Fit to (left) the 2000 and (right) the 2001
{\em XMM-Newton}\ {\sc rgs} spectra of MCG--6-30-15:
The model is shown in units of Ef(E), points with
error bars show the `unfolded' data.
}
\label{figrgs}
\end{figure*}
\begin{table}
\caption{Model parameter values for the {\em XMM-Newton}\ {\sc pn + rgs},
{\em Suzaku}\ {\sc xis + pin} and {\em Chandra}\ {\sc meg + heg}
fits,
where $\Gamma$ is power-law photon index, $\xi$ is ionisation parameter in units
of erg\,cm\,s$^{-1}$ and N$_{\rm H}$ is absorber hydrogen column density assuming
solar abundances, in units of $10^{22}$\,cm$^{-2}$.
Quoted uncertainties are 68\,percent confidence intervals.
Brackets indicate values that were not free parameters for that particular dataset.
The final rows give goodness-of-fit:
the data for each mission were jointly fit, but the contributions to $\chi^2$
are quoted separately for each of the subsets of {\em XMM-Newton}\ data:
$^a$ {\sc pn}, $^b$ 2001 {\sc rgs}, $^c$ 2000 {\sc rgs} (see text).
}
\label{parvals}
\begin{tabular}{lr@{ $\pm$ }lr@{ $\pm$ }lr@{ $\pm$ }l}
\hline\hline
parameter & \multicolumn{2}{c}{{\em XMM-Newton}\ } & \multicolumn{2}{c}{{\em Suzaku}\ } & \multicolumn{2}{c}{{\em Chandra}\ } \\
\hline
$\Gamma$ & 2.284 & 0.013 & 2.265 & 0.017 & \multicolumn{2}{c}{(2.284)} \\
$\log\xi_{\rm REFLIONX}$ & 2.04 & 0.01 & 1.97 & 0.03 & \multicolumn{2}{c}{(2.04)} \\
N$_{\rm H}$(Gal) & 0.040 & 0.004 & 0.052 & 0.003 & 0.057 & 0.002 \\
$\tau_{\rm 0.7\,keV\,edge}$ & 0.42 & 0.02 & 0.36 & 0.03 & 0.45 & 0.03 \\
N$_{\rm H}$(1) & 0.26 & 0.02 & 0.45 & 0.05 & 0.11 & 0.08 \\
$\log\xi$(1) & 2.64 & 0.02 & 2.78 & 0.07 & 2.33 & 0.05 \\
N$_{\rm H}$(2) & 0.016 & 0.001 & 0.03 & 0.001 & 0.022 & 0.001 \\
$\log\xi$(2) & 0.25 & 0.08 & $-0.11$ & 0.1 & $-0.04$ & 0.16 \\
N$_{\rm H}$(3) & \multicolumn{2}{c}{(2.10)} & \multicolumn{2}{c}{(2.10)} & \multicolumn{2}{c}{(2.10)} \\
$\log\xi$(3) & \multicolumn{2}{c}{(3.85)} & \multicolumn{2}{c}{(3.85)} & \multicolumn{2}{c}{(3.85)} \\
N$_{\rm H}$(4) & 34.8 & 0.05 & 54.9 & 0.05 & \multicolumn{2}{c}{(34.8)} \\
$\log\xi$(4) & 1.83 & 0.01 & 1.94 & 0.01 & \multicolumn{2}{c}{(1.83)} \\
N$_{\rm H}$(5) & 5.40 & 0.08 & 4.32 & 0.11 & \multicolumn{2}{c}{(5.40)} \\
$\log\xi$(5) & 1.75 & 0.02 & 1.39 & 0.05 & \multicolumn{2}{c}{(1.75)} \\
\hline
flux-states & \multicolumn{2}{c}{$799/783^{a}$} & \multicolumn{2}{c}{715/841} & \multicolumn{2}{c}{} \\
$\chi^2$/dof & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\
\hline
mean spectra & \multicolumn{2}{c}{$102/159^{a}$} & \multicolumn{2}{c}{115/171} & \multicolumn{2}{c}{3406/3341} \\
$\chi^2$/dof & \multicolumn{2}{c}{$1965/873^{b}$} & \multicolumn{2}{c}{}&\multicolumn{2}{c}{} \\
& \multicolumn{2}{c}{$1820/1074^{c}$} & \multicolumn{2}{c}{}&\multicolumn{2}{c}{} \\
\hline
\end{tabular}
\end{table}
The {\em XMM-Newton}\ {\sc pn} datasets were each divided
into five flux states, separated by equal logarithmic intervals of flux defined
in the 1-2\,keV band, spanning the range of flux covered by the data.
These states were fit simultaneously, with a single
value of parameters such as power-law index and absorber
column density and ionisation
parameters, with
absorber properties initially chosen to match the fits to the PCA.
The optical depth of the dust edge was also allowed to be a single free
parameter, the same for all flux states.
For each flux state,
the direct, absorbed power-law and reflected component amplitudes were allowed to vary
independently. Hence there were 15 free parameters, of which three were allowed
to vary between flux states. We fix abundances to solar values throughout.
\begin{figure}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig7.ps}
}}
\caption
{
Fit to the mean {\em XMM-Newton}\ spectrum of MCG--6-30-15.
The model is shown in units of Ef(E), points with
error bars show the `unfolded' data.
}
\label{figxmmmean}
\end{figure}
\begin{figure}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig8.ps}
}}
}\end{minipage}
\caption
{Simultaneous fits to the {\em Suzaku}\ data split into 5 flux states, 0.5-45\,keV.
The model is shown in units of Ef(E), points with
error bars show the `unfolded' data.
}
\label{suzakufluxstates}
\end{figure}
As discussed above, it is important to further constrain the soft-band emission
from the models, to be consistent with the high-resolution grating data. The
{\em XMM-Newton}\ observations comprised simultaneous {\sc rgs} and {\sc pn} observations,
and hence the models were fit simultaneously to the {\sc pn} flux state and
the {\sc rgs} data. Because of low signal-to-noise, the {\sc rgs} data were
not themselves divided into flux states, and the model was required to fit the
mean {\sc rgs} spectrum. Because of its limited energy range, the amplitudes of
the various components are not well constrained by the {\sc rgs} data,
so as well as the individual {\sc pn} flux state data, the {\sc pn} mean
spectrum was also included in the fit and the {\sc rgs} parameters were tied to
the parameters for that component. A nominal cross-calibration constant factor was allowed
for each of {\sc rgs 1} and {\sc rgs 2}
between the {\sc rgs} and {\sc pn} fits, although the value of this parameter
was unity to within one percent.
The 2001 {\sc rgs} data are of significantly higher
signal-to-noise than the 2000 data, so only the 2001 data was used to constrain
the model parameters using the above procedure.
Having obtained the absorber and emitter parameters,
we present also the results of jointly fitting the same model to the 2000 data,
with the same absorber parameters and only differing normalisations of the emission
components.
This joint analysis yields an overall goodness-of-fit $\chi^2 = 2773$ for
1689 degrees of freedom. The excess in $\chi^2$ arises entirely in the
{\sc rgs} portion of the data, although the model still describes
the high-resolution spectrum to an accuracy of about 5\,percent and
correctly reproduces the chief lines and edges seen in the {\sc rgs} data.
The model fit to the {\sc rgs} portion
is discussed more below and in section\,\ref{abundances}.
Fig.\,\ref{xmmfluxstates} shows the fit to the {\sc pn} data portion.
The contribution to $\chi^2$ from the {\sc pn} data alone is
$\chi^2 = 799$ for 783 degrees of freedom (Table\,\ref{parvals})
(counting all free parameters when quoting the number of degrees of freedom).
Parameter uncertainties quoted in the table are 68\,percent confidence intervals.
We note that this is not the best fit that
could be achieved if the {\sc rgs} data were ignored, but the fit to the
{\sc pn} data that is constrained by both
{\sc pn} and {\sc rgs} data is nonetheless good.
The largest discrepancy between the {\sc pn} data
and the model occurs in the soft band in the
lowest flux state, and likely indicates the effect of time variable
absorption, which we have not taken into account in this analysis.
\begin{figure}
\hspace*{-0.04\textwidth}
\begin{minipage}{\textwidth}{
\vspace*{-0.03\textwidth}
\resizebox{0.53\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig9.ps}
}}
}\end{minipage}
\caption
{
Fit to the mean {\em Suzaku}\ spectrum of MCG--6-30-15:
with inset showing the zoom-in to the
4-10\,keV region.
The model is shown in units of Ef(E), points with
error bars show the `unfolded' data.
}
\label{figmean}
\end{figure}
Fig.\,\ref{figrgs} shows the 2001 {\sc rgs} portion of the model fit, and
also shows the fit of the same model to the 2000 {\sc rgs} data as mentioned above.
The {\sc rgs 1} and {\sc rgs 2} data are overplotted on the figure rather than being
averaged together in regions of overlap. The models fit less
well than for the {\sc pn}, with the {\sc rgs} contribution to $\chi^2$ being
$\chi^2 = 1965$ for 873 degrees of freedom for the 2001 data in the energy range
shown, $0.56-1.4$\,keV (to be conservative, again, all jointly-fit parameters are counted
as being free for the calculation of the number of degrees of freedom).
The 2000 data are of lower signal-to-noise and as a consequence have a better
goodness-of-fit, $\chi^2 = 1820$ for 1074 degrees of freedom in the same
energy range.
Note that, as above, the quoted fit is not the best-fit of the model to the {\sc rgs}
data alone, but rather is the contribution to $\chi^2$ of the {\sc rgs} data
to the joint {\sc pn} and {\sc rgs} best fit, and therefore is degraded by
any cross-calibration uncertainty between the two instruments.
Given also the complexity of the {\sc rgs} spectra
and the relative simplicity of the models, including the assumption of solar abundances
(section\,\ref{abundances}),
these relatively poor {\sc rgs} fits are not too surprising.
The largest discrepancies between model and data in the energy range considered are
at energies just above the 0.7\,keV edge,
and the simple, single-edge model that we have adopted
is likely too simplistic. We do not have sufficient information to
attempt more sophisticated models of this region.
However, the reproduction of the overall continuum shape, the lack of strong
emission lines and the good correspondence with the
majority of the absorption features indicates that
the small number of zones that have been included do account for the majority
of the features.
Many of the absorption lines identified by \citet{turner04} are reproduced
by the model; adopting those authors' line identifications, we find a good
match for
0.65\,keV\,\ion{O}{viii}\,Ly$\alpha$,
0.77\,keV\,\ion{O}{viii}\,Ly$\beta$,
0.87\,keV\,\ion{Fe}{xviii},
0.92\,keV\,\ion{Fe}{xix},
0.96\,keV\,\ion{Fe}{xx},
1.02\,keV\,\ion{Ne}{x}\,Ly$\alpha$ and
1.35\,keV\,\ion{Mg}{xi}.
These lines chiefly originate in zone 1, and some from zone 3,
with zone 2 providing most of the
lines and edges of \ion{O}{v}-\ion{O}{vii}.
We also fit the same additive model to the mean {\em XMM-Newton}\ spectrum
where model parameters were fixed at the values found above,
only allowing the normalisations of the three emission components to float.
This results in a somewhat better-than-expected fit (assuming a fractional
systematic error of 0.03), $\chi^2=102$ for 159 degrees of freedom
(Table\,\ref{parvals}, Fig.\,\ref{figxmmmean}),
although of course the same data were used to generate the model with more
free parameters when analysing the multiple flux state data above.
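The fit statistic used for the mean spectra adds the assumed fractional systematic error in quadrature to the statistical errors. A minimal sketch of that statistic (our own illustration; spectral-fitting packages such as {\sc xspec} provide an equivalent facility):

```python
def chi2_with_systematic(data, model, stat_err, frac_sys=0.03):
    """Chi-squared in which a fractional systematic error (frac_sys
    times the model) is added in quadrature to the statistical errors
    of each spectral bin."""
    return sum((d - m) ** 2 / (s ** 2 + (frac_sys * m) ** 2)
               for d, m, s in zip(data, model, stat_err))
```

With a 3\,percent systematic floor, bins whose statistical errors are already below 3\,percent of the model contribute less to $\chi^2$, which is why the mean-spectrum fits can come out better than expected.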
\subsection{Fits to {\em Suzaku}\ multiple flux states and mean spectrum}\label{suzakufit}
We carry out a similar procedure for fitting to the {\em Suzaku}\ data, except now
there is no simultaneous high resolution data, but there is simultaneous
{\sc xis} and {\sc pin} data.
The flux-state {\sc xis} data were noisy above 10.5\,keV, so the {\sc xis} data
were truncated at that energy.
When fitting to the {\em Suzaku}\ {\sc pin}
data, the data were not corrected for the contribution of the cosmic X-ray
background (CXB), but instead the CXB model adopted by \citet{miniutti07} was
included in the fit, with a normalisation allowed to float by $\pm 5$\,percent
to allow for the uncertainty in the model and in the {\em Suzaku}\ absolute calibration.
The results are displayed in
Fig.\,\ref{suzakufluxstates}. Fitting to all five flux states with this model
yields $\chi^2 = 715$ for 841 degrees of freedom; best-fitting parameter values
are given in Table\,\ref{parvals}.
We also fit the same additive model to the mean {\em Suzaku}\ spectrum
where again the
model parameters were fixed at the values found in the flux states analysis,
only allowing the normalisations of the three emission components to float.
This results in goodness-of-fit
$\chi^2=115$ for 171 degrees of freedom
(Table\,\ref{parvals}, Fig.\,\ref{figmean}).
An even lower value of $\chi^2$ may be obtained by allowing
other parameters to float, but as the model would then no longer fit the variable
flux state data we do not allow this freedom.
\begin{figure}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig10.ps}
}}
\caption{Model fit to {\em Chandra}\ {\sc heg} spectrum in the region $5-8$\,keV
(solid line) with unfolded data shown as points with errors.
}
\label{fig:heg}
\end{figure}
\begin{figure*}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig11a.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig11b.ps}
}}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig11c.ps}
}}
\hspace*{0.04\textwidth}
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig11d.ps}
}}
}\end{minipage}
\caption{
Model fit to the {\em Chandra}\ {\sc meg} data, showing selected energy ranges,
with the model shown as the solid line, unfolded data as points with error bars.
}
\label{fig:meg}
\end{figure*}
\subsection{Fits to {\em Chandra}\ {\sc hetgs} data}
The available {\em Chandra}\ {\sc hetgs} data have previously been analysed by
\citet{lee01} and \citet{young05}
who have demonstrated its diagnostic power in detecting the presence
of absorbing layers.
The data were taken at a different epoch from the {\em XMM-Newton}\ and
{\em Suzaku}\ data, with a different instrument, and so we expect to need to vary some
of the parameters in order to obtain a good fit.
The mean {\sc heg} spectrum over the range $3-8$\,keV and
the mean {\sc meg} spectrum over the range $0.55-5$\,keV
were jointly fit with the same model as the other datasets, with free
parameters as before, except that the power-law slope, {\sc reflionx}
ionisation parameter and some absorption zones
were fixed at the {\em XMM-Newton}\ values (see Table\,\ref{parvals}; the mean {\em Chandra}\
spectrum on its own does not sufficiently constrain these parameters).
The resulting parameter values
are shown in Table\,\ref{parvals}; the joint fit yielded a goodness-of-fit of
$\chi^2=3406$ for 3341 degrees of freedom.
Viewing the fit in more detail,
we first consider the {\sc heg} data in the region around 7\,keV. The pair of
absorption lines at this energy have already been discussed in this paper and
by \citet{young05, brenneman06} and \citet{miniutti07}. Fig.\,\ref{fig:heg}
shows the model fitted to the data: the absorption lines are reproduced
by this model, and no other strong lines are expected. The model is therefore
consistent with the constraint discussed by
\citet{young05} that there should not be significant amounts of absorption from
intermediate ionisation states of Fe, which would produce absorption at
$\sim 6.5$\,keV.
produced by zone 5, which has ionisation sufficiently low that the Fe\,L shell
is filled, at $\log\xi \la 2$ \citep{kallman04}, and it is the partial covering
of this zone that allows the continuum shape to be correctly reproduced as well
as explaining the variability properties (discussed further in
section\,\ref{partialcovering}).
Further confirmation of the high-ionisation layer comes from the {\em Chandra}\
{\sc meg} data, in which numerous absorption features arising in this layer are
visible, including the features at 2.0 and 2.62\,keV discussed by \citet{young05}
(Fig.\,\ref{fig:meg}).
As in the {\sc rgs} data, the {\sc meg} data reveal
numerous absorption lines in the soft band, many of which were previously
identified by \citet{lee01} and \citet{turner04}, including
0.615\,keV\,\ion{O}{v},
0.635\,keV\,\ion{O}{vi},
0.66\,keV and 0.69\,keV\,\ion{O}{vii},
0.65\,keV\,\ion{O}{viii}\,Ly$\alpha$,
0.77\,keV\,\ion{O}{viii}\,Ly$\beta$,
0.92\,keV\,\ion{Fe}{xix},
1.02\,keV\,\ion{Ne}{x}\,Ly$\alpha$,
1.35\,keV\,\ion{Mg}{xi} and
2.0\,keV\,\ion{Si}{xiv}\,Ly$\alpha$
(we again adopt the line identifications of \citet{lee01} and \citet{turner04}).
Again, the higher ionisation lines chiefly originate in zones 1 and 3, the
lower ionisation lines in zone 2.
\subsection{The effect of non-solar abundances}\label{abundances}
In the above models, solar abundances were assumed throughout. However, there is
evidence in MCG--6-30-15 for departures from those values: \citet{turner04}
point out that the observed equivalent widths of absorption lines in their
model are too low by a factor $\sim 1.25$, an effect that is also clearly seen
in the fits shown here (e.g. Fig.\,\ref{figrgs}). The effect could be explained
either by altering the assumed velocity dispersion or by altering the assumed
abundances. Clearly, allowing either of these parameters to float would result in an overall
goodness of fit better than presented here. Because of the degeneracies in such
fits with other parameters, not only velocity dispersion but also column density
and ionisation parameter, the current data are insufficient to
unambiguously determine absolute values of element abundances. We can, however,
estimate the size of the effect of changing the assumed abundance to an alternate
value. \citet{turner04} suggest O may need to be enhanced by a substantial
factor, perhaps as high as $3-4$, to
explain the observed equivalent widths. We have therefore replaced the absorber
model used for zone\,1 with a new
{\sc xstar} model with $\alpha$-element abundance enhanced by a factor two, leaving
other abundances fixed at solar. Consistent with the inference of \citet{turner04},
we find an improved overall fit to the {\em XMM-Newton}\ combined {\sc pn}/2001\,{\sc rgs} data.
The overall $\chi^2$ for the combined fit shows a modest
improvement by $\Delta\chi^2 = 39$, from a value 2773 to 2734
(with 1689 degrees of freedom in both cases).
As expected, the equivalent widths of the O, Ne and Mg
lines match better to the data, but the Fe lines remain with model equivalent widths
that are too small. It seems likely therefore that there is an overall enhancement of
all elements, not only the $\alpha$ elements.
The chief aim of this paper is to investigate the extent to which an absorption model
can describe the full X-ray spectrum of MCG--6-30-15. In order not to confuse
the general properties of the absorption model with specific details about the
likely element abundances, we do not here investigate further the more detailed effect
of non-solar abundances.
\subsection{Variation in absorber ionisation}\label{ionvar}
We can also test the absorber models for possible variation in ionisation
parameter with flux. There is evidence for this in the high ionisation
outflow, zone 3, in that the equivalent widths of the absorption lines decrease
with increasing source brightness (equivalently, they only appear on the PCA
offset component, and not on eigenvector one). This might arise if the zone 3
absorber is localised to the region responsible for the offset component,
but the alternative explanation investigated here is that the variation
in equivalent width is an effect of varying ionisation.
Fig.\,\ref{xivar} shows the variation in $\xi$ for
each of the five {\em XMM-Newton}\ and {\em Suzaku}\ flux states when this is allowed to vary
between each state. The goodness-of-fit improves
by $\Delta\chi^2 = 15$ to $\chi^2=822$ for 838
degrees of freedom: in itself this is a weak return for introducing a further
four free parameters, but it does reveal the expected relationship between
ionisation and flux. Within each dataset there is
a clear trend for $\xi$ to be proportional to the continuum amplitude, although
the {\em Suzaku}\ points appear offset to lower ionisation, implying a change in
either absorber density or ionising continuum spectrum between 2001 and 2006.
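As a rough gauge of how weak this return is (our own back-of-envelope check, not a rigorous model-comparison test), the chance probability of a $\Delta\chi^2$ this large when four free parameters are added follows from the $\chi^2$ survival function, which has a closed form for four degrees of freedom:

```python
import math

def pvalue_delta_chi2_4dof(delta_chi2):
    """Chance probability of a chi-squared improvement at least this
    large when 4 free parameters are added, under the usual nested-model
    assumption: the survival function of chi^2 with k = 4 dof,
    P(>x) = exp(-x/2) * (1 + x/2)."""
    y = delta_chi2 / 2.0
    return math.exp(-y) * (1.0 + y)
```

For $\Delta\chi^2 = 15$ this gives a chance probability of about 0.5\,percent, so the improvement is formally significant even if it is a modest return per added parameter.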
\section{Discussion}
\subsection{Model complexity and uniqueness}
The model presented above provides a good description of the variable X-ray
spectrum of MCG--6-30-15 throughout all its flux states, and correctly
reproduces the hard low-state and softer high-state spectra. It also
reproduces the high energy $15-40$\,keV flux and its relative lack of variability,
and removes the requirement for an unexpectedly high reflection albedo.
The model is complex, requiring five intrinsic absorption zones, plus a dust
edge, but we know already from the {\em Chandra}\ and {\em XMM-Newton}\ grating data that multiple
zones covering a wide range of ionisation, with at least two kinematically
distinct regions, are required by the data. We should not be surprised by the
complexity of the model: three of the zones have been detected by previous
authors. Of course, the next step in proving or disproving the
specific model presented here would be to search for high resolution signatures
of the additional heavy absorbing zones that are required.
For the ionisation predicted from the model in this paper, there may
be a weak signature of the Fe\,K$\beta$ UTA arising from
ionisation states less ionised than \ion{Fe}{xvii}.
This would be too weak to detect in the
present data but may be a detectable signature in data from
future missions with calorimeter detectors.
We might compare the model
complexity with other studies: \citet{brenneman06}, for example, fit the spectrum
expected from close to a Kerr black hole to the {\em XMM-Newton}\ data for MCG$-6$-30-15,
including some of the ionised absorbing zones and the dust edge, over the range
$0.6-10$\,keV, but only fitting the mean spectrum. That model does not include
all the ionised zones seen with {\em Chandra}, but still has a total of 18 parameters,
of which eight are associated with the relativistically
blurred line component, and their model 4 results in a goodness-of-fit $\chi^2=1742$ for
1375 degrees of freedom.
It seems that whatever physical premise
is taken for the origin of the red wing, models of some complexity are needed.
Conversely,
although the absorption model presented here has been successful, we cannot claim
that it is unique. In this paper we have concentrated on a model in
which absorption plays a dominant role, and have developed the model constituents
in a systematic manner based on fits to the variable-spectrum low resolution data
and the high-resolution grating data. This exercise at least shows that it
is possible to explain the X-ray spectrum of MCG--6-30-15 by such a model.
Many previous studies have concentrated solely on modelling the hard, low-variability
``offset'' component as being relativistically-blurred reflected emission from the
inner accretion disc
(\citealt{wilms}, \citealt{vaughanfabian04}, \citealt{reynolds04}, \citealt{miniutti07},
\citealt{brenneman06}), but
those studies have only tested that model against the mean spectra, albeit of
data obtained when the source was in differing flux states, and often over a
restricted range in energy (e.g., $3-45$\,keV in the study of \citealt{miniutti07}).
The model presented in this paper does not have any such blurred component,
but this does not prove that such a component does not exist. After all,
an arbitrarily low amplitude of such a component could always be added into the
model described in this paper. Nonetheless, the model-fitting presented here
does show that the offset component need not be considered as being {\em dominated}
by relativistically-blurred reflection. One important consequence is that
it is not possible to use the spectral shape of the red wing
in the current generation of data to deduce parameters such as black hole spin.
Given the extreme
spin parameters that are deduced for AGN such as MCG$-6$-30-15
(in this case $a=0.989^{+0.009}_{-0.002}$, \citealt{brenneman06})
this has important consequences for models of black hole evolution, as
``chaotic'' mergers produce lower expected spin parameter values, and such
extreme high values are expected only to be achieved by long-lived steady accretion
\citep{berti08,king08}.
\begin{figure}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig12.ps}
}}
}\end{minipage}
\caption
{
The fitted variation in $\xi$ as a function of normalisation of the primary
power-law, for the {\em XMM-Newton}\ (solid points) and {\em Suzaku}\ (open points) flux-states.
}
\label{xivar}
\end{figure}
\subsection{The role of absorber partial covering}\label{partialcovering}
The absorbed power-law component that forms part of the model presented here
is key to both understanding the soft-band continuum shape and the high flux
observed in the {\sc pin} data at high energy.
Many previous authors have suggested that absorption partially covers an
X-ray source and that variations in covering fraction of an absorbing zone
may be linked to flux and spectral variations in both AGN and
Galactic black hole systems
\citep[e.g.][{\em inter alia}]{holt80, reichert85, boller02, boller03,
immler03, tanaka04, gallo04a, gallo04b, pounds04, turnerea05, grupe07}.
\citet{vaughanfabian04} previously tested a partial
covering model for the 2001 {\em XMM-Newton}\ spectrum of MCG$-6$-30-15 and concluded that,
although such a model provided an acceptable fit, a relativistically-blurred
model was superior. The model tested in that case was solely a neutral
absorber however, a key difference with the models tested here.
\citet{mckernan98} also suggested specifically for
MCG--6-30-15 that a sharp dip in flux was caused by occultation by an
absorber, an idea revived most recently for NGC\,3516 by \citet{turner08}.
Mrk\,766 shows extremely similar X-ray spectral variability to MCG--6-30-15
and M07 and T07 have suggested that variable partial covering may play an
important or perhaps even dominant role in the X-ray spectral variability
of this source.
To investigate this further we show in Fig.\,\ref{absorber-variations} the
amplitude in the absorbed component compared with the amplitude in the direct
component, as measured from the normalisation of the power-law in each
case. It may be seen that, in the model presented here, the absorbed component
does vary coherently with the direct continuum at high flux states, but
not with a dependence that passes through the origin.
We can see from this diagram why the PCA results in an offset component that
contains a significant amount of absorbed continuum: a linear extrapolation
through the points in Fig.\,\ref{absorber-variations} would hit the
y-axis at a positive value of the absorbed component flux.
At the lowest observed
fluxes, the correlation seems to break down, with a larger scatter in the
two components.
The simplest
interpretation of the observed trends is that in the highest flux states the
source has a covering fraction around 50\,percent, but that
at lower flux states the covering fraction is more variable and may increase
towards 100\,percent. If this interpretation is correct it implies that the
covering fraction is a function of the flux state of the source and perhaps
indicates a dependence of either source size or absorber extent on flux state.
The trends seen here and their interpretation are, however, model
dependent, and at this stage we can do no more than suggest this interpretation.
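For concreteness, the covering-fraction interpretation amounts to a one-parameter mixing of absorbed and unabsorbed sight lines (a generic sketch with the energy dependence of the transmission suppressed; the parameter names are illustrative, not fitted quantities from this paper):

```python
def partial_covering_flux(f_intrinsic, cov_frac, transmission):
    """Observed flux from a partially covered source: a fraction
    cov_frac of sight lines passes through the absorber (with the given
    energy-averaged transmission), the remainder escapes unabsorbed."""
    return f_intrinsic * ((1.0 - cov_frac) + cov_frac * transmission)
```

In this picture a covering fraction near 50\,percent halves the observed soft-band flux of an opaque zone, while a covering fraction rising towards 100\,percent at low flux would suppress the direct component almost entirely, as suggested by the scatter at the lowest observed fluxes.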
\begin{figure}
\begin{minipage}{\textwidth}{
\resizebox{0.475\textwidth}{!}{
\rotatebox{-90}{
\includegraphics{9590fig13.ps}
}}
}\end{minipage}
\caption{The variation in amplitude of the absorbed component (y-axis) as a
function of the amplitude of the direct power-law component (x-axis), for the
fits to the {\em Suzaku}\ (open symbols) and {\em XMM-Newton}\ (solid symbols) data.}
\label{absorber-variations}
\end{figure}
\subsection{Hard-band reflection}
A problem that any model must address is the relatively high flux observed
above 20\,keV \citep{ballantyne03,miniutti07}. In the model presented here,
some fraction of the hard-band flux is still provided by distant reflection.
If that reflection has a view of the entire unabsorbed source output, then
the reflected intensity relative to that expected from a disc subtending
2$\pi$\,sr (the ``R'' parameter) has a value around 1.7 (estimated as in
\citealt{miniutti07} from a comparison of component fluxes in the hard band):
still greater than unity but substantially smaller than $R \ga 3$ as
required by previous work. In fact, almost any amount of hard-band flux
could be obtained if there are further even more opaque partial covering layers or
heavily-absorbed reflection zones, and recently the source PDS\,456 has been
found to exhibit just such a heavily absorbed zone (Reeves et al. in preparation).
If the narrow Fe\,K$\alpha$ emission line originates in optically thin gas rather
than optically-thick reflection, it may even be the case that the heavily
absorbed reflection component could be largely replaced by further high-opacity
partial covering layers. We do not explore such models further
in this paper.
\subsection{The location of the Fe emission-line region}
A number of previous authors have suggested using the width of the narrow
6.4\,keV\,Fe\,K$\alpha$ emission line to give an indication of its origin
\citep{lee02,yaqoob04}.
In addition to the red wing, there appears to be a resolved
6.4\,keV\,Fe\,K$\alpha$ line whose width, if interpreted as due to Doppler
broadening, is FWHM$\sim 10,000$\,km\,s$^{-1}$. In the model presented
here, some of this width is provided by a Compton shoulder on the line in a
moderately ionised reflector, modelled by {\sc reflionx}, although there may
still be some residual excess emission on the blue side of the line. In this
case it is difficult to disentangle Compton broadening from velocity
broadening and the line width gives no clear indication of the location of
the reflection, other than not being close to the black hole.
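An order-of-magnitude virial estimate (ours, purely illustrative and not part of the fits) quantifies this. Taking the line-of-sight velocity as $v\sim{\rm FWHM}/2\simeq5000$\,km\,s$^{-1}$ and assuming Keplerian motion, $v^2\sim GM/r$, so
$$
\frac{r}{r_g} \sim \left(\frac{c}{v}\right)^{2}
= \left(\frac{3\times10^{5}\,{\rm km\,s^{-1}}}{5\times10^{3}\,{\rm km\,s^{-1}}}\right)^{2}
\simeq 3.6\times10^{3},
$$
i.e. thousands of gravitational radii, far outside the relativistic region.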
If the line is instead emitted from optically thin gas, the
line width may indicate an origin in the broad line region. In addition to this
component, the long {\em Chandra}\ exposure reveals the presence of a weak
(equivalent width $\sim 20$\,eV) line unresolved at the {\sc heg} resolution
(FWHM $< 3000$\,km\,s$^{-1}$) that may indicate a further component of reflection
or emission even more distant from the central source.
\subsection{The location of the absorbers, rapid variability and time delays}
The analysis presented here has only dealt with variability on timescales
$\ga 20$\,ks. Within the model presented, there is no requirement for any
material within radii where relativistic blurring is significant ($\la 20$\,r$_g$),
but we suggest that the heavy partial-covering absorbing layer originates
on scales of the accretion disc, perhaps around 100\,r$_g$. Perhaps the most
compelling evidence for this is the observation of an apparent eclipse-like event
in the light curve of MCG$-6$-30-15 \citep{mckernan98} which may be explained
as a clumpy disc wind occulting part of the source. M07 and T07 also
suggested absorption by a clumpy disc wind in Mrk\,766 and a similar eclipse
event to that seen in MCG$-6$-30-15 has been observed in NGC\,3516 \citep{turner08}.
Full spectral models of such a wind do not currently exist, although
\citet{schurch07} have created approximate spectra by layering (1D) {\sc xstar}
absorption zones and \citet{sim08}
are developing steady axisymmetric wind models based on Monte-Carlo radiative transfer
that qualitatively reproduce many of the features seen in AGN and Galactic
black hole binary system X-ray spectra. \citet{dorodnitsyn08} have calculated
transmission spectra in a time-dependent axisymmetric model of parsec-scale
flows in AGN. We can hope that detailed comparison of observed spectra with
realistic wind models will become possible in the near future.
In the model presented here there is also a component of low-ionisation reflection.
The amplitude of this component is so constant that it very likely arises from distant material,
light-days or further away,
with any reflection variability erased by light travel time delays. However,
\citet{ponti04} have found evidence for a transient reflection signal delayed
after a continuum flare by a few ks. The statistical significance of the
delayed flare is low and needs to be confirmed by detection of further similar
events.
\citet{goosmann07} have modelled the phenomenon as reflection from a dense clumpy medium
surrounding the accretion region at $\sim 70$\,r$_g$.
If our hypothesised absorbing clumpy disk wind is present, we might
expect to see some reflection contribution from it, depending on its scattering
optical depth and covering factor.
We suggest that the clumpy disc wind
may also be visible in reflection on short-lived
occasions after a flare when viewed with adequate time resolution. On the
20\,ks timescales used in the analysis in this paper such reflection,
weak in normal circumstances, would simply be included as an additional component
in the variable component of the spectrum (PCA eigenvector one).
We may also be able to constrain the location of the absorbing zones from their
response, or lack thereof, to the continuum variations. Of the various zones, only
the high ionisation outflow, zone 3, seems to show evidence for ionisation
variation that tracks the continuum brightness (section\,\ref{ionvar}).
This interpretation of the change in equivalent width is not unique; it could
also arise if zone 3 were only associated with the heavily-absorbed
components, but if the ionisation-variation interpretation is correct it implies
that the recombination time in the absorbing zone
is no larger than the typical continuum variability timescale,
and in turn that the depth
of the zone should be $\la 10$ light days,
assuming a \ion{Fe}{xxvi} radiative recombination rate coefficient
$\alpha \simeq 10^{-11}$\,cm$^3$\,s$^{-1}$ \citep{shull}.
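The scaling behind this limit can be made explicit. In the following the variability timescale is taken to be the $\sim20$\,ks analysis timescale, and the column density $N_{\rm H}\sim10^{23}$\,cm$^{-2}$ is an assumed, illustrative value rather than a fitted one. The recombination time is $t_{\rm rec}\simeq1/(n_e\alpha)$, and requiring $t_{\rm rec}\la t_{\rm var}$ gives
\begin{align*}
  n_e &\ga \frac{1}{\alpha\, t_{\rm var}}
       \simeq \frac{1}{10^{-11}\times2\times10^{4}}
       = 5\times10^{6}\,{\rm cm^{-3}},\\
  \Delta R &\la \frac{N_{\rm H}}{n_e}
       \sim \frac{10^{23}}{5\times10^{6}}\,{\rm cm}
       = 2\times10^{16}\,{\rm cm} \approx 8\ \mbox{light days}
\end{align*}
(1 light day $\simeq2.6\times10^{15}$\,cm), consistent with the $\la10$ light day depth quoted above.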
This might place the
wind within the broad-line region. In the other absorbing zones that are
detected in the grating data, there do also appear to be ionisation variations,
but these do not appear to be systematically correlated with the source brightness
\citep{gibson07}, which might imply that their densities are rather low, or else
that more complex radiative transfer is present. There is currently no constraint
on ionisation variation of the heavily absorbing zones 4 and 5.
\subsection{Comparison with other AGN}
Much of the analysis in this paper has followed that carried out for Mrk\,766,
a narrow-line Seyfert\,I, by M07 and T07. There are strong similarities between
the X-ray spectra of the two sources:
\begin{enumerate}
\setlength\itemsep{0em}
\item Both sources show similar systematic variation in spectral shape with flux
on 20\,ks timescales, with a hard component in the low flux state that shows a
significant edge around the Fe K$\alpha$ region.
\item The PCA leads to similar spectral components in both sources, with the
``red wing'' component being dominated by an apparently quasi-constant
hard spectral component.
\item Both sources appear to have absorption from a high-ionisation
outflow, which leads to significant modification of the observed spectral shape
around 7\,keV.
\item The soft excess appears to be explained by ionised absorption.
\end{enumerate}
The implication is that MCG$-6$-30-15 is not a special case, but rather an
exemplar of a general class of AGN whose X-ray spectra are dominated by the
effects of absorption, possibly in an outflowing wind. \citet{nandra07} have
attempted to quantify the occurrence of ``red wings'' in AGN, with the aim of
establishing the prevalence of detectable relativistic blurring. They find
that 45\,percent of AGN in their sample are best fit by a relativistically
blurred component, including MCG$-6$-30-15 and Mrk\,766. The results we obtain
suggest that a partial covering model such as presented here would provide an
alternative explanation of the observed red wings in AGN. The high occurrence
rate in the \citet{nandra07} sample would then imply a high prevalence of
significant wind absorption, which in turn would imply a high global covering
fraction for the wind.
If the wind explanation is correct,
the inference of X-ray winds in the detailed studies of three AGN,
Mrk\,766, MCG$-6$-30-15 and NGC\,3516 \citep{turner08},
would require that the phenomenon be a property of sources across the range
of narrow-line Seyfert\,I and broad-line Seyfert\,I AGN characteristics,
across a range of black hole mass, as indicated by the
wide range of sources for which a red wing
is claimed in the \citeauthor{nandra07} study.
\section{Conclusions}
We have investigated a model of X-ray spectral variability for MCG$-6$-30-15
based on the absorbing zones identified in high-resolution grating data.
We find the ``soft excess'' may be explained entirely by the combined effect of those
zones (including soft-band dust absorption).
High-resolution principal components analysis, achieved using singular value
decomposition, indicates the presence of a less variable heavily absorbed
component that until now has been interpreted as a relativistically-blurred
Fe line. This component may be modelled by a combination of distant (constant
amplitude) absorbed reflection and the effect of a variable covering fraction
of absorption of the primary continuum source.
The model has been applied
both to the PCA and the actual data accumulated from the
{\em XMM-Newton}\ {\sc pn} and {\sc rgs} instruments (simultaneously-fitted)
in 2000 and 2001 over the energy range
$0.5-10$\,keV, to the {\em Suzaku}\ {\sc xis}, $0.5-10.5$\,keV, and
{\sc pin}, $15-45$\,keV, data (simultaneously fitted) from 2006
and to the {\em Chandra}\ {\sc hetgs} data
({\sc heg} and {\sc meg} simultaneously fitted) from 2004.
This is the most
comprehensive analysis of the MCG$-6$-30-15 dataset yet published.
Remarkably, the absorption model fits the entire dataset over its entire
range, explaining simultaneously the soft-band excess, the ``red wing'' and
its lack of variability and the high hard band
({\em BeppoSAX} and {\em Suzaku}\ {\sc pin}) flux and its lack of variability,
and fits not only the CCD-resolution data but also matches well the
absorption lines and edges seen in the high resolution grating data.
The best-fit parameters show that the partial-covering absorber
is ionised, but with $\xi < 100$\,erg\,cm\,s$^{-1}$,
so no Fe\,K$\alpha$ absorption is expected from this component,
and the absence of observed 6.5\,keV\,Fe\,K$\alpha$ absorption \citep{young05}
is not therefore a constraint on this model.
No relativistically blurred component is required to fit this dataset.
We suggest the absorbing material is primarily a clumpy disc wind.
\begin{acknowledgements}
This paper is based
on observations obtained with {\em XMM-Newton}, {\em Suzaku}\ and {\em Chandra}.
{\em XMM-Newton}\ is an ESA science mission with instruments and contributions
directly funded by ESA Member States and NASA.
{\em Suzaku}\ is a collaboration between
ISAS/JAXA, NASA/GSFC and MIT.
This research has made use of data obtained from the High Energy Astrophysics
Science Archive Research Center (HEASARC), provided by NASA's
Goddard Space Flight Center.
TJT acknowledges NASA grant ADP03-0000-00006.
\end{acknowledgements}
\section*{Introduction}
\subsection{The problem and the main result} Shimura, at the end of his fundamental paper \cite{Shimura3} on elliptic
modular forms of half-integral weight, mentioned certain
questions that were open at the time: one of them asked whether every modular form of weight 1/2 is a
linear combination of theta series in one variable. This was
answered in the affirmative by Serre--Stark \cite{Serre-Stark} who
gave an explicit basis for the space of modular forms of weight 1/2,
level $N$ and character $\psi$ in terms of certain theta series.
These theta series are denoted by $\theta_{\chi, t}$ where $\chi$ is
a primitive Dirichlet character and $t$ a positive integer so that
$\chi$ and $t$ are related in a precise manner to $N$ and $\psi$.
Such an explicit result has several nice applications, see for
instance Tunnell's work~\cite{Tunnell} on the ancient congruent
number problem.
It seems natural to generalize the Serre-Stark theorem to fields other than $\mathbb Q$, that is, to find an explicit basis in terms of theta
series for \emph{Hilbert modular forms} of weight $( \frac{1}{2},
\frac{1}{2}, ... \frac{1}{2})$. In this paper we achieve that in the case of a
totally real field $F$ of narrow class number 1 when the level
$\c$ and character $\psi$ of the form have certain nice properties.
In particular we assume that no prime dividing $\c$ splits in the
extension $F/\mathbb Q$ and that the Dirichlet character $\psi$ of the form
is trivial at the units (or equivalently, the corresponding finite order Hecke
character is trivial at all infinite places).
Under these assumptions we prove that the space of Hilbert modular forms of
weight $( \frac{1}{2}, \frac{1}{2}, ... \frac{1}{2})$, level $\c$
and character $\psi$ has a basis consisting of theta series that are almost
identical to the ones in Serre-Stark's theorem.
We note here that Shimura proved
(see Theorem~\ref{t:shimuragen}) that the space of weight $1/2$ Hilbert modular forms of \emph{all levels}
is \emph{spanned} by certain theta series; however his results do not seem
to give a \emph{basis}, nor do they appear to apply to a \emph{particular level}. Also, as noted by Deligne in a letter (appended at the end of \cite{Serre-Stark}) the problem can be attacked using the tools of representation theory. This was carried through successfully by Gelbart--Piatetski-Shapiro~\cite{gelpia}; however their result, like Shimura's, only finds a spanning set and also does not consider the levels.
We now briefly state the main result. Let $F$ be a totally real number field of narrow class number 1 and degree $n$ over $\mathbb Q$. Let $R$ be its ring of integers, $R^+ \subset R$ the subset of totally positive elements and $U$ the subgroup of units in $R$. For an ideal $\c$ of $R$ that is divisible by 4 and all of whose prime divisors are non-split, and a primitive Dirichlet character $\psi$ trivial on $U$, we let $M(\c, \psi)$ denote the space of Hilbert modular forms over $F$ of parallel weight $1/2$, level $\c$ and character $\psi$. For a primitive Dirichlet character $\chi$ trivial on $U$ and of conductor $r(\chi)$, and an element $t \in R^+$, we define the theta-series $\theta_{\chi, t}$ on the $n$-fold product of the upper half plane by $$\theta_{\chi, t}(z)= \sum_{x\in R} \chi^{-1}(x)e^{\pi i tr(x^2z)}.$$ Then our main theorem says the following.
\emph{A basis for $M(\c, \psi)$ is obtained by taking all the theta-series $\theta_{\chi, t}$ where we let $t$ vary over a set of representatives of $R^+/U^2$ and let $\chi$ satisfy, in addition to the conditions mentioned above, the following:
\begin{enumerate}
\item $4 r(\chi)^2t$ divides $\c$,
\item $\psi= \chi \epsilon_t$ where $\epsilon_t$ is the character associated to the quadratic extension $F(\sqrt{t})$.
\end{enumerate} }
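As a consistency check (our remark, not part of the theorem), specializing to $F=\mathbb Q$ recovers the Serre--Stark setting: $R=\mathbb Z$, $U=\{\pm1\}$, $U^2=\{1\}$, so $T$ may be taken to be the positive integers and every prime is trivially non-split. The theta series then reads
$$\theta_{\chi,t}(z)=\sum_{x\in\mathbb Z}\chi^{-1}(x)\,e^{\pi i t x^{2} z},\qquad t\in\mathbb Z_{>0},$$
which agrees with the classical series of \cite{Serre-Stark} up to the substitution $z\mapsto2z$ and the replacement of $\chi$ by $\chi^{-1}$ (the latter merely permutes the proposed basis).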
A few words about our methods. Though our techniques are similar to those of~\cite{Serre-Stark},
there are certain complications which arise because we are no longer
dealing with $\mathbb Q$; as a result many of the proofs of~\cite{Serre-Stark} do not extend to our case easily. Here are two main points of difference:
First, in section~\ref{s:oper} we prove various properties of certain operators (such as the symmetry operator) that are crucial to the theory of newforms for Hilbert modular forms of parallel weight $1/2$. In \cite
{Serre-Stark} these properties can be easily checked by hand and are left as exercises; however that is not the case here because we \emph{do not} have a simple closed formula for the automorphy factor. So we use an expression for the automorphy factor from Garrett's book~\cite{Garrett} and certain relations due to Shimura (and do some messy computations) to prove these properties. Furthermore we have to be very careful in the way we normalize these operators (and take into consideration the fact that the different of the field is no longer equal to 1) so that things work out.
Secondly, the proof of the crucial Theorem~\ref{t:newtheta} does not quite go through in a manner similar to~\cite{Serre-Stark}; the clever divisibility argument at the end of that proof breaks down here because of primes above $2$. We use a completely different method to get around this conundrum; we essentially use the fact that the size of the Fourier coefficients is bounded by Shimura's work.
Thus, the basis problem for the Hilbert modular case is not a
completely straightforward extension of \cite{Serre-Stark}, which,
we hope, justifies this article. In addition, we indicate, in a short section at the end, a motivation for solving this problem by pointing out two potential applications which we hope to take up elsewhere.
\subsection{Structure of the paper}
In Section~\ref{s:prelim} we lay down notation, give some important
definitions and results that will be used throughout the paper,
state an important result due to Shimura and give the precise
statement of our main theorem.
In Section~\ref{s:hecke} we define the Hecke operators and write down
their action on Fourier coefficients.
Section~\ref{s:easy}, titled `Easy pickings' is the analogue of
\cite[Section 5]{Serre-Stark}. All the proofs carry over
\textit{mutatis mutandis} from there. We have included them for
completeness.
Section~\ref{s:oper} is similarly analogous to \cite[Subsection 3.4]{Serre-Stark}. In this section we define some important operators
(there are some differences from the corresponding definitions in
\cite{Serre-Stark} which arise because our definition of a modular
form is not \emph{quite} the same as Serre-Stark's) and prove the
same results as in there. However the calculations now are of a
higher order of difficulty than in \cite{Serre-Stark} because, unlike
in the classical case, there is no simple formula for the automorphy
factor. As a result the proofs are more technical. This is probably
the hardest part of the paper, involving messy computations.
In Section~\ref{s:newforms} we outline the theory of newforms for
our purposes. The proofs are but formal consequences of the results
of the previous two sections and essentially identical to the
corresponding proofs in \cite{Serre-Stark}. Therefore we do not
include them.
In Section~\ref{main}, we define the $L$-series and use it to
characterize a newform. Using that, we prove our main
theorem. At the end of this section, we illustrate our theorem by writing down bases for the spaces of weight $1/2$ Hilbert modular forms over $\mathbb Q(\sqrt{2})$ for various levels.
Finally, in Section~\ref{s:app} we mention some potential applications of our work.
\section{Preliminaries}\label{s:prelim}
\subsection{Notation} Let
$F$ be a totally real number field, $R$ its ring of integers, $D$ its discriminant and
$\delta$ its different. By abuse of notation we also use $\delta$ to
denote a fixed totally positive generator of the different. We
assume that $F$ has narrow class number one and we let $n$ denote
the degree of $F/\mathbb Q$. Let the group of units of $F$ be denoted by
$U$ and the group of totally positive units by $U^2$ (since the
field is of narrow class number one, all totally positive units are
squares). For any $t \in F$, we use the notation $t \gg 0$ to mean that $t$ is totally positive.
We denote the adelization of $F$ by $F_\mathbf{A}$ and the ideles by
$F_\mathbf{A}^{\times}$. For any $x \in F$ let $N(x)$ denote its
norm over $\mathbb Q$. For an ideal $\mathbf m \subset R$, we will let $N(\mathbf m)$
denote the cardinality of $R/\mathbf m$. Let $\infty$ denote the set of
Archimedean places of $F$ and $\mathbf{f}$ denote the finite places. For $g \in F_\mathbf{A}^{\times}$ we denote $g_{\mathbf m} = \prod_{v | \mathbf m}g_v$ and $g_{\infty}=\prod_{v \in \infty}g_v$.
For $v \in \infty$ we denote the positive elements of $F_v$ by
$F_v^{+}$. By $F_{\infty}^\circ$ we mean the connected component at
infinity of the identity, i.e. $$F_{\infty}^\circ= \prod_{v \in
\infty } F_v^{+} \simeq (\mathbb R^+)^n.$$
Let $\mathbb{H}^n$ (resp. $\mathbb{C}^n$) denote the $n$-fold product
of the upper half plane (resp. complex plane). For $z =
(z_1,..,z_n)$ in $ \mathbb{C}^n$ or $\mathbb R^n$ and any $\alpha \in \mathbb R$ we
put $$ z^\alpha = \prod_{i=1}^n z_i^\alpha,\quad e(z) =
\prod_{i=1}^n e^{2 \pi i z_i},\quad N(z) = \prod_{i=1}^n z_i, \quad
tr(z)= \sum _{i=1}^n z_i.$$ We also use the symbol $e(z)$ for $z \in
F$ using the $n$ embeddings of $F$ in $\mathbb R$.
Furthermore, for any prime (i.e.\ a finite place) $p$ of $R$ we
define the character $e_p$ on $F$ as follows: For $x \in F$ let
$$e_p(x) = e^{-2\pi i y}$$ where $y\in \cap_{q \neq p'}(\mathbb Z_q \cap
\mathbb Q)$, $y- Tr_{F_p/\mathbb Q_{p'}}(x) \in \mathbb Z_{p'}$. Here $q$ is any prime in
$\mathbb Z$ and $p'$ is the prime below $p$, that is, $p' = p \cap \mathbb Q.$
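To fix ideas (an illustration of the definition, with $F=\mathbb Q$, so that $p'=p$ and the trace map is the identity): the conditions force $y$ to be, modulo $\mathbb Z$, the $p$-adic fractional part $\langle x\rangle_p$ of $x$, so that
$$e_p(x) = e^{-2\pi i \langle x\rangle_p},\qquad\text{e.g.}\quad e_5(7/25)=e^{-14\pi i/25},\quad e_5(1/3)=1,$$
the last equality holding because $1/3\in\mathbb Z_5$, so $y=0$ is admissible.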
\subsection{Conventions on characters}\label{s:dirichlet}
Let $\mathbf m$ be an ideal of $R$. A Dirichlet character mod $\mathbf m$ is a
function $\phi$ from $R$ to the unit circle such that:
\begin{enumerate}
\item There exists a homomorphism $\overline{\phi}$ from the finite
group $(R/\mathbf m)^\times$ to the unit circle such that for any $a$ in
$R$ that is relatively prime to $\mathbf m$ we have $\phi(a) =
\overline{\phi}(\overline{a})$;
\item For any $a$ that shares a common factor with $\mathbf m$, we have
$\phi(a) = 0$.
\end{enumerate}
Such a $\phi$ is called \emph{primitive} if
$\overline{\phi}$ does not factor through $(R/\mathbf m')^\times$ for some
proper divisor $\mathbf m'$ of $\mathbf m$.
By a Hecke character of $F$ we mean a character of
$F_\mathbf{A}^{\times}$ which is trivial on $F^{\times}$ and has
values in the unit circle. For a Hecke character $\psi$ and any
place $v$, $\psi_v$ denotes the restriction of $\psi$ to $F_v^\times$. For
an ideal $\c$ of $R$, let $$\psi_\c = \prod_{v \mid \c} \psi_v,$$ $$\psi_\mathbf f = \prod_{v \in \mathbf f}\psi_v$$
and
$$\psi_\infty = \prod_{v \in \infty} \psi_v.$$ The conductor of $\psi$
always refers to its finite part and is denoted by $r(\psi)$. In a mild abuse of notation we will use, for $g$ in $F^\times$ or even $F_\mathbf{A}^{\times}$, $\psi_\c(g)$ (resp. $\psi_\infty(g)$) to mean $\psi_\c(g_\c)$ (resp. $\psi_\infty(g_\infty)$). Also
let $\psi^\ast$ denote the corresponding character on the ideals as
defined in \cite[p. 238]{Shimura4}. In particular, if $I$ is an
ideal of $R$ generated by $s$ and $(I , r(\psi)) = 1$, we have
$$\psi^\ast(I) = \overline{\psi_\c(s)\psi_\infty(s)}$$ for any $\c$
divisible by $r(\psi)$ with $(I, \c)=1$. We will use the notation $\psi^\ast(a)$ for
$a\in R$ to denote $\psi^\ast((a))$.
For any $\tau \in F$ let
$\epsilon_\tau$ denote the Hecke character of $F$ corresponding to
$F(\tau^{1/2})/F$.
\textbf{Comment}: It is well known that any \emph{finite order}
Hecke character $\psi$ of $F$ gives rise to a primitive Dirichlet character
mod $r(\psi)$. This correspondence is bijective. Moreover, for such a finite order Hecke character $\psi$, we have $\psi_\infty(g) = \prod_{v \in \infty}\sgn(g_v)^{e_v}$ with each $e_v = 0$ or $1$. Thus $\psi_\infty$ is trivial if and only if each $e_v=0.$ It can be checked that this happens if and only if the corresponding Dirichlet
character is trivial on the units of $R$.
\subsection{Conventions on modular forms}
\emph{In the rest of this paper, unless mentioned otherwise, we will use the term Hecke character to mean finite order Hecke character.}
Given a $2 \times 2$ matrix $\alpha= \left( \begin{array}{ccc}
a & b \\
c & d \\
\end{array} \right)$ we write $a=a_\alpha, \ b=b_\alpha, \
c=c_\alpha$ and $d= d_\alpha$.
Let $G = SL_{2}(F)$. We will consider weight
$(\frac{1}{2},...,\frac{1}{2})$ modular forms on the congruence
subgroups of $G$.
For any two fractional ideals $\mathfrak{f}, \mathfrak{g}$ of $R$,
let $\Gamma[\mathfrak{f}, \mathfrak{g}]$ denote the subgroup of $G$
consisting of matrices $\gamma$ such that
$a_{\gamma}, \ d_{\gamma} \in R$, $b_{\gamma} \in \mathfrak{f}, \
c_{\gamma} \in \mathfrak{g}$. Let $\mathbf D$ denote the group $\Gamma[2
\delta^{-1}, 2 \delta]$. A congruence subgroup is a subgroup of $\mathbf D$
that contains a principal congruence subgroup $\Gamma(N)$ for some
integer $N$, where
$$\Gamma(N) = \{\gamma \in SL_2(R) : \gamma \equiv I \pmod{N} \}.$$
For any two integral ideals $\c,\d$, with $4 | \c$ we use the
notation $$\Gamma_{\c,\d} = \Gamma[2\delta^{-1}\d,2^{-1}\delta\c]$$
and $$\Gamma_\c = \Gamma[2\delta^{-1},2^{-1}\delta\c]. $$ Note that
$\Gamma_{\c,\d}$ and $\Gamma_\c$ are congruence subgroups.
For $\gamma \in \mathbf D$, $z \in \mathbb{H}^n$, let $h(\gamma, z)$ denote
the automorphy factor $\frac{\theta(\gamma z)}{\theta(z)}$, where
$$\theta(z) =\sum_{x \in R} e(x^2 z /2).$$ A generalization of this
automorphy factor is introduced in \cite{Shimura2} where many of its
properties are proved.
Now, let $\gamma \in \mathbf D$, and $f$ be a holomorphic function on
$\H^n$. We use the notation $$(f
\parallel \gamma)(z) = h(\gamma, z)^{-1}f(\gamma(z)).$$
Suppose now that $\c$ is an ideal as above and $\psi$ is a Hecke
character whose conductor divides $\mathbf{c}$ and $\psi_\infty(-1)
= 1$. Let $M(\mathbf{c} , \psi)$ denote the space of modular
forms of weight $( \frac{1}{2}, ... \frac{1}{2})$ on
$\Gamma_\mathbf{c}$ with character $\psi$. In other words,
$M(\mathbf{c} , \psi)$ is the set of holomorphic functions $f$ on
$\H^n$ satisfying $$f
\parallel \gamma = \psi_\mathbf{c}(a_\gamma)f$$
for all $\gamma \in
\Gamma_\mathbf{c}$. Note that our definition follows \cite{Shimura1}
(and is slightly different from \cite{Serre-Stark} where $f$
satisfies $f
\parallel \gamma = \psi_\mathbf{c}(d_\gamma)f$).
For each such $\mathbf{c}$ let $M^1(\mathbf{c})$ be the union of all
$M(\mathbf{c} , \psi)$ with $\psi$ varying over all Hecke characters
with conductor dividing $\mathbf{c}$ and $\psi_\infty(-1) = 1.$
Let $M^1$ be the union of all the $M^1(\mathbf{c})$ as $\mathbf{c}$
varies over the integral ideals of $R$ divisible by 4. Finally, let
$M$ be the space of all weight $( \frac{1}{2}, ... \frac{1}{2})$ modular forms on congruence subgroups of
$G$. Clearly for any such $\mathbf{c} , \psi$, $M(\mathbf{c} , \psi)
\subset M^1(\mathbf{c}) \subset M^1 \subset M$.
Any $f \in M$ has a \emph{Fourier expansion}
$$f(z) = \sum_{\xi \in F} a(\xi)e(\xi z /2).$$ We call $a(\xi)$ the
Fourier coefficient for the place $\xi$.
If $f$ belongs to $M^1$ then by \cite[p.~780]{Shimura1}, the
Fourier coefficients associated to places outside $R$ are zero. Thus
$f$ has a Fourier expansion
$$f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2).$$
We are interested in the question of finding a basis for each of the
spaces $M(\mathbf{c} , \psi)$.
\subsection{Theta functions}
Let $\eta$ be a locally constant function on $F$, i.e.\ a
complex-valued function for which there exist two
$\mathbb{Z}$-lattices $L$ and $M$ in $F$ such that $\eta(x)=0$ for
$x$ not in $L$ and $\eta(x)$ depends only on $x$ modulo $M$.
The following alternate criterion will be useful.
\begin{proposition}\label{p:criterion}
A function $\eta : F \rightarrow \mathbb{C}$ is locally constant if
and only if there exist integers $m,n$ such that $\eta(x) = 0$ for
$x$ not in $\frac{1}{m} R$ and $\eta(x)$ depends only on $x$ mod
$(n)$.
\end{proposition}
\begin{proof} Any $\mathbb{Z}$-lattice contains $(n)$ and is contained in
$(\frac{1}{m})$ for some $m,n$.
\end{proof}
Let $\mathfrak{L}(F)$ denote this space of locally constant
functions. We define the function $\theta_{\eta}$ on $\H ^n$ by
$$\theta_{\eta}(z) = \sum_{\xi \in F} \eta( \xi) e(\xi^2 z
/2).$$
We have the following proposition.
\begin{proposition}
Let $\eta \in \mathfrak{L}(F)$. Then $\theta_{\eta} \in M$.
\end{proposition}
\begin{proof}
This follows from \cite[Lemma 4.1]{Shimura1}. Indeed the
proof there makes it clear that $\theta_\eta$ is a modular form for
the largest congruence group contained in $\{\alpha \in \mathbf D : {}^\alpha\eta = \eta\}$, where ${}^\alpha \eta$ denotes the action of $\alpha$
on $\eta$ as described in \cite[p. 775]{Shimura1}.
\end{proof}
\subsection{An important example}\label{s:exam}
The following example from \cite{Shimura1} introduces the theta
series that is fundamental to this paper.
\begin{example} [\cite{Shimura1}, pp.~784--785]
Let $\chi$ be a Hecke character of $F$ of conductor $f$ such that
$\chi_\infty(-1) = 1$. Suppose $\omega_v$ denotes the characteristic
function of $R_v$ and let $$\eta(x) = \prod_{v\in \mathbf{f}}
\eta_v(x_v)$$ where:
\begin{itemize}
\item $\eta_v = \omega_v$ if $v \nmid f$;
\item $\eta_v(t) = \chi_v(t)^{-1}$ if $v \mid f$ and $|t|_v = 1$;
\item $\eta_v(t) = 0$ if $v \mid f$ and $|t|_v \neq 1$.
\end{itemize}
Then $\theta_\eta(z) \in M(4f^2 , \chi)$.
\end{example}
For any Hecke character $\chi$ of $F$ such that $\chi_\infty(-1) =
1$ we define $\theta_\chi$ to equal $\theta_\eta$ where $\eta$ is as
in the above example. Thus $\theta_\chi \in M(4r(\chi)^2 , \chi).$
For any totally positive $t \in F$ let $\theta_{\chi,t}(z) :=
\theta_\chi(tz)$. We have $\theta_{\chi, t} \in M(\c , \psi)$
whenever $(4r(\chi)^2t) \mid \c$ and $\psi = \chi\epsilon_t$. Refer
to Lemma~\ref{l:shift} for a proof of this fact. Similarly, for any
function $\eta \in \mathfrak{L}(F)$ let $\theta_{\eta,t}(z):=
\theta_\eta(tz)$.
\subsection{Two generating sets}
The following important theorem is due to Shimura and is contained
in \cite{Shimura1}.
\begin{theorem}[Shimura] \label{t:shimuragen}
$M$ is spanned by the functions $\theta_{\eta,t}$ for $t \in F$
totally positive and $\eta \in \mathfrak{L}(F).$
\end{theorem}
What about the space of forms $M^1$?
We make the following preliminary observations:
Any $f \in M$, by the above theorem, can be written as
\begin{equation}\label{e:eqftheta}
f(z) = \theta_{\eta_1}(t_1z) + \theta_{\eta_2}(t_2z) + \cdots +
\theta_{\eta_k}(t_k z)
\end{equation}
with $0 \ll t_i \in F$.
Replacing each $\eta_i(z)$ by $\frac{\eta_i(z) + \eta_i(-z)}{2}$, we
may assume that $\eta_i(z) = \eta_i(-z)$. Note that this does not
change the functions $\theta_{\eta_i}$.
Also, we may assume that the $t_i$ are distinct mod $(F^{\ast})^2$.
For, if $t_1 = s^2t_2$, say, then $$\theta_{\eta_1}(t_1z) +
\theta_{\eta_2}(t_2z) = \theta_\eta(t_2z)$$ where $\eta(z) =
\eta_1(z/s) + \eta_2(z)$, and so we may combine those two summands
into a single one.
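Explicitly (expanding the one-line claim above), with $t_1=s^2t_2$ the substitution $\xi\mapsto s\xi$, a bijection of $F$, gives
$$\theta_{\eta_1}(t_1z)=\sum_{\xi\in F}\eta_1(\xi)\,e(\xi^2s^2t_2z/2)
=\sum_{\xi\in F}\eta_1(\xi/s)\,e(\xi^2t_2z/2),$$
so adding $\theta_{\eta_2}(t_2z)$ term by term indeed yields $\theta_\eta(t_2z)$ with $\eta(x)=\eta_1(x/s)+\eta_2(x)$.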
Furthermore, if $\eta_i(\xi) = 0$ for $\xi$ not in $(\frac{1}{m})R$,
then $\eta_i(\frac{\xi}{m}) = 0$ for $\xi$ not in $R$. Moreover,
observe that $\theta_{\eta_i}(t_iz) = \theta_{\eta_i^{'}}(t_iz/m^2)$
where $\eta_i^{'}(z) = \eta_i(z/m)$. So, in~\eqref{e:eqftheta} we may assume that
each $\eta_i$ is 0 outside $R$. We can now give a set of generators
for $M^1$.
\begin{theorem}\label{t:shimuragen2} $M^1$ is spanned by the functions $\theta_{\eta,t}$ where $t \in R$ is totally positive,
and $\eta \in \mathfrak{L}(F)$ satisfies $\eta(z) =0$ if $z$ does
not belong to $R$.
\end{theorem}
\begin{proof} Any $f \in M^1$, by the above comments can be written as
\begin{equation}\label{e:ftheta2} f(z) = \theta_{\eta_1}(t_1z) + \theta_{\eta_2}(t_2z) + \cdots +
\theta_{\eta_k}(t_k z)
\end{equation}
where $0 \ll t_i \in F$ are distinct mod $(F^{\ast})^2$ and $\eta_i
\in \mathfrak{L}(F)$ are $0$ outside $R$.
Then, because the $t_i$ are distinct mod $(F^{\ast})^2$ the various
$\theta_{\eta_i}(t_iz)$ contribute distinct terms to the Fourier
expansion of $f$. However only the Fourier coefficients
corresponding to elements of $R$ can be non-zero.
So for each $i$ we must have $\eta_i(\xi) = 0$ whenever $\xi^2t_i$
not in $R$. For a fixed $t_i$, the set of $\xi \in R$ such that
$\xi^2t_i \in R$ is an ideal, hence generated by some $h$. Put
$\eta_i^{'}(z) = \eta_i(hz)$. Then $\theta_{\eta_i}(t_iz) =
\theta_{\eta_i^{'}}(t_i h^2z)$. Thus replacing $\eta_i$ by
$\eta_i^{'}$ and $t_i$ by $t_i h^2$ we see that $\eta_i^{'}$ is
still 0 outside $R$, but now $t_i h^2$ also belongs to $R$.
In other words we have shown that in~\eqref{e:ftheta2}, under the assumption $f
\in M^1$, we may take $0 \ll t_i \in R$ with each $\eta_i$ equal to $0$ outside $R$.
Conversely any such sum is in $M^1$ by \cite[Proposition
3.2]{Shimura1} and \cite[p. 154]{Garrett}.
This completes the proof. \end{proof}
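The substitution used in the proof can be checked directly: since $\eta_i$ is supported on the ideal $(h)$, writing $\xi=h\xi'$ with $\xi'\in R$ gives
$$\theta_{\eta_i}(t_iz)=\sum_{\xi\in(h)}\eta_i(\xi)\,e(\xi^2t_iz/2)
=\sum_{\xi'\in R}\eta_i(h\xi')\,e(\xi'^2h^2t_iz/2)
=\theta_{\eta_i^{'}}(t_ih^2z).$$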
\begin{corollary} \label{c:bound}
Let $f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2)$ be an element of
$M(\c, \psi)$ for some $(\c, \psi)$. Then there is a constant $C_f$
such that $|a(\xi)| < C_f$ for all $\xi \in R$.
\end{corollary}
\begin{proof}
By the above theorem, it suffices to prove that $\theta_{\eta,t}$
has this property. But that follows easily from
Proposition~\ref{p:criterion}.
\end{proof}
\subsection{Statement of the main theorem}\label{s:main}
Let $R^+$ denote the set of
totally positive elements in $R$. Fix a complete set of
representatives $T$ of $R^{+}/U^2$.
Suppose $\c$ is an integral ideal and $\psi$ a
Hecke character. Define $\Omega(\c,
\psi)$ to be the set of pairs $(\chi, t)$ such that:
\begin{enumerate}
\item $\chi$ is a Hecke character with $\chi_\infty$ trivial and $t \in T.$
\item $4r(\chi)^2t$ divides $\c.$
\item $\psi = \chi\epsilon_t.$
\end{enumerate}
Recall the definition of $\theta_{\chi, t}$ from
Section~\ref{s:exam}. Our main theorem is as follows:
\begin{theorem} \label{t:main}Suppose $\c$ is an integral ideal divisible by $4$. Let $\psi$ be a
Hecke character of $F$ such that $\psi_\infty$ is
trivial and $r(\psi)$ divides $\c$. Assume that any prime ideal $\mathfrak p$
dividing $\c$ has the property that $\mathfrak p$ is the unique prime
ideal of $R$ that lies above $\mathfrak p \cap \mathbb Z$. Then the functions $\theta_{\chi,t}$ with $(\chi, t) \in
\Omega(\c, \psi)$ form a basis of $M(\c, \psi).$
\end{theorem}
We prove this theorem in Section~\ref{main}.
\section{Hecke operators}\label{s:hecke}
\subsection{Some definitions}
Let $GL_2^+(F)$ denote the subgroup of $GL_2(F)$ consisting of
matrices whose determinant is totally positive. Let $\mathcal G$ denote the
group extension of $GL_2^+(F)$ consisting of pairs $[A, \phi(z) ]$
where $A =\left( \begin{array}{cc}
a & b \\
c & d \\
\end{array} \right) \in GL_2^+(F)$ and $\phi(z)$ is a holomorphic
function on $\H^n$ satisfying $\phi(z)^2 = t N(\det A)^{-1/2}
\prod (c^{(i)}z_{i} + d^{(i)})$ where $A^{(i)} =\left(
\begin{array}{cc}
a^{(i)} & b^{(i)} \\
c^{(i)} & d^{(i)} \\
\end{array} \right)$ are the various embeddings of $A$ in $GL_2(\mathbb R)$ and $t$ is a complex number with $\mid t \mid =
1$. The group law in $\mathcal G$ is given by $[A, \phi(z)][B, \psi(z)] =
[AB, \phi(Bz)\psi(z)]$.
The group $\mathcal G$ acts on the \emph{right} of the space of
holomorphic functions on $\H^n$ as follows: for a holomorphic
function $f$ on $\H^n$, define $f \mid [A, \phi(z)] =
\phi(z)^{-1}f(Az)$. Note also that the group $D$ embeds in $\mathcal G$ via
$A \rightarrow [A, h(A, z)]$. Furthermore, we have $(f \parallel
A)(z) = f \mid [A, h(A , z)]$.
For $\gamma = w_1tw_2$ where $w_1, w_2 \in \mathbf D$ and $t
=\begin{pmatrix}
1/a & 0 \\
0 & a \\
\end{pmatrix}$ for some $a \in R$, define $J_\Xi(\gamma, z) = h(w_1w_2,z)$. The quantities $J_\Xi(\gamma, z)$ and
$h(\gamma, z)$ coincide whenever $\gamma \in \mathbf D$. We also recall from~\cite{Shimura1} that:
\begin{enumerate}
\item $J_\Xi\left(\begin{pmatrix}
1/p & 2b/(\delta p) \\
0 & p \\
\end{pmatrix}, z\right) = N(p)^{1/2}$ for a prime $p$ and element $b$ in $R$.
\item $J_\Xi\left(\begin{pmatrix}
1 & 2h/(\delta p) \\
0 & 1 \\
\end{pmatrix}, z\right) = N(p)^{1/2} \left(\sum_{x\in
(R/p)}e_p(hx^2/(p\delta) )\right)^{-1}$ where $p$ is a prime and $h
\in R$ is not divisible by $p$.
\item $J_\Xi\left(\begin{pmatrix}
p & 0 \\
0 & 1/p \\
\end{pmatrix}, z\right) = N(p)^{-1/2}$.\\
\end{enumerate}
A key property of $J_\Xi$ is that it is a partial automorphy factor.
To be precise, it has the following properties (see \cite{Shimura1}):\\
(a) $J_\Xi(y_1xy_2 , z) = h(y_1, xy_2(z))J_\Xi(x, y_2(z))h(y_2, z)$,
if $y_1 , y_2$ belong to $\mathbf D$.
(b) $J_\Xi(k^{-1}, z) = J_\Xi(k, k^{-1}(z))^{-1}$, where $k \in \mathbf D \sigma
\mathbf D$ with $\sigma =\begin{pmatrix}
1/a & 0 \\
0 & a \\
\end{pmatrix}$ with $a$ relatively prime to $2$.
Let $\gamma \in \Gamma_{\c}.$ We now give a complicated, but still useful, formula for $h(\gamma,z)$.
For $d \in R - \{0\}$, define $$\epsilon(d) = (i \sgn d)^{1/2}2^{-n/2} D^{-1/2} \sum_{v\in \delta^{-1}/2R}e(-v^2d/4).$$ We also define $\widetilde{\epsilon}(d) = i^s$ where $s$ is the number of negative embeddings of $d$. Then~\cite[p. 142]{Garrett} tells us that \begin{equation}\label{e:hformula}h(\gamma, z) = \epsilon(d_\gamma) \widetilde{\epsilon}(d_\gamma) (\epsilon_{c_\gamma})^\ast(a_\gamma)(c_\gamma z + d_\gamma)^{1/2}\end{equation}
Also, by~\cite[p. 146]{Garrett} we have $$\theta\left(\begin{pmatrix}0& -\delta^{-1}\\ \delta &0\end{pmatrix} z\right) / \theta(z) = (- \i z)^{1/2}N(\delta)^{1/2}.$$ We extend the notation $h(\gamma, z)$ to this case by defining \begin{equation}\label{hwformula}h\left(\begin{pmatrix}0& -\delta^{-1}\\ \delta &0\end{pmatrix}, z\right) = (- \i z)^{1/2}N(\delta)^{1/2}\end{equation}
For each totally positive prime element $p \in R$ we define a
\emph{Hecke operator} $T_{p^2}$ on $M(\c, \psi)$ that sends $f $ to
$f \mid T_{p^2}$ where
\begin{align*}
f \mid T_{p^2} = N(p)^{-3/2}&\overline{\psi_\infty(p)}\quad \bigg(\sum_{b
\in R/p^2} f \mid \left[
\begin{pmatrix}
1/p & 2b/(\delta p) \\
0 & p \\
\end{pmatrix},J_\Xi\left(\begin{pmatrix}
1/p & 2b/(\delta p) \\
0 & p \\
\end{pmatrix}, z\right) \right] \\
&+ \quad \overline{\psi_\c(p)}\sum_{h\in (R/p)^\times} f \mid
\left[
\begin{pmatrix}
1 & 2h/(\delta p) \\
0 & 1 \\
\end{pmatrix}, J_\Xi
\left(\begin{pmatrix}
1 & 2h/(\delta p) \\
0 & 1 \\
\end{pmatrix} ,z \right) \right] \\
&+\quad \overline{\psi_\c(p^2)} f \mid \left[
\begin{pmatrix}
p & 0 \\
0 & 1/p \\
\end{pmatrix}, J_\Xi\left(\begin{pmatrix}
p & 0 \\
0 & 1/p \\
\end{pmatrix}, z\right) \right]\quad \bigg).
\end{align*}
\subsection{Action of the Hecke operator on Fourier coefficients}
The next proposition, which is a restatement of \cite[Proposition
5.4]{Shimura1}, gives the explicit action of $T_{p^2}$ on the
Fourier coefficients of a modular form. In particular, it also shows
that if $p$ and $p'$ are two totally positive elements that generate
the same prime ideal, then $T_{p^2}$ coincides with $T_{p'^2}$.
\begin{proposition}[Shimura]\label{p:Hecke}
Suppose $p$ is a totally positive prime element of $R$ and $f \in
M(\mathbf{c}, \psi)$ is given by
$$f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2).$$
Then
$$(f \mid T_{p^2})(z) = \sum_{\xi \in R}
b(\xi)e(\xi z /2)$$
where $$\psi_\infty(p)b(\xi) = a(\xi p^2) + \left\{
\begin{array}{ll}
\overline{\psi_\mathbf{c}(p)}N(p)^{-1}(\frac{\xi}{p})a(\xi)
+ \overline{\psi_\mathbf{c}(p^2)}N(p)^{-1}a(\xi/p^2)& \textrm{if $p \nmid \mathbf{c}$}\\
\\
0 & \textrm{if $p \mid \mathbf{c}$}
\end{array} \right.$$
where $a(\xi/p^2) := 0$ if $p^2 \nmid \xi$.
\end{proposition}
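For a concrete sanity check, consider the simplest setting $F = \mathbb Q$, $R = \mathbb Z$, $\psi$ trivial and $p \nmid \mathbf{c}$. The following sketch (our illustration, not part of Shimura's statement) implements the coefficient formula and verifies that the classical theta coefficients, $a(\xi) = 2$ for $\xi$ a non-zero square, form an eigenvector with eigenvalue $1 + N(p)^{-1}$, in agreement with Theorem~\ref{t:serre} below for $t = 1$:

```python
from fractions import Fraction

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def hecke_Tp2(a, p, xi_max):
    """Coefficients of f | T_{p^2} for p not dividing c and trivial psi:
       b(xi) = a(xi p^2) + p^{-1} (xi/p) a(xi) + p^{-1} a(xi/p^2)."""
    b = {}
    for xi in range(1, xi_max + 1):
        val = Fraction(a.get(xi * p * p, 0))
        val += Fraction(legendre(xi, p), p) * a.get(xi, 0)
        if xi % (p * p) == 0:
            val += Fraction(a.get(xi // (p * p), 0), p)
        b[xi] = val
    return b

# theta(z) = sum_n e(n^2 z / 2): a(xi) = 2 for xi a non-zero square
theta = {m * m: 2 for m in range(1, 40)}
b = hecke_Tp2(theta, 3, 100)
# theta is an eigenform: b(xi) = (1 + 1/3) a(xi) for every xi
assert all(b[xi] == Fraction(4, 3) * theta.get(xi, 0) for xi in range(1, 101))
```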
\begin{corollary}\label{cbound2}
Suppose $$f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2)$$ is an element
of $M(\c, \psi)$ and $f\mid T_{p^2} = c_p f$ for some prime $p \mid
\c$. Then
$$a(\xi p^{2n}) = (\psi_\infty (p))^n c_p^n a(\xi)$$ and $\mid c_p
\mid \leq 1$.
\end{corollary}
\begin{proof} The assertion about $a(\xi p^{2n})$ follows from the
above Proposition. Now Corollary~\ref{c:bound} implies that $\mid
c_p \mid \leq 1$.
\end{proof}
\section{Easy pickings}\label{s:easy}
\subsection{Eigenforms of Hecke operators}
Consider the Petersson scalar product $\langle f,g\rangle$ on $M(\mathbf{c},
\psi)$. The definition is analogous to the classical case, see for
instance \cite{Shimura1}.
By standard calculations~\cite[Proposition 5.3]{Shimura1},
$\overline{\psi^\ast(p^2)}T_{p^2}$ is a Hermitian operator if $p$
does not divide $\c$. Hence:
\begin{lemma}\label{l:basis} There is a basis of $M(\mathbf{c}, \psi)$ consisting of
eigenforms for all the $T_{p^2}$ where $p>>0$ is a prime in $R$ and
$p \nmid \mathbf{c}$.
\end{lemma}
So it is important to study the modular forms that are eigenvectors
of the Hecke operators. But first we prove an auxiliary lemma.
\begin{lemma}\label{l:numbasis} The following hold:
(a) There is a basis of $M(\mathbf{c}, \psi)$ consisting
of forms whose coefficients belong to a number field.
(b) If $f(z) = \sum_{\xi \in R}a(\xi)e(\xi z /2) \in M(\mathbf{c},
\psi)$ has each $a(\xi)$ algebraic, then the $a(\xi)$ have bounded
denominators (i.e.\ there exists a non-zero integer $D$ such that
$Da(\xi)$ is an algebraic integer for all $\xi$).\end{lemma}
\begin{proof}
(a) is just~\cite[Proposition 8.5]{Shimura1} while (b) follows
easily from Theorem~\ref{t:shimuragen2} above. \end{proof}
\begin{lemma} Let $f(z) = \sum_{\xi \in R}a(\xi)e(\xi z
/2) \in M(\mathbf{c}, \psi)$ be an eigenvector of $T_{p^2}$
with eigenvalue $c_p$ where $p \nmid \mathbf{c}$.
Suppose $0 << m \in R$ is such that $p^2 \nmid m$. Then:
(a) $a(mp^{2n}) =
a(m)\overline{\psi_\mathbf{c}(p)^n}(\frac{m}{p})^n$ for every $n
\geq 0$.
(b) If $a(m)\neq 0$, then $p \nmid m$ and $c_p
=\psi^\ast(p)(\frac{m}{p})(1 + N(p)^{-1})$.
\end{lemma}
\begin{proof} Since $T_{p^2}$ maps forms with algebraic coefficients
into themselves, it follows from Lemma~\ref{l:numbasis} by simple
linear algebra that the eigenvalue $c_p$ is algebraic and that the
corresponding eigenspace is generated by forms with algebraic
coefficients. So we assume that the coefficients $a(\xi)$ are
algebraic.
Consider the power series $A(T)= \sum_{n=0}^{\infty}a(mp^{2n})T^n$.
Using Proposition \ref{p:Hecke}, we get, by the same argument as in~\cite[p. 452]{Shimura3},
$$ A(T) = a(m)\frac{1 - \alpha T}{(1 - \beta T)(1 - \gamma
T)}$$
where
$$\alpha =\overline{\psi_\mathbf{c}(p)}N(p)^{-1}(\frac{m}{p})$$
and
$$\beta + \gamma = \psi_\infty (p)c_p,\quad \beta\gamma =\overline{\psi_\mathbf{c}(p^2)}N(p)^{-1}.$$
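For the reader's convenience, we spell out the computation behind this expression. Applying Proposition~\ref{p:Hecke} with $\xi = mp^{2n}$ (so that $(\frac{mp^{2n}}{p}) = 0$ for $n \geq 1$, while the term $a(mp^{2n-2})$ appears only for $n \geq 1$) gives
\begin{align*}
a(mp^{2}) &= (\beta + \gamma)a(m) - \alpha a(m),\\
a(mp^{2n+2}) &= (\beta + \gamma)a(mp^{2n}) - \beta\gamma\, a(mp^{2n-2}) \quad (n \geq 1).
\end{align*}
Multiplying $A(T)$ by $(1-\beta T)(1-\gamma T) = 1 - (\beta + \gamma)T + \beta\gamma T^2$, these relations kill every coefficient of $T^n$ with $n \geq 2$ and leave $a(m)(1 - \alpha T)$.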
This already implies that $a(m) = 0$ implies $a(mp^{2n}) =0$ for all
$n$. Hence we may assume that $a(m) \neq 0$ in which case $A(T)$ is a
non-zero rational function of $T$. Viewing $A(T)$ as a function over
a suitable finite extension of $\mathbb{Q}_p$, we see using
Lemma~\ref{l:numbasis}(b) that $A(T)$ converges in the $p$-adic unit
disk $U$ defined by $\mid T \mid_p < 1$; hence $A(T)$ cannot have a
pole in $U$. However, since $\beta \gamma =
\overline{\psi_\mathbf{c}(p^2)}N(p)^{-1}$ one of $\beta^{-1} ,
\gamma^{-1}$ belongs to $U$. Assume it is $\beta^{-1}$. Since $A(T)$
is holomorphic we must then have $\alpha = \beta$. So
$A(T)=\frac{a(m)}{(1 - \gamma T)}$ and so $a(mp^{2n}) = \gamma^n
a(m)$. Since $\beta \gamma \neq 0$ we have $\alpha \neq 0$, hence $p
\nmid m$. Moreover $\gamma = \beta \gamma / \alpha =
\overline{\psi_\mathbf{c}(p)}(\frac{m}{p})$. So $a(mp^{2n}) =
\gamma^n a(m) = a(m)\overline{\psi_\mathbf{c}(p)^n}(\frac{m}{p})^n$.
This proves (a) while (b) follows from $c_p =
\overline{\psi_\infty(p)}(\alpha + \gamma)$. \end{proof}
An element $t \in R$ is called squarefree if it is not divisible by
the square of a prime element of $R$.
\begin{theorem}\label{t:serre}
Let $$f(z) = \sum_{\xi \in R} a(\xi) e(\xi z / 2)$$ be a non-zero
element of $M(\c, \psi)$ and let $\c'$ be an ideal of $R$ such that
$ \c \mid \c '$. Assume that for all primes $p \nmid \c'$ we have
$f\mid T_{p^2} = c_pf$ where $c_p \in \mathbb{C}$. Then there exists
a unique (up to multiplication by a unit) totally positive
squarefree element $t \in R$ such that $a(\xi) = 0$ unless
$\frac{\xi}{t}$ is the square of an element of $R$. Moreover
\begin{enumerate}
\item $t \mid \c'$
\item $c_p = \psi^\ast(p)(\frac{t}{p})(1 + N(p)^{-1})$ \ if $p \nmid
\c'$
\item $a(\xi u^2) = a(\xi)\overline{\psi_\mathbf{c}(u)}(\frac{t}{u})$
\ if $(u \ , \ \c') = 1$
\end{enumerate}
\end{theorem}
\begin{proof} Let $\xi, \xi' \in R$ be such that $a(\xi), a(\xi') \neq
0$. We first show that $\xi' / \xi$ is a square. Let $P$ be the set
of primes $p$ with $p \nmid (\c' \xi \xi')$. If $p\in P$, the
previous lemma shows that
$$\overline{\psi_\infty(p)}\overline{\psi_\mathbf{c}(p)}(\frac{\xi}{p})(1 + N(p)^{-1}) = c_p = \overline{\psi_\infty(p)}\overline{\psi_\mathbf{c}(p)}(\frac{\xi'}{p})(1 +
N(p)^{-1})$$
Hence $$(\frac{\xi}{p}) = (\frac{\xi'}{p})$$ for almost all $p$. But
this means that almost all primes split in the extension
$F(\sqrt{\xi\xi'}) / F$ and hence, by a well-known result, the
extension must be trivial, i.e. $\xi' /\xi$ is a square. Write $\xi
= tv^2, \xi' = tv'^2$ with $t$ totally positive and squarefree.
This proves the first assertion of the theorem, i.e. the existence
of $t$. Now write $v= p^nu$ with $p \nmid \c'$ and $(p,u) = 1$. So $
\xi = t p^{2n} u^2$. Applying the previous lemma to $tu^2$ we have
$a(\xi) = a(tu^2)\overline{ \psi_ \c(p)^n }(\frac{tu^2}{p})^n$.
Hence $a(tu^2) \neq 0$ and part (b) of the lemma above shows that $p
\nmid t$ and $c_p = \psi^\ast(p)(\frac{t}{p})(1 + N(p)^{-1})$.
Hence every prime factor of $t$ divides $\c'$; since $t$ is
squarefree this implies $t \mid \c'$, and (1) and (2) are proved. As
for (3), it is enough to check it for $u = p$ with $p \nmid \c'$ and
$u$ a unit. The case of $u = p$ follows from writing $\xi = \xi_0
p^{2a}$ with $p^2 \nmid \xi_0$ and applying part (a) of the previous
lemma, while the case of a unit follows from \cite[Proposition
3.1]{Shimura1}. \end{proof}
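The step ``$(\frac{\xi}{p}) = (\frac{\xi'}{p})$ for almost all $p$ forces $\xi\xi'$ to be a square'' can be illustrated numerically over $F = \mathbb{Q}$. The sketch below (ours, for illustration only) searches for a prime distinguishing $\xi$ from $\xi'$, and finds one exactly when $\xi\xi'$ is not a square:

```python
import math

def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, ok in enumerate(sieve) if ok]

def distinguishing_prime(xi1, xi2, bound=1000):
    """An odd prime p coprime to xi1*xi2 with (xi1/p) != (xi2/p), i.e.
    a prime that does not split in Q(sqrt(xi1*xi2)); None if not found."""
    for p in primes_up_to(bound):
        if p == 2 or (xi1 * xi2) % p == 0:
            continue
        if legendre(xi1, p) != legendre(xi2, p):
            return p
    return None

# such a prime exists precisely when xi1*xi2 is not a square
for xi1 in range(1, 20):
    for xi2 in range(1, 20):
        square = math.isqrt(xi1 * xi2) ** 2 == xi1 * xi2
        assert (distinguishing_prime(xi1, xi2) is None) == square
```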
\section{Some operators}\label{s:oper}
Note that all operators on spaces of modular forms defined in this
paper act on the \emph{right}. This is done so that composition of
operators is compatible with multiplication in $\mathcal G$.
Fix a totally positive generator $c$ of $\c$. We define the
following operators on $M(\c, \psi)$.
\begin{itemize}
\item(The shift operator) For any totally positive $m \in R$, the shift operator $V(m)$ is defined as
$$V(m) = N(m)^{-1/4}\left[\begin{pmatrix}
m & 0 \\
0 & 1 \\
\end{pmatrix}, N(m)^{-1/4}\right]$$
Thus $$(f \mid V(m))(z) = f(mz).$$
\item(The symmetry operator) The symmetry operator
$W(c)$ is defined as
$$W(c) = [W_0, J_\Xi(W_0,z)][V_0(c), N(c)^{-1/4}]$$ where $W_0
=\begin{pmatrix}
0 & -2\delta^{-1} \\
2^{-1}\delta & 0 \\
\end{pmatrix}$ and $V_0(c) = \begin{pmatrix}
c & 0 \\
0 & 1 \\
\end{pmatrix}$.
Thus $$(f \mid W(c))(z) = f\mid \left[
\begin{pmatrix}
0 & -2\delta^{-1} \\
2^{-1} \delta c & 0 \\
\end{pmatrix}, (- \i z)^{1/2}
N(2^{-2}\delta^2 c)^{1/4} \right]$$
$$= (- \i z)^{-1/2} N(2^{-2}\delta^2
c)^{-1/4}f(\frac{-4}{c\delta^2z})$$
Observe that $(f\mid W(c))\mid W(c) = f.$
\item(The conjugation operator). The conjugation operator $H$ is
defined by $$(f\mid H)(z) = \overline{f(-\overline{z})}$$
\end{itemize}
\begin{lemma}\label{l:shift}
The operators $V(m), W(c)$ and $H$ take $M(\c, \psi)$ to $M(m\c,
\psi\epsilon_m) , M(\c, \overline{\psi}\epsilon_c)$ and $M(\c,
\overline{\psi})$ respectively. Further, if $f \in M(\c , \psi)$, we
have: \renewcommand{\theenumi}{\roman{enumi}} \begin{enumerate}
\item $(f \mid V(m))\mid T_{p^2} = (f \mid T_{p^2})\mid V(m) \quad$ when $p \nmid
m$
\item $(f\mid H)\mid T_{p^2} = (f \mid T_{p^2})\mid H$
\item $(f \mid W(c))\mid T_{p^2} = \psi_\c(p^2)(f \mid T_{p^2})\mid W(c) \quad$ when $p \nmid
c$
\end{enumerate}
\end{lemma}
\begin{proof} The statements about $H$ are trivial while those about
$V(m)$ follow from (\cite{Shimura1}, Proposition 3.2) and
Proposition~\ref{p:Hecke} above.
We now prove the statements concerning $W(c)$. Let $W = \begin{pmatrix}
0 & -2\delta^{-1} \\
2^{-1} \delta c & 0 \\
\end{pmatrix}$ and $\omega(z) = (- \i z)^{1/2}
N(2^{-2}\delta^2 c)^{1/4}$; by definition, $W(c)= [W, \omega(z)].$
Also, recall that $W_0
=\begin{pmatrix}
0 & -2\delta^{-1} \\
2^{-1}\delta & 0 \\
\end{pmatrix}.$
Let us prove that $W(c)$ takes $M(\c, \psi)$ to $M(\c,
\overline{\psi}\epsilon_c)$. We need to show that
$$((f\mid W(c))
\parallel \gamma)(z) = \overline{\psi_\c(a_\gamma)}
(\epsilon_c)_\c(a_\gamma)(f \mid W(c))(z)$$ for any $\gamma \in
\Gamma_\c$.
We write $$\Gamma = \left(
\begin{array}{cc}
d_\gamma & -c_\gamma2^2\delta^{-2}c^{-1} \\
-\delta^22^{-2}cb_\gamma & a_\gamma \\
\end{array} \right) \in \Gamma_\c.$$ Using the fact that
$\Gamma^{-1}W \gamma = W$ and $(f \parallel \Gamma) = f$ we are
reduced to proving that
\begin{equation}\label{eqneedproveh}\frac{h(\Gamma , W z)}{h(\gamma , z)} =
(\epsilon_c)_\c(a_\gamma) \frac{(- \i (\gamma z))^{1/2}}{(- \i
z)^{1/2}}.\end{equation}
Now put $\gamma' = V_0(c)\gamma V_0(c)^{-1}, z' =cz$. We note that $$h(\Gamma, Wz) = \theta (W_0\gamma'z')/ \theta(W_0 z').$$
Using the definition $h(G,z) = \theta(Gz)/\theta(z)$ for $G \in \mathbf D$ or $G = \begin{pmatrix}1/2&0\\0&2\end{pmatrix}W_0$ we have
$$h(\Gamma, Wz) = \frac{h\left(\begin{pmatrix}1/2&0\\0&2\end{pmatrix}W_0 , V_0(1/4)\gamma'z'\right)h(V_0(c/4)\gamma V_0(c/4)^{-1}, cz/4)}{h\left(\begin{pmatrix}1/2&0\\
0&2\end{pmatrix}W_0, V_0(1/4)z'\right)}.$$ Use~\eqref{hwformula} on the factors $h\left(\begin{pmatrix}1/2&0\\0&2\end{pmatrix}W_0 , V_0(1/4)\gamma'z'\right)$, $h\left(\begin{pmatrix}1/2&0\\
0&2\end{pmatrix}W_0, V_0(1/4)z'\right)$ to get
$$h(\Gamma , W z) = \frac{(- \i (\gamma z))^{1/2}}{(- \i
z)^{1/2}}h(V_0(c/4)\gamma V_0(c/4)^{-1}, cz/4) .$$ Now using~\eqref{e:hformula} we get $$ h(V_0(c/4)\gamma V_0(c/4)^{-1}, cz/4) = h(\gamma,z)(\epsilon_c)_\c(a_\gamma)$$ and this completes the proof of~\eqref{eqneedproveh}.
As for (iii), it follows from the following identities in $\mathcal G$,
which can be verified by explicit computation using the (partial)
automorphy property of $J_\Xi$.
\begin{enumerate}
\item Let $B =\begin{pmatrix}
1/p & 2b/(\delta p) \\
0 & p \\
\end{pmatrix}$ where $b \in (R/p^2)^\times$. Let $b' \in (R/p^2)^\times$
be the element such that $bb'c \equiv -1$ mod $p^2$ and let $B'
=\begin{pmatrix}
1/p & 2b'/(\delta p) \\
0 & p \\
\end{pmatrix}$.
Define $\gamma \in \Gamma_\c$ by $\gamma = \begin{pmatrix}
p^2 & -2b\delta^{-1} \\
-2^{-1}\delta b' c & (1+bb'c)/p^2 \\
\end{pmatrix} .$
Then we have
$$[\gamma, h(\gamma, z)]\ [B, J_\Xi(B, z)]\ [W, \omega(z)] = [W, \omega(z)]\ [B' ,
J_\Xi(B' , z)].$$
\item Let $C =\begin{pmatrix}
1 & 2h/(\delta p) \\
0 & 1 \\
\end{pmatrix}$ where $h \in (R/p)^\times$. Let $h' \in (R/p)^\times$
be the element such that $hh'c \equiv -1$ mod $p$ and let $B'
=\begin{pmatrix}
1/p & 2h'\delta^{-1} \\
0 & p \\
\end{pmatrix}$.\\
Define $\gamma ,\gamma' \in \Gamma_\c$ by $\gamma = \begin{pmatrix}
p & -2h\delta^{-1} \\
-2^{-1}\delta h' c & (1+hh'c)/p \\
\end{pmatrix}, \gamma' = \begin{pmatrix}
(1+hh'c)/p & -2h'\delta^{-1} \\
-2^{-1}\delta h c & p \\
\end{pmatrix} .$
Then we have
$$[\gamma, h(\gamma, z)]\ [C, J_\Xi(C, z)]\ [W, \omega(z)] = [W, \omega(z)]\ [B' , J_\Xi(B',z)].$$
$$[\gamma', h(\gamma', z)]\ [W, \omega(z)]\ [C, J_\Xi(C,z)] = (\epsilon_c)_\c(p)[B' , J_\Xi(B' , z)]\ [W,
\omega(z)].$$
\item Let $D =\begin{pmatrix}
p & 0 \\
0 & 1/p \\
\end{pmatrix}$ and $E
=\begin{pmatrix}
1/p & 0 \\
0 & p \\
\end{pmatrix}$.
Then we have
$$[D, N(p)^{-1/2}]\ [W, \omega(z)] = [W, \omega(z)]\ [E ,N(p)^{1/2}].$$
$$[W,\omega(z)]\ [D, N(p)^{-1/2}] = [E , N(p)^{1/2}]\ [W,
\omega(z)].$$
\end{enumerate}
\end{proof}
Now, for a totally positive prime $p_0 \in R$ dividing $c/4$, let us write
$\Gamma_{\c /p_0}$ as a disjoint union of cosets modulo $\Gamma_\c$:
$$\Gamma_{\c /p_0} = \coprod_{\beta \in S} \Gamma_\c \beta$$
We define the trace operator $S'(\psi) = S'(\psi, c, p_0)$ on $M(\c,
\psi)$ by $$(f\mid S'(\psi))(z) = \sum_{\beta \in S}
\psi(d_{\beta})(f
\parallel \beta) (z).$$ It is easy to see that this operator does not depend
on the choice of the $\beta$'s. Moreover if $r(\psi) \mid (\c/p_0)$,
$S'(\psi)$ takes $M(\c, \psi)$ to $M(\c/p_0 ,\psi)$ and if $f \in
M(\c / p_0 , \psi)$ then $f\mid S'(\psi) = uf$ where $u=\mid S\mid$.
A routine calculation similar to the one above also shows that $S'(\psi)$ commutes with
$T_{p^2}$ for $p \nmid \c$.
We now define the operator $S(\psi) = S(\psi, c, p_0)$ on $M(\c,
\psi)$ by: $$S(\psi) = \frac{1}{u}N(p_0)^{1/4}
W(c)S'(\overline{\psi}\epsilon_c)W(c/p_0)
$$
\begin{lemma}
Let $p_0 \in R$ be a totally positive prime dividing $c/4$, such
that $r(\psi \epsilon_{p_0}) \mid (c/p_0)$. Then:
\begin{enumerate}
\item $S(\psi, c, p_0)$ maps $M(\c, \psi)$ into $M(\c/p_{0},
\psi\epsilon_{p_0})$
\item If $m$ is a totally positive element of $R$ that is prime to
$p_0$, and $f$ belongs to $M(\c, \psi)$, then
$$f\mid S(\psi, c, p_0)=f\mid S(\psi, mc, p_0).$$
\item $S(\psi)$ commutes with all the $T_{p^2}$
\item If $g \in M(\c/p_0, \psi\epsilon_{p_0})$, then $(g\mid V(p_0))\mid S(\psi, c,
p_0) = g$.
\item Let $p\in R$ be a totally positive prime such that $p \mid
(c/4)$, $p \neq p_0$ and $r(\psi \epsilon_{p}) \mid (c/p)$. If $g
\in M(\c /p, \psi \epsilon_{p})$, we have: $$(g\mid V(p))\mid
S(\psi, c, p_0) = (g\mid S(\psi \epsilon_{p}, c/p, p_0))\mid V(p).$$
\end{enumerate}
\end{lemma}
\begin{proof} The main ingredient for this proof is Lemma~\ref{l:shift}; otherwise the proof is identical to the proof of~\cite[Lemma 3]{Serre-Stark}.
(1) follows directly from Lemma~\ref{l:shift} and the comments above.
Now note that if $p_0 \nmid m$ and $ \gamma =\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array} \right)$ runs over a set of representatives of
$\Gamma_{m\c} \backslash \Gamma_{m\c/p_0}$, then $ \gamma ' =\left(
\begin{array}{cc}
a & bm \\
c/m & d \\
\end{array} \right)$ runs over a set of representatives of $\Gamma_{\c}\backslash \Gamma_{\c/p_0}$.
To prove (2) we now only need to observe that $$W(mc)[\gamma,
h(\gamma,z)]W(mc/p_0) = [m I, 1]W(c)[\gamma', h(\gamma',
z)]W(c/p_0).$$
(3) follows from the commutativity of the Hecke operators with the
individual operators that make up $S(\psi)$.
As for (4) observe that $$(g\mid V(p_0))\mid W(c) =
N(p_0)^{-1/4}g\mid W(c/p_0).$$ The right side is invariant by
$\frac{1}{u}S'(\overline{\psi}\epsilon_c)$ and is sent to
$N(p_0)^{-1/4}g$ by $W(c/p_0)$.
Finally (5) follows from the following identities which can be checked by explicit computation:
$$\left[\begin{pmatrix}
p & 0 \\
0 & 1 \\
\end{pmatrix}, N(p)^{-1/4}\right]W(c) = [pI, 1]W(c/p),$$
$$W(c/p_0) = W(c/pp_0)\left[\begin{pmatrix}
p & 0 \\
0 & 1 \\
\end{pmatrix}, N(p)^{-1/4}\right].$$
\end{proof}
\begin{lemma}\label{l:g}
Let $m$ be a totally positive element of $R$ and $f\in M(\c, \psi)$.
Let $g(z) = f(z/m)$. Then $g \parallel \gamma =
\psi_\c(a_\gamma)\epsilon_m^\ast(a_\gamma)g$ for all $\gamma \in
\Gamma_{\c, m}$.
\end{lemma}
\begin{proof}
Let $A=\begin{pmatrix}
1 & 0 \\
0 & m \\
\end{pmatrix}$ and let $\gamma' = A\gamma A^{-1}$. Note that $\gamma' \in \Gamma_\c$.
Then we have
$$[A, N(m)^{1/4}][\gamma , h(\gamma, z)] = [I,
\epsilon_m^\ast(a_\gamma)][\gamma', h(\gamma',z)][A, N(m)^{1/4}].$$
The result is now obtained by letting the above expression act on
$f$.
\end{proof}
Now, for any totally positive prime $p \mid (\c/4)$ we define the
operator $U(p) = U(p , \c)$ on $M(\c, \psi)$ by:
$$U(p) = N(p)^{-3/4} \sum_{j\in (R/p)}
\left[\begin{pmatrix}
1 & 2j\delta^{-1} \\
0 & p \\
\end{pmatrix}, N(p)^{1/4}\right]
$$
\begin{lemma}\label{l:U}
$U(p)$ takes $M(\c, \psi)$ to $M(\c, \psi\epsilon_p)$ and if $$f(z)
= \sum_{\xi \in R} a(\xi)e(\xi z /2),$$ then $$(f\mid U(p))(z) =
\sum_{\xi \in R} a(p\xi)e(\xi z /2).$$
\end{lemma}
\begin{proof}
Let $A_j=\begin{pmatrix}
1 & 2j\delta^{-1} \\
0 & 1 \\
\end{pmatrix}.$
Put $g(z) = f(z/p).$ Note that $$f\mid \left[\begin{pmatrix}
1 & 2j\delta^{-1} \\
0 & p \\
\end{pmatrix}, N(p)^{1/4}\right] = N(p)^{-1/4}g \parallel A_j$$
So for any $\gamma \in \Gamma_\c$, $$(f\mid U(p))\parallel \gamma = N(p)^{-1}
\sum_{j\in (R/p)}g\parallel (A_j\gamma).$$ But it is not hard to see
that $A_j$ varies over a set of right coset representatives of
$\Gamma_{\c,p}$ in $\Gamma_\c$; hence $A_j \gamma = \gamma'_i A_i$
with $\gamma'_i \in \Gamma_{\c,p}$ and distinct $j$ give rise to
distinct $i$. Note also that $a_{\gamma'_i} \equiv a_\gamma $ mod $
\c.$ Therefore \begin{align*} &N(p)^{-1}\sum_{j\in (R/p)}g\parallel
(A_j\gamma) \\& =N(p)^{-1} (\psi\epsilon_p)_\c(a_\gamma)\sum_{i\in
(R/p)}g\parallel A_i \\& =(\psi\epsilon_p)_\c(a_\gamma)(f\mid U(p))
\end{align*}
This proves that $U(p)$ takes $M(\c, \psi)$ to $M(\c,
\psi\epsilon_p)$.
As for the assertion about the Fourier
coefficients, note that \begin{align*}
(f\mid U(p))(z) &= N(p)^{-1}\sum_{j \in (R/p)}f(\frac{z + 2j \delta^{-1}}{p})\\ &=N(p)^{-1}\sum_{\xi \in R}a(\xi)e(\xi z/2p)\left(\sum_{j \in (R/p)} e(\frac{\xi j \delta^{-1}}{p})\right).
\end{align*}
The result now follows from the fact that $$\sum_{j \in (R/p)} e(\frac{\xi j \delta^{-1}}{p}) = \begin{cases} N(p) & \text{ if } p \mid \xi \\ 0 & \text{ otherwise }\end{cases}$$
\end{proof}
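Over $F = \mathbb{Q}$ (so $\delta = 1$) the orthogonality relation in the last display is just the standard sum of $p$-th roots of unity; a quick numerical check (our illustration only):

```python
import cmath

def root_of_unity_sum(xi, p):
    # sum over j mod p of exp(2*pi*i*xi*j/p)
    return sum(cmath.exp(2j * cmath.pi * xi * j / p) for j in range(p))

# the sum is p when p | xi and 0 otherwise
for p in (3, 5, 7):
    for xi in range(1, 3 * p):
        expected = p if xi % p == 0 else 0
        assert abs(root_of_unity_sum(xi, p) - expected) < 1e-9
```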
Finally, define the operator $K(p) = 1 - U(p, p\c)V(p)$.
\begin{lemma}
If $f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2) \in M(\c , \psi)$ then
$f\mid K(p) \in M(\c p^2, \psi)$ and equals
$\sum_{(\xi,p)=1}a(\xi)e(\xi z /2)$. Further, if $p' \nmid p\c$ then
$T_{p'^2}$ and $K(p)$ commute.
\end{lemma}
\begin{proof} This follows immediately from the above lemma and the
properties of $V(m)$ proved earlier. \end{proof}
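In terms of coefficient sequences, the composite $U(p, p\c)V(p)$ re-inserts exactly the coefficients at indices divisible by $p$, so $K(p)$ deletes them. Over $F = \mathbb{Q}$, with dictionaries standing in for Fourier expansions (our illustration only):

```python
def U(a, p):
    # (f | U(p)): coefficient of xi is a(p * xi)
    return {xi: a.get(p * xi, 0) for xi in range(1, max(a) // p + 1)}

def V(a, p):
    # (f | V(p))(z) = f(p z): coefficient of xi is a(xi / p) when p | xi
    return {xi: a.get(xi // p, 0) if xi % p == 0 else 0
            for xi in range(1, p * max(a) + 1)}

def K(a, p):
    # K(p) = 1 - U(p, p*c) V(p): kills exactly the coefficients with p | xi
    uv = V(U(a, p), p)
    return {xi: a[xi] - uv.get(xi, 0) for xi in a}

a = {1: 5, 2: -1, 3: 7, 6: 2, 9: 4, 10: 3}
assert K(a, 3) == {1: 5, 2: -1, 3: 0, 6: 0, 9: 0, 10: 3}
```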
\section{Newforms}\label{s:newforms}
\subsection{Definition of newforms and basic results}
Let $f \in M(\c, \psi)$ be an eigenvector of all but finitely many
$T_{p^2}$. We say that $f$ is an oldform if there exists a totally
positive prime $p$ dividing $\c /4$ such that one of the following
holds:
(a) $r(\psi)$ divides $(\c/p)$ and $f \in M(\c / p, \psi)$.
(b) $r(\psi \epsilon_p) \mid(\c/p)$ and $f = g\mid V(p)$ with $g\in
M(\c/p, \psi\epsilon_p)$.\\
We denote by $M^O(\c, \psi)$ the subspace of $M(\c , \psi)$ spanned
by oldforms. If $f \in M(\c, \psi)$ is an eigenvector of all but
finitely many $T_{p^2}$ and $f$ does not belong to $M^O(\c, \psi)$,
we say that $f$ is a newform of level $\c$.
The following two lemmas are proved exactly as in
\cite{Serre-Stark}. They are essentially formal consequences of all
the lemmas in the previous subsection.
\begin{lemma}
The symmetry operator and the conjugation operator take oldforms to
oldforms and newforms to newforms. \end{lemma}
\begin{lemma}
Let $h \in M^O(\c, \psi)$ be a non-zero eigenform of all but
finitely many $T_{p^2}$. Then there is a proper divisor $\c'$ of
$\c$, a character $\chi$ such that $r(\chi) \mid \c'$ and a newform
$g \in M(\c' , \psi)$ such that $h$ and $g$ have the same
eigenvalues for almost all $T_{p^2}$.
\end{lemma}
We also have
\begin{lemma}
Let $p$ be a totally positive prime, and let $f(z) = \sum_{\xi \in
R} a(\xi)e(\xi z /2) \in M(\c , \psi)$ be non-zero and assume that
$a(\xi)=0$ for all $\xi$ not divisible by $p$. Then $p$ divides
$\c/4$, $r(\psi\epsilon_p)$ divides $\c/p$ and $f = g\mid V(p)$ with
$g\in M(\c/p, \psi\epsilon_p)$.
\end{lemma}
\begin{proof}
Put $g(z) = f(z/p)$ and let $\c' = \c/p$ if $p \mid \c/4$ and $\c' =
\c$ otherwise. By Lemma~\ref{l:g} we have $$g\parallel \gamma =
(\psi\epsilon_p)_{p\c}(a_\gamma)g$$ for all $\gamma \in
\Gamma_{\c',p}$. Moreover, as $g$ has a Fourier expansion with
non-zero coefficients only in places corresponding to elements of
$R$, it follows that the above equation holds for $\gamma
=\begin{pmatrix}
1 & 2\delta^{-1} \\
0 & 1 \\
\end{pmatrix}.$ By \cite[Lemma 3.4]{Shimura1}, the equation holds for
all $\gamma \in \Gamma_{\c'}$. Since $g$ is non-zero this implies
that $r(\psi\epsilon_p) | \c'$ which is possible only if $p$
divides $\c/4$. Thus $\c' = \c/p$ and hence $g\in M(\c/p,
\psi\epsilon_p).$
\end{proof}
The above lemmas allow us to derive our next theorem, which is the
main result that enables us to recognize oldforms. The proof of the
theorem is identical to that of Theorem 1 in \cite{Serre-Stark} and
will not be given here.
\begin{theorem}
Let $m$ be a totally positive element of $R$ and $f(z) = \sum_{\xi
\in R} a(\xi)e(\xi z /2)$ be an element of $M(\c,
\psi)$ such that $a(\xi) = 0 $ for all $\xi$ with $(\xi, m)=1$.
Further assume that $f$ is an eigenform of all but finitely many
$T_{p'^2}$. Then $f \in M^O(\c , \psi)$. \end{theorem}
\subsection{Structure of newforms }
Suppose $f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2) \in M(\c, \psi)$ is
a newform. By Theorem~\ref{t:serre} there is a squarefree $t \in R$, unique
up to multiplication by $U^2$, such that $a(\xi) = 0$ if $\xi/t$ is
not a square.
The proofs of the next four lemmas are again identical to the
corresponding lemmas in \cite{Serre-Stark} and are omitted.
\begin{lemma}\label{l:normalized}
We have $t \in U^2$ and $a(1) \neq 0.$
\end{lemma}
\begin{lemma}\label{l:scalar}
Let $g \in M(\c, \psi)$ be an eigenform of all but finitely many
$T_{p^2}$, with the same eigenvalues as $f$. Then $g$ is a scalar
multiple of $f$.
\end{lemma}
Because of Lemma~\ref{l:normalized}, we can divide by $a(1)$ and
henceforth assume that $f$ is normalized, i.e. $a(1) = 1$.
\begin{lemma}\label{l:eigenform}
Let $f \in M(\c, \psi)$ be a newform. Then $f$ is an eigenform for
every $T_{p^2}$. Further, if $4p \mid \c$, then the eigenvalue $c_p
= 0$.
\end{lemma}
\begin{lemma}\label{l:square}
The level $\c$ of the newform $f$ is a square and $f \mid W(c)$ is a
multiple of $f\mid H.$
\end{lemma}
\section{L-series and the proof of the main theorem}\label{main}
\subsection{The L-series}
Let $f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2)$ be an element of
$M(\c, \psi)$. For any ideal $I$, we define $a(I) = a(\xi)$ where
$\xi$ is any totally positive generator of $I$. Because $a(\xi u^2)
= \psi_\infty (u)a(\xi)$ for any unit $u$ by \cite[Proposition
5.4]{Shimura1}, it follows that if we assume that $\psi_\infty$ is trivial,
then $a(I)$ is well-defined. In that case we define the $L$-series
$L(s,f)$ by
$$L(s,f) = \sum \frac{a(I)}{N(I)^s}$$ where the sum is taken over
all non-zero ideals of $R$.
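For orientation, take $F = \mathbb Q$ and $f = \theta$, the classical theta series, so that $a(\xi) = 2$ when $\xi$ is a non-zero square and $a(\xi) = 0$ otherwise. Then
$$L(s, \theta) = \sum_{m = 1}^{\infty} \frac{2}{(m^2)^s} = 2\zeta(2s),$$
and the simple pole of $\zeta(2s)$ at $s = 1/2$ matches the pole predicted below for forms that are not cusp forms.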
\begin{theorem}\label{t:lseries}
Suppose $f\in M(\c, \psi)$ where $\psi_\infty$ is trivial and assume
that $\c$ is the square of an ideal. Then $L(s,f)$ can be
analytically continued to the whole complex plane, holomorphic
everywhere except for a simple pole at $s = 1/2$ when $f$ is not a cusp form. Moreover, if
$$\Lambda(s ,f) = (2\pi)^{-ns} \Gamma(s)^n N(\delta)^s
N(\c)^{(s/2)}L(s,f)$$ then the following relation holds:
$$\Lambda(s,f) = \Lambda(1/2 -s , g)$$ where $g = f\mid W(c)$.
\end{theorem}
\begin{proof}
Let $$f(z) = \sum_{\xi \in R} a(\xi)e(\xi z /2)$$ and
$$g(z)= \sum_{\xi \in R} b(\xi)e(\xi z /2).$$ Also let $\c =
(c_1)^2$ where $c_1$ is totally positive (we can do this because $F$
has narrow class number one).
Put $f_1 = f - a(0)$ , $g_1 = g - b(0)$. Recall that
$F_{\infty}^\circ$ denotes $\prod_{v \in \infty } F_v^{+}$ which can
be naturally identified with $(\mathbb R^+)^n$. Thus there is an action of
$U^2$ on $(\mathbb R^+)^n$ and for later purposes it is important to note
that this action preserves the norm of an element. Now consider the
coset space $(\mathbb R^+)^n/U^2$. Define the integral
\begin{equation}\label{e:integral} \Phi(s) =\int_{(\mathbb R^+)^n/U^2}f_1(\frac{2\i
y}{c_1\delta})\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}.\end{equation}
We first observe that this integral is convergent for all $s$ with
$\operatorname{Re}(s)>1/2$. Indeed, by the unit theorem, we may choose the
fundamental domain $F_{\infty}^\circ/U^2$ such that the ratios
$y_i/y_j$ are all bounded, and hence all the $y_j$ go to zero or
infinity together. As the $y_j \rightarrow \infty$ the rapid decay
of $f_1$ assures convergence. As they go to zero, we use the
following equation, which follows easily from $g = f\mid W(c)$:
\begin{equation}\label{e:fg}
f_1(\frac{2\i }{c_1\delta y}) = \prod y_j^{1/2}g_1(\frac{2\i
y}{c_1\delta}) \quad + b(0)\prod y_j^{1/2} \quad - a(0)
\end{equation}
to obtain the same result.
Now, write the right side of (\ref{e:integral}) as
\begin{align*}
& \quad \quad \sum_{I\neq 0} a(I) \int_{(\mathbb R^+)^n/U^2} \sum_{(\alpha)
= I, \alpha
>>0}e^{-2\pi \operatorname{tr}(\alpha y/(c_1\delta))}
\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}.\\
& =\sum_{\alpha \in R^+/U^2}\sum_{\epsilon \in
U^2}a(\alpha\epsilon)\int_{(\mathbb R^+)^n/U^2}e^{-2\pi \operatorname{tr}(\alpha\epsilon
y/(c_1\delta))}\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}.\\
& =\sum_{\alpha \in R^+/U^2}a(\alpha)\int_{(\mathbb R^+)^n}e^{-2\pi
\operatorname{tr}(\alpha y/(c_1\delta))}\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}.\\
&
=\sum_{\alpha \in
R^+/U^2}a(\alpha)\prod_{j=1}^n[(2\pi)^{-s}(c_1^{(j)}\delta^{(j)}/\alpha^{(j)})^s\Gamma(s)]\\
&=(2\pi)^{-ns}\Gamma(s)^n N(\delta)^s N(\c)^{s/2}L(s,f)\\
&=\Lambda(s,f).
\end{align*}
On the other hand, we have,
\begin{align*}
& \quad \int_{y\in (\mathbb R^+)^n/U^2, N(y)<1}f_1(\frac{2\i
y}{c_1\delta})\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}\\
& =\int_{y\in (\mathbb R^+)^n/U^2, N(y)<1}(f(\frac{2\i
y}{c_1\delta})-a_0)\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}\\
&=-a_0\int_{y\in (\mathbb R^+)^n/U^2,
N(y)<1}\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j} + \int_{y\in
(\mathbb R^+)^n/U^2,
N(y)>1}f(\frac{2\i}{yc_1\delta})\prod_{j=1}^{n}y_j^{-s}\frac{dy_j}{y_j}\\
&= -a_0\int_{y\in (\mathbb R^+)^n/U^2,
N(y)<1}\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j} + \int_{y\in
(\mathbb R^+)^n/U^2, N(y)>1}(g(\frac{2\i y}{c_1\delta}) -
b_0)\prod_{j=1}^{n}y_j^{1/2 - s}\frac{dy_j}{y_j} \\
&\quad +b_0\int_{y\in (\mathbb R^+)^n/U^2,
N(y)<1}\prod_{j=1}^{n}y_j^{s-1/2}\frac{dy_j}{y_j}\\
&= -\frac{a_0C}{s} - \frac{b_0C}{1/2 - s} + \int_{y\in
(\mathbb R^+)^n/U^2, N(y)>1}g_1(\frac{2\i
y}{c_1\delta})\prod_{j=1}^{n}y_j^{(1/2 - s)}\frac{dy_j}{y_j}.
\end{align*}
for some constant $C$. Note that in the last step, we have used the
Dirichlet unit theorem.
Hence we have shown that
\begin{align*}
(5) \quad \quad \quad \quad \quad \quad \Phi(s) + \frac{a_0C}{s}
+ \frac{b_0C}{(1/2 -s)} = &\int_{y\in (\mathbb R^+)^n/U^2,
N(y)>1}[f_1(\frac{2\i
y}{c_1\delta})\prod_{j=1}^{n}y_j^s\frac{dy_j}{y_j}\\
& \quad + g_1(\frac{2\i y}{c_1\delta})\prod_{j=1}^{n}y_j^{(1/2 -
s)}\frac{dy_j}{y_j} ].
\end{align*}
The right side consists of integrals over regions that are bounded
away from 0 (by the Dirichlet unit theorem) and hence the rapid
decay of $f_1$ and $g_1$ near infinity imply that these integrals
converge for all $s$. This proves that $\Phi(s)$ is a meromorphic
function with simple poles at $0, 1/2$ if $f$ is not a cusp form. As
a corollary, we obtain that $L(s,f)$ can be analytically continued
to the entire complex plane (with a simple pole at $1/2$ if $f$ is
not a cusp form).
To see the functional equation just exchange the roles of $f$ and
$g$ in (5).
\end{proof}
We call a prime ideal $\mathfrak p$ of $R$ \emph{non-split} if $\mathfrak p$ is the
unique prime ideal of $R$ that lies above $\mathfrak p \cap \mathbb Z$. We call an
ideal $I$ of $R$ non-split if all its prime divisors are non-split.
\begin{theorem}\label{t:newtheta}
Suppose $\c$ is non-split. Let $f$ be a normalized newform in $M(\c,
\psi)$ with $\psi_\infty$ trivial. Then $\c = 4r(\psi)^2$ and $f =
\frac{1}{2}\theta_{\psi}$.
\end{theorem}
\begin{proof} By Theorem~\ref{t:serre}, Lemma~\ref{l:normalized} and Lemma~\ref{l:eigenform}, we have the
product decomposition
$$L(s,f) = \prod_{\mathfrak p \mid \c}\left(1-\frac{c_p}{N(\mathfrak p)^{2s}}\right)^{-1}
\prod_{\mathfrak p \nmid
\c}\left(1-\frac{\psi^\ast(\mathfrak p)}{N(\mathfrak p)^{2s}}\right)^{-1}$$
Furthermore by Lemma~\ref{l:square} and Theorem~\ref{t:lseries} we
have
$$(2\pi)^{-ns} \Gamma(s)^n L(s,f) = C_1(2\pi)^{-n(1/2 -s)}
(\Gamma(1/2 - s))^n N(\c\delta^2)^{1/2 -s}L(1/2 -s,Hf)$$ for some
constant $C_1$.
Consider, on the other hand, the function $L(2s, \psi)$ defined by
$$L(2s, \psi) = \sum_{I \neq 0}\frac{\psi^\ast(I)}{N(I)^{2s}} = \prod_{\mathfrak p \nmid r(\psi)}\left(1 -
\frac{\psi^\ast(\mathfrak p)}{N(\mathfrak p)^{2s}}\right)^{-1}$$
Then, from (\cite{Bump}, p. 78-79) we know that $$(2\pi)^{-ns}
\Gamma(s)^n L(2s, \psi) = C_2 (2\pi)^{-n(1/2 -s)}N(4 r(\psi)^2
\delta^2)^{1/2 -s}(\Gamma(1/2 -s))^n L(1- 2s ,\overline{ \psi})$$
Dividing these equations we have $$\prod_{\mathfrak p \in S}\left(\frac{1-c_p
N(\mathfrak p)^{-2s}}{1-\psi^\ast(\mathfrak p)N(\mathfrak p)^{-2s}}\right) = C_3 N(\c /
4r(\psi)^2)^{-(1/2-s)}\prod_{\mathfrak p \in S}\left(\frac{1-\overline{c_p}
N(\mathfrak p)^{2s - 1}}{1-\overline{\psi^\ast(\mathfrak p)}N(\mathfrak p)^{2s - 1}}\right)$$ where
$S$ is the set of prime ideals $\mathfrak p$ for which $c_p \neq
\psi^\ast(\mathfrak p)$, $\mathfrak p \mid \c$.
If, for some $\mathfrak p \in S$, we have $\psi^\ast(\mathfrak p) \neq 0$, then the
left side of the above equation has an infinity of poles on the line
$Re(s) = 0$, only finitely many of which can appear on the right
side. This can be seen as follows: if $\mathfrak p$ is a prime in $S$ then by
assumption, it is the only prime with that norm and so we can find
infinitely many $s$ such that $\psi^\ast(\mathfrak p)N(\mathfrak p)^{-2s} = 1$ but
none of the expressions $c_{p'} N(\mathfrak p')^{-2s}, \overline{c_{p'}}
N(\mathfrak p')^{2s - 1}, \overline{\psi^\ast(\mathfrak p')}N(\mathfrak p')^{2s - 1}$ equals 1
for any $\mathfrak p' \in S, \mathfrak p' \neq \mathfrak p.$
Hence $\mathfrak p \in S$ implies $\psi^\ast(\mathfrak p) = 0$, (in other words $\mathfrak p
\mid r(\psi)$) and hence $c_p \neq 0$ since $c_p \neq
\psi^\ast(\mathfrak p)$. But $c_p = 0$ if $4\mathfrak p \mid \c$. It follows that
either $S$ is empty or consists of the unique prime that lies above
$2$. Meanwhile, the equation simplifies to
$$\prod_{\mathfrak p \in S}(1-c_p N(\mathfrak p)^{-2s}) = C_4 N(\c \mathbf m^2 /
4r(\psi)^2)^s\prod_{\mathfrak p \in S}(1-c_p' N(\mathfrak p)^{-2s})$$ where $c_p' =
N(\mathfrak p)/\overline{c_p}$ and $\mathbf m = \prod_{\mathfrak p \in S}\mathfrak p$.
We claim that $S$ is empty. Suppose not, then $S = \{ \mathfrak p \}$ where
$\mathfrak p$ is the unique prime above $2$. Then, if $c_p \neq c_{p'}$ we
can find a zero of the left side of the above identity that is not a
zero of the right side. Hence we must have $c_p = c_{p'}$. This
implies that $|c_p|^2 = N(\mathfrak p)$. But that contradicts
Corollary~\ref{cbound2}.
Thus $S$ is empty and we have $\c = 4r(\psi)^2$, $L(2s , \psi) = L(s
, f)$. This implies that for any nonzero ideal $I$ which has a
common factor with $\c$ we have that $a(I) = \psi^\ast(L)$ if $I =
L^2$ for some ideal $L$ coprime to $r(\psi)$ and $a(I) = 0$ in all
other cases. This, coupled with Theorem~\ref{t:serre}, shows that $f$
and $\frac{1}{2}\theta_\psi$ have the same Fourier coefficients at $\xi$
for all $\xi \neq 0$; hence they also have the same constant
coefficient. Thus $f = \frac{1}{2}\theta_\psi.$ \end{proof}
\subsection{Proof of Theorem~\ref{t:main}}
\begin{proof} We break the proof into two parts:
\begin{case}The $\theta_{\psi, t}$ are linearly independent. \end{case}
Since $t$ and $\psi$ determine $\chi$, each $t$ occurs as the second
entry of at most one $(\chi, t)$ in $\Omega(\c, \psi)$. Suppose we
have
$$\lambda_1\theta_{\psi_1, t_1} + \lambda_2\theta_{\psi_2, t_2}
+ \cdots +\lambda_m\theta_{\psi_m, t_m} = 0$$ with the number of primes
in the prime decomposition of $t_1$ being less than or equal to that
for the other $t_i$ and $\lambda_i \neq 0$ for each $i$. Then the
coefficient at place $t_1$ is $2\lambda_1$ for $\theta_{\psi_1,
t_1}$ and 0 for the others, thus showing that $\lambda_1 = 0$, a
contradiction.
\begin{case} The $\theta_{\psi, t}$
span $M(\c, \psi)$. \end{case}
We use induction on the number of (not necessarily distinct) prime
factors of $\c$. By Lemma~\ref{l:basis}, it suffices to show that any
eigenform $f$ of all the $T_{p^2}, p\nmid \c$ is a linear
combination of the $\theta_{\chi,t}$ with $(\chi, t) \in \Omega(\c,
\psi)$. If $f$ is a newform, this follows from
Theorem~\ref{t:newtheta}. If not, we may assume $f$ is an oldform.
Now we have two cases.
In the first case, $r(\psi)$ divides $\c/ p$ and $f\in M(\c/p,
\psi)$. Since $\c/ p$ is also non-split the induction hypothesis
shows that $f$ is a linear combination of the $\theta_{\chi, t}$
with $(\chi, t)$ in $\Omega(\c/p, \psi)$ and hence in $\Omega(\c ,
\psi)$.
In the second case, $r(\psi\epsilon_p)$ divides $\c/p$ and $f =
V(p)g$ with $g \in M(\c/p, \psi\epsilon_p)$. Because $\c /p$ is
non-split, and $(\psi \epsilon_p)$ is totally even because $\psi$ is,
the induction hypothesis shows that $g$ is a linear combination of
the $\theta_{\chi,t}$ with $(\chi,t) \in \Omega(\c/p,
\psi\epsilon_p)$ and hence $f$ is a linear combination of the
$\theta_{\chi, tp}$, with $(\chi, tp) \in \Omega(\c , \psi)$. This
completes the proof.
\end{proof}
\subsection{Examples}
In this section we specialize to the case $F= \mathbb Q(\sqrt{2})$, and $\c
= (\sqrt{2})^n$. For brevity, let $q = \sqrt{2}$. Note that the ring
of integers $R$ is simply $\mathbb Z[\sqrt{2}]$ and the unit group is $\langle -1\rangle
\times \langle 1+q\rangle$. Note also that $F$ has narrow class number 1, and the
prime $2$ is ramified in $F$, which allows us to apply the theorems
of the last section with $\c = (q)^n$.
The theorem will apply to any Hecke character $\psi$ of $F$ with
$\psi_\infty$ trivial and such that $r(\psi)$ divides $q^n$. For
simplicity we only find the quadratic (of order 2) Hecke characters
of this type, and give the explicit bases for each of the
corresponding spaces of modular forms. It suffices to find the
quadratic Dirichlet characters mod $q^n$ that are trivial on units.
For that we need to analyze the structure of the groups
$(R/q^n)^\times$.
\begin{proposition}\label{p:anal}
Let $U_n$ denote the multiplicative group $(R/q^n)^\times$. Then, if
$n \leq 4$, $U_n$ is generated by the units of $R$ and hence there
is no nontrivial even Hecke character with conductor dividing $q^n$.
On the other hand, if $n>4$, the following hold: \begin{enumerate}
\item $U_n$ is the direct sum of the cyclic groups generated by $(1+q)
, (-1)$ and $(3+4q)$.
\item $(1+q)$ has order $2^{\lfloor \frac{n}{2}\rfloor}$ while $3+4q$
has order $2^{\lfloor \frac{n-3}{2}\rfloor}$ in the group
$(R/q^n)^\times$.
\item Let $k=\lfloor \frac{n}{2}\rfloor$, $l= \lfloor
\frac{n-3}{2}\rfloor$. Then $U_n$ is isomorphic to $(\mathbb Z/2^k\mathbb Z)
\oplus (\mathbb Z/2^l\mathbb Z) \oplus (\mathbb Z/2\mathbb Z)$
\end{enumerate}
\end{proposition}
\begin{proof} The case $n \leq 4$ can be checked easily by hand.
For the case $n \geq 5$, first observe that the cardinality of $U_n$
is $2^{n-1}$. This follows from the fact that the elements of
$(R/q^n)$ can be written as $(a+bq)$ with $a \in (\mathbb Z/2^{\lceil n/2\rceil}\mathbb Z)$ and $b
\in (\mathbb Z/2^{\lfloor n/2\rfloor}\mathbb Z)$ (so that the cardinality is $2^n = N(q^n)$),
and such an element is invertible iff $a$ is odd.
We first prove (2). The same method is used to calculate the orders
of $3+4q$ and $1+q$; the idea is to write $(a+bq)^{2^{k+1}} - 1 =
((a+bq)^{2^k} - 1)((a+bq)^{2^k} + 1)$. If for some $n>2$, $q^n$
\emph{exactly} divides $(a+bq)^{2^k} - 1$, then $q^{n+2}$ exactly
divides $(a+bq)^{2^{k+1}} - 1$. Since $q^5$ exactly divides
$(1+q)^4 -1$ and $q^6$ exactly divides $(3+4q)^2 -1$, the result
follows.
Now, it is easy to see that the subgroups generated by $(1+q)$ and
$(3+4q)$ have trivial intersection, and further, that $(-1)$ does
not lie in the subgroup generated by these two elements. Thus (1)
follows by comparing cardinalities, and clearly (3) is a direct
consequence of (1) and (2).
\end{proof}
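The orders claimed in part (2) can also be checked by brute force. The sketch below is our own bookkeeping: we represent the class of $a+b\sqrt2$ in $R/q^n$ by $a \bmod 2^{\lceil n/2\rceil}$ and $b \bmod 2^{\lfloor n/2\rfloor}$ (an assumption consistent with the cardinality count in the proof), and computes orders by repeated multiplication:

```python
# Brute-force check of the orders of 1+q and 3+4q in (R/q^n)^x, q = sqrt(2).
# Representation assumption: the class of a+b*q in R/q^n is
# (a mod 2^ceil(n/2), b mod 2^floor(n/2)); its size 2^n matches |R/q^n|.
def moduli(n):
    m = n // 2
    return (2 ** (m + n % 2), 2 ** m)

def mul(x, y, mod):
    # (a+bq)(c+dq) = (ac+2bd) + (ad+bc)q, reduced mod q^n
    (a, b), (c, d) = x, y
    return ((a * c + 2 * b * d) % mod[0], (a * d + b * c) % mod[1])

def order(x, n):
    mod = moduli(n)
    y, k = (x[0] % mod[0], x[1] % mod[1]), 1
    while y != (1, 0):
        y, k = mul(y, x, mod), k + 1
    return k

for n in range(5, 12):
    assert order((1, 1), n) == 2 ** (n // 2)          # order of 1 + q
    assert order((3, 4), n) == 2 ** ((n - 3) // 2)    # order of 3 + 4q
print("orders of 1+q and 3+4q verified for n = 5,...,11")
```

The asserted exponents are exactly those of part (2) of the Proposition.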
\begin{corollary}
Let $\phi$ denote the Hecke character $\epsilon_u$ where $u=2+q$.
Then $r(\phi) = (q^5)$ and $\phi$ is the unique non-trivial Hecke
character with trivial infinity component that is quadratic (of order 2)
and whose conductor divides $q^n$.
\end{corollary}
\begin{proof}
Observe that any Dirichlet character mod $q^n$ that is
trivial on the units of $R$ must be, by the previous proposition, a
character on the group generated by $3+4q$. Furthermore, if this
character is quadratic, it must be either the trivial character or
the character that takes value $-1$ on $(3+4q)$; it is not hard to
see (since $(3+4q)$ is inert in the extension $F(\sqrt{u})/F$) that
this corresponds to the Hecke character $\epsilon_u$. \end{proof}
This leads us to the following theorem.
\begin{theorem}
Let $n \geq 5$, $\c = (q^n)$ , $u = 2+q$. Let $\phi$ denote the
Hecke character $\epsilon_u$ and $\mathbf{1}$ denote the trivial
character. Then: \begin{enumerate}
\item A basis for $M(\c, \mathbf{1})$ comprises the functions $\{
\theta_{\mathbf{1}, 2^k} , 0 \leq k \leq \lfloor \frac{n-4}{2}
\rfloor$; $\theta_{\phi, 2^ku} , 0 \leq k \leq \lfloor
\frac{n-15}{2} \rfloor \}$. Thus the dimension of the space $M(\c,
\mathbf{1})$ is $( \lfloor \frac{n-2}{2} \rfloor + \max\{\lfloor
\frac{n-13}{2} \rfloor, 0\} )$.
\item A basis for $M(\c, \phi)$ comprises the functions $\{
\theta_{\mathbf{1}, 2^ku} , 0 \leq k \leq \lfloor \frac{n-5}{2}
\rfloor$; $\theta_{\phi, 2^k} , 0 \leq k \leq \lfloor \frac{n-14}{2}
\rfloor \}$. Thus the dimension of the space $M(\c, \phi)$ is
$( \lfloor \frac{n-3}{2} \rfloor + \max\{\lfloor \frac{n-12}{2}
\rfloor, 0\} )$.
\end{enumerate}
\end{theorem}
\begin{proof} This follows from Theorem~\ref{t:main} and the above Corollary.
\end{proof}
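The dimension formulas in the theorem are simply counts of the indices $k$ in the listed bases; the floor-function bookkeeping can be verified mechanically. A sketch (our own script, with the convention that an index range with negative upper bound is empty):

```python
# Count the basis elements {theta : 0 <= k <= floor(u/2)} (empty when u < 0)
# and compare with the closed-form dimensions stated in the theorem.
def count(u):
    return u // 2 + 1 if u >= 0 else 0

for n in range(5, 40):
    dim_triv = count(n - 4) + count(n - 15)   # basis of M(c, 1)
    dim_phi = count(n - 5) + count(n - 14)    # basis of M(c, phi)
    assert dim_triv == (n - 2) // 2 + max((n - 13) // 2, 0)
    assert dim_phi == (n - 3) // 2 + max((n - 12) // 2, 0)
print("dimension formulas verified for 5 <= n < 40")
```

Python's floor division agrees with the mathematical floor for negative arguments, so the `max(..., 0)` clauses are exercised exactly when the second family of theta series is empty.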
\section{Potential applications }\label{s:app}
In this section we put our work in context by mentioning a couple of potential applications which we hope to take up elsewhere.
\subsection{The congruent number problem}
An ancient Diophantine problem (the so-called congruent number problem) asks for a good criterion to determine whether an integer is the area of a right-angled triangle with rational sides. Such integers are referred to as congruent numbers.
This was solved by Tunnell~\cite{Tunnell}. Tunnell's work begins with the observation that $n$ is congruent if and only if the rank of the elliptic curve $E : y^2 = x^3 - n^2x$ over $\mathbb Q$ is non-zero. This is easy to prove by elementary number theory. Now, by the Birch--Swinnerton-Dyer conjecture (one direction of which is known in this case by the work of Coates--Wiles) the above condition is equivalent to the value of the $L$-function $L(E,s)$ at 1 (the central value) being equal to 0. However, it is not hard to show that $L(E,s)$ equals $L(\phi \otimes \epsilon_n,s)$ where $\phi$ is the unique normalized newform of weight 2, level 32 and trivial character while $\epsilon_n$ is the quadratic character associated to $\mathbb Q(\sqrt{n})$. By work of Waldspurger the value $L(\phi\otimes\epsilon_n,1)$ is related to the value $c_n^2$ where $c_n$ is the $n$th Fourier coefficient of the weight $3/2$ modular form that maps to $\phi$ under the Shimura correspondence.
Tunnell's main contribution to the problem was to find explicitly the weight $3/2$ form above. Using the Serre-Stark theorem, he was able to write this form as a product of an explicit theta-series and a standard weight 1 form. As a result, it was possible to express $c_n^2$ and consequently the vanishing condition on $L(E,1)$ in a simple combinatorial form.
One may ask the same question over our totally real number field $F$. We call an element $\alpha \in R$, $F$-congruent if there exist positive $X, Y, Z \in F$ such that $X^2 + Y^2 = Z^2$ and $XY = 2 \alpha$, with possibly a signature restriction. In the case of real quadratic fields, things work out nicely, though with a slight modification~\cite{Achimescu}. Thus we hope that one can resolve the congruent number problem over $F$ in a manner similar to what was achieved by Tunnell over $\mathbb Q$. One of the crucial points is the construction of an appropriate weight $3/2$ Hilbert modular form; we hope to achieve this by using our basis of weight $1/2$ forms and multiplying it by an appropriate weight $1$ Hilbert modular form.
\subsection{Construction of interesting weight $1$ forms} There are not many explicit examples that illustrate the conjectural correspondence between Galois representations and weight $1$ forms. Buhler~\cite{Buhler} was able to construct a (classical) modular form of level 800 that corresponds to an icosahedral Galois representation. We believe that our main theorem may be useful in constructing interesting (that is, not a base change and non-dihedral) Hilbert modular forms of a specified level whose $L$-function matches a (possibly icosahedral) Galois representation of that level.
Given a Hilbert modular form $f$ of weight $1$ and two forms $g_1, g_2$ of weight $1/2$, $fg_1g_2$ is a form of weight 2. This suggests the following procedure. We fix a form $F$ of weight $2$ and level $N$ and consider the functions $F/(g_1g_2)$ where $(g_1,g_2)$ varies over pairs of basis forms of weight $1/2$, as given by our main theorem. It seems likely that this method will lead to the construction of explicit, interesting examples of weight 1 Hilbert modular forms that correspond to Galois representations.
\section{Introduction}
The statistical mechanics of string networks has been the object of numerous studies because
of the importance of strings or string-like entities across all energy scales.
In general, either because of the large number of configurational
microstates or because of the large number of excited quantum
states that such a network possesses, the networks undergo
transitions in which, as temperatures rise, strings proliferate.
In the language of configurational states such a transition is
termed a Feynman-Shockley transition, after Feynman's description
of the $\lambda$-transition of $^4$He in terms of vortex
production \cite{Kleinert}. From the viewpoint of counting excited
states it is called a Hagedorn transition \cite{Hagedorn}.
[Henceforth we follow the common usage of {\it Hagedorn
transition} to apply to both cases, which are similar in structure
in many ways.]
Specifically, in QCD, the sudden proliferation of colour flux
tubes (the original dual hadronic strings) explains quark
deconfinement as temperature rises (see, for example,
\cite{Patel1,Patel2,MR}). In cosmology at the GUT scale, where
cosmic strings arise in all reasonable supersymmetric models
incorporating electroweak unification \cite{Mairi}, the
statistical mechanics of cosmic string networks has been
investigated in order to understand their properties at formation
and their late time scaling solutions, crucial for determining
their cosmological consequences \cite{EdRay2,EdRay3}. For
fundamental strings there has been substantial work on exploring
the effects of such transitions on the extremely early universe
\cite{AtickWitten,Mitchell1,Mitchell2,AlbrechtTurok,
Sakellariadou3}.
More recently, attention has turned again to fundamental string
networks, following new developments in superstring theory.
Indeed, a network of cosmic superstrings is expected to form when
a brane and anti-brane annihilate at the end of string-motivated
brane inflation models. The network contains fundamental F-strings, Dirichlet D-strings,
and $(p,q)$-strings which are bound states of $p$ F-strings and
$q$ D-strings
\cite{Copeland,m2,m3,Firouzjahi:2006vp}, meeting
at Y-junctions (or vertices).
The presence of Y-junctions, as well as the spectrum of tensions of the strings,
is a key characteristic of such
networks and leads to more complicated dynamics. Much work has
been done to determine how $(p,q)$-like string networks evolve, both by analytic methods
and numerical simulations, with particular regard to scaling
solutions, their effect on the CMB as well as other observable
consequences
\cite{Tye:2005fn,Sakellariadou:2004wq,Avgoustidis:2005nv,Copeland:2005cy,Saffin:2005cs,
Hindmarsh:2006qn,us1,Wells,Hassan,Jon,Arrtu,Siemens}.
Other than being stable against break-up, such strings differ from
earlier superstrings in that, due to the warping of space-time,
their tensions are not of the Planck scale but many orders of
magnitude smaller. As a result any Hagedorn transitions may even
arise later than the reheating of the universe, and hence be of
direct relevance for astrophysics. A necessary first step in
seeing whether this is the case is to determine the phase diagram
for the Hagedorn transitions of a network with more than one type
of string, and this is the goal of the present paper.
Our approach is to attempt to map the thermodynamics of string
networks with junctions into the thermodynamics of a set of
interacting dual fields, whereby the Hagedorn transitions of the
strings become conventional transitions of the fields, a situation
with which we are familiar. One can imagine several ways to
attempt this. We adopt the simplest, generalising the methods for
describing quark deconfinement mentioned above (with its flux-tube
Y-junctions) to something more like $(p,q)$-strings.
Hence we investigate the {\it equilibrium
statistical mechanics} of cosmic superstring networks using
methods motivated by \cite{Patel1,Patel2,MR}. However, it
is important to note that there is at least one major difference between
cosmic superstrings and QCD fluxlines: with multiple tensions
(from different string types), we expect cosmic superstring
networks to show multiple Hagedorn transitions.
In subsequent sections we derive and analyse the phase structure
of a {\it three}-string model with junctions. This is a reduced
model of realistic cosmic superstrings, for which $(p,q)\in
\mathbb{Z} \times \mathbb{Z}$ form a doubly-infinite family. Since
string tension (or energy/unit length) increases with $p, q$, all
but low values will be suppressed at high temperature. We
therefore adopt the simplest non-trivial scheme, taking the two
lightest strings and their bound-state (and anti-strings), all
which have different tensions $\sigma_\alpha$, $\alpha=(1,2,3)$. For
example, depending on parameters, these could be the $(1,0)$,
$(0,1)$ and $(1,1)$ strings.
We show that as the system is heated,
the lightest tension strings first undergo the Hagedorn
transition, despite the presence of Y-junctions. Conversely, at
low temperatures, only the lightest strings remain, before they
disappear into loops. Our results are summarized in figure
\ref{fig:1}.
\begin{figure}
\centerline{\includegraphics[width=0.42\textwidth]{figure1.pdf}}
\caption{Different critical temperatures for our simplified model
of cosmic superstrings (with tensions $\sigma_1 < \sigma_2 <
\sigma_3$) with Y-junctions. The lower Hagedorn temperature $T_1$
is determined by $\sigma_1$ whereas the higher Hagedorn
temperature $T_*$ is determined by all the $\sigma_{\alpha}$
$(\alpha=1,2,3)$. $n_v$ denotes the density of vertices (or
Y-junctions) joining infinite strings at temperature $T$.}
\label{fig:1}
\end{figure}
These conclusions may have important consequences for $(p,q)$
string networks in that, if only the lightest strings remain after
a non-adiabatic quench, no significant r\^ole would be played by
the junctions whose properties have been studied so extensively.
The dynamics would then be that of a single string type with no
junctions (though there may be loops containing strings of
different types; as explained below, our analysis is limited to
infinite strings). This is not an idle proposition in that,
although our analysis in this paper assumes adiabatic behaviour,
we have learned elsewhere that universality classes of equilibrium
systems at their adiabatic transitions can become universality
classes of non-equilibrium systems at fast quenches \cite{Kibble}:
these points will be the content of a separate paper. Other works \cite{us1,
Avgoustidis:2005nv,Jon,Tye:2005fn} based on studying the dynamics of
string networks with junctions also suggest that at late times only the lightest strings may remain.
The paper is set up as follows. In section \ref{sec:underst} we
first review some relevant aspects of string statistical mechanics
in the simplest case: {\it one type} of string and {\it no} junctions. In particular,
the duality between strings and fields is discussed.
In section \ref{sec:XY} we still consider only strings of a single tension and type, but
now these are allowed to meet at a junction. This section paves the way for section \ref{sec:3}
in which we consider the general case of strings of three different tensions $\sigma_\alpha$ and types,
meeting at junctions.
As explained in section \ref{sec:XY} there is significant complexity involved in adding junctions
when discussing string statistical mechanics, and hence this section is central to the development of the paper.
Furthermore, technically, junctions can be introduced in different ways, and as a result we are forced to discuss in detail
two specific models (`bosonic' and `fermionic') to do so.
While bosonic models are closer to the physical system we eventually wish to describe (and discussed in
section \ref{sec:underst} when there are no junctions), only fermionic models can be generalized to the
three-string case of section \ref{sec:3}. At the end of section \ref{sec:XY} we compare these two models,
and conclude that they both essentially agree in their phase structure. This justifies the use of fermionic
models in section \ref{sec:3} where the analysis resulting in the conclusions
drawn in figure \ref{fig:1} is straightforward. Finally, we also show, following ideas from
QCD, that the string system with junctions can be rewritten as a generalised
spin model (XY model).
\section{Understanding the Hagedorn transition}
\label{sec:underst}
In this section we
discuss the nature of the Hagedorn
transition for strings of a {\it single} type, with tension $\sigma$, and {\it no} Y-junctions.
As mentioned in the introduction, we proceed by using the
duality between string configurations and fields to write the
partition function for the string network as that of an effective
field theory \cite{Stone}. As a result, the Hagedorn transition
can be mapped onto a transition of the effective field.
Furthermore, provided the right questions are asked, one can work
with the canonical rather than the microcanonical ensemble.
Consider a classical {\it static} picture of {\it
non-interacting} strings in $D$-spatial dimensions. These are taken to lie on a
hypercubic lattice of spacing $a$, and the energy
$E$ of the strings only depends on the total string length $L$
through $E = \sigma L$. Near the critical temperature,
correlations are large and the details of the lattice structure
should be unimportant. We also assume that the network can be
thought of as a set of random walks.
Now recall
the {\it duality} between (non-oriented) Brownian paths in $D$ spatial dimensions and a
scalar field $\varphi$ of mass $m$, as exemplified by the identity
%
\begin{eqnarray}
&&\langle\varphi({\bf x})\varphi({\bf 0})\rangle =
\nonumber
\\
&& \int_0^{\infty} d\tau\,e^{-\tau m^2}\int^{{\bf x}({\tau})
= {\bf x}}_{{\bf x}({0}) = {\bf 0}}{\cal D}{\bf x}\,\exp\bigg[-\int_0^{\tau}d\tau '\,
\frac{1}{4}\bigg(\frac{d{\bf x}}{d\tau'}\bigg)^2\bigg].\nonumber
\end{eqnarray}
%
This identity can be used to construct an effective {\it action} (or, more accurately, a free
energy) for the string partition function $Z$ at temperature $T =
\beta^{-1}$ in terms of $\varphi$ as \cite{Stone}
%
\begin{equation}
Z = \int{\cal D}\varphi\,\exp\bigg[-\int dx^D\bigg(\frac{a^2}{4D}(\nabla\varphi)^2
+\frac{1}{2}M^2\varphi^2\bigg)\bigg],
\label{dualZ}
\end{equation}
%
where the mass term is
%
\begin{eqnarray}
M^2 = \sigma a\beta \bigg(1 -
\frac{T}{T_H}\bigg).\nonumber
\end{eqnarray}
%
The Hagedorn transition temperature, $T_H = \beta_H^{-1}$, is the
solution to
\begin{equation}
J(\beta)\equiv e^{-\beta\sigma a} = \frac{1}{2D}.\label{J}
\end{equation}
%
The normalisation of $\varphi$ has been chosen here so that $M^2$ is dimensionless.
(Note that one would have recovered the same temperature $T_H$ for a gas of strings by
counting single-loop configurations on the lattice \cite{Patel1,EdRay3}).
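Solving (\ref{J}) explicitly gives $T_H = \sigma a/\ln(2D)$, and the mass term $M^2$ changes sign precisely at $T_H$. A small numerical illustration (the values of $\sigma$, $a$ and $D$ below are arbitrary, chosen only for the sketch):

```python
import math

# Illustration: the Hagedorn temperature solving e^{-beta*sigma*a} = 1/(2D),
# and the sign change of M^2 = sigma*a*beta*(1 - T/T_H) across it.
def hagedorn_T(sigma, a, D):
    return sigma * a / math.log(2 * D)

def M2(T, sigma, a, D):
    return (sigma * a / T) * (1 - T / hagedorn_T(sigma, a, D))

sigma, a, D = 1.0, 1.0, 3
TH = hagedorn_T(sigma, a, D)
print(math.isclose(math.exp(-sigma * a / TH), 1 / (2 * D)))  # J(beta_H) = 1/2D
print(M2(0.9 * TH, sigma, a, D) > 0, M2(1.1 * TH, sigma, a, D) < 0)
```

Below $T_H$ the dual field is massive ($M^2 > 0$); above it the field is tachyonic, which is the field-theory signature of string proliferation discussed next.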
It is important to observe that {\it below} the Hagedorn transition
$T<T_H$, $\varphi$ is a massive free field with $M^2$ {\it
positive}. For $T>T_H$, with $M^2<0$, it describes a {\it
tachyon}. Here fluctuations are large and for this reason the
canonical ensemble is often dropped in favour of the microcanonical
ensemble \cite{Mitchell1}. However, in the conventional picture of
spontaneous symmetry breaking we are familiar with the way in
which tachyons describe instabilities (in field space); they are
understood as corresponding to an inappropriate choice of ground
state, the true ground states appearing naturally once
back-reaction is taken into account.
For example, the inclusion of a repulsive point-interaction modifies the free energy to \cite{Stone}
%
\begin{equation}
S = \int dx^D\bigg(\frac{a^2}{4D}(\nabla\varphi)^2
+\frac{1}{2}M^2\varphi^2 +\lambda\varphi^4\bigg),
\label{S0eff}
\end{equation}
%
thus permitting $\langle\varphi\rangle$ to remain finite for
$T>T_H$. For our $(p,q)$ networks, the system has more
complicated interactions than such a simple local repulsion. In
particular, were the strings allowed to interact at Y-junctions,
we would expect them to induce additional cubic $\mu\varphi^3$
terms
--- as we shall see in a different context below. However, the
general implications are much the same.
The vanishing of the order parameter $\langle\varphi\rangle$ at
$T\leq T_H$ can be understood in the following way. Examination of
the partition function shows that the total string density is
proportional to $\langle\varphi^2\rangle$, whereas
$\langle\varphi\rangle^2$ measures the density in infinite string
(i.e.~string that crosses space) \cite{Stone,Ray}. It is the
vanishing of {\it infinite} string that characterises the Hagedorn
transition, and not the vanishing of string.
Although large loops are energetically unfavourable, some loops
will always exist below the transition (in an adiabatic limit).
Superficially, free energies like (\ref{S0eff}) look like those of
high-temperature quantum field theories on dimensional
compactification. Either from calculating the thermal propagator
for excitations at the relevant groundstate or by counting
microstates of a loop gas we get the same result that, in the
vicinity of the transition, the loop distribution is dominated by
the smallest possible loops (the ultraviolet limit) \cite{Ray}.
\section{Mean field transitions; XY models}
\label{sec:XY}
As discussed in section \ref{sec:underst}, we anticipate that Y-junctions will induce cubic
interaction terms in the dual field theory. However, we do not know how to
introduce them in the exact framework of section
\ref{sec:underst}, even when the junctions are between strings of
the {\it same type and tension} $\sigma$ --- the setup considered in the
present section.
In this section we discuss a mean-field procedure
which allows junctions to be incorporated, and which shows how such
cubic interaction terms arise. As in section \ref{sec:underst},
one can then construct an analogue effective potential, $V(\varphi
)$, for a field $\varphi$, whose vanishing describes the
transition. Unfortunately, it is not possible to
extend this construction to the full effective action, and as a result
it is not possible to identify the field fluctuations that
describe finite loops: our analysis is restricted to infinite
string and the transitions triggered by its creation. Nonetheless,
knowing that loops are there enables us to complete the picture,
qualitatively. It is the mean field procedure presented in this section which
will be generalised to the three-string model in section \ref{sec:3}.
Again we work in $D$ spatial dimensions, on a periodic hyper-cubic
lattice of $N$ sites and lattice size $a=1$. Let $i$ label a
lattice site, and $\mu=1,\ldots,D$ the (positive) unit vectors in
$D$-dimensional space.
There is now a technical complication,
related to how we
allow the strings to populate the lattice. Although there is an
energetic penalty in having more than one string on a link, in the
first instance we do not wish to restrict the number to unity. To
do so could imply an effective repulsion between strings that is a
lattice artefact, and which might induce misleading terms in the
effective potential for the analogue field $\varphi$. Without this
restriction the models are termed `bosonic'.
Models in which at most one string (of any type) can lie on a link
are termed `fermionic'. In practice we shall find, when we come to
mimicking $(p, q)$ strings, that only a fermionic model can
accommodate junctions of three string types.
An important result of this section is that our concern about
fermionic models is largely unjustified (though we feel it is necessary, for
reasons of clarity, to discuss it in detail): both bosonic and
fermionic models essentially agree for the small $\varphi$
values that are relevant for transitions, and for which the mean
field approximation is more reliable.
Further, both of these models rewrite the string system as a
generalised XY model, permitting us to think of the Hagedorn
transition as one of spin ordering. This suggests ways of going
beyond the mean field approximation, although we shall not do so here.
\subsection{Bosonic models}
With conventional lattice notation, let $n_{i,\mu}^{+}$ ($n_{i,\mu}^{-}$) be the number ($0,1,2,\ldots$) of
strings
(anti-strings) on the link between the lattice points $i$ and
$i+\mu$.
For strings with no junctions, the Hamiltonian
\begin{equation} H = \sum_{i=1}^N
\sigma \sum_\mu (n_{i,\mu}^+ + n_{i,\mu}^-)
\label{HB}
\end{equation}
%
gives the requisite energy $E=\sigma L$ to a network of total
length $L$.
Now, depending on the string network we wish to model, there is more than one way to proceed.
We discuss the mean field potential in each case, making links with sections \ref{sec:underst} and \ref{sec:3}.
\subsubsection{Massless junctions}
First we allow the strings to have $N_v$-fold {\it massless} junctions, i.e.~junctions
with no extra cost in energy. [We are primarily concerned with $N_v =
3$.]
Since the junctions considered are
massless they do not appear in the Hamiltonian, which is still
given by (\ref{HB}).
Rather, the existence of junctions imposes constraints on the $n_{i,\mu}^{+}$
($n_{i,\mu}^{-}$). Junctions or anti-junctions are permitted on
site $i$ provided the flux into that site is an integer multiple of $N_v$:
\begin{equation}
\alpha_i \equiv \sum_\mu \left[ (n_{i,\mu}^+
- n_{i-\mu,\mu}^+) - (n_{i,\mu}^- - n_{i-\mu,\mu}^-) \right] = 0
\; {\rm mod} \; N_v \label{conb},
\end{equation}
%
a constraint which can be implemented through
%
\begin{eqnarray}
\delta_{\alpha=0 \; {\rm mod} \; N_v} = \frac{1}{N_v}
\sum_{k_i=1}^{N_v} e^{i \alpha \theta_i} \qquad {\rm where} \qquad
\theta_i = \frac{2\pi k_i}{N_v}.\nonumber
\end{eqnarray}
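This representation of the mod-$N_v$ Kronecker delta is easily verified numerically. The following short script (our illustration, not part of the derivation) confirms it for $N_v = 3$:

```python
import cmath

# Check that (1/N_v) * sum_{k=1..N_v} exp(i*alpha*2*pi*k/N_v)
# equals 1 when alpha = 0 mod N_v and 0 otherwise.
def discrete_delta(alpha, n_v):
    s = sum(cmath.exp(1j * alpha * 2 * cmath.pi * k / n_v)
            for k in range(1, n_v + 1))
    return s / n_v

for alpha in range(-6, 7):
    expected = 1.0 if alpha % 3 == 0 else 0.0
    assert abs(discrete_delta(alpha, 3) - expected) < 1e-12
```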
%
Using this representation in the canonical partition function
%
\begin{eqnarray}
Z
= \sum_{n_{i,\mu}^\pm} e^{-\beta \sigma \sum_{i,\mu}(n_{i,\mu}^+ +
n_{i,\mu}^-)} \left(\prod_i \delta_{\alpha_i=0 \; {\rm mod} \; N_v}
\right) \nonumber
\end{eqnarray}
%
enables us to write $Z$ as
%
\begin{eqnarray}
Z &=& \prod_i \frac{1}{N_v} \sum_{k_i} \left(
\sum_{n_{i,\mu}^+} e^{-\sum_{i,\mu}[\beta \sigma n_{i,\mu}^{+} + i
(\theta_{i+\mu}-\theta_{i})n_{i,\mu}^+]} \right)\times \nonumber
\\
&\times&\left(
\sum_{n_{i,\mu}^{-}} e^{-\sum_{i,\mu}[\beta \sigma n_{i,\mu}^{-} - i
(\theta_{i+\mu}-\theta_{i})n_{i,\mu}^-]} \right), \nonumber \end{eqnarray}
where the different signs in front of the lattice variables $\theta_{i}$
in the two terms in round brackets reflect the signs in (\ref{conb}).
The summations can be performed, to obtain
\begin{eqnarray} Z =
\left(\frac{1}{N_v}\right)^N \sum_{k_i} e^{-\sum_{i,\mu}
\ln(1+J(\beta)^2-2J(\beta)
\cos(\theta_i-\theta_{i-\mu}))}
\nonumber
\end{eqnarray}
where $J(\beta) = e^{-\beta \sigma}$ as in (\ref{J}). That is, the
Hamiltonian of the network is, up to a constant,
\begin{equation}
\beta H= \sum_{i,\mu}
\ln[1+ J(\beta)^2-2J(\beta)
\cos(\theta_i-\theta_{i-\mu})].
\end{equation}
It is not possible to evaluate $Z$ exactly.
Hence we resort to the mean field approximation scheme (see for example
\cite{Kleinert}), which consists of introducing a trial
Hamiltonian $H_0$ in which each variable of the system is
decoupled from the other but
depends on an external constant source $\varphi$. An obvious
choice here is
%
\begin{equation}
H_0(\varphi) = - \frac{\varphi}{\beta} \sum_i \cos
\theta_i \; .
\end{equation}
On writing
%
\begin{eqnarray}
H = H_0(\varphi) + [H - H_0(\varphi)],\nonumber
\end{eqnarray}
%
then
%
\begin{eqnarray}
Z &=& \sum_{{\rm config}}
e^{-\beta H_0(\varphi)} e^{-\beta[H -H_0(\varphi)]}
\nonumber
\\
&=&
Z_0(\varphi)
\left\langle e^{-\beta[H -H_0(\varphi)]}\right\rangle_0 \nonumber
\\
&\geq& Z_0(\varphi) e^{-\beta \langle H -H_0(\varphi)\rangle_0
} , \nonumber
\end{eqnarray}
%
where the zero subscript denotes $\varphi$-dependent averaging with regard to $H_0(\varphi)$.
As a result the free energy $F = -T\ln Z$ satisfies
%
\begin{equation}
F(\varphi) \leq NV(\varphi)\equiv F_0(\varphi) + \langle
H\rangle_0 -\langle H_0(\varphi)\rangle_0,
\label{interV}
\end{equation}
%
where $V(\varphi )$ is the mean field effective potential (and $F_0 = -T\ln Z_0$).
Our aim is then to minimize $V$ in order to find $\varphi_{min}$, which
determines the density of infinite string (see below).
We now carry out the calculation explicitly in the case of Y-junctions for which $N_v=3$. Then
\begin{eqnarray} Z_0(\varphi) = \left[\frac{1}{3} \left(\sum_{k=1}^3
e^{\varphi\cos(2\pi k/3)} \right) \right]^N =
\tilde{I}_0(\varphi)^N \nonumber \end{eqnarray}
where
%
\begin{eqnarray}
\tilde{I}_0
= \frac{1}{3}\left(e^\varphi + 2 e^{-\varphi/2}\right).
\nonumber
\end{eqnarray}
%
Now use the results that
%
\begin{equation}
\langle \ln (1 +p^2 - 2 p \cos \theta) \rangle = -2
\sum_{m=1}^\infty
\frac{p^m}{m}
\langle \cos m\theta \rangle, \qquad (|p|<1)
\label{lnBoson}
\end{equation}
%
for all measures, and that
%
\begin{equation}
\langle\cos m\theta \rangle_0 = \frac{\tilde{I}_m (\varphi)}{\tilde{I}_0 (\varphi)}
\end{equation}
%
for the case in point, where
%
\begin{eqnarray}
\tilde{I}_m (\varphi)&=&
\frac{1}{3} \sum_k e^{\varphi\cos(2\pi k/3)} \cos( 2\pi m k/3)
\nonumber
\\
& =&
\frac{1}{3} \left(e^\varphi + 2 e^{-\varphi/2}\cos(2\pi
m/3)\right)\nonumber
\end{eqnarray}
is a discrete version of the Bessel function. Hence, using (\ref{interV}) we obtain
\begin{eqnarray}
\beta V(\varphi) &=& -\ln(\tilde{I}_0(\varphi)) +
\varphi
\left(\frac{\tilde{I}_1(\varphi)} {\tilde{I}_0(\varphi)} \right)
\nonumber
\\
&& \qquad -2D \sum_{m=1}^\infty \frac{J(\beta)^m}{m}
\left(\frac{\tilde{I}_m(\varphi)} {\tilde{I}_0(\varphi)}
\right)^2. \label{Bvertices}
\end{eqnarray}
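Both ingredients of this step can be checked in a few lines of code (a sanity check of ours, with sample values of $p$, $\theta$ and $\varphi$): the logarithmic expansion (\ref{lnBoson}), and the closed form and mod-3 periodicity of the discrete Bessel function $\tilde{I}_m$:

```python
import math

# 1. ln(1 + p^2 - 2 p cos(theta)) = -2 sum_{m>=1} (p^m/m) cos(m theta), |p| < 1.
def log_link(p, theta):
    return math.log(1 + p * p - 2 * p * math.cos(theta))

def log_link_series(p, theta, terms=200):
    return -2 * sum(p**m / m * math.cos(m * theta) for m in range(1, terms + 1))

assert abs(log_link(0.3, 0.7) - log_link_series(0.3, 0.7)) < 1e-12

# 2. The discrete Bessel function for N_v = 3: definition, closed form,
#    and the periodicity modulo 3 used to resum the potential.
def itilde(m, phi):
    return sum(math.exp(phi * math.cos(2 * math.pi * k / 3))
               * math.cos(2 * math.pi * m * k / 3) for k in (1, 2, 3)) / 3

def itilde_closed(m, phi):
    return (math.exp(phi) + 2 * math.exp(-phi / 2)
            * math.cos(2 * math.pi * m / 3)) / 3

for m in range(7):
    assert abs(itilde(m, 0.8) - itilde_closed(m, 0.8)) < 1e-12
    assert abs(itilde(m + 3, 0.8) - itilde(m, 0.8)) < 1e-12
```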
%
The periodicity (modulo 3) of the $\tilde{I}_m(\varphi)$ enables us to perform the
summation explicitly, to give
%
\begin{eqnarray}
\beta V(\varphi) &=& -\ln(e^\varphi +
2e^{-\varphi/2}) + \varphi\left(\frac{e^\varphi - e^{-\varphi/2}}
{e^\varphi + 2e^{-\varphi/2}} \right)
\nonumber
\\
&& \qquad - 2DG(\beta) \left(\frac{e^\varphi -
e^{-\varphi/2}}{e^\varphi + 2e^{-\varphi/2}}\right)^2
\label{vertex}
\end{eqnarray}
where
\begin{eqnarray}
G = \frac{1}{3} \ln \left(
\frac{1+J+J^2}{(1-J)^2} \right ) = J + \frac{1}{2}J^2 + \ldots \nonumber
\end{eqnarray}
for small $J(\beta)$.
Notice that, because the sum over $m$ in (\ref{Bvertices}) just reproduces the first term
with a modified coefficient, $V(\varphi)$ of (\ref{vertex}) can be shown to be {\it
exactly} the mean-field potential arising from the Hamiltonian
\begin{equation}
H^{disc}_{XY} = -\frac{G(\beta)}{\beta} \sum_{i,\mu}
{\bf{s}}_i \cdot {\bf{s}}_{i+\mu}, \label{HXY} \end{equation}
i.e.~the Hamiltonian for a system of unit spins in the plane with
nearest neighbour interactions in which their relative angles are
constrained to multiples of $2\pi /N_v$ (here $N_v=3$); a discrete
XY model. The mean field trial Hamiltonian $H_0$ in this case is $
H_0(\varphi) = - \frac{\varphi}{\beta}{\bf n}\cdot\sum_i{\bf{s}}_i$
for an arbitrary unit vector {\bf n}
in which the spins are decoupled; in other words, an external
magnetic field proportional to $\varphi$.
In order to understand the phase structure of the model (either as a spin system or as a gas of
strings with junctions),
consider first the series expansion of $V(\varphi)$;
%
\begin{equation}
\beta V(\varphi) = \frac{1}{2}m^2\varphi^2 +
\frac{1}{3}\mu\varphi^3 + \frac{1}{4}\lambda\varphi^4 + \ldots,
\label{quartic} \end{equation}
up to constant terms,
%
where
\begin{equation}
m^2 = \frac{1}{2}(1-2DG), \qquad \mu = \frac{1}{4}(1-3DG), \qquad \lambda = -\frac{3}{16}(1-2DG).\label{Bstrings} \end{equation}
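The quadratic and cubic coefficients can be confirmed by finite differences of the potential (\ref{vertex}) at the origin (a sketch of ours; the values of $D$ and $G$ are illustrative sample inputs, not tied to a particular temperature):

```python
import math

# beta*V(phi) for the bosonic Y-junction potential, up to a constant.
def beta_v(phi, d, g):
    f = math.exp(phi) + 2 * math.exp(-phi / 2)
    u = (math.exp(phi) - math.exp(-phi / 2)) / f
    return -math.log(f) + phi * u - 2 * d * g * u * u

# Central finite differences for V''(0) and V'''(0).
def second_third_derivs(d, g, h=1e-2):
    f = lambda x: beta_v(x, d, g)
    d2 = (f(h) - 2 * f(0) + f(-h)) / h**2
    d3 = (f(2 * h) - 2 * f(h) + 2 * f(-h) - f(-2 * h)) / (2 * h**3)
    return d2, d3

d, g = 3, 0.10                      # sample values, so 2DG = 0.6
m2 = 0.5 * (1 - 2 * d * g)          # predicted V''(0) = m^2
mu = 0.25 * (1 - 3 * d * g)         # predicted V'''(0) = 2*mu
d2, d3 = second_third_derivs(d, g)
assert abs(d2 - m2) < 1e-4
assert abs(d3 - 2 * mu) < 1e-3
```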
Observe that the field becomes massless at the temperature for which $2DG(\beta) =1$, which is in
good
agreement with the Hagedorn temperature of the free dual theory
of (\ref{dualZ}) since $G(\beta) \simeq J(\beta)$ for
$J = 1/2D\ll 1$. Furthermore, as anticipated, the Y-junctions have
induced a cubic term in the potential. They have also induced a quartic
interaction, which vanishes when the field
becomes massless and is repulsive when the field becomes tachyonic.
%
As a result of the cubic term, the potential in equation (\ref{vertex}) can be
shown to have a weak first order phase transition. The critical
temperature, however, cannot be obtained from (\ref{Bstrings}) as
it occurs at values of $\varphi\simeq 1$. Numerically,
one finds that $2G_{crit}(D=3)\simeq 0.31$ and $2G_{crit}(D=4)\simeq
0.23$. We shall not consider the first order transition further,
since it is not robust against the rapid quenches that
we ultimately have in mind.
\subsubsection{Massive junctions}
Alternatively one might want to model string networks with massive junctions --- that
is, to introduce junctions with an energy cost $v$. (These can model massive monopoles, which may be formed
at the vertex in different symmetry breaking schemes \cite{VV}.) We can then recover massless vertices by taking $v\rightarrow 0$.
Furthermore this construction allows one to calculate the average
density of vertices at temperature $T$, by simply differentiating
$Z$ with respect to $v$. This will be discussed in section
\ref{sec:3}.
To add massive vertices, we allocate a vertex number $p_i^{\pm} =
(0,1,2\ldots)$ to each lattice site, constrained by
\begin{eqnarray} \alpha_i &
\equiv& \sum_\mu \left[ (n_{i,\mu}^+ - n_{i-\mu,\mu}^+) -
(n_{i,\mu}^- - n_{i-\mu,\mu}^-) \right] \nonumber
\\
&& \qquad \;+ \; 3 (p_i^+ -
p_i^-) = 0
\label{heavyVB}
\end{eqnarray}
for Y-junctions,
while the Hamiltonian acquires an extra term
\begin{equation}
H_I = \sum_{i=1}^N v (p_i^+ + p_i^-).
\end{equation}
Performing the sums over the $n^{\pm}_{i,\mu}$ and the $p_i^{\pm}$ leads to a
Hamiltonian
\begin{eqnarray}
\beta H &=& -\sum_{i,\mu}\ln [ 1 + J^2(\beta) - 2J(\beta)
\cos(\theta_{i+\mu} - \theta_{i})] \nonumber
\\
&-& \sum_i\ln [ 1 + K^2(\beta)
- 2K(\beta) \cos 3\theta_{i}], \label{HB2}
\end{eqnarray}
%
where the $\theta_i$ are now {\it continuous variables}, the Lagrange multipliers that arise from
imposing the constraints
%
\begin{equation}
\delta_{\alpha_i,0} = \frac{1}{2\pi} \int_0^{2\pi} d\theta_i
e^{i \alpha_i \theta_i}.
\end{equation}
Also we have defined
\begin{equation} {K(\beta)}=e^{-\beta v}, \label{Kdef}
\end{equation}
analogously to $J$
in (\ref{J}).
Then, carrying out the same mean field treatment as above yields
\begin{eqnarray}
\beta V^{(K)}(\varphi) &=& -\ln({I}_0(\varphi)) + \varphi\left(\frac{{I}_1(\varphi)} {{I}_0(\varphi)}
\right)
\nonumber
\\
&& -2D \sum_{m=1}^\infty \frac{J(\beta)^m}{m}
\left(\frac{{I}_m(\varphi)} {{I}_0(\varphi)} \right)^2
\nonumber
\\
&&
- 2 \sum_{m=1}^\infty \frac{{K(\beta)}^m}{m} \left(\frac{{I}_{3m}(\varphi)}
{{I}_0(\varphi)} \right),
\label{V0boson}
\end{eqnarray}
where the $I_m$ are (continuous) Bessel functions.
For non-zero $K$ cubic terms arise from the $I_{3}$ Bessel
function, to give rise to a potential of the form (\ref{quartic}),
with coefficients
\begin{equation}
m^2 = \frac{1}{2}(1-2DJ), \; \; \; \mu = -\frac{K}{8}, \; \; \; \lambda = -\frac{3}{16}\bigg(1-\frac{8DJ}{3}\bigg).\label{Bstrings2} \end{equation}
As expected, we have tachyonic instability at $J= 1/2D$ and a
cubic term in the potential.
The slightly different behaviour of (\ref{Bstrings2}) and
(\ref{Bstrings}) is to be expected, since we are
implementing the boundary conditions that count vertices
differently in the two cases: in other words, they correspond to different
implementations of the mean field approach. However, since the
mean field result is, strictly, an upper bound, we could, if we
wished, retain only the solution that is numerically lower. In
practice, this is not necessary since there is close numerical
agreement at relevant temperatures. Massless junctions correspond
to taking $K=1$ for which $\mu = -1/8$, the value arising in
(\ref{Bstrings}) when $2DG = 1$. Further, a numerical study of
(\ref{V0boson}) shows that the transition tends to become first
order as ${K} \rightarrow 1$, in agreement with the discussion of
(\ref{Bstrings}).
\subsubsection{No junctions}
For `bosonic' strings with no junctions both approaches
give identical results. In the first case, we eliminate
junctions by taking $N_v\rightarrow\infty$, whereby the discrete
Bessel functions are replaced by their continuous counterparts. In
the second, taking $v\rightarrow \infty$ ($K=0$) just recreates
the same series.
In each case, on expanding $V^{(0)}(\varphi)$ for small $\varphi$
we recover the second order transition at the Hagedorn temperature
$T_H$ of Section II (see equation (\ref{J})) when $2DJ(\beta) =
1$, and when the $\varphi$ field becomes tachyonic. However, it
can be seen that $V^{(0)}$ of (\ref{V0boson}) becomes unbounded
below as $T\rightarrow\infty$. This is not quite the behaviour of
(\ref{dualZ}), for which the potential is unbounded below for all
$T> T_H$, showing the limitations of the mean field approach for
very large $|\varphi|$. Nonetheless, this simple example shows how
the introduction of vertices induces interaction terms in the
effective potential to stabilise the ground states.
\subsection{Fermionic models}
We now consider the simplest `fermionic' models. It is these
which can straightforwardly be extended to the general three
string-type model of section \ref{sec:3}. We will also address the concern raised at the beginning of this section: that
the `fermionic' model might add an effective repulsion between
strings, which could induce misleading terms in the effective
potential. We will show that this is not the case.
Thus, we now
restrict the number of strings on each link to $n_{i,\mu}\in
\{0,\pm 1\}$. That is, the link from site $i$
to $i+\mu$ contains either a single string, a single anti-string,
or no string at all.
\subsubsection{No junctions}
With no junctions, the Hamiltonian is
\begin{equation}
H =\sum_{i=1}^{N}\sum_{\mu=1}^D \sigma n_{i,\mu}^2, \label{Hg}
\end{equation}
subject to the constraint
\begin{equation}
\alpha_i \equiv \sum_{\mu} \left[ n_{i,\mu}-n_{i-\mu,\mu}\right]
=0.
\end{equation}
Performing the sums over the $n_{i,\mu}$ leads to a
Hamiltonian
\begin{eqnarray}
\beta H = -\sum_{i,\mu}\ln [ 1 + 2J(\beta)
\cos(\theta_{i+\mu} - \theta_{i})],
\label{HF}
\end{eqnarray}
%
where the $\theta_i$ are again the Lagrange multipliers that arise from
imposing the constraints
%
\begin{equation}
\delta_{\alpha_i,0} = \frac{1}{2\pi} \int_0^{2\pi} d\theta_i
e^{i \alpha_i \theta_i}.
\end{equation}
Defining ${\bar J}$ by
\begin{equation}
J =\frac{\bar J}{1 + {\bar J}^2},
\label{barJ}
\end{equation}
whereby $J(\beta)\approx {\bar J}(\beta)$ when $J\ll 1$, a similar calculation to that above
(see also section \ref{sec:3}) shows that the mean-field potential
is, for ${\bar J}<1$,
%
\begin{eqnarray}
&& \beta V^{(0)}_{F}(\varphi) = - \ln I_0(\varphi) \nonumber
\\
&& \qquad + \; \varphi
\left(\frac{I_1(\varphi)}{I_0(\varphi)}\right) + 2D
\sum_{m=1}^{\infty}
\frac{(-\bar{J}(\beta))^m }{m}
\left(\frac{I_m(\varphi)}{I_0(\varphi)}\right)^2. \qquad
\label{V0fermion}
\end{eqnarray}
The $\varphi$ field now becomes massless at
$2D{\bar J}(\beta) = 1$, with a second order transition. With $J\approx{\bar J}$ this is slightly displaced from
that of the bosonic strings but, at the qualitative level at which we are working,
can be said to agree. Note that both
potentials (\ref{V0fermion}) and (\ref{V0boson}) show a $\mathbb Z_2$
symmetry under $\varphi\rightarrow -\varphi$ that is broken {\it
above} $T_H$, and restored {\it below} $T_H$, contrary to the
usual pattern of symmetry breaking, but as in section \ref{sec:underst}.
On comparing (\ref{V0fermion}) with (\ref{V0boson}) we see that
they differ in that the former has alternating signs in
the Bessel function expansion, whereas the latter does not.
Because higher terms in the series in powers of ${\bar J}(\beta)$
become significant only at increasingly large $\varphi$, the
artificial repulsion induced by the `fermionic' assumption (that is, of no
more than one string per link) is a large-$\varphi$ effect in the
mean field approximation, arising precisely where the approximation is
least reliable.
However, since the transitions are determined by small $\varphi$,
we can use either. This is an important result of this section.
In fact, for $J$ small, both approximate the mean-field potential
of the XY-model, with spin-spin Hamiltonian
\begin{eqnarray}
H_{XY} &=& - \frac{1}{\beta}\sum_{i,\mu}
2J\cos(\theta_{i+\mu} - \theta_{i})
\nonumber
\\
&=& -\frac{2J}{\beta} \sum_{i,\mu}
{\bf{s}}_i \cdot {\bf{s}}_{i+\mu}.
\end{eqnarray}
%
This follows from expanding (\ref{HF}), for which
\begin{equation}
V_{XY}(\varphi) = - \ln I_0(\varphi) + \varphi
\left(\frac{I_1(\varphi)}{I_0(\varphi)}\right) - 2DJ(\beta)
\left(\frac{I_1(\varphi)}{I_0(\varphi)}\right)^2,
\label{VXY} \end{equation}
showing a second order transition at $2DJ(\beta) = 1$. Rather than
just perform a series expansion in $\varphi$ as in
(\ref{Bstrings}), more generally we see that extrema of
$V_{XY}(\varphi)$ satisfy
\begin{equation}
{\bar\varphi} - 4DJ(\beta)u({\bar\varphi}) = 0,
\label{cubic}
\end{equation}
where $u(\varphi) = I_1(\varphi)/I_0(\varphi)$.
$\bar\varphi = 0$ is
always a solution to (\ref{cubic}). For $2DJ(\beta) > 1$ there is a further pair of solutions,
$\pm{\bar\varphi},\,\,\,{\bar\varphi} >0$, which are the minima.
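Equation (\ref{cubic}) is readily solved numerically. The following minimal fixed-point iteration (ours, using a stdlib series for the modified Bessel functions and sample values of $D$ and $J$) exhibits the transition at $2DJ(\beta) = 1$:

```python
import math

# Modified Bessel function of the first kind, by its power series.
def bessel_i(n, x, terms=40):
    return sum((x / 2)**(2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def u(phi):
    return bessel_i(1, phi) / bessel_i(0, phi)

# Iterate phi <- 4*D*J*u(phi), starting away from the trivial root.
def phi_bar(d, j, iters=500):
    phi = 1.0
    for _ in range(iters):
        phi = 4 * d * j * u(phi)
    return phi

# Below the transition (2DJ < 1) only phi = 0 survives.
assert phi_bar(3, 0.10) < 1e-6
# Above it (2DJ > 1) a non-trivial solution appears.
p = phi_bar(3, 0.25)
assert p > 0.1 and abs(p - 4 * 3 * 0.25 * u(p)) < 1e-9
```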
We note, for future use when we need to count extrema, that (\ref{cubic}) behaves like the
cubic equation obtained from retaining terms up to
$O(\varphi^4)$ in the expansion of the potential, in that it
possesses three roots. The inclusion of higher terms in the series in $\bar{J}$ does
not seem to affect this empirically and it is not necessary to go
beyond the XY model, now and hereafter.
When the XY model is a good approximation we could, in principle,
use known results about it without resorting to the mean-field
approximation. In practice, we know of no work on the generalised
XY models appropriate to the three-string models (with or without junctions) and stay with the
mean-field approximation.
To give a meaning to ${\bar\varphi}$ we note that the average
density of (infinite) strings is proportional to
${\bar\varphi}^2$, as anticipated, given by
\begin{equation}
\rho = \frac{1}{N} \langle \sum_\mu n_{i,\mu}^2 \rangle
= -J(\beta)\beta
\frac{\partial V_{XY}}{\partial J} =
\frac{{\bar\varphi}^2}{4DJ(\beta)}.
\label{rhostrings}
\end{equation}
\subsubsection{Massive junctions}
We end this section by
including Y-junctions in the fermionic model (still of a single
string type). Given that the occupation numbers are limited to
$0,\pm1$, there is no analogue of the mod 3 description for
massless vertices discussed in the bosonic case (see equation
(\ref{conb})). We therefore consider massive vertices. There is now a single vertex
number $p_i=\{0,\pm 1\}$ constrained by
\begin{equation}
\alpha_i \equiv \sum_{\mu} \left[ n_{i,\mu}-n_{i-\mu,\mu}\right]
+ 3p_i =0
\label{con}
\end{equation}
with the Hamiltonian acquiring an additional term
\begin{equation}
H_I = \sum_i v p_i^2.
\end{equation}
On defining ${\bar K}$ by
\begin{equation}
K =\frac{\bar K}{1 + {\bar K}^2},
\label{barK}
\end{equation}
the mean-field potential
is, for $({\bar J},{\bar K}<1)$,
%
\begin{equation}
\beta V^{(K)}_{F}(\varphi) =\beta V^{(0)}_{F}(\varphi) + 2
\sum_{m=1}^{\infty}
\frac{(-\bar{K}(\beta))^m }{m}
\left(\frac{I_{3m}(\varphi)}{I_0(\varphi)}\right). \qquad
\label{Vfermion}
\end{equation}
[This follows from the generalisation of
(\ref{lnBoson}), used earlier in (\ref{V0fermion}) that, up to a constant,
\begin{eqnarray}
&&\langle \ln (1 + 2K \cos \alpha) \rangle =
-2 \sum_{m=1}^\infty\frac{ \left( - \bar{K} \right)^m}{m}
\langle \cos m\alpha \rangle \qquad
\label{horrible}
\end{eqnarray}
for all measures and $\bar{K}<1$, together with the specific result
\begin{eqnarray}
\langle \cos m\theta \rangle_0 \equiv \frac{\int
\frac{d\theta}{2\pi} e^{ \varphi \cos\theta } \cos m\theta}{\int
\frac{d\theta}{2\pi} e^{ \varphi \cos\theta }} &=& \frac{I_m(\varphi
)}{I_0(\varphi )}
\nonumber
\end{eqnarray}
for our choice of $H_0$.]
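The expansion (\ref{horrible}), with its suppressed constant made explicit as $-\ln(1+\bar{K}^2)$, can again be checked directly (our script; $\bar{K}$ and $\alpha$ are sample values):

```python
import math

# Check: ln(1 + 2K cos(alpha)) =
#   -ln(1 + Kbar^2) - 2 sum_{m>=1} ((-Kbar)^m / m) cos(m alpha),
# with K = Kbar/(1 + Kbar^2) and Kbar < 1.
kbar = 0.4
k = kbar / (1 + kbar**2)
alpha = 1.1
lhs = math.log(1 + 2 * k * math.cos(alpha))
rhs = -math.log(1 + kbar**2) - 2 * sum(
    (-kbar)**m / m * math.cos(m * alpha) for m in range(1, 200))
assert abs(lhs - rhs) < 1e-12
```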
We note that unfortunately, for a simple cubic lattice, the
requirement that $\bar{K}<1$, necessary for convergence of the
series in (\ref{horrible}),
imposes $K<1/2$. Hence the mean field approximation is not valid for light vertices in the fermionic case
(as opposed to the bosonic case of (\ref{V0boson})). We consider this constraint to be an artefact of the lattice
fermionic approximation.
Despite that, note that the mean field potential (\ref{Vfermion}) leads to an XY model in the presence of an external
source
\cite{Patel1,MR} if we retain only the first term in the power series in ${\bar K}$ in (\ref{Vfermion})
(or the first term in the series in $K$ in (\ref{V0boson})).
As a result, there is always a second-order transition, as in the bosonic
case.
Finally we also note that the density of string (\ref{rhostrings}) is unchanged by
the inclusion of junctions.
\subsection{Summary of section \ref{sec:XY}}
In summary, in this section we have seen how the inclusion of Y-junctions in a
model of a single string type can provide the back-reaction
necessary to prevent tachyonic instability at the Hagedorn
temperature. Further, provided we restrict ourselves just to
infinite string, whose density is the order parameter, we can go
beyond the Hagedorn temperature, still with the canonical
ensemble.
We have also discussed two models, `bosonic' and `fermionic', and
shown that the concern raised about fermionic models at the
beginning of this section is unjustified: both models essentially
agree for the small $\varphi$ values that are relevant for
transitions, and for which the mean field approximation is more
reliable.
We have also shown how the value $\bar{\varphi}$ of the field at
the minimum of the effective potential is related to the density
of infinite strings in the system. As we discuss in section \ref{sec:3}, it is
equally apparent that the density of vertices is also determined
by $\bar{\varphi}$ and obtained by differentiating the partition
function with respect to $v$.
With this behind us, we now consider the case of three different
string types with Y-junctions, as a model for $(p,q)$ strings. We
note that, oddly, the analysis of QCD confinement of
\cite{Patel1,Patel2,MR}, that we have called upon in this paper,
was performed in the context of a single-string model, not
permitting `colour'. Although this was not our intention, a more
realistic description of QCD is given by the model that follows,
in the limit of equal tensions, in which our three string types
correspond to coloured flux tubes.
\section{Three strings, fermionic model}
\label{sec:3}
The basics of our model are the following.
As stated in the introduction, we model the $(p,q)$ string network
by a network of three different types of fundamental strings,
labelled by $\alpha = 1, 2, 3$ as red, green and blue, say.
Generally the strings also have different tensions
$\sigma_\alpha$. The strings do not interact with each other (nor
with themselves), except at a Y-junction (or vertex) which is
defined to be a point at which three strings of {\it different} colours
meet.
Following equation (\ref{quartic}), our expectation is that the effective
potential will take the generic form
\begin{eqnarray}
\beta V(\varphi_1,\varphi_2,\varphi_3) &=& \sum_{\alpha}\bigg[
\frac{1}{2}m_{\alpha}^2\varphi_{\alpha}^2 +
\frac{1}{4}\lambda_{\alpha}\varphi_{\alpha}^4
\bigg]
\nonumber
\\
&& \qquad + \, \mu \, \varphi_1\varphi_2\varphi_3 + ...\label{V3}
\end{eqnarray}
Potentials of the type (\ref{V3}), with temperature-dependent
coefficients,
have been studied in other contexts e.g.~transformations of vortex types in superfluid $^3$He \cite{Volovik}.
We know that (\ref{V3}) is valid if Y-junctions are excluded, in which case $\mu = 0$.
In this case, from the single string models
\begin{equation}
m_{\alpha}^2 \propto (1-2DJ_{\alpha}(\beta )),
\end{equation}
with $J_\alpha = e^{-\sigma_{\alpha}\beta}$.
In the following discussion we suppose that
\begin{equation}
\sigma_1 \leq \sigma_2 \leq \sigma_3 \qquad \Longleftrightarrow \qquad
J_1 \geq J_2 \geq J_3.
\end{equation}
The critical ${J}^{crit}_\alpha = 1/(2D)$ define three critical
inverse temperatures $\beta_\alpha = T_{\alpha}^{-1}$ with
\begin{equation}
\beta_3 < \beta_2 <\beta_1
\end{equation}
in the vicinity of which $m_{\alpha}^2\propto (1-T/T_{\alpha})$.
That is, with no interactions we expect three
sequential Hagedorn transitions as, on cooling, the heavier
strings disappear from the picture, leaving the lightest until
last before it disappears in turn, leaving just small loops.
Our aim is to understand the effect that Y-junctions have on this
picture.
In practice, we are not able to recreate (\ref{V3}) in a bosonic
model with coloured Y-junctions, with arbitrary numbers of strings
on each link. (The reason is that we are unable to write down a
generalised form of the constraint (\ref{conb}) in the 3-string case.) We therefore restrict ourselves
to a fermionic model, in which there is at most one string of each
type on a link. As discussed in the previous section, we
expect that the effective repulsion this implies can be ignored at
small field values. As in the case of the single string type, in
order to be able to use mean field theory we are obliged to give
the vertex a non-zero mass $v$.
As before, we assume that the energy of
the different strings is proportional to their length $L$
($E_\alpha=\sigma_\alpha L$). The different strings are described
respectively by the variables $n^{\alpha}_{i,\mu}$, which all take values in $\{ 0,\pm 1\}$. There are also
vertices, described by the variable $p_i \in \{ 0,\pm 1\}$,
joining strings of 3 different types.
The Hamiltonian of the system takes the same form as for the
single string case,
\begin{equation} H = \sum_i \left[ \sum_\mu
\sum_{\alpha}\sigma_{\alpha}(n^{\alpha}_{i,\mu})^2 + v p_i^2
\right]. \end{equation}
We now
need to impose the constraint that a junction is where three different colour
strings meet: this is done by
%
\begin{eqnarray}
\gamma^{\alpha}_i =\sum_\mu (n^{\alpha}_{i,\mu} - n^{\alpha}_{i-\mu,\mu}) + p_i &=& 0, \qquad \forall
\alpha.
\label{con2}
\end{eqnarray}
%
Although summing over $\alpha$ would essentially recreate the
constraints (\ref{con}), equation (\ref{con2}) is more specific.
In particular, (\ref{con2}) does not forbid different string types from lying
on top of each other.
As in the previous section, the constraints are imposed in the standard way through Lagrange
multipliers, which is equivalent to writing the Kronecker delta as
%
\begin{equation}
\delta_{\gamma_i^{\alpha},0} = \frac{1}{2\pi} \int_0^{2\pi} d\theta^{\alpha}_i
e^{i \gamma^{\alpha}_i \theta^{\alpha}_i}
\end{equation}
(no $\alpha$ summation) for each $\gamma^{\alpha}$.
%
Hence the partition function is
%
\begin{eqnarray}
&&Z(\beta,v,\sigma_{\alpha}) =
\nonumber
\\
&&= \int \prod_{i,\alpha} \frac{d\theta^{\alpha}_i}{2\pi}
\sum_{n_{i,\mu}} e^{-\sum_{i,\mu} \sum_{\alpha}[\beta
\sigma_{\alpha} (n^{\alpha}_{i,\mu})^2 + i
n^{\alpha}_{i,\mu}(\theta^{\alpha}_{i+\mu} -
\theta^{\alpha}_{i})]} \nonumber
\\
&& \qquad \times \sum_{p_i}e^{-\sum_{i}[\beta v p_i^2 + i
p_i\sum_{\alpha} \theta^{\alpha}_i]}
\end{eqnarray}
%
which, on carrying out the summations gives
%
\begin{eqnarray}
Z(\beta,v,\sigma_\alpha) &=& \int \prod_{i,\alpha} \frac{d\theta^{\alpha}_i}{2\pi}
\prod_{\mu} [ 1 + 2J_\alpha
\cos(\theta^{\alpha}_{i+\mu} - \theta^{\alpha}_{i})] \nonumber
\\
&& \qquad \times [ 1 + 2K \cos(\sum_{\alpha}\theta^{\alpha}_i)]
\label{full}.
\end{eqnarray}
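The elementary step behind (\ref{full}) --- summing a single link's occupation number over $\{0,\pm 1\}$ --- can be verified directly (our check, with sample values of $\beta\sigma$ and the angle difference):

```python
import math, cmath

# Check: sum over n in {0, +1, -1} of exp(-beta*sigma*n^2 - i*n*dtheta)
# equals 1 + 2*J*cos(dtheta), with J = exp(-beta*sigma).
beta_sigma, dtheta = 0.9, 1.3
j = math.exp(-beta_sigma)
lhs = sum(cmath.exp(-beta_sigma * n * n - 1j * n * dtheta) for n in (-1, 0, 1))
rhs = 1 + 2 * j * math.cos(dtheta)
assert abs(lhs - rhs) < 1e-12
```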
This corresponds to the Hamiltonian
%
\begin{eqnarray}
\beta H = -\sum_{i,\mu,\alpha}\ln [ 1 + 2J_\alpha
\cos(\theta^{\alpha}_{i+\mu} - \theta^{\alpha}_{i})]
\nonumber
\\
-\sum_i\ln [ 1 + 2K \cos(\sum_{\alpha}\theta^{\alpha}_i)] \nonumber
\\
\approx \sum_{i,\mu,\alpha} 2J_\alpha \cos(\theta^{\alpha}_{i+\mu}
- \theta^{\alpha}_{i}) + \sum_i 2K
\cos(\sum_{\alpha}\theta^{\alpha}_i)\label{H3}
\end{eqnarray}
%
for small $J_{\alpha}$ and $K$.
The mean field
treatment therefore contains three variational parameters
$\varphi_\alpha$. Following the same steps as in section \ref{sec:XY},
the trial partition functions which decouple
different lattice sites are
\begin{equation} Z_0^\alpha(\beta,\sigma_\alpha,\varphi_\alpha) = \int
\prod_{i} \frac{d\theta^\alpha_i}{2\pi} e^{\sum_i \varphi_\alpha
\cos\theta^\alpha_i } = \left[I_0(\varphi_\alpha)\right]^N,
\end{equation}
while the mean field effective potential is
\begin{eqnarray}
&& \beta V(\varphi_{\alpha}) =\sum_{\alpha}\bigg[ - \ln
I_0(\varphi_{\alpha}) +
\nonumber
\\
&& + \left. \varphi_{\alpha}
\left(\frac{I_{1}(\varphi_{\alpha})}{I_0(\varphi_{\alpha})}\right)
+ 2D \sum_{m=1}^{\infty}
\frac{(-\bar{J_{\alpha}})^m }{m}
\left(\frac{I_m(\varphi_{\alpha})}{I_0(\varphi_{\alpha})}\right)^2
\right] \nonumber
\\
&& + \;
2 \sum_{m=1}^{\infty}
\frac{(-\bar{K})^m}{m}
\left(\frac{I_m(\varphi_1)}{I_0(\varphi_1)}
\frac{I_m(\varphi_2)}{I_0(\varphi_2)}
\frac{I_m(\varphi_3)}{I_0(\varphi_3)}\right),
\label{general} \end{eqnarray}
where each
$\bar{J}_{\alpha}$ is defined as in (\ref{barJ}), and
${\bar K}$ is given in (\ref{barK}).
As discussed in section \ref{sec:XY}, it is sufficient for our purposes to approximate $ \beta
V(\varphi_{\alpha})$ by the first term in the series of
(\ref{general}),
\begin{eqnarray}
&& \beta V_{XY}(\varphi_{\alpha}) =\sum_{\alpha}\bigg[ - \ln
I_0(\varphi_{\alpha}) +
\nonumber
\\
&& + \left. \varphi_{\alpha}
\left(\frac{I_{1}(\varphi_{\alpha})}{I_0(\varphi_{\alpha})}\right)
-2D J_{\alpha}
\left(\frac{I_1(\varphi_{\alpha})}{I_0(\varphi_{\alpha})}\right)^2
\right] \nonumber
\\
&& - \;
2K
\left(\frac{I_1(\varphi_1)}{I_0(\varphi_1)}
\frac{I_1(\varphi_2)}{I_0(\varphi_2)}
\frac{I_1(\varphi_3)}{I_0(\varphi_3)}\right).
\label{3XY} \end{eqnarray}
This corresponds to making the small $J,K$ approximation in
(\ref{H3}). That is, the model (\ref{full}) is a generalised XY
model, consisting of three spin-like variables defined on each
lattice site $i$, making angles $\theta^{\alpha}_i$ with respect
to some fixed axis, interacting amongst themselves through the
$K$-dependent term.
We have achieved our goal in that, if we expand
$V_{XY}(\varphi_{\alpha})$ of (\ref{3XY}) (or, indeed the full
$V(\varphi_{\alpha})$ of (\ref{general})) in powers of
$\varphi_{\alpha}$ we recover the generic potential (\ref{V3}) as
the first few terms in the series.
However, we can say more. As in our earlier examples, attaching a
nominal energy to each vertex allows us to calculate the density
of vertices. Specifically, the density of vertices on infinite
strings is
\begin{eqnarray}
n_v &=& \frac{1}{N} \langle \sum_i p_i^2 \rangle
= -K\beta
\frac{\partial V_{XY}}{\partial K} \nonumber
\\
&\propto& \left(\frac{I_1(\bar\varphi_1)}{I_0(\bar\varphi_1)}
\frac{I_1(\bar\varphi_2)}{I_0(\bar\varphi_2)}
\frac{I_1(\bar\varphi_3)}{I_0(\bar\varphi_3)}\right)\propto
{\bar\varphi_1}{\bar\varphi_2}{\bar\varphi_3}
\label{nv}
\end{eqnarray}
at the minimum
$({\bar\varphi_1},{\bar\varphi_2},{\bar\varphi_3})$ of $V_{XY}(\varphi_\alpha)$.
The small loops corresponding to the field fluctuations that are
invisible to our mean field analysis contain vertices not counted
in (\ref{nv}).
As in section \ref{sec:XY}, we now look for the extrema of the
potential in order to determine the density of infinite string and the density of vertices. As
expected from section \ref{sec:XY}, a full numerical analysis
(that we have performed) without the XY approximation does not
alter our qualitative conclusions and barely changes our
quantitative results.
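As an illustration of the coupled extremum conditions (a sketch of ours: the gap equations $\varphi_\alpha = 4DJ_\alpha u(\varphi_\alpha) + 2K\,u(\varphi_\beta)u(\varphi_\gamma)$ follow from differentiating (\ref{3XY}); the tensions below are sample values), note how the junction term sources a condensate of the heaviest string even when it is subcritical on its own:

```python
import math

# Modified Bessel function of the first kind, by its power series.
def bessel_i(n, x, terms=40):
    return sum((x / 2)**(2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def u(phi):
    return bessel_i(1, phi) / bessel_i(0, phi)

# Fixed-point iteration of the three coupled gap equations.
def minimise(d, js, kk, iters=2000):
    phi = [1.0, 1.0, 1.0]
    for _ in range(iters):
        un = [u(p) for p in phi]
        phi = [4 * d * js[a] * un[a] + 2 * kk * un[(a + 1) % 3] * un[(a + 2) % 3]
               for a in range(3)]
    return phi

d = 3
js = [0.25, 0.20, 0.12]        # 2DJ = 1.5, 1.2, 0.72: string 3 subcritical
p0 = minimise(d, js, 0.0)      # no junctions: phi_3 -> 0
pk = minimise(d, js, 0.3)      # junctions source the third condensate
assert p0[2] < 1e-6 and pk[2] > 0.01
```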
As we noted earlier, in the main works on QCD
(\cite{Patel1,Patel2}) all flux strings were taken to be of a
single kind, leading to a very different potential, in which
$I_1(\varphi_1) I_1(\varphi_2) I_1(\varphi_3)$ is replaced by
$I_{3}(\varphi)$ for example. In particular, as we shall see later
for (\ref{general}), with equal tensions there is no first-order
transition when there are three string types.
\subsection{$K=0$: no vertices and three independent spins}
We have already anticipated the results for this simple case, but
it is helpful to see them in greater detail. For $K = 0$ the XY
model reduces to three independent, uncoupled, XY models with
$\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ symmetry
under $\varphi_{\alpha}\rightarrow - \varphi_{\alpha}$.
The extremal points are when
%
\begin{equation}
\frac{\partial V_{XY}}{\partial \bar\varphi_\alpha}=0 \qquad
\Leftrightarrow \qquad \frac{\bar\varphi_\alpha}{4DJ_\alpha} -
u(\bar\varphi_\alpha)=0
\end{equation}
%
where $u(\varphi) = I_1(\varphi)/I_0(\varphi)$ as before. One
possible solution is always $\bar\varphi_\alpha=0$, the only real
solution if $2DJ_\alpha(\beta) < 1$.
If $2DJ_\alpha(\beta)
> 1$ then there are two further real solutions, denoted $\pm{\bar
\varphi}_\alpha$, where we take ${\bar \varphi}_\alpha >0$. The
$3^3 = 27$ possible extrema $\varphi = (\bar\varphi_1,\bar
\varphi_2, \bar\varphi_3 )$ then break down into a non-degenerate
$\varphi = (0, 0, 0)$, three doubly degenerate solutions,
exemplified by $\varphi = (\pm{\bar\varphi}_1, 0, 0)$, three
fourfold degenerate solutions, exemplified by
$(\pm{\bar\varphi}_1,\pm{\bar\varphi}_2,0)$ and an eightfold
degenerate solution
$(\pm\bar{\varphi}_1,\pm\bar{\varphi}_2,\pm\bar{\varphi}_3)$. It
is sufficient to restrict ourselves to the positive sector
$\varphi_{\alpha}\geq 0$.
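This counting of real solutions can be checked numerically. The sketch below is illustrative only: the power-series implementation of $I_n$ and the sample values of $2DJ_\alpha$ are our choices, not part of the model. It solves $\bar\varphi = 4DJ\,u(\bar\varphi)$ by bisection and confirms that a non-zero root exists precisely when $2DJ_\alpha(\beta) > 1$.

```python
from math import factorial

def bessel_i(n, x, terms=40):
    # Modified Bessel function I_n(x) via its power series.
    return sum((x / 2.0) ** (2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))

def u(phi):
    # u(phi) = I_1(phi) / I_0(phi), as in the text.
    return bessel_i(1, phi) / bessel_i(0, phi)

def nontrivial_root(two_dj, lo=1e-3, hi=20.0, iters=200):
    """Bisect f(phi) = phi - 4DJ u(phi), with 4DJ = 2*(2DJ).

    Returns None when f has no sign change on (lo, hi), i.e. no
    non-zero extremum exists.
    """
    f = lambda phi: phi - 2.0 * two_dj * u(phi)
    if f(lo) * f(hi) > 0:
        return None
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 2DJ > 1: a pair of non-zero extrema +/- phi_bar exists
assert nontrivial_root(1.2) is not None
# 2DJ < 1: phi = 0 is the only real solution
assert nontrivial_root(0.8) is None
```

Since $u(\varphi)\approx\varphi/2$ for small $\varphi$, the slope of $4DJ\,u(\varphi)$ at the origin is $2DJ$, which is why the sign change (and hence the non-zero root) appears exactly at $2DJ = 1$.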
To determine which of these are maxima, which minima, and which
saddle points we need to calculate the eigenvalues of the Hessian
$ M_{\gamma \delta} = {\partial^2 V_{XY}}/{\partial \varphi_\gamma
\partial \varphi_\delta} $ at the extrema. An extremum is a
minimum if all are positive, and a maximum if all are negative.
Otherwise one is dealing with saddle points.
With $K=0$, the only non-zero entries are on the diagonal with (no
summation)
%
\begin{equation}
M_{\alpha \alpha} = u'(\bar\varphi_\alpha)[1-4DJ_\alpha (\beta)
u'(\bar\varphi_\alpha)] \label{second}.
\end{equation}
%
For the case at hand the classification is simple.
\begin{enumerate}
\item $\beta > \beta_1 (> \beta_2,\beta_3)$. In this range the
global minimum occurs at $\vec{\varphi} = (0,0,0)$.
\item $\beta_2 < \beta < \beta_1$. Now $(\bar{\varphi}_1,0,0)$ is
the {\it global minimum}. [$(0,0,0)$ is now a saddle point.]
\item $\beta_3 < \beta < \beta_2$. In this range it is easy to
see that $({\bar\varphi}_1,{\bar\varphi}_2,0)$ is the global
minimum.
\item $\beta < \beta_3$. Here it is equally straightforward to
see that $(\bar{\varphi}_1,\bar{\varphi}_2,\bar{\varphi}_3)$ is
the global minimum, $(0,0,0)$ is a maximum, and all other points
are saddle points.
\end{enumerate}
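The classification above can be verified from the sign of (\ref{second}). The following sketch (central-difference derivative and parameter values are illustrative choices on our part) checks that the origin is a minimum for $2DJ<1$, destabilises for $2DJ>1$, and that the non-zero root is then a minimum.

```python
from math import factorial

def bessel_i(n, x, terms=40):
    # Modified Bessel function I_n(x) via its power series.
    return sum((x / 2.0) ** (2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))

def u(phi):
    return bessel_i(1, phi) / bessel_i(0, phi)

def du(phi, h=1e-5):
    # Central difference; u is odd, so this is well behaved at phi = 0.
    return (u(phi + h) - u(phi - h)) / (2.0 * h)

def hessian_diag(phi, two_dj):
    # M_aa = u'(phi) [1 - 4DJ u'(phi)], eq. (second), with 4DJ = 2*(2DJ).
    return du(phi) * (1.0 - 2.0 * two_dj * du(phi))

def nontrivial_root(two_dj, lo=1e-3, hi=20.0):
    # Bisection for phi = 4DJ u(phi); assumes 2DJ > 1 so a root exists.
    f = lambda p: p - 2.0 * two_dj * u(p)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Since u'(0) = 1/2, the origin is a minimum for 2DJ < 1 ...
assert hessian_diag(0.0, 0.8) > 0
# ... destabilises once 2DJ > 1, where the non-zero root takes over:
assert hessian_diag(0.0, 1.2) < 0
assert hessian_diag(nontrivial_root(1.2), 1.2) > 0
```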
As expected, as the temperature is increased, infinite strings of
the lightest tension are nucleated first, at $\beta=\beta_1$; then
those of the next lightest tension at $\beta=\beta_2$; and finally
the heaviest strings at $\beta=\beta_3$. When one decreases the
temperature from a very high value, the opposite happens.
\subsection{$K \neq 0$: vertices and three coupled spins}
Let us now consider the effect of Y-junctions in the generalised
XY model of (\ref{3XY}).
For unequal $\sigma_{\alpha}$ the symmetry of $V_{XY}$ is now
explicitly broken from $\mathbb{Z}_2 \times \mathbb{Z}_2 \times
\mathbb{Z}_2$ to $D_2= \mathbb{Z}_2 \times \mathbb{Z}_2$,
generated by
%
\begin{eqnarray}
P_1:\qquad \varphi_1\rightarrow\varphi_1, \qquad\varphi_2\rightarrow -\varphi_2,
\qquad\varphi_3\rightarrow -\varphi_3\nonumber\\
P_2:\qquad \varphi_1\rightarrow -\varphi_1, \qquad\varphi_2\rightarrow\varphi_2,
\qquad\varphi_3\rightarrow -\varphi_3\nonumber\\
P_3:\qquad \varphi_1\rightarrow -\varphi_1, \qquad\varphi_2\rightarrow -\varphi_2,
\qquad\varphi_3\rightarrow\varphi_3\nonumber
\end{eqnarray}
If any tensions are equal the symmetry is correspondingly
increased.
Imposing $\partial V_{XY}/\partial \varphi_\alpha=0$ gives (no
summation)
\begin{eqnarray}
u'(\bar\varphi_\alpha) \left[ {\bar\varphi_\alpha} -
{4DJ_\alpha}(\beta) u(\bar\varphi_\alpha)-2K(\beta)
u(\bar\varphi_\beta)u(\bar\varphi_\gamma)\right] &=& 0
\nonumber
\\
\label{11}
\end{eqnarray}
where $\beta = (\alpha+1) \, {\rm mod} \, 3$, $\gamma =
(\alpha+2) \, {\rm mod} \, 3$. There are obvious solutions to these
coupled equations: $(0,0,0)$ for all $\beta$; $(\varphi_1,0,0)$
with $\varphi_1=\bar{\varphi}_1$ (the standard solution provided
$2DJ_1 >1$). The important point though is that {\it it is not
possible to have a solution with only, say $\bar\varphi_1=0$, and
the other two non-zero}. One can see this from (\ref{11}), where
setting $\bar\varphi_1=0$ would require that one of the other two
$\varphi$'s must vanish.
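This obstruction is easy to confirm numerically from (\ref{11}): with $\bar\varphi_1=0$ but $\bar\varphi_2,\bar\varphi_3\neq 0$, the first component of the gradient is pinned at $-2K\,u'(0)\,u(\bar\varphi_2)u(\bar\varphi_3)\neq 0$. The sketch below uses illustrative values of $2DJ_\alpha$ and $K$ of our own choosing.

```python
from math import factorial

def bessel_i(n, x, terms=40):
    # Modified Bessel function I_n(x) via its power series.
    return sum((x / 2.0) ** (2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))

def u(phi):
    return bessel_i(1, phi) / bessel_i(0, phi)

def du(phi, h=1e-5):
    return (u(phi + h) - u(phi - h)) / (2.0 * h)

def grad(phi, two_dj, K):
    """Left-hand sides of eq. (11) for phi = (phi1, phi2, phi3)."""
    g = []
    for a in range(3):
        b, c = (a + 1) % 3, (a + 2) % 3
        g.append(du(phi[a]) * (phi[a] - 2.0 * two_dj[a] * u(phi[a])
                               - 2.0 * K * u(phi[b]) * u(phi[c])))
    return g

# With phi1 = 0 but phi2, phi3 > 0, the first component cannot vanish:
g = grad([0.0, 1.0, 1.0], two_dj=[1.3, 1.1, 1.05], K=0.3)
assert abs(g[0]) > 1e-3   # forced away from zero by the vertex coupling K
```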
At the extrema the Hessian has the same diagonal elements as in
(\ref{second}), but off-diagonal elements
\begin{eqnarray}
M_{\alpha \beta} &=& -2K(\beta) u'(\bar\varphi_\alpha)
u'(\bar\varphi_\beta) u(\bar\varphi_\gamma)
\end{eqnarray}
We now evaluate these at
the different extrema identified above and discuss the consequences.
\\
\\
\noindent {\bf Case 1}: $\bar\varphi_\alpha=0, \forall\alpha$.
\\
\\
This reduces to the free-string case above, as here the
off-diagonal terms of $M$ also vanish. We have a global minimum for
$\beta
> \beta_1$ as all the eigenvalues are positive. Otherwise, when
$\beta_3 < \beta < \beta_1$ we have a saddle point, and for $\beta
< \beta_3$ a global maximum.
Thus, as the temperature increases (or $\beta$ decreases) the 1
direction will `roll' first.
\\
\\
\noindent {\bf Case 2}: $\bar\varphi_2 = \bar\varphi_3=0$ but
$\bar\varphi_1 \neq 0$.
\\
\\
Now, notice that the temperatures $\beta_1$, $\beta_2$ and
$\beta_3$, as defined for free strings, are in principle relevant
{\it only} when $\bar\varphi_\alpha=0$ since then the
off-diagonal terms of $M$ vanish. When non-zero
$\bar\varphi_\alpha$ enter, we have to worry about the
off-diagonal terms, and find the new eigenvalues. This in turn
will introduce new critical ($K$-dependent) temperatures.
As before, $\bar\varphi_1$ is the solution of the standard
equation {\it provided} $2DJ_1(\beta) > 1$, i.e.~$\beta < \beta_1$.
When $\beta=\beta_2$ the smallest eigenvalue is negative, showing
that $({\bar\varphi}_1,0,0)$ is not a local minimum.
There is an intermediate temperature $\beta_*$, the solution to
\begin{equation}
(1-2DJ_2(\beta_*))(1-2DJ_3(\beta_*)) = K^2(\beta_*) u^2(\bar{\varphi}_1)
\end{equation}
that denotes the transition from local minimum to saddle point.
That is, strings of type 2
and 3 are nucleated at the same time.
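The intermediate temperature $\beta_*$ can be located numerically once functional forms for the couplings are specified. The text does not fix these, so in the sketch below we assume, purely for illustration, Boltzmann-like couplings $J_\alpha(\beta)=e^{-\beta\sigma_\alpha}$ and $K(\beta)=\kappa\,e^{-\beta\sigma_v}$; all parameter values are hypothetical. Because $K$ is small here, $\beta_*$ lands just above $\beta_2$, consistent with the discussion.

```python
from math import exp, factorial, log

def bessel_i(n, x, terms=40):
    return sum((x / 2.0) ** (2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))

def u(phi):
    return bessel_i(1, phi) / bessel_i(0, phi)

def bisect(f, lo, hi, iters=200):
    # Plain bisection; assumes a sign change of f on (lo, hi).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

D = 3.0
sigma = [0.5, 0.8, 1.0]                    # string tensions (illustrative)
J = [lambda b, s=s: exp(-b * s) for s in sigma]
K = lambda b: 0.3 * exp(-1.5 * b)          # vertex coupling (illustrative)

# Free-string Hagedorn temperatures: 2 D J_a(beta_a) = 1, beta3 < beta2 < beta1
beta1, beta2, beta3 = [log(2.0 * D) / s for s in sigma]

def phi1_bar(b):
    # Standard single-field solution phi = 4 D J_1 u(phi); needs 2DJ_1 > 1.
    return bisect(lambda p: p - 4.0 * D * J[0](b) * u(p), 1e-3, 20.0)

def F(b):
    # Defining equation for beta_*:
    # (1 - 2DJ_2)(1 - 2DJ_3) - K^2 u^2(phi1_bar) = 0
    return ((1.0 - 2.0 * D * J[1](b)) * (1.0 - 2.0 * D * J[2](b))
            - (K(b) * u(phi1_bar(b))) ** 2)

beta_star = bisect(F, beta2, beta1)
assert beta2 < beta_star < beta1           # beta_* lies between beta_2 and beta_1
```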
To summarize: for $\beta > \beta_1$ there is a global minimum at
$\bar\varphi_\alpha=0$. For $\beta_* < \beta < \beta_1$ (with
$\beta_2 < \beta_*$) the global minimum is at $(\bar{\varphi}_1,0,0)$.
\\
\\
\noindent {\bf Case 3}: $\bar\varphi_1,\bar\varphi_2,
\bar\varphi_3$ all non-zero.
\\
\\
For $\beta < \beta_*$, type 2 and 3 strings are nucleated since
one cannot have only one non-zero $\bar \varphi_{\alpha}$. Hence we
expect to have non-zero $\bar\varphi_\alpha$ for all $\alpha$.
However, there is nothing at this stage to preclude the
possibility of even further transitions, of first and second
order.
\\
\\
\noindent {\bf Discussion}.
\\
\\
We can get some help from elementary Morse theory, applied to the
whole $\bar\varphi_{\alpha}$ space and not just the positive
sector \cite{Volovik}. Empirically, for the purpose of counting
extrema, equations (\ref{11}) also behave just like the cubic
equations that would follow from keeping only the leading terms
of (\ref{V3}). According to this, when we have 27 extrema, no more
than 14 can be minima. The cases of all $\bar\varphi_{\alpha} = 0$
or one $\bar\varphi_{\alpha}$ non-zero may produce 7 ($7 = 1
+3\times 2$) real extrema and therefore 20 may correspond to
extrema with no $\bar\varphi_{\alpha}$ vanishing. From $D_2$, each
is fourfold degenerate, implying that there may exist five ($5 =
20/4$) different least symmetric extrema, of which no more than
three can be local minima. This still allows for either first or
second-order Hagedorn transitions as $\beta$ is reduced below
$\beta_*$ (or temperature increased).
Now consider the case when two string types have (approximately)
the same tension, and the other is markedly different, e.g.~one
string is very light, and the others heavy. The cases of all
$\bar\varphi_{\alpha} = 0$ or one $\bar\varphi_{\alpha}$ non-zero still
may produce 7 ($7 = 1 + 2+4$) real extrema. However, each extremum
with no $\bar\varphi_{\alpha}$ vanishing is now approximately
eightfold symmetric. As a result we do not expect more than two of
them, of which only one can be a local minimum. This means that
there cannot be any further transitions as $\beta$ is reduced
below $\beta_*$. Although a first order transition cannot be
precluded, empirically we have only found second order transitions
even for $\sigma_{\alpha}$ taking different values.
The situation is summarised schematically in figure \ref{fig:2}.
\begin{figure}
\centerline{\includegraphics[width=0.4\textwidth]{figure2.pdf}}
\caption{Schematic representation of the trajectory of $\varphi$
in field space. The arrow indicates the trajectory as a function
of decreasing temperature.} \label{fig:2}
\end{figure}
From the above discussion it follows that, for equal $\sigma_{\alpha}$
(with twelve-fold degeneracy when all $\varphi_{\alpha}$ are non-zero), there
is just one second-order transition.
This is relevant to an idealised version of QCD. However, as it
stands the analysis above is restricted to closed or infinite
string. The addition of quarks to string ends changes the picture
again. Further, since flux tubes are not fundamental in any sense,
the `Hagedorn' transition in QCD has a different status, with no
ambiguity about increasing the temperature beyond it.
\\
\\
\noindent{\bf Density of vertices:}
\\
\\
Finally we end this section with a comment on the density of vertices in
the different phases.
From (\ref{nv}), and since $I_m(0) = 0$ for $m\geq 1$ it follows
that, on differentiating $V(\varphi_\alpha )$ with respect to $K$,
\begin{equation}
n_v = 0
\end{equation}
when any ${\bar\varphi}_\alpha = 0$.
Thus, we have a non-zero density of vertices on infinite strings only for
$\beta<\beta_*$, i.e.~at temperatures high enough for infinite
strings of all
types to be present. This is shown in Figure \ref{fig:1}.
\\
\\
\section{Conclusions}
The main idea of this paper has been very simple: that we can
describe the thermodynamics of a network of strings of three
different types (and tensions) by an effective three-field theory
whose potential $ V(\varphi_1,\varphi_2,\varphi_3)$ takes the form
\begin{eqnarray}
\beta V = \sum_{\alpha}\bigg[
\frac{1}{2}m_{\alpha}^2\varphi_{\alpha}^2 +
\frac{1}{4}\lambda_{\alpha}\varphi_{\alpha}^4
\bigg] +\mu\varphi_1\varphi_2\varphi_3 + ...\label{V3b}
\end{eqnarray}
The interaction coefficient $\mu$ reflects the presence of
Y-junctions at which one string of each type meet. The
coefficients are temperature dependent, with $m_{\alpha}^2\propto (1-T/T_{\alpha})$
in the vicinity of its zero. If $\mu$ were zero, the $T_{\alpha}$
would be Hagedorn temperatures for the individual string types.
As a result, the discrete symmetries of $V$ are {\it broken}
at high temperature, {\it restored} at low temperature, in a reversal of the usual pattern.
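This reversal can be seen directly from a coarse minimisation of (\ref{V3b}). In the sketch below all coefficient values ($\lambda_\alpha=1$, $\mu=0.1$ and the $T_\alpha$) are illustrative assumptions of ours; the grid search simply checks where the global minimum sits at low and at high temperature.

```python
from itertools import product

# Illustrative parameters (not from the text): lambda_a = 1, small mu > 0.
T_alpha = (1.0, 1.2, 1.5)
lam = 1.0
mu = 0.1

def beta_V(phi, T):
    # beta V = sum_a [ (1/2) m_a^2 phi_a^2 + (1/4) lam phi_a^4 ]
    #          + mu phi1 phi2 phi3,  with m_a^2 ~ (1 - T/T_a)
    m2 = [1.0 - T / Ta for Ta in T_alpha]
    quad = sum(0.5 * m2[a] * phi[a] ** 2 + 0.25 * lam * phi[a] ** 4
               for a in range(3))
    return quad + mu * phi[0] * phi[1] * phi[2]

def grid_argmin(T, span=2.0, steps=17):
    # Coarse brute-force search for the global minimum on a cube.
    pts = [-span + 2 * span * i / (steps - 1) for i in range(steps)]
    return min(product(pts, repeat=3), key=lambda p: beta_V(p, T))

# Low temperature: symmetric phase, global minimum at the origin
assert all(abs(x) < 1e-12 for x in grid_argmin(0.5))
# High temperature: all three fields condense (symmetry broken at high T)
assert all(abs(x) > 0.5 for x in grid_argmin(3.0))
```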
Our main results, summarised in Figs.~1 and 2, essentially follow
from the form of (\ref{V3b}) alone, supplemented by the
understanding that the order parameters characterise
infinite string, and not loops. In consequence, in a network of
strings of different tensions it is the lightest strings whose
{\it infinite} strings survive last after Hagedorn transitions,
and even those disappear in turn, to leave a collection of small
loops. This is despite the presence of junctions between strings
of different types. That is, the only r\^ole that the junctions
play is in these small loops of string whose presence is the only
memory of the initial proliferation of strings of all types.
The burden of this paper has been to provide a model in which we
can see how the potential (\ref{V3b}) is realised, almost as proof
of principle. This has turned out to be a non-trivial task and the
model at hand, an extension of similar models used in QCD in a
much more restricted situation, has its faults. As well as picking
a path through the `fermionic' lattice artefacts, as in the
calculations for QCD strings, our strings are also assumed to be
non-interacting and static. Furthermore we are often pushed to
consider the model in a limit of parameter space where
approximations are not always well controlled (just as in
\cite{Patel1,Patel2,MR}). Our one string bosonic model
demonstrated how, for a single field, $\mu\varphi^3$ terms arise
naturally. However, being unable to generalise the bosonic model
to three string types, we have also had to introduce massive
vertices in the three string model as an artefact of the lattice
mean-field approximation. Naturally, any specific model will give
more information than just the leading terms of $V$ of
(\ref{V3b}). In our case the model is a generalised XY model, in
which transitions are seen in the language of spin ordering and
which, in principle, permits going beyond the mean-field
approximation.
As suggested above, our analysis points to the final stage of
the transitions as being that of a single string type, collapsing
into loops, which was the original case to be studied, primarily
in the context of Nambu-Goto strings. In that case, the full
statistical mechanics has been studied in detail, and can be
generalised to non-static strings. The result, however, is the
same! Indeed, rather than consider random walks in space, one can
consider simultaneous independent random walks on the
Kibble-Turok spheres for left and right-moving modes respectively \cite{AlbrechtTurok}. The
microstate density at the transition is the square of that for simple random walks,
but integrating over centre-of-mass coordinates reduces the state
density to that of (appropriately defined) single static random
walks.
Another way to make this adiabatic picture dynamical is to attempt
to determine the time scales of the string network transitions
from the timescales of the effective field theory, using the
Kibble scenario \cite{Kibble}. This relies on little more than
causal bounds, and the analysis is under way.
\section*{Acknowledgements}
We thank Ed Copeland, Mark Hindmarsh, Tom Kibble and Mairi
Sakellariadou for useful discussions. RJR thanks the CNRS and the University of Paris 7 for
financial support, and is grateful to APC, Paris 7, and the LPT in Orsay for warm
hospitality.
|
0803.2427
|
\section{Introduction}
In the past few years, dualities were successfully employed
as the linking element between the multiple access channel (MAC)
and the broadcast channel (BC). Thanks to various versions of dualities,
many regions of the MAC and the BC were classified to be identical
under a sum-power constraint.
First, the signal-to-interference-and-noise-ratio (SINR)
regions under single-stream transmission per user
were shown to be identical in \cite{schubert02k,Viswanath}.
Second, the mean-square-error (MSE) regions of the MAC and the BC coincide,
as has been proven by means of the SINR duality in
\cite{schubert05c} and later in
\cite{tenenbaum1}, or directly in \cite{MeJoHuUt06,HuJoUt08}.
And third, the rate regions of the MAC and the BC under Gaussian signaling
and nonlinear interference cancellation have recently been shown
to be the same, see \cite{Jindal_duality} for the single-antenna case,
\cite{rate_duality} for the multi-antenna case, and
\cite{Weingarten} for the coincidence of the dirty-paper coding rate region
and the capacity region.
A stream-wise duality with power constraints on subsets of antennas
which holds for the optimum filters of a quality-of-service power minimization
was presented in~\cite{YuL07a} for systems with and without nonlinear interference
cancellation. Due to its stream-wise nature, conversion from one domain to
the dual is complicated since it is not clear how to allocate the SINRs
to the users in case of multi-antenna terminals.
Besides the capability of proving congruency of two regions,
dualities also deliver explicit conversion formulas
for switching from one domain to the other.
In case of the rate duality in \cite{rate_duality},
(arbitrary) optimum receive filters generating sufficient
statistics are assumed both in the MAC and in the BC.
Given transmit covariance matrices in the MAC
are converted to transmit covariance matrices in the dual BC.
Dependencies during these transformations prevent a parallel processing
and force a serial implementation. In addition, the received
data streams have to be decoded jointly which entails a high computational
complexity.
Our contribution in this paper is twofold. First, we
present a novel rate duality for systems with nonlinear interference
cancellation. One of the key steps involved is the change from
the covariance matrices to the transmit filters, by which we gain an isometry
as a degree of freedom. This degree of freedom is then used to decorrelate
every point-to-point link thus making a fast parallel stream-wise
decoding possible. As the streams of a single
user now do not interfere with each other, we can employ
an SINR duality in the style of our MSE duality in \cite{MeJoHuUt06,HuJoUt08}.
Therein, the transmit filters in the dual domain are scaled receivers
of the primal domain and the receive filters
are scaled
transmitters of the primal domain. We end up with a system of linear equations
to determine these scaling factors.
Our second contribution is a rate duality for linear filtering
applicable to multi-antenna terminals where different streams
of a user are not treated as self-interference.
Up to now, such a duality did not exist and
hitherto existing dualities for linear filtering
treat different streams of a user
as virtual users contributing interference
to the user under consideration, see
\cite{schubert02k,Viswanath,Song2007} for example.
In general, the maximum possible rate cannot be obtained when a
duality based on virtual users is applied.
The underlying framework for the proposed linear duality is similar
to the proposed nonlinear duality presented in the following.
The key observation is again the fact that decorrelation allows
for a stream-wise decoding which also achieves
the rate that is possible under joint decoding.
\section{System Model}
Two systems are considered, namely the MAC where $K$ multi-antenna users
send their data to a common base station which is equipped with $N$
antennas, and the BC where the signal flow is reversed, i.e.,
the base station serves the users. In the former case the transmission
between the $k$th user and the base station is described
by the channel matrix $\B{H}_k\in\mathbb{C}^{N\times r_k}$
with $r_k$ denoting the number of transmit antennas at user~$k$.
The BC link, however, is characterized by the Hermitian transposed channel
matrix $\B{H}_k^{\He}$. User $k$ multiplexes $L_k$ data streams.
If interference cancellation is applied in the MAC, we
assume for the sake of readability that the decoding order
is chosen such that user~$1$ is decoded last, whereas the reversed
encoding order is chosen in the BC, i.e., user~$1$ is precoded first.
For different sortings, the users have to be relabeled correspondingly.
Under these assumptions, the rate of user~$k$ in the MAC with
nonlinear interference cancellation reads as~\cite{Yu04}
\begin{equation}
R_k^{\mathrm{MAC}} = \log_2\frac{\big|\sigma_{\eta}^2\mathbf{I}_N +\sum_{\ell \leq
k}\B{H}_\ell\B{Q}_\ell\B{H}_{\ell}^{\He}\big|}
{\big|\sigma_{\eta}^2\mathbf{I}_N +\sum_{\ell <
k}\B{H}_\ell\B{Q}_\ell\B{H}_{\ell}^{\He}\big|},
\label{MAC_rate}
\end{equation}
where $\sigma_{\eta}^2$ is the noise variance per antenna and
$\B{Q}_\ell\in\mathbb{C}^{r_{\ell}\times r_{\ell}}$ denotes the transmit
covariance matrix of user~$\ell$. In contrast, user~$k$'s
rate in the BC with nonlinear dirty paper coding is~\cite{rate_duality}
\begin{equation}
R_k^{\mathrm{BC}} = \log_2\frac{\big|\sigma_{\eta}^2\mathbf{I}_{r_k}+\B{H}_k^{\He}\sum_{\ell\geq
k}\B{S}_\ell\B{H}_k\big|}{\big|\sigma_{\eta}^2\mathbf{I}_{r_k}+\B{H}_k^{\He}\sum_{\ell>
k}\B{S}_\ell\B{H}_k
\big|},
\label{BC_rate}
\end{equation}
where $\B{S}_\ell\in\mathbb{C}^{N\times N}$ is the BC transmit covariance
matrix of user~$\ell$.
If only linear filtering without interference subtraction is applied,
user~$k$ experiences interference from all other users.
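The log-determinant structure of (\ref{MAC_rate}) is most transparent in the single-antenna special case, where the determinant ratios telescope and the sum rate depends only on the total received power. The sketch below is a scalar specialisation with illustrative channel gains and powers of our own choosing.

```python
from math import log2

# Scalar (single-antenna) specialisation of the MAC rate with successive
# interference cancellation: user 1 is decoded last.
sigma2 = 1.0                       # noise variance
h = [1.0, 0.7, 0.4]                # scalar channel gains (illustrative)
q = [0.5, 1.0, 2.0]                # transmit powers (illustrative)

def mac_rate(k):
    # R_k = log2( (sigma^2 + sum_{l<=k} |h_l|^2 q_l) /
    #             (sigma^2 + sum_{l<k}  |h_l|^2 q_l) )
    num = sigma2 + sum(h[l] ** 2 * q[l] for l in range(k + 1))
    den = sigma2 + sum(h[l] ** 2 * q[l] for l in range(k))
    return log2(num / den)

rates = [mac_rate(k) for k in range(3)]
# The ratios telescope: the sum rate equals the capacity of the full
# received signal, independent of the decoding order.
sum_rate = log2(1.0 + sum(h[l] ** 2 * q[l] for l in range(3)) / sigma2)
assert abs(sum(rates) - sum_rate) < 1e-12
```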
\section{Rate Duality for Systems Utilizing Interference Subtraction}
\subsection{Benefits of the Rate Duality with Interference Cancellation}
Besides the ability to show congruency between the two
capacity regions, the decisive reason for utilizing the rate duality is
that all rate expressions are concave functions
of the transmit covariance matrices in the MAC but not in the BC.
Moreover, the optimal sorting of the users can easily be obtained in the MAC.
As a consequence, many rate-based maximizations can be solved
with efficient algorithms converging to the global optimum in the
MAC and afterwards converted to the BC by means of the duality conversion
formulas.
\subsection{State-of-the-Art Duality}
By means of the MAC-to-BC conversion, we illustrate the state-of-the-art
rate duality from \cite{rate_duality}.
Both in the MAC and in the BC, all rate expressions depend only on the
transmit covariance matrices and not on the matrix valued receive filters
since they are implicitly assumed to generate sufficient statistics.
Based on these statistics, the $L_k$ data streams of user $k$ have
to be decoded \emph{jointly}. Given a set of transmit covariance matrices
$\{\B{Q}_k\}$ in the MAC which fulfills a total transmit power constraint
and obtains a rate tuple $R_1^{\mathrm{MAC}},\ldots, R_K^{\mathrm{MAC}}$
under the assumption of optimum receive filters,
the duality in \cite{rate_duality} generates a set of transmit covariance matrices $\{\B{S}_k\}$
for the BC that
fulfills the same total transmit power constraint
and achieves the same rate tuple $R_1^{\mathrm{BC}},\ldots,R_K^{\mathrm{BC}}$.
In the BC, optimum receivers
yielding sufficient statistics are again required and
all streams of every individual user have to be decoded jointly as well.
Two key methods utilized
are the \emph{effective channel} and the \emph{flipped channel} idea.
The former one implies that the capacity of a point-to-point MIMO system
with channel matrix $\B{H}$
subject to an additive Gaussian distortion (noise plus independent interference)
with covariance matrix $\B{X}$ equals the capacity of a point-to-point system
with \emph{effective channel} matrix $\B{L}^{-1}\B{H}$ subject to
additive Gaussian distortion with identity covariance matrix
if $\B{X} = \B{L}\B{L}^{\He}$.
Given an arbitrary effective channel of a point-to-point system,
a system with reversed signal flow and Hermitian effective channel
(\emph{flipped channel}) has the same capacity \cite{Telatar}.
According to (\ref{MAC_rate}), the rate of user~$k$ in the MAC can
be expressed as
\begin{equation}
R_k^{\mathrm{MAC}} = \log_2
\left|\mathbf{I}_N+\B{X}_k^{-1}\B{H}_k\B{Q}_k\B{H}_k^{\He}\right|,
\label{MAC_rate_with_X}
\end{equation}
with the substitution
$\B{X}_k=\sigma_{\eta}^2\mathbf{I}_N+\sum_{\ell=1}^{k-1}\B{H}_\ell\B{Q}_\ell\B{H}_{\ell}^{\He}$.
Introducing the Cholesky
decomposition $\B{X}_k=\B{L}_k\B{L}_k^{\He}$, applying the determinant
equality $|\mathbf{I}_a+\B{AB}|=|\mathbf{I}_b+\B{BA}|$ for arbitrary
$\B{A}$ and $\B{B}$ of appropriate dimensions,
and inserting two identity matrices
$\mathbf{I}_{r_k}=\B{F}_k^{-1}\B{F}_k=\B{F}_k^{\He}\B{F}_k^{-\He}$, (\ref{MAC_rate_with_X})
can be expressed as
\[
R_k^{\mathrm{MAC}} = \log_2\left|\mathbf{I}_N\!+\!\B{L}_k^{-1}\B{H}_k\B{F}_k^{-1}\B{F}_k\B{Q}_k\B{F}_k^{\He}
\B{F}_k^{-\He}\B{H}_k^{\He}\B{L}_k^{-\He}\right|.
\]
Now, $\B{L}_k^{-1}\B{H}_k\B{F}_k^{-1}$ can be regarded as the effective
channel for the covariance matrix $\B{F}_k\B{Q}_k\B{F}_k^{\He}$. How
$\B{F}_k$ must be chosen will be clarified below.
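The determinant equality $|\mathbf{I}_a+\B{AB}|=|\mathbf{I}_b+\B{BA}|$ used in this step is Sylvester's determinant identity. As a small self-contained sanity check (matrices and hand-rolled determinant routines are purely illustrative, real-valued for brevity):

```python
def matmul(A, B):
    # Plain triple-loop matrix product for small real matrices.
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def add_identity(M):
    return [[M[i][j] + (1.0 if i == j else 0.0) for j in range(len(M))]
            for i in range(len(M))]

def det(M):
    # Determinant by Laplace expansion along the first row (fine for tiny M).
    if len(M) == 1:
        return M[0][0]
    total = 0.0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1.0) ** j * M[0][j] * det(minor)
    return total

# A is 2x3, B is 3x2, so AB is 2x2 while BA is 3x3 (values illustrative).
A = [[1.0, 2.0, 0.5], [0.0, -1.0, 3.0]]
B = [[0.2, 1.0], [1.5, 0.0], [-0.3, 0.7]]

lhs = det(add_identity(matmul(A, B)))   # |I_2 + AB|
rhs = det(add_identity(matmul(B, A)))   # |I_3 + BA|
assert abs(lhs - rhs) < 1e-9            # Sylvester's determinant identity
```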
Flipping the channel, the results in \cite{rate_duality} ensure the existence of
a covariance matrix $\B{Z}_k\in\mathbb{C}^{N\times N}$ with
\begin{equation}
\begin{split}
R_k^{\mathrm{MAC}}& =\log_2\left|\mathbf{I}_{r_k}+\B{F}_k^{-\He}\B{H}_k^{\He}\B{L}_k^{-\He}
\B{Z}_k \B{L}_k^{-1}\B{H}_k\B{F}_k^{-1}\right|,\\
\tr(\B{Z}_k) & \leq \tr(\B{F}_k\B{Q}_k\B{F}_k^{\He}).
\end{split}
\label{flipped_MAC_rate}
\end{equation}
The rate of user~$k$ in the BC is (cf. Eq.~\ref{BC_rate})
\begin{equation}
\begin{split}
R_k^{\mathrm{BC}} &=\log_2\left|\mathbf{I}_{r_k}+\B{Y}_k^{-1}\B{H}_k^{\He}\B{S}_k\B{H}_k\right|\\
& = \log_2\left|\mathbf{I}_{r_k}+\B{F}_k^{-\He}\B{H}_k^{\He}\B{S}_k\B{H}_k\B{F}_k^{-1}\right|,
\end{split}
\label{BC_rate_with_Y}
\end{equation}
with the substitution
$\B{Y}_k\! =\! \sigma_{\eta}^2\mathbf{I}_{r_k}\!+\!\sum_{\ell=k+1}^K\B{H}_k^{\He}\B{S}_{\ell}\B{H}_k\!=\!\B{F}_k^{\He}\B{F}_k$.
Equality between $R_k^{\mathrm{MAC}}$ in (\ref{flipped_MAC_rate})
and $R_k^\mathrm{BC}$ in (\ref{BC_rate_with_Y})
holds, if
\begin{equation}
\B{S}_k = \B{L}_k^{-\He}\B{Z}_k\B{L}_k^{-1}.
\end{equation}
Implicitly, $\B{Z}_k$ depends on $\B{F}_k$ as will be shown soon.
Thus, $\B{S}_k$ depends on $\B{Y}_k$ which itself is a function of all
$\B{S}_\ell$ with $\ell>k$. These dependencies require that
$\B{S}_k$ has to be computed before $\B{S}_{k-1}$ and consequently,
one has to start with the computation of $\B{S}_K$ followed by
$\B{S}_{K-1},\ldots,\B{S}_1$.
It remains to determine the matrices $\B{Z}_k \ \forall k$.
Introducing the reduced \emph{singular-value-decomposition} (rSVD)
\begin{equation}
\B{L}_k^{-1}\B{H}_k\B{F}_k^{-1} = \B{U}_k\B{D}_k\B{V}_k^{\He} \in \mathbb{C}^{N\times r_k}
\label{rSVD}
\end{equation}
with the two (sub-)unitary matrices $\B{U}_k\in\mathbb{C}^{N\times\rank(\B{H}_k)}$
and $\B{V}_k\in\mathbb{C}^{r_k\times \rank(\B{H}_k)}$,
the matrix $\B{Z}_k$ reads as
\begin{equation}
\B{Z}_k = \B{U}_k\B{V}_k^{\He}\cdot
\B{F}_k\B{Q}_k\B{F}_k^{\He}\cdot
\B{V}_k\B{U}_k^{\He}.
\label{flipped_matrix}
\end{equation}
The proof
for the sum-power conservation
can be found
in~\cite{rate_duality}.
From the MAC-to-BC conversion, it can be concluded that every rate tuple in the
MAC can also be achieved in the dual BC.
Conversely, the transformation from the BC to the MAC which follows from the
same framework, states that every rate tuple in the BC can also be achieved in the
MAC. Hence, the duality of these two domains is proven and as a consequence,
their capacity regions are congruent.
Summing up, the state-of-the-art rate duality including interference
cancellation is serial in two senses: First, it requires a serial implementation of the covariance
matrix conversion due to the dependencies of $\B{S}_k$ on $\B{S}_\ell$ with $\ell>k$.
Second, the application of the duality requires that the different streams
associated with a user are decoded jointly or, at best, in a serial fashion.
\subsection{Proposed Filter-Based Duality}
The previously described state-of-the-art rate duality
is mainly deduced from information theoretic considerations, where
optimum receivers generate sufficient statistics and capacity is
achieved via joint decoding with inter- and intra-user
successive interference cancellation.
Approaching from a signal processing point of view
enables us to derive a novel intuitive duality of low complexity.
Switching from arbitrary sufficient
statistics generating optimum receivers to MMSE receivers, we are able
to express all rates in terms of error covariance matrices, which
in turn only depend on the transmit covariance matrices, i.e., on the
outer product of the precoding filters.
The remaining degree of freedom is a unitary
rotation and we utilize this isometry in order to decorrelate
every single point-to-point link. Doing so, the error covariance matrix
becomes diagonal and capacity is achieved with \emph{separate} stream-wise
decoding making intra-user interference cancellation superfluous.
The fact that stream-wise encoding/decoding achieves capacity has already
been observed in \cite{Viswanath,Tse_capacity}. There, however, intra-user
successive decoding must be applied and all streams are decoded one by one.
As all rates can now be expressed as functions of the SINRs
of the individual streams, we apply a low-complexity SINR duality
in the style of our MSE duality in \cite{HuJoUt08,MeJoHuUt06}. In a nutshell, the scaled MMSE receivers
are used as precoders in the dual domain and scaled precoding filters serve
as the receive filters in the dual domain. This dual domain features the
same SINR values as the original one and therefore achieves the same user rates.
In the following, we give an elaborate derivation of the MAC-to-BC conversion.
\subsubsection{Derivation}
Assuming that every MAC covariance matrix
$\B{Q}_k=\B{T}_k\B{T}_k^{\He}$ is
generated by the precoder $\B{T}_k\in\mathbb{C}^{r_k\times L_k}$,
the symbol estimate of user~$k$ in the MAC is
\[
\hat{\B{s}}_k=\B{G}_k\Big[\B{H}_k\B{T}_k\B{s}_k +
\sum_{\ell>k}\B{H}_\ell\B{T}_\ell\B{s}_\ell +
\sum_{\ell<k}\B{H}_\ell\B{T}_\ell\B{s}_\ell + \B{\eta}\Big],
\]
where $\B{G}_k$ denotes the receive filter of user~$k$,
$\B{s}_k$ its data vec\-tor with identity covariance matrix, and $\B{\eta}$ the additive noise.
Since interference caused by users
$\ell>k$
is removed by successive interference cancellation, the MMSE receiver
for user~$k$ is
\begin{equation}
\B{G}_k=\B{T}_k^{\He}\B{H}_k^{\He}
\Big(\sum_{\ell\leq k}\B{H}_\ell\B{T}_\ell\B{T}_{\ell}^{\He}\B{H}_{\ell}^{\He}
+\sigma_{\eta}^2\mathbf{I}_N\Big)^{-1}.
\label{MMSE_receiver}
\end{equation}
Using (\ref{MMSE_receiver}) and the matrix-inversion lemma,
the MMSE error covariance matrix
$\B{C}_k\!=\!\Expect [(\B{s}_k\!-\!\hat{\B{s}}_k)(\B{s}_k\!-\!\hat{\B{s}}_k)^{\He}]$
reads as
\begin{equation}
\B{C}_k = \mathbf{I}_{L_k}\!-\B{G}_k\B{H}_k\B{T}_k
= \left[\mathbf{I}_{L_k}\!+\!\B{T}_k^{\He}\B{H}_k^{\He}
\B{X}_k^{-1}\B{H}_k\B{T}_k\right]^{-1}\!,
\label{MMSE_cov_matrix}
\end{equation}
with
$\B{X}_k=\sigma_{\eta}^2\mathbf{I}_N+\sum_{\ell=1}^{k-1}\B{H}_\ell\B{T}_\ell\B{T}_{\ell}^{\He}\B{H}_{\ell}^{\He}$.
The rate of user~$k$ can be expressed
in terms of its error covariance matrix
\begin{equation}
R_k^\mathrm{MAC} = \log_2|\B{C}_k^{-1}|=-\log_2|\B{C}_k|,
\end{equation}
cf.~(\ref{MAC_rate_with_X}).
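The identity (\ref{MMSE_cov_matrix}) is just the matrix-inversion lemma; it is easily verified in the degenerate scalar, single-user case (real-valued, no interference, numbers illustrative), where both sides reduce to $\sigma_\eta^2/(\sigma_\eta^2+h^2t^2)$.

```python
from math import log2

# Scalar, single-user sanity check of the MMSE error-covariance identity
# C = 1 - g h t = (1 + t^2 h^2 / sigma^2)^(-1)   (real-valued for brevity)
h, t, sigma2 = 0.8, 1.3, 0.5        # channel, precoder, noise (illustrative)

g = t * h / (h ** 2 * t ** 2 + sigma2)          # scalar MMSE receiver
C = 1.0 - g * h * t                             # error covariance (scalar)

assert abs(C - 1.0 / (1.0 + h ** 2 * t ** 2 / sigma2)) < 1e-12
# The rate follows from the error covariance: R = log2(1/C)
assert abs(log2(1.0 / C) - log2(1.0 + h ** 2 * t ** 2 / sigma2)) < 1e-12
```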
Note that the rate of user~$k$ is invariant under right multiplication
of $\B{T}_k$ by a unitary matrix $\B{W}_k$,
yielding $\B{T}_k^{\prime}=\B{T}_k\B{W}_k$.
Moreover, the rate expressions of the other users depend only on the transmit
covariance matrices and not on the filters themselves and are therefore also
invariant under this isometry. Last but not least, the transmit power
$\tr(\B{Q}_k)=\tr(\B{T}_k\B{T}_k^{\He})=\tr(\B{T}_k^{\prime}\B{T}_k^{\prime\He})$ is invariant under this
isometry~$\B{W}_k$. Although $\B{W}_k$ does not influence the interference
covariance matrix experienced by any other user, it can be used as
a spatial decorrelation filter for every point-to-point link which in
conjunction with the MMSE
receiver $\B{G}_k^\prime = \B{W}_k^{\He}\B{G}_k$ diagonalizes the error-covariance
matrix $\B{C}_k$. To this end, $\B{W}_k$ must be chosen as the
eigenbasis of $\B{G}_k\B{H}_k\B{T}_k$ which is also the eigenbasis of
$\B{T}_k^{\He}\B{H}_k^{\He}\B{X}_k^{-1}\B{H}_k\B{T}_k$. Due to the decorrelation,
all point-to-point links from
the users to the base station achieve capacity without intra-user successive
interference cancellation thus making separate stream decoding possible.
This way, the rate of user~$k$ can be expressed as the sum of the individual
streams' rates, i.e., $R_k^\mathrm{MAC} =\sum_{i=1}^{L_k} R_{k,i}^{\mathrm{MAC}}$, where
\[
R_{k,i}^{\mathrm{MAC}} = \log_2(1+\mathrm{SINR}_{k,i}^{\mathrm{MAC}}).
\]
Let $\B{t}_{k,i}^{\prime}$ be the $i$th column of $\B{T}_k^\prime$
and $\B{g}_{k,i}^{\prime\Tr}$ be the $i$th row of $\B{G}_k^{\prime}$, then
the general SINR definition in the MAC
\begin{equation}
\mathrm{SINR}_{k,i}^{\mathrm{MAC}} =
\frac{|\B{g}_{k,i}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^\prime|^2}
{\B{g}_{k,i}^{\prime\Tr}
\Big(\B{X}_k
+\sum_{m\neq i}\B{H}_k\B{t}_{k,m}^\prime\B{t}_{k,m}^{\prime\He}
\B{H}_k^{\He}
\!\Big) \B{g}_{k,i}^{\prime *}
}
\label{SINR_MAC}
\end{equation}
reduces for the special choice of the decorrelation filter $\B{W}_k$~to
\begin{equation}
\mathrm{SINR}_{k,i}^{\mathrm{MAC}} =
\frac{|\B{g}_{k,i}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^\prime|^2}
{\sigma_{\eta}^2\|\B{g}_{k,i}^{\prime}\|_2^2 + \sum_{\ell<k}\sum_{m=1}^{L_\ell}
|\B{g}_{k,i}^{\prime\Tr}\B{H}_{\ell}\B{t}_{\ell,m}^{\prime}|^2}
\label{SINR_MAC_simplified}
\end{equation}
i.e., the summation over~$m$ in the denominator
of~(\ref{SINR_MAC}) vanishes as $\B{G}_k^{\prime}\B{H}_k\B{T}_k^{\prime}$ is diagonal. Inserting
$\B{G}_k^{\prime}$ into (\ref{SINR_MAC_simplified}) yields
\begin{equation}
\mathrm{SINR}_{k,i}^{\mathrm{MAC}} = \B{t}_{k,i}^{\prime\He}\B{H}_k^{\He}\B{X}_k^{-1}\B{H}_k\B{t}_{k,i}^{\prime},
\end{equation}
according to the diagonal entries of
$\B{W}_k^{\He}\B{C}_k^{-1}\B{W}_k$, see (\ref{MMSE_cov_matrix}).
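The decorrelation step can be checked numerically. The following NumPy sketch (our own illustrative test code with random channels and assumed dimensions, not code from the paper) verifies for a single user, received in noise only, that the eigenbasis choice renders $\B{G}_k^\prime\B{H}_k\B{T}_k^\prime$ diagonal and that the general SINR expression collapses to $\B{t}_{k,i}^{\prime\He}\B{H}_k^{\He}\B{X}_k^{-1}\B{H}_k\B{t}_{k,i}^{\prime}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, sigma2 = 4, 2, 0.5          # receive antennas, streams, noise power

# random channel H (N x N) and precoder T (N x L) for user k; the
# interference-plus-noise covariance X_k is noise only here, i.e.,
# no users l < k interfere
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
T = rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L))
X = sigma2 * np.eye(N)

# MMSE receiver G_k = T^H H^H (X_k + H T T^H H^H)^{-1}
G = T.conj().T @ H.conj().T @ np.linalg.inv(X + H @ T @ T.conj().T @ H.conj().T)

# decorrelation: W_k = eigenbasis of the Hermitian matrix G_k H_k T_k
_, W = np.linalg.eigh(G @ H @ T)
Gp = W.conj().T @ G        # G_k' = W_k^H G_k
Tp = T @ W                 # T_k' = T_k W_k

# G' H T' is diagonal -> separate stream decoding becomes possible
D = Gp @ H @ Tp
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-8)

# per-stream SINR via the general expression ...
sinr_general = []
for i in range(L):
    g, t = Gp[i, :], Tp[:, i]
    intra = sum(np.outer(H @ Tp[:, m], (H @ Tp[:, m]).conj())
                for m in range(L) if m != i)
    num = abs(g @ H @ t) ** 2
    den = (g @ (X + intra) @ g.conj()).real
    sinr_general.append(num / den)

# ... equals t'^H H^H X^{-1} H t' after the decorrelation
sinr_closed = [(Tp[:, i].conj() @ H.conj().T @ np.linalg.inv(X)
                @ H @ Tp[:, i]).real for i in range(L)]
assert np.allclose(sinr_general, sinr_closed)
```

The equality rests on the $\B{X}_k^{-1}$-orthogonality of the effective stream channels produced by the decorrelation; by the matrix inversion lemma the inter-stream terms then drop out of the MMSE SINR.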
In the dual BC with Hermitian channels, dirty paper coding for
inter-user interference presubtraction is applied with re\-versed order.
The receivers perform a stream-wise decoding based on the outputs
of the receive filters $\B{B}_k \ \forall k$. Given precoders $\B{P}_1,\ldots,\B{P}_K$,
the SINR of user~$k$'s stream $i$ is
\begin{equation}
\label{SINR_BC}
\mathrm{SINR}_{k,i}^{\mathrm{BC}} = \frac{|\B{b}_{k,i}^{\Tr}\B{H}_k^{\He}\B{p}_{k,i}|^2}
{\B{b}_{k,i}^{\Tr}\Big(
\B{Y}_k+
\sum_{m\neq i}\B{H}_k^{\He}\B{p}_{k,m}\B{p}_{k,m}^{\He}\B{H}_k
\Big)\B{b}_{k,i}^*},
\end{equation}
and the rate of user~$k$ in the BC with stream-wise decoding reads as
$R_k^{\mathrm{BC}} = \sum_{i=1}^{L_k}\log_2(1+\mathrm{SINR}_{k,i}^{\mathrm{BC}})$.
Besides the de\-corre\-lation, the flipping of transmit and receive \emph{filters}
is the core of our duality: Scaled transmit matrices (including the decorrelation) in the MAC act as receive filters in the
BC, and
scaled receivers in the MAC act as transmit filters in the BC:
\vspace{-1mm}
\begin{equation}
\B{p}_{k,i}=\alpha_{k,i}\B{g}_{k,i}^{\prime*}\quad \text{and}\quad \B{b}_{k,i}
= \alpha_{k,i}^{-1}\B{t}_{k,i}^{\prime *}.
\label{filter_conversion}
\end{equation}
Plugging (\ref{filter_conversion}) into the general BC SINR expression
(\ref{SINR_BC})
we obtain by means of the diagonal structure of $\B{G}_k^{\prime}\B{H}_k\B{T}_k^{\prime}$
\[
\mathrm{SINR}_{k,i}^{\mathrm{BC}} =\frac{\alpha_{k,i}^2|\B{g}_{k,i}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^\prime|^2}
{\sigma_{\eta}^2\|\B{t}_{k,i}^\prime\|_2^2+\sum_{\ell>k}\sum_{m=1}^{L_\ell}|\B{g}_{\ell,m}^{\prime\Tr}\B{H}_k
\B{t}_{k,i}^\prime|^2\alpha_{\ell,m}^2}.
\]
Equating $\mathrm{SINR}_{k,i}^{\mathrm{BC}}$ with the MAC SINR from (\ref{SINR_MAC_simplified}),
we get
\begin{equation}
\vspace{-1mm}
\begin{split}
\alpha_{k,i}^2 & \Big[\sigma_{\eta}^2\|\B{g}_{k,i}^{\prime}\|_2^2 + \sum_{\ell<k}\sum_{m=1}^{L_\ell}
|\B{g}_{k,i}^{\prime\Tr}\B{H}_{\ell}\B{t}_{\ell,m}^{\prime}|^2\Big] \\
& \quad -
\sum_{\ell>k}\sum_{m=1}^{L_\ell}\alpha_{\ell,m}^2|\B{g}_{\ell,m}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^{\prime}|^2
=\sigma_{\eta}^2 \|\B{t}_{k,i}^\prime\|_2^2,
\end{split}
\label{conversion_equality}
\end{equation}
which needs to hold for all users $k$ and all streams $i\in\{1,\ldots,L_k\}$ thus generating the
system of linear equations
\begin{equation}
\B{M}\!\cdot\!\big[\alpha_{1,1}^2,\ldots,\alpha_{K,L_K}^2\big]^{\Tr} =
\sigma_{\eta}^2 \big[\|\B{t}_{1,1}^\prime\|^2_2,\ldots,\|\B{t}_{K,L_K}^\prime\|_2^2\big]^{\Tr}
\label{linear_SOE}
\end{equation}
with the $\sum_{k=1}^K L_k \times \sum_{k=1}^K L_k$ block upper triangular matrix
\begin{equation}
\B{M}=\left[\begin{array}{ccc}
\B{M}_{1,1} & \cdots & \B{M}_{1,K} \\
\mathbf{0} & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \B{M}_{K,K}
\end{array}
\right].
\label{M_matrix}
\end{equation}
The off-diagonal blocks with $a<b$ read as (cf.~(\ref{conversion_equality}))
\begin{equation}
\B{M}_{a,b}=-(\B{G}_b^\prime\B{H}_a\B{T}_a^\prime)^{\He}\odot(\B{G}_b^\prime\B{H}_a\B{T}_a^\prime)^{\Tr}
\in\mathbb{R}^{L_a\times L_b}
\label{off_diag_block}
\end{equation}
with the \emph{Hadamard} product $\odot$,
and $\B{M}_{a,a}$ is diagonal with
\begin{equation}
[\B{M}_{a,a}]_{i,i}=\sigma_{\eta}^2\|\B{g}_{a,i}^\prime\|_2^2 -
\sum_{\ell<a}\sum_{m=1}^{L_\ell}[\B{M}_{\ell,a}]_{m,i}.
\label{diag_block}
\end{equation}
Since all off-diagonal elements of $\B{M}$ are nonpositive and all diagonal elements are nonnegative,
$\B{M}$ is a \emph{Z-matrix} \cite{nonnegative_matrices}. For $\sigma_{\eta}^2>0$, $\B{M}$ is column diagonally dominant.
So, $\B{M}$
is an \emph{M-matrix} such that its inverse exists with nonnegative entries~\cite{nonnegative_matrices}
yielding valid solutions $\alpha_{k,i}^2\geq 0$. Because of the block upper triangular structure of $\B{M}$
we can quickly solve for $\alpha_{1,1}^2,\ldots,\alpha_{K,L_K}^2$ via
back-substitution,
in particular since the diagonal blocks $\B{M}_{k,k}$ are diagonal matrices.
Note that a rank-deficient precoder $\B{T}_m$ manifests in
zero columns and zero rows in $\B{M}$ which have to be removed
before inversion. The respective $\alpha_{m,\cdot}^2$ and
$\|\B{t}_{m,\cdot}^\prime\|_2^2$ in (\ref{linear_SOE}) also have to be removed,
and finally,
$\B{p}_{m,\cdot}=\mathbf{0}$ and $\B{b}_{m,\cdot}=\mathbf{0}$ must be chosen.
\\
Summing up the rows of (\ref{linear_SOE}), we obtain
\begin{equation}
\sum_{k=1}^K\sum_{i=1}^{L_k}\underbrace{\alpha_{k,i}^2\|\B{g}_{k,i}^\prime\|_2^2}_{
\|\B{p}_{k,i}\|_2^2}\sigma_{\eta}^2 =\sigma_{\eta}^2 \sum_{k=1}^K\sum_{i=1}^{L_k}\|\B{t}_{k,i}^\prime\|_2^2,
\label{power_conservation}
\end{equation}
stating that the dual BC consumes the same power as the MAC.
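The whole conversion can be validated numerically. The sketch below (NumPy, our own test code with random channels and illustrative dimensions) sets up the decorrelated MAC filters for two users under successive interference cancellation, builds $\B{M}$ from (\ref{off_diag_block}) and (\ref{diag_block}), solves (\ref{linear_SOE}), and checks both the power conservation (\ref{power_conservation}) and that the converted BC filters reproduce the MAC SINRs:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, K, sigma2 = 4, 2, 2, 0.7    # antennas, streams/user, users, noise

H = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(K)]
T = [rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L)) for _ in range(K)]

Gp, Tp, X = [], [], []
cov = sigma2 * np.eye(N)          # sigma^2 I + sum_{l<k} H_l Q_l H_l^H
for k in range(K):
    X.append(cov.copy())          # interference-plus-noise seen by user k
    cov = cov + H[k] @ T[k] @ T[k].conj().T @ H[k].conj().T
    G = T[k].conj().T @ H[k].conj().T @ np.linalg.inv(cov)  # MMSE receiver
    _, W = np.linalg.eigh(G @ H[k] @ T[k])                  # decorrelation W_k
    Gp.append(W.conj().T @ G)
    Tp.append(T[k] @ W)

# MAC SINRs: t'^H H^H X_k^{-1} H t'
sinr_mac = np.array([[(Tp[k][:, i].conj() @ H[k].conj().T
                       @ np.linalg.inv(X[k]) @ H[k] @ Tp[k][:, i]).real
                      for i in range(L)] for k in range(K)])

# block upper triangular M
M = np.zeros((K * L, K * L))
for a in range(K):
    for b in range(a + 1, K):     # off-diagonal blocks, a < b
        C = Gp[b] @ H[a] @ Tp[a]
        M[a*L:(a+1)*L, b*L:(b+1)*L] = -(np.abs(C) ** 2).T
for a in range(K):
    for i in range(L):            # diagonal blocks are diagonal
        M[a*L + i, a*L + i] = (sigma2 * np.linalg.norm(Gp[a][i, :]) ** 2
                               - M[:a*L, a*L + i].sum())

rhs = sigma2 * np.concatenate([np.linalg.norm(Tp[k], axis=0) ** 2
                               for k in range(K)])
alpha2 = np.linalg.solve(M, rhs)  # back-substitution suffices (M triangular)
assert np.all(alpha2 >= 0)        # M-matrix: nonnegative inverse

# power conservation: sum alpha^2 ||g'||^2 = sum ||t'||^2
p_bc = sum(alpha2[k*L + i] * np.linalg.norm(Gp[k][i, :]) ** 2
           for k in range(K) for i in range(L))
p_mac = sum(np.linalg.norm(Tp[k]) ** 2 for k in range(K))
assert np.isclose(p_bc, p_mac)

# BC SINRs with p = alpha g'^*, b = alpha^{-1} t'^* equal the MAC SINRs
for k in range(K):
    for i in range(L):
        num = alpha2[k*L + i] * abs(Gp[k][i, :] @ H[k] @ Tp[k][:, i]) ** 2
        den = sigma2 * np.linalg.norm(Tp[k][:, i]) ** 2 + sum(
            alpha2[l*L + m] * abs(Gp[l][m, :] @ H[k] @ Tp[k][:, i]) ** 2
            for l in range(k + 1, K) for m in range(L))
        assert np.isclose(num / den, sinr_mac[k, i])
```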
Thus, the same or larger (if MMSE receivers are chosen for
$\B{B}_1,\ldots,\B{B}_K$) rates can be achieved in the dual BC as in the primal
MAC under the same transmit power constraint.
The reverse direction of the duality transforming BC filters to the MAC
can be handled with the same framework. Due to its similarity, we skip
its derivation. From this direction of the duality, it follows that the BC
rate region is a subset of the MAC capacity region. In combination
with the former result of the MAC-to-BC conversion stating that the MAC capacity region is a subset of
the BC rate region, the following theorem
becomes evident with the aid of \cite{Weingarten}
(cf.~\cite{rate_duality}):
\vspace{-1.45mm}
\Theorem{congruency}{section}
{
The capacity regions of the MAC and the BC are congruent
under a sum-power constraint.
}
As a consequence, any optimization in the BC can be solved in the MAC,
which offers concave rate expressions suitable for efficient globally convergent
algorithms. Since both capacity regions are congruent, we optimize over the
same region and therefore, do not introduce any suboptimality at this point.
Having found the solution in the MAC we can convert it back to
the BC by means of the duality. Optimality in one domain translates itself
to optimality in the other domain.
The main advantage of the proposed filter-based duality compared to the
state-of-the-art duality in~\cite{rate_duality} is that both the
conversion and the decoding in the dual domain can be parallelized
and need not be applied serially as in \cite{rate_duality}.
The computation of the transmit and receive filters features no dependencies,
and the decoding process requires neither intra-user interference cancellation
nor intra-user joint decoding of the streams; all streams of a user can be
decoded independently in parallel.
\subsubsection{Algorithmic Implementation}
Given arbitrary precoding filters $\B{T}_k \ \forall k$ in the MAC,
MMSE
receivers $\B{G}_k$ are first computed via (\ref{MMSE_receiver}) for all~$k$, see Line~2
in Alg.~\ref{alg:novel_MAC_BC}. The decorrelation filter $\B{W}_k$ is chosen
as the eigenbasis of $\B{G}_k\B{H}_k\B{T}_k$
and afterwards, the transmit and receive filters are adapted,
see Lines~3 and~4.
Thereby, a parallel stream-wise decoding is possible without intra-user
interference cancellation.
Having set up the linear system of equations in (\ref{linear_SOE}) which ensures the
conservation of the SINRs in the BC, the precoders $\B{P}_k$ and
receivers $\B{B}_k$ are computed with (\ref{filter_conversion}),
cf.\ Line~8.
\section{Rate Duality for Systems without Interference Subtraction}
In case of linear filtering,
i.e., when nonlinear inter-user interference cancellation is not applied,
user~$k$ experiences interference from
all other users $\ell\neq k$.
Up to now, a \emph{rate} duality for the linear case without
interference subtraction
does not exist in the literature when multi-antenna terminals are involved
and different streams shall \emph{not} be treated as self-interference.
By jointly decoding the streams in the MAC,
user~$k$ can achieve the rate
\[
R_k^{\mathrm{MAC}} = \log_2\Big|\mathbf{I}_N\!+\!
\big(\sum_{\ell\neq k}\B{H}_\ell\B{Q}_\ell\B{H}_{\ell}^{\He}
\!+\!\sigma_{\eta}^2\mathbf{I}_N\big)^{-1}\B{H}_k\B{Q}_k\B{H}_k^{\He}\Big|
\vspace{-1.5mm}
\]
\vspace{-1.6mm}
\begin{equation}
= -\log_2\big|\mathbf{I}_N - \B{X}^{-1}\B{H}_k\B{Q}_k\B{H}_k^{\He}\big|,\hspace{1.62cm}
\label{linear_rate}
\end{equation}
with the substitution
$\B{X}\!=\!\sigma_{\eta}^2\mathbf{I}_N\!+\!\sum_{\ell=1}^K\B{H}_\ell\B{Q}_\ell\B{H}_{\ell}^{\He}$.
In con\-trast to systems with interference cancellation
described in the previous section,
this matrix is common to MMSE receivers
\begin{equation}
\B{G}_k = \B{T}_k^{\He}\B{H}_k^{\He}\B{X}^{-1}
\label{G_no_SIC}
\end{equation}
for all users~$k$
and therefore has to be computed only once.
Applying $\B{G}_k$, user~$k$ experiences the error covariance
matrix
\begin{equation}
\B{C}_k = \mathbf{I}_{L_k} - \B{T}_k^{\He}\B{H}_k^{\He}\B{X}^{-1}\B{H}_k\B{T}_k,
\end{equation}
which is again decorrelated by the isometry $\B{W}_k$
since the rate $R_k^{\mathrm{MAC}}=-\log_2|\B{C}_k|$ is again invariant under
this unitary degree of freedom.
Choosing $\B{W}_k$ as the
eigenbasis of $\B{T}_k^{\He}\B{H}_k^{\He}\B{X}^{-1}\B{H}_k\B{T}_k$,
we adapt the receive filter $\B{G}_k^\prime=\B{W}_k^{\He}\B{G}_k$ and
the transmit filter $\B{T}_k^\prime = \B{T}_k\B{W}_k$.
Due to the decorrelation, the error covariance matrix $\B{W}_k^{\He}\B{C}_k\B{W}_k$
is diagonalized and all $L_k$ streams of user~$k$ can be decoded
separately yielding the rate $R_k^{\mathrm{MAC},\mathrm{lin}} = \sum_{i=1}^{L_k}R_{k,i}^{\mathrm{MAC},\mathrm{lin}}$,
with the rate
\begin{equation}
R_{k,i}^{\mathrm{MAC},\mathrm{lin}} = \log_2(1+\mathrm{SINR}_{k,i}^{\mathrm{MAC},\mathrm{lin}})
\end{equation}
of user $k$'s stream~$i$.
Its SINR now reads as
\begin{equation*}
\mathrm{SINR}_{k,i}^{\mathrm{MAC},\mathrm{lin}} =
\frac{|\B{g}_{k,i}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^\prime|^2}
{\sigma_{\eta}^2\|\B{g}_{k,i}^{\prime}\|_2^2 + \sum_{\ell\neq k}\sum_{m=1}^{L_\ell}
|\B{g}_{k,i}^{\prime\Tr}\B{H}_{\ell}\B{t}_{\ell,m}^{\prime}|^2}.
\end{equation*}
We apply the same rule for finding the precoding and receive filters
$\B{P}_k$ and $\B{B}_k$ of user~$k$ in the BC as we do in case
of inter\-ference cancellation, i.e.,
$\B{p}_{k,i}=\alpha_{k,i}\B{g}_{k,i}^{\prime*}$ and
$\B{b}_{k,i} =\alpha_{k,i}^{-1}\B{t}_{k,i}^{\prime *}$,
see (\ref{filter_conversion}).
With these transformations, the BC SINR reads as
\begin{equation*}
\mathrm{SINR}_{k,i}^{\mathrm{BC},\mathrm{lin}} =\frac{\alpha_{k,i}^2|\B{g}_{k,i}^{\prime\Tr}\B{H}_k\B{t}_{k,i}^\prime|^2}
{\sigma_{\eta}^2\|\B{t}_{k,i}^\prime\|_2^2+\sum_{\ell\neq k}
\sum_{m=1}^{L_\ell}|\B{g}_{\ell,m}^{\prime\Tr}\B{H}_k
\B{t}_{k,i}^\prime|^2\alpha_{\ell,m}^2}.
\end{equation*}
Equating the BC and MAC SINRs yields
the system of linear equations (\ref{linear_SOE}),
where the
matrix $\B{M}$ is not block upper triangular as in (\ref{M_matrix}), since inter-user
interference cancellation is not applied:
\vspace{-3mm}
\begin{equation}
\B{M}=\left[\begin{array}{ccc}
\B{M}_{1,1} & \cdots & \B{M}_{1,K} \\
\vdots & \ddots & \vdots \\
\B{M}_{K,1} & \cdots & \B{M}_{K,K}
\end{array}
\right].
\label{M_matrix_2}
\end{equation}
For this reason, (\ref{linear_SOE}) is solved via \emph{LU-factorization}
\cite[Section 3.2.5]{Golub} and forward-backward
substitution.
The diagonal blocks of $\B{M}$ are diagonal matrices
with diagonal entries
\vspace{-1mm}
\begin{equation}
[\B{M}_{a,a}]_{i,i}=\sigma_{\eta}^2\|\B{g}_{a,i}^\prime\|_2^2 -
\sum_{\ell\neq a}\sum_{m=1}^{L_\ell}[\B{M}_{\ell,a}]_{m,i},
\label{diag_block_2}
\vspace*{-1mm}
\end{equation}
such that $\B{M}$ is again an \emph{M-matrix}, and the
power conservation equation (\ref{power_conservation}) again holds.
With slight modifications, Alg.~\ref{alg:novel_MAC_BC} can be used to
perform the MAC-to-BC conversion without nonlinear inter-user
interference cancellation. In Line~2, $\B{G}_k$
must be computed according to (\ref{G_no_SIC}),
and in Line~7, the matrix $\B{M}$ follows from (\ref{M_matrix_2}),
(\ref{off_diag_block}), and
(\ref{diag_block_2}).
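The linear conversion can be checked in the same spirit. The sketch below (NumPy/SciPy, our own illustrative test code with random channels and assumed dimensions) uses the common covariance matrix, the receivers from (\ref{G_no_SIC}), the full matrix $\B{M}$ from (\ref{M_matrix_2}), an LU-based solve, and verifies that the BC SINRs equal the MAC SINRs:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
N, L, K, sigma2 = 4, 2, 2, 0.7

H = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(K)]
T = [rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L)) for _ in range(K)]

# common covariance X = sigma^2 I + sum_l H_l Q_l H_l^H, computed once
X = sigma2 * np.eye(N) + sum(H[k] @ T[k] @ T[k].conj().T @ H[k].conj().T
                             for k in range(K))
Xinv = np.linalg.inv(X)

Gp, Tp = [], []
for k in range(K):
    G = T[k].conj().T @ H[k].conj().T @ Xinv          # MMSE receiver, no SIC
    _, W = np.linalg.eigh(T[k].conj().T @ H[k].conj().T @ Xinv @ H[k] @ T[k])
    Gp.append(W.conj().T @ G)                         # decorrelated filters
    Tp.append(T[k] @ W)

def sinr_mac(k, i):               # MAC SINR of user k, stream i (linear case)
    g, t = Gp[k][i, :], Tp[k][:, i]
    den = sigma2 * np.linalg.norm(g) ** 2 + sum(
        abs(g @ H[l] @ Tp[l][:, m]) ** 2
        for l in range(K) if l != k for m in range(L))
    return abs(g @ H[k] @ t) ** 2 / den

# full (not block upper triangular) matrix M
M = np.zeros((K * L, K * L))
for a in range(K):
    for b in range(K):
        if a != b:
            C = Gp[b] @ H[a] @ Tp[a]
            M[a*L:(a+1)*L, b*L:(b+1)*L] = -(np.abs(C) ** 2).T
for a in range(K):
    for i in range(L):
        M[a*L + i, a*L + i] = (sigma2 * np.linalg.norm(Gp[a][i, :]) ** 2
                               - sum(M[l*L + m, a*L + i]
                                     for l in range(K) if l != a
                                     for m in range(L)))

rhs = sigma2 * np.concatenate([np.linalg.norm(Tp[k], axis=0) ** 2
                               for k in range(K)])
alpha2 = lu_solve(lu_factor(M), rhs)  # LU + forward-backward substitution
assert np.all(alpha2 >= 0)            # M-matrix: valid scaling factors

# BC SINRs with the converted filters match the MAC SINRs
for k in range(K):
    for i in range(L):
        num = alpha2[k*L + i] * abs(Gp[k][i, :] @ H[k] @ Tp[k][:, i]) ** 2
        den = sigma2 * np.linalg.norm(Tp[k][:, i]) ** 2 + sum(
            alpha2[l*L + m] * abs(Gp[l][m, :] @ H[k] @ Tp[k][:, i]) ** 2
            for l in range(K) if l != k for m in range(L))
        assert np.isclose(num / den, sinr_mac(k, i))
```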
Again, the converse direction of the duality follows from the same framework
and completes the proof of the duality in the case of linear filtering without
inter-user interference cancellation:
\vspace{-1mm}
\Theorem{congruence_linear}{section}
{
The MIMO MAC and the MIMO BC share the same rate region under linear filtering
and a sum-power constraint both for separate and joint
de-/encoding of each user's data streams.
}
\vspace{-1mm}
This novel rate duality for systems without interference cancellation allows us to
convert any rate-based optimization from the BC to the MAC without loss
of optimality. An immediate benefit is that we can switch from the
rate expression
\begin{equation*}
R_k^{\mathrm{MAC},\mathrm{interference}} = -\log_2 \prod_{i}
\big[\mathbf{I}_{L_k}-\B{T}_k^{\He}\B{H}_k^{\He}\B{X}^{-1}\B{H}_k\B{T}_k\big]_{i,i}
\end{equation*}
with separate stream decoding and hence self-interference
to the one in (\ref{linear_rate}) with joint stream decoding
\begin{equation*}
R_k^{\mathrm{MAC},\mathrm{lin}} = -\log_2 \big|\mathbf{I}_{L_k}-\B{T}_k^{\He}\B{H}_k^{\He}\B{X}^{-1}\B{H}_k\B{T}_k\big|,
\end{equation*}
which is always larger than or equal to $R_k^{\mathrm{MAC},\mathrm{interference}}$.
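The inequality just stated follows from Hadamard's inequality; this standard step is made explicit here (our addition, in the paper's notation):

```latex
% Hadamard: det(C) <= prod_i C_{ii} for the positive semidefinite
% error covariance matrix C_k = I - T_k^H H_k^H X^{-1} H_k T_k, hence
R_k^{\mathrm{MAC},\mathrm{lin}}
= -\log_2\big|\B{C}_k\big|
\;\geq\; -\log_2 \prod_i \big[\B{C}_k\big]_{i,i}
= R_k^{\mathrm{MAC},\mathrm{interference}}.
```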
Moreover, the channel and precoder indices are aligned in the MAC,
see (\ref{linear_rate}), whereas they are not in the BC. Although
(weighted) sum-rate maximization remains a nonconcave maximization in the
MAC, the aforementioned index alignment allows for simpler expressions
and reduced-complexity algorithms.
Last but not least, MAC precoders
are characterized by only $\sum_{k=1}^K r_k^2$ variables instead of
$N\sum_{k=1}^K r_k$ in the BC.
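For a concrete count (illustrative numbers of our own, not from the paper), take $N=8$ base-station antennas, $K=4$ users, and $r_k=2$ for all $k$:

```latex
\sum_{k=1}^K r_k^2 = 4\cdot 2^2 = 16
\qquad\text{versus}\qquad
N\sum_{k=1}^K r_k = 8\cdot(4\cdot 2) = 64,
```

so the MAC parameterization is four times smaller in this example.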
Summing up, solving rate based optimizations with linear filtering in the MAC
and applying the proposed duality is more efficient than solving the problem
in the BC.
\vspace{-0.5mm}
\begin{algorithm}[!t]
\begin{algorithmic}[1]
\FOR{$k=1:K$}
\STATE $\B{G}_k\leftarrow \B{T}_k^{\He}\B{H}_k^{\He}
\big(\sum_{\ell\leq k}\B{H}_\ell\B{T}_\ell\B{T}_{\ell}^{\He}\B{H}_{\ell}^{\He}
+\sigma_{\eta}^2\mathbf{I}_N\big)^{-1}$
\STATE $\B{W}_k\leftarrow \operatorname{eigenbasis}(\B{G}_k\B{H}_k\B{T}_k)$ \hfill \emph{decorrelation matrix}
\STATE $\B{G}_k^\prime \leftarrow \B{W}_k^{\He}\B{G}_k$ and
$\B{T}_k^\prime \leftarrow \B{T}_k\B{W}_k$ \hfill \emph{decorrelate}
\ENDFOR
\STATE set up $\B{M}$ with (\ref{M_matrix}) -- (\ref{diag_block}),
remove zero columns/rows
\STATE solve for $\alpha_{1,1}^2,\ldots,\alpha_{K,L_K}^2$ via (\ref{linear_SOE})
\STATE $\B{p}_{k,i}=\alpha_{k,i}\B{g}_{k,i}^{\prime *}$ \ \ and \ \
$\B{b}_{k,i}=\frac{1}{\alpha_{k,i}}\B{t}_{k,i}^{\prime *} \ \ \forall k,\ \ \forall i$
\end{algorithmic}
\caption{Novel stream-wise MAC-to-BC conversion.}
\label{alg:novel_MAC_BC}
\end{algorithm}
\bibliographystyle{IEEEbib}
\section{Introduction}
In \cite{PP2}, Proposition $7.21$, G. Pareschi and M. Popa studied the equations of the special subvarieties $W_d$ in Jacobians by means of theta-regularity and the continuous global generation of sheaves on abelian varieties. In the same vein, they gave an effective bound for the equations of the singular locus of the theta divisor on a Jacobian, $\Sigma (\Theta) \cong W^1_{g-1}$, by showing that the ideal sheaf $\sI_{\Sigma (\Theta)}$ is $3$-$\Theta$-regular (cf. \cite{PP2}, Proposition $7.21$) and hence $\sI_{\Sigma (\Theta)} (3\Theta)$ is globally generated.\\ In this paper we generalize this result to an arbitrary complex abelian variety, where we consider an arbitrary ample line bundle instead of a theta divisor. Moreover, we consider the multiplicity-$k$ locus of a divisor $D\in |L|$. Now we present the generalization.
Let $A$ be a complex abelian variety of dimension $g\geq 2$ and $L$ be an ample line bundle on $A$. Let $D\in |L|$ be a divisor, $$\Sigma_k(D)=\{x\in A\;|\;\mbox{mult}_xD>k\},$$ be the multiplicity-$k$ locus of $D$ and $\sI_{\Sigma_k(D)}$ be its ideal sheaf:
\begin{theorem}\label{mainth}
The sheaf $\sI_{\Sigma_k(D)}\otimes L^{\otimes 3}$ is globally generated.
\end{theorem}
This Theorem allows us to find the degrees of defining equations of $\Sigma_k(D)$. In particular, it implies that $\Sigma_k(D)$ is \textit{cut-out by equations in} $|L^{\otimes 3}|$, i.e. locally there exist divisors $D_1,\ldots ,D_m\in |L^{\otimes 3}|$ such that $\Sigma_k(D)=D_1\cap \ldots \cap D_m$.
A more general problem in the case of Jacobians is to find for which positive integers $k$ the sheaves $\sI_{W_d^r(C)}(k\Theta)$ are globally generated, where $W_d^r(C)$ are the Brill-Noether loci on a smooth curve $C$, i.e. $$W_d^r(C)=\{L\in \mbox{Pic}^d(C)\;|\;h^0(L)\geq r+1\}.$$ Let $\Theta$ be a theta divisor on the Jacobian $J(C)$ of a smooth curve $C$ of genus $g$. Via the identification $$\Theta\cong W_{g-1}^0(C)=\{L\in \mbox{Pic}^{g-1}(C)\;|\;h^0(L)\geq 1\},$$ Riemann's Theorem ensures that $$\Sigma_k(\Theta)=W_{g-1}^k(C),$$ so Theorem \ref{mainth} is a result in this direction.
The notion of globally generated sheaf is not the only way to obtain equations of a subvariety. Let $A$ be a complex abelian variety and $\sF$ be a sheaf on $A$. We say that the sheaf $\sF$ is \textit{continuously globally generated} if for any non-empty open subset $U\subset \mbox{Pic}^0(A)$ the sum of evaluation maps $$\bigoplus_{\alpha \in U} H^0(\sF \otimes \alpha)\otimes \alpha^\vee \longrightarrow \sF$$ is surjective (see \cite{PP1}). With the same notation as in Theorem \ref{mainth} we have another result:
\begin{theorem}\label{mainth2}
The sheaf $\sI_{\Sigma_k(D)}\otimes L^{\otimes 2}$ is continuously globally generated.
\end{theorem}
We will see that Theorem \ref{mainth} is an easy consequence of Theorem \ref{mainth2}.
In particular, Theorem \ref{mainth2} implies that $\Sigma_k(D)$ is \textit{cut-out by equations in} $|L^{\otimes 2}\otimes \alpha|$, for some $\alpha\in \mbox{Pic}^0(A)$, i.e. there exist line bundles $\alpha_1,\ldots ,\alpha_t\in \mbox{Pic}^0(A)$ and divisors $D_1,\ldots ,D_m\in \bigcup _i|L^{\otimes 2}\otimes \alpha_i|$ such that locally $\Sigma_k(D)=D_1\cap \ldots \cap D_m$.
The proofs of Theorem \ref{mainth} and Theorem \ref{mainth2} use a general method different from the ad-hoc argument in \cite{PP2}, Proposition $7.21$. Our main tool is the use of the bundle of differential operators associated to an ample line bundle on a complex abelian variety, see \cite{ELN}. In this case we will see that the bundle of differential operators satisfies nice cohomological properties.
In the last section we investigate the same problem on subvarieties of a complex abelian variety. More precisely, let $X\subset A$ be a complex projective smooth subvariety of dimension $n\geq 2$ of a complex abelian variety $A$, and let $M$ be an ample line bundle on $X$ and $D\in |M\otimes \omega_X|$ be a divisor. Putting $L:=M\otimes \omega_X$ we have the following results:
\begin{theorem}\label{intr-sub}
\item (i.) The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes 2}\otimes \omega_X$ is continuously globally generated.
\item (ii.) The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes 3}\otimes \omega_X$ is globally generated.
\item (iii.) The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes {n+2}}$ is continuously globally generated and
\item (iv.) The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes {n+3}}$ is globally generated.
\end{theorem}
In order to prove the last two points we will remark that the cotangent bundle of a subvariety of an abelian variety is nef and we will state a vanishing theorem for varieties with nef cotangent bundle.
\section{Notations and Preliminaries}
Throughout this paper every variety is assumed to be irreducible. If $Y$ is a subvariety, its ideal sheaf is denoted by $\sI_Y$.
In this section we present the notion of a sheaf satisfying the Index Theorem, which is a condition on the cohomology of the sheaf. Afterwards, following \cite{PP1}, we give the definition of continuously globally generated sheaves, putting them in relation with globally generated sheaves and sheaves satisfying the Index Theorem. Only in this section, every variety is defined over an algebraically closed field of arbitrary characteristic.
\begin{definition}[\textbf{Sheaf Satisfying the Index Theorem with Index $i$}]
A sheaf $\sF$ on an abelian variety $A$ satisfies the \emph{index theorem with index} $i$, I.T. $i$ for short, if $$H^j(\sF \otimes \alpha )=0$$ for any $\alpha\in \mbox{Pic}^0(A)$ and for any $j\neq i$.
\end{definition}
An ample line bundle on an abelian variety satisfies I.T. $0$: see for example \cite{MU} Application I, p.60, and Chapter 16. In characteristic zero it is a simple consequence of Kodaira's Vanishing Theorem.\\
Recall that a sheaf $\sF$ on an abelian variety $A$ is \emph{globally generated} if the evaluation map $H^0(\sF)\otimes \sO_A\rightarrow \sF$ is surjective. A similar notion is the following
\begin{definition}[\textbf{Continuously Globally Generated Sheaf}]
A sheaf $\sF$ on an abelian variety $A$ is \emph{continuously globally generated} if for any non-empty open subset $U\subset \mbox{Pic}^0(A)$ the sum of evaluation maps $$\bigoplus_{\alpha \in U} H^0(\sF \otimes \alpha)\otimes \alpha^\vee \longrightarrow \sF$$ is surjective.
\end{definition}
The link between this kind of sheaves and globally generated sheaves is explained by the following two propositions.
\begin{proposition}\label{cgg per cgg=gg}
Let $\sF$ be a coherent continuously globally generated sheaf on an abelian variety $A$ and let $H$ be a continuously globally generated sheaf on $A$ which is everywhere of rank one on its support. Then $\sF\otimes H$ is globally generated.
\end{proposition}
\begin{proposition}\label{it0 implica cgg}
If $\sF$ is a sheaf satisfying I.T. $0$ on an abelian variety $A$, possibly supported on a subvariety $X$ of $A$, then $\sF$ is continuously globally generated.
\end{proposition}
For the proof of Proposition \ref{cgg per cgg=gg} see Lemma 2.3 in \cite{PP3} and for the proof of Proposition \ref{it0 implica cgg} see Proposition 2.13 in \cite{PP1} where it is stated in a more general setting.
In the sequel we will use the following Lemma.
\begin{lemma}\label{cgg-prop}
Let $A$ be an abelian variety.
\item[](i.) A quotient of a continuously globally generated sheaf $\sG$ on $A$ is still a continuously globally generated sheaf.
\item[] (ii.) Let $$0\longrightarrow \sF'\longrightarrow \sF \longrightarrow \sF''\longrightarrow 0$$ be an exact sequence of sheaves on $A$, where $\sF'$ and $\sF''$ are continuously globally generated sheaves and such that $H^1(\sF'\otimes \alpha)=0$ for any $\alpha \in \mbox{Pic}^0(A)$. Then the sheaf $\sF$ is continuously globally generated.
\begin{proof}
\item (i.) Let $\sG'$ be a quotient of $\sG$ and $U\subset \mbox{Pic}^0(A)$ be a non-empty open subset. Consider the following commutative diagram
$$\begin{array}{ccc}
\bigoplus_{\alpha \in U} H^0(\sG \otimes \alpha)\otimes \alpha^\vee & \stackrel{\sigma}{\longrightarrow}& \sG
\\
\downarrow & &\downarrow \nu \\
\bigoplus_{\alpha \in U} H^0(\sG' \otimes \alpha)\otimes \alpha^\vee &\stackrel{\tau}{\longrightarrow}& \sG'.\\
\end{array}$$
Since the maps $\sigma$ and $\nu$ are surjective, $\tau$ also has to be surjective, thus $\sG'$ is continuously globally generated.
\item (ii.) If $\sH$ is a sheaf and $U\subset \mbox{Pic}^0(A)$ is a non-empty subset, denote by $\bar{\sH}_U$ the sheaf $\bigoplus_{\alpha\in U} H^0(\sH\otimes \alpha)\otimes \alpha^\vee$. The hypotheses imply that for any non-empty open subset $U\subset \mbox{Pic}^0(A)$ there is a commutative diagram
$$\begin{array}{ccccccccc}
0 & \longrightarrow & \bar{\sF'_U} & \longrightarrow &
\bar{\sF}_U& \longrightarrow & \bar{\sF''_U} & \longrightarrow & 0\\
& & \downarrow & & \downarrow & & \downarrow & &\\
0 & \longrightarrow & \sF'& \longrightarrow & \sF & \longrightarrow & \sF''& \longrightarrow & 0\\
\end{array}$$
where the first and the third vertical arrow are surjective. At this point the five-lemma implies that the middle vertical arrow is surjective as well.
\end{proof}
\end{lemma}
\section{The Bundle of Differential Operators}
In order to prove Theorem \ref{mainth} and Theorem \ref{mainth2}, we need to introduce the bundle of differential operators of order $\leq k$, see \cite{ELN}, \cite{DS} and \cite{MA}.
Let $X$ be a smooth complex projective variety of dimension $n$ and $L$ be a line bundle on $X$. Let $\Delta \subset X\times X$ be the diagonal of $X$ and $p,q:X\times X\rightarrow X$ be the two projections onto the first and second factor. The \textit{k-jet bundle associated to $L$}, $J_k(L)$, is the vector bundle $$p_*(\sO_{X\times X}/\sI_{\Delta}^{k+1}\otimes q^*L)$$ where $$\sI_{\Delta}^{k+1}=\{f\in \sO_{X\times X}\; | \; \mbox{ord}_x(f)\geq k+1 \;\;\mbox{for any}\;x\in \Delta\}.$$ The k-jet bundle is a vector bundle of rank $\binom{k+n}{n}$ whose fiber is $$(J_k(L))_x=L_x\otimes \sO_{X,x}/ \frak{m}_x^{k+1},$$ where $x\in X$ and $\frak{m}_x$ is the maximal ideal of $\sO_{X,x}$. In other words the elements of a fiber are equivalence classes of sections of $L$, where two sections are in the same class if their Taylor expansions coincide up to order $k$ near $x$.
There are natural maps of sheaves $$j^k:L\longrightarrow J_k(L)$$ sending the germ of a section $s$ at a point $x\in X$ to its $k$-th jet. More specifically, for $s\in H^0(X,L)$, $j^k(s(x))$ is the $\binom{k+n}{n}$-tuple determined by the coefficients of the terms of degree up to $k$ in the Taylor expansion of $s$ around $x$.
By truncating jets we get a natural projection map $$J_k(L)\longrightarrow J_{k-1}(L).$$ A germ of a section of $J_k(L)$ at a point $x\in X$ is sent to zero, under the projection map, if the terms of degree up to $k-1$ in its Taylor expansion vanish, hence the kernels of these maps are the vector bundles $\mbox{Sym}^k(\Omega_X^1)\otimes L$. In fact a germ of a section of $\mbox{Sym}^k(\Omega_X^1)\otimes L$ at a point $x\in X$ corresponds to the $\binom{k+n-1}{n-1}$-tuple determined by the coefficients of the terms of degree $k$ in its Taylor expansion around $x$.
Thus, for $k\geq 1$, there are exact sequences of sheaves of $\sO_X$-modules $$0\longrightarrow \mbox{Sym}^k(\Omega ^1_X)\otimes L\longrightarrow J_k(L)\longrightarrow J_{k-1}(L)\longrightarrow 0.$$
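As a consistency check (a standard binomial identity, our addition), the ranks in this sequence add up:

```latex
\operatorname{rk} J_k(L)=\binom{n+k}{n}
=\binom{n+k-1}{n}+\binom{n+k-1}{n-1}
=\operatorname{rk} J_{k-1}(L)
+\operatorname{rk}\big(\mbox{Sym}^k(\Omega^1_X)\otimes L\big).
```

For instance, for $n=2$ and $k=2$ this reads $6=3+3$.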
Now we define the \textit{bundle of differential operators of order $\leq k$ associated to $L$} as $$\sD_L^k:=\mathcal{H}\textit{om}_{\sO_X} (J_k(L),L)=J_k(L)^{\vee}\otimes L.$$ By dualizing and after tensoring by $L$ the previous exact sequences we get new exact sequences of sheaves of $\sO_X$-modules.
\begin{equation}\label{2-succ-esa}
0\longrightarrow \sD_L^{k-1}\longrightarrow \sD^k_L \longrightarrow \mbox{Sym}^k(TX)\longrightarrow 0.
\end{equation}
A non-zero section $s\in H^0(X,L)$ determines a morphism of vector bundles $$d_k(s):\sD^k_L \longrightarrow L$$ as follows. Let $U\subset X$ be an open subset and let $f:=s_{|U}\in L(U)$; the map associates to any differential operator $\Psi_U\in \sD_L^k(U)$ the section $\Psi_U(j^k_U(f))\in L(U)$, where $L(U)$ and $\sD_L^k(U)$ are the spaces of sections of $L$ and of $\sD_L^k$ over the open set $U$ and $j^k_U$ is the map $j^k$ on the open set $U$.
It follows that $d_k(s)$ is zero exactly at the locus where $s$ vanishes to order $>k$. More precisely let $D\in |L|=\mathbb{P}(H^0(X,L))$, then $D$ corresponds to a section (modulo scalars), say $\phi$. Consider the multiplicity-$k$ locus $$\Sigma_k(D)=\{x\in X\; |\; \mbox{mult}_x D>k\},$$ with its natural scheme structure. Then the image of $d_k(\phi)$ is just the ideal sheaf of this scheme, i.e. one has a surjective sheaf morphism
\begin{equation}\label{sur-imp}
\sD_L^k\longrightarrow \sI_{\Sigma_k(D)}\otimes L.
\end{equation}
\section{Proof of the Theorem}
Now we are ready to prove the following
\begin{theorem}\label{mainth-proof}
Let $A$ be a complex abelian variety of dimension $g\geq 2$ and $L$ be an ample line bundle on $A$. For any $k\geq 1$ and any divisor $D\in |L|$, let $\Sigma_k(D)=\{x\in A \; | \; \mbox{mult}_x D>k\}$ be the multiplicity-$k$ locus of $D$. Then the sheaf $\sI_{\Sigma_k(D)}\otimes L^{\otimes s}$ is continuously globally generated for any $k\geq 1$ and any $s\geq 2$.
\begin{proof}
Let $J_k(L)$ be the $k$-jet bundle associated to $L$ and $\sD_L^k$ its bundle of differential operators. Note that $J_0(L)=L$ and that $\sD_L^0=\sO_A$.
First of all we will prove that the bundle $\sD_L^k\otimes L^{\otimes {s-1}}$ satisfies I.T. $0$. This is done by induction on $k$; we begin with $k=1$.
Since on an abelian variety the tangent bundle is trivial, by the exact sequence (\ref{2-succ-esa}) we get a new exact sequence $$0\longrightarrow \sO_A\longrightarrow \sD_L^1\longrightarrow \bigoplus_g \sO_A\longrightarrow 0,$$ and by tensoring it by $L^{\otimes {s-1}}$, with $s\geq 2$, we get $$0\longrightarrow L^{\otimes {s-1}}\longrightarrow \sD_L^1\otimes L^{\otimes {s-1}}\longrightarrow \bigoplus_g L^{\otimes {s-1}}\longrightarrow 0.$$
The line bundle $L$ satisfies I.T. $0$ since it is ample on an abelian variety, for the same reason also $L^{\otimes {s-1}}$ and $\bigoplus_g L^{\otimes {s-1}}$ satisfy I.T. $0$. Therefore $\sD_L^1\otimes L^{\otimes {s-1}}$ satisfies I.T. $0$.
Now suppose that the bundle $\sD_L^{k-1}\otimes L^{\otimes {s-1}}$ satisfies I.T. $0$. By the exact sequence (\ref{2-succ-esa}) and by tensoring it by $L^{\otimes {s-1}}$ we get a new exact sequence $$0\longrightarrow \sD_L^{k-1}\otimes L^{\otimes {s-1}}\longrightarrow \sD_L^{k}\otimes L^{\otimes {s-1}}\longrightarrow \bigoplus_{\binom{g+k-1}{g-1}} L^{\otimes {s-1}}\longrightarrow 0.$$
The first term of the sequence satisfies I.T. $0$ by the inductive hypothesis, and clearly so does the last term; hence the middle term satisfies I.T. $0$ as well. By Proposition \ref{it0 implica cgg} the bundle $\sD_L^k\otimes L^{\otimes {s-1}}$ is also continuously globally generated.
Let $D\in |L|$. By tensoring the surjection (\ref{sur-imp}) by $L^{\otimes {s-1}}$, we get a new surjection $$\sD_L^k\otimes L^{\otimes {s-1}}\longrightarrow \sI_{\Sigma_k({D})}\otimes L^{\otimes s},$$ and hence the quotient $\sI_{\Sigma_k({D})}\otimes L^{\otimes s}$ is continuously globally generated by Lemma \ref{cgg-prop} \emph{(i.)}.
\end{proof}
\end{theorem}
With the same notation as in the previous Theorem, we have the following
\begin{corollary}\label{last}
The sheaf $\sI_{\Sigma_k(D)}\otimes L^{\otimes s}$ is globally generated for any $k\geq 1$ and any $s\geq 3$.
\begin{proof}
By Theorem \ref{mainth-proof} we have that the sheaf $\sI_{\Sigma_k(D)}\otimes L^{\otimes s}$ is continuously globally generated for any $k\geq 1$ and any $s\geq 2$. By tensoring this sheaf by $L$, which is continuously globally generated by Proposition \ref{it0 implica cgg}, we get the claimed result by Proposition \ref{cgg per cgg=gg}.
\end{proof}
\end{corollary}
Finally, we would like to point out that, while the basic properties of continuously globally generated sheaves hold in arbitrary characteristic, those of the bundles of differential operators hold only in characteristic zero, owing to the behaviour of differentiation in positive characteristic. Hence the proof of Theorem \ref{mainth-proof} given above does not work in positive characteristic.
\section{Subvarieties of an Abelian Variety}
In this section we investigate the same problem on smooth subvarieties of a complex abelian variety.
\begin{definition}[\textbf{Nef Bundles}]
A line bundle $L$ on a projective variety $X$ is \textit{nef} (or \textit{numerically effective}) if for every curve $C\subset X$ $$\int_C c_1(L)\geq 0.$$
A vector bundle $E$ on a projective variety $X$ is \textit{nef} (or \textit{numerically effective)} if the associated line
bundle $\sO_{\textbf{P}(E)}(1)$ is nef on the projectivized bundle $\textbf{P}(E)=Proj(\bigoplus _m\mbox{Sym}^mE)$.
\end{definition}
For generalities on nef vector bundles see for example \cite{LA} Theorem 6.2.12. Recall that quotients and pull-backs of
nef vector bundles are nef, and that tensor products, exterior powers, symmetric powers, direct sums and extensions of nef
bundles are again nef. Moreover, the trivial bundle is always nef, and the tensor product of a nef bundle with an ample bundle is an ample bundle.
Note that \emph{the cotangent bundle of a smooth subvariety $X$ of an abelian variety $A$ (of arbitrary characteristic) is nef}: there is a surjective map
$\Omega^1_{A|X}\rightarrow \Omega^1_X$, obtained from the exact sequence $$0\longrightarrow \sI_X/\sI_X^2\longrightarrow \Omega^1_{A|X}\longrightarrow \Omega^1_X\longrightarrow 0,$$ where $\sI_X/\sI_X^2$ is the conormal sheaf of $X$ in $A$. Since $$\Omega^1_{A|X}=
\bigoplus \sO_X$$ is nef, $\Omega_X^1$ is also nef. We define $\Omega_X^p:=\bigwedge ^p\Omega_X^1.$
For subvarieties, the setting is the following.
Let $A$ be a complex abelian variety of dimension $g$ and let $X$
be a complex projective smooth subvariety of $A$ of
dimension $n\geq 2$. Let $M$ be an ample line bundle on $X$ and
$D\in |\omega_X\otimes M|$ be a divisor. Note that the linear
system $|\omega_X\otimes M|$ is non-empty: it is enough to apply
Theorem 5.8 in \cite{FM} to a subvariety of an abelian variety.
Let $$\Sigma_1(D)=\{x\in X \; | \; \mbox{mult}_x D> 1\}$$ be the
singular locus of $D$.
Putting $L:=\omega_X\otimes M$, the sheaf $L^{\otimes s}\otimes \omega_X^{\otimes p}$ satisfies I.T. $0$ for any $s\geq 1$ and any $p\geq 0$: this follows from Kodaira's Vanishing Theorem, since $L^{\otimes s}\otimes \omega_X^{\otimes p}= \omega_X\otimes \omega_X^{\otimes {s-1+p}}\otimes M^{\otimes s}$ and the line bundle $\omega_X^{\otimes {s-1+p}}\otimes M^{\otimes s}$ is ample, $\omega_X$ being nef and the tensor product between a nef line bundle and an ample line bundle being still ample.
\begin{theorem}\label{subvar-cgg}
The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}\otimes \omega_X^{\otimes p}$ is continuously globally generated for any $s\geq 2$ and any $p\geq 1$.
\begin{proof}
Consider the standard exact sequence for the bundle of differential operators of order $\leq 1$ associated to $L$ $$0\longrightarrow \sD_L^0=\sO_X\longrightarrow \sD_L^1\longrightarrow T_X\longrightarrow 0.$$ By tensoring this sequence by $L^{\otimes {s-1}}\otimes \omega_X^{\otimes p}$, with $s\geq 2$ and $p\geq 1$, we get a new one $$0\longrightarrow L^{\otimes {s-1}}\otimes \omega_X^{\otimes p}\longrightarrow \sD_L^1\otimes L^{\otimes {s-1}}\otimes \omega_X^{\otimes p}\longrightarrow \Omega_X^{n-1}\otimes L^{\otimes {s-1}}\otimes \omega_X^{\otimes {p-1}}\longrightarrow 0,$$ where we have used the fact that $\Omega_X^{n-1}=T_X\otimes \omega_X$, see \cite{HA} Exercise II.5.16.
The first term of the sequence satisfies I.T. $0$ and therefore it is continuously globally generated by Proposition \ref{it0 implica cgg}. The surjection $\bigoplus_g \sO_X\rightarrow \Omega_X^1$ of the conormal exact sequence induces a surjection $\bigoplus_{\binom{g}{n-1}} \sO_X\rightarrow \Omega_X^{n-1}$. By tensoring this surjection by $L^{\otimes {s-1}}\otimes \omega_X^{\otimes {p-1}}$, we have that the quotient $\Omega_X^{n-1}\otimes L^{\otimes {s-1}}\otimes \omega_X^{\otimes {p-1}}$ is continuously globally generated by Lemma \ref{cgg-prop} \emph{(i.)}. Now applying Lemma \ref{cgg-prop} \emph{(ii.)} we also get that the middle term of the sequence, $\sD_L^1\otimes L^{\otimes {s-1}}\otimes \omega_X^{\otimes p}$, is continuously globally generated and therefore the quotient $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}\otimes \omega_X^{\otimes p}$ is continuously globally generated.
\end{proof}
\end{theorem}
Proceeding as in the proof of Corollary \ref{last} we easily get the following
\begin{corollary}
The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}\otimes \omega_X^{\otimes p}$ is globally generated for any $s\geq 3$ and any $p\geq 1$.
\end{corollary}
We can also ask for which positive integers $s$ the sheaf $\sI_{\Sigma_1 (D)}$ is cut-out by equations in $|L^{\otimes s}|$. We will use the following vanishing theorem for varieties whose cotangent bundle is nef.
\begin{proposition}\label{vanishing}
Let $X$ be a complex projective smooth variety of dimension $n$ whose cotangent bundle $\Omega_X^1$ is nef, and let $L$ be an ample
line bundle on $X$. Then $$H^i(\Omega_X^{p}\otimes \omega_X^{\otimes {p+1}}\otimes L)=0,\quad i>0,\quad p=0,\ldots,n.$$
\begin{proof}
The cases $p=0,n$ follow directly from Kodaira's Vanishing Theorem. The idea in general is to apply Demailly's
Vanishing Theorem, see \cite{LA} Theorem 7.3.14. To fix notation, recall briefly the theorem. Given a vector bundle $E$ of
rank $e$ and a representation $$\rho:GL(e,\mathbb{C})\longrightarrow GL(N,\mathbb{C})$$ of algebraic groups one can
associate
to $E$ a bundle $E_\rho$ of rank $N$ by applying $\rho$ to the transition matrices describing $E$.
The irreducible finite dimensional representations of $GL(e,\mathbb{C})$ are parametrized by non-increasing
$e$-ples $\lambda=(\lambda_1,\ldots ,\lambda_e)$ where $\lambda_i$ are non negative integers and
$\lambda_1\geq\ldots\geq\lambda_e\geq 0$. The height of an $e$-ple $h(\lambda)$ is the number of non-zero
components of $\lambda$. Given $E$ and $\lambda$, we denote by $\Gamma^{\lambda}E$ the bundle associated to the
representation corresponding to $\lambda$. Note that if $\lambda=(1,\ldots,1,0,\ldots ,0)$ with $m$ repetitions of
$1$ then $\Gamma ^{\lambda} E=\bigwedge^m E$. Demailly's Vanishing Theorem states that if $E$ is a nef vector bundle
and $L$ is an ample line bundle then $$H^i(\omega_X\otimes \Gamma^{\lambda}E\otimes (\det E)^{\otimes {h(\lambda)}}\otimes L)=0,
\quad i>0.$$
Now it is sufficient to apply Demailly's Vanishing Theorem with $E=\Omega_X^1$ and with $\lambda=(\overbrace{1,\ldots,1}^{p\text{ times}},0,\ldots ,0)$, in which case $h(\lambda)=p$.
\end{proof}
\end{proposition}
With the same hypotheses as in Theorem \ref{subvar-cgg}, we have
\begin{theorem}\label{sub-mainth}
The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}$ is continuously globally generated for any $s\geq n+2$.
\begin{proof}
By tensoring the standard exact sequence for the bundle of differential operators associated to $L$ of order
$\leq 1$ by $L^{\otimes {s-1}}$, with $s\geq n+2$, we get the following exact sequence $$0\longrightarrow L^{\otimes {s-1}}\longrightarrow \sD_L^1\otimes L^{\otimes {s-1}}\longrightarrow
T_X\otimes L^{\otimes {s-1}}\longrightarrow 0.$$
Let's prove that the bundle $\sD_L^1\otimes L^{\otimes {s-1}}$ satisfies I.T. $0$.\\ The first term of the sequence satisfies I.T. $0$ and by Proposition \ref{vanishing} we get that $$H^i(T_X\otimes L^{\otimes {s-1}}\otimes \alpha)=H^i(\Omega^{n-1}_X\otimes \omega_X^{\otimes n}\otimes \omega_X^{\otimes {s-n-2}}\otimes M^{\otimes {s-1}}\otimes \alpha)=0,$$ $$\forall \;\alpha\in \mbox{Pic}^0(X),\quad i>0,\quad s\geq n+2,$$ therefore also the third term of the sequence satisfies I.T. $0$. Then also the middle term of the sequence satisfies I.T. $0$ and hence it is continuously globally generated. By Lemma \ref{cgg-prop} \emph{(i.)} the quotient $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}$ is also continuously globally generated.
\end{proof}
\end{theorem}
By Proposition \ref{cgg per cgg=gg} we get the following
\begin{corollary}
The sheaf $\sI_{\Sigma_1(D)}\otimes L^{\otimes s}$ is globally generated for any $s\geq n+3$.
\end{corollary}
\section*{Acknowledgements}
It is a pleasure to thank Giuseppe Pareschi and Mihnea Popa for many valuable discussions and for their lectures held during the summer school PRAGMATIC 2007 in Catania (Italy). Furthermore we want to thank the Department of Mathematics of University of Catania for the nice stay, where this project was started.
\addcontentsline{toc}{chapter}{Bibliography}
\nocite{*}
\section{Introduction}
Given two locally finite positive Borel measures ${\omega},{\sigma}$ in ${\mathbb R}^n$, the two weight problem for an operator $T$ is to characterize ${\omega},{\sigma}$ so that
\begin{equation}
\label{main}
||T(fd{\sigma})||_{L^p({\omega})} \lesssim ||f||_{L^p({\sigma})}, \quad \forall f \in L^p({\sigma}),\;\; p>1.
\end{equation}
\subsection{One weight theory}
\eqref{main} is a generalization of the one weight inequality for the Hilbert transform, where $T=H$, $d{\omega}(x)=w(x)dx$, $d{\sigma}(x)=w(x)^{1-p'}dx$ and $f\in L^p(w)$:
\begin{equation}\label{Hilbert one weight}
||Hf||_{L^p({\omega})} \lesssim ||f||_{L^p({\omega})}
\end{equation}
which was shown by Hunt, Muckenhoupt and Wheeden \cite{HMW} to be equivalent to the finiteness of the Muckenhoupt one weight $A_p$ condition: namely, ${\omega}$ has to be absolutely continuous with respect to Lebesgue measure, $d{\omega} =w(x)dx$, and
\begin{equation}
\label{one weight Ap}
A_p(w)=\underset{I}{\sup}\frac{1}{|I|}\int_I w(x)dx\left( \frac{1}{|I|}\int_Iw(x)^{\frac{1}{1-p}} dx\right)^{p-1}\leq C<\infty
\end{equation}
where the supremum is taken over all cubes in ${\mathbb R}^n$. There has been a huge amount of work in harmonic analysis and boundary value problems around the $A_p$ condition; see Stein \cite{St}, Duoandikoetxea \cite{DJ}, Garnett \cite{Ga} and the references therein.
Coifman and Fefferman in \cite{CoFe} proved \eqref{Hilbert one weight} using the following inequality, which holds for any $w \in A_\infty=\bigcup_{p\geq 1}A_p$,
\begin{equation}\label{Hilbert maximal}
\int_{{\mathbb R}^n}|Tf(x)|^pw(x)dx\leq C\int_{{\mathbb R}^n}|Mf(x)|^pw(x)dx
\end{equation}
where $T$ is any singular integral operator and $f$ is bounded and compactly supported. We can extend to any locally integrable $f$ for which the right hand side is finite (since otherwise there is nothing to prove) using the dominated convergence theorem.
Muckenhoupt in \cite{Mu} proved, for $n=1$, that a class of weights more general than the $A_p$ weights, namely the $C_p$ weights (see \eqref{Cp condition}), is necessary for \eqref{Hilbert maximal} to hold. This was generalized to higher dimensions by Sawyer in \cite{Saw3}, who also showed that the $C_q$ condition for $q>p$ is sufficient for \eqref{Hilbert maximal} to hold. It is still unknown whether the $C_p$ condition itself is sufficient for \eqref{Hilbert maximal} to hold.
We say the measure ${\omega}$ satisfies the $C_p$ condition, $1<p<\infty$, if it is absolutely continuous with respect to the Lebesgue measure, i.e. $d{\omega}=w(x)dx$, and there exist $C,\epsilon>0$ such that
\begin{equation}\label{Cp condition}
\frac{|E|_w}{\int_{{\mathbb R}^n}|M\mathbf{1}_I(x)|^pw(x)dx}\leq C\left(\frac{|E|}{|I|}\right)^\epsilon, \quad \text{for every compact set } E\subset I, \ I \text{ a cube},
\end{equation}
with $\int_{{\mathbb R}^n} \left(M\mathbf{1}_I\left(x\right)\right)^pw(x)dx<\infty$, where $|E|_w=\int_Ew(x)dx$. Here $M$ denotes the classical Hardy-Littlewood maximal operator. We will call $w(x)$ a $C_p$ weight.
We prove that the $C_p$ weights form a strictly larger class than the $A_\infty$ weights. In fact, we show that there exist doubling weights (see
\eqref{doubling}) that are $C_p$ weights but are not in $A_\infty$. See the diagram at the end of the introduction.
\begin{thm}\label{Cp theorem}
($C_p\cap\mathcal{D}\nRightarrow A_\infty$) There exists a weight $w$ that is doubling and satisfies the $C_p$ condition but is not an $A_\infty$ weight.
\end{thm}
The weight $w$ used in Theorem \ref{Cp theorem} has doubling constant $C_w\gtrsim 3^{np}$. We show that this is sharp: if the doubling constant $C_w$ of the weight $w$ satisfies $C_w < 3^{np}$, then the $C_p$ condition is equivalent to $A_\infty$.
\begin{thm}\label{Cp theorem small doubling}
($C_p$+small doubling $\Rightarrow A_\infty$) Let $w$ be a doubling $C_p$ weight in ${\mathbb R}^n$ with doubling constant $C_w< 3^{np}$. Then $w \in A_\infty$.
\end{thm}
\subsection{Two weight theory.}
The one weight $A_p$ condition \eqref{one weight Ap} is naturally generalized to the two weight setting by:
\begin{equation}
\label{classical Ap}
{\mathcal{A}}_p({\omega},{\sigma})=\underset{I}{\sup}\left(\frac{{\omega}(I)}{|I|}\right)^\frac{1}{p}\left(\frac{{\sigma}(I)}{|I|}\right)^\frac{1}{p'}<\infty
\end{equation}
where the supremum is taken over all cubes in ${\mathbb R}^n$ and the single weight $w$ is replaced by two positive locally finite Borel measures. Notice that by setting $d{\omega}=w(x)dx$, $d{\sigma}=w(x)^{\frac{1}{1-p}}dx$
we retrieve the one weight $A_p$ condition \eqref{one weight Ap}.
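For the reader's convenience, here is the computation behind the last claim: with $d{\omega}=w(x)dx$, $d{\sigma}=w(x)^{\frac{1}{1-p}}dx$ and $\frac{p}{p'}=p-1$ we have
$${\mathcal{A}}_p({\omega},{\sigma})^p=\underset{I}{\sup}\,\frac{{\omega}(I)}{|I|}\left(\frac{{\sigma}(I)}{|I|}\right)^{p-1}=\underset{I}{\sup}\,\frac{1}{|I|}\int_I w(x)dx\left( \frac{1}{|I|}\int_Iw(x)^{\frac{1}{1-p}} dx\right)^{p-1}=A_p(w).$$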
The two weight problem could have applications in a number of problems connected to higher dimensional analogues of the Hilbert transform: for example, questions regarding subspaces of the Hardy space invariant under the inverse shift operator (see \cite{Vol}, \cite{NaVo}), questions concerning orthogonal polynomials (see \cite{VoYu}, \cite{PeVoYu}, \cite{PeVoYu1}) and some questions in quasiconformal theory, such as the conjecture of Iwaniec and Martin (see \cite{IwMa}) or higher dimensional analogues of the Astala conjecture (see \cite{LSU1}).
The classical ${\mathcal{A}}_p$ condition \eqref{classical Ap} is necessary for \eqref{main} to hold but is no longer sufficient, an indication of why two weight theory is much more complicated. F. Nazarov \cite{Naz} has shown that even the strengthened ${\mathcal{A}}_p({\omega},{\sigma})$ conditions of Nazarov, Treil and Volberg with one or two tails,
\begin{equation}
\label{one tailed}
{\mathcal{A}}_p^{t_1}({\omega},{\sigma})=\underset{I}{\sup} \left(\frac{{\omega}(I)}{|I|}\right)^\frac{1}{p}\left(P(I,{\sigma})\right)^\frac{1}{p'}<\infty
\end{equation}
\begin{equation}
\label{two tailed}
{\mathcal{A}}_p^{t_2}({\omega},{\sigma})=\underset{I}{\sup} \left(P(I,{\omega})\right)^\frac{1}{p}\left(P(I,{\sigma})\right)^\frac{1}{p'}<\infty
\end{equation}
where
\begin{equation}\label{poisson integral}
P(I,{\omega})\equiv \int_{{\mathbb R}^n}\left(\frac{|I|^\frac{1}{n}}{(|I|^\frac{1}{n}+\dist(x,I))^{2}}\right)^n{\omega}(dx)
\end{equation}
along with their duals ${\mathcal{A}}^{t_1,*}_p({\omega},{\sigma}),{\mathcal{A}}^{t_2,*}_p({\omega},{\sigma})$, where the roles of ${\sigma}$ and ${\omega}$ are interchanged, are no longer sufficient for \eqref{main} to hold.
When the operator $T$ in \eqref{main} is a fractional operator such as the Cauchy transform or the fractional Riesz transforms then the fractional analogs of \eqref{classical Ap}, \eqref{one tailed}, \eqref{two tailed} are used
\begin{equation}
\label{fractional Ap}
{\mathcal{A}}^\alpha_p({\omega},{\sigma})=\underset{I}{\sup}\left(\frac{{\omega}(I)}{|I|^{1-\frac{\alpha}{n}}}\right)^\frac{1}{p}\left(\frac{{\sigma}(I)}{|I|^{1-\frac{\alpha}{n}}}\right)^\frac{1}{p'}<\infty
\end{equation}
\begin{equation}
\label{fractional one tailed}
{\mathcal{A}}_p^{t_1,\alpha}({\omega},{\sigma})=\underset{I}{\sup}
\left(\frac{{\omega}(I)}{|I|^{1-\frac{\alpha}{n}}}\right)^\frac{1}{p}\left({\mathcal{P}}^\alpha(I,{\sigma})\right)^\frac{1}{p'}<\infty
\end{equation}
\begin{equation}
\label{fractional two tailed}
{\mathcal{A}}_p^{t_2,\alpha}({\omega},{\sigma})=\underset{I}{\sup} \left({\mathcal{P}}^\alpha(I,{\omega})\right)^\frac{1}{p}\left({\mathcal{P}}^\alpha(I,{\sigma})\right)^\frac{1}{p'}<\infty
\end{equation}
where ${\mathcal{P}}^\alpha$ is the \textit{reproducing} Poisson integral and is given by
$${\mathcal{P}}^\alpha(I,{\omega})\equiv \int_{{\mathbb R}^n}\left(\frac{|I|^\frac{1}{n}}{(|I|^\frac{1}{n}+\dist(x,I))^{2}}\right)^{n-\alpha}{\omega}(dx).$$
The \textit{standard} Poisson integral is given by
$$P^\alpha(I,{\omega})\equiv \int_{{\mathbb R}^n}\frac{|I|^\frac{1}{n}}{(|I|^\frac{1}{n}+\dist(x,I))^{n+1-\alpha}}{\omega}(dx)$$
and is used for the definition of the fractional ``buffer" conditions. The two Poisson integrals agree for $n=1$, $\alpha=0$.
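Indeed, for $n=1$ and $\alpha=0$ the exponent in the reproducing kernel is $n-\alpha=1$ while the exponent in the standard kernel is $n+1-\alpha=2$, so both integrals reduce to
$${\mathcal{P}}^0(I,{\omega})=\int_{{\mathbb R}}\frac{|I|}{(|I|+\dist(x,I))^{2}}\,{\omega}(dx)=P^0(I,{\omega}).$$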
We refer the reader to \cite{SSU4} for more details. All the results that we are proving here for the ${\mathcal{A}}_p$ conditions hold for their fractional analogs without any modification in the proofs.
We show that the classical ${\mathcal{A}}_p$ condition is weaker than the tailed conditions, but the two tailed ${\mathcal{A}}_p$ condition holding is equivalent to both one tailed ${\mathcal{A}}_p$ conditions holding.
\begin{thm}\label{non doubling Ap examples} We have the following implications:
\begin{enumerate}
\item (${\mathcal{A}}_p \nRightarrow {\mathcal{A}}_p^{t_1}\cap {\mathcal{A}}_p^{t_1,*}$) The two weight classical ${\mathcal{A}}_p$ condition does not imply the one tailed ${\mathcal{A}}_p$ conditions.
\item (${\mathcal{A}}_p^{t_1}\nRightarrow {\mathcal{A}}_p^{t_2}$) The one tailed ${\mathcal{A}}_p^{t_1}$ condition does not imply the two tailed ${\mathcal{A}}_p^{t_2}$ condition.
\item (${\mathcal{A}}_p^{t_1}\cap {\mathcal{A}}_p^{t_1,*} \Leftrightarrow {\mathcal{A}}_p^{t_2}$) The two tailed ${\mathcal{A}}_p^{t_2}$ condition holding is equivalent to both one tailed ${\mathcal{A}}_p^{t_1},{\mathcal{A}}_p^{t_1,*}$ conditions holding.
\end{enumerate}
\end{thm}
The measures that we use in the proof of Theorem \ref{non doubling Ap examples} are non doubling, and we show that this is essentially the only case: all doubling measures are reverse doubling (see Lemma \ref{reverse doubling implies doubling}), so the previous claim is justified by the following theorem.
\begin{thm}\label{Ap doubling equivalence theorem}(${\omega},{\sigma}\in \mathcal{D}$ , ${\mathcal{A}}_p \Rightarrow {\mathcal{A}}_p^{t_1}\Rightarrow {\mathcal{A}}_p^{t_2}$)
If ${\omega},{\sigma}$ are reverse doubling measures, then the classical two weight ${\mathcal{A}}_p$ condition implies the tailed ${\mathcal{A}}_p$ conditions.
\end{thm}
\subsection{The testing conditions.} Since the two weight ${\mathcal{A}}_p$ conditions are not sufficient for \eqref{main} to hold, some other necessary conditions are required, namely the \textbf{1}-testing conditions
\begin{eqnarray}
\label{test}
||T(\textbf{1}_Id{\sigma})||_{L^p({\omega})} &\leq& \mathfrak{T}^p|I|_{\sigma} \\
||T^*(\textbf{1}_Id{\omega})||_{L^p({\sigma})} &\leq& (\mathfrak{T}^*)^p|I|_{\omega}\nonumber
\end{eqnarray}
where $I$ runs over all cubes and $\mathfrak{T}, \mathfrak{T}^*$ are the best constants so that \eqref{test} holds.
These conditions alone are trivially not sufficient for \eqref{main} to hold: as pointed out in \cite{NiTr}, for example, the second Riesz transform $R_2$ of any measure supported on the real line is the zero element of $L^p({\omega})$ for any measure ${\omega}$ carried by the upper half plane, while such a pair of measures need not satisfy the Muckenhoupt conditions, which are necessary for \eqref{main} to hold.
The famous Nazarov-Treil-Volberg conjecture (NTV conjecture) states that the ${\mathcal{A}}_p({\omega},{\sigma})$ and testing conditions are necessary and sufficient for \eqref{main} to hold.
\subsection{The ``buffer" Pivotal and Energy conditions.} Nazarov, Treil and Volberg in a series of very clever papers assumed the pivotal condition, for $p=2$, and proved \eqref{main} (see \cite{NTV1},\cite{NTV2},\cite{Vol}).
The Pivotal condition ${\mathcal{V}}$ is given by
\begin{equation}
\label{pivotal}
{\mathcal{V}}({\omega},{\sigma})^p=\underset{I_0=\cup I_r}{\sup}\frac{1}{{\sigma}(I_0)}\displaystyle\sum_{r \geq 1}{\omega}(I_r)P(I_r,1_{I_0}{\sigma})^p<\infty
\end{equation}
where the supremum is taken over all possible decompositions of $I_0$ in disjoint cubes $\{I_r\}_{r \in {\mathbb N}}$ and all cubes $I_0$ such that ${\sigma}(I_0) \neq 0$, and its dual ${\mathcal{V}}^*$ where ${\sigma}$ and ${\omega}$ are interchanged.
Lacey, Sawyer and Uriarte-Tuero in \cite{LSU} proved, again for $p=2$, that \eqref{main} for the Hilbert transform implies the weaker Energy condition ${\mathcal{E}}$
\begin{equation}
\label{energy}
{\mathcal{E}}({\omega},{\sigma})^p=\underset{I_0=\cup I_r}{\sup}\frac{1}{{\sigma}(I_0)}\displaystyle\sum_{r \geq 1}{\omega}(I_r)E(I_r,{\omega})^2P(I_r,1_{I_0}{\sigma})^p<\infty
\end{equation}
where the supremum is taken over all possible decompositions of $I_0$ in disjoint cubes $\{I_r\}_{r \in {\mathbb N}}$ and all cubes $I_0$ such that ${\sigma}(I_0) \neq 0$, where
\begin{equation}\label{energy gain}
E(I,{\omega})^2 \equiv \frac{1}{2}\mathbb{E}_I^{{\omega} (dx)}\mathbb{E}_I^{{\omega} (dx')}\frac{(x-x')^2}{|I|^2}
\end{equation}
and its dual ${\mathcal{E}}^*$ where ${\sigma}$ and ${\omega}$ are interchanged.
In the same paper, Lacey, Sawyer and Uriarte-Tuero proved that a hybrid of the Pivotal and Energy conditions was sufficient but not necessary in the two weight inequality for the Hilbert transform.
Both the energy and the pivotal conditions, sometimes referred to as ``buffer conditions", are used to approximate certain forms that appear in the proofs of almost all two weight inequalities. The NTV conjecture states that we can prove $\eqref{main}$ without assuming them.
It is true, though, that if both ${\omega}$ and ${\sigma}$ are individually ${\mathcal{A}}_\infty$ weights, then the classical ${\mathcal{A}}_p({\omega},{\sigma})$ condition implies the Pivotal condition, providing a short and elegant proof of the NTV conjecture for ${\mathcal{A}}_\infty$ weights assuming the existing $T1$ theory. Earlier, Sawyer in \cite{Saw2} gave a proof using different methods for the case of smooth kernels.
\begin{thm}\label{T1 theorem} ($T1$ theorem for ${\mathcal{A}}_\infty$ weights)
Assume that ${\omega},{\sigma}$ are in $A_\infty$, that $T$ is an $\alpha$-fractional singular integral, and that the $T1$ testing conditions and the fractional ${\mathcal{A}}_2^\alpha({\omega},{\sigma})$ condition hold, along with their duals. Then \eqref{main} holds for $p=2$.
\end{thm}
\subsection{The relationship between the two weight ${\mathcal{A}}_p$ and ``buffer" conditions. }It is shown in \cite{LSU} that we can have a pair of measures satisfying the tailed ${\mathcal{A}}_2$ conditions \eqref{one tailed}, \eqref{two tailed} but failing to satisfy the Pivotal condition \eqref{pivotal}, hence proving the implication ${\mathcal{A}}_2^{t_2}\nRightarrow {\mathcal{V}}^2$.
We show here that the Pivotal condition \eqref{pivotal} does not imply the tailed ${\mathcal{A}}_2$ conditions \eqref{one tailed}, \eqref{two tailed}.
\begin{thm}\label{non doubling pivotal example}(${\mathcal{V}}^p \nRightarrow {\mathcal{A}}_p^{t_1}$) Let $1<p\leq 2$.
The Pivotal condition ${\mathcal{V}}^p$ does not imply the one tailed ${\mathcal{A}}_p$ condition ${\mathcal{A}}_p^{t_1}$.
\end{thm}
\begin{rem}\label{Pivotal implies Energy}
It is immediate from \eqref{energy gain} that the Energy condition \eqref{energy} is dominated by the Pivotal condition \eqref{pivotal}; hence we get the following important corollary.
\end{rem}
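To see this concretely in dimension $n=1$, note that $|x-x'|\leq |I|$ for $x,x'\in I$, so
$$E(I,{\omega})^2=\frac{1}{2}\mathbb{E}_I^{{\omega} (dx)}\mathbb{E}_I^{{\omega} (dx')}\frac{(x-x')^2}{|I|^2}\leq \frac{1}{2}\leq 1,$$
and each summand in \eqref{energy} is dominated by the corresponding summand in \eqref{pivotal}; hence ${\mathcal{E}}({\omega},{\sigma})\leq {\mathcal{V}}({\omega},{\sigma})$.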
\begin{cor}\label{energy corollary}(${\mathcal{E}}\nRightarrow {\mathcal{A}}_p^{t_1}$)
Let $1<p\leq 2$.
The Energy condition ${\mathcal{E}}$ does not imply the one tailed ${\mathcal{A}}_p$ condition ${\mathcal{A}}_p^{t_1}$. \qed
\end{cor}
\subsection{Organization of the paper}
In section 4 we prove theorems \ref{Cp theorem} and \ref{Cp theorem small doubling}. In section 5.1 we prove theorem \ref{non doubling Ap examples} and in section 5.2 we prove theorem \ref{Ap doubling equivalence theorem}. We prove the $T1$ theorem for ${\mathcal{A}}_\infty$ weights, theorem \ref{T1 theorem}, in section 5.3, using the Sawyer testing condition (see \eqref{Sawyer testing} and theorem \ref{Sawyer testing theorem}). In section 5.4 we prove theorem \ref{non doubling pivotal example} and give a partial answer to question \ref{doubling measures question} in theorem \ref{Ap and small doubling corollary}. See the graph and the lattices in sections 2 and 3 for a summary of the $T1$ theory and the theorems presented in this paper.
\subsection{Known cases of the NTV conjecture.}While the general case of the NTV
conjecture in ${\mathbb R}^n$ is still not completely understood, several important special
cases have been completely solved.
First, in a two part paper, Lacey, Sawyer,
Shen and Uriarte-Tuero \cite{LSSU} and Lacey \cite{Lac} proved the NTV conjecture for the Hilbert transform,
namely that the ${\mathcal{A}}_p({\omega},{\sigma})$ and testing conditions are necessary and sufficient for
\eqref{main} to hold, under the additional assumption that the measures ${\sigma}$ and ${\omega}$ have no common
point masses. Hyt{\"o}nen \cite{Hyt}, with his new offset
version of ${\mathcal{A}}_2$
\begin{equation}
\label{offset A2}
{\mathcal{A}}_2^{\text{offset}}({\omega},{\sigma})=\underset{I}{\sup}\frac{{\omega}(I)}{|I|}\int_{{\mathbb R}^n\backslash I}\left(\frac{|I|^\frac{1}{n}}{(|I|^\frac{1}{n}+\dist(x,I))^2}\right)^n{\sigma}(dx) <\infty
\end{equation}
removed the restriction of common point masses on ${\sigma}, {\omega}$. An alternate approach using ``punctured" versions of ${\mathcal{A}}_2$ appears in \cite{SSU5}.
Other important cases include Sawyer, Shen and Uriarte-Tuero \cite{SSU4} for $\alpha$-fractional singular integrals, Lacey and Wick \cite{LW} for the Riesz transforms, Lacey, Sawyer, Shen, Uriarte-Tuero and Wick \cite{LSSUW} for the Cauchy transform, Sawyer, Shen and Uriarte-Tuero \cite{SSU2} for the Riesz transform when a measure is supported on a curve in ${\mathbb R}^n$, and recently \cite{Saw2} for general Calder\'on-Zygmund operators and doubling measures that also satisfy the fractional ${\mathcal{A}}_\infty^\alpha$ condition (see \eqref{A alpha infinity}). The NTV conjecture is yet to be proven for a general operator $T$.
\textbf{Acknowledgements:} I would like to thank my advisors Eric Sawyer and Ignacio Uriarte-Tuero for introducing me to the area, for presenting the problem to me and for providing suggestions for its progress.
\section{Lattices}
\begin{center}
\textbf{One weight conditions}
\end{center}
Combining \eqref{lattice Ap}, theorem \ref{Cp theorem}, \textit{remark \ref{remark1}}, \textit{remark \ref{remark2}}, \textit{remark \ref{remark fractional A infinity}} and theorem \ref{fractional A infinity and doubling} we get, for $p<q$, the following lattice of inclusions for the conditions used in one weight theory
\[
\xy
(-21,7)*+{A_1({\omega})\subsetneq A_p({\omega}) \subsetneq A_q({\omega}) \subsetneq A_\infty({\omega})\ \ \ \ \ };
(3.2,10.5)*+{\rotatebox{45}{$\,\, \subsetneq\,\,$}};
(26.5,13)*+{{\mathcal{A}}_\infty^\alpha({\omega})\cap \mathcal{D}({\omega})\subsetneq\left\{
\begin{array}{l}
\!\!\!\mathcal{D}({\omega})
\\
\!\!\!{\mathcal{A}}_\infty^\alpha({\omega})
\end{array}
\right.};
(3.2,3)*+{\rotatebox{-45}{$\,\, \subsetneq\,\,$}};
(23,1)*+{\ \ \ \ \ C_p({\omega})\cap \mathcal{D}({\omega})\subsetneq\left\{
\begin{array}{l}
\!\!\!\mathcal{D}({\omega})
\\
\!\!\!C_p({\omega})
\end{array}
\right.};
\endxy
\]
\begin{center}
\textbf{Two weight conditions}
\end{center}
Combining \textit{remark \ref{remark Ap}}, theorem \ref{non doubling Ap examples}, theorem \ref{Ap doubling equivalence theorem}, \textit{remark \ref{pivotal implies classical Ap}}, theorem \ref{non doubling pivotal example}, \textit{remark \ref{Pivotal implies Energy}}, corollary \ref{energy corollary}, theorem \ref{fractional A infinity and doubling}, theorem \ref{Sawyer testing theorem}, corollary \ref{A infinity and pivotal corollary} and the example in \cite{LSU} we get the following lattice of inclusions for the conditions used in two weight theory.\\
\\
\text{For general Radon measures:}
\begin{eqnarray*}
\text{Theorem \ref{non doubling Ap examples},\,\, }&&\\
\text{\textit{remark \ref{remark Ap}}, \textit{remark \ref{pivotal implies classical Ap}}, \cite{LSU}: }&&{\mathcal{V}}({\omega},{\sigma})^p\subsetneq {\mathcal{A}}_p({\omega},{\sigma})\subsetneq {\mathcal{A}}^{t_1}_p({\omega},{\sigma})\cup {\mathcal{A}}^{t_1}_p({\sigma},{\omega})\\
&& A_p^{t_1}({\omega},{\sigma})\cap A_p^{t_1}({\sigma},{\omega})=A_p^{t_2}({\omega},{\sigma})\\
&& A_p^{t_1}({\omega},{\sigma})\subsetneq A_p^{t_2}({\omega},{\sigma})\\
\text{\textit{Remark \ref{Pivotal implies Energy}},\,\, }\\
\text{theorem \ref{non doubling pivotal example}, corollary \ref{energy corollary}: }&&{\mathcal{E}}({\omega},{\sigma})^p\subsetneq{\mathcal{V}}({\omega},{\sigma})^p \centernot\implies {\mathcal{A}}_p^{t_1}({\omega},{\sigma})\subsetneq {\mathcal{A}}_p^{t_2}({\omega},{\sigma})
\end{eqnarray*}
\text{For doubling measures:}
\begin{eqnarray*}
\text{Theorem \ref{Ap doubling equivalence theorem}: }&&{\mathcal{A}}_p({\omega},{\sigma})= A_p^{t_1}({\omega},{\sigma})= {\mathcal{A}}_p^{t_2}({\omega},{\sigma})\\
\text{Theorem \ref{Sawyer testing theorem}, corollary \ref{A infinity and pivotal corollary}: }&& {\mathcal{A}}_p({\omega},{\sigma})\cap {\mathcal{A}}_\infty({\omega}) \subsetneq S_d({\omega},{\sigma})\subseteq{\mathcal{V}}({\omega},{\sigma})^p\\
\text{Theorem \ref{Ap and small doubling corollary}: }&& {\mathcal{A}}_p({\omega},{\sigma})\cap \mathcal{D}({\sigma})\cap\mathcal{D}({\omega})\subsetneq {\mathcal{V}}({\omega},{\sigma})^p\\
&&\text{(small doubling constant)}
\end{eqnarray*}
\section{What we know so far}
The following diagram shows the relationships between the different conditions that have appeared in the study of two weight inequalities for the $\textbf{1}$-testing case over the years.
\hspace{-1.3 cm}\includegraphics[height=7 cm,width=15 cm]{"t1_400_diagram".png}
\section{One weight conditions}
\subsection{The $A_1$ and $A_\infty$ conditions.}
We say the weight $w(x)$ is an $A_1$ weight if and only if
\begin{equation}\label{one weight A1 condition}
Mw(x)\leq [w]_{A_1}w(x)
\end{equation}
and we call $[w]_{A_1}$ the $A_1$ constant of $w$. The $A_1$ condition is stronger than the $A_p$ condition for $p>1$.
Taking the union of the $A_p$ classes over all $p$ we get the larger class of $A_\infty$ weights, i.e. $A_\infty=\displaystyle \bigcup_{p>1}A_p$ (see \cite{DJ}, chapter 7). Another equivalent and commonly used characterization of $A_\infty$ weights is the following:
We say $w \in A_\infty$, if for all $I \subset {\mathbb R}^n$ and $E\subset I$, there exist uniform constants $C,\varepsilon>0$ such that
\begin{equation}
\label{A infinity}
\frac{w(E)}{w(I)}\leq C\left(\frac{|E|}{|I|}\right)^\varepsilon.
\end{equation}
\begin{rem}
We have the following linear lattice for $1<p<q<\infty$:
\begin{equation}\label{lattice Ap}
A_1\subsetneq A_p \subsetneq A_q \subsetneq A_\infty
\end{equation}
The power weights $w(x)=|x|^\alpha$ show that all the inclusions are proper.
\end{rem}
In particular we have the following known lemma.
\begin{lem}
Let $w(x)=|x|^\alpha$, $x \in {\mathbb R}^n$. Then\\
\begin{center}
$\displaystyle
[w]_{A_p}\approx \begin{cases} (\alpha+n)^{-1}(-\alpha \frac{p'}{p}+n)^{-\frac{p}{p'}}, \quad -n<\alpha<n(p-1)
\\
\infty ,\quad \text{otherwise}
\end{cases}
$\end{center}\qed
\end{lem}
\subsection{Doubling and reverse doubling measures.} The $A_p$ weights for $1\leq p \leq \infty$ are all absolutely continuous with respect to the Lebesgue measure and \textit{doubling}.
We say a measure ${\omega}$ on ${\mathbb R}^n$ is \textit{doubling}, and write ${\omega} \in \mathcal{D}$, if it is not the zero measure and there
is a constant $K>0$ such that for all cubes $I\subset {\mathbb R}^n$ we have
\begin{equation}\label{doubling}
{\omega}(2I)\leq K{\omega}(I).
\end{equation}
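For example, every $A_p$ weight is doubling: applying H\"older's inequality to $|E|=\int_E w^{\frac{1}{p}}w^{-\frac{1}{p}}dx$ and using \eqref{one weight Ap} gives, for every measurable subset $E$ of a cube $I$,
$$\left(\frac{|E|}{|I|}\right)^p\leq A_p(w)\,\frac{w(E)}{w(I)},$$
and taking $E=I$ inside the cube $2I$ yields $w(2I)\leq 2^{np}A_p(w)\,w(I)$.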
Not all doubling measures are in $A_\infty$, as was first shown in \cite{FeMu} using an absolutely continuous measure $w$ that is doubling but is not in $A_\infty$. Mutually singular doubling measures also exist, which of course are not in $A_\infty$; a nice construction can be found in \cite{GaKS}.
We say a measure ${\sigma}$ is \textit{reverse doubling} if there exists
$\varepsilon>0$ depending only on the measure ${\sigma}$ such that for all cubes $I$:
\begin{equation}\label{reverse doubling}
{\sigma}(2I)\geq (1+\varepsilon){\sigma}(I).
\end{equation}
Doubling measures satisfy the reverse doubling property, as the following lemma from \cite{Ruz} shows.
\begin{lem}\label{reverse doubling implies doubling}
Let ${\sigma}$ be a doubling measure with doubling constant $K_{\sigma}$. Then there exist a constant $\delta_{\sigma}>0$ depending only on the doubling constant of ${\sigma}$ such that for all cubes $I$ we have ${\sigma}(2I)\geq (1+\delta_{\sigma}){\sigma}(I)$.\qed
\end{lem}
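The Lebesgue measure gives the simplest instance of both properties (a sanity check of ours, not part of the cited lemma):

```latex
% For the Lebesgue measure m on R^n and any cube I:
m(2I)=2^{n}m(I),
```

so $m$ is doubling with $K=2^n$, and the same identity gives $m(2I)\geq(1+\delta)m(I)$ with $\delta=2^n-1$, which is the conclusion of the lemma.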
For the rest of this section, we are going to say that a measure ${\omega}$ is doubling if
\begin{equation}\label{doubling3}
{\omega}(3I)\leq C{\omega}(I).
\end{equation}
This definition is equivalent to \eqref{doubling}.
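The equivalence is elementary; here is our one-line verification of the two directions:

```latex
% \eqref{doubling} implies \eqref{doubling3} with C=K^2; conversely C is a doubling constant:
{\omega}(3I)\leq {\omega}(4I)={\omega}(2\cdot 2I)\leq K\,{\omega}(2I)\leq K^{2}{\omega}(I),
\qquad
{\omega}(2I)\leq {\omega}(3I)\leq C\,{\omega}(I).
```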
\subsection{A $C_p$ and doubling weight that is not in $A_\infty$}
In this subsection we give the proof of theorem \ref{Cp theorem}. The construction is a very involved variation of the construction in \cite{GaKS}.
\\
\\\textit{Proof of theorem \ref{Cp theorem}:} Let $I_0=[-\frac{1}{2},\frac{1}{2}]$ and $I_n=3I_{n-1}=3^nI_0$, the intervals
centered at $0$ with length $3^n$. We call $\mathcal{G}$ the triadic grid created by the intervals $I_n$. Define the measure $w$ as follows:
$w\left(x\right)=1$ for $x \in I_0$ and $w\left(I_n\right)=\frac{1}{\delta_1^n}$, where $0<\delta_1<\frac{1}{3}$ is to
be determined later. Call $I^l_n, I^m_n, I^r_n$ the left, middle and right thirds of
$I_n$ respectively. Let $w(x)=\frac{3^{-n+1}(1-\delta_1)}{2\delta_1^n}$ for $x \in I^r_n$.
Fix $k \in {\mathbb N}$ and $n_k \in {\mathbb N}$ to be determined later.
Let $I^{l,m}_{n_k}$ have the same center as
$I^l_{n_k}$ and $\vert I^{l,m}_{n_k}\vert=3^{m}$, $0\leq m \leq n_k-1$. Let $w(I^{l,m}_{n_k})=\delta_2^{n_k-m}w(I^l_{n_k})$, where $0<\delta_2<\frac{1}{3}$ is to be determined later. For $m \geq 2$ let
$w\left(x\right)=\frac{3(1-\delta_2)}{2|I^{l,m}_{n_k}|}w\left(I^{l,m}_{n_k}\right),$ for all $x\in I^{l,m}_{n_k}\backslash I^{l,m-1}_{n_k}$. This defines $w$
completely outside $3I^{l,0}_{n_k}$. See Figure \ref{Figure 2.1.}.
$ $
$ $
\begin{center}
\begin{tikzpicture}
\label{Figure 2.1.}
\color{blue}
\draw (-2/27,0) -- (2/27,0);
\draw (-2/27-12/27,1/4) -- (-2/27,1/4);
\draw (2/27,1/4) -- (2/27+12/27,1/4);
\draw (-2/27-12/27-12/9,1/2) -- (-2/27-12/27,1/2);
\draw (2/27+12/27,1/2) -- (2/27+12/27+12/9,1/2);
\draw (2/27+12/27+12/9,1)--(2/27+12/27+12/9+12/3,1);
\color{red}
\draw (-2/27-12/27-12/9-12/3,1)--(-2/27-12/27-12/9-12/3+4/3,1);
\draw (-2/27-12/27-12/9-4/3,1)--(-2/27-12/27-12/9,1);
\draw (-2/27-12/27-12/9-12/3+4/3,0.5)--(-2/27-12/27-12/9-12/3+4/3+4/9,0.5);
\draw (-2/27-12/27-12/9-4/3-4/9,0.5)--(-2/27-12/27-12/9-4/3,0.5);
\draw (-2/27-12/27-12/9-12/3+4/3+4/9,0.25)--(-2/27-12/27-12/9-12/3+4/3+4/9+4/27,0.25);
\draw (-2/27-12/27-12/9-4/3-4/9-4/27,0.25)--(-2/27-12/27-12/9-4/3-4/9,0.25);
\draw (-2/27-12/27-12/9-12/3+4/3+4/9+4/27,0)--(-2/27-12/27-12/9-4/3-4/9-4/27,0);
\color{black}
\draw[decorate,decoration={brace,mirror}] (-2/27-12/27-12/9-12/3,-0.1) -- (-2/27-12/27-12/9,-0.1)
node (m) at (-2/27-12/27-12/9-12/3+2,-0.4){\footnotesize $I^l_{n_k}$};
\draw[decorate,decoration={brace,mirror}] (0,-0.1) -- (0,-0.1)
node (m) at (0,-0.4){\footnotesize $I_{0}$};
\end{tikzpicture}
Figure 4.3
\end{center}
$ $
Now let $I\subset I^{l,0}_{n_k}$ be any triadic interval such that
$|I|\geq 3^{-i_k}$, and $i_k \in {\mathbb N}$ will be determined later. Let
$$
w\left(I\right)=\left\{
\begin{array}{ll}
\delta_2 w\left(\pi I\right) & \mbox{if } \partial I \cap \partial \pi
I=\emptyset\\
\frac{1-\delta_2}{2}w\left(\pi I\right) & \mbox{if } \partial I \cap
\partial \pi I\neq\emptyset
\end{array}
\right.
$$
where $\pi I$ is the triadic parent of $I$ in the grid $\mathcal{G}$. Let $w\left(x\right)$ be constant for any triadic interval $I\subset I^{l,0}_{n_k}$ with $|I|\leq 3^{-i_k}$.
We are left with defining $w$ on $3I^{l,0}_{n_k}\backslash I^{l,0}_{n_k}$. Call
$J^l_{n_k}$ the left third of $3I^{l,0}_{n_k}$ and $J^r_{n_k}$ its right third.
Let $J^{l,i_k}_{n_k}$ be the rightmost triadic child of $J^l_{n_k}$ at generation $i_k$ and let
$w(x)=\left(\frac{1-\delta_2}{2}\right)^{i_k}w(J^{l}_{n_k})$, $x \in
J^{l,i_k}_{n_k}$. Now for all triadic $I$ such that $J^{l,i_k}_{n_k} \subset I
\subset J^{l}_{n_k}$, let $I^l, I^m, I^r$ denote the left, middle and right thirds
of $I$ and define $w(x)=\frac{3(1-\delta_2)}{2|I|}w(I)$, $x \in I^l$,
$w(x)=\frac{3\delta_2}{|I|}w(I)$, $x\in I^m$ and $w(I^r)=\frac{1-\delta_2}{2}w(I)$.
Similarly (but on the left end) we define $w$ on $J^r_{n_k}$. This construction on $J^l_{n_k}$ and $J^r_{n_k}$ is done so that $w$ is doubling.
Indeed, to see that $w$ is doubling, let $J_1,J_2$ be two triadic intervals of the
same length that touch. If they have the same triadic parent then $w(J_1)/w(J_2)\lesssim
\frac{1}{\min(\delta_1,\delta_2)}$. If not, we apply the first case to their common ancestor and again get $w(J_1)/w(J_2)\lesssim
\frac{1}{\min(\delta_1,\delta_2)}$. For an arbitrary interval $I$, let $3^m\leq |I|\leq3^{m+1}$. Then $I \subset J_1\cup J_2$ for triadic intervals $J_1,J_2$ with $|J_1|=|J_2|=3^{m+1}$, and hence $w(3I)\lesssim \frac{1}{\min(\delta_1,\delta_2)}w(I)$.
Letting $i_k \rightarrow \infty$ makes $w$ singular to the Lebesgue measure; see (\cite{GaKS}, Lemma 2.2).
Choose $i_k$ so that there exist an interval $J_{n_k}$ and a set $E_{n_k}\subset J_{n_k} \subset 3I^{l,0}_{n_k}$, such that
\begin{equation}\label{worst case}
\frac{w\left(E_{n_k}\right)}{w\left(J_{n_k}\right)}\approx\frac{1}{2}, \quad \frac{|E_{n_k}|}{|J_{n_k}|}\approx \frac{1}{2^k} \quad \text{ and }\quad \frac{w\left(E\right)}{w\left(I\right)}\lesssim 2^k \frac{|E|}{|I|}
\end{equation}
for all intervals $I\subset 3I^{l,0}_{n_k}$ and $E\subset I$. This can be done by following Definition 2.1 and Lemma 2.2 of \cite{GaKS}. Note that because we stop at height $i_k$, \eqref{worst case} tells us that there is a ``worst interval" $J_{n_k}$.
This is true by choosing $\delta_1,\delta_2<\frac{1}{3}$. It can be seen easily for intervals of the form $I=3^mI^{l,0}_{n_k}$ that
$$
\frac{w\left(E\right)/w\left(I\right)}{|E|/|I|}\leq\left(3\min(\delta_1,\delta_2)\right)^m\frac{w\left(E\right)/w\left(I^{l,0}_{n_k}\right)}{|E|/|I^{l,0}_{n_k}|}\lesssim 2^k.
$$
Taking different cases on $I$ gives the claim for an arbitrary interval.
By letting $k \rightarrow \infty$ it is clear that $A_\infty$ fails to hold for $w$. So we now need to prove that the $C_p$ condition holds.
By the end of the next calculation we will have determined $\delta_1,\delta_2$. We want to prove that $w$ is $C_p$, and for that we need to show that \eqref{Cp condition} holds and that $\int_{\mathbb R} \left(M\mathbf{1}_I\left(x\right)\right)^pw\left(x\right)dx<\infty$ for any interval $I$. First let $I=I^{l,0}_{n_k}$.
\begin{eqnarray}\label{Cp main}
&&
\int_{\mathbb R} \left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx=\int_{I^{l,0}_{n_k}}\left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx+\\
&+&
\int_{I^{l}_{n_k}\backslash I^{l,0}_{n_k}}\left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx+\int_{{\mathbb R}\backslash I^{l}_{n_k}}\left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx\notag\\
&\equiv&
A+B+C\notag
\end{eqnarray}
We have immediately $A=w\left(I^{l,0}_{n_k}\right)$. For $B$ we get
\begin{eqnarray*}
B
&=&\int_{I^{l}_{n_k}\backslash I^{l,0}_{n_k}}\left(\frac{|I^{l,0}_{n_k}|}{2|I^{l,0}_{n_k}|+2\dist\left(x,I^{l,0}_{n_k}\right)}\right)^pw\left(x\right)dx\\
&\approx&
2^{-p}\left(1-\delta_2\right)\sum_{m=1}^{n_k-1}\frac{3^{-mp}}{\delta_2^m}w\left(I^{l,0}_{n_k}\right)
=2^{-p}\left(1-\delta_2\right)w\left(I^{l,0}_{n_k}\right)\sum_{m=1}^{n_k-1}\left(\frac{3^{-p}}{\delta_2}\right)^m
\end{eqnarray*}
Now choose $\delta_2=\frac{3^{-p}}{2}$ so that the series above diverges as $n_k \to \infty$ (any $\delta_2\leq 3^{-p}$ works here). We also want $n_k$ so that
\begin{equation}\label{the gain}
2^{-p}\left(1-\delta_2\right)w\left(I^{l,0}_{n_k}\right)\sum_{m=1}^{n_k-1}\left(\frac{3^{-p}}{\delta_2}\right)^m\gtrsim 2^kw\left(I^{l,0}_{n_k}\right).
\end{equation}
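For instance (one admissible choice of ours, not the only one): with $\delta_2=\frac{3^{-p}}{2}$ the ratio of the geometric series is $3^{-p}/\delta_2=2$, so

```latex
\sum_{m=1}^{n_k-1}\left(\frac{3^{-p}}{\delta_2}\right)^{m}
=\sum_{m=1}^{n_k-1}2^{m}=2^{n_k}-2,
```

and \eqref{the gain} holds as soon as $2^{-p}(1-\delta_2)(2^{n_k}-2)\gtrsim 2^{k}$, e.g. for $n_k\geq k+p+c$ with $c$ an absolute constant.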
We are only left with calculating term $C$. We have,
\begin{eqnarray}\label{tail converging}
C
&=&
\int_{{\mathbb R}\backslash I^{l}_{n_k}}\left(\frac{|I^{l,0}_{n_k}|}{2|I^{l,0}_{n_k}|+2\dist\left(x,I^{l,0}_{n_k}\right)}\right)^pw\left(x\right)dx\\
&\approx&
2^{-p}3^{-n_kp}\left(1-\delta_1\right)\frac{1-\delta_2}{\delta_2^{n_k-1}}w\left(I^{l,0}_{n_k}\right)\sum_{m=1}^\infty \left(\frac{3^{-p}}{\delta_1}\right)^m\notag
\end{eqnarray}
Choose $\delta_1>3^{-p}$ so that the infinite series converges; recall also $\delta_1<\frac{1}{3}$, so such a choice exists for $p>1$. Combining the estimates for $A,B$ and $C$ we get:
\begin{equation}\label{Cp first gain}
\int_{\mathbb R} \left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx<\infty
\end{equation}
and
\begin{equation}\label{Cp first win}
\frac{w\left(E\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{I^{l,0}_{n_k}}(x)\right)^pw\left(x\right)dx}\leq \frac{w\left(E\right)}{2^kw(I^{l,0}_{n_k})}\lesssim \frac{2^k}{2^k}\frac{|E|}{|I^{l,0}_{n_k}|}=\frac{|E|}{|I^{l,0}_{n_k}|}.
\end{equation}
for $E\subset I^{l,0}_{n_k}$.
We want to extend \eqref{Cp first gain} and \eqref{Cp first win} to all triadic
intervals. Note that \eqref{Cp first gain} holds for any interval $I$. To see that, choose $n$ big enough so that $I\subset I_n$. Then, following the calculations for estimating $C$ in \eqref{tail converging} we get that
\begin{equation*}
\int_{{\mathbb R}\backslash I_n} \left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx<\infty
\end{equation*}
which of course gives us
\begin{equation}\label{maximal finiteness}
\int_{{\mathbb R}}\left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx<\infty
\end{equation}
To get \eqref{Cp first win} for any triadic $I\subset
3I^{l,0}_{n_k}$, note that we can follow the same
calculations that led to \eqref{the gain} and just choose $n_k$ big enough so that we get
the gain $2^kw(I)$. This is possible since the construction is finite and it stops at some height $i_k$. For that finite number of intervals, we choose $n_k$ big enough so that all the intervals get the gain $2^k$, i.e. $\int_{\mathbb R} \left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx\geq 2^kw(I)$. So we have for any $E\subset I$, using \eqref{worst case},
\begin{equation}\label{Cp winning}
\frac{w\left(E\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx}\leq \frac{w(E)}{2^kw(I)}\lesssim \frac{|E|}{|I|}.
\end{equation}
We will use the following calculation for triadic intervals $I\subset I^l_{n_k}$. Let $I=3I^{l,0}_{n_k}$. Following \eqref{Cp main} and using $\delta_2=\frac{3^{-p}}{2}$, write
$$
\int_{\mathbb R}\left(M\mathbf{1}_{I}\right)^pdw\equiv A'+B'+C',
$$
with $A'+B'\approx 3^p(A+B)$; hence
$$
\int_{I^l_{n_k}}\left(M\mathbf{1}_{I}\left(x\right)\right)^pw\left(x\right)dx\approx 3^p\int_{I^l_{n_k}} \left(M\mathbf{1}_{I^{l,0}_{n_k}}\left(x\right)\right)^pw\left(x\right)dx
$$
and
\begin{equation}\label{cp winning2}
\frac{w\left(E\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx}\lesssim \frac{w(E)}{3^p2^kw(I^{l,0}_{n_k})}\lesssim 3^{1-p}\frac{|E|}{|I|}\leq \frac{|E|}{|I|}
\end{equation}
for any $E\subset 3I^{l,0}_{n_k}$, so we don't lose any of
the ``gain" necessary for \eqref{Cp winning} to hold. We can
repeat this for all triadic intervals $I$ such that
$I^{l,0}_{n_k}\subset I\subset I^l_{n_k}$. Note that for $I=I^l_{n_k}$ we have $B'=0$. To extend \eqref{cp winning2} to triadic intervals $I\supset I^l_{n_k}$ notice that
$$
\frac{w(E)}{w(\pi(I))}\lesssim\delta_1\frac{w(E)}{w(I)}\lesssim \delta_1\frac{|E|}{|I|}\leq 3\delta_1\frac{|E|}{|\pi(I)|}\leq \frac{|E|}{|\pi(I)|}\Longrightarrow \frac{w(E)}{w(\pi(I))}\lesssim \frac{|E|}{|\pi(I)|}
$$
for any $E\subset 3I^{l,0}_{n_k}$, where we used $\delta_1<\frac{1}{3}$.
To get \eqref{Cp winning} for an arbitrary triadic interval, let $I$ be a triadic interval not contained in any $I^{l,0}_{n_k}$ and $E$ any subset of $I$. We write
$$
E=\left(\bigcup_{I^{l,0}_{n_k}\subset I}\left(E\cap 3I^{l,0}_{n_k}\right)\right)\bigcup \left(E\big\backslash\bigcup_{I^{l,0}_{n_k}\subset I} 3I^{l,0}_{n_k}\right)=E_1\cup E_2
$$
Using \eqref{cp winning2} we see that
\begin{eqnarray}\label{end proof1}
w(E_1)=\sum_{I^{l,0}_{n_k}\subset I}w(E\cap 3I^{l,0}_{n_k})
&\lesssim&
\sum_{I^{l,0}_{n_k}\subset I}\frac{|E\cap 3I^{l,0}_{n_k}|}{|I|}\int_{\mathbb R}\left(M\mathbf{1}_{I}\right)^pdw\\
&=&\frac{|E_1|}{|I|}\int_{\mathbb R}\left(M\mathbf{1}_{I}\right)^pdw \notag
\end{eqnarray}
To deal with $E_2$, note that for $\displaystyle x \in I\big\backslash\bigcup_{I^{l,0}_{n_k}\subset I} 3I^{l,0}_{n_k}$, $w(x)\lesssim \frac{3(1-\delta_2)}{2|I|}w(I)$ so we get
\begin{equation}\label{end proof2}
\frac{w(E_2)}{w(I)}\approx \frac{|E_2|}{|I|}
\end{equation}
Combining \eqref{end proof1} and \eqref{end proof2} we get \eqref{Cp winning} for a
triadic interval $I$.
We are left with extending \eqref{Cp winning} to an arbitrary interval $I$. Let $3^m\leq |I|\leq 3^{m+1}$ and $E\subset I$. Then $I \subset J_1\cup J_2$ for triadic intervals $J_1,J_2$ such that $|J_1|=|J_2|=3^{m+1}$. Since
$$
M\mathbf{1}_{I}(x)\approx M\mathbf{1}_{J_1}(x)\approx M\mathbf{1}_{J_2}(x)\quad \text{for all } x \in {\mathbb R},
$$
we get
\begin{eqnarray*}
\frac{w\left(E\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{I}(x)\right)^pw\left(x\right)dx}
&\approx&
\frac{w\left(E\cap J_1\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{J_1}(x)\right)^pw\left(x\right)dx}+
\frac{w\left(E\cap J_2\right)}{\int_{\mathbb R} \left(M\mathbf{1}_{J_2}(x)\right)^pw\left(x\right)dx}\\
&\lesssim&
\frac{|E\cap J_1|}{|J_1|}+\frac{|E\cap J_2|}{|J_2|}\approx \frac{|E|}{|I|}
\end{eqnarray*}
This shows that $w$ satisfies \eqref{Cp condition} and hence $w$ is a $C_p$
weight and the proof is complete. \qed
\subsection{Doubling $C_p$ weights are in ${\mathcal{A}}_\infty$ for small doubling constants}
Note that the construction in the proof of theorem \ref{Cp theorem} depended heavily on the large doubling constant of the weight $w$. Here we show that this is the only obstruction, by proving theorem \ref{Cp theorem small doubling}.
\\
\\\textit{Proof of theorem \ref{Cp theorem small doubling}:} It will be enough to show that
$$
\int_{{\mathbb R}^n}|M\mathbf{1}_I|^pw(x)dx\approx w(I)
$$
the result then follows immediately from \eqref{Cp condition}. Let $I_m=3^mI$ be the
cubes with the same center as $I$ and side length $\ell(I_m)=3^m\ell(I)$, with the convention $I_{-1}=\emptyset$. We write
\begin{eqnarray*}
\int_{{\mathbb R}^n}|M\mathbf{1}_I|^pwdx
\!\!\!&=&\!\!\!
\sum_{m=0}^\infty \int_{I_m\backslash I_{m-1}}\!\!\!\!\!\!\!|M\mathbf{1}_I|^pwdx\approx\sum_{m=0}^\infty \int_{I_m\backslash I_{m-1}}\frac{|I|^pw(x)dx}{(|I|^\frac{1}{n}+\dist(x,I))^{np}}\\
&\approx&\!\!\!
\sum_{m=0}^\infty |I|^p\frac{w(I_m)}{|I_m|^p}\lesssim \sum_{m=0}^\infty \frac{(C_w)^mw(I)}{(3^{np})^m}\lesssim w(I)
\end{eqnarray*}
since $C_w<3^{np}$ by hypothesis and the series converges.\qed
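Unpacking the last step (our computation, for the reader's convenience): the final series is geometric,

```latex
\sum_{m=0}^{\infty}\left(\frac{C_w}{3^{np}}\right)^{m}
=\frac{1}{1-C_w3^{-np}}<\infty
\quad\text{when } C_w<3^{np}.
```

For instance, in dimension $n=1$ with $p=2$ the smallness hypothesis reads $C_w<9$.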
\begin{rem}\label{remark1}
Not all doubling weights are $C_p$ weights. For an example just choose $\delta_1<3^{-p}$ in the construction of theorem \ref{Cp theorem}.
\end{rem}
\begin{rem}\label{remark2}
There exist non-doubling $C_p$ weights. For an example choose $\delta_{2,k}=\frac{1}{5k}$ in each $I_{n_k}$ in the construction of theorem \ref{Cp theorem}. A much simpler example is given by taking the Lebesgue measure in ${\mathbb R}^n$ and setting the measure of the unit ball equal to $0$, i.e. define $w(E)=m(E\backslash B(0,1))$ where $B(0,1)$ is the unit ball in ${\mathbb R}^n$.
\end{rem}
\subsection{The ${\mathcal{A}}_\infty^\alpha$ condition.}
To complete the picture for the one weight conditions, we introduce the fractional ${\mathcal{A}}_\infty^\alpha$ condition. We follow very closely \cite{Saw2}, where ${\mathcal{A}}_\infty^\alpha$ was introduced.
First we define the $\alpha$-relative capacity $\mathbf{Cap}_\alpha(E;I)$ of a compact subset $E$ of a cube $I$ by
$$
\mathbf{Cap}_\alpha(E;I)\!=\!\inf \Big\{\!\!\int h(x)dx:h \geq 0,\ \operatorname{Supp} h\subset 2I \text{ and }I_\alpha h\geq (\operatorname{diam} 2I)^{\alpha -n} \text{ on }E\Big\}.
$$
See \cite{AH} for more properties of capacities.
A locally finite positive Borel measure ${\omega}$ is said to be an ${\mathcal{A}}_\infty^\alpha$ measure if
\begin{equation}\label{A alpha infinity}
\dfrac{{\omega}(E)}{{\omega}(2I)}\leq \eta(\mathbf{Cap}_\alpha(E;I))
\end{equation}
when ${\omega}(2I)>0$, for all compact subsets $E$ of a cube $I$, for some function $\eta : [0,1] \to [0,1]$ with $\displaystyle\lim_{t \to 0}\eta(t)=0$.
Note that omitting the factor $2$ in ${\omega}(2I)$ above makes the condition more restrictive in general, but it remains equivalent for doubling measures. It is shown in \cite{Saw2} that ${\omega} \in {\mathcal{A}}_\infty^\alpha$ implies the Wheeden-Muckenhoupt inequality
\begin{equation}
\int\left|I_\alpha f\right|^pd{\omega}\leq \int \left|M_\alpha f\right|^pd{\omega}
\end{equation}
for all positive Borel measures $f$.
\begin{rem}\label{remark fractional A infinity}
${\mathcal{A}}_\infty^\alpha$ measures are not necessarily doubling.
Take for example the Lebesgue measure in ${\mathbb R}^n$ and set the
measure of the unit ball equal to 0, i.e. define
${\omega}(E)=m(E\backslash B(0,1))$ where $B(0,1)$ is the unit ball
in ${\mathbb R}^n$. This measure is clearly non-doubling and hence
not in $A_\infty$ but it is an ${\mathcal{A}}_\infty^\alpha$ measure.
\end{rem}
There also exist doubling ${\mathcal{A}}_\infty^\alpha$ measures that are not in $A_\infty$. The example we use is exactly the one from \cite{GaKS}, but here we have to calculate the relative capacities of the sets involved.
\begin{thm}\label{fractional A infinity and doubling}
(${\mathcal{A}}^\alpha_\infty \cap \mathcal{D}\nRightarrow A_\infty $) There exists a measure $\mu$, singular to the Lebesgue measure, that is doubling and satisfies the ${\mathcal{A}}^\alpha_\infty$ condition with $\eta(t)=t$, but $\mu$ is not an $A_\infty$ weight.
\end{thm}
\begin{proof}
Let $\mu([0,1])=1$, $0<\delta<3^{-1}$ to be determined later, and for any triadic $I \subset [0,1]$ let
$$
\mu\left(I\right)=\left\{
\begin{array}{ll}
\delta \mu\left(\pi I\right) & \mbox{if } \partial I \cap \partial \pi
I=\emptyset\\
\frac{1-\delta}{2}\mu\left(\pi I\right) & \mbox{if } \partial I \cap
\partial \pi I\neq\emptyset
\end{array}
\right.
$$
It was shown in \cite{GaKS} that $\mu$ is a doubling measure. It was also shown there that $\mu$ is singular to the Lebesgue measure, hence it does not satisfy the $A_\infty$ condition.
To show that it satisfies the ${\mathcal{A}}^\alpha_\infty$ condition, let $I\subset [0,1]$ be a triadic interval and $E\subset I$ be compact.
We claim that $||I_\alpha\mu_I||_{L^\infty(I)}\leq C_{\alpha,\delta}\mu(I)|I|^{\alpha-1}$, where $\mu_I$ is the restriction of $\mu$ to the set $I$ and the constant $C_{\alpha,\delta}$ is independent of $I$. For any $x \in I$ we have
$$
I_\alpha \mu_I(x)
=\int_I |x-y|^{\alpha-1}d\mu(y)\lesssim
\mu(I)|I|^{\alpha-1}\sum_{k=0}^\infty 3^{k(1-\alpha)}\left(\frac{1-\delta}{2}\right)^k=C_{\alpha,\delta}\mu(I)|I|^{\alpha-1}
$$
as long as $3^{1-\alpha}\frac{1-\delta}{2}<1$, i.e. $\alpha>1-\frac{\ln(\frac{2}{1-\delta})}{\ln 3}$.
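As a concrete (hypothetical) choice of parameters, taking $\delta=\frac{1}{4}$ the constraint becomes

```latex
\alpha>1-\frac{\ln\frac{2}{1-\delta}}{\ln 3}
=1-\frac{\ln\frac{8}{3}}{\ln 3}\approx 0.11,
```

so every $\alpha \in (0.11,1)$ works for this $\delta$.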
Now for any $f \geq 0$ with $\operatorname{Supp} f \subset 2I$ and $I_\alpha f\geq |2I|^{\alpha-1}$ on $E$, using Fubini's theorem we have
\begin{eqnarray*}
\mu(E)
\!\!\!\!&=&\!\!\!\!
\int_I\mathbf{1}_Ed\mu\leq \int_I|2I|^{1-\alpha}I_\alpha f(x)\mathbf{1}_E(x)d\mu(x)=\int |2I|^{1-\alpha}I_\alpha\mu_E(x)f(x)dx\\
&\leq&\!\!\!\!
||f||_1||I_\alpha\mu_E||_\infty|2I|^{1-\alpha}
\leq ||f||_1||I_\alpha\mu_I||_\infty|2I|^{1-\alpha}\lesssim ||f||_1\mu(I)
\end{eqnarray*}
So $\mathbf{Cap}_\alpha(E;I)\gtrsim \frac{\mu(E)}{\mu(I)}$, hence ${\mathcal{A}}^\alpha_\infty$ holds with $\eta(t)=t$ and the proof is complete.
\end{proof}
\section{Two weight conditions}
We start this section with the proofs of theorems \ref{non doubling Ap examples} and \ref{Ap doubling equivalence theorem}.
\subsection{Non doubling ${\mathcal{A}}_p$ examples}
\begin{rem}\label{remark Ap}
Note first that we have the following simple implications ${\mathcal{A}}_p^{t_2} \Rightarrow {\mathcal{A}}_p^{t_1} \Rightarrow {\mathcal{A}}_p$. Indeed it is easy to see:
\begin{eqnarray*}
P(I,{\sigma})&=&\int_I\displaystyle\frac{\displaystyle|I|}{\displaystyle\left(|I|+\dist(x,I)\right)^2}{\sigma}(dx)+\int_{{\mathbb R}\backslash I}\frac{|I|}{\displaystyle\left(|I|+\dist(x,I)\right)^2}{\sigma}(dx)\\
&=&
\frac{{\sigma} (I)}{|I|}+\int_{{\mathbb R}\backslash I}\frac{|I|}{\displaystyle\left(|I|+\dist(x,I)\right)^2}{\sigma}(dx)\geq \frac{{\sigma}(I)}{|I|}
\end{eqnarray*}
and so immediately from the definitions \eqref{classical Ap}, \eqref{one tailed} and \eqref{two tailed} we get
$$
{\mathcal{A}}_p({\omega},{\sigma})\leq {\mathcal{A}}_p^{t_1}({\omega},{\sigma}) \leq {\mathcal{A}}_p^{t_2}({\omega},{\sigma}) .
$$
\end{rem}
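As a quick sanity check of ours, for the Lebesgue measure $m$ on ${\mathbb R}$ the tail contributes only a constant factor: for every interval $I$,

```latex
P(I,m)=\int_{I}\frac{|I|}{|I|^{2}}\,dx
+2\int_{0}^{\infty}\frac{|I|}{\left(|I|+t\right)^{2}}\,dt
=1+2=3,
```

consistent with the lower bound $P(I,{\sigma})\geq \frac{{\sigma}(I)}{|I|}$ just shown.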
\begin{rem}We work with $p=2$ for simplicity. The examples we use work with trivial modifications for any $p>1$.
\end{rem}
$ $\\
\textit{Proof of theorem \ref{non doubling Ap examples}:}
\textit{(1)} We want to construct two measures ${\omega},{\sigma}$ such that the two weight classical ${\mathcal{A}}_2$ condition holds but both one tailed ${\mathcal{A}}_2$ conditions fail. First, we construct measures $u_k$ and $v^n_k$ that satisfy
$$
\frac{u_k(I)v^n_k(I)}{|I|^2}\leq M \quad\text{for all intervals } I,
$$
and
$$
\frac{u_k(I)}{|I|}P(I,v^n_k)\gtrsim n \quad\text{for } I=[k,k+1],
$$
where the constant $M$ does not depend on $k,n$. Then we will combine the measures $u_k$ and $v^n_k$ to create ${\omega},{\sigma}$ such that the two weight classical ${\mathcal{A}}_2$ condition holds and both one tailed ${\mathcal{A}}_2$ conditions fail for the pair ${\omega},{\sigma}$.
Let
$$
u_k(E)=m(E \cap [k,k+1])
, \quad
v^n_k(E)=\sum_{i=0}^n \displaystyle 2^{i}m\left(E \cap \left[k+2^i,k+2^{i+1}\right]\right)
$$
where $m$ is the Lebesgue measure on ${\mathbb R}$. Let $I=(a,b)$ with
$a < k+1$ and $k+2^{i-1} \leq b <k+2^i$ for some $i\geq 0$ (of course, if the interval does not
intersect $[k,k+1]$ then $u_k(I)=0$). Then
\begin{equation}\label{classical holds}
\frac{u_k(I)v^n_k(I)}{|I|^2} \leq \frac{4^{i+1}-1}{(4-1)(2^{i-1}-2)^2}\leq M
\end{equation}
where the middle quantity is bounded uniformly for $i>2$ (the cases $i=0,1,2$ can be seen directly). Now let $I=[k,k+1]$. We have then:
$$\frac{u_k(I)}{|I|}P(I,v^n_k)=\int_{\mathbb R} \frac{v^n_k (dx)}{\displaystyle\left(1+\dist(x,[k,k+1])\right)^2}=\sum_{i=0}^n \int_{I_i} \frac{v^n_k (dx)}{\displaystyle\left(1+\dist(x,[k,k+1])\right)^2}$$
where $I_i=[k+2^i,k+2^{i+1}]$. We get:
\begin{eqnarray}\label{tail lose}\hspace{0.5 cm}\displaystyle\sum_{i=0}^n \int_{I_i} \frac{v^n_k (dx)}{\displaystyle\left(1+\dist(x,[k,k+1])\right)^2} &\geq& \displaystyle\sum_{i=0}^n \frac{v^n_k(I_i)}{2^{2(i+1)}}\\
&=&\displaystyle\sum_{i=0}^n\frac{2^{2i}}{2^{2(i+1)}}=\frac{n+1}{4}\approx n,\notag
\end{eqnarray}
since $1+\dist(x,[k,k+1])\leq 2^{i+1}$ for $x \in I_i$.
Now we define ${\omega},{\sigma}$ as follows:
$$
{\omega}(E)=\sum_{k=1}^\infty u_{100^k}(E)+\sum_{k=1}^\infty v^k_{-100^k}(E)
$$
$$
{\sigma}(E)=\sum_{k=1}^\infty u_{-100^k}(E)+\sum_{k=1}^\infty v^k_{100^k}(E)
$$
It is easy to see that with $I=I_k=[100^k,100^k+1]$ both one tailed ${\mathcal{A}}_2$ conditions fail using \eqref{tail lose}.
To see that the classical ${\mathcal{A}}_2$ condition holds, let $I=(a,b)$ be any interval. It is simple to check that if $I$ is big enough such that $|a|\approx 100^k, |b|\approx 100^n$ for $k\neq n$ then
$$
\frac{{\omega}(I){\sigma}(I)}{|I|^2}\leq 1.
$$
While if $|a|\approx |b|\approx 100^k$ for some $k$ then using \eqref{classical holds} we get
$$
\frac{{\omega}(I){\sigma}(I)}{|I|^2} \leq 2M
$$
hence the classical two weight ${\mathcal{A}}_2$ condition holds but both one tailed ${\mathcal{A}}_2$ conditions fail.
\\
\textit{(2)} Now we turn to proving ${\mathcal{A}}^{t_1}_2 \nRightarrow {\mathcal{A}}^{t_2}_2$. Let the new measures be:
\begin{eqnarray*}
{\omega}(E)&=&\displaystyle\sum_{n=1}^\infty 2^nm\left(E \cap \left[2^n,2^{n+1}\right]\right)\\
{\sigma}(E)&=&m(E \cap [0,1])
\end{eqnarray*}
From the construction above we can see that with $I=[0,1]$ we get:
\begin{eqnarray*}&&P(I,{\omega})P(I,{\sigma})=\int_{\mathbb R} \frac{{\omega} (dx)}{\displaystyle\left(1+\dist(x,[0,1])\right)^2}\int_{\mathbb R} \frac{{\sigma} (dx)}{\left(1+\dist(x,[0,1])\right)^2}\\
&=&
\int_{\mathbb R} \frac{{\omega} (dx)}{\left(1+\dist(x,[0,1])\right)^2}\int_0^1 \frac{dx}{\left(1+\dist(x,[0,1])\right)^2}
=
\int_{\mathbb R} \frac{{\omega} (dx)}{\left(1+\dist(x,[0,1])\right)^2}
\end{eqnarray*}
Now from the definition of ${\omega}$ the last expression is equal to:
$$\displaystyle\sum_{n=1}^\infty \int_{2^n}^{2^{n+1}}\frac{2^n}{\left(1+\dist(x,[0,1])\right)^2}dx\geq \sum_{n=1}^\infty\frac{2^{2n}}{2^{2(n+1)}}=\infty$$
To prove that ${\mathcal{A}}_2^{t_1}$ holds, let $I$ be an interval such that $2^n \leq |I|<2^{n+1}$ and $2^k-1 \leq \dist(I,[0,1])<2^{k+1}-1$ with $k \geq 0$. We have two cases:
(i) $n\geq k$.
$$\displaystyle\frac{{\omega}(I)}{|I|}P(I,{\sigma})\leq \displaystyle\frac{\displaystyle\sum_{l=1}^{n+1}2^lm\left(I\cap \left[2^l,2^{l+1}\right]\right)}{|I|^2}\leq \displaystyle\frac{\displaystyle\sum_{l=1}^{n+1}2^{2l}}{2^{2n}}=\frac{2^{2(n+2)}-1}{(4-1)2^{2n}}<M<\infty$$
where the first inequality uses $P(I,{\sigma})\leq \frac{{\sigma}({\mathbb R})}{|I|}=\frac{1}{|I|}$ and the fact that $I$ cannot intersect $[2^{n+2}, \infty)$, since otherwise $n \geq k$ would not be satisfied.
(ii) $n<k$. If $k=0$ then $I \cap [2,\infty)=\emptyset$ and there is nothing to prove. So assume $k>0$.
$$\frac{{\omega}(I)}{|I|}P(I,{\sigma})\leq \displaystyle\frac{\displaystyle\sum_{l=k}^{k+1}2^lm\left(I\cap \left[2^l,2^{l+1}\right]\right)}{2^{2k}}\leq \frac{2^{2k}+2^{2(k+1)}}{2^{2k}}=5<\infty$$
where the first inequality now holds because $I$ can contain no point of $(0,2^k)$, for otherwise $\dist(I,(0,1))<2^k-1$, and no point of $[2^{k+1},\infty)$, for otherwise $n<k$ would not be satisfied. This completes the proof of \textit{(2)}.
\\
\textit{(3)} Last, for the equivalence of the two tailed ${\mathcal{A}}_2$ condition to both one tailed ${\mathcal{A}}_2$ conditions, let $I\subset {\mathbb R}^n$ be a cube. We have:
\begin{multicols}{2}
$$
P(I,{\sigma})\approx \frac{{\sigma}(I)}{|I|}+\sum_{k=1}^\infty \sum_{m=1}^{3^n-1} \frac{{\sigma}(I^k_m)}{3^{kn}|I^k_m|}
$$
$$
P(I,{\omega})\approx \frac{{\omega}(I)}{|I|}+\sum_{k=1}^\infty \sum_{m=1}^{3^n-1} \frac{{\omega}(I^k_m)}{3^{kn}|I^k_m|}
$$
\columnbreak
\begin{center}
\resizebox{3 cm}{3 cm}{%
\begin{tikzpicture}
\label{Figure 2.2.}
\node at (0,0) {I};
\node at (1/2,1/2) {$I^1_m$};
\node at (3/2,3/2) {$I^2_m$};
\color{blue}
\draw (-1/4,-1/4) -- (-1/4,1/4);
\draw (-1/4,1/4)-- (1/4,1/4);
\draw (1/4,1/4)-- (1/4,-1/4);
\draw (1/4,-1/4)-- (-1/4,-1/4);
\draw (-3/4,-3/4) -- (-3/4,3/4);
\draw (-3/4,3/4)-- (3/4,3/4);
\draw (3/4,3/4)-- (3/4,-3/4);
\draw (3/4,-3/4)-- (-3/4,-3/4);
\draw (-1/4,-1/4) -- (-3/4,-1/4);
\draw (-1/4,1/4)-- (-3/4,1/4);
\draw (1/4,1/4)-- (3/4,1/4);
\draw (1/4,-1/4)-- (3/4,-1/4);
\draw (-1/4,-1/4) -- (-1/4,-3/4);
\draw (-1/4,1/4)-- (-1/4,3/4);
\draw (1/4,1/4)-- (1/4,3/4);
\draw (1/4,-1/4)-- (1/4,-3/4);
\draw (-9/4,-9/4) -- (-9/4,9/4);
\draw (-9/4,9/4)-- (9/4,9/4);
\draw (9/4,9/4)-- (9/4,-9/4);
\draw (9/4,-9/4)-- (-9/4,-9/4);
\draw (-3/4,-3/4) -- (-9/4,-3/4);
\draw (-3/4,3/4)-- (-9/4,3/4);
\draw (3/4,3/4)-- (3/4,9/4);
\draw (3/4,-3/4)-- (9/4,-3/4);
\draw (-3/4,-3/4) -- (-3/4,-9/4);
\draw (-3/4,3/4)-- (-3/4,9/4);
\draw (3/4,3/4)-- (9/4,3/4);
\draw (3/4,-3/4)-- (3/4,-9/4);
\end{tikzpicture}
}
Figure 5.1
\end{center}
\end{multicols}
where $|I^k_m|^\frac{1}{n}=3^k|I|^\frac{1}{n}$ and $\dist(I^k_m,I)\approx 3^k|I|^\frac{1}{n}$, and all the implied constants depend only on the dimension; see Figure \ref{Figure 2.2.}. There exist $k_1,k_2\geq 0$ such that
$$
P(I,{\sigma})\approx 2\left(\frac{{\sigma}(I)}{|I|}+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1} \frac{{\sigma}(I^k_m)}{3^{kn}|I^k_m|}\right)\approx 2 \sum_{k=k_1}^\infty \sum_{m=1}^{3^n-1} \frac{{\sigma}(I^k_m)}{3^{kn}|I^k_m|}
$$
$$
P(I,{\omega})\approx 2\left(\frac{{\omega}(I)}{|I|}+\sum_{k=1}^{k_2}\sum_{m=1}^{3^n-1} \frac{{\omega}(I^k_m)}{3^{kn}|I^k_m|}\right)\approx 2 \sum_{k=k_2}^\infty \sum_{m=1}^{3^n-1} \frac{{\omega}(I^k_m)}{3^{kn}|I^k_m|}
$$
We can assume without loss of generality that $k_1\leq k_2$. Let $J=\displaystyle I\cup\left(\bigcup_{k=1}^{k_1}\bigcup_{m=1}^{3^n-1}I^k_m\right)$, hence $|J|^\frac{1}{n}\approx 3^{k_1}|I|^\frac{1}{n}$ where again the implied constant depends only on dimension. We calculate
\begin{eqnarray*}
\frac{{\sigma}(J)}{|J|}P(J,{\omega})
&\approx&
\frac{1}{|J|}\left({\sigma}(I)+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1}{\sigma}(I^k_m)\right)\left(\frac{{\omega}(J)}{|J|}+\sum_{k=1}^\infty\sum_{m=1}^{3^n-1}\frac{{\omega}(J^k_m)}{3^{kn}|J^k_m|}\right)\\
&\approx&
\frac{1}{3^{k_1n}|I|}\left({\sigma}(I)+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1}{\sigma}(I^k_m)\right)\left(\frac{{\omega}(J)}{|J|}+\sum_{k=1}^\infty\sum_{m=1}^{3^n-1}\frac{{\omega}(I^{k+k_1}_m)}{3^{kn}|I^{k+k_1}_m|}\right)\\
&\approx&
\frac{1}{3^{k_1n}|I|}\left({\sigma}(I)+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1}{\sigma}(I^k_m)\right)\left(\frac{{\omega}(J)}{|J|}+\sum_{k=k_1}^\infty\sum_{m=1}^{3^n-1}\frac{3^{k_1n}{\omega}(I^{k}_m)}{3^{kn}|I^{k}_m|}\right)\\
&\gtrsim&
\frac{1}{|I|}\left({\sigma}(I)+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1}{\sigma}(I^k_m)\right)\left(\frac{{\omega}(J)}{|J|}+\sum_{k=k_2}^\infty\sum_{m=1}^{3^n-1}\frac{{\omega}(I^{k}_m)}{3^{kn}|I^{k}_m|}\right)\\
&\gtrsim&
\left(\frac{{\sigma}(I)}{|I|}+\sum_{k=1}^{k_1}\sum_{m=1}^{3^n-1}\frac{{\sigma}(I^k_m)}{3^{kn}|I^k_m|}\right)\left(\frac{{\omega}(J)}{|J|}+\sum_{k=k_2}^\infty\sum_{m=1}^{3^n-1}\frac{{\omega}(I^{k}_m)}{3^{kn}|I^{k}_m|}\right)\\
&\approx&
P(I,{\sigma})P(I,{\omega})
\end{eqnarray*}
hence showing that the one tailed ${\mathcal{A}}_p$ conditions bound the two tailed ${\mathcal{A}}_p$ condition and the proof is complete.
\qed
\begin{rem}
From the above construction we see that the same measures work to prove the same implications for ${\mathcal{A}}_p^{\text{offset}}$ \eqref{offset A2} and its two tailed analogue, since it is exactly the nature of the tail that we take advantage of in the construction.
\end{rem}
\subsection{Two weight ${\mathcal{A}}_p$ equivalence for doubling measures}
\begin{rem}We are going to use $p=2$ in the proof for simplicity. The general case follows immediately since $\frac{1}{p},\frac{1}{p'}<1$ and hence $$P(I,{\omega})^\frac{1}{p}\approx \left(\sum_{k=1}^\infty\sum_{j=1}^{3^n-1}\frac{{\omega}(I^k_j)}{3^{2kn}|I|}\right)^\frac{1}{p}\leq \sum_{k=1}^\infty\sum_{j=1}^{3^n-1}\left(\frac{{\omega}(I^k_j)}{3^{2kn}|I|}\right)^\frac{1}{p}$$
and from here the proof follows the same way as for $p=2$.
\end{rem}
\textit{Proof of theorem \ref{Ap doubling equivalence theorem}:} Let ${\omega},{\sigma}$ be reverse doubling measures with reverse doubling constants $1+\delta_{\omega}$ and $1+\delta_{\sigma}$ respectively. It is enough to prove that the two tailed constant ${\mathcal{A}}^{t_2}_p({\omega},{\sigma})$ is bounded by the classical constant ${\mathcal{A}}_p({\omega},{\sigma})$. Let $I$ be a cube. We then have,
\begin{eqnarray*}
P(I,{\omega})P(I,{\sigma})\!\!\!
&\lesssim&\!\!\!\!
\frac{{\omega}(I){\sigma}(I)}{|I|^2}
+
\frac{{\omega}(I)}{|I|}\sum_{m=1}^\infty\sum_{i=1}^{3^n-1}\frac{{\sigma}(I^m_i)}{3^{2mn}|I|}+\frac{{\sigma}(I)}{|I|}\sum_{k=1}^\infty\sum_{j=1}^{3^n-1}\frac{{\omega}(I^k_j)}{3^{2kn}|I|}\\
\!\!\!&+&\!\!\!\!
\sum_{m=1}^\infty\sum_{i=1}^{3^n-1}\frac{{\sigma}(I^m_i)}{3^{2mn}|I|}\sum_{k=1}^\infty\sum_{j=1}^{3^n-1}\frac{{\omega}(I^k_j)}{3^{2kn}|I|}\equiv A+B+C+D
\end{eqnarray*}
where $|I^m_j|=3^{mn}|I|$, $\dist(I^m_j,I)\approx 3^m|I|^\frac{1}{n}$, $\displaystyle\bigcup_{m \in {\mathbb N}}\bigcup_{j=1}^{3^n-1}I^m_j={\mathbb R}^n\backslash I$ and the implied constant depends only on dimension. $A$ is bounded immediately by ${\mathcal{A}}_2({\omega},{\sigma})$. For $B$ we have:
$$
B=\sum_{m=1}^\infty\sum_{i=1}^{3^n-1}\frac{{\omega}(I){\sigma}(I^m_i)}{3^{2mn}|I|^2}\lesssim \sum_{m=1}^\infty(1+\delta_{\omega})^{-m}\sum_{i=1}^{3^n-1}\frac{{\omega}(I^m){\sigma}(I^m)}{|I^m|^2}\lesssim {\mathcal{A}}_2({\omega},{\sigma})<\infty
$$
where $\displaystyle I^m=I\cup
\left(\bigcup_{\ell=1}^{m}\bigcup_{j=1}^{3^n-1}I^\ell_j\right)$ and the
implied constant again depends only on dimension and the reverse doubling
constant of ${\omega}$. The bound for $C$ is similar to $B$.
For $D$ we have:
\begin{eqnarray*}
D
&=&
\sum_{m=1}^\infty\sum_{k=1}^m\sum_{i=1}^{3^n-1}\sum_{j=1}^{3^n-1}\frac{{\sigma}(I^m_i){\omega}(I^k_j)}{3^{2mn}|I|3^{2kn}|I|}+\sum_{k=1}^\infty\sum_{m=1}^{k-1}\sum_{j=1}^{3^n-1}\sum_{i=1}^{3^n-1}\frac{{\sigma}(I^m_i){\omega}(I^k_j)}{3^{2mn}|I|3^{2kn}|I|}\\
&\equiv& \mathbf{I}+\mathbf{II}
\end{eqnarray*}
We will get the bound for $\mathbf{I}$, the calculations for $\mathbf{II}$ are identical.
\begin{eqnarray*}
\mathbf{I}&\lesssim&
\sum_{m=1}^\infty\sum_{k=1}^m\sum_{i=1}^{3^n-1}\sum_{j=1}^{3^n-1}(1+\delta_{\omega})^{k-m}\frac{{\sigma}(I^m){\omega}(I^m)}{3^{2kn}|I^m|^2}\lesssim\\
&\lesssim&
{\mathcal{A}}_2({\omega},{\sigma}) \sum_{m=1}^\infty (1+\delta_{\omega})^{-m}\sum_{k=1}^m\frac{(1+\delta_{\omega})^k}{3^{2kn}}
\leq C_{n,{\sigma}}{\mathcal{A}}_2({\omega},{\sigma}) <\infty
\end{eqnarray*}
Combining the above bounds and taking the supremum over all cubes $I$ we get
$$
{\mathcal{A}}_2^{t_2}({\omega},{\sigma})\leq C_{n,{\omega},{\sigma}}{\mathcal{A}}_2({\omega},{\sigma})
$$
which completes the proof of the theorem.\qed
\begin{rem}
The same proof works for the fractional ${\mathcal{A}}_p({\omega},{\sigma})$ conditions as defined in \cite{SSU4}.
\end{rem}
\subsection{The $T_1$ theorem for $A_\infty$ weights.}
The goal of this subsection is to prove theorem \ref{T1 theorem}. For that we are going to use the Sawyer testing condition.
\subsubsection{The Sawyer testing condition.}
Sawyer in \cite{Saw4} proved that the maximal operator is bounded from $L^p(u)$ to $L^q(w)$ if and only if the Sawyer testing condition is satisfied, i.e. if and only if
\begin{equation}\label{Sawyer testing}
S^{p,q}(w,u^{1-p'})=\sup_I\left(\int_Iu(x)^{1-p'}dx\right)^\frac{-1}{p}\left( \int_I \left[M(\mathbf{1}_Iu^{1-p'})(x)\right]^qw(x)dx\right)^\frac{1}{q}<\infty
\end{equation}
where the supremum is taken over all cubes $I \subset {\mathbb R}^n$. Replacing the weights $w,u$ with the measures ${\omega},{\sigma}$, we write $S^{p,q}_d({\omega},{\sigma})$ for the dyadic Sawyer testing condition, in which the maximal operator in \eqref{Sawyer testing} is replaced by the dyadic maximal operator $M_d$, whose supremum is taken only over dyadic cubes.
\begin{thm}\label{A infinity and Sawyer testing}\label{Sawyer testing theorem}
(${\sigma} \in A_\infty$, ${\mathcal{A}}_p({\omega},{\sigma})\Rightarrow S^{p,p}_d({\omega},{\sigma})$) Let ${\omega},{\sigma}$ be Radon measures in ${\mathbb R}^n$ such that ${\sigma} \in A_\infty$. If ${\omega},{\sigma}$ satisfy the ${\mathcal{A}}_p({\omega},{\sigma})$ condition then the dyadic Sawyer testing condition $S^{p,p}_d({\omega},{\sigma})$ holds.
\end{thm}
\begin{proof}
Let $I$ be a cube in ${\mathbb R}^n$. Let $\Omega_m=\{x\in I:\left(M_d\mathbf{1}_{I}{{\sigma}}\right)(x)>K^m\}=\dot{\bigcup} I^m_j$, where $K$ is a constant to be determined later and $I^m_j$ are the maximal, disjoint dyadic cubes such that $\frac{{\sigma}(I^m_j)}{|I^m_j|}>K^m$. We have
\begin{eqnarray*}
\int_{I} \left(M_d\mathbf{1}_{I}{{\sigma}}\right)^{p}(x)d{{\omega}}(x)
&\lesssim&\!\!\!\!
\sum_{m,j}\left(\frac{{\sigma}(I^m_j)}{|I^m_j|}\right)^p{\omega}(I^m_j)\\
&=&\!\!\!\!
\sum_{m,j}\left(\frac{{\sigma}(I^m_j)^\frac{1}{p'}{\omega}(I^m_j)^\frac{1}{p}}{|I^m_j|}\right)^p\!\!\!{\sigma}(I^m_j)
\leq
{\mathcal{A}}_p({\omega},{\sigma})\sum_{m,j}{\sigma}(I^m_j)
\end{eqnarray*}
Call $A^m_t=\bigcup_{I^{m+1}_j\subset I^m_t}I^{m+1}_j$. Since ${\sigma} \in A_\infty$ we get
$${\sigma}\left(A^m_t\right)\leq C\left(\frac{|A^m_t|}{|I^m_t|}\right)^\varepsilon{\sigma}(I^m_t)$$
for some positive constant $C$ and $\varepsilon$ as in \eqref{A infinity}. From the maximality of $I^m_j$ we obtain
$$
\left|A^m_t\right|=\!\!\!\sum_{I^{m+1}_j\subset I^m_t}\left|I^{m+1}_j\right|\leq \frac{1}{K^{m+1}}{\sigma}\left(A^m_t\right)\leq\frac{2^n}{K}|I^m_t|
$$
Since $\frac{|A^m_t|}{|I^m_t|}\leq\frac{2^n}{K}$, we may choose $K$ large enough that $C\left(\frac{2^n}{K}\right)^\varepsilon\leq\frac{1}{2}$, so that ${\sigma}(A^m_t)\leq\frac{1}{2}{\sigma}(I^m_t)$.
Fix $m\in {\mathbb N}$, $k\geq -m$, then
$$
\sum_j{\sigma}(I^k_j)\leq \left(\frac{1}{2}\right)^{\!\!\!m+k}\sum_j{\sigma}(I^{-m}_j)\leq \left(\frac{1}{2}\right)^{\!\!\!m+k}\!\!\!\!\!\!{\sigma}(I)
$$
$$
\sum_{k=-m}^\infty\sum_j{\sigma}(I^k_j)\leq \sum_{k=-m}^\infty 2^{-m-k}{\sigma}(I)\leq 2{\sigma}(I)
$$
and by taking $m \to \infty$ we get
$$
\sum_{k,j}{\sigma}(I^k_j)=\lim_{m \rightarrow \infty}\sum_{k=-m}^\infty\sum_j{\sigma}(I^k_j)\leq 2 {\sigma}(I)
$$
and this completes the proof of the theorem.
\end{proof}
With theorem \ref{Sawyer testing theorem} at hand we get the following corollary.
\begin{cor}\label{A infinity and pivotal corollary}\label{pivotal by Ap}
(${\sigma} \in A_\infty$, ${\mathcal{A}}_p({\omega},{\sigma})\Rightarrow {\mathcal{V}}({\omega},{\sigma})^p$) Let ${\omega},{\sigma}$ be Radon measures in ${\mathbb R}^n$ such that ${\sigma} \in A_\infty$. Then the ${\mathcal{A}}_p({\omega},{\sigma})$ condition implies the pivotal condition ${\mathcal{V}}({\omega},{\sigma})^p$.
\end{cor}
\begin{proof}
Let $I$ be a cube in ${\mathbb R}^n$.
\begin{eqnarray}\label{poisson by maximal bound}
\mathrm{P}(I,{{\sigma}})
&=&
\!\!\!\!\int\frac{|I|}{\left(|I|^{\frac{1}{n}}+|x-x_I|\right)^{2n}}d{{\sigma}}(x)
\!\lesssim
\sum_{m=0}^\infty \frac{{{\sigma}}\big((2^m+1)I\big)}{2^m|2^mI|}\\
\!\!\!\!&\lesssim&\!\!\!\!
\sum_{m=0}^\infty \inf_{x \in I}M_d{{\sigma}}(x)2^{-m}
\lesssim
\inf_{x \in I}M_d{{\sigma}}(x)\notag
\end{eqnarray}
where $M_d$ denotes the dyadic maximal function.
Let $I_0$ be a cube in ${\mathbb R}^n$ and let $I_0=\bigcup_{r\geq 1}I_r$ be a decomposition of $I_0$ into disjoint cubes. Using \eqref{poisson by maximal bound} we get
$$
\sum_{r \geq 1}{{\omega}}(I_r){\mathrm{P}}^p(I_r,\mathbf{1}_{I_0}{{\sigma}})
\leq
\sum_{r\geq 1}{{\omega}}(I_r) \inf_{x \in I_r}\left(M_d\mathbf{1}_{I_0}{{\sigma}}\right)^p(x)
\leq
\int_{I_0} \left(M_d\mathbf{1}_{I_0}{{\sigma}}\right)^p(x)d{{\omega}}(x)
$$
and by theorem \ref{Sawyer testing theorem} the last expression is bounded by a constant multiple of ${\sigma}(I_0)$. So we have
$$
\sum_{r \geq 1}{{\omega}}(I_r){\mathrm{P}}^p(I_r,\mathbf{1}_{I_0}{{\sigma}})
\leq K{\sigma}(I_0)
$$
and that completes the proof of the corollary.
\end{proof}
\begin{question}\label{doubling measures question}
$A_\infty$ is a special class of doubling measures. Is it true that for a doubling measure ${\omega}$, ${\mathcal{A}}_p({\omega},{\sigma})\Rightarrow {\mathcal{V}}({\omega},{\sigma})^p$?
\end{question}
\begin{question}
In corollary \ref{A infinity and pivotal corollary} we prove that for ${\omega}\in A_\infty$ dyadic Sawyer testing implies pivotal. Is it true that $S_d^{p,p}({\omega},{\sigma})={\mathcal{V}}({\omega},{\sigma})^p$?
\end{question}
\begin{rem}
The proof of corollary \ref{pivotal by Ap}, holds also for the fractional ${\mathcal{A}}_p({\omega},{\sigma})$ and pivotal conditions as stated in \cite{SSU4} (stated for $p=2$ but extends immediately to any $p>1$).
\end{rem}
\textit{Proof of theorem \ref{T1 theorem}}:
If both the measures ${\omega}, {\sigma}$ are in the one weight $A_\infty$, then by corollary \ref{A infinity and pivotal corollary}, the two weight ${\mathcal{A}}_2({\omega},{\sigma})$ condition implies both pivotal conditions ${\mathcal{V}}({\omega},{\sigma})^2$ (\eqref{pivotal} and its dual) and we can apply the main theorem from \cite{SSU4} (or the one in \cite{LW}) to get the result. \qed
\subsection{The ``buffer" conditions do not imply the tailed ${\mathcal{A}}_p$ conditions.}
The goal of this subsection is to give a proof of theorem \ref{non doubling pivotal example}. First we make the following simple remark.
\begin{rem}\label{pivotal implies classical Ap}
It is immediate that the pivotal condition implies the classical ${\mathcal{A}}_p$ condition: simply take the decomposition in \eqref{pivotal} to consist of a single cube.
\end{rem}
\begin{rem}
We are going to use $p=2$ in the proof for simplicity. The proof works for $1<p\leq 2$, without any modifications.
\end{rem}
\textit{Proof of theorem \ref{non doubling pivotal example}:} We construct measures ${\omega}$ and ${\sigma}$ so that the pivotal condition ${\mathcal{V}}^2$ \eqref{pivotal} holds, but ${\mathcal{A}}_2^{t_1}$ \eqref{one tailed} does not. Let
$$
\displaystyle{\omega}(E)=\delta_0(E),\quad {\sigma}(E)=\sum_{n=2}^\infty n\delta_n(E)
$$
where $\delta_n$ denotes the point mass at $x=n$. First we check ${\mathcal{A}}_2^{t_1}$ does not hold. Let $I=[0,1]$. Then
$$\frac{{\omega}(I)}{|I|}P(I,{\sigma})=\int_{\mathbb R} \frac{1}{(1+\dist(x,[0,1]))^2}{\sigma} (dx)=\sum_{n=2}^\infty \frac{n}{n^2}=\sum_{n=2}^\infty\frac{1}{n}=\infty$$
To show the pivotal condition holds, let $I_0=(a,b)$ where $a<0$ and $n \leq b <n+1$ for some $n\geq 2$ (we need $I_0$ to contain some of the point masses of ${\sigma}$ and $0 \in I_0$, for otherwise there is nothing to prove). Decomposing $I_0=\dot{\cup}I_r$, only the cube $I_r$ containing $0$ contributes to the pivotal condition, since ${\omega}=\delta_0$. Call that cube $I_1$. We consider the cases:
\begin{enumerate} [(i)]
\item $|I_1| \leq 1$. We calculate:
\begin{eqnarray*}
\frac{{\omega}(I_1) P(I_1,\mathbf{1}_{I_0}{\sigma})^2}{{\sigma}(I_0)}=\frac{\displaystyle\left(\int_{I_0} \frac{|I_1|{\sigma} (dx)}{(|I_1|+\dist(x,I_1))^2}\right)^2}{{\sigma}(I_0)}
\!\!\!&\leq& \!\!\!
\displaystyle|I_1|^2\left(\sum_{k=2}^{n}\frac{k}{k^2}\right)^2\Bigg/\displaystyle\sum_{k=2}^{n}k\\
\!\!\!&\leq&\!\!\!
M<\infty
\end{eqnarray*}
where the constant $M$ does not depend on $n$.
\item $|I_1| \geq n$. We get:
\begin{eqnarray*}
\frac{{\omega}(I_1) P(I_1,\mathbf{1}_{I_0}{\sigma})^2}{{\sigma}(I_0)}=\frac{\displaystyle\left(\int_{I_0} \frac{|I_1|{\sigma} (dx)}{(|I_1|+\dist(x,I_1))^2}\right)^2}{{\sigma}(I_0)}
\!\!\!&\leq&\!\!\!
\displaystyle|I_1|^2\left(\sum_{k=2}^{n}\frac{k}{|I_1|^2}\right)^2
\!\!\! \Bigg/\displaystyle\sum_{k=2}^{n}k\\
\!\!\!&\leq& \!\!\!
\frac{n^2}{|I_1|^2}\leq 1
\end{eqnarray*}
\item $1\leq |I_1|\leq n$. We have:
\begin{eqnarray*}
\frac{{\omega}(I_1) P(I_1,\mathbf{1}_{I_0}{\sigma})^2}{{\sigma}(I_0)}\lesssim \frac{|I_1|^2}{{n}^2}\Bigg(\sum_{k=2}^{|I_1|}\frac{k}{|I_1|^2}+\sum_{k=|I_1|}^{n}\frac{1}{k}\Bigg)^2
&\lesssim&
\frac{|I_1|^2}{{n}^2}\bigg(1+\log\Big(\frac{n}{|I_1|}\Big)\bigg)^2\\
&\lesssim&
\frac{|I_1|^2}{{n}^2}+\frac{|I_1|^2}{{n}^2}\log^2\Big(\frac{n}{|I_1|}\Big)
\end{eqnarray*}
\end{enumerate}
Now, setting $x=\frac{{n}}{|I_1|}$ in the last expression, the second term becomes $f(x)=\frac{\log^2x}{x^2}$ with $x\geq 1$, which is bounded independently of $n$ (indeed $f$ attains its maximum $e^{-2}$ at $x=e$).
Combining all three cases we see that the pivotal condition is bounded.\qed
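The two analytic facts used in this proof, the divergence of the harmonic tail in the ${\mathcal{A}}_2^{t_1}$ computation and the boundedness of $\log^2x/x^2$, can be sanity-checked numerically. The sketch below is our own illustration (the cutoffs are arbitrary) and is not part of the argument:

```python
from math import log, e

# Divergent tail of the one-tailed A_2 term: the sum of n/n^2 = 1/n is harmonic,
# so truncations grow like the log of the cutoff.
partial = sum(1.0 / n for n in range(2, 10**5))
assert partial > 10

# Case (iii): f(x) = (log x / x)^2 is bounded for x >= 1, with maximum e^{-2} at x = e.
f = lambda x: (log(x) / x) ** 2
assert all(f(1 + 0.01 * i) <= f(e) + 1e-12 for i in range(1, 10**4))
assert abs(f(e) - e**-2) < 1e-12
```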
\begin{question}
In the above example, one can check that the dual pivotal condition does not hold. Is it true that ${\mathcal{V}}({\sigma},{\omega})^p\cap {\mathcal{V}}({\omega},{\sigma})^p \Rightarrow {\mathcal{A}}_p^{t_1}({\omega},{\sigma})$?
\end{question}
\subsubsection{Doubling measures and the Pivotal condition.}The result in this subsection is essentially in \cite{Saw2}, equation (4.4), but we include it for completeness. It gives a partial positive answer to \textbf{question \ref{doubling measures question}}.
If the measures ${\omega},{\sigma}$ are doubling but not in $A_\infty$, we do not know in general whether the pivotal condition can be controlled by the ${\mathcal{A}}_p({\omega},{\sigma})$ condition. For measures with small doubling constant, however, the ${\mathcal{A}}_p({\omega},{\sigma})$ condition does imply ${\mathcal{V}}({\omega},{\sigma})^p$.
\begin{thm}\label{Ap and small doubling corollary}
(Small doubling+${\mathcal{A}}_p({\omega},{\sigma})\Rightarrow {\mathcal{V}}({\omega},{\sigma})^p$) Let ${\omega}, {\sigma}$ be doubling measures in ${\mathbb R}^n$ with doubling constants $K_{\omega},K_{\sigma}$ and reverse doubling constants $1+\delta_{\omega},1+\delta_{\sigma}$ respectively. If $K_{\sigma}<2^p(1+\delta_{\omega})$ then the ${\mathcal{A}}_p({\omega},{\sigma})$ condition implies the pivotal condition ${\mathcal{V}}({\omega},{\sigma})^p$.
\end{thm}
\begin{proof}
Let $I_0$ be a cube in ${\mathbb R}^n$ and $I_0=\cup_{r \geq 1}I_r$ be a decomposition of $I_0$ in disjoint cubes.
\begin{eqnarray*}
&&\sum_{r\geq 1}{\omega}(I_r)P(I_r,I_0{\sigma})^p
\approx
\sum_{r \geq 1}{\omega}(I_r)\left(\sum_{m=1}^{m_r}\frac{{\sigma}(I^m_r)}{2^m|I^m_r|}\right)^p\\
&\leq&
\sum_{r\geq 1}\left(\sum_{m=1}^{m_r}(1+\delta_{\omega})^{-\frac{m}{p}}\frac{{\omega}^\frac{1}{p}(I^m_r){\sigma}^\frac{1}{p'}(I^m_r)}{2^m|I^m_r|}{\sigma}^\frac{1}{p}(I^m_r)\right)^p\\
&\leq&
{\mathcal{A}}_p({\omega},{\sigma})\sum_{r \geq 1}{\sigma}(I_r)\left(\sum_{m=1}^{m_r}\left(\frac{K_{\sigma}}{2^p(1+\delta_{\omega})}\right)^\frac{m}{p}\right)^p\lesssim {\mathcal{A}}_p({\omega},{\sigma}){\sigma}(I_0)
\end{eqnarray*}
where $m_r=\log_2 \left(\frac{|I_0|}{|I_r|}\right)^\frac{1}{n}$, $I^m_r$ is the cube with the same center as $I_r$ and $|I^m_r|^\frac{1}{n}=2^m|I_r|^\frac{1}{n}$, and the implied constant depends only on the doubling constant of ${\sigma}$ and the reverse doubling constant of ${\omega}$. This completes the proof of the theorem.
\end{proof}
\begin{rem}
For a doubling measure ${\omega}$ and a cube $I\subset{\mathbb R}^n$ we have that in \eqref{energy gain} $E(I,{\omega})^2\geq c_{\omega}>0$ since ${\omega}(I_1)\approx {\omega}(I_{2})$ where $|I_1|=|I_{2}|=2^{-n}|I|$ and $I_1$ is in the top left corner of $I$, $I_{2}$ in the bottom right corner of $I$. Hence for ${\omega}$ doubling the Pivotal condition ${\mathcal{V}}({\omega},{\sigma})^p$ is equivalent to the Energy condition ${\mathcal{E}}({\omega},{\sigma})^p$.
\end{rem}
% arXiv:2203.00816
\section{Introduction}
\label{Section:Intro}
We say that a graph $\Gamma$ {\em decomposes} into subgraphs $\Gamma_1, \Gamma_2, \ldots, \Gamma_t$, if the edge sets of the $\Gamma_i$ partition the edges of $\Gamma$. If
${\cal F}=\{\Gamma_i\mid 1\leq i\leq t\}$ where $\Gamma_i \cong H$ for each $1\leq i\leq t$, then we say that ${\cal F}$ is an $H$-decomposition of $\Gamma$.
An {\em $\ell$-cycle system} of a graph $\Gamma$ is a decomposition of $\Gamma$
into $\ell$-cycles. In the case where $\Gamma$ is the complete graph $K_n$ we say that there is an $\ell$-cycle system of {\em order} $n$.
Necessary and sufficient conditions for the existence of an $\ell$-cycle system of order $n$ were given in~\cite{AlspachGavlas2001,Sajna2002}; see also~\cite{BurattiRotational}. Namely, at least one $\ell$-cycle system of order $n > 1$ exists if and only if $3 \leq \ell \leq n$, $n(n-1) \equiv 0 \pmod{2\ell}$ and $n$ is odd.
Two $\ell$-cycle systems ${\cal F}$ and ${\cal F}'$ of the same graph $\Gamma$ are said to be {\em orthogonal} if, for all cycles $C\in {\cal F}$ and $C'\in {\cal F}'$, $C$ and $C'$ share at most one edge.
A set of pairwise orthogonal $\ell$-cycle systems
of $\Gamma$ is said to be a set of {\em mutually orthogonal} cycle systems of $\Gamma$.
In this paper we are interested in the maximum $\mu$ such that there exists a set of $\mu$ mutually orthogonal $\ell$-cycle systems of order $n$;
we denote this value by $\mu(\ell,n)$.
In the array below we exhibit a set of four mutually orthogonal cycle systems of order $9$. We have determined computationally that $\mu(4,9)=4$; i.e., this set is maximum.
\iffalse
System #362881 : 1 2 3 4 1 3 6 5 1 6 2 7 1 8 2 9 2 4 7 5 3 5 8 7 3 8 6 9 4 5 9 8 4 6 7 9
System #422057 : 2 5 8 3 2 8 7 1 2 7 5 9 2 4 5 6 5 3 9 1 8 1 4 9 8 4 7 6 3 1 6 4 3 7 9 6
System #526101 : 5 1 4 8 5 4 9 2 5 9 1 6 5 3 1 7 1 8 6 2 4 2 3 6 4 3 9 7 8 2 7 3 8 9 6 7
System #2578663 : 1 9 6 4 1 6 3 8 1 3 9 2 1 5 4 7 9 4 3 5 9 8 5 7 6 8 2 5 6 2 3 7 4 8 7 2
\fi
{\small
$$\begin{array}{l}
\{(1, 2, 3, 4), (1, 3, 6, 5), (1, 6, 2, 7), (1, 8, 2, 9), (2, 4, 7, 5), (3, 5, 8, 7), (3, 8, 6, 9), (4, 5, 9, 8), (4, 6, 7, 9)\},
\\[0.2ex]
\{(1, 2, 6, 8), (1, 3, 5, 7), (1, 4, 8, 5), (1, 6, 5, 9), (2, 3, 6, 4), (2, 5, 4, 9), (2, 7, 3, 8), (3, 4, 7, 9), (6, 7, 8, 9)\},
\\[0.2ex]
\{(1, 2, 8, 7), (1, 3, 4, 6), (1, 4, 9, 8), (1, 5, 3, 9), (2, 3, 8, 5), (2, 4, 5, 6), (2, 7, 5, 9), (3, 6, 9, 7), (4, 7, 6, 8)\},
\\[0.2ex]
\{(1, 2, 9, 3), (1, 4, 6, 9), (1, 5, 4, 7), (1, 6, 3, 8), (2, 3, 7, 6), (2, 4, 8, 7), (2, 5, 6, 8), (3, 4, 9, 5), (5, 7, 9, 8)\}.
\end{array}
$$
}
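These four systems can be checked mechanically. The sketch below is our own verification code (not part of the source); it confirms that each system decomposes $K_9$ and that any two cycles from different systems share at most one edge, assuming the arrays are transcribed as printed above:

```python
from itertools import combinations

def cycle_edges(cycle):
    """Return the edge set of a cycle given as a tuple of vertices."""
    k = len(cycle)
    return {frozenset((cycle[i], cycle[(i + 1) % k])) for i in range(k)}

systems = [
    [(1,2,3,4),(1,3,6,5),(1,6,2,7),(1,8,2,9),(2,4,7,5),(3,5,8,7),(3,8,6,9),(4,5,9,8),(4,6,7,9)],
    [(1,2,6,8),(1,3,5,7),(1,4,8,5),(1,6,5,9),(2,3,6,4),(2,5,4,9),(2,7,3,8),(3,4,7,9),(6,7,8,9)],
    [(1,2,8,7),(1,3,4,6),(1,4,9,8),(1,5,3,9),(2,3,8,5),(2,4,5,6),(2,7,5,9),(3,6,9,7),(4,7,6,8)],
    [(1,2,9,3),(1,4,6,9),(1,5,4,7),(1,6,3,8),(2,3,7,6),(2,4,8,7),(2,5,6,8),(3,4,9,5),(5,7,9,8)],
]

# Each system must partition the 36 edges of K_9.
K9 = {frozenset(e) for e in combinations(range(1, 10), 2)}
for S in systems:
    edges = [e for C in S for e in cycle_edges(C)]
    assert len(edges) == len(set(edges)) == 36 and set(edges) == K9

# Any two cycles from different systems share at most one edge.
for S, T in combinations(systems, 2):
    for C in S:
        for D in T:
            assert len(cycle_edges(C) & cycle_edges(D)) <= 1
```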
Orthogonal cycle systems arise from face $2$-colourable embeddings of graphs on surfaces, which satisfy two conditions natural to polyhedra and similar phenomena: each pair of faces share at most one edge and each edge belongs to exactly two faces.
Let $\mu K_n$ be the multigraph in which each edge of $K_n$ is replaced by $\mu$ parallel edges.
A decomposition ${\mathcal F}$ of $\mu K_n$ into copies of a subgraph $H$ is said to be {\em super-simple} if no two copies of $H$ share more than one edge, and {\em completely-reducible} if
${\mathcal F}$ partitions into $\mu$ decompositions of $K_n$. It follows that a set of $\mu$ mutually orthogonal cycle systems of $K_n$ is equivalent to a completely-reducible super-simple decomposition of
$\mu K_n$ into cycles; see \cite{CY2} for more details.
In the case $\ell=3$, observe that a pair of $\ell$-cycle systems is orthogonal if and only if the cycle systems are disjoint.
It is not hard to see that there are at most $n-2$ pairwise disjoint triple systems of order $n$; a set of systems which meets this bound is called a {\em large set} of disjoint Steiner triple systems, or LTS$(n)$.
An LTS$(7)$ does not exist \cite{Cayley}; however in \cite{LuJX1,LuJX2}, it is shown that an LTS$(n)$ exists if and only if $n>7$ and $n\equiv 1$ or $3\pmod{6}$, except for a finite list of possible exceptions. The exceptional cases are all solved in~\cite{Te}.
In this paper, we are often interested in {\em cyclic} cycle systems of the complete graph $K_n$. Let $G$ be an additive group of order $n$ and suppose $K_n$ has vertex set $G$. Given a cycle $C=(c_0,c_1,\ldots,c_{\ell-1})$ in $K_n$, for each element $g \in G$, define the cycle $C+g=(c_0+g, c_1+g, \ldots, c_{\ell-1}+g)$. We say that a cycle system ${\cal F}$ of $K_n$ is {\em $G$-regular} if, for any $C \in {\cal F}$ and $g \in G$, we have that $C+g \in {\cal F}$. In the case that $G$ is a cyclic group, we refer to a $\mathbb{Z}_n$-regular cycle system as {\em cyclic}. In a cyclic cycle system ${\cal F}$, the {\em orbit} of the cycle $C \in {\cal F}$ is the set of cycles $\{C+g \mid g \in \mathbb{Z}_n\}$; a cyclic cycle system can be completely specified by listing a set of {\em starter cycles}, that is, a set of representatives for the orbits of the cycles under the action of $\mathbb{Z}_n$.
The existence problem for cyclic cycle systems has attracted much attention. Clearly, in order for a cyclic $\ell$-cycle system of odd order $n$ to exist, we must have that $3 \leq \ell \leq n$ and $\ell$ divides $n(n-1)/2$. However, additional conditions for existence also come into play. There is no cyclic $\ell$-cycle system of order $n$ when $(\ell,n)\in \{(3,9), (15,15)\}$; $\ell=n=p^m$ for some prime $p$ and integer $m \geq 2$; or $\ell < n < 2\ell$ and $\gcd(\ell,n)$ is a prime power~\cite{Buratti2004, BurattiDelFra2004}. Buratti~\cite{Buratti2004} has conjectured that a cyclic $\ell$-cycle system of order $n$ exists for any other admissible pair $(\ell,n)$; this conjecture is still open. The existence problem for cyclic cycle systems of the complete graph has been solved in a number of cases, including when $n \equiv 1$ or
$\ell\pmod{2\ell}$~\cite{BDF, BurattiDelFra2004, Kotzig, Rosa, Vietri} (see also~\cite{BlincoElZanatiVandenEynden, BryantGavlasLing, FuWu2004}), $\ell \leq 32$~\cite{WuFu}, $\ell$ is twice or thrice a prime power~\cite{Wu2,WuFu}, or $\ell$ is even and $n > 2\ell$~\cite{Wu}.
We explore the maximum $\mu'$ such that there exists a set of $\mu'$ mutually orthogonal {\em cyclic} $\ell$-cycle systems of order $n$;
this value is denoted by $\mu'(\ell,n)$. Pairs of orthogonal cyclic cycle systems of the complete graph arise from Heffter arrays with certain orderings.
A {\em Heffter array $H(n;k)$} is an $n\times n$ matrix such that each row and column contains $k$ filled cells,
each row and column sum is divisible by $2nk+1$ and either $x$ or $-x$ appears in the array for each integer $1\leq x\leq nk$.
A Heffter array is said to have a {\em simple ordering} if, for each row and column, the entries may be cyclically ordered so that all partial sums are distinct modulo $2nk+1$.
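For concreteness, here is a small array of our own construction (it is not taken from the literature cited in this section) together with a mechanical check of the $H(3;3)$ conditions and of the simple-ordering property, for the rows and columns in the order written:

```python
# A 3x3 array with entries of the form ±x, x in {1,...,9}; we check the H(3;3)
# conditions (row/column sums divisible by 2nk+1 = 19, each |x| used once) and
# that the given cyclic orders are simple orderings.
H = [[ 9,  6,  4],
     [-7,  8, -1],
     [-2,  5, -3]]
n, k = 3, 3
m = 2 * n * k + 1  # = 19

# Exactly one of x, -x appears for each 1 <= x <= nk.
entries = [x for row in H for x in row]
assert sorted(abs(x) for x in entries) == list(range(1, n * k + 1))

lines = H + [list(col) for col in zip(*H)]  # rows, then columns
for line in lines:
    assert sum(line) % m == 0                          # Heffter condition
    partial = [sum(line[:i + 1]) % m for i in range(k)]
    assert len(set(partial)) == k                      # simple ordering
```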
The following was first shown by Archdeacon~\cite{A} as part of a more general result; consult~\cite{BCDY} to see this result stated more explicitly.
\begin{theorem}
If $H(n;k)$ is a Heffter array with a simple ordering, then there exists a pair of orthogonal cyclic decompositions of $K_{2nk+1}$ into $k$-cycles. In particular, $\mu'(k,2nk+1)\geq 2$.
\end{theorem}
Thus the following is implied by existing literature on Heffter arrays.
\begin{theorem} {\rm \cite{ADDY,BCDY,CMPP,DW}}
Let $n\geq k$. Then
$\mu'(k,2nk+1)\geq 2$ whenever:
\begin{itemize}
\item $k\in \{3,5,7,9\}$ and $nk\equiv 3\pmod{4}$;
\item $k\equiv 0\pmod{4}$;
\item $n\equiv 1\pmod{4}$ and $k\equiv 3\pmod{4}$;
\item $n\equiv 0\pmod{4}$ and $k\equiv 3\pmod{4}$ (for large enough $n$).
\end{itemize}
\end{theorem}
With an extra condition on the orderings of the entries of a Heffter array, these orthogonal cycle systems in turn biembed to yield a face $2$-colourable embedding on an orientable surface. Face $2$-colourable embeddings on orientable surfaces have been studied for a variety of combinatorial structures \cite{DM,GG,GrM,GM}. Recently, Costa, Morini, Pasotti and Pellegrini~\cite{CMPP2020} employed a generalization of Heffter arrays to construct pairs of orthogonal $\ell$-cycle systems of the complete multipartite graph in certain cases.
In \cite{CY2}, it is shown that for every graph $H$ and fixed integer $k\geq 1$, for sufficiently large $n$ (satisfying some elementary necessary divisibility conditions), there exists a set of $k$ pairwise orthogonal decompositions of $K_n$ into $H$ (i.e., no two copies of $H$ share more than one edge).
Aside from this quite general asymptotic result, to our knowledge, sets of mutually orthogonal $\ell$-cycle systems of size greater than $2$ have not been studied for $\ell\geq 4$.
In this paper, our focus for cyclic cycle systems is on the case $n \equiv 1\pmod{2\ell}$, for which it is possible to construct a cyclic $\ell$-cycle system with no short orbit.
In particular, we will find lower bounds on $\mu(\ell,n)$ by constructing sets of mutually orthogonal cyclic even cycle systems.
Specifically, we show that if $\ell$ is even and $n \equiv 1\pmod{2\ell}$, then $\mu'(\ell,n)$ is bounded below by a constant multiple of $n/\ell^2$, i.e., $\mu'(\ell,n) = \Omega(n/\ell^2)$.
Our main result is as follows.
\begin{theorem}\label{MainTheorem}
If $\ell \geq 4$ is even, $n \equiv 1\pmod{2\ell}$ and $N=(n-1)/(2\ell)$, then
\[
\mu(\ell,n) \geq \mu'(\ell,n) \geq \frac{N}{a\ell+b}-1,
\]
where
\[
(a,b) = \left\{
\begin{array}{ll}
(4,-2), & \mbox{if } \ell \equiv 0\pmod{4}, \\
(24,-18), & \mbox{if } \ell \equiv 2\pmod{4}.
\end{array}
\right.
\]
\end{theorem}
In Section~\ref{Section:MO4CS}, when $\ell=4$, we improve the bound of Theorem~\ref{MainTheorem} to $\mu(\ell,n) \geq \mu'(\ell,n) \geq 4N$ (Lemma~\ref{CycleLength4}).
Section~\ref{Section:Prelim} establishes some notation and preliminary results.
The general result for $\ell \equiv 0 \pmod{4}$ is proved in Section~\ref{Section:4k} (Theorem~\ref{case8k}), while the bound for $\ell \equiv 2\pmod{4}$ is proved in Section~\ref{Section:4k+2} (Theorem~\ref{Case4k+2}).
In contrast, in Section~\ref{conclus} we establish upper bounds, namely
$\mu(\ell,n)\leq n-2$;
$\mu(\ell,n)\leq (n-2)(n-3)/(2(\ell-3))$ for $\ell\geq 4$;
$\mu(\ell,n)\leq 1$ for $\ell>\sqrt{n(n-1)/2}$;
and $\mu'(\ell,n)\leq n-3$ for $n\geq 4$.
Finally, computational results for small values are given in the appendix.
\section{Mutually orthogonal 4-cycle systems}
\label{Section:MO4CS}
Clearly $n\equiv 1\md{8}$ is a necessary condition for a decomposition of $K_n$ into $4$-cycles, cyclic or otherwise.
Let $[a,b,c,d]_n$ denote the $\mathbb{Z}_n$-orbit of the $4$-cycle $(0,a,a+b,a+b+c)$, where $a+b+c+d$ is divisible by $n$.
Observe that $[a,b,c,d]_n=[-d,-c,-b,-a]_n$.
Where the context is clear, we write $[a,b,c,d]_n=[a,b,c,d]$.
Let $D_n=\{1,2,\dots , (n-1)/2\}$; that is, $D_n$ is the set of {\em differences} in $\mathbb{Z}_n$.
We consider $\mathbb{Z}_n$ as the set $\pm D_n \cup \{0\}$.
By observation, the maximum size of a set of mutually orthogonal cyclic $4$-cycle systems of $K_9$ is $\mu'(4,9)=2$.
Two such systems are $[1,-2,4,-3]_9$ and $[1,-3,4,-2]_9$.
In the non-cyclic case, an exhaustive computational search indicates that the maximum size of a set of mutually orthogonal $4$-cycle systems of $K_9$ is
$\mu(4,9)=4$; see the example given in Section~\ref{Section:Intro}.
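The cyclic pair above can be verified directly. The helper below encodes the convention that $[a,b,c,d]_n$ is the $\mathbb{Z}_n$-orbit of the $4$-cycle $(0,a,a+b,a+b+c)$; vertices are taken in $\{0,\dots,8\}$:

```python
from itertools import combinations

def orbit(a, b, c, d, n):
    """Expand [a,b,c,d]_n into the n translates of the 4-cycle (0,a,a+b,a+b+c)."""
    assert (a + b + c + d) % n == 0
    base = (0, a % n, (a + b) % n, (a + b + c) % n)
    return [tuple((v + t) % n for v in base) for t in range(n)]

def cycle_edges(cycle):
    return {frozenset((cycle[i], cycle[(i + 1) % 4])) for i in range(4)}

F1 = orbit(1, -2, 4, -3, 9)
F2 = orbit(1, -3, 4, -2, 9)

# Each orbit is a 4-cycle decomposition of K_9 ...
K9 = {frozenset(e) for e in combinations(range(9), 2)}
for F in (F1, F2):
    edges = [e for C in F for e in cycle_edges(C)]
    assert len(edges) == len(set(edges)) == 36 and set(edges) == K9

# ... and the two systems are orthogonal.
assert all(len(cycle_edges(C) & cycle_edges(D)) <= 1 for C in F1 for D in F2)
```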
\begin{lemma}\label{CycleLength4}
If $n\equiv 1\md{8}$ and $n\geq 17$, then there exists a set of $(n-1)/2$ mutually orthogonal cyclic $4$-cycle systems of order $n$.
In particular, $\mu'(4,n)\geq (n-1)/2$.
\end{lemma}
\begin{proof}
We first describe how to construct a set of $(n-5)/2$ mutually orthogonal cyclic $4$-cycle systems; then we add two more by making some adjustments.
Let $N=(n-1)/8$.
For each $i,j$ with $1\leq i<j\leq 2N$,
let $C_{i,j}$ and $C_{i,j}'$ be the pair of orbits of $4$-cycles:
$$C_{i,j}:=\{[2i-1,2j,-2i,-(2j-1)]\},\quad C_{i,j}':=\{[2i-1,-(2j-1),-2i,2j]\}.$$
Next, let $F_1,F_2,\dots, F_{2N-1}$ be a set of $1$-factors which decompose the complete graph on vertex set $\{1,2,\ldots,2N\}$.
For each $1$-factor $F_k$,
the sets $${\cal F}_k:=\mathop{\bigcup_{\{i,j\}\in F_k}}_{i<j} C_{i,j}
\quad \mbox{ and } \quad
{\cal F}_k':=\mathop{\bigcup_{\{i,j\}\in F_k}}_{i<j} C_{i,j}'
$$ each describe a cyclic decomposition of $K_n$ into $4$-cycles.
Observe that the set of such decompositions constitutes a mutually orthogonal set of size $4N-2=(n-5)/2$.
We next make an adjustment to extend this set.
Without loss of generality, let $F_1=\{\{1,2\},\{3,4\},\dots ,\{2N-1,2N\}\}$.
Replace ${\cal F}_1$ and ${\cal F}_1'$ with:
$$\begin{array}{l}
{\cal F}_{\ast}=\{[4i-3,-(4i-2),-(4i-1),4i]\mid 1\leq i\leq N\}, \\
{\cal F}_{\ast}'= \{[4i-3,4i,-(4i-1),-(4i-2)]
\mid 1\leq i\leq N\}.
\end{array}$$
Then, we can add another pair of cyclic decompositions, orthogonal to each
decomposition in $\{{\cal F}_{\ast},{\cal F}_{\ast}',{\cal F}_2,\ldots,{\cal F}_{2N-1},{\cal F}_2',\ldots,{\cal F}_{2N-1}'\}$, given by:
$${\cal F}_{2N}:=\{[1,-3,4N,-(4N-2)]\}\cup \{[4i+1,-(4i+3),4i,-(4i-2)]\mid 1\leq i <N\}$$
and
$${\cal F}_{2N}':=\{[1,-(4N-2),4N,-3]\}\cup \{[4i+1,-(4i-2),4i,-(4i+3)]\mid 1\leq i <N\}.$$
(Note that orthogonality requires $N\geq 2$ at this final step.)
\end{proof}
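The construction in the proof can be carried out mechanically. The sketch below is our own code (the choice of $1$-factorization of $K_4$ and the data layout are ours); it builds the eight systems for $n=17$, $N=2$ and checks that they form a set of mutually orthogonal cyclic $4$-cycle systems:

```python
from itertools import combinations

def orbit(q, n):
    """Expand the starter [a,b,c,d]_n into its n translates."""
    a, b, c, d = q
    assert (a + b + c + d) % n == 0
    base = (0, a % n, (a + b) % n, (a + b + c) % n)
    return [tuple((v + t) % n for v in base) for t in range(n)]

def edges(cycle):
    return {frozenset((cycle[i], cycle[(i + 1) % 4])) for i in range(4)}

N, n = 2, 17
one_factors = [[(1, 3), (2, 4)], [(1, 4), (2, 3)]]  # F_2, F_3 of K_4

def C(i, j):  return (2*i - 1, 2*j, -2*i, -(2*j - 1))
def Cp(i, j): return (2*i - 1, -(2*j - 1), -2*i, 2*j)

starter_sets = [[(4*i - 3, -(4*i - 2), -(4*i - 1), 4*i) for i in range(1, N + 1)],  # F_*
                [(4*i - 3, 4*i, -(4*i - 1), -(4*i - 2)) for i in range(1, N + 1)]]  # F_*'
for F in one_factors:
    starter_sets.append([C(i, j) for (i, j) in F])
    starter_sets.append([Cp(i, j) for (i, j) in F])
starter_sets.append([(1, -3, 4*N, -(4*N - 2))] +
                    [(4*i + 1, -(4*i + 3), 4*i, -(4*i - 2)) for i in range(1, N)])  # F_{2N}
starter_sets.append([(1, -(4*N - 2), 4*N, -3)] +
                    [(4*i + 1, -(4*i - 2), 4*i, -(4*i + 3)) for i in range(1, N)])  # F_{2N}'

systems = [[Cyc for q in s for Cyc in orbit(q, n)] for s in starter_sets]
assert len(systems) == (n - 1) // 2  # eight systems for n = 17

Kn = {frozenset(e) for e in combinations(range(n), 2)}
for S in systems:                    # each is a decomposition of K_17
    E = [e for Cyc in S for e in edges(Cyc)]
    assert len(set(E)) == len(E) and set(E) == Kn
for S, T in combinations(systems, 2):  # mutual orthogonality
    assert all(len(edges(a) & edges(b)) <= 1 for a in S for b in T)
```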
In the case $n=17$, we have computationally determined that $\mu'(4,17)=10$, which improves on the bound given in Lemma~\ref{CycleLength4}.
A set of ten mutually orthogonal cyclic 4-cycle systems of order 17 is given in the appendix.
We illustrate the method of the previous proof in the case $n=25$. We start with a $1$-factorization of $K_6$:
$$
\begin{array}{l}
F_1 = \{\{1,2\}, \{3,4\}, \{5,6\}\}, \\
F_2 = \{\{1,3\}, \{2,6\}, \{4,5\}\}, \\
F_3 = \{\{1,4\}, \{2,5\}, \{3,6\}\}, \\
F_4 = \{\{1,5\}, \{2,3\}, \{4,6\}\}, \\
F_5 = \{\{1,6\}, \{2,4\}, \{3,5\}\}.
\end{array}
$$
The resulting $12$ mutually orthogonal cyclic $4$-cycle systems of order $25$ are given by:
$$\begin{array}{l}
{\cal F}_{\ast}=\{[1,-2,-3,4], [5,-6,-7,8], [9,-10,-11,12]\}, \\
{\cal F}_{\ast}'=\{[1,4,-3,-2], [5,8,-7,-6], [9,12,-11,-10]\},\\
{\cal F}_{2}= \{[1,6,-2,-5], [3,12,-4,-11], [7,10,-8,-9]\}, \\
{\cal F}_{2}'= \{[1,-5,-2,6], [3,-11,-4,12], [7,-9,-8,10]\}, \\
{\cal F}_{3}= \{[1,8,-2,-7], [3,10,-4,-9], [5,12,-6,-11]\}, \\
{\cal F}_{3}'=\{[1,-7,-2,8], [3,-9,-4,10], [5,-11,-6,12]\}, \\
{\cal F}_{4}= \{[1,10,-2,-9], [3,6,-4,-5], [7,12,-8,-11]\}, \\
{\cal F}_{4}'=\{[1,-9,-2,10], [3,-5,-4,6], [7,-11,-8,12]\}, \\
{\cal F}_{5}= \{[1,12,-2,-11], [3,8,-4,-7], [5,10,-6,-9]\}, \\
{\cal F}_{5}'=\{[1,-11,-2,12], [3,-7,-4,8], [5,-9,-6,10]\}, \\
{\cal F}_{6}=\{[1,-3,12,-10], [5,-7,4,-2], [9,-11,8,-6]\}, \\
{\cal F}_{6}'= \{[1,-10,12,-3], [5,-2,4,-7], [9,-6,8,-11]\}.
\end{array}$$
\iffalse
Observe that this set of 12 mutually orthogonal systems is not maximal, as it can be extended with the following additional system.
$$\begin{array}{l}
xxx \\
\end{array}$$
This extended set of 13 mutually orthogonal systems is maximal.
However, we have determined through computational means that $\mu'(4,25) \geq 17$.
So this extended list of 13 mutually orthogonal systems is not maximum.
\fi
Through computational means we determined that this collection of 12 mutually orthogonal cyclic 4-cycle systems of order 25 is maximal.
However, it is not maximum, as we also established computationally that $\mu'(4,25) \geq 17$.
\section{Preliminary lemmas for cycle length greater than 4}
\label{Section:Prelim}
In this section, we introduce notation and basic results which will be needed later to construct mutually orthogonal cycle systems with even cycle length $\ell \geq 6$.
Henceforth, for any integers $a$ and $b$ with $a\leq b$, $[a,b]$ is the set of integers $\{a,a+1,\dots ,b\}$. For $a,b\in \mathbb R$ with $a<b$, we also use the notation $(a,b)$ to denote the set of {\em integers} strictly between
$a$ and $b$.
Let the vertices of the complete graph $K_n$ be labelled with $[0,n-1]$, where $n$ is odd. Then the {\em difference} associated with an edge $\{a,b\}$ is defined to be $\min\{(a-b)\bmod{n},\ (b-a)\bmod{n}\}$. Let $e_1$ and $e_2$ be two edges of differences $d$ and $e$, respectively.
Then we may write $e_1=\{a,a+d\pmod{n} \}$ and $e_2=\{b,b+e\pmod{n} \}$, where $a,b\in [0,n-1]$ are uniquely determined.
We define the {\em distance} between
$e_1$ and $e_2$ to be $\min\{(a-b)\bmod{n},\ (b-a)\bmod{n}\}$.
Given a cycle $C$ with vertices in $\mathbb{Z}_n$, the set $\Delta C$ is defined to be the multiset of differences of the edges of $C$.
The idea is to construct cyclic systems using so-called {\em balanced} sets of differences. The following definitions and lemma appear in~\cite{BurgessMerolaTraetta}.
\begin{definition}
If $D=\{d_1, d_2, \ldots, d_{2k}\}$ is a set of positive integers, with $d_{i}< d_{i+1}$ for $i\in[1,2k-1]$,
the {\em alternating difference pattern} of $D$ is the sequence $(s_1, s_2, \ldots, s_k)$ where
$s_i=d_{2i}-d_{2i-1}$ for every $i\in[1,k]$. Furthermore, $D$ is said to be {\em balanced} if there exists an integer
$\tau\in[1,k]$ such that
$\sum_{i=1}^{\tau} s_i = \sum_{i=\tau+1}^k s_i$.
\end{definition}
\begin{definition}
Let $D=\{d_1, d_2, \ldots, d_{2k}\}$ be a balanced set of positive integers. Let $\delta_1$, $\delta_2, \dots , \delta_{2k}$ be the sequence obtained by reordering the integers in $D$ as
follows:
$$\delta_i =
\left\{
\begin{array}{ll}
d_i & \mbox{\rm if\ }1\leq i\leq 2\tau-1, \\
d_{i+1} & \mbox{\rm if\ }2\tau\leq i\leq 2k-1, \\
d_{2\tau} & \mbox{\rm if\ }i=2k.
\end{array}\right.$$
Set $c_0=0$ and $c_i=\sum_{h=1}^i (-1)^h\delta_h$ for $1\leq i\leq 2k-1$.
We then define $C(D):=(c_0,c_1,\dots ,c_{2k-1})$.
\end{definition}
\begin{lemma}\label{lemma1} {\rm (Lemma 3.2 of \cite{BurgessMerolaTraetta}).}
Let $k\geq 2$. If $D$ is a balanced set of $2k$ positive integers, then $C(D)$ is a
$2k$-cycle satisfying $\Delta C(D)=D$, with
vertex set $V(C(D))\subset [-d, d']$, where $d=\max D$ and $d'=\max (D\setminus\{d\})$.
\end{lemma}
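The definitions above transcribe directly into code. In the sketch below the sample balanced sets are our own; the construction of $C(D)$ follows the definition verbatim:

```python
def C_of_D(D):
    """Build the cycle C(D) from a balanced set D = {d_1 < ... < d_2k}."""
    d = sorted(D)
    k = len(d) // 2
    s = [d[2*i + 1] - d[2*i] for i in range(k)]          # alternating difference pattern
    tau = next(t for t in range(1, k + 1)
               if sum(s[:t]) == sum(s[t:]))              # D is assumed balanced
    delta = d[:2*tau - 1] + d[2*tau:] + [d[2*tau - 1]]   # the reordering delta_1..delta_2k
    verts, c = [0], 0
    for h, x in enumerate(delta[:-1], start=1):          # c_i = sum of (-1)^h delta_h
        c += (-1) ** h * x
        verts.append(c)
    return verts

def diffs(verts):
    k = len(verts)
    return sorted(abs(verts[i] - verts[(i + 1) % k]) for i in range(k))

for D in ({1, 2, 3, 4}, {1, 3, 6, 8}, {1, 3, 5, 6, 8, 9}):
    verts = C_of_D(D)
    assert len(set(verts)) == len(verts)        # a genuine 2k-cycle
    assert diffs(verts) == sorted(D)            # Delta C(D) = D
    d1, d2 = sorted(D)[-1], sorted(D)[-2]
    assert all(-d1 <= v <= d2 for v in verts)   # V(C(D)) in [-d, d']
```

For instance $C(\{1,2,3,4\})=(0,-1,2,-2)$, with edge differences $1,3,4,2$.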
\begin{corollary}\label{corollary1}
Let $k\geq 2$ and $n\equiv 1\pmod{4k}$. Suppose that the set
$[1,(n-1)/2]$ partitions into sets $D_1, D_2,\dots, D_{(n-1)/(2k)}$, each of which is balanced and of size $2k$.
Then cycles $C(D_i)$, $i \in [1, (n-1)/(2k)]$, form a set of starter cycles for a cyclic $2k$-cycle decomposition of $K_n$; in particular, the set
$$\{C(D_i)+j\mid i\in [1, (n-1)/(2k)], j\in [0,n-1]\}$$
is a cyclic decomposition of $K_n$ into $2k$-cycles.
\end{corollary}
\begin{proof}
Let $i\in [1, (n-1)/(2k)]$. Since $D_i\subset [1,(n-1)/2]$, Lemma \ref{lemma1} implies that $V(C(D_i))\subset [-(n-1)/2, (n-1)/2]$. Thus the vertices of $C(D_i)$ are distinct in $\mathbb{Z}_n$. The result follows.
\end{proof}
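As an illustration of the corollary with $n=17$ and $k=2$ (our own choice of parameters), the partition $\{1,2,3,4\}\cup\{5,6,7,8\}$ of $[1,8]$ into balanced sets yields a cyclic $4$-cycle decomposition of $K_{17}$. The check below re-implements $C(D)$ from the preceding definition:

```python
from itertools import combinations

def C_of_D(D):
    """The cycle C(D) of a balanced set D, as in the definition above."""
    d = sorted(D); k = len(d) // 2
    s = [d[2*i + 1] - d[2*i] for i in range(k)]
    tau = next(t for t in range(1, k + 1) if sum(s[:t]) == sum(s[t:]))
    delta = d[:2*tau - 1] + d[2*tau:] + [d[2*tau - 1]]
    verts, c = [0], 0
    for h, x in enumerate(delta[:-1], start=1):
        c += (-1) ** h * x
        verts.append(c)
    return verts

n, k = 17, 2
partition = [{1, 2, 3, 4}, {5, 6, 7, 8}]   # balanced sets covering [1, (n-1)/2]
cycles = [tuple((v + j) % n for v in C_of_D(D))
          for D in partition for j in range(n)]

# The 34 translates cover each of the 136 edges of K_17 exactly once.
edge_list = [frozenset((c[i], c[(i + 1) % (2*k)])) for c in cycles for i in range(2*k)]
assert len(edge_list) == len(set(edge_list)) == n * (n - 1) // 2
assert set(edge_list) == {frozenset(e) for e in combinations(range(n), 2)}
```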
Our general strategy will be to show that a pair of cyclically generated cycle systems is orthogonal by showing that the sets of differences from any two cycles in different orbits share at most one element. To this end, the following lemma will be used in Sections~\ref{Section:4k} and~\ref{Section:4k+2}.
\begin{lemma}
Let $\delta, N>0$ and let $d$ and $d'$ be integers such that $d,d'\in (N/2-\delta N, N/2+\delta N)$.
Let $\alpha, \alpha'$ be integers such that $1\leq \alpha < \alpha' \leq (1-2\delta)/4\delta$.
Then $\alpha d < \alpha' d'$.
\label{squeezy}
\end{lemma}
\begin{proof}
For each positive integer $s$, define
$$I_s = \{si\mid N/2-\delta N <i < N/2+\delta N; i\in {\mathbb R}\}.$$
Let $m=\lfloor \frac{1-2\delta}{4\delta}\rfloor$, and let $S=[1,m]$.
Observe that $\alpha, \alpha'\in S$.
Since $m\leq (1-2\delta)/(4\delta)$, we have $\delta \leq 1/(4m+2)$, which implies that:
\[
\begin{array}{rrcl}
& m(1+2\delta) & \leq & (m+1)(1- 2\delta) \\
\Rightarrow & m(N/2 + \delta N) & \leq & (m+1)(N/2 - \delta N). \\
\end{array}
\]
It follows that for each $s\in S$, every element of $I_s$ is strictly less than every element of $I_{s+1}$.
Since $\alpha d\in I_{\alpha}$ and $\alpha' d'\in I_{\alpha'}$, it follows that
$\alpha d< \alpha' d'$.
\end{proof}
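The inequality of the lemma can also be checked exhaustively for sample parameters; the Python sketch below takes $N=100$ and $\delta=0.05$, so that the interval is $(45,55)$ and $\alpha'$ is bounded by $(1-2\delta)/(4\delta)=4.5$.

```python
# Exhaustive check of the lemma for sample parameters N = 100, delta = 0.05.
# The open interval (N/2 - delta*N, N/2 + delta*N) = (45, 55) contains the
# integers 46..54, and m = floor((1 - 2*delta)/(4*delta)) = 4.
N, delta = 100, 0.05
ds = list(range(46, 55))                     # integer choices for d, d'
m = int((1 - 2*delta) / (4*delta))           # m = 4
ok = all(a*d < b*dd
         for a in range(1, m + 1) for b in range(a + 1, m + 1)
         for d in ds for dd in ds)
assert m == 4 and ok
```

The tightest case is $\alpha=3$, $\alpha'=4$, $d=54$, $d'=46$, where $162 < 184$.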
The following variation of Lemma~\ref{squeezy} will be used in Section~\ref{Section:4k+2}.
\begin{corollary}
Let $\delta, N>0$ and let $d$ and $d'$ be integers such that $d,d'\in (N/3-\delta N, N/3+\delta N)$.
Let $\alpha, \alpha'$ be integers such that $1\leq \alpha < \alpha' \leq (1-3\delta)/(6\delta)$.
Then $\alpha d < \alpha' d'$.
\label{squeezier}
\end{corollary}
\begin{proof}
If $m$ is a positive integer,
$m\leq (1-3\delta)/(6\delta)$ implies that
$$m(N/3 + \delta N) \leq (m+1)(N/3 - \delta N).$$
The remaining argument is similar to the previous lemma.
\end{proof}
\section{Orthogonal sets of $4k$-cycle systems with $k \geq 2$}
\label{Section:4k}
Our aim in this section is to prove Theorem~\ref{case8k}. In particular, for each $k \geq 2$ and $n \equiv 1\pmod{8k}$, we will show that $\mu'(n,4k) = \Omega(n/k^2)$. That is, we construct a set of mutually orthogonal $4k$-cycle decompositions of $K_n$ of size at least $cn/k^2$ for some constant $c>0$.
Let $N$ and $k$ be positive integers and let $n=8kN+1$.
For each integer $d\in (N/2-N/(16k-2), N/2)$, we construct a cyclic $4k$-cycle decomposition of $K_n$ which we will denote by ${\mathcal F}(d)$.
The first $d$ starter cycles in $\mathcal{F}(d)$ use the set of differences $[1,4kd]$. For $i \in [1,d]$, let
\[
S_{d,i}=\{i,d+i,2d+i,\ldots,(4k-1)d+i\}.
\]
Observe that the set $S_{d,i}$ is balanced, with $\tau=k$, for each $i \in [1,d]$.
Henceforth in this section, let $e:=N-d$. (In effect, $e$ is a function of $d$.) Observe that $e\in (N/2, N/2 + N/(16k-2))$.
The remaining $e$ starter cycles in ${\mathcal F}(d)$ use differences $[4kd+1, 4kN]$. For $i \in [1, e]$, take
\[
T_{e,i} = \{4kd+i, 4kd+e+i, 4kd+2e+i, \ldots, 4kd+(4k-1)e+i\}.
\]
Observe that the set $T_{e,i}$ is balanced for each $i\in [1,e]$, where $\tau=k$.
Moreover, since $4kd+4ke=4kN$, we have that
\[
\left(\bigcup_{i=1}^{d} S_{d,i}\right) \cup \left( \bigcup_{i=1}^{e} T_{e,i} \right)=[1,4kN],
\]
so by Corollary~\ref{corollary1},
the set of cycles
$${\mathcal F}(d):=\{C(S_{d,i})\mid i\in [1,d]\}\cup \{C(T_{e,i})\mid i\in [1,e]\}$$
is a set of starter cycles for a cyclic $4k$-cycle system of order $n=8kN+1$.
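For a small instance one can verify directly that the difference sets above partition $[1,4kN]$; the following Python sketch takes $k=2$, $N=31$ and $d=15$, an admissible choice since $15\in (N/2-N/(16k-2),N/2)$.

```python
k, N = 2, 31          # n = 8kN + 1 = 497; d must lie in (N/2 - N/30, N/2)
d = 15                # 14.46... < 15 < 15.5
e = N - d             # e = 16
S = [{a*d + i for a in range(4*k)} for i in range(1, d + 1)]
T = [{4*k*d + a*e + i for a in range(4*k)} for i in range(1, e + 1)]
union = set().union(*S, *T)
assert union == set(range(1, 4*k*N + 1))       # the differences cover [1, 4kN]
assert sum(len(X) for X in S + T) == 4*k*N     # and the sets are pairwise disjoint
```

Each $S_{d,i}$ and $T_{e,i}$ is an arithmetic progression of length $4k$, so together the $d+e=N$ sets account for exactly $4kN$ differences.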
In order to show that we have constructed an orthogonal set of decompositions, we will make use of the following, which is a direct consequence of Lemma~\ref{squeezy}.
\begin{lemma}
Let
$d,d'\in (N/2-N/(16k-2), N/2)$ where $d\neq d'$ and let $e=N-d$ and $e'=N-d'$. Let
$\alpha, \alpha' \in [1,4k-1]$.
Then no two of $\alpha d$, $\alpha' d'$, $\alpha e$ and $\alpha' e'$ are equal.
Moreover, if $\alpha < \alpha'$ then $\alpha d < \alpha' d'$ and $\alpha e < \alpha'e'$.
\label{Squeezy2}
\end{lemma}
\begin{lemma}
Let $d,d'\in (N/2-N/(16k-2), N/2)$ where $d\neq d'$.
Then
the decompositions ${\mathcal F}(d)$ and ${\mathcal F}(d')$, as defined above, are orthogonal.
\end{lemma}
\begin{proof}
In what follows,
$d\neq d'$, $e=N-d$ and $e'=N-d'$. Observe that
$e,e'\in (N/2, N/2 + N/(16k-2))$.
It suffices to show that if $C$ is a cycle from $\mathcal{F}(d)$ and $C'$ is a cycle from $\mathcal{F}(d')$, then
$C$ and $C'$ share at most one difference.
Equivalently, we will show that:
\begin{enumerate}
\item [{\bf (i)}:] {\em For any $i\in [1,d]$ and $i'\in [1,d']$, $|S_{d,i}\cap S_{d',i'}|\leq 1$};
\item [{\bf (ii)}:] {\em For any $i\in [1,e]$ and $i'\in [1,e']$, $|T_{e,i}\cap T_{e',i'}|\leq 1$}; and
\item [{\bf (iii)}:] {\em For any $i\in [1,d]$ and $i'\in [1,e']$, $|S_{d,i}\cap T_{e',i'}|\leq 1$}.
\end{enumerate}
To show (i), suppose to the contrary that $\{x,y\} \subseteq S_{d,i} \cap S_{d',i'}$ with $x<y$. Thus $y-x = \alpha d = \alpha' d'$ for some $\alpha, \alpha'
\in [1,4k-1]$, contradicting Lemma~\ref{Squeezy2}. The justification of (ii) is similar. For (iii), if $x,y \in S_{d,i}\cap T_{e',i'}$ with $x<y$, then $y-x = \alpha d$ for some $\alpha \in [1,4k-1]$ (since $x,y \in S_{d,i}$) and $y-x=\alpha' e'$ for some
$\alpha' \in [1,4k-1]$ (since $x,y \in T_{e',i'}$), so $\alpha d = \alpha'e'$, which again contradicts Lemma~\ref{Squeezy2}.
\end{proof}
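This orthogonality can be confirmed computationally for a small instance; the Python sketch below takes $k=2$ and $N=62$, for which $d=29$ and $d'=30$ both lie in the admissible interval $(N/2-N/30, N/2)$, and checks that any two starter difference sets from ${\mathcal F}(29)$ and ${\mathcal F}(30)$ share at most one element.

```python
k, N = 2, 62                     # d, d' must lie in (N/2 - N/30, N/2) = (28.93..., 31)

def starters(d):
    """Difference sets of the starter cycles of F(d)."""
    e = N - d
    S = [frozenset(a*d + i for a in range(4*k)) for i in range(1, d + 1)]
    T = [frozenset(4*k*d + a*e + i for a in range(4*k)) for i in range(1, e + 1)]
    return S + T

A, B = starters(29), starters(30)
# Any two starter cycles, one from F(29) and one from F(30),
# share at most one difference:
assert max(len(X & Y) for X in A for Y in B) == 1
```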
Since $n=8Nk+1$, we have the following theorem.
\begin{theorem}
Let $k \geq 2$ and $n=8Nk+1$. There is a set of mutually orthogonal cyclic $4k$-cycle systems of order $n$ of size at least
$$\frac{N}{16k-2}-1=\frac{n-1}{8k(16k-2)}-1.$$
Thus, if $n\equiv 1\pmod{8k}$,
$$\mu(n,4k)\geq \mu'(n,4k)\geq \frac{n-1}{8k(16k-2)}-1.$$
\label{case8k}
\end{theorem}
\section{Orthogonal sets of $(4k+2)$-cycle systems}
\label{Section:4k+2}
Let
$N$ and $k$ be positive integers and let
$n=2(4k+2)N+1$. For each $d \equiv N\pmod{2}$ with $d\in (N/3 - N/(48k+15), N/3)$, we form a cyclic $(4k+2)$-cycle decomposition $\mathcal{F}(d)$ of $K_n$. Let $e=(N-d)/2$, and observe that $N/3 < e < N/3 + N/(2(48k+15))< N/3 + N/(48k+15)$.
Thus $e\in (N/3, N/3 + N/(48k+15))$.
For $i \in [1,d]$, let
\[
S_{d,i,1} = \{i, d+i, 2d+i, \ldots, (4k-1)d+i\} \mbox{ and } S_{d,i,2} = \{4kN+4e+i, (4k+2)N-i+1\},
\]
and let $S_{d,i} = S_{d,i,1} \cup S_{d,i,2}$.
Now, when constructing the cycles containing differences in $S_{d,i}$, instead of $(4k+2)N-i+1$, we will use the {\em negative} of this difference modulo $n$, that is, the value
\[
(8k+4)N+1 - ((4k+2)N-i+1) = (4k+2)N+i.
\]
We construct a starter cycle $C'(S_{d,i})$ using the set of differences $S_{d,i}$, but in a slightly different way from that of Lemma~\ref{lemma1}.
\begin{eqnarray*}
C'(S_{d,i}) &=& (0,-i,d,-d-i,\ldots, kd, -kd-i, \\
&& (k+2)d, -(k+1)d-i, (k+3)d, -(k+2)d-i, \ldots, 2kd, -(2k-1)d-i, \\
&& (4k+2)N-(2k+1)d, -(2k+1)d-i).
\end{eqnarray*}
(Note that in the case $k=1$, $C'(S_{d,i}) = (0,-i,d,-d-i,4N+4e-d, -3d-i)$.)
\begin{lemma}
Let $i \in [1,d]$. Working modulo $n$, the ordered sequence $C'(S_{d,i})$ is a
$(4k+2)$-cycle with difference set $S_{d,i}$.
\label{specialcycle}
\end{lemma}
\begin{proof}
To see that no vertices are repeated (modulo $n$) within the sequence $C'(S_{d,i})$, it suffices to observe that:
\[
\begin{array}{l}
-(4k+2)N < -(2k+1)d-i < -(2k-1)d-i < -(2k-2)d-i < \cdots < -d-i < -i \\
< 0 < d < 2d < \cdots < kd < (k+2)d < (k+3)d < \cdots < 2kd \\
< (4k+2)N-(2k+1)d < (4k+2)N.
\end{array}
\]
By inspection, and since $(4k+2)N-(2k+1)d = 4kN+4e-(2k-1)d$ and $n-((4k+2)N-i+1)=(4k+2)N+i$, the set of differences of the edges of the cycle $C'(S_{d,i})$ is $S_{d,i}$.
\end{proof}
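For $k=1$ the lemma can be checked directly; the Python sketch below takes $N=65$ and $d=21$ (so $n=781$, $e=22$, with $d$ odd like $N$ and inside $(N/3-N/63, N/3)$) and verifies, for every $i\in[1,d]$, that $C'(S_{d,i})$ has six distinct vertices modulo $n$ and difference set $S_{d,i}$.

```python
k, N = 1, 65
n = 2*(4*k + 2)*N + 1            # n = 781
d = 21                           # d ≡ N (mod 2) and d in (N/3 - N/63, N/3)
e = (N - d) // 2                 # e = 22

def check_special_cycle(i):
    """Verify Lemma (specialcycle) for C'(S_{d,i}) in the case k = 1."""
    cyc = [0, -i, d, -d - i, (4*k + 2)*N - (2*k + 1)*d, -(2*k + 1)*d - i]
    verts = [v % n for v in cyc]
    if len(set(verts)) != len(verts):                    # distinct vertices mod n
        return False
    diffs = {min((cyc[(j + 1) % 6] - cyc[j]) % n,
                 (cyc[j] - cyc[(j + 1) % 6]) % n) for j in range(6)}
    S = {i, d + i, 2*d + i, 3*d + i, 4*k*N + 4*e + i, (4*k + 2)*N - i + 1}
    return diffs == S

assert all(check_special_cycle(i) for i in range(1, d + 1))
```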
Note that
\[
\bigcup_{i=1}^d S_{d,i} = [1,4kd] \cup [4kN+4e+1,4kN+4e+d] \cup [(4k+2)N-d+1,(4k+2)N];
\]
since $4kN+4e+d = (4k+2)N-d$, we have that
\[
\bigcup_{i=1}^d S_{d,i} = [1,4kd] \cup [4kN+4e+1,(4k+2)N].
\]
For $j,\ell \in [1,e]$, let
\[
\begin{array}{l}
T_{e,j,1} = \{4kd+j, 4kd+e+j, \ldots, 4kd+(4k-1)e+j\}, \\
T_{e,j,2} = \{ 4kN+j, 4kN+2e+j\}, \\
U_{e,\ell,1} = \{4kd+4ke+\ell, 4kd+(4k+1)e+\ell, \ldots, 4kd+(8k-1)e+\ell\}, \\
U_{e,\ell,2} = \{4kN+e+\ell, 4kN+3e+\ell\},
\end{array}
\]
and set $T_{e,j} = T_{e,j,1} \cup T_{e,j,2}$ and $U_{e,\ell} = U_{e,\ell,1} \cup U_{e,\ell,2}$.
The sets $T_{e,j}$ and $U_{e,\ell}$ are each balanced with $\tau=k+1$. We have that
\[
\left(\bigcup_{j=1}^e T_{e,j}\right) \cup \left(\bigcup_{\ell=1}^e U_{e,\ell}\right) = [4kd+1,4kd+8ke] \cup [4kN+1,4kN+4e] = [4kd+1,4kN+4e],
\]
since $4kd+8ke=4kN$. Observe that for fixed $d$,
\[
\left(\bigcup_{i=1}^d S_{d,i}\right) \cup \left(\bigcup_{j=1}^e T_{e,j} \right) \cup \left(\bigcup_{\ell=1}^e U_{e,\ell}\right) = [1,(4k+2)N],
\]
and thus by Lemmas~\ref{corollary1} and~\ref{specialcycle}, the set of cycles
\[
\mathcal{F}(d)=\{C'(S_{d,i}) \mid i \in [1,d]\} \cup \{C(T_{e,j}) \mid j \in [1,e]\} \cup \{C(U_{e,\ell}) \mid \ell \in [1,e]\}
\]
is a set of starter cycles for a $(4k+2)$-cycle decomposition of $K_n$.
In order to show that the decompositions $\mathcal{F}(d)$, $d \in (N/3-N/(48k+15), N/3)$, are orthogonal, we will make use of the following lemma which is directly implied by Corollary~\ref{squeezier}.
\begin{lemma} \label{squeezy4}
Let $d\neq d'$, $e\neq e'$ and $$d,d',e,e'\in \left(\frac{N}{3} - \frac{N}{48k+15}, \frac{N}{3} + \frac{N}{48k+15}\right).$$ Let $\alpha, \alpha' \in [1,8k+2]$. Then $\alpha d \neq \alpha' d'$ and $\alpha e \neq \alpha' e'$.
Moreover, if $\alpha < \alpha'$, then $\alpha d < \alpha' d'$ and $\alpha e < \alpha' e'$.
\end{lemma}
\begin{lemma}
Suppose that $\beta d + i = \beta'd'+i'$, where $\beta, \beta' \in [0, 4k-1]$,
$i\in [1,d]$, $i' \in [1,d']$ and $d' < d$. Then either $\beta'=\beta$ or $\beta'=\beta+1$.
\label{prelim}
\end{lemma}
\begin{proof}
From Lemma~\ref{squeezy4}, $(\beta+1)d < (\beta+2)d'$. Now,
\[
(\beta-1)d'+i' \leq \beta d' \leq \beta d < \beta d+ i
\]
and
\[
\beta d + i \leq (\beta+1)d < (\beta+2)d' < (\beta+2)d'+i';
\]
hence
\[
(\beta-1)d'+i' < \beta d + i < (\beta+2)d'+i'.
\]
Since $\beta'd'+i' = \beta d + i$, it follows that $(\beta-1)d' < \beta' d' < (\beta+2)d'$, and hence $\beta'\in\{\beta,\beta+1\}$.
\end{proof}
\begin{lemma}
Let $d\neq d'$ such that $d,d' \equiv N\pmod{2}$ and
$$d,d'\in \left(\frac{N}{3} - \frac{N}{48k+15}, \frac{N}{3} + \frac{N}{48k+15}\right).$$
Let $e=(N-d)/2$ and $e'=(N-d')/2$. Let $i \in [1,d]$, $i' \in [1,d']$, $j, \ell \in [1,e]$ and $j', \ell' \in [1,e']$. Then for each $X \in \{S_{d,i}, T_{e,j}, U_{e,\ell}\}$ and each $Y \in \{S_{d',i'}, T_{e',j'}, U_{e',\ell'}\}$, $|X \cap Y| \leq 1$, \emph{with the exception that} $S_{d,i} \cap S_{d',i} = \{i,(4k+2)N-i+1\}$.
\label{Intersections2mod4}
\end{lemma}
\begin{proof}
Recall from the start of this section that $e,e'\in (N/3, N/3 + N/(48k+15))$. In what follows, we frequently apply Lemma~\ref{squeezy4} to $d,d',e$ and $e'$.
To prove the lemma, it suffices to show the following:
\begin{enumerate}
\item[{\bf (i):}] $S_{d,i} \cap S_{d',i} = \{i,(4k+2)N-i+1\}$ and if $i \neq i'$ then $|S_{d,i} \cap S_{d',i'}|\leq 1$;
\item[{\bf (ii):}] $|T_{e,j} \cap T_{e',j'}| \leq 1$, $|U_{e,\ell} \cap U_{e',\ell'}| \leq 1$ and $|T_{e,j} \cap U_{e',\ell'}| \leq 1$;
\item[{\bf (iii):}] $|S_{d,i} \cap T_{e',j'}| \leq 1$ and $|S_{d,i} \cap U_{e',\ell'}| \leq 1$.
\end{enumerate}
\bigskip
\noindent {\bf Proof of (i):} In this case, we may assume without loss of generality that $d'<d$. We note that
\begin{center}
\begin{tabular}{l}
$4kN+4e'+i' > 4kN > 4kd \geq (4k-1)d+i$ \ and \\
$4kN+4e+i > 4kN > 4kd' \geq (4k-1)d'+i'$,
\end{tabular}
\end{center}
so $S_{d,i,1} \cap S_{d',i',2} = S_{d',i',1} \cap S_{d,i,2} = \emptyset$.
Now, supposing that $|S_{d,i,1} \cap S_{d',i',1}| \geq 2$, there are common elements $x$ and $x+\alpha d = x+\alpha' d'$ where $\alpha, \alpha' \in [1,4k-1]$; thus $\alpha d = \alpha' d'$, in contradiction to Lemma~\ref{squeezy4}. Next, supposing that $|S_{d,i,2} \cap S_{d',i',2}| \geq 2$, then either
\begin{center}
\begin{tabular}{ll}
(a) & $4kN+4e+i=4kN+4e'+i'$ and $(4k+2)N-i+1 = (4k+2)N-i'+1$, or \\
(b) & $4kN+4e+i = (4k+2)N-i'+1$ and $4kN+4e'+i' = (4k+2)N-i+1$.
\end{tabular}
\end{center}
In both cases, it is straightforward to check that $e=e'$, a contradiction.
Thus if $|S_{d,i} \cap S_{d',i'}| \geq 2$, it must be that $|S_{d,i,1} \cap S_{d',i',1}|=1$ and $|S_{d,i,2} \cap S_{d',i',2}|=1$.
If $i=i'$ then $\{i,(4k+2)N-i+1\}\subseteq S_{d,i} \cap S_{d',i'}$.
Moreover, recalling that $S_{d,i,1} \cap S_{d',i',2} = S_{d',i',1} \cap S_{d,i,2} = \emptyset$, it follows that $|S_{d,i} \cap S_{d',i'}|=2$. Hence if $i=i'$, then $S_{d,i} \cap S_{d',i} = \{i,(4k+2)N-i+1\}$.
We now assume that $i \neq i'$. From Lemma~\ref{prelim}, $|S_{d,i,1} \cap S_{d',i',1}|=1$ implies that either
\begin{center}
\begin{tabular}{ll}
(a) & $\beta d + i = \beta d' + i'$, or \\
(b) & $\beta d + i = (\beta+1) d' + i'$
\end{tabular}
\end{center}
for some $\beta,\beta' \in [0,4k-1]$. Now suppose that also $|S_{d,i,2} \cap S_{d',i',2}|=1$. Since $i \neq i'$, we note that $(4k+2)N-i+1 \neq (4k+2)N-i'+1$. Also, it cannot be the case that $4kN+4e+i=(4k+2)N-i'+1$, since
\[
4kN+4e+i = (4k+2)N-2d+i \leq (4k+2)N-d < (4k+2)N-d' \leq (4k+2)N-i' < (4k+2)N-i'+1.
\]
Now suppose that $4kN+4e+i=4kN+4e'+i'$. Then $2d-i=2d'-i'$. If (a) is true, then $(\beta+2)d=(\beta+2)d'$; since $\beta+2>0$, we have $d=d'$, a contradiction. On the other hand, if (b) is true, then $(\beta+2)d = (\beta+3)d'$, contradicting Lemma~\ref{squeezy4}. Thus the only remaining possibility is that $4kN+4e'+i'=(4k+2)N-i+1$, so that $i+i'= 2N-4e'+1=2d'+1$ is odd. Since $d$ and $d'$ have the same parity, this contradicts (a), so it must be that (b) is true. It follows that
\[
(\beta+3)d' - \beta d + 1 = 2i \leq 2d.
\]
Thus $(\beta+3)d' \leq (\beta+2)d-1 < (\beta+2)d$, contradicting Lemma~\ref{squeezy4}.
\bigskip
\noindent {\bf Proof of (ii):} We first note that the largest element in $T_{e,j,1} \cup U_{e,\ell,1}$ is $4kd+(8k-1)e+\ell$, while the smallest element of $T_{e,j,2} \cup U_{e,\ell,2}$ is $4kN+j$. Since
\[
4kd+(8k-1)e+\ell \leq 4kd+8ke = 4kN < 4kN+j,
\]
it follows that $T_{e,j,1} \cap T_{e',j',2}=\emptyset$, $U_{e,\ell,1} \cap U_{e',\ell',2}=\emptyset$ and $T_{e,j,1} \cap U_{e',\ell',2}=\emptyset$.
Now, if $|T_{e,j,1} \cap T_{e',j',1}| \geq 2$, $|U_{e,\ell,1} \cap U_{e',\ell',1}| \geq 2$ or $|T_{e,j,1} \cap U_{e',\ell',1}| \geq 2$, then there are common elements $x$ and $x+\alpha e = x+\alpha' e'$, where $\alpha, \alpha' \in [1,8k-1]$. Thus $\alpha e = \alpha' e'$, contradicting Lemma~\ref{squeezy4}. If $|T_{e,j,2} \cap T_{e',j',2}| \geq 2$, $|U_{e,\ell,2} \cap U_{e',\ell',2}| \geq 2$ or $|T_{e,j,2} \cap U_{e',\ell',2}| \geq 2$, then it follows that $e=e'$, a contradiction.
Thus, if $|T_{e,j} \cap T_{e',j'}| \geq 2$, it must be that $|T_{e,j,1} \cap T_{e',j',1}|=1$ and $|T_{e,j,2} \cap T_{e',j',2}|=1$. Since $|T_{e,j,1} \cap T_{e',j',1}|=1$, we have that for some
$\alpha, \alpha' \in [0,4k-1]$,
$4kd+\alpha e + j = 4kd' + \alpha'e' +j'$, which implies that $(8k-\alpha)e-j = (8k-\alpha')e'-j'$. Since $|T_{e,j,2} \cap T_{e',j',2}|=1$, we have $4kN+\beta e + j = 4kN+\beta' e' + j'$ where $\beta, \beta' \in \{0,2\}$.
Hence $(8k-\alpha+\beta) e = (8k-\alpha' + \beta') e'$, which contradicts Lemma~\ref{squeezy4} since
$(8k-\alpha+\beta), (8k-\alpha'+\beta') \in [4k+1 , 8k+2]$.
We conclude that $|T_{e,j} \cap T_{e',j'}| \leq 1$.
Similarly, if $|U_{e,\ell} \cap U_{e',\ell'}| \geq 2$, it must be that $|U_{e,\ell,1} \cap U_{e',\ell',1}|=1$ and $|U_{e,\ell,2} \cap U_{e',\ell',2}|=1$. In that case, for some $\alpha, \alpha' \in [4k , 8k-1]$ we have $4kd+\alpha e + \ell = 4kd' + \alpha' e'+\ell'$, which implies that $(8k-\alpha)e-\ell = (8k-\alpha')e'-\ell'$, and for some $\beta, \beta' \in \{1,3\}$ we have $4kN+\beta e + \ell = 4kN+\beta'e'+\ell'$. Hence $(8k-\alpha+\beta)e = (8k-\alpha'+\beta')e'$, which contradicts Lemma~\ref{squeezy4} since $(8k-\alpha+\beta), (8k-\alpha'+\beta') \in [2 , 4k+3]$. We conclude that $|U_{e,\ell} \cap U_{e',\ell'}| \leq 1$.
Finally, suppose that $|T_{e,j,1} \cap U_{e',\ell',1}|=1$ and $|T_{e,j,2} \cap U_{e',\ell',2}|=1$. Since $|T_{e,j,1} \cap U_{e',\ell',1}|=1$, we have that for some $\alpha \in [0,4k-1]$, $\alpha'\in [4k, 8k-1]$, $4kd+\alpha e + j = 4kd' + \alpha'e'+\ell'$, which implies that $(8k-\alpha)e-j = (8k-\alpha')e'-\ell'$. Since $|T_{e,j,2} \cap U_{e',\ell',2}|=1$, then $4kN+\beta e + j = 4kN+\beta'e'+\ell'$, where
$\beta\in \{0,2\}$ and
$\beta'\in \{1,3\}$. Hence $(8k-\alpha+\beta)e = (8k-\alpha' + \beta')e'$, which contradicts Lemma~\ref{squeezy4} since $4k+1 \leq 8k-\alpha+\beta\leq 8k+2$ and $2\leq 8k -\alpha' + \beta' \leq 4k+3$. We conclude that $|T_{e,j} \cap U_{e',\ell'}| \leq 1$.
\bigskip
\noindent {\bf Proof of (iii):} Note that since
\[
(4k-1)d + i \leq 4kd < 4kN < 4kN+j' < 4kN+e'+\ell',
\]
then $S_{d,i,1} \cap T_{e',j',2} = \emptyset$ and $S_{d,i,1} \cap U_{e',\ell',2} = \emptyset$. Moreover,
\[
4kd'+(4k-1)e'+j' \leq 4kd' + 4ke' < 4kd' + (8k-1)e' + \ell' \leq 4kd' + 8ke' = 4kN < 4kN+4e+i,
\]
and so $S_{d,i,2} \cap T_{e',j',1} = \emptyset$ and $S_{d,i,2} \cap U_{e',\ell',1} = \emptyset$.
By Lemma~\ref{squeezy4},
\[
(4k-2)d+i \leq (4k-1)d < 4kd'< 4kd'+j'.
\]
It follows that $|S_{d,i,1} \cap T_{e',j',1}| \leq 1$.
Also, since $d<N/3<e'$,
\[
(4k-1)d+i\leq 4kd <4ke' <4kd'+4ke' + \ell',
\]
and thus $S_{d,i,1} \cap U_{e',\ell',1} = \emptyset$.
Now, using Lemma~\ref{squeezy4}, we also have that
\[
4kN+e'+\ell' \leq 4kN+2e' < 4kN+2e'+j' \leq 4kN+3e' < 4kN+4e < 4kN+4e+i,
\]
and so $S_{d,i,2} \cap T_{e',j',2} = \emptyset$ and $|S_{d,i,2} \cap U_{e',\ell',2}| \leq 1$. It follows that $|S_{d,i} \cap T_{e',j'}| \leq 1$ and $|S_{d,i} \cap U_{e',\ell'}| \leq 1$.
\end{proof}
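The intersection pattern of the lemma can be confirmed computationally on a small instance; the Python sketch below takes $k=1$ and $N=255$, for which $d=81$ and $d'=83$ are admissible (both odd, like $N$, and both in the stated interval), and checks every pair of difference sets, including the exceptional pairs $S_{d,i}$, $S_{d',i}$.

```python
k, N = 1, 255                    # interval (N/3 - N/63, N/3) = (80.95..., 85)

def families(d):
    """Difference sets S_{d,i}, T_{e,j}, U_{e,l} of this section for k = 1."""
    e = (N - d) // 2
    S = [frozenset({i, d + i, 2*d + i, 3*d + i, 4*N + 4*e + i, 6*N - i + 1})
         for i in range(1, d + 1)]
    T = [frozenset({4*d + a*e + j for a in range(4)} | {4*N + j, 4*N + 2*e + j})
         for j in range(1, e + 1)]
    U = [frozenset({4*d + (4 + a)*e + l for a in range(4)}
                   | {4*N + e + l, 4*N + 3*e + l})
         for l in range(1, e + 1)]
    return S, T, U

SA, TA, UA = families(81)        # d = 81 (e = 87)
SB, TB, UB = families(83)        # d' = 83 (e' = 86)
A, B = SA + TA + UA, SB + TB + UB
for x, X in enumerate(A):
    for y, Y in enumerate(B):
        if x == y and x < len(SA):           # exceptional pair S_{d,i}, S_{d',i}
            assert X & Y == {x + 1, 6*N - x}
        else:
            assert len(X & Y) <= 1
```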
\begin{theorem}
Let $k \geq 1$ and $n=(8k+4)N+1$. There is a set of mutually orthogonal cyclic $(4k+2)$-cycle systems of order $n$ of size at least
$$\frac{N}{96k+30}-1 = \frac{n-1}{(8k+4)(96k+30)}-1.$$ Thus, if $n \equiv 1\pmod{2(4k+2)}$, then
\[
\mu(n,4k+2) \geq \mu'(n,4k+2) \geq \frac{n-1}{(8k+4)(96k+30)} - 1.
\]
\label{Case4k+2}
\end{theorem}
\begin{proof}
The number of integers strictly between $N/3 - N/(48k+15)$ and $N/3$ with the same parity as $N$ is at least $N/(96k+30)-1$.
It thus suffices to show that for distinct integers $d$ and $d'$ with the same parity such that
\[
d,d'\in \left(\frac{N}{3} - \frac{N}{48k+15}, \frac{N}{3}\right),
\]
the decompositions $\mathcal{F}(d)$ and $\mathcal{F}(d')$ are orthogonal.
In turn, it suffices to deal with the exceptional case from Lemma~\ref{Intersections2mod4}. From Lemma~\ref{specialcycle}, the edges of differences $i$ and $(4k+2)N-i+1$ within $C'(S_{d,i})$ are $\{0,-i\}$ and $\{(4k+2)N-(2k+1)d,-(2k+1)d-i\}$, which are at distance $(4k+2)N - (2k+1)d+i$. Similarly, the edges of differences $i$ and $(4k+2)N-i+1$ within $C'(S_{d',i})$ are $\{0,-i\}$ and $\{(4k+2)N-(2k+1)d',-(2k+1)d'-i\}$, which are at distance $(4k+2)N-(2k+1)d'+i$. If the pairs of edges within cycles generated from the starters $C'(S_{d,i})$ and $C'(S_{d',i})$ coincide, then we must have that $(2k+1)d \equiv (2k+1)d'\pmod{n}$. But $n$ and $2k+1$ are coprime, so $d=d'$.
\end{proof}
\section{Concluding remarks}
\label{conclus}
The main results of this paper have been to establish lower bounds on the number of mutually orthogonal cyclic $\ell$-cycle systems of order $n$.
For upper bounds on the number of systems (not necessarily cyclic in nature) we have the following lemmata.
\begin{lemma} If there exists a set of $\mu$ mutually orthogonal $\ell$-cycle systems of order $n$, then $\mu\leq n-2$.
That is, $\mu(\ell,n)\leq n-2$.
\label{uppa1}
\end{lemma}
\begin{proof}
Consider a vertex $w$ in $K_n$. The vertex $w$ belongs to precisely $(n-1)(n-2)/2$ paths of length $2$ in $K_n$ where $w$ is the center vertex of the path.
Moreover, each such path belongs to at most one $\ell$-cycle from any set of $\mu$ mutually orthogonal $\ell$-cycle systems.
The number of cycles in one $\ell$-cycle system which contain vertex $w$ is equal to $(n-1)/2$.
Thus $\mu(n-1)/2\leq (n-1)(n-2)/2$. The result follows.
\end{proof}
\begin{lemma} Let $\ell\geq 4$.
Then $$\mu(\ell,n)\leq \frac{(n-2)(n-3)}{2(\ell-3)}.$$
\label{uppa2}
\end{lemma}
\begin{proof}
Suppose there exists a set $\{{\mathcal F}_1,{\mathcal F}_2,\dots , {\mathcal F}_{\mu}\}$ of mutually orthogonal $\ell$-cycle systems of $K_n$.
Consider an edge $\{v,w\}$ in $K_n$. Then for each $i\in [1,\mu]$, there is an $\ell$-cycle $C_i\in {\mathcal F}_i$ containing the edge $\{v,w\}$. Let $H$ be the clique of size $n-2$ in $K_n$ not including vertices $v$ and $w$. Then
the intersection of $C_i$ with $H$ is a path $P_i$ with $\ell-3$ edges. Moreover, orthogonality implies that the paths in the set $\{P_i\mid i\in [1,\mu]\}$ are pairwise edge-disjoint.
Thus, $(\ell-3)\mu$ is bounded by the number of edges in $H$; that is, $(\ell-3)\mu\leq (n-2)(n-3)/2$.
\end{proof}
Observe that Lemma \ref{uppa2} improves Lemma \ref{uppa1} only if $\ell >(n+3)/2$.
If $\ell>n/\sqrt{2}$, it is not even possible to find a pair of orthogonal cycle systems, as shown in the following lemma.
\begin{lemma} If $2\ell^2>n(n-1)$ then $\mu(\ell,n)\leq 1.$
\label{uppa3}
\end{lemma}
\begin{proof}
Suppose there exists a pair $\{{\mathcal F}_1,{\mathcal F}_2\}$ of mutually orthogonal $\ell$-cycle systems of $K_n$.
Then ${\mathcal F}_1$ and ${\mathcal F}_2$ each contain $n(n-1)/(2\ell)$ cycles of length $\ell$.
Let $C$ be a cycle in ${\mathcal F}_1$. By the definition of orthogonality, no two edges of $C$ lie in the same cycle of ${\mathcal F}_2$, so the $\ell$ edges of $C$ lie in $\ell$ distinct cycles of ${\mathcal F}_2$.
Thus $\ell \leq n(n-1)/(2\ell)$, a contradiction.
\end{proof}
When the systems are required to be cyclic, Lemma \ref{uppa1} can be slightly improved.
\begin{lemma}\label{CyclicUpperBound} Let $n\geq 4$. If there exists a set of $\mu'$ mutually orthogonal cyclic $\ell$-cycle systems of order $n$, then $\mu'\leq n-3$.
That is, $\mu'(\ell,n)\leq n-3$.
\end{lemma}
\begin{proof}
Since $\mu'(\ell,n)\leq \mu(\ell,n)$, Lemma \ref{uppa1} implies that $\mu'(\ell,n)\leq n-2$. Suppose, for the sake of contradiction, that $\mu'(\ell,n)=n-2$.
Thus there exists a set of $n-2$ orthogonal cyclic decompositions of $K_n$ where the vertices are labelled with elements of $\mathbb{Z}_n$. Let $a\in [1,(n-1)/2]$.
Suppose that the path $(-a,0,a)$ of length $2$ does not occur in a cycle from one of these decompositions. Then the cycles containing the vertex $0$ account for fewer than $(n-1)(n-2)/2$ distinct paths of length $2$ centred at $0$. However, there are $(n-2)(n-1)/2$ such cycles, so two of them share a path of length $2$, and hence share two edges, contradicting the condition of orthogonality.
Let $C_a$ be the cycle containing the path $(-a,0,a)$ and let ${\mathcal F}$ be the decomposition of $K_n$ containing $C_a$.
Since our decomposition is cyclic, there is also a cycle $C'\in {\mathcal F}$ containing $(0,a,2a)$; since $C'$ and $C_a$ share an edge we must have $C'=C_a$.
Inductively, $C_a=(0,a,2a,\dots )$.
In particular $C_1=(0,1,2,\dots ,n-1)$ and thus $\ell=n$.
But since $\mu'(\ell,n)=n-2 \geq 2$ and $\ell=n>(n-1)/2$, there is a cycle $C''\neq C_a$ in a decomposition ${\mathcal F}'\neq {\mathcal F}$
containing a repeated difference $a\in [1,(n-1)/2]$.
The cycle $C''$ shares two edges with $C_a$, contradicting the condition of orthogonality.
\end{proof}
It is worth noting that for certain congruences the upper bound in Lemma~\ref{CyclicUpperBound} can be made significantly smaller.
For example, if $n \equiv 3\pmod{6}$ then $\mu'(3,n)= 1$, because in this case any cyclic decomposition necessarily contains the cycle $(0,n/3, 2n/3)$.
In the appendix we give computational results for $\mu'(\ell,n)$ when $\ell$ and $n$ are small.
As yet we are unaware of any instances for which the bound of Lemma~\ref{CyclicUpperBound} is tight,
and so we ask if equality ever occurs.
\begin{question}
For which values of $\ell$ and $n$, if any, is $\mu'(\ell,n) = n-3$?
\end{question}
\section*{Acknowledgements}
Authors A.C.\ Burgess and D.A.\ Pike acknowledge research support from NSERC Discovery Grants
RGPIN-2019-04328
and RGPIN-2016-04456, respectively.
Thanks are given to
the Centre for Health Informatics and Analytics of the Faculty of Medicine at Memorial University of Newfoundland
for access to computational resources.
1304.5470
\section{Introduction}
\label{section:introduction}
The evolution of the surface rotation of low-mass stars along the
pre-main sequence (hereafter PMS) follows a specific path as shown
by the data, both rotation periods and $v \sin i$ measurements,
collected in young stellar clusters
\citep[e.g.][ and references therein for a review]{IrwinBouvier2009}.
The rotational properties of young stars appear to result from an intricate interplay between several physical processes that affect the angular
momentum gains, losses, and redistribution as the stars evolve along the
PMS towards the zero age main sequence (hereafter ZAMS).
These mechanisms can be roughly divided into two classes.
The first ones result from the connection of the stars to their environment (magnetic and dynamic coupling to a
circumstellar disk, accretion, jets, stellar and disk winds, etc.), and are particularly crucial during the T-Tauri phase, when star-disk interaction is observed and
expected to be strong \citep[see, for instance,][]{Shu1994,MattPudritz2005,ZanniFerreira2012}.
The broad variety of possible star-environment configurations may, in particular, explain part of the large dispersion in rotation rates of solar-type stars observed along the PMS and at the arrival on the ZAMS \citep[e.g., ][]{Staufferetal85,IrwinBouvier2009}.
The second class is related to stellar secular evolution and consists of the (magneto-) hydrodynamical transport
mechanisms that contribute to redistributing angular momentum inside the stars themselves.
In the present paper we focus on exploring these internal mechanisms once the disk has dissipated and the accretion process is over, which occurs after 3--10 Myr
\citep[e.g.,][]{Haischetal01,Hartmann05,Hernandezetal08}, i.e., roughly at the time when a radiative core appears in the contracting PMS stars.
Our aim is to evaluate, in particular and for the first time, the
interplay between internal gravity waves
(hereafter IGW), meridional circulation, turbulent shear, and stellar
contraction during the PMS, considering that IGW are one of the best candidate mechanisms
to explain the flat angular velocity profile inside the Sun as
revealed by helioseismology \citep{CharbonnelTalon2005Science}.
This work is also motivated by the results of \citet[][ hereafter TC08]{TalonCharbonnel2008}, who showed that IGW are efficiently excited inside intermediate-mass PMS stars and
suggested that waves should efficiently transport angular momentum during the PMS evolution, which should
affect the angular velocity profile at this phase and at the arrival on the ZAMS.
In \S\ref{section:formalism} we introduce the formalism for IGW excitation and for the transport of angular momentum through the various mechanisms considered.
We describe in \S \ref{sec:models} the basic assumptions for the present grid of PMS models for low-mass (0.6 to 2~M$_{\odot}$), solar-metallicity stars.
In \S \ref{sec:IGWexcitation} we examine IGW generation by the convective envelope and the convective core (when present)
along the PMS evolution for the whole mass range covered by the grid.
In \S~\ref{evolutionrotprofile} we describe the impact of the various interacting transport mechanisms on the evolution of the internal rotation profile for the various stellar masses, and in \S~\ref{section:globalproperties} we briefly discuss their influence on the surface rotation velocity and lithium abundance, as well as on global stellar properties.
Conclusions are presented in \S~\ref{sec:conclusion}.
\section{Formalism}
\label{section:formalism}
We follow \citet[][ hereafter TC05]{TalonCharbonnel2005} for the treatment of both the excitation of IGW and the transport of angular momentum and chemicals
by waves, meridional circulation, and shear turbulence in hydrodynamical stellar models.
We, however, underline three main improvements over TC05. First, we consider IGW generated both by the external and central convective regions (when present), while only those excited by the convective envelope were considered in our previous work. Second, the important variations in the stellar structure and of the IGW properties along the pre-main sequence (see \S~\ref{sec:IGWexcitation} and TC08)
require that we then compute the wave spectra at each evolution time step,
while the main sequence computation presented in TC05 was based on the wave spectrum of the stellar model on the ZAMS. Finally, we account here for both prograde and retrograde waves in the whole radiative interior, while only the latter ones were considered previously.
We recall below the formalism (i.e., relevant equations and assumptions) that is included in the evolution code STAREVOL (see TC05, TC08, and \citealt{Mathisetal13}).
\subsection{IGW generation}
\label{subsection:excitation}
In this exploratory work we apply the \citet{GoldreichMurray1994} formalism as adapted by \citet{KQ1997} to calculate the spectrum of IGW excited by Reynolds stress and buoyancy in the bulk of convective regions \citep[see e.g.][]{ZahnTalon1997}.
We do not consider the possible effects of IGWs generated by convective overshooting plumes, since no analytical prescription is available to describe this excitation mechanism (see details and discussion in TC05).
However, as we will see in \S~\ref{subsubsection:generaltransport}, we multiply the IGW flux by a factor of 2 in the transport equation in order to account for the recent results by \citet{LecoanetQuataert13}.
We treat the waves by assuming that they are pure gravity waves (i.e., not modified by the Coriolis acceleration) that only feel the entrainment by differential rotation.
The kinetic energy flux per unit frequency is
\begin{eqnarray}
{\cal F}_E \left( \ell, \omega \right) &=& \frac{\omega^2}{4\pi} \int dr\; \frac{\rho^2}{r^2}
\left[\left(\frac{\partial \xi_r}{\partial r}\right)^2 +
\ell(\ell+1)\left(\frac{\partial \xi_h}{\partial r}\right)^2 \right] \nonumber \\
&& \times \exp\left[ -h_\omega^2 \ell(\ell+1)/2r^2\right] \frac{v_c^3 L^4 }{1
+ (\omega \tau_L)^{15/2}}
\label{gold}
\end{eqnarray}
with $\xi_r$ and $[\ell(\ell+1)]^{1/2}\xi_h$ the radial and horizontal displacement wave functions normalized to unit energy flux at the
edge of the considered convection zone, $v_c$ the convective velocity,
$L=\alpha_{\rm MLT} H_P$ the radial size of an energy bearing turbulent eddy,
$\tau_L \approx L/v_c$ the characteristic convective time, $H_P$ the pressure scale height $P/\rho g$, and $h_\omega$ the radial size of the largest eddy at depth $r$ with characteristic frequency of $\omega$ or higher ($h_\omega = L \min\{1, (2\omega\tau_L)^{-3/2}\}$).
The radial and horizontal wave numbers (respectively $k_r$ and $k_h$) are related by
\begin{equation}
k_r^2 = \left( \frac{N^2}{\omega^2} -1 \right) k_h^2 =
\left( \frac{N^2}{\omega^2} -1 \right) \frac{\ell \left( \ell +1 \right)}{r^2} \label{kradial}
\end{equation}
where $N^2$ is the Brunt-V\"ais\"al\"a frequency\footnote{The Brunt-V\"ais\"al\"a - or buoyancy - frequency $N$ is given by
$N^2 = N_T^2+N_\mu^2 = \frac{g}{H_P} \left( \delta(\nabla_\text{ad}-\nabla)+\varphi\nabla_\mu \right)$, with $\delta = -(\partial \ln \rho/\partial \ln T )_{P,\mu}$, $\varphi =
(\partial \ln \rho/\partial \ln \mu)_{P,T}$, $\nabla$ the logarithmic temperature gradient, and $\nabla_\mu$ the mean molecular weight gradient.}.
At the considered convective edge (located at radius $r_{cz}$), the mean flux of angular momentum carried by a monochromatic wave of spherical order $\ell$ and local (i.e., emission) frequency $\omega$ ($m$ being the azimuthal order), i.e., the momentum flux per unit frequency,
is related to the kinetic energy flux by
\begin{equation}
{{\cal F}_{J,{r_{cz}}}} \left( m, \ell, \omega \right) = \frac{2m}{\omega} {\cal F}_E \left( \ell, \omega \right)
\end{equation}
\citep{GoldreichNicholson1989,ZahnTalon1997}.
The so-called angular momentum luminosity at the considered convective edge (envelope or core) is obtained after horizontal
integration
\begin{equation}
{{\cal L}_J}_{\ell, m} \left( r_{\rm cz} \right) = 4 \pi r_{cz}^2 {\cal F}_{J,r_{cz}}.
\label{eq:angmomluminosity}
\end{equation}
\subsection{IGW damping}
\label{subsection:transportIGW}
Deposition of angular momentum (positive or negative) within the radiative layers occurs
at the depth where individual monochromatic waves are eventually damped by thermal diffusivity and viscosity in corotation resonance \citep{GoldreichNicholson1989,Schatzman1993,ZahnTalon1997}.
In the present study the local momentum luminosity at a given radius $r$ within the radiative region accounts for prograde and retrograde waves (i.e., with respectively positive and negative $m$ values) generated by both the convective envelope and the convective core (if present), i.e.,
\begin{equation}
{\cal L}_J(r) = {\cal L}_{J\text{,env}}+{\cal L}_{J,\text{core}}\label{eq:envcore},\end{equation}
where each component is given by
\begin{equation}
{\cal L}_{J,{\rm cz}}(r) = \sum_{\sigma, \ell, m} {{\cal L}_J}_{\ell, m} \left( r_{\rm cz} \right)
\exp \left[ -\tau(r, \sigma, \ell) \right] \label{locmomlum},
\end{equation}
where `${\rm cz}$' refers to the interface between the radiative region and the corresponding convection zone (i.e., envelope or core).
The local damping integral
\begin{equation}
\tau(r, \sigma, \ell) = \left[ \ell(\ell+1) \right] ^{3\over2} \int_r^{r_{cz}}
\left( K_T + \nu_v \right) \; {N {N_T}^2 \over
\sigma^4} \left({N^2 \over N^2 - \sigma^2}\right)^{1 \over 2} {{\rm d} r
\over r^3} \label{optdepth}
\end{equation}
takes the mean molecular weight stratification into account \citep{GoldreichNicholson1989,Schatzman1993,ZahnTalon1997}, as well as
the thermal diffusivity and the (vertical) turbulent viscosity ($K_T$ and $\nu_v$ respectively).
Here, $\sigma$ is the local Doppler-shifted frequency
\begin{equation}
\sigma(r) = \omega - m
\left[ \Omega(r)-\Omega _{\rm cz} \right] \label{sigma}
\end{equation}
with $\omega$ the wave frequency in the reference frame of the corresponding emitting convection zone that rotates with the angular velocity $\Omega _{\rm cz}$.
As can be seen from these expressions, angular momentum redistribution by IGW within the radiative region is dominated by low-frequency ($\sigma \ll N$), low-degree waves;
indeed, those penetrate deeper, and their prograde and retrograde components
experience strong differential damping, as required to produce a net momentum deposition.
In contrast, high-degree waves are damped closer to the convection zone (since damping $\propto \left[ \ell(\ell+1) \right] ^{3\over2}$), and high-frequency waves experience less differential damping.
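To make these scalings concrete, the sketch below evaluates the damping integral of Eq.~\ref{optdepth} by trapezoidal quadrature on toy, constant profiles (all values assumed; in a real model the integrand varies strongly with depth).

```python
import math

def damping_integral(r_grid, KT, nu_v, N2, NT2, sigma, ell):
    """Trapezoidal estimate of tau(r, sigma, ell):
    [l(l+1)]^(3/2) * int (K_T+nu_v) N N_T^2 / sigma^4
    * (N^2/(N^2-sigma^2))^(1/2) dr/r^3, from r_grid[0] to the edge."""
    pref = (ell * (ell + 1)) ** 1.5
    def f(i):
        N = math.sqrt(N2[i])
        return ((KT[i] + nu_v[i]) * N * NT2[i] / sigma**4
                * math.sqrt(N2[i] / (N2[i] - sigma**2)) / r_grid[i]**3)
    tau = 0.0
    for i in range(len(r_grid) - 1):
        tau += 0.5 * (f(i) + f(i + 1)) * (r_grid[i + 1] - r_grid[i])
    return pref * tau

# toy constant profiles (assumed, cgs): N^2 = N_T^2 = 1e-6, K_T = 1e7, nu_v = 0
n = 50
r = [1e10 + i * (2e10 - 1e10) / (n - 1) for i in range(n)]
tau1 = damping_integral(r, [1e7]*n, [0.0]*n, [1e-6]*n, [1e-6]*n, 1e-5, 1)
tau2 = damping_integral(r, [1e7]*n, [0.0]*n, [1e-6]*n, [1e-6]*n, 2e-5, 1)
```

With everything else fixed, doubling $\sigma$ reduces $\tau$ by a factor $\simeq 2^4 = 16$; this steep frequency dependence is the differential-damping effect exploited in the discussion above.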
\subsection{Global transport of angular momentum by IGW, meridional circulation, and shear turbulence}
\label{subsection:transportglobal}
\subsubsection{General equations}
\label{subsubsection:generaltransport}
We assume solid-body rotation in the convective regions.
In the stellar radiative regions, the evolution of angular momentum through advection by meridional circulation, diffusion by shear turbulence, and deposit or extraction by IGW follows the general expression below \citep[e.g.,][]{TalonZahn1998}:
\begin{eqnarray}
\rho \frac{\diff}{\diff t} \left[ r^2 {\Omega} \right] &= &
\frac{1}{5 r^2} \frac{\partial}{\partial r} \left[ \rho r^4 \Omega U \right]
+ \frac{1}{ r^2} \frac{\partial}{\partial r} \left[ \rho \nu_v r^4 \dr{\Omega} \right]
\nonumber \\
&& - {2} \frac{3}{8\pi} \frac{1}{r^2} \frac{\partial}{\partial r}{{\cal L}_J(r)},
\label{ev_omega}
\end{eqnarray}
where $U$ is the radial meridional circulation velocity, $\nu_v$ the turbulent viscosity due to differential rotation, and $\rho$ the density.
We have added a factor 2 in the last term to account for the study by \citet{LecoanetQuataert13}, who predict the IGW flux due to turbulent convection to be a few to five times larger than in previous estimates by, e.g., \citet{GoldreichKumar90} and \citet{GoldreichMurray1994}. However, as we show in \S~\ref{subsubsect:mershearwaves}, our conclusions are not sensitive to this multiplication factor.
Following \citet{DecressinMathis2009} and \citet{Mathisetal13} we integrate Eq.~\ref{ev_omega} over
an isobar enclosing the mass $m\left(r\right)$ to obtain the expression of the total flux (loss or gain) of angular momentum carried by the considered transport processes:
\begin{eqnarray}
\Gamma(m) = \frac{1}{4\pi}\frac{\rm d}{{\rm d}t}\left[\int_{0}^{m\left(r\right)}{r'}^2\overline{\Omega}\, {\rm d}m'\right] = - F_{\rm MC}\left(r\right) - F_{\rm S}\left(r\right) + F_{\rm IGW}\left(r\right)
\label{fluxAM}
\end{eqnarray}
where the fluxes driven by meridional circulation, vertical shear-induced turbulence, and IGWs are, respectively,
\begin{equation}
F_{\rm MC}\left(r\right)=-\frac{1}{5}\overline\rho r^4{\overline\Omega}U_{2}
\label{eq:Fmc}
\end{equation}
\begin{equation}
F_{\rm S}\left(r\right)= - \overline\rho r^4 \nu_{v}\partial_{r}{\overline\Omega}
\label{eq:FS}
\end{equation}
\begin{equation}
F_{\rm IGW}\left(r\right)= \frac{3}{8 \pi} {\cal L}_J(r).
\label{eq:FIGW}
\end{equation}
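A direct transcription of Eqs.~\ref{eq:Fmc}-\ref{eq:FIGW} makes the sign conventions explicit; the Python sketch below uses assumed cgs inputs and is not code from STAREVOL.

```python
import math

def transport_fluxes(rho, r, Omega, U2, nu_v, dOmega_dr, L_J):
    """Angular momentum fluxes carried by meridional circulation,
    shear turbulence, and IGW, following Eqs. F_MC, F_S, F_IGW."""
    F_MC  = -0.2 * rho * r**4 * Omega * U2        # -(1/5) rho r^4 Omega U_2
    F_S   = -rho * r**4 * nu_v * dOmega_dr        # -rho r^4 nu_v dOmega/dr
    F_IGW = 3.0 / (8.0 * math.pi) * L_J           # (3/8pi) L_J(r)
    return F_MC, F_S, F_IGW

# assumed cgs values: clockwise circulation (U_2 > 0), negative Omega gradient
F_MC, F_S, F_IGW = transport_fluxes(rho=1e-2, r=3e10, Omega=3e-6,
                                    U2=1e-5, nu_v=1e5, dOmega_dr=-1e-16,
                                    L_J=1e36)
```

With these conventions, a negative flux corresponds to angular momentum transported towards the central regions by the corresponding mechanism.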
\subsubsection{Meridional circulation}
\label{subsubsection:meridionalcirculation}
As can be seen in Eq.~\ref{ev_omega} the transport of angular momentum through meridional circulation is treated as an advective process.
As in our previous studies we apply the formalism developed by
\citet{Zahn1992}, \citet{MaederZahn1998} and \citet[][ see also \citealt{DecressinMathis2009}]{MathisZahn2004}.
\subsubsection{Shear-induced turbulence}
\label{subsubsection:turbulence}
Shear-induced turbulence is assumed to be highly anisotropic.
Following TC05 we assume that the turbulent diffusion coefficient equals turbulent viscosity and use the corresponding expression given by \citet{TalonZahn1997}, i.e.,
\begin{equation}
D_v = \nu_v = \frac{8}{5} \frac {Ri_{\rm crit} (r {\rm d}
\Omega/{\rm d} r)^2}{N^{2}_{T}/(K_T+D_h)+N^{2}_{\mu}/D_h}
\label{Dv}
\end{equation}
that considers the weakening effect of thermal diffusivity ($K_T$) on the thermal
stratification and of horizontal turbulence ($D_h$, see below) on both the thermal and mean
molecular weight stratifications.
For the treatment of horizontal turbulent viscosity, we follow \citet{Zahn1992}, again as in TC05:
\begin{eqnarray}
D_h = \nu_h = \frac{r}{C_h}\sqrt{\left|\frac{1}{3 \rho r}\frac{{\rm d} (\rho r^2 U)}{{\rm d}
r}-\frac{U}{2}\frac{{\rm d} \ln r^2\Omega}{{\rm d} \ln r}\right|^2 + U^2}
\label{eq:Dh}
\end{eqnarray}
with $C_h=1$.
The influence of the prescriptions assumed for $D_v$ and $D_h$ will be investigated in a future paper.
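For reference, Eq.~\ref{Dv} translates directly into a few lines of Python; the numbers below are assumed, round cgs values chosen only to exercise the formula.

```python
def vertical_diffusivity(Ri_crit, r, dOmega_dr, NT2, Nmu2, KT, Dh):
    """D_v = (8/5) Ri_crit (r dOmega/dr)^2 /
    (N_T^2/(K_T+D_h) + N_mu^2/D_h)  (Talon & Zahn 1997 form)."""
    shear2 = (r * dOmega_dr) ** 2
    return 1.6 * Ri_crit * shear2 / (NT2 / (KT + Dh) + Nmu2 / Dh)

# assumed cgs values: modest shear, no mu-gradient, K_T >> D_h
Dv = vertical_diffusivity(Ri_crit=0.25, r=3e10, dOmega_dr=1e-16,
                          NT2=1e-6, Nmu2=0.0, KT=1e7, Dh=1e5)
```

A larger $K_T$ weakens the stabilizing thermal stratification in the denominator and thus increases $D_v$, as stated above.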
\subsection{Transport of chemicals}
We treat the transport of chemical species in the radiative regions as a diffusive process through the combined action of meridional circulation and shear-induced turbulence \citep{ChaboyerZahn1992}.
The effective diffusion coefficient is written
\begin{equation}
D_{\rm eff} = \frac{ \left| r U(r) \right|^2}{30\,D_h}
\label{eq:Deff}
\end{equation}
where $D_h$ is the horizontal component of the turbulent diffusivity (see Eq.~\ref{eq:Dh}).
In the present study we neglect atomic diffusion, whose effects require timescales much longer than the very short duration of the pre-main sequence phase.
We also neglect possible wave-induced turbulence.
Therefore the expression for the transport of chemicals (here, the mass fraction $X_i$ of the element $i$) in the stellar radiative region reads \citep[see e.g.][]{MeynetMaeder2000}:
\begin{eqnarray}
\left(\frac{{\rm d} X_i}{{\rm d}t}\right)_{M_r}= \frac{\partial}{\partial
M_r}\left[(4\pi
r^2\rho)^2\left(D_v+D_{\rm eff}\right) \frac{\partial X_i}{\partial M_r}\right] + \left(\frac{{\rm d}X_i}{{\rm d}t}\right)_{\rm nucl},
\label{eq:transportchemicals}
\end{eqnarray}
where $d M_{r}=4\pi{\overline\rho} r^2 d r$, and the last term accounts for nuclear destruction or production of the considered element.
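A minimal explicit finite-volume sketch of Eq.~\ref{eq:transportchemicals} (nuclear term omitted, uniform toy grid, no time-step stability control; all profiles assumed) illustrates that the diffusive operator conserves the total mass of the species:

```python
import math

def diffuse_step(X, dm, D_tot, rho, r, dt):
    """One explicit step of diffusive transport of a mass fraction X on a
    Lagrangian mass grid.  Interface fluxes use
    (4 pi r^2 rho)^2 (D_v + D_eff) dX/dm; boundary fluxes are zero."""
    n = len(X)
    flux = [0.0] * (n + 1)                     # fluxes at shell interfaces
    for i in range(n - 1):
        sigma = (4.0 * math.pi * r[i]**2 * rho[i])**2 * D_tot[i]
        flux[i + 1] = sigma * (X[i + 1] - X[i]) / (0.5 * (dm[i] + dm[i + 1]))
    return [X[i] + dt * (flux[i + 1] - flux[i]) / dm[i] for i in range(n)]

# toy grid chosen so that 4 pi r^2 rho = 1 in each shell
n = 5
X0 = [1.0, 0.0, 0.0, 0.0, 0.0]
Xn = diffuse_step(X0, dm=[1.0]*n, D_tot=[0.1]*n,
                  rho=[1.0/(4.0*math.pi)]*n, r=[1.0]*n, dt=1.0)
```

The abundance peak spreads to the neighboring shell while the total amount of the element is unchanged, as required for a purely diffusive step.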
\section{Stellar models}
\label{sec:models}
\subsection{Input physics and basic assumptions}
\label{subsec:inputphysics}
We focus on the pre-main sequence evolution of solar-metallicity stars in the mass range between 0.6 and 2.0~M$_{\odot}$.
We adopt the solar composition of \citet{AsplundGrevesse2009}.
Opacity tables are updated accordingly both at high and low temperature respectively from OPAL and Wichita websites\footnote{\url{http://adg.llnl.gov/Research/OPAL/opal.html}; \url{http://webs.wichita.edu/physics/opacity/}} \citep[see e.g.][]{IglesiasRogers1996,FergusonAlexander2005}.
The mixing length parameter $\alpha_\text{MLT} = 1.63$ is calibrated so that our
standard (i.e., non rotating) 1 \ensuremath{M_\odot}{}, \ensuremath{Z_\odot}{} model fits the solar radius,
effective temperature, and luminosity at the age of the sun.
Convection zone boundaries are defined by the Schwarzschild criterion,
and we do not account for convective overshoot.
Computations are performed with the stellar evolution code STAREVOL \citep[see e.g. TC05, ][]{LagardeDecressin2012}.
Initial models are totally convective polytropic stars, with central temperature lower than 10$^6$~K (i.e., deuterium burning has not yet occurred).
We follow the PMS evolution along the Hayashi track
up to the arrival on the ZAMS, which we define as the point where the ratio between central and surface hydrogen abundance reaches
0.998.
The stellar mass is assumed to be constant during that phase (i.e., neither accretion nor mass loss).
For each stellar mass we compute classical models (i.e., without any transport of angular momentum or chemicals) as well as rotating models with and without IGW.
We neglect the hydrostatic effects of the centrifugal force in all our rotating models but two; we discuss the impact of this simplification in \S~\ref{evolutionrotprofile}.
The evolution tracks of the classical models in the Hertzsprung-Russell diagram are shown in Fig.~\ref{fig:surfdetails}.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_renv_svg}%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_tenv_svg}\\%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_Fconv_svg}%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_kt_svg}%
\caption{PMS tracks in the Hertzsprung-Russell diagram for solar metallicity stars with initial masses between 0.6 and 2.0~\ensuremath{M_\odot}{}
(classical models are shown here) and properties of the external convective layers.
Colors indicate the radial extent of the convective envelope (top left panel),
the temperature at its bottom
(top right panel), the maximal convective flux (bottom left panel), and
the thermal diffusivity below the envelope (bottom right panel).
Dashed lines connect points with similar values for these quantities, and the colored axes are in cgs units.
The dotted parts of the tracks correspond to the phase when the stars are still fully convective
}
\label{fig:surfdetails}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_pmsall_rcorehrd}%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_pmsall_Tcorehrd}\\%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_allpms_coreFconv}%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_allpms_coreKt}%
\caption{Same as Fig.~\ref{fig:surfdetails}, but for the properties of the convective core. Here dotted parts on the tracks indicate the phase when the convective core is not yet present}
\label{fig:coredetails}
\end{figure*}
\subsection{Initial internal and surface rotation}
\label{subsec:rotation}
We assume solid-body rotation while stars are fully convective (which corresponds to the dotted part of the tracks in Fig.~\ref{fig:surfdetails}) and we start computing the evolution of surface and internal rotation under the action of stellar contraction, meridional circulation, turbulence, and IGW when the radiative core appears,
which happens at ages between $\sim$ 0.5 and 7.5~Myr for the mass range considered (see $\tau$(core) in Table~\ref{table:properties}), and at $\sim$2.5~Myr for the 1.0~M$_{\odot}$ model.
At that time, most or even all low-mass stars have already lost their disks as shown by observations in very young clusters \citep[e.g.][]{Haischetal01,Hartmann05,Hernandezetal08}.
For all stellar masses, we choose the initial rotation velocity at the moment when the radiative zone appears to be equal to 5$\%$ of the critical velocity of the corresponding model
($V_{\rm crit} = \sqrt{\frac{2}{3} \frac{G M}{R}}$;
see Table~\ref{table:properties}). This corresponds approximately to the median of the observed distribution in young open clusters (see Fig.~\ref{fig:surfacerotation} and \S~\ref{section:globalproperties} for discussion).
We assume that the star is no longer coupled to a potential disk beyond that evolution point. The surface of the star is then free to spin up, and we do not apply any magnetic braking.
The influence of the initial rotation velocity, of the disk lifetime that affects the moment when a PMS star starts spinning up,
as well as that of magnetic wind braking that may affect the rotation rate at the arrival on the ZAMS,
will be investigated in a forthcoming paper.
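For orientation, evaluating $V_{\rm crit} = \sqrt{2GM/3R}$ with standard constants and an assumed PMS radius of $2\,R_\odot$ for the 1~M$_{\odot}$ star (the actual model radius at core appearance may differ) yields an initial velocity at 5$\%$ of critical of order 13~km\,s$^{-1}$, of the same order as the initial velocities listed in Table~\ref{table:properties}:

```python
import math

G = 6.674e-8          # gravitational constant, cgs
Msun = 1.989e33       # g
Rsun = 6.957e10       # cm

def v_crit(M, R):
    """Critical velocity V_crit = sqrt(2GM/3R) used to set the
    initial rotation rate (models start at 5% of V_crit)."""
    return math.sqrt(2.0 * G * M / (3.0 * R))

# assumed: 1 Msun star with R = 2 Rsun when the radiative core appears
v5 = 0.05 * v_crit(Msun, 2.0 * Rsun)   # cm/s
```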
\begin{table*}
\caption{Properties of the different models computed without rotation (std), with rotation but without IGW (rot), and with rotation taking into account IGW (igw); for the 1.0~M$_{\odot}$ star a couple of models were computed taking into account the hydrostatic effects of rotation (rot+hydro and igw+hydro).
For each model we give: lifetime on the PMS, age at which the radiative core appears, initial rotation velocity, surface rotation rate and rotation period when the radiative core appears (taken at 5$\%$ of critical rotation velocity of the corresponding model), and surface rotation velocity, surface rotation rate, rotation period, surface lithium abundance, effective temperature, and luminosity at the arrival on the ZAMS}
\centering
\begin{tabular}{| c | c | c | c | c | c | c | c | c |c |c |c |c | }
\hline
Star & & $\tau$(pms) & $\tau$(core) & v$_{surf}$ & $\Omega_{surf}$ / $\Omega_{\odot}$ & P$_\text{init}$ & v$_{surf}$ & $\Omega_{surf} / \Omega_{\odot}$ & P$_\text{ZAMS}$ & N(Li) & Teff & log L/L$_{\odot}$\\
M$_{\odot}$ & & Myr & Myr & init, km.s$^{-1}$ & init & days & zams, km.s$^{-1}$ & zams & days &zams & zams, K & zams \\
\hline
0.6 & std & 191 &7.47& -- &-- &-- &-- &-- & -- & -4.96 & 4118 & -1.14\\
&rot & 194 &7.47& 14.5 & 6.60 & 3.73 & 36.3 & 34.5 & 7.30 & -5.08 & 4127& -1.14\\
&igw & 217 &7.47& 14.5 & 6.60 & 3.73 & 37.3 & 35.6 & 7.12 & -5.40 & 4121& -1.14\\
\hline
0.9 & std & 75 &3.00&-- &-- &-- &-- &-- & -- & 2.66 & 5246 &-0.366 \\
& rot & 76 &3.00&14.1& 4.16 & 5.99 & 48.5 & 30.7 & 0.83 & 2.64& 5246 & -0.366 \\
& igw & 89 &3.00&14.1& 4.16 & 5.99 & 55.7 & 35.2 & 0.72 & 2.64 & 5246 &-0.366 \\
\hline
1.0& std & 57.5&2.43 & --& -- &-- &-- &-- & -- & 2.96 & 5608 &-0.152 \\
& rot & 53.9&2.43& 14.2 & 3.76& 6.84 & 53.1 & 29.8& 0.76 & 3.02 &
5608 & -0.152 \\
& rot+hydro& 59.4 &2.43& 14.2 & 3.76& 6.84 & 52.9 & 29.7 & 0.76 & 3.12 & 5589 & -0.152 \\
& igw & 59.5&2.43& 14.2 & 3.76& 6.84 & 63.1 & 35.5& 0.68 & 2.95 & 5608& -0.152 \\
& igw+hydro & 59.2 &2.43& 14.2 & 3.76 & 6.84 & 63.1 & 35.3 & 0.72 & 2.95
& 5584 & -0.156 \\
\hline
1.2& std &38.6&1.58& --& -- &-- &-- &-- & -- & 3.17 &6217 & 0.235 \\
& igw & 43.5&1.58& 17.2& 2.99 & 8.03& 86.7 & 38.6 & 0.86& 3.17 & 6217 & 0.235 \\
\hline
1.4& std & 26.5&1.09& --& -- &-- &-- &-- & -- & 3.22 & 6828 &0.574 \\
& igw & 29.4&1.09& 13.6& 4.08 & 6.02 & 109 & 39.9 & 0.64& 3.22 & 6838 & 0.572 \\
\hline
1.6& std & 19.6&0.81& --& -- &-- &-- &-- & -- & 3.23 &7757& 0.835 \\
& rot & 19.3&0.81& 13.0& 1.85 & 13.7& 100 & 39.2 & 0.71& 3.23 & 7757 &0.837 \\
& igw & 21.2&0.81& 13.0& 1.85 & 13.7& 120 & 42.2 & 0.57 & 3.23 &7783 &0.829 \\
\hline
1.8& std & 15.0&0.62& --& -- &-- &-- &-- & -- & 3.24& 8666 &1.04 \\
& rot & 15.0&0.62& 12.4& 1.43& 16.9& 112 & 37.6& 0.68& 3.24 & 8666 &1.04 \\
& igw & 16.1&0.62& 12.4& 1.43 & 16.9& 128 & 43.9& 0.57 & 3.24 &8676& 1.04 \\
\hline
2.0& std & 11.8&0.48& --& -- &-- &-- &-- & -- & 3.24 & 9479 & 1.23 \\
& rot &11.7&0.48& 20& 3.32 & 7.62& 124 & 40.7 & 0.62& 3.24 & 9473 & 1.23 \\
& igw & 12.1&0.48& 20& 3.32 & 7.62& 139 & 46.0 & 0.54& 3.24 & 9492 & 1.23 \\
\hline
\end{tabular}
\label{table:properties}
\end{table*}
\section{IGW generation along the PMS evolution for all grid models}
\label{sec:IGWexcitation}
The internal structure strongly changes as low-mass stars evolve along the PMS.
This implies strong variations in the quantities that are relevant to IGW generation and momentum transport, as depicted in Figs.~\ref{fig:surfdetails} for the properties of the convective envelope and \ref{fig:coredetails} for the core.
All quantities are given in cgs units.
Stars are first fully convective and a radiative core appears along the Hayashi track as they contract and heat (Fig.~\ref{fig:surfdetails}).
The thickness of the convective envelope decreases, and the temperature at its base increases as the stars move towards higher effective temperatures.
A convective core develops due to the ignition of central CNO burning on the final approach towards the ZAMS (Fig.~\ref{fig:coredetails}).
One can follow the evolution along the tracks of the maximum convective flux ($F_c = C_p \rho v_c \Delta T$) inside the external and central convective regions,
which directly affects the energy flux associated with a given frequency (see Eq.~\ref{gold}).
Wave excitation is stronger when the convective length scale ($\ell_c = 2 \pi r_{cz} / \alpha H_p$) is larger, but decreases when the turnover timescale ($\tau_c = \alpha_{MLT} H_p / v_c$)
becomes too large.
The combination of these two factors induces large differences in the overall efficiency of wave generation as the internal structure evolves.
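These two competing factors can be made explicit with a short sketch evaluating $\ell_c$ and $\tau_c$ from assumed MLT quantities (illustrative cgs inputs only):

```python
import math

def convective_scales(r_cz, H_p, v_c, alpha=1.63):
    """Characteristic eddy degree l_c ~ 2 pi r_cz / (alpha H_p) and
    turnover time tau_c = alpha H_p / v_c from MLT quantities.
    alpha defaults to the calibrated mixing-length parameter."""
    ell_c = 2.0 * math.pi * r_cz / (alpha * H_p)
    tau_c = alpha * H_p / v_c
    return ell_c, tau_c

# assumed: r_cz = 3e10 cm, H_p = 5e9 cm, v_c = 1e4 cm/s
ell_c, tau_c = convective_scales(r_cz=3e10, H_p=5e9, v_c=1e4)
```

For these inputs $\ell_c \sim 23$ and $\tau_c \sim 10$~days; excitation is strongest for waves whose degree and frequency match these eddy scales.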
This is well illustrated in Fig.~\ref{fig:igwexcitspectrum1Msun} that shows the luminosity spectrum of IGW generated by the external convection zone in the 1~M$_{\odot}$ model at four ages on the PMS.
One sees clearly that wave-induced transport is dominated by low-frequency waves (i.e., $< 3.5~\mu$Hz).
High-degree waves at low frequencies do not contribute much to the transport of angular
momentum even though their excitation flux is substantial: indeed, they are essentially
damped near the edge of the convective envelope.
In Fig.~\ref{fig:igwexcit} colors along the tracks indicate the net momentum luminosity ${\cal L}_J$ (see Eq.~\ref{eq:angmomluminosity})
of IGWs generated by the external and internal convective regions.
In the case of external convection, the net momentum luminosity ${\cal L}_{J, surf}$ rapidly increases as the excitation of IGW strengthens when
stars evolve towards higher effective temperature, and reaches maximum values as high as 10$^{39}$~g\,cm$^2$\,s$^{-2}$ around $T_\text{eff} \sim 6200$~K.
Stars with initial masses lower than 1.3 \ensuremath{M_\odot}{} never reach this effective temperature; the corresponding ${\cal L}_{J, surf}$ thus always remains
below this maximum and increases monotonically along the PMS.
On the other hand in the more massive models the convective envelope keeps shrinking in size and ${\cal L}_{J, surf}$ decreases when T$_{eff}$ increases above 6200~K.
This behavior confirms TC08 findings for intermediate-mass PMS stars, and is very similar to the ${\cal L}_J$ plateau we found for Pop I and Pop II main sequence stars \citep{TalonCharbonnel2003,TalonCharbonnel04},
which share very similar convective properties with PMS stars in the same T$_{eff}$ range.
IGW are also emitted from the convective core at the end of the PMS.
The more massive the star, the more the convective core expands, and the stronger the corresponding wave excitation.
We note however from Fig.~\ref{fig:igwexcit} that wave excitation by the convective core (when present) is generally much less efficient than that of the convective envelope.
The ratio between ${\cal L}_{J,core}$ and ${\cal L}_{J,surf}$ is shown in Fig.~\ref{fig:excitrenv} as a function of T$_{eff}$ for the various models.
For stars with masses below 1.4 \ensuremath{M_\odot}{}, ${\cal L}_{J,core}$ is always $\sim$ 5-6 orders of magnitude lower than ${\cal L}_{J,surf}$.
These two quantities reach similar orders of magnitude only very close to the ZAMS for stars more massive than 1.6 \ensuremath{M_\odot}{}.
Therefore and as we shall see below, the impact of IGW on the internal rotation profile along the PMS will be dominated by the waves emitted by the convective envelope.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_spectrum1p0_5p8Myr.eps}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_spectrum1p0_14Myr.eps}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_spectrum1p0_35Myr.eps}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_spectrum1p0_55Myr.eps}
\caption{Angular momentum luminosity at the base of the convective envelope of IGW generated by Reynolds-stress in the external convective layers
as a function of emission frequency $\omega$ and degree $\ell$. The color axis is in log and white areas correspond to log${\cal L}_{J, surf} <22$. The plots are shown for the 1~M$_{\odot}$ model at four ages along the PMS (5.8, 14, 35, and 55~Myr from top left to bottom right; the corresponding values of T$_{eff}$ are 4277, 4357, 5560, and 5612~K).
}
\label{fig:igwexcitspectrum1Msun}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_LJsurf_svg}%
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_hrdall_LJcore_svg}%
\caption{Same as Fig.~\ref{fig:surfdetails}, but with colors indicating the total momentum
flux carried by IGW generated by the convective envelope (left) and
the convective core (right). Dotted lines indicate the region where the
stars are fully convective (left) or have no convective core (right) so
that no IGW can be generated. In the left panel the vertical dashed
lines connect models where the excitation has the same value: $\log \left(\Sigma
|\mathcal F_J(\omega,l,m)|\right) = 28$, 30, 32, 34, 36, 38.}
\label{fig:igwexcit}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_allpmsrapsurfcore_svg}
\caption{Ratio of the total momentum luminosity carried by IGW generated
by the convective core and the convective envelope for the PMS models of various masses
}
\label{fig:excitrenv}
\end{figure}
\section{Evolution of the internal rotation profile}
\label{evolutionrotprofile}
\subsection{The case of the 1~M$_{\odot}$ star}
\label{subsection:1MZsun}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_1msun_omega_Mr_igw_svg2}
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_1msun_omega_r_igw_svg2}\\
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_1msunzsun_Mr_rotomega_svg2}
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_1msunzsun_r_rotomega_svg2}
\caption{Evolution along the PMS of the rotation profile in the 1~M$_{\odot}$ models computed with and without IGW (top and bottom respectively) as a function of relative mass fraction and radius in solar units (left and right respectively).
The curves are labeled according to age. On each plot the left and right scales give $\Omega$ in $\mu$Hz and in solar units respectively}
\label{fig:omegaHzevol1p0}
\end{figure*}
\begin{figure*}
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_m1zsun_fluxAM_Mr_logneg.eps}
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_m1zsun_fluxAM_Mr_rotlogneg.eps}
\caption{Decomposition of the total flux of angular momentum (solid black) into meridional circulation (long-dashed magenta), shear turbulence (dotted blue), and IGW (short-dashed red) in the 1~M$_{\odot}$ models computed with and without IGW (left and right panels respectively). Bold lines indicate negative values for the fluxes $F_{\rm MC}\left(r\right)$, $F_{\rm S}\left(r\right)$, or $F_{\rm IGW}\left(r\right)$, when angular momentum is transported towards the central regions by the corresponding mechanism; in the case of meridional circulation and of shear turbulence this corresponds respectively to clockwise currents ($U_2 > 0$) and to a positive $\Omega$ gradient. The profiles are shown at three different ages along the PMS. Shaded areas correspond to convective regions}
\label{flux}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{Charbonneletal_pms_igwpms_plotU2_1.eps}
\includegraphics[width=\textwidth]{Charbonneletal_pms_plotU2PMS_m1p0z013_rot_Pi8Vi12.eps}
\caption{Meridional circulation currents in the 1~M$_{\odot}$ models computed with and without IGW (top and bottom respectively) at three different evolution ages along the PMS (14, 35, and 55~Myr from left to right). Blue and red lines indicate clockwise ($U_2 > 0$) and counterclockwise ($U_2 < 0$) circulation respectively. Hatched areas correspond to convective regions}
\label{fig:U2}
\end{figure*}
Figure~\ref{fig:omegaHzevol1p0} depicts the evolution along the PMS of the rotation profile inside the 1~M$_{\odot}$ star for two cases: when angular momentum transport is operated solely by meridional circulation and shear turbulence (bottom panels), and when angular momentum deposition by internal gravity waves is taken into account in addition to the hydrodynamic processes (top panels); the rotation profile is shown at different ages as a function of both relative mass fraction and reduced radius (left and right panels respectively).
The decomposition of the total flux of angular momentum into the various components driven by meridional circulation, shear turbulence, and IGW (when accounted for; see Eqs.~\ref{eq:Fmc}, \ref{eq:FS}, and \ref{eq:FIGW} respectively)
is shown in Fig.~\ref{flux} at three ages along the PMS.
Meridional circulation currents are shown at the same ages in Fig.\ref{fig:U2}; clockwise currents (matter flowing from the equator to the pole and resulting in deposition of angular momentum inwards) and counterclockwise ones (carrying angular momentum outwards) are drawn in blue and red respectively.
\subsubsection{Transport of angular momentum by meridional circulation and shear turbulence only}
\label{subsubsect:mershear}
When only meridional circulation and shear turbulence are accounted for,
differential rotation rapidly develops inside the radiative region as
the surface rotation velocity increases due to stellar contraction (Fig.~\ref{fig:omegaHzevol1p0}, bottom plots).
This behavior as well as the rotation profile at the arrival on the ZAMS are similar to the results of \citet{Eggenbergeretal12} for their rotating 1~M$_{\odot}$ model computed with similar assumptions.
As can be seen in Fig.~\ref{flux} (right plots) for this model without IGW, the transport of angular momentum is dominated by
meridional circulation all along the PMS, while the contribution of shear turbulence is negligible (the flux of angular momentum driven by turbulence $F_{\rm S}$ is indeed $\sim$ 2 orders of magnitude lower than the flux driven by meridional circulation $F_{\rm MC}$).
The number of circulation loops evolves with time (Fig.~\ref{fig:U2}, bottom plots; see also Fig.~\ref{flux}): In the early stages (14.9~Myr, left panel), the circulation consists of a single counterclockwise current that transports matter inward along the rotational axis and outward in the equatorial plane; later on (33.5~Myr, middle panel) a clockwise loop appears in the central regions; finally, an additional counterclockwise loop shows up when the convective core develops (55~Myr, right panel).
\subsubsection{Impact of internal gravity waves}
\label{subsubsect:mershearwaves}
The evolution of the internal rotation profile changes drastically when IGW are taken into account in conjunction with meridional circulation and shear turbulence, as can be seen in Fig.~\ref{fig:omegaHzevol1p0} (top plots).
As already discussed in \S~\ref{sec:IGWexcitation}, the emitted wave spectrum strongly evolves with the stellar structure along the PMS.
IGW are first emitted by the receding convective envelope, and much later by the convective core when it appears during the final approach towards the ZAMS.
In the case of the 1~M$_{\odot}$ model, IGW emitted by the convective core play actually no role since their luminosity is extremely low (see Fig.~\ref{fig:excitrenv} and discussion in \S~\ref{sec:IGWexcitation}).
Therefore the following discussion refers only to those emitted by the envelope.
In order to understand wave-induced transport, we must also focus on the important quantities for wave damping in the radiative layers,
namely the Brunt-V\"ais\"al\"a frequency $N^2$ and the thermal diffusivity $K_T$: For a given differential rotation within the radiative layers, low-frequency (i.e., with $\omega < $ 3.5~$\mu$Hz) and/or large degree waves that dominate the angular momentum transport are damped very efficiently close to the convective edges when $N^2_T$ is too small or when $K_T$ is too large (see Eq.~\ref{optdepth}).
Figs.~\ref{fig:BVf} and \ref{fig:chemicaldiffusioncoefficients} show the radial profiles of these two quantities in the radiative layers of the 1~M$_{\odot}$ model at various ages (see also Figs.~\ref{fig:surfdetails} and \ref{fig:coredetails}, which show the variations along the evolution track of the value of $K_T$ just below the convective envelope and above the convective core).
At all ages $N^2$ drops near the stellar center and the
convective edges; in addition its value at a given depth increases with time along the PMS
as a result of the stellar contraction that leads to an increase of gravity and a decrease of the pressure scale height as the star evolves.
On the other hand, the value of $K_T$ just below the convective envelope also increases as the star contracts and moves towards higher effective temperature; this implies stronger damping of all the waves (independently of their properties) closer to the convective envelope. Note that $K_T$ at a given depth within the star increases only slightly during the evolution. Besides, the build-up of differential rotation with time within the star induces a change in the local Doppler-shifted frequency, which allows a different damping for waves with different frequencies and $m$ through the term
$\sigma^{-4} \left({N^2 - \sigma^2}\right)^{-1/2}$.
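The resulting prograde/retrograde asymmetry is easy to quantify. In the toy example below (assumed emission frequency and shear, with the interior spinning faster than the convective envelope), the prograde wave is Doppler-shifted to lower $\sigma$ and, through the $\sigma^{-4}$ dependence of the damping, is attenuated about five times more strongly per unit depth than its retrograde counterpart:

```python
def local_frequency(omega, m, Omega_r, Omega_cz):
    """Doppler-shifted local frequency sigma = omega - m (Omega(r) - Omega_cz)."""
    return omega - m * (Omega_r - Omega_cz)

omega, dOmega = 1e-5, 2e-6      # assumed emission frequency and shear (rad/s)
s_pro = local_frequency(omega, +1, dOmega, 0.0)   # prograde: shifted down
s_ret = local_frequency(omega, -1, dOmega, 0.0)   # retrograde: shifted up
damping_ratio = (s_ret / s_pro) ** 4              # damping integrand ~ sigma^-4
```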
Let us see what these general considerations imply for the 1.0~M$_{\odot}$ model.
We start with initial solid body rotation and then follow the transport of angular momentum when the radiative layers appear.
At that moment differential rotation has not yet developed, and the local frequency $\sigma$ of individual waves in the very thin radiative zone is similar to their emission frequency $\omega$ at the base of the convective envelope. However, slight differential rotation soon builds up as a result of stellar contraction along the Hayashi track, which induces a Doppler shift between the emission and local IGW spectra.
As a consequence, low-frequency low-degree waves, which undergo the largest differential damping between retrograde and prograde components, soon penetrate all the way to the central regions where they deposit their negative momentum and very efficiently spin down the core, whose amount of angular momentum is minute (see Fig.~\ref{fig:omegaHzevol1p0}).
This explains the strong positive gradient in the profile of $\Omega$ below $\sim 0.2$ R$_{\odot}$, while the negative gradient of $\Omega$ in the external layers results from ongoing stellar contraction.
As a consequence a peak builds up in the internal rotation profile with a core spinning at lower rate than the stellar surface all along the PMS.
\begin{figure*}
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_igwpms_Nt2_Mr_1.eps}
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_igwpms_Nt2_r_1.eps}
\caption{Evolution of the Brunt-V\"ais\"al\"a frequency in the 1~M$_{\odot}$ PMS star as a function of relative mass fraction and radius (left and right respectively). The colours correspond to the same ages as in Fig.~\ref{fig:omegaHzevol1p0}. On the right plot the vertical lines indicate the total stellar radius at the corresponding ages}
\label{fig:BVf}
\end{figure*}
We show in Fig.\ref{flux} the total flux of angular momentum carried by the waves as a function of depth within the 1~M$_{\odot}$ model, and compare it to the contribution of meridional circulation and shear turbulence at different evolution stages.
We note first that the transport of angular momentum is generally dominated by the waves in the radiative layers where they can propagate, except in the early times when
meridional circulation dominates in the most external regions below the convective envelope (upper panel at 14~Myr).
Since downward-propagating waves are totally damped as soon as the condition $ \Omega(r) = \omega /m +\Omega _{\rm cz}$
is fulfilled near the corotation radius, the total flux F$_{IGW}$ drops and remains negligible below the $\Omega$ peak. This can be clearly seen in the middle and lower panels of Fig.~\ref{flux} at 35 and 55~Myr; at these ages meridional circulation dominates in the regions below $\sim$ 0.15 and 0.2~M$_{\odot}$ respectively, while IGW are dominant in the outer regions.
Note that the total flux of angular momentum is dominated by IGW when they are accounted for, and is two orders of magnitude larger than in the case without IGW.
Overall, IGW do shape the circulation patterns, leading to the appearance of several loops in the whole radiative region as can be seen in Figs.~\ref{flux} and \ref{fig:U2}.
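For clarity, let us recall that the corotation condition quoted above follows from the Doppler shift of the wave frequency in the frame rotating at the local angular velocity (a standard relation, sketched here as a reasoning aid):
\[
\sigma(r) = \omega - m\left[\Omega(r) - \Omega_{\rm cz}\right],
\]
so that the local frequency $\sigma$ vanishes exactly when $\Omega(r) = \omega/m + \Omega_{\rm cz}$; since the radiative damping of low-frequency gravity waves increases steeply as $\sigma \rightarrow 0$, the waves are fully dissipated before reaching the corotation radius.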
Let us add a final remark.
As explained in \S~\ref{subsubsection:generaltransport}, we have increased by a factor 2 the IGW luminosity in order to account for the results by \citet{LecoanetQuataert13} who predict the IGW flux due to turbulent convection to be a few to five times larger than in previous estimates by e.g. \citet{GoldreichKumar90} and \citet{GoldreichMurray1994}.
In order to test the impact of this assumption, we have computed two additional models for the 1~M$_{\odot}$ rotating star with multiplying factors of 1 and 5.
We find that this has no impact on the conclusions, as can be seen in Fig.~\ref{fig:allzams} where we plot the corresponding rotation profiles at the arrival on the ZAMS.
\subsection{Impact of the stellar mass}
\label{subsection:variousmasses}
For all the stars within the considered mass range, strong differential rotation with a fast rotating core is obtained under the combined action of stellar contraction and meridional circulation
when IGW are not accounted for.
Besides, in all cases IGW do brake the stellar core, which results in a peak in $\Omega$ at r$\sim$0.25-0.3R$_*$ as in the 1~M$_{\odot}$ case.
This can be seen in Fig.~\ref{fig:allzams} where we show the rotation profiles at the arrival on the ZAMS for all our models (black and red lines correspond respectively to the models computed without or with IGW).
\begin{figure}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_igwpms_sumall}
\caption{Rotation profile on the ZAMS
for all the stellar masses between 0.6 and 2.0~\ensuremath{M_\odot}{} in the cases without and with IGW (full black and red dotted lines respectively).
In the 1~M$_{\odot}$ panel, the blue long-dashed and the green dashed lines correspond to computations made with multiplication factors for IGW luminosity
of one and five respectively, all the other models with IGW being computed with a multiplication factor of two (see Eq.~\ref{ev_omega})
}
\label{fig:allzams}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_m2zsun_fluxAM_logneg.eps}
\caption{Same as Fig.~\ref{flux} for the 2~M$_{\odot}$ model computed with IGW}
\label{fig:2mfluxam}
\end{figure}
Let us note however that the impact of IGW is slightly different in stars more massive than $\sim 1.6$~M$_{\odot}$.
This is illustrated for the 2~M$_{\odot}$ star in Fig.~\ref{fig:2mfluxam} where we decompose the total flux of angular momentum within the model according to the various transport processes at three different ages,
and in Fig.~\ref{fig:2msunomega} where we follow the corresponding evolution of the radial profile of $\Omega$.
For this more massive star, IGW emitted by the convective envelope dominate during the first part of the PMS and manage to slow down the most central regions as in the 1~M$_{\odot}$ case (top panel, Fig.~\ref{fig:2mfluxam}).
However those waves fade away when the convective envelope becomes too thin and are supplanted by those emitted by the convective core at the approach of the ZAMS (see Fig.~\ref{fig:igwexcit}).
During that transition period (middle panel in Fig.~\ref{fig:2mfluxam}), meridional circulation dominates the transport of angular momentum although shear turbulence also contributes more efficiently near the most central regions (between 0.05 and 0.1~\ensuremath{M_\odot}{}) and in the most external layers; as a result, the core slightly accelerates and eventually manages to rotate faster than the outer radiative layers, although not fast enough for the peak to be erased.
Once the convective core has sufficiently developed (lower panel, Fig.~\ref{fig:2mfluxam}), the IGW emitted in the central regions will start conveying angular momentum very efficiently towards the core;
at that time meridional circulation remains however the dominant process in the most external radiative layers.
\begin{figure*}
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_2msun_omega_r_igw_svg2.eps}
\includegraphics[width=0.45\textwidth]{Charbonneletal_pms_2msun_omega_Mr_igw_svg2.eps}
\caption{Same as Fig.~\ref{fig:omegaHzevol1p0} for the 2~M$_{\odot}$ model computed with IGW
}
\label{fig:2msunomega}
\end{figure*}
\section{Global stellar properties, surface rotation and lithium abundance}
\label{section:globalproperties}
\begin{figure*}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_artpms_compbouvier_svg.eps}
\includegraphics[width=0.5\textwidth]{Charbonneletal_pms_artpms_allomegasurf_svg.eps}
\caption{Evolution of the surface rotation rate as a function of time for the models computed with IGW. Stellar masses are indicated on the tracks.
In the left panel, the theoretical predictions for the 0.9, 1, and 1.2~M$_{\odot}$ models are compared with
the observed rotational distribution for stars with estimated masses between $\sim$ 0.9 and 1.1~M$_{\odot}$ in young open clusters from \citet{GalletBouvier13}
}
\label{fig:surfacerotation}
\end{figure*}
\begin{figure*}[t]
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_plotdiffvsMsurMstar_m1p0z013_rot_ondes2_Pi8Vi12_brakepmsK8e30.eps}
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_plotdiffPMS_m1p0z013_rot_Pi8Vi12.eps}
\caption{Diffusion coefficients associated to meridional circulation (red), horizontal and vertical turbulence (black and blue respectively). The total diffusion coefficient for the chemicals (magenta) and thermal diffusivity (cyan) are also shown.
The figures correspond to the 1~M$_{\odot}$ models with and without IGW (left and right respectively) at different evolution ages along the PMS. Hatched areas indicate the convective regions}
\label{fig:chemicaldiffusioncoefficients}
\end{figure*}
We summarize in Table~\ref{table:properties} the main properties of our models computed under various assumptions.
We also include the predictions for two additional models of 1~M$_{\odot}$ that account for the hydrostatic effects of rotation (i.e., the effects of centrifugal acceleration on effective gravity) and show in
Fig.~\ref{fig:hrd1Msun_all} all the corresponding evolution tracks for this star.
We see that the rotating tracks without hydrostatic effects are hardly modified compared to the standard case; the main shift, to slightly lower effective temperature and luminosity (which implies a slightly longer PMS lifetime), is due to the effects of the centrifugal force and not to rotation-induced mixing. This is in agreement with the predictions by \citet{Eggenbergeretal12} (see also \citealt{Pinsonneaultetal89,MartinClaret96,Mendesetal99}).
However the hydrostatic effects are modest and our general conclusions on the evolution of the internal rotation profile and on the impact of IGW are not affected by this simplification.
We can also note in Table~\ref{table:properties} that the models computed with IGW have longer PMS lifetimes than the others.
This simply results from the higher total diffusion coefficient for chemicals in the deep radiative layers close to the convective core when central H-burning sets in close to the ZAMS (see Fig.~\ref{fig:chemicaldiffusioncoefficients}).
As shown in Fig.~\ref{fig:surfacerotation}, the evolution of surface rotation for the models with IGW accounts well for the mean rotation rates collected by \citet{GalletBouvier13}
for PMS stars in young open clusters in the considered mass range.
The rotation velocity at the arrival on the ZAMS is slightly higher (by a few $\%$; see Table~\ref{table:properties}) in this case than in rotating models without IGW, due to the different efficiency of the redistribution of angular momentum by the various transport mechanisms within the star as discussed previously. Again, the hydrostatic effects are negligible.
\begin{figure}[t]
\includegraphics[angle=0,width=8.5cm]{Charbonneletal_pms_hrd1msun.eps}
\caption{Impact of rotation, IGW, and of the hydrostatic effects on the evolution track of the 1~M$_{\odot}$ star. The square indicates the point where the radiative zone appears and internal transport of angular momentum starts}
\label{fig:hrd1Msun_all}
\end{figure}
The surface lithium abundance at the ZAMS is not significantly different in the rotating models without and with IGW, as can be seen from Table~\ref{table:properties}.
Indeed this quantity mostly depends, on one hand, on the temperature at the base of the convective envelope, which is unaffected since the evolution tracks almost superpose, and on the other hand, on the diffusion coefficient $D_{eff}$ (Eq.~\ref{eq:Deff}) in the external radiative layers shown in Fig.~\ref{fig:chemicaldiffusioncoefficients}.
Since the gradient of $\Omega$ in the outer part of the star is dominated by stellar contraction and is very similar in the cases with and without IGW (see Figs.~\ref{fig:omegaHzevol1p0} and \ref{fig:2msunomega}), the resulting Li abundance at the ZAMS is unaffected.
The rotating models including the hydrostatic effects have slightly higher lithium abundance on the ZAMS, in agreement with the behavior found by \citet{Eggenbergeretal12}.
In a future work we will revisit PMS Li depletion, taking into account the influence of the disk lifetime, of the initial rotation velocity, and of magnetic braking.
\section{Conclusions}
\label{sec:conclusion}
In this paper we have analyzed the transport of angular momentum during the PMS for solar-metallicity, low-mass stars (with masses between 0.6 and 2.0~M$_{\odot}$) through
the combined action of structural changes, meridional circulation, shear turbulence, and internal gravity waves generated by Reynolds stresses and buoyancy in the stellar convective envelope and core (when present).
For all the stellar masses considered, IGW are efficiently generated by the convective envelope with a momentum luminosity that peaks around $T_{eff} \sim$6200~K, as in the case of main sequence stars.
These waves soon become an efficient agent for angular momentum redistribution because they spin down the stellar core early on the PMS, while structural changes lead to a negative differential rotation in the outer stellar layers as the star contracts.
On the other hand, IGW generated by the convective core close to the arrival on the ZAMS carry much less energy, except in the case of stars more massive than $\sim$ 1.6~M$_{\odot}$.
Over the whole considered mass range, IGW were found to significantly modify the internal rotation profile of PMS stars and lead to slightly higher surface rotation velocity compared to the case where only meridional circulation and shear turbulence are accounted for.
The exploratory results presented in this paper show the ability of IGW to efficiently extract angular momentum in the early phases of stellar evolution, as anticipated by \citet{TalonCharbonnel2008} and as shown by \citet{CharbonnelTalon2005Science} and \citet{TalonCharbonnel2005} for solar-type main sequence stars.
We now plan to investigate the influence of the disk lifetime, of the initial rotation velocity, and of magnetic braking during the PMS over a broader mass domain in order to compare model predictions with large data sets that are currently being collected to trace the rotational properties of young stars.
\begin{acknowledgements}
We thank F.~Gallet and J.~Bouvier for kindly providing data before publication and for fruitful discussions, as well as P.~Eggenberger for detailed model comparisons. We thank the referee J.-P.~Zahn for suggestions that helped improve the manuscript.
We acknowledge financial support from the Swiss National Science Foundation (FNS), from the French Programme National de Physique Stellaire (PNPS) of CNRS/INSU, and from the Agence Nationale de la Recherche (ANR) for the project TOUPIES (Towards Understanding the sPIn Evolution of Stars).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Physically realizable adversarial attacks are a threat for safety-critical (semi-)autonomous systems such as self-driving cars or robots. Adversarial patches \citep{brown_2017,pmlr-v80-karmon18a} are the most prominent example of such an attack. Their realizability has been demonstrated repeatedly, for instance by Lee and Kolter \citep{lee_2019}: an attacker places a printed version of an adversarial patch in the physical world to fool a deep learning system. While empirical defenses \citep{Hayes18,NaseerKP19,selvaraju_grad-cam:_2019,wu2020defending} may offer robustness against known attacks, they do not provide any guarantees against unknown future attacks \citep{chiang2020certified}. Therefore, certified defenses for the patch threat model, which guarantee robustness against all possible attacks within the given threat model, are crucial for safety-critical applications.
Research on certifiable defenses against adversarial patches can be broadly categorized into certified recovery and certified detection. \emph{Certified recovery} \citep{chiang2020certified,levine,zhang_clipped_2020,xiang_patchguard_2020,metzen2021efficient,lin2021certified,xiang2021patchcleanser,salman2021certified, ECViT} has the objective to make a correct prediction on an input even in the presence of an adversarial patch. In contrast, \emph{certified detection} \citep{mccoyd2020minority,xiang2021patchguard,han2021scalecert,huang2021zeroshot} provides a weaker guarantee by only aiming at \emph{detecting} inputs containing adversarial patches. While certified recovery is more desirable in principle, it typically comes at a high cost of reduced performance on clean data. In practice, certified detection might be preferable because it allows maintaining high clean performance. Most existing certifiable defenses against patches are focused on image classification, with the exception of DetectorGuard \citep{xiang2021detectorguard} and ObjectSeeker \citep{xiang2022objectseeker} that certifiably defend against patch hiding attacks on object detectors. Moreover, existing defences are not easily applicable to arbitrary downstream models, because they assume either that the downstream model is trained explicitly for being certifiably robust \citep{levine,metzen2021efficient}, or that the model has a certain network architecture such as BagNet \citep{zhang_clipped_2020,metzen2021efficient,xiang_patchguard_2020} or a vision transformer \citep{salman2021certified,huang2021zeroshot}. A notable exception is PatchCleanser \citep{xiang2021patchcleanser}, which can be combined with arbitrary downstream models but is restricted to image classification.
Adversarial patch attacks have also been proposed for the image segmentation problem \cite{Nesti2022EvaluatingTR}, mostly against CNN-based models that use a localized receptive field \citep{zhao2017pspnet}. However, self-attention based vision transformers \citep{dosovitskiy2021an} have recently achieved a new state of the art in the image segmentation task \cite{liu2021Swin, senformer}. Their output may become more vulnerable to adversarial patches if the patches manage to manipulate the global self-attention \cite{Lovisotto2022GiveMY}. Figure \ref{fig:patch_motivation} demonstrates how significant parts of the segmentation output of the Senformer \cite{senformer} model can be affected by a small patch. We point out that preventive certified defences are important because newly developed attacks can immediately be used to compromise safety-critical applications unless these are properly defended.
In this work, we propose the novel framework \textsc{Demasked Smoothing} (Figure \ref{fig:intro_pic}) to obtain the first (to the best of our knowledge) certified defences against patch attacks on semantic segmentation models. Similarly to previous work \citep{levine}, we mask different parts of the input (Figure \ref{fig:mask}) and provide guarantees with respect to every possible patch that does not exceed a certain pre-defined size. While prior work required the classification model to deal with such masked inputs, we leverage recent progress in image inpainting \citep{dong2022incremental} to reconstruct the input \emph{before} passing it to the downstream model. This decoupling of image demasking from the segmentation task allows us to support arbitrary downstream models. Moreover, we can leverage state-of-the-art methods for image inpainting. We also propose different masking schemes tailored to the segmentation task that provide input dense enough for the demasking model to understand the scene while still satisfying the guarantees with respect to the adversarial patch.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/patch_motivation/pic.png}
\caption{}
\label{fig:patch_motivation}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/masking/pic.png}
\caption{}
\label{fig:mask}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{graphics/intro/intro_pic.png}
\caption{}
\label{fig:intro_pic}
\end{subfigure}
\caption{(a) A simple patch attack on the state-of-the-art ViT-based Senformer \cite{senformer} manages to switch the prediction for a large object. (b) Masking the patch. (c) A sketch of \textsc{Demasked Smoothing} for certified image segmentation. First, we generate a set of masked versions of the image such that each possible patch can only affect a certain number of masked images. Then we use image inpainting to partially recover the information lost during masking and then apply an arbitrary segmentation method. The output is obtained by aggregating the segmentations pixelwise. The masking strategy and aggregation method depend on the certification mode (detection or recovery).}
\end{figure}
\clearpage
We summarize our contributions as follows:
\begin{itemize}[nolistsep]
\item We propose \textsc{Demasked Smoothing} which is the first (to the best of our knowledge) certified recovery or certified detection based defence against adversarial patch attacks on semantic segmentation models (Section \ref{section:demasked_smoothing}).
\item \textsc{Demasked Smoothing} can perform certified detection and recovery with any segmentation model, without requiring fine-tuning or any other adaptation.
\item We implement \textsc{Demasked Smoothing} and evaluate it for different certification objectives and masking schemes (Section \ref{section:experiments}). For the Senformer \citep{senformer} segmentation model on the ADE20K \cite{zhou2017scene} dataset, we can certify 64\% of all pixels in certified detection for a 1\% patch and 48\% in certified recovery for a 0.5\% patch.
\end{itemize}
\section{Related Work}
\label{section:related_work}
\textbf{Certified recovery.} The first certified recovery defence for classification models against patches was proposed by Chiang et al. \cite{chiang2020certified}, who adapted interval-bound propagation \cite{Gowal_2019_ICCV} to the patch threat model. Levine and Feizi \cite{levine} proposed De-Randomized Smoothing (DRS), which provides a significant accuracy improvement over Chiang et al. \cite{chiang2020certified} and scales to the ImageNet dataset. In DRS, a base classifier is trained on images where everything but a small local region is masked (ablated). At inference time, a majority vote of all specified ablations is taken as the final classification. If this vote has a large enough margin to the runner-up class, the prediction cannot be shifted by any patch that does not exceed a pre-defined size. A similar approach was adopted in Randomized Cropping \cite{lin2021certified}. A general drawback of these approaches is that the classifier needs to be trained to process masked/cropped inputs, which (in contrast to our work) prohibits the usage of arbitrary pretrained models. A further line of work studies network architectures that are particularly suited for certified recovery. For instance, models with small receptive fields such as \emph{BagNets} \cite{brendel2018approximating} have been explored, either by combining them with some fixed postprocessing \cite{zhang_clipped_2020,xiang_patchguard_2020} or by training them end-to-end for certified recovery \cite{metzen2021efficient}. Salman et al. \cite{salman2021certified} propose to apply DRS to \emph{Vision Transformers (ViTs)}. In contrast to the aforementioned works, our Demasked Smoothing can be applied to models with arbitrary architecture.
This is a property shared with PatchCleanser \cite{xiang2021patchcleanser}, which however is limited to image classification; it is not clear how it could be extended to semantic segmentation, where a class needs to be assigned to every pixel, including the masked ones. Certified recovery against patches has also been extended to object detection, specifically to defend against patch hiding attacks. Two notable works in this direction are DetectorGuard \cite{xiang2021detectorguard}, an extension of PatchGuard \cite{xiang_patchguard_2020} to object detection, and ObjectSeeker \cite{xiang2022objectseeker}. Randomized smoothing \citep{pmlr-v97-cohen19c} has been applied to certify semantic segmentation models against $\ell_2$-norm bounded adversarial attacks \citep{Fischer2021ScalableCS}. However, to the best of our knowledge, no certified defence against patch attacks for semantic segmentation has been proposed so far.
\textbf{Certified detection.} An alternative to certified recovery is \emph{certified detection}. Here, an adversarial patch is allowed to change the model prediction. However, if it succeeds in doing so, there is a mechanism that detects this attack certifiably with zero false negatives. Minority Reports \cite{mccoyd2020minority} was the first certified detection method against patches, which is based on sliding a mask over the input in a way that ensures that there will be one mask position that completely hides the patch. PatchGuard++ \cite{xiang2021patchguard} is an extension of Minority Reports where the sliding mask is not applied on the input but on the feature maps of a BagNet-type feature extractor. This reduces inference time drastically since the feature extractor needs to be executed only once per input. ScaleCert \cite{han2021scalecert} tries to identify ``superficial important neurons'', which allows pruning the network in a way that the prediction needs to be made for fewer masked inputs. Lastly, PatchVeto \cite{huang2021zeroshot} is a recently proposed method for certified detection that is tailored towards ViT models. It implements masking by removing certain input patches of the ViT. In this work, we propose a novel method for certified detection in the semantic segmentation task that can be used for any pretrained model.
\textbf{Image reconstruction.} The problem of learning to reconstruct a full image from inputs where parts have been masked out was pioneered by Vincent et al. \cite{JMLR:v11:vincent10a}. It recently attracted attention as a proxy task for self-supervised pre-training, especially for ViTs \cite{bao2022beit, he2021masked}. Recent approaches to this problem use Fourier convolutions \citep{Suvorov2022ResolutionrobustLM} and visual transformers \citep{dong2022incremental}. SPG-Net \citep{Song2018SPGNetSP} trains a subnetwork to reconstruct the full semantic segmentation directly from the masked input as part of the image inpainting pipeline. In this work, we use the state-of-the-art ZITS \citep{dong2022incremental} inpainting method.
\section{Problem Setup} \label{section:problem_setup}
\subsection{Semantic Segmentation} \label{section:image_segmentation}
In this work, we focus on the semantic segmentation task. Let $\mathcal{X}$ be a set of rectangular images. Let $x \in \mathcal{X}$ be an image with height $H$, width $W$ and the number of channels $C$. We denote $\mathcal{Y}$ to be a finite label set. The goal is to find the segmentation map $s \in \mathcal{Y}^{H \times W}$ for $x$. For each pixel $x_{i, j}$, the corresponding label $s_{i, j}$ denotes the class of the object to which $x_{i, j}$ belongs. We denote $\mathbb{S}$ to be a set of segmentation maps and $f: \mathcal{X} \rightarrow \mathbb{S}$ to be a segmentation model. Assume that we know the ground truth segmentation map $s \in \mathbb{S}$. We evaluate a model $f$ by using \textit{accuracy map} $M^{acc}(f, x, s) \in \{0, 1\}^{H \times W}$ such that $M^{acc}(f, x, s)_{i,j} :=[ f(x)_{i, j} = s_{i, j} ]$, where $[P]=1$ if $P=\text{True}$ and 0 otherwise. Let $Q(f, x, s)$ be a segmentation quality metric e.g.\ \textit{global accuracy}, $Acc(f, x, s):=\dfrac{1}{H \cdot W}\sum_{i=1}^{H} \sum_{j=1}^{W}M^{acc}(f, x, s)_{i,j}$. We discuss other metrics for the segmentation evaluation in Section \ref{section:evaluation_metrics}.
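The pixelwise accuracy map and the global accuracy above can be sketched as follows (a minimal illustration assuming NumPy integer arrays; the function names are ours, not from the paper's code):

```python
# Sketch of the accuracy map M^acc and global accuracy Acc from Section 3.1
# (assumption: predictions and ground truth as H x W NumPy integer arrays).
import numpy as np

def accuracy_map(pred, gt):
    # M^acc(f, x, s)_{i,j} = [ f(x)_{i,j} = s_{i,j} ]
    return (pred == gt).astype(np.uint8)

def global_accuracy(pred, gt):
    # Acc = (1 / (H * W)) * sum over all pixels of the accuracy map
    return float(accuracy_map(pred, gt).mean())
```

Other segmentation quality metrics $Q$ (e.g. mIoU) can be plugged in the same way.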
\subsection{Threat model} \label{section:threat_model}
Let us consider an untargeted adversarial patch attack on a segmentation model. Consider an image $x \in [0, 1]^{H\times W\times C}$ and its ground truth segmentation map $s$. Assume that the attacker can modify an arbitrary rectangular region of the image $x$ which has a size of $H^\prime \times W^\prime$. We refer to this modification as a \textit{patch}. Let $l \in \{0,\ 1\}^{H\times W}$ be a binary mask that defines the patch location in the image, in which ones denote the pixels belonging to the patch. Let $\mathcal{L}$ be the set of all possible patch locations for a given image $x$. Let $p \in [0, 1]^{H\times W\times C}$ be the modification itself. Then we define an operator $A$ as $A(x,\ p,\ l) = (1 - l) \odot x + l \odot p$, where $\odot$ is the element-wise product. The operator $A$ applies the $H^\prime \times W^\prime$ subregion of $p$ defined by the binary mask $l$ to the image $x$ while keeping the rest of the image unchanged. We denote $\mathcal{P} := [0,\ 1]^{H \times W \times C} \times \mathcal{L}$ to be the set of all possible patch configurations $(p, l)$ that define an $H^\prime \times W^\prime$ patch. The goal of an attacker is to find $(p^\star,\ l^\star)$ such that $(p^\star,\ l^\star) = \argmin\limits_{(p,\ l) \in \mathcal{P}}\ Q(f, A(x, p, l), s)$.
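The operator $A$ is a simple pixelwise blend and can be sketched as follows (assuming channels-last NumPy arrays; illustrative only):

```python
# Sketch of the patch operator A(x, p, l) = (1 - l) * x + l * p from the
# threat model (assumption: x, p are H x W x C float arrays, l is an
# H x W binary mask of the patch location).
import numpy as np

def apply_patch(x, p, l):
    # Broadcast the H x W location mask over the channel dimension.
    l3 = l[..., None].astype(x.dtype)
    # Pixels where l == 1 come from p; all other pixels stay unchanged.
    return (1.0 - l3) * x + l3 * p
```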
\subsection{Defence objective} \label{section:defence_objective}
In this paper, we propose certified defences against patch attacks. It means that we certify against \textit{any possible attack} from $\mathcal{P}$ including $(p^\star,\ l^\star)$. We consider two robustness objectives.
\textbf{Certified recovery.} For a pixel $x_{i, j}$, our goal is to verify that the following statement holds:
\begin{equation}
\label{eq:certified_recovery_formulation}
\forall\ (p,\ l) \in \mathcal{P}:\ f(A(x,\ p,\ l))_{i, j} = f(x)_{i,j}
\end{equation}
\textbf{Certified detection.} We consider a verification function $v$ defined on $\mathcal{X}$ such that $v(x) \in \{0, 1\}^{H \times W}$. If $v(x)_{i,j}=1$, we claim that there is no adversarial patch in the image $x$, or that the prediction on the pixel $x_{i,j}$ is not affected by the patch. Otherwise, $v(x)_{i,j}=0$, which means that a potential patch attack was detected and we return an alert. For a pixel $x_{i, j}$, our goal is to verify that the following statement holds:
\begin{equation}
\label{eq:certified_detection_formulation}
\forall\ (p,\ l) \in \mathcal{P}: v(A(x, p, l))_{i, j}=1 \Rightarrow f(A(x,\ p,\ l))_{i, j} = f(x)_{i,j}
\end{equation}
The secondary objective is to keep the alert ratio as small as possible, i.e., not to return an alert for the pixels that satisfy \eqref{eq:certified_recovery_formulation}.
Depending on the objective, our goal is to certify one of the conditions \eqref{eq:certified_recovery_formulation} or \eqref{eq:certified_detection_formulation} for each pixel $x_{i, j}$. This yields a lower bound on the attacker's objective under any adversarial patch attack from $\mathcal{P}$.
\section{Demasked Smoothing} \label{section:demasked_smoothing}
\everypar{\looseness=-1}
\textsc{Demasked Smoothing} (Figure \ref{fig:intro_pic}) consists of several steps. First, we apply a predefined set of masks with specific properties to the input image to obtain a set of masked images. Then we reconstruct the masked regions of each image from the available information with an inpainting model $g$. After that, we apply a segmentation model $f$ to the demasked results. Finally, we aggregate the segmentation outcomes and draw a conclusion for the original image with respect to statement \eqref{eq:certified_recovery_formulation} or \eqref{eq:certified_detection_formulation}.
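The four steps above can be sketched as follows (a minimal illustration; \texttt{inpaint\_fn}, \texttt{segment\_fn}, and \texttt{aggregate\_fn} are stand-ins for the inpainting model $g$, the segmentation model $f$, and the certification-mode-dependent aggregation, and are assumptions for exposition, not the paper's implementation):

```python
# High-level sketch of the Demasked Smoothing pipeline:
# mask -> inpaint -> segment -> aggregate.
import numpy as np

def demasked_smoothing(x, masks, inpaint_fn, segment_fn, aggregate_fn):
    seg_maps = []
    for m in masks:                       # m: H x W boolean, True = visible
        x_masked = x * m[..., None]       # hide the covered pixels
        x_rec = inpaint_fn(x_masked, m)   # reconstruct the hidden regions
        seg_maps.append(segment_fn(x_rec))  # arbitrary segmentation model
    return aggregate_fn(seg_maps)         # e.g. pixelwise vote / alert check
```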
\subsection{Input masking}
\label{section:input_masking}
\begin{figure}[t]
\centering
\begin{subfigure}{0.20\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/orig_000000011968.jpg}
\caption{original image}
\label{fig:masking_orig}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\includegraphics[width=\linewidth]{graphics/rec_2.png}
\caption{$T=2$}
\label{fig:rec_2_scheme}
\end{subfigure}
\begin{subfigure}{0.20\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/masked_image_0000.jpg}
\caption{column mask}
\label{fig:rec_2_example}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\includegraphics[width=\linewidth]{graphics/rec_3.png}
\caption{$T=3$}
\label{fig:rec_3_scheme}
\end{subfigure}
\begin{subfigure}{0.20\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/masked_image_0004.jpg}
\caption{3-mask}
\label{fig:rec_3_example}
\end{subfigure}
\begin{subfigure}{0.18\linewidth}
\includegraphics[width=\linewidth]{graphics/rec_4.png}
\caption{$T=4$}
\label{fig:rec_4_scheme}
\end{subfigure}
\begin{subfigure}{0.20\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/masked_image_0003.jpg}
\caption{4-mask}
\label{fig:rec_4_example}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/masked_image_0018.jpg}
\caption{detection column}
\label{fig:det_col}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\includegraphics[width=\linewidth]{graphics/masking_1/masked_image_0002_det.jpg}
\caption{detection row}
\label{fig:det_row}
\end{subfigure}
\caption{\label{fig:masking} Examples of masks: column masks with $T=2$ (b, c), 3-mask with $T=3$ (d, e), and 4-mask with $T=4$ (f, g), with $K=5,7,9$ masks respectively. The number of a block denotes the single mask in which it is not masked. For each mask set, we show one of the locations $l$ in which an adversarial patch affects $T$ different maskings.}
\end{figure}
\textbf{Motivation.} As in previous work (Section \ref{section:related_work}), we apply masking patterns to the input image and use the predictions on the masked images to aggregate a robust result. If an adversarial patch is completely masked, it has no effect on further processing. However, in semantic segmentation we predict not a single whole-image label as in classification, but a separate label for each pixel. Thus, the prediction on a masked image must also provide labels for the masked pixels.
\textbf{Preliminaries.} Consider an image $x \in [0, 1]^{H\times W\times C}$. We define "$*$" to be a special masking symbol that does not correspond to any pixel value and has the property that $\forall z \in \mathbb{R}:\ z \times * = *$. Let $m \in \{*, 1 \}^{H\times W}$ be a mask. We call the element-wise product $x \odot m$ a \emph{masking} of $x$, where a subset of pixels is hidden with $*$ and the rest remains unchanged. We consider the threat model $\mathcal{P}$ with patches of size $H^\prime \times W^\prime$ (Section \ref{section:threat_model}). We say that a mask $m$ \emph{covers} a patch $(p, l)$ if $\forall\ 1\leq i \leq H, 1 \leq j \leq W:\ (l \odot m)_{i,j} \neq 1$. This means that the patch content has no effect on the masked image, i.e., $A(x,\ p,\ l) \odot m = x \odot m$. We divide the image $x$ into a set $B$ of non-intersecting blocks having the same size $H^\prime \times W^\prime$ as the adversarial patch $(p, l)$: $B=\{b_{q, r}\}$, $1 \leq q \leq \lceil H / H^\prime \rceil$, $1 \leq r \leq \lceil W / W^\prime \rceil$. We consider a set $M$ of $K$ masks, $|M|=K$, and refer to the masks in the set $M$ by their index $k$ as $M[k]$. To each mask $M[k]$ we assign a subset of blocks $B_k \subset B$ that are not covered in this mask, while the blocks $B \setminus B_k$ are covered. We construct the masks so that each block is not covered in exactly one mask; we explain the reason for this in Section \ref{section:certification}. For a mask set $M$ we define its \emph{strength} $T(M)$ as $T(M) = \max_{(p, l) \in \mathcal{P}} \vert \{m \in M\ | A(x,\ p,\ l) \odot m \neq x \odot m \} \vert$. $T(M)$ is the largest number of masks that can be affected by some patch. If $M$ is defined, we refer to the value $T(M)$ as $T$ for simplicity of notation.
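As a sanity check, the strength $T(M)$ of a mask set can be computed by brute force over all patch locations (an illustrative sketch assuming boolean masks with \texttt{True} marking visible pixels; not part of the paper's code and not optimized):

```python
# Brute-force sketch of the mask-set strength T(M): the largest number of
# masks that a single H' x W' patch can affect. A mask is affected by a
# patch location if any patch pixel is visible (uncovered) in that mask.
import numpy as np

def mask_strength(masks, Hp, Wp):
    H, W = masks[0].shape
    T = 0
    for i in range(H - Hp + 1):           # all top-left patch positions
        for j in range(W - Wp + 1):
            affected = sum(bool(m[i:i + Hp, j:j + Wp].any()) for m in masks)
            T = max(T, affected)
    return T
```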
\textbf{Certified recovery.} We define a set of column masks for which $T=2$: the $k$-th block column (modulo $K$) is uncovered in the mask $M[k]$ (Figure \ref{fig:rec_2_example}). Any $(p, l) \in \mathcal{P}$ can intersect at most two adjacent columns since $(p, l)$ has the same width as a column. Thus, it can be uncovered in at most two masks (Figure \ref{fig:rec_2_scheme}). A similar scheme can be proposed for the rows. Due to the block size in $B$, the patch $(p, l)$ cannot intersect more than four blocks at once. We define a mask set that we call \emph{3-mask} such that among any four adjacent blocks two belong to the same mask (Figure \ref{fig:rec_3_scheme}); hence $T=3$ for the 3-mask scheme. See details in Appendix \ref{appendix:masking_strategies}. To achieve $T=4$, any assignment of blocks to the masks works. We consider a \emph{4-mask} scheme that distributes the unmasked blocks uniformly in the image (Figure \ref{fig:rec_4_scheme}).
\textbf{Certified detection.} We define $M_d = \{M_d[k]\}_{k=1}^K$ to be a set of masks such that $\forall\ (p, l) \in \mathcal{P}\ \exists\ m \in M_d: A(x,\ p,\ l) \odot m = x \odot m$, i.e.\ for every patch there exists at least one mask that covers it. For a patch of size $H^\prime \times W^\prime$ we consider $K=W - W^\prime + 1$ masks such that the mask $M_d[k]$ masks a column of width $W^\prime$ starting at the horizontal position $k$ in the image (Figure \ref{fig:det_col}). To obtain the guarantee for the same $\mathcal{P}$ with a smaller $K$, we consider a set of strided columns of width $W^{\prime\prime} \geq W^\prime$ and stride $W^{\prime\prime} - W^\prime + 1$ that also satisfies the condition (see the proof adapted from PatchCleanser \cite{xiang2021patchcleanser} in Section \ref{appendix:strided_proof} in the Appendix). A similar scheme can be proposed for the rows (Figure \ref{fig:det_row}). Alternatively, we could use a set of block masks of size $H^\prime \times W^\prime$; however, the number of masks then grows quadratically with the image resolution. Therefore, in the experiments we focus on the column and the row masking schemes.
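The column detection masks and their covering property can be checked with a similar brute-force sketch (toy sizes; the unstrided case $K=W-W^\prime+1$):

```python
import numpy as np

H, W = 8, 16
Hp, Wp = 4, 4                     # patch size H' x W'

# One detection mask per horizontal offset: mask k hides the pixel
# columns [k, k + Wp); a patch starting at column j is covered by mask j.
K = W - Wp + 1
masks = np.ones((K, H, W), dtype=bool)      # True = visible
for k in range(K):
    masks[k, :, k:k + Wp] = False           # hidden column of width W'

# Detection property: every H' x W' patch location is completely
# hidden by at least one mask.
def covered_everywhere(masks, Hp, Wp):
    K, H, W = masks.shape
    for i in range(H - Hp + 1):
        for j in range(W - Wp + 1):
            if not any((~masks[k, i:i + Hp, j:j + Wp]).all() for k in range(K)):
                return False
    return True

print(covered_everywhere(masks, Hp, Wp))  # True
```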
Let $g$ be a demasking model, $g(x \odot m) \in [0, 1]^{H\times W\times C}$. The goal of $g$ is to make the reconstruction $g(x \odot m)$ as close as possible to the original image $x$. For a segmentation model $f$ we define a \emph{segmentation set} $S(M, x, g, f):=\{S[k] = f(g(x \odot M[k]))\ |\ M[k] \in M \}$.
\subsection{Certification}
\label{section:certification}
\textbf{Certified recovery.} For the threat model $\mathcal{P}$, consider a set $M$ of $K$ masks with some strength $T$. We define a function $h: \mathcal{X} \rightarrow \mathbb{S}$ that assigns a label to each pixel $x_{i, j}$ via majority voting over the labels assigned by the reconstructed segmentations $s \in S$: the class predicted by the largest number of segmentations is assigned to the pixel. We break ties by choosing the class with the smaller index.
\begin{theorem} \label{th:recovery}
If the number of masks $K$ satisfies $K \geq 2T + 1$ and for a pixel $x_{i, j}$ we have $\forall\ S[k] \in S:\ S[k]_{i, j} = h(x)_{i, j}$ (i.e.\ all the votes agree), then $\forall\ (p,\ l) \in \mathcal{P}:\ h(A(x, p, l))_{i, j}=h(x)_{i, j}$.
\end{theorem}
\begin{proof}
Assume that $\exists\ (p,\ l) \in \mathcal{P}$ s.t.\ $h(A(x, p, l))_{i, j} \neq h(x)_{i, j}$. Let us denote $x^\prime=A(x, p, l)$ and $S^\prime$ as the segmentation set for $x^\prime$. Then the class $h(x)_{i, j}$ did not get the majority vote for $S^\prime$. However, by the definition of $T$, the patch $(p,\ l)$ can affect at most $T$ segmentations. Since all $K$ segmentations of $S$ have voted for $h(x)_{i, j}$, at least $K - T > K / 2$ of them are still voting for $h(x)_{i, j}$ in $S^\prime$. Therefore $h(x^\prime)_{i, j} = h(x)_{i, j}$, a contradiction.
\end{proof}
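A small NumPy sketch of the majority vote $h$ and the unanimity condition of the theorem (hypothetical class maps; ties go to the smaller class index because \texttt{argmax} returns the first maximum):

```python
import numpy as np

def majority_vote(segs):
    """segs: (K, H, W) integer class maps from the K demasked segmentations.
    Ties break toward the smaller class index (argmax returns first maximum)."""
    n_classes = int(segs.max()) + 1
    votes = np.stack([(segs == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

def certified_recovery_map(segs, T):
    """Pixels certified by the recovery theorem: K >= 2T + 1 and all K votes agree."""
    K = segs.shape[0]
    assert K >= 2 * T + 1, "need K >= 2T + 1 masks"
    return (segs == segs[0]).all(axis=0)

# Toy check: K = 5 segmentations, T = 2, one dissenting vote at pixel (0, 0).
segs = np.zeros((5, 2, 2), dtype=int)
segs[0, 0, 0] = 1
h = majority_vote(segs)                 # class 0 everywhere (4 vs 1 votes)
cert = certified_recovery_map(segs, T=2)
print(int(h[0, 0]), bool(cert[0, 0]), bool(cert[1, 1]))  # 0 False True
```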
In Section \ref{section:input_masking} we said that by construction each block is uncovered in exactly one mask. Assume instead that two masks keep the same block of the image unmasked. By placing a patch so that it intersects this block, the attacker can affect both of the corresponding segmentations; thus, the two masks can be counted as one when computing $T(M)$, and they would produce redundant segmentations that do not contribute to the certification. Therefore, we consider the case of masks with non-overlapping visible blocks in Section \ref{section:input_masking}. Since each masking keeps approximately $1/K$ of the image pixels visible, we are interested in keeping $K$ as small as possible to retain more visual information for the demasking model $g$. Thus, for the evaluation of masking schemes with $T=2,3,4$ we use the smallest possible values $K=5,7,9$, respectively (Figure \ref{fig:masking}). We observe that although for $T=2$ we can keep a larger fraction of pixels unmasked, $T=3,4$ provide visible blocks that are spread more uniformly in the image. We evaluate these masking approaches in Section \ref{section:experiments}.
\textbf{Certified detection.} Consider $M_d = \{M_d[k]\}_{k=1}^K$. For a set of demasked segmentations $S$ we define the verification map $v(x)_{i, j} := [f(x)_{i, j} = S[1]_{i, j} = \ldots = S[K]_{i, j}]$.
\begin{theorem} \label{th:detection}
Assume that $v(x)_{i, j} = 1$. Then $\forall\ (p,\ l) \in \mathcal{P}: v(A(x, p, l))_{i, j}=1 \Rightarrow f(A(x,\ p,\ l))_{i, j} = f(x)_{i,j}$.
\end{theorem}
\begin{proof}
Assume that $\exists\ (p,\ l) \in \mathcal{P}$ s.t.\ $v(A(x, p, l))_{i, j}=1$ and $f(A(x,\ p,\ l))_{i, j} \neq f(x)_{i,j}$. Let us denote $x^\prime=A(x, p, l)$ and $S^\prime$ as the segmentation set for $x^\prime$. By the definition of $M_d$, $\exists\ M_d[k] \in M_d$ s.t.\ $M_d[k]$ masks the patch $(p,\ l)$. Hence,
$$g(x \odot M_d[k])=g(x^\prime \odot M_d[k]),$$
$$S[k] = f(g(x \odot M_d[k])) = f(g(x^\prime \odot M_d[k])) = S^\prime[k].$$
Since $v(x)_{i, j} = 1$, we have $f(x)_{i, j} = S[k]_{i, j}$. Since $v(x^\prime)_{i, j} = 1$, we have $f(x^\prime)_{i, j} = S^\prime[k]_{i, j}$. Thus, $f(x^\prime)_{i, j} = f(x)_{i,j}$.
\end{proof}
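The verification map $v$ reduces to a pixel-wise equality check; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def verification_map(f_x, segs):
    """v(x)_{ij} = 1 iff the plain prediction f(x) and all K demasked
    segmentations S[1..K] agree at pixel (i, j)."""
    return (segs == f_x[None]).all(axis=0)

# Toy check: agreement everywhere except one pixel.
f_x = np.zeros((2, 3), dtype=int)
segs = np.zeros((4, 2, 3), dtype=int)
segs[2, 1, 2] = 7                      # one mask's segmentation disagrees
v = verification_map(f_x, segs)
print(int(v.sum()))  # 5 of 6 pixels verified
```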
For a given image $x$, the verification map $v(x)$ is complementary to the model's segmentation output $f(x)$, which stays unchanged. Thus, there is no drop in clean performance.
\section{Experiments} \label{section:experiments}
In this section, we evaluate \textsc{Demasked Smoothing} with the masking schemes proposed in Section \ref{section:demasked_smoothing}. Certified recovery and certified detection provide certificates of different strengths (Section \ref{section:demasked_smoothing}), which are not directly comparable. We evaluate them separately for different patch sizes.
\subsection{Experimental Setup}
\label{section:experimental_setup}
\begin{figure}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/recovery/orig_000000007601.jpg}
\caption{original image}
\label{fig:orig}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/recovery/masked_image_0000.jpg}
\caption{recovery masking}
\label{fig:masked_rec}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/recovery/rec_000000007601_00000.jpg}
\caption{recovery demasking}
\label{fig:inp_rec}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/recovery/rec_000000007601_00000_segm.jpg}
\caption{recovery segmentation}
\label{fig:inp_rec_segm}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/recovery/000000007601_gt.jpg}
\caption{ground truth}
\label{fig:inp_gt}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/detection/masked_image_0002.jpg}
\caption{detection masking}
\label{fig:masked_det}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/detection/rec_000000007601_00002.jpg}
\caption{detection demasking}
\label{fig:inp_det}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=\linewidth]{graphics/reconstruction/detection/rec_000000007601_00002_segm.jpg}
\caption{detection segmentation}
\label{fig:inp_det_segm}
\end{subfigure}
\caption{ Reconstructing the masked image with ZITS \cite{dong2022incremental} and segmenting with Senformer \cite{senformer}}
\label{fig:inpainting}
\end{figure}
We evaluate \textsc{Demasked Smoothing} on two challenging semantic segmentation datasets: ADE20K \citep{zhou2017scene} and COCO-Stuff-10K \citep{caesar2018coco}, having 150 and 171 semantic classes, respectively. The validation sets of ADE20K and COCO-Stuff-10K consist of 2000 and 1000 images, respectively. For demasking we use the ZITS \cite{dong2022incremental} inpainting model with the checkpoint provided in the official paper repository\footnote{\url{https://github.com/DQiaole/ZITS_inpainting}}. The model was trained on the Places2 \citep{places2} dataset with images resized to 256$\times$256. As a segmentation model $f$ we use Senformer \cite{senformer}, Swin \cite{liu2021Swin}, PSPNet \cite{zhao2017pspnet} and DeepLab v3 \citep{deeplabv3plus2018} (see Section \ref{appendix:additional_coco} in the Appendix for full experimental results). We note that the first two models are based on transformers and obtain near state-of-the-art results on the considered datasets. PSPNet and DeepLab v3 are CNN-based segmentation methods that we consider to demonstrate that \textsc{Demasked Smoothing} is not specific to transformer-based architectures and can be applied to an arbitrary segmentation model. We use the segmentation model implementations provided in the \emph{mmsegmentation} framework \cite{mmseg2020}. For each architecture we select the best-performing checkpoint. An illustration of the image reconstruction and the respective Senformer segmentation can be found in Figure \ref{fig:inpainting}. Since evaluation on different images can be parallelized, we split the images equally among 5 Nvidia Tesla V100-32GB GPUs. The certification for the whole ADE20K validation set with ZITS and Senformer takes around 1.2 hours for certified recovery and 2 hours for certified detection (due to a larger number of masks). We point out that we did not focus on reaching the fastest possible execution time, and it can be further improved by speeding up the corresponding modules.
We provide more details in the Appendix.
\subsection{Evaluation metrics}
\label{section:evaluation_metrics}
For both certified recovery and certified detection we produce a standard segmentation output (without any abstentions) and a corresponding certification map (Figure \ref{fig:cert_map}). In the case of certified detection, the segmentation output remains the same as for the original segmentation model. For certified recovery, the output is obtained by a majority vote over the segmentations of demasked images (Section \ref{section:certification}) and differs from the original model output. We evaluate the mean intersection over union (mIoU) for these outputs. The certification map is obtained by assigning to each certified pixel the corresponding class from the segmentation output and assigning a special \emph{uncertified} label to all non-certified pixels. For each image we evaluate the fraction of pixels which are certified and correct (coincide with the ground truth); \%C is the mean of these fractions over all the images in the dataset. However, in the semantic segmentation task the class frequencies are usually skewed; therefore, global pixel-wise accuracy alone is an insufficient metric.
Matching the certification map separately for each class with the ground truth segmentation of this class in the image allows us to compute a guaranteed lower bound ($cTP$) on the number of per-class true positive pixel predictions ($TP$), i.e.\ those that were correctly classified into this class. If a pixel was certified with the correct class, then this prediction cannot be changed by a patch (or, alternatively, the change will be detected by the verification function $v$ in certified detection). We consider the per-class \emph{recall} $R=TP / (TP + FN)$, where $FN$ is the number of false negative predictions for this class. $P = TP + FN$ is the total area of a given class in the ground truth and does not depend on our prediction. We can evaluate the \emph{certified recall} $cR= cTP / P$, which is a lower bound on the recall $R$. For each class we compute its total recall in the dataset by dividing the total number of $TP$ pixels over all images by the total $P$. By averaging this number over all classes we evaluate the \emph{mean recall} mR and its lower bound, the \emph{certified mean recall} cmR. Evaluating lower bounds on other popular metrics such as mean precision or mIoU cannot be done this way since they depend on an upper bound on the false positive ($FP$) predictions. For the pixels that are not certified we cannot guarantee that they will not be assigned to a certain class; therefore, a non-trivial upper bound on $FP$ cannot be achieved. We leave this direction for future work.
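The recall bookkeeping above can be sketched as follows (per-image toy example; the paper aggregates $TP$ and $P$ over the whole dataset before averaging over classes):

```python
import numpy as np

def certified_mean_recall(gt, pred, cert):
    """gt, pred: (H, W) class maps; cert: (H, W) bool map of certified pixels.
    Returns (mR, cmR) averaged over the classes present in the ground truth."""
    recalls, c_recalls = [], []
    for c in np.unique(gt):
        P = (gt == c).sum()                           # total area of class c
        TP = ((gt == c) & (pred == c)).sum()          # true positives
        cTP = ((gt == c) & (pred == c) & cert).sum()  # certified true positives
        recalls.append(TP / P)
        c_recalls.append(cTP / P)                     # cR = cTP / P <= R
    return float(np.mean(recalls)), float(np.mean(c_recalls))

gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
cert = np.array([[True, False], [True, False]])
print(certified_mean_recall(gt, pred, cert))  # (0.75, 0.5)
```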
Due to our threat model, certifying small objects in the scene can be difficult because they can be partially or completely covered by an adversarial patch in a way that leaves no chance to recover the prediction. To provide an additional perspective on our methods, we also evaluate their mR and cmR specifically for the ``big'' classes, which occupy on average more than 20\% of the image in which they appear. These are, for example, road, building, train, and sky, which are important for understanding the scene. The full list of such classes for each dataset is provided in the Appendix.
\begin{table}
\caption{\label{tab:det} The certified detection results (\%) for a patch occupying no more than 1\% of the image. mIoU: mean intersection over union; mR: mean recall; cmR: certified mean recall; \%C: mean percentage of certified and correct pixels in the image. mIoU and mR are the clean model metrics.}
\vspace{+2mm}
\centering
\begin{tabular}{lll|c|cc|cc|c}
\toprule
\multirow{2}{*}{dataset} & \multirow{2}{*}{segm} & \multirow{2}{*}{mask} & \multirow{2}{*}{mIoU} & \multicolumn{2}{c|}{big} & \multicolumn{2}{c|}{all} & \multirow{2}{*}{\%C} \\
& & & & mR & cmR & mR & cmR & \\
\midrule
\multirow{6}{*}{ADE20K \citep{zhou2017scene}} & \multirow{2}{*}{Senformer \citep{senformer}} & col & \multirow{2}{*}{53.08} & \multirow{2}{*}{70.19} & \textbf{59.95} & \multirow{2}{*}{63.98} & \textbf{32.90} & \textbf{64.36} \\
& & row & & & 51.23 & & 26.48 & 59.30 \\
\cmidrule{2-9}
& \multirow{2}{*}{PSPNet \citep{zhao2017pspnet}} & col & \multirow{2}{*}{44.39} & \multirow{2}{*}{61.83} & \textbf{50.02} & \multirow{2}{*}{54.74} & \textbf{26.37} & \textbf{60.57} \\
& & row & & & 42.44 & & 19.88 & 54.62 \\
\cmidrule{2-9}
& \multirow{2}{*}{Swin \citep{liu2021Swin} } & col & \multirow{2}{*}{48.13} & \multirow{2}{*}{68.51} & \textbf{55.45} & \multirow{2}{*}{59.13} & \textbf{29.06} & \textbf{61.44} \\
& & row & & & 47.21 & & 22.04 & 55.93 \\
\midrule
\multirow{2}{*}{COCO10K \citep{caesar2018coco}} & \multirow{2}{*}{Senformer \citep{senformer}} & col & \multirow{2}{*}{49.13} & \multirow{2}{*}{77.46} & \textbf{63.68} & \multirow{2}{*}{61.31} & \textbf{36.05} & \textbf{53.96} \\
& & row & & & 56.86 & & 31.67 & 50.02 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}{0.22\linewidth}
\includegraphics[width=\linewidth]{graphics/cert_map/000000011968_orig.jpg}
\caption{original image}
\label{fig:orig_cert}
\end{subfigure}
\begin{subfigure}{0.22\linewidth}
\includegraphics[width=\linewidth]{graphics/cert_map/000000011968_rec.jpg}
\caption{recovery map}
\label{fig:rec_cert}
\end{subfigure}
\begin{subfigure}{0.22\linewidth}
\includegraphics[width=\linewidth]{graphics/cert_map/000000011968_cert_det.jpg}
\caption{detection map}
\label{fig:det_cert}
\end{subfigure}
\begin{subfigure}{0.22\linewidth}
\includegraphics[width=\linewidth]{graphics/cert_map/000000011968_gt.jpg}
\caption{ground truth}
\label{fig:gt}
\end{subfigure}
\caption{ Certification maps}
\label{fig:cert_map}
\end{figure}
\begin{table}[ht]
\caption{\label{tab:rec} The certified recovery results (\%) against a 0.5\% patch. 3-mask and 4-mask correspond to $T=3$ and $T=4$, respectively (Figure \ref{fig:masking}).}
\vspace{+2mm}
\centering
\begin{tabular}{lll|c|cc|cc|c}
\toprule
\multirow{2}{*}{dataset} & \multirow{2}{*}{segm} & \multirow{2}{*}{mask} & \multirow{2}{*}{mIoU} & \multicolumn{2}{c|}{big} & \multicolumn{2}{c|}{all} & \multirow{2}{*}{\%C} \\
& & & & mR & cmR & mR & cmR & \\
\midrule
\multirow{12}{*}{ADE20K \citep{zhou2017scene}} &\multirow{4}{*}{Senformer \citep{senformer}} & col & \textbf{24.79} & \textbf{64.91} & \textbf{40.78} & \textbf{29.69} & \textbf{13.23} & \textbf{48.59} \\
& & row & 14.39 & 42.99 & 20.55 & 17.57 & 5.56 & 32.93 \\
& & 3-mask & 19.45 & 61.62 & 27.38 & 23.49 & 7.07 & 40.22 \\
& & 4-mask & 16.45 & 51.81 & 20.51 & 19.88 & 5.51 & 36.55 \\
\cmidrule{2-9}
& \multirow{4}{*}{PSPNet \citep{zhao2017pspnet}} & col & \textbf{19.17} & \textbf{51.90} & \textbf{34.11} & \textbf{23.66} & \textbf{10.76} & \textbf{44.90} \\
& & row & 12.00 & 36.26 & 12.03 & 15.03 & 3.74 & 28.29 \\
& & 3-mask & 15.00 & 44.93 & 19.55 & 18.41 & 5.58 & 35.85 \\
& & 4-mask & 12.74 & 40.41 & 15.86 & 15.87 & 4.14 & 31.22 \\
\cmidrule{2-9}
& \multirow{4}{*}{Swin \citep{liu2021Swin}} & col & \textbf{22.43} & \textbf{59.75} & \textbf{34.88} & \textbf{27.09} & \textbf{11.70} & \textbf{46.14} \\
& & row & 13.58 & 42.88 & 15.13 & 16.70 & 4.46 & 30.64 \\
& & 3-mask & 17.06 & 51.03 & 24.15 & 20.74 & 6.65 & 38.27 \\
& & 4-mask & 14.77 & 46.67 & 17.74 & 10.05 & 4.72 & 34.04 \\
\midrule
\multirow{4}{*}{COCO10K \citep{caesar2018coco}} &\multirow{4}{*}{Senformer \citep{senformer}} & col & \textbf{30.44} & \textbf{71.04} & \textbf{44.66} & \textbf{39.73} & \textbf{15.96} & \textbf{34.83} \\
& & row & 27.84 & 66.58 & 24.70 & 37.23 & 9.25 & 22.64 \\
& & 3-mask & 26.54 & 66.61 & 32.68 & 35.24 & 10.64 & 28.82 \\
& & 4-mask & 25.73 & 66.12 & 29.83 & 34.10 & 8.57 & 24.51 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/limitation/limitations_det.png}
\caption{certified detection}
\label{fig:limitation_det}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/limitation/limitations_rec.png}
\caption{certified recovery}
\label{fig:limitation_rec}
\end{subfigure}
\caption{Performance for different adversarial patch sizes evaluated on 200 ADE20K images.}
\label{fig:limitation}
\end{figure}
\subsection{Discussion}
The results for certified detection and certified recovery can be found in Table \ref{tab:det} and Table \ref{tab:rec}, respectively. In Table \ref{tab:det} we see that for certified detection the column masking (Figure \ref{fig:det_col}) achieves better results than row masking in all settings. For all models we can certify around 60\% of pixels on ADE20K and 50\% on COCO10K. We also achieve 63.68 and 36.05 cmR for the big and all classes, respectively, on COCO10K with Senformer. In Table \ref{tab:rec} we see that for certified recovery the column masking ($T=2$, see Section \ref{section:input_masking}) achieves the highest results in all settings. However, row masking, which also satisfies $T=2$, achieves worse results than the $T=3,4$ schemes on ADE20K. This suggests that not only the fraction of the image that is visible but also the mask structure affects the certification performance of \textsc{Demasked Smoothing}. For the recovery task, we also observe that majority voting achieves 24.79 mIoU on ADE20K and 30.44 on COCO10K, compared to the original model values of 53.08 and 49.13, respectively. For Senformer on ADE20K we can on average certify almost 50\% of the pixels and achieve cmR of 40.78 and 13.23 for big and all classes, respectively. Figure \ref{fig:limitation} shows how the performance of \textsc{Demasked Smoothing} depends on the patch size. We see that the certified detection metrics remain high even for a patch as big as 5\% of the image surface, while for recovery they slowly deteriorate as we increase the patch size to 2\%.
\subsection{Limitations} \label{section:limitations}
The performance of \textsc{Demasked Smoothing} certified recovery may be insufficient for the downstream task if we certify against big patches (Figure \ref{fig:limitation}), unless robustness is prioritized over clean performance. The certification time, which depends on the speed of the reconstruction and segmentation models, may exceed the given quota in time-sensitive applications.
\section{Conclusion} \label{section:consclusion}
In this work, we propose the first (to the best of our knowledge) certified defences against patch attacks on segmentation models. We implement them in a framework that we call \textsc{Demasked Smoothing}. Due to its novel design based on masking schemes and image demasking, \textsc{Demasked Smoothing} is compatible with any segmentation model and can on average certify 64\% of the pixel predictions for a 1\% patch in the detection task and 48\% against a 0.5\% patch for the recovery task on the ADE20K dataset.
\textbf{Ethical and Societal Impact} This work contributes to the field of certified defences against physically-realizable adversarial attacks. The proposed approach makes it possible to certify the robustness of safety-critical applications such as medical imaging or autonomous driving. The defence might be used to improve the robustness of systems used for malicious purposes such as (semi-)autonomous weaponry or unauthorized surveillance. This danger may be mitigated, e.g.\ by using a system of sparsely distributed patches, which makes certifying the image more challenging. All activities in our organization are carbon neutral, so the experiments performed on our GPUs do not leave any carbon dioxide footprint.
\section*{Acknowledgements}
Matthias Hein is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645 and of the BMBF Tübingen AI Center, FKZ: 01IS18039B.
nucl-th/0610040
\section{Introduction}
Skyrme forces \cite{Skyrme,Vau,Engel_75,Dob} are widely used for the
self-consistent description of ground state and excitations of atomic nuclei
(for a recent review see \cite{Ben}). Recently, there has been increasing
interest in applications to the dynamics of exotic deformed nuclei, see e.g.
\cite{Stoitsov_PRC_03,Obertelli_PRC_05,Maruhn_PRC_05}. The demands on the
reliability and quality of the description are higher now than in the
pioneering earlier studies. This calls for closer inspection of the dynamical
properties of Skyrme forces. For example, there are still several open problems
related to the description of giant resonances, particularly for
isovector modes \cite{Rei_NPA_99}, for which the giant dipole resonance (GDR)
is the most prominent representative. Already for the description of the GDR,
there still exist several puzzling features. For example, some trends of GDR
properties with nuclear matter characteristics look at first glance surprising
(microscopic calculations \cite{Rei_NPA_99} show a
decrease of the GDR energy with increasing symmetry energy while macroscopic
estimates predict the opposite trend \cite{Bertsch_book}). Another puzzling
feature is that some Skyrme parameterizations (like SkM$^*$) yield a too large
high-energy shoulder of the dipole strength distribution in heavy nuclei
\cite{Maruhn_PRC_05,srpa_PRC_02}. Moreover, the influence of the effective
mass and related time-odd couplings in the Skyrme forces on the GDR spectra has
not yet been fully clarified and deserves closer inspection.
The aim of this contribution is to explore more closely these GDR properties
in heavy deformed nuclei.
Until recently, the treatment of excitations in deformed nuclei within
self-consistent models was rather involved and time consuming. Meanwhile,
a new generation of efficient RPA schemes has emerged
\cite{Stoitsov_PRC_03,Obertelli_PRC_05,srpa_PRC_02,nes_long,prep_05},
which now allow systematic studies. In this contribution, we
will exploit one of these schemes, the separable random-phase-approximation (SRPA).
This method drastically simplifies the calculations and, at the same time,
provides high accuracy. It can be used for both spherical
\cite{srpa_PRC_02} and deformed \cite{nes_long,prep_05} nuclei.
Here we will apply it to study trends and spectral patterns
of the GDR in heavy deformed nuclei $^{150}$Nd and $^{238}$U.
Descriptions for four different Skyrme forces will be compared and
scrutinized.
\section{Details of Calculations}
The explicit form of the Skyrme functional used in our study is given
elsewhere \cite{srpa_PRC_02,Re92}. The calculations are performed for
the Skyrme forces SkT6 \cite{skt6}, SkM$^*$ \cite{skms}, SLy6 \cite{sly6}
and SkI3 \cite{ski3}. Though these forces were fitted with different
biases, they all provide a good overall description of nuclear bulk
properties and are suitable for deformed nuclei (see review
\cite{Ben}). For our aim it is important that this selection of forces
covers different values of key characteristics of nuclear matter, as
shown in Table 1. We thus cover a large span of the effective
masses (isoscalar as well as isovector) and some variation of
the symmetry energy.
\begin{table}
\caption{\label{tab:skyrme}
Nuclear matter and deformation properties for the Skyrme forces
under consideration. The table represents the isoscalar effective
mass $m_0^*/m$, symmetry energy $a_{\rm sym}$, density dependence
of the symmetry energy $a'_{\rm sym}=d a_{\rm sym}/d\rho$, sum rule
enhancement factor $\kappa$, isovector effective mass
$m_1^*/m=1/(1+\kappa)$, and quadrupole moments $Q_2$ in $^{150}$Nd
and $^{238}$U.
The experimental values of $Q_2$ are taken from \protect\cite{Goldhaber}.
}
\begin{ruledtabular}
{\begin{tabular}{@{}c|ccccc|cc@{}}
Forces & $m_0^*/m$ & $a_{\rm sym}$ [MeV]&
$a'_{\rm sym}$ [MeV\,fm$^3$] &
$\kappa$ & $m_1^*/m$ & \multicolumn{2}{c}{$Q_2$ [b]}\\
& & & & & & $^{150}$Nd & $^{238}$U \\
\hline
SkT6 & 1.00 & 30.0 & 63 & 0.001 & 1.00 & 6.0 & 11.1 \\
SkM$^*$ & 0.79 & 30.0 & 95 & 0.531 & 0.65 & 6.2 & 11.1 \\
SLy6 & 0.69 & 32.0 & 100 & 0.250 & 0.80 & 5.8 & 11.0 \\
SkI3 & 0.58 & 34.8 & 212 & 0.246 & 0.80 & 5.9 & 11.0 \\
\hline
exp. & & & & & & 5.2 & 11.1 \\
\end{tabular}}
\end{ruledtabular}
\end{table}
The calculations employ a cylindrical coordinate-space grid with the
mesh size 0.7 fm. Pairing is treated at the BCS level. The ground
state deformation is determined by minimizing the total energy. As is
seen from Table 1, all four Skyrme forces give a reasonable
quadrupole moment in $^{238}$U. The calculated moment in $^{150}$Nd is
somewhat overestimated. However, this nucleus is rather soft and
it is difficult to expect here a precise agreement with the
experiment. The modest overestimation is not a principal obstacle for
our study.
The dipole response involves contributions from both time-even (nucleon
density $\rho_s$, kinetic-energy density $\tau_s$, and spin-orbit density
$\Im_s$) and time-odd (current $\vec{j}_s$ and spin $\vec{\sigma}_s$)
densities, where $s$ denotes protons and neutrons. Besides, the
contributions from the pairing densities $\chi_s$ are taken into account.
The contributions of the time-odd densities are driven by the variations
\begin{equation}
\frac{\delta^2 E}{\delta \vec{j}_{s_1}\delta \vec{j}_{s}}, \quad
\frac{\delta^2 E}{\delta \vec{\sigma}_{s_1}\delta \vec{j}_{s}}, \quad
\frac{\delta^2 E}{\delta \vec{j}_{s_1}\delta \vec{\sigma}_{s}}
\end{equation}
of the Skyrme functional terms \cite{srpa_PRC_02,Re92}
\begin{eqnarray}
&&b_1 (\rho \tau - \vec{j}^2)
- b'_1 \sum_s (\rho_s \tau_s - \vec{j}_s^2)
\\
&-& b_4 \left( \rho (\vec{\nabla}\vec{{\Im}})
+ \vec{\sigma} \cdot (\vec{\nabla} \times \vec{j})\right)
- b'_4 \sum_s \left( \rho_s(\vec{\nabla} \vec{\Im}_s)
+ \vec{\sigma}_s \cdot (\vec{\nabla} \times \vec{j}_s) \right)
\nonumber
\end{eqnarray}
where the total densities (like $j=j_p+j_n$) are given without the
index. As was shown in refs. \cite{Engel_75,Dob}, the time-odd
densities naturally belong to the Skyrme functional if it involves all
the possible bilinear combinations of the nucleon and spin densities
together with their derivatives up to the second order. The time-odd
densities enter the functional only in specific combinations, as a
complement to the time-even ones, so as to keep the Galilean and gauge
invariance of the Skyrme forces.
The present SRPA calculations are performed in the approximation of
two generating operators, which allows us to cover the dynamics in both
the surface and the interior of the nucleus \cite{nes_long}. The dipole
response is computed as the photo-absorption energy-weighted strength
function
\begin{equation}\label{eq:strength_function}
S(E\lambda\mu ; \omega)
=
\sum_{\nu}
\omega_{\nu} M_{\lambda\mu \nu}^2 \zeta(\omega - \omega_{\nu})
\end{equation}
where
\begin{equation}
\zeta(\omega - \omega_{\nu})
=
\frac{1}{2\pi}
\frac{\Delta}{(\omega - \omega_{\nu})^2 + (\Delta/2)^2}
\label{eq:lorfold}
\end{equation}
is the Lorentz weight with the averaging parameter $\Delta$,
$M_{\lambda\mu \nu}$ is the matrix element of $E\lambda\mu$ transition
from the ground state to the RPA state $|\nu\rangle$, and $\omega_{\nu}$ is the
RPA eigenenergy. By using the SRPA technique
\cite{nes_long,prep_05}, we directly compute the strength function
with the Lorentz weight. This dramatically reduces the computation time.
For example, by using a PC with CPU Pentium 4 (3.0 GHz)
we need about 25 minutes for the complete calculations of the GDR in
$^{238}$U.
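The strength function above amounts to folding the discrete RPA spectrum with a Lorentzian; a minimal NumPy sketch with hypothetical state energies and matrix elements (not values from the actual calculation):

```python
import numpy as np

def lorentz(omega, omega_nu, delta):
    """Lorentz weight zeta(omega - omega_nu) with averaging parameter Delta."""
    return (delta / (2.0 * np.pi)) / ((omega - omega_nu) ** 2 + (delta / 2.0) ** 2)

def strength_function(omega_grid, omega_nu, m_nu, delta=2.0):
    """Energy-weighted S(omega) = sum_nu omega_nu * M_nu^2 * zeta(omega - omega_nu)."""
    S = np.zeros_like(omega_grid)
    for w_n, m_n in zip(omega_nu, m_nu):
        S += w_n * m_n ** 2 * lorentz(omega_grid, w_n, delta)
    return S

# Toy spectrum: two RPA states (hypothetical energies in MeV / matrix elements).
omega = np.linspace(5.0, 25.0, 401)
S = strength_function(omega, omega_nu=[12.0, 16.0], m_nu=[1.0, 0.7], delta=2.0)
print(omega[np.argmax(S)])  # peak sits near the stronger state at ~12 MeV
```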
The isovector dipole response is computed with the proton and neutron
effective charges $e_p^{\rm eff}=N/A$ and $e_n^{\rm eff}=-Z/A$, where $Z$ and $N$
are the numbers of protons and neutrons and $A$ is the mass number. The
isoscalar spurious mode (center of mass) is located at 2-3 MeV and
thus is safely separated from the isovector one. We use a large
configuration space including all proton and neutron levels from the
bottom of the potential well up to $\sim +16$ MeV. This results in
$\sim 7300$ ($^{150}$Nd) and $\sim 9500$ ($^{238}$U) two-quasiparticle
(2qp) configurations in the energy interval 0--100 MeV. In both
nuclei, the energy-weighted sum rule $EWSR=9NZ\hbar^2 e^2/(8A\pi m^*)$
is exhausted by 99$\%$, 97$\%$, 93$\%$, and 89$\%$ for SkT6, SkM$^*$,
SLy6, and SkI3 forces, respectively.
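As a small numeric cross-check (our own sketch, using the standard constant $\hbar^2/2m \approx 20.74$ MeV\,fm$^2$ and assuming $m^*=m$, i.e.\ no enhancement, $\kappa=0$), the effective charges and the unenhanced EWSR for $^{238}$U:

```python
import math

HBAR2_OVER_2M = 20.74     # MeV fm^2; standard constant supplied here as an assumption

def ewsr(N, Z):
    """Unenhanced E1 sum rule 9 N Z hbar^2 e^2 / (8 pi A m),
    quoted in units of e^2 fm^2 MeV (the e^2 factor is left symbolic)."""
    A = N + Z
    return 9.0 * N * Z / (4.0 * math.pi * A) * HBAR2_OVER_2M

N, Z = 146, 92            # 238U
e_p = N / (N + Z)         # proton effective charge, in units of e
e_n = -Z / (N + Z)        # neutron effective charge
# Z*e_p + N*e_n = 0: the spurious center-of-mass mode carries no E1 strength.
print(f"{e_p:.3f} {e_n:.3f} {ewsr(N, Z):.0f}")  # 0.613 -0.387 838
```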
\section{Results and Discussion}
Results of the calculations are presented in Figs. 1-3.
Figure 1 exhibits the results for the dipole strength computed with a
width parameter $\Delta$= 2 MeV. This width is supposed to simulate
line broadening from nucleon escape as well as two-body collisions and
it is found to be suitable for the comparison with the experimental
data. For comparison, we show also the unperturbed dipole strength deduced
from the pure two-quasi-particle (2qp) excitations.
The figure shows
a strong dependence of the dipole response on the Skyrme
forces. This dependence is particularly pronounced for the unperturbed
strength where it is obviously related to the isoscalar effective mass
$m_0^*/m$. Low effective masses yield a stretched single particle
spectrum (see e.g. \cite{nest_PRC_mom}) leading to large 2qp energies.
Following this trend, the unperturbed 2qp strength in Fig. 1 exhibits
a systematic shift to higher energy from SkT6 to SkI3.
The residual
interaction in the isovector dipole channel is repulsive and moves
the dipole strength to higher energies. The corresponding collective
energy shift is determined by the isovector parameters listed in
Table 1. What is remarkable, there is a clear correlation in the
dependencies of the 2qp strength and the collective shift on the
Skyrme force. The dependencies are opposite. A small effective mass
$m_0^*/m$ stretches the 2qp spectrum and, at the same time, we have
reduction of the residual interaction driven by $m_1^*/m$. Thereby,
both effective masses contribute and maybe even correlate. The latter
might be justified by the fact that both effective masses originate from one and
the same term of the Skyrme functional $\sim b_1, b'_1$. After all, we
obtain similar GDR energies for all the forces, in fair agreement with
the experimental data \cite{nd_e1_exp,u_e1_exp}. These results
corroborate the experience from spherical nuclei that SRPA with Skyrme
forces provides a reasonable description of the GDR in heavy nuclei
\cite{srpa_PRC_02}.
\begin{figure}[th]
\centerline{\psfig{file=fig1.eps,width=10cm,angle=-90}} \vspace*{8pt}
\caption{ The dipole giant resonance in $^{150}$Nd and $^{238}$U, calculated
with the Skyrme forces SkT6, SkM$^*$, SLy6 and SkI3. The calculated strength
(solid curve) is compared with the experimental data
\protect\cite{nd_e1_exp,u_e1_exp} (triangles). The quasiparticle
(unperturbed) strength is denoted by the dotted curve. The Lorentz averaging
parameter is $\Delta$=2 MeV.}
\end{figure}
Having a closer look at the full strengths in Fig. 1, we still see
some differences between the SkT6, SkM$^*$, SLy6 and SkI3 cases. There is
a small shift of the average peak position as well as different strength
patterns. The most prominent peculiarities can be explained by the
isovector parameters listed in Table 1. For example,
the exceptionally large collective shift in SkM$^*$ can be related
to the very low isovector effective mass $m^*_1/m=$0.65 of this force
or, equivalently, to the high value of the sum rule enhancement
factor $\kappa =$0.531. It is seen that this case drastically deviates
from the SkT6 one where the impact of the isovector parameters
is negligible ($m^*_1/m=$1.0 and $\kappa =$0.001).
The trend of the average peak position deserves a special analysis. It
can be related to the isovector parameters in the combination
$\sqrt{a_{\rm sym,act}/m^*_1}$ where
$a_{\rm sym,act}\approx a_{\rm sym}-a'_{\rm sym}\rho_{\rm nm}/2$ is an
estimate for the actual symmetry energy in a heavy nucleus and the
density $\rho_{\rm nm}/2$ at the nuclear surface determines the GDR
response. This predicts the ordering of the peak energies, with
SkM$^*$ the highest, SLy6 lower, and SkT6
and SkI3 the lowest, in agreement with Fig. 1.
Note that this explanation takes care of the ``actual''
symmetry energy whose trend deviates from the trend of the volume
symmetry energy $a_{\rm sym}$ (which then delivers the wrong trend in
macroscopic estimates of the GDR peak \cite{Bertsch_book}).
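The combination $\sqrt{a_{\rm sym,act}/m^*_1}$ can be turned into a simple numerical proxy for the peak-energy ordering. The parameter values below are purely illustrative placeholders (they are not the actual Table 1 entries):

```python
import math

def peak_proxy(a_sym_act, m1_ratio):
    """Proxy for the GDR peak energy, E_peak ~ sqrt(a_sym_act / (m1*/m))."""
    return math.sqrt(a_sym_act / m1_ratio)

# purely illustrative placeholder parameters (NOT the Table 1 values):
# force label: (a_sym_act in MeV, m1*/m)
forces = {"A": (28.0, 1.00), "B": (30.0, 0.65), "C": (29.0, 0.80)}
ranking = sorted(forces, key=lambda f: peak_proxy(*forces[f]), reverse=True)
print(ranking)  # a low m1*/m and a large a_sym,act push the peak up
```

The proxy makes the two competing trends explicit: the peak rises with the actual symmetry energy and falls with the isovector effective mass.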
The detailed pattern of the strength distributions is caused by the
fragmentation of the bulk dipole peak over energetically close 2qp states.
Fig. 1 shows that the spectral details considerably vary with the force.
This is most visible in the SkM$^*$ case, where an unrealistically high
right GDR shoulder and a consequent overestimation of the resonance
width appear, especially in $^{150}$Nd. The effect
becomes weaker for SLy6 (thus giving the best description of GDR) and
vanishes for SkI3 (already with underestimation of the resonance
width). The appearance of the right shoulder for some Skyrme forces,
preferably those with a large effective mass, was also noted in the
calculations for deformed rare-earth and actinide nuclei within the
full (non-separable) Skyrme RPA \cite{Maruhn_PRC_05} and in the SRPA
calculations for $^{208}$Pb \cite{srpa_PRC_02}. This effect seems to
be universal for the GDR in heavy nuclei, independently of their shape.
\begin{figure}[th]
\centerline{\psfig{file=fig2.eps,width=10cm,angle=-90}}
\vspace*{8pt}
\caption{
The dipole giant resonance in $^{150}$Nd and $^{238}$U, calculated with
the Skyrme forces SkT6, SkM$^*$, SLy6 and SkI3. Plots a) exhibit
the full strength (solid curve) and its branches $\mu = 0$
(left dotted structure) and $\mu = 1$ (right dotted structure). Plots b) exhibit
the full strength with (solid curve) and without (dotted curve) the contribution
from the proton sub-shell $1g_{9/2}$ and the neutron sub-shell $1h_{11/2}$.
The Lorentz averaging
parameter is $\Delta$=1 MeV.
}
\end{figure}
The right shoulder of the GDR is studied in more detail in Fig. 2. To
provide a more detailed description, we use here a smaller averaging
of $\Delta$= 1 MeV. The left panels of the figure show the two
GDR branches with projections $\mu=0$ and $\mu=1$.
It is seen that the right shoulder effect is not
caused by the deformation splitting since it appears only
in the branch with $\mu =1$. Instead, it is rather a consequence of
the strength fragmentation. Following Fig. 2a, the branch $\mu =1$
undergoes a dramatic transformation
from SkM$^*$ to SkI3: its strength flows from the right to the left
flank. To understand such behavior, we should take into account that
the {\it unperturbed} dipole strength also has some kind of a right
shoulder or a strong right tail (see Fig. 1) and this may cause a
considerable fragmentation. SkM$^*$ produces the maximal
collective shift and thus places most of the $\mu =1$ strength beyond
the tail. This minimizes the fragmentation and collects the
strength into a narrow peak. Vice versa, the SkI3 collective shift
is small and insufficient to push the strength beyond the tail, so in
this case the strength is fragmented among nearby 2qp
pairs and we no longer observe a large concentration of
strength at the upper end of the spectrum.
\begin{figure}
\centerline{\psfig{file=fig3.eps,width=10cm,angle=-90}}
\vspace*{8pt}
\caption{
The dipole giant resonance in $^{150}$Nd and $^{238}$U, calculated with
the Skyrme forces SkT6, SkM$^*$, SLy6 and SkI3. The full strength is
calculated with (solid curve) and without (dashed curve) the current
contribution. The Lorentz averaging parameter is $\Delta$=2 MeV.
}
\end{figure}
It is interesting to figure out which single-particle states are
responsible for the right flank of GDR. A simple analysis shows that
these are the occupied intruder states $j=l+1/2$ with the largest orbital
angular momentum $l$.
These sub-shells dive into the valence shell due to the strong
spin-orbit splitting and, the heavier the nucleus, the stronger their
impact. In $^{150}$Nd, the intruder sub-shells are $1g_{9/2}$ for
protons and $1h_{11/2}$ for neutrons. In $^{238}$U, they are
$1h_{11/2}$ and $1i_{13/2}$, respectively. Fig. 2b shows that
excluding these sub-shells indeed weakens the right flank of the GDR. It
is worth noting that such sub-shells should manifest themselves in the GDR of
heavy nuclei independently of the nuclear shape. The same effect
appears in spherical heavy nuclei as well.
As a next step, we consider the influence of time-odd densities in the
residual interaction. Following our calculations, only the
current-current contribution $\delta^2 E/\delta \vec{j}_{s'}\delta
\vec{j}_{s}$ is essential, while contributions related to the spin
density are negligible. So, we will discuss only the effect of the
current-current term $\sim (b_1+b'_1\delta_{s,s'})$. Fig. 3 shows that
the time-odd contribution strongly depends on the force and, moreover,
obviously correlates with the differences in strength distributions
presented in Fig. 1. This is not surprising since the squared current
density enters the terms $\sim b_1, b'_1$ of the Skyrme
functional, and precisely the time-even partners in these terms are
responsible for the effective masses, see Eq. (2). As is seen from
the figure, the time-odd effect is negligible for SkT6 (where
$m^*_0/m=m^*_1/m=1$ and thus the influence of the effective masses is
minimal) but quite strong for other forces. Besides, the effect may
have different sign (compare SkM$^*$ versus SLy6 and SkI3). The
surprisingly large influence of the time-odd terms (and of the effective
mass) can be understood by the fact that the dominant contributions to
the dipole response from the principal Skyrme terms have different signs
and thus considerably compensate each other \cite{nes_long}. Hence
small effects, like the time-odd terms, acquire more weight.
It is remarkable that the time-odd densities and effective masses
affect the same part of the GDR, namely its right flank. As was shown above,
this part of the resonance strongly depends on the intruder states with a high
orbital angular momentum. Such states should be very sensitive to any
velocity-dependent quantities and hence to the current and the effective
masses. It is then clear why just the right GDR flank is mainly affected
by these quantities. This property of the GDR can be effectively used for
additional tests of the isovector parameters.
Altogether, we see that the average peak position is basically
determined by the isovector parameters (symmetry energy, isovector
effective mass), while the detailed fragmentation pattern is also
strongly influenced by the isoscalar effective mass. A systematic
analysis of dipole strength distribution over many spherical and
deformed nuclei will help us to learn more about the underlying
single particle spectra and dynamics.
\section{Conclusions}
The giant dipole resonance (GDR) in deformed nuclei has been
investigated within the separable RPA (SRPA) method. Four different
Skyrme forces (SkT6, SkM$^*$, SLy6, and SkI3) with different nuclear
matter characteristics (symmetry energy, isoscalar and isovector
effective mass) were applied. As the test cases, the typical axially
deformed nuclei, $^{150}$Nd and $^{238}$U, were considered.
SRPA was found to provide a good description of the GDR and,
importantly, with a minimal computational effort. This method is
indeed an efficient theoretical tool for systematic exploration of
the dynamics of deformed nuclei.
All four Skyrme forces in our sample reproduce in general the
average position of the GDR strength and its two-bump structure.
There are some trends in the peak positions (shifts about $\pm$ 1 MeV)
which can be explained by the different isovector properties of the
forces.
At closer inspection, we see considerable differences in the GDR width
and fragmentation pattern. For example, some forces as, e.g., SkM$^*$
result in a too high right shoulder of the GDR strength distribution.
We show that height and position of this shoulder are determined by
high-angular-momentum states which are shifted down into the valence
shell by the strong spin-orbit force. Thus many factors
(effective mass, spin-orbit force) have a significant influence on the
fragmentation pattern of the spectra, especially at the right flank of
the GDR. This feature, in turn, provides useful information for
determination of the Skyrme parameters and
exploration of the possible correlations.
We have also discussed the effect of the time-odd terms on the GDR
profile. The time-odd spin-orbit terms have only negligible
influence. The current-current coupling, however, contributes
substantially, and the lower the isoscalar effective mass, the
stronger the contribution. This effect is crucial to
counterbalance the spectral stretching of the unperturbed
excitations.
As we have seen, the detailed pattern of the resonance spectra carries worthwhile
information on the underlying single-particle spectrum of the self-consistent
mean field. The newly developed SRPA code for deformed nuclei provides access
to a much larger pool of data on nuclear giant resonances. This allows more
systematic investigations to disentangle various influences and improve
description of nuclear excitation properties. Work in that direction is in
progress.
\section*{Acknowledgements}
The work was supported by the DFG grant GZ:436 RUS 17/104/05,
Heisenberg-Landau (Germany-BLTP JINR) grants for the years 2005 and 2006,
and the BMBF, contracts 06 DD 119 and 06 ER 808, and by the research plan MSM
0021620834 of the Czech Republic.
|
physics/0610143
|
\section{Introduction\label{intro}}
Reciprocity, which was first found by Lorentz at the end of 19th century,
has a long history\cite{Potton} and has been derived in several formalisms.
There are two typical reciprocal configurations in optical responses
as shown in Fig.~\ref{fig1}. The configurations in Figs.~\ref{fig1}(a)
and \ref{fig1}(b) are transmission reciprocal
and those in Figs.~\ref{fig1}(a)
and \ref{fig1}(c) are reflection reciprocal. As shown in Fig.~\ref{fig1},
we denote transmittance by $T$ and reflectance by $R$; the subscripts ${\rm k}$ and
$\theta$ stand for the incident wavenumber vector and angle, respectively.
The reciprocal configurations are obtained by symmetry operations
on the wavenumber vector of the incident light: $(k_x,\, k_z) \to
(-k_x,\, -k_z)$ or $(-k_x,\, k_z)$.
Reciprocity on transmission means that $T_{\rm k} = T_{\rm -k}$,
and that on reflection is expressed as $R_{\theta} = R_{-\theta}$, which is not intuitively obvious and is frequently surprising to students.
The most general proof
was published by Petit in 1980,\cite{Petit} where reciprocal reflection
as shown in Fig.~\ref{fig1} is derived
for asymmetric gratings such as an echelette grating.
On the basis of the reciprocal relation for
the solutions of the Helmholtz equation,
the proof showed that reciprocal reflection holds for
periodic objects irrespective of absorption. It seems difficult
to apply the proof to transmission because it would be
necessary to construct solutions of Maxwell equations that satisfy
the boundary conditions at the interfaces of the incident, grating, and
transmitted layers.
The history of the literature on reciprocal optical responses
has been reviewed in Ref.~\onlinecite{Potton}.
Since the 1950s, scattering problems regarding light, elementary particles,
and so on have been addressed by using the scattering matrix (S-matrix).
In the studies employing the S-matrix, it is assumed
that there is no absorption by the object. The assumption leads to the
unitarity of the S-matrix and makes it possible to prove reciprocity.
The reciprocal reflection of lossless objects
was verified in this formalism.\cite{Gippius}
In this paper we present a simple, direct, and general derivation
of the reciprocal optical responses for transmission and reflection
relying only on classical electrodynamics.
We start from the reciprocal theorem described in Sec.~\ref{thm} and derive
the equation for zeroth order transmission and reflection coefficients
in Sec.~\ref{proof}. The equation is essential to the reciprocity.
A numerical and experimental example of reciprocity is presented in
Sec.~\ref{example}.
The limitations and breakdown of reciprocal optical responses
are also discussed.
\section{Reciprocal Theorem\label{thm}}
The reciprocal theorem has been proved in various fields, such as statistical
mechanics, quantum mechanics, and electromagnetism.\cite{Landau}
Here we introduce the theorem for electromagnetism.
When two currents exist as in Fig.~\ref{fig2} and the induced
electromagnetic (EM) waves travel in linear and locally responding media
in which
$D_i({\bf r}) = \sum_j \varepsilon_{ij}E_j({\bf r})$ and
$B_i({\bf r}) = \sum_j \mu_{ij}H_j({\bf r})$, then
\begin{equation}
\int {\bf j}_1({\bf r})\cdot {\bf E}_2({\bf r}) d{\bf r} =
\!\int {\bf j}_2({\bf r})\cdot {\bf E}_1({\bf r}) d{\bf r}.\label{reci}
\end{equation}
Equation~\eqref{reci} is the reciprocal theorem in electromagnetism.
The proof shown in
Ref.~\onlinecite{Landau} exploits plane waves and is straightforward.
Equation~(\ref{reci}) is valid even for media with losses.
The integrands take non-zero values at the position ${\bf r}$
where currents exist, that is, ${\bf j}_i({\bf r})\neq{\bf 0}$.
The theorem indicates the reciprocity between
the two current sources ${\bf j}_i$ ($i = 1,2$) and the induced EM waves ${\bf E}_i$
which are observed at the position of the other source ${\bf j}_k$
($k \neq i$).
\section{Reciprocal Optical Responses\label{proof}}
In this section, we apply the reciprocal theorem to optical responses
in both transmission and reflection configurations.
First, we define the notation used in the calculations of the integrals in
Eq.~(\ref{reci}).
An electric dipole oscillating at the frequency $\omega$
emits dipole radiation, which is detected in the far field. When
a small dipole ${\bf p}$ along the $z$ axis is located at the origin,
it is written as
${\bf p}(t)=p(t)\ev_z$ and $p(t) = p_0 e^{i\omega t}$, where $\ev_z$ denotes
the unit vector along the $z$ axis and $p_0$ the magnitude of the dipole.
The dipole in vacuum emits radiation, which in the far field is
\begin{subequations}
\begin{align}
{\bf E}({\bf r},t) &= \frac{1}{4\pi\varepsilon_0}
\frac{\ddot{p}(t')}{c^2 r}\sin\theta \cdot\ev_{\theta} \\
&= \frac{-1}{4\pi\varepsilon_0}\frac{p_0\,\omega^2}{c^2 r}
e^{i\omega t'}\sin\theta \cdot\ev_{\theta} , \label{dipole}
\end{align}
\end{subequations}
where polar coordinates ($r$, $\theta$, $\phi$) are used, a unit vector is
given by
$\ev_{\theta} = (\cos\theta\cos\phi,\cos\theta\sin\phi,-\sin\theta)$,
and $t'= t-r/c$. Because the dipole ${\bf p}$ is defined by
${\bf p}({\bf r},t) = \!\int {\bf r}\rho({\bf r},t) d{\bf r}$
and conservation of charge density is given by
$\nabla\cdot{\bf j} + \partial\rho/\partial t = 0$, we obtain
the current ${\bf j}$ associated with the dipole ${\bf p}$:
\begin{equation}
{\bf j}({\bf r},t) = \dot{p}(t)\delta({\bf r})\ev_z. \label{j}
\end{equation}
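The far-field dipole formula above is straightforward to evaluate numerically. The sketch below (with arbitrary illustrative values of $p_0$ and $\omega$) simply confirms the characteristic $1/r$ fall-off and $\sin\theta$ radiation pattern.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 2.99792458e8          # speed of light, m/s

def far_field_amplitude(p0, omega, r, theta):
    """|E| of dipole radiation: p0*omega^2*sin(theta)/(4*pi*eps0*c^2*r)."""
    return p0 * omega ** 2 * abs(math.sin(theta)) / (4.0 * math.pi * EPS0 * C ** 2 * r)

# arbitrary illustrative numbers: p0 = 1e-30 C m, optical frequency
p0, omega = 1e-30, 2 * math.pi * 5e14
E1 = far_field_amplitude(p0, omega, 1.0, math.pi / 2)
E2 = far_field_amplitude(p0, omega, 2.0, math.pi / 2)
print(E2 / E1)  # 0.5: the amplitude falls off as 1/r
```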
Consider two arrays of $N$ dipoles (long but finite)
in the $xz$ plane as shown in Fig.~\ref{fig3}.
The two arrays have the same length, and the directions are specified
by normalized vectors ${\bf n}_i$ ($i=1,2$) and
${\bf n}_1 \parallel {\bf n}_2$.
In this case, the current is ${\bf j}_i\parallel {\bf n}_i$.
If the dipoles coherently oscillate with the same phase,
then the emitted electric fields are superimposed and form a wave front
at a position far from the array in the $xz$ plane as drawn
in Fig.~\ref{fig3}.
The electric field vector of the wave front, ${\bf E}_{i,{\rm in}}$,
satisfies ${\bf E}_{i,{\rm in}}\parallel {\bf n}_i$
and travels with wavenumber vector ${\bf k}_{i,{\rm in}}$.
Thus, if we place the dipole arrays far enough from the object,
the induced EM waves become slowly decaying incident plane
waves in the $xz$ plane to a good approximation.
The arrays of dipoles have to be long enough to form the plane wave.
For the transmission configuration, we calculate
$\int {\bf j}_i\cdot {\bf E}_k\,d{\bf r}$ ($i,k=1,2$ and $i \neq k$).
Figure~\ref{fig3} shows a typical transmission configuration, which includes
an arbitrary periodic object asymmetric along the $z$ axis. The relation
between the current ${\bf j}_i$, the direction ${\bf n}_i$ of the dipole,
and the wavenumber vector ${\bf k}_{i,{\rm in}}$ of the wave front is
summarized as ${\bf j}_i \parallel {\bf n}_i$ and
${\bf n}_i \perp {\bf k}_{i,{\rm in}}$.
It is convenient to expand the electric field into a Fourier series
for the calculation of periodic sources:
\begin{equation}
{\bf E}({\bf r}) = \sum_m {\bf E}^{(m)}
\exp(i{\bf k}_m\cdot {\bf r}), \label{E_expand}
\end{equation}
where ${\bf E}^{(m)}$ is the Fourier coefficient of ${\bf E}({\bf r})$,
${\bf k}_m = (k_{x,m},0,k_{z,m}) = (k_{{\rm in},x}+2\pi m/d_x, 0, k_{z,m})$
($m=0,\pm 1,\pm 2, \cdots$), and $d_x$ is the periodicity of the object
along the $x$ axis (see Fig.~\ref{fig3}).
The $z$ component is expressed in homogeneous media in vacuum as
$k_{z,m} = \pm\sqrt{{\bf k}_{\rm in}^2 - k_{x,m}^2}$, where
the signs correspond to the directions along the $z$ axis.
When the dipole array is composed of sufficiently small and
numerous dipoles, the integration can be calculated to good accuracy as
\begin{subequations}
\begin{align}
\int {\bf j}_1({\bf r}) \cdot {\bf E}_2({\bf r}) d{\bf r}
&= \!\int i\omega p_0 {\bf n}_1 \cdot \sum_m {\bf E}_2^{(m)}
\exp(i{\bf k}_m\cdot s {\bf n}_1) ds \\
&= \sum_m \delta_{m,0} N(i\omega p_0 {\bf n}_1 \cdot
{\bf E}_2^{(m)}) \\
&= i\omega N p_0 E_2^{(0)}, \label{j1E2}
\end{align}
\end{subequations}
where $E_2^{(0)} = |{\bf E}_2^{(0)}|$.
To ensure that the integration is proportional to $\delta_{m,0}$,
the array of dipoles has to be longer than $L$:
\begin{equation}
L = (\mbox{length of dipole})\cdot q,
\end{equation}
where $q$ is the least common multiple of the diffraction channels which are
open at the frequency $\omega$.
This condition would usually be satisfied when ${\bf E}_{i,{\rm in}}$
forms a plane wave.
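The appearance of $\delta_{m,0}$ in the integral can be verified with a discrete sum: for dipoles evenly spaced over an integer number of periods, $\sum_n \exp(i k_m x_n)$ vanishes for the low-order harmonics $m \neq 0$. A small numerical check, with hypothetical array parameters:

```python
import cmath, math

d_x = 1.0        # periodicity of the object (arbitrary units)
P = 16           # dipoles per period
M = 5            # number of periods covered by the array
N = P * M        # total number of dipoles
positions = [n * d_x / P for n in range(N)]

def array_sum(m):
    """sum_n exp(i * k_m * x_n) with k_m = 2*pi*m/d_x (k_in,x = 0 for simplicity)."""
    k_m = 2.0 * math.pi * m / d_x
    return sum(cmath.exp(1j * k_m * x) for x in positions)

print(abs(array_sum(0)))   # N = 80
print(abs(array_sum(1)))   # ~0: only the m = 0 harmonic survives
```

This is the discrete analogue of the geometric-series cancellation behind Eq.~(\ref{j1E2}): each full period contributes zero for $m \neq 0$ (as long as $|m| < P$), so only the zeroth Fourier component of the field couples to the array.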
By permutating 1 and 2 in Eq.~(\ref{j1E2}), we obtain
$\int {\bf j}_2\cdot{\bf E}_1 d{\bf r} = i\omega N p_0 E_1^{(0)}$.
Equation~(\ref{j1E2}) and the reciprocal theorem in
Eq.~(\ref{reci}) lead to the equation
\begin{equation}
E_1^{(0)} = E_2^{(0)}. \label{E_reci}
\end{equation}
Each electric vector $E_i^{(0)}$ ($i=1,2$) is observed at the position ${\bf r}$
where there is another current ${\bf j}_k({\bf r})$ ($k\neq i$).
The integral in Eq.~(\ref{reci}) is reduced to
Eq.~(\ref{j1E2}) which is expressed only by the zeroth components of the
transmitted electric field. The reciprocity is thus independent of higher
order harmonics,
which are responsible for the modulated EM fields in structured objects.
When there is no periodic object in Fig.~\ref{fig3},
a similar relation holds:
\begin{equation}
E_1^{\rm no,(0)} = E_2^{\rm no,(0)}. \label{E0_reci}
\end{equation}
The transmittance $T_i$ is given by
\begin{equation}
T_i = \left|\frac{E_i^{(0)}}{E_i^{\rm no,(0)}}\right|^2.\label{T_reci}
\end{equation}
From Eqs.~(\ref{E_reci})--(\ref{T_reci}),
we finally reach the reciprocal relation $T_1 = T_2$.
A key feature of the proof is that $T_1 = T_2$ is obtained independently
of the detailed evaluation of $E_i^{(0)}$, which makes the proof simple
and general.
The proof can be extended to two-dimensional periodic structures by
replacing the one-dimensional periodic structure in Fig.~\ref{fig3}
by a two-dimensional one.
Although we have considered periodic objects, the proof can also
be extended to non-periodic objects. For this extension, Eq.~(\ref{E_expand}) has to be expressed in the general form
${\bf E}({\bf r}) = \int {\bf E}({\bf k})\exp(i{\bf k}\cdot{\bf r})d{\bf k}$,
and a more detailed calculation for
$\int{\bf j}_i\cdot{\bf E}_kd{\bf r}$ is required.
Reciprocity for transmission thus holds
irrespective of absorption, diffraction, and scattering by objects.
In Fig.~\ref{fig3} the induced electric fields ${\bf E}_i$ are
polarized in the $xz$ plane. The polarization is called TM polarization
in the terminology of waveguide theory and is also often called $p$
polarization.
For TE polarization (which is often called $s$ polarization)
for which ${\bf E}_i$ has a polarization
parallel to the $y$ axis, the proof is similar to what we have described
except that the dipoles are aligned along the $y$ axis.
Reciprocal reflection is also shown in a similar way.
The configuration is depicted in Fig.~\ref{fig4}. The two sources have to be
located to satisfy the mirror symmetry about the $z$ axis.
The calculation of $\int{\bf j}_i\cdot{\bf E}_kd{\bf r}$ leads
to the reciprocal relation for reflectance $R_1 = R_2$.
Note that $E_i^{{\rm no},(0)}$ in Eq.~(\ref{E0_reci}) has to be evaluated
by replacing the periodic object by a perfect mirror.
\section{Numerical and Experimental Confirmation\label{example}}
An example of reciprocal optical response is shown here.
Figure \ref{fig5}(a) displays the structure of the sample and
reciprocal transmission configuration. The sample consists of
periodic grooves etched in metallic films of Au and Cr on a quartz substrate.
The periodicity is 1200\,nm, as indicated by the dotted lines in
Fig.~\ref{fig5}(a).
The unit cell has the structure of Au:air:Au:air = 3:1:4:5.
The thicknesses of the Au, Cr, and quartz layers are 40\,nm,
5\,nm, and 1\,mm, respectively.
The structure is obviously asymmetric about the $z$ axis.
The profile was modeled from an AFM image of the fabricated sample.
Figure~\ref{fig5}(b) shows our numerical results.
The incident light has $\theta = 10^{\circ}$ and TM polarization (the electric
vector is in the $xz$ plane).
The numerical calculation was done with an
improved S-matrix method.\cite{Tikh, Li1} The permittivities of gold and
chromium were taken from Refs.~\onlinecite{Johnson} and
\onlinecite{Johnson2}; the permittivity of quartz is well known to be 2.13.
In the numerical calculation, the incident light is taken to be a plane wave,
and harmonics up to $n=\pm 75$ in Eq.~(\ref{E_expand}) were used, which
is enough to obtain accurate optical responses.
The result indicates
that transmission spectra (lower solid line) are numerically
the same in the reciprocal configurations, while reflection (upper solid
line) and absorption (dotted line) spectra show a definite difference. The
absorption is plotted along the left axis. The difference
implies that surface excitations are different on each side and absorb
different numbers of photons.
Nonetheless, the transmission spectra are the same for incident
wavenumber vectors ${\bf k}_{1,{\rm in}}$ and ${\bf k}_{2,{\rm in}}$.
Experimental transmission spectra are shown in Fig.~\ref{fig5}(c) and
are consistent within experimental error.
Reciprocity is thus confirmed both numerically and experimentally.
There have been a few experiments on reciprocal transmission (see references
in Ref.~\onlinecite{Potton}). In comparison with these results, Fig.~\ref{fig5}(c) shows the excellent agreement of reciprocal transmission and is
the best available experimental evidence supporting reciprocity.
We note that transmission spectra in Figs.~\ref{fig5}(b) and \ref{fig5}(c)
agree quantitatively above 700\,nm. On the other hand, they show a qualitative
discrepancy below 700\,nm. This discrepancy could come from the difference between
the modeled profile in Fig.~\ref{fig5}(a) and the actual profile of the
sample.
The dip at 660\,nm stems from a surface plasmon at the metal-air
interface, so that the measured transmission spectra would be affected
significantly by the surface roughness and the deviation from
the modeled structure.
\section{Remarks and summary\label{summary}}
As described in Sec.~\ref{thm}, the reciprocal theorem assumes that
all media are linear and show local response. Logically, it can happen that
the reciprocal optical responses do not hold for nonlinear or
nonlocally responding media.
Reference~\onlinecite{non-recipro} discusses an explicit difference of the
transmittance for a reciprocal
configuration in a nonlinear optical crystal of KNbO$_3$:Mn.
The values of the transmittance deviate by a few tens of percent in the
reciprocal configuration.
The crystal has a second-order response such that
$D_i({\bf r}) = \sum_j \varepsilon_{ij}E_j({\bf r}) + \sum_{j,k}\varepsilon_{ijk}
E_j({\bf r})E_k({\bf r})$.
The breakdown of reciprocity comes from the nonlinearity.
Does reciprocity also break down in nonlocal media?
In nonlocal media the induction \textbf{D} is given by
$\textbf{D}({\bf r}) = \!\int\!\varepsilon({\bf r},{\bf r}'){\bf E}({\bf r}')d{\bf r}'$. Although
a general proof for this case has not been reported to our knowledge,
it has been shown that reciprocity holds in a particular stratified structure
composed of nonlocal media.\cite{H.Ishihara}
In summary, we have presented an elementary and heuristic proof of the
reciprocal optical responses for transmittance and reflectance.
When the reciprocal theorem in Eq.~(\ref{reci}) holds, the reciprocal
relations come from geometrical configurations of light sources and
observation points, and are independent of the details of the objects.
Transmission reciprocity has been confirmed both numerically
and experimentally.
\begin{acknowledgments}
We thank S.\ G.\ Tikhodeev for discussions. One of us (M.\ I.)
acknowledges the Research Foundation for Opto-Science and Technology for
financial support, and the Information Synergy Center, Tohoku University for
their support of the numerical calculations.
\end{acknowledgments}
|
hep-ph/0610323
|
\section{Introduction}
Quantum Chromodynamics (QCD) exhibits various states of matter
depending on the environment~\cite{review}: A strongly coupled
quark-gluon plasma (sQGP) has been discovered at the Relativistic
Heavy Icon Collider (RHIC)~\cite{sQGP} and various color
superconductors presumably exist in the cores of the compact stellar
object~\cite{CSC}. These are widely-known extreme states of
equilibrated QCD matter at high temperature and at high (baryon or
quark) density, respectively. The study of sQGP properties at high
temperature has been led by the Monte-Carlo simulation of QCD on the
lattice~\cite{lattice}. The lattice QCD results provide us with
fundamental information such as the phase transition temperature
$T_{\text{c}}$~\cite{Tc}, equation of state~\cite{Aoki:2005vt},
susceptibility~\cite{Bernard:2004je}, mesonic correlation above
$T_{\text{c}}$~\cite{charmonium}, etc.
In contrast, at finite density, the lattice technique is not quite
successful so far; it is hindered by the notorious problem that the
fermion determinant in the presence of nonzero quark chemical
potential $\mu_q$ is not positive semidefinite (i.e.\ not
nonnegative). This problem is commonly referred to as the fermion
sign problem~\cite{review:signproblem}. The Monte-Carlo simulation
based on importance sampling requires a positive semidefinite
probability for each gauge configuration. In the presence of
$\mu_q\neq0$, however, the probability is no longer a well-defined
quantity because the fermion determinant can become negative. There have
been several techniques proposed to handle the sign problem, e.g.\ the
reweighting method~\cite{reweighting}, the Taylor expansion
method~\cite{Taylor}, the analytical continuation from an imaginary
chemical potential~\cite{imaginary}. Although these proposals have
achieved partial successes when $\mu_q/T$ is small, no prescription
applicable at high density has been established yet.
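The essence of the sign problem, and of the reweighting idea, can already be seen in a one-dimensional toy integral with a complex "Boltzmann weight" $w(x) = e^{-x^2 + i\mu x}$, for which $\langle x^2\rangle = 1/2 - \mu^2/4$ exactly. The sketch below is our own toy model, not the QCD problem itself: one samples from $|w|$ and reweights observables with the residual phase.

```python
import cmath, random

def phase_reweighted_x2(mu, n_samples=200_000, seed=1):
    """Estimate <x^2> for the complex weight w(x) = exp(-x^2 + i*mu*x)
    by sampling |w| ~ exp(-x^2) and reweighting with the phase exp(i*mu*x)."""
    rng = random.Random(seed)
    sigma = 0.5 ** 0.5            # exp(-x^2) is a Gaussian of variance 1/2
    num = 0.0 + 0.0j
    den = 0.0 + 0.0j
    for _ in range(n_samples):
        x = rng.gauss(0.0, sigma)
        phase = cmath.exp(1j * mu * x)
        num += x * x * phase      # <x^2 e^{i mu x}> accumulator
        den += phase              # <e^{i mu x}> accumulator (average phase)
    return (num / den).real       # exact answer: 1/2 - mu^2/4

print(phase_reweighted_x2(1.0))   # close to 0.25
```

When $\mu$ grows, the average phase $\langle e^{i\mu x}\rangle = e^{-\mu^2/4}$ shrinks toward zero and the estimator degrades — the toy analogue of the overlap problem that limits reweighting at large $\mu_q/T$.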
This work aims to point out that the sign problem is relevant
even in the mean-field treatment of gauge fields. One can intuitively
understand this in the following way: in the mean-field approximation
the partition function is estimated from the most dominant
contribution of particular configurations that have the largest
probability or the smallest free energy. When the probability for
each gauge configuration is ill-defined due to the sign problem,
the mean-field free energy may thus itself be problematic.
In the present work we shall focus on a specific manifestation of
the sign problem appearing in the simplest model at finite temperature
and density. Because QCD is a highly nontrivial theory, applicability
of the mean-field approximation is quite limited. We choose a finite
temperature model because the mean-field description presumably works
well to sketch the hot QCD phase transition. The Polyakov loop plays
an essential role as an order parameter there~\cite{polyakov_loop}.
The dynamics of the Polyakov loop was closely examined some years ago
both in the lattice simulation~\cite{polyakov_lattice} and in the
mean-field approximation~\cite{weiss,strong}. Recently, in addition,
the Polyakov loop dynamics near $T_{\text{c}}$ has received specific
attention~\cite{pl_model}. There is also an interesting observation
that the entanglement between the chiral and Polyakov loop dynamics
turns out to be indispensable to understand the nature of the QCD
phase transitions~\cite{go_model,hatta,Fukushima:2003fw}.
We have already known several indications from the mean-field
studies about the Polyakov loop behavior at $\mu_q\neq0$. To the best
of our knowledge, the effective potential of the Polyakov loop at
finite temperature and density was first derived in
Ref.~\cite{KorthalsAltes:1999cp} in perturbative QCD in the one-loop
order. (See also Ref.~\cite{Weiss:1987mp} for the potential with an
imaginary chemical potential.) It is obvious from Eq.~(5) in
Ref.~\cite{KorthalsAltes:1999cp} that the Polyakov loop variable is
promoted to a complex-valued variable when $\mu_q\neq0$, and thus the
effective potential turns complex. It is quite nontrivial how to
derive meaningful information from such a complex effective potential;
the expectation value of the Polyakov loop cannot be fixed directly by
a minimum of the complex effective potential. The same problem occurs
also in the chiral effective models with the Polyakov loop coupling,
which were first formulated by one of the present authors in
Ref.~\cite{Fukushima:2003fw} and have been investigated
extensively~\cite{fnjl,Ratti:2005jh}, though the sign problem has been
almost overlooked there. Here we would like to refer to a closely
related work, Ref.~\cite{Dumitru:2005ng}, in which the authors
remarked on the sign problem in a matrix model of the Polyakov loop.
It should be noted
that the mean-field free energy as seen in
Refs.~\cite{Ratti:2005jh,Dumitru:2005ng} is not complex, unlike the
effective potential in Ref.~\cite{KorthalsAltes:1999cp}, so that one
can determine the expectation value of the Polyakov loop without
ambiguity. We will go into details of this issue later. Our findings
are all consistent with what has been foreseen in
Ref.~\cite{Dumitru:2005ng}.
The concept of this work lies not in solving the sign problem but in
observing the sign problem in a manageable way, so to speak, in a
clean environment. We also demonstrate that a technique which we call
the \textit{phase reweighting method} works well as a practical
prescription, in a similar though not identical spirit to the
finite-density lattice simulation. We will employ the dense-heavy model
obtained from QCD in the double limit of large quark mass and large
quark chemical potential~\cite{Blum:1995cb} (see also discussions in
Ref.~\cite{Bender:1991gn}). The reasons we adopt the dense-heavy
model are as follows: First, the fermion determinant is exactly
calculable as a function of the gauge field. Second, neither the
chiral condensate nor the diquark condensate is involved in the
dynamics owing to the large quark mass. Third, lattice data are
available from Ref.~\cite{Blum:1995cb}. For these three reasons, the
dense-heavy model is an appropriate tool for our purpose of
scrutinizing the sign problem within the mean-field approximation.
This paper is organized as follows. In Sec.~\ref{sec:sign_problem}
we give a brief overview of the sign problem. We formulate the model
in Sec.~\ref{sec:dense_heavy} and the mean-field approximation in
Sec.~\ref{sec:mean_field}. Then, in Sec.~\ref{sec:su2}, we examine
the color SU(2) case first, which is free from the sign problem, in
order to check whether the mean-field approximation is reasonable. We
next proceed to the SU(3) calculation. In Sec.~\ref{sec:su3} we
explain the phase reweighting method to circumvent the SU(3) complex
fermion determinant and then discuss the validity of our method by
viewing the mean-field effective potential in
Sec.~\ref{sec:MFFreeEnergy}. Section~\ref{sec:summary} is devoted to
the summary.
\section{Sign Problem}
\label{sec:sign_problem}
Here we give a general discussion of the sign problem of the fermion
determinant at finite density. Readers who are already familiar with
it can skip to the model study starting from Sec.~\ref{sec:model}. We
will later demonstrate how the dense-heavy model concretely embodies
the general features mentioned in this section.
The fermion determinant in Euclidean space-time with a quark mass
$m_q$ and a quark chemical potential $\mu_q$ takes the form of
\begin{equation}
\det\mathcal{M}(\mu_q)\equiv
\det\bigl[\gamma_\mu D^\mu + \gamma_4\mu_q +m_q\bigr] \,,
\end{equation}
where $D^\mu\equiv\partial^\mu-igA^\mu$ is the covariant derivative.
It is a well-known argument that
$\det\mathcal{M}(\mu_q)=\det\gamma_5\mathcal{M}(\mu_q)\gamma_5
=\{\det\mathcal{M}(-\mu_q)\}^\ast$ and thus the fermion determinant is
real in the zero density ($\mu_q=0$) case. Alternatively, we can
explicitly look into the eigenvalue spectrum of the Dirac operator to
confirm that the determinant is positive semidefinite as follows: For
$\mu_q=0$, when $\psi_n$ is an eigenstate of $\gamma_\mu D^\mu$, the
eigenvalue $\lambda_n$ is pure imaginary because $\gamma_\mu D^\mu$ is
anti-Hermitian where $\gamma_\mu$'s are Hermitian in our convention.
Since the mass term is simply proportional to unity, $\psi_n$ is also
an eigenstate of $\gamma_\mu D^\mu+m_q$ with the eigenvalue
$\lambda_n+m_q$. One can show that $\gamma_5 \psi_n$ is an eigenstate
of $\gamma_\mu D^\mu+m_q$ having the eigenvalue
$-\lambda_n+m_q=(\lambda_n+m_q)^\ast$ from the property
$\gamma_5\gamma_\mu D^\mu\gamma_5=-\gamma_\mu D^\mu$. Therefore, the
determinant consists of a pair of $\lambda_n+m_q$ and
$(\lambda_n+m_q)^\ast$ which makes the whole determinant real and
nonnegative as $|\lambda_n+m_q|^2\geq0$.
In the presence of the chemical potential term the fermion
determinant is not necessarily positive semidefinite. When $\psi_n$
is an eigenstate of $\gamma_\mu D^\mu+\gamma_4\mu_q$ with the
eigenvalue $\lambda_n$, in the same way as above, one can show that
$\gamma_5\psi_n$ is also an eigenstate of
$\gamma_\mu D^\mu+\gamma_4\mu_q$ with the eigenvalue $-\lambda_n$
which is not $\lambda_n^\ast$ because $\gamma_\mu D^\mu+\gamma_4\mu_q$
is no longer anti-Hermitian. It is apparent from this explanation
that, if $\mu_q$ is pure imaginary~\cite{imaginary},
$\gamma_\mu D^\mu+\gamma_4\mu_q$ is anti-Hermitian so that the fermion
determinant turns nonnegative.
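The eigenvalue argument above can be checked numerically in a toy
setting. The sketch below (our own illustration, not the actual Dirac
operator) builds a chirally symmetric anti-Hermitian matrix $D$ with
$\{\gamma_5,D\}=0$ and verifies that $\det(D+m)$ is real and positive,
that a real chemical potential term $\mu\gamma_4$ renders the
determinant complex while satisfying
$\det\mathcal{M}(\mu)=\{\det\mathcal{M}(-\mu)\}^\ast$, and that a pure
imaginary $\mu$ restores reality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # chiral block size; total dimension 2n

# Chiral basis: gamma5 = diag(1, -1), gamma4 off-diagonal (both Hermitian).
g5 = np.kron(np.diag([1.0, -1.0]), np.eye(n))
g4 = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(n))

# Anti-Hermitian toy "Dirac operator" anticommuting with gamma5:
# D = [[0, B], [-B^dagger, 0]] for an arbitrary complex block B.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = np.block([[np.zeros((n, n)), B], [-B.conj().T, np.zeros((n, n))]])

assert np.allclose(D.conj().T, -D)      # anti-Hermitian
assert np.allclose(g5 @ D @ g5, -D)     # {gamma5, D} = 0

m, mu = 0.4, 0.3
I = np.eye(2 * n)

det0 = np.linalg.det(D + m * I)                    # mu = 0: real, positive
det_mu = np.linalg.det(D + mu * g4 + m * I)        # real mu: complex
det_mmu = np.linalg.det(D - mu * g4 + m * I)       # -mu, for the pairing
det_imu = np.linalg.det(D + 1j * mu * g4 + m * I)  # imaginary mu: real again

print(det0, det_mu, det_imu)
```

At $\mu=0$ the eigenvalues pair as $\pm i s_k$ with $s_k$ the singular
values of $B$, so the determinant is $\prod_k(m^2+s_k^2)>0$; the same
holds for imaginary $\mu$ because $D+i\mu\gamma_4$ stays
anti-Hermitian.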
If a considered theory has degenerate fermions associated with an
internal symmetry under the transformation $T$ and the chemical
potential $\mu_q$ is replaced by a matrix $\boldsymbol{\mu}$ in
internal space transforming like
\begin{equation}
T^{-1}\,\boldsymbol{\mu}\,T=-\boldsymbol{\mu}\;,
\label{eq:change}
\end{equation}
then
$\det\mathcal{M}(\boldsymbol{\mu})
=\{\det\mathcal{M}(\boldsymbol{\mu})\}^\ast$, so that the determinant
becomes real.
A well-known realization of this comes from the isospin degrees of
freedom in flavor ($u$,$d$) space. The isospin chemical potential
$\mu\propto\tau_3$ and $T=i\tau_2$ (or $T=\tau_1$) certainly satisfy
Eq.~(\ref{eq:change}) where $\tau$'s are the Pauli matrices in
($u$,$d$) space~\cite{Son:2000xc}.
Another realization is the color SU(2) case~\cite{su2} that is
relevant to our model study as we will argue later. The
$C$-transformation changes the quark chemical potential $\mu_q$ as in
Eq.~(\ref{eq:change}). In general cases, however, the
$C$-transformation does not give an escape from the sign problem
because it also swaps color and anticolor. That is,
\begin{equation}
C^{-1}\gamma_5\,\gamma_\mu D^\mu\,\gamma_5 C
= \bigl[\gamma_\mu\bigl(\partial^\mu-ig(A^\mu)^C\bigr)\bigr]^\ast
\end{equation}
with $(A^\mu)^C\equiv-(A^\mu)^\ast$. What is special for $N_{\text{c}}=2$ is
that anticolor is indistinguishable from color since two doublets can
make a singlet. Actually $\sigma_2(A^\mu)^C\sigma_2=A^\mu$ where
$\sigma$'s are the Pauli matrices in color space. Therefore the SU(2)
case is free from the sign problem.
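The relation $\sigma_2(A^\mu)^C\sigma_2=A^\mu$ is easy to verify
explicitly. The following sketch (our own illustration, with a random
su(2) gauge field component) checks it numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pauli matrices in color space
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# A random su(2) gauge field component: Hermitian and traceless
a = rng.normal(size=3)
A = a[0] * s1 / 2 + a[1] * s2 / 2 + a[2] * s3 / 2

A_C = -A.conj()                 # charge-conjugated field, (A)^C = -A*
A_back = s2 @ A_C @ s2          # rotating anticolor back to color

print(np.allclose(A_back, A))   # True: color and anticolor are equivalent
```

The check rests on the identity $\sigma_2\sigma_k^\ast\sigma_2
=-\sigma_k$ for $k=1,2,3$, which is specific to the two-color case.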
We should note that the fermion determinant being complex is not
necessarily harmful on its own. Rather, it is important whether the
real or imaginary part of the fermion determinant is positive
semidefinite or not. Even though the determinant evaluated for a
certain $A^\mu$ is a complex number, the functional integral over
$A^\mu$ yields a real value for physical observables. This is
understood in view of the relation
\begin{equation}
\begin{split}
& \det\bigl[\gamma_\mu\bigl(\partial^\mu-ig(A^\mu)^C\bigr)
+\gamma_4\mu_q+m_q\bigr] \\
=& \Bigl\{\det\bigl[\gamma_\mu D^\mu+\gamma_4\mu_q
+m_q\bigr]\Bigr\}^\ast \,.
\end{split}
\label{eq:cconj}
\end{equation}
We see clearly that the real (imaginary) part of the determinant is
$C$-even ($C$-odd). For a $C$-even ($C$-odd) observable, thus, the
imaginary (real) part of the determinant vanishes after integration
over $A^\mu$. Accordingly, the genuine problem stems from the fact that the
real or imaginary part of the determinant may change its sign
depending on the configuration $A^\mu$.
\section{Model Study}
\label{sec:model}
We will analyze a simple model to see the sign problem occurring at
the mean-field level. As a warm-up, we will make
the mean-field approximation in the SU(2) case for which we do not
have to face the sign problem. We will then observe the sign problem
in the SU(3) calculation and attempt the phase reweighting method to
deal with the complex phase of the fermion determinant.
\subsection{Dense-Heavy Model}
\label{sec:dense_heavy}
We will closely analyze a lattice model with dense heavy
quarks~\cite{Blum:1995cb}. Let us consider QCD in the limit of
$\mu_q\to\infty$ so that we can drop antiquarks. We shall
simultaneously take another limit of $m_q\to\infty$ which renders all
quarks static. Under such limits we can evaluate the staggered
fermion determinant exactly to reach the fermion action,
\begin{align}
e^{-S_{\text{f}}[L]} \equiv & \det\bigl[\gamma_\mu D^\mu+\gamma_4\mu_q
+m_q\bigr]\notag\\
\to & \bigl[\det(1+\epsilon L)\bigr]^{N_{\text{f}}/4} \,,
\end{align}
where $N_{\text{f}}$ is the number of flavors. To avoid the subtleties of the
flavor counting inherent in the staggered formalism, which are not of
our interest here, we set $N_{\text{f}}=4$ throughout this paper.
We take those two limits in a way characterized by the parameter
$\epsilon$ ranging from zero to infinity;
\begin{equation}
\epsilon\equiv\Bigl(\frac{e^{\mu_q a}}{2m_q a}\Bigr)^{N_\tau}\,.
\end{equation}
We will often call this model parameter the ``density'' parameter
because $\epsilon$ is strongly correlated with the quark number density
as seen in Fig.~\ref{fig:edep_density_su2} for the SU(2) case and
Fig.~\ref{fig:edep_density_su3} for the SU(3) case. Here $N_\tau$ is
the number of the lattice sites in the temporal direction, that is,
the inverse temperature $1/T$ is given by $N_\tau a$ with the lattice
spacing $a$. In this model quarks are allowed to propagate only in
the positive temporal direction. Each time a quark with mass $m_q$
travels by one temporal lattice spacing, it picks up the hopping parameter
$1/(2m_q a)$ and the gauge invariant chemical potential factor
$e^{\mu_q a}$~\cite{Hasenfratz:1984em}. After $N_\tau$ hops, a quark
winds around the temporal circle. It results in the weight $\epsilon$
for the quark excitation represented by the Polyakov loop $L$ in the
fundamental representation,
\begin{align}
L(\vec{x}) &\equiv \prod_{x_4=a}^{N_\tau a}U_4(\vec{x},x_4) \notag\\
&\equiv\mathcal{P}\exp\biggl[\,ig\!\int_0^{1/T}\!\! dx_4\,
A_4(\vec{x},x_4)\biggr] \,.
\end{align}
The first line is the expression of the Polyakov loop in terms of link
variables on the lattice, and the second is its continuum counterpart.
It does not matter which expression we use since the SU($N_{\text{c}}$)
matrix $L$ plays the role of the dynamical variable in our model and we
will not return to its definition.
The determinant in color space can be calculated explicitly. In the
end we have the fermion action,
\begin{equation}
e^{-S_{\text{f}}[L]} = \prod_{\vec{x}}\bigl[1+\epsilon^2
+2\epsilon\ell\bigr]
\label{eq:sf_su2}
\end{equation}
in the color $N_{\text{c}}=2$ case (and $N_{\text{f}}=4$ is implicit as we already
noted) and
\begin{equation}
e^{-S_{\text{f}}[L]} = \prod_{\vec{x}}\bigl[1+\epsilon^3
+3\epsilon\ell+3\epsilon^2\ell^\ast\bigr]
\label{eq:sf_su3}
\end{equation}
in the color $N_{\text{c}}=3$ case. In the above expressions we employed the
traced Polyakov loop defined as
\begin{equation}
\ell = \frac{1}{N_{\text{c}}}\text{tr} L,
\end{equation}
where $\text{tr}$ is taken in fundamental color space. We remark that the
$C$-transformation changes $\ell$ to $\ell^\ast$ and vice versa, where
$\ell$ is real for $N_{\text{c}}=2$ and generally complex for $N_{\text{c}}\neq2$.
It is obvious that the $N_{\text{c}}=2$ determinant (\ref{eq:sf_su2}) is real
and positive semidefinite for any $\epsilon$ because
$1+\epsilon^2+2\epsilon\ell\ge(1-|\epsilon|)^2\ge0$, while the $N_{\text{c}}=3$
expression (\ref{eq:sf_su3}) suffers from the sign problem for nonzero
$\epsilon$; the determinant can be complex except when either
$\epsilon=0$ (zero density), $\epsilon=1$ (half-filling), or
$\epsilon\to\infty$ (full-filling). We can intuitively understand why
the SU(3) determinant becomes real for $\epsilon=1$. One-quark
excitation represented by $\ell$ and two-quark excitation that is
equivalent to one-antiquark excitation represented by $\ell^\ast$
occur with the common weight $\epsilon=\epsilon^2=1$ because of
half-filling (see $n=0.5$ at $\epsilon=1$ in
Figs.~\ref{fig:edep_density_su2} and \ref{fig:edep_density_su3}).
Therefore, in effect, the system has equality in number of quarks and
(effective) antiquarks just like in the zero density case. It is not
an escape from the sign problem, however. In the $N_{\text{c}}=3$ case
$\ell+\ell^\ast$ can take a value ranging from $-1$ to $2$, and thus
the determinant at $\epsilon=1$, i.e.\
$\det(1+L)\propto 2+3(\ell+\ell^\ast)$, is real but can be negative for
$-1\le\ell+\ell^\ast<-2/3$.
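The single-site determinant formula and the $\epsilon=1$ negativity can
be verified numerically. The sketch below (our own illustrative check)
generates a random SU(3) matrix, confirms
$\det(1+\epsilon L)=1+\epsilon^3+3\epsilon\ell+3\epsilon^2\ell^\ast$,
and evaluates the $\epsilon=1$ determinant at the center element
$e^{2\pi i/3}\boldsymbol{1}$, where $\ell+\ell^\ast=-1$ and the
determinant is indeed negative:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_su3():
    """Haar-like random SU(3): QR-decompose a complex Gaussian matrix,
    fix the R phases, then divide out the overall determinant phase."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)        # enforce det = 1

eps = 0.7
L = random_su3()
ell = np.trace(L) / 3.0

lhs = np.linalg.det(np.eye(3) + eps * L)
rhs = 1 + eps**3 + 3 * eps * ell + 3 * eps**2 * np.conj(ell)
print(np.isclose(lhs, rhs))          # True: the SU(3) formula holds

# At eps = 1 the determinant is real but can be negative: take the
# center element L = exp(2*pi*i/3) * identity, where ell + ell* = -1.
Lc = np.exp(2j * np.pi / 3) * np.eye(3)
det_c = np.linalg.det(np.eye(3) + Lc)
print(det_c)                         # approximately -1
```

The identity follows from the elementary symmetric polynomials of the
eigenvalues of $L$ together with $\det L=1$.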
We note here one more important feature of the model. The fermion
determinant is invariant under the duality
transformation~\cite{Blum:1995cb},
\begin{equation}
\epsilon \:\leftrightarrow\: 1/\epsilon \quad\text{and}\quad
\ell \:\leftrightarrow\: \ell^\ast \,,
\end{equation}
so that it is sufficient for us to investigate the model in the
region $\epsilon\in[0,1]$; the outer region $\epsilon\in(1,\infty]$
can then be deduced by means of the duality.
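Strictly speaking, the single-site determinant factor is invariant
under this transformation up to the $\ell$-independent factor
$\epsilon^{N_{\text{c}}}$, which only shifts the free energy by a
constant. A minimal numerical check of this (our own illustration; the
value of $\ell$ is an arbitrary complex number inside the allowed
region):

```python
import numpy as np

def det_factor(eps, ell):
    """Single-site SU(3) fermion determinant factor of Eq. (sf_su3)."""
    return 1 + eps**3 + 3 * eps * ell + 3 * eps**2 * np.conj(ell)

eps = 0.7
ell = 0.2 + 0.1j        # an illustrative traced Polyakov loop value

lhs = det_factor(eps, ell)
rhs = eps**3 * det_factor(1.0 / eps, np.conj(ell))   # dual point
print(np.isclose(lhs, rhs))   # True: duality holds up to the factor eps^3
```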
Regarding the gluodynamics, we assume the nearest neighbor
interaction between the Polyakov loops,
\begin{equation}
S_{\text{g}}[L]=-N_{\text{c}}^2 J\sum_{\text{n.n.}} \ell(\vec{x})\,
\ell^\ast(\vec{y}).
\label{eq:sg}
\end{equation}
This action is quite simple and yet reproduces the fundamental
nature of the phase transition in the pure gluonic sector; the
action~(\ref{eq:sg}) leads to a second-order phase transition for the
SU(2) case and a first-order phase transition for the SU(3)
case~\cite{Kogut:1981ez}, which is in agreement with the lattice QCD
simulation and the theoretical expectation from center
symmetry~\cite{polyakov_loop}.
One can interpret $J$ as a model parameter specifying the
``temperature'' of the system. In the strong coupling expansion, in
fact, $J$ is related to $T$ through $J=\exp[-\sigma a/T]$ where
$\sigma$ is the string tension. In this work we shall leave $J$ as a
free model parameter, for we do not want to introduce any further
modeling into our analyses. In other words, our aim is not to build a
model imitating QCD itself but one mimicking the QCD sign problem.
The effective action that defines our model is eventually given by
\begin{equation}
S[L]=S_{\text{g}}[L]+S_{\text{f}}[L]
\end{equation}
with the ``density'' parameter $\epsilon$ contained in $S_{\text{f}}[L]$ and
the ``temperature'' parameter $J$ in $S_{\text{g}}[L]$.
\subsection{Mean Field Approximation}
\label{sec:mean_field}
In finite-temperature field theory the free energy is evaluated in
the functional integral form, like the effective action, as
\begin{equation}
e^{-f\cdot V/T}=\int\!\mathcal{D} L\,e^{-S[L]} \;.
\end{equation}
We make use of the mean-field technique to approximate the free
energy. Our ansatz for the mean-field action
is~\cite{go_model}
\begin{equation}
S_{\text{mf}}[L] \equiv -\frac{x}{2}\sum_{\vec{x}}\bigl[\ell(\vec{x})
+\ell^\ast(\vec{x})\bigr] \;.
\label{eq:ansatz}
\end{equation}
In the case of pure gluonic theories, the mean field $x$ is simply
proportional to the Polyakov loop expectation value:
$x=12N_{\text{c}}^2 J\langle\ell\rangle$. In the presence
of fermionic contributions, however, there is no simple relation
between them. Besides, $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$ in the SU(3) case have different dependence
on $\mu_q$. One might think that there should be two independent
mean fields to deal with the differing $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$. This idea actually has much to do with the
sign problem, and we will come back to this point later.
Then the mean-field free energy can be estimated as
\begin{equation}
f_{\text{mf}}(x)\cdot V/T = \langle S[L]-S_{\text{mf}}[L]
\rangle_{\text{mf}} -\ln\!\int\mathcal{D} L\,e^{-S_{\text{mf}}[L]},
\label{eq:free_energy}
\end{equation}
where the average $\langle\cdots\rangle_{\text{mf}}$ is taken by the
mean-field action $S_{\text{mf}}[L]$. Roughly speaking, the first part
corresponds to the internal energy and the logarithmic part is the
entropy. We fix $x$ so as to minimize $f_{\text{mf}}(x)$. Once $S_{\text{mf}}[L]$ is
known with $x$ determined, the expectation value of any observable
$\mathcal{O}[L]$ as a function of the Polyakov loop can be estimated
by the group integration over $L$ with the mean-field action
$S_{\text{mf}}[L]$;
\begin{equation}
\langle\mathcal{O}[L]\rangle \simeq \langle\mathcal{O}[L]
\rangle_{\text{mf}} \equiv \frac{\displaystyle\int\!dL\,
\mathcal{O}[L]\,e^{-S_{\text{mf}}[L]}}{\displaystyle \int\!dL\,
e^{-S_{\text{mf}}[L]}} \,.
\end{equation}
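In the SU(2) case this group integration reduces to a one-dimensional
integral over the eigenvalue angle $\varphi$, with $\ell=\cos\varphi$
and the reduced Haar measure $\propto\sin^2\varphi\,d\varphi$. The
following sketch (our own illustration; the mean-field value $x$ is
set by hand here rather than by minimizing the free energy) evaluates
$\langle\ell\rangle_{\text{mf}}$ by simple quadrature:

```python
import numpy as np

def su2_mf_average(x, n=20001):
    """<ell>_mf for SU(2): ell = cos(phi), reduced Haar measure
    sin^2(phi), single-site mean-field weight exp(x * ell)."""
    phi = np.linspace(0.0, np.pi, n)
    ell = np.cos(phi)
    w = np.sin(phi) ** 2 * np.exp(x * ell)
    return np.sum(ell * w) / np.sum(w)   # grid spacing cancels in the ratio

print(su2_mf_average(0.0))   # 0: center-symmetric, confined-like
print(su2_mf_average(5.0))   # grows toward 1 as x increases
```

At $x=0$ the average vanishes by center symmetry, and
$\langle\ell\rangle_{\text{mf}}$ increases monotonically toward unity
for large $x$.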
For instance, the quark number density per color degrees of freedom is
available by calculating
\begin{equation}
n \equiv -\frac{1}{N_{\text{c}}}\cdot\frac{\partial f}
{\partial\mu_q}\simeq-\frac{\epsilon}{N_{\text{c}} V}\biggl\langle
\frac{dS_{\text{f}}}{d\epsilon}\biggr\rangle_{\text{mf}} \,.
\label{eq:density}
\end{equation}
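As a single-site SU(2) illustration of the number density (our own toy
check, with an arbitrary weight parameter $x$; here we evaluate the
exact single-site relation $n=(\epsilon/N_{\text{c}})\,d\ln z/d\epsilon$
with $z(\epsilon)=\int dL\,(1+\epsilon^2+2\epsilon\ell)\,e^{x\ell}$),
one can confirm $n=1/2$ at half-filling and the duality relation
$n(\epsilon)+n(1/\epsilon)=1$:

```python
import numpy as np

def z_site(eps, x=0.3, n=20001):
    """Single-site SU(2) partition function: reduced Haar measure
    sin^2(phi), determinant factor (1 + eps^2 + 2 eps ell), weight
    exp(x ell) with ell = cos(phi)."""
    phi = np.linspace(0.0, np.pi, n)
    ell = np.cos(phi)
    w = np.sin(phi) ** 2 * (1 + eps**2 + 2 * eps * ell) * np.exp(x * ell)
    return np.sum(w)      # overall constant drops out of d ln z / d eps

def density(eps, h=1e-5):
    """n = (eps / N_c) d ln z / d eps with N_c = 2, central difference."""
    return (eps / 2.0) * (np.log(z_site(eps + h))
                          - np.log(z_site(eps - h))) / (2 * h)

print(density(1.0))                  # 0.5 at half-filling, for any x
print(density(0.5) + density(2.0))   # 1: duality n(eps) + n(1/eps) = 1
```

The value $n(1)=1/2$ comes out independently of $x$, in accord with the
absence of $J$ dependence at half-filling noted later.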
One can calculate the Polyakov loop susceptibility $\chi$ in the same
way, which reflects information on the deconfinement phase transition.
If one directly uses $\mathcal{O}[L]=\ell^2$ in the mean-field
approximation, however, nothing becomes singular at the critical point
unless the fluctuation of $\ell$ is taken into account. Equivalently
one can estimate the susceptibility from the inverse curvature of the
effective potential because the one-loop fluctuation leads to the
trace of propagator for $\langle\ell^2\rangle$, that is, the inverse
screening mass. Since $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$ are uniquely determined once $x$ is fixed
by the free energy~(\ref{eq:free_energy}), one can regard
$f_{\text{mf}}(x)$ as a function of $\langle\ell\rangle$ or
$\langle\ell^\ast\rangle$. The Polyakov loop susceptibility is then
\begin{equation}
\biggl(\frac{\partial^2 f_{\text{mf}}}{\partial\langle\ell\rangle^2}\biggr)^{-1}
=\biggl(\frac{\partial\langle\ell\rangle}{\partial x}\biggr)^2
\biggl(\frac{\partial^2 f_{\text{mf}}}{\partial x^2}\biggr)^{-1} \,,
\end{equation}
where we used $\partial f_{\text{mf}}/\partial x=0$. Of course, the
susceptibility defined in terms of $\ell^\ast$ is available with
$(\partial\langle\ell\rangle/\partial x)^2$ replaced by
$(\partial\langle\ell^\ast\rangle/\partial x)^2$. The difference thus
lies only in a nonsingular coefficient in which we are not interested.
For our purpose it is rather convenient to focus on the singular part
alone discarding the difference of $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$. Hence, we define the susceptibility as
\begin{equation}
\chi \equiv \biggl(\frac{\partial^2 f_{\text{mf}}}{\partial x^2}
\biggr)^{-1}
\label{eq:sus}
\end{equation}
for presenting our numerical results.
Now that we have finished explaining our approximations and
computational procedures, let us proceed to the model analysis.
\subsection{SU(2) Results}
\label{sec:su2}
We consider the model in the color SU(2) case first, as we
mentioned, to see how the mean-field approximation works apart from
the sign problem.
\begin{figure}
\includegraphics[width=7.5cm]{edep_density_su2.eps}
\caption{Correlation between the density parameter $\epsilon$ and the
quark number density per lattice site (i.e.\ density in the unit of
the lattice spacing) divided by $N_{\text{c}}=2$. It is obvious from the
figure that $\epsilon=0$ is the zero-density ($n=0$) state and
$\epsilon=1$ is the half-filling ($n=0.5$) one.}
\label{fig:edep_density_su2}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm]{jdep_su2.eps}
\caption{SU(2) (traced) Polyakov loop $\langle\ell\rangle$ as a
function of the coupling $J$ at various density parameters;
$\epsilon=$0 (solid), 0.1 (dotted), 0.2 (short-dashed), 0.5 (dashed),
and 1.0 (dotted-dashed). The second-order phase transition at
$\epsilon=0$ occurs at $J_{\text{c}}\simeq0.083$. The transitional
behavior is gradually smeared by the center symmetry breaking terms
in the fermion determinant as $\epsilon$ grows larger.}
\label{fig:Jdep_su2}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm]{edep_su2.eps}
\caption{SU(2) Polyakov loop $\langle\ell\rangle$ as a function of
the density parameter $\epsilon$ at various temperature parameters;
$J$=0 (solid), 0.05 (short-dashed), 0.1 (dashed), and 0.2
(dotted-dashed). The Polyakov loop becomes insensitive to $\epsilon$
as $J$ grows larger, which is consistent with Fig.~\ref{fig:Jdep_su2}
in which the results at various $\epsilon$ converge at large $J$.}
\label{fig:edep_su2}
\end{figure}
Figure~\ref{fig:edep_density_su2} shows the results for the quark
number density $n$ as a function of the density parameter $\epsilon$
using Eq.~(\ref{eq:density}). For $\epsilon>1$ the duality relation
$n(\epsilon)=1-n(1/\epsilon)$ enables us to deduce the number density.
We can immediately confirm from this plot and the duality relation
that the density parameter uniquely specifies the quark number
density, which monotonically approaches unity as $\epsilon\to\infty$.
It should be noted that half-filling, $n=0.5$, is realized at
$\epsilon=1$, where there is no $J$ dependence at all.
Let us look at the phase transition seen in the Polyakov loop
behavior with increasing $J$. The deconfinement phase transition in
the SU(2) pure gluonic theory is known to be second-order belonging to
the same universality class as the Ising
model~\cite{polyakov_loop,polyakov_lattice,Engels:1989fz}.
In our model at $\epsilon=0$ we have a continuous transition at
$J=J_{\text{c}}\simeq0.083$ as indicated by the solid curve in
Fig.~\ref{fig:Jdep_su2}. The presence of dynamical quarks acts on the
Polyakov loop variable as an external field breaking center symmetry.
In fact, the results at nonzero $\epsilon$ in Fig.~\ref{fig:Jdep_su2}
show not a phase transition but a crossover.
We plot the density dependence of the Polyakov loop behavior in
Fig.~\ref{fig:edep_su2}. The density effects generally tend to make
the Polyakov loop larger, and eventually, the Polyakov loop becomes
insensitive to the density in the large $J$ (i.e.\ high temperature)
region. This is because the temperature and the density break center
symmetry, spontaneously and explicitly respectively, so that the
Polyakov loop saturates. Therefore, the $J$ dependence is weaker for
larger $\epsilon$ and the $\epsilon$ dependence is weaker for larger
$J$.
\begin{figure}
\includegraphics[width=7.5cm]{Tom_fit_num.eps}
\caption{Comparison to the number density measured on the lattice at
$4/g^2=2.0$ (circle) and $4/g^2=1.5$ (square) taken from Fig.~1 in
Ref.~\cite{Blum:1995cb}.}
\label{fig:Tom_fit_num}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm]{Tom_fit_P.eps}
\caption{Comparison to the SU(2) Polyakov loop measured on the
lattice at $4/g^2=2.0$ (circle) and $4/g^2=1.5$ (square) taken from
Fig.~2 in Ref.~\cite{Blum:1995cb}.}
\label{fig:Tom_fit_P}
\end{figure}
Our mean-field outputs are to be compared with the lattice
simulation in Ref.~\cite{Blum:1995cb}; our
Figs.~\ref{fig:edep_density_su2} and \ref{fig:edep_su2} correspond to
Figs.~1 and 2 presented in Ref.~\cite{Blum:1995cb}, respectively. We
cannot expect a quantitative coincidence because our ansatz for the
pure gluonic action $S_{\text{g}}[L]$ is only a crude approximation of QCD and
besides we neglect the renormalization of the Polyakov loop in the
mean-field treatment. Nevertheless, the agreement turns out to be
surprisingly good if we treat the model parameter $J$ as a fitting
parameter incorporating the undetermined effect of the Polyakov loop
renormalization, which is implied by the ansatz (\ref{eq:sg}). In this
way, we fix $J=0.0042$ and $J=0.04$ to reproduce the SU(2) Polyakov
loop \textit{only} at $\epsilon=1$ for $4/g^2=2.0$ and $4/g^2=1.5$,
respectively. We emphasize that we did \textit{not} use the Polyakov
loop data at $\epsilon\neq1$, nor the number density data at all.
Nevertheless, as clearly seen from the comparisons in
Figs.~\ref{fig:Tom_fit_num} and \ref{fig:Tom_fit_P}, our numerical
results fit \textit{all} of the lattice data pretty well. We can
conclude from this observation that the main QCD corrections to our
ansatz (\ref{eq:sg}) could be absorbed into the renormalization of the
coupling alone. We are now confident that the mean-field treatment is
a fairly acceptable approximation for this type of problem.
\begin{figure}
\includegraphics[width=7.5cm]{jdep_sus_su2.eps}
\caption{Susceptibility relevant to the SU(2) Polyakov loop $\ell$ as
a function of $J$ at various density parameters; $\epsilon=$0.0
(solid), 0.1 (short-dashed), 0.2 (dashed), and 1.0 (dotted-dashed).
The $\epsilon=0$ result has a divergence at $J=J_{\text{c}}$
characteristic of the second-order phase transition.}
\label{fig:Jdep_sus_su2}
\end{figure}
Finally let us check that the susceptibility (\ref{eq:sus}) diverges
at $\epsilon=0$ and $J=J_{\text{c}}$. Figure~\ref{fig:Jdep_sus_su2}
shows the susceptibility as a function of $J$ at various $\epsilon$. In
the plot $\chi$ becomes greater with increasing $J$ because we did not
include $\partial\langle\ell\rangle/\partial x$ in our definition of
$\chi$ in Eq.~(\ref{eq:sus}). It is intriguing to remark that the
$\epsilon=0.1$ result is not really critical in view of $\chi$, while
the crossover at $\epsilon=0.1$ in Fig.~\ref{fig:Jdep_su2} looks
rather close to a phase transition. Indeed, $\chi$ is the more
informative quantity for judging how critical the crossover actually is.
In summary of the SU(2) Polyakov loop dynamics at finite temperature
and density, we shall depict a three-dimensional plot of
$\langle\ell\rangle$ in Fig.~\ref{fig:3d_su2} as a function of the
``temperature'' $J$ and the ``density'' $\epsilon$. We immediately
see the general tendency that $\langle\ell\rangle$ grows with
increasing $J$ and $\epsilon$.
\begin{figure*}
\includegraphics[width=14cm]{3d_su2.eps}
\caption{Three-dimensional plot of the fundamental Polyakov loop in
the SU(2) case as a function of the temperature parameter $J$ and the
density parameter $\epsilon$.}
\label{fig:3d_su2}
\end{figure*}
\begin{figure}
\includegraphics[width=7.5cm]{jdep_ad_su2.eps}
\caption{Dependence of the SU(2) adjoint Polyakov loop
$\langle\ell_{\text{adj}}\rangle$ on the temperature parameter $J$ at
various density parameters; $\epsilon$=0.0 (solid), 0.1 (dotted), 0.2
(short-dashed), 0.5 (dashed), and 1.0 (dotted-dashed).}
\label{fig:Jdep_ad_su2}
\end{figure}
Before closing this subsection we will comment a bit on the adjoint
Polyakov loop whose definition is
\begin{equation}
\ell_{\text{adj}} \equiv \frac{1}{N_{\text{c}}^2-1}\text{tr} L^{\text{adj}}
=\frac{1}{N_{\text{c}}^2-1}\Bigl(N_{\text{c}}^2|\ell|^2-1\Bigr) \;.
\label{eq:ad_pol}
\end{equation}
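The identity $\text{tr} L^{\text{adj}}=N_{\text{c}}^2|\ell|^2-1$ and the
vanishing of the pure group integral of $\ell_{\text{adj}}$ can be
checked explicitly. The sketch below does so for SU(2) (our own
illustration), building the adjoint matrix from
$(L^{\text{adj}})_{ab}=2\,\text{tr}(T_a L T_b L^\dagger)$:

```python
import numpy as np

rng = np.random.default_rng(3)

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [m / 2 for m in s]                 # SU(2) generators T_a = sigma_a / 2

# Random SU(2) element via exponentiating a random algebra element
a = rng.normal(size=3)
H = sum(ai * Ti for ai, Ti in zip(a, T))
vals, vecs = np.linalg.eigh(H)
L = vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

# Adjoint representation: (L_adj)_ab = 2 tr(T_a L T_b L^dagger)
L_adj = np.array([[2 * np.trace(Ta @ L @ Tb @ L.conj().T)
                   for Tb in T] for Ta in T])

ell = np.trace(L) / 2
ell_adj = np.trace(L_adj).real / 3
print(np.isclose(ell_adj, (4 * abs(ell)**2 - 1) / 3))   # the identity

# Haar average of ell_adj vanishes: ell = cos(phi), measure sin^2(phi)
phi = np.linspace(0.0, np.pi, 20001)
num = np.sum(np.sin(phi)**2 * (4 * np.cos(phi)**2 - 1) / 3)
den = np.sum(np.sin(phi)**2)
print(num / den)     # approximately 0
```

The Haar average vanishes because $\langle\cos^2\varphi\rangle=1/4$
with the $\sin^2\varphi$ measure, consistent with the group-theoretic
argument given below.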
The adjoint Polyakov loop is no longer an order parameter because the
center of the gauge group acts trivially in the adjoint
representation. This quantity has been closely discussed in
Ref.~\cite{Dumitru:2003hp} and there still remain subtleties. Not in
the SU(2) but in the SU(3) case, it has been found that
$\langle\ell_{\text{adj}}\rangle$ takes an infinitesimally small value
in the confined phase. One possible explanation would be that the
group integration over $L$ is important in the center symmetric regime
where nonperturbative phenomena like color confinement are relevant
(see also Ref.~\cite{Lenz:1998qk}). As a matter of fact, the
SU($N_{\text{c}}$) group integration over $\ell_{\text{adj}}$ turns out to be
zero, if we are allowed to disregard the dynamics. Even with the
dynamics taken into account within the mean-field approximation, as
shown in Fig.~\ref{fig:Jdep_ad_su2}, the argument should hold, so that
$\langle\ell_{\text{adj}}\rangle\simeq 0$ on the low-temperature side.
However, the adjoint Polyakov loop behavior in our study should be
understood only at a qualitative level. It is pointed out in
Ref.~\cite{Dumitru:2003hp} that the renormalization for
$\ell_{\text{adj}}$ is significant, which is not considered in our
present treatment.
\subsection{SU(3) case}
\label{sec:su3}
In the case of $N_{\text{c}}=3$ we have to tackle the sign problem. We
cannot solve QCD exactly as the lattice simulation does, so one might
think that, within the framework of approximations, one may simply
impose the mean-field ansatz (\ref{eq:ansatz}) to obtain some results
anyhow. The free energy after the integration over $L$ is then real as
a function of $x$. This seems to work at least as a rough estimate
worth trying first.
The serious flaw in such a simple strategy is that
$\langle\ell\rangle=\langle\ell^\ast\rangle$ is inevitably concluded.
This, however, contradicts the lattice results~\cite{Taylor} and
the model analyses~\cite{Dumitru:2005ng} where
$\langle\ell\rangle\neq\langle\ell^\ast\rangle$ has been observed at
finite density. If the mean-field ansatz (\ref{eq:ansatz}) is
extended to two variables $x$ and $y$ in order to take account
of the difference between $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$, the price to pay is that the mean-field
free energy is not convex. We will revisit this issue later. In any
case, though it may appear in a different guise, the difficulty of the
sign problem persists even in the mean-field approximation unless
the difference $\langle\ell\rangle\neq\langle\ell^\ast\rangle$
is neglected.
In our work we shall elucidate that the ``phase reweighting method''
is one way to resolve these difficulties. We should, however, note
that the reweighting method in the present context is one
\textit{approximation} scheme unlike in the lattice simulation. That
is, the reweighting method becomes exact as the number of
configurations goes to infinity, and thus the lattice simulation with
an infinite number of generated configurations could in principle
provide the exact answer, while the mean-field approximation, which
picks up only the most dominant configuration, cannot.
\subsubsection{Method}
The point of the method is that we decompose the fermion determinant
into one part that gives a positive semidefinite probability and
another part that is regarded as an observable whose average is taken
over configurations.
The complex fermion determinant consists of the $C$-even magnitude
and the $C$-odd phase. Accordingly, the fermion action can be
rewritten as
\begin{equation}
S_{\text{f}}[L]= S_{\text{f}}^{\text{mag}}[L]+i\Theta[L] \,,
\end{equation}
where
\begin{align}
S_{\text{f}}^{\text{mag}}[L] &= -\sum_{\vec{x}} \ln\bigl|1+\epsilon^3
+3\epsilon\ell+3\epsilon^2\ell^\ast\bigr| \,, \\
\Theta[L] &= -\sum_{\vec{x}}\arg\bigl(1+\epsilon^3+3\epsilon\ell
+3\epsilon^2\ell^\ast\bigr) \,.
\label{eq:phase}
\end{align}
With these definitions we approximate the expectation value of
$\mathcal{O}[L]$ as follows:
\begin{equation}
\langle\mathcal{O}[L]\rangle\simeq \frac{\displaystyle
\bigl\langle\mathcal{O}[L]\,e^{-i\Theta[L]}\bigr\rangle_{\text{mf}}}
{\displaystyle \bigl\langle e^{-i\Theta[L]}\bigr\rangle_{\text{mf}}}
\;.
\label{eq:rew}
\end{equation}
Here $S_{\text{mf}}[L]$ or $x$ is fixed from the free energy with the action
$S_{\text{g}}[L]+S_{\text{f}}^{\text{mag}}[L]$, so that $x$ encompasses the information of
$S_{\text{f}}^{\text{mag}}[L]$ implicitly. This scheme is the same as that
adopted in the lattice simulation in Ref.~\cite{Blum:1995cb}.
Here we draw attention to a related work: in
Ref.~\cite{deForcrand:1999cy} the correlation between $\text{Im}\ell$
and $\Theta[L]$ was investigated in the lattice simulation; this
correlation is apparent in our case from the expression (\ref{eq:phase}).
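The mechanics of the method can be illustrated in a single-site toy
(our own sketch; the SU(3) integration is reduced to the two
eigenvalue angles with the Weyl measure, and the mean-field value $x$
is set by hand). Reweighting with the phase $e^{-i\Theta}$ recovers
$\langle\ell\rangle\neq\langle\ell^\ast\rangle$, whereas dropping the
phase forces $\langle\ell\rangle=\langle\ell^\ast\rangle$; in this
single-site toy the reweighting identity is exact up to quadrature
error:

```python
import numpy as np

eps, x = 0.5, 0.5      # illustrative "density" and mean-field parameters

# SU(3) eigenvalue angles (t1, t2, t3 = -t1 - t2) on a periodic grid,
# with the reduced Haar (Weyl) measure prod_{i<j} |e^{i t_i} - e^{i t_j}|^2.
N = 301
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
t1, t2 = np.meshgrid(t, t, indexing="ij")
t3 = -t1 - t2
haar = (np.abs(np.exp(1j * t1) - np.exp(1j * t2)) ** 2
        * np.abs(np.exp(1j * t1) - np.exp(1j * t3)) ** 2
        * np.abs(np.exp(1j * t2) - np.exp(1j * t3)) ** 2)

ell = (np.exp(1j * t1) + np.exp(1j * t2) + np.exp(1j * t3)) / 3
det = 1 + eps**3 + 3 * eps * ell + 3 * eps**2 * np.conj(ell)

# Positive semidefinite weight = Haar * |det| * mean-field factor;
# the C-odd phase exp(-i Theta) = det / |det| is treated as an observable.
w = haar * np.abs(det) * np.exp(x * ell.real)
phase = det / (np.abs(det) + 1e-300)

def rew(obs):
    """Phase-reweighted average of obs over the positive-weight ensemble."""
    return np.sum(w * obs * phase) / np.sum(w * phase)

ell_rw = rew(ell)
ellbar_rw = rew(np.conj(ell))
ell_naive = np.sum(w * ell) / np.sum(w)        # phase dropped
avg_phase = np.sum(w * phase) / np.sum(w)      # <exp(-i Theta)>_mf

print(ell_rw.real, ellbar_rw.real)   # differ: <ell*> > <ell> for eps < 1
print(ell_naive.real)                # naive average: <ell> = <ell*> forced
print(avg_phase)
```

At $\epsilon=0.5$ the average phase stays real and positive, so the
reweighting is well behaved; the ordering
$\langle\ell^\ast\rangle>\langle\ell\rangle$ for $0<\epsilon<1$ follows
from the larger weight $3\epsilon$ of the $\ell$ term relative to
$3\epsilon^2$ of the $\ell^\ast$ term in the determinant.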
\subsubsection{Results}
\begin{figure}
\includegraphics[width=7.5cm]{edep_density_su3.eps}
\caption{Correlation between the density parameter $\epsilon$ and the
quark number density $n$ per lattice site divided by $N_{\text{c}}=3$. The
gross feature is similar to Fig.~\ref{fig:edep_density_su2};
$\epsilon=0$ is the zero density state ($n=0$) and $\epsilon=1$ is
the half-filling state ($n=0.5$) in this SU(3) case as well as in the
SU(2) case.}
\label{fig:edep_density_su3}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm]{jdep_su3.eps}
\caption{SU(3) (traced) Polyakov loop $\langle\ell\rangle$ as a
function of the temperature parameter $J$ at various density
parameters; $\epsilon=$0 (solid), 0.1 (dotted), 0.2 (short-dashed),
0.5 (dashed), and 1.0 (dotted-dashed). The first-order phase
transition at $\epsilon=0$ and $\epsilon=0.1$ occurs at
$J_{\text{c}}=0.132$ and $J_{\text{c}}=0.123$, respectively.}
\label{fig:Jdep_su3}
\end{figure}
Let us start by checking the monotonic correlation between
$\epsilon$ and $n$, as in the SU(2) case. This allows us to regard an
increase (or decrease) in the density parameter $\epsilon$ as an increase
(or decrease) in the quark number density $n$. The positive
correlation is obvious from the results we show in
Fig.~\ref{fig:edep_density_su3}. The ``temperature'' or $J$
dependence is slightly greater than in the SU(2) results. A
possible account for this is the following: in the confined phase at small
$J$, the quark number density is suppressed as compared with the high-$J$
results. This suppression comes from the group integration, which
forces the thermally excited particles to be not quarks but (nearly)
color-singlet baryons consisting of $N_{\text{c}}$
quarks~\cite{go_model,Oleszczuk:1992yg}. In general, larger $N_{\text{c}}$
leads to stronger suppression by heavier excitation quanta.
Therefore, the stronger $J$ dependence presented in
Fig.~\ref{fig:edep_density_su3} originates from the stronger
suppression at small $J$. The suppression is physically interpreted
as an \textit{effective} tendency toward
confinement~\cite{Fukushima:2003fw}.
Figure~\ref{fig:Jdep_su3} shows the Polyakov loop as a function of $J$,
corresponding to the SU(2) result in Fig.~\ref{fig:Jdep_su2}, and is to be
compared qualitatively with the lattice result of Fig.~7 in
Ref.~\cite{Blum:1995cb}. We find a first-order phase transition for
$\epsilon=0$ at $J=J_{\text{c}}=0.132$ and for $\epsilon=0.1$ at
$J=J_{\text{c}}=0.123$. Nonzero $\epsilon$ smears the
transitional behavior, and the phase transition eventually ceases to be
of first order at a certain $\epsilon$. The end-point of the
first-order phase boundary is a second-order critical point, called the
critical end-point, which is of much interest in attempts to clarify
the QCD phase diagram~\cite{CEP,Asakawa:1989bq}. We find a crossover at
larger $\epsilon$. The global picture is consistent with what
has already been clarified in the Potts system as a toy model of
finite-temperature and finite-density QCD~\cite{Alford:2001ug,Kim:2005ck}.
\begin{figure}
\includegraphics[width=7.5cm]{edep_su3.eps}
\caption{SU(3) Polyakov loop $\langle\ell\rangle$ as a function of the
density parameter $\epsilon$ at various temperature parameters; $J$=0
(solid), 0.05 (dotted), 0.10 (short-dashed), 0.13 (dashed), and 0.2
(dotted-dashed). The gross feature is similar to the SU(2) case in
Fig.~\ref{fig:edep_su2}.}
\label{fig:edep_su3}
\end{figure}
We plot the ``density'' dependence of the SU(3) Polyakov loop in
Fig.~\ref{fig:edep_su3}, which is the SU(3) counterpart of
Fig.~\ref{fig:edep_su2}. The SU(2) and SU(3) results are
qualitatively similar, except that the Polyakov loop is suppressed
at small $J$ and $\epsilon$, just as we found for the quark number
density. It would be interesting to compare our results with
lattice data, but unfortunately, SU(3) data as a function of
$\epsilon$ are not available from Ref.~\cite{Blum:1995cb}. We cannot
discuss the $J$ (or $6/g^2$) dependence because it involves unknown
renormalization effects.
\begin{figure}
\includegraphics[width=7.5cm]{diff.eps}
\caption{Difference of the SU(3) Polyakov loops $\langle\ell\rangle$
and $\langle\ell^\ast\rangle$ as a function of the temperature
parameter $J$ at various density parameters; $\epsilon=$0 (solid)
which is zero entirely, 0.1 (dotted), 0.2 (short-dashed), and 0.5
(dashed).}
\label{fig:diff}
\end{figure}
\begin{figure}
\includegraphics[width=7.8cm]{edep_diff.eps}
\caption{Difference of the SU(3) Polyakov loops $\langle\ell\rangle$
and $\langle\ell^\ast\rangle$ as a function of the density parameter
$\epsilon$ at various temperature parameters; $J=$0 (solid), 0.05
(short-dashed), 0.1 (dashed), and 0.2 (dotted-dashed). The thin
curves represent the results from the Taylor expansion method in the
confined phase at $J<J_{\text{c}}$ and in the deconfined phase at
$J=0.20>J_{\text{c}}$, which is almost overlaid on the result from the
phase reweighting.}
\label{fig:edep_diff}
\end{figure}
\begin{figure*}
\includegraphics[width=14cm]{3d_su3.eps}
\caption{Three-dimensional plot of the fundamental Polyakov loop in
the SU(3) case as a function of the temperature parameter $J$ and the
density parameter $\epsilon$.}
\label{fig:3d_su3}
\end{figure*}
In the phase reweighting calculation we can see how
$\langle\ell\rangle$ and $\langle\ell^\ast\rangle$ become distinct at
$\mu_q\neq0$. The observable $\ell-\ell^\ast$ is $C$-odd, and so the
imaginary part of the fermion determinant is responsible for a
nonvanishing difference. When $\epsilon$ is small, the imaginary part
of the fermion determinant (\ref{eq:sf_su3}) comes from
$\text{Im}\,\epsilon\ell\propto \epsilon(\ell-\ell^\ast)$.
Consequently the expectation value of the difference is proportional
to $\epsilon\langle(\text{Im}\,\ell)^2\rangle_0$ where
$\langle\cdots\rangle_0$ is taken at zero
density~\cite{Dumitru:2005ng}. In Fig.~\ref{fig:diff} we present our
numerical results for the difference
$\langle\ell\rangle-\langle\ell^\ast\rangle$ as a function of $J$.
The difference is trivially zero at $\epsilon=0$ and $\epsilon=1$
where the fermion determinant is real. As long as the density
parameter stays smaller than $\epsilon\sim0.5$, a larger
$\epsilon$ leads to a bigger difference. For example, we
find the difference at $\epsilon=0.5$ to be as large as
$\langle\ell\rangle-\langle\ell^\ast\rangle=-0.076$, which is
comparable in magnitude to $\langle\ell\rangle=0.073$.
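The proportionality to $\epsilon\langle(\text{Im}\,\ell)^2\rangle_0$ can be made explicit at leading order. As a sketch valid only for small $\epsilon$, using $e^{-i\Theta}\simeq 1-i\Theta$, $\langle\ell-\ell^\ast\rangle_0=0$, and $\ell-\ell^\ast=2i\,\text{Im}\,\ell$, we have
\begin{align*}
\Theta &\simeq -3\epsilon\sum_{\vec{x}}\text{Im}\,\ell(\vec{x}) \,,\\
\langle\ell\rangle-\langle\ell^\ast\rangle &\simeq
 -i\bigl\langle(\ell-\ell^\ast)\,\Theta\bigr\rangle_0
 = -6\epsilon\sum_{\vec{x}}\bigl\langle\text{Im}\,\ell\;
 \text{Im}\,\ell(\vec{x})\bigr\rangle_0
 \propto -\epsilon\bigl\langle(\text{Im}\,\ell)^2\bigr\rangle_0 \,,
\end{align*}
whose negative sign is consistent with the negative difference found above.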
One can intuitively understand why $\langle\ell^\ast\rangle$ is
greater than $\langle\ell\rangle$ at nonzero $\mu_q$, which agrees
with what has been observed in the lattice simulation~\cite{Taylor}.
This is because, as discussed also in Ref.~\cite{Taylor}, the presence
of quarks at finite density enhances the screening effect for
antiquarks, so that an antiquark excitation costs less energy.
The dense-heavy model, or $e^{-S_{\text{f}}}$, takes the form of a
power series in $\epsilon$, as seen in the
expression~(\ref{eq:sf_su3}). We have therefore also performed the Taylor
expansion to estimate the Polyakov loop difference. The
expectation value of each coefficient of the $\epsilon$ series is
calculated at zero density, $\epsilon=0$. In this static model the
fermionic contribution simply vanishes at $\epsilon=0$, and so the
Taylor expansion method yields identical output for any
$J<J_{\text{c}}$ in the confined phase, where $x$ remains zero. In the
deconfined phase $J>J_{\text{c}}$, the $J$ dependence is brought in by
nonzero $x$. Figure~\ref{fig:edep_diff} shows the difference
$\langle\ell\rangle-\langle\ell^\ast\rangle$ as a function of
$\epsilon$ at various $J$, together with the Taylor expansion results in the
confined and deconfined phases. We can see excellent agreement between
the two methods from the comparison at $J=0.00$ and $J=0.20$. However,
the Taylor expansion cannot reproduce the results at
$0<J<J_{\text{c}}$ at all. The lesson from our model study is that
the Taylor expansion in terms of \textit{density} breaks down in the
presence of a first-order phase transition with respect to
\textit{temperature}. Still, in the deconfined phase at high
temperature, the expansion is validated. We do not think that this
finding is trivial in our model. In realistic QCD lattice
simulations with $2+1$ flavors, the zero density result is most likely
a crossover, and thus the Taylor expansion method should be reliable at
all temperatures.
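The breakdown mechanism can be mimicked by a toy sketch (our own construction, not the dense-heavy model itself): let a mean field $x(J,\epsilon)$ jump at a density-dependent first-order point $J_{\text{c}}(\epsilon)$. Taylor coefficients evaluated at $\epsilon=0$ then know nothing about the shift of $J_{\text{c}}$; the numbers $0.132$ and $0.123$ below are chosen to match the values quoted in the text, purely for illustration:

```python
def x_mean_field(J, eps, Jc0=0.132, slope=0.09):
    """Toy mean field with a first-order jump at Jc(eps) = Jc0 - slope*eps."""
    return 0.8 if J > Jc0 - slope*eps else 0.0

def observable(J, eps):
    """Toy observable depending on the mean field and the density."""
    return x_mean_field(J, eps) + 3*eps*(1 + x_mean_field(J, eps))

def taylor_estimate(J, eps):
    """First-order Taylor expansion in eps around eps = 0 (finite difference)."""
    h = 1e-6
    f0 = observable(J, 0.0)
    f1 = (observable(J, h) - f0) / h
    return f0 + f1*eps

# deep in the confined (J=0) and deconfined (J=0.2) phases the expansion works
assert abs(taylor_estimate(0.0, 0.1) - observable(0.0, 0.1)) < 1e-9
assert abs(taylor_estimate(0.20, 0.1) - observable(0.20, 0.1)) < 1e-9
# but for 0 < J < Jc(0), nonzero eps can trigger the jump the expansion misses
assert abs(taylor_estimate(0.128, 0.1) - observable(0.128, 0.1)) > 0.5
```

At $J=0.128$ and $\epsilon=0.1$ the system has already crossed $J_{\text{c}}(\epsilon)=0.123$, while every Taylor coefficient is evaluated on the $x=0$ branch; this is the failure between $0<J<J_{\text{c}}$ described above.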
As in the previous subsection, we present the
three-dimensional plot of the SU(3) Polyakov loop as a function of the
temperature parameter $J$ and the density parameter $\epsilon$ in
Fig.~\ref{fig:3d_su3}. The figure is qualitatively consistent with
the lattice result shown in Fig.~5 of Ref.~\cite{Blum:1995cb}.
\begin{figure}
\includegraphics[width=7.5cm]{jdep_sus_su3.eps}
\caption{Susceptibility relevant to the SU(3) Polyakov loop $\ell$ as
a function of $J$ at various density parameters; $\epsilon=0.0$
(solid), 0.1 (short-dashed), 0.2 (dashed), and 1.0 (dotted-dashed).}
\label{fig:Jdep_sus_su3}
\end{figure}
\begin{figure}
\includegraphics[width=7.5cm]{jdep_ad_su3.eps}
\caption{SU(3) adjoint Polyakov loop
$\langle\ell_{\text{adj}}\rangle$ as a function of the temperature
parameter $J$ at various density parameters; $\epsilon$=0.0 (solid),
0.1 (dotted), 0.2 (short-dashed), 0.5 (dashed), and 1.0
(dotted-dashed).}
\label{fig:Jdep_ad_su3}
\end{figure}
We now turn to the susceptibility $\chi$ to examine the phase
transition more closely. Because the phase transition at small
$\epsilon$ is of first order, $\chi$ jumps discontinuously at
$J=J_{\text{c}}$, as seen in Fig.~\ref{fig:Jdep_sus_su3}. The
susceptibility $\chi$ grows as the model parameters approach the
critical end-point, where the second-order phase transition takes
place. Figure~\ref{fig:Jdep_sus_su3} implies that the zero density
($\epsilon=0$) result is significantly affected by the critical
end-point, which should be located nearby at small $\epsilon$. In view
of Figs.~\ref{fig:Jdep_sus_su2} and \ref{fig:Jdep_sus_su3}, the
critical region in the SU(3) case is wider than in the SU(2) case.
Let us comment on the adjoint Polyakov loop here again. As we
mentioned in the discussion of the SU(2) results, the adjoint Polyakov
loop requires proper renormalization beyond the mean-field
treatment. We nevertheless show the adjoint Polyakov loop result in
Fig.~\ref{fig:Jdep_ad_su3}, simply because our framework readily
provides it. The first-order phase transition is located at the same
point, $J=J_{\text{c}}\simeq0.132$, of course. The group integration over
$L$ suppresses $\langle\ell_{\text{adj}}\rangle$ at low $J$ even for
large $\epsilon$, which makes the crossover in
Fig.~\ref{fig:Jdep_ad_su3} look sharper than in the case of the fundamental
Polyakov loop in Fig.~\ref{fig:Jdep_su3}.
\begin{figure}
\includegraphics[width=7.5cm]{edep_phase.eps}
\caption{Phase of the fermion determinant
$\text{Re}\langle e^{-i\theta}\rangle$ per lattice site as a
function of the density parameter $\epsilon$ at various temperature
parameters; $J=$0 (solid), 0.05 (short-dashed), 0.10 (dashed), and
0.20 (dotted-dashed).}
\label{fig:edep_phase}
\end{figure}
Finally, we present the results for the expectation value of the
phase factor of the fermion determinant, $e^{-i\Theta}$. We plot
$\text{Re}\langle e^{-i\theta}\rangle$ as a function of $\epsilon$ in
Fig.~\ref{fig:edep_phase}, where $\theta$ is the phase at each lattice
site;
$\theta\equiv-\arg(1+\epsilon^3+3\epsilon\ell+3\epsilon^2\ell^\ast)$
(i.e.\ $\Theta=\sum_{\vec{x}}\theta$).
Comparing it with Fig.~9 of Ref.~\cite{Blum:1995cb}, we can confirm
that our results qualitatively reproduce the lattice data. For
a more quantitative argument, let us put the volume $6^3=216$ of the
lattice simulation in Ref.~\cite{Blum:1995cb} into our results. The
expectation value of the phase factor $\langle e^{-i\Theta}\rangle$ can, as a rough
estimate, be approximated as $(\langle e^{-i\theta}\rangle)^{216}$. For
instance, our $J=0$ result has a minimum at $\epsilon=0.61$ where
$\langle e^{-i\theta}\rangle\simeq 0.977$, and we get
$0.977^{216}=0.0066$. The minimum value in Fig.~9 of
Ref.~\cite{Blum:1995cb} reads as of order $0.01$, which is not
far from our estimate when viewed on a logarithmic plot.
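The volume scaling quoted above is easy to verify as arithmetic; this is our own check, with the numbers $0.977$ and $6^3=216$ taken directly from the text:

```python
per_site = 0.977           # minimum of <e^{-i theta}> at J = 0, eps = 0.61
volume = 6**3              # lattice volume of the simulation in Blum et al.
total = per_site**volume   # rough estimate of <e^{-i Theta}>

assert 0.006 < total < 0.007   # ~0.0066, same order as the lattice value ~0.01
```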
\subsection{Mean-Field Free Energy}
\label{sec:MFFreeEnergy}
Here we will pursue another strategy to deal with the difference
between $\langle\ell\rangle$ and $\langle\ell^\ast\rangle$ as a
double-check. It is possible to extend the mean-field ansatz as
\begin{equation}
S_{\text{mf}}[L] = -\frac{x}{2}\sum_{\vec{x}}\bigl[\ell(\vec{x})
\!+\!\ell^\ast(\vec{x})\bigr] -\frac{y}{2}\sum_{\vec{x}}\bigl[
\ell(\vec{x})\!-\!\ell^\ast(\vec{x})\bigr] \,,
\label{eq:ansatz2}
\end{equation}
and then we can compute the mean-field free energy from
Eq.~(\ref{eq:free_energy}) as a function of $x$ and $y$. After the
integration over $L$, the free energy $f_{\text{mf}}(x,y)$ is a real function
of real variables $x$ and $y$. We can thus fix the mean-fields by
\begin{equation}
\frac{\partial f_{\text{mf}}}{\partial x}\biggr|_{(x,y)=(x_0,y_0)}
=\frac{\partial f_{\text{mf}}}{\partial y}\biggr|_{(x,y)=(x_0,y_0)}=0 \,.
\label{eq:mean_field_eq}
\end{equation}
Nonzero $y_0$ appears in the presence of nonzero $\mu_q$. It turns
out that around $(x_0,y_0)$ the free energy $f_{\text{mf}}(x,y)$ has a minimum
in the $x$ direction, while it has a maximum in the $y$ direction.
That is, the solution to Eq.~(\ref{eq:mean_field_eq}) is a
saddle-point of $f_{\text{mf}}(x,y)$, which is consistent with
Ref.~\cite{Dumitru:2005ng}. The instability with respect to $y$
should be a remnant of the sign problem.\footnote{One way to
understand the instability in $y$ might be that $\ell-\ell^\ast$ is pure
imaginary, though $y$, corresponding to
$\langle\ell\rangle-\langle\ell^\ast\rangle$, is real.} We here point
out that this instability also exists in the zero density limit;
$f_{\text{mf}}(x,y)$ has a saddle-point at $y=0$ even at zero density. It
should be instructive to take a closer look at how the saddle-point
arises even at zero density. If we expand the free energy in terms of
$\langle\ell\rangle$ and $\langle\ell^\ast\rangle$, the leading term
dependent on $\langle\ell\rangle$ and $\langle\ell^\ast\rangle$ is
quadratic $\sim\langle\ell\rangle\langle\ell^\ast\rangle$ as explicitly
seen in a simple estimate in Ref.~\cite{Ratti:2005jh}. This form of
the free energy implies an instability inducing
$\langle\ell\rangle\neq\langle\ell^\ast\rangle$ because it can be written
as $\langle\ell\rangle\langle\ell^\ast\rangle \propto
(\langle\ell\rangle\!+\!\langle\ell^\ast\rangle)^2
-(\langle\ell\rangle\!-\!\langle\ell^\ast\rangle)^2$.
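The saddle-point determination of $(x_0,y_0)$ can be sketched numerically. The following is a minimal toy (our own quadratic stand-in for $f_{\text{mf}}$, not the actual free energy), where a Newton iteration on the stationarity conditions finds the saddle that naive minimization in both directions would miss:

```python
import numpy as np

def f_mf(x, y):
    """Toy free energy: minimum in x, maximum in y around (0.9, 0.2)."""
    return (x - 0.9)**2 - 0.5*(y - 0.2)**2

def grad(x, y, h=1e-6):
    """Central finite-difference gradient of the toy free energy."""
    fx = (f_mf(x + h, y) - f_mf(x - h, y)) / (2*h)
    fy = (f_mf(x, y + h) - f_mf(x, y - h)) / (2*h)
    return np.array([fx, fy])

# Newton iteration on grad f = 0 converges to the saddle-point
xy = np.array([0.0, 0.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])   # Hessian of the toy f_mf
for _ in range(50):
    xy = xy - np.linalg.solve(H, grad(*xy))

assert np.allclose(xy, [0.9, 0.2], atol=1e-5)
```

Root-finding on the gradient, unlike descent, is indifferent to the sign of the curvature, which is why the saddle-point nature of $f_{\text{mf}}(x,y)$ is not an obstacle in practice.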
The situation is somewhat analogous to thermodynamics in the finite
density NJL-model calculation. If the free energy $f$ is calculated
as a function of the renormalized chemical potential
$\mu_q$~\cite{Asakawa:1989bq}, then the value of $\mu_q$ is fixed by
the condition $\partial f/\partial\mu_q=0$, which corresponds not to a
minimum but to a maximum of $f$ as a function of $\mu_q$. This is not
problematic, however, because the condition $\partial
f/\partial\mu_q=0$ is a \textit{constraint} equation for the number
density. Likewise, we may think that the determination of
$y$ in the Polyakov loop dynamics is not energetic but that the second
equation of (\ref{eq:mean_field_eq}) is related to a
constraint equation for the number density of gauge charge, namely, the
Gauss law constraint. If this conjecture holds, though a
rigorous proof is beyond our current scope, the saddle-point nature of
the free energy is no longer an obstacle to fixing $x$ and $y$.
\begin{figure}
\includegraphics[width=7.5cm]{diff_comp.eps}
\caption{Comparison to the difference
$\langle\ell\rangle-\langle\ell^\ast\rangle$ estimated by the
saddle-point of the mean-field free energy at various densities;
$\epsilon=$0.1 (solid and dashed) and 0.5 (dotted and
dotted-dashed).}
\label{fig:diff_comp}
\end{figure}
Let us see what comes out if we calculate the Polyakov loop using
the ansatz (\ref{eq:ansatz2}) with the solution to
Eq.~(\ref{eq:mean_field_eq}) assuming that the second equation of
(\ref{eq:mean_field_eq}) stems from a constraint associated with gauge
dynamics. In Fig.~\ref{fig:diff_comp} we show the difference
$\langle\ell\rangle-\langle\ell^\ast\rangle$ as a function of $J$.
The solid and dotted curves at $\epsilon=0.1$ and $\epsilon=0.5$,
respectively, are just the same as those already presented in
Fig.~\ref{fig:diff}. The dashed and dotted-dashed curves are the
counterparts derived from the mean-field free energy with $x$ and $y$.
In view of the two $\epsilon=0.5$ curves, on the one hand, the
mean-field result is entirely consistent with what we found in the
phase reweighting method. On the other hand, the agreement is not as
good for the $\epsilon=0.1$ results, except at $J\simeq0$ and
$J>J_{\text{c}}$. This discrepancy comes from a singularity of the
dense-heavy model located at $\epsilon=0$; when $\epsilon$ is small,
the fermion action in the dense-heavy model is approximated as
$S_{\text{f}}[L]\sim-3\epsilon\sum_{\vec{x}}\ell(\vec{x})$. Hence, nonzero
$\epsilon$ tends to align the vacuum in the direction of $x=y$ (see
Eq.~(\ref{eq:ansatz2})), and the model does not reduce smoothly to a pure
gluonic theory with $y=0$ in the limit $\epsilon\to0^+$.
This situation is in sharp contrast to finite density QCD. In the
strong coupling expansion, for a simple example, the fermion action is
$S_{\text{f}}[L]\sim H\sum_{\vec{x}}\bigl[\ell(\vec{x})e^{\mu_q/T}+\ell^\ast
(\vec{x})e^{-\mu_q/T}\bigr]$. In the limit $\mu_q\to0$,
therefore, the fermionic action acts as an external field toward
$x\neq0$ and $y=0$, so that the vacuum alignment of a pure gluonic
theory is smoothly retrieved in the $\mu_q\to0$ and $H\to0^+$ limit.
Apart from the discrepancy inherent to the singular behavior of the
dense-heavy model near $\epsilon=0$, the mean-field free energy leads
to results in accord with the phase reweighting method. This is
indirect evidence that the saddle-point is actually not harmful,
as we conjectured.
We shall comment on a possible clue to resolving the sign problem
based on what we have seen here. We recast the sign problem in the
form of a saddle-point of the mean-field free energy. The
saddle-point appears harmless from our analyses, presumably stabilized
by the Gauss law, and then the sign problem is resolved. Of course,
it is highly nontrivial how to map this analytical procedure to a
lattice QCD simulation. Still, we would point out that the clustering
method developed in Ref.~\cite{Alford:2001ug} is a philosophically
similar idea along this line: partial integration over the cluster
domains wipes away the sign problem, just as our mean-field
free energy is free from the sign problem after the group integration.
The field-theoretical approach to reveal the relation between the
saddle-point of the free energy and the Gauss law is beyond our
current scope, but it definitely deserves further investigation.
\section{Summary}
\label{sec:summary}
We investigated the dense-heavy model and observed the sign problem
at finite density within the framework of the mean-field
approximation. We calculated the quark number density, the
fundamental and adjoint Polyakov loop, the susceptibility, and the
phase of the fermion determinant as functions of the model parameters
specifying the temperature and the density. All the mean-field
results are reasonable and even in quantitative agreement with the
lattice data.
In the sign-problem-free environment of the SU(2) case, we
found that the mean-field approximation works better than expected.
Our mean-field results nicely reproduced the quark number density and
the Polyakov loop measured on the lattice, once the unknown Polyakov
loop renormalization was fixed by fitting. We then proceeded to the
SU(3) case, where the sign problem is relevant.
We saw that an approximation scheme capable of describing
the different $\mu_q$ dependence of $\langle\ell\rangle$ and
$\langle\ell^\ast\rangle$ cannot avoid the sign problem even at the
mean-field level. We applied the phase reweighting method as a
practical means to handle the complex fermion
determinant. We obtained $\langle\ell\rangle-\langle\ell^\ast\rangle$
as a function of the density parameter or the temperature parameter.
So far, not many lattice data are available for this quantity,
but our result $\langle\ell^\ast\rangle>\langle\ell\rangle$ is
consistent with other model studies as well as with what has been obtained
from the Taylor expansion method on the lattice.
The important message from our work is the following. The sign
problem may be a serious obstacle even in mean-field model studies
at finite temperature and density. At the least, one
should be careful when dealing with the Polyakov loop
behavior at finite density. Chiral effective models coupled to the
Polyakov loop are examples that require caution
in applications to the finite density problem. The phase
reweighting method is one prescription for approximating the
expectation value in a manageable way. This prescription is, however,
only practical and not a solution to the sign problem. We would
argue that one cannot solve the QCD sign problem until one can at
least find a solution to the sign problem appearing even in this
simple model setting. The converse is not necessarily true, though.
In the future, we would like to apply ideas other than the phase
reweighting method to the sign problem at the mean-field level. The
bottom line is, thus, that we have formulated an analytical
testing ground for thinking about the sign problem, and we believe that
our clarification will be useful for further developments.
\begin{acknowledgments}
The authors thank Y.~Aoki, T.~Blum, Ph.~de~Forcrand, R.~Pisarski,
and C.~Schmidt for useful conversations. The authors specially thank
S.~Ejiri for discussions on the dense-heavy model which urged us to
initiate this work. This research was supported in part by RIKEN BNL
Research Center and the U.S.\ Department of Energy under cooperative
research agreement \#DE-AC02-98CH10886.
\end{acknowledgments}
math/0610083
\subsubsection{\@startsection{subsubsection}{3}%
\normalparindent{.5\linespacing\@plus.7\linespacing}{-.5em}%
{\normalfont\bfseries}}
\setcounter{tocdepth}{1}
\raggedbottom
\def\mathcal{\mathcal}
\def\smallskip{\smallskip}
\def{\mathbb \mu}_{n+1}{{\mathbb \mu}_{n+1}}
\def\partial{\partial}
\def\simeq{\simeq}
\def\alpha{\alpha}
\def\beta{\beta}
\def\gamma{\gamma}
\def\delta{\delta}
\def\epsilon{\epsilon}
\def\lambda{\lambda}
\def\sigma{\sigma}
\def\tau{\tau}
\def\Lambda{\Lambda}
\def\cal{\mathcal}
\def\longrightarrow{\longrightarrow}
\def\mathbb {Z}/n\mathbb{Z}{\mathbb {Z}/n\mathbb{Z}}
\def\mathbb{Z}/(n+1)\mathbb{Z}{\mathbb{Z}/(n+1)\mathbb{Z}}
\def\{1,\dots,n\}{\{1,\dots,n\}}
\def\nonumber{\nonumber}
\def\langle{\langle}
\def\rangle{\rangle}
\def\mathbb {Z}/2\mathbb{Z}{\mathbb {Z}/2\mathbb{Z}}
\def\mathbb{Z}/(2n\mathbb{Z)}{\mathbb{Z}/(2n\mathbb{Z)}}
\def\mathbb{S}_n{\mathbb{S}_n}
\def\mathbb{S}_{n+1}{\mathbb{S}_{n+1}}
\defsign{\mathrm{sign}}
\def\mathrm{codim}{\mathrm{codim}}
\def\mathrm{Str}{\mathrm{Str}}
\def\mathrm{Hom}{\mathrm{Hom}}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{FROB}{\mathcal{FROB}}
\def\mathrm{Tr}{\mathrm{Tr}}
\def\mathrm{STr}{\mathrm{STr}}
\def\mathrm{Fix}{\mathrm{Fix}}
\defsign{sign}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lm}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{crl}[thm]{Corollary}
\newtheorem{claim}[thm]{Claim}
\newtheorem{df-pr}[thm]{Definition-Proposition}
\newtheorem{df}[thm]{Definition}
\newtheorem{qu}[thm]{Question}
\begin{document}
\title[Discrete torsion, symmetric products and the Hilbert scheme]
{Discrete torsion, symmetric products and the Hilbert scheme}
\author[Ralph M. Kaufmann]{Ralph M. Kaufmann$^*$\\
University of Connecticut\\
and Max--Planck Institut f\"ur Mathematik}
\thanks{${}^*$ Partially supported by NSF grant \#0070681}
\email{ralphk@mpim-bonn.mpg.de}
\address{University of Connecticut, Department of Mathematics,
Storrs, CT, USA and
Max--Planck Institut f\"ur Mathematik, Bonn, Germany}
\maketitle
\section*{Introduction}
Recently the understanding of the cohomology of the Hilbert scheme
of points on K3 surfaces has been greatly improved by Lehn and Sorger \cite{LS}.
Their approach uses the connection of the Hilbert scheme to
the orbifolds given by the symmetric products of these surfaces.
In \cite{K1} we introduced a general theory
replacing cohomology algebras, or more generally Frobenius
algebras, in the setting of global quotients by finite groups. This is our
theory of group Frobenius algebras, which
are group graded non--commutative algebras whose non--commutativity is
controlled by a group action.
The action and the grading turn these algebras
into modules over the Drinfel'd double of the group ring. The appearance of
the Drinfel'd double is natural from the orbifold point of view
(see also \cite{Kir})
and can be translated into the fact that the algebra
is a $G$--graded $G$--module algebra in the following sense:
$G$ acts by conjugation on the grading,
while the algebra structure is
compatible with the grading with respect to left multiplication
(cf. \cite{K3,Mo}).
In the special case
of the symmetric group, we recently proved existence and
uniqueness for the
structures of symmetric group Frobenius algebras based on a
given Frobenius algebra \cite{K2}, providing explicit formulas
for the multiplication in the algebra.
This uniqueness has to be understood up to the action of two
groups of symmetries on group Frobenius algebras called discrete torsion
and super-twisting \cite{K3}. The set of $G$--Frobenius algebras is
acted upon by both of these groups. This action only changes
some defining structures of a Frobenius algebra in a projective
manner while keeping others fixed.
Applying this result to the global orbifold cohomology of a
symmetric product, where there is a canonical choice
for the discrete torsion and super-twists, we obtain its uniqueness.
Our latest results on this topic \cite{K3} explain the origin
of these discrete degrees of freedom. In the special case of the Hilbert
scheme as a resolution of a symmetric product the
choice of sign for the metric specifies a discrete torsion cocycle
that in turn changes the multiplication by a much discussed sign.
Assembling these results, which we review below, we obtain:
{\bf Theorem.}{\it The cohomology of $Hilb^{[n]}$, the Hilbert scheme
of $n$--points for a K3 surface, is the $\mathbb{S}_n$ invariant part of
the $\mathbb{S}_n$--Frobenius algebra
associated to the symmetric product of the cohomology of the surface
twisted by a discrete torsion. In
other words, it is the unique $\mathbb{S}_n$--Frobenius algebra structure for the extended
global orbifold cohomology twisted by the specific discrete torsion
which is uniquely determined by the map of \cite{LS}.
In general, the sequence of spaces $Hilb^{[n]}$
gives rise to the twisted second quantization of the underlying surface
on the cohomological (motivic) level.}
Here the term associated refers to the uniqueness result of
\cite{K2} stated above.
This result follows from a series of considerations which we will review.
The logic is roughly as follows:
The theoretical background for our considerations was first presented
at \cite{talk} and is given in \cite{K1}
where we showed that the algebras arising from the ``stringy'' study
of objects with a global action by a finite group are so--called
$G$--Frobenius algebras. These algebras are non--commutative group
graded extensions of their classical
counterpart, Frobenius algebras, which arise for instance
in the study of manifolds
as cohomology algebras and in the study of singularities with an isolated
critical point as a Milnor ring. A $G$--action by automorphisms
is part of the data of a $G$--Frobenius algebra and taking invariants
under this action yields a commutative algebra.
Given an object, such as a manifold, together with a finite group action
and a functor, such as cohomology, one would like to augment this
functor to take values in $G$--Frobenius algebras.
The underlying additive structure of the $G$--Frobenius algebra
is given by evaluating the functor on each fixed point set for
each group element and forming the direct sum.
This yields a collection of Frobenius algebras,
one for each group element. The Frobenius algebra for the identity
element is the Frobenius algebra associated to the object itself and
is called the identity sector. For the other algebras, called twisted sectors,
we only retain their structure as modules over the identity sector, together
with their pairings -- the module structure over the identity sector
being induced by inclusion maps. Furthermore, there is a $G$ action on the
identity sector.
Any ``stringy'' extension of the original functor will respect
these structures, and add a {\em group graded multiplication} and
a group action by automorphisms on the whole algebra which is
compatible with the group action on the identity sector.
It is possible to classify all such ``stringy'' extensions
in the special case when all the twisted sectors are cyclic modules over
the identity sector. Such $G$--Frobenius algebras are called special.
The classification is in terms of group cohomological
data as shown in \cite{K1}. The cyclicity condition is met in the situation
of singularities with an isolated critical point at zero as well
as for symmetric products, which are the global quotient of the
$n$th power by the $n$th symmetric group $\mathbb{S}_n$. For the $n$th symmetric
product of an object, the untwisted sector is the $n$th tensor power of the
Frobenius algebra of this object, while the twisted sector for
a permutation is again a tensor power of the Frobenius algebra of this
object, but to the power of the number of cycles in the permutation.
It can be checked that these twisted sectors are indeed cyclic.
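Concretely, if $A$ is the Frobenius algebra of the object, the $\sigma$-twisted sector has underlying space $A^{\otimes c(\sigma)}$, with $c(\sigma)$ the number of cycles of $\sigma$. A small sketch of this bookkeeping (the helper names are our own, purely illustrative):

```python
def cycle_count(perm):
    """Number of cycles of a permutation given as a tuple of images of 0..n-1."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def twisted_sector_dim(perm, dim_A):
    """dim of the sigma-twisted sector = (dim A)^(number of cycles of sigma)."""
    return dim_A ** cycle_count(perm)

# the identity in S_3 has 3 cycles: the untwisted sector is the full tensor cube
assert twisted_sector_dim((0, 1, 2), 4) == 4**3
# a 3-cycle has a single cycle: its twisted sector is a single copy of A
assert twisted_sector_dim((1, 2, 0), 4) == 4
```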
Imposing the cyclicity condition (i.e.\ restricting to special
$\mathbb{S}_n$--Frobenius algebras) and an additional grading condition, it is
possible to make the classification concrete. The additional
grading condition is satisfied in the case of symmetric products.
First, we showed in \cite{K2} that if such a structure exists, it is essentially
unique.
This uniqueness is essential in the following sense: as explained
in \cite{K3}, given one ``stringy'' extension of the data
considered above, it is possible to produce another extension with
the use of a group cocycle in $Z^2(G,k^*)$. In the setting of
super ($\mathbb {Z}/2\mathbb{Z}$--graded) algebras, we can also produce yet another
extension for each element of ${\rm Hom}(G,\mathbb {Z}/2\mathbb{Z})$. These twists of
the original extension can be achieved via a tensor product with a
twisted group ring or a super--graded group ring \cite{K3}. This
yields an action of both groups $Z^2(G,k^*)$ and ${\rm
Hom}(G,\mathbb {Z}/2\mathbb{Z})$. These actions are called discrete torsion and super
twist, respectively. Thus essentially unique means unique up to
the action of these two groups.
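The discrete-torsion twist by a cocycle $\alpha\in Z^2(G,k^*)$ can be illustrated on the group ring itself. The following toy sketch (our own, for $G=\mathbb{Z}/2\times\mathbb{Z}/2$, where a nontrivial $\alpha$ exists; this is not the Frobenius-algebra construction of the text) checks that the cocycle condition makes the twisted product $e_g e_h=\alpha(g,h)\,e_{gh}$ associative, while asymmetry of $\alpha$ makes it noncommutative:

```python
from itertools import product

G = list(product((0, 1), repeat=2))       # the group Z/2 x Z/2

def mul(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def alpha(g, h):
    """A 2-cocycle on Z/2 x Z/2 with values in {+1, -1}."""
    return (-1) ** (g[1] * h[0])

# cocycle condition <=> associativity of e_g * e_h = alpha(g, h) e_{gh}
for g, h, k in product(G, repeat=3):
    assert alpha(g, h) * alpha(mul(g, h), k) == alpha(g, mul(h, k)) * alpha(h, k)

# the twist is noncommutative exactly where alpha is asymmetric
g, h = (1, 0), (0, 1)
assert alpha(g, h) != alpha(h, g)
```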
Second, in the case of symmetric products, the unique structure
exists, as we showed in \cite{K2}. There is also a canonical
choice of an initial algebra structure upon which discrete torsion
and super--twists act.
The proof of existence relies on a general formalism which makes
use of the fact that in a setup such as a manifold with a finite
group action, we can also regard the fixed point sets of all
subgroups generated by several elements of the group. These fixed
point sets are then the intersection of the fixed point sets for
the individual generators. The general setting in which this is
possible is the setting of intersection $G$--Frobenius algebras.
In this framework, one can show that the multiplication factors
through double intersections while the associativity equation is
to be checked on triple intersections.
In the case where one is considering the symmetric product of
a manifold, the canonical non--commutative structure coincides with
the one found in \cite{FG}, and its commutative
invariants are those of the Chen--Ruan orbifold
cohomology as calculated in \cite{U}. Notice that our uniqueness result
makes no reference to any space of maps or to any specific ``stringy'' extension,
but only depends on the algebraic structure and is thus common to all ``stringy''
extensions.
Another non--commutative ``stringy'' multiplication based on
the additive data underlying symmetric products was given in \cite{LS}
in their consideration of Hilbert schemes of K3 surfaces. By our result
this has to be related to the one stemming from symmetric products by
either a twist by discrete torsion or a super--twist.
Indeed there is a twist by discrete torsion, which produces the
algebra of \cite{LS}. This discrete torsion cocycle is actually trivial
on the level of cohomology, but naturally induces a sign change for the
multiplication and the metric which is given in \cite{LS} on the level
of group invariants. From our considerations of the action of
discrete torsion \cite{K3} this cocycle is actually already fixed
by the choice of sign for the metric, which by geometric reasoning
(resolution of $A_2$ singularities) has to be negative definite.
Lastly, the family of multiplications found in \cite{QW}
can be identified as the family arising from
twisting with discrete torsion cocycles which
preserve the grading condition.
The paper is organized as follows: In \S1 we present the
general functorial setup for extending functors with values in
Frobenius algebras to functors with values in
$G$--Frobenius algebras. \S2 contains the basic definitions
of $G$--Frobenius algebras and special $G$--Frobenius algebras \cite{K1},
for which the possible extensions are classifiable in terms
of group cohomological data.
In \S3 we introduce intersection Frobenius algebras which
are adapted to the situation in which one can take successive intersections
of fixed point sets. This structure is needed in order to show the
existence of symmetric products and in general it is shown how in
such a situation the multiplication can be defined via double intersections
while the associativity equations are naturally given by total symmetry
in the triple intersection.
\S4 reviews our analysis of discrete torsion \cite{K3}.
\S5 recalls our results on the structure of $\mathbb{S}_n$--Frobenius algebras \cite{K2}
and contains the existence and uniqueness statements.
\S6 assembles these results in the case of any $\mathbb{S}_n$--Frobenius algebra
twisted by a specific discrete torsion cocycle which is uniquely determined by the metric of \cite{LS}.
This result applied to
the situation of the Hilbert scheme yields the theorem above.
\section*{Notation} For the remainder of the paper $G$ is a fixed
finite group and $k$ is a field of characteristic $0$.
\section*{Acknowledgments}
I would like to thank the Max--Planck--Institut for its kind
hospitality and also gratefully acknowledge the support from the
NSF. I would also like to thank the organizers of the conference.
Special thanks go to Takashi Kimura, since it was a discussion
with him that was the initial spark for our approach to discrete
torsion. Most importantly I wish to thank Yuri I.\ Manin whose
deep insights into the beauty of mathematics have been
a continuous source of inspiration.
\section{Functorial setup}
\subsection{General background}
We will consider objects $X$ together with the
action of a finite group $G$. In this situation, one would classically
study the invariants or the quotient of $X$ by $G$. In stringy
geometry for global quotients, however, it is the aim to enlarge
this picture to consider the fixed point sets for all group
elements, together with an induced $G$ action on them. Here $G$
acts on the fixed point sets by conjugation of the group elements
labelling the fixed point sets. The classical part is then
represented by $X$ considered as the fixed point set of the
identity in $G$ and the $G$ action on this fixed point set.
In particular, classical functors such as cohomology which take values
in Frobenius algebras should have an augmented counterpart including
the information about all the fixed point sets. The augmented functors
should take values in $G$--Frobenius algebras as defined in \cite{K1}.
Physically this can be seen as the transition from topological
field theory (TFT) to a finite
gauge group TFT (see \cite{DW,DVVV,K1}).
The functorial setup for extending functors with values in Frobenius
algebras to those with values in $G$--Frobenius algebras is the following:
Let $\mathcal{FROB}$ be the category of Frobenius algebras, whose objects are Frobenius
algebras and morphisms are maps which respect all the structures.
\begin{df}{A $G$--category}
is a category $\mathcal{C}$ where for each object
$X \in Ob(\mathcal{C})$ and each $g \in G$ there exists an object $X^g$ and a
morphism
$i_g \in \mathrm{Hom}(X^g,X)$ with $X^e=X$ and $i_e= id$
and there are isomorphisms $\psi_{g,g^{-1}}\in \mathrm{Hom}(X^{g},X^{g^{-1}})$.
We call a category a $G$--intersection category
if it is a $G$--category and for each pair
$(g,h) \in G\times G$ and object $X\in Ob(\mathcal{C})$ there are isomorphisms
$\psi\in \mathrm{Hom}( (X^g)^h, (X^h)^g)$ and morphisms
$i^{gh}_{g,h} \in \mathrm{Hom}((X^g)^h,X^{gh})$.
A {$G$--action} for a $G$--category is given by
a collection of morphisms $\phi_g(X,h) \in \mathrm{Hom}(X^h,X^{ghg^{-1}})$
which are compatible with the structural morphisms and
satisfy $\phi_g(X,g'hg^{\prime -1})\phi_{g'}(X,h)= \phi_{gg'}(X,h)$.
\end{df}
\subsection{Examples}
Examples of an intersection $G$--category with $G$--action are
categories of spaces equipped with a $G$--action whose fixed
point sets are in the same category. Actually this is the category
of pairs $(X,Y)$ with $X$ say a smooth space with a $G$--action
and $Y$ a subspace of $X$. Then $(X,Y)^g:=(X,Y\cap Fix(g,X))$ with
$Fix(g,X)$ denoting the fixed points of $g\in G$ in $X$, and
$i_g=(id,\iota_g)$ with $\iota_g: Y\cap Fix(g,X)\rightarrow Y$
being the inclusion. It is enough to consider pairs $(X,Y)$ where
$Y\subset X$ is the set fixed by a subgroup generated by an
arbitrary number of elements of $G$: $H:=\langle
g_1,\dots,g_k\rangle$.
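As a quick computational illustration of why such pairs suffice, here is a minimal Python sketch (a toy example of ours, not taken from the text): a finite set with an $\mathbb{S}_3$ action in which the points fixed by the subgroup $\langle g,h\rangle$ are exactly $Fix(g,X)\cap Fix(h,X)$.

```python
from itertools import permutations

# Toy example (not from the paper): X = {0,...,5}, with S_3 acting by
# permuting three "blocks" {0,1}, {2,3}, {4,5}.  We check that the fixed
# point set of the subgroup <g, h> is the intersection Fix(g) & Fix(h).

X = range(6)

def act(perm, x):
    """perm permutes the blocks (0,1,2); the point x lies in block x // 2."""
    block, offset = divmod(x, 2)
    return 2 * perm[block] + offset

def fix(perms):
    """Points fixed by every permutation in the list."""
    return {x for x in X if all(act(p, x) == x for p in perms)}

g = (1, 0, 2)   # transposition of blocks 0 and 1
h = (0, 2, 1)   # transposition of blocks 1 and 2

# the subgroup generated by these two transpositions is all of S_3
subgroup = list(permutations(range(3)))
assert fix(subgroup) == fix([g]) & fix([h])

# the subgroup generated by g alone is {id, g}
assert fix([(0, 1, 2), g]) == fix([g])
```

This only checks the finite toy model; the categorical statement in the text abstracts exactly this intersection behaviour.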
We could also consider the action on the $X^g$ to be trivial and
set $(X^g)^h:= X^g$. This will yield a $G$--category.
Also the category of functions $f:{\bf C}^n\rightarrow {\bf C}$
with an isolated singularity at $0$ together with a group action
of $G$ on the variables induced by a linear action of $G$ on the
linear space fixing the function is an example of a $G$--category.
This is a category of triples $({\bf C}^n, f: {\bf C}^n
\rightarrow {\bf C}, \rho \in {\rm Hom}(G,GL(n)))$ such that $f$
has an isolated singularity at zero and $f(\rho(g){\bf z})= f({\bf
z})$ for all $g\in G$ and ${\bf z}\in {\bf C}^n$, with morphisms being linear
maps between the linear spaces such that all structures are compatible.
The functor under consideration is the local ring or Milnor ring.
Again we set $(X^g)^h:= X^g$.
Here the role of
the fixed point set is played by the linear fixed point set and
the restriction of the function to this fixed point set
(cf.\ \cite{K1}). Again we can consider pairs of an object and a
subobject as above in order to get an intersection $G$--category.
Our main examples are smaller categories such as a global
orbifold. As a $G$--category, the objects are the fixed point sets
of the various cyclic groups generated by the elements of $G$ and
the morphisms are the inclusion maps. Again we set $(X^g)^h:=
X^g$. For a global orbifold, we can also consider all fixed point
sets of the groups generated by any number of elements of $G$ as
objects together with the inclusion maps as morphisms. This latter
will render a $G$--intersection category.
The same is true for isolated singularities. Here the objects are
the restriction of the function to the various subspaces fixed by
the elements of $G$ together with inclusion or for the $G$--intersection
category we consider all intersections of these subspaces together with the
restriction of the function to these subspaces as objects,
again with the inclusion morphisms.
\subsection{The classification/reconstruction program}
Given a functor to Frobenius algebras (like cohomology),
we would like to find its stringy counterpart for global quotients.
Now, suppose we have a
$G$--category $\mathcal{C}$ and a contravariant functor $\mathcal{F}$
from $\mathcal{C}$ to $\mathcal{FROB}$. In this setting
there might be several schemes to define a ``stringy geometry'' by
augmenting the functor to take values in $G$--Frobenius algebras.
But all of these schemes have to have the same additive structure provided
by the ``classical orbifold picture'' (see \ref{classical}) and satisfy
the axioms of $G$--Frobenius algebras (see \S 2). Furthermore there
are more structures which are already fixed in this situation,
as explained below. These data can sometimes be used to classify
the possible algebra structures and to reconstruct them when the classification
data are known. In the case of so--called special $G$--Frobenius algebras
a classification in terms of group cohomology classes is possible.
There are some intermediate steps which contain partial information
that has been previously considered, such as
the additive structure, dimensions etc., as discussed in \ref{classical}.
\subsubsection{The ``classical orbifold picture''}
\label{classical} Now, suppose we have a $G$--category $\mathcal{C}$ and a
contravariant functor $\mathcal{F}$ from $\mathcal{C}$ to $\mathcal{FROB}$, then for
each $X \in Ob(\mathcal{C})$, we naturally obtain the following collection
of Frobenius algebras: $(\mathcal{F}(X^g):g\in G)$ together with
restriction maps $r_g = \mathcal{F}(i_g): \mathcal{F}(X) \rightarrow \mathcal{F}(X^g)$.
One possibility is to regard the direct sum of the Frobenius algebras
$A_g:=\mathcal{F}(X^g)$.
The first obstacle arises in the presence of a
grading, say by ${\bf N}, {\bf Z}$ or ${\bf Q}$, as it is well known that
the direct sum of two graded Frobenius algebras is only well defined
if their Euler dimensions (cf.\ e.g.\ \cite{K3}) agree. This can, however,
be fixed by using the shifts
$s^+$ discussed in \ref{shifts}. If the grading was originally in ${\bf N}$
these shifts are usually in $\frac{1}{2}{\bf N}$, but in the complex case still lie in ${\bf N}$.
Furthermore, if we have a $G$--action on the $G$ category, it
will induce the structure of a $G$--module on this direct
sum.
Each of the Frobenius algebras $A_g$ comes equipped with
its own multiplication,
so there is a ``diagonal'' multiplication
for the direct sum which is the direct sum of these multiplications.
Using the shift $s^+$ it is possible to define a ``classical
theory'' by considering the diagonal algebra structure and taking
$G$--invariants. This is the approach used in \cite{AS}, \cite{T}
and \cite{AR}. The paper \cite{AS} shows that this structure
describes the $G$--equivariant rather than the $G$--invariant
geometry.
One can of course forget
the algebra structure altogether and retain only
the additive structure. This was done e.g.\ in \cite{S} in the setting of
V--manifolds (i.e.\ orbifolds).
Concentrating only on the dimensions one arrives for instance at
the notion of ``stringy numbers'' \cite{BB}.
\subsubsection{The ``stringy orbifold picture''}
The ``diagonal'' multiplication is however {\em not} the right object to study
from the perspective of ``stringy geometry'' or a TFT with a finite
gauge group \cite{K1,CR}.
The multiplication should rather be $G$--graded, i.e.\ map
$A_g \otimes A_h \rightarrow A_{gh}$. We call such a product a ``stringy'' product.
Here the natural question is the following:
\begin{qu}
Given the additive structure
of a $G$--Frobenius algebra, what are the possible ``stringy'' products?
\end{qu}
A more precise version of this question is addressed in the setting of our reconstruction program \cite{K2, K3}.
\subsubsection{The $G$--action} One part of the structure of
a $G$--Frobenius algebra is the $G$--action. If the $G$--category
is already endowed with a $G$--action, we can use it to reconstruct the
$G$--action on the $G$--Frobenius algebra, which in turn limits the
choices of ``stringy'' products to those that are compatible with it.
\subsubsection{Invariants} By definition $G$--Frobenius algebras
come with a $G$ action whose invariants form a commutative algebra.
Due to the nature of the $G$ action
this commutative algebra is graded by conjugacy classes, and under
certain conditions the metric descends and
the resulting algebra is again Frobenius. The induced multiplication
is multiplicative in the conjugacy classes and we call such
a multiplication commutative ``stringy''.
\subsubsection{Examples}
An example of a commutative ``stringy'' product
is orbifold (quantum) cohomology \cite{CR}.
For cohomology of global orbifolds
it was shown in \cite{FG} and recently in \cite{JKK} that
there is a group graded version for global orbifold cohomology
which has the structure of a $G$ Frobenius algebra, as we had previously
postulated \cite{talk}.
For new developments on quantum
deformations of the $G$--Frobenius algebras see \cite{JKK}.
\subsubsection{Special $G$--Frobenius algebras}
The special reconstruction data reflects this situation in the case
that the $A_g$ algebras are cyclic $A_e$ modules. This is a restriction
which leads to an answer in terms of cocycles for a large
class of examples. This class includes all Jacobian Frobenius algebras as
well as symmetric products and special cases of geometric actions on
manifolds (where the cohomology of the fixed point sets is generated by restriction
from the ambient space).
The general approach can be extended to the
non--cyclic case, although the computations get more involved.
\begin{df}
{Given a $G$--category $\mathcal{C}$, we call the tuple $(X^g:g\in G)$ a $G$--collection.
The category of $G$--collections of a $G$--category is the category whose
objects are $G$--collections and whose morphisms are collections of morphisms
$(f^g)$
s.t.\ the diagrams
$$\begin{matrix}
X^g&\stackrel{i_g}{\rightarrow}&X\\
\downarrow f^g&&\downarrow f\\
Y^g& \stackrel{i_g}{\rightarrow} & Y
\end{matrix}$$
commute.}
\end{df}
\begin{df}
{ A $G$--Frobenius functor is a functor from the category of $G$--collections
of a $G$--category to $G$--Frobenius algebras.}
\end{df}
\subsection{Reconstruction/classification}
The main question of the reconstruction/classification
program is whether one can extend
a functor from a $G$--category $\mathcal{C}$ to Frobenius algebras to a
$G$--Frobenius functor, and if so, in how many ways this can be done.
One can view this as the analogue of solving the associativity equations for
general Frobenius algebras. Some of the solutions correspond to
quantum cohomology, some to singularities, etc. and maybe others
to other ``string''--schemes. The structures of possible
``stringy'' products provide a common approach. The systematic
consideration of all possible products confines the choices of
string equivalents of classical concepts and allows one to identify diverse
approaches.
The main
question of reconstruction/classification can be
answered in terms of group cohomological data in the special case
where all of the twisted sectors are cyclic (see below).
This is the content of the Reconstruction Theorem of \cite{K1}.
The consequences are sometimes quite striking as in the case
of symmetric products, where there is only {one}
possible ``stringy'' orbifold
product.
The restrictions on the possible multiplicative structures are even
stricter if one is considering data stemming from a $G$--intersection category.
\section{$G$--Frobenius algebras}
\label{orb}
We fix a finite group $G$ and denote its unit element by $e$. We
furthermore fix a ground field $k$ of characteristic zero for
simplicity. With the usual precautions the characteristic
of the field does not play an important role and furthermore
the group really only needs to be completely disconnected.
\begin{df}
{ A {$G$--twisted Frobenius algebra} ---or $G$--Frobenius algebra for short---
over
a field $k$ of characteristic 0 is
$\langle G,A,\circ,1,\eta,\varphi,\chi\rangle$, where
\begin{tabular}{ll}
$G$&finite group\\
$A$&finite dimensional $G$-graded $k$--vector space \\
&$A=\oplus_{g \in G}A_{g}$\\
&$A_{e}$ is called the untwisted sector and \\
&the $A_{g}$ for $g \neq
e$ are called the twisted sectors.\\
$\circ$&a multiplication on $A$ which respects the grading:\\
&$\circ:A_g \otimes A_h \rightarrow A_{gh}$\\
$1$&a fixed element in $A_{e}$--the unit\\
$\eta$&non-degenerate bilinear form\\
&which respects grading i.e. $\eta|_{A_{g}\otimes A_{h}}=0$ unless
$gh=e$.\\
\end{tabular}
\begin{tabular}{ll}
$\varphi$&an action of $G$ on $A$
(which will be by algebra automorphisms), \\
&$\varphi\in \mathrm{Hom}(G,\mathrm{Aut}(A))$, s.t.\
$\varphi_{g}(A_{h})\subset A_{ghg^{-1}}$\\
$\chi$&a character $\chi \in \mathrm {Hom}(G,k^{*})$ \\
\end{tabular}
\vskip 0.3cm
\noindent Satisfying the following axioms:
\noindent{\sc Notation:} We use a subscript on an element of $A$ to signify that it has homogeneous group
degree --e.g.\ $a_g$ means $a_g \in A_g$-- and we write $\varphi_{g}:= \varphi(g)$ and $\chi_{g}:= \chi(g)$.
\begin{itemize}
\item[a)] {Associativity}
$(a_{g}\circ a_{h}) \circ a_{k} =a_{g}\circ (a_{h} \circ a_{k})$
\item[b)] {Twisted commutativity}
$a_{g}\circ a_{h} = \varphi_{g}(a_{h})\circ a_{g}$
\item[c)]
{$G$ Invariant Unit}:
$1 \circ a_{g} = a_{g}\circ 1 = a_g$
and
$\varphi_g(1)=1$
\item[d)]
{Invariance of the metric}:
$\eta(a_{g},a_{h}\circ a_{k}) = \eta(a_{g}\circ a_{h},a_{k})$
\item[i)]
{Projective self--invariance of the twisted sectors}
$\varphi_{g}|_{A_{g}}=\chi_{g}^{-1}id$
\item[ii)]
{$G$--Invariance of the multiplication}
$\varphi_{k}(a_{g}\circ a_{h}) = \varphi_{k}(a_{g})\circ \varphi_{k}(a_{h})$
\item[iii)]
{Projective $G$--invariance of the metric}
$\varphi_{g}^{*}(\eta) = \chi_{g}^{-2}\eta$
\item[iv)]
{Projective trace axiom}
$\forall c \in A_{[g,h]}$ and $l_c$ left multiplication by $c$:
$\chi_{h}\mathrm {Tr} (l_c \varphi_{h}|_{A_{g}})=
\chi_{g^{-1}}\mathrm {Tr}( \varphi_{g^{-1}} l_c|_{A_{h}})$
\end{itemize}}
\end{df}
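To make the graded multiplication and the twisted axioms concrete, the following Python sketch checks them on a toy model of our own (not taken from the text): the group ring of $G=\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ twisted by the discrete torsion cocycle $\gamma((a,b),(c,d))=(-1)^{ad}$, with one--dimensional sectors $A_g=k\cdot 1_g$ and product $1_g\circ 1_h=\gamma(g,h)1_{gh}$.

```python
import itertools

# Toy model (our construction, not from the paper): twisted group ring of
# G = Z/2 x Z/2 over Q with the discrete-torsion cocycle
# gamma((a,b),(c,d)) = (-1)^(a*d).  Each sector A_g = Q * 1_g is
# one-dimensional, the metric is eta(1_g, 1_h) = gamma(g,h) if gh = e else 0,
# and phi_g(1_h) = phi_{g,h} 1_h with phi_{g,h} = gamma(g,h)/gamma(h,g)
# (g h g^{-1} = h since G is abelian).

G = list(itertools.product((0, 1), repeat=2))
E = (0, 0)

def mul(g, h):                         # group law in Z/2 x Z/2
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def gamma(g, h):                       # a 2-cocycle with values in {+1, -1}
    return (-1) ** (g[0] * h[1])

def phi_scalar(g, h):                  # values are +-1, so // is exact
    return gamma(g, h) // gamma(h, g)

def eta(g, h):                         # metric on generators, graded by gh = e
    return gamma(g, h) if mul(g, h) == E else 0

for g, h, k in itertools.product(G, repeat=3):
    # associativity of the graded product <=> the 2-cocycle condition
    assert gamma(g, h) * gamma(mul(g, h), k) == gamma(g, mul(h, k)) * gamma(h, k)
    # invariance of the metric: eta(1_g, 1_h o 1_k) = eta(1_g o 1_h, 1_k)
    assert gamma(h, k) * eta(g, mul(h, k)) == gamma(g, h) * eta(mul(g, h), k)

for g, h in itertools.product(G, repeat=2):
    # twisted commutativity: 1_g o 1_h = phi_g(1_h) o 1_g
    assert gamma(g, h) == phi_scalar(g, h) * gamma(h, g)
```

Since $G$ is abelian and the sectors are one--dimensional, twisted commutativity here is exactly the statement that the scalars $\varphi_{g,h}$ measure the failure of $\gamma$ to be symmetric.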
\subsection{Remark} In the case of trivial characters the notion
of $G$--Frobenius algebras has appeared also under the name
of group--crossed algebras in \cite{Tu}, where they appeared
from the point of view of homotopy field theory.
\subsection{$G$--graded tensor product} Given two $G$--Frobenius algebras
$$\langle G,A,\circ,1,\eta,\varphi,\chi\rangle \ \ \text{ and }
\langle G,A',\circ',1',\eta',\varphi',\chi'\rangle$$ we
defined \cite{K1} their tensor product
as $G$--Frobenius algebras to be the $G$--Frobenius algebra
$\langle G,\bigoplus_{g \in G}( A_g \otimes A'_g),
\circ\otimes \circ',1\otimes 1',\eta\otimes \eta',\varphi\otimes \varphi',
\chi\otimes \chi'\rangle$.
We will use the short hand notation $A \hat \otimes A'$ for this product.
For the program outlined in \S1 the following data is the starting point
in order to construct a $G$--Frobenius algebra.
\begin{df}
A reconstruction datum is a collection of Frobenius algebras
$$(A_g,\eta_g,1_g):g\in G$$ together with maps of algebras $r_g:
A_e \rightarrow A_g$, isomorphisms $\psi_g:A_g \tilde\rightarrow
A_{g^{-1}}$ and a $G$--action $\varphi$ on $A_e$.
\end{df}
In general, we would like to find the $G$--Frobenius algebra structures
compatible with these data. For many purposes such as symmetric
products it is however enough to restrict to a more specialized situation.
\subsection{Special $G$--Frobenius algebras}
We briefly review special $G$--Frobenius algebras. For this class
of algebras, which includes the algebras having their origin in functions
with isolated singularities and in symmetric products, a classification
of all possible stringy multiplications is possible in terms of
group cohomological data. For details
see \cite{K1,K2}.
\begin{df}
We call a $G$-Frobenius algebra special if all $A_g$ are cyclic
$A_e$ modules via the multiplication $A_e \otimes A_g \rightarrow A_g$
and there exists a collection of cyclic generators $1_g$ of $A_g$ such that
$\varphi_g(1_h)= \varphi_{g,h}1_{ghg^{-1}}$ with $\varphi_{g,h}\in k^*$.
\end{df}
The last condition is automatic, if the Frobenius algebra $A_e$
only has $k^*$ as invertibles, as is the case for cohomology algebras of
connected compact manifolds
and Milnor rings of quasi--homogeneous functions with an
isolated critical point at zero.
Fixing the generators $1_g$ we obtain maps $r_g:A_e \rightarrow A_g$ by setting
$r_g(a_e)= a_e1_g$. This yields a short exact sequence
\begin{equation}
0\rightarrow I_g \rightarrow A_e \stackrel{r_g}{\rightarrow} A_g \rightarrow 0
\end{equation}
It is furthermore useful to fix a section $i_g$ of $r_g$.
We denote the composition $\pi_g:= i_g \circ r_g$.
\begin{df}
A special $G$ reconstruction datum is a collection of Frobenius
algebras $(A_g,\eta_g,1_g): g\in G$ together with
an action of $G$ by algebra automorphisms
on $A_e$ and the structure of a cyclic $A_e$ module algebra on each $A_g$ with
generator $1_g$ such that $A_g$ and
$A_{g^{-1}}$ are isomorphic as $A_e$--module algebras.
\end{df}
\begin{df}
{\it Given a Frobenius algebra $A_e$ and a collection of
cyclic $A_e$--modules $A_g:g \in G$
a graded cocycle is a map $\gamma: G\times G \rightarrow A_e$
which satisfies
$$\gamma(g,h)\gamma(gh,k)\equiv \gamma(g,hk)\gamma(h,k) \; \mathrm{ mod }\; I_{ghk}$$
Such a cocycle is called
section independent if
$$(I_g + I_h)\gamma(g,h) \subset I_{gh}$$
Two such cocycles are considered to be the same if $\gamma(g,h) \equiv
\gamma'(g,h) \; \mathrm{ mod }\; I_{gh}$ and isomorphic, if they are
related by the usual scaling for group cocycles.
Given non--degenerate pairings $\eta_g$ on the $A_g$,
a cocycle is said to be compatible with the metric, if
$$
\check r_g(1_g) = \gamma(g,g^{-1})
$$
where $\check r$ is the dual in the sense of vector spaces with
non--degenerate metric.
}
\end{df}
We will again use the notation $\gamma_{g,h}:=\gamma(g,h)$.
\subsection{Special $G$--Frobenius structure in terms of the cocycles}
\label{special}
Fixing a cyclic generator $1_g \in A_g$, a special $G$--Frobenius
algebra is completely characterized by two
cocycles: $\gamma$ a section independent graded cocycle compatible with the
metric and $\varphi\in Z^1(G,k^*[G])$
where $k^*[G]$ is the group ring restricted to invertible coefficients
with $G$--module structure induced by the adjoint action:
$$
g\cdot(\sum_h \mu_h h)= \sum_h \mu_h ghg^{-1}
$$
The multiplication and $G$--action on the generators define these
cocycles. Set
$$
1_g 1_h = \gamma_{g,h} 1_{gh} \quad \varphi_{g}(1_{h}) = \varphi_{g,h}
1_{ghg^{-1}}
$$
Defining $\varphi(g) := \sum_h \varphi_{g,h} ghg^{-1}$ and
$\gamma(g,h):=\gamma_{g,h}$ we obtain the desired cocycles.
The section independence follows from the fact that
$$(I_g+I_h)\gamma_{g,h}1_{gh}= (I_g+I_h)1_g1_h=0$$
In general, the multiplication is thus given by
\begin{equation}
\label{specialmult}
a_g b_h = i_g(a_g)i_h(b_h)\gamma_{g,h}1_{gh}
\end{equation}
for any choice of sections $i_g$.
The cocycles furthermore satisfy the following two compatibility equations:
\begin{equation}
\label{grpcompat}
\varphi_{g,h}\gamma_{ghg^{-1},g} = \gamma_{g,h}
\end{equation}
and
\begin{equation}
\label{algaut}
\varphi_{k,g} \varphi_{k,h} \gamma_{kgk^{-1},khk^{-1}}
= \varphi_{k} (\gamma_{g,h}) \varphi_{k,gh}
\end{equation}
which follow from the twisted commutativity and
the fact that $\varphi$ acts by automorphisms.
\subsubsection{Remark}
Notice that if $\gamma_{g,h}$ is non--zero, i.e.\ $A_gA_h \neq 0$, then
(\ref{grpcompat})
determines $\varphi_{g,h}$. We also would like to remark that
if $A_gA_hA_k \neq 0$ then
(\ref{algaut}) follows from (\ref{grpcompat})
(cf.\ \cite{K1}).
\begin{df}
We call a pair of a section independent cocycle and
a non--abelian cocycle compatible if they satisfy the equations
(\ref{grpcompat}) and (\ref{algaut}).
\end{df}
\begin{thm}(Reconstruction) Given a special $G$ reconstruction datum
the structures of special $G$--Frobenius algebras are in 1--1
correspondence with compatible pairs of
a graded, section independent $G$ 2--cocycle with values in $A_e$ that is
compatible with the metric and a
non--abelian $G$ 2--cocycle with values in $k^{*}$,
satisfying the following conditions:
\begin{itemize}
\item[i)]$\varphi_{g,g}=\chi_g^{-1}$
\item[ii)]
$\eta_{e}(\varphi_{g}(a),\varphi_{g}(b)) =
\chi_{g}^{-2}\eta_{e}(a,b)$
\item[iii)] The projective trace axiom
$\forall c \in A_{[g,h]}$ and $l_c$ left multiplication by $c$:
\begin{equation}
\chi_{h}\mathrm {Tr} (l_c \varphi_{h}|_{A_{g}})=
\chi_{g^{-1}}\mathrm {Tr}( \varphi_{g^{-1}} l_c|_{A_{h}})
\end{equation}
\end{itemize}
\end{thm}
\subsection{Remark}
Changing the cyclic generators by elements
of $k^*$ leads to isomorphic $G$--Frobenius algebras and
to cohomologous cocycles $\gamma,\varphi$ in $Z^2(G,A_e)$ and $Z^2(G,k^*[G])$.
\subsection{Grading and Shifts}
\label{shifts}
Consider the graded version for Frobenius algebras,
where each Frobenius algebra $A_g$ comes naturally graded,
e.g.\ by cohomological degree or weight in the case
of a quasi--homogeneous isolated singularity. Usually this grading
takes values in ${\bf Q}$.
In this case the metric is also of a fixed degree, e.g.\ the dimension
or the highest weight in the Milnor ring.
Then each $A_g$ is graded as well. We denote the degree of the
pairing $\eta_g$ by $d_g$ and also use the shorthand notation
$d:= d_e$.
\subsubsection{Remark}
It is well known that the direct sum of graded Frobenius algebras is a graded
Frobenius algebra only if the degrees match (see e.g. \cite{K1}).
\subsubsection{The shifts}
From the previous Remark it follows that in order to form the sum
$A:=\bigoplus_{g\in G}A_g$, we need to shift the degrees of the
elements of $A_g$ at least uniformly, i.e.\ if an element $a_g$ in
$A_g$ has degree $\deg(a_g)$ we assign to it the new shifted
degree $\deg^s(a_g) = \deg(a_g) + s_g$.
This observation does not
fix the shifts uniquely. Let us denote the shift in degree of
$A_g$ by $s_g$ and set
$$s_{g}^+ := s_{g}+s_{g^{-1}}, \quad s_{g}^-:= s_{g}-s_{g^{-1}}$$
Then
$$s_{g}^+:= d-d_{g}$$
for the grading reasons mentioned above.
The shift $s^-$ is not fixed; however, there is a standard choice
provided there exists a canonical choice of linear representation of $G$.
\begin{df}
{\it The standard shift for a G--Frobenius algebra with a choice of linear
representation $\rho: G \rightarrow GL_n(k)$
is given by
$$s_{g}^+:= d-d_{g}$$
and
\begin{multline*}
s_{g}^- := \frac{1}{2\pi i}(\mathrm{tr} (\log(g))-\mathrm{tr}(\log(g^{-1}))):=
\frac{1}{2\pi i}(\sum_i \lambda_i(g)-\sum_i \lambda_i(g^{-1}))\\
=\sum_{i: \lambda_i \neq 0} (\frac{1}{2\pi i}2\lambda_i(g)-1)
\end{multline*}
where the $\lambda_i(g)$
are the logarithms of the eigenvalues
of $\rho(g)$ using the branch with arguments in $[0,2\pi)$ i.e.\
cut along the positive real axis.}
\end{df}
In total we obtain:
$$
s_{g}= \frac{1}{2}(s_g^+ + s_g^-)= \frac{1}{2}(d-d_g)
+ \sum_{i:\lambda_i \neq0} (\frac{1}{2\pi i}\lambda_i(g)-\frac{1}{2})
$$
\subsubsection{Remark} This grading, which has its origin in physics,
specializes to the so--called age grading or the orbifold grading of \cite{CR}
in the respective situations.
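As a worked numerical example (our own, with a hypothetical representation, not from the text), the following Python sketch evaluates $s_g^-$ for $g$ acting on ${\bf C}^2$ by $\mathrm{diag}(e^{2\pi i/3}, e^{2\pi i/3})$, using the branch of the logarithm with arguments in $[0,2\pi)$, and checks it against the closed form $\sum_{i:\lambda_i\neq 0}(\frac{1}{2\pi i}2\lambda_i(g)-1)$.

```python
import cmath

TWO_PI = 2 * cmath.pi

def log_arg(z):
    """Argument of z normalized into [0, 2*pi): this is lambda(z)/i for the
    branch of log cut along the positive real axis."""
    return cmath.phase(z) % TWO_PI

def s_minus(eigenvalues):
    """s_g^- = (1 / 2 pi i)(sum_i lambda_i(g) - sum_i lambda_i(g^{-1}))."""
    lam_g = sum(log_arg(z) for z in eigenvalues)
    lam_ginv = sum(log_arg(1 / z) for z in eigenvalues)
    return (lam_g - lam_ginv) / TWO_PI

def s_minus_closed(eigenvalues):
    """Equivalent closed form: sum over nontrivial eigenvalues of
    (1 / 2 pi i) * 2 * lambda_i(g) - 1."""
    return sum(2 * log_arg(z) / TWO_PI - 1
               for z in eigenvalues if abs(z - 1) > 1e-12)

# toy representation: rho(g) = diag(zeta_3, zeta_3) on C^2
eigs = [cmath.exp(2j * cmath.pi / 3)] * 2
assert abs(s_minus(eigs) - s_minus_closed(eigs)) < 1e-9
assert abs(s_minus(eigs) - (-2 / 3)) < 1e-9
```

Here each $\lambda_i(g)/2\pi i = 1/3$ and $\lambda_i(g^{-1})/2\pi i = 2/3$, so $s_g^- = 2(1/3 - 2/3) = -2/3$, matching both formulas.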
\subsection{Super-grading}
\label{super} We can enlarge the framework by considering
super--algebras rather than algebras. This will introduce the
standard signs.
The action of $G$ as well as the untwisted sector should be even.
The axioms that change are
\begin{itemize}
\item[b$^{\sigma}$)] {Twisted super--commutativity}
$a_{g}\circ a_{h} = (-1)^{\tilde a_g\tilde a_h} \varphi_{g}(a_{h})\circ a_{g}$
\item[iv$^{\sigma}$)]
{Projective super--trace axiom}
$\forall c \in A_{[g,h]}$ and $l_c$ left multiplication by $c$:
$\chi_{h}\mathrm {STr} (l_c \varphi_{h}|_{A_{g}})=
\chi_{g^{-1}}\mathrm {STr}( \varphi_{g^{-1}} l_c|_{A_{h}})$
\end{itemize}
where $\mathrm{STr}$ is the super--trace.
Here we denoted by $\tilde a$ the $\mathbb{Z}/2\mathbb{Z}$ degree of $a$.
\section{Intersection G--Frobenius algebras}
We will now concentrate on the situation of functors from $G$--intersection
categories to Frobenius algebras.
Given a $G$--collection in such a category a functor to Frobenius algebras will
provide the following structure which reflects the possibility
to take fixed point sets iteratively. Say we look at the fixed
points with respect to
elements $g_1, \dots, g_n$. These fixed point sets
will be invariant under the group generated by the elements
$g_1, \dots, g_n$ and they are just the intersection of
the respective fixed point sets of the elements $g_i$.
The underlying spaces are therefore invariant with respect
to permutation of the elements $g_i$, and if $g$ appears twice
among the $g_i$ then one can shorten
the list by omitting one of the $g_i$. Also if a list $g_i$
includes $g^{-1}$ we may replace it by $g$. Finally, the fixed
point set under the action of the group generated by two elements
$g$ and $h$ is a subset of the fixed point set of the group
generated by their product $gh$. Translating this into the
categorical framework, we obtain:
\begin{df}
A $G$--intersection Frobenius datum of level $k$ is the following:
For each collection $(g_1,\ldots, g_n)$ with $n\leq k$ of elements
of $G$, a Frobenius algebra $A_{g_1,\dots,g_n}$ and the following
maps:
Isomorphisms
$$\Psi_{\sigma}:
A_{g_1,\dots,g_n}\rightarrow A_{g_{\sigma(1)},\dots,g_{\sigma(n)}}$$
for each $\sigma \in \mathbb{S}_n$ called {permutations}.
Isomorphisms
$$\Psi^{g_1,\dots, g_i, \dots, g_n}_{g_1,\dots, g_i^{-1},\dots ,g_n }:
A_{g_1,\dots, g_i, \dots, g_n} \rightarrow
A_{g_1,\dots, g_i^{-1},\dots ,g_n}$$
commuting with the permutations.
Morphisms
$$r_{g_1,\dots, g_i, \dots g_n}^{g_1,\dots , \hat g_i,\dots ,g_n}:
A_{g_1,\dots,\hat g_i, \dots, g_{n}}
\rightarrow A_{ g_1,\dots,g_n}$$
commuting with the permutations. (Here the symbol $\hat {}$
is used to denote omission.)
Isomorphisms
$$i_{g_1,\dots, g, \dots , g,\dots, g_n}^{g_1,\dots g, \dots, \hat g,\dots ,g_n}:
A_{g_1,\dots, g, \dots , g,\dots, g_n}
\rightarrow A_{g_1,\dots g, \dots, \hat g,\dots ,g_n}$$
commuting with the permutations.
And finally morphisms:
$$r^{g_1,\dots,g_{i}g_{i+1},\dots, g_n}_{g_1,\dots,g_{i},g_{i+1},\dots, g_n}:
A_{g_1,\dots,g_{i}g_{i+1},\dots, g_n} \rightarrow
A_{g_1,\dots,g_{i},g_{i+1},\dots, g_n}$$
commuting with the permutations.
If these data exist for all $k$ we call the data simply a
$G$--intersection Frobenius datum.
\end{df}
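The fixed point rules motivating this definition can be spot--checked computationally. The following Python sketch (a toy of ours, not from the text) verifies, for all permutations of a four--element set, that $Fix(g)=Fix(g^{-1})$ and that $Fix(g)\cap Fix(h)\subseteq Fix(gh)$, the containment underlying the morphisms $r^{g_1,\dots,g_ig_{i+1},\dots}_{g_1,\dots,g_i,g_{i+1},\dots}$.

```python
from itertools import permutations, product

# Toy check (not from the paper) of the fixed-point rules behind the
# definition: for permutations g, h of a finite set,
# Fix(g) = Fix(g^{-1}) and Fix(g) & Fix(h) is contained in Fix(gh).

N = 4
X = range(N)

def compose(g, h):
    """(g h)(x) = g(h(x)), permutations as tuples of images."""
    return tuple(g[h[x]] for x in X)

def inverse(g):
    inv = [0] * N
    for x in X:
        inv[g[x]] = x
    return tuple(inv)

def fix(g):
    return {x for x in X if g[x] == x}

for g, h in product(permutations(X), repeat=2):
    assert fix(g) == fix(inverse(g))
    assert fix(g) & fix(h) <= fix(compose(g, h))
```

This exhausts all $24^2$ pairs in $\mathbb{S}_4\times\mathbb{S}_4$; the containment can be strict, which is why $r^{gh}_{g,h}$ is only a morphism while the permutation and inversion maps are isomorphisms.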
\subsection{Notation}
We set $r_{g_1,\dots,g_n}:= r_{g_1, \dots, g_n}^{g_1,\dots, g_{n-1}}
\circ \dots \circ r_{g_1}$
and we set $I_{g_1,\dots,g_n}:= \mathrm{Ker}( r_{g_1,\dots, g_n})$.
Notice that this definition of $I_{g_1,\dots,g_n}$ is independent
of the order of the $g_i$.
\begin{df}
An intersection $G$--Frobenius algebra of level $k\geq 2$ is
an intersection $G$--Frobenius datum of level $k\geq 2$
together with a $G$--Frobenius algebra structure on $A:= \bigoplus A_g$.
An intersection $G$--Frobenius algebra of level $k\geq 2$ is called special,
if all of the $A_{g_1,\dots, g_n}$ are cyclic $A_e$ module algebras generated by
the $1_{g_1, \dots, g_n}$.
\end{df}
\subsection{Remarks}
\begin{itemize}
\item[1)]
In order to (re)--construct
a suitable multiplication on $\bigoplus A_g$ it is often
convenient to use the double and triple intersections (i.e.\ level 3),
where the double intersections are used for the multiplication and
the triple intersections are used to
show associativity.
\item[2)] We can use the double intersections to define $G$--Frobenius
algebras based on each of the $A_g$ i.e.\ on
$\bigoplus_{h\in Z(g)} A_{g,h}$ for each fixed
$g$, where $Z(g)$ denotes the centralizer of $g$.
\end{itemize}
\begin{df}
A {$G$--action} for an intersection $G$--Frobenius datum of
level $k$ is given by
a collection of morphisms
$$\phi_g(A_{g_1,\dots,g_n},h) \in
\mathrm{Hom}(A_{g_1,\dots,g_n,h},A_{g_1,\dots,g_n,ghg^{-1}})$$
which are compatible with the structural homomorphisms and
satisfy
$$\phi_g(A_{g_1,\dots,g_n},g'hg^{\prime -1})\phi_{g'}
(A_{g_1,\dots,g_n},h)= \phi_{gg'}(A_{g_1,\dots,g_n},h)$$
\end{df}
\begin{df}
We call an intersection $G$--Frobenius datum
a special $G$--intersection Frobenius
datum, if all of the $A_{g_1,\dots,g_n}$ are cyclic $A_e$
module algebras via the restriction maps such that the $A_e$ module
structures are compatible with the restriction morphisms $r$. Here
the generators are given by $r_{g_1,\dots, g_n}(1)$ and the $A_e$
module structure is given by $a\cdot b:= r_{g_1,\dots,g_n}(a)b$.
\end{df}
\subsection{Remark}
In the case of special intersection $G$--Frobenius algebras, there
are two ways to look at the multiplication. One way is to use the
restrictions $r_g$ and sections $i_g$ to define the multiplication
as discussed in \S \ref{special}. A second possibility is to use
the intersection structure. This can be done in the following way:
first push forward to double intersections, second use the
Frobenius algebra structure there to multiply, then pull the
result back up to the invariants of the product, allowing
multiplication by an obstruction class before pulling back. This is
discussed below in \S \ref{multiplication}.
The precise relation between the two procedures is given by
the following Proposition and \ref{specialmult}.
\begin{prop}\cite{K2}
{\it \label{intersect} Given a special $G$ intersection algebra datum
(of level $2$),
the following decomposition holds for section independent cocycles
$\gamma$:
\begin{equation}
r_{gh}(\gamma_{g,h}) =\check r_{g,h}^{gh}(\tilde\gamma_{g,h})
= i_{g,h}^{gh}(\tilde \gamma_{g,h}) \check r_{g,h}^{gh}(1_{g,h})
=\bar \gamma_{g,h} \gamma_{g,h}^{\perp}
\end{equation}
for some section $i_{g,h}$ of $r_{g,h}$, $\tilde \gamma_{g,h} \in (A_{g,h})^{e}$,
$\bar \gamma_{g,h}\in i_{g,h}(A_{g,h})$ of degree $e$,
and $\gamma_{g,h}^{\perp}:=\check r_{g,h}^{gh}(1_{g,h})$.
Here $e=s_{g}+s_{h}-s_{gh}-s^{+}_{g,h}+s^{+}_{gh}$ with $s^{+}_{g,h} :=
d-d_{g,h}$ and $d_{g,h}=\deg(\rho_{g,h})$
and we again used the unshifted degrees. (In particular if the
$s^{-}=0$ then $e= \frac{1}{2}(s_g^++s_h^++s_{gh}^+)-s_{g,h}^+
=\frac{1}{2}(d-d_g-d_h-d_{gh})+d_{g,h}$)}
\end{prop}
Here $\check r$ is again the dual for maps between
vector spaces with non--degenerate
bilinear forms.
That is, using the multiplication in $A_{g,h}$,
\begin{equation}
a_g \circ b_h =
\check r^{gh}_{g,h}(r^g_{g,h}(a_g)r^h_{g,h}(b_h)\tilde\gamma_{g,h})
\end{equation}
\subsubsection{Remark} The decomposition into the terms
$\tilde \gamma$ and $\gamma^{\perp}$ can be understood as decomposing the
cocycle into a part coming from the normal bundle of $X^{g,h}
\subset X^{gh}$, which is captured by $\gamma^{\perp}$, and an additional
obstruction part.
Also generalizing the fact that
\begin{equation}
I_g \gamma_{g} = I_g \check r_g (1_g)=0
\end{equation}
the following lemma holds:
\begin{lm}
\begin{equation}
(I_g +I_h)\gamma_{g,h}^{\perp}\subset I_{gh}
\end{equation}
\end{lm}
\subsection{Multiplication}
\label{multiplication} Fix a special intersection $G$ Frobenius algebra
of level at least 2. From the section independence of $\gamma$, we
see that the multiplication
$A_g \otimes A_h \rightarrow A_{gh}$ can be factored through
$A_{g,h}$. To be more precise, we have the following commutative diagram.
$$
\begin{matrix}
A_g\otimes A_h &\stackrel{\mu}{\rightarrow} &A_{gh}\\
\downarrow r^g_{g,h}\otimes r^h_{g,h}&&
\uparrow \check r^{g,h}_{gh}\circ l_{\tilde\gamma_{g,h}}\\
A_{g,h}\otimes A_{g,h}&\stackrel{\mu}{\rightarrow}&A_{g,h}
\end{matrix}
$$
where $l_{\tilde\gamma_{g,h}}$ is the left multiplication with $\tilde\gamma_{g,h}$.
Vice versa, we can use this diagram as an Ansatz for any intersection
$G$--Frobenius algebra of level at least 2.
\subsection{Associativity equations}
\label{ass}
Fix an intersection $G$--Frobenius algebra of level at least $3$; then
the associativity equations can be factored through
$A_{g,h,k}$. More precisely, we have the following commutative diagram
of restriction maps:
\begin{equation}
\label{assdiagram}
\begin{matrix}
&&&&A_{ghk}&&&&\\
&&&\swarrow&&\searrow&&&\\
A_{gh}&\rightarrow&A_{gh,k}&&\downarrow&&A_{g,hk}&\leftarrow&A_{hk}\\
\downarrow&&&\searrow&&\swarrow&&&\downarrow\\
A_{g,h}&&\rightarrow&&A_{g,h,k}&&\leftarrow&&A_{h,k}\\
\end{matrix}
\end{equation}
More technically:
Using the associativity equations for the $\gamma$, we set
\begin{equation}
\label{tildetripel}
r_{ghk}(\gamma_{g,h} \gamma_{gh,k}):=
\gamma_{g,h,k}
\end{equation}
Associativity dictates that also
\begin{equation}
r_{ghk}(\gamma_{h,k} \gamma_{g,hk})=
\gamma_{g,h,k}
\end{equation}
By analogous arguments as for the decomposition of the $\gamma_{g,h}$'s one can obtain:
\begin{equation}
\gamma_{g,h,k}= i_{g,h,k}^{ghk}(\tilde\gamma_{g,h,k}) \check r_{g,h,k}^{ghk}(1_{g,h,k})
= \check r_{g,h,k}^{ghk}(\tilde\gamma_{g,h,k})
\end{equation}
for some $\tilde \gamma_{g,h,k}\in A_{g,h,k}$.
Vice versa, having defined suitable $\tilde \gamma_{g,h}$,
to show associativity one needs to show that
\begin{equation}
\check r_{gh,k}^{ghk}
(r_{gh,k}^{gh}(\check r^{gh}_{g,h}(\tilde \gamma_{g,h}))\tilde \gamma_{gh,k})=
\check r_{g,h,k}^{ghk}(\tilde \gamma_{g,h,k})
\end{equation}
for some $\tilde \gamma_{g,h,k}$.
This approach is now actually independent of the setup of special
intersection $G$--Frobenius algebras, where such a decomposition is
guaranteed, and is suitable for all intersection $G$--Frobenius data
and intersection $G$--categories.
\section{Discrete Torsion}
\label{disc}
\subsection{The twisted group ring $k^{\alpha}[G]$}
Recall that given an element $\alpha \in Z^2(G,k^*)$
one defines the twisted group ring
$k^{\alpha}[G]$ to be given by the same linear structure with multiplication
given by the linear extension of
\begin{equation}
g\otimes h \mapsto \alpha(g,h) gh
\end{equation}
with $1$ remaining the unit element.
To avoid confusion we will denote elements of $k^{\alpha}[G]$ by
$\hat g$ and the multiplication by $\cdot\,$.
Thus
$$
\hat g \cdot \hat h = \alpha(g,h) \widehat{gh}
$$
For $\alpha$ the following equations hold:
\begin{equation}
\alpha (g,e) = \alpha(e,g)=1, \qquad
\alpha(g,g^{-1})=\alpha(g^{-1},g)
\end{equation}
Furthermore
$$
\hat{g}^{-1}= \frac{1}{\alpha(g,g^{-1})}\widehat{g^{-1}}
$$
and
$$
\hat g\cdot \hat h\cdot\hat {g}^{-1} =
\frac{\alpha(g,h)\alpha(gh,g^{-1})}{\alpha(g,g^{-1})}\widehat{ghg^{-1}}
= \frac{\alpha(g,h)}{\alpha(ghg^{-1},g)} \widehat{ghg^{-1}}=
\epsilon(g,h)\widehat{ghg^{-1}}
$$
with
\begin{equation}
\epsilon(g,h):=\frac{\alpha(g,h)}{\alpha(ghg^{-1},g)}
\end{equation}
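The second equality in the expression for $\hat g\cdot \hat h\cdot\hat {g}^{-1}$
above is an instance of the 2--cocycle condition
$\alpha(a,b)\alpha(ab,c)=\alpha(b,c)\alpha(a,bc)$; evaluated at
$(a,b,c)=(ghg^{-1},g,g^{-1})$ it yields
\begin{equation*}
\alpha(ghg^{-1},g)\,\alpha(gh,g^{-1})=\alpha(g,g^{-1})\,\alpha(ghg^{-1},e)
=\alpha(g,g^{-1}),
\end{equation*}
so that $\frac{\alpha(gh,g^{-1})}{\alpha(g,g^{-1})}=\frac{1}{\alpha(ghg^{-1},g)}$.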
\subsubsection{Remark}
\label{conorm} If the field $k$ is algebraically closed
we can find a representative for each class $[\alpha]\in H^2(G,k^*)$
which also satisfies
$$\alpha(g,g^{-1})=1$$
\subsubsection{The $G$--Frobenius Algebra structure of $k^{\alpha}[G]$}
Fix $\alpha \in Z^2(G,k^*)$.
Recall from \cite{K1,K2} the following structures which turn
$k^{\alpha}[G]$ into a special $G$--Frobenius algebra:
\begin{eqnarray}
\gamma_{g,h}=\alpha(g,h) &&\eta(\hat g,\widehat{g^{-1}}) =\alpha(g,g^{-1})\nonumber\\
\chi_g= (-1)^{\tilde g}
&&\varphi_{g,h}=
\frac{\alpha(g,h)}{\alpha(ghg^{-1},g)}=:\epsilon(g,h)
\end{eqnarray}
\subsubsection{Relations}
The $\epsilon(g,h)$ which are by definition given as
$$\epsilon(g,h):= \frac{\alpha(g,h)}{\alpha(ghg^{-1},g)}$$ satisfy the equations:
\begin{eqnarray}
\epsilon(g,e)&=&\epsilon(g,g)=1\\\nonumber
\epsilon(g_1g_2,h)&=&
\epsilon(g_1,g_2hg_2^{-1})\epsilon(g_2,h)\nonumber\\
\epsilon(k,gh)& =& \epsilon(k,g)\epsilon(k,h)\frac{\alpha(kgk^{-1},khk^{-1})}{\alpha(g,h)}\nonumber\\
\epsilon(h,g)&=&\epsilon(g^{-1},ghg^{-1})
\frac{\alpha([g,h],h)}{\alpha([g,h],hgh^{-1})}
\end{eqnarray}
This yields for {commuting elements}:
\begin{eqnarray}
\label{eps}
\epsilon(g,e)=\epsilon(g,g)=1 && \epsilon(g,h) =\epsilon(h^{-1},g)=\epsilon(h,g)^{-1}\nonumber\\
\epsilon(g_1g_2,h)= \epsilon(g_1,h)\epsilon(g_2,h) &&
\epsilon(h,g_1g_2) = \epsilon(h,g_1) \epsilon(h,g_2)
\end{eqnarray}
In the physics literature discrete torsion is sometimes defined to
be a function $\epsilon$ defined on commuting elements of $G$ taking
values in $U(1)$ and satisfying the equations (\ref{eps}).
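A standard example: for $G=\mathbb {Z}/2\mathbb{Z}\times \mathbb {Z}/2\mathbb{Z}$
the bimultiplicative cocycle $\alpha(g,h)=(-1)^{g_1h_2}$ for
$g=(g_1,g_2)$, $h=(h_1,h_2)$ represents the non--trivial class in
$H^2(G,{\mathbb C}^*)\cong\mathbb {Z}/2\mathbb{Z}$, and since $G$ is abelian
\begin{equation*}
\epsilon(g,h)=\frac{\alpha(g,h)}{\alpha(h,g)}=(-1)^{g_1h_2-h_1g_2},
\end{equation*}
which is readily checked to satisfy the equations (\ref{eps}).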
\subsubsection{Remark} It is a nice exercise to check that the
trace axiom also holds (see \cite{K1,K3}).
\subsubsection{Remark} The function $\epsilon$ can be interpreted as
a cocycle in $Z^1(G,k^*[G])$ where $k^*[G]$ are the elements of
$k[G]$ with invertible coefficients regarded as a $G$ module by
conjugation (cf. \cite{K1,K2}). This means in particular that on
{\em commuting elements} $\epsilon$ only depends on the class of the cocycle $\alpha$.
\subsection{The action of discrete torsion}
\begin{df}
Given a $G$--Frobenius algebra
$A$ and an element $\alpha \in Z^2(G,k^*)$, we define the
$\alpha$--twist of $A$ to be the $G$--Frobenius algebra
$A^{\alpha}:= A \hat\otimes k^{\alpha}[G]$.
\end{df}
\begin{prop}
\label{defprop}
Notice that as vector spaces
\begin{equation}
\label{alphaiso}
A^{\alpha}_{g}= A_g \otimes k \simeq A_g
\end{equation}
Using this identification the $G$--Frobenius structures given by
(\ref{alphaiso}) are
\begin{eqnarray}
\circ^{\alpha}|_{A^{\alpha}_{g}\otimes A^{\alpha}_{h}}= \alpha(g,h) \circ &&
\varphi^{\alpha}_g|_{A^{\alpha}_h}=\epsilon(g,h)\varphi_g\nonumber\\
\eta^{\alpha}|_{A^{\alpha}_g\otimes A^{\alpha}_{g^{-1}}}= \alpha(g,g^{-1})\eta&&
\chi^{\alpha}_g=\chi_g
\end{eqnarray}
\end{prop}
\subsubsection{Supergraded twisted group rings}
Fix $\alpha \in Z^2(G,k^*)$ and $\sigma \in \mathrm{Hom}(G,\mathbb {Z}/2\mathbb{Z})$;
then there is a twisted
super--version of the group ring where now the relations
read
\begin{equation}
\hat g \hat h = \alpha(g,h)\widehat {gh}
\end{equation}
and the twisted commutativity is
\begin{equation}
\hat g \hat h = (-1)^{\sigma(g)\sigma(h)}\varphi_{g}(\hat h) \hat g
\end{equation}
and thus
\begin{equation}
\varphi_{g}(\hat h)=
(-1)^{\sigma(g)\sigma(h)}\frac{\alpha(g,h)\alpha(gh,g^{-1})}{\alpha(g,g^{-1})} \widehat{ghg^{-1}} =:
\varphi_{g,h} \widehat{ghg^{-1}}
\end{equation}
so that
\begin{equation}
\epsilon(g,h) := \varphi_{g,h} = (-1)^{\sigma(g)\sigma(h)}\frac{\alpha(g,h)}{\alpha(ghg^{-1},g)}
\end{equation}
We would just like to remark that
axiom iv${}^{\sigma}$) of \S\ref{super} shows the difference between
super--twists and discrete torsion.
\begin{df}
We denote the $\alpha$-twisted group ring
with super--structure $\sigma$ by $k^{\alpha,\sigma}[G]$.
We still denote $k^{\alpha,0}[G]$ by $k^{\alpha}[G]$,
where $0$ is the zero map, and we denote $k^{0,\sigma}[G]$
just by $k^{\sigma}[G]$, where $0$ is the trivial cocycle in $Z^2(G,k^*)$.
\end{df}
A straightforward calculation shows
\begin{lm}
$k^{\alpha,\sigma}[G] = k^{\alpha}[G]\otimes k^{\sigma}[G]$.
\end{lm}
\begin{lm} Let $\langle G,A,\circ,1,\eta,\varphi,\chi\rangle$ be
a $G$--Frobenius algebra or more generally super Frobenius algebra with
super grading $\tilde{}\;\in {\rm Hom}(A,\mathbb {Z}/2\mathbb{Z})$;
then
$A\otimes k^{\sigma}[G]$ is isomorphic to the super $G$--Frobenius algebra
$\langle G,A,\circ^{\sigma},1,\eta^{\sigma},\varphi^{\sigma},\chi^{\sigma}\rangle$ with super grading
${}^{\sim \sigma}$, where
\begin{eqnarray*}
\circ^{\sigma}|_{A_g\otimes A_h}=(-1)^{\tilde g \sigma(h)}\circ
&\quad &
\varphi^{\sigma}_{g,h} = (-1)^{\sigma(g)\sigma(h)}\varphi_{g,h}\\
\eta_g^{\sigma}=(-1)^{\tilde g \sigma(g)}\eta_g
&\quad& \chi^{\sigma}_g = (-1)^{\sigma(g)}\chi_g\\
\tilde a_g^{\sigma}= \tilde a_g + \sigma(g)&&\\
\end{eqnarray*}
\end{lm}
\begin{df}
Given a $G$--Frobenius algebra
$A$ a twist for $A$ is a pair of functions
$(\lambda:G\times G \rightarrow k^*,\mu:G\times G \rightarrow k^*)$
such that $A$ together with the new $G$--action
$$\varphi^{\lambda}(g)(a) = \oplus_h \lambda(g,h) \varphi(g)(a_h)$$
and the new multiplication
$$
a_g \circ^{\mu} b_h = \mu(g,h) a_g \circ b_h
$$
is again a $G$--Frobenius algebra.
A twist is called universal if it is defined for all $G$--Frobenius algebras.
\end{df}
\subsubsection{Remark} We could have started from a pair of functions
$(\lambda:A\times A \rightarrow k^*,\mu:G\times A \rightarrow k^*)$ in order
to projectively change the multiplication and $G$ action, but it is
clear that the universal twists (i.e.\ defined for any $G$--Frobenius
algebra) can only take into account
the $G$ degree of the elements.
\subsubsection{Remark}
These twists arise from a projectivization of the $G$--structures
induced on a module over $A$ as for instance the associated
Ramond--space (cf.\ \cite{K1}). In physics terms this means that
each twisted sector will have a projective vacuum, so that fixing
their lifts in different ways induces the twist. Mathematically
this means that the $g$ twisted sector is considered to be a Verma
module over $A_g$ based on this vacuum.
\begin{thm}
Given a (super) $G$--Frobenius algebra $A$
the universal twists are in 1--1 correspondence with
elements $\alpha \in Z^2(G,k^*)$ and the isomorphism classes of universal
twists are given by $H^2(G,k^*)$. Furthermore
the universal super re--gradings are
in 1--1 correspondence with $\mathrm{Hom}(G,\mathbb {Z}/2\mathbb{Z})$ and these
structures can be realized by tensoring with $k^{\sigma}[G]$
for $\sigma \in \mathrm{Hom}(G,\mathbb {Z}/2\mathbb{Z})$.
\end{thm}
Here a super re--grading is a new super grading on $A$ with which
$A$ is a super $G$--Frobenius algebra and universal means that
the operation of re--grading is defined for all $G$--Frobenius algebras.
We call the operation of forming a tensor product with $k^{\alpha}[G]$,
$\alpha \in Z^2(G,k^*)$, a twist by discrete torsion. The term discrete
refers to the isomorphism classes of twisted $G$--Frobenius
algebras which correspond to classes in $H^2(G,k^*)$. Furthermore,
we call the operation of forming a tensor product with
$k^{\sigma}[G]$, $\sigma \in {\rm Hom}(G,\mathbb {Z}/2\mathbb{Z})$, a super--twist.
\subsection{Remark} If $k$ is algebraically closed, then in each class
of $H^2(G,k^*)$ there is a representative with $\alpha(g,g^{-1})=1$.
Using these representatives it is possible to twist a special
$G$--Frobenius algebra without changing its underlying special
reconstruction data.
\section{Symmetric group Frobenius algebra}
In this section, we will consider the structure of special
$G$--Frobenius algebras when the group is a symmetric group $\mathbb{S}_n$.
The symmetric groups have two characteristic properties which we will use.
First, they are generated by self--inverse transpositions, and second,
there is a natural grading on the elements given by the minimal
number of transpositions needed to present an element.
Using the latter condition in order to fix a grading compatibility for
special $\mathbb{S}_n$ Frobenius algebras, we showed \cite{K2} that the structure
of a special $\mathbb{S}_n$--Frobenius algebra is essentially unique if it exists and
can be expressed solely in terms of the dual of the restriction maps
$\check r_{\tau}$ for $\tau$ a transposition. It thus depends only
on the maps $r_{\tau}$ and the metrics $\eta_{\tau}$. Also notice that
the $\mathbb{S}_n$ action acts transitively on all $A_{\tau}$ since any two
transpositions are conjugate.
If we specify the reconstruction data of the special $\mathbb{S}_n$ Frobenius algebra
to be that stemming from a symmetric product, it can furthermore
be shown that the unique structure does indeed exist \cite{K2}.
\subsection{Notation}
\label{sym}
We define the degree of $\sigma \in \mathbb{S}_n$ to be
$|\sigma| :=$ the minimal length of $\sigma$ as a word in transpositions.
We define the length of $\sigma$ as $l(\sigma):=$ the number of cycles
in a cycle decomposition. Notice $|\sigma|= n-l(\sigma)$.
We also set $l(\sigma_1, \dots,\sigma_n):= |\langle
\sigma_1,\dots,\sigma_n\rangle\backslash\{1,\dots,n\}|$ where $\langle
\sigma_1,\dots,\sigma_n\rangle$ is the group generated by the $\sigma_i$ and
the quotient is by the natural permutation action.
\begin{df} We call two elements
$\sigma,\sigma' \in \mathbb{S}_n$ {transversal}, if
$|\sigma\sigma'|=|\sigma|+|\sigma'|$.
\end{df}
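For example, in $\mathbb{S}_4$ the $3$--cycle $\sigma=(123)$ has $l(\sigma)=2$
(the cycles being $(123)$ and the fixed point $4$), hence
$|\sigma|=4-2=2$, in accordance with $\sigma$ being a product of two
transpositions. The transpositions $(12)$ and $(34)$ are transversal,
since $|(12)(34)|=2=1+1$, whereas no transposition is transversal to
itself, as $|\tau^2|=|e|=0\neq 2$.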
\subsection{Normalizability}
\label{norm}
\begin{df}
We call a non--abelian cocycle $\varphi$
normalized if
$\forall \tau,\sigma\in \mathbb{S}_n, |\tau|=1: \varphi_{\sigma,\tau}= 1$.
We call a cocycle $\gamma:\mathbb{S}_n \times \mathbb{S}_n \rightarrow A$
{normalizable}
if for all {transversal} pairs $\tau, \sigma \in \mathbb{S}_n , |\tau|=1:
\gamma_{\sigma,\tau}\in A^*_e$, $A^*_e$ being the invertible elements of $A_e$, and
{normalized} if it is normalizable and
for all {transversal } $\tau, \sigma \in \mathbb{S}_n , |\tau|=1:
\gamma_{\sigma,\tau}=1$.
\end{df}
In the example of symmetric products of an irreducible Frobenius
algebra the invertibles are precisely $k^*$.
\subsection{Discrete Torsion for the symmetric group}
As is well known (see e.g. \cite{Kar}) $H^2(\mathbb{S}_n,k^*)=\mathbb {Z}/2\mathbb{Z}$ and
$\mathrm{Hom}(\mathbb{S}_n,k^*)=\mathbb {Z}/2\mathbb{Z}$. We denote the non--trivial element of
$H^2(\mathbb{S}_n,k^*)$ by $[\Phi]$. There is a representative $\Phi$ of
this class which actually satisfies $\Phi(\sigma,\sigma')=\pm 1$ for
transversal $\sigma,\sigma'$. We denote the generator for the super--twist
by $\Sigma$.
\begin{thm}\cite{K2}
\label{normalize}
Any non--abelian cocycle $\varphi$ after possibly twisting
by the discrete-torsion $\Phi$ and super-twist $\Sigma$ can be normalized.
Any compatible pair of a normalizable $\mathbb{S}_n$ cocycle $\gamma$ and a
normalized non--abelian cocycle $\varphi$ can be normalized by a
rescaling $1_{\sigma} \mapsto \lambda_{\sigma}1_{\sigma}$.
Vice versa, given
any normalized $\mathbb{S}_n$ cocycle $\gamma$ there are only
two compatible non--abelian cocycles $\varphi$,
differing by the super--twist $\Sigma$, namely:
\begin{equation}
\varphi_{\sigma,\sigma'}= (-1)^{p|\sigma||\sigma'|}
\end{equation}
where $p$ is either $0$ or $1$.
\end{thm}
\subsection{Uniqueness}
\label{unique}
\begin{thm} \cite{K2}
Given a special $\mathbb{S}_n$ algebra datum,
a choice of normalized cocycle $\gamma:\mathbb{S}_n \times \mathbb{S}_n \rightarrow A$
is unique up to a super--twist by $\Sigma$.
It is determined by $\gamma_{\tau,\tau}=\check r_{\tau}(1_{\tau})$ and without
the twist it is given by equation (\ref{expform}).
\end{thm}
\subsubsection{Explicit form of the cocycles}
\begin{equation}
\label{expform}
\gamma_{\sigma,\sigma'}=\pi_{{\sigma\s'}}(\gamma_{\sigma,\sigma'}\prod_{i=1}^{|\sigma'|}
\gamma_{\tau'_{i+1},\prod_{j=1}^{i}\tau'_{j}})
=\pi_{\sigma\s'}(\prod _{i=1}^{|\sigma'|}
\gamma_{\sigma\prod_{j=1}^{i}\tau'_{i-1},\tau'_{i}})
=\pi_{\sigma\s'}(\prod_{i \in I}\gamma_{\tau'_{i},\tau'_{i}})
\end{equation}
where
$I:=\{i: |\sigma(\prod_{j=1}^{i-1}\tau'_{j})\tau'_{i}|
=|\sigma\prod_{j=1}^{i-1}\tau'_{j}|-2\}$.
\subsection{Existence}
We would also like to recall the following existence theorem:
\begin{thm} \cite{K2}
\label{existence}
The equations
\begin{eqnarray}
\label{nottrans}
r_{\sigma,\sigma'}(\gamma_{\sigma,\sigma'})&=&r_{\sigma\s'}(\prod_{i\in I}\gamma_{\tau_{i},\tau_{i}})=
\prod_{i\in I'}\pi_{\sigma\s'}(\gamma_{\tau_{i},\tau_{i}})\prod_{j\in I''}
r_{\sigma\s'}(\gamma_{\tau_{j},\tau_{j}})\nonumber\\
&=&\bar \gamma_{\sigma,\sigma'}\gamma_{\sigma,\sigma'}^{\perp}
\end{eqnarray}
where
\begin{eqnarray}
I'=\{ i \in I:
\pi_{\sigma,\sigma'}(\gamma_{\tau_{i},\tau_{i}})=\pi_{\sigma\s'}(\gamma_{\tau_{i},\tau_{i}})\}\nonumber\\
I''=\{ i \in I: \pi_{\sigma,\sigma'}(\gamma_{\tau_{i},\tau_{i}})
\neq\pi_{\sigma\s'}(\gamma_{\tau_{i},\tau_{i}})\}
\end{eqnarray}
and $\gamma^{\perp}_{\sigma,\sigma'}=\check r^{\sigma\s'}_{\sigma,\sigma'}(1_{\sigma,\sigma'})$ are well
defined and yield a group cocycle compatible with the special
$\mathbb{S}_n$ intersection data
$$A_{\sigma_1,\dots,\sigma_k}= (A^{\otimes l(\sigma_1,\dots,\sigma_k)},
\eta^{\otimes l(\sigma_1,\dots,\sigma_k)},
1^{\otimes l(\sigma_1,\dots,\sigma_k)})
$$
derived from any Frobenius algebra $(A,\eta,1)$. The restriction maps
are contractions via multiplication.
\end{thm}
For details on reconstruction data we refer to \cite{K1,K2}.
\subsubsection{Remark} In order to understand the form of the
cocycle above let us consider the case of the second symmetric
product. Thus we need to consider the $\mathbb{S}_2$--Frobenius
algebra:
$$A= A_e\oplus A_{\tau}= A\otimes A \oplus A$$
with the metric $\eta \otimes \eta \oplus \eta$. The restriction
map $r_{\tau}$ is just the multiplication $\mu: A \otimes A
\rightarrow A$. Now $\check r_{\tau}(1_{\tau})= \sum a_i \otimes b_i$ and
$r_{\tau} (\check r_{\tau}(1_{\tau}))=\sum a_ib_i=e$ where $e$ is the Euler
class.
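For instance, taking for $A$ the Frobenius algebra $k[x]/(x^2)$ with
$\eta(1,x)=1$ and $\eta(1,1)=\eta(x,x)=0$, i.e.\ the cohomology of
$\mathbb{P}^1$, one computes
\begin{equation*}
\check r_{\tau}(1_{\tau})= 1\otimes x + x\otimes 1, \qquad
r_{\tau}(\check r_{\tau}(1_{\tau}))= 2x,
\end{equation*}
and $2x$ is indeed the Euler class of $\mathbb{P}^1$.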
In general, if $\tau = (ij)$ then $r_{(ij)}$ contracts the $i$--th
and $j$--th component. Then $\gamma_{\tau,\tau}=\check r_{\tau}(1_{\tau})$ is a sum
over elements which differ from one only in the $i$--th and $j$--th
factor. Considering a product of $\gamma_{\tau,\tau}$'s and restricting it
to $A_{\sigma\s'}$ amounts to performing several contractions using
the multiplication. For each individual $\gamma_{\tau,\tau}$ there are
only two choices: those which get contracted and yield Euler
classes -- this is the set $I''$-- and those that do not get
contracted -- this is the set $I'$.
\subsubsection{Remark} For the existence proof in \cite{K2} we used
the theory of intersection Frobenius algebras as can be seen from
the decomposition of the cocycles. This also makes it easier
to compare with the results of \cite{LS}.
\subsubsection{Remarks}
\begin{itemize}
\item[1)] In the case that $A$ is the trivial one dimensional
Frobenius algebra this structure coincides with the group ring $k[\mathbb{S}_n]$.
\item[2)]
Applying our result to the situation where $A$ is the Frobenius
algebra associated to a variety or a compact space we recover
the results of \cite{FG} and taking invariants those of \cite{U}.
\end{itemize}
\begin{df} Given a Frobenius algebra $A$,
the series of $\mathbb{S}_n$--Frobenius algebras determined by Theorem
\ref{existence} is called the second quantization of $A$.
\end{df}
This terminology is based on \cite{DMVV}.
\section{The Twist for the Hilbert scheme}
If one changes the sign in the metric $\eta_{\tau}$,
this also changes the multiplication. These changes are such that
they are uniquely realized as a twist with a discrete torsion whose
cocycle $\alpha$ is determined by $\alpha(\tau,\tau)=-1$ and normalization.
This cocycle is actually trivial in cohomology with
coefficients in ${\bf C}^*$, but nevertheless changes the
multiplication and metric as desired.
\subsection{The twisted group ring $k^{\alpha}[\mathbb{S}_n]$}
Recall that $H^2(\mathbb{S}_n, {\mathbb C}^*)= \mathbb {Z}/2\mathbb{Z}$, but the twists are actually
given by $\alpha \in Z^2(\mathbb{S}_n,k^*)$.
Any normalized cocycle is fixed by $\alpha(\tau,\tau)$, $\tau$ any transposition.
Thus we may consider the
$\alpha \in Z^2(\mathbb{S}_n,k^*)$ which is fixed by $\alpha(\tau,\tau)=-1$ for any
transposition $\tau$.
Notice that $[\alpha]=0 \in H^2(\mathbb{S}_n, {\mathbb C}^*)$ as is easily seen from
the existence and uniqueness together with Remark \ref{conorm}.
It is not trivial however in $H^2(\mathbb{S}_n, {\mathbb Q}^*)$.
In any case, we can consider the twist by this class.
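Indeed, over ${\mathbb C}$ this cocycle is an explicit coboundary:
since $|\sigma|+|\sigma'|-|\sigma\sigma'|$ is always even (the sign
character gives $|\sigma\sigma'|\equiv|\sigma|+|\sigma'| \bmod 2$),
setting $\beta(\sigma):=i^{|\sigma|}$ yields
\begin{equation*}
(d\beta)(\sigma,\sigma')=\frac{\beta(\sigma)\beta(\sigma')}{\beta(\sigma\sigma')}
= i^{\,|\sigma|+|\sigma'|-|\sigma\sigma'|}
=(-1)^{\frac{1}{2}(|\sigma|+|\sigma'|-|\sigma\sigma'|)},
\end{equation*}
so $d\beta$ is normalized with $(d\beta)(\tau,\tau)=-1$ and hence
equals $\alpha$ by uniqueness. This choice of $\beta$ requires a square
root of $-1$ and is therefore unavailable over ${\mathbb Q}^*$.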
\begin{prop}
\label{twist}
Given any $\mathbb{S}_n$--Frobenius algebra $A$, twisting it by the normalized
discrete torsion $\alpha \in Z^2(G,k^*)$ defined by $\alpha(\tau,\tau)=-1$
changes the structures via:
\begin{eqnarray}
a_{\sigma} \circ^{\alpha} b_{\sigma'} &=& (-1)^{\frac{1}{2}(|\sigma|+|\sigma'|-|\sigma\s'|)}
a_{\sigma} \circ b_{\sigma'}\nonumber\\
\varphi^{\alpha}_{\sigma}(a_{\sigma'}) &=& \varphi_{\sigma}(a_{\sigma'})\nonumber\\
\chi_{\sigma}^{\alpha}&=&\chi_{\sigma}\nonumber\\
\eta^{\alpha}_{\sigma}&=&(-1)^{|\sigma|}\eta_{\sigma}
\label{metric}
\end{eqnarray}
\end{prop}
This is how the multiplication and metric get to be changed via a sign while
the $\mathbb{S}_n$ action remains unchanged.
\begin{proof} Due to \ref{defprop} it suffices to show that this holds for the
twisted group ring i.e.\ the following equations hold.
\begin{eqnarray}
\label{gamma}
\alpha(\sigma,\sigma') &=& (-1)^{\frac{1}{2}(|\sigma|+|\sigma'|-|\sigma\s'|)}\\
\label{phi}
\epsilon(\sigma,\sigma') &=& 1
\end{eqnarray}
The equation (\ref{gamma}) follows
from $\alpha(\tau,\tau)=-1$ and the general structure of $\alpha(\sigma,\sigma')$ of \ref{unique}
in particular from the equation (\ref{expform})
by noticing that
$$
|I|=\Big|\Big\{i: |\sigma(\prod_{j=1}^{i-1}\tau'_{j})\tau'_{i}|
=|\sigma\prod_{j=1}^{i-1}\tau'_{j}|-2\Big\}\Big| =
\frac{1}{2}(|\sigma|+|\sigma'|-|\sigma\s'|)
$$
The correction of $\chi$ is always trivial and the one for $\varphi$
is given by $\epsilon$ which satisfies
$$
\epsilon(\sigma,\sigma') = \frac{\alpha(\sigma,\sigma')}{\alpha(\sigma\s'\sigma^{-1},\sigma)}
= 1
$$
since
$$
|\sigma\s'\sigma^{-1}|=|\sigma'| \text { and so also } |\sigma'\sigma|= |\sigma\s'|
$$
Finally for the last equation of (\ref{metric}) we read off
$$\alpha(\sigma,\sigma^{-1})=
(-1)^{\frac{1}{2}(|\sigma|+|\sigma^{-1}|- |\sigma\s^{-1}|)}= (-1)^{|\sigma|}$$
\end{proof}
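As a consistency check of these equations, note that for
$\sigma=\sigma'=\tau$ a transposition
\begin{equation*}
\alpha(\tau,\tau)=(-1)^{\frac{1}{2}(1+1-0)}=-1,
\end{equation*}
while for any transversal pair $|\sigma\sigma'|=|\sigma|+|\sigma'|$ gives
$\alpha(\sigma,\sigma')=1$, so the cocycle is normalized in the sense of
\S\ref{norm}; finally $|\sigma|+|\sigma^{-1}|-|\sigma\sigma^{-1}|=2|\sigma|$
reproduces the sign $(-1)^{|\sigma|}$ in the metric.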
\subsection{A family of multiplications}
The only freedom of choice for a normalized cocycle is $\gamma_{\tau,\tau}$
which is determined uniquely from the metrics $\eta_e$ and $\eta_{\tau}$.
Given a fixed metric $\eta_{\tau}$ it is possible to change it by
homothety to $\eta^{\lambda}_{\tau}= \lambda \eta_{\tau}$ while keeping
the cocycles normalized, using discrete torsion. This is achieved
by twisting with the normalized discrete torsion cocycle
determined by $\alpha(\tau,\tau)=\lambda$.
Vice versa, the only twists with discrete torsion cocycles that
keep the cocycle $\gamma$ normalized are fixed by their value
$\alpha(\tau,\tau)=\lambda$. The effect on the metric $\eta_{\tau}$
is a scaling by $\lambda$.
Using the same arguments as in \ref{twist} we obtain:
\begin{prop}
Let $\alpha\in Z^2(\mathbb{S}_n,k^*)$ be the normalized cocycle determined by
$\alpha(\tau,\tau)=\lambda$ and let $A$ be the $\mathbb{S}_n$--Frobenius algebra
associated to the symmetric product. Then the $\mathbb{S}_n$ Frobenius
algebra $A^{\alpha}$ is the twisted algebra found in \cite{QW}.
\end{prop}
This Proposition explains the existence and the special role of these
deformed multiplications. In the complex case, the existence
of this family also shows
the triviality of the cocycles $[\alpha]\in H^2(\mathbb{S}_n,{\bf C}^*)$.
\subsection{Remark} Notice that the discrete torsion $\epsilon$ is
trivial.
To obtain a non--trivial $\epsilon$ one could super--twist
\cite{K1,K2}, but this would also change the permutation action by
tensoring on the determinant representation, which is not intended
for the current application as we wish to keep symmetric invariants,
not anti--symmetric ones. Another way would be
to super--twist and to twist with a non--trivial cohomology class,
which would restore the
action to a symmetric one.
These results can be compared with \cite{D1,D2,DMVV,K2}.
\subsection{Application to the Hilbert scheme}
Comparing our analysis with that of \cite{LS} finally yields:
\begin{thm}{The cohomology of $Hilb^{[n]}$, the Hilbert scheme
of $n$ points of a K3 surface, is the $\mathbb{S}_n$ invariant part of
the unique $\mathbb{S}_n$--Frobenius algebra
associated to the symmetric product of the cohomology of the surface,
twisted by the specific discrete torsion given above. In
other words, it is the $\mathbb{S}_n$--Frobenius algebra structure for the extended
global orbifold cohomology twisted by the specific discrete torsion, which is fixed
by the map of \cite{LS}.
In general, the sequence of spaces $Hilb^{[n]}$
gives rise to the twisted second quantization of the underlying surface
on the cohomological (motivic) level.}
\end{thm}
\subsubsection{Remark} The necessity of this type of twist and the particular
choice are fixed by the following two observations.
First, the chosen isomorphism of \cite{LS} involves
the intersection form of the resolution of the double points,
which is negative definite. The intersection form of the natural
symmetric group Frobenius algebra
does not reflect this property.
Second, the way that discrete torsion changes
the metric fixes the cocycle as a normalized cocycle,
by regarding the metric of $A_{\tau}$ for $\tau$ a transposition.
This geometrically corresponds to a simple blow-up
along the diagonal.
hep-th/0610248
\section{Introduction}
\label{IntroSection}
Maximally supersymmetric ${\cal N}=4$ Yang-Mills theory (MSYM) has
attracted a great deal of theoretical interest over the years. It is
widely believed that the 't~Hooft (planar) limit of MSYM, in which the
number of colors $N_c$ is taken to infinity, is dual at strong coupling
($\lambda \equiv g^2 N_c \to \infty$) to weakly-coupled
gravity in five-dimensional anti-de Sitter space~\cite{Maldacena}.
The duality implies that the full quantum anomalous dimensions of
various series of gauge-invariant composite operators are equal to
energies of different gravity modes or configurations of strings in
anti-de Sitter
space~\cite{BPS,OtherAnomalousDim,BMN,Nontrivial,DhokerTasi}.
Heuristically, the Maldacena duality conjecture hints that even
quantities unprotected by supersymmetry should have perturbative
series that can be resummed in closed form. The strong-coupling limits
of these resummed expressions, possibly supplemented by
non-perturbative contributions, should match results for the
appropriate observables in weakly-coupled supergravity or string theory.
This intuition does appear to apply to the higher-loop on-shell
scattering amplitudes of color-non-singlet gluons, even though the
Maldacena conjecture does not directly address on-shell amplitudes of
massless colored quanta. There is now significant evidence of a very simple
structure in the planar limit. In particular, the planar
contributions to the two-loop and three-loop four-gluon amplitudes
have been shown to obey iterative relations~\cite{Iterate2,Iterate3}:
The dimensionally regularized amplitudes ($d=4-2\epsilon$) can be expressed
in terms of lower-loop amplitudes, along with a set of three constants
for each order in the loop expansion.
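For the four-point case, the two-loop relation of ref.~\cite{Iterate2}
can be written schematically (in terms of $M_4^{(L)}$, the $L$-loop
amplitude divided by the tree amplitude) as
\begin{equation*}
M_4^{(2)}(\epsilon)={\textstyle\frac{1}{2}}\left[M_4^{(1)}(\epsilon)\right]^2
+f^{(2)}(\epsilon)\,M_4^{(1)}(2\epsilon)+C^{(2)}+{\cal O}(\epsilon),
\end{equation*}
with $f^{(2)}(\epsilon)=-(\zeta_2+\zeta_3\epsilon+\zeta_4\epsilon^2)$
and $C^{(2)}=-\frac{1}{2}\zeta_2^2$.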
An analogous relation is conjectured to hold for generic
maximally-helicity-violating (MHV) $n$-gluon scattering amplitudes, to
all loop orders~\cite{Iterate2,Iterate3}. The MHV amplitudes are
those for which two of the gluons have negative helicity and the
remaining $(n-2)$ have positive helicity, or the parity conjugate
case. Indirect evidence for the extension to the $n$-point MHV case
was provided first by studying the consistency of collinear limits at
two loops~\cite{Iterate2}. Later, the iteration relation was
demonstrated to hold directly for the two-loop five-gluon amplitude,
for the ``even'' terms in the amplitude~\cite{CSV06} and soon thereafter
for the ``odd'' terms as well, {\it i.e.} for the complete
amplitude~\cite{BCKRS}. (Odd and even refer to the behavior of the
ratio of terms in the loop amplitude to the tree amplitude, under
parity.) After infrared divergences have been subtracted from the
proposed all-loop iterative relation, the resulting finite remainder
is neatly proportional to an exponential of the product of the
one-loop finite remainder with the so-called cusp anomalous dimension.
Presumably the weak--strong duality between anti-de Sitter space
and conformal field theory (AdS/CFT) plays a role in this simplicity.
The form of the proposed iterative structure of multi-loop planar
MSYM\ is based on the understanding of how soft and collinear infrared
singularities factorize and exponentiate in gauge
theory~\cite{Akhoury,Sudakov,Sen83,SoftGluonSummation,MagneaSterman,%
IROneLoop,CataniIR,TYS}. For the pole terms in the amplitudes, such
behavior is universal, and holds in any massless gauge theory. What
is remarkable in planar MSYM\ is that the finite terms in the MHV
scattering amplitudes can be organized into the same exponentiated
form as the divergent terms. In a non-supersymmetric theory, QCD,
exponentiation of finite terms has also been observed in the context
of threshold resummation of the Drell-Yan cross
section~\cite{EynckLaenenMagnea}. In the case of four-gluon
amplitudes in MSYM, the behavior is valid for arbitrary values of the
scattering angle (ratio of $t/s$). For amplitudes with more than
four gluons, there are many kinematical variables,
and so the constraints imposed by the iterative structure are
even stronger. It is clearly of interest to test whether
this structure persists beyond three loops. In this paper we shall
provide an integral representation for the planar four-loop four-gluon
amplitude. Our result will enable such a test to be performed at
four loops, once the relevant integrals have been evaluated to
sufficiently high order in $\epsilon$.
Another remarkable property of planar MSYM\ is the integrability of the
dilatation operator, interpreted as a Hamiltonian, for many sectors of
the theory. Integrable structures were identified in
anomalous dimension matrices in QCD a while ago~\cite{QCDIntegrable}.
In planar MSYM, Minahan and Zarembo~\cite{MinahanZarembo} mapped the
one-loop dilatation operator for non-derivative single-trace operators
to an integrable spin-chain Hamiltonian, and used a Bethe ansatz to
compute the anomalous dimensions. Such integrable structures have
since been extended to higher perturbative orders for various sectors
of the theory~\cite{MoreIntegrable,BeisertDispersion,Integrable}.
They are also known to be present at strong coupling, from the
form of the classical sigma model on target space
AdS${}_5\times S^5$~\cite{BPR}. (Berkovits has given a formal
argument that the integrability extends to the quantum level
on the world sheet~\cite{Berkovits}.)

The iterative structure of MSYM\ amplitudes may somehow be related
to integrability. If an infinite number of conserved charges
are present, the form of the quantum corrections should be severely
constrained, as it would be by the proposed iterative
structure~\cite{Iterate2,Iterate3}. An iterative structure may also
arise in correlation functions of gauge-invariant composite operators
in planar MSYM~\cite{Schubert}; but its precise structure, if it
exists in this context, has not yet been clarified.

Integrability is a powerful computational tool.
Integrability, or in some cases the assumption of integrability,
has been employed to compute a variety of one-loop and
multi-loop anomalous dimensions in planar MSYM\
from Bethe ans\"atze~\cite{MinahanZarembo,MoreIntegrable,BeisertDispersion}
and from the Baxter equation~\cite{BGK06,Belitsky}.
One of the most interesting developments along these lines has been the
all-orders proposal of Eden and Staudacher (ES), based on an
asymptotic all-loop Bethe ansatz~\cite{AsymptoticBA}, for the
large-spin limit of the anomalous dimensions of leading-twist
operators in MSYM~\cite{EdenStaudacher}. This quantity,
$\gamma_K(\alpha_s)$, is also referred to as the cusp (or sometimes,
soft) anomalous dimension associated with a Wilson line.
Equivalently, it represents the leading large-$x$ behavior~\cite{KM}
of the DGLAP kernel $P_{ii}(x)$ for parton evolution, $i\to i$,
\begin{equation}
P_{ii}(x) \to { \gamma_K(\alpha_s) \over 2 \, (1-x)_+ }
+ B(\alpha_s) \, \delta(1-x) + \ldots,
\qquad {\rm as}\ \ x\to 1.
\label{largexP}
\end{equation}
Taking Mellin moments, $\gamma(j) \equiv - \int_0^1 dx \, x^{j-1} P(x)$,
we see that
the cusp anomalous dimension gives the dominant behavior of the
leading-twist anomalous dimensions as the spin $j\to\infty$,
\begin{equation}
\gamma(j) = {1\over2} \gamma_K(\alpha_s)\ \ln(j) + {\cal O}(j^0)\,.
\label{gammajgammaK}
\end{equation}
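
The logarithm in \eqn{gammajgammaK} can be traced to the Mellin moment of
the plus distribution in \eqn{largexP}:
$\int_0^1 dx\,(x^{j-1}-1)/(1-x) = -H_{j-1}$, and the harmonic number
$H_{j-1}$ grows like $\ln j$. A purely illustrative numerical sketch
(in Python; not part of the calculation proper):

```python
import math

# H_n = sum_{k=1}^n 1/k; the Mellin moment of the plus distribution,
#   \int_0^1 dx (x^{j-1} - 1)/(1 - x),  equals  -H_{j-1}
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

gamma_E = 0.5772156649015329  # Euler's constant

# H_{j-1} - ln(j) -> gamma_E, so the moment grows like ln(j),
# reproducing the (1/2) gamma_K ln(j) behavior of the anomalous dimension
diffs = [harmonic(j - 1) - math.log(j) for j in (10, 1000, 100000)]
assert abs(diffs[-1] - gamma_E) < 1e-4
```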
From the asymptotic all-loop Bethe ansatz, ES derived an integral
equation for a fluctuation density $\hat\sigma$, from which the
cusp anomalous dimension can be determined. The integral equation
is straightforward to solve perturbatively in $\alpha_s$.
In terms of the expansion parameter
\begin{equation}
\hat{a} \equiv { g^2 N_c\over 8 \pi^2 } = { N_c \alpha_s \over 2 \pi} \,,
\label{ahdef}
\end{equation}
the ES prediction for (one quarter of) the cusp anomalous dimension
is~\cite{EdenStaudacher}
\begin{eqnarray}
{\gamma_K(\hat{a})\over4}
&\equiv& f_0(\hat{a})
\label{f0def}\\
&=& \hat{a} - \zeta_2 \, \hat{a}^2
+ \Bigl( \zeta_2^2 + 3 \, \zeta_4 \Bigr) \hat{a}^3
\nonumber\\
&&\hskip0.0cm\null
- \Bigl( \zeta_2^3 + 6 \, \zeta_2 \, \zeta_4 - \zeta_3^2
+ {25\over2} \, \zeta_6 \Bigr) \hat{a}^4
\nonumber\\
&&\hskip0.0cm\null
+ \Bigl( \zeta_2^4 + 9 \, \zeta_2^2 \, \zeta_4
- 2 \, \zeta_2 \, \zeta_3^2
+ 25 \, \zeta_2 \, \zeta_6 - 10 \, \zeta_3 \, \zeta_5
+ {39\over4} \, \zeta_4^2
+ {245\over4} \, \zeta_8 \Bigr) \hat{a}^5
\ +\ \cdots~~~~~~
\label{gammaKA}\\
&=& \hat{a} - {\pi^2\over6} \, \hat{a}^2
+ {11\over180} \pi^4 \, \hat{a}^3
- \Bigl( {73\over2520} \, \pi^6 - \zeta_3^2 \Bigr) \hat{a}^4
\nonumber\\
&&\hskip0.0cm\null
+ \Bigl( {887\over56700} \, \pi^8
- {\pi^2\over3} \, \zeta_3^2
-10 \, \zeta_3 \, \zeta_5 \Bigr) \hat{a}^5
+\ \cdots \,.
\label{gammaKB}
\end{eqnarray}
Here $f_0(\hat{a})$ can be identified with the first of a series of
three constants (per loop order) appearing in the iterative relation
for the four-gluon amplitude~\cite{Iterate2,Iterate3}. The first
three terms of \eqn{gammaKB} agree with previously-known
results~\cite{Makeenko,KLV,KLOV,Iterate3,MOS}, so the new predictions
begin with the $\hat{a}^4$ term. The conjecture~(\ref{gammaKA}) has also been
obtained recently by Belitsky, using a proposed
generalization of the Baxter equation to all loop orders~\cite{Belitsky}.
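
The two forms~(\ref{gammaKA}) and (\ref{gammaKB}) can be checked against
each other numerically, using $\zeta_2=\pi^2/6$, $\zeta_4=\pi^4/90$,
$\zeta_6=\pi^6/945$ and $\zeta_8=\pi^8/9450$. A purely illustrative
Python sketch (the truncated-series $\zeta$ evaluator is ours):

```python
import math

# zeta(n) from a truncated series with an Euler-Maclaurin tail correction
def zeta(n, N=4000):
    s = sum(k ** (-n) for k in range(1, N))
    return s + N ** (1 - n) / (n - 1) + 0.5 * N ** (-n)

z = {n: zeta(n) for n in range(2, 9)}
pi = math.pi

# coefficients of a^2 ... a^5 in the zeta-polynomial form, eq. (gammaKA)
form_A = [
    -z[2],
    z[2] ** 2 + 3 * z[4],
    -(z[2] ** 3 + 6 * z[2] * z[4] - z[3] ** 2 + 25 / 2 * z[6]),
    (z[2] ** 4 + 9 * z[2] ** 2 * z[4] - 2 * z[2] * z[3] ** 2
     + 25 * z[2] * z[6] - 10 * z[3] * z[5]
     + 39 / 4 * z[4] ** 2 + 245 / 4 * z[8]),
]
# the same coefficients in the pi-power form, eq. (gammaKB)
form_B = [
    -pi ** 2 / 6,
    11 / 180 * pi ** 4,
    -(73 / 2520 * pi ** 6 - z[3] ** 2),
    887 / 56700 * pi ** 8 - pi ** 2 / 3 * z[3] ** 2 - 10 * z[3] * z[5],
]
for cA, cB in zip(form_A, form_B):
    assert abs(cA - cB) < 1e-9
```

The $\zeta_3$ and $\zeta_5$ terms cancel between the two forms identically;
only the even-$\zeta$ combinations require the $\pi$-power identities.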

In QCD the three-loop cusp (soft) anomalous dimension has been
computed by Moch, Vermaseren and Vogt as part of the impressive
computation of the full leading-twist anomalous dimensions~\cite{MVV}
needed for next-to-next-to-leading order evolution of parton
distribution functions. (Terms proportional to the number of quark
flavors were obtained first in ref.~\cite{SoftNf}.)
Kotikov, Lipatov, Onishchenko and Velizhanin (KLOV)~\cite{KLOV}
made the inspired observation, based on evidence at two loops~\cite{KL02},
that the MSYM\ anomalous dimensions
may be obtained simply from the ``leading-transcendentality''
contributions in QCD.
The cusp anomalous dimensions are
polynomials in the Riemann $\zeta$ values, $\zeta_n \equiv \zeta(n)$,
or their multi-index generalizations, $\zeta_{n_1,n_2,\ldots}$.
(The latter cannot appear below five loops.)
The degree of transcendentality of $\zeta_n$ is just $n$,
and the transcendentality is additive for products of $\zeta$ values.
At $L$ loops, the leading-transcendentality contributions to the cusp
anomalous dimension have degree (or weight) equal to $2L-2$.
All MSYM\ leading-twist anomalous dimensions computed to date
have had uniform, leading transcendentality, whereas the corresponding
QCD results contain an array of terms of lower transcendentality,
all the way down to rational numbers.
The KLOV conversion principle applies to the leading-twist anomalous
dimensions for any spin $j$, with an appropriate definition of leading
transcendentality for the harmonic sums $S_{\vec{m}}(j)$ that appear
in the results. Using assumptions of integrability, Staudacher
confirmed the three-loop KLOV result through
$j=70$~\cite{Staudacher,StaudacherPrivate}, building on earlier work
of Beisert, Kristjansen and Staudacher~\cite{MoreIntegrable} at $j=4$.
Eden and Staudacher extended this analysis to the three-loop cusp
anomalous dimension (the $j\to\infty$ limit), in the course of
arriving at their all-orders proposal~(\ref{gammaKA}) based on
integrability~\cite{EdenStaudacher}. The three-loop cusp anomalous
dimension in MSYM\ was independently determined from the $1/\epsilon^2$ pole
in the three-loop four-gluon scattering amplitude~\cite{Iterate3},
providing a confirmation of the KLOV result in the limit $j\to\infty$.
An important feature of the ES proposal~(\ref{gammaKA}) is that it is
consistent with KLOV's observation that the MSYM\ anomalous dimensions
are homogeneous in the transcendentality, at least through three
loops.

The ES proposal emerges from mapping single-trace gauge-invariant
operators to spin chains. The dilatation operator, whose
eigenvalues are anomalous dimensions,
is mapped to the spin-chain Hamiltonian. The form of the
$S$ matrix for this spin chain is fixed, up to an overall
phase, called the {\it dressing\/} factor~\cite{AFS04},
by the superconformal [PSU$(2,2|4)$] symmetry of both
AdS${}_5\times S^5$ and ${\cal N}=4$ supersymmetric gauge theory.
At one and two loops, superconformal symmetry in combination with explicit
calculations fixes the dressing
factor to be 1. At higher loops, it is not so constrained.
A nontrivial dressing factor is required by the strong-coupling
behavior~\cite{AFS04,OtherDressing,HernandezLopez}, and there have been
several recent investigations of it using properties
such as worldsheet crossing symmetry~\cite{JanikCrossing,BHL}.
Yet the order in the weak-coupling expansion at which it becomes
nontrivial is still uncertain.
The presence of a dressing factor at three loops
would lead to a shift in the anomalous dimension at four loops.
For example, ES have proposed~\cite{EdenStaudacher} a modification
of the asymptotic Bethe ansatz~\cite{AsymptoticBA} at this order,
with coefficient $\beta$, which is consistent with the presently
known integrable structure. This modification alters the predicted
four-loop anomalous dimension in \eqn{gammaKB} to
\begin{equation}
- \Bigl( {73\over2520} \, \pi^6 - \zeta_3^2 + 2\beta \zeta_3 \Bigr) \hat{a}^4
\,.
\label{betashifteq}
\end{equation}
Thus a calculation of the four-loop cusp anomalous dimension has
the potential to probe a nontrivial dressing factor. In a
new preprint~\cite{BESNew}, Beisert, Eden and Staudacher
(BES) have shown how to extend the above dressing factor to all orders
in the coupling, as well as to ensure its consistency with other
constraints. This leads to the prediction that $\beta = \zeta_3$.

In this paper, we perform this calculation, in order to test whether the
ES all-orders proposal gives the correct result, or to reveal how it needs
to be modified if it does not. We do so by evaluating
the infrared poles of the planar four-loop four-gluon scattering amplitude
through $1/\epsilon^2$, the order at which the four-loop cusp anomalous
dimension appears. The form of the infrared singularities at all loop
orders is fully understood~\cite{MagneaSterman}, up to a set of constants
that must be computed explicitly. At $1/\epsilon^2$ this undetermined constant
is precisely the cusp anomalous dimension appearing in the ES formula.

First, though, we need a representation of the planar
four-loop four-gluon amplitude. Rather than construct this
representation from Feynman diagrams, we shall employ the unitarity
method~\cite{NeqFourOneLoop,Fusing,UnitarityMachinery,OneLoopReview,
TwoLoopSplitting}. This method was also used to construct the planar
two- and three-loop amplitudes~\cite{BRY,Iterate3}.
Because the unitarity method builds amplitudes at
any loop order from on-shell lower-loop amplitudes, structure
uncovered at the tree and one-loop levels can easily
be fed back into the construction of the higher-loop amplitudes.
We will find that the planar four-loop amplitude can be expressed
as a linear combination of just eight four-loop integrals.
A very interesting feature of the integrals appearing in the planar
four-point amplitudes through four loops is that they are all, in
a well-defined sense, conformally invariant. To analyze the conformal
properties of potential four-loop integrals, we make use of the recent
description of such integrals by Drummond, Henn, Sokatchev, and one
of the authors~\cite{DHSS}.
Once we know what four-loop four-point integrals enter into the amplitude,
we must compute them explicitly through ${\cal O}(\epsilon^{-2})$.
Here we make use of important recent advances in multi-loop
integration~\cite{SmirnovDoubleBox,%
LoopIntegrationAdvance,SmirnovTripleBox,Tausk,TwoloopOffandMassive,
Buch,AnastasiouDaleo,CzakonMB}.
In particular, we use the program {\tt MB}~\cite{CzakonMB} to
automatically extract poles in $\epsilon$ from the Mellin-Barnes
representation of loop integrals, and to integrate
the resulting contour integrals. We carry out the integrations
analytically for coefficients of the first five poles,
$1/\epsilon^8$ through $1/\epsilon^4$. These coefficients are expressed
in terms of well-studied functions, harmonic polylogarithms
(HPLs)~\cite{HPL,HPL2},
making it straightforward to confirm the expected infrared structure
for arbitrary values of the scattering angle.
For the coefficients of the $1/\epsilon^3$ and $1/\epsilon^2$ poles,
our analysis is numerical. We evaluate the amplitude
at four kinematic points,
$(s,t)=(-1,-1)$, $(-1,-2)$, $(-1,-3)$, and $(-1,-15)$.
Numerical evaluation suffices because the expected
behavior is completely specified at order $1/\epsilon^3$, and specified up
to one unknown, but predicted, constant, $f_0^{(4)}$, at order $1/\epsilon^2$.
We find consistent results from all four kinematic points.
We also need to evaluate the infrared singular terms to the same order
$1/\epsilon^2$. These may be expressed in terms of lower-loop amplitudes.
For this purpose, we must expand the one-, two-, and
three-loop amplitudes to ${\cal O}(\epsilon^4)$, ${\cal O}(\epsilon^2)$ and ${\cal O}(\epsilon^0)$,
respectively. Fortunately, these are precisely the orders required to
test the full iteration relation at three loops, so the needed
lower-loop expressions can all be found, in analytic form, in
ref.~\cite{Iterate3}. (These results are based partly on the earlier
evaluation of the double-box~\cite{SmirnovDoubleBox} and
triple-ladder~\cite{SmirnovTripleBox} integrals.)

We find that the ES conjecture is incorrect, although our numerical
results suggest that a simple modification of the four-loop ES
prediction may yield the correct answer. In particular, if we flip
the sign of the $\zeta_3^2$ term in their prediction, we obtain
a value within the error bars of our result.
If we choose to interpret the
modification as a dressing factor within the form taken
in ref.~\cite{EdenStaudacher}, it suggests taking
the value $\beta = \zeta_3$ for their parameter. This value
would violate the apparent ``uniform transcendentality'' observed to date
for quantities in the ${\cal N}=4$ theory generally,
for example in the anomalous dimension of the higher-twist operator
$\, {\rm Tr}(X^2 Z^3)+\cdots$~\cite{EdenStaudacher}.
The same value, $\beta = \zeta_3$,
was suggested independently by BES~\cite{BESNew}, based on properties
of the spin-chain model. This coincidence, modulo caveats we shall
discuss in \sect{LargeCouplingSection}, provides some support
for the violation of uniform transcendentality in quantities other
than the cusp anomalous dimension.
This possibility can be tested via other perturbative computations;
if uniform transcendentality is nonetheless maintained,
our result might instead imply that the form postulated
for the dressing factor in ES and
elsewhere~\cite{AFS04,OtherDressing,BeisertKlose,BeisertInvariant,%
BeisertDynamic,BeisertPhase,BHL} is not general enough.

We also use our four-loop results to investigate the strong-coupling
limit of the cusp anomalous dimension. In the AdS/CFT correspondence,
this quantity can be computed from the energy of a long, folded string,
spinning in AdS${}_5$~\cite{StrongCouplingLeadingGKP}.
The first two coefficients in the strong-coupling, large-$N_c$ limit
of the cusp anomalous dimension have been determined using a semi-classical
expansion based on this string
configuration~\cite{StrongCouplingLeadingGKP,Kruczenski,Makeenko,
StrongCouplingSubleading}.
We shall discuss how our four-loop result can be used to give a
remarkably accurate estimate for these coefficients.
We employ an approximate formula devised by
Kotikov, Lipatov and Velizhanin~\cite{KLV,KLOV}
to interpolate between the weak- and strong-coupling regimes.
Using our four-loop result as input to this formula, the
first two strong-coupling coefficients are estimated to within
2.6\% and 5\%, respectively, of the values computed from string theory.
This agreement provides direct evidence in support of the
AdS/CFT correspondence as well as a smooth transition between weak and
strong coupling.
An even better approximation for the cusp anomalous dimension can be found
by incorporating into the interpolating formula the string predictions for
the first two coefficients in the strong-coupling expansion. Based on our
success in accurately estimating the two leading strong-coupling
coefficients, we expect this improved approximation to be accurate to
within a few percent, for all values of the coupling. Curiously, our
approximate formula predicts that the third coefficient in the
strong-coupling expansion should be very small, and may even vanish.
The formula also predicts the numerical value of the five-loop
coefficient. This value turns out to be extremely close to a
simple modification of the five-loop ES prediction, flipping the
signs of the terms containing odd $\zeta$ values, as at four loops.
We have confirmed our analysis using Pad\'e approximants, which
also give insight into the complex analytic structure of $f_0(\hat{a})$.
Our representation of the planar four-loop four-gluon amplitude in
terms of eight four-loop integrals can be used for more than just the
extraction of the cusp anomalous dimension. As mentioned above,
it can also be used to check the proposed iterative structure at four loops.
In order to do so, one would need to evaluate the integrals all the way
through the finite terms, ${\cal O}(\epsilon^0)$, instead of just the level
carried out in this paper, ${\cal O}(\epsilon^{-2})$. One would also need
to evaluate all integrals appearing in the lower-loop amplitudes
to two orders higher in $\epsilon$.

This paper is organized as follows. In \sect{ReviewSection}, we review
the iterative structure of MSYM\ loop amplitudes, commenting in particular
on how the cusp anomalous dimension appears in the
infrared singular terms. In \sect{ConstructionSection}, we present the
construction of the four-loop amplitudes via the unitarity method and
also describe the conformal properties of the resulting integrals. In
\sect{FourLoopAmpSection}, we give analytical results for the
amplitudes through ${\cal O}(\epsilon^{-4})$ and numerical results through
${\cal O}(\epsilon^{-2})$, allowing us to extract a numerical value for the
four-loop cusp anomalous dimension. The four-loop anomalous dimension
is then used in \sect{LargeCouplingSection} to estimate the
coefficients that appear at strong coupling and also to estimate higher-loop
contributions to the cusp anomalous dimension. Our conclusions are
given in \sect{AnalysisSection}. Two appendices are included,
one presenting Mellin-Barnes representations for the integrals
appearing in the four-loop amplitude, and one reviewing properties
of harmonic polylogarithms.
\section{Iterative structure of MSYM\ loop amplitudes}
\label{ReviewSection}
In this paper we consider the planar contributions to
gluonic scattering in MSYM\ with gauge group $SU(N_c)$,
that is, the leading terms as $N_c \to \infty$.
We do not discuss subleading-color contributions; at present
they do not appear to have a simple iterative structure~\cite{Iterate2}.
The leading-color terms have the same color structure as the
corresponding tree amplitudes. The leading-$N_c$
contributions to the $L$-loop $SU(N_c)$ gauge-theory $n$-point
amplitudes may be written as,
\begin{eqnarray}
{\cal A}_n^{(L)} & = & g^{n-2}
\Biggl[ { 2 e^{- \epsilon \gamma} g^2 N_c \over (4\pi)^{2-\epsilon} } \Biggr]^{L}
\sum_{\sigma\in S_n/Z_n}
\, {\rm Tr}( T^{a_{\sigma(1)}}
\ldots T^{a_{\sigma(n)}} )
A_n^{(L)}(\sigma(1), \sigma(2), \ldots, \sigma(n))\,,
\label{LeadingColorDecomposition}
\end{eqnarray}
where $\gamma$ is Euler's constant, and
the sum runs over non-cyclic permutations of the external legs.
In this expression we have suppressed the (all-outgoing) momenta $k_i$
and helicities $\lambda_i$, leaving only the index $i$ as a label. This
decomposition holds for all particles in the gauge super-multiplet,
as they are all in the adjoint representation. The color-ordered
partial amplitudes $A_n$ are independent of the color factors,
and depend only on the kinematics. For MSYM, supersymmetric
Ward identities~\cite{SWI} imply
that the four-gluon helicity amplitudes
$({+}{+}{+}{+})$ and $({-}{+}{+}{+})$ (as well as their parity conjugates)
vanish identically. Furthermore, the nonvanishing four-point
(MHV) amplitudes are all related by simple overall factors.
Hence we do not need to specify the helicity configuration,
{\it i.e.} whether the color-ordered helicities are $({-}{-}{+}{+})$
or $({-}{+}{-}{+})$.

It is convenient to scale out a factor of the tree amplitude,
and work with the quantities $M_n^{(L)}$ defined by
\begin{equation}
M_n^{(L)}(\rho;\epsilon) = A_n^{(L)}(\rho)/A_n^{(0)}(\rho) \,.
\label{RescaledLoopAmplitude}
\end{equation}
Here $\rho$ indicates the dependence on the external momenta,
$\rho \equiv \{ s_{12}, s_{23}, \ldots \}$,
where $s_{i(i+1)} = (k_i+k_{i+1})^2$ are invariants built
from color-adjacent momenta.
The iteration relation proposed in ref.~\cite{Iterate3}
then takes the form,
\begin{equation}
{\cal M}_n(\rho) \equiv 1 + \sum_{L=1}^\infty a^L M_n^{(L)}(\rho;\epsilon)
= \exp\Biggl[\sum_{l=1}^\infty a^l
\Bigl(f^{(l)}(\epsilon) M_n^{(1)}(\rho;l \epsilon) + C^{(l)}
+ E_n^{(l)}(\rho;\epsilon) \Bigr) \Biggr] \,.
\label{ExponentialResum}
\end{equation}
In this expression, the factor,
\begin{equation}
a \equiv { N_c \alpha_s \over 2\pi } (4\pi e^{-\gamma})^\epsilon\,,
\label{alphaberdef}
\end{equation}
keeps track of the loop order of perturbation theory, and coincides
with the prefactor in brackets in \eqn{LeadingColorDecomposition}.
(It becomes equal to $\hat{a}$ in \eqn{ahdef} as $\epsilon \to 0$.)
The quantity $M_n^{(1)}(\rho;l\epsilon)$ is the one-loop amplitude,
with the tree amplitude scaled out according
to~\eqn{RescaledLoopAmplitude}, and with the substitution
$\epsilon \to l\epsilon$ performed. (That is, it is evaluated in
$d = 4 - 2l\epsilon$.) Each $f^{(l)}(\epsilon)$ is given by a three-term series
in $\epsilon$, beginning at ${\cal O}(\epsilon^0)$,
\begin{equation}
f^{(l)}(\epsilon) = f_0^{(l)} + \epsilon f_1^{(l)} + \epsilon^2 f_2^{(l)} \,.
\label{flexp}
\end{equation}
The objects $f_k^{(l)}$, $k=0,1,2$, and $C^{(l)}$ are pure constants,
independent of the external kinematics $\rho$, and also independent of the
number of legs $n$. We expect them to be polynomials in the
Riemann $\zeta$ values $\zeta_m$ with rational coefficients, and a uniform degree
of transcendentality, which is equal to $2l-2+k$ for $f_k^{(l)}$,
and $2l$ for $C^{(l)}$. (Multiple zeta values $\zeta_{n_1,n_2,\ldots}$
may also appear, but there are no independent ones of weight less than
eight, and so they can only appear starting at five-loop order.)
The $f_k^{(l)}$ and $C^{(l)}$ can be determined by matching
to explicit computations. The one-loop values are defined to be,
\begin{equation}
f^{(1)}(\epsilon) = 1\,, \hskip 2 cm C^{(1)} = 0\,,
\hskip 2 cm E_n^{(1)}(\rho;\epsilon) = 0\,.
\label{OneLoopfCE}
\end{equation}
The $E_n^{(l)}(\rho;\epsilon)$ are non-iterating ${\cal O}(\epsilon)$ contributions to the
$l$-loop amplitudes, which vanish as $\epsilon \rightarrow 0$, $E_n^{(l)}(\rho;0)
= 0$. These terms contribute to the exponentiated form of the
amplitudes~(\ref{ExponentialResum}) even for $\epsilon \rightarrow 0$
because they can appear multiplied by the infrared-divergent
parts of the one-loop amplitude $M_n^{(1)}(\rho;l \epsilon)$.
After canceling the infrared divergences between real
emission and virtual contributions, such terms should not contribute
to infrared-safe observables.
The first two values of $f^{(l)}(\epsilon)$ in the three-term
expansion~(\ref{flexp}), namely $f^{(l)}_0$ and
$f^{(l)}_1$, can be identified with quantities appearing in the
resummed Sudakov form factor~\cite{MagneaSterman},
\begin{eqnarray}
f^{(l)}_0 &=& {1\over4} \, \hat\gamma_K^{(l)} \,,
\label{f0toK}\\
f^{(l)}_1 &=& {l \over 2} \, \hat{\cal G}_0^{(l)} \,.
\label{f1toG}
\end{eqnarray}
The first object, $f^{(l)}_0$, is identified
with the $l$-loop cusp anomalous dimension.
The quantities $f^{(l)}_0$ and $\hat{\cal G}_0^{(l)} = (2/l)f^{(l)}_1$
are known through three loops~\cite{KL02,Makeenko,KLV,KLOV,Iterate3,MOS},
\begin{eqnarray}
f_0^{(1)} &=& 1 \,, \nonumber\\
f_0^{(2)} &=& - \zeta_2 \,, \label{f0Values}\\
f_0^{(3)} &=& {11\over2} \zeta_4 \nonumber
\end{eqnarray}
(see also \eqn{gammaKB}), and
\begin{eqnarray}
\hat{\cal G}_0^{(1)} &=& 0 \,, \nonumber\\
\hat{\cal G}_0^{(2)} &=& - \zeta_3 \,, \label{calGValues}\\
\hat{\cal G}_0^{(3)} &=& 4 \zeta_5 + {10\over 3} \zeta_2 \zeta_3 \,. \nonumber
\end{eqnarray}
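
As a small cross-check, the values~(\ref{f0Values}) match the first three
terms of \eqn{gammaKB}, using $\zeta_2 = \pi^2/6$ and $\zeta_4 = \pi^4/90$.
The following Python snippet is purely illustrative (the truncated-series
$\zeta$ evaluator is ours):

```python
import math

# zeta values from a truncated series with a tail correction
def zeta(n, N=4000):
    s = sum(k ** (-n) for k in range(1, N))
    return s + N ** (1 - n) / (n - 1) + 0.5 * N ** (-n)

# f_0^{(2)} = -zeta_2 and f_0^{(3)} = (11/2) zeta_4 reproduce the a^2 and
# a^3 coefficients -pi^2/6 and 11 pi^4/180 of eq. (gammaKB)
assert abs(-zeta(2) - (-math.pi ** 2 / 6)) < 1e-9
assert abs(11 / 2 * zeta(4) - 11 / 180 * math.pi ** 4) < 1e-9
```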
A principal task of this paper is to compute $f_0^{(4)}$
and compare the result with the prediction~(\ref{gammaKB}).
\Eqn{ExponentialResum} is equivalent~\cite{Iterate3} to
\begin{equation}
M_n^{(L)}(\rho;\epsilon) = X_n^{(L)}\bigl[M_n^{(l)}(\rho; \epsilon)\bigr]
+ f^{(L)}(\epsilon) M_n^{(1)}(\rho; L \epsilon) + C^{(L)}
+ E_n^{(L)}(\rho; \epsilon) \,,
\label{iterX}
\end{equation}
where the quantities $X_n^{(L)} = X_n^{(L)}[M_n^{(l)}]$
only depend on the lower-loop amplitudes $M_n^{(l)}(\rho;\epsilon)$
with $l<L$.
The $X_n^{(L)}$ can be computed simply by performing
the following Taylor expansion,
\begin{equation}
X_n^{(L)}\bigl[ M_n^{(l)} \bigr]
= M_n^{(L)}
- \ln\Biggl( 1 + \sum_{l=1}^\infty a^l M_n^{(l)} \Biggr)
\Biggr\vert_{a^L\ {\rm term}} \,.
\label{Xsol}
\end{equation}
\Eqns{iterX}{Xsol} express the $L$-loop amplitude explicitly
in terms of lower-loop amplitudes, plus constant remainders.
Here we need the values of $X_n^{(L)}$ for $L=2,3,4$,
\begin{eqnarray}
X_n^{(2)}\bigl[M_n^{(l)}\bigr]
&=& {1\over2} \Bigl[ M_n^{(1)} \Bigr]^2 \,,
\label{X2} \\
X_n^{(3)}\bigl[M_n^{(l)}\bigr] &=& - {1\over3} \Bigl[ M_n^{(1)} \Bigr]^3
+ M_n^{(1)} M_n^{(2)}\,,
\label{X3}\\
X_n^{(4)}\bigl[M_n^{(l)}\bigr] &=& {1\over4} \Bigl[ M_n^{(1)} \Bigr]^4
- \Bigl[ M_n^{(1)} \Bigr]^2 M_n^{(2)}
+ M_n^{(1)} M_n^{(3)}
+ {1\over2} \Bigl[ M_n^{(2)} \Bigr]^2 \,.
\label{X4}
\end{eqnarray}
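
The Taylor expansion~(\ref{Xsol}) can be verified numerically. The
following Python sketch, using arbitrary stand-in values for the
$M_n^{(l)}$ (our choice, for illustration only), reproduces
eqs.~(\ref{X2})--(\ref{X4}):

```python
# Truncated power series in a: p[k] is the coefficient of a^k
def mul(p, q, order=5):
    r = [0.0] * order
    for i, a_i in enumerate(p):
        for j, b_j in enumerate(q):
            if i + j < order:
                r[i + j] += a_i * b_j
    return r

# arbitrary stand-in values for the rescaled amplitudes M^(1) ... M^(4)
M1, M2, M3, M4 = 0.7, -1.3, 2.1, 0.4
u = [0.0, M1, M2, M3, M4]                    # sum_l a^l M^(l)
u2 = mul(u, u)
u3 = mul(u2, u)
u4 = mul(u3, u)
# ln(1 + u) = u - u^2/2 + u^3/3 - u^4/4 through O(a^4)
ln_coeff = [u[k] - u2[k] / 2 + u3[k] / 3 - u4[k] / 4 for k in range(5)]

# X^(L) = M^(L) - [ln(1 + sum_l a^l M^(l))]_{a^L}, as in eq. (Xsol)
X2 = M2 - ln_coeff[2]
X3 = M3 - ln_coeff[3]
X4 = M4 - ln_coeff[4]
assert abs(X2 - 0.5 * M1 ** 2) < 1e-12
assert abs(X3 - (-M1 ** 3 / 3 + M1 * M2)) < 1e-12
assert abs(X4 - (0.25 * M1 ** 4 - M1 ** 2 * M2
                 + M1 * M3 + 0.5 * M2 ** 2)) < 1e-12
```

As claimed, only the lower-loop amplitudes enter each $X_n^{(L)}$.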
We note that the exponentiated result~(\ref{ExponentialResum})
leads to a simple exponentiated form for suitably-defined
``finite remainders'' $F_n^{(L)}$ associated with the multi-loop
amplitudes~\cite{Iterate3}. We define
\begin{equation}
F_n^{(L)}(\rho; \epsilon) = M_n^{(L)}
- \sum_{l=0}^{L-1} \hat I_n^{(L-l)} \, M_n^{(l)} \,,
\label{FnLdef}
\end{equation}
where the $\hat I_n^{(L-l)}(\rho;\epsilon)$ are iteratively-defined divergent terms,
and $M_n^{(0)} \equiv 1$. After some algebra,
one finds that $\hat I_n^{(L)}(\rho;\epsilon)$ and $F_n^{(L)}(\rho;\epsilon)$ obey
iterative relations very similar to \eqn{iterX}.
In the limit as $\epsilon\to0$, the relation for $F_n^{(L)}(\rho;\epsilon)$ becomes
\begin{eqnarray}
F_n^{(L)}(\rho;0) &\equiv& X_n^{(L)}\bigl[ F_n^{(l)}(\rho;0) \bigr]
+ f_0^{(L)} \, F_n^{(1)}(\rho;0) + C^{(L)} \,.
\label{Fsolzero}
\end{eqnarray}
Because $\epsilon$ has disappeared from \eqn{Fsolzero}, it can be
solved neatly for $F_n^{(L)}(\rho;0)$ for any $L$,
in terms of the one-loop remainder $F_n^{(1)}(\rho;0)$ alone~\cite{Iterate3}.
The solution can be represented as,
\begin{eqnarray}
{\cal F}_n(\rho;0) \equiv 1 + \sum_{L=1}^\infty \hat{a}^L F_n^{(L)}(\rho;0)
&=& \exp\Biggl[\sum_{l=1}^\infty \hat{a}^l
\Bigl(f^{(l)}_0 F_n^{(1)}(\rho;0) + C^{(l)} \Bigr) \Biggr]
\nonumber\\
&\equiv&
\exp\Biggl[ {1\over 4} \gamma_K(\hat{a}) \ F_n^{(1)}(\rho;0) + C(\hat{a}) \Biggr] \,,
\label{F0Resum}
\end{eqnarray}
where $C(\hat{a}) = \sum_{l=1}^\infty C^{(l)} \hat{a}^l$ and we used the relation
(\ref{f0toK}) of $f^{(l)}_0$ to the cusp anomalous dimension. The result
for $F_n^{(L)}(\rho;0)$ is given by the $\hat{a}^L$ term in the Taylor
expansion of the exponential.
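
As a consistency check, expanding the exponential in \eqn{F0Resum} with
arbitrary stand-in values for $F_n^{(1)}(\rho;0)$, $f_0^{(l)}$ and
$C^{(l)}$ (all chosen purely for illustration) reproduces \eqn{Fsolzero}
order by order. A Python sketch:

```python
from math import factorial

# Truncated power series product in a (coefficients up to a^4)
def mul(p, q, order=5):
    r = [0.0] * order
    for i, a_i in enumerate(p):
        for j, b_j in enumerate(q):
            if i + j < order:
                r[i + j] += a_i * b_j
    return r

# stand-in numerical inputs, chosen arbitrarily for illustration
F1 = 0.9                                     # one-loop remainder F^(1)(rho;0)
f0 = {1: 1.0, 2: -0.62, 3: 1.17, 4: -2.0}    # stand-ins for f_0^(l)
C = {1: 0.0, 2: -0.45, 3: 0.8, 4: 0.3}       # stand-ins for C^(l)

# exponent in eq. (F0Resum): sum_l a^l (f_0^(l) F1 + C^(l)), through a^4
E = [0.0] + [f0[l] * F1 + C[l] for l in (1, 2, 3, 4)]
expE = [1.0, 0.0, 0.0, 0.0, 0.0]
Ek = [1.0, 0.0, 0.0, 0.0, 0.0]
for k in range(1, 5):                        # exp(E) = sum_k E^k / k!
    Ek = mul(Ek, E)
    expE = [x + y / factorial(k) for x, y in zip(expE, Ek)]
F = {L: expE[L] for L in (1, 2, 3, 4)}       # F^(L)(rho;0)

# eq. (Fsolzero) with the X^(L) of eqs. (X2)-(X4), built from the F's
X2 = 0.5 * F[1] ** 2
X3 = -F[1] ** 3 / 3 + F[1] * F[2]
X4 = (0.25 * F[1] ** 4 - F[1] ** 2 * F[2]
      + F[1] * F[3] + 0.5 * F[2] ** 2)
assert abs(F[2] - (X2 + f0[2] * F1 + C[2])) < 1e-12
assert abs(F[3] - (X3 + f0[3] * F1 + C[3])) < 1e-12
assert abs(F[4] - (X4 + f0[4] * F1 + C[4])) < 1e-12
```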

Next we present the specific forms of the iterative amplitude
relations~(\ref{iterX}) through four loops, specializing to $n=4$.
The two-loop version is~\cite{Iterate2}
\begin{equation}
M_4^{{(2)}}(\rho;\epsilon)
= {1 \over 2} \Bigl[ M_4^{{(1)}}(\rho;\epsilon) \Bigr]^2
+ f^{(2)}(\epsilon) \, M_4^{{(1)}}(\rho;2\epsilon) + C^{(2)}
+ {\cal O}(\epsilon) \,,
\label{TwoLoopOneLoopAgainn}
\end{equation}
where
\begin{equation}
f^{(2)}(\epsilon) = - (\zeta_2 + \zeta_3 \epsilon + \zeta_4 \epsilon^2) \,,
\label{f2def}
\end{equation}
and the constant $C^{(2)}$ is given by
\begin{equation}
C^{(2)} = - {1\over 2} \zeta_2^2 \,.
\label{C2def}
\end{equation}
The three-loop version, explicitly verified in ref.~\cite{Iterate3}, is
\begin{eqnarray}
M_4^{(3)}(\rho;\epsilon) &=& - {1\over 3} \Bigl[M_4^{(1)}(\rho;\epsilon)\Bigr]^3
+ M_4^{(1)}(\rho;\epsilon)\, M_4^{(2)}(\rho;\epsilon)
+ f^{(3)}(\epsilon) \, M_4^{(1)} (\rho; 3\,\epsilon) \nonumber\\
&& \hskip 1cm \null
+ C^{(3)} + {\cal O}(\epsilon) \,,
\label{ThreeLoopFourPtIteration}
\end{eqnarray}
where
\begin{equation}
f^{(3)}(\epsilon) = {11\over 2} \, \zeta_4
+ \epsilon (6 \zeta_5 + 5 \zeta_2 \zeta_3 ) +
\epsilon^2 (c_1 \zeta_6 + c_2\zeta_3^2) \,,
\label{f3def}
\end{equation}
and the constant $C^{(3)}$ is given by
\begin{equation}
C^{(3)} = \biggl( {341\over 216} \, + {2\over 9} c_1 \biggr) \zeta_6
+ \biggl( - {17\over 9} + {2\over 9} c_2 \biggr)\zeta_3^2\,.
\label{C3def}
\end{equation}
The constants $c_1$ and $c_2$ are expected to be rational numbers.
They drop out from the right-hand side of
\eqn{ThreeLoopFourPtIteration} because of a cancellation between
$f_2^{(3)}$ and $C^{(3)}$. A computation of the three-loop five-point
amplitude, or of the three-loop splitting amplitude, could be used to
determine them.
The four-loop iteration relation would have the following form,
\begin{eqnarray}
M_4^{(4)}(\rho;\epsilon) &=& {1\over4} \Bigl[ M_4^{(1)}(\rho;\epsilon) \Bigr]^4
- \Bigl[ M_4^{(1)}(\rho;\epsilon) \Bigr]^2 M_4^{(2)}(\rho;\epsilon)
+ M_4^{(1)}(\rho;\epsilon) M_4^{(3)}(\rho;\epsilon)
\nonumber\\
&& \hskip1cm\null
+ {1\over2} \Bigl[ M_4^{(2)}(\rho;\epsilon) \Bigr]^2
+ f^{(4)}(\epsilon) \, M_4^{(1)} (\rho; 4\,\epsilon) + C^{(4)}
+ {\cal O}(\epsilon) \,.
\label{FourLoopFourPtIteration}
\end{eqnarray}
As we shall not be computing the $1/\epsilon$ and finite terms in the
present paper, we cannot do more here than verify the (universal)
divergent terms and extract the value of the four-loop cusp anomalous
dimension. We leave to future work the important task of verifying
\eqn{FourLoopFourPtIteration}, using the integral form of the
four-loop amplitude presented in this paper.
\section{Construction of four-loop planar MSYM\ loop amplitude}
\label{ConstructionSection}
The unitarity
method~\cite{NeqFourOneLoop,Fusing,UnitarityMachinery,OneLoopReview,
TwoLoopSplitting} is an efficient way to determine the representations
of loop amplitudes in terms of basic loop integrals. The coefficients
of the loop integrals are obtained by sewing sets of on-shell
tree amplitudes. If we are using a four-dimensional form of
the unitarity method, the tree amplitudes can be significantly
simplified before sewing. At one loop, supersymmetric amplitudes
are fully determined by their four-dimensional cuts, but unfortunately
no such theorem has been proven at higher loops.
In the present calculation, we will
therefore use $D$-dimensional unitarity, for which we will need the tree
amplitudes to be evaluated
without assuming four-dimensional helicity states
for the external legs. These amplitudes are nonetheless simpler than
the completely off-shell amplitudes that would implicitly arise
in a conventional Feynman-diagram calculation.
For MSYM, a key feature is that the on-shell tree amplitudes have the full
${\cal N}=4$ supersymmetry manifest, in the form of simple $S$-matrix
Ward identities~\cite{SWI}.
It is impossible to maintain the full ${\cal N}=4$ supersymmetry
in any off-shell formalism, because the superspace constraints
imply the equations of motion via the Bianchi identities~\cite{GSWANP}.
The unitarity method derives its efficiency
from the ability to use simplified forms of tree amplitudes to produce
simplified loop integrands. (To maintain the supersymmetry,
we apply the four-dimensional helicity (FDH) scheme~\cite{FDH},
a variation on dimensional reduction (DR)~\cite{Siegel}, in performing
the sum over intermediate gluon polarization states.)

The unitarity method expresses the amplitude in terms of a set of loop
integrals. In general gauge theories, such as QCD, the number of
required integrals proliferates rapidly as the number of loops increases,
and sophisticated algorithms based on integration-by-parts
identities~\cite{IBP} have been devised to relate such integrals to a smaller
class of master integrals, successfully through two
loops~\cite{IBP2loop}.
Fortunately, the number of required integrals grows much more slowly
for gluon-gluon scattering in planar ${\cal N}=4$ super Yang-Mills theory.
At $L=1,2,3$ the respective numbers are $1,1,2$, and the required
integrals are all shown in \fig{LowerLoopFigure}.
We will see that at $L=4$ eight integrals are required.
\begin{figure}[t]
\centerline{\epsfxsize 5.5 truein \epsfbox{lowerloop.eps}}
\caption{Integrals required for $gg \to gg$ scattering in planar MSYM\
at one loop (1), two loops (2) and three loops ((3)a and (3)b).
The box (1), planar double box (2) and three-loop ladder (3)a
integrals are scalar integrals, with no loop-momentum dependent
factors in the numerator. The tennis-court integral (3)b contains a factor
of $(l_1+l_2)^2$, where $l_1$ and $l_2$ are marked with arrows
in the figure.
}
\label{LowerLoopFigure}
\end{figure}
The result for the one-loop four-point amplitude is~\cite{GSB}
\begin{equation}
M_4^{{(1)}}(\epsilon) =
- {1 \over 2} \, {\cal I}^{(1)}(s,t) \,,
\label{OneLoopAmplitude}
\end{equation}
where the Mandelstam variables are $s = (k_1 + k_2)^2$
and $t = (k_2 +k_3)^2$.
The factor of $1/2$ in~\eqn{OneLoopAmplitude} follows from our
normalization convention for $A_n^{(L)}$, which is defined
by \eqn{LeadingColorDecomposition}.
The one-loop scalar box integral ${\cal I}^{(1)}(s,t)$ (multiplied by a
convenient normalization factor), depicted in \fig{LowerLoopFigure},
is
\begin{equation}
{\cal I}^{(1)}(s,t) \equiv s t \, I_4^{(1)}(s,t) \,,
\end{equation}
where $I_4^{(1)}(s,t)$ is defined in eq.~(B1) of
ref.~\cite{Iterate3}. We absorb the factor of $st$ into the
definition of the integrals we use, because it cancels a factor of
$1/(st)$ appearing in the explicit expression for $I_4^{(1)}(s,t)$,
and matches the form in which it appears in the MSYM\ amplitudes.
This integral is given in terms of HPLs in eq.~(B1) of that reference,
through the order we require here, ${\cal O}(\epsilon^4)$. The higher-order
terms in $\epsilon$ are needed in order to evaluate terms in the
infrared/iterative representation~(\ref{FourLoopFourPtIteration}) of
the four-loop amplitude through ${\cal O}(\epsilon^{-2})$. For example, in the
term $[ M_4^{(1)}(\epsilon) ]^4\ \propto\ [ {\cal I}^{(1)}(\epsilon) ]^4$ in
\eqn{FourLoopFourPtIteration}, if one takes the leading $1/\epsilon^2$ term
from three of the four factors, then the coefficient of $\epsilon^4$ in the
fourth factor contributes to the ${\cal O}(\epsilon^{-2})$ term in the product.
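The way higher-order terms in $\epsilon$ feed into the poles of a product can be mimicked with truncated Laurent series. The following sketch is our own illustration, with placeholder coefficients rather than the actual expansion of ${\cal I}^{(1)}$; it multiplies four copies of a series carrying a $1/\epsilon^2$ pole and an $\epsilon^4$ term:

```python
from collections import defaultdict

def mul(a, b):
    """Multiply truncated Laurent series given as {power of eps: coefficient}."""
    out = defaultdict(float)
    for p, x in a.items():
        for q, y in b.items():
            out[p + q] += x * y
    return dict(out)

# Toy stand-in for the expansion of I^(1): a 1/eps^2 pole plus an eps^4 term.
# The coefficients are placeholders, NOT the actual expansion.
I1 = {-2: 1.0, 4: 0.3}
M4_to_4th = mul(mul(I1, I1), mul(I1, I1))    # [I^(1)]^4
# Taking the eps^4 term from one factor and the leading pole from the other
# three gives 4 * 0.3 at order eps^-2:
print(M4_to_4th[-2])   # 1.2
```

Exactly as in the text, the $\epsilon^4$ coefficient of a single factor is needed to obtain the product correctly through ${\cal O}(\epsilon^{-2})$.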
The planar two-loop MSYM\ four-point amplitude is given by~\cite{BRY}
\begin{equation}
M^{(2)}_4(\epsilon) =
{1\over 4} \,
\Bigl[ {\cal I}^{(2)}(s,t) + {\cal I}^{(2)}(t,s) \Bigr] \,.
\label{TwoloopPlanarResult}
\end{equation}
The two-loop scalar double-box integral, shown
in \fig{LowerLoopFigure}, is
\begin{equation}
{\cal I}^{(2)}(s,t) \equiv s^2 t \, I_4^{(2)}(s,t) \,,
\end{equation}
where $I_4^{(2)}(s,t)$ is defined in
eq.~(B4) of ref.~\cite{Iterate3}. As at one loop,
we have rescaled the integral to remove the rational prefactor.
This integral was first evaluated through ${\cal O}(\epsilon^0)$ in terms
of polylogarithms~\cite{SmirnovDoubleBox}.
Because the infrared/iterative
expression~(\ref{FourLoopFourPtIteration})
contains, for example,
$[ M_4^{(2)}(\epsilon) ]^2\ \propto\ [ {\cal I}^{(2)}(\epsilon) ]^2$,
and because the expansion of ${\cal I}^{(2)}(\epsilon)$ begins at
order $1/\epsilon^4$, we need its expansion through ${\cal O}(\epsilon^2)$.
This expansion is presented in terms of HPLs in eq.~(B5) of
ref.~\cite{Iterate3}.
The three-loop planar amplitude is given by~\cite{BRY,Iterate3}
\begin{equation}
M^{(3)}_4(\epsilon) =
-{1\over8} \,
\Bigl[ {\cal I}^{{(3)}{\rm a}}(s,t) +
2 \, {\cal I}^{{(3)}{\rm b}}(t,s) +
{\cal I}^{{(3)}{\rm a}}(t,s) +
2 \, {\cal I}^{{(3)}{\rm b}}(s,t) \Bigr] \,.
\label{ThreeLoopPlanarResult}
\end{equation}
The scalar triple-ladder and non-scalar ``tennis-court'' integrals,
illustrated in \fig{LowerLoopFigure}, are
\begin{eqnarray}
&& {\cal I}^{{(3)}{\rm a}}
\equiv s^3 t \, I_4^{{(3)} \rm a}(s,t)\,, \nonumber \\
&& {\cal I}^{{(3)}{\rm b}}
\equiv s t^2 \, I_4^{{(3)} \rm b}(s,t)\,,
\end{eqnarray}
where $I_4^{{(3)} \rm a}(s,t)$ and $I_4^{{(3)} \rm b}(s,t)$
are defined in
eqs.~(3.1) and (3.2), respectively, of ref.~\cite{Iterate3}.
Because these integrals multiply ${\cal I}^{(1)}$ in the term
$M_4^{(1)}(\epsilon) M_4^{(3)}(\epsilon)$ in \eqn{FourLoopFourPtIteration},
we need their expansion through ${\cal O}(\epsilon^0)$.
These expansions were first carried out in terms of HPLs
for $I_4^{{(3)} \rm a}$ in ref.~\cite{SmirnovTripleBox},
and for $I_4^{{(3)} \rm b}$ in ref.~\cite{Iterate3}.
The results are collected in eqs.~(B7) and (B9) of ref.~\cite{Iterate3}.
The coefficients of the integrals in the two- and three-loop
expressions~(\ref{TwoloopPlanarResult}) and
(\ref{ThreeLoopPlanarResult}) were originally determined~\cite{BRY}
using iterated two-particle cuts. Such cuts can be evaluated to all
orders in $\epsilon$ because ${\cal N}=4$ supersymmetry relates all nonvanishing
four-point amplitudes; therefore precisely the same algebra enters as
at one loop, for which it leads to the
amplitude~(\ref{OneLoopAmplitude}). More generally, an ansatz for the
planar contributions to the integrands was proposed in terms of a
``rung insertion rule''~\cite{BRY,BDDPR} to be described below, which
was based largely on the structure of the iterated two-particle cuts.
At three loops, the planar integrals generated by the rung rule can
all be constructed using iterated two-particle cuts. Also, the
three-loop planar amplitude~(\ref{ThreeLoopPlanarResult}) has the
correct infrared poles and a remarkable iterative
structure~\cite{Iterate3}, so there is little doubt that it is the
complete answer.
However, beyond three loops --- and even at three loops for non-planar
contributions --- the rung rule generates graphs that cannot
be obtained using iterated two-particle cuts.
It is less certain that the rung rule gives the correct results
for such contributions. Indeed, we shall see that there are
additional, non-rung-rule, contributions to the planar
amplitude beginning at four loops.
Nevertheless, we start constructing the planar
four-loop MSYM\ amplitude using the diagrams generated by
the rung rule. According to this rule,
each diagram in the planar $L$-loop amplitude can be used
to generate planar $(L+1)$-loop diagrams as follows:
First, one generates a set of diagrams by inserting a new line
joining each possible pair of internal lines.
Next, one removes from this set all diagrams with triangle
or bubble subdiagrams. Besides the scalar propagator associated
with the new line, one also includes an additional numerator factor for
the diagram, beyond that inherited from the $L$-loop diagram,
of $i(l_1+l_2)^2$. Here $l_1$ and $l_2$ are the momenta
flowing through each of the legs to which the new line is joined.
Each distinct $(L+1)$-loop contribution is counted once, even if it
can be generated in multiple ways. (Contributions corresponding
to identical graphs but with different numerator factors should be
counted as distinct.) Rung-rule diagrams have also been referred
to as ``Mondrian diagrams'' because of their visual
similarity~\cite{Iterate3}.
At one loop, the only no-triangle graph for the four-point process
is the box graph depicted in \fig{LowerLoopFigure}.
In going to two loops, there is only one inequivalent way to
add a rung to the box graph without creating a triangle.
Connecting opposing sides of the box, we form the planar double
box in \fig{LowerLoopFigure}.
(One might also imagine attaching a propagator between two adjacent external
legs. However, this operation yields the same double box.)
To generate the three-loop graphs, we add a rung, either
vertically or horizontally, to one of the boxes of the two-loop
box diagram, thus yielding the triple ladder (3)a and tennis-court
integral (3)b of \fig{LowerLoopFigure}.
(Attaching a propagator between external legs again gives nothing new.)
Of course, there are several permutations of external legs present
for any given type of graph; all of them are produced by the rung rule.
Here we are identifying only the topologically distinct integrals
that arise.
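The rung rule as just stated is easy to mechanize at the level of graph topology. The sketch below is our own illustration (vertex labels and function names are ours, and the $i(l_1+l_2)^2$ numerator factor is not tracked): it inserts a rung between each pair of internal lines of the one-loop box and discards graphs containing a triangle subgraph. Only the two opposite-edge insertions survive, and both give the planar double box:

```python
from collections import deque
from itertools import combinations

def girth(edges):
    """Length of the shortest cycle, via BFS from every vertex (exact for
    unweighted simple graphs; rung insertions never create parallel edges)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    best = float("inf")
    for root in adj:
        dist, parent, q = {root: 0}, {root: None}, deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:          # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def insert_rung(edges, e1, e2):
    """Add a new line joining the interiors of internal lines e1 and e2."""
    a, b = "new1", "new2"                     # the two new trivalent vertices
    rest = [e for e in edges if e not in (e1, e2)]
    return rest + [(e1[0], a), (a, e1[1]), (e2[0], b), (b, e2[1]), (a, b)]

# internal lines of the one-loop box (external legs carry no cycles)
box = [(1, 2), (2, 3), (3, 4), (4, 1)]
survivors = [(e1, e2) for e1, e2 in combinations(box, 2)
             if girth(insert_rung(box, e1, e2)) >= 4]   # no-triangle filter
print(survivors)   # only the two opposite-edge insertions survive
```

Joining adjacent sides of the box produces a triangle (girth 3) and is rejected; the two surviving insertions are related by a rotation and correspond to the single double-box topology of \fig{LowerLoopFigure}.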
\begin{figure}[t]
\centerline{\epsfxsize 6.4 truein \epsfbox{rr.eps}}
\caption{``Rung-rule'' contributions to the leading-color four-loop amplitude,
in terms of integral functions given in
eqs.~(\ref{MBIntegralA})--(\ref{MBIntegralF}).
An overall factor of $st$ has been suppressed in each figure,
compared with the definitions in
eqs.~(\ref{MBIntegralA})--(\ref{MBIntegralF}).}
\label{rrFigure}
\end{figure}
What happens when we try to add rungs to the three-loop integrals?
There are two inequivalent ways to add a rung inside either
the left- or the right-most box of the triple ladder integral
in \fig{LowerLoopFigure};
these give the integrals of \fig{rrFigure}(a) and~(c).
Adding a vertical rung inside the middle
box does not yield a topologically distinct integral. Adding
a horizontal rung inside the middle box yields the integral
of \fig{rrFigure}(d). Inserting a vertical rung inside the upper-right
box of the tennis-court integral in \fig{LowerLoopFigure}
gives us \fig{rrFigure}(e), and a horizontal one,
\fig{rrFigure}(b). Finally, adding a horizontal rung inside the left-side
box of the tennis-court integral gives us a new kind of integral,
shown in~\fig{rrFigure}(f), which has no two-particle cuts.
\begin{figure}[t]
\centerline{\epsfxsize 5.5 truein \epsfbox{cuts.eps}}
\caption{Generalized cuts that provide information about the
planar four-loop amplitude. (i) A two-particle cut separating
a tree amplitude from a three-loop amplitude.
(ii) A two-particle cut separating a one-loop amplitude from
a two-loop amplitude. (iii) A ``3--3'' cut separating the amplitude
into a product of three tree amplitudes. (iv) An ``upper-2--3--lower-2''
cut separating the amplitude into a product of four tree amplitudes.
(v) A ``lower-2--3--lower-2'' cut. (vi) A ``3--lower-3'' cut.
}
\label{CutsFigure}
\end{figure}
The propagators and momentum numerators present in integrals
\fig{rrFigure}(a)--(e) are determined by the two-particle cuts,
as depicted in \fig{CutsFigure}(i) and (ii).
That is, the rung rule is guaranteed to be correct for them.
There is no such guarantee
for integral~\fig{rrFigure}(f), which has no two-particle
cut with real external momenta. In order to check it, we need
to compute a three-particle cut. We chose to perform the
generalized-unitarity cut
of \fig{CutsFigure}(iv), a threefold cut with a central three-particle
cut and two secondary two-particle cuts. This cut reveals that the
rung rule is more robust than might have been expected,
based on its origin in iterated two-particle cuts:
The rule does in fact give the correct form for the numerator
of the integral in~\fig{rrFigure}(f), even though no two-particle cut
can detect it.
Because we have no proof that the $(-2\epsilon)$-dimensional parts of loop
momenta are unimportant to the computation, we perform these
calculations in $D$ dimensions. For this purpose, we need tree
amplitudes with (some of) the external states kept in general
dimension. In the three-particle cuts, one has contributions from
three-gluon states, and from states with gluons and fermion (gluino)
pairs crossing the cut, in addition to states with scalar pairs or
fermion pairs and a lone scalar. In principle, one could
evaluate these cuts by computing all the required amplitudes, and
summing over the particle multiplet. However, it is easier to use a
trick. The trick takes advantage of the fact that the MSYM\ multiplet,
consisting of the gluon, four Majorana fermions, and six real scalars,
can also be understood as a single ${\cal N}=1$ multiplet in ten
dimensions. Instead of summing over the multiplet seen as the
${\cal N}=4$ multiplet in four dimensions, we sum over the ${\cal N}=1$
multiplet in ten dimensions. This trick reduces the number of types
of intermediate states one has to consider, though of course the total
number of states is unchanged. The loop momenta are in any event kept
in $D$ dimensions. (The trick is compatible with the FDH
regularization scheme~\cite{FDH}, where the momenta are taken to be in $D$
dimensions, but the number of states in the loops is kept at the
four-dimensional value.)
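As a quick tally (our own consistency check, not a computation from the text), both descriptions of the multiplet indeed count the same 16 on-shell states:

```python
# physical on-shell state counts per particle type (standard counting)
msym_4d = {
    "gluon helicities":              2,
    "gluino states (4 Majorana)":    4 * 2,
    "real scalars":                  6,
}
n1_10d = {
    "10D gauge-boson polarizations (D - 2)": 10 - 2,
    "10D Majorana-Weyl fermion states":      8,
}
assert sum(msym_4d.values()) == sum(n1_10d.values()) == 16
print("both multiplets contain 16 states")
```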
\begin{figure}[t]
\centerline{\epsfxsize 3.4 truein \epsfbox{nonrr.eps}}
\caption{Non-rung-rule contributions to the leading-color four-loop amplitude,
in terms of integral functions defined in
eqs.~(\ref{MBIntegralD2}) and (\ref{MBIntegralF2}).
Integral (d${}_2$) follows the labeling of integral (d) in \fig{rrFigure},
and integral (f${}_2$) follows the labeling of integral (f).
An overall factor of $st$ has been suppressed in each figure,
compared with eqs.~(\ref{MBIntegralD2}) and (\ref{MBIntegralF2}).}
\label{nonrrFigure}
\end{figure}
The cut of \fig{CutsFigure}(iv) also reveals the presence of a
non-rung-rule integral, shown in \fig{nonrrFigure}(d${}_2$).
The integral has, of course, no two-particle cuts, but it can be
obtained from the integral topology of \fig{rrFigure}(d)
by canceling two propagators, labeled 10 and 13.
The integral of \fig{rrFigure}(f) could also have been checked using
another generalized-unitarity cut, the two-fold three-particle cut
of \fig{CutsFigure}(iii). This cut also reveals the presence of
the ``four-square'' integral, shown in~\fig{nonrrFigure}(f${}_2$).
This integral can be obtained from the topology of \fig{rrFigure}(f)
by canceling the propagator labeled 13.
The cut of \fig{CutsFigure}(iii)
also detects the integral of \fig{nonrrFigure}(d${}_2$).
Under mild assumptions, which we discuss in \sect{correctnesssection},
it is sufficient to compute two additional multiple cuts,
shown in \fig{CutsFigure}(v) and (vi), in order
to rule out any additional contributions. We have computed these cuts,
and we indeed find that no additional integrals appear.
\section{Integral representation of the four-loop planar amplitude}
\label{FourLoopAmpSection}
We find that the four-loop planar amplitude is given by
\begin{eqnarray}
M^{(4)}_4(\epsilon) &=&
{1\over16}
\Bigl[ {\cal I}^{\rm (a)}(s,t) + {\cal I}^{\rm (a)}(t,s)
+ 2 \, {\cal I}^{\rm (b)}(s,t) + 2 \, {\cal I}^{\rm (b)}(t,s)
+ 2 \, {\cal I}^{\rm (c)}(s,t) + 2 \, {\cal I}^{\rm (c)}(t,s)
\nonumber\\
&&\hskip0.4cm\null
+ {\cal I}^{\rm (d)}(s,t) + {\cal I}^{\rm (d)}(t,s)
+ 4 \, {\cal I}^{\rm (e)}(s,t) + 4 \, {\cal I}^{\rm (e)}(t,s)
+ 2 \, {\cal I}^{\rm (f)}(s,t) + 2 \, {\cal I}^{\rm (f)}(t,s)
\nonumber\\
&&\hskip0.4cm\null
- 2 \, {\cal I}^{\rm (d_2)}(s,t) - 2 \, {\cal I}^{\rm (d_2)}(t,s)
- {\cal I}^{\rm (f_2)}(s,t)
\Bigr]
\,.
\label{FourLoopPlanarResult}
\end{eqnarray}
The rung-rule integrals ${\cal I}^{\rm (a)}$ through ${\cal I}^{\rm (f)}$ are depicted
in \fig{rrFigure}. The two additional integrals, ${\cal I}^{\rm (d_2)}$ and
${\cal I}^{\rm (f_2)}$, depicted in \fig{nonrrFigure}, do not follow from
the rung rule, and were detected using generalized cuts with at
least one three-particle channel, as discussed in
\sect{ConstructionSection}.
The integrals appearing in the four-point amplitude are defined
generically, for diagram ``$(x)$'' by
\begin{equation}
{\cal I}^{(x)}(s,t) \equiv
(-i e^{\epsilon \gamma} \pi^{-d/2})^4
\int \mbox{d}^d p\,\mbox{d}^d q\,\mbox{d}^d u\, \mbox{d}^d v\
{ s t \, {\cal N}^{(x)} \over \prod_j p_j^2 } \,,
\label{genericintdef}
\end{equation}
where $p,q,u,v$ are the four independent loop integration variables,
and $d=4-2\epsilon$. The product in the denominator of \eqn{genericintdef}
runs over the labels of internal lines in the graph $(x)$. (For
graphs (b) and (c), line 11 corresponds to a numerator factor, so it
should be excluded from this product. Similarly, lines 10 and 13 are
to be omitted from the denominator product for graph (d${}_2$), and
line 13 from the product for graph (f${}_2$).) Each line carries
momentum $p_j$, which is some linear function of $p,q,u,v$ and the
external momenta. The line label $j$ is shown next to each internal
line. The numerator factor ${\cal N}^{(x)}$ is also shown explicitly,
to the left of the graph for $(x)$. We have omitted an overall factor
of $st$ from ${\cal N}^{(x)}$ associated with each integral from the
figure, in order to avoid cluttering it.
For the quadruple-ladder graph (a), and for the non-rung-rule graphs
(d${}_2$) and (f${}_2$), ${\cal N}^{(x)}$ is completely independent of the
loop momentum, and so it may be pulled outside of the integral.
For the other graphs, at least one factor in ${\cal N}^{(x)}$
depends on the loop momenta. In generating a Mellin-Barnes
representation for these integrals, we think of these factors
as additional ``propagators'' appearing in the numerator instead
of the denominator. Accordingly, we
attach a line label $j$ to each such
factor. The presence of such a factor is also indicated graphically
by a pair of parallel arrows, marking the lines whose momenta are
summed, then squared, to generate the numerator factor.
For example, the quadruple ladder integral (a) is defined by
\begin{eqnarray}
{\cal I}^{\rm (a)}
&=& (-i e^{\epsilon \gamma} \pi^{-d/2})^4 \, s^4 t
\int
{\mbox{d}^d p\,\mbox{d}^d q\,\mbox{d}^d u\,\mbox{d}^d v
\over p^2 \, (p - k_1)^2 \, (p - k_1 - k_2)^2
\, q^2 \, (p - q)^2 \, (q-k_1-k_2)^2 }
\nonumber \\
&& \hskip3cm \times
{1\over u^2 \, (q - u)^2 \, (u-k_1-k_2)^2
\, v^2 \, (u - v)^2 \, (v-k_1-k_2)^2 \, (v+k_4)^2 }
\nonumber \\
&=& (-i e^{\epsilon \gamma} \pi^{-d/2})^4
\int
{\mbox{d}^d p\,\mbox{d}^d q\,\mbox{d}^d u\,\mbox{d}^d v \ s^4 t
\over \prod_{j=1}^{13} p_j^2}\,.
\label{QuadrupleLadder}
\end{eqnarray}
Similarly, integral (b) is defined by
\begin{eqnarray}
{\cal I}^{\rm (b)}
&=& (-i e^{\epsilon \gamma} \pi^{-d/2})^4 \, s t^2
\int
{\mbox{d}^d p\,\mbox{d}^d q\,\mbox{d}^d u\,\mbox{d}^d v \ [(v+k_1)^2]^2
\over p^2 \, (p - k_1)^2 \, (p - v - k_1)^2 \,
\, q^2 \, (p - q)^2 \, (q-v -k_1)^2 }
\nonumber \\
&& \hskip3cm \times
{1\over u^2 \, (q - u)^2 \, (u-v-k_1)^2 \, (u+k_4)^2
\, v^2 \, (v - k_2)^2 \, (v-k_2-k_3)^2 }
\nonumber \\
&=& (-i e^{\epsilon \gamma} \pi^{-d/2})^4
\int
{\mbox{d}^d p\,\mbox{d}^d q\,\mbox{d}^d u\,\mbox{d}^d v \ s t^2 \, (p_{11}^2)^2
\over p_{12}^2 \, p_{13}^2 \, p_{14}^2 \prod_{j=1}^{10} p_j^2}\,.
\label{Integralb}
\end{eqnarray}
The double arrows indicate that in this case the numerator factor appears
squared, $(p_{11}^2)^2 \equiv (l_8 + l_{10})^4 = [(v+k_1)^2]^2$.
\section{Establishing the correctness of the integrand}
\label{correctnesssection}
In this section we justify the result~(\ref{FourLoopPlanarResult}) for
the four-loop planar amplitude, based upon our evaluation of the
unitarity cuts. Because we have not evaluated all possible unitarity
cuts, we have to impose some mild assumptions about the types of
integrals that should be present. We will see that another, stronger
assumption of conformal invariance also holds for the individual
integrals that appear, although we do not require it. In addition, as
we shall see in the next section, the agreement of the infrared
singularities through ${\cal O}(\epsilon^{-2})$ with their known
form~\cite{MagneaSterman} --- up to the one unknown constant at
${\cal O}(\epsilon^{-2})$ --- provides a non-trivial consistency check on our
construction.
The analysis determining the integrand of the four-loop four-point
amplitude proceeds in several steps:
\begin{itemize}
\item We assume that there are no integrals with triangle or
bubble subgraphs.
\item We classify the four-loop planar integrals
of this type topologically. We begin with the subset of
graphs having only cubic vertices, from which we can obtain
the remaining graphs.
\item We construct a set of generalized cuts capable of detecting
all such integrals. That is, each such integral, when restricted
to the generalized cut kinematics, should be
nonvanishing for at least one cut in the set. For it to
be nonvanishing, it must have a propagator present for each
line being cut.
We used this set of cuts to deduce the
terms in the expression~(\ref{FourLoopPlanarResult}). Indeed, we
find that each such cut of the expression is completely consistent
with our evaluation of the cut.
This step confirms the result, under the ``no triangle'' assumption.
\item Alternatively, we consider the result of assuming that only
{\it conformally-invariant integrals} contribute, when the external
legs are taken off-shell so that the integrals become well-defined
(finite) in four dimensions.
Such integrals were considered recently in ref.~\cite{DHSS}.
We find that the conformal-invariance assumption is a powerful one;
it allows all eight integrals contributing to \eqn{FourLoopPlanarResult}
to be present, while forbidding all but two of the additional
no-triangle integrals. The potential contributions of these remaining
two integrals are easily ruled out by examining the two-particle cuts.
\end{itemize}
Next we elaborate on each step in the analysis.
\subsection{Unitarity construction}
\label{unitarity subsection}
Our assumption, that there are no integrals with triangle or
bubble subgraphs in multi-loop ${\cal N}=4$ super-Yang-Mills theory,
is sometimes referred to as the ``no triangle hypothesis''.
(Such an assumption also appears to be applicable to ${\cal N}=8$
supergravity, at least at one loop~\cite{GravNoTriangle}.)
We now discuss evidence in favor of this hypothesis.
First, notice that a bubble subgraph would lead to an
ultraviolet subdivergence. In the absence of cancellations
between different integral topologies, such subdivergences
are forbidden by the finiteness of MSYM~\cite{Finiteness}.
Keep in mind that all cancellations between different particles
in the supermultiplet have already been taken into account, so
the coefficients of all bubble integrals should indeed be zero.
Next, suppose there were a triangle subgraph in some multi-loop
integral. Excise a region around the triangle, cutting open the
propagators attaching it to the rest of the integral. This excised region represents
a one-particle-irreducible triangle-type contribution to a
one-loop $n$-particle scattering amplitude.
If all $n$ cut legs are gluons, then we know that
such a contribution is forbidden in MSYM, for arbitrary
$n$, by applying loop-momentum power-counting to a computation
of the one-loop amplitude using background-field
gauge~\cite{SuperspaceBook,NeqFourOneLoop}.
However, in the present case some of the $n$ legs might be
associated with other fields of the ${\cal N}=4$ supermultiplet.
Supersymmetry Ward identities~\cite{SWI} typically relate
many such amplitudes to the gluonic case, but we do not
know of a general proof. Thus we do not claim to
have a full proof that all topologies with triangle subgraphs
are absent, although we strongly suspect that it is the case.
We now wish to classify the four-loop planar integrals
containing no triangle or bubble subgraphs.
We can perform this classification first
for the subset of such graphs that have only cubic (three-point)
vertices, for the following reason:
If a graph contains a quartic or higher-point vertex, we can ``resolve''
the vertex into multiple three-point vertices by moving some of the
lines attached to the vertex. Such a procedure never decreases the
number of propagators associated with a given loop. Therefore a
no-triangle graph will remain a no-triangle graph under this procedure.
So, if we know all the no-triangle graphs with only cubic
vertices, we can get every remaining no-triangle graph by sliding
cubic vertices together until they coincide, a procedure known
as ``canceling propagators''.
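The merging step can be sketched as follows (our own illustration; the labels are ours). Contracting the middle rung of the double box would merge its two trivalent vertices into a quartic one, but it creates two triangle sub-loops, so this particular cancellation is forbidden by the requirement of at least four propagators around every sub-loop:

```python
def cancel_propagator(edges, e):
    """Slide the two trivalent vertices of edge e together into one vertex."""
    u, v = e
    out = []
    for a, b in edges:
        if (a, b) == e:
            continue
        out.append((u if a == v else a, u if b == v else b))
    return out

def has_triangle(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    # an edge whose endpoints share a neighbor closes a triangle
    return any(adj[a] & adj[b] for a, b in edges)

# internal lines of the planar double box (vertices 5, 6 end the middle rung)
double_box = [(1, 5), (5, 2), (2, 3), (3, 6), (6, 4), (4, 1), (5, 6)]
assert not has_triangle(double_box)
# canceling the middle rung creates two triangle sub-loops -> not allowed
assert has_triangle(cancel_propagator(double_box, (5, 6)))
print("triangle filter rejects the contracted middle rung")
```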
\begin{figure}[t]
\centerline{\epsfxsize 1.3 truein \epsfbox{gdiagram.eps}}
\caption{The only one-particle-irreducible purely-cubic
four-loop four-point graph with no triangle or bubble subgraphs,
besides the rung-rule graphs in \fig{rrFigure}. }
\label{gFigure}
\end{figure}
The cubic subset can be classified iteratively in the number
of loops using the Dyson-Schwinger equation. In order to
use the Dyson-Schwinger approach for the case at hand,
the planar four-loop four-point amplitude, we also need to
classify the planar cubic no-triangle graphs of the following types:
one loop for the number of legs $n$ up to 7, two loops
for $n$ up to 6, and three loops for $n$ up to 5.
The result is that there are a total of 13 planar cubic four-loop
four-point no-triangle graphs.
Of the 13 graphs, seven are one-particle-irreducible.
Six of these seven have the topology of the rung-rule graphs
shown in \fig{rrFigure}. The seventh is shown in \fig{gFigure}.
Note that here we are only classifying
graphs, and not yet specifying the loop-momentum polynomials
associated with each graph.
The six one-particle-reducible graphs can be obtained
by sewing external trees onto four-loop graphs with
either two or three external legs. (There are no planar cubic
no-triangle graphs at four loops with fewer than two
external legs.) From the point of view of the cuts,
these one-particle-reducible graphs are equivalent to
certain of the non-cubic graphs obtained by canceling external
propagators, so we shall defer their description briefly.
\begin{figure}[t]
\centerline{\epsfxsize 6.5 truein \epsfbox{notriangle.eps}}
\caption{No-triangle planar four-loop graphs, in addition
to those given in \figs{rrFigure}{nonrrFigure}. The notation
indicates the rung-rule graph in \fig{rrFigure}, or graph (g) in
\fig{gFigure}, that a given graph here is derived from
by canceling propagators. }
\label{NoTriangleFigure}
\end{figure}
The next step is to cancel propagators between vertices
in the cubic graphs in \figs{rrFigure}{gFigure}. That is, we merge adjacent
three-point vertices into four-point (or higher-point) vertices
by eliminating the line(s) between them, while insisting on
at least four propagators around every sub-loop.
This procedure is also straightforward to carry out.
It generates the two non-rung-rule graphs in the
expression~(\ref{FourLoopPlanarResult}), shown in \fig{nonrrFigure},
as well as the 16 additional graphs shown in \fig{NoTriangleFigure}.
We have given each graph a notation which indicates the rung-rule
graph (or graph (g)) from which it can be generated by canceling one or more
propagators. For example, graph (b${}_3$) is found by canceling
propagators 13 and 14 in rung-rule graph (b), and graph (e${}_6$)
is found by canceling propagators 8 and 11 in rung-rule graph (e).
Some graphs in \fig{NoTriangleFigure} can be generated from more
than one cubic graph. In fact, five of the six
one-particle-reducible cubic graphs mentioned earlier are equivalent,
upon canceling external propagators, to diagrams in \fig{NoTriangleFigure},
namely graphs (b${}_2$), (d${}_3$), (d${}_5$), (e${}_1$), and (g${}_1$).
Hence we do not provide a separate figure for the
one-particle-reducible cubic graphs.
The sixth graph has the form of a massless version of graph (d${}_5$),
sewn as an external bubble to a four-point tree graph. Such
integrals vanish in dimensional regularization, so we need
not consider them.
To complete the direct justification of the four-loop
result~(\ref{FourLoopPlanarResult}), given the no-triangle assumption,
we just need to show that each graph appearing in
figs.~\ref{rrFigure}, \ref{nonrrFigure}, \ref{gFigure}
and \ref{NoTriangleFigure} can be detected by at least one of the
generalized cuts we have computed,
out of the six cuts shown in \fig{CutsFigure}.
Table~\ref{CutDetectTable} summarizes
some of the cuts that detect the no-triangle graphs.
Some of the graphs are detected by other cuts as well,
but for brevity we have not listed these in the table.
Almost all graphs appear in multiple cuts.
Only two of the graphs, (b${}_4$) and (e${}_6$), appear
in a unique cut, the ``3--lower-3'' cut labeled (vi) in \fig{CutsFigure}.
\begin{table}
\caption{\label{CutDetectTable}
The no-triangle graphs, and some of the cuts from \fig{CutsFigure}
that detect them.
(In some cases, the diagram must be rotated or flipped first.)}
\vskip .4 cm
\begin{tabular}{||c|c||c|c||c|c||}
\hline
\hline
Graph & Cuts & Graph & Cuts & Graph & Cuts \\
\hline
\hline
(a) & (i), (ii), (iii) &
(b${}_1$) & (v), (vi) &
(d${}_5$) & (i), (iii), (iv) \\
\hline
(b) & (i), (v) &
(b${}_2$) & (i), (vi) &
(e${}_1$) & (i), (iii), (iv) \\
\hline
(c) & (i), (ii), (iii), (iv) &
(b${}_3$) & (v), (vi) &
(e${}_2$) & (iii), (iv) \\
\hline
(d) & (i), (iii), (v) &
(b${}_4$) & (vi) &
(e${}_3$) & (i), (vi) \\
\hline
(e) & (i), (iii), (iv), (vi) &
(c${}_1$) & (i), (iii), (iv)&
(e${}_4$) & (i), (vi) \\
\hline
(f) & (iii), (iv), (vi) &
(d${}_1$) & (i), (iii), (iv) &
(e${}_5$) & (i), (iii), (iv) \\
\hline
(d${}_2$) & (iii), (iv) &
(d${}_3$) & (i), (iii), (iv) &
(e${}_6$) & (vi) \\
\hline
(f${}_2$) & (iii), (vi) &
(d${}_4$) & (i), (iii), (iv) &
(g) & (i), (vi) \\
\hline
& &
& &
(g${}_1$) & (i), (vi) \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Conformal properties}
\label{conformalsubsection}
We have finished our justification of the
representation~(\ref{FourLoopPlanarResult}) for the four-loop planar
amplitude. In the rest of this section we would like to examine the
consequences of making a stronger assumption than the no-triangle
hypothesis. This assumption is that each of the integral functions
that appears is conformally invariant. Here we are inspired by the
discussion of conformally-invariant integrals by Drummond, Henn,
Sokatchev, and one of the authors~\cite{DHSS}. Although the
requirement of conformal invariance is natural because of the
conformal invariance of the theory in four dimensions, we do not
have a proof that these integrals are the only ones that can
appear in the amplitudes. Nevertheless, as we shall see, the
conformal properties offer a rather useful guide.
(It is possible that extensions of the conformal-invariance
analysis in ref.~\cite{GKS} could be used to prove that only
such conformally invariant integrals can be present.)
We actually wish to study the conformal properties of the integrals in
four dimensions, yet they are ill-defined there because of the infrared
divergences associated with on-shell, massless external legs. We
therefore adopt a different infrared regularization of the integrals by taking
the external legs off shell, letting $k_i^2\neq0$, $i=1,2,3,4$, instead of
using a dimensional regulator as in the rest of the paper. We demand
that each integral that appears be conformally invariant. Actually,
the integrals need only transform covariantly,
carrying conformal weights associated with each of the external legs,
in such a way that they can be made invariant by multiplying by appropriate
overall factors of $s$ or $t$. In canceling the conformal weights
using the external invariants, we should not use any factors that vanish
as the external legs return on shell, in the limit $k_i^2 \to 0$.
Such factors would lead to power-law divergences, or to a vanishing
of the integrals, that is too severe compared with the typical
logarithmic dependence on $k_i^2$ expected from the known form of the
infrared singularities. Thus integrals that require powers of $k_i^2$
to be conformally invariant should not appear in any on-shell amplitude.
This on-shell restriction turns out to be a powerful one.
The net result of the conformal-invariance requirement
will be that, besides the
eight integrals already present in \eqn{FourLoopPlanarResult} and
\figs{rrFigure}{nonrrFigure}, remarkably only two other potential
conformal integrals survive. The first of them is the (d${}_5$) graph
from \fig{NoTriangleFigure}. This graph has the structure of a
potential propagator correction at four loops. However, in the
context of the $gg\to gg$ scattering amplitude of the ${\cal N}=4$
theory, it can be excluded very simply, without computing any
generalized cuts. One only has to use the structure of the three-loop
amplitude, and the simplest two-particle cut, cut (i) in
\fig{CutsFigure}, to see that it cannot be present. The second of the
additional potential integrals has the topology of integral (d)
in \fig{rrFigure}, but it has a different numerator factor,
to be described below.
This new integral, (d${}'$), is also easy to exclude using
the same two-particle cut (i) in \fig{CutsFigure}.
It is rather striking that every integral identified via the unitarity
cuts is conformally invariant, and that there are only two other conformal
integrals, which can be eliminated easily via two-particle cuts.
Furthermore, the two integrals that are not present differ from the
eight that are present in how the conformal invariance is achieved,
as we shall discuss below.

To analyze the conformal invariance properties, we shall use changes
of variables as suggested in ref.~\cite{DHSS}. (The same conformal
integrals appear in coordinate-space correlators of gauge invariant
operators~\cite{Schubert}; the coincidence is presumably an accident
of there being a limited number of conformal integrals.) As an
example, consider the two-loop double box depicted in
\fig{LowerLoopFigure},
\begin{eqnarray}
{\cal I}^{(2)}(s,t) &=& (-i e^{\epsilon \gamma} \pi^{-2})^2 s^2 t
\nonumber\\
&&\hskip 3mm\times
\int { {\rm d}^4 p\ {\rm d}^4 q
\over p^2 (p-k_1)^2 (p-k_1-k_2)^2 q^2 (q-k_4)^2 (q-k_3-k_4)^2
(p+q)^2}\,.\hskip 8mm
\label{DoubleBox}
\end{eqnarray}
We have taken $d=4$, with the $k_i$ off shell to serve as
an infrared regulator. Next, use the change of variables,
\begin{equation}
k_1 = x_{41} \,, \hskip .8cm k_2 = x_{12} \,,
\hskip .8cm k_3 = x_{23} \,, \hskip .8cm k_4 = x_{34} \,, \hskip .8cm
p = x_{45}, \hskip .8cm q = x_{64} \,,
\label{twoloopktox}
\end{equation}
where $x_{ij} \equiv x_i - x_j$. This choice of variables
automatically ensures that momentum is conserved,
$k_1 + k_2 + k_3 + k_4 = 0$. Note that the external invariants
become
\begin{equation}
s = (k_1 + k_2)^2 = x_{24}^2 \,,
\hskip 1cm t = (k_2 + k_3)^2 = x_{13}^2 \,.
\label{sttox}
\end{equation}
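
This parameterization is easy to check numerically. The following sketch is an
editorial illustration (not from the original text): the variable names and the
mock Minkowski product with signature $(+,-,-,-)$ are our own. It confirms that
momentum conservation holds identically in the $x$ variables, and that $s$ and
$t$ reduce to $x_{24}^2$ and $x_{13}^2$ for random dual points:

```python
import numpy as np

rng = np.random.default_rng(0)

def mink(v, w):
    """Mock Minkowski product with signature (+,-,-,-)."""
    return v[0] * w[0] - v[1:] @ w[1:]

# Random dual points x_1..x_4; the external momenta are their differences,
# k_1 = x_41, k_2 = x_12, k_3 = x_23, k_4 = x_34.
x1, x2, x3, x4 = rng.standard_normal((4, 4))
k1, k2, k3, k4 = x4 - x1, x1 - x2, x2 - x3, x3 - x4

# Momentum conservation is automatic in the x variables.
assert np.allclose(k1 + k2 + k3 + k4, 0.0)

# s = (k_1 + k_2)^2 = x_24^2 and t = (k_2 + k_3)^2 = x_13^2.
assert np.isclose(mink(k1 + k2, k1 + k2), mink(x2 - x4, x2 - x4))
assert np.isclose(mink(k2 + k3, k2 + k3), mink(x1 - x3, x1 - x3))
```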
Performing the change of variables~(\ref{twoloopktox})
in the double box, we obtain,
\begin{equation}
{\cal I}^{(2)}(x_1,x_2,x_3,x_4) = (-i e^{\epsilon \gamma} \pi^{-2})^2
x_{24}^4 x_{13}^2 \int {\rm d}^4 x_5\ {\rm d}^4 x_6 \,
{1\over x_{45}^2 x_{15}^2 x_{25}^2 x_{46}^2 x_{36}^2 x_{62}^2 x_{56}^2}
\,.
\label{TwoLoopDualForm}
\end{equation}
The principal conformal-invariance constraints on integrals
constructed from the invariants $x_{ij}^2$ are exposed by
performing an inversion on all points,
$x_i^\mu \rightarrow {x_i^\mu / x_i^2}$.
(We cannot impose such an inversion on the $k_i$ directly,
because it would violate the constraint of momentum conservation.)
Under the inversion, we have
\begin{equation}
x_{ij}^2 \rightarrow {x_{ij}^2 \over x_i^2 x_j^2} \,, \hskip 1 cm
{{\rm d}^4 x_5} \rightarrow {{\rm d}^4 x_5 \over x_5^{8}} \,, \hskip 1 cm
{{\rm d}^4 x_6} \rightarrow {{\rm d}^4 x_6 \over x_6^{8}} \,.
\end{equation}
It is easy to see that the planar double-box
integral is invariant under inversion, {\it i.e.}
$
{\cal I}^{(2)}(x_1,x_2,x_3,x_4)\ \rightarrow\ {\cal I}^{(2)}(x_1,x_2,x_3,x_4).
$
For this result to hold, it is important that the unintegrated points
$x_1,x_2,x_3,x_4$ appear in the numerator just enough times to
cancel their appearance in the denominator. The integrated points
$x_5,x_6$ each appear four times in the denominator.
The dimensionally-regulated version of the
conformally-invariant integral ${\cal I}^{(2)}(x_1,x_2,x_3,x_4)$
is precisely the form in which the two-loop double box appears
in the two-loop planar amplitude (\ref{TwoloopPlanarResult}).

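
The inversion rule for $x_{ij}^2$ is an elementary algebraic identity, which can
also be verified numerically. The sketch below (our own illustration, using
Euclidean signature for simplicity; the identity itself is signature-independent)
checks it on random points:

```python
import numpy as np

rng = np.random.default_rng(1)

def invert(x):
    """Conformal inversion x^mu -> x^mu / x^2 (Euclidean signature)."""
    return x / (x @ x)

xi, xj = rng.standard_normal((2, 4))
yi, yj = invert(xi), invert(xj)

# Check (x_i - x_j)^2 -> x_ij^2 / (x_i^2 x_j^2) under inversion.
lhs = (yi - yj) @ (yi - yj)
rhs = ((xi - xj) @ (xi - xj)) / ((xi @ xi) * (xj @ xj))
assert np.isclose(lhs, rhs)
```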
In order to analyze the conformal properties of integrals beyond two
loops, it is helpful to follow the discussion of ref.~\cite{DHSS}, and
introduce a set of dual diagrams~\cite{Nakanishi}. We construct the
dual to a diagrammatic representation of a planar loop-momentum
integral by placing vertices corresponding to the $x_i$ at the centers
of the loops and in between pairs of external lines. Denominator
factors of $x_{ij}^2$ are denoted by drawing dark solid (blue) lines
between the corresponding vertices. Numerator factors are denoted by
drawing dotted lines between the corresponding vertices. One solid
line crosses each propagator in a loop. The conformal weight in each
$x_i$ variable is then given by the number of solid lines entering the
corresponding vertex, less the number of dotted lines. A
conformally-invariant integral will have weight four at each internal
vertex (to balance the weight of the integration measure), and weight
zero at each external vertex. In the diagrams, we shall omit one
dotted line connecting external vertices $x_2$ and $x_4$, and another
one connecting $x_1$ and $x_3$, in order to simplify the presentation.
These two omitted lines correspond to the overall factor of $st =
x_{24}^2 x_{13}^2$ omitted from the momentum-space diagrams in
\figs{rrFigure}{nonrrFigure}. Expressions for integrals in terms of
the $x_i$ variables can be read off quickly from the dual diagrams
(and vice versa).
\begin{figure}[t]
\centerline{\epsfxsize 2.4 truein \epsfbox{twoloopdual.eps}}
\caption{The two-loop planar double box and its dual diagram.
The double box is represented by light colored lines and the
dual diagram by dark (blue) lines.
A dark line connecting $x_i$ with $x_j$ represents the factor $1/x_{ij}^2$.
A dotted line signifies a numerator factor of $x_{ij}^2$.
The momentum corresponding to any $x_{ij}$ is given by the sum of
momenta of the light lines crossing the dark line joining $x_i$ and $x_j$.
An overall factor of $st$ has been removed for clarity.
}
\label{twoloopdualFigure}
\end{figure}
For example, \fig{twoloopdualFigure} contains the diagram dual to the
double box. Each of the solid lines starting at an $x_i$ and ending at
an $x_j$ corresponds to a factor of $1/x_{ij}^2$ appearing in
\eqn{TwoLoopDualForm}, while the dotted line corresponds to a factor
of $x_{24}^2 = s$. With this identification the dual figure is in
direct correspondence with \eqn{TwoLoopDualForm}, after removing one
overall factor of $st = x_{24}^2 x_{13}^2$ (in order to reduce the
visual clutter in the diagram). Because the number of solid lines
minus the number of dotted lines at each of the two internal vertices
$x_5$ and $x_6$ is four, the integral is conformally invariant with
respect to these points. Similarly, since each of the external points
$x_1,x_2, x_3, x_4$ has one more solid line than dotted line
emanating from it, the conformal weight is unity.
If we multiply back by the $x_{24}^2 x_{13}^2$ factor removed previously,
then we obtain an integral which is conformally
invariant with respect to the external as well as internal points.

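
The weight-counting rule is simple enough to mechanize. The following sketch (a
hypothetical edge-list encoding of our own, not from the original text) encodes
the double-box dual diagram of \eqn{TwoLoopDualForm}, with the full numerator
$x_{24}^4 x_{13}^2$ restored, and verifies weight four at each internal vertex
and weight zero at each external vertex:

```python
from collections import Counter

# Dual diagram of the double box, eq. (TwoLoopDualForm):
# solid lines = propagator factors 1/x_{ij}^2,
# dotted lines = numerator factors x_{ij}^2 (x_24^4 x_13^2 in full).
solid = [(4, 5), (1, 5), (2, 5), (4, 6), (3, 6), (2, 6), (5, 6)]
dotted = [(2, 4), (2, 4), (1, 3)]

def weights(solid, dotted):
    """Conformal weight at each vertex: #solid lines minus #dotted lines."""
    w = Counter()
    for i, j in solid:
        w[i] += 1
        w[j] += 1
    for i, j in dotted:
        w[i] -= 1
        w[j] -= 1
    return w

w = weights(solid, dotted)
assert w[5] == 4 and w[6] == 4          # internal: balances the measure
assert w[1] == w[2] == w[3] == w[4] == 0  # external: fully invariant
```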
The assumption of conformal invariance for the integrals immediately
implies the ``no-triangle'' rule for the momentum-space diagrams.
A loop with only three propagators would necessarily result in a
negative weight for the $x$ point corresponding to the loop momentum,
because only three lines enter the dual diagram vertex.
That negative weight can only be eliminated by additional denominator
powers of $x$ --- that is, by additional propagators which would
turn the triangle subgraph into at least a box subgraph.

By placing the dual diagrams on top of the original momentum-space
diagrams, we can read off directly the change of variables between
the $x_{ij}$ and the momenta:
$x_{ij}$ is just the sum of the momenta of each momentum-space
line crossed by the dual line running from $x_i$ to $x_j$.

In the rung-insertion rule, when a rung is inserted between two
parallel lines with momenta $l_1$ and $l_2$, taking an $L$-loop
contribution to an $(L+1)$-loop one, the loops on
either side of the parallel lines each acquire a new propagator.
Hence each of their dual $x$ vertices has a new solid line emanating
from it. The rung-rule momentum-insertion factor of $i(l_1+l_2)^2$
is represented by a dotted line stretching between the two vertices,
so it restores the conformal invariance for those two loops.
This property may help to explain the form of the rung rule.
\begin{figure}[t]
\centerline{\epsfxsize 6.2 truein \epsfbox{rrdual.eps}}
\caption{The rung-rule dual diagrams. A factor of $st$ has been
removed.}
\label{rrdualFigure}
\end{figure}
\begin{figure}[t]
\centerline{\epsfxsize 3.8 truein \epsfbox{nonrrdual.eps}}
\caption{The non-rung rule dual diagrams. A factor of $st$ has been
removed.}
\label{nonrrdualFigure}
\end{figure}
\begin{figure}[t]
\centerline{\epsfxsize 3.8 truein \epsfbox{confdual.eps}}
\caption{The two four-loop dual integrals, in addition
to those given in \figs{rrFigure}{nonrrFigure}, that survive
the requirement of conformal invariance. Both integrals are
ruled out by two-particle cuts. In this case, no factor of
$st$ has been removed.
}
\label{confdualFigure}
\end{figure}
Let us now focus on four-loop four-point integrals. We may find all
conformally-invariant integrals by drawing the set of all dual
diagrams that have conformal weight zero with respect to all $x_i$.
Again, to prevent cluttering the diagrams in
\figs{rrdualFigure}{nonrrdualFigure} with dotted lines, we have removed
a factor of $st$ from the figures.
conformal four-loop integrals, as it turns out, contains the rung-rule
diagrams of \fig{rrdualFigure}, the non-rung-rule diagrams of
\fig{nonrrdualFigure}, and the two extra integrals shown in
\fig{confdualFigure}.\footnote{It is possible to dress
diagram (f) in \fig{rrdualFigure} with dotted lines in a second way,
but that dressing is related simply to the one shown by the
exchange of legs $k_1$ and $k_3$, or in other words $s \leftrightarrow t$.}
\begin{figure}[t]
\centerline{\epsfxsize 5.5 truein \epsfbox{nonconfexamples.eps}}
\caption{Two examples of dual diagrams which do not lead to
new conformal integrals in the on-shell limit, for reasons
discussed in the text. }
\label{nonconfexamplesFigure}
\end{figure}
Because conformal invariance in the sense discussed above implies the
``no-triangle'' rule, the complete list of candidate graphs for
conformal integrals is given by the same no-triangle list described
above, namely the diagrams in figs.~\ref{rrFigure}, \ref{nonrrFigure},
\ref{gFigure} and \ref{NoTriangleFigure}. Upon attempting to draw the dual
diagrams to \figs{gFigure}{NoTriangleFigure}, we find that in all cases except
(d${}_5$), either it is impossible to add dotted or even solid lines
so as to obtain a conformally-invariant integral,
or the conformally-invariant cases so obtained are equivalent to
previously-included cases, or else
conformal invariance can only be achieved by adding solid or dotted
lines connecting neighboring external vertices. The latter lines
are not admissible, however, because differences of neighboring
external $x$ points correspond to individual external
momenta $k_i$. The corresponding factors are then $k_i^2$.
Their presence would lead to an unwanted power-law vanishing or
divergence of the integral in the on-shell limit.

\Fig{nonconfexamplesFigure} illustrates two examples of dual diagrams
that cannot be made conformal, or that reduce to previous cases.
First consider the graph labeled (b${}_4$). The internal dual points
all have weight four, so no dotted lines can attach to them.
The dotted line shown between external points $x_1$ and $x_3$
reduces the $x_1$ weight to zero and the $x_3$ weight to one.
It is nonvanishing in the massless limit. However, there is
no other numerator factor available that is nonvanishing in this limit,
to further reduce the conformal weights of external legs $x_3$
and $x_4$. In particular, $x_{34}^2 = k_4^2 \to 0$ in the on-shell
limit.

As a second example, consider the graph labeled (d${}_1$)
in \fig{nonconfexamplesFigure}. Here there is one pentagon subgraph,
and hence one internal $x$ point to which a dotted line can attach.
The dotted line shown can be used to reduce the conformal weight
of $x_4$ from three to two, which would then balance its weight
with that of the opposite external point $x_2$. (Balanced opposing
weights can always be reduced to zero using powers of $s=x_{24}^2$ or
$t=x_{13}^2$.)
However, this choice of dotted line merely cancels the propagator
that it crosses, and thereby reduces the (d${}_1$) graph to
the (d${}_2$) graph in \fig{nonrrdualFigure}, which we already
know is conformally invariant and present in the four-loop planar
amplitude. Without using the dotted line shown, it is impossible to
balance the $x_4$ conformal weight, in the massless limit, so the only
surviving possibility reduces to an existing integral.

It is curious that the two conformally-invariant integrals
represented in \fig{confdualFigure}, which are
not present in the planar four-loop four-point amplitude,
can be distinguished from the eight
in \figs{rrdualFigure}{nonrrdualFigure} that are present,
by the fact that they do not have explicit overall factors
of both $s$ {\it and} $t$. As drawn in \fig{confdualFigure},
they have three powers of $s$, but no powers of $t$.
The integral (d${}'$) has the same basic topology as (d)
in \fig{nonrrdualFigure}, but the dotted lines emanating
from the two pentagon loops are connected to external legs
$x_1$ and $x_3$ instead of to each other. So it has two
``rung-rule-type'' numerator factors involving the squares
of the sums of three loop momenta, but no power of $t$.
At the moment, however, we have no good argument why explicit
factors of both $s$ and $t$ have to be present in order that an
integral be present in the four-point amplitude.
\section{Analytic and numerical results}
\label{IntegralResultsSection}
Our next task is to evaluate the integrals entering the four-loop
planar amplitude~(\ref{FourLoopPlanarResult}) in a Laurent
expansion around $\epsilon=0$.
For the case at hand, massless gluon-gluon scattering in ${\cal N}=4$
super-Yang-Mills theory, all of the integrals encountered
can be evaluated through three loops in terms of a class of functions
known as harmonic polylogarithms (HPLs)~\cite{HPL}.
We expect this class of functions
to continue to suffice at four loops. We know it suffices through
${\cal O}(\epsilon^{-4})$, for which we have analytic results.
\subsection{Analytic expressions through ${\cal O}(\epsilon^{-4})$}
\label{AnalyticSubsection}
The analytic results for the four-loop integrals were obtained
with the help of the {\tt MB} program~\cite{CzakonMB}.
We let $x=-t/s$ and $L=\ln(-x)$. Through ${\cal O}(\epsilon^{-4})$ the
results, expressed in terms of the HPLs defined in
\app{HarmonicPolyLogAppendix}, are,
\begin{eqnarray}
{\cal I}^{\rm (a)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 4 \over 9 \, \epsilon^8 }
+ { 35\over 72 \, \epsilon^7 } L
- { 187 \, \pi^2 \over 432 \, \epsilon^6 }
\nonumber\\
&&\hskip0.1cm
+ \, { 10\over 9 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
+ {23\over48} \pi^2 L - {1169\over240} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, {10\over 9 \, \epsilon^4} \Biggl[
- 22 \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ {L} {} ( 16 \, H_{0,0,1}(x) + H_{0,1,1}(x) + H_{1,0,1}(x) )
- {L^2 \over2} ( 10 \, H_{0,1}(x) + H_{1,1}(x) )
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} ( 7 \, H_{0,1}(x) + H_{1,1}(x) - L \, H_{1}(x) )
\nonumber\\
&&\hskip1.3cm
+ {2\over3} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
- {97\over12} \zeta_3 \, L
- {2339\over3600} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IaAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (b)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 4 \over 9 \, \epsilon^8 }
+ { 17\over 18 \, \epsilon^7 } L
+ {1\over \epsilon^6} \Biggl[ { 53 \over 72 } L^2
- { 211 \over 432 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
+ {25\over96} L^3
- {45\over64} \pi^2 L - {601\over96} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^4} \Biggl[
- 10 \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {71\over8} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} \Bigl( {31\over4} \, H_{0,1}(x) + H_{1,1}(x) \Bigr)
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( 3 \, H_{0,1}(x) + H_{1,1}(x)
- {15\over 8} L \, H_{1}(x)
+ {73\over96} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {53\over48} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
- {L^4\over48}
- {1579\over96} \zeta_3 \, L
- {743\over4608} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IbAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (c)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 4 \over 9 \, \epsilon^8 }
+ { 13\over 24 \, \epsilon^7 } L
+ {1\over 18 \, \epsilon^6} \Biggl[ L^2
- { 29 \over 3 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
- {L^3\over24}
+ {\pi^2\over12} L - {1175\over192} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^4} \Biggl[
- {73\over4} \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {107\over8} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} \Bigl( {17\over2} \, H_{0,1}(x) + H_{1,1}(x) \Bigr)
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( {23\over4} \, H_{0,1}(x) + H_{1,1}(x)
- {7\over 8} L \, H_{1}(x)
- {5\over12} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {29\over48} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
+ {L^4\over48}
- {253\over32} \zeta_3 \, L
- {5663\over23040} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IcAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (d)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 4 \over 9 \, \epsilon^8 }
+ { 11\over 18 \, \epsilon^7 } L
+ {1\over 8 \, \epsilon^6} \Biggl[ L^2
- { 169 \over 54 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 10 \over 9 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
- {3\over40} L^3
- {11\over80} \pi^2 \, L - {521\over240} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 10 \over 9 \, \epsilon^4} \Biggl[
- {11\over5} \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {5\over2} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} \Bigl( {14\over5} \, H_{0,1}(x) + H_{1,1}(x) \Bigr)
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( {2\over5} \, H_{0,1}(x) + H_{1,1}(x)
- {7\over 10} L \, H_{1}(x)
- {27\over40} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {31\over60} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
+ {3\over80} L^4
- {271\over120} \zeta_3 \, L
- {101\over600} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IdAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (e)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 4 \over 9 \, \epsilon^8 }
+ { 3\over 4 \, \epsilon^7 } L
+ {1\over \epsilon^6} \Biggl[ {25 \over 72} L^2
- { 49 \over 108 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
- {L^3\over24}
- {41\over96} \pi^2 \, L - {1657\over384} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 8 \over 9 \, \epsilon^4} \Biggl[
- {25\over4} \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {47\over8} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} \Bigl( {11\over2} \, H_{0,1}(x) + H_{1,1}(x) \Bigr)
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( {7\over4} \, H_{0,1}(x) + H_{1,1}(x)
- {11\over 8} L \, H_{1}(x)
- {5\over12} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {41\over48} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
- {13\over384} L^4
- {107\over16} \zeta_3 \, L
- {7153\over46080} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IeAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (f)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 8 \over 9 \, \epsilon^8 }
+ { 107\over 72 \, \epsilon^7 } L
+ {1\over \epsilon^6} \Biggl[ {49 \over 72} L^2
- { 235 \over 216 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 4 \over 3 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
- {7\over144} L^3
- {11\over12} \pi^2 \, L - {1001\over144} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 4 \over 3 \, \epsilon^4} \Biggl[
- {13\over2} \, H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {23\over4} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} ( 5 \, H_{0,1}(x) + H_{1,1}(x) )
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( {11\over6} \, H_{0,1}(x) + H_{1,1}(x)
- {13\over12} L \, H_{1}(x)
- {19\over48} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {17\over24} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
- {5\over288} L^4
- {1405\over144} \zeta_3 \, L
+ {4253\over17280} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,,
\label{IfAnalytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (d_2)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
- { 2 \over 3 \, \epsilon^5 } \, \zeta_3
+ {1\over \epsilon^4} \Biggl[ - {4\over3} \, \zeta_3 \, L
+ {11\over432} \, \pi^4 \Biggr]
\ +\ {\cal O}(\epsilon^{-3}) \Biggr\} \,, \hskip 3.5 cm
\label{Id2Analytic}
\end{eqnarray}
\begin{eqnarray}
{\cal I}^{\rm (f_2)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 16 \over 9 \, \epsilon^8 }
+ { 32 \over 9 \, \epsilon^7 } L
+ {1\over \epsilon^6} \Biggl[ {91 \over 36} L^2
- { 235 \over 108 } \pi^2 \Biggr]
\nonumber\\
&&\hskip-0.1cm
+ \, { 8 \over 3 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
+ {29\over144} L^3
- {199\over144} \pi^2 \, L - {1073\over144} \zeta_3 \Biggr]
\nonumber\\
&&\hskip-0.1cm
+ \, { 8 \over 3 \, \epsilon^4} \Biggl[
- H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.0cm
+ L {}\Bigl( {5\over2} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} ( 4 \, H_{0,1}(x) + H_{1,1}(x) )
\nonumber\\
&&\hskip1.0cm
- {\pi^2\over2} \Bigl( H_{1,1}(x)
- {3\over2} L \, H_{1}(x)
+ {115\over144} L^2 \Bigr)
\nonumber\\
&&\hskip1.0cm
+ {11\over12} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
- {L^4\over36}
- {1037\over72} \zeta_3 \, L
+ {6467\over17280} \pi^4 \Biggr]
\nonumber\\
&&\hskip-0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,.
\label{If2Analytic}
\end{eqnarray}
Using \eqn{FourLoopPlanarResult}, together with the above results for
the integrals, the total four-loop planar amplitude,
$M_4^{(4)}$, has the expansion,
\begin{eqnarray}
M_4^{(4)}(s,t) &=& (-t)^{-4\epsilon} \Biggl\{
{ 2 \over 3 \, \epsilon^8 }
+ { 4 \over 3 \, \epsilon^7 } L
+ {1\over \epsilon^6} \Biggl[ L^2
- { 13 \over 18 } \pi^2 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 4 \over 3 \, \epsilon^5} \Biggl[
H_{0,0,1}(x) - L \, H_{0,1}(x)
+ {1\over2} ( L^2 + \pi^2) H_{1}(x)
+ {L^3\over4}
- {5\over6} \pi^2 \, L - {59\over12} \zeta_3 \Biggr]
\nonumber\\
&&\hskip0.1cm
+ \, { 4 \over 3 \, \epsilon^4} \Biggl[
- H_{0,0,0,1}(x) - H_{0,0,1,1}(x)
- H_{0,1,0,1}(x) - H_{1,0,0,1}(x)
\nonumber\\
&&\hskip1.3cm
+ L {}\Bigl( {5\over2} \, H_{0,0,1}(x)
+ H_{0,1,1}(x) + H_{1,0,1}(x) \Bigr)
- {L^2 \over2} ( 4 \, H_{0,1}(x) + H_{1,1}(x) )
\nonumber\\
&&\hskip1.3cm
- {\pi^2\over2} \Bigl( H_{1,1}(x)
- {3\over2} L \, H_{1}(x)
+ {15\over16} L^2 \Bigr)
\nonumber\\
&&\hskip1.3cm
+ {11\over12} L^3 \, H_{1}(x) + \zeta_3 \, H_{1}(x)
+ {L^4\over32}
- {28\over3} \zeta_3 \, L
+ {637\over17280} \pi^4 \Biggr]
\nonumber\\
&&\hskip0.1cm
+\ {\cal O}(\epsilon^{-3}) \Biggr\} \,.
\label{M4Analytic}
\end{eqnarray}
\begin{table}
\caption{\label{FourLoopTable}
Numerical values of individual four-loop integrals, and $M_4^{(4)}$,
at $(s,t)=(-1,-1)$. The uncertainties at orders $\epsilon^{-3}$ and $\epsilon^{-2}$
are indicated in parentheses. (The presence of two digits in parentheses
signifies the uncertainty in the last two digits of the central value.)}
\vskip .4 cm
\def\hskip .001 cm $\!\!${\hskip .001 cm $\!\!$}
\begin{tabular}{||c||c|c|r|r|r|r|r||}
\hline
\hline
Integral \hskip .001 cm $\!\!$ & $\epsilon^{-8}$ \hskip .001 cm $\!\!$ & $\epsilon^{-7}$
\hskip .001 cm $\!\!$ & $\epsilon^{-6}$\hbox{~~~~~~~} \hskip .001 cm $\!\!$ & $\epsilon^{-5}$\hbox{~~~~~~~}
\hskip .001 cm $\!\!$ & $\epsilon^{-4}$\hbox{~~~~~~~} \hskip .001 cm $\!\!$ & $\epsilon^{-3}$\hbox{~~~~~~~~}
\hskip .001 cm $\!\!$ & $\epsilon^{-2}$\hbox{~~~~~~~} \\
\hline
\hline
(a) & $
4/9
$ & $
0
$ & $
-4.27225931
$ \hskip .001 cm $\!\!$ & $
-11.30789527
$ \hskip .001 cm $\!\!$ & $
-18.44325855
$ \hskip .001 cm $\!\!$ & $
-58.84504
(10)
$ \hskip .001 cm $\!\!$ & $
-180.852
(3) \hphantom{1}
\, $ \\
\hline
(b) & $
4/9
$ & $
0
$\hskip .001 cm $\!\!$ & $
-4.82057067
$ \hskip .001 cm $\!\!$ & $
-10.53107909
$ \hskip .001 cm $\!\!$ & $
3.00827162
$ \hskip .001 cm $\!\!$ & $
67.62584
(13)
$ \hskip .001 cm $\!\!$ & $
190.235
(3) \hphantom{1}
\, $
\\
\hline
(c) & $
4/9
$ & $
0
$ & $
-5.30034310
$ \hskip .001 cm $\!\!$ & $
-10.38082198
$ \hskip .001 cm $\!\!$ & $
12.55376125
$ \hskip .001 cm $\!\!$ & $
81.91311
(64)
$ \hskip .001 cm $\!\!$ & $
99.292
(5) \hphantom{1}
\, $ \\
\hline
(d) & $
4/9
$ & $
0
$ & $
-3.86102580
$ \hskip .001 cm $\!\!$ & $
-7.70172456
$ \hskip .001 cm $\!\!$ & $
-16.94003184
$ \hskip .001 cm $\!\!$& $
-80.03212
(51)
$ \hskip .001 cm $\!\!$ & $
-52.555
(21)
\, $ \\
\hline
(e) & $
4/9
$ & $
0
$ & $
-4.47787607
$ \hskip .001 cm $\!\!$ & $
-8.45252237
$ \hskip .001 cm $\!\!$& $
-4.13769237
$ \hskip .001 cm $\!\!$ & $
-11.60392
(20)
$ \hskip .001 cm $\!\!$ & $
28.823
(9) \hphantom{1}
\, $ \\
\hline
(f) & $
8/9
$ & $
0
$ & $
-10.73776405
$ \hskip .001 cm $\!\!$ & $
-16.90406921
$ \hskip .001 cm $\!\!$ & $
46.68731190
$ \hskip .001 cm $\!\!$ & $
219.08111
(12)
$ \hskip .001 cm $\!\!$ & $
364.167
(7) \hphantom{1}
\, $\\
\hline
(d${}_2$)
& $
0
$ \hskip .001 cm $\!\!$ & $
0
$ \hskip .001 cm $\!\!$ & $
0\hbox{~~~~~~~~}
$ \hskip .001 cm $\!\!$ & $
-0.80137127
$ \hskip .001 cm $\!\!$ & $
2.48032408
$ \hskip .001 cm $\!\!$ & $
36.23672
(11)
$ \hskip .001 cm $\!\!$ & $
132.811
(13)
\, $ \\
\hline
(f${}_2$)
& $
16/9
$ \hskip .001 cm $\!\!$ & $
0
$ \hskip .001 cm $\!\!$ & $
-21.47552810
$ \hskip .001 cm $\!\!$ & $
-35.41088096
$ \hskip .001 cm $\!\!$ & $
92.92365579
$ \hskip .001 cm $\!\!$ & $
521.48787
(31)
$ \hskip .001 cm $\!\!$ & $
1314.856
(12)
\, $ \\
\hline
\hline
$M_4^{(4)}$
& $
2/3
$ \hskip .001 cm $\!\!$ & $
0
$ \hskip .001 cm $\!\!$ & $
-7.12804762
$ \hskip .001 cm $\!\!$ & $
-13.64293336
$ \hskip .001 cm $\!\!$ & $
2.64276920
$ \hskip .001 cm $\!\!$ & $
27.34123
(13)
$ \hskip .001 cm $\!\!$ & $
33.278
(7) \hphantom{1}
\, $ \\
\hline
\hline
\end{tabular}
\end{table}
In table~\ref{FourLoopTable} we present the numerical values of the
eight integrals appearing in the four-loop planar amplitude, through
${\cal O}(\epsilon^{-2})$.
The values through ${\cal O}(\epsilon^{-4})$ can be
found easily from the analytic expressions given above.
The numerical values for $\epsilon^{-3}$ and $\epsilon^{-2}$
were obtained using the {\tt CUBA} numerical integration
package~\cite{CUBA}, which is incorporated into the {\tt MB}
program~\cite{CzakonMB}. In the table, we also give
the total value of the amplitude $M_4^{(4)}$, according to
\eqn{FourLoopPlanarResult}.
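
As a spot check (an editorial sketch, not from the original text), several table
entries follow directly from the analytic expressions above at $(s,t)=(-1,-1)$,
i.e.\ $x=-1$, $L=0$, using the standard HPL values
$H_{0,\ldots,0,1}(x)=\mathop{\rm Li}\nolimits_n(x)$ and $H_1(-1)=-\ln 2$
(assuming the conventional definitions of \app{HarmonicPolyLogAppendix}):

```python
import mpmath as mp

mp.mp.dps = 25
pi, z3 = mp.pi, mp.zeta(3)
li3 = mp.polylog(3, -1)   # H_{0,0,1}(-1)
h1 = -mp.log(2)           # H_1(-1) = -ln(1-x) at x = -1

# Integral (a), eq. (IaAnalytic) at L = 0: 1/eps^6 and 1/eps^5 coefficients.
a6 = -mp.mpf(187) / 432 * pi**2
a5 = mp.mpf(10) / 9 * (li3 + pi**2 / 2 * h1 - mp.mpf(1169) / 240 * z3)

# Integral (d_2), eq. (Id2Analytic) at L = 0: 1/eps^5 and 1/eps^4 coefficients.
d5 = -mp.mpf(2) / 3 * z3
d4 = mp.mpf(11) / 432 * pi**4

# Compare against the corresponding rows of the table.
assert abs(a6 - mp.mpf("-4.27225931")) < mp.mpf("1e-7")
assert abs(a5 - mp.mpf("-11.30789527")) < mp.mpf("1e-7")
assert abs(d5 - mp.mpf("-0.80137127")) < mp.mpf("1e-7")
assert abs(d4 - mp.mpf("2.48032408")) < mp.mpf("1e-7")
```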
\subsection{Values of lower-loop amplitudes at $(s,t)=(-1,-1)$}
\label{LowerLoopSymmetricSubsection}
Next we need to compare our results for \eqn{FourLoopPlanarResult}
with the prediction~(\ref{FourLoopFourPtIteration}) based
on the known structure of the infrared poles.
To do this, we evaluate the lower-loop amplitudes, using formulas from
ref.~\cite{Iterate3}, at
the symmetric kinematical point $(s,t)=(-1,-1)$, through the accuracy
needed to evaluate \eqn{FourLoopFourPtIteration} to ${\cal O}(\epsilon^{-2})$.
At this point, only a limited set of constants appears,
built out of $\ln2$, $\pi$, $\zeta_3$, $\zeta_5$, $\mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})$,
$\mathop{\rm Li}\nolimits_5({\textstyle{1\over2}})$, $\mathop{\rm Li}\nolimits_6({\textstyle{1\over2}})$, and the harmonic sum~\cite{HPLMaitre},
\begin{equation}
s_6 \equiv S(\{-5,-1\},\infty)
= \sum_{i_1=1}^\infty { (-1)^{i_1} \over i_1^5 }
\sum_{i_2=1}^{i_1} { (-1)^{i_2} \over i_2 }
= 0.98744142640329971377\ldots.
\label{s6def}
\end{equation}
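
The quoted value of $s_6$ is straightforward to reproduce by truncating the
nested sum; since the outer series alternates with terms falling off like
$1/i_1^5$, a few hundred terms already suffice. A minimal sketch (ours, using
exact rational arithmetic):

```python
from fractions import Fraction

# Partial sums of s_6 = S({-5,-1}, infinity).
total, inner = Fraction(0), Fraction(0)
for i1 in range(1, 401):
    inner += Fraction((-1) ** i1, i1)
    total += Fraction((-1) ** i1, i1**5) * inner

# The alternating remainder is bounded by the first omitted term, ~1e-13.
assert abs(float(total) - 0.98744142640329971377) < 1e-10
```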
The one-loop amplitude $M_4^{(1)}(s,t;\epsilon)$ in \eqn{OneLoopAmplitude},
evaluated at $(s,t)=(-1,-1)$, has the $\epsilon$-expansion,
\begin{eqnarray}
M_4^{(1)}(-1,-1;\epsilon) &=&
- {2 \over \epsilon^2} + {2\over3} \pi^2
+ \epsilon {}\biggl( {\pi^2\over2} \ln2 + {77\over12} \zeta_3 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ \epsilon^2 \biggl( - 2 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})
- {1\over12} \ln^{4}2
+ {\pi^2\over3} \ln^{2}2 + {49\over720} \pi^4 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ \epsilon^3 \biggl( 2 \mathop{\rm Li}\nolimits_5({\textstyle{1\over2}})
- {1\over60} \ln^{5}2 + {\pi^2\over9} \ln^{3}2
+ {\pi^4\over360} \ln2
- {245\over144} \pi^2 \zeta_3 + {62\over5} \zeta_5 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ \epsilon^4 \biggl( - 2 \mathop{\rm Li}\nolimits_6({\textstyle{1\over2}})
+ {\pi^2\over6} \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})
- {1\over360} \ln^{6}2 + {5\over144} \pi^2 \ln^{4}2
- {\pi^4\over180} \ln^{2}2
\nonumber\\ &&\hskip1.0cm
-\ {7\over6} \pi^2 \zeta_3 \ln2
- {343\over36} \zeta_3^2 - {\pi^6\over10080} \biggr)
+\ {\cal O}(\epsilon^5)
\label{M1_11_A}\\
&=&
- {2\over\epsilon^2} + 6.5797362673929057461
\nonumber\\ &&\hskip0.1cm
+\ 11.133742693869288271\ \epsilon + 7.1556624851455749140\ \epsilon^2
\nonumber\\ &&\hskip0.1cm
-\ 5.760188577405266135\ \epsilon^3 - 23.794568007684383734\ \epsilon^4
+ {\cal O}(\epsilon^5)\,. \nonumber\\
\label{M1_11_Num}
\end{eqnarray}
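
The numerical expansion~(\ref{M1_11_Num}) follows from evaluating the constants
in \eqn{M1_11_A}; for instance, through ${\cal O}(\epsilon^2)$ (an editorial
cross-check using {\tt mpmath}, not part of the original text):

```python
import mpmath as mp

mp.mp.dps = 30
pi, ln2, z3 = mp.pi, mp.log(2), mp.zeta(3)
li4 = mp.polylog(4, mp.mpf(1) / 2)

# Coefficients of eps^0, eps^1, eps^2 in eq. (M1_11_A).
eps0 = 2 * pi**2 / 3
eps1 = pi**2 / 2 * ln2 + mp.mpf(77) / 12 * z3
eps2 = -2 * li4 - ln2**4 / 12 + pi**2 / 3 * ln2**2 + mp.mpf(49) / 720 * pi**4

assert abs(eps0 - mp.mpf("6.5797362673929057461")) < mp.mpf("1e-18")
assert abs(eps1 - mp.mpf("11.133742693869288271")) < mp.mpf("1e-17")
assert abs(eps2 - mp.mpf("7.1556624851455749140")) < mp.mpf("1e-18")
```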
The two-loop amplitude $M_4^{(2)}(s,t;\epsilon)$
in~\eqn{TwoloopPlanarResult}, is given by
\begin{eqnarray}
M_4^{(2)}(-1,-1;\epsilon) &=&
{2 \over \epsilon^4} - {5 \, \pi^2 \over4 \, \epsilon^2 }
+\ {1\over\epsilon} \biggl( - \pi^2 \ln2 - {37\over3} \zeta_3 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ 4 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}}) + {1\over6} \ln^{4}2
- {2\over3} \pi^2 \ln^{2}2 - {\pi^4\over30}
\nonumber\\ &&\hskip0.1cm
+\ \epsilon {}\biggl( - 4 \mathop{\rm Li}\nolimits_5({\textstyle{1\over2}}) + {1\over30} \ln^{5}2
- {2\over9} \pi^2 \ln^{3}2 + {43\over360} \pi^4 \ln2
+ {77\over12} \pi^2 \zeta_3 - {3919\over80} \zeta_5 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ \epsilon^2 \biggl( - 7 s_6 + 4 \mathop{\rm Li}\nolimits_6({\textstyle{1\over2}}) + {22\over3} \pi^2 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})
+ {1\over180} \ln^{6}2 + {\pi^2\over4} \ln^{4}2
- {59\over240} \pi^4 \ln^{2}2
\nonumber\\ &&\hskip1.0cm
+\ {307\over24} \pi^2 \zeta_3 \ln2
+ {4319\over288} \zeta_3^2 - {541\over6480} \pi^6 \biggr)
+\ {\cal O}(\epsilon^3)
\label{M2_11_A}\\
&=&
{2\over\epsilon^4} - {12.337005501361698274 \over \epsilon^2}
\nonumber\\ &&\hskip0.1cm
-\ {21.666456936158779398 \over \epsilon} - 4.2998350584631215560
\nonumber\\ &&\hskip0.1cm
+\ 30.635795346547106621\ \epsilon + 68.218654436238118625\ \epsilon^2
+ {\cal O}(\epsilon^3)\,. \nonumber\\
\label{M2_11_Num}
\end{eqnarray}
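
The same kind of cross-check works at two loops; e.g.\ the $\epsilon^0$
coefficient of \eqn{M2_11_A} reproduces the corresponding number in
\eqn{M2_11_Num} (again an editorial sketch):

```python
import mpmath as mp

mp.mp.dps = 30
pi, ln2 = mp.pi, mp.log(2)
li4 = mp.polylog(4, mp.mpf(1) / 2)

# eps^0 coefficient of eq. (M2_11_A).
eps0 = 4 * li4 + ln2**4 / 6 - 2 * pi**2 * ln2**2 / 3 - pi**4 / 30
assert abs(eps0 - mp.mpf("-4.2998350584631215560")) < mp.mpf("1e-18")
```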
The three-loop amplitude $M_4^{(3)}(s,t;\epsilon)$
in~\eqn{ThreeLoopPlanarResult} is given by
\begin{eqnarray}
M_4^{(3)}(-1,-1;\epsilon) &=&
- {4\over 3\,\epsilon^6} + {7\,\pi^2 \over 6 \, \epsilon^4}
+ {1\over\epsilon^3} \biggl( \pi^2 \ln2 + {71\over6} \zeta_3 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon^2} \biggl( - 4 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}}) - {1\over6} \ln^{4}2
+ {2\over3} \pi^2 \ln^{2}2 - {89\over3240} \pi^4 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon} \biggl( 4 \mathop{\rm Li}\nolimits_5({\textstyle{1\over2}}) - {1\over30} \ln^{5}2
+ {2\over9} \pi^2 \ln^{3}2 - {73\over360} \pi^4 \ln2
- {3779\over432} \pi^2 \zeta_3 + {8621\over120} \zeta_5 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ 14 s_6 - 4 \mathop{\rm Li}\nolimits_6({\textstyle{1\over2}}) - {91\over6} \pi^2 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})
- {1\over180} \ln^{6}2 - {83\over144} \pi^2 \ln^{4}2
+ {191\over360} \pi^4 \ln^{2}2
\nonumber\\ &&\hskip1.0cm
- 23\ \pi^2 \zeta_3 \ln2
- {1385\over144} \zeta_3^2 + {43159\over233280} \pi^6
+\ {\cal O}(\epsilon)
\label{M3_11_A}\\
&=&
- {4\over 3\,\epsilon^6} + {11.514538467937585056\over \epsilon^4}
\nonumber\\ &&\hskip0.1cm
+\ {21.065428484578982255\over\epsilon^3} - {1.6228781926783846589\over\epsilon^2}
\nonumber\\ &&\hskip0.1cm
-\ {40.219043687209842734\over\epsilon} - 67.305777557207060997
+\ {\cal O}(\epsilon)\,.
\label{M3_11_Num}
\end{eqnarray}
The iterative formula for the four-loop amplitude in terms
of the lower-loop amplitudes is given in \eqn{FourLoopFourPtIteration}.
Inserting the values at $(s,t)=(-1,-1)$ of $M_4^{(1)}$,
$M_4^{(2)}$ and $M_4^{(3)}$, we obtain
\begin{eqnarray}
M_4^{(4)}(-1,-1;\epsilon)\Bigr|_{\rm iter.} &=&
{2\over3\,\epsilon^8} - {13\,\pi^2 \over 18\,\epsilon^6}
+ {1\over\epsilon^5} \biggl( - {2\over3} \pi^2 \ln2 - {68\over9} \zeta_3 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon^4} \biggl( {8\over3} \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}}) + {1\over9} \ln^{4}2
- {4\over9} \pi^2 \ln^{2}2 + {89\over2592} \pi^4 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon^3} \biggl( - {8\over3} \mathop{\rm Li}\nolimits_5({\textstyle{1\over2}}) + {1\over45} \ln^{5}2
- {4\over27} \pi^2 \ln^{3}2 + {22\over135} \pi^4 \ln2
\nonumber\\ &&\hskip1.0cm
+ {251\over36} \pi^2 \zeta_3 - {7469\over120} \zeta_5 \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon^2} \biggl( - 14 s_6 + {8\over3} \mathop{\rm Li}\nolimits_6({\textstyle{1\over2}})
+ {139\over9} \pi^2 \mathop{\rm Li}\nolimits_4({\textstyle{1\over2}})
+ {1\over270} \ln^{6}2 + {131\over216} \pi^2 \ln^{4}2
\nonumber\\ &&\hskip1.0cm
-\ {607\over1080} \pi^4 \ln^{2}2
+ {791\over36} \pi^2 \zeta_3 \ln2
+ {895\over432} \zeta_3^2 - {20759\over102060} \pi^6
- {1\over8} f_0^{(4)} \biggr)
\nonumber\\ &&\hskip0.1cm
+\ {\cal O}(\epsilon^{-1})
\label{M4_iter_11_A}\\
&=&
{0.66666666666666666667\over\epsilon^8} - {7.1280476230089812249\over\epsilon^6}
\nonumber\\ &&\hskip0.1cm
-\ {13.642933355332790075\over\epsilon^5} + {2.6427691992903098962\over\epsilon^4}
\nonumber\\ &&\hskip0.1cm
+\ {27.341074205440151100\over\epsilon^3}
\nonumber\\ &&\hskip0.1cm
+\ {1\over\epsilon^2} \Bigl( 29.611139840724282137 - {1\over8} f_0^{(4)} \Bigr)
+\ {\cal O}(\epsilon^{-1})\,.
\label{M4_iter_11_Num}
\end{eqnarray}
Comparing \eqn{M4_iter_11_Num} with the last row of
table~\ref{FourLoopTable}, we verify the pole behavior of the
four-loop amplitude $M_4^{(4)}(-1,-1;\epsilon)$ precisely through
${\cal O}(\epsilon^{-4})$. The agreement at ${\cal O}(\epsilon^{-3})$ is good to 5 digits.
At ${\cal O}(\epsilon^{-2})$, we can extract the value of $f_0^{(4)}$.
We obtain,
\begin{equation}
f_0^{(4)} =
-29.335 \pm 0.052
\,.
\label{f04ans}
\end{equation}
This value should be compared with that predicted by Eden and Staudacher,
from \eqn{gammaKB},
\begin{equation}
f_0^{(4)}\Bigr|_{\rm ES}\ =\
- {73\over2520} \, \pi^6 + \zeta_3^2
\ =\
- 26.404825523390660965
\ldots\,.
\label{f04ESans}
\end{equation}
The results do not agree. The difference can be expressed as,
\begin{eqnarray}
f_0^{(4)} &=& f_0^{(4)}\Bigr|_{\rm ES}\ +\ \Delta f_0^{(4)}\,,
\label{DeltafDef}\\
\Delta f_0^{(4)} &=&
- 2.930 \pm 0.052
\ldots\,.
\label{DeltafNum}
\end{eqnarray}
We can also parametrize the difference $\Delta f_0^{(4)}$
as a multiple of the weight-6 expression $\zeta_3^2$.
Making this parametrization, we find that
\begin{eqnarray}
\Delta f_0^{(4)} &=& r \, \zeta_3^2 \,,
\label{rDef}\\
r &=&
- 2.028 \pm 0.036
\ldots\,.
\label{rNum}
\end{eqnarray}
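As an arithmetic cross-check (not part of the original analysis; the `zeta` helper and all names are ours), the values in eqs.~(\ref{f04ESans}), (\ref{DeltafNum}) and (\ref{rNum}) can be reproduced directly from the extracted value~(\ref{f04ans}):

```python
import math

def zeta(s, N=1000):
    # Riemann zeta via a direct sum plus an Euler-Maclaurin tail correction
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

z3 = zeta(3)                                  # Apery's constant, ~1.2020569
f0_4_ES = -73/2520*math.pi**6 + z3**2         # Eden-Staudacher prediction
assert abs(f0_4_ES - (-26.404825523390660965)) < 1e-9

f0_4 = -29.335                                # numerical extraction, +/- 0.052
delta = f0_4 - f0_4_ES                        # the difference Delta f_0^(4)
r = delta / z3**2                             # parametrized as r * zeta_3^2
assert abs(delta - (-2.930)) < 5e-4
assert abs(r - (-2.028)) < 5e-4
```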
This result is quite suggestive. To about 1.5\% precision
on the value of the correction term $\Delta f_0^{(4)}$,
it is equal to $-2\zeta_3^2$, a result which would have
the net effect of {\it flipping the sign of the $\zeta_3^2$
term in the Eden--Staudacher prediction~(\ref{gammaKA}),
while leaving the $\pi^6$ term unaltered}. Of course,
the ES prediction follows directly from an integral equation,
and so flipping the sign of the $\zeta_3^2$ term
is not possible without other modifications.
In \sect{AnalysisSection} we discuss possible reasons
for the discrepancy.
\subsection{Cross checks at asymmetric kinematical points}
In order to cross check our numerical evaluation of the integrals
at the symmetric kinematical point $(s,t)=(-1,-1)$,
as well as check the behavior of the ${\cal O}(\epsilon^{-3})$ and
${\cal O}(\epsilon^{-2})$ terms in the planar four-loop amplitude as a function of
the scattering angle, we have performed the numerical analysis
of the last subsection at three additional kinematical points,
$(s,t)=(-1,-2)$, $(-1,-3)$ and $(-1,-15)$.
We have used the expressions for the lower-loop amplitudes
in ref.~\cite{Iterate3} to numerically evaluate the
infrared-based iterative formula~(\ref{FourLoopFourPtIteration})
at the asymmetric kinematical points $(s,t)=(-1,-2)$,
$(-1,-3)$ and $(-1,-15)$.
The results are,
\begin{eqnarray}
M_4^{(4)}(-1,-2;\epsilon)\Bigr|_{\rm iter.} &=&
{2\over3\,\epsilon^8} - {0.92419624075\over\epsilon^7} - {6.64759460909\over\epsilon^6}
- {4.23222757233\over\epsilon^5}
\nonumber\\ &&\hskip0cm\null
+ {15.89245103368\over\epsilon^4} + {16.11914613046\over\epsilon^3}
\nonumber\\ &&\hskip0cm\null
+ {1\over\epsilon^2} \Bigl( 1.31283053842 - {1\over8} f_0^{(4)} \Bigr)
+\ {\cal O}(\epsilon^{-1})\,,
\label{M4_iter_12_Num}
\end{eqnarray}
\begin{eqnarray}
M_4^{(4)}(-1,-3;\epsilon)\Bigr|_{\rm iter.} &=&
{2\over3\,\epsilon^8} - {1.46481638489\over\epsilon^7} - {5.92109866220\over\epsilon^6}
+ {0.72092946726\over\epsilon^5}
\nonumber\\ &&\hskip0cm\null
+ {19.05722201166\over\epsilon^4} + {4.86152575608\over\epsilon^3}
\nonumber\\ &&\hskip0cm\null
+\ {1\over\epsilon^2} \Bigl( - 5.61581265989 - {1\over8} f_0^{(4)} \Bigr)
+\ {\cal O}(\epsilon^{-1})\,,
\label{M4_iter_13_Num}
\end{eqnarray}
\begin{eqnarray}
M_4^{(4)}(-1,-15;\epsilon)\Bigr|_{\rm iter.} &=&
{2\over3\,\epsilon^8} - {3.61073360147\over\epsilon^7} + {0.20548826868\over\epsilon^6}
+ {14.13192416428\over\epsilon^5}
\nonumber\\ &&\hskip0cm\null
+ {7.41700629511\over\epsilon^4} - {48.55010675803\over\epsilon^3}
\nonumber\\ &&\hskip0cm\null
+\ {1\over\epsilon^2} \Bigl( 43.61197714 - {1\over8} f_0^{(4)} \Bigr)
+\ {\cal O}(\epsilon^{-1})\,.
\label{M4_iter_115_Num}
\end{eqnarray}
Numerical evaluation of the eight four-loop integrals
from \figs{rrFigure}{nonrrFigure} gives the following
results for the amplitude $M_4^{(4)}$ (we omit the
$1/\epsilon^8$ through $1/\epsilon^4$ poles, as they agree analytically),
\begin{eqnarray}
M_4^{(4)}(-1,-2;\epsilon) &=&
{\cal O}(\epsilon^{-8}\!\cdots \epsilon^{-4})+
{16.11929 \pm 0.00008 \over\epsilon^3}
+ {4.985 \pm 0.006 \over\epsilon^2}
+ {\cal O}(\epsilon^{-1})\,,\hskip 10mm
\label{M4_amp_12_Num}\\
M_4^{(4)}(-1,-3;\epsilon) &=&
{\cal O}(\epsilon^{-8}\!\cdots \epsilon^{-4})+
{4.8617 \pm 0.0003 \over\epsilon^3}
- {1.943 \pm 0.008 \over\epsilon^2}
+ {\cal O}(\epsilon^{-1})\,,\hskip7mm
\label{M4_amp_13_Num}\\
M_4^{(4)}(-1,-15;\epsilon) &=&
{\cal O}(\epsilon^{-8}\!\cdots \epsilon^{-4})
- {48.5499 \pm 0.0002 \over\epsilon^3}
+ {47.29 \pm 0.02 \over\epsilon^2}
+ {\cal O}(\epsilon^{-1})\,.\hskip 7mm
\label{M4_amp_115_Num}
\end{eqnarray}
Comparing these sets of numbers at ${\cal O}(\epsilon^{-3})$,
we observe good agreement at all points; the
$(s,t)=(-1,-2)$ point is slightly off, at 1.9\,$\sigma$,
but the other two points are within 1\,$\sigma$.
At ${\cal O}(\epsilon^{-2})$, we can express the agreement
in terms of the parameter $r$ introduced in \eqn{rDef}.
At the asymmetric kinematical points, we extract the values,
\begin{eqnarray}
r &=&
- 2.059 \pm 0.036
, \qquad (s,t)=(-1,-2),
\label{rNum12}\\
r &=&
- 2.062 \pm 0.045
, \qquad (s,t)=(-1,-3),
\label{rNum13}\\
r &=&
- 2.074 \pm 0.104
, \qquad (s,t)=(-1,-15).
\label{rNum115}
\end{eqnarray}
These values are all consistent, within errors, with the
value~(\ref{rNum}) extracted at $(s,t)=(-1,-1)$.
(The values at different points, however, have an unknown
correlation between them, because the integrals contain pieces
that are independent of the kinematics, and the numerical
integration for each value of $(s,t)$ was performed with
the same sequence of quasi-random integration points.
This means the results for $r$ from the various kinematic points
cannot be combined to reduce the error.)
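The extraction at the asymmetric points can be redone from the displayed coefficients alone: subtracting the measured ${\cal O}(\epsilon^{-2})$ coefficient of the amplitude from the iterative prediction isolates $f_0^{(4)}/8$. A sketch (using the rounded numbers quoted above, so the recovered $r$ values agree with eqs.~(\ref{rNum12})--(\ref{rNum115}) only to within that rounding):

```python
import math

def zeta(s, N=1000):
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

z3sq = zeta(3)**2
f0_4_ES = -73/2520*math.pi**6 + z3sq

# (iterative eps^-2 coefficient without the -f0/8 piece, measured eps^-2 coeff.)
points = {
    (-1, -2):  (1.31283053842,  4.985),
    (-1, -3):  (-5.61581265989, -1.943),
    (-1, -15): (43.61197714,    47.29),
}
quoted = {(-1, -2): -2.059, (-1, -3): -2.062, (-1, -15): -2.074}

for st, (iter_c, amp_c) in points.items():
    f0_4 = 8*(iter_c - amp_c)        # iter minus amplitude leaves -f0^(4)/8
    r = (f0_4 - f0_4_ES) / z3sq
    assert abs(r - quoted[st]) < 0.04   # agreement up to display rounding
```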
In summary, our numerical integration of the four-loop planar
integrand results in a value of the cusp anomalous dimension,
\begin{eqnarray}
f_0(\hat{a})
&=& \hat{a} - {\pi^2\over6} \, \hat{a}^2
+ {11\over180} \pi^4 \, \hat{a}^3
- \Bigl( {73\over2520} \, \pi^6 - (1+r) \zeta_3^2 \Bigr) \hat{a}^4
+\ \cdots \,,
\label{gammaKNum}
\end{eqnarray}
where $r$ is given in eqs.~(\ref{rNum}), (\ref{rNum12}),
(\ref{rNum13}) and (\ref{rNum115}). All values are consistent with
the appealing value of $r=-2$, which corresponds to the value $\beta =
\zeta_3$ for the dressing-factor parameter $\beta$ in
\eqn{betashifteq}. As noted above, this value would merely flip the
sign of the $\zeta_3^2$ term in the ES prediction~(\ref{gammaKA}).
However, we obviously cannot exclude, on numerical grounds alone,
nearby rational or transcendental numbers. For example, it is
conceivable that $r$ takes on the value $r = - 5/(2\zeta_3) = -
2.0797\ldots$. This value would correspond to a rational
dressing-factor parameter, $\beta = 5/4$, and would violate the KLOV
maximum-transcendentality principle. On the other hand, additional
evidence points toward $r=-2$ as the correct analytical value, as we
shall discuss in the next section.
\section{Estimating strong-coupling behavior}
\label{LargeCouplingSection}
Kotikov, Lipatov and Velizhanin (KLV)~\cite{KLV} made an intriguing proposal
for approximating the cusp anomalous dimension (or equivalently,
$f_0(\hat{a})$), for all values of the coupling $\hat{a}$.
They suggested combining perturbative information with the
knowledge from string theory~\cite{StrongCouplingLeadingGKP} that
at large values of the coupling $f_0$ has square-root behavior,
$f_0 \sim \sqrt{\hat{a}}$. They proposed the following approximate relation
as a means for incorporating the known analytic behavior,
\begin{equation}
\hat{a}^n = \sum_{r=n}^{2n} C_r \, [\tilde{f}_0(\hat{a})]^r \,,
\label{KLVapproxn}
\end{equation}
where the constants $C_r$ can be fixed using perturbative information.
As we shall discuss below, they can also be fixed using
strong-coupling information. As more information
becomes available, the integer $n$ can be increased.
The strong-coupling square-root behavior of $f_0$
is automatically imposed by the fact that
$\hat{a}^n \sim C_{2n} [f_0(\hat{a})]^{2n}$ at large $\hat{a}$.
Similarly, the weak-coupling linear behavior follows
from $\hat{a}^n \sim C_{n} [f_0(\hat{a})]^{n}$ at small $\hat{a}$.
KLV used the approximation~(\ref{KLVapproxn}) for $n=1$,
together with the one- and two-loop expressions for the
cusp anomalous dimension, to write (for the case of
a supersymmetric regulator)
\begin{equation}
\hat{a} =
\tilde{f}_0 + {\pi^2\over6}\ (\tilde{f}_0)^2
\,. \label{fapprox1}
\end{equation}
This formula makes the weak-coupling predictions (beyond
two loops),
\begin{eqnarray}
\tilde{f}_0 &=&
\hat{a} - {\pi^2\over6}\ \hat{a}^2
+ {\pi^4\over18}\ \hat{a}^3
- {5\over216} \pi^6\ \hat{a}^4 + {7\over648} \pi^8\ \hat{a}^5
- {7\over1296} \pi^{10}\ \hat{a}^6
+ \ldots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$,}
\label{fapprox1weak}
\end{eqnarray}
and it predicts coefficients in the strong-coupling
expansion, as
\begin{eqnarray}
\tilde{f}_0 &=&
{ 2 \sqrt{3} \over \pi}\ \sqrt{{\hat{a}\over2}}\
-\ { 3 \over \pi^2 }
\ +\ {\cal O}( \hat{a}^{-1/2} )
\label{fapprox1strong} \\
&\approx&
1.1027\ \sqrt{{\hat{a}\over2}}\ -\ 0.30396
\ +\ {\cal O}( \hat{a}^{-1/2} ) \,.
\label{fapprox1strongnum}
\end{eqnarray}
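The $n=1$ relation is just a quadratic in $\tilde{f}_0$, so both expansions can be checked directly. The series reversion of $\hat{a} = f + c f^2$ has coefficients given by Catalan numbers times $(-c)^{n-1}$, reproducing \eqn{fapprox1weak}; a sketch (the function and variable names are ours):

```python
import math

# KLV n=1 relation: a = f + c f^2, with c = pi^2/6
c = math.pi**2/6

def f_exact(a):
    # numerically stable root of the quadratic, with f -> a as a -> 0
    return 2*a/(1 + math.sqrt(1 + 4*c*a))

# Weak coupling: reversion coefficients are Catalan numbers times (-c)^(n-1)
catalan = [1, 1, 2, 5, 14, 42]
a = 1e-3
series = sum(catalan[n-1]*(-c)**(n-1)*a**n for n in range(1, 7))
assert abs(f_exact(a) - series) < 1e-15

assert abs(2*c**2 - math.pi**4/18) < 1e-12      # a^3 coefficient
assert abs(5*c**3 - 5*math.pi**6/216) < 1e-9    # a^4 coefficient (magnitude)

# Strong coupling: f ~ sqrt(a/c) - 1/(2c) = (2 sqrt3/pi) sqrt(a/2) - 3/pi^2
assert abs(math.sqrt(2/c) - 2*math.sqrt(3)/math.pi) < 1e-12
assert abs(math.sqrt(2/c) - 1.1027) < 1e-4
assert abs(1/(2*c) - 0.30396) < 1e-5
```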
The coefficients of the
leading~\cite{StrongCouplingLeadingGKP,Kruczenski,Makeenko} and
subleading~\cite{StrongCouplingSubleading} terms in this expansion are
predicted from string theory to be,
\begin{eqnarray}
f_0 &=&
\sqrt{{\hat{a}\over2}}\ -\ { 3\ln2 \over 4\pi }
\ +\ {\cal O}( \hat{a}^{-1/2} )
\label{fstringstrong} \\
&\approx&
\sqrt{{\hat{a}\over2}}\ -\ 0.16547670011448
\ +\ {\cal O}( \hat{a}^{-1/2} ) \,.
\label{fstringstrongnum}
\end{eqnarray}
As noted by KLV, the leading coefficient is estimated correctly
to 10\% by the formula~(\ref{fapprox1}). The subleading coefficient
is off by almost a factor of two, however.
What happens as we incorporate more perturbative information?
Using the three-loop value for the cusp anomalous dimension,
and setting $n=2$ in \eqn{KLVapproxn}, gives the approximation
\begin{equation}
\hat{a}^2 =
(\tilde{f}_0)^2
+ {\pi^2\over3}\ (\tilde{f}_0)^3
+ {\pi^4\over60}\ (\tilde{f}_0)^4
\,. \label{fapprox2}
\end{equation}
This approximation makes the weak-coupling predictions (beyond three
loops),
\begin{eqnarray}
\tilde{f}_0 &=&
\hat{a} - {\pi^2\over6}\ \hat{a}^2
+ {11\over180} \pi^4\ \hat{a}^3
- {31\over1080} \pi^6\ \hat{a}^4
+ {329\over21600} \pi^8\ \hat{a}^5
- {169\over19440} \pi^{10}\ \hat{a}^6
+ \ldots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$,}
\label{fapprox2weak}
\end{eqnarray}
and has the strong-coupling expansion,
\begin{eqnarray}
\tilde{f}_0 &=&
{2\over \pi}\, 15^{1/4}\ \sqrt{{\hat{a}\over2}}\
-\ { 5 \over \pi^2 }
\ +\ {\cal O}( \hat{a}^{-1/2} )
\label{fapprox2strong} \\
&\approx&
1.2529\ \sqrt{{\hat{a}\over2}}\ -\ 0.50661
\ +\ {\cal O}( \hat{a}^{-1/2} ) \,.
\label{fapprox2strongnum}
\end{eqnarray}
For both the leading and next-to-leading coefficients in the
strong-coupling expansion, the three-loop approximation~(\ref{fapprox2})
leads to a larger disagreement with the string
prediction~(\ref{fstringstrongnum}) than does the two-loop
version~(\ref{fapprox1}).
We also note that the numerical value of the four-loop
coefficient predicted by \eqn{fapprox2} is $-27.595$,
which is a bit closer to our result than is the ES prediction,
but still about 6\% off.
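The weak-coupling coefficients in \eqn{fapprox2weak}, including the four-loop value $-31\pi^6/1080 \approx -27.595$, follow from order-by-order reversion of \eqn{fapprox2}. A sketch of that reversion (the recursion and the `conv` helper are ours; at order $\hat{a}^{n+1}$ the unknown coefficient $c_n$ enters only through the $2 c_1 c_n$ term of $f^2$):

```python
import math

b, d = math.pi**2/3, math.pi**4/60  # a^2 = f^2 + b f^3 + d f^4, eq. (fapprox2)
N = 5                               # compute the reversion through a^5
L = N + 2

def conv(x, y):
    # truncated Cauchy product of two coefficient lists
    return [sum(x[i]*y[k-i] for i in range(k+1)) for k in range(L)]

c = [0.0]*L                         # f(a) = sum_n c[n] a^n, with c[1] = 1
c[1] = 1.0
for n in range(2, N+1):
    f2 = conv(c, c)
    f3 = conv(f2, c)
    f4 = conv(f3, c)
    rhs = [f2[k] + b*f3[k] + d*f4[k] for k in range(L)]
    # with c[n] still zero, rhs[n+1] is the known part; the a^(n+1)
    # coefficient of a^2 vanishes, so 2*c[1]*c[n] + rhs[n+1] = 0
    c[n] = -rhs[n+1]/2.0

assert abs(c[2] + math.pi**2/6) < 1e-12
assert abs(c[3] - 11*math.pi**4/180) < 1e-10
assert abs(c[4] + 31*math.pi**6/1080) < 1e-8
assert abs(c[4] - (-27.595)) < 1e-3          # the value quoted in the text
assert abs(c[5] - 329*math.pi**8/21600) < 1e-6
```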
Despite the somewhat discouraging results from including the three-loop
values, we proceed to incorporate our four-loop
cusp anomalous dimension into the $n=3$ version of the
approximation, obtaining
\begin{equation}
\hat{a}^3 =
(\tilde{f}_0)^3
+ {\pi^2\over2}\ (\tilde{f}_0)^4
+ {\pi^4\over15}\ (\tilde{f}_0)^5
+ \left( {\pi^6\over378} - 3(1+r) \zeta_3^2 \right) (\tilde{f}_0)^6
\,. \label{fapprox3}
\end{equation}
Here we have introduced the same coefficient $r$ defined in \eqn{rDef},
which is constrained to be quite close to $-2$ by our
numerical result~(\ref{rNum}).
The weak-coupling expansion of this formula predicts (beyond four loops),
\begin{eqnarray}
\tilde{f}_0 &=&
\hat{a} - {\pi^2\over6}\ \hat{a}^2
+ {11\over180} \pi^4\ \hat{a}^3
- \left( {73\over2520} \pi^6 - (1+r) \zeta_3^2 \right) \hat{a}^4
\nonumber\\
&& \hskip0cm \null
+ \left( {1769\over113400} \pi^8
- {4\over3} (1+r) \pi^2 \zeta_3^2 \right) \hat{a}^5
- \left( {4111\over453600} \pi^{10}
- {13\over10} (1+r) \pi^4 \zeta_3^2 \right) \hat{a}^6
+ \ldots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{fapprox3weak}
\end{eqnarray}
The strong-coupling expansion is given by,
\begin{eqnarray}
\tilde{f}_0 &=&
\alpha^{-1/6}\ \sqrt{{\hat{a}\over2}}
\ -\ { \pi^4 \over 720 \alpha }\
+\ {\pi^2\over256} \left( {\pi^6\over2835} + (1+r)\zeta_3^2 \right)
\alpha^{-11/6}\ \sqrt{{2\over\hat{a}}}
\ +\ {\cal O}( \hat{a}^{-1} ) \,,~~~
\label{fapprox3strong}
\end{eqnarray}
where
\begin{equation}
\alpha\ =\ {\pi^6\over3024} - {3\over8} (1+r) \zeta_3^2 \,,
\label{alphaDef}
\end{equation}
and we have given one more term in the expansion than before.
Curiously, with the ES value $r=0$, the approximate relation~(\ref{fapprox3})
breaks down, because $\alpha$ is negative, and hence $\alpha^{-1/6}$ is
not real.
For $r=-2$, formula~(\ref{fapprox3strong}) becomes,
\begin{eqnarray}
\tilde{f}_0 &\approx&
1.02550\ \sqrt{{\hat{a}\over2}}\ -\ 0.157356\
-\ 0.0562398\ \sqrt{{2\over\hat{a}}}
\ +\ {\cal O}( \hat{a}^{-1} ) \,.
\label{fapprox3strongnum}
\end{eqnarray}
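The numbers in \eqn{fapprox3strongnum} follow from \eqns{fapprox3strong}{alphaDef} by direct evaluation at $r=-2$; a quick numerical check (the `zeta` helper and variable names are ours):

```python
import math

def zeta(s, N=1000):
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

r = -2
z3sq = zeta(3)**2
alpha = math.pi**6/3024 - 3/8*(1 + r)*z3sq           # eq. (alphaDef)

lead = alpha**(-1/6)                                  # coeff of sqrt(a/2)
sub = -math.pi**4/(720*alpha)                         # constant term
nnl = math.pi**2/256*(math.pi**6/2835 + (1+r)*z3sq)*alpha**(-11/6)

assert abs(lead - 1.02550) < 5e-6
assert abs(sub - (-0.157356)) < 1e-6
assert abs(nnl - (-0.0562398)) < 2e-6
```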
The numerical agreement between \eqn{fapprox3strongnum} and
the string-theory result in \eqn{fstringstrongnum} is
quite impressive: The leading coefficient agrees within 2.6\%,
and the subleading coefficient within 5\%.
The coefficient of the term proportional to $\sqrt{{2/\hat{a}}}$
in \eqn{fapprox3strongnum} is fairly small. As we discuss below,
an improved estimate suggests that it may be considerably smaller,
or perhaps even vanish.
In the KLV type of approximation, the predicted value
of the strong-coupling coefficients depends quite sensitively
on the value of the four-loop contribution to the anomalous dimension.
For example, if we scale the numerical value of the four-loop contribution as
follows,
\begin{equation}
f_0^{(4)}\ \rightarrow\ - (1+ \delta)
\Bigl( {73\over2520} \, \pi^6 + \zeta_3^2 \Bigr)
\end{equation}
instead of \eqn{fapprox3strongnum}, we find at strong coupling,
\begin{eqnarray}
\tilde{f}_0 &\approx&
(0.8597725 + 10.9855 \, \delta)^{-1/6}
\sqrt{{\hat{a}\over2}}\ - {1 \over 6.35502 + 81.1995 \, \delta}
\ +\ {\cal O}( \hat{a}^{-1/2} ) \,,
\end{eqnarray}
which exhibits a strong sensitivity under just a few percent change
in the four-loop contribution.
More generally, the sensitivity of the
strong-coupling prediction to the higher-loop orders used in
a KLV approximation allows us to test whether a given ansatz appears
compatible with strong coupling, as we discuss below.
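The quoted sensitivity coefficients follow from \eqn{alphaDef} once the rescaled four-loop value is traded for $(1+r)\zeta_3^2$; a sketch (the function names, and this reading of how $\delta$ enters $\alpha$, are ours):

```python
import math

def zeta(s, N=1000):
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

z3sq = zeta(3)**2
es_mag = 73/2520*math.pi**6 + z3sq    # magnitude being rescaled by (1+delta)

def alpha(delta):
    # eq. (alphaDef) with (1+r) zeta_3^2 = 73 pi^6/2520 - (1+delta)*es_mag
    one_plus_r_z3sq = 73/2520*math.pi**6 - (1 + delta)*es_mag
    return math.pi**6/3024 - 3/8*one_plus_r_z3sq

assert abs(alpha(0) - 0.8597725) < 1e-6          # delta = 0 is the r = -2 case
assert abs(alpha(1) - alpha(0) - 10.9855) < 1e-3  # linear slope in delta
assert abs(alpha(0)**(-1/6) - 1.02550) < 1e-5     # leading coefficient
```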
\begin{figure}[t]
\centerline{\epsfxsize 6.5 truein \epsfbox{cuspapprox123.eps}}
\caption{Approximations to the cusp anomalous dimension in planar MSYM\
based on the formula~(\ref{KLVapproxn}) and perturbative information
through two loops (dotted line), three loops (dot-dash line)
and four loops (solid line). Also shown is the strong-coupling
prediction from string theory (dashed line).}
\label{cuspapprox123Figure}
\end{figure}
In \fig{cuspapprox123Figure} we plot these estimates as a
function of the coupling, and we also display the strong-coupling
limit~(\ref{fstringstrongnum}) predicted by string theory.
As noted above, the approximation~(\ref{fapprox1}) using only two-loop
information works quite well, in fact better than the three-loop
approximation~(\ref{fapprox2}). However, the behavior of the four-loop
formula~(\ref{fapprox3}) is clearly extremely close to the string theory
prediction. For this curve, or rather band, we have varied the parameter
$r$ between $-2.05$ and $-1.95$, consistent with \eqn{rNum}.
We have constructed two further approximations by incorporating the knowledge
of the precise strong-coupling coefficients. Matching the four-loop
perturbative information and the leading strong-coupling coefficient
gives the approximation
\begin{eqnarray}
\hat{a}^4 &=&
(\tilde{f}_0)^4
+ {2\over3} \pi^2\ (\tilde{f}_0)^5
+ {13\over90} \pi^4\ (\tilde{f}_0)^6
+ \left( {23\over1890} \pi^6 - 4(1+r) \zeta_3^2 \right) (\tilde{f}_0)^7
+ 16\ (\tilde{f}_0)^8
\,. \nonumber\\
&&\hskip0cm \null
\label{fapprox4}
\end{eqnarray}
It has the weak-coupling expansion,
\begin{eqnarray}
\tilde{f}_0 &=&
\hat{a} - {\pi^2\over6}\ \hat{a}^2
+ {11\over180} \pi^4\ \hat{a}^3
- \left( {73\over2520} \pi^6 - (1+r) \zeta_3^2 \right) \hat{a}^4
\nonumber\\
&& \hskip0cm \null
+ \left( {4747\over302400} \pi^8
- {3\over2} (1+r) \pi^2 \zeta_3^2 - 4 \right) \hat{a}^5
\nonumber\\
&& \hskip0cm \null
- \left( {5023\over544320} \pi^{10}
- {19\over12} (1+r) \pi^4 \zeta_3^2 - {20\over3} \pi^2 \right) \hat{a}^6
+ \ldots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{fapprox4weak}
\end{eqnarray}
If we add the next-to-leading strong-coupling coefficient
as another constraint, we obtain the approximation,
\begin{eqnarray}
\hat{a}^5 &=&
(\tilde{f}_0)^5
+ {5\over6} \pi^2\ (\tilde{f}_0)^6
+ {\pi^4\over4}\ (\tilde{f}_0)^7
+ \left( {17\over504} \pi^6 - 5(1+r) \zeta_3^2 \right) (\tilde{f}_0)^8
\nonumber\\
&&\hskip0cm \null
+ {240\, \ln2 \over \pi}\ (\tilde{f}_0)^9
+ 32\ (\tilde{f}_0)^{10}
\,, \label{fapprox5}
\end{eqnarray}
with the weak-coupling expansion,
\begin{eqnarray}
\tilde{f}_0 &=&
\hat{a} - {\pi^2\over6}\ \hat{a}^2
+ {11\over180} \pi^4\ \hat{a}^3
- \left( {73\over2520} \pi^6 - (1+r) \zeta_3^2 \right) \hat{a}^4
\nonumber\\
&& \hskip0cm \null
+ \left( {727\over45360} \pi^8
- {5\over3} (1+r) \pi^2 \zeta_3^2 - {48\,\ln2\over\pi} \right) \hat{a}^5
\nonumber\\
&& \hskip0cm \null
- \left( {13387\over1360800} \pi^{10}
- {341\over180} (1+r) \pi^4 \zeta_3^2
- 88 \, \pi \, \ln2 + {32\over5} \right) \hat{a}^6
+ \ldots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{fapprox5weak}
\end{eqnarray}
We may also use the improved approximation (\ref{fapprox5}) to determine
subleading terms in the strong-coupling expansion. We have
\begin{eqnarray}
\tilde f_0 &=&
\sqrt{{\hat{a}\over2}}\ -\ { 3\ln2 \over 4\pi }\ -\
\Bigl( {17\over 161280} \pi^6 - {81\over32 \pi^2} \ln^2 2
- {1\over 64} (1+r) \zeta_3^2 \Bigr) \sqrt{{2\over \hat{a}}}
\ +\ {\cal O}(\hat{a}^{-1}) \,.~~~~
\label{fapprox5strong}
\end{eqnarray}
For $r = -2$, this expression evaluates to
\begin{eqnarray}
\tilde f_0 &\approx&
\sqrt{{\hat{a}\over2}}\ -\ 0.16548 \
-\ 0.000693\ \sqrt{{2\over \hat{a}}}\
+\ {\cal O}(\hat{a}^{-1}) \,.
\end{eqnarray}
The first two numerical coefficients automatically reproduce
the input~\cite{StrongCouplingLeadingGKP,Kruczenski,Makeenko,%
StrongCouplingSubleading}
from string theory, \eqn{fstringstrongnum}, so the content
of this equation is in the coefficient of $\sqrt{2/\hat{a}}$.
Quite interestingly, this coefficient is very small,
suggesting that it may even vanish in the exact expression.
It is an intriguing question whether such a tiny or vanishing result
could be obtained from string theory.
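The smallness of that coefficient arises from a cancellation among three contributions, each of order $0.1$; evaluating \eqn{fapprox5strong} at $r=-2$ makes this explicit (a numerical check only; the names are ours):

```python
import math

def zeta(s, N=1000):
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

r = -2
z3sq = zeta(3)**2
# coefficient of sqrt(2/a) in eq. (fapprox5strong)
coeff = -(17/161280*math.pi**6
          - 81/(32*math.pi**2)*math.log(2)**2
          - (1 + r)/64*z3sq)
assert abs(coeff - (-0.000693)) < 2e-6
# three terms of order 0.1 cancel down to a few parts in 10^4
assert abs(coeff) < 1e-2 * (17/161280*math.pi**6)
```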
We plot these two approximations, for $r=-2$, along with the four-loop
approximation~(\ref{fapprox3}), in \fig{cuspapprox345Figure}.
Obviously, they are extremely close to one another.
As a measure of that, the values they predict for the
five-loop planar cusp anomalous dimension (for $r=-2$)
are 167.03, 166.34 and 165.25, corresponding to the three approximations
in eqs.~(\ref{fapprox3weak}), (\ref{fapprox4weak}) and (\ref{fapprox5weak}).
We expect the last of these numbers to be the most accurate,
given that \eqn{fapprox5weak} uses all available information
at both strong and weak coupling.
\begin{figure}[t]
\centerline{\epsfxsize 6.5 truein \epsfbox{cuspapprox345.eps}}
\caption{Approximations to the cusp anomalous dimension in planar MSYM\
based on the formula~(\ref{KLVapproxn}), four-loop
perturbative information, and zero, one or two constants from the
strong-coupling expansion.}
\label{cuspapprox345Figure}
\end{figure}
Can we use these approximations to guide corrections to the ES prediction
at higher loops? As a first step, consider the five-loop ES prediction in
\eqn{gammaKB}. The ES numerical value is $131.22$, which disagrees
significantly with the prediction of our approximate
formula (\ref{fapprox5weak}). However, we can modify the
five-loop ES prediction (\ref{gammaKB}) in a manner analogous
to the modification required at four loops to fit the value $r=-2$.
We flip the signs of the terms containing $\zeta_n$ values for
odd $n$, and leave untouched the terms containing only even $\zeta$
values (those terms containing only $\pi$'s and rational numbers).
Then the terms containing $\zeta_3$ and/or $\zeta_5$ in
the five-loop coefficient in \eqn{gammaKB} acquire the
same sign as the $\pi^8$ term, instead of the opposite sign, giving
an analytic expression through five loops:
\begin{eqnarray}
f_0^{\rm modified}(\hat{a})
&=& \hat{a} - {\pi^2\over6} \, \hat{a}^2
+ {11\over180} \pi^4 \, \hat{a}^3
- \Bigl( {73\over2520} \, \pi^6 + \zeta_3^2 \Bigr) \hat{a}^4
\nonumber\\
&&\hskip2.0cm\null
+ \Bigl( {887\over56700} \, \pi^8
+ {\pi^2\over3} \, \zeta_3^2
+10 \, \zeta_3 \, \zeta_5 \Bigr) \hat{a}^5
+\ \cdots \,.
\label{f0modified}
\end{eqnarray}
The numerical value of the five-loop coefficient in \eqn{f0modified},
$165.65$,
agrees with the five-loop coefficient in our approximate
formula~(\ref{fapprox5weak}) to a remarkable 0.25\%.
This excellent agreement in turn reinforces the notion
that $r=-2$ gives the correct analytical value of the
four-loop coefficient. (We remark that a similar procedure
at three loops, using the approximation~(\ref{KLVapproxn})
for $n=4$ and incorporating the two leading strong-coupling
coefficients, works very well to estimate the next
perturbative term: It predicts a four-loop
coefficient of 30.22, which compares nicely with our
result~(\ref{f04ans}).)
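The 0.25\% agreement quoted above is simple arithmetic on the five-loop coefficient of \eqn{f0modified}; a check (the `zeta` helper is ours):

```python
import math

def zeta(s, N=1000):
    return sum(n**-s for n in range(1, N)) + N**(1-s)/(s-1) + 0.5*N**-s

z3, z5 = zeta(3), zeta(5)
# five-loop coefficient of eq. (f0modified)
c5 = 887/56700*math.pi**8 + math.pi**2/3*z3**2 + 10*z3*z5
assert abs(c5 - 165.65) < 1e-2
# compare with the five-loop prediction 165.25 of eq. (fapprox5weak)
assert abs(c5/165.25 - 1) < 0.003    # agreement at the 0.25% level
```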
We now continue this procedure to higher loops, comparing
various predictions to our approximate formula. To obtain
the ES prediction to higher-loop order, we use the integral
equation from which eq.~(\ref{gammaKB}) is
derived~\cite{EdenStaudacher},
\begin{equation}
f(\hat{a})\ =\ \hat{a} - 4 \, \hat{a}^2 \int_0^\infty dt \, \hat \sigma(t) \,
{J_1(\sqrt{2 \hat{a}}\, t) \over \sqrt{2 \hat{a}}\, t} \,,
\label{ffromsigma}
\end{equation}
where the fluctuation density $\hat \sigma(t)$ is obtained by
solving the integral equation,
\begin{equation}
\hat \sigma(t) = {t \over e^t -1} \Biggl[
{J_1(\sqrt{2 \hat{a}} \, t)\over \sqrt{2 \hat{a}}\, t}
- 2 \, \hat{a} \int_0^\infty dt' \hat K(\sqrt{2 \hat{a}}\, t, \sqrt{2 \hat{a}}\, t') \,
\hat \sigma(t') \Biggr] \,,
\label{IntegralEquation}
\end{equation}
with the kernel,
\begin{equation}
\hat K(t, t') = {J_1(t) J_0(t') - J_0(t) J_1(t') \over t - t'} \,,
\end{equation}
where $J_0$ and $J_1$ are standard Bessel functions.
Solving this equation perturbatively through 12 loops,
we find the numerical values of the ES prediction to be,
\begin{eqnarray}
f_0^{\rm ES} & = &
\hat{a} - 1.6449 \, \hat{a}^2 + 5.9528 \,\hat{a}^3 - 26.405 \, \hat{a}^4
+ 131.22 \, \hat{a}^5 - 705.54 \, \hat{a}^6 \nonumber \\
&& \null
+ 4021.9\, \hat{a}^7 - 23974. \, \hat{a}^8
+ 1.4800 \times 10^5 \, \hat{a}^9 - 9.3958 \times 10^5 \, \hat{a}^{10} \nonumber \\
&& \null
+ 6.1024 \times 10^6 \, \hat{a}^{11} - 4.0387 \times 10^7 \, \hat{a}^{12}
+ \cdots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{fESweaknum}
\end{eqnarray}
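The lowest nontrivial order of this solution is easy to check by hand: at leading order $\hat\sigma(t) \approx [t/(e^t-1)]\,J_1(\sqrt{2\hat{a}}\,t)/(\sqrt{2\hat{a}}\,t) \approx t/(2(e^t-1))$, and \eqn{ffromsigma} gives $f \approx \hat{a} - \hat{a}^2 \int_0^\infty dt\, t/(e^t-1) = \hat{a} - (\pi^2/6)\,\hat{a}^2$, reproducing the two-loop coefficient $-1.6449$. A numerical sketch of this check (the quadrature setup is ours):

```python
import math

def integrand(t):
    # t/(e^t - 1), with the t -> 0 limit handled explicitly
    return 1.0 if t == 0 else t/math.expm1(t)

# composite Simpson's rule on [0, 60]; the neglected tail is ~1e-24
a_lim, n = 60.0, 6000
h = a_lim/n
s = integrand(0) + integrand(a_lim)
s += 4*sum(integrand((2*i - 1)*h) for i in range(1, n//2 + 1))
s += 2*sum(integrand(2*i*h) for i in range(1, n//2))
integral = s*h/3

# two-loop ES coefficient: f ~ a - integral * a^2 = a - (pi^2/6) a^2
assert abs(integral - math.pi**2/6) < 1e-6
assert abs(integral - 1.6449) < 1e-4   # matches eq. (fESweaknum)
```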
The values in \eqn{fESweaknum} may be contrasted to the ones obtained
from the weak coupling expansion of
our approximate formula (\ref{fapprox5}) (with $r=-2$),
\begin{eqnarray}
f_0^{\rm approx} & = &
\hat{a} - 1.6449 \, \hat{a}^2 + 5.9528 \, \hat{a}^3 -
29.295 \, \hat{a}^4 + 165.25 \, \hat{a}^5 - 1002.7 \, \hat{a}^6 \nonumber \\
&& \null
+ 6379.3 \, \hat{a}^7 - 41997. \, \hat{a}^8
+ 2.8371 \times 10^5 \, \hat{a}^9 -
1.9555 \times 10^6 \, \hat{a}^{10} \nonumber \\
&& \null
+ 1.3699 \times 10^7 \, \hat{a}^{11} -
9.7237 \times 10^7 \, \hat{a}^{12}
+ \cdots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{ApproximateNumerical}
\end{eqnarray}
The agreement between \eqns{fESweaknum}{ApproximateNumerical} at five
loops and beyond is rather poor. This disagreement is not surprising,
given that the ES four-loop value, used as input into our
approximate formula~(\ref{ApproximateNumerical}), differs significantly
from our calculation.
We have evaluated further terms in the weak-coupling
expansion of our approximate formula~(\ref{fapprox5}),
up to 75 loops. The ratio of the $n^{\rm th}$
term to the $(n-1)^{\rm st}$ term in the series slowly settles down to a
value near $(-8)$ --- at 75 loops it is
$(-7.95)$ --- suggesting a radius of convergence of $1/8$,
and a nearest singularity on the negative real axis at $\hat{a}_c = -1/8$.
This value does appear to agree with the location of the nearest
singularity in the original ES equation~\cite{LipatovPotsdam,BESNew}.
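The onset of this behavior is already visible in the twelve displayed coefficients of \eqn{ApproximateNumerical}: the successive ratios alternate in sign and grow steadily in magnitude toward $8$ (a numerical reading of the ratio test; the slow drift continues at higher orders, as stated above):

```python
# coefficients of the weak-coupling series (ApproximateNumerical), r = -2
coeffs = [1, -1.6449, 5.9528, -29.295, 165.25, -1002.7, 6379.3,
          -41997., 2.8371e5, -1.9555e6, 1.3699e7, -9.7237e7]
ratios = [coeffs[i]/coeffs[i-1] for i in range(1, len(coeffs))]

# every ratio is negative, and the magnitudes increase monotonically
assert all(x < 0 for x in ratios)
assert all(abs(ratios[i]) > abs(ratios[i-1]) for i in range(1, len(ratios)))
assert abs(ratios[-1] - (-7.098)) < 0.01   # still drifting toward -8
```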
What happens if we generalize the four- and five-loop sign
flips in the ES prediction, and require that all contributions
at a given loop order come in with the same sign?
Doing so, we obtain,
\begin{eqnarray}
f_0^{\rm modified\ ES}
&= &
\hat{a} - 1.6449 \, \hat{a}^2 + 5.9528 \, \hat{a}^3 -
29.295 \, \hat{a}^4 + 165.65 \, \hat{a}^5 - 1007.2 \, \hat{a}^6 \nonumber \\
&& \null
+ 6404.7 \, \hat{a}^7 - 42020. \, \hat{a}^8
+ 2.8223 \times 10^5 \, \hat{a}^9 -
1.9307 \times 10^6 \, \hat{a}^{10}
\nonumber \\
&& \null
+ 1.3406 \times 10^7 \, \hat{a}^{11} -
9.4226 \times 10^7 \, \hat{a}^{12}
+ \cdots \,, \nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{ESmodifiedNumerical}
\end{eqnarray}
This modified formula is much closer to our approximate expression
(\ref{ApproximateNumerical}), differing even at 12 loops by only 3
percent. Of course, we have no reason to expect this naive
modification of signs in the ES formula to be correct to all loop
orders, though it appears to get the numerically largest contributions
correct. Indeed, if we use this series to systematically construct
KLV approximations with larger values of $n$ in \eqn{KLVapproxn}
as more terms are kept, the large $\hat{a}$
coefficients do not settle to the string values~(\ref{fstringstrong}).
For example, using the weak-coupling series~(\ref{ESmodifiedNumerical})
up to eight loops as input to the KLV approximation~(\ref{KLVapproxn})
with $n=7$ leads to a prediction at strong coupling,
\begin{eqnarray}
\tilde f_0 &\approx&
0.95968 \, \sqrt{{\hat{a}\over2}}\ -\ 0.081182
\ +\ {\cal O}(\hat{a}^{-1/2}) \,,
\label{fapprox8strongnum}
\end{eqnarray}
which compares poorly with either the string result
(\ref{fstringstrongnum}) or with the four-loop prediction
(\ref{fapprox3strongnum}). It is noteworthy that the series
(\ref{ESmodifiedNumerical}) corresponds to the contemporaneous
proposal in ref.~\cite{BESNew}. (We have checked
agreement through 30 loops.)
In order to improve the agreement of the higher-loop
KLV approximations with the string theory strong-coupling coefficients,
further modifications to the proposal are needed.
Similar conclusions follow from the investigation of a
sequence of Pad\'e approximants, as discussed below.
The surprisingly good agreement of
\eqns{ApproximateNumerical}{ESmodifiedNumerical} does, however, suggest
that a simple repair of the integral equation is possible. As a
trivial example, by modifying the kernel in \eqn{IntegralEquation} to
$\hat K(\sqrt{2 \hat{a}}\, t, -\sqrt{2 \hat{a}}\, t')$, we obtain
\begin{eqnarray}
f_0^{\rm modified\ K}
&= &
\hat{a} - 1.6449 \, \hat{a}^2 + 5.9528 \,\hat{a}^3
- 29.295 \, \hat{a}^4 + 165.65 \, \hat{a}^5 - 1011.9 \, \hat{a}^6 \nonumber \\
&& \null
+ 6490.0 \, \hat{a}^7 - 43050. \, \hat{a}^8 + 2.9271 \times 10^5 \, \hat{a}^9 -
2.0282 \times 10^6 \, \hat{a}^{10} \nonumber \\
&& \null
+ 1.4265 \times 10^7 \, \hat{a}^{11} - 1.0156 \times 10^8 \, \hat{a}^{12}
+ \cdots \,,
\nonumber\\
&& \hskip1cm \hbox{as $\hat{a}\to 0$.}
\label{KmodifiedNumerical}
\end{eqnarray}
which is in fairly good agreement with our approximate result.
Although this ad hoc modification should only be taken as an
illustration, it does show how one can use our approximate formula
(\ref{ApproximateNumerical}) to guide corrections to the ES integral
equation. (It also suffers from the problem that strong-coupling
extrapolations do not match the string values.)
A proposed modification of the integral equation, valid
through ${\cal O}(\hat{a}^2)$ and leading to a modification in the anomalous
dimension at ${\cal O}(\hat{a}^4)$, is given in eqs.~(89) and (91) of
ref.~\cite{EdenStaudacher}. The choice $\beta = \zeta_3$ corresponds
to $r = -2$ in \eqn{rDef}, as can be seen from eq.~(92) of the same
reference. This is also the choice preferred by
crossing symmetry in the strong-coupling limit of the dressing factor,
within the family of modifications newly studied in ref.~\cite{BESNew}.
It is useful to investigate other approximation schemes, to make
sure that our results are not significantly biased by
the form of the KLV approximate formula~(\ref{KLVapproxn}).
A well-known method for incorporating information from different
expansion regions --- here from both weak and strong coupling ---
is that of Pad\'e approximants, which fit a function to a ratio
of polynomials in a suitable variable. In the complex
$\hat{a}$ plane,
we expect that $f_0(\hat{a})$ has poles or branch cuts. As
discussed earlier, there is evidence from the behavior of the KLV
approximation and the ES kernel that the singularity nearest the origin
is at $\hat{a}_c = -1/8$. For the purposes of constructing the Pad\'e
approximant, however, we will model the singularities by a branch cut
terminating on the negative real axis at the point $-\xi^2/8$.
This assumption leads us to introduce a variable
$u \equiv \sqrt{1+8\hat{a}/\xi^2}$, and define the $[(m+1)/m]$ Pad\'e
approximant by
\begin{equation}
f_0^{[(m+1)/m]}(u) = (u-1) { N_0 + N_1 \, u + \ldots + N_m \, u^m
\over 1 + D_1 \, u + \ldots + D_m \, u^m } \,.
\label{Padem}
\end{equation}
The relative degrees of the numerator and denominator polynomials
are fixed by the strong-coupling requirement $f_0 \sim \sqrt{\hat{a}} \sim u$.
The factor of $(u-1)$ comes from the vanishing of $f_0$ at $\hat{a}=0$.
The remaining $2m+1$ constants can be fixed by perturbative data,
or by a mix of weak- and strong-coupling data.
Note that when $m$ increases by 1, two more input numbers are required.
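To make the counting and asymptotics concrete, the following sketch (with arbitrary hypothetical values for the $N_i$ and $D_i$, not fitted to any data) verifies that the ansatz~(\ref{Padem}) for $m=2$ vanishes at $\hat{a}=0$ and behaves as $\sqrt{\hat{a}}$ at strong coupling:

```python
import math

def pade32(a_hat, N, D, xi=1.0):
    # [3/2] Pade ansatz in the variable u = sqrt(1 + 8*a_hat/xi^2)
    u = math.sqrt(1.0 + 8.0*a_hat/xi**2)
    return (u - 1.0)*(N[0] + N[1]*u + N[2]*u**2)/(1.0 + D[0]*u + D[1]*u**2)

N = [1.0, 0.5, 0.25]   # hypothetical numerator coefficients N0, N1, N2
D = [0.3, 0.2]         # hypothetical denominator coefficients D1, D2

print(pade32(0.0, N, D))                   # 0.0: the ansatz vanishes at a_hat = 0
# At strong coupling u ~ sqrt(8*a_hat)/xi, so f0 ~ (N2/D2)*sqrt(8*a_hat)/xi:
print(pade32(1e12, N, D)/math.sqrt(1e12))  # -> (N2/D2)*sqrt(8) = 3.5355...
```

The relative degree $[(m+1)/m]$ is what makes the large-$u$ limit linear in $u$, i.e.\ proportional to $\sqrt{\hat{a}}$.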
\begin{figure}[t]
\centerline{\epsfxsize 6.5 truein \epsfbox{Pade4loopcoeff.eps}}
\caption{Value estimated for the four-loop cusp anomalous dimension
$f_0^{(4)}$ by the $[3/2]$ Pad\'e approximant~(\ref{Pade32}),
as a function of the parameter $\xi$ controlling the point
at which the cut is assumed to terminate, $\hat{a} = -\xi^2/8$.}
\label{Pade4loopcoeffFigure}
\end{figure}
Consider first the $[3/2]$ Pad\'e approximant,
\begin{equation}
f_0^{[3/2]}(u) = (u-1) { N_0 + N_1 \, u + N_2 \, u^2
\over 1 + D_1 \, u + D_2 \, u^2 } \,,
\label{Pade32}
\end{equation}
which depends implicitly on $\xi$, the parameter specifying where
the cut terminates on the negative real axis.
We solve for $N_0$, $N_1$, $N_2$, $D_1$ and $D_2$ as a function of $\xi$,
using the two strong-coupling coefficients in \eqn{fstringstrong},
plus the one-, two- and three-loop coefficients. We then expand
the result in $\hat{a}$ in order to estimate the four-loop coefficient.
This procedure will give us information about $\xi$. The four-loop
coefficient is plotted in \fig{Pade4loopcoeffFigure}. The value $\xi=1$
is picked out rather clearly by this plot, as the (approximate)
first location where the curve crosses the value
$f_0^{(4)} = -(73\,\pi^6/2520 + \zeta_3^2) = -29.2947$
(for $r=-2$). At $\xi=1$, formula~(\ref{Pade32}) predicts $-29.521$.
The second crossing is a spurious feature of the Pad\'e,
and the third crossing, near $\xi=2$, is highly implausible,
based on the strong growth in the perturbative coefficients already at
four loops, not to mention at higher orders in the KLV approximate
formulas.
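The closed-form value quoted above is easy to verify numerically (a quick arithmetic check):

```python
import math

zeta3 = 1.2020569031595943            # Apery's constant, zeta(3)
f0_4 = -(73*math.pi**6/2520 + zeta3**2)
print(round(f0_4, 4))                 # -29.2947
```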
In \fig{PadeKLVratioFigure} we plot the ratio of
the $[3/2]$ Pad\'e to the KLV approximate formula~(\ref{fapprox5}),
for various values of $\xi$ near 1 that produce four-loop
coefficients that are not too far from $-29.2947$.
Except for the pathological case of $\xi=1.05$, all the ratios
are within about 1\% of unity. For the preferred value of
$\xi=1$, the $[3/2]$ Pad\'e agrees with the KLV approximate formula
everywhere to within about 0.1\%. We also investigated
the $[4/3]$ Pad\'e, for $\xi=1$. This approximant requires two
more input terms, namely the four- and five-loop coefficients.
We used the ``sign-flipped'' values in \eqn{f0modified}.
The resulting expression predicts the six-loop coefficient to be
$- 1005.5$, in quite good agreement with the value of $-1002.7$
in \eqn{ApproximateNumerical}. On the other hand, a plot of this
function reveals a singularity on the positive real axis, at $u=4.791$,
an artifact of a positive root of the cubic denominator polynomial.
(The quadratic denominator polynomial in the $[3/2]$ Pad\'e
for $\xi=1$ has complex roots, relatively far from the real axis.)
We also constructed a sequence of Pad\'e approximants~(\ref{Padem})
to see how the strong-coupling coefficients are predicted by the
proposed sign-flipped sequence~(\ref{f0modified}). As with
the KLV approximation, we do not find any convergence to the
string values~(\ref{fstringstrong}). For example, the $[4/3]$ Pad\'e,
based on the first seven loops in \eqn{f0modified}, estimates
\begin{eqnarray}
\tilde f_0 &\approx&
0.95696 \, \sqrt{{\hat{a}\over2}}\ -\ 0.061164
\ +\ {\cal O}(\hat{a}^{-1/2}) \,,
\end{eqnarray}
which is close to the KLV estimate~(\ref{fapprox8strongnum}),
and significantly different from the string values.
In summary, the Pad\'e approximation method, though different in
nature, yields numerical results very similar to the KLV
approximation~(\ref{fapprox5}), giving us confidence that the latter
is good to better than
a percent for all values of the coupling. It also gives confirming
evidence that the singularity nearest the origin is located
at $\hat{a}_c=-1/8$.
\begin{figure}[t]
\centerline{\epsfxsize 6.5 truein \epsfbox{PadeKLVratio.eps}}
\caption{Ratio of the $[3/2]$ Pad\'e approximant~(\ref{Pade32}),
for several values of the parameter $\xi$, to the KLV approximate
formula~(\ref{fapprox5}), plotted as a function of the coupling $\hat{a}$.}
\label{PadeKLVratioFigure}
\end{figure}
\section{Analysis and Conclusions}
\label{AnalysisSection}
Using unitarity, we constructed the planar four-loop four-gluon
amplitude in MSYM, and verified the structure of its infrared
divergences through $1/\epsilon^2$. At order $1/\epsilon^2$ we were able to
numerically test the prediction of Eden and Staudacher for the
four-loop cusp anomalous dimension. Our result disagrees with their
predictions. Using an approximate interpolating formula due to
Kotikov, Lipatov and Velizhanin~\cite{KLV}, with the first four loop
orders as input, we are able to estimate the first two coefficients of the
strong-coupling expansion, coming to within 2.6 and 5 percent of the string
theory prediction. Improving the approximate formula by using the
string predictions as additional input, we obtain a formula for the
cusp anomalous dimension which we expect to be valid for all
couplings, to better than one percent. We have confirmed this
using Pad\'e approximations.
Not surprisingly, given the
disagreement at four loops, at higher orders the weak-coupling
expansion of our approximate formula~(\ref{fapprox5}) has
large disagreements with the ES prediction~(\ref{fESweaknum}).
There are a number of conceivable reasons for the discrepancy at
four loops, which we shall examine briefly in turn:
\begin{itemize}
\item Different definition of the coupling $g$.
\item Difference between the cusp anomalous dimension defined
for space-like {\it vs.} time-like kinematics.
\item Breakdown of integrability.
\item The wrapping problem.
\item The dressing factor.
\end{itemize}
We expect the latter reasons to be more likely than the former.
In principle our result could differ from the ES result simply because
we are expanding in coupling constants that differ beginning at three
loops, say $\hat{a}^\prime = \hat{a} + x_0 \, \zeta_3^2 \, \hat{a}^4$ for the
appropriate value of $x_0$. We use the classical coupling as our
expansion parameter, so if this explanation is correct, the asymptotic
Bethe ansatz would have to be using a different coupling. For
example, in the magnon dispersion relation, discussed in eq.~(2.18) of
ref.~\cite{BeisertDynamic}, a parameter is adjusted to match the
perturbative coupling of MSYM, via the relation
$\alpha\beta = \hat{a}/2$. (Here $\alpha$ and $\beta$
describe the action of two central charges adjoined to an
SU$(2|2)$ algebra acting on the spin chain.)
If this simple substitution were to have corrections starting at order $\hat{a}^4$,
it would lead to a four-loop discrepancy between the ES prediction from
integrability and our direct calculation in MSYM.
Suppose our result and the ES result corresponded to cusp
anomalous dimensions of different signature, {\it i.e.} one
space-like and one time-like.
Could this difference account
for the different values? According to the results of
Dokshitzer, Marchesini and Salam~\cite{DMS}, it cannot,
because the leading large-$x$ limits of the space-like and time-like
DGLAP kernels should be identical, to all loop orders.
Integrability of the dilatation operator, interpreted as a spin model
Hamiltonian, has not been proven to hold to all orders. However,
the fact that there are very similar integrable structures at
very strong gauge coupling --- in the classical sigma
model~\cite{BPR} and even in its quantum corrections~\cite{Berkovits} ---
suggests that integrability should persist.
Generically, a wrapping problem can occur if one tries to apply
a Bethe ansatz solution for a spin-interaction that has longer
range than the number of spin sites, which is equal to the twist $J$.
Because the range of the interaction increases with the loop order,
at fixed $J$ this problem becomes more severe with increasing
loop order. It may well be that the wrapping problem prevents the
asymptotic Bethe ansatz from being applied to short (twist-two)
operators, even if it can be used for longer ones. The wrapping
problem is generically supposed to arise at order $\hat{a}^{J-2}$
for operators of twist $J$. It is only other symmetries
that protect the one-, two- and three-loop cusp anomalous
dimensions from being affected by the wrapping problem.
It is quite possible that the discrepancy is resolved by the so-called
``dressing factor'', an overall phase for the AdS/CFT $S$-matrix,
which is consistent with integrability, the PSU$(2,2|4)$ symmetry,
crossing symmetry, {\it etc.} The strong-coupling expansion of the
dressing factor is known to be nontrivial. Indeed, the first two
terms in the semi-classical expansion of the dressing factor on the
string side have been worked
out~\cite{AFS04,OtherDressing,HernandezLopez}. There
have also been very interesting recent analyses of the properties of
the dressing factor under the worldsheet crossing
symmetry~\cite{JanikCrossing,BHL}. However, it remains unclear at
which order in the weak-gauge-coupling expansion the dressing factor
becomes non-trivial. If the reason for the discrepancy is indeed the
dressing factor, then we now know it begins at four loops.
Eden and Staudacher made a specific proposal for modifying
the asymptotic Bethe
ansatz. They proposed a dressing factor containing a parameter $\beta$,
thereby modifying the kernel of their integral
equation. This leads to a shift in the four-loop cusp anomalous
dimension proportional to $\beta$. If this proposal
is correct, we determine the value of this parameter to be close to,
if not exactly, $\beta = \zeta_3$.
Anomalous dimensions of many other classes of operators are linked
through the PSU$(2,2|4)$ symmetry, and are therefore affected by a
nontrivial dressing factor. As just one example, the anomalous
dimension of the operator ${\cal O} \equiv \, {\rm Tr}(X^2 Z^3)+\ldots$, where
$X$ and $Z$ are two of the three complex scalar fields in MSYM, is
altered by an amount proportional to $\beta$, to have the
form~\cite{EdenStaudacher},
\begin{equation}
4\hat{a} - 6 \, \hat{a}^2 + 17 \, \hat{a}^3
- \biggl( {115\over2} + 8 \, \beta \biggr) \hat{a}^4 + \cdots \,.
\label{X2Z3anomdim}
\end{equation}
Thus the Eden--Staudacher proposal for the resolution of the discrepancy
could be tested by a direct computation of the four-loop anomalous
dimension of ${\cal O}$. For $\beta = \zeta_3$,
it would appear to result in a non-uniform transcendentality
for this quantity~\cite{EdenStaudacher}. (A caveat is that
one needs to account also for the transcendentality assignment of
harmonic sums~\cite{KLOV}, which are sums of rational numbers.)
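Numerically, with $\beta=\zeta_3$ the four-loop coefficient in \eqn{X2Z3anomdim} mixes a rational piece with a weight-three constant, which is the non-uniform transcendentality referred to above (a quick check):

```python
zeta3 = 1.2020569031595943
c4 = -(115/2 + 8*zeta3)   # rational 115/2 plus 8*zeta_3, of transcendental weight 3
print(round(c4, 3))       # -67.116
```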
As pointed out by Eden and Staudacher, a nonzero value of $\beta$,
through its effect on the anomalous dimension of ${\cal O}$,
apparently rules out the Hubbard model Hamiltonian as a candidate
for the $SU(2)$ dilatation operator beyond three loops~\cite{RSS}.
The Eden--Staudacher modification is the lowest-order term of a
general form for the dressing factor $S_{12}=\exp(2i\theta(x_1,x_2))$,
with
\begin{equation}
\theta(x_k,x_j) = \sum_{r=2}^\infty \sum_{v=r+1}^\infty
c_{r,v} \bigl( q_r(x_k) q_{v}(x_j)-q_{v}(x_k) q_{r}(x_j)\bigr)\,,
\label{GeneralDressingPhase}
\end{equation}
suggested in the literature~\cite{AFS04,OtherDressing,BeisertKlose,
BeisertInvariant,BeisertDynamic,BeisertPhase,BHL}.
In this equation, the $q_r$ are the spin-chain charges,
\begin{equation}
q_r(x_k) = {i\over r-1}
\biggl( {1\over (x^+_k)^{r-1}}- {1\over (x^-_k)^{r-1}}\biggr)\,,
\end{equation}
and the $x^\pm_i$ are rapidities entering
into the Bethe ansatz by which the spin-chain has been solved at
lower orders.
All known quantities in the ${\cal N}=4$ supersymmetric theory,
whether anomalous dimensions or amplitudes, have uniform
transcendentality: all polylogarithms or zeta constants at a given
order in the perturbative expansion have the same polylog weight, or
transcendentality.
The ES proposal deviates from this observed property
in some quantities, such as \eqn{X2Z3anomdim}.
Is the dressing phase in \eqn{GeneralDressingPhase} general
enough to accommodate our expectations for the ${\cal N}=4$ theory?
As noted by Eden and Staudacher, maintaining uniform transcendentality
with purely rational coefficients
requires every power of $t$ or $t'$ in their integral equation to
come along with a power of the coupling $g$. Now, the coefficient
of the $\zeta_3^2$ terms, which we wish to modify, arises from terms
in the kernel proportional to $t t'$. This term is {\it odd\/} in
$t'$. Such a term cannot arise from Fourier transforming a lone charge
$q$, because we are interested in symmetric densities,
and hence will ultimately take the symmetric part of this transform.
The symmetric part is either zero (for $q_r$ with $r$ odd) or a symmetric
function of $t'$. In \eqn{GeneralDressingPhase}, every term is
linear in a charge $q(x_j)$, and so it cannot lead to the modifications
we need. We must therefore seek a more general dressing
factor, or else introduce non-rational coefficients.
Very interestingly, in a contemporaneous
paper, Beisert, Eden and Staudacher~\cite{BESNew} have
done just the latter, in such a way that, at
least for the anomalous dimension under consideration,
uniform transcendentality is maintained.
This leads to a modified integral equation of the
ES type for the cusp anomalous dimension.
At four loops the resulting anomalous dimension is in
complete agreement with our direct computation of this quantity.
Their proposal is compatible with integrability and with the KLOV
transcendentality principle for the cusp anomalous dimension,
and violates perturbative Berenstein-Maldacena-Nastase (BMN)
scaling~\cite{BMN} starting at four loops.
At five loops their proposal also matches our
prediction of this coefficient, given in \eqn{f0modified}, based on
using both KLV~\cite{KLV} and Pad\'e approximations. At higher-loop
orders, their proposal corresponds to \eqn{ESmodifiedNumerical}
and appears to properly incorporate the numerically largest contributions.
That is, it matches reasonably well our approximate expression
(\ref{ApproximateNumerical}).
However, we find that successive KLV and Pad\'e approximations, based on
truncations at increasingly higher orders through 13 and 11 loops
respectively, do not match the strong-coupling string results. This
indicates a tension between the weak- and strong-coupling results, and
suggests to us that further modifications may be necessary. The question
merits further study.
We remark that, assuming the KLOV conversion principle, our
result for the leading-color four-loop cusp anomalous dimension
also predicts a piece of the corresponding result in QCD.
Which piece? At three loops and below, the QCD result is
a polynomial in the $SU(N_c)$ Casimir operators $C_A$ and $C_F$,
while the MSYM\ result is composed solely of $C_A = N_c$,
so it has no subleading-color terms.
The MSYM\ result provides one constraint on the leading-transcendentality
parts of the coefficients of the color factors in QCD, such that,
after the group theory Casimirs
have been set to the values $C_F = C_A = N_c$, the leading-color terms
are equal to the MSYM\ result. Starting at four loops, however,
there are color factors that cannot be reduced to polynomials
in $C_A$ and $C_F$.
The relevant color factors are those of $L$-loop propagator diagrams.
In MSYM, any triangle subdiagram leads to a group-theory factor of $C_A$,
times a lower-loop group-theory factor. So the question is, when
do no-triangle propagator diagrams first appear? At three loops,
there is one nonplanar no-triangle propagator diagram, but its color factor
vanishes identically using the Jacobi identity
(see {\it e.g.} ref.~\cite{TwoLoopSplitting}).
At four loops, \fig{NoTriangleFigure}(d${}_5$) illustrates
the unique such planar graph (when the two pairs of external lines
on the left and right sides of the graph are each replaced with single lines).
There are a number of nonplanar graphs as well.
Hence the cusp anomalous dimension in MSYM\ can now have subleading-color
terms. Presumably the KLOV principle will apply with respect
to the leading-color, planar terms, once the fermionic color factors in
the QCD result are shifted from the fundamental to adjoint representation.
But will it also apply to subleading-color terms? If conformal invariance
is the main issue~\cite{KLOV}, then it should. But if planarity is
important, perhaps it will not. Of course, the question is a moot one
until the cusp anomalous dimension in QCD, and the subleading-color part
of that in MSYM, are both known at four loops. Similar remarks apply
to the four-loop form-factor quantity ${\cal G}_0^{(4)}$.
In order to confirm that the four-loop cusp anomalous dimension is
given exactly by the ${\cal O}(\hat{a}^4)$ term in \eqn{f0modified}, it would
be important to evaluate it analytically. To do so, the four-loop
integrand~(\ref{FourLoopPlanarResult}) would have to be evaluated
analytically through ${\cal O}(\epsilon^{-2})$ instead of only through
${\cal O}(\epsilon^{-4})$ as done here. It would also be extremely
interesting to evaluate the four-loop integrand through
${\cal O}(\epsilon^{0})$, to explicitly check the four-loop iteration relation
(\ref{FourLoopFourPtIteration}) for scattering
amplitudes~\cite{Iterate2,Iterate3}.
The program {\tt MB}~\cite{CzakonMB} can be used to express the coefficients
at each order in the Laurent expansion in $\epsilon$ as a sum of finite
contour integrals. However, the number of such integrals increases
rapidly with decreasing inverse power of $\epsilon$.
This property, along with an increase in dimensionality of the
integrals, makes it harder to compute the integrals purely numerically,
requiring a substantial increase in computational resources for
the same relative level of accuracy. For example, the order $\epsilon^{-2}$
term in $M_4^{(4)}$ in table~\ref{FourLoopTable} has a relative
precision of $2\times 10^{-4}$, 40 times larger than that of the
order $\epsilon^{-3}$ term, despite having had significantly more computing
resources applied to it.
Although a brute-force computation of the $1/\epsilon$ and $\epsilon^0$ terms
could probably be completed with sufficient resources, it would be
considerably lengthier than the present computation. Accordingly,
we believe that additional analytic work would be rather desirable
before proceeding further.
The information provided in this paper offers a guide to further
progress in determining the integrable structure of planar MSYM, as
well as in studying the transition from weak to strong coupling. For
the cusp anomalous dimension, the striking match between the first two
coefficients of the strong-coupling expansion, as estimated by us,
using our four-loop result as input to the KLV
approximation~\cite{KLV}, and as obtained from string
theory~\cite{StrongCouplingLeadingGKP,Kruczenski,Makeenko,
StrongCouplingSubleading}, provides good evidence that we have an
excellent numerical
understanding of this anomalous dimension at any coupling.
The same approximation strongly suggests that the
correct analytic forms of the four- and five-loop perturbative
coefficients are the ones given in \eqn{f0modified}. These
types of approximations can also be useful for checking whether a given
ansatz for higher-order terms in the weak-coupling expansion is consistent
with the known string theory strong-coupling results.
This should help in finding the correct integral equation describing the
MSYM\ cusp anomalous dimension. Another intriguing result from the
extrapolation to strong coupling is that the next term in the
expansion (${\cal O}(\hat{a}^{-1/2})$) should be very small and may even
vanish. Finally, the remarkably good numerical properties of the
approximate formulas indicate that the transition between weak and
strong coupling is smooth for planar ${\cal N}=4$ super-Yang-Mills
theory.
\section*{Acknowledgments}
We are grateful to Niklas Beisert, John Joseph Carrasco, Alexander
Gorsky, Henrik Johansson, Igor Klebanov, Radu Roiban, Emery Sokatchev,
Marcus Spradlin,
Matthias Staudacher and Marvin Weinstein for very stimulating
discussions. We also thank Thomas Hahn for communications regarding
{\tt CUBA}. We are particularly indebted to Matthias Staudacher for
repeated encouragement in the course of this work. We thank
Niklas Beisert, Burkhard Eden and Matthias Staudacher for sending us
an advance copy of their new paper~\cite{BESNew} and for discussions.
We also thank Radu Roiban for his useful comments on the manuscript.
We thank Academic Technology Services at UCLA for computer support.
This research was supported by the US Department of Energy under
contracts DE--FG03--91ER40662 and DE--AC02--76SF00515. MC was
supported by the Sofja Kovalevskaja Award of the Alexander von
Humboldt Foundation sponsored by the German Federal Ministry of
Education and Research. The work of VAS was supported by the Russian
Foundation for Basic Research through project 05-02-17645. DAK and
VAS also acknowledge the support of the ECO-NET program of the {\sc
Egide} under grant 12516NC. The figures were generated using
Jaxodraw~\cite{Jaxo}, based on Axodraw~\cite{Axo}.
hep-ph/0610274
\section{Introduction}
Three events for the decay mode \,$\Sigma^+\to p\mu^+\mu^-$\, with a dimuon invariant mass of
\,$214.3\pm0.5$\,MeV\, have been recently observed by the HyperCP Collaboration~\cite{Park:2005ek}.
It is possible to account for these events within the standard model (SM) when long-distance
contributions are properly included~\cite{He:2005yn}.
However, the probability of having all three events at the same dimuon mass in the SM
is less than one percent.
This suggests a new-particle interpretation for the events, for which the branching ratio
is \,$\bigl(3.1^{+2.4}_{-1.9}\pm1.5\bigr)\times 10^{-8}$~\cite{Park:2005ek}.\,
This possibility has been explored to some extent in the literature, where it has been
shown that kaon decays place severe constraints on the couplings of the hypothetical
new particle~\cite{He:2005we,Deshpande:2005mb,Geng:2005ra}.
In particular, it was found that the flavor-changing coupling of the new state, $X$, to
$\bar{d}s$ has to be of a pseudoscalar or axial-vector nature to explain why the state
has not been seen in \,$K\to\pi\mu^+\mu^-$.\,
At least one model containing a particle with these properties has
appeared in the literature~\cite{Gorbunov:2000cz}.
All these previous analyses of $X$ considered only the effects of two-quark operators
for $\bar d sX$.
However, it is well known in the case of light-Higgs production in kaon decay that there
are also four-quark operators that can contribute at the same level as the two-quark
ones~\cite{sdH,Leutwyler:1989xj,Gunion:1989we,Grzadkowski:1992av}.
These four-quark contributions are most conveniently described in chiral perturbation
theory ($\chi$PT) which implements low-energy theorems governing the couplings of light
(pseudo)scalars to hadrons.
In this paper we generalize existing studies appropriate for kaon decay to the case of
hyperon decay.
This allows us to discuss the production of light (pseudo)scalars in hyperon decay
consistently, including the effects of both the two- and four-quark operators with
the aid of~$\chi$PT.
We consider the cases of scalar and pseudoscalar Higgs bosons in the SM and in the
two-Higgs-doublet model~(2HDM), expressing our results in a form that can be
easily applied to more complicated Higgs models.
This paper is organized as follows. We begin by collecting in Sec.~\ref{constraints}
the existing constraints on light Higgs bosons from kaon, $B$-meson, and
hyperon decays if we interpret the HyperCP events as being mediated by a light Higgs boson.
In Secs.~\ref{scalarH} and~\ref{pseudoscalarH} we compute the production rates in both
kaon and hyperon decays for a light scalar and pseudoscalar Higgs boson, respectively.
Finally in Sec.~\ref{final} we summarize our results and state our conclusions.
\section{Summary of Existing Constraints\label{constraints}}
In Ref.~\cite{He:2005we} we parameterized the possible couplings of the new particle, $X$,
to $\bar d s$ and $\bar\mu\mu$ assuming that it had definite parity.
Whereas this is a reasonable assumption for the diagonal couplings of $X$ to fermions,
it is not for its flavor-changing neutral couplings (FCNCs).
Two-quark FCNCs are predominantly induced by Higgs-penguin diagrams, which result in left- and
right-handed couplings, implying that the scalar and pseudoscalar ones are present simultaneously.
For this reason, we revisit the existing constraints for $X$ being a scalar particle, ${\cal H}$,
or a pseudoscalar particle, ${\cal A}$, assuming them to have two-fermion FCNCs described by
\begin{subequations} \label{quark}
\begin{eqnarray} \label{Hsd}
{\cal L}_{{\cal H}sd}^{} &=& \frac{g_{\cal H}^{}}{v} \left[m_s^{}\,\bar d(1+\gamma_5^{})s
\,+\, m_d^{}\,\bar{d}(1-\gamma_5^{})s \right]{\cal H} \,\,+\,\, {\rm H.c.} \,\,,
\\ \label{Asd}
{\cal L}_{{\cal A}sd}^{} &=&
\frac{ig_{\cal A}^{\vphantom{\sum}}}{v} \left[m_s^{}\,\bar d(1+\gamma_5^{})s
\,-\, m_d^{}\,\bar{d}(1-\gamma_5)s \right]{\cal A} \,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
\end{subequations}
where the $g$'s are coupling constants, $m_q^{}$ is a quark mass, and
\,$v=2^{-1/4}\,G_F^{-1/2}= 246\,{\rm GeV}$.\,
In addition, the diagonal couplings to charged leptons are assumed to have definite
parity and be proportional to the lepton mass,
\begin{eqnarray}
{\cal L}_{{\cal H}\ell}^{} \,\,=\,\,
\frac{g_\ell^{}\, m_\ell^{}}{v}\,\bar\ell \ell\, {\cal H} \,\,, \hspace{2em}
{\cal L}_{{\cal A}\ell}^{} \,\,=\,\,
\frac{i g_\ell^{}\, m_\ell^{}}{v}\,\bar\ell \gamma_5 \ell\, {\cal A} \,\,.
\end{eqnarray}
For a (pseudo)scalar of mass 214.3\,MeV, it is then natural to assume that the decay
\,$X\to\mu^+\mu^-$\, will dominate over the other kinematically allowed modes:
\,$X\to e^+e^-,\,\nu\bar\nu,\,\gamma\gamma$.\,
We will restrict ourselves to this case, assuming that \,${\cal B}(X\to\mu^+\mu^-)\sim 1$.\,
This is true, for example, for a~light SM Higgs boson where \,$g_\ell^{}=1$,\, or for light
pseudoscalars in the 2HDM types I and II, where \,$g_\ell^{}=\cot\beta$ and $-\tan\beta$,
respectively.
In all these cases, decays \,$X\to e^+e^-$\, are suppressed at least by
\,$(m_e^{}/m_\mu^{})^2\sim 10^{-5}$.
To be consistent with the HyperCP observation, $X$ must be short-lived and
decay inside the detector.
This is compatible with the estimate for the total width
\,$\Gamma_{\cal A}\sim 10^{-7}$\,MeV~\cite{Geng:2005ra} of a pseudoscalar particle, ${\cal A}$.
It was shown in Ref.~\cite{He:2005we} that the muon anomalous magnetic moment imposes
the constraint
\begin{equation}
|g_\ell^{}| \,\,\lesssim\,\, 1.2 \,\,.\label{gmu}
\end{equation}
A coupling satisfying this constraint implies a width
\begin{equation}
\Gamma_{\cal A}^{} \,\,\lesssim\,\, 3.7 \times 10^{-7}~{\rm MeV}\,\,,
\end{equation}
consistent with the observation. In contrast, the corresponding constraint for a scalar
particle is \,$|g_\ell^{}| \lesssim 0.98$,\, leading to a longer lifetime,
\begin{equation}
\Gamma_{\cal H}^{} \,\,\lesssim\,\, 6.9 \times 10^{-9}~{\rm MeV}\,\,.
\end{equation}
The estimated lifetime for the HyperCP particle is therefore consistent with that of
a pseudoscalar or scalar that decays predominantly into muons.
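The widths quoted above follow, for the stated couplings, from the standard tree-level two-body formulas $\Gamma_{\cal A}=g_\ell^2 m_\mu^2 m_X^{}\beta/(8\pi v^2)$ and $\Gamma_{\cal H}=g_\ell^2 m_\mu^2 m_X^{}\beta^3/(8\pi v^2)$, with $\beta$ the muon velocity in the $X$ rest frame; these are assumed forms, but they reproduce the quoted numbers:

```python
import math

GF   = 1.16637e-5                       # Fermi constant, GeV^-2
v    = 2**-0.25/math.sqrt(GF)*1e3       # vev in MeV; ~246 GeV
m_mu = 105.658                          # muon mass, MeV
m_X  = 214.3                            # HyperCP state mass, MeV
beta = math.sqrt(1 - 4*m_mu**2/m_X**2)  # muon velocity in the X rest frame

def width_mumu(g, scalar=False):
    # assumed tree-level X -> mu+ mu- width; beta^3 for scalar, beta for pseudoscalar
    return g**2*m_mu**2*m_X*beta**(3 if scalar else 1)/(8*math.pi*v**2)

print(width_mumu(1.2))                # ~3.7e-7 MeV (pseudoscalar, |g_l| = 1.2)
print(width_mumu(0.98, scalar=True))  # ~6.9e-9 MeV (scalar, |g_l| = 0.98)
```

The phase-space factor $\beta\approx0.17$ is what makes the scalar width so much smaller than the pseudoscalar one.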
In addition to the two-quark contributions to the amplitudes for
\,$K\to\pi{\cal H(A)}$\, and \,$\Sigma^+\to p{\cal H(A)}$\, induced by
the interactions in Eq.~(\ref{quark}), we will also include contributions arising from the usual
SM four-quark \,$|\Delta S|=1$\, operators, along with flavor-conserving couplings of
$\cal H(A)$.
We will adopt the chiral-Lagrangian approach to evaluate the hadron-level interactions.
Later on we will discuss specific models and consider the bounds
appropriate for them, including all the relevant two- and four-quark contributions.
It is useful to start with one example to illustrate the ingredients needed to construct
a model that can satisfy all the existing constraints.
For this purpose, we consider a pseudoscalar ${\cal A}$ with two-quark couplings as in
Eq.~(\ref{Asd}) supplemented with simple parameterizations for the
four-quark amplitudes for both kaon and hyperon decays.
For $B$-meson decay, we assume that the two-quark contribution completely dominates.
\subsection{$\bm{K\to\pi{\cal A}}$}
Introducing the dimensionless quantity $M_{4K}$ for the four-quark contribution, we express
the amplitude for \,$K^\pm\to\pi^\pm\cal A$\, and its branching ratio, respectively, as
\begin{eqnarray}
i{\cal M}(K^\pm\to\pi^\pm{\cal A}) &=&
g_{\cal A}^{}\,\frac{m_K^2-m_\pi^2}{v} \,-\, M_{4K}^{}\,\frac{m_K^2}{v} \,\,,
\nonumber \\
{\cal B}(K^\pm\to\pi^\pm{\cal A}) &=&
4.43\times10^{8} \left|g_{\cal A}^{}-1.08\,M_{4K}^{}\right|^2 \,\,.
\label{keq}
\end{eqnarray}
This mode is constrained by its nonobservation in the BNL E865~\cite{Ma:1999uj} or
FNAL HyperCP~\cite{Park:2001cv} measurements of \,$K^\pm\to\pi^\pm\mu^+\mu^-$.\,
It is also constrained by its nonobservation in the isospin-related mode
\,$K_S\to\pi^0\mu^+\mu^-$\, by CERN NA48~\cite{Batley:2004wg}.
Of these three experiments, E865 had the best statistics, collecting 430 events in
\,$K^+\to\pi^+\mu^+\mu^-$.\, A new particle ${\cal A}$ of mass 214.3\,MeV would have
contributed only in their first dimuon-mass bin, where
\,$0.21\,{\rm GeV}<m_{\mu\mu}^{}<0.224\,{\rm GeV}$\, and approximately 30 events were observed.
To obtain a~conservative bound, we assume that all the events in the first bin are statistically
Gaussian and can be attributed to the new particle (either a scalar or a pseudoscalar).
Further assuming uniform acceptance, we obtain at 95\% C.L.
\begin{equation}
{\cal B}(K^+\to\pi^+X) \,\,\lesssim\,\, 8.7 \times 10^{-9} \,\,.
\label{K-bound}
\end{equation}
The NA48 Collaboration collected 6 events for \,$K_S\to\pi^0\mu^+\mu^-$~\cite{Batley:2004wg},
and none of them have the 214.3-MeV invariant mass required if
they originate from the new particle ${\cal A}$.
Using the $K_S$ flux and the acceptance at low $m_{\mu\mu}^{}$ in Ref.~\cite{Batley:2004wg},
we estimate a single event sensitivity of \,$\bigl(5.3^{+0.6}_{-0.4}\bigr)\times 10^{-10}$.\,
With no events observed and Poisson statistics, this translates into the 95\%-C.L. bound
\begin{equation}
{\cal B}(K_S^{}\to\pi^0X) \,\,\lesssim\,\, 1.8\times10^{-9} \,\,. \label{KSbound}
\end{equation}
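As a rough reconstruction of this limit (the exact treatment of the sensitivity uncertainty is our assumption, not the experiment's), zero observed events correspond at 95\% C.L. to a Poisson mean of $-\ln(0.05)\approx3.0$ events, which multiplied by the single-event sensitivity brackets the quoted bound:

```python
import math

n95 = -math.log(0.05)                # 95% C.L. Poisson upper limit, 0 events observed
sens_lo, sens_hi = 4.9e-10, 5.9e-10  # single-event sensitivity (5.3 +0.6/-0.4) x 10^-10
print(round(n95, 2))                 # ~3.0 events
print(n95*sens_lo, n95*sens_hi)      # ~1.5e-9 to ~1.8e-9, cf. the quoted 1.8e-9
```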
We employ these bounds when we discuss specific models, but for now we use
the E865 result in Eq.~(\ref{K-bound}), combined with Eq.~(\ref{keq}), to find
\begin{equation}
\left|g_{\cal A}^{}-1.08\, M_{4K}^{}\right| \,\,\lesssim\,\, 4.4\times 10^{-9} \,\,.
\label{kbound}
\end{equation}
\subsection{$\bm{\Sigma^+\to p{\cal A}}$}
In this case, we need two new dimensionless quantities $A_4$ and $B_4$ to parameterize
the effect of the four-quark operators, writing the amplitude as
\begin{subequations} \label{S->pA}
\begin{eqnarray}
{\cal M}(\Sigma^+\to p{\cal A}) &=&
i\bar{p}\left(A_{p\cal A}^{}\,-\,B_{p\cal A}^{}\gamma_5^{}\right)\Sigma^+ \,\,,
\end{eqnarray}
where
\begin{eqnarray}
A_{p\cal A}^{} &=&
g_{\cal A}^{}\,\frac{m_\Sigma^{}-m_N^{}}{v} \,+\, A_4^{}\, \frac{f_\pi^{}}{v} \,\,,
\nonumber \\
B_{p\cal A}^{} &=&
g_{\cal A}^{}\,(D-F)\,\frac{m_\Sigma^{}+m_N^{}}{v}\,\frac{m_K^2}{m_K^2-m_{\cal A}^2}
\,+\, B_4^{}\, \frac{f_\pi^{}}{v} \,\,,
\hspace*{1em}
\end{eqnarray}
\end{subequations}
the parameters $D$ and $F$ coming from a chiral Lagrangian to be discussed in a later
section and $\,f_\pi^{}=92.4$\,MeV\, being the pion-decay constant.
The resulting branching ratio is
\begin{eqnarray}
{\cal B}(\Sigma^+ \to p{\cal A}) &=&
1.91\times 10^6 \left|g_{\cal A}^{}+0.36\,A_4^{}\right|^2 \,+\,
4.84\times 10^4 \left|g_{\cal A}^{}+0.14\,B_4^{}\right|^2
\label{gensig}
\end{eqnarray}
with the choice $\,D-F=0.25$.\,
Combining the statistical and systematic errors of the HyperCP measurement~\cite{Park:2005ek}
in quadrature, we require
\begin{eqnarray}
{\cal B}(\Sigma^+\to p{\cal A}) &=& \bigl(3.1^{+2.8}_{-2.4}\bigr)\times 10^{-8} \,\,,
\label{bound2}
\end{eqnarray}
and therefore
\begin{eqnarray}
\left|g_{\cal A}^{} + 0.36\, A_4^{}\right| &=& (1.3\pm 0.6) \times 10^{-7} \,\,,
\label{sigbound}
\end{eqnarray}
where we have used the larger of the errors in Eq.~(\ref{bound2}) and ignored the contribution
from the P-wave term in Eq.~(\ref{gensig}), assuming that \,$B_4\lesssim A_4$.\,
This assumption is satisfied by all the models we discuss,
but when checking a specific model, we do so without neglecting $B_4$.
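The step from Eq.~(\ref{bound2}) to Eq.~(\ref{sigbound}) can be sketched numerically; this illustrative snippet inverts the S-wave term of Eq.~(\ref{gensig}) and propagates the larger error linearly, as described in the text.

```python
import math

# Hedged reconstruction of Eq. (sigbound) from Eqs. (gensig) and (bound2),
# ignoring the P-wave term and using the larger (upper) error, as in the text.
BR, dBR = 3.1e-8, 2.8e-8                   # HyperCP branching ratio, larger error
cS = 1.91e6                                # S-wave coefficient in Eq. (gensig)

g_eff = math.sqrt(BR / cS)                 # |g_A + 0.36 A4|
dg = dBR / (2 * math.sqrt(cS * BR))        # linearized error propagation
print(f"|g_A + 0.36 A4| ~ {g_eff:.2g} +- {dg:.1g}")   # ~ (1.3 +- 0.6) x 10^-7
```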
A comparison of Eqs.~(\ref{kbound}) and~(\ref{sigbound}) shows why it is not possible to have
a (pseudo)scalar with penguin-like flavor-changing neutral couplings, as in Eq.~(\ref{quark}),
as an explanation for the HyperCP result given the constraints from kaon decay.
It also shows how this is no longer true if there are four-quark contributions to
the amplitudes that are comparable to the penguin amplitudes.
In particular, if we assume that in a given model $g_{\cal A}^{}$, $M_{4K}$, and $A_4$ have
comparable magnitudes, we see that in order to satisfy both Eqs.~(\ref{kbound})
and~(\ref{sigbound}) we need a cancelation between the two- and four-quark contributions to
the kaon amplitude that reduces them by a factor of about 20.
As we will show in later sections, this is possible in many models.
For this cancelation to work, however, $g_{\cal A}^{}$ and $M_{4K}$ must also have similar phases.
As we will see, this is a requirement that is much harder to satisfy.
In the simple models we consider in this paper, the phase of $g_{\cal A}^{}$ is much larger
than the phase of $M_{4K}$ so that the cancelation does not happen for the imaginary part.
\subsection{$\bm{b\to s X}$}
Finally we consider the constraints on the new particle from its nonobservation in $B$-meson
decay.
In this case, the four-quark contributions are negligible, and we can neglect $m_s^{}$
compared to $m_b^{}$.
The Lagrangian for \,$b\to s X$\, can then be expressed as
\begin{eqnarray}
{\cal L}_{Xbs}^{} &=& \frac{g^\prime\,m_b^{}}{v}\, \bar{s}\bigl(1+\gamma_5^{}\bigr)b\,X
\,\,+\,\, {\rm H.c.} \,\,,
\label{gbi}
\end{eqnarray}
where \,$g'=g_{\cal H}^\prime\,(ig_{\cal A}^\prime)$\, for \,$X=\cal H\,(\cal A)$.\,
This leads to the partial decay rate
\begin{eqnarray}
\Gamma(b\to s X) &\simeq& |g^\prime|^2\,\frac{m_b^3}{8\pi v^2} \,\,.
\label{gambx}
\end{eqnarray}
Using for illustration \,$m_b^{}=4.3$\,GeV\, and the $B^+$ lifetime~\cite{hfag} results in
\begin{eqnarray}
{\cal B}(b\to s X) &=& 1.3\times 10^8\, |g^\prime|^2 \,\,.
\label{bxrate}
\end{eqnarray}
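The coefficient in Eq.~(\ref{bxrate}) follows from multiplying Eq.~(\ref{gambx}) by the $B^+$ lifetime; as an illustrative check, with $\tau_{B^+}\simeq1.638\times10^{-12}$\,s (a standard value assumed here):

```python
import math

# Hedged check of the coefficient in Eq. (bxrate): BR(b -> s X) is the
# free-quark rate of Eq. (gambx) times the B+ lifetime. Inputs assumed for
# illustration: m_b = 4.3 GeV as in the text, v = 246.22 GeV,
# tau_B+ = 1.638e-12 s, hbar = 6.582e-25 GeV s.
mb, v = 4.3, 246.22
tauB, hbar = 1.638e-12, 6.582e-25

gamma_over_g2 = mb**3 / (8 * math.pi * v**2)    # Gamma(b -> s X)/|g'|^2, in GeV
br_coeff = gamma_over_g2 * tauB / hbar          # BR(b -> s X)/|g'|^2
print(f"BR(b -> s X) ~ {br_coeff:.2g} |g'|^2")  # ~ 1.3e8, as in Eq. (bxrate)
```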
One could obtain a similar number for \,$b\to d X$.\,
The latest experimental average
\,${\cal B}(b\to s\mu^+\mu^-)=\bigl(4.27^{+1.23}_{-1.22}\bigr)\times 10^{-6}$~\cite{hfag}
covers the full kinematic range for $m_{\mu\mu}^{}$.
To constrain $g^\prime$, it is better to limit the comparison to the rate measured
in the lowest $m_{\mu\mu}^{}$ invariant-mass bin.
BABAR quotes in Table II of Ref.~\cite{Aubert:2004it}
\begin{eqnarray}
{\cal B}(b\to s\ell^+\ell^-)_{m_{\ell^+\ell^-}\in[0.2{\rm\,GeV},1.0{\rm\,GeV}]}^{} &=&
\bigl(0.08\pm 0.36^{+0.07}_{-0.04}\bigr)\times 10^{-6} \,\,.
\end{eqnarray}
This is an average for electrons and muons, but no noticeable difference between them was found.
Belle quotes in Table IV of Ref.~\cite{Iwasaki:2005sy} the corresponding number
\begin{eqnarray}
{\cal B}(b\to s\ell^+\ell^-)_{m_{\ell^+\ell^-}\in[0.2{\rm\,GeV},1.0{\rm\,GeV}]}^{} &=&
\bigl(11.3\pm 4.8^{+4.6}_{-2.7}\bigr)\times 10^{-7} \,\,.
\end{eqnarray}
To be conservative, we constrain the Higgs coupling by requiring that the induced rate be below
the 95\%-C.L. upper range of the measured \,$b\to s\ell^+\ell^-$\, rate in the lowest
measured $m_{\mu\mu}^{}$ bin.
Thus, combining errors in quadrature for the more restrictive BABAR result gives
\begin{eqnarray}
{\cal B}(b\to s\ell^+\ell^-)_{m_{\ell^+\ell^-}<1\rm\,GeV}^{} &\lesssim& 8.0\times 10^{-7}
\hspace{2em} \rm (BABAR)
\end{eqnarray}
and correspondingly
\begin{eqnarray}
|g^\prime| &\lesssim& 7.8\times 10^{-8} \,\,.\label{ggbs}
\end{eqnarray}
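The two numbers above can be reproduced as follows; the sketch assumes (our reading of the procedure) that the 95\%-C.L. upper range is the central value plus $1.96\sigma$ with errors combined in quadrature, and then inverts Eq.~(\ref{bxrate}).

```python
import math

# Hedged reconstruction of the BABAR-based bound and of Eq. (ggbs). Taking
# the 95%-C.L. upper range as central + 1.96 sigma, with statistical and
# systematic errors combined in quadrature, is our assumption here.
central, stat, syst = 0.08e-6, 0.36e-6, 0.07e-6      # BABAR low-bin BR
br_bound = central + 1.96 * math.hypot(stat, syst)   # ~ 8.0e-7
g_bound = math.sqrt(br_bound / 1.3e8)                # invert Eq. (bxrate): ~ 7.8e-8
```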
The exclusive \,$B\to(K,K^*)\mu^+\mu^-$\, modes have been measured, but the resulting
constraints are no better than Eq.~(\ref{ggbs}). This constraint is
difficult to satisfy in models where $g_{\cal A}^{}$ and $g^\prime$ are related by top-quark
CKM angles, as happens in the simple models we consider here.
\section{Scalar Higgs boson\label{scalarH}}
In this section we discuss in detail the case of a light Higgs boson in the standard model
and in the two-Higgs-doublet model.
We will use known low-energy theorems to implement the four-quark contributions to kaon and
hyperon amplitudes.
\subsection{Two-quark $\bm{|\Delta S|=1}$ interactions}
The effective Lagrangian for the $sd\cal H$ coupling, where $\cal H$ is either
the standard-model Higgs boson $H^0$ or the lightest scalar Higgs boson $h^0$ in the 2HDM,
has been much discussed in the literature~\cite{sdh_sm,sdh_2hdm,sdH,Leutwyler:1989xj,Dawson:1989bm} and
can be written as ${\cal L}_{{\cal H}sd}$ in Eq.~(\ref{Hsd}), where
\begin{eqnarray} \label{gH}
g_{\cal H}^{} \,\,=\,\,
\frac{G_F^{}}{4\sqrt2\,\pi^2}\sum_{q=u,c,t}m_q^2 V^*_{qd}V_{qs}^{}\,F(q) \,\,,
\end{eqnarray}
with $V_{kl}^{}$ being the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and
$F(q)$ depending on the model.
In the SM, for a Higgs mass much smaller than the $W$ mass,
\begin{eqnarray}
F(q) \,\,=\,\, 3/4 \,\,,
\end{eqnarray}
whereas in the 2HDM the expression for $F(q)$ is much lengthier~\cite{sdh_2hdm,Dawson:1989bm}.
Using CKM and mass parameters from Ref.~\cite{Charles:2004jd}, we find in the SM
\begin{eqnarray}
g_{\cal H}^{} &=& (-1.3-0.6i)\times10^{-6} \,\,,
\end{eqnarray}
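As an order-of-magnitude sketch (not a substitute for the full CKM fit of Ref.~\cite{Charles:2004jd}), the SM value above is dominated by the top-quark term of Eq.~(\ref{gH}); the inputs below, a running top mass near 165\,GeV, $|V_{td}|\simeq0.0087$ with CKM phase $\beta\simeq22^\circ$, and $V_{ts}\simeq-0.0404$, are illustrative values assumed here.

```python
import cmath, math

# Hedged top-dominated estimate of Eq. (gH) in the SM; charm and up terms
# are neglected, and all numerical inputs are illustrative assumptions.
GF = 1.16637e-5                                   # Fermi constant (GeV^-2)
mt = 165.0                                        # running top mass (assumed)
Vtd_conj = 0.0087 * cmath.exp(1j * math.radians(22.0))   # V_td* = |V_td| e^{i beta}
Vts = -0.0404
F_SM = 0.75                                       # F(q) = 3/4 for a light Higgs
gH = GF / (4 * math.sqrt(2) * math.pi**2) * mt**2 * Vtd_conj * Vts * F_SM
print(f"g_H ~ {gH:.2g}")   # roughly (-1.4 - 0.6i) x 10^-6, cf. the text
```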
to be compared with Eqs.~(\ref{kbound}) and~(\ref{sigbound}) above.
Employing the expression for $F(q)$ derived in Ref.~\cite{Dawson:1989bm},
we obtain a similar number in the 2HDM type II, for instance,
\begin{eqnarray} \label{gH2hdm}
g_{\cal H}^{} &=& (5.0+1.9i)\times10^{-7}
\end{eqnarray}
for the parameters
\begin{eqnarray} \label{2hdmpar}
\tan\beta \,\,\simeq\,\, 2.57 \,\,, \hspace{2em} \sin(\beta-\alpha) \,\,\simeq\,\, 0.149 \,\,,
\hspace{2em} m_{H^+}^{} \,\,=\,\, 250{\rm~GeV} \,\,,
\end{eqnarray}
where \,$\tan\beta$\, is the ratio of vacuum expectation values of the two Higgs doublets,
$\alpha$ the mixing angle in the neutral-Higgs-boson mass matrix, and $m_{H^+}$
the mass of the charged Higgs bosons.\footnote{We have also set \,$\kappa=m_{H^+}^2/m_W^2$\,
in $F(q)$, where $\kappa$ is defined in Ref.~\cite{Dawson:1989bm}.}
We note that the $\alpha$ and $\beta$ values above satisfy the constraint
\,$\sin^2(\beta-\alpha)<0.06$\, from LEP~\cite{Abbiendi:2004gn}.
We see right away that $g_{\cal H}^{}$ can be in the right ballpark to explain the HyperCP
observation, Eq.~(\ref{sigbound}), but conflicts with the kaon bound, Eq.~(\ref{kbound}).
To evaluate the hadronic amplitudes from this 2-quark contribution, we employ chiral
perturbation theory.
Using the operator matching of Ref.~\cite{He:2005we}, we write the lowest-order chiral
realization of ${\cal L}_{{\cal H}sd}$ as
\begin{eqnarray} \label{LsdH}
{\cal L}_{\cal H}^{} &=&
b_D^{} \left\langle \bar{B}{}^{} \left\{ h_{\cal H}^{}, B^{} \right\} \right\rangle
+ b_F^{} \left\langle \bar{B}{}^{} \left[ h_{\cal H}^{}, B^{} \right] \right\rangle
+ b_0^{} \left\langle h_{\cal H}^{} \right\rangle \left\langle\bar{B}{}^{}B^{}\right\rangle
\,+\, \mbox{$\frac{1}{2}$} f^2 B_0^{} \left\langle h_{\cal H}^{} \right\rangle
\,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where $\,\langle\cdots\rangle\equiv{\rm Tr}(\cdots)\,$ in flavor-SU(3) space,
$\,f=f_\pi^{}=92.4\rm\,MeV$,\, and
\begin{eqnarray}
h_{\cal H}^{} \,\,=\,\,
-2g_{\cal H}^{}\,\bigl(\xi^\dagger h M\xi^\dagger+\xi M h\xi\bigr)\frac{\cal H}{v} \;,
\end{eqnarray}
with $h$ being a 3$\times$3-matrix having elements
$\,h_{kl}^{}=\delta_{k2}^{}\delta_{3l}^{}\,$ which selects out $\,s\to d\,$ transitions,
$\,M={\rm diag}(\hat{m},\hat{m},m_s^{})=
{\rm diag}\bigl(m_\pi^2,m_\pi^2,2m_K^2-m_\pi^2\bigr)/(2B_0^{})\,$ the quark-mass matrix
in the isospin-symmetric limit $\,m_u^{}=m_d^{}=\hat{m}$,\,
and the baryon and meson fields represented by the usual 3$\times$3-matrices
$B$ and \,$\Sigma=\xi\xi=e^{i\varphi/f}$,\, respectively.
To derive amplitudes, we also need the chiral Lagrangian for the strong interactions of
the hadrons~\cite{Gasser:1983yg,bcpt}.
At leading order in the derivative and $m_s^{}$ expansions, it can be written as
\begin{eqnarray} \label{Lstrong}
{\cal L}_{\rm s}^{} &=&
\left\langle \bar{B}^{}\, {i}\gamma^\mu \bigl(
\partial_\mu^{}B+\bigl[{\cal V}_\mu^{},B \bigr] \bigr) \right\rangle
- m_0^{} \left\langle \bar{B}^{} B^{} \right\rangle
+ D \left\langle \bar{B}^{} \gamma^\mu\gamma_5^{}
\left\{ {\cal A}_\mu^{}, B^{} \right\} \right\rangle
+ F \left\langle \bar{B}^{} \gamma^\mu\gamma_5^{}
\left[ {\cal A}_\mu^{}, B^{} \right] \right\rangle
\nonumber \\ && +\,\,
b_D^{} \left\langle\bar{B}^{}\left\{M_+,B^{}\right\}\right\rangle
+ b_F^{} \left\langle\bar{B}^{}\left[M_+,B^{}\right]\right\rangle
+ b_0^{} \left\langle M_+ \right\rangle \left\langle \bar{B}^{} B^{} \right\rangle
\nonumber \\ &&
+\,\,
\mbox{$\frac{1}{4}$} f^2 \left\langle
\partial^\mu\Sigma^\dagger\, \partial_\mu^{}\Sigma \right\rangle +
\mbox{$\frac{1}{2}$} f^2 B_0^{} \left\langle M_+ \right\rangle \,\,,
\end{eqnarray}
where
$\,{\cal V}^\mu =
\frac{1}{2}\bigl(\xi\,\partial^\mu\xi^\dagger+\xi^\dagger\,\partial^\mu\xi\bigr),\,$
$m_0^{}$ is the baryon mass in the chiral limit,
$\,{\cal A}^\mu = \frac{i}{2}
\bigl(\xi\,\partial^\mu\xi^\dagger-\xi^\dagger\,\partial^\mu\xi\bigr),\,$
and \,$M_+^{}=\xi^\dagger M\xi^\dagger+\xi M^\dagger\xi$,\,
with further details being given in Ref.~\cite{He:2005we}.
{From} ${\cal L}_{\cal H}$ and ${\cal L}_{\rm s}$, we derive the leading-order diagrams
shown in Fig.~\ref{sdHdiagrams} for $\,\Sigma^+\to p\cal H$,\, yielding the amplitude
\begin{eqnarray} \label{MSpH}
{\cal M}_{2q}^{}(\Sigma^+\to p{\cal H}) &=&
g_{\cal H}^{}\,\frac{m_\Sigma^{}-m_N^{}}{v}\,\frac{m_K^2}{m_K^2-m_\pi^2}\,\bar{p}\Sigma^+
\nonumber \\ && -\,\,
g_{\cal H}^{}\,(D-F)\,\frac{m_\Sigma^{}+m_N^{}}{v}\,\frac{m_K^2-m_\pi^2}{m_K^2-m_{\cal H}^2}\,
\bar{p}\gamma_5^{}\Sigma^+ \,\,,
\end{eqnarray}
where the two terms correspond to the two diagrams, respectively, $m_{\Sigma,N}^{}$ are
isospin-symmetric masses, and we have used the relations
\,$m_\Sigma^{}-m_N^{}=2\bigl(b_D^{}-b_F^{}\bigr)\bigl(m_s^{}-\hat{m}\bigr),\,$
\,$m_K^2=B_0^{}\bigl(\hat{m}+m_s^{}\bigr),\,$ and \,$m_\pi^2=2B_0^{}\hat{m}$\,
derived from ${\cal L}_{\rm s}$.
Numerically, we will allow $D$ and $F$ to have the ranges
\,$0.6\le D\le 0.8$\, and \,$0.4\le F\le0.5$\,~\cite{bcpt}, leading to
\begin{eqnarray} \label{d-f}
0.1 \,\le\, D-F \,\le\, 0.4 \;,
\end{eqnarray}
which is the combination occurring in our amplitudes.
\begin{figure}[t]
\begin{picture}(160,100)(-80,-20)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(0,0)
\Line(0,0)(0,30) \Text(0,35)[]{\footnotesize$\cal H$}
\Line(0,0)(30,0) \Text(32,0)[l]{\footnotesize$p$}
\SetWidth{1} \BBoxc(0,0)(4,4) \Text(0,-20)[]{\footnotesize(a)}
\end{picture}
\begin{picture}(160,100)(-80,-20)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(0,0)
\Line(0,0)(0,50) \Text(2,15)[l]{\footnotesize$\bar{K}^0$}
\Text(0,55)[]{\footnotesize$\cal H$} \Line(0,0)(30,0) \Text(32,0)[l]{\footnotesize$p$}
\Vertex(0,0){2} \SetWidth{1} \BBoxc(0,30)(4,4) \Text(0,-20)[]{\footnotesize(b)}
\end{picture}
\caption{\label{sdHdiagrams}%
Diagrams contributing to $\,\Sigma^+\to p\cal H$\, arising from ${\cal L}_{{\cal H}sd}$
at leading order in $\chi$PT.
The square vertices come from ${\cal L}_{\cal H}$ in Eq.~(\ref{LsdH}),
and the solid vertex from ${\cal L}_{\rm s}$ in Eq.~(\ref{Lstrong}).
}
\end{figure}
It follows that the contribution of ${\cal L}_{{\cal H}sd}$ to the branching ratio of
$\,\Sigma^+\to p\cal H$\, for \,$m_{\cal H}^{}=214.3$\,MeV\, and the middle value
$\,D-F=0.25$\, is in the SM
\begin{equation}
{\cal B}_{2q}^{}(\Sigma^+\to p{\cal H}) \,\,=\,\, (40 + 1) \times 10^{-7} \,\,,
\end{equation}
where we have ignored the imaginary ($CP$ violating) part of the amplitude, and
the two numbers correspond to the contributions from the scalar and pseudoscalar
flavor-changing couplings, respectively.
Evidently, the scalar contribution is much larger than what HyperCP saw, but the pseudoscalar
contribution is within the range.
This, however, is only part of the story, as there are in addition 4-quark contributions
to be discussed in the next subsection.
Also from ${\cal L}_{\cal H}$, we derive the leading-order diagram for
\,$K\to\pi\cal H$,\, which is that in Fig.~\ref{sdHdiagrams}(a) with $\Sigma^+$ ($p$)
replaced by $K$ ($\pi$) and arises from the scalar coupling in ${\cal L}_{{\cal H}sd}$.
The resulting amplitude is
\begin{eqnarray} \label{M2q[K->pi-H]}
{\cal M}_{2q}^{}(K^+\to\pi^+{\cal H}) \,\,=\,\,
-\sqrt2\,{\cal M}_{2q}^{}(K^0\to\pi^0{\cal H})
\,\,=\,\, \frac{-g_{\cal H}^{}\, m_K^2}{v} \,\,,
\end{eqnarray}
and so \,${\cal M}_{2q}(K_L^{}\to\pi^0{\cal H})=
-{\rm Re}\,{\cal M}_{2q}(K^+\to\pi^+{\cal H})$.\,
Dropping again the imaginary parts of the amplitudes, we obtain in the SM the branching ratios
\begin{eqnarray}
{\cal B}_{2q}^{}(K^+\to\pi^+{\cal H}) \,\,=\,\, 9.3\times10^{-4} \,\,, \hspace{2em}
{\cal B}_{2q}^{}(K_L^{}\to\pi^0{\cal H}) \,\,=\,\, 3.9\times10^{-3} \,\,.
\end{eqnarray}
By themselves, these numbers would be grossly incompatible with Eq.~(\ref{K-bound})
and the 95\%-C.L. bound\footnote{We have inferred this number from Ref.~\cite{Alavi-Harati:2000hs}
which reported \,${\cal B}(K_L\to\pi^0\mu^+\mu^-)<3.8\times10^{-10}$\, at 90\% C.L.}
\begin{eqnarray} \label{KLbound}
{\cal B}(K_L^{}\to\pi^0\mu^+\mu^-) &<& 4.9\times10^{-10} \,\,,
\end{eqnarray}
but, as in the $\Sigma^+$ case, there are 4-quark contributions that
have to be considered as well.
The situation is similar in the 2HDM.
Adopting the real part of the coupling in Eq.~(\ref{gH2hdm}), for example, we find
\begin{eqnarray}
{\cal B}_{2q}^{}(\Sigma^+\to p{\cal H}) &=& (56 + 1) \times 10^{-8} \,\,,
\nonumber \\ \vphantom{\sum}
{\cal B}_{2q}^{}(K^+\to\pi^+{\cal H}) \,\,=\,\, 1.3\times10^{-4} \,\,, &&
{\cal B}_{2q}^{}(K_L^{}\to\pi^0{\cal H}) \,\,=\,\, 5.4\times10^{-4} \,\,.
\end{eqnarray}
\subsection{Four-quark $\bm{|\Delta S|=1}$ interactions}
The hadronic interactions of a light Higgs boson due to 4-quark \,$|\Delta S|=1$\, operators
are best accounted for in the chiral Lagrangian approach.
The dominant contribution is generated by the $\,|\Delta I|=\frac{1}{2}\,$ component of
the effective Hamiltonian transforming as $(8_{\rm L}^{},1_{\rm R}^{})$.
The corresponding Lagrangian at leading order is given by~\cite{bcpt,dgh}
\begin{eqnarray} \label{Lweak}
{\cal L}_{\rm w}^{} &=&
h_D^{} \left\langle \bar B \left\{ \xi^\dagger h \xi\,,\,B \right\} \right\rangle
+ h_F^{} \left\langle \bar B \left[ \xi^\dagger h \xi\,,\,B \right] \right\rangle
\nonumber \\ && +\,\,
\gamma_8^{}f^2 \left\langle h\,\partial_\mu^{}\Sigma\,
\partial^\mu \Sigma^\dagger \right\rangle
+ 2\tilde{\gamma}_8^{} f^2 B_0^{} \left\langle h \xi M_+ \xi^\dagger \right\rangle
\,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where $h_{D,F}^{}$ can be extracted from hyperon nonleptonic decays,
$\,\gamma_8^{}=-7.8\times10^{-8}$\, from $\,K\to\pi\pi$,\, the sign following from various
predictions~\cite{Gunion:1989we,sdH,Leutwyler:1989xj,Bijnens:1998ee}, and $\tilde{\gamma}_8^{}$
is unknown as it does not contribute to any process with only kaons and pions.
The 4-quark \,$|\Delta S|=1$\, interactions of a light Higgs boson $\cal H$ arise from
its tree-level couplings to quarks and $W^\pm$ bosons, as well as from its coupling to
gluons induced by a triangle diagram with heavy quarks in the loop.
To obtain the relevant chiral Lagrangians, one starts with ${\cal L}_{\rm s,w}$ above and
follows the prescription given in Refs.~\cite{sdH,Leutwyler:1989xj,Dawson:1989bm,Gunion:1989we}.
The results are
\begin{eqnarray} \label{LsH}
{\cal L}_{\rm s}^{\cal H} &=&
\left( \mbox{$\frac{1}{4}$}\, c_1^{}\, f^2 \left\langle
\partial^\mu\Sigma^\dagger\, \partial_\mu^{}\Sigma \right\rangle \vphantom{|_|^|}
+ \mbox{$\frac{1}{2}$}\, c_2^{}\, f^2 B_0^{} \left\langle M_+ \right\rangle
+ \mbox{$\frac{1}{2}$}\,f^2 B_0^{}\,\bigl\langle\hat{M}_+-M_+\bigr\rangle\right)\frac{\cal H}{v}
\,-\, k_1^{}\, m_0^{} \left\langle \bar{B}^{} B^{} \right\rangle \frac{\cal H}{v}
\nonumber \\ && +\,\,
k_2^{} \left( b_D^{}\, \bigl\langle \bar{B}^{}\, \bigl\{ \hat{M}_+, B^{} \bigr\} \bigr\rangle
+ b_F^{}\, \bigl\langle \bar{B}^{}\, \bigl[ \hat{M}_+, B^{} \bigr] \bigr\rangle
+ b_0^{}\, \bigl\langle \hat{M}_+ \bigr\rangle\, \bigl\langle\bar{B}B\bigr\rangle \right)
\frac{\cal H}{v} \,\,,
\end{eqnarray}
\begin{eqnarray} \label{LwH}
{\cal L}_{\rm w}^{\cal H} &=&
\left[ \gamma_8^{}\, c_3^{}\, f^2 \left\langle h\,\partial_\mu^{}\Sigma\,
\partial^\mu \Sigma^\dagger \right\rangle
+ 2\tilde{\gamma}_8^{}\, c_4^{}\, f^2 B_0^{}
\left\langle h \xi M_+ \xi^\dagger \right\rangle
+ 2\tilde{\gamma}_8^{}\, f^2 B_0^{}\,
\bigl\langle h \xi \bigl(\hat{M}_+-M_+\bigr) \xi^\dagger \bigr\rangle \right] \frac{\cal H}{v}
\nonumber \\ && +\,\,
k_3^{} \left( h_D^{}\left\langle \bar B\left\{\xi^\dagger h \xi\,,\,B \right\} \right\rangle
+ h_F^{}\left\langle\bar B\left[\xi^\dagger h\xi\,,\,B\right]\right\rangle\right)\frac{\cal H}{v}
\,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where
\begin{eqnarray}
\begin{array}{c} \displaystyle
c_1^{} \,\,=\,\, 2k_G^{} \,\,, \hspace{2em}
c_2^{} \,\,=\,\, 3k_G^{}+1 \,\,, \hspace{2em}
c_3^{} \,\,=\,\, 4k_G^{}-2k_W^{} \,\,, \hspace{2em}
c_4^{} \,\,=\,\, 5k_G^{}-2k_W^{}+1 \,\,,
\vspace{1ex} \\ \displaystyle
k_1^{} \,\,=\,\, k_G^{} \,\,, \hspace{2em}
k_2^{} \,\,=\,\, 1 \,\,, \hspace{2em}
k_3^{} \,\,=\,\, 3k_G^{}-2k_W^{} \,\,,
\vspace{1ex} \\ \displaystyle
\hat{M}_+^{} \,\,=\,\, \xi^\dagger\hat{M}\xi^\dagger+\xi\hat{M}^\dagger\xi \,\,,
\end{array}
\end{eqnarray}
with
\begin{eqnarray}
k_G^{} \,\,=\,\, \frac{2(2k_u^{}+k_d^{})}{27} \,\,, \hspace{2em}
\hat{M} \,\,=\,\, {\rm diag}\bigl(k_u^{}\hat{m},\,k_d^{}\hat{m},\,k_d^{}m_s^{}\bigr) \,\,,
\end{eqnarray}
the expression for $k_G^{}$ corresponding to 3 heavy and 3 light quarks.
The parameters $k_{u,d}^{}$, $k_W^{}$, and $k_G^{}$ come from the couplings of
$\cal H$ to light quarks, $W^\pm$, and the gluons, respectively,
and depend on the model of the Higgs sector.
Thus
\begin{eqnarray} \label{ksm}
k_u^{} \,\,=\,\, k_d^{} \,\,=\,\, k_W^{} &=& 1 \hspace{3em} \mbox{in the SM} \,\,,
\\ \vphantom{\int}
\label{kI}
k_u^{} \,\,=\,\, k_d^{} \,\,=\,\, \frac{\cos\alpha}{\sin\beta} \,\,, \hspace{2em}
k_W^{} &=& \sin(\beta-\alpha) \hspace{3em} \mbox{in the 2HDM~I} \,\,,
\\
\label{kII}
k_u^{} \,\,=\,\, \frac{\cos\alpha}{\sin\beta} \,\,, \hspace{2em}
k_d^{} \,\,=\,\, -\frac{\sin\alpha}{\cos\beta} \,\,, &&
k_W^{} \,\,=\,\, \sin(\beta-\alpha) \hspace{3em} \mbox{in the 2HDM~II} \,\,.
\end{eqnarray}
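The relations among the $c_i^{}$, $k_i^{}$, and $k_G^{}$ above are purely algebraic and easy to tabulate; this illustrative helper evaluates them for the SM and, as an example, for the 2HDM~II point of Eq.~(\ref{2hdmpar}) (where the branch $\alpha=\beta-\arcsin0.149$ is our assumption in extracting $\alpha$).

```python
import math

def chiral_coeffs(ku, kd, kW):
    """c_i and k_i defined after Eq. (LwH), for given H couplings to quarks/W."""
    kG = 2 * (2 * ku + kd) / 27          # 3 heavy and 3 light quarks
    return {"kG": kG, "c1": 2 * kG, "c2": 3 * kG + 1,
            "c3": 4 * kG - 2 * kW, "c4": 5 * kG - 2 * kW + 1,
            "k1": kG, "k2": 1.0, "k3": 3 * kG - 2 * kW}

sm = chiral_coeffs(1.0, 1.0, 1.0)        # Eq. (ksm): k_G = 2/9, k_3 = -4/3, ...

# 2HDM II point of Eq. (2hdmpar); alpha = beta - arcsin(0.149) is assumed.
beta = math.atan(2.57)
alpha = beta - math.asin(0.149)
hdm2 = chiral_coeffs(math.cos(alpha) / math.sin(beta),      # k_u, Eq. (kII)
                     -math.sin(alpha) / math.cos(beta),     # k_d
                     math.sin(beta - alpha))                # k_W
```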
The parameters $c_{1,2,3,4}^{}$ for the meson terms have already been obtained
in the literature~\cite{Gunion:1989we,sdH,Leutwyler:1989xj,Dawson:1989bm,Prades:1990vn},
whereas the new ones $k_{1,2,3}^{}$ follow from how the baryon parameters
depend on masses:
\,$m_0^{}\sim\Lambda$,\, $\,b_{D,F,0}^{}\sim 1$,\, \,$\chi_+^{}\sim\Lambda m_q^{}$,\,
and \,$h_{D,F}^{}\sim\Lambda^3/m_W^2$,\, where $\Lambda$ is a QCD mass scale.
Note that we work in that basis in which the mass terms in the Lagrangians are
not diagonal and must therefore include the corresponding tadpole diagrams in our
calculation.
For \,$\Sigma^+\to p\cal H$,\, we derive from ${\cal L}_{\rm s,w}^{({\cal H})}$
the diagrams shown in Fig.~\ref{4qHdiagrams}, finding
\begin{eqnarray} \label{MSpH'}
{\cal M}_{4q}^{}(\Sigma^+\to p{\cal H}) &=&
(k_d^{}-3k_G^{}+2k_W^{})\frac{h_D^{}-h_F^{}}{v}\, \bar{p}\Sigma^+
\nonumber \\ && +\,\,
4(k_G^{}-k_W^{})\,(D-F)\,\tilde{\gamma}_8^{}\,\frac{m_\Sigma^{}+m_N^{}}{v}\,
\frac{m_K^2-m_\pi^2}{m_K^2-m_{\cal H}^2}\, \bar{p}\gamma_5^{}\Sigma^+ \,\,,
\end{eqnarray}
where the first term comes from the upper three diagrams, which are at leading order, and
the second term results from the lower two diagrams, which are at next-to-leading order.
Now, the combination $\,h_D$$-$$h_F$\, also occurs in the amplitude for
\,$\Sigma^+\to p\pi^0$,\, which we write as
\begin{eqnarray}
{\cal M}(\Sigma^+\to p\pi^0) \,\,=\,\,
{i}\bar{p}\, \bigl(A_{p\pi^0}^{}-B_{p\pi^0}^{}\gamma_5^{}\bigr)\,\Sigma^+ \;,
\end{eqnarray}
where from ${\cal L}_{\rm s,w}$
\begin{eqnarray} \label{ABt}
A_{p\pi^0}^{} \,\,=\,\, \frac{-h_D^{}+h_F^{}}{2\,f} \,\,, \hspace{2em}
B_{p\pi^0}^{} \,\,=\,\,
(D-F)\frac{h_D^{}-h_F^{}}{2\,f}\,\,\frac{m_\Sigma^{}+m_N^{}}{m_\Sigma^{}-m_N^{}} \,\,.
\end{eqnarray}
Since from experiment~\cite{pdg}
\begin{eqnarray} \label{ABx}
A_{p\pi^0}^{} \,\,=\,\, -3.25\times10^{-7} \;, \hspace*{3em}
B_{p\pi^0}^{} \,\,=\,\, 26.67\times10^{-7} \;,
\end{eqnarray}
up to an overall sign, in our numerical evaluation of the 4-quark contributions to
\,$\Sigma^+\to p\cal H$\, we will explore different \,$h_D$$-$$h_F$\, values accordingly.
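Concretely, inverting Eq.~(\ref{ABt}) with the experimental values of Eq.~(\ref{ABx}) gives rather different \,$h_D^{}-h_F^{}$\, values from the S- and P-wave fits, which is why two curves appear in the figures below; the sketch assumes $f=f_\pi=92.4$\,MeV, $D-F=0.25$, and isospin-averaged masses.

```python
# Hedged extraction of h_D - h_F (in GeV) from the Sigma+ -> p pi0 data of
# Eq. (ABx) via Eq. (ABt); the input choices (f_pi, D - F, masses) are
# illustrative assumptions.
f = 0.0924                                # f_pi (GeV)
mSig, mN = 1.18937, 0.93827               # Sigma+, proton masses (GeV)
DmF = 0.25                                # D - F
A_exp, B_exp = -3.25e-7, 26.67e-7         # Eq. (ABx), up to an overall sign

hDmF_from_S = -2 * f * A_exp                                     # ~ 6.0e-8 GeV
hDmF_from_P = 2 * f * B_exp * (mSig - mN) / ((mSig + mN) * DmF)  # ~ 2.3e-7 GeV
print(f"S-wave fit: {hDmF_from_S:.2g} GeV, P-wave fit: {hDmF_from_P:.2g} GeV")
```

That the leading-order expressions cannot fit both waves with a single \,$h_D^{}-h_F^{}$\, is the familiar S/P-wave problem of hyperon nonleptonic decay.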
\begin{figure}[t]
\begin{picture}(120,80)(-60,-20)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(30,0)
\Line(0,0)(0,30) \Text(0,35)[]{\footnotesize$\cal H$}
\Text(32,0)[l]{\footnotesize$p$} \CBoxc(0,0)(4,4){Black}{Black}
\end{picture}
\begin{picture}(160,80)(-80,-20)
\Text(-52,0)[r]{\footnotesize$\Sigma^+$} \Line(-50,0)(50,0)
\Line(-20,0)(-20,30) \Text(-20,35)[]{\footnotesize$\cal H$}
\Text(0,-5)[]{\footnotesize$\Sigma^+$} \Vertex(-20,0){2}
\Text(52,0)[l]{\footnotesize$p$} \CBoxc(20,0)(4,4){Black}{Black}
\end{picture}
\begin{picture}(160,80)(-80,-20)
\Text(-52,0)[r]{\footnotesize$\Sigma^+$} \Line(-50,0)(50,0)
\Line(20,0)(20,30) \Text(20,35)[]{\footnotesize$\cal H$}
\Text(0,-5)[]{\footnotesize$p$} \Vertex(20,0){2}
\Text(52,0)[l]{\footnotesize$p$} \CBoxc(-20,0)(4,4){Black}{Black}
\end{picture}
\\
\begin{picture}(160,70)(-80,-10)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(30,0) \Line(0,0)(0,45)
\Text(0,50)[]{\footnotesize$\cal H$} \Vertex(0,0){2} \CBoxc(0,25)(4,4){Black}{Black}
\Text(32,0)[l]{\footnotesize$p$} \Text(7,12)[]{\footnotesize$\bar{K}^0$}
\end{picture}
\begin{picture}(160,70)(-80,-10)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(30,0) \Line(0,0)(0,45)
\Line(0,25)(25,25) \Text(0,50)[]{\footnotesize$\cal H$} \Vertex(0,0){2} \Vertex(0,25){2}
\CBoxc(25,25)(4,4){Black}{Black} \Text(32,0)[l]{\footnotesize$p$}
\Text(7,12)[]{\footnotesize$\bar{K}^0$} \Text(13,30)[]{\footnotesize$\bar{K}^0$}
\end{picture} \vspace*{-1ex}
\caption{\label{4qHdiagrams}%
Diagrams contributing to $\,\Sigma^+\to p\cal H$\, arising from the 4-quark operators.
The square vertices come from ${\cal L}_{\rm w}^{({\cal H})}$ in Eqs.~(\ref{Lweak})
and~(\ref{LwH}), whereas the dots are from ${\cal L}_{\rm s}^{({\cal H})}$ in
Eqs.~(\ref{Lstrong}) and~(\ref{LsH}).
}
\end{figure}
We can also derive from ${\cal L}_{\rm s,w}^{({\cal H})}$ the corresponding leading-order
diagrams for $\,K\to\pi\cal H$,\, which are the upper three in Fig.~\ref{4qHdiagrams}
with $\Sigma^+$ ($p$) replaced by $K$ ($\pi$) and yield
\begin{eqnarray} \label{M4q[K->pi-H]}
{\cal M}_{4q}^{}\bigl(K^+\to\pi^+{\cal H}\bigr) &=&
\frac{\gamma_8^{}}{v}\bigl[ 2(k_W^{}-k_G^{})\bigl(m_K^2+m_\pi^2-m_{\cal H}^2\bigr)
+ (k_d^{}-k_u^{})\,m_\pi^2 \bigr]
\nonumber \\ && +\,\,
\frac{\tilde{\gamma}_8^{}}{v}\,4(k_G^{}-k_W^{})\,m_K^2 \,\,,
\end{eqnarray}
\begin{eqnarray} \label{M4q[K->pi0H]}
{\cal M}_{4q}^{}\bigl(K^0\to\pi^0{\cal H}\bigr) &=&
\frac{\gamma_8^{}}{\sqrt2\,v} \Biggl[
2(k_G^{}-k_W^{})\bigl(m_K^2+m_\pi^2-m_{\cal H}^2\bigr)
+ (k_u^{}-k_d^{})\frac{m_\pi^2\,m_K^2}{m_K^2-m_\pi^2} \Biggr]
\nonumber \\ && +\,\,
\frac{\tilde{\gamma}_8^{}}{\sqrt2\,v} \Biggl[ 4(k_W^{}-k_G^{})\,m_K^2
+ (k_d^{}-k_u^{})\frac{m_\pi^2\,m_K^2}{m_K^2-m_\pi^2} \Biggr] \,\,.
\end{eqnarray}
Since $\tilde{\gamma}_8^{}$ is unknown, in evaluating its effect on
$\,K^+\to\pi^+\cal H$\, we will allow it to vary from $-10$ to 10 times $\gamma_8^{}$.
Naively we would expect $\gamma_8^{}$ and $\tilde{\gamma}_8^{}$ to be of the same order.
\subsection{Total contributions}
The total amplitude for \,$K^+\to\pi^+\cal H$\, comes from the sum of the contributions
in Eqs.~(\ref{M2q[K->pi-H]}) and~(\ref{M4q[K->pi-H]}).
If the $CP$-violating terms in the amplitudes are ignored, it is possible for the
2-quark and 4-quark contributions to cancel.
We show this possibility in Fig.~\ref{br(K->piH)}, where we plot the resulting
branching ratio in the SM as a function of the ratio
$\,r_8^{}\equiv\tilde{\gamma}_8^{}/\gamma_8^{}$\, for $\,m_{\cal H}^{}=214.3$\,MeV.\,
We find that \,${\cal B}(K^+\to\pi^+{\cal H})=0$\, when $\,r_8^{}\simeq-5.1$\, and that,
as the figure indicates, for only a very narrow range of $r_8^{}$ around this value does
the branching ratio ever fall below the upper limit in Eq.~(\ref{K-bound}).
In Fig.~\ref{br(K->piH)}, we also plot the corresponding branching ratio of
the isospin-related mode \,$K_L\to\pi^0\cal H$.\,
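The location of the SM cancelation point can be sketched analytically by setting the sum of the real parts of Eqs.~(\ref{M2q[K->pi-H]}) and~(\ref{M4q[K->pi-H]}) to zero; with the rounded inputs below (our assumptions) this gives $r_8^{}\simeq-4.9$, consistent with the $\simeq-5.1$ quoted above given input rounding.

```python
# Hedged sketch of the SM cancelation point: solve Re M_2q + Re M_4q = 0 for
# K+ -> pi+ H, using SM values k_u = k_d = k_W = 1, k_G = 2/9, and the
# rounded inputs Re g_H = -1.3e-6, gamma_8 = -7.8e-8 (the overall 1/v cancels).
mK, mpi, mH = 0.49368, 0.13957, 0.2143
gH_re, gam8 = -1.3e-6, -7.8e-8
kG, kW, ku, kd = 2/9, 1.0, 1.0, 1.0

const = -gH_re * mK**2 + gam8 * (2 * (kW - kG) * (mK**2 + mpi**2 - mH**2)
                                 + (kd - ku) * mpi**2)
slope = gam8 * 4 * (kG - kW) * mK**2     # coefficient of r_8 = gamma~8/gamma8
r8_zero = -const / slope
print(f"r_8 at cancelation: {r8_zero:.2f}")   # ~ -4.9
```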
For \,$\Sigma^+\to p\cal H$,\, the total amplitude results from adding the contributions
in Eqs.~(\ref{MSpH}) and~(\ref{MSpH'}).
Including only the real part of amplitudes again, and using $\,r_8^{}\simeq-5.1$\,
determined above, we plot in Fig.~\ref{br(S->pH)} the branching ratio in the SM as a function
of \,$D$$-$$F$\, for the range in Eq.~(\ref{d-f}).
This figure shows that the curve resulting from the P-wave fit using Eqs.~(\ref{ABt})
and~(\ref{ABx}) satisfies the HyperCP constraints for certain $D$$-$$F$ values.
\begin{figure}[t] \vspace{4ex}
\includegraphics[width=4in]{fig_K2piH_sm.eps} \vspace{-1ex}
\caption{\label{br(K->piH)}%
Contributions of real parts of total amplitudes for $\,K^+\to\pi^+\cal H$\, (solid curve)
and $\,K_L\to\pi^0\cal H$\, (dashed curve) in the SM to their branching ratios as functions
of \,$r_8^{}=\tilde{\gamma}_8^{}/\gamma_8^{}$\, for $\,m_{\cal H}^{}=214.3$\,MeV.\,
The horizontal lines are the corresponding upper bounds in Eqs.~(\ref{K-bound}) and~(\ref{KLbound}).
}\end{figure}
\begin{figure}[ht]
\vspace{4ex}
\includegraphics[width=4in]{fig_S2pH_sm.eps} \vspace{-1ex}
\caption{\label{br(S->pH)}%
Contribution of real part of total amplitude for \,$\Sigma^+\to p\cal H$\, to its branching
ratio in the SM as function of \,$D$$-$$F$\, for $\,m_{\cal H}^{}=214.3$\,MeV\, and
$\,r_8^{}\simeq-5.1$.\,
The solid (dotted) curve corresponds to \,$h_D$$-$$h_F$\, extracted from the
P-wave (S-wave) fit to the $\,\Sigma^+\to p\pi^0$\, data using Eqs.~(\ref{ABt})
and~(\ref{ABx}).
The dashed lines correspond to the upper and lower bounds in the HyperCP result.
}
\end{figure}
In Figs.~\ref{br(KpiH)2hdm} and~\ref{br(SpH)2hdm}, we display the corresponding branching
ratios in the 2HDM~II obtained using the parameters in Eqs.~(\ref{gH2hdm}) and~(\ref{2hdmpar}).
In contrast to the SM case, here \,${\cal B}(K^+\to\pi^+{\cal H})=0$\, when
\,$r_8^{}\simeq6.7$,\, but the vanishing of the $K_L$ rate occurs at a different $r_8^{}$
value due to ${\cal M}_{4q}(K_L\to\pi^0{\cal H})$ and
$-{\rm Re}\,{\cal M}_{4q}(K^+\to\pi^+{\cal H})$ being unequal with \,$k_u\neq k_d$\,
in Eq.~(\ref{kII}).\footnote{We note that, although $\tilde{\gamma}_8^{}$ is not known from
experiment, there are model calculations~\cite{Leutwyler:1989xj,Bijnens:1998ee} of it yielding
\,$|\tilde{\gamma}_8^{}/\gamma_8^{}|\sim0.2\,$.\,
This would make the kaon rates greatly exceed their bounds, as can be seen from
Figs.~\ref{br(K->piH)} and~\ref{br(KpiH)2hdm}.}
As a consequence, the two kaon constraints cannot be satisfied simultaneously.
Furthermore, the \,$\Sigma^+\to p\cal H$\, curve that falls within the HyperCP limits is
the one resulting from the S-wave fit using Eqs.~(\ref{ABt}) and~(\ref{ABx}).
\begin{figure}[t]
\vspace{4ex}
\includegraphics[width=4in]{fig_K2piH_2hdm.eps} \vspace{-1ex}
\caption{\label{br(KpiH)2hdm}%
Contributions of real parts of total amplitudes for $\,K^+\to\pi^+\cal H$\, (solid curve)
and $\,K_L\to\pi^0\cal H$\, (dashed curve) in the 2HDM~II
to their branching ratios as functions of
\,$r_8^{}=\tilde{\gamma}_8^{}/\gamma_8^{}$\, for $\,m_{\cal H}^{}=214.3$\,MeV\,
and the parameters in Eq.~(\ref{2hdmpar}).
The horizontal lines indicate the upper bounds in Eqs.~(\ref{K-bound}) and~(\ref{KLbound}).
}
\end{figure}
\begin{figure}[ht]
\vspace{4ex}
\includegraphics[width=4.5in]{fig_S2pH_2hdm.eps} \vspace{-1ex}
\caption{\label{br(SpH)2hdm}%
Contribution of real part of total amplitude for $\,\Sigma^+\to p\cal H$\, to its branching
ratio in the 2HDM as function of \,$D$$-$$F$\, for $\,m_{\cal H}^{}=214.3$\,MeV\,
and the parameters in Eq.~(\ref{2hdmpar}).
The solid (dotted) curve corresponds to \,$h_D$$-$$h_F$\, extracted from the
P-wave (S-wave) fit to the $\,\Sigma^+\to p\pi^0$\, data using Eqs.~(\ref{ABt})
and~(\ref{ABx}).
The dashed lines correspond to the upper and lower bounds from the HyperCP result.
}
\end{figure}
To summarize this section, a light Higgs boson in the SM can be made compatible with
the empirical bounds for \,$\Sigma^+\to p{\cal H}$,\, while satisfying
the constraints from \,$K\to\pi{\cal H}$,\, if the real part of the 2-quark (penguin)
contribution to the respective amplitudes is combined with the 4-quark contribution.
Moreover, in the 2HDM such a particle can satisfy all these constraints if its diagonal
couplings to the up- and down-type quarks are the same.
For this to happen in either model, it is necessary for the two amplitudes to cancel
precisely, and we have shown that this is possible for certain values of the hadronic
constants $\tilde{\gamma}_8^{}$, $h_D$$-$$h_F$, and $D$$-$$F$.
Although $\tilde{\gamma}_8^{}$ is not known, unlike $h_D$$-$$h_F$ and $D$$-$$F$
which are extractable from hyperon nonleptonic and semileptonic decays~\cite{bcpt},
it has a definite value in the SM and cannot be fine-tuned.
We note that in all the \,$\Sigma^+\to p\cal H$\, cases discussed above
the $\bar{p}\gamma_5^{}\Sigma^+$ term in the amplitude is small compared to
the $\bar{p}\Sigma^+$ term and that, therefore, the $\tilde{\gamma}_8^{}$
contributions are important only in the kaon cases.
We also note that flipping the signs of $A_{p\pi^0}$ and $B_{p\pi^0}$ in Eq.~(\ref{ABx}),
whose overall sign is not fixed by experiment, would prevent the cancelation in the
hyperon case from occurring and would thus result in rates much above the bounds.
It turns out that the imaginary part of the penguin amplitude is sufficient to eliminate
these scalar particles as candidates for the HyperCP events, as it cannot be canceled by
the 4-quark amplitudes~\cite{Cheng:1989ib}, having a size of
\begin{eqnarray}
|{\rm Im}\,g_{\cal H}^{}| &\sim& 5.8 \times 10^{-7} \,\,,
\end{eqnarray}
much larger than allowed by Eq.~(\ref{kbound}) with \,${\rm Im}\,M_{4K}=0$.\,
The scaling of the penguin amplitude to the $B$-meson system is also incompatible with
the \,$b\to s X$\, bound.
In the SM
\begin{eqnarray}
g_{\cal H}^\prime &=& \frac{3\,G_F^{}}{16\sqrt2\,\pi^2}\sum_{q=u,c,t}m_q^2 V^*_{qs}V_{qb}^{}
\,\,\sim\,\, -1.7 \times 10^{-4} \,\,,
\end{eqnarray}
which is much larger than allowed by Eq.~(\ref{ggbs}).
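The quoted SM estimate is again top-dominated; as an illustrative check (with a running $m_t\simeq165$\,GeV, $V_{ts}\simeq-0.0404$, and $V_{tb}\simeq0.999$ assumed here):

```python
import math

# Hedged top-dominated estimate of g'_H in the SM; charm and up terms are
# neglected, and the CKM and mass inputs are illustrative assumptions.
GF = 1.16637e-5                 # Fermi constant (GeV^-2)
mt, Vts, Vtb = 165.0, -0.0404, 0.999
gHp = 3 * GF / (16 * math.sqrt(2) * math.pi**2) * mt**2 * Vts * Vtb
print(f"g'_H ~ {gHp:.2g}")      # ~ -1.7e-4, as quoted
```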
In the 2HDM, the relative size is also too large:
\,$|g_{\cal H}^\prime/g_{\cal H}^{}|\sim |V_{tb}/V_{td}|$.\footnote{One could arrive at
a similar conclusion about $\cal H$ in the 2HDM~II by analyzing the decay
\,$\eta\to\pi^0\cal H$,\, whose amplitude depends on the 4-quark parameters
$\,k_d^{}$$-$$k_u^{}$~\cite{Prades:1990vn}. Thus, from the 90\%-C.L. bound
\,${\cal B}(\eta\to\pi^0{\cal H})<5\times10^{-6}$~\cite{Dzhelyadin:1980ti}, one extracts
\,$|k_d^{}-k_u^{}|<0.45$\, for \,$m_{\cal H}^{}=214.3$\,MeV,\, which is incompatible with
the limit derived from Eq.~(\ref{kII}) plus the LEP constraint
\,$\sin^2(\beta-\alpha)<0.06$~\cite{Abbiendi:2004gn}, namely
\,$|k_d^{}-k_u^{}|=|2\,\cos(\alpha-\beta)/\sin(2\beta)|>1.9.$}
Both of these problems are associated with a structure in which the Higgs-penguin amplitude
is dominated by diagrams with up-type quarks and $W$ bosons in the loops.
It may be possible to remedy these problems in models with additional contributions to
the penguin, for example, from supersymmetric (SUSY) partners.
If the penguin can be sufficiently suppressed, Eqs.~(\ref{MSpH'}) and~(\ref{M4q[K->pi-H]})
suggest that models in which \,$k_W\sim k_G$\, could satisfy the kaon bounds while being able
to account for the HyperCP result.
\section{Pseudoscalar Higgs boson\label{pseudoscalarH}}
We now consider the possibility that the new particle is a light $CP$-odd pseudoscalar,
$\cal A$, in the two-Higgs-doublet model.
Specifically, we do so in types~I and~II of the model.
\subsection{Two-quark $\bm{|\Delta S|=1}$ interactions}
The 2-quark flavor-changing couplings of ${\cal A}$ in the 2HDM are induced at one loop
and have been evaluated in Refs.~\cite{Frere:1981cc,Hall:1981bc}.
The effective Lagrangian is the same in types I and II of the model and can be written as
${\cal L}_{{\cal A}sd}$ in Eq.~(\ref{Asd}), where
\begin{eqnarray} \label{g_P}
g_{\cal A}^{} \,\,=\,\,
\frac{G_F^{}}{16\sqrt2\,\pi^2}\sum_{q=u,c,t}^{}m_q^2\,V_{qd}^*V_{qs}^{}\,
\Biggl(\frac{A_1^{}(q)}{\tan\beta}+\frac{A_2^{}(q)}{\tan^3\beta}\Biggr) \,\,,
\end{eqnarray}
with $A_{1,2}(q)$ being functions of $m_q^{}$, $m_W^{}$, and $m_{H^+}$,
whose expressions can be found in Ref.~\cite{Frere:1981cc}.
The leading-order chiral realization of ${\cal L}_{{\cal A}sd}$ is then
\begin{eqnarray} \label{LsdP}
{\cal L}_{\cal A}^{} &=&
b_D^{} \left\langle \bar{B}{}^{} \left\{ h_{\cal A}^{}, B^{} \right\} \right\rangle
+ b_F^{} \left\langle \bar{B}{}^{} \left[ h_{\cal A}^{}, B^{} \right] \right\rangle
+ b_0^{} \left\langle h_{\cal A}^{} \right\rangle \left\langle\bar{B}{}^{}B^{}\right\rangle
\,+\, \mbox{$\frac{1}{2}$} f^2 B_0^{} \left\langle h_{\cal A}^{} \right\rangle
\,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where
\begin{eqnarray}
h_{\cal A}^{} \,\,=\,\,
-2{i}g_{\cal A}^{}\,\bigl(\xi^\dagger h M\xi^\dagger-\xi M h\xi\bigr)\frac{\cal A}{v} \,\,.
\end{eqnarray}
The leading-order diagrams for $\,K\to\pi\cal A$\, and $\,\Sigma^+\to p\cal A$\,
arising from ${\cal L}_{\cal A}$, plus ${\cal L}_{\rm s}$, are similar to those in the case
of the standard-model Higgs boson, displayed in Fig.~\ref{sdHdiagrams}.
The resulting amplitudes are
\begin{eqnarray} \label{MKpiP}
{\cal M}_{2q}^{}(K^+\to\pi^+{\cal A}) \,\,=\,\,
-\sqrt2\,{\cal M}_{2q}^{}(K^0\to\pi^0{\cal A})
\,\,=\,\, {i}g_{\cal A}^{}\,\frac{m_K^2-m_\pi^2}{v} \,\,,
\end{eqnarray}
\begin{eqnarray} \label{MSpP}
{\cal M}_{2q}^{}(\Sigma^+\to p {\cal A}) \,\,=\,\,
{i}g_{\cal A}^{}\,\frac{m_\Sigma^{}-m_N^{}}{v}\, \bar{p}\Sigma^+
\,-\,
{i}g_{\cal A}^{}\,(D-F)\,\frac{m_\Sigma^{}+m_N^{}}{v}\,\frac{m_K^2}{m_K^2-m_{\cal A}^2}\,
\bar{p}\gamma_5^{}\Sigma^+ \,\,.
\end{eqnarray}
\subsection{Four-quark $\bm{|\Delta S|=1}$ interactions}
The diagonal couplings of ${\cal A}$ to light quarks in the 2HDM are described
by~\cite{Hall:1981bc}
\begin{eqnarray} \label{L_qqP}
{\cal L}_{{\cal A}qq} \,\,=\,\, -\bar{q}\tilde{M}\gamma_5^{}q\, \frac{{i}{\cal A}}{v}
\,\,=\,\,
-\bar{q}_{\rm L}^{}\tilde{M}q_{\rm R}^{}\,\frac{{i}{\cal A}}{v} \,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where
\begin{eqnarray}
q \,\,=\,\, (u\,\,\,d\,\,\,s)^{\rm T} \,\,, \hspace{2em}
\tilde{M} \,\,=\,\, {\rm diag}\bigl(l_u^{}{}\hat{m},\,l_d^{}{}\hat{m},\,l_d^{}{}m_s^{}\bigr) \,\,,
\end{eqnarray}
with
\begin{eqnarray} \label{lI}
l_u^{}{} \,\,=\,\, -l_d^{}{} &=& -\cot\beta \hspace{3em} \mbox{in the 2HDM~I} \,\,,
\\ \label{lII}
l_u^{}{} \,\,=\,\, -\cot\beta \,\,, &&
l_d^{}{} \,\,=\,\, -\tan\beta \hspace{3em} \mbox{in the 2HDM~II} \,\,.
\end{eqnarray}
Since the Lagrangian for the quark masses is
\,${\cal L}_q=-\bar{q}_{\rm L}^{}Mq_{\rm R}^{}\,+\, {\rm H.c.}$,\,
the effect of ${\cal L}_{{\cal A}qq}$ on interactions described by ${\cal L}_{\rm s,w}$
can be taken into account using ${\cal L}_{\rm s,w}$ and substituting $M$ with
\,$\tilde{M}{i}{\cal A}/v$\,~\cite{Grzadkowski:1992av}.
The resulting Lagrangians are
\begin{eqnarray} \label{LsP}
{\cal L}_{\rm s}^{\cal A} \,\,=\,\,
\left( b_D^{}\,\bigl\langle \bar{B}^{} \bigl\{ \tilde{M}_-^{}, B^{} \bigr\} \bigr\rangle
+ b_F^{}\, \bigl\langle \bar{B}^{} \bigl[ \tilde{M}_-^{}, B^{} \bigr] \bigr\rangle
+ b_0^{}\, \bigl\langle \tilde{M}_-^{} \bigr\rangle \bigl\langle \bar{B}^{} B^{} \bigr\rangle
\,+\, \mbox{$\frac{1}{2}$} f^2 B_0^{}\, \bigl\langle\tilde{M}_-^{}\bigr\rangle \right)
\frac{{i}{\cal A}}{v} \,\,,
\end{eqnarray}
\begin{eqnarray} \label{LwP}
{\cal L}_{\rm w}^{\cal A} \,\,=\,\,
2\tilde{\gamma}_8^{}\, f^2 B_0^{}\, \bigl\langle h \xi \tilde{M}_-^{}\xi^\dagger
\bigr\rangle \frac{{i}{\cal A}}{v}
\,\,+\,\, {\rm H.c.} \,\,,
\end{eqnarray}
where
\begin{eqnarray}
\tilde{M}_-^{} \,\,=\,\, \xi^\dagger\tilde{M}\xi^\dagger-\xi\tilde{M}^\dagger\xi \,\,.
\end{eqnarray}
In addition, if the SU(3) singlet $\eta_1^{}$ is included in ${\cal L}_{\rm s,w}^{({\cal A})}$ by
replacing $\Sigma$ with \,$\Sigma\,\exp\bigl(i\sqrt{2/3}\,\eta_1^{}/f\bigr)$,\,
the coupling of ${\cal A}$ to two gluons via the axial anomaly gives rise
to~\cite{Grzadkowski:1992av}
\begin{eqnarray} \label{LeP}
{\cal L}_{\eta_1^{}{\cal A}}^{} \,\,=\,\,
-\mbox{$\frac{1}{2}$}
\Bigl(m_{\eta_1^{}}^2-\mbox{$\frac{2}{3}$}m_K^2-\mbox{$\frac{1}{3}$}m_\pi^2\Bigr)
\Biggl[\eta_1^{}+\frac{f\,{\cal A}}{\sqrt6\,v}(2l_u^{}{}+l_d^{}{})\Biggr]^2 \,\,,
\end{eqnarray}
which modifies the $\eta_1^{}$-${\cal A}$ mixing generated by ${\cal L}_{\rm s}^{\cal A}$.
{From} ${\cal L}_{\rm s,w}^{({\cal A})}$, we derive the leading-order
diagrams shown in Fig.~\ref{4qPkaon} for \,$K\to\pi {\cal A}$,\, where
\begin{eqnarray}
\eta \,\,=\,\, \eta_8^{}\, \cos\theta - \eta_1^{}\,\sin\theta \,\,, \hspace{2em}
\eta' \,\,=\,\, \eta_8^{}\,\sin\theta + \eta_1^{}\, \cos\theta \,\,.
\end{eqnarray}
The resulting amplitudes are
\begin{subequations} \label{MKpiP'}
\begin{eqnarray}
&& \hspace*{-5em} {\cal M}_{4q}^{}\bigl(K^+\to\pi^+ {\cal A}\bigr) \,\,=\,\,
\frac{i\gamma_8^{}\, (l_u^{}{}-l_d^{}{})\,m_\pi^2}{2v}
\nonumber \\ && +\,\,
i\gamma_8^{}\,\bigl[\bigl(2m_K^2+m_\pi^2-3m_{\cal A}^2\bigr)\,c_\theta^{}
- \sqrt8\,\bigl(m_K^2-m_\pi^2\bigr)\,s_\theta^{}\bigr]
\nonumber \\ && \,\,\,\, \times\,\,
\frac{\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,c_\theta^{}
+ \sqrt2\,\bigl[2l_d^{}{}\, m_K^2 + l_u^{}{}\,m_\pi^2
- \bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\,s_\theta^{}}
{6\bigl(m_\eta^2-m_{\cal A}^2\bigr)\,v}
\nonumber \\ && +\,\,
i\gamma_8^{}\,\bigl[\bigl(2m_K^2+m_\pi^2-3m_{\cal A}^2\bigr)\,s_\theta^{}
+\sqrt8\,\bigl(m_K^2-m_\pi^2\bigr)\,c_\theta^{}\bigr]
\nonumber \\ && \,\,\,\, \times\,\,
\frac{\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,s_\theta^{}
- \sqrt2\, \bigl[2l_d^{}{}\,m_K^2+l_u^{}{}\,m_\pi^2
- \bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\, c_\theta^{}}
{6\bigl(m_{\eta'}^2-m_{\cal A}^2\bigr)\,v} \,\,,
\end{eqnarray}
\begin{eqnarray}
&& \hspace*{-5em} {\cal M}_{4q}^{}\bigl(K^0\to\pi^0{\cal A}\bigr) \,\,=\,\,
\frac{i\gamma_8^{}\,\bigl(l_u^{}{}-l_d^{}{}\bigr)\,\bigl(2m_K^2-m_\pi^2-m_{\cal A}^2\bigr)\,m_\pi^2}
{\sqrt8\,\bigl(m_{\cal A}^2-m_\pi^2\bigr)\,v}
\nonumber \\ && +\,\,
i\gamma_8^{}\,\bigl[ \bigl(2m_K^2+m_\pi^2-3m_{\cal A}^2\bigr)\,c_\theta^{}
- \sqrt8\, \bigl(m_K^2-m_\pi^2\bigr)\, s_\theta^{} \bigr]
\nonumber \\ && \,\,\,\, \times\,\,
\frac{\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,c_\theta^{}
+ \sqrt2\,\bigl[2l_d^{}{}\,m_K^2+l_u^{}{}\,m_\pi^2-\bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\,s_\theta^{}}
{6\sqrt2\,\bigl(m_{\cal A}^2-m_\eta^2\bigr)\,v}
\nonumber \\ && +\,\,
i\gamma_8^{}\,\bigl[ \bigl(2m_K^2+m_\pi^2-3m_{\cal A}^2\bigr)\, s_\theta^{}
+ \sqrt8\, \bigl(m_K^2-m_\pi^2\bigr)\, c_\theta^{} \bigr]
\nonumber \\ && \,\,\,\, \times\,\,
\frac{\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,s_\theta^{}
- \sqrt2\,\bigl[2l_d^{}{}\,m_K^2+l_u^{}{}\,m_\pi^2-\bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\,c_\theta^{}}
{6\sqrt2\,\bigl(m_{\cal A}^2-m_{\eta'}^2\bigr)\,v} \,\,,
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
c_\theta^{} \,\,=\,\, \cos\theta \,\,, \hspace{2em}
s_\theta^{} \,\,=\,\, \sin\theta \,\,, \hspace{2em}
\tilde{m}_0^2 \,\,=\,\, m_{\eta_1}^2-\mbox{$\frac{2}{3}$}m_K^2-\mbox{$\frac{1}{3}$}m_\pi^2 \,\,.
\end{eqnarray}
The $\tilde{\gamma}_8^{}$ contributions to this amplitude cancel
completely, as already noted in Ref.~\cite{Grzadkowski:1992av}.
Numerically, $\,\tilde{m}_0^{}\simeq819$\,MeV\, from fitting to the $\eta'$ mass after
diagonalizing the $\eta_{8,1}^{}$ masses derived from the Lagrangians in Eqs.~(\ref{Lstrong})
and~(\ref{LeP}), and consequently $\,\theta\simeq-19.7^\circ$.\,
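These two numbers can be reproduced (a check of ours, assuming the standard leading-order $\eta_8^{}$-$\eta_1^{}$ mass matrix, which the text does not spell out, and illustrative meson masses):

```python
import math

# Reconstructed leading-order eta8-eta1 mass matrix (GeV^2 units):
#   M88 = (4 mK^2 - mpi^2)/3,  M11 = (2 mK^2 + mpi^2)/3 + mtilde0^2,
#   M81 = -2 sqrt(2) (mK^2 - mpi^2)/3.
mK2, mpi2 = 0.4957**2, 0.1370**2
m0t2 = 0.819**2                      # mtilde_0^2 fitted to the eta' mass

M88 = (4*mK2 - mpi2) / 3
M11 = (2*mK2 + mpi2) / 3 + m0t2
M81 = -2*math.sqrt(2) * (mK2 - mpi2) / 3

# Mixing angle from diagonalizing the 2x2 matrix:
theta = 0.5 * math.atan2(2*M81, M11 - M88)
print(f"theta ~ {math.degrees(theta):.1f} deg")   # ~ -19.7 deg

# The heavier eigenvalue reproduces the eta' mass:
avg, d = (M88 + M11)/2, (M11 - M88)/2
m_etap = math.sqrt(avg + math.hypot(d, M81))
print(f"m_eta' ~ {1000*m_etap:.0f} MeV")          # ~ 958 MeV
```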
\begin{figure}[b]
\begin{picture}(120,90)(-60,-10)
\Text(-32,0)[r]{\footnotesize$K$} \Line(-30,0)(30,0)
\Line(0,0)(0,30) \Text(0,35)[]{\footnotesize${\cal A}$}
\Text(32,0)[l]{\footnotesize$\pi$} \CBoxc(0,0)(4,4){Black}{Black}
\end{picture}
\begin{picture}(120,90)(-60,-10)
\Text(-32,0)[r]{\footnotesize$K$} \Line(-30,0)(30,0) \Line(0,0)(20,20) \Vertex(0,0){2}
\Line(0,0)(0,30) \Text(0,35)[]{\footnotesize${\cal A}$} \Text(19,9)[]{\footnotesize$K^0$}
\Text(32,0)[l]{\footnotesize$\pi$} \CBoxc(20,20)(4,4){Black}{Black}
\end{picture}
\begin{picture}(120,90)(-60,-10)
\Text(-32,0)[r]{\footnotesize$K$} \Line(-30,0)(30,0) \Line(0,0)(20,20) \Vertex(0,30){2}
\Line(0,0)(0,50) \Text(0,55)[]{\footnotesize${\cal A}$} \Text(19,9)[]{\footnotesize$K$}
\Text(-2,15)[r]{\footnotesize$\pi^0,\eta,\eta'$} \Text(32,0)[l]{\footnotesize$\pi$}
\CBoxc(20,20)(4,4){Black}{Black} \Vertex(0,0){2}
\end{picture}
\begin{picture}(120,90)(-60,-10)
\Text(-32,0)[r]{\footnotesize$K$} \Line(-30,0)(30,0) \Vertex(0,30){2}
\Line(0,0)(0,50) \Text(0,55)[]{\footnotesize${\cal A}$} \CBoxc(0,0)(4,4){Black}{Black}
\Text(2,15)[l]{\footnotesize$\pi^0,\eta,\eta'$} \Text(32,0)[l]{\footnotesize$\pi$}
\end{picture} \vspace*{-1ex}
\caption{\label{4qPkaon}
Diagrams contributing to $\,K\to\pi {\cal A}$\, arising from the 4-quark operators.
The dots come from ${\cal L}_{\rm s}^{({\cal A})}$ in Eqs.~(\ref{Lstrong}) and~(\ref{LsP}),
whereas the square vertices are from ${\cal L}_{\rm w}^{({\cal A})}$ in Eqs.~(\ref{Lweak})
and~(\ref{LwP}).
}
\end{figure}
The leading-order 4-quark contributions to \,$\Sigma^+\to p\cal A$\, arise from the diagrams
in Fig.~\ref{4qPdiagrams'} and can be expressed as
\begin{eqnarray} \label{MSpP'}
{\cal M}_{4q}^{}(\Sigma^+\to p {\cal A}) &=&
{i}\bar{p}\, \bigl(A_{p{\cal A}}^{}-B_{p{\cal A}}^{}\gamma_5^{}\bigr)\,\Sigma^+ \;,
\end{eqnarray}
where
\begin{eqnarray}
A_{p{\cal A}}^{} &=&
\frac{f\,A_{p\pi^0}^{}\,\bigl(l_d^{}{}-l_u^{}{}\bigr)\,m_\pi^2}
{2\bigl(m_{\cal A}^2-m_\pi^2\bigr)\,v}
\nonumber \\ && \!\!\! +\,\,
\frac{f\,A_{p\pi^0}^{} \left\{
\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,c_\theta^2
+ \sqrt2\, \bigl[ 2l_d^{}{}\,m_K^2+l_u^{}{}\,m_\pi^2-\bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\,
c_\theta^{}s_\theta^{} \right\}}
{2 \bigl(m_\eta^2-m_{\cal A}^2\bigr)\, v}
\nonumber \\ && \!\!\! +\,\,
\frac{f\,A_{p\pi^0}^{} \left\{
\bigl[4l_d^{}{}\,m_K^2-\bigl(3l_d^{}{}+l_u^{}{}\bigr)\,m_\pi^2\bigr]\,s_\theta^2
- \sqrt2\,\bigl[2l_d^{}{}\,m_K^2+l_u^{}{}\,m_\pi^2-\bigl(l_d^{}{}+2l_u^{}{}\bigr)\,\tilde{m}_0^2\bigr]\,
c_\theta^{}s_\theta^{} \right\}}
{2 \bigl(m_{\eta'}^2-m_{\cal A}^2\bigr)\, v} \,\,,
\hspace*{2em}
\end{eqnarray}
\begin{eqnarray}
B_{p{\cal A}}^{} &=&
\frac{f\,B_{p\pi^0}^{}\,\bigl(l_d^{}-l_u^{}\bigr)\,m_\pi^2}{2\bigl(m_{\cal A}^2-m_\pi^2\bigr)\,v}
\nonumber \\ && \!\!\! +\,\,
\frac{f\,B_{p\pi^0}^{} \left\{
\bigl[4l_d^{}\,m_K^2-\bigl(3 l_d^{}+l_u^{}\bigr)\,m_\pi^2\bigr]\,c_\theta^2
+ \sqrt2\, \bigl[ 2l_d^{}\,m_K^2+l_u^{}\,m_\pi^2-\bigl(l_d^{}+2l_u^{}\bigr)\,\tilde{m}_0^2\bigr]\,
c_\theta^{}s_\theta^{} \right\}}
{2 \bigl(m_\eta^2-m_{\cal A}^2\bigr)\, v}
\nonumber \\ && \!\!\! +\,\,
\frac{f\,B_{p\pi^0}^{} \left\{
\bigl[4l_d^{}\,m_K^2-\bigl(3 l_d^{}+l_u^{}\bigr)\,m_\pi^2\bigr]\,s_\theta^2
- \sqrt2\, \bigl[ 2l_d^{}\,m_K^2+l_u^{}\,m_\pi^2-\bigl(l_d^{}+2l_u^{}\bigr)\,\tilde{m}_0^2\bigr]\,
c_\theta^{}s_\theta^{} \right\}}
{2 \bigl(m_{\eta'}^2-m_{\cal A}^2\bigr)\, v} \,\,,
\hspace*{2em}
\end{eqnarray}
where $A_{p\pi^0}$ and $B_{p\pi^0}$ are given in Eq.~(\ref{ABt}).
We note that contributions with $\gamma_8^{}$ or $\tilde{\gamma}_8^{}$ appear only
at next-to-leading order.
\begin{figure}[b]
\begin{picture}(120,100)(-60,-20)
\Text(-32,0)[r]{\footnotesize$\Sigma^+$} \Line(-30,0)(30,0)
\Line(0,0)(0,50) \Text(0,55)[]{\footnotesize${\cal A}$} \Vertex(0,30){2}
\Text(32,0)[l]{\footnotesize$p$} \CBoxc(0,0)(4,4){Black}{Black}
\Text(2,20)[l]{\footnotesize$\pi^0,\eta,\eta'$}
\end{picture}
\begin{picture}(160,100)(-80,-20)
\Text(-52,0)[r]{\footnotesize$\Sigma^+$} \Line(-50,0)(50,0)
\Line(-20,0)(-20,50) \Text(-20,55)[]{\footnotesize${\cal A}$}
\Text(0,-5)[]{\footnotesize$\Sigma^+$} \Vertex(-20,0){2} \Vertex(-20,30){2}
\Text(52,0)[l]{\footnotesize$p$} \CBoxc(20,0)(4,4){Black}{Black}
\Text(-18,20)[l]{\footnotesize$\pi^0,\eta,\eta'$}
\end{picture}
\begin{picture}(160,100)(-80,-20)
\Text(-52,0)[r]{\footnotesize$\Sigma^+$} \Line(-50,0)(50,0)
\Line(20,0)(20,50) \Text(20,55)[]{\footnotesize${\cal A}$}
\Text(0,-5)[]{\footnotesize$p$} \Vertex(20,0){2} \Vertex(20,30){2}
\Text(52,0)[l]{\footnotesize$p$} \CBoxc(-20,0)(4,4){Black}{Black}
\Text(18,20)[r]{\footnotesize$\pi^0,\eta,\eta'$}
\end{picture} \vspace*{-2ex}
\caption{\label{4qPdiagrams'}%
Diagrams contributing to $\,\Sigma^+\to p\cal A$\, arising from the 4-quark operators.
The square vertices come from ${\cal L}_{\rm w}$ in Eq.~(\ref{Lweak}), whereas the dots
are from the Lagrangians in Eqs.~(\ref{Lstrong}), (\ref{LsP}), and~(\ref{LeP}).
}
\end{figure}
\subsection{Total contributions}
The total amplitudes for \,$K\to\pi\cal A$\, result from adding the contributions
in Eqs.~(\ref{MKpiP}) and~(\ref{MKpiP'}).
If the $CP$-violating terms in the amplitudes are ignored, it is possible for
the 2-quark and 4-quark contributions to cancel.
We show this possibility in Fig.~\ref{br(K->piP)}, where we plot the resulting branching ratios
as functions of the charged-Higgs-boson mass for $\,m_{\cal A}^{}=214.3$\,MeV\, and different
$\tan\beta$ values in the 2 versions of the 2HDM.
The total amplitude for \,$\Sigma^+\to p {\cal A}$\, is the sum of the contributions in
Eqs.~(\ref{MSpP}) and~(\ref{MSpP'}).
If the experimental values of $A_{p\pi^0}$ and $B_{p\pi^0}$ in Eq.~(\ref{ABx}),
as well as the middle value $\,D-F=0.25$,\, are used in the total amplitude,
the resulting branching ratios in the 2HDM are displayed in Fig.~\ref{br(S->pP)}.
\begin{figure}[t]
\vspace{2ex} \hspace*{-2em}
\includegraphics[width=7in]{fig_K2piA_2hdm.eps} \vspace*{-1em}
\caption{\label{br(K->piP)}%
Contributions of real parts of total amplitudes for $\,K^+\to\pi^+\cal A$\, (solid curve)
and $\,K_S\to\pi^0\cal A$\, (dashed curve) in the 2HDM to their branching ratios as
functions of charged-Higgs-boson mass for $\,m_{\cal A}^{}=214.3$\,MeV\,
and $\,\tan\beta=4\,(0.9)$ in type I\,(II) of the model.
The horizontal lines indicate the upper bounds in Eqs.~(\ref{K-bound}) and~(\ref{KSbound}).
}
\end{figure}
\begin{figure}[ht] \vspace{1ex}
\includegraphics[width=4in]{fig_S2pA_2hdm.eps} \vspace*{-1em}
\caption{\label{br(S->pP)}%
Contribution of real part of total amplitude for $\,\Sigma^+\to p\cal A$\, to its branching
ratio in the 2HDM~I (solid curve) and~II~(dotted curve) as function of charged-Higgs-boson
mass for $\,m_{\cal A}^{}=214.3$\,MeV\, and $\,\tan\beta=4\,(0.9)$ in type I\,(II).
The dashed lines indicate the bounds from the HyperCP result.
}
\end{figure}
We have found that only one of the kaon bounds can be satisfied if the HyperCP
result is assumed to be mediated by $\cal A$ in the 2HDM.
However, for certain $\tan\beta$ and $m_{H^+}$ values near the ones indicated in
Figs.~\ref{br(K->piP)} and~\ref{br(S->pP)}, all the kaon and hyperon constraints
can be nearly satisfied simultaneously.
Part of the difficulty in satisfying all of the constraints lies with the vanishing of
the $K^+$ and $K_S$ rates occurring at different $m_{H^+}$ values, which is due to
${\cal M}_{4q}(K_S\to\pi^0{\cal A})$ and $-{\rm Re}\,{\cal M}_{4q}(K^+\to\pi^+{\cal A})$
being unequal with $\,l_u\neq l_d$\, in Eqs.~(\ref{lI}) and~(\ref{lII}).
We note that the situation is not much different if the signs of $A_{p\pi^0}$ and
$B_{p\pi^0}$ in Eq.~(\ref{ABx}) are both flipped.
To summarize this section, we have found that it is possible for the real part of
the penguin amplitude to cancel against the 4-quark amplitude to approximately satisfy
the kaon bounds while explaining the HyperCP observation with a 2HDM pseudoscalar.
Unlike the scalar case, there is no free hadronic parameter at leading order in $\chi$PT
in this case.
The cancelation must happen as a function of the short-distance parameters that determine
the size of the amplitudes.
A feature shared by scalars and pseudoscalars in the 2HDM is that the imaginary part of
the penguin amplitude is incompatible with the kaon bounds in Eq.~(\ref{kbound}) and has no
counterpart that could cancel it in the 4-quark amplitude.
A related problem is that the scaling of the penguin amplitude to the $B$ system is also
incompatible with observation.
In view of these flaws, it is tempting to search for a model in which the penguin
amplitudes are completely suppressed, and the 2HDM~II seems to allow us to do that.
In the 2HDM II the penguin amplitudes are proportional to $l_u$, whereas the 4-quark
amplitudes receive contributions from both $l_u$ and $l_d$ in Eq.~(\ref{lII}).
Thus the model in the large-$\tan\beta$ limit has \,$l_u\to 0$.\,
Unfortunately, in this limit $l_d$ induces 4-quark amplitudes resulting in
\begin{eqnarray}
\frac{{\cal B}(\Sigma^+ \to p{\cal A})}{{\cal B}(K^+\to\pi^+{\cal A})} &\to& 0.025 \,\,,
\end{eqnarray}
which is inconsistent with Eqs.~(\ref{kbound}) and~(\ref{sigbound}).
In the 2HDM~I, which has $l_u$ and $l_d$ given in Eq.~(\ref{lI}), the 4-quark
amplitudes alone yield
\begin{eqnarray}
\frac{{\cal B}_{4q}^{}(\Sigma^+\to p{\cal A})}{{\cal B}_{4q}^{}(K^+\to\pi^+{\cal A})} &=& 0.53
\end{eqnarray}
for all values of $\tan\beta$, which is consistent with Eqs.~(\ref{kbound}) and~(\ref{sigbound}).
However, in this case it is the penguin amplitude that eliminates the pseudoscalar as
a possible HyperCP candidate.
These results suggest the ingredients of a model that can satisfy all constraints.
It is necessary for the penguin amplitudes to be dominated by additional particles, such as
SUSY partners, in such a way that $g_{\cal A}$ is not proportional to top-quark CKM angles.
We have sketched a scenario where this happens in Ref.~\cite{usletter}.
\section{Summary and Conclusions\label{final}}
We have summarized the existing constraints on the production of a light Higgs boson in kaon and
$B$-meson decays, as well as the implication of attributing the HyperCP events to the production
of a light Higgs boson in hyperon decay.
Production rates for such a particle in kaon and hyperon decays receive contributions from two-
and four-quark operators that can be comparable in some cases.
We have investigated the interplay of both production mechanisms with the aid of leading-order
chiral perturbation theory.
To this effect, we have implemented the low-energy theorems governing the couplings of light
(pseudo)scalars to hadrons at leading order in baryon $\chi$PT, generalizing
existing studies for kaon decay.
We first discussed the case of a scalar Higgs boson. We found that the leading-order amplitudes
in both kaon and hyperon decays depend on an unknown low-energy constant $\tilde\gamma_8^{}$,
as well as known constants from the hyperon sector.
This constant is connected to a weak-mass term in the chiral Lagrangian that can be
rotated away for processes that involve only pseudo-Goldstone bosons and is, therefore, unknown.
We applied our results to the process \,$\Sigma^+\to p X$\, relevant to the HyperCP
observation of \,$\Sigma^+\to p\mu^+\mu^-$.\,
We showed that the two-quark contributions in the SM and its 2HDM extensions are too large to
explain the HyperCP observation.
However, we also showed that there can be cancelations between the $CP$-conserving two- and
four-quark contributions to this process that lead to a rate comparable in size to the HyperCP
observation for both the SM and the 2HDM.
Such cancelations occur for a certain range of known constants from the hyperon sector,
the effect of $\tilde\gamma_8^{}$ being small.
In both cases, however, the two-quark penguin contribution has an imaginary
($CP$ violating) part that is too large to be compatible with the HyperCP result.
In the SM and in the 2HDM, the four-quark contributions have a $CP$-violating part that is
much smaller than that of the penguin amplitude and hence these models are ruled out as
explanations for the HyperCP observation.
More general models with additional $CP$-violating phases may be able to address this issue.
In addition, in these models the scaling of the two-quark operator to the $B$ system is
incompatible with the nonobservation of a light scalar in $B$ decay.
We then discussed the case of a pseudoscalar Higgs boson in the 2HDM.
In this case we computed the leading-order amplitudes in $\chi$PT and included, as well,
certain higher-order terms mediated by the $\eta^\prime$ state.
The resulting amplitudes for both kaon and hyperon decays do not depend on any unknown
hadronic parameters.
In particular, they do not depend on $\tilde\gamma_8^{}$, as observed in
Ref.~\cite{Grzadkowski:1992av}.
We then applied our results to the \,$\Sigma^+\to p\cal A$\, process.
Once again we found that the real part of the amplitude can be consistent with the HyperCP
observation for a certain range of parameters in the 2HDM ($\tan\beta$ and $m_{H^+}$),
but that the imaginary part of the penguin amplitude is too large.
The scaling of the two-quark operator to the $B$ system also produces a $B\to X_s^{}\cal A$
rate that is too large. Both of these problems can be solved in more general models that
modify the phase and scaling with CKM angles of the two-quark operator.
In conclusion, we have shown that it is possible to interpret the HyperCP observation as
evidence for a light Higgs boson, although it is not easy to arrange this in a model.
Typical Higgs-penguin operators have three problems:
\vspace*{-1ex}
\newcounter{num}
\begin{list}{(\alph{num})}{\usecounter{num}}
\item if they have the right size to fit the HyperCP observation, they induce \,$K\to\pi X$\,
at rates larger than the existing bounds;
\vspace*{-1ex}
\item if they are dominated by loop diagrams involving up-type quarks and $W$ bosons, they
have a~$CP$ phase that is too large;
\vspace*{-1ex}
\item if they are dominated by loop diagrams involving up-type quarks and $W$ bosons, their
scaling to the $B$ system is incompatible with the nonobservation of \,$B\to X_sX$.
\vspace*{-1ex}
\end{list}
We have found in this paper that (a) can be solved in some cases by the addition of the effects
of four-quark operators. We have suggested that more general models may be constructed to solve
(b) and (c). To show that this is possible, we have constructed a specific example in
Ref.~\cite{usletter}.
Disregarding existing bounds from kaon and $B$-meson decays, we have shown that many light
Higgs bosons have couplings of the right size to explain the HyperCP observation.
We think this is sufficiently intriguing to warrant a revisiting of the kaon and $B$ decay
results.
In particular, the $B$ factories are still operational and could reanalyze the very low
$m_{\mu\mu}^{}$ invariant-mass region in their measurements of \,$B\to X_s\mu^+\mu^-$\, modes.
The NA48 experiment might also be able to revisit the kaon modes.
\begin{acknowledgments}
The work of X.G.H. was supported in part by NSC and NCTS. The work of G.V. was supported
in part by DOE under contract number DE-FG02-01ER41155.
We thank Laurence Littenberg and Rainer Wanke for useful discussions on the kaon bounds and
Soeren Prell for useful discussions on the $B$ bounds.
\end{acknowledgments}
\section{Introduction}
Due to the inherent complexity of real world tasks (e.g., search and rescue, fire extinguishing, information collection) and limited capabilities of the available robots in the market, it is almost impossible for one robot to finish a complex task. As a result, cooperation among the robots is one of the basic requirements for any task completion. One form of cooperation among robots is coalition formation, where a subset of available robots forms a team that is assigned to a specific task.
In this paper, we do not study how the task is subdivided among the members of a robot coalition. Instead, we study how a set of $N$ robots can be optimally divided into $M$ coalitions to complete $M$ tasks. Most of the previous research in coalition formation deals with software agents \cite{michalak2010distributed,Rahwan08,ramchurn10}. But
real-world complexities, such as the on-board computational power of the robots and constrained communication, tend to limit the number of robots these algorithms can handle. The approach of this paper successfully handles as many as 100 robots.
The problem of coalition formation for multi-robot task allocation can be described as follows: given a set of $M$ tasks and $N$ robots ($N > M$), how can the robot group be optimally partitioned into coalitions such that each coalition is assigned to a unique task? It has been proven that the multi-robot coalition formation problem for task allocation is NP-hard both to solve exactly and to approximate within a factor of $O(M^{1-\epsilon}), \forall \epsilon>0$ \cite{adams2011coalition}. Therefore, solving the coalition formation problem for multi-robot task allocation in a reasonable amount of time while retaining the quality of the formed coalitions is an extremely challenging problem. In this paper, we propose an efficient algorithm for coalition formation by a group of mobile robots for task allocation, designed so that it can run on the robots' on-board computers.
We present a novel {\em value} function for the tasks which is used to assign only the required number of robots to each task. We also present a distance-based {\em cost} function to minimize the travel distances by the robots to the tasks. Our approach employs a clustering-based coalition formation methodology \cite{bansal2004correlation}. Clustering is a technique of gathering `common' elements under the same label. We exploit this idea to allocate nearby robots considered as `common' to the same cluster, centered around a specific task, while distant robots are assigned to different tasks. Results show that our approach finds a near-optimal solution (up to $97.66\%$ of the optimal) in a negligible amount of time.
\section{Related Work}
Autonomous robots need to cooperate with each other to complete complex tasks at hand. Forming teams (or coalitions) for efficient task completion is a computationally hard problem. One of the earliest studies on task allocation via coalition formation is due to Shehory and Kraus \cite{Shehory98}, in which the authors proposed a greedy algorithm that is guaranteed to find a solution within a factor of $(k + 1)$ of the optimal, where $k$ is the maximum size of any coalition formed. Coalition formation by multi-agent systems was studied extensively in the following decade. Optimal \cite{Rahwan09, rahwan2007anytime} and near-optimal \cite{rahwan2007near} solutions for coalition formation have been proposed. Most of these algorithms employ a search-based technique to find the best solution. Although coalition formation algorithms have been developed frequently in the last decade, very few of them are targeted at multi-robot/agent task allocation \cite{adams2011coalition}. Taxonomies of coalition formation algorithms for task allocation are proposed in \cite{lau2003task, gerkey2004formal}. Following the taxonomy in \cite{gerkey2004formal}, our work in this paper can be classified as addressing a single-task robot, multi-robot task problem. In other words, each robot is capable of completing only one task at a time, and each task requires more than a single robot to finish. A distributed solution that formulates the coalition formation problem for multi-agent task allocation as a distributed set-partitioning problem is proposed in \cite{tosic2004maximal}. In \cite{adams2011coalition}, the authors proposed a modified version of the algorithm of \cite{Shehory98}; the complexity of their algorithm, $O(N^{\frac{3}{2}}M)$, is polynomial, compared to that of Shehory and Kraus \cite{Shehory98}, which is $O(N^kM)$, exponential in the size of the largest coalition.
However, both \cite{adams2011coalition,Shehory98} report similar sub-optimality guarantees.
Our proposed solution in this paper generates coalitions using a correlation clustering technique \cite{bansal2004correlation,demaine2003correlation}. It is a very commonly used technique in machine learning and pattern recognition. In correlation clustering, highly correlated points, robots in our case, are assigned to the same clusters whereas the points with low correlation are allocated to different clusters. One cluster is formed centering on one specific task and the result of this clustering process is equivalent to the generation of non-overlapping coalitions of robots.
\section{Problem Definition and Notations}
Let $R = \{r_1,r_2,..,r_N\}$ denote a set of $N$ robots. Each robot is characterized by a tuple $\langle P_i,\theta_i \rangle$ where $P_i$ and $\theta_i$ respectively denote the position and the orientation of robot $r_i$. We assume that each robot is able to localize itself in the environment using an on-board GPS. Let $T = \{t_1,t_2,..,t_M\}$ denote a set of $M$ tasks ($N > M$). Any task $t_j$ is characterized by a tuple $\langle P_j,O_j \rangle$ where $P_j$ and $O_j$ respectively denote the task location and the optimal number of robots needed to finish that task. The value of $O_j$ for each task $t_j$ is pre-defined, and this information is assumed to be available to the robots. We assume that $\sum_{{1 \leq j \leq M}} O_j = N$. The robots are homogeneous in nature, i.e., any robot can be exchanged with any other robot. The environment, assumed to be a rectangle of size $length \times width$, is discretized into a set of square cells (each denoted by $cell$), and one cell can be occupied by at most one robot or one task at a time.
A {\em coalition} $c \subseteq R$ is a team of robots assigned to one task.
Without loss of generality, we sometimes call a coalition a cluster. A {\em coalition structure} $CS$, defined as a partition, can be thought of as a set of non-overlapping clusters which covers all the robots.
Let $CS=\{c_1,c_2,\cdots,c_M\}$ denote a coalition structure with $c_i$ assigned to task $t_i$ for $i=1,2,\cdots,M$. Let $\zeta$ denote the set of all possible partitions, and hence $CS \in \zeta$.
\newline
\textbf{Value Function} As the robots are homogeneous, the effectiveness of any robot coalition depends solely on the size of the coalition. We define $Val: CS \rightarrow \mathbb{R}$, a {\em value function} that
assigns a virtual reward to a coalition and is defined as
\begin{equation}
Val(c_i) = O_i^2 - (O_i - |c_i|)^2.
\label{eqn_task_val}
\end{equation}
$Val$ ensures that if the coalition $c_i$ assigned to a certain task $t_i$ has $O_i$ members in it, i.e., $|c_i| = O_i$, then that coalition earns the maximum possible value. If the size of the coalition is greater or smaller than the associated $O_i$, then the value of the coalition is not the highest. Moreover, if $|c_i| > 2O_i$, then the value of the coalition becomes negative. This makes sure that none of the formed coalitions is too large in size if that is not required by the pre-defined $O$ value. We define the value of a coalition structure as the summation of the values of all the coalitions in it, i.e., $Val(CS) = \sum_{\forall c_i \in CS} Val(c_i)$. Note that the maximum value of any coalition structure can be computed as $MAX\_VAL = \sum_{{1 \leq j \leq M}} O_j^2$.\newline
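A minimal sketch of this value function (illustrative; the function and variable names are ours):

```python
def val(O_i: int, size: int) -> int:
    """Value of a coalition of `size` robots on a task needing O_i robots."""
    return O_i**2 - (O_i - size)**2

# A coalition of exactly O_i robots earns the maximum value O_i^2:
print(val(3, 3))              # -> 9
# Over- or under-staffing lowers the value symmetrically ...
print(val(3, 2), val(3, 4))   # -> 8 8
# ... and a coalition larger than 2*O_i has negative value:
print(val(3, 7))              # -> -7

# Maximum value of a coalition structure for tasks needing O = [2, 3, 4]:
MAX_VAL = sum(O**2 for O in [2, 3, 4])
print(MAX_VAL)                # -> 29
```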
\textbf{Cost Function }The robots are initially randomly placed in the environment. When a robot is assigned to a task as part of a coalition, it needs to move to the task location to complete the task. Each robot spends a certain amount of energy (in terms of battery power) to move from one point to another. This is represented using the proposed {\em cost-distance} function, defined as $cost_{dist}(r_i, t_j) = \frac{d(P_i, P_j)}{\sqrt{length^2+width^2+1}}$ where $d$ denotes the Euclidean distance between two locations.
We next define the quantity $f_{val}(r_i, t_j)=1-cost_{dist}(r_i, t_j)$ to represent the probability
of a pair of robots or robot-task pair being in the same coalition. From here, we can calculate the `similarity' between a task and a robot \cite{demaine2003correlation} as
\begin{equation}
w(r_i, t_j) = \ln\left(\frac{f_{val}(r_i, t_j)}{1-f_{val}(r_i, t_j)}\right).
\label{eq_edge_weight}
\end{equation}
We use the same function to represent the similarity between two robots $r_i, r_j$. A higher value of $w$ indicates that the members of the robot pair or robot-task pair are `similar' and should be in the same coalition, while a lower value of $w$ means that they are `dissimilar' and should be in different coalitions. To ensure that no two tasks are part of the same coalition, we define the similarity between any pair of tasks to be highly negative.
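The cost-distance, pair probability, and log-odds similarity above can be sketched as follows. The environment dimensions ($length = width = 100$) are an assumption taken from the experimental setup, and all names are illustrative:

```python
import math

LENGTH, WIDTH = 100.0, 100.0
DIAG = math.sqrt(LENGTH ** 2 + WIDTH ** 2 + 1)  # normalizing denominator

def cost_dist(p, q):
    """Distance-based cost, normalized to (0, 1) by the environment diagonal."""
    return math.dist(p, q) / DIAG

def f_val(p, q):
    """Probability-like score that the pair belongs in the same coalition."""
    return 1.0 - cost_dist(p, q)

def w(p, q):
    """Log-odds similarity weight, Eq. (edge weight)."""
    f = f_val(p, q)
    return math.log(f / (1.0 - f))

# Nearby pairs are 'similar' (w > 0); far-apart pairs are 'dissimilar' (w < 0).
assert w((0, 0), (10, 10)) > 0
assert w((0, 0), (95, 95)) < 0
```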
\newline
\textbf{Problem Objective} The problem objective is to find a set of coalitions for all the tasks such that the generated coalition structure has the minimum cost, while its value is the maximum. For each coalition $c_i \in CS$ assigned to task $t_i$, a \textit{cohesion} function is defined as follows:
\[Co(c_i) = \sum\limits_{r_j \in c_i}w(r_j, t_i) + \sum\limits_{r_j,r_l \in c_i, j\neq l}w(r_j, r_l)\] and the cohesion quality of $CS$ is
\[ CQ(CS) = \sum\limits_{\forall c_i \in CS} Co(c_i).\]
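A minimal sketch of computing $Co(c_i)$ and $CQ(CS)$ from precomputed pairwise weights; the dictionary-based representation is an assumption for illustration:

```python
from itertools import combinations

def cohesion(coalition, task, weights):
    """Co(c_i): robot-task weights plus all intra-coalition robot-pair weights."""
    co = sum(weights[(r, task)] for r in coalition)
    co += sum(weights[(r1, r2)] for r1, r2 in combinations(coalition, 2))
    return co

def cq(cs, weights):
    """CQ(CS): sum of cohesions over all (task, coalition) pairs in the structure."""
    return sum(cohesion(c, t, weights) for t, c in cs.items())

weights = {('r1', 't1'): 2.0, ('r2', 't1'): 1.0, ('r1', 'r2'): 0.5}
assert cohesion(['r1', 'r2'], 't1', weights) == 3.5
assert cq({'t1': ['r1', 'r2']}, weights) == 3.5
```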
Now we can formally define the multi-robot task allocation problem as follows:
\begin{defi}
Given a set of $N$ robots and $M$ tasks, with each task $t_i$ requiring $O_i$ robots to complete it, find the coalition structure $CS^*$ containing $M$ coalitions (to be assigned to the $M$ tasks) where:
\begin{align*}
CS^* &= \arg \max_{CS \in \zeta} CQ(CS),\\
\noalign{also satisfying}
Val(CS^*) &= MAX\_VAL.
\end{align*}
\end{defi}
\section{Coalition Formation Algorithm\\ for Task Allocation}
The total number of possible coalition structures (partitions) is exponential in the number of robots. For $N$ robots, and a fixed size $M, 1\leq M \leq N$, the total number of coalition structures containing exactly $M$ non-empty coalitions is given by the Stirling number of the second kind: $S(N,M) = \frac{1}{M!}\sum_{i=0}^{M}(-1)^{i}\binom{M}{i}(M-i)^N$.
Thus the number of possible coalition structures grows exponentially in the number of robots. With the goal of reducing the complexity of finding the optimal coalition structure, we use the framework of \cite{demaine2003correlation} which models the set of robots and tasks as a weighted complete graph. The robots and tasks are represented by vertices of the graph and edge weights correspond to the `similarity' of a pair of robots or robot-task being in the same coalition. The cohesion quality of a given coalition structure (partition of robots into coalitions) is calculated by summing the edge weights of all edges that are between robots in the same coalition. If two robots are in different coalitions, the weight of the edge between them is not included in the sum. To actually generate a coalition structure, we use a graph partitioning algorithm to split the vertices (robots) into groups (coalitions) under the constraint that the generated coalition structure has close to optimal cohesion quality.
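The Stirling number above can be evaluated directly from the quoted inclusion-exclusion formula; a short sketch:

```python
from math import comb, factorial

def stirling2(n, m):
    """S(n, m) = (1/m!) * sum_i (-1)^i * C(m, i) * (m - i)^n."""
    total = sum((-1) ** i * comb(m, i) * (m - i) ** n for i in range(m + 1))
    return total // factorial(m)   # the sum is always divisible by m!

assert stirling2(4, 2) == 7            # partitions of {1,2,3,4} into 2 non-empty blocks
assert stirling2(10, 2) == 2 ** 9 - 1  # = 511; grows exponentially in n
```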
\subsection{Linear Programming Formulation for Graph Partitioning}
For our purposes, $G = (A, E, w)$ will be an undirected, weighted, complete (fully-connected) graph. $A$ is the set of vertices which corresponds to the set of robots $R$ and set of tasks $T$ i.e. $A=T\cup R$, and $E$ is the edge set which consists of all possible pairs of robots and tasks from $A$ (thus $|E| = \binom{|R|+|T|}{2}$). The edge weight function $w:E\rightarrow \mathbb{R}$ is as defined in Eq. (\ref{eq_edge_weight}).
For any given coalition structure $CS$, the \emph{penalty} is defined by
\begin{equation}
Pen(CS) = Pen_p(CS) + Pen_m(CS),
\end{equation}
where $Pen_p(CS)$ corresponds to the sum of positive edge weights across different coalitions and $Pen_m(CS)$ corresponds to the sum of negative edge weights within the same coalitions. More specifically:
\begin{equation}
Pen_p(CS) = \sum\limits_{\substack{w(r_i,r_l)>0 \\ r_i\in c_{k_1},r_l\in c_{k_2},\\ k_1\neq k_2}}|w(r_i,r_l)| +\sum\limits_{\substack{w(r_i,t_j)>0 \\ r_i\in c_{k_1} \\ k_1\neq j}}|w(r_i,t_j)|
\end{equation}
\vspace{-0.1in}
\begin{equation}
Pen_m(CS) = \sum\limits_{\substack{w(r_i,r_l)<0 \\ r_i,r_l\in c_{j}}}|w(r_i,r_l)| + \sum\limits_{\substack{w(r_i,t_j)<0 \\ r_i\in c_{j}}}|w(r_i,t_j)|
\end{equation}
Note that the subscript of a coalition matches the subscript of the task it is assigned to, i.e., coalition $c_j$ corresponds to task $t_j$. The penalty incorporates both positive-weighted edges between \emph{different} coalitions and negative-weighted edges within the \emph{same} coalition. Minimizing the penalty is therefore equivalent to maximizing the sum of edge weights within coalitions, i.e., to maximizing the cohesion quality function $CQ(CS)$ of the coalition structure (without considering the $Val(CS)$ function) \cite{dasgupta2012dynamic}.
As specified in \cite{demaine2003correlation}, for each edge $e=(a_i,a_j)\in E$, where $a_i,a_j \in A$, binary variables $x_{a_i,a_j} \in \{0,1\}$ for a clustering (coalition structure) $CS$ are defined as: $x_{a_i,a_j}=0 \leftrightarrow \exists c_l \in CS: a_i,a_j\in c_l$ (i.e., $a_i,a_j$ are in the same coalition) and $x_{a_i,a_j}=1 \leftrightarrow \exists c_{k_1}, c_{k_2}\in CS, k_1\neq k_2: a_i\in c_{k_1}, a_j\in c_{k_2}$ (i.e., $a_i,a_j$ are in different coalitions). We will use $x_{a_i,a_j}$ and $x_e$ interchangeably from here on. The $Pen(CS)$ is then reformulated using the following non-negative constants:
$$
m_e= \left\{
\begin{array}{cc}
|w(e)| & \text{if }w(e) < 0 \\
0 & \text{if }w(e) \geq 0
\end{array}
\right.
$$
$$
p_e= \left\{
\begin{array}{cc}
|w(e)| & \text{if }w(e) > 0 \\
0 & \text{if }w(e) \leq 0
\end{array}
\right.
$$
$Pen(CS)$ is then given as:
\begin{equation}
Pen(CS) = \sum\limits_{e\in E}p_e x_e + \sum\limits_{e\in E}m_e(1-x_e)
\label{eqn_penalty}
\end{equation}
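A sketch of Eq. (\ref{eqn_penalty}): each edge weight is split into its positive part $p_e$ and negative part $m_e$, and the penalty charges positive edges that are cut ($x_e = 1$) and negative edges that are kept together ($x_e = 0$). Variable names are illustrative:

```python
def penalty(edge_weights, x):
    """edge_weights: {e: w(e)}; x: {e: 0 or 1} (0 = same coalition, 1 = different)."""
    pen = 0.0
    for e, we in edge_weights.items():
        p_e = max(we, 0.0)    # |w(e)| if w(e) > 0, else 0
        m_e = max(-we, 0.0)   # |w(e)| if w(e) < 0, else 0
        pen += p_e * x[e] + m_e * (1 - x[e])
    return pen

# A positive edge cut apart and a negative edge kept together are both penalized.
weights = {('a', 'b'): 2.0, ('a', 'c'): -1.0}
assert penalty(weights, {('a', 'b'): 1, ('a', 'c'): 0}) == 3.0  # worst assignment
assert penalty(weights, {('a', 'b'): 0, ('a', 'c'): 1}) == 0.0  # best assignment
```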
As stated previously, finding a coalition structure with minimal penalty is equivalent to finding the structure with maximal cohesion quality $CQ(CS)$. This problem is given as the following $0$-$1$ integer linear program:\\
min: $\sum\limits_{e\in E}p_e x_e + \sum\limits_{e\in E}m_e(1-x_e)$\\
constraints:\\
$x_{a_i,a_j} \in \{0,1\}, \forall a_i,a_j\in A, i\neq j$\\
$x_{a_i,a_j}+x_{a_j,a_k} \geq x_{a_i,a_k}, \forall a_i,a_j,a_k \in A, i\neq j \neq k$\\
$x_{a_i,a_j}=x_{a_j,a_i}, \forall a_i,a_j\in A, i\neq j$\\
The second constraint is the triangle inequality, while the third is the symmetry constraint. These ensure that a valid coalition structure is generated from the solution. Since this problem is NP-complete, it is relaxed to a linear program with the same objective function and the following constraints~\cite{demaine2003correlation,dasgupta2012dynamic}:\\
$x_{a_i,a_j} \in [0,1], \forall a_i,a_j\in A, i\neq j$\\
$x_{a_i,a_j}+x_{a_j,a_k} \geq x_{a_i,a_k}, \forall a_i,a_j,a_k \in A, i\neq j \neq k$\\
$x_{a_i,a_j}=x_{a_j,a_i}, \forall a_i,a_j\in A, i\neq j$\\
Algorithm \ref{linear prog} shows the pseudo-code for coalition structure formation based only on the cohesion quality. This process runs in polynomial time (in $N$) and gives an $O(\log N)$ approximation (see \cite{demaine2003correlation}). Although the relaxed problem can be solved in polynomial time, the solution may be non-integer, i.e., fractional. If $0 < x_{a_i,a_j} < 1$, there is no definite answer, and we can interpret $x_{a_i,a_j}$ as the probability that $a_i,a_j$ are in different coalitions. In this case, extra work is needed to determine whether or not $a_i,a_j$ are in the same coalition. This `rounding off' procedure is explained in the next section (IV.B Region Growing). It may also happen that some robots are not assigned to any coalition containing a task: they may form a singleton coalition, or be in a cluster with other robots but no task. In this case too, extra work is needed to assign these robots to the best possible coalitions. Indeed, in such a case there will be coalitions with tasks that do not have a sufficient number of robots to complete the task, since our formulation assumes that the total number of robots needed to complete all tasks is exactly $N$. The region growing technique explained in the next section is also used in these scenarios.
Another situation to consider is that even if an integer solution is obtained, the value of the coalition structure found may not be the maximum ($MAX\_VAL$), meaning some coalitions will have too many or too few robots. In this case also, we use the region growing algorithm to optimize the value.
\begin{algorithm}
\small{
\KwIn{$R$: A set of robots;\\ $T$: A set of tasks.}
\KwOut{$CS$: A coalition structure;\\ $\mathcal{R}_{ua}$: A set of unassigned robots.}
$\mathcal{R}_{ua} \leftarrow \emptyset$\\
\For{each $(a_i, a_j) \in A, (A = T \cup R)$}
{
Calculate $w(a_i,a_j)$.
}
Set the linear program constraints after calculating the penalty function (Eq. \ref{eqn_penalty})\\
Obtain a solution that satisfies the above-mentioned constraints by solving the linear program.\\
\eIf {$0$-$1$ integer solution is obtained}
{
Whenever $x_{a_i,a_j}=0$, group $a_i,a_j$ into the same coalition. This will create a valid coalition structure $CS$ (due to the symmetric and triangle inequality constraints).
\\
\eIf{$Val(CS) \neq MAX\_VAL$}{Use the Region Growing algorithm (Algorithm \ref{region growing})}{return $CS$.}
}
{Add to $\mathcal{R}_{ua}$ the robots for which all incident edges yield fractional values.\\
Use the Region Growing algorithm (Algorithm \ref{region growing}).}
\caption{Coalition structure formation based on the $CQ$ function}
\label{linear prog}
}
\end{algorithm}
\subsection{Region Growing}
The coalition structure found by Algorithm \ref{linear prog} maximizes the cohesion quality of $CS$, but does not take the value of $CS$ into consideration.
For this reason, one of the coalitions formed in this stage may be unnecessarily large while, as a consequence, other coalitions are smaller than required.
For instance, suppose that in a warehouse, $M$ stacks of boxes need to be moved from one place to another, and each stack needs four robots to carry it: with fewer robots the stack either falls or is too heavy to lift. On the other hand, if too many robots are assigned to one task, resources are wasted. In this example, a robot coalition of size less than four is useless. To address such scenarios effectively, our objective function (Definition $1$) requires the value of the coalition structure to be maximized, after minimizing the cost of forming it. Therefore, the region growing algorithm aims to optimize the value of the coalition structure found by the linear programming approach.
\begin{algorithm}[ht!]
\small{
\KwIn{$CS$: Current coalition structure (result of the linear programming solution);
\\ \hskip0.35in $\mathcal{R}_{ua}$: unassigned robots.}
\KwOut{$CS'$: The final coalition structure.}
$T_{sort} \leftarrow$ Sort the tasks $T$ in descending order of the number of robots assigned to them.\\
\For{$t_i \in T_{sort}$}
{
$c_i \leftarrow$ current coalition from $CS$ formed for task $t_i$\\
\While{$|c_i| < O_i$}{
$rad \leftarrow length(cell)$\\
Grow a virtual ball of radius $rad$ around $t_i$\\
\If{$r_j \in \mathcal{R}_{ua}$ AND $dist(r_j,t_i) \leq rad$}{
$c_i \leftarrow c_i \cup r_j$ {\small /*$CS$ is updated to $CS'$*/}\\
$\mathcal{R}_{ua} \leftarrow \mathcal{R}_{ua} \setminus r_j$\\
}
$rad \leftarrow rad + length(cell)$\\
}
\If{$|c_i| > O_i$}{
Remove the farthest robots $r_k \in c_i$ s.t. $|c_i| = O_i$ {\small /*$CS$ is updated to $CS'$*/}\\
$\mathcal{R}_{ua} \leftarrow \mathcal{R}_{ua} \cup r_k$
}
}
\caption{Region Growing algorithm for value optimization and assigning unassigned robots to tasks}
\label{region growing}
}
\end{algorithm}
The region growing algorithm is executed under one or both of the following conditions: first, the solution found in the previous stage is fractional; or second, for the coalition structure ($CS$) found by the linear programming solution, $Val(CS)\neq MAX\_VAL$.
In the region growing process, a virtual ball (centered around a task) is iteratively grown. This ball decides which robots are ultimately clustered together for a particular task and which robots are removed from a cluster previously formed during the linear programming phase; the latter can happen if a coalition from the previous solution was initially too large. A ball is grown for each task. The region growing algorithm terminates when each robot is allocated to some task (i.e., assigned to a cluster).
Let $\mathcal{R}_{ua} \subseteq R$ denote the set of unassigned robots. A robot can be unassigned for one of two reasons: either a fractional solution was found in the previous round for that particular robot, or the cluster previously formed is unnecessarily large for its task. Before the start of the region growing algorithm, $\mathcal{R}_{ua}$ is initialized with all robots left unassigned to any cluster in the previous stage.
In the region growing algorithm (shown as Algorithm 2), a virtual ball of a certain radius ($rad$) is iteratively grown for each task (with the task as its center). Note that $rad$ is initialized to one cell length; the ball thus initially encompasses all robots that are one cell away from the task $t_i$. In the next iteration, the radius is increased to two cell lengths, and so on. If $t_i$'s virtual ball has already engulfed $O_i$ robots, then we stop growing the ball any further for this task. If more than $O_i$ robots were assigned to this task, then the extra robots are declared unassigned and added to the set $\mathcal{R}_{ua}$. Note that the virtual ball of a task can engulf not only the robots already allocated to it in the linear programming phase, but also the robots in the set $\mathcal{R}_{ua}$.
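A simplified sketch of the region-growing pass: tasks are processed in descending order of current coalition size, oversized coalitions shed their farthest robots into $\mathcal{R}_{ua}$, and each undersized task grows its ball in steps of one cell length, absorbing unassigned robots. The positions and $O$-values below are made up for illustration:

```python
import math

def region_grow(tasks, robots, coalitions, unassigned, cell=1.0):
    """tasks: {name: {'pos': (x, y), 'O': int}}; robots: {name: (x, y)}."""
    order = sorted(tasks, key=lambda t: len(coalitions[t]), reverse=True)
    for t in order:
        tx = tasks[t]['pos']
        # Shed extra robots first so they become available to later tasks.
        while len(coalitions[t]) > tasks[t]['O']:
            far = max(coalitions[t], key=lambda r: math.dist(robots[r], tx))
            coalitions[t].remove(far)
            unassigned.add(far)
        # Grow the virtual ball one cell length at a time.
        rad = cell
        while len(coalitions[t]) < tasks[t]['O']:
            for r in sorted(unassigned):
                if math.dist(robots[r], tx) <= rad and len(coalitions[t]) < tasks[t]['O']:
                    coalitions[t].add(r)
                    unassigned.discard(r)
            rad += cell
    return coalitions

tasks = {'t1': {'pos': (0, 0), 'O': 2}, 't2': {'pos': (10, 0), 'O': 1}}
robots = {'r1': (1, 0), 'r2': (2, 0), 'r3': (9, 0)}
cs = region_grow(tasks, robots, {'t1': set(), 't2': set()}, {'r1', 'r2', 'r3'})
assert len(cs['t1']) == 2 and cs['t2'] == {'r3'}   # every task gets its O_i robots
```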
\lemma{The worst-case time complexity of the region growing algorithm is $O(MN)$.}\\
{\em Proof: }The worst-case time complexity of the region growing algorithm is easily seen to be $O(MN)$ as follows. In step 1, the $M$ tasks are sorted in the descending order of the number of robots assigned. This will be of complexity $O(M\log M)$. The time complexity for the rest of the algorithm can be determined by observing that $|\mathcal{R}_{ua}|$, the number of unassigned robots must be reduced to zero. Of course, since $|\mathcal{R}_{ua}|\le N$, if $s$ iterations are required (by growing the region with increasing radii each time) for each coalition $c_i$, $i=1,2,\cdots,M$, the modification of $\mathcal{R}_{ua}$ and $c_i$ together will take at most $sMN=O(MN)$ time.
Thus, the time complexity of the region growing algorithm is $O(M\log M+MN)$. Since $N>M$, we conclude the complexity is $O(MN)$.
\lemma{Each task $t_i$ will get $O_i$ number of robots assigned to it.}\\
{\em Proof:} First, note that $\sum\limits_{j=1}^{M} O_j = N$. This means that the total surplus of robots (relative to the $O$-values) assigned to some tasks equals the total deficit of robots assigned to the remaining tasks. Therefore, if any task $t_j$ is assigned more robots than $O_j$, then there must be some task $t_k$ for which $|c_k| < O_k$. When the tasks are sorted in descending order, the first task $t_1$ in the list $T_{sort}$ will have either exactly $O_1$ or more robots assigned to it. If $|c_1|=O_1$, then we move on to $t_2$. If $|c_1|>O_1$, then we detach the extra $(|c_1|-O_1)$ robots from it and put them into the set $\mathcal{R}_{ua}$. If $|c_j|<O_j$ for any task $t_j \in T_{sort}, j>1$, then robots from $\mathcal{R}_{ua}$ will be assigned to $t_j$. Thus, every task $t_j$ will have exactly $O_j$ robots assigned to it, i.e., $|c_j| = O_j, \forall t_j \in T$.
\section{Experiments}
\subsection{Settings}
We have implemented our algorithms in the Java programming language and ran tests on a desktop computer (Intel i7-7700 processor, 16GB RAM). We varied the number of robots ($N$) between $[10, 100]$ in steps of $10$ and selected the number of tasks ($M$) from the set $\{2,4,6,8,10\}$. Since the maximum value of $S(N,M)$ can grow exponentially with increasing $M$, we kept the number of tasks at a maximum of 10 so that the number of possible partitions to consider does not become prohibitively large. Additionally, if the number of tasks was greater than $50\%$ of the number of robots, that $(N,M)$ combination was not considered. The distinct $2$D locations of the robots and the tasks were generated uniformly at random from $[1,100] \times [1,100]$.
Also, we considered all possible ways of assigning optimal numbers of robots to the tasks at hand. Thus, for each pair $(N,M)$ considered for experimentation, we generated all possible $O_i$'s by partitioning $N$ into exactly $M$ parts using integer partitioning. For example, with $N=10$ and $M=2$, the sets of $O_i$'s generated and tested are \{\{9,1\}, \{8,2\}, \{7,3\}, \{6,4\}, \{5,5\}\}. The results presented here are averages over 10 runs with each of these settings.
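Generating the $O_i$ candidates amounts to partitioning $N$ into exactly $M$ positive parts; a recursive sketch (parts listed in non-increasing order):

```python
def partitions(n, m, max_part=None):
    """All ways to write n as m positive parts, each part <= max_part, non-increasing."""
    if max_part is None:
        max_part = n
    if m == 1:
        return [[n]] if 1 <= n <= max_part else []
    result = []
    for first in range(min(n - m + 1, max_part), 0, -1):
        for rest in partitions(n - first, m - 1, first):
            result.append([first] + rest)
    return result

# Example from the text: N = 10 robots, M = 2 tasks.
assert partitions(10, 2) == [[9, 1], [8, 2], [7, 3], [6, 4], [5, 5]]
```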
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\hspace{-0.2in}\includegraphics[width=0.53\linewidth]{figures/optimal_time_ratio}&
\hspace{-0.2in}\includegraphics[width=0.53\linewidth]{figures/optimal_cost_ratio_NEW}\\
(a)&(b)
\end{tabular}
\end{center}
\caption{(a) Runtime comparison on log-scale with a brute-force algorithm \cite{orlov2002efficient} ($4$ tasks); (b) Distance-based cost comparison with the optimal solution (higher is better -- $1$ being the best-case). Dotted lines indicate the theoretical bounds \cite{adams2011coalition} and the solid lines indicate the performances of our proposed approach.}
\label{brute-force_compare}
\end{figure}
\subsection{Results} In this section, we discuss our main findings from the experiments.\\
\textbf{Comparison with the optimal: }We have implemented a brute-force algorithm \cite{orlov2002efficient} that finds the optimal solution against which the solution produced by our proposed approach can be compared. As our proposed strategy always finds a solution with the maximum value, we used the brute-force method to find the coalition structure that has the maximum value (using Eq. \ref{eqn_task_val}) and the minimum cost among all maximum-valued coalition structures. We could test this algorithm for up to $12$ robots and $4$ tasks, after which it became prohibitive on our test machine. Two metrics have been compared: 1) runtime and 2) total distance-cost between the robots and the tasks they are assigned to. The result is shown in Figure \ref{brute-force_compare}. As expected, this result (Fig. \ref{brute-force_compare}.(a)) shows that the brute-force algorithm takes considerably more time than our proposed approach (up to $1630$ times longer for $12$ robots and $4$ tasks). On the other hand, using our proposed approach, the robots travel almost the same distance as in the optimal solution. For example, with $6$ robots and $2$ tasks, the total distance traveled by the robots using the optimal solution was $296.49$m, while using our approach it was $303.58$m, a $97.66\%$ near-optimal result. Moreover, our proposed approach performs near-optimally in terms of finding the coalition structure with the lowest distance-cost while keeping the value of the coalition structure optimal (Fig. \ref{brute-force_compare}.(b)). As more distance traveled by the robots results in higher battery expenditure, we may claim that our proposed approach also keeps the battery expenditure near-optimal.
We also compare this distance-cost ratio to the optimal against a theoretical worst-case bound proved in \cite{Shehory98,adams2011coalition}. The plot of this theoretical bound, $(\max_{i \in [1,m]} O_i +1)$, is shown in Fig. \ref{brute-force_compare}.(b). This figure shows that our method always finds a significantly better solution in terms of closeness to the optimal in each of the test cases. The maximum and minimum differences between these two ratios are $9.1$ times (for $12$ robots and $2$ tasks) and $3.61$ times (for $8$ robots and $4$ tasks), respectively.\newline
\textbf{Performance of our approach: }
Next, we show how the performance of our proposed approach scales with a large set of robots and tasks. First, we test how the runtime of our proposed algorithm scales for up to $100$ robots and $10$ tasks. As can be observed in Fig. \ref{RG_runtime}.(a), the maximum time taken by our approach is about $230$ secs. for $100$ robots being assigned to $10$ tasks. Note that, for this setting, the astronomical number of possible coalition structures is $2.75 \times 10^{93}$.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\hspace{-0.2in}\includegraphics[width=0.53\linewidth]{figures/runtime}&
\hspace{-0.1in}\includegraphics[width=0.53\linewidth]{figures/cost_average}\\
(a)&(b)
\end{tabular}
\end{center}
\caption{(a) Runtime of our approach; (b) Normalized average distance-cost of the robots.}
\label{RG_runtime}
\end{figure}
In Fig. \ref{RG_runtime}.(b), we show how the normalized average distance ($\sum\limits_{r_i, t_j}cost_{dist}/N$) traveled by the robots for moving from their initial locations to their allocated task locations changes with different numbers of robots and tasks.
In this figure, an almost-static trend can be noticed, with the maximum difference between any two cases being about $0.06$. This shows that the average distance traveled by the robots does not change significantly with varying $N$ and $M$. We are also interested in how much gain the region growing algorithm yields in terms of the value of a coalition structure. Recall that the linear programming component does not take the $O$-values into account while forming the best coalition structure; therefore, we cannot guarantee that this coalition structure will have the value $MAX\_VAL$. It is evident from the result (Fig. \ref{RG_value}.(a)) that we always gain a significant amount of coalition structure value by using the region growing algorithm (up to $3.2 \times 10^5\%$). In Fig. \ref{RG_value}.(b), we see that the value of the coalition structure produced as the final output is always optimal, showing the importance of the region growing algorithm.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\hspace{-0.2in}\includegraphics[width=0.53\linewidth]{figures/value_increment}&
\hspace{-0.1in}\includegraphics[width=0.53\linewidth]{figures/optimal_val_ratio}\\
(a)&(b)
\end{tabular}
\end{center}
\caption{(a) Increment in coalition structure value from the linear programming solution by using the region growing algorithm; (b) Comparison with the optimal value ($1$ being the best-case).}
\label{RG_value}
\end{figure}
We have empirically demonstrated that although we minimize the cost and maximize the value of the coalition structures in two successive procedures, our approach still forms a coalition structure (i.e., a set of coalitions assigned to the tasks) which not only has the optimal value but also a cost very close to optimal. Moreover, this near-optimal result is produced in a negligible amount of time given the notoriously intractable nature of the problem.
\section{Conclusion and Future Work}
We have proposed a multi-robot coalition formation algorithm for task allocation inspired by the idea of correlation clustering. Our proposed approach first finds a coalition structure with the minimum cost using a linear programming-based graph partitioning formulation, and next, using a region growing approach, it optimizes the value of this coalition structure. We have empirically shown that our proposed approach can yield a near-optimal solution in a negligible amount of time. This approach also performs significantly better than a previously proposed theoretical bound. In the future, we plan to make this approach distributed so as to avoid a single point of failure. We also plan to implement this approach on a group of robots in a real-world setting.
\bibliographystyle{abbrv}
\chapter{Lattice theory}
\begin{displayquote}
\textit{``I listened to K. Hepp (1963 - 64) and others describe their results in axiomatic field theory; I didn't understand what they said in detail but I got the message that I should think in position space rather than momentum space.'' -- K. Wilson, Nobel Lecture, 1982 \cite{Wilson:1993dy}}
\end{displayquote}
\section{Introduction}
What do we mean by a lattice theory? Suppose we wish to model a physical system of fields, which may be strongly interacting, in order to render the problem of solving it tractable by means of, say, a computer simulation.\footnote{In many cases the theory being studied is not directly realized in nature (as far as we know), but for the sake of exposition, here we imagine it is --- lattice QCD is an example of a realistic theory. It is a virtue of lattice theory, however, to be able to study the physics of (most) any model one wishes in a nonperturbative way.} The system possesses various physical properties, which we characterize by mathematical quantities. One class of properties of particular importance is the following. We imagine that there is a principle of locality, such that the interactions of separate chunks of the field weaken with increasing separation. By ``interaction,'' we choose to mean \textit{correlation}, that the values which characterize local properties of the system become less correlated as we look at ever more distant pairs of chunks. We characterize this locality by what is called the \textit{correlation length} $\xi$, a number with units of distance.
Now, to construct our model, we consider a natural idea. Perhaps the problem will become tractable by \textit{approximating} the continuous spacetime background by a discrete lattice of points, restricting the physical entities (fields) to take values only on those points (or on the links connecting them, in the case of gauge theory), and discretizing the interactions in some way. We call this the ``lattice model'' of the physical system. The separation between the points we denote by $a$, the lattice spacing. The lattice model, if it's a good model, should be able to predict approximations to the properties of the real system. That is, given some set of input parameters $g$, the model should ultimately produce numerical quantities in rough agreement with those of the real system, including a correlation length $\hat \xi$, which we choose to be dimensionless and such that $a \hat \xi$ should approximate the physical value $\xi$.
One might initially believe that all there is to do is choose a value for $a$ and values for $g$, plug them into our simulation and -- voil\`a! -- obtain a description of the system in rough agreement with the real system: $a \hat \xi \approx \xi$. But $\hat \xi = \hat \xi(g)$ is a function of the input parameters, so picking arbitrary values for $a$ and $g$ will not generally yield the correct $\xi$; they only match for certain combinations of $a$ and $g$. Furthermore, we know we'll probably need to pick small values of $a$, that is, small with respect to $\xi$, to approach the true values, since we expect that modeling a field theory by only a few lattice sites will generally produce terrible approximations (and of course, the number of sites we simulate with must be finite). Suppose, then, that we choose a value for $a$, and then \textit{scan} the space of $g$ until $a \hat \xi \approx \xi$ is achieved with some desired degree of accuracy. That's perfectly fine. But notice that this statement is equivalent to the following one. For any choice of $g$, and given the \textit{empirical} value of $\xi$, a value of $a$ follows: $a = \xi / \hat \xi(g)$. We say that the pair $(g,\xi)$ \textit{sets the scale} of the simulation. This means we can construct our model in terms of entirely dimensionless quantities, measure $\hat \xi$, and determine $a$ by comparison with $\xi$. This latter approach is far more useful in practice. One reason is that, ultimately, we expect the model to better approximate the physics as $a$ becomes smaller and smaller, but simulations with small parameter values are generally less efficient than ones with $O(1)$ parameters, and simulation with $a=0$ in all functions is certainly a non-starter. Thus in our simulations, we define the fields and any other quantities as dimensionless by scaling out the (to be determined) spacing $a$.
Once we have determined $a$, we can measure any observable we wish and multiply it by appropriate powers of $a$ to obtain dimensionful predictions which approximate the physical system's properties, to a precision determined by $a$.
In many cases the real system is continuous, so we are often interested in obtaining a limit $a \to 0$, or at least $a$ so small that there is no discernible difference between what we observe in experiment and what we simulate. But this must therefore correspond to a particular limit $g \to g_*$ of the model parameters where $\hat \xi(g_*) = \infty$.\footnote{Because there is usually more than one parameter, there are usually many points in the space of parameters that constitute a continuum limit, and one therefore speaks of the \textit{critical surface} in parameter space, as they often form a submanifold.} If such a limit exists, we call it the \textit{continuum limit} of the lattice theory. If there does not exist any such limit (i.e. point in the space of $g$ parameters), then the lattice theory has no continuum limit, and therefore cannot describe any physics that is known to be continuous. In many cases, however, the system being simulated is actually discrete, for example in condensed matter systems like ferromagnets. In such cases, the existence of a continuum limit is nevertheless an essential aspect in the explanation of its critical properties, as we will come to understand throughout this chapter. It is even possible that the quantum description of gravity will have a fundamental discreteness about it, but whatever it is, it must possess a nontrivial long-distance limit in which it reproduces General Relativity; the existence of such a limit is related to the existence of a continuum limit. But we shall not in this thesis concern ourselves with theories of gravity.
We have determined that the spacing $a$, the physical value $\xi$, and the lattice parameters $g$ are intimately related. The manner in which they are related is therefore of paramount importance in lattice theory. Their relation is, for historical reasons, called the \textit{renormalization group}. Often the relation is characterized by an inversion of sorts, giving parameters as functions of the spacing: $g = g(a)$. If the model has a continuum limit, and if this limit occurs for small values $g_*$, then we may use perturbation theory to study the approach to the continuum. In the event that the limit occurs for $g_* = 0$, the theory is called \textit{asymptotically free}. The enterprise of lattice QCD is based on the assumption of asymptotic freedom: $a \to 0$ as $g \to 0$. But continuum limits need not always occur for small $g$. When $g_*$ is large, a nonperturbative means of determining the continuum limit is necessary. Assuming the physical $\xi$ is always finite, the continuum limit is characterized by the phenomenon of $\hat \xi \to \infty$. Thus, by simulating a lattice model at many $g$ values until $\hat \xi$ is observed to grow ever larger in some region of parameter space, one may in principle approach continuum limits nonperturbatively. The problem in practice is that such an array of simulations can become extremely costly computationally, so other methods must be devised, not to mention the inherent limitation of working in a finite volume, namely that $\hat \xi \leq N$ where $N$ is the number of lattice sites along each direction. One such method is based on the notion of \textit{finite-size scaling}, and we will explicitly see it in action when we use the Binder cumulant to locate the continuum limit of a particular scalar model.
\section{Discretization and lattice models}
We now give a description of how one typically constructs a lattice theory. We mostly follow the presentation of Montvay and M\"unster \cite{Montvay:1994cy}, except that we specialize to $d=3$ rather than $4$ in the discussion of renormalized quantities. As such, the material in this section is largely just review, and many demonstrations are omitted.
\subsection{Lattice actions} In this work we will often focus on the infamous scalar field theory in dimension $d$ with quartic interaction, denoted $\phi^4_d$, and determined by the continuum action
\begin{equation} \label{phi4d_action}
S(\varphi) = \int_{\mathbb{R}^d} \mathrm{d}^d x \Bigg( \frac{1}{2} (\partial \varphi)^2(x) + \frac{m_0^2}{2} \varphi^2(x) + \frac{g_0}{4!} \varphi^4(x) \Bigg).
\end{equation}
To discretize the theory, we define a square lattice $\mathfrak{L}_d := (a \mathbb{Z}_N)^d$ where $a$ is the lattice spacing, $N$ denotes the number of sites in each direction, and $\mathbb{Z}_N$ is the integers mod $N$. Sites of the lattice, which carry dimension of length, are written as $x = an$ for $n\in\mathbb{Z}_N^d$. The dimensionful lattice field will be denoted $\varphi(x)$. We also typically will assume periodic boundary conditions for the fields, $\varphi(x+N a \mathrm{e}_\mu) = \varphi(x)$, for $\mathrm{e}_\mu$ the unit vector in the $\mu$ direction, $\mu = 1,\dots,d$. The corresponding lattice action is obtained by choosing a discretization of the spatial derivatives. The simplest choice is the forward-difference operator $a \hat \partial_\mu f(x) = f(x+a\mathrm{e}_\mu)-f(x)$, which yields the action
\begin{equation}
S(\varphi) = a^d \sum_{x \in \mathfrak{L}_d} \Bigg( \frac{1}{2} \sum_{\mu = 1}^d (\hat \partial_\mu \varphi (x) )^2 + \frac{m_0^2}{2} \varphi^2(x) + \frac{g_0}{4!} \varphi^4(x) \Bigg).
\end{equation}
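To make the discretization concrete, here is a minimal sketch (our own helper, not code from this thesis) that evaluates the forward-difference action for a field configuration with periodic boundary conditions:

```python
import random, itertools

# A minimal sketch (our own helper, not from the text) of the
# forward-difference lattice action with periodic boundary conditions,
# for a field stored as a dict over sites n in {0, ..., N-1}^d.
def lattice_action(phi, N, d, a, m0sq, g0):
    S = 0.0
    for n in itertools.product(range(N), repeat=d):
        kin = 0.0
        for mu in range(d):
            m = list(n)
            m[mu] = (m[mu] + 1) % N          # periodic neighbor x + a e_mu
            kin += ((phi[tuple(m)] - phi[n]) / a) ** 2
        S += 0.5 * kin + 0.5 * m0sq * phi[n] ** 2 + (g0 / 24.0) * phi[n] ** 4
    return a ** d * S

random.seed(1)
N, d, a = 4, 3, 0.5
phi = {n: random.gauss(0.0, 1.0)
       for n in itertools.product(range(N), repeat=d)}
S = lattice_action(phi, N, d, a, m0sq=0.25, g0=1.2)   # positive for m0sq, g0 > 0
```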
This is not the only choice of discretization, however. Intuitively, \textit{any} action differing from the continuum action by terms of $O(a)$ is a valid lattice action, as it would then imply the same continuum limit.\footnote{A more precise statement is that any action in the same \textit{universality class} constitutes a valid discretization. What is needed is that the long-distance properties of both the continuum action and the lattice discretization are identical.} It will even become apparent that adding higher-order terms like $\varphi^6$ is valid as well!
The lattice action above is not yet dimensionless, as it depends explicitly on the spacing $a$, and as such is not convenient for simulations. One option is to scale $a$ out of all quantities in the action and simulate with that. The most popular option, however, is to define the simulation action by\footnote{Most of the time we will not bother with dimless site index labels $n$ for $\hat \varphi$, where $x= an$, using $x$ as the argument for both dimless and dimful fields. We also write $x + \mu$ to mean $x + a \mathrm{e}_\mu$, where $\mathrm{e}_\mu$ is a unit vector along the $\mu$ direction.}
\begin{equation} \label{phi4_lat_action}
S(\hat \varphi) = \sum_{x \in \mathfrak{L}_d} \Big( - \beta \sum_{\mu = 1}^d \hat \varphi(x) \hat \varphi(x+\mu) + \hat \varphi^2(x) + \lambda (\hat \varphi^2(x) - 1)^2 \Big),
\end{equation}
which is equivalent to the dimensionful lattice action by the relations
\begin{equation}
\sqrt{\beta} \; \hat \varphi = a^{d_\phi} \varphi, \quad \frac{1}{2} (am_0)^2 = \frac{1-2\lambda}{\beta} - d, \quad g_0 = \frac{4! \lambda}{\beta^2} a^{d-4},
\end{equation}
where $d_\phi = d/2-1$ is the canonical mass dimension of the (position space) field $\varphi$. The first term in the simulation action is of the form of a nearest-neighbor interaction, the same kind which defines the Ising model. The action furthermore possesses a $\mathbb{Z}_2$ symmetry $\varphi \mapsto - \varphi$. In the limit $\lambda \to \infty$, the partition function becomes that of the Ising model, since for larger $\lambda$ values, the contribution of configurations with $\hat \varphi^2 \neq 1$ becomes vanishingly small, thereby constraining the field to have unit size. Alternatively, one may go the opposite direction, and starting from an Ising model derive a scalar field theory by appropriately changing variables in the partition function, see \cite{Kopietz:2010zz}. It turns out that they are related in an even more general way, that of \textit{universality}, which we will describe when we get to RG.
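The equivalence of the two actions can be checked numerically. The following sketch is our own check (with the field rescaling taken in the form $\hat \varphi = a^{d_\phi} \varphi / \sqrt{\beta}$): for a random configuration, the simulation action and the dimensionful lattice action should differ only by a field-independent constant.

```python
import math, random, itertools

# Our own numerical cross-check (not from the text): with the rescaling
# phat = a**d_phi * phi / sqrt(beta), the simulation action and the
# dimensionful lattice action should differ only by the field-independent
# constant lam * N**d coming from the +1 in (phat**2 - 1)**2.
random.seed(2)
N, d, a = 4, 3, 0.7
beta, lam = 0.35, 0.6
d_phi = d / 2.0 - 1.0
m0sq = 2.0 * ((1.0 - 2.0 * lam) / beta - d) / a ** 2   # from the (a m0)^2 relation
g0 = 24.0 * lam / beta ** 2 * a ** (d - 4)             # 4! lam / beta^2 * a^(d-4)

sites = list(itertools.product(range(N), repeat=d))
phat = {n: random.gauss(0.0, 1.0) for n in sites}                  # simulation field
phi = {n: phat[n] * math.sqrt(beta) / a ** d_phi for n in sites}   # dimensionful field

def shift(n, mu):  # periodic neighbor in the +mu direction
    m = list(n)
    m[mu] = (m[mu] + 1) % N
    return tuple(m)

S_sim = sum(-beta * sum(phat[n] * phat[shift(n, mu)] for mu in range(d))
            + phat[n] ** 2 + lam * (phat[n] ** 2 - 1.0) ** 2 for n in sites)

S_dim = a ** d * sum(
    0.5 * sum(((phi[shift(n, mu)] - phi[n]) / a) ** 2 for mu in range(d))
    + 0.5 * m0sq * phi[n] ** 2 + g0 / 24.0 * phi[n] ** 4 for n in sites)

diff = S_sim - S_dim   # should equal lam * N**d
```

Note that near the Ising limit $m_0^2$ becomes negative, which causes no difficulty here since only the difference of the two actions is being compared.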
For perturbative calculations, it is convenient to work in momentum space. The continuum action in momentum space is obtained by Fourier transformation and is given by
\begin{align}
S(\varphi) = \frac{1}{2} & \int_{\mathbb{R}^d} \frac{\mathrm{d}^d p}{(2\pi)^d} \varphi(p) \big( p^2 + m_0^2 \big) \varphi(-p) \nonumber \\
& + \frac{g_0}{4!} \Big( \prod_{i=1}^4 \int_{\mathbb{R}^d} \frac{\mathrm{d}^d p_i}{(2\pi)^d} \; \varphi(p_i) \Big) (2 \pi)^d \delta(p_1 + p_2 + p_3 + p_4).
\end{align}
A continuum action needs to be regularized, for example, with a momentum cutoff $\mn\Lambda$ or by continuing $d$ onto the complex plane. This becomes apparent if one does perturbation theory without a regulator, where one encounters singular loop integrals (see eq. (\ref{mass_divergence})), but it should be noted that such singularities occur even for free theories in observables at zero distance. Although we will focus on lattice regularization and sharp cutoffs in the rest of this chapter, we will work with smooth cutoffs in the continuum in chapters 3 and 4. The dimful\footnote{``Dimensionful'' and ``dimensionless'' are sometimes shortened to ``dimful'' and ``dimless'' in this work.} lattice action in momentum space is given by
\begin{align}
S(\varphi) = \frac{1}{2 V} & \sum_{p \in \mathfrak{B}_d} \varphi(p) \big( \hat p^2 + m_0^2 \big) \varphi(-p) \nonumber \\
& + \frac{g_0}{4! V^4} \Big( \prod_{i=1}^4 \sum_{p_i \in \mathfrak{B}_d} \varphi(p_i) \Big) V \hat \delta(p_1 + p_2 + p_3 + p_4),
\end{align}
where $V = L^d = (aN)^d$ is the lattice volume, $\mathfrak{B}_d \cong \big( \tfrac{2 \pi}{a N} \mathbb{Z}_N \big)^d$ is the Brillouin zone, $\hat \delta(p)$ is the (dimless) Kronecker delta, and
\begin{equation}
\hat p_\mu = \frac{2}{a} \sin \frac{p_\mu a}{2}
\end{equation}
is the often-encountered lattice momentum function.
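A quick numerical check (illustrative values only) confirms that $\hat p$ approaches the continuum momentum as $a \to 0$, with the leading deviation $\hat p^2 - p^2 \approx -(a^2/12)\, p^4$ for a single nonzero momentum component:

```python
import math

# Illustrative check: the lattice momentum p_hat = (2/a) sin(p a / 2)
# approaches p as a -> 0, with leading deviation -(a^2/12) p^4 in p_hat^2
# for a single nonzero component.
p = 1.3
for a in (0.1, 0.05, 0.025):
    phat = 2.0 / a * math.sin(p * a / 2.0)
    dev = phat ** 2 - p ** 2
    pred = -a ** 2 / 12.0 * p ** 4
ratio = dev / pred   # -> 1 as a -> 0 (here evaluated at the smallest a)
```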
\subsection{Observables} All the observable quantities predicted by a quantum field theory have expressions as expectation values of functions of the fundamental field. Furthermore, these are the quantities directly measured in lattice simulations.
The observables of the theory are the expectation values
\begin{equation}
\langle \mathcal{O}(\varphi) \rangle_S = \frac{1}{Z} \int \mathscr{D} \varphi \; \mathcal{O}(\varphi) \mathrm{e}^{-S(\varphi)}, \quad \mathrm{where} \quad Z = \int \mathscr{D} \varphi \; \mathrm{e}^{-S(\varphi)}
\end{equation}
is the partition function. From the statistical physics perspective, the factor $\mathrm{e}^{-S}$ is a Boltzmann weight over the configuration space of $\varphi$, and $Z$ is the factor which normalizes the probability distribution. The free energy of the system with respect to that of the corresponding free theory is given by
\begin{equation}
F - F_0 = - \ln (Z / Z_0),
\end{equation}
where $Z_0$ is the free theory partition function. The expression on the r.h.s. is equal to a sum over all connected vacuum diagrams, a result known as the \textit{linked cluster theorem} \cite{Kopietz:2010zz, ZinnJustin:2002ru}. By separating the quadratic (gaussian) part of the action from the interaction $V(\varphi)$, the observables admit a perturbative representation
\begin{equation} \label{pert_series}
\langle \mathcal{O}(\varphi) \rangle_S = \frac{Z_0}{Z} \sum_{n=0}^\infty \frac{(-1)^n}{n!} \langle \mathcal{O}(\varphi) [V(\varphi)]^n \rangle_{0},
\end{equation}
where $\langle \cdot \rangle_0$ denotes expectations of the free theory. The role of the leading factor of $Z_0/Z$ is to divide out all the vacuum bubbles. The observables can be considered as generated by a certain functional, the sourced partition function
\begin{equation}
Z(J) = \int \mathscr{D} \varphi \; \mathrm{e}^{-S(\varphi) + J \circ \varphi},
\end{equation}
where\footnote{This notation is sometimes used in the literature to reduce clutter. It will be generalized to a functional tensor notation in chapter 4.}
\begin{equation}
J \circ \varphi := \int \mathrm{d}^d x J(x) \varphi(x).
\end{equation}
On a lattice, the integral would be replaced by a summation. The $n$-point functions are given in terms of $Z(J)$ by\footnote{$G^{(n)}$ is used rather than $Z^{(n)}$ because the latter would lead to great confusion once $Z$ factors are introduced.}
\begin{equation}
\langle \varphi(x_1) \cdots \varphi(x_n) \rangle = \frac{\delta}{\delta J(x_1)} \cdots \frac{\delta}{\delta J(x_n)} \; Z(J) \Big|_{J=0} = G^{(n)}(x_1, \dots , x_n).
\end{equation}
Such observables do not typically decay as the separations $|x_i - x_j| \to \infty$, but the \textit{connected} observables do decay (under the assumptions of the cluster decomposition principle \cite{ZinnJustin:2002ru}):
\begin{equation}
\langle \varphi(x_1) \cdots \varphi(x_n) \rangle^\mathrm{c} = \frac{\delta}{\delta J(x_1)} \cdots \frac{\delta}{\delta J(x_n)} \; W(J) \Big|_{J=0} = W^{(n)}(x_1, \dots , x_n),
\end{equation}
where the generator of connected $n$-point functions is defined by
\begin{equation}
\mathrm{e}^{W(J)} := \frac{1}{Z_0} \int \mathscr{D} \varphi \; \mathrm{e}^{-S(\varphi) + J \circ \varphi} = Z(J) / Z_0.
\end{equation}
The normalization by the free partition function guarantees that $W^{(0)} = -F + F_0$.
The last generating functional we consider, for now, is the 1PI generator $\Gamma(v)$ defined as the Legendre transform of $W(J)$:
\begin{equation}
\Gamma(v) := v \circ J(v) - W(J(v)),
\end{equation}
where $v(x) = \langle \varphi(x) \rangle_J$ is the vacuum expectation value (``vev'') of the field from the sourced action. Notice that we may write $v$ as
\begin{equation}
v(x) = \frac{\delta W(J)}{\delta J(x)},
\end{equation}
to compute
\begin{equation}
\fdelAB{\Gamma(v)}{v(x)} = J(x) + v \circ \fdelAB{J}{v(x)} - \fdelAB{W(J)}{J} \circ \fdelAB{J}{v(x)} = J(x).
\end{equation}
We can interpret this equation as a quantum equation of motion for the vev $v(x)$, determined by the ``quantum effective action'' $\Gamma(v)$. The derivatives of $\Gamma(v)$ are the vertex functions $\Gamma^{(n)}$, which play a central role in renormalization theory. By differentiating the previous equation with respect to $J(y)$ and using the chain rule, we find
\begin{equation} \label{Gamma2W2}
(\Gamma^{(2)} W^{(2)})(x,y) = \delta(x-y),
\end{equation}
where the product $\Gamma^{(2)} W^{(2)}$ is understood as matrix multiplication, and $\delta(x-y)$ represents the identity matrix in the position basis. It follows that
\begin{equation}
\Gamma^{(2)} = [W^{(2)}]^{-1}.
\end{equation}
The relations between the higher $\Gamma^{(n)}$ and $W^{(n)}$ follow from repeated differentiation of eq. (\ref{Gamma2W2}) with respect to the source.
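For example, a single differentiation of eq. (\ref{Gamma2W2}) with respect to the source, using the chain rule $\delta / \delta J(w) = \int \mathrm{d}^d u \; W^{(2)}(w,u) \, \delta / \delta v(u)$, yields
\begin{equation}
W^{(3)}(x_1, x_2, x_3) = - \int \mathrm{d}^d y_1 \, \mathrm{d}^d y_2 \, \mathrm{d}^d y_3 \; W^{(2)}(x_1, y_1) W^{(2)}(x_2, y_2) W^{(2)}(x_3, y_3) \, \Gamma^{(3)}(y_1, y_2, y_3),
\end{equation}
i.e. the connected 3-point function is (minus) the 3-point vertex with a full propagator attached to each leg.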
Denoting by $\Delta$ the free propagator of the theory, $W^{(2)}$ has a series of the form
\begin{equation}
W^{(2)} = \frac{1}{\Delta^{-1} + \Sigma} = \mn\Delta - \mn\Delta \Sigma \mn\Delta + \mn\Delta \Sigma \mn\Delta \Sigma \mn\Delta + O(\Sigma^3),
\end{equation}
where $\Sigma$ is the self-energy matrix. Hence,
\begin{equation}
\Gamma^{(2)} = \Delta^{-1} + \Sigma,
\end{equation}
which is an essential quantity in any field theory. In momentum space, the propagator is diagonal by translation invariance, meaning that
\begin{equation}
\Gamma^{(2)}(p_1,p_2) = (2\pi)^d \delta(p_1 + p_2) \Gamma^{(2)}(p_2),
\end{equation}
where the notation $\Gamma^{(2)}(p)$ is convenient for the non-delta part.\footnote{Similarly, in position space the components of the 2-point function are often written as $G^{(2)}(x,y) = G^{(2)}(x-y)$.}
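The inversion relation $\Gamma^{(2)} = [W^{(2)}]^{-1}$ can be verified numerically in a free theory. In the sketch below (our own check, in $d=1$ with $a=1$), $\Gamma^{(2)}$ is the periodic nearest-neighbor lattice Laplacian plus $m_0^2$, and $W^{(2)}$ is computed as its inverse via a discrete Fourier sum; their matrix product should be the identity:

```python
import cmath, math

# Our own free-theory check in d = 1 with a = 1: Gamma^(2) is the periodic
# nearest-neighbor Laplacian plus m_0^2, and W^(2) is its inverse, computed
# here as a discrete Fourier sum; their matrix product should be the identity.
N, msq = 8, 0.5

def W2(x, y):
    s = 0.0
    for k in range(N):
        khat2 = 4.0 * math.sin(math.pi * k / N) ** 2   # lattice momentum squared
        s += (cmath.exp(2j * math.pi * k * (x - y) / N) / (khat2 + msq)).real
    return s / N

def Gamma2(x, y):
    if x == y:
        return 2.0 + msq
    if (x - y) % N in (1, N - 1):
        return -1.0
    return 0.0

prod = [[sum(Gamma2(x, z) * W2(z, y) for z in range(N)) for y in range(N)]
        for x in range(N)]   # should be the N x N identity matrix
```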
An important observable in a $\phi^4$ theory is the 4-point vertex function
\begin{equation}
\Gamma^{(4)}(p_1,p_2,p_3,p_4) =- \frac{W^{(4)}(p_1,p_2,p_3,p_4)}{W^{(2)}(p_1) W^{(2)}(p_2) W^{(2)}(p_3) W^{(2)}(p_4)},
\end{equation}
which determines the $2 \to 2$ scattering amplitudes of particles, and therefore characterizes the strength of the interaction of the particles in the theory. It is proportional to a momentum-conserving delta function.
\subsection{Renormalized couplings} When one carries out the computation of observables determined by eq. (\ref{pert_series}), one often encounters \textit{cutoff sensitivity}: subleading orders in the perturbative series generally contain terms proportional to powers of $1/a$ or $\ln a \mu$ for some mass scale $\mu$, which diverge as $a \to 0$. Historically, this led to the development of the theory of \textit{renormalization}, which had roots in the first work on field theory by the founders of quantum mechanics in the 1930's, and which was given a firm foundation by F. Dyson around 1950 \cite{Dyson:1949bp, Dyson:1997gy}. Under renormalization, one systematically ``eliminates'' the sensitivity to the cutoff by defining renormalized parameters and reexpressing the perturbation series in terms of these parameters. If, by defining a finite number of such renormalized parameters, the resulting series has \textit{no} cutoff sensitivity, meaning that all $a$-dependence is order $O(a^2 \ln^\ell a \mu)$, then the theory is called \textit{perturbatively renormalizable}. It was thought for many years, up until the mid 1970's, that quantum field theories needed to be renormalizable in order to be serious candidates for the description of real physics. The advent of Wilsonian RG and the notion of effective field theory were to eventually undermine such philosophies \cite{Cao:1993gpm, Cao:1997my}. Nevertheless, renormalization is still important in field theory: it is the procedure by which the parameters used to define a theory are related to experimentally determined parameters, which is a necessary step in any physical science.
Continuum conventions typically take the coefficient of $p^2$ in $\Gamma^{(2)}(p)$ to be 1, a procedure called \textit{wave function renormalization}. Since the measured propagator does not typically satisfy this condition, one defines the $Z$ \textit{factor} of $\varphi$ by
\begin{equation}
Z_\phi^{-1} = \frac{\mathrm{d}}{\mathrm{d} p^2} \Gamma^{(2)}(p)\Big|_{p^2=0},
\end{equation}
and then defines the \textit{renormalized field} $\varphi_\mrm{r} := \varphi / \sqrt{Z_\phi}$, so that
\begin{equation}
1 = \frac{\mathrm{d} \Gamma^{(2)}_\mrm{r}(p)}{\mathrm{d} p^2}\Big|_{p^2=0}.
\end{equation}
The renormalized connected functions and 1PI functions then satisfy
\begin{equation}
W_\mrm{r}^{(n)} = Z_\phi^{-\frac{n}{2}} W^{(n)}_0, \quad \Gamma_\mrm{r}^{(n)} = Z_\phi^{\frac{n}{2}} \Gamma^{(n)}_0,
\end{equation}
where a 0-subscript has been put on the bare $n$-point functions to further distinguish them from renormalized $n$-point functions. Thus, the only difference in the values of these functions is a proportionality factor by some power of $Z_\phi$.
In a euclidean theory, the correlation length $\xi$ is determined by the inverse of the renormalized mass (the smallest eigenenergy above zero in the spectrum of the theory), which is defined by
\begin{equation}
m^2_\mrm{r} = \Gamma^{(2)}_\mrm{r}(p)|_{p=0}.
\end{equation}
It is a physical quantity, as it sets the rate of exponential decay of correlations among distant parts of the system. Another observable of interest is the dimful renormalized coupling
\begin{equation}
\lambda_\mrm{r} = \Gamma^{(4)}_\mrm{r}(p_1, p_2, p_3, p_4)|_{p_i^2 = 0},
\end{equation}
which characterizes the strength of the interactions between particles, as mentioned in the last subsection. Both of the renormalized couplings are \textit{long-distance} quantities, since they are defined at zero momentum.
These couplings can be thought of as the observable counterparts to the bare couplings $m_0^2, \; g_0$, which are the input parameters of the lattice model, since they agree at leading order in perturbation theory, as we will soon observe.
The renormalized couplings of the theory are totally determined by the bare couplings and the cutoff. As such, we can write them as functions thereof:
\begin{align}
Z_\phi^{-1} & = \smallfrac{\mathrm{d}}{\mathrm{d} p^2} \Gamma^{(2)}_0(p; m_0, g_0, a) |_{p=0} , \nonumber \\
m^2_\mrm{r} & = \Gamma^{(2)}_\mrm{r}(p; m_0,g_0, a)|_{p=0}, \nonumber \\
\lambda_\mrm{r} & = \Gamma^{(4)}_\mrm{r}(\boldsymbol p; m_0, g_0, a)|_{\boldsymbol p = 0},
\end{align}
where $\boldsymbol p$ is the 4-tuple of momenta. In the next section, we will describe RG in somewhat general terms, but we shall follow along with the example of $\phi^4_3$. To that end, let us find expressions for the renormalized couplings above.
In perturbation theory, observables are determined from eq. (\ref{pert_series}) with the lattice action $S(\varphi)$ and applying Wick's theorem for gaussian integrals. The bare connected 2-point function is found to be
\begin{equation}
W^{(2)}_0(p) = \mn\Delta(p) - \frac{\lambda_0}{2 V} \mn\Delta(p) \mn\Delta(-p) \sum_\ell \mn\Delta(\ell) + O(\lambda_0^2),
\end{equation}
where the free propagator is
\begin{equation}
\mn\Delta(p) = \frac{1}{\hat p^2 + m_0^2}.
\end{equation}
The inverse of $W^{(2)}_0(p)$ gives us $\Gamma^{(2)}_0(p)$:
\begin{equation}
\Gamma^{(2)}_0(p) = \mn\Delta^{-1}(p) + \frac{\lambda_0}{2 V} \sum_\ell \mn\Delta(\ell) + O(\lambda_0^2) = \hat p^2 + m_0^2 + \frac{\lambda_0}{2 V} \sum_\ell \mn\Delta(\ell) + O(\lambda_0^2).
\end{equation}
By expanding the lattice momenta $\hat p$ in $a$,
\begin{equation}
\hat p^2 = p^2 - \frac{a^2}{12} \sum_\mu p_\mu^4 + O(a^4 p^6),
\end{equation}
we see that there is no change to the $p^2$ coefficient at 1-loop order, which means that $Z_\phi = 1 + O(\lambda_0^2)$. The renormalized mass, however, has a first order contribution like
\begin{equation}
m_\mrm{r}^2 = m_0^2 + \frac{\lambda_0}{2 V} \sum_\ell \mn\Delta(\ell) + O(\lambda_0^2).
\end{equation}
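The finite-volume tadpole sum appearing here is straightforward to evaluate numerically. A sketch in dimensionless units ($a = 1$; parameter values are arbitrary):

```python
import math, itertools

# Sketch in dimensionless units (a = 1; parameter values arbitrary):
# one-loop tadpole contribution to the renormalized mass on a finite
# d = 3 lattice, m_r^2 = m_0^2 + (lam_0 / 2 N^d) sum_l 1/(l_hat^2 + m_0^2).
N, d = 8, 3
m0sq, lam0 = 0.25, 0.4

tadpole = 0.0
for l in itertools.product(range(N), repeat=d):
    lhat2 = sum(4.0 * math.sin(math.pi * k / N) ** 2 for k in l)   # lattice momentum^2
    tadpole += 1.0 / (lhat2 + m0sq)
mr_sq = m0sq + lam0 / (2.0 * N ** d) * tadpole   # one-loop shifted mass
```

The tadpole is positive, so at this order the renormalized mass is always shifted upward relative to the bare mass.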
The connected 4-point function at 1-loop order is
\begin{align}
W^{(4)}_0 & (\boldsymbol p) = V \hat \delta (p_\mrm{tot}) \mn\Delta(p_1) \cdots \mn\Delta(p_4) \nonumber \\
& \times \Big[ - \lambda_0 + \frac{\lambda_0^2}{2 V} \sum_\ell \mn\Delta(\ell) \sum_{i=1}^4 \mn\Delta(p_i) + \frac{\lambda_0^2}{2 V} \sum_{i=1}^3 \sum_\ell \mn\Delta(\ell) \mn\Delta(\ell + p_{\sigma_i}) + O(\lambda_0^3) \Big].
\end{align}
Dividing by four factors of $W^{(2)}$ and expanding the denominator in $\lambda_0$ cancels the second term above, which is not 1PI. One then obtains (minus) the 1PI function, and evaluation at zero external momenta, together with the sign in the definition of $\Gamma^{(4)}$, yields the renormalized coupling, or
\begin{equation}
\lambda_\mrm{r} = \lambda_0 - \frac{3 \lambda_0^2}{2 V} \sum_\ell \mn\Delta(\ell) \mn\Delta(\ell) + O(\lambda_0^3).
\end{equation}
We remark that corresponding to the dimful equations above are the dimless relations
\begin{align}
\hat m_\mrm{r}^2 & = \hat m_0^2 + \frac{\hat \lambda_0}{2 N^d} \sum_\ell a^{-2} \mn\Delta(\ell) + O(\hat \lambda_0^2), \nonumber \\
\hat \lambda_\mrm{r} & = \hat \lambda_0 - \frac{3 \hat \lambda_0^2}{2 N^d} \sum_\ell a^{-4} \mn\Delta(\ell) \mn\Delta(\ell) + O(\hat \lambda_0^3),
\end{align}
obtained by letting $a$ give dimension to all quantities, e.g. $\hat m_\mrm{r} = m_\mrm{r} a, \; \hat \lambda_\mrm{r} = a^{4-d} \lambda_\mrm{r} $. Up to factors of $\beta$ coming from $\sqrt{\beta} \, \hat \varphi = a^{d_\phi} \varphi$, these couplings are directly measured in lattice simulations. Notice that the dimless renormalized couplings are therefore determined solely by the choice of dimless simulation parameters (and the lattice size $N$).
The evaluation of lattice loop integrals is generally more difficult than those of the continuum, and one resorts to expansion in $\hat m_0$ and numerical integrations for exact results, under the assumption that small $\hat m_0$ indeed is the interesting limit. The $\hat m_0$ expansions are typically asymptotic series, since the coefficients of the would-be Taylor expansion are often singular at some order.
To make our lives easier, we evaluate these integrals in the naive continuum limit, where deviations from the continuum result due to the lattice arise from the expansion of $\hat p^2$ in $p^2 a^2$. The renormalized mass with a sharp cutoff $\mn\Lambda = a^{-1}$ evaluates to
\begin{equation} \label{mass_divergence}
m_\mrm{r}^2 = m_0^2 + \frac{\lambda_0}{2} \; \Omega_3 \Big[ \frac{1}{a} - m_0 \arctan \frac{1}{a m_0} \Big] + O(\lambda_0^2),
\end{equation}
where $\Omega_d = S_{d-1} / (2 \pi)^d$ is a common factor arising in loop integrals; $S_{n}$ is the $n$-sphere surface area. For $d=3$, $\Omega_3 = 1/(2\pi^2)$. In perturbation theory, one is ultimately interested in replacing $m_0$ by $m_\mrm{r}$ in the series of other observables, so we expand the expression in powers of $m_0$:
\begin{equation}
m_\mrm{r}^2 = m_0^2 + \frac{\lambda_0}{4 \pi^2 a} \Big[ 1 - \frac{\pi}{2} a m_0 + O((a m_0)^2) \Big] + O(\lambda_0^2).
\end{equation}
Multiplying by $a^2$ leads to
\begin{equation} \label{mren}
\hat m_\mrm{r}^2 = \hat m_0^2 + \frac{\hat \lambda_0}{4 \pi^2} \Big[ 1 - \frac{\pi}{2} \hat m_0 + O(\hat m_0^2) \Big] + O(\hat \lambda_0^2).
\end{equation}
The continuum limit $a \to 0$ of the lattice model occurs for $\hat m_\mrm{r} = m_\mrm{r} a \to 0$ with $m_\mrm{r} \neq 0$. We see that the limit is equivalent to
\begin{equation}
\hat m_0^2 \to - \frac{1}{4 \pi^2} \hat \lambda_0 + O(\hat \lambda_0^2, \hat \lambda_0 \hat m_0).
\end{equation}
In other words, we can approach the continuum limit of the model by fixing $\hat \lambda_0$ and \textit{tuning} $\hat m_0^2$ to some particular value, which to first order in perturbation theory is given as above.\footnote{Since a perturbative estimate may not always be reliable, this way of choosing simulation parameters is not taken, in practice.}
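A sketch of this leading-order tuning (the value of $\hat \lambda_0$ below is arbitrary, and higher orders are dropped), showing the bare mass approaching the critical value $-\hat \lambda_0 / 4\pi^2$ as the target renormalized mass is sent to zero:

```python
import math

# Leading-order tuning sketch (arbitrary lam_hat; higher orders dropped):
# inverting eq. (mren) at first order gives
#     m0_hat_sq ~ mr_hat_sq - lam_hat / (4 pi^2),
# so the bare mass approaches the critical value as mr_hat_sq -> 0.
lam_hat = 0.3
m0_crit = -lam_hat / (4.0 * math.pi ** 2)   # critical bare mass squared (negative)
for mr_hat_sq in (0.1, 0.01, 0.001):
    m0_hat_sq = mr_hat_sq + m0_crit         # tuned bare mass for this target
```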
The dimful renormalized coupling is similarly given by
\begin{equation}
\lambda_\mrm{r} = \lambda_0 - \frac{3 \lambda_0^2}{8 \pi^2} \Big[ \frac{1}{m_0} \arctan \frac{1}{am_0} - \frac{a}{1 + a^2 m_0^2} \Big] + O(\lambda_0^3),
\end{equation}
which (asymptotically) expands to
\begin{equation}
\lambda_\mrm{r} = \lambda_0 - \frac{3}{8 \pi^2} \frac{\lambda_0^2}{m_0} \Big[ \frac{\pi}{2} - 2 \hat m_0 + O(\hat m_0^3) \Big] + O(\lambda_0^3),
\end{equation}
and multiplying by $a$ we find
\begin{equation} \label{lamren}
\hat \lambda_\mrm{r} = \hat \lambda_0 - \frac{3}{8 \pi^2} \frac{\hat \lambda_0^2}{\hat m_0} \Big[ \frac{\pi}{2} - 2 \hat m_0 + O(\hat m_0^3) \Big] + O(\hat \lambda_0^3).
\end{equation}
We remark that a more useful dimensionless coupling to define is $g_\mrm{r} := \lambda_\mrm{r} / m_\mrm{r}$, as we will see at the end of the following section.
Once the series representation of the renormalized couplings has been obtained, one can invert them to obtain the bare couplings as series in renormalized ones. This allows all observables to be reexpressed in terms of renormalized parameters. When such a reexpression leads to a total elimination of cutoff-sensitivity, a theory is called \textit{renormalizable}, as mentioned before. In $\phi^4_3$ theory, there are in fact only two primitive diagrams that are cutoff-sensitive, which are renormalized by the mass $\hat m_\mrm{r}^2$ and the wave function $Z$-factor. This scenario is an instance of \textit{super-renormalizability}. It will turn out, however, that in order to talk about the infrared properties of $\phi^4_3$, it is nevertheless important to define the renormalized coupling $\lambda_\mrm{r}$ and to express the perturbation series in terms of $\lambda_\mrm{r}$, or $g_\mrm{r}$.
\section{Perturbative renormalization group}
The term ``renormalization group'' was first used in 1953 by Stueckelberg and Petermann \cite{Petermann:1953wpa} to describe the transformations which relate renormalized couplings defined at various scales in QED4. The next year, Gell-Mann and Low introduced their analysis of the scale-dependent coupling of QED \cite{GellMann:1954fq}, which introduced the concept of the \textit{beta function}. The method of Gell-Mann and Low may be termed \textit{perturbative renormalization group}, as it concerns itself with equations derivable only in a perturbative context. Perturbative RG was brought to its final form by Callan \cite{Callan:1970yg} and Symanzik \cite{Symanzik:1970rt} in 1970, right as Wilson was starting to put his theory of RG together. Wilson's philosophy was inherently nonperturbative, even though many of its instances involved perturbation theory. On the lattice, nonetheless, it is useful to begin with an understanding of perturbative RG, as it applies well in many theories, including QCD.
By comparing the measured value of $\hat m_\mrm{r}$ with the empirical correlation length, we can determine the lattice spacing by $a = \hat m_\mrm{r} / m_\mrm{r} = \xi / \hat \xi$, which again is an example of setting the scale. At fixed empirical scale $m_\mrm{r}$, a change in the cutoff $a$ therefore amounts to a change in $\hat m_\mrm{r}$, which is itself a function of $\hat m_0, \; \hat g_0$. Hence, a change in the cutoff is tantamount to a change in the bare parameters; this relationship is called the \textit{bare renormalization group}, which we describe below. Alternatively, we can consider the bare parameters to be fixed, and look at the change in renormalized observables as the renormalized mass $m_\mrm{r}$ is changed. This second perspective implies the \textit{Callan-Symanzik} equations. By studying these two faces of RG, we may form a picture of the behavior of a theory in the space of bare or renormalized parameters.
\subsection{Bare RG equations} In any lattice observable, we can in principle replace bare parameter dependence by renormalized parameter dependence, by using the equations which define them:
\begin{equation}
H^{(n)}_\mrm{r}(\hat g_\mrm{r}, \hat m_\mrm{r}) := \Gamma_\mrm{r}^{(n)}(\hat g_0(\hat g_\mrm{r}, \hat m_\mrm{r}), \hat m_0(\hat g_\mrm{r}, \hat m_\mrm{r})).
\end{equation}
Comparison with the correlation length determines the spacing $a$, and we can then define
\begin{equation}
\tilde H^{(n)}_\mrm{r}(g_\mrm{r}, m_\mrm{r}, a) := H^{(n)}_\mrm{r}(g_\mrm{r} a^{d_g}, m_\mrm{r} a),
\end{equation}
where $d_g$ is minus the mass dimension of $g_\mrm{r}$. If the theory is \textit{perturbatively renormalizable}, then these functions have the nontrivial property of having a limit as $a \to 0$,
\begin{equation}
\tilde H^{(n)}_\mrm{r}(g_\mrm{r}, m_\mrm{r}, a) = \tilde H^{(n)}_\mrm{r}(g_\mrm{r}, m_\mrm{r}, 0) + O(a^2 \ln^\ell a),
\end{equation}
where $\ell$ is some positive integer determined perturbatively. Renormalizability then implies (at fixed $g_\mrm{r}, m_\mrm{r}$)
\begin{equation}
a \frac{\partial}{\partial a} \tilde H^{(n)}_\mrm{r}(g_\mrm{r}, m_\mrm{r}, a) = O(a^2 \ln^\ell a).
\end{equation}
The terms on the r.h.s. are called \textit{scaling violations}. Since the various $n$-point functions above are numerically equal, $\tilde H^{(n)} = H^{(n)} = \Gamma^{(n)}$, we can write the differential renormalizability statement in terms of the $\Gamma^{(n)}$,
\begin{equation}
a \frac{\mathrm{d}}{\mathrm{d} a} \Gamma_\mrm{r}^{(n)}(\hat g_0(\hat g_\mrm{r}, \hat m_\mrm{r}), \hat m_0(\hat g_\mrm{r}, \hat m_\mrm{r})) = O(a^2 \ln^\ell a),
\end{equation}
where the $a$-dependence is implicit in $\hat g_\mrm{r}, \hat m_\mrm{r}$; we could therefore write the arguments of $\Gamma^{(n)}_\mrm{r}$ as $\hat g_0(a), \hat m_0(a)$. Such functions describe the family of bare parameters which all yield the same physics. It will be convenient to replace $\hat m_0$ by $\hat m_\mrm{r}$, which can be done in principle by solving the equation defining $\hat m_\mrm{r}$ for $\hat m_0$, to obtain functions $\tilde \Gamma^{(n)}_\mrm{r}(\hat g_0, \hat m_\mrm{r})$, yielding
\begin{equation}
a \frac{\mathrm{d}}{\mathrm{d} a} \tilde \Gamma_\mrm{r}^{(n)}(\hat g_0(a), \hat m_\mrm{r}(a)) = O(a^2 \ln^\ell a).
\end{equation}
Writing $\tilde \Gamma^{(n)}_\mrm{r} = Z_\phi^{n/2} \tilde \Gamma^{(n)}_0$, and then using the chain rule, while recalling that $\hat m_\mrm{r} = a m_\mrm{r}$, we find the \textit{bare RG equations}
\begin{equation}
\Big( \hat m_\mrm{r} \frac{\partial}{\partial \hat m_\mrm{r}} - \beta_\mathrm{latt} \frac{\partial}{\partial \hat g_0} + n \gamma_\mathrm{latt} \Big) \tilde \Gamma^{(n)}_0( \hat g_0, \hat m_\mrm{r}) \Big|_{g_\mrm{r}, m_\mrm{r}} = O(a^2 \ln^\ell a),
\end{equation}
where the lattice beta function and anomalous field dimension are defined by\footnote{The sign is chosen so that decreasing $a$ is equivalent to increasing $m_\mrm{r}$ in the renormalized RG equations.}
\begin{equation}
\beta_\mathrm{latt} := - a \frac{\mathrm{d} \hat g_0}{\mathrm{d} a} \Big|_{g_\mrm{r}, m_\mrm{r}} , \quad \gamma_\mathrm{latt} := \frac{a}{2} \frac{\mathrm{d} \ln Z_\phi}{\mathrm{d} a}\Big|_{g_\mrm{r}, m_\mrm{r}}.
\end{equation}
The total derivatives here become partials when the bare parameters are expressed in terms of $(g_\mrm{r}, m_\mrm{r}, a)$ via $(\hat g_\mrm{r}, \hat m_\mrm{r})$. A further consequence of perturbative renormalizability is that $\beta_\mathrm{latt} = \beta_\mathrm{latt}(\hat g_0)$ is a pure function of $\hat g_0$, up to scaling violations. If this function is known, the equation may be integrated to obtain $\hat g_0(a)$. Knowledge of the beta function is essential to understanding the approach to the continuum limit of a lattice theory, as we will soon see.
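To illustrate integrating a beta function, consider a toy asymptotically free case with an assumed one-loop form $\beta_\mathrm{latt}(\hat g_0) = -b_0 \hat g_0^3$, so that $a \, \mathrm{d}\hat g_0 / \mathrm{d} a = b_0 \hat g_0^3$ and the coupling decreases as $a \to 0$. The sketch below (coefficients are arbitrary) integrates this flow numerically in $t = \ln(a/a_1)$ and compares with the exact solution $\hat g_0^{-2}(a) = \hat g_0^{-2}(a_1) - 2 b_0 \ln(a/a_1)$:

```python
import math

# Toy integration of a bare beta function with the assumed one-loop form
# beta_latt(g) = -b0 g^3, i.e. a dg/da = b0 g^3; integrate in t = ln(a/a1)
# from t = 0 down to t = -2 (a decreasing) with a simple RK4 stepper and
# compare with the exact solution 1/g^2(t) = 1/g1^2 - 2 b0 t.
b0 = 0.05
a1, g1 = 1.0, 0.8

def f(y):             # flow equation dg/dt = b0 g^3
    return b0 * y ** 3

steps = 10000
t, g = 0.0, g1
dt = -2.0 / steps
for _ in range(steps):
    k1 = f(g)
    k2 = f(g + 0.5 * dt * k1)
    k3 = f(g + 0.5 * dt * k2)
    k4 = f(g + dt * k3)
    g += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

g_exact = 1.0 / math.sqrt(1.0 / g1 ** 2 - 2.0 * b0 * t)
```

The numerically integrated coupling indeed decreases toward the continuum limit and agrees with the exact one-loop solution.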
\subsection{Callan-Symanzik equations} A complementary scenario is to consider the bare coupling $\hat g_0$ as a fixed parameter and to vary $\hat m_\mrm{r}$ via $m_\mrm{r}$. Varying $\hat m_\mrm{r}$ is equivalent to varying $\hat m_0$ at fixed $\hat g_0$ in a lattice simulation. From the relation
\begin{equation}
H^{(n)}_\mrm{r}(\hat g_\mrm{r}, \hat m_\mrm{r}) = \tilde \Gamma_\mrm{r}^{(n)}(\hat g_0, \hat m_\mrm{r}),
\end{equation}
the total derivative of the l.h.s. with respect to $\hat m_\mrm{r}$ is
\begin{equation}
\Big( m_\mrm{r} \frac{\partial}{\partial m_\mrm{r}} + \beta_\mrm{r} \frac{\partial}{\partial \hat g_\mrm{r}} \Big) H^{(n)}_\mrm{r}(\hat g_\mrm{r}, \hat m_\mrm{r}), \quad \beta_\mrm{r} := m_\mrm{r} \frac{\mathrm{d} \hat g_\mrm{r}}{\mathrm{d} m_\mrm{r}} \Big|_{\hat g_0, a},
\end{equation}
while that of the r.h.s. is
\begin{equation}
n \gamma_\mrm{r} \tilde \Gamma_\mrm{r}^{(n)}(\hat g_0, \hat m_\mrm{r}) + \Delta \tilde \Gamma_0^{(n)}(\hat g_0, \hat m_\mrm{r}),
\end{equation}
where
\begin{equation}
\gamma_\mrm{r} := \frac{ m_\mrm{r}}{2} \frac{\mathrm{d} \ln Z_\phi}{\mathrm{d} m_\mrm{r}}\Big|_{\hat g_0}, \quad \Delta \tilde \Gamma_0^{(n)}(\hat g_0, \hat m_\mrm{r}) := Z_\phi^{\frac{n}{2}} m_\mrm{r} \frac{\partial}{\partial m_\mrm{r}} \tilde \Gamma_0^{(n)}(\hat g_0, \hat m_\mrm{r}).
\end{equation}
Writing everything in terms of $H^{(n)}$, we find the \textit{Callan-Symanzik equations} of $\phi^4_d$,
\begin{equation}
\Big( \hat m_\mrm{r} \frac{\partial}{\partial \hat m_\mrm{r}} + \beta_\mrm{r} \frac{\partial}{\partial \hat g_\mrm{r}} - n \gamma_\mrm{r} \Big) H^{(n)}_\mrm{r}(\hat g_\mrm{r}, \hat m_\mrm{r}) = \Delta \tilde \Gamma_0^{(n)}(\hat g_0, \hat m_\mrm{r}).
\end{equation}
The r.h.s. is an observable which involves an insertion of the renormalized $\phi^2$ operator. A more thorough analysis of renormalizability must also include such insertions, but here we just report that the correlations of observables with arbitrary numbers of insertions of $\phi^2$ are also perturbatively renormalizable in $\phi^4_d$ theories \cite{ZinnJustin:2002ru}.\footnote{The CS equation in this form may look different from the forms we've grown used to due to the presence of the $\Delta \Gamma$ term. But this is a result of having used $\boldsymbol p = 0$ as the subtraction scale in the renormalization conditions, rather than some scale $\mu > 0$. See \cite{ZinnJustin:2002ru} for details.}
To sum up the previous two subsections, we have seen that the existence of a perturbatively renormalizable theory implies certain RG equations which describe the variation of observables, whether they're bare or renormalized ones, as the dimless correlation length is varied via $\hat m_\mrm{r}$. Being first order PDE's, they may be solved by the method of characteristics in the limit that we ignore scaling violations. These solutions constitute the \textit{scaling forms} of the observables in the continuum limit, an observation of far-reaching explanatory power in both field theory and critical phenomena.
\subsection{Continuum limits}
For lattice simulations, the primary utility of beta functions is that they tell us how to simulate closer to the continuum limit, as we now describe. A general renormalized beta function will have the perturbative form
\begin{equation}
\beta_\mrm{r}(\hat g_\mrm{r}) = \hat m_\mrm{r} \frac{\mathrm{d} \hat g_\mrm{r}}{\mathrm{d} \hat m_\mrm{r}} \Big|_{\hat g_0,a} = \beta_1 \hat g_\mrm{r} + \beta_2 \hat g_\mrm{r}^2 + \beta_3 \hat g_\mrm{r}^3 + O(\hat g_\mrm{r}^4).
\end{equation}
The sign of the beta function determines whether $\hat g_\mrm{r}$ decreases or increases as the cutoff is varied at fixed $\hat g_0$. As the continuum limit $\hat m_\mrm{r} \to 0$ is approached, we see that the behavior of $\hat g_\mrm{r}$ is determined by the zeros $\hat g_*$ of $\beta_\mrm{r}$. Such values are called \textit{fixed points} of the theory. Notice from the perturbative expression above that $\hat g_* = 0$ is always a fixed point, at least when the expansion above is valid. This is called the \textit{gaussian} fixed point (GFP). If $\beta_\mrm{r}$ is positive near the GFP, then as $\hat m_\mrm{r} \to 0$, the renormalized coupling approaches zero, and we say the theory is \textit{trivial}. In general, if the slope near a fixed point $\hat g_*$ is positive, then it attracts the renormalized coupling in the continuum limit, and we call such a fixed point an \textit{infrared} fixed point (IRFP). If the slope is negative, on the other hand, then $\hat g_\mrm{r}$ repels away from $\hat g_*$ in the continuum limit. These are called \textit{ultraviolet} fixed points (UVFP).
If we consider these cases from the bare RG perspective (where $\hat g_\mrm{r}$ is held fixed), then the bare coupling $\hat g_0$ behaves in the ``opposite'' way, matching our intuition that $\hat g_0$, a UV quantity, should respond oppositely to $\hat g_\mrm{r}$, an IR quantity. Qualitatively, an IRFP repels $\hat g_0$, whereas a UVFP attracts it, in the continuum limit. If a renormalized beta function is monotonic and vanishes at $\hat g_\mrm{r} = 0$, then the behavior of the theory is relatively simple: if the beta function is positive, one approaches a trivial theory ($\hat g_\mrm{r} = 0$) in the continuum limit, and if negative, $\hat g_\mrm{r}$ grows in the continuum limit.
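These sign criteria are easy to mechanize. The sketch below uses a made-up polynomial truncation of a beta function (the coefficients are illustrative, not computed from any model) and classifies each real zero by the sign of the slope there:

```python
import numpy as np

def classify_fixed_points(coeffs, eps=1e-9):
    """Find the real zeros g* of a truncated beta function
    beta(g) = coeffs[0] g + coeffs[1] g^2 + ... and classify them by
    the slope beta'(g*): positive slope attracts g_r as the continuum
    limit is approached (IRFP), negative slope repels it (UVFP)."""
    # poly1d wants highest-degree coefficients first; the constant
    # term vanishes since beta(0) = 0 in the expansion above
    beta = np.poly1d(list(reversed(coeffs)) + [0.0])
    dbeta = beta.deriv()
    out = []
    for g in np.atleast_1d(beta.r):          # zeros of the truncation
        if abs(complex(g).imag) > eps:
            continue                          # discard complex roots
        g = complex(g).real
        slope = dbeta(g)
        kind = "IRFP" if slope > eps else "UVFP" if slope < -eps else "marginal"
        out.append((g, kind))
    return sorted(out)

# a hypothetical two-term truncation, beta(g) = -g + g^2/4:
# the gaussian fixed point g* = 0 is a UVFP, g* = 4 an IRFP
print(classify_fixed_points([-1.0, 0.25]))
```

The gaussian fixed point always appears as a root because the constant term of the polynomial is zero, mirroring the perturbative expansion above.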
As a concrete example, we consider $\phi^4_3$, which has a nontrivial RG diagram, exhibiting both kinds of fixed points. For the parallel discussion of $\phi^4_4$, see Montvay and M\"unster sections 1.7 and 2.4. To compute the renormalized beta function, begin with the perturbative expression for the renormalized coupling, eq. (\ref{lamren}):
\begin{equation} \label{lambda_ren}
\lambda_\mrm{r} = \lambda_0 - \frac{3}{8 \pi^2} \frac{\lambda_0^2}{m_0} \Big[ \frac{\pi}{2} - 2 \hat m_0 + O(\hat m_0^2) \Big] + O(\hat \lambda_0^3).
\end{equation}
To study the variation as the continuum limit is approached, we replace $\hat m_0$ with $\hat m_\mrm{r}$ in eq. (\ref{lambda_ren}), valid at this order in perturbation theory. Since the renormalized coupling is a long-distance quantity, it is natural to give $\lambda_\mrm{r}$ dimension with the renormalized mass, defining the dimensionless coupling by \cite{Binney:1992vn, ZinnJustin:2002ru}
\begin{equation} \label{dimless_gren}
g_\mrm{r} := \frac{\lambda_\mrm{r}}{m_\mrm{r}} = \frac{\lambda_0}{m_\mrm{r}} - \frac{3}{8 \pi^2} \frac{\lambda_0^2}{m_\mrm{r}^2} \Big[ \frac{\pi}{2} - 2 \hat m_\mrm{r} + O(\hat m_\mrm{r}^2, \hat \lambda_0 \hat m_\mrm{r}^2) \Big] + O(\lambda_0^3).
\end{equation}
The reason for this definition is also suggested in perturbation theory, where this turns out to be the natural renormalized expansion parameter. In three dimensions, some power-counting and graph theory imply that the mass dimension of a Feynman diagram contributing to an $E$-point vertex function at order $V$ in $\lambda_0$ will be
\begin{equation}
\delta(E,V) = 3 - \smallfrac{1}{2} E - V.
\end{equation}
This tells us two important facts. First, the asymptotic dependence on the UV cutoff $\Lambda = 1/a$ decreases with increasing external points ($E$) and with increasing order in perturbation theory ($V$). In fact, there are only 2 primitive diagrams in the theory which diverge as $\Lambda \to \infty$, the snail and the sunset diagrams that appear in $\Gamma^{(2)}$; this fact makes $\phi^4_3$ an example of a \textit{superrenormalizable} theory. The second fact we learn from $\delta (E,V)$ is that, if we factor out $m_0$ from every loop integral and change momentum variables $p = m_0 \bar p$, then upper limits of integrals become $\Lambda / m_0 = 1/\hat m_0$, and the dimensionless integral gets multiplied by a factor of $m_0^{\delta(E,V)}$. Since the first two terms, $3-E/2$, are independent of $V$, they factor out of the entire perturbation series. Meanwhile, the remaining expansion is in powers of $\lambda_0 / m_0$. Thus the generic observables will have a series that looks schematically like
\begin{equation}
\Gamma^{(E)} = \Gamma^{(E)}_\mathrm{tree} + m^{3-E/2}_0 \sum_{V = 1}^\infty A_{E,V}(1/\hat m_0) \big( \lambda_0 m_0^{-1} \big)^{V}
\end{equation}
and all the coefficients $A_{E,V}(1/\hat m_0)$ are finite as $\hat m_0 \to 0$ except the snail and sunset diagrams. Replacing the bare parameters by their renormalized counterparts yields series in $g_\mrm{r}$, apart from the overall multiplication by $m_\mrm{r}^{3-E/2}$.
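The power counting above can be tabulated directly. The sketch below assumes the standard counting for connected $\phi^4$ graphs built solely from quartic vertices, $I = 2V - E/2$ internal lines and $L = I - V + 1$ loops, and scans for superficially divergent contributions ($\delta \geq 0$ with at least one loop; vacuum diagrams $E = 0$ are excluded):

```python
from fractions import Fraction

def delta(E, V):
    # mass dimension of an order-V contribution to the E-point
    # vertex function in phi^4_3: delta(E, V) = 3 - E/2 - V
    return Fraction(3) - Fraction(E, 2) - V

def loops(E, V):
    # connected phi^4 graphs with V quartic vertices and E external
    # legs: I = 2V - E/2 internal lines, L = I - V + 1 loops
    return V - Fraction(E, 2) + 1

# superficially UV-divergent contributions that contain a loop,
# scanning even E >= 2 and low orders V
divergent = [(E, V) for E in range(2, 12, 2) for V in range(1, 12)
             if delta(E, V) >= 0 and loops(E, V) >= 1]
print(divergent)  # [(2, 1), (2, 2)]: the snail and the sunset
```

The scan reproduces the superrenormalizability statement above: only the two $E = 2$ diagrams are superficially divergent, and increasing either $E$ or $V$ only improves the UV behavior.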
To compute the renormalized beta function $\beta_\mrm{r}(g_\mrm{r})$, we compute from eq. (\ref{dimless_gren})
\begin{equation}
\hat m_\mrm{r} \frac{\mathrm{d} g_\mrm{r}}{\mathrm{d} \hat m_\mrm{r}} \Big|_{\hat \lambda_0} = - \frac{\lambda_0}{m_\mrm{r}} + \frac{3}{4 \pi^2} \frac{\lambda_0^2}{m_\mrm{r}^2} \Big[ \frac{\pi}{2} + O(\hat m_\mrm{r}) \Big] + O(\lambda_0^3).
\end{equation}
Solving for $\lambda_0$ in terms of $g_\mrm{r}$ and substituting then yields
\begin{equation}
\beta_\mrm{r}(g_\mrm{r}) = - g_\mrm{r} + \frac{3}{16 \pi} g_\mrm{r}^2 + O(g_\mrm{r}^3, g_\mrm{r}^2 \hat m_\mrm{r}).
\end{equation}
The terms proportional to $\hat m_\mrm{r}$ vanish in the continuum limit (they are an example of scaling violations). To compute the bare beta function, we need the derivative of $\hat \lambda_0$ at fixed $g_\mrm{r}$. Using eq. (\ref{dimless_gren}) again, but being mindful of the $O(\hat m_\mrm{r})$ part of the 1-loop term, and using the chain rule, we compute
\begin{equation}
\beta_0(\hat \lambda_0) = \hat \lambda_0 - \frac{3}{4 \pi^2} \hat \lambda_0^2 + O(\hat \lambda_0^3, \hat \lambda_0^2 \hat m_\mrm{r}).
\end{equation}
From $\beta_\mrm{r}(g_\mrm{r})$, we learn that an IRFP exists around $g_* = 16 \pi / 3$,\footnote{This parameter does not seem very small. However, its every occurrence in the perturbation series above comes with a factor of $1/(2 \pi)^3$, so the effective expansion parameter is in fact $2/(3 \pi^2)$, which is small \cite{Binney:1992vn}.} while the gaussian fixed point is a UVFP. Thus, at fixed bare coupling, $g_\mrm{r}$ tends to $g_*$ in the continuum limit, whereas at fixed $g_\mrm{r}$, $\hat \lambda_0$ tends to zero in the continuum. The fact that $g_\mrm{r} \to g_*$ as one approaches the continuum, no matter what $\hat \lambda_0$ one begins with, is an example of \textit{universality} at the IRFP. Moreover, all critical quantities, like exponents and amplitude ratios, are expressible as functions of $g_*$, and therefore are also universal \cite{ZinnJustin:2002ru}. In 4 dimensions, the parallel analysis leads one to the conclusion that $g_\mrm{r} \to g_* = 0$ in the continuum limit, a result that has found further evidence from much more systematic analytic calculations \cite{Baker:1981zz,Aizenman:1982ze,Frohlich:1982tw,Gawedzki:1985ic,Luscher:1987ay} as well as lattice simulations \cite{Freedman:1981wr,Hasenfratz:1987eh,Kim:1992rw}. This is an example of \textit{triviality} in a quantum field theory.
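The approach to the IRFP can be checked numerically. The sketch below integrates the one-loop beta function quoted above with a crude forward-Euler step (the step size and initial couplings are arbitrary choices, not tied to any simulation); every initial coupling relaxes to the nontrivial zero of $\beta_\mrm{r}$ as $\hat m_\mrm{r} \to 0$:

```python
import math

def beta_r(g):
    # one-loop renormalized beta function of phi^4_3 from the text
    return -g + 3.0 / (16.0 * math.pi) * g ** 2

def flow_to_continuum(g0, t_max=30.0, dt=1e-3):
    """Integrate m_r dg/dm_r = beta(g) toward the continuum limit,
    using t = -ln(m_r hat) so that dg/dt = -beta(g) and the
    continuum limit corresponds to t -> infinity."""
    g = g0
    for _ in range(int(t_max / dt)):
        g -= beta_r(g) * dt       # forward-Euler step
    return g

g_star = 16.0 * math.pi / 3.0     # nontrivial zero of beta_r above
for g0 in (1.0, 5.0, 30.0):
    print(g0, "->", flow_to_continuum(g0))   # each approaches g_star
```

Starting below or above the fixed point makes no difference: the positive slope of $\beta_\mrm{r}$ at $g_*$ makes it attractive in the infrared, while the flow away from $g = 0$ exhibits the UV repulsion of the gaussian fixed point.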
This has all been perturbative, and confined to a few couplings. One may rightly wonder whether this picture holds nonperturbatively, or when there are \textit{many} couplings. Furthermore, the lingering question about how this plays out for \textit{nonrenormalizable} theories suggests itself: how should we understand situations where operators are present in the action for which perturbative renormalizability fails? In a sense, the key to a deeper understanding of RG rests in finding an answer to these questions. The insight of Wilson which led to an answer was to formulate RG in an entirely nonperturbative way with the help of the concept of block spins and theory space. It was also through his formulation that the application of the Callan-Symanzik equations to critical phenomena became apparent.
\section{Block-spin RG}
In the 1950's and 60's it became clear that the traditional approach to critical phenomena, namely, Landau mean field theory \cite{Landau:1937obd}, was inadequate to describe the long-established experimental fact of nongaussian scaling of thermodynamic properties in statistical systems near their critical points \cite{Wilson:1993dy,Cao:1999pw}. Progress was made with the pursuit of high-temperature series expansions by Domb, Fisher, and others. In 1965, Widom \cite{doi:10.1063/1.1696618} proposed a \textit{scaling hypothesis} for the thermodynamic free energy which was able to reproduce some of the observed scaling laws. But these hypotheses lacked any deep theoretical basis. The concept of ``block-spins'' emerged in the late 60's as a promising avenue to theoretically understand such scaling, beginning with a suggestion by Buckingham \cite{Cao:1999pw}, and separately (though more fully) by Kadanoff in 1966 \cite{Kadanoff:1966wm}. Kadanoff's work then formed the basis of Wilson's theory of RG, which he introduced in 1971 \cite{Wilson:1971bg,Wilson:1971dh,Wilson:1971dc}, and which finally provided a compelling theoretical explanation for the aforementioned critical properties.\footnote{The line of progress hitherto described is, of course, a narrow view of a much broader field of contributions and research in the late 60's. As Wilson notes in his Nobel lecture \cite{Wilson:1993dy}, independent work on the relationship between field theory and critical phenomena was carried out during the same time period by Gribov, Migdal, Symanzik, Polyakov, Dyson, and others. It should be noted that some of these parallel developments have recently been exploited in the conformal bootstrap program \cite{Poland:2018epd} with striking success.} The numerical implementation of block-spin RG was later carried out in the 1980's by Swendsen, Wilson, and others, in a framework known as Monte Carlo Renormalization Group (MCRG) \cite{Swendsen:1979gn,Pawley:1984et}. 
MCRG has since become a commonplace tool in the study of RG properties of lattice systems.
\subsection{Block-spin transformations}
Starting with a lattice of spins $\varphi(x)$, a new set of \textit{blocked} spins $\varphi_b(x)$ is defined by local averages of the old ones,
\begin{equation}
\varphi_b(x_b) := (B_b \varphi)(x) = \frac{b^\Delta}{b^d} \sum_{\varepsilon} \varphi(x + \varepsilon),
\end{equation}
where $\varepsilon$ is a vector pointing to each neighbor of $x$ within a distance $b$, which is called the ``scale factor,'' and $\Delta$ is called the \textit{scaling dimension} of $\varphi$, which we will discuss soon. The index $x_b$ refers to the site of a blocked lattice superimposed on the original one, located at some chosen site within the block of original sites. Unless the initial system had an infinite volume, the blocked spins must live on a smaller lattice. The blocking operator $B_b$ defined by
\begin{equation}
B_b(x,y) = \frac{b^\Delta}{b^d} \sum_\varepsilon \delta(x+\varepsilon, y)
\end{equation}
will be useful to keep in mind later in this work.
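As a minimal numerical sketch (a two-dimensional lattice with the scale factor $b$ dividing the lattice size; the field values and the choice of $\Delta$ below are illustrative, not tied to any particular model), the blocking average reads:

```python
import numpy as np

def block_spin(phi, b, Delta):
    """One block-spin step in d = 2: average phi over disjoint
    b x b blocks and rescale, phi_b = (b**Delta / b**d) * sum(phi)."""
    L = phi.shape[0]
    assert phi.shape == (L, L) and L % b == 0, "need b | L"
    # group the lattice into (L/b) x (L/b) blocks of b x b sites
    blocks = phi.reshape(L // b, b, L // b, b)
    return b ** Delta * blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8))           # stand-in spin configuration
phi_b = block_spin(phi, b=2, Delta=0.5)
print(phi_b.shape)                       # (4, 4): fewer degrees of freedom
```

Iterating `block_spin` implements the flow of configurations underlying the sequence of actions discussed below; extracting the couplings of the blocked action from such ensembles is the harder problem addressed by MCRG at the end of this chapter.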
The blocking transformation on the fields induces a transformation of the action on the level of the partition function by introducing a delta function which sets new spins equal to blocked spins,
\begin{equation}
Z = \sum_\varphi \mathrm{e}^{-S(\varphi)} = \sum_\varphi \frac{1}{V}\sum_{\varphi_b} \delta(\varphi_b - B_b \varphi) \; \mathrm{e}^{-S(\varphi)} = \sum_{\varphi_b} \; \mathrm{e}^{-S_b(\varphi_b)}.
\end{equation}
The last equality defines the blocked action. It generally does not equal the original action; it will contain many terms which were not present before, and the terms that were already present will have different values of their couplings. Kadanoff's approach was limited by not considering these extra terms. For example, if $S$ had only a nearest neighbor interaction
\begin{equation}
- J\sum_{x, \mu} \varphi(x) \varphi(x+\mu),
\end{equation}
then the new action will have a different value $J'$ as well as new terms involving next-nearest neighbors, next-next-nearest neighbors, etc., and even higher-powered interactions like\footnote{Technically, if $\varphi$ takes values in all of $\mathbb{R}$, then the higher order terms in $\varphi$ are only generated when there are interacting terms, like $\varphi^4$ or $\varphi^6$, in the initial action. If the spins are constrained to have unit size, $|\varphi| = 1$, as in the Ising model, then the nearest neighbor term is sufficient to generate such higher order interactions. But the blocking transformation is different in the Ising model; one must project the blocked spin back to unit norm.}
\begin{equation}
\sum_{x} \sum_{\mu_1, \dots, \mu_j} \varphi(x) \varphi(x + \mu_1) \cdots \varphi(x + \mu_j),
\end{equation}
for every $j = 2n + 1, \; n \in \mathbb{Z}_+$, so that each term contains an even number of fields. In fact, it will typically contain \textit{all} possible terms consistent with the symmetries of the system. We will explicitly compute a few such terms in chapter 3 when discussing functional RG. Part of Wilson's breakthrough was to recognize the relative importance of all these extra terms in the effective action.
If the correlation length of the system being described is $\xi$, then the original lattice spacing is $a = \xi / \hat \xi$, with $\hat \xi$ calculated in the original theory. By definition, the blocked lattice has a spacing $a_b = b a$, so the dimless correlation length of the blocked theory must be $\hat \xi_b = \hat \xi / b$. Thus the blocked theory will generally have a reduced dimless correlation length, which means that fewer degrees of freedom are strongly correlated across the lattice. The philosophy of both Kadanoff and Wilson was that the blocking transformation therefore reduces the complexity of many-body systems by systematically reducing the number of degrees of freedom being taken into account, without changing the physics \cite{Wilson:1973jj} (because the partition function is invariant), a philosophy which could be called the \textit{pragmatic} view of RG. Because critical phenomena are characterized by large correlation lengths, block-spin RG proves to be a useful tool.
The blocking transformation on the spins induces a transformation of the Boltzmann factor, or equivalently the action, as noted above. Thus, we can regard it as a map on the space of actions, parameterized by the number $n$ of iterations of the transformation determined by $b$, which produces a sequence, or \textit{flow},\footnote{``Flow'' may be misleading here, since the transformations are discrete. Continuous RG transformations will be described in the next three chapters.} on action space,
\begin{equation}
S_0 \to S_1 \to S_{2} \to \cdots \to S_{n}.
\end{equation}
If $S_0$ had couplings $\boldsymbol g = (g_i)$, then the couplings in $S_{n}$ are denoted $\boldsymbol g_n$. Now, as $n\to \infty$, one eventually (for generic actions) approaches $\hat \xi_n \to 0$, namely, a trivially decoupled lattice system. In the vicinity of $\hat \xi_n = 0$, the action no longer changes much after each iteration. Actions which are exactly invariant under RG transformations are called \textit{fixed points}, denoted $S_*$. From the relation $\hat \xi_b = \hat \xi / b$, we observe that the only actions which can be fixed points must have either $\hat \xi = 0$ or $\infty$. The former type are called \textit{zero-correlation length} fixed points while the latter are called \textit{critical} fixed points, since they are the ones of use in the account of critical phenomena. Zero-correlation length fixed points act as sinks for RG trajectories, since any initial theory with $\hat \xi < \infty$ will eventually run into one, at least in the generic case where there are no limit cycles or other exotic behaviors. From $a = \xi / \hat \xi$, we also see that the critical fixed points correspond to zero lattice spacing systems (if $\xi \neq 0$), i.e. the continuum limit, consistent with the analysis of perturbative RG in the previous section.
\subsection{Correlator scaling laws}
One of the striking experimental discoveries of modern physics is that the correlation functions of statistical systems at criticality can exhibit nontrivial power law behavior, rather than a (typical) exponential decay, which is a manifestation of the long-distance correlations of critical systems. For spin systems, the critical spin-spin correlation function is observed to behave like\footnote{The nontrivial part of the correlator may be understood intuitively as an expression of scale-dependence of the interaction by writing $A / z^\eta = A(z)$, with $A(z) = A' a^\eta/ z^\eta = A'(1 - \eta \ln z / a + \dots)$, which modifies the free-field behavior \cite{Cao:1999pw}.}
\begin{equation}
G(z) := \langle \varphi(z) \varphi(0) \rangle = \frac{A}{z^{d-2+\eta}},
\end{equation}
where the constant $A$ has mass dimension $-\eta$, since the dimension of the spins is $d_\phi = d/2-1$. The exponent $\eta$ is equal to zero in mean field theory \cite{Kopietz:2010zz}. The empirical fact that $\eta \neq 0$ for many systems constituted a major theoretical problem in the 60's. With the advent of RG, however, it finally found an explanation \cite{Kadanoff:1966wm,Wilson:1971bg,Cardy:1996xt}.
Let us compute $G(z)$ in the blocked theory with action $S_b$, without assuming any kind of $z$-dependence. Since $a_b = ba$, the dimensionless distance $\hat z_b = \hat z / b$ between blocked spins corresponds to a distance $\hat z$ between original spins,
\begin{equation}
G_b(\hat z_b) = \langle \hat \varphi_b(\hat z_b) \hat \varphi_b(0) \rangle_{S_b} = \langle B_b \hat \varphi (\hat z) B_b \hat \varphi(0) \rangle_{S_0} = \frac{b^{2\Delta}}{b^{2d}} \sum_{\varepsilon \varepsilon'} \langle \hat \varphi (\hat z + \varepsilon) \hat \varphi(\varepsilon') \rangle_{S_0}.
\end{equation}
At large distances one expects the approximation $G(z) \approx [G(z+\varepsilon) + G(z-\varepsilon)]/2$ to become better and better; each of the $b^{2d}$ terms in the double sum then approaches $G(\hat z)$, which leads to the asymptotic relation
\begin{equation}
G_b(\hat z/b) \sim b^{2\Delta} G(\hat z).
\end{equation}
Now, if the action $S$ had couplings $\boldsymbol g = (g_i)$, then the blocked action typically has different ones $\boldsymbol g_n$, but the \textit{function of} these couplings $G(\hat z; \boldsymbol g)$ is the same in either case (assuming we include all possible couplings in the set $\boldsymbol g$), so
\begin{equation}
G(\hat z/b; \boldsymbol g_b) \sim b^{2\Delta} G(\hat z; \boldsymbol g).
\end{equation}
This relation holds for any pair of successive blocking steps. Let us now assume that we are in the vicinity of a fixed point of the RG transformation, meaning that $\boldsymbol g_b \approx \boldsymbol g \approx \boldsymbol g_*$, implying
\begin{equation}
G(\hat z/b; \boldsymbol g_*) \sim b^{2\Delta} G(\hat z; \boldsymbol g_*).
\end{equation}
But this means $G$ is homogeneous of degree $2\Delta$; choosing $b = \hat z$ gives $G(\hat z; \boldsymbol g_*) \sim \hat z^{-2\Delta} G(1; \boldsymbol g_*)$. Thus, at large distances,
\begin{equation}
G(\hat z; \boldsymbol g_*) \sim \frac{A_*}{\hat z^{2\Delta}},
\end{equation}
which produces the empirical result when $\Delta = d/2 - 1 + \eta/2 = d_\phi + \gamma_\phi$. $d_\phi$ is the canonical mass dimension of the field $\varphi$ in position space, so $\gamma_\phi = \eta/2$ is called the \textit{anomalous} dimension of $\varphi$. This anomalous dimension coincides with the one defined in the previous section in the context of field theory.\footnote{This may not be obvious. The bare RG equations of the $\Gamma^{(n)}_0$ imply a nontrivial scaling behavior in $a$ as $a \to 0$ that is power law-like with exponent $\gamma_\phi$, which for $\Gamma^{(2)}_0$ leads to the identification of $\eta = 2\gamma_\phi$.}
We remark that short-distance observables of the original theory are not quite invariant under a blocking transformation, in the following sense. The nearest-neighbor observable
\begin{equation}
\langle \varphi(x) \varphi(x+\varepsilon) \rangle,
\end{equation}
with $|\varepsilon| < b$, has no direct counterpart in the blocked theory: those neighbors have been integrated out; the nearest-neighbor on the blocked lattice relates spins that are ``farther apart.'' By contrast, the correlator analysis above implied that $G(\hat z)$ at large distances \textit{is} invariant, up to a proportionality with the previous blocking step. This is why one says that RG transformations typically only preserve long-distance observables. We note that if the RG transformation could be made to be continuous, then one could meaningfully discuss infinitesimal variations of the short-distance observables, at least in the continuum. We will discuss this in chapter 4.
\subsection{Fixed points}
For ease of notation let $\boldsymbol g' = \boldsymbol g_b$. The blocked couplings may be expressed as functions of the previous ones:
\begin{equation}
\boldsymbol g' = \boldsymbol R_b(\boldsymbol g).
\end{equation}
Near a fixed point, assuming $\boldsymbol R_b$ is analytic at $\boldsymbol g_*$, we may linearize the transformation,
\begin{equation} \label{linearized_couplings}
\boldsymbol g' = \boldsymbol g_* + \boldsymbol T_b(\boldsymbol g_*) (\boldsymbol g - \boldsymbol g_*) + O((\boldsymbol g - \boldsymbol g_*)^2), \quad \mathrm{or} \quad \delta \boldsymbol g' = \boldsymbol T_b(\boldsymbol g_*) \delta \boldsymbol g + O(\delta \boldsymbol g^2),
\end{equation}
where $\boldsymbol T_b(\boldsymbol g)$ is called the RG ``stability matrix,'' with components
\begin{equation}
[\boldsymbol T_b(\boldsymbol g)]_{ij} = \frac{\partial g_i'}{\partial g_j}.
\end{equation}
Let $\boldsymbol v_a$ be the left-eigenvectors of $\boldsymbol T$, i.e.
\begin{equation}
\boldsymbol v_a^\top \boldsymbol T_b(\boldsymbol g_*) = \lambda_a \boldsymbol v^\top_a,
\end{equation}
and define the \textit{scaling variables} $u_a$ by $u_a := \boldsymbol v_a^\top \delta \boldsymbol g$, so that the linearized transformation eq. (\ref{linearized_couplings}) implies
\begin{equation}
u_a' = \lambda_a u_a,
\end{equation}
where $\lambda_a$ depends on $b$. Although the various couplings $\boldsymbol g$ will mix under the RG transformation, the scaling variables do not. A practical requirement of block-spin transformations is the composition property $\boldsymbol R_{b'} ( \boldsymbol R_b (\boldsymbol g)) = \boldsymbol R_{b'b}(\boldsymbol g)$, which then implies that the eigenvalues satisfy $\lambda_a(b') \lambda_a(b) = \lambda_a(b'b)$, which is solved by $\lambda_a(b) = b^{y_a}$, for some $b$-independent constants $y_a$ \cite{Kopietz:2010zz}. The $y_a$ are referred to as the \textit{RG eigenvalues} of the fixed point $\boldsymbol g_*$.
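Concretely, the linearized analysis can be carried out numerically. In the sketch below the $2\times 2$ stability matrix is hypothetical (its entries are invented for illustration, not computed from any model), with $b = 2$:

```python
import numpy as np

b = 2.0
# hypothetical stability matrix T_b(g*) for two couplings
T = np.array([[2.5, 0.4],
              [0.1, 0.3]])

# left eigenvectors of T are right eigenvectors of T.T
lam, V = np.linalg.eig(T.T)
y = np.log(lam) / np.log(b)        # RG eigenvalues: lambda_a = b**y_a

for a in range(len(lam)):
    kind = "relevant" if y[a] > 0 else "irrelevant"
    print(f"lambda_{a} = {lam[a]:.3f}, y_{a} = {y[a]:+.3f} ({kind})")

# the scaling variables u_a = v_a . delta_g transform diagonally
dg = np.array([0.01, -0.02])       # arbitrary small deviation from g*
u, u_next = V.T @ dg, V.T @ (T @ dg)
assert np.allclose(u_next, lam * u)    # u'_a = lambda_a u_a
```

The final assertion verifies the decoupling claimed above: while the couplings $\delta\boldsymbol g$ mix under one blocking step, each scaling variable simply rescales by its eigenvalue.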
We can write an arbitrary action as a scalar product of couplings $\boldsymbol g$ with action operators $\boldsymbol S = (S_i)$ as $S = \boldsymbol g^\top \boldsymbol S$. Denoting the fixed point action by $S_*$, an arbitrary deviation of an action from $S_*$ may then be written as
\begin{equation}
S - S_* = \boldsymbol g^\top \boldsymbol S - S_* = \delta \boldsymbol g^\top \boldsymbol S = \sum_a \delta \boldsymbol g^\top \boldsymbol v_a \; \boldsymbol v_a^\top \boldsymbol S = \sum_a u_a \mathcal{R}_a,
\end{equation}
where the \textit{scaling operators} have been defined, $\mathcal{R}_a := \boldsymbol v_a^\top \boldsymbol S$, and we have used completeness of the left-eigenvectors. The fixed point values of the scaling variables are therefore zero, $u_{*a} = 0$. Performing an RG transformation beginning with $S$ close to $S_*$, we obtain
\begin{equation}
S ' = S_* + \sum_a b^{y_a} u_a \mathcal{R}_a.
\end{equation}
In particular, if $S = S_* + u_a \mathcal{R}_a$ for some particular $a$, then the blocked action will again only involve $\mathcal{R}_a$. We then can distinguish three scenarios for the behavior of a perturbation from the fixed point action:
\begin{itemize}
\item $y_a < 0$: the perturbation decays with blocking iterations, and is called \textit{irrelevant},
\item $y_a = 0$: the perturbation is independent of iterations, and is called \textit{exactly marginal},\footnote{I include ``exactly'' because one often talks about ``marginally'' irrelevant and relevant operators to mean ones which are marginal at a gaussian fixed point but become either irrelevant or relevant at a nearby fixed point.}
\item $y_a > 0$: the perturbation increases with iterations, and is called \textit{relevant}.
\end{itemize}
The relative sizes of the $u_a$ present in any given action determine how closely RG transformations will map it towards a fixed point. The scaling variables with negative RG eigenvalues diminish with iterations, so they do not prevent the approach to the fixed point. The exactly marginal operators, interestingly, are invariant, and therefore a perturbation by a marginal operator constitutes a \textit{new} fixed point. Generally, then, the set of RG fixed points differing by marginal operators forms a fixed point \textit{submanifold} in the space of actions. The relevant variables, on the other hand, steer the flow \textit{away} from the fixed point. Hence, the distance of closest approach to the fixed point depends strongly on the values of the relevant scaling variables; the smaller they are, the longer it takes for those terms to ``kick in.'' For initial actions that are ``tuned'' such that $u_\mathrm{rel} = 0$, RG will map the action directly into the fixed point.
The region in parameter space that flows directly into the fixed point under RG transformations is called the \textit{basin of attraction} of the fixed point, and is therefore the surface $u_\mathrm{rel} = 0$. On this surface, the irrelevant variables are unconstrained, and theories defined by actions which differ only by irrelevant variables have the same long-distance properties. This is the phenomenon of \textit{universality}. It explains the empirical fact that many different physical systems can have the same critical exponents (RG eigenvalues) near a second-order phase transition. For example, the Ising universality class in three dimensions describes not only the critical behavior of certain ferromagnets, but also such diverse situations as the liquid-gas transition in xenon, critical points of binary fluids, and the atomic arrangement transition in copper-zinc alloys \cite{Peskin:1995ev}. Generally, one expects theories with exactly the same symmetries, in the same dimension, to belong to the same universality class. The defining symmetry of the Ising universality class is $\mathbb{Z}_2$ transformations of the order parameter.
In the correlator analysis above, it was assumed that the RG transformation had a fixed point to begin with. This will only be true if $\Delta$ is chosen carefully, and since we saw above that only $\Delta = d/2 - 1 + \eta/2$ led to the empirical value, it comes as no surprise. This may seem like an undesirable tuning of the blocking transformation, and from that perspective it is. However, once $\Delta$ is picked correctly, the scaling dimensions of any other local operators may be determined, in principle, by studying the scaling of correlation functions. If $\mathcal{R}_a(\varphi; \hat x)$ is a local scaling operator with corresponding RG eigenvalue $y_a$, then near the fixed point one has \cite{Cardy:1996xt}
\begin{equation} \label{corr_scaling}
\langle \mathcal{R}_a(\varphi_b; \hat z / b) \mathcal{R}_a(\varphi_b; 0) \rangle_{S_b} \sim b^{2\Delta_a} \langle \mathcal{R}_a(\varphi; \hat z) \mathcal{R}_a(\varphi; 0) \rangle_{S},
\end{equation}
where $\Delta_a = d - y_a$ is the \textit{scaling dimension} of $\mathcal{R}_a$. A derivation of this formula in the context of functional RG is given in chapter 3, eq. (\ref{scalingops}). In analytic work, however, this formula is of little direct use; perturbative RG methods are more typical, and recently the conformal bootstrap \cite{Poland:2018epd} has seen many successes. On the lattice, scaling dimensions may be systematically computed using MCRG techniques, as described below. In chapter 2, however, we will finally put eq. (\ref{corr_scaling}) to use in lattice simulations, but not with a blocking transformation, per se.
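On synthetic data, eq. (\ref{corr_scaling}) reduces to the homogeneity relation for the correlator, and the scaling dimension can be read off from a ratio of correlators at rescaled separations. The sketch below uses a made-up amplitude and exponent purely for illustration:

```python
import math

DELTA_TRUE = 0.518    # invented scaling dimension for the test

def G(z):
    # synthetic fixed-point correlator: an exact power law A / z**(2 Delta)
    return 1.7 / z ** (2 * DELTA_TRUE)

def measure_delta(z, b):
    # invert G(z / b) = b**(2 Delta) G(z) for Delta
    return 0.5 * math.log(G(z / b) / G(z)) / math.log(b)

print(measure_delta(z=64.0, b=2.0))    # recovers DELTA_TRUE, and the
print(measure_delta(z=10.0, b=3.0))    # result is independent of z and b
```

With Monte Carlo data one would instead insert correlators measured on the original and blocked ensembles; the extracted dimension then holds only asymptotically, with statistical and scaling-violation errors at short distances.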
\subsection{Synthesis} Let us suppose we begin with an action which has been tuned in the manner described above. Since the partition function is invariant under the blocking, and since the blocking preserves the long-distance observables, it follows that the correlations of the system will exhibit the correlations of the fixed point theory, up to rescaling of the fields. Since the fixed point theory displays possibly nontrivial scaling behavior, as we saw with the correlator $G(z)$ above, we finally see how the block-spin RG formalism can explain critical phenomena.
For statistical systems that really do have a lattice spacing, due to a microscopic cutoff such as an inter-atomic spacing in a ferromagnet, the picture is the following. The relevant parameters correspond to temperature $T$ and external magnetic field $H$. For simplicity, we imagine that $H$ vanishes identically. The critical surface of the system then corresponds to $T = T_c$. Thus, by tuning the ``temperature knob'' to $T \approx T_c$, one induces critical behavior in the system. In terms of the correlation length $\hat \xi = 1/(m_\mrm{r} a)$, the finite atomic spacing $a \neq 0$ means that one is tuning the renormalized mass to zero. The same procedure is accomplished in lattice simulations of spin systems to approach criticality.
In field theory, one typically speaks of criticality as being a ``continuum limit,'' because the relevant situation is presumably $a \to 0$ at $m_\mrm{r} \neq 0$, at least for a massive field theory. But the approach to this limit is achieved in the same way: tune the bare parameters so as to achieve $\hat \xi \to \infty$. If the fixed point theory exists, then so do theories all along the critical surface, since they are equivalent under RG transformations. Now, the renormalized theory (with $a = 0$) corresponds to a theory living on the critical surface, and therefore exists if there is a fixed point. This is the statement of nonperturbative renormalizability. Since the irrelevant variables near the fixed point play a subleading role, it is permissible to consider analyses involving only the variables with the largest RG eigenvalues, to a first approximation. This accounts for the success of the Callan-Symanzik-type of RG described in the previous section, so long as perturbation theory is valid. In particular, the case which holds $a$ fixed and sends $m_\mrm{r} \to 0$ allows one to describe critical statistical systems using perturbative RG.
In $\phi^4_3$ theory with vanishing external field $H$, the two most relevant parameters are the mass and the quartic coupling. We saw that the theory possesses an IRFP with nonzero coupling, the Wilson-Fisher fixed point (WFFP), and a UVFP with vanishing coupling, the gaussian fixed point (GFP). The critical surface of the GFP is the subspace defined by $m_0^2 = 0, \lambda_0 = 0$ (and all couplings $g_n$ of degree $n>2$ in $\phi$ also vanishing), since those variables are relevant at the GFP. If $\lambda_0 \neq 0$ (or $g_n \neq 0$), however, the IR behavior is dominated by the WFFP. The critical surface is determined by $\hat \xi = \infty$ ($m_0^2 =0$ is not sufficient with nonzero $\lambda_0, g_n$); all bare actions along this surface flow into the WFFP under RG iterations, as depicted in figure \ref{fig:WFFPs}. Since all higher-order interactions, such as $\phi^6, p^2 \phi^4, \phi^8,$ etc., are irrelevant at the WFFP,\footnote{This identification is somewhat loose; to each of these operators corresponds a \textit{scaling operator} that is irrelevant.} we observe that a large class of scalar field theories are governed by the same fixed point. Since those irrelevant operators coincide with the nonrenormalizable interactions in perturbation theory, we now know how to think of them: they are ultimately unproblematic because they do not significantly alter the long-distance properties of the theory. Putting this knowledge to use is the program of \textit{effective field theory}, which we will briefly summarize in chapter 3. We close with a quote from Wilson:
\begin{displayquote}
\textit{``I go to graduate school in physics, and I take the first course in quantum field theory, and I’m totally disgusted with the way it’s related. They’re discussing something called renormalization group, and it’s a set of recipes, and I’m supposed to accept that these recipes work — no way. I made a resolution, I would learn to do the problems that they assigned, I would learn how to turn in answers that they would accept, holding my nose all the time, and someday I was going to understand what was really going on. And it took me ten years, but through the renormalization group work I finally convinced myself that there was a reasonable explanation for what was taught in that course.''} -- Reported in P. Ginsparg's \textit{Renormalized After-Dinner Anecdotes} at the ``Celebrating the Science of Kenneth Geddes Wilson" symposium in 2013 \cite{Ginsparg:2014fya}.
\end{displayquote}
\begin{figure}
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=0.8\textwidth]{WFFP3.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{WFFPnew.png}
\end{minipage}
\caption{\small{(Left) Projection of the RG flow in $\phi^4_3$ to the relevant hyperplane. Adapted from \cite{Kopietz:2010zz}. (Right) RG flow of $\phi^4_3$ with a third axis representing all irrelevant interactions (besides $\lambda$). Trajectories that do not begin on the critical surface initially approach the IRFP but eventually veer away in the directions orthogonal to the surface at the IRFP. Adapted from \cite{Capponi:2016yjz}.}}
\label{fig:WFFPs}
\end{figure}
\subsection{MCRG} The most systematic implementation of the block-spin RG transformation is via Swendsen's Monte Carlo Renormalization Group \cite{Swendsen:1979gn, Pawley:1984et}, which extracts estimates of critical exponents from a computation of the discrete RG stability matrix introduced above. Consider the expectation value of an action operator $S_k$ after a blocking step $S \to S'$,
\begin{equation}
\langle S_k' \rangle_{S'} = \frac{1}{Z(g')} \sum_{\varphi} S_k' \mathrm{e}^{-S'(g')} = - \frac{\partial}{\partial g_k'} \log Z(g').
\end{equation}
From the invariance of the partition function, $Z(g) = Z(g')$, we can differentiate with respect to the couplings $g_j$ at the previous blocking step, to obtain
\begin{equation}
\frac{\partial}{\partial g_j} \langle S_k' \rangle_{S'} = - \langle S_k' S_j \rangle_{S}^\mrm{c},
\end{equation}
where $S'_k$ in an expectation value with respect to $S$ is understood as evaluation of $S_k$ on the blocked field $\varphi_b$. Alternatively, we can use the chain rule to differentiate with respect to $g'$:
\begin{equation}
\sum_h \frac{\partial g'_h}{\partial g_j} \frac{\partial}{\partial g'_h} \langle S_k' \rangle_{S'} = -\sum_h \frac{\partial g'_h}{\partial g_j} \langle S_k' S_h' \rangle_{S'}^\mrm{c}.
\end{equation}
Putting it all together we obtain
\begin{equation} \label{Swendsen_Eqns}
\langle S_k' S_j \rangle_{S}^\mrm{c} = \sum_h \langle S_k' S_h' \rangle_{S}^\mrm{c} \; T_{hj},
\end{equation}
where the RG stability matrix $T_{hj} = \partial g'_h / \partial g_j$ enters. Since the observables on both sides may be explicitly computed in a simulation, one can compute the matrix $\boldsymbol T$ at blocking scale $b$ by numerically inverting the matrix equation above.
If the bare action is sufficiently close to the critical surface, then repeated blocking transformations carry one toward the RG fixed point. In its vicinity, the stability matrix will approach its fixed point value, and the eigenvalues of $\boldsymbol T$ will approach $b^{y_\alpha}$. Since $b$ is fixed by definition of the blocking, one can extract estimates for the RG eigenvalues using MCRG. The method is limited in practice by the number of iterations one can do given a finite simulation volume. Choosing $b=2$ leads to a halving of the linear size of the lattice with every iteration. Nonetheless, the MCRG method has been applied in numerous systems and has been quite successful \cite{Swendsen:1981rb,Pawley:1984et,Lang:1986pd,Hasenfratz:2009ea}.
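The linear algebra of the method can be made concrete with a short Python sketch. All inputs below are fabricated (not lattice data): the operator basis, the matrix entries, and the eigenvalues $y = 1.6, -0.8$ are hypothetical stand-ins, chosen only to show how the stability matrix and its eigenvalues would be recovered from measured correlators.

```python
import numpy as np

# Sketch of the MCRG linear algebra with fabricated inputs (not lattice
# data): suppose a simulation produced the connected correlator matrices
#   A[k, j] = <S'_k S_j>_c   and   B[k, h] = <S'_k S'_h>_c
# at blocking factor b = 2. The matrix equation above reads A = B @ T, so
# T = B^{-1} A, whose eigenvalues approach b**y_alpha near the fixed point.
b = 2.0

# Fabricate a "true" stability matrix with one relevant (y = 1.6) and one
# irrelevant (y = -0.8) direction, mixed into a non-diagonal operator basis.
y_true = np.array([1.6, -0.8])
V = np.array([[1.0, 0.3],
              [0.2, 1.0]])
T_true = V @ np.diag(b ** y_true) @ np.linalg.inv(V)

# Fabricated <S'_k S'_h>_c and the <S'_k S_j>_c it implies via A = B T.
B_mat = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A_mat = B_mat @ T_true

# "Analysis" step: recover the stability matrix and the RG eigenvalues.
T_est = np.linalg.solve(B_mat, A_mat)
y_est = np.sort(np.log(np.linalg.eigvals(T_est).real) / np.log(b))[::-1]
print(y_est)   # ~ [1.6, -0.8]
```

In a real analysis the correlator matrices carry statistical noise and truncation effects from the finite operator basis, so the recovered eigenvalues only approximate $b^{y_\alpha}$.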
\section{Lattice gauge theory and fermions}
In the next chapter we will see an example of a gauge theory in 4 dimensions, including fermions, so here we give a brief description of such theories on the lattice.
\subsection{Gauge theory} On the lattice, gauge fields are group-valued variables (or ``link variables'') $U_\mu(x) \in G$ living on links connecting adjacent sites, where the pair $(x,\mu)$ identifies the field on the link connecting site $x$ with $x+\mu$, and $\mu = 1, ... ,d$. Often $G$ is a Lie group, like U(1) or SU($N$), but discrete groups like $\mathbb{Z}_N$ or crystal groups are also sometimes considered. We denote the Lie algebra of $G$ by $\mathfrak{g}$. Gauge transformations arise from the change of variables
\begin{equation}
U_\mu(x) = \Omega(x) U'_\mu(x) \Omega^\dag(x+\mu), \quad \Omega(x) \in G,
\end{equation}
so that traces of products of links which form a closed loop are gauge invariant. The simplest such product is the plaquette around every elementary square of the lattice,
\begin{equation}
U_{\mu\nu}(x) = U_{\mu}(x) U_{\nu}(x+\mu) U^\dag_\mu (x+\nu) U^\dag_\nu(x).
\end{equation}
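Gauge invariance of the plaquette trace is easy to check numerically. The following Python sketch (illustrative only, not production lattice code) generates random SU(2) links from unit quaternions and verifies the invariance for a single elementary square:

```python
import numpy as np

# Illustrative check: the plaquette trace is invariant under the gauge
# transformation U_mu(x) -> O(x) U_mu(x) O(x+mu)^+. We use SU(2), where a
# random group element comes from a unit quaternion a0 + i a.sigma.
rng = np.random.default_rng(1)

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1],  a[0] - 1j * a[3]]])

def plaq_trace(u1, u2, u3, u4):
    """tr[ U_mu(x) U_nu(x+mu) U_mu(x+nu)^+ U_nu(x)^+ ]"""
    return np.trace(u1 @ u2 @ u3.conj().T @ u4.conj().T)

# Four links around one elementary square ...
U1, U2, U3, U4 = (random_su2() for _ in range(4))
# ... and gauge rotations at its corners x, x+mu, x+nu, x+mu+nu.
Ox, Oxm, Oxn, Oxmn = (random_su2() for _ in range(4))

before = plaq_trace(U1, U2, U3, U4)
after = plaq_trace(Ox @ U1 @ Oxm.conj().T,    # U_mu(x)
                   Oxm @ U2 @ Oxmn.conj().T,  # U_nu(x+mu)
                   Oxn @ U3 @ Oxmn.conj().T,  # U_mu(x+nu)
                   Ox @ U4 @ Oxn.conj().T)    # U_nu(x)
assert np.isclose(before, after)   # invariant up to round-off
```

The rotations $\Omega$ at the interior of the product cancel in pairs, leaving $\Omega(x) U_{\mu\nu}(x) \Omega^\dag(x)$, whose trace is unchanged.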
It is then sensible to construct an action as a positive definite sum over all plaquettes. This is the \textit{Wilson action}, introduced in 1974 \cite{Wilson:1974sk},\footnote{Although the corresponding action for the discrete gauge group $\mathbb{Z}_2$ was written down 3 years earlier by Wegner \cite{Wegner:1984qt}.} and is given by
\begin{equation} \label{Waction}
S_W(U) = \frac{\beta}{N} \sum_x \sum_{\mu < \nu} \mathrm{Re} \; \mathrm{tr} \big[ \mathbb{I} - U_{\mu\nu}(x) \big],
\end{equation}
where $N$ is the dimension of the defining representation of $G$ (e.g. $N$ for SU($N$)). To formally obtain the continuum theory, one defines the Lie algebra-valued vector potentials $A_\mu(x) \in \mathfrak{g}$ by
\begin{equation}
U_\mu(x) = \exp a A_\mu(x),
\end{equation}
and expands in $a$. Using the Baker-Campbell-Hausdorff formula, one finds the expansion
\begin{equation}
U_{\mu\nu}(x) + U^\dag_{\mu\nu}(x) = 2 \, \mathbb{I} + a^4 F_{\mu\nu}(x) F_{\mu\nu}(x) + O(a^5),
\end{equation}
where
\begin{equation}
F_{\mu\nu}(x) = \partial_\mu A_\nu(x) - \partial_\nu A_\mu(x) + [A_\mu(x), A_\nu(x)]
\end{equation}
is the continuum field strength tensor, which takes values in $\mathfrak{g}$.\footnote{We typically use the convention of anti-hermitian elements $X \in \mathfrak{g}$. To obtain the hermitian gauge field one lets $A_\mu = i \tilde A_\mu$. Furthermore, the presence of $g_0^2$ in the Wilson action is related to the perturbative convention of having it in the fermion coupling term $i g_0 \bar\psi \tilde{\slashed{A}}' \psi$ by a rescaling $\tilde A_\mu' = \tilde A_\mu / g_0$. Such a rescaling makes the canonical dimension of the gauge field $d_A = 1$ in every dimension.} The Wilson action becomes
\begin{equation}
S_{W}(U(A)) = - \frac{ a^d}{2 g_0^2} \sum_x \Big[ \sum_{\mu\nu} \mathrm{tr} [ F_{\mu\nu}(x) F_{\mu\nu}(x) ] + O(a) \Big],
\end{equation}
with $\beta = 2 N / g_0^2$. The leading term is the Yang-Mills action $S_{\mathrm{YM}}(A)$ in the naive continuum limit $a \to 0$, that is,
\begin{equation}
S_{\mathrm{YM}}(A) = - \frac{1}{2 g_0^2} \int \mathrm{d}^d x \; \mathrm{tr} [ F_{\mu\nu}(x) F_{\mu\nu}(x) ],
\end{equation}
which defines the pure-gluonic sector of QCD.
The correlation length of a lattice gauge theory is determined by the lightest mass in its spectrum. For a pure gauge theory, this must refer to the lightest glueball state; a ``glueball'' is a bound state of gluons. To measure this one would have to compute the plaquette-plaquette correlator, which is a difficult task as the measurement is strongly affected by signal-to-noise problems \cite{DeGrand:2006zz}. Moreover, we do not even know experimentally what the mass of this state would be, since the real world includes fermions. The study of pure lattice gauge theory in 4 dimensions is therefore somewhat academic. However, several methods to set the scale in a semi-realistic way (i.e., using experimental measurements) have been put forth over the years.
A common method to set the scale is using the \textit{Sommer parameter} \cite{Sommer:1993ce}, which amounts to a measurement of the static quark potential $V(R)$. To measure this, one first defines the \textit{Wilson loop} operator by
\begin{equation}
W(C) := \mathrm{tr} \prod_{\ell \in C} U(\ell),
\end{equation}
where $C$ is a closed loop and $\ell \in C$ are the link labels along $C$. For a rectangular loop $C_{RT}$ of spatial size $R$ and temporal extent $T$, one can argue from a spectral decomposition that the static quark potential is given by
\begin{equation}
V(R) = - \lim_{T \to \infty} \frac{1}{T} \log \langle W(C_{RT}) \rangle.
\end{equation}
The expected form of $V(R)$ is that of a linearly confining theory,
\begin{equation}
V(R) = A + \frac{B}{R} + \sigma R,
\end{equation}
where $\sigma$ is the \textit{string tension}. The associated static force is $F(R) = V'(R)$. From separate studies of the nonrelativistic Schr\"odinger equation for heavy quarks, together with input from experimental data, it has been determined that $F(R) R^2|_{R_0} = 1.65$ occurs at a distance $R_0 \approx 0.5$ fm. Now, $R_0/a$ may be expressed in terms of the dimensionless coefficients $B, \; a^2 \sigma$ of $aV(R)$. Since $aV(R)$ can be measured on the lattice, one fits the measured potential to obtain an estimate of $R_0 / a$. By plugging in $R_0 = 0.5$ fm, one then has an estimate for $a$ in fm.
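The fitting procedure just described can be sketched in a few lines of Python. The numbers below are fabricated stand-ins for measured $aV(R)$ data (not real measurements), and we use the Sommer condition in the form $R^2 \, \mathrm{d}V/\mathrm{d}R \, |_{R_0} = -B + \sigma R_0^2 = 1.65$:

```python
import numpy as np

# Sommer-scale sketch with synthetic numbers (the parameter values are
# made up, not measurements). We "measure" aV(R) at integer R in lattice
# units, fit A + B/R + sigma*R by linear least squares, then solve
#   R^2 dV/dR |_{R0} = -B + sigma*R0^2 = 1.65
# for R0/a, and convert to a via R0 = 0.5 fm.
R = np.arange(2.0, 9.0)
A_true, B_true, sig_true = 0.60, -0.30, 0.045
V = A_true + B_true / R + sig_true * R     # stand-in for measured aV(R)
V += 1e-4 * np.sin(7.0 * R)                # tiny fake "noise"

# Least-squares fit in the basis (1, 1/R, R).
M = np.column_stack([np.ones_like(R), 1.0 / R, R])
A_fit, B_fit, sig_fit = np.linalg.lstsq(M, V, rcond=None)[0]

R0_over_a = np.sqrt((1.65 + B_fit) / sig_fit)
a_fm = 0.5 / R0_over_a
print(R0_over_a, a_fm)   # roughly sqrt(30) ~ 5.48, and a ~ 0.09 fm
```

With real data one would also propagate the statistical errors of the fit parameters into $R_0/a$, typically by jackknife or bootstrap.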
In practice, one is often simulating with dynamical fermions, the bound states of which are hadrons. Another common way to set the scale is then to input a well-known hadron mass in $\hat M_h = M_h a$, e.g. the mass of the $\rho$ meson or the $\Omega$ baryon \cite{Gattringer:2010zz, Sommer:2014mea}. A more recent procedure for setting the scale which utilizes Gradient Flow will be discussed in the next chapter.
Perturbative calculations in Yang-Mills theory suggest that the theory is asymptotically free \cite{Gross:1973id, Politzer:1973fx}, meaning that the coupling $g_0$ decreases as one probes higher energies. This result is confirmed in lattice perturbation theory, where one expands the Wilson action in powers of $a$, which allows observables to be expressed as series in $g_0$. One defines a renormalized coupling $g_\mrm{r}$,\footnote{There are various ways to define a renormalized coupling in this theory. In perturbation theory, one method is called \textit{momentum space subtraction} (MOM) \cite{Montvay:1994cy}, which uses the gluon propagator and the 1PI $\psi\psi A$ vertex $\Gamma^{(3)}$.} after which the renormalized and bare beta functions may be calculated in a similar (though algebraically more complex) manner to the scalar case. One finds \cite{Montvay:1994cy}
\begin{equation}
\beta_\mathrm{lat}(g_0) = - a \frac{\mathrm{d} g_0}{\mathrm{d} a} = - \beta_0 g_0^3 - \beta_1 g_0^5 + O(g_0^7),
\end{equation}
where the first few coefficients are
\begin{equation}
\beta_0 = \frac{N}{16 \pi^2} \cdot \frac{11}{3}, \quad \beta_1 = \Big(\frac{N}{16 \pi^2}\Big)^2 \cdot \frac{34}{3}.
\end{equation}
We see that $g_* = 0$ is a UVFP, so as the continuum limit is approached at fixed $g_\mrm{r}$, the bare coupling approaches zero. In a sense, this result is a consistency check on the perturbative expansion. Thus, in simulations of pure gauge theory, one achieves the continuum limit by extrapolating $g_0 \to 0$ according to the perturbative results from lattice perturbation theory. We stress that although the UV theory approaches a gaussian fixed point, the IR physics (characterized by $g_\mrm{r}$) remains strongly-interacting.
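The running just described can be made concrete by integrating the two-loop relation $a \, \mathrm{d}g_0/\mathrm{d}a = \beta_0 g_0^3 + \beta_1 g_0^5$ numerically. The following Python sketch (the starting coupling is arbitrary, purely for illustration) halves the lattice spacing repeatedly and confirms that $g_0$ decreases monotonically toward the gaussian UVFP:

```python
import numpy as np

# Illustration (arbitrary starting value, not tied to any simulation):
# integrate a dg0/da = beta0*g0^3 + beta1*g0^5 for SU(3) while repeatedly
# halving the lattice spacing, confirming that g0 runs to zero as a -> 0.
N = 3
beta0 = (N / (16 * np.pi**2)) * 11.0 / 3.0
beta1 = (N / (16 * np.pi**2))**2 * 34.0 / 3.0

def f(g):
    return beta0 * g**3 + beta1 * g**5

def run(g, dlna, steps=100):
    """RK4 integration of dg/d(ln a) over an interval dlna."""
    h = dlna / steps
    for _ in range(steps):
        k1 = f(g)
        k2 = f(g + 0.5 * h * k1)
        k3 = f(g + 0.5 * h * k2)
        k4 = f(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return g

g, couplings = 1.0, []
for _ in range(20):                 # 20 successive halvings of a
    g = run(g, -np.log(2.0))        # ln a decreases by ln 2 each time
    couplings.append(g)
# couplings decreases monotonically toward the gaussian UVFP g* = 0
```

The decrease is only logarithmic in $a$, which is why asymptotic scaling sets in so slowly in practice.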
\subsection{Fermions} The naive discretization of fermion fields, following what was done for scalars, leads to trouble. The euclidean continuum Dirac action in 4 dimensions is
\begin{equation}
S_D = \int \mathrm{d}^4 x \; \overline\psi(x)(\slashed{\partial}+m_0) \psi(x),
\end{equation}
where $\psi(x)$ is a Grassmann-valued Dirac spinor (with 4 components). Its naive discretization with a symmetric difference operator is
\begin{equation}
S_N = a^4 \sum_x \overline\psi(x)\Bigg(\frac{1}{2a} \sum_\mu \gamma_\mu \big[ \psi(x+\mu) - \psi(x-\mu) \big] +m_0 \psi(x) \Bigg).
\end{equation}
In momentum space, one computes
\begin{equation}
S_N = \frac{1}{V} \sum_p \overline\psi(p) \Big( m_0 + \frac{i}{a} \sum_\mu \gamma_\mu \sin p_\mu a \Big) \psi(p).
\end{equation}
In the infinite volume limit, the naive fermion propagator $S(x-y)$ is then
\begin{equation}
S(x-y) = \int_{-\pi/a}^{\pi/a} \mathrm{d}^d p \; \mathrm{e}^{i p (x-y)} \Big[ m_0 + \frac{i}{a} \sum_\mu \gamma_\mu \sin p_\mu a \Big]^{-1},
\end{equation}
which may be written in integral form as
\begin{equation}
S(x-y) = \int_0^\infty \mathrm{d} s \; \mathrm{e}^{-m_0 s} \int_{-\pi/a}^{\pi/a} \mathrm{d}^d p \; \mathrm{e}^{i p (x-y)} \exp \Big[ - \frac{is}{a} \sum_\mu \gamma_\mu \sin p_\mu a \Big].
\end{equation}
The continuum propagator $\int_p \mathrm{e}^{ipz}/(m_0 + i \slashed{p})$ should be obtained in the $a \to 0$ limit. Although an expansion of the sine function \textit{appears} to achieve this, $\sin p_\mu a$ in fact vanishes at all $2^d$ points of the Brillouin zone where each component $p_\mu$ equals $0$ or $\pi/a$. As $a \to 0$, one then finds $2^d$ saddle points of the integrand above, which means that in the continuum limit, $S(x-y)$ is a sum of $2^d$ copies of the desired propagator. This hiccup is called the \textit{doubling problem} for lattice fermions.
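The counting of doublers can be exhibited numerically. The following Python sketch (for $d=2$, illustration only) counts the zeros of the naive dispersion over a Brillouin-zone grid, comparing with the bosonic half-angle combination $\sum_\mu \big(2\sin(p_\mu a/2)/a\big)^2$, which vanishes only at $p = 0$:

```python
import numpy as np

# A small numerical look at fermion doubling in d = 2 (illustration only):
# the naive massless dispersion sum_mu sin(p_mu a)^2 / a^2 vanishes at
# every point of the Brillouin zone where each component is 0 or pi/a --
# 2^d points in all -- whereas the bosonic combination built from
# p_hat_mu = 2 sin(p_mu a / 2) / a vanishes at p = 0 only.
a, n = 1.0, 64
p = -np.pi / a + (2 * np.pi / a) * np.arange(n) / n   # grid contains 0, -pi/a
P1, P2 = np.meshgrid(p, p, indexing="ij")

naive = (np.sin(P1 * a) ** 2 + np.sin(P2 * a) ** 2) / a**2
boson = (2 * np.sin(P1 * a / 2) / a) ** 2 + (2 * np.sin(P2 * a / 2) / a) ** 2

n_naive = int(np.sum(naive < 1e-12))
n_boson = int(np.sum(boson < 1e-12))
print(n_naive, n_boson)   # 4 1  (i.e. 2^d doublers for the naive action)
```

The same counting in $d = 4$ gives the $16$ species referred to below.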
Various approaches to remedy the doubling problem have been put forward over the years. In one approach, called \textit{Wilson fermions}, one adds a laplacian term to the naive fermion action, which shifts the bare mass in such a way as to guarantee that the doubler masses become infinitely heavy as $a \to 0$, thereby dropping out of the theory. But this solution comes at a cost \cite{Nielsen:1980rz}: for a massless theory, the Wilson term breaks chiral symmetry (invariance under $\psi = \mathrm{e}^{i\gamma_5\theta} \psi', \; \overline \psi = \overline \psi' \mathrm{e}^{i\gamma_5\theta}$), making the simulation of massless fermions a difficult task. Solutions to the doubling problem which allow for the retention of chiral symmetry (in some capacity) have therefore been sought over the years. One such approach is that of \textit{staggered fermions}.
Because we will report results of a lattice simulation using staggered fermions in four dimensions in chapter 2, we give a brief introduction to them here. The first step is to change variables from naive fermions in a peculiar way, the \textit{staggered transformation}:
\begin{equation}
\psi(x) = \gamma_1^{x_1} \gamma_2^{x_2} \gamma_3^{x_3} \gamma_4^{x_4} \psi'(x), \quad \overline \psi(x) = \overline{\psi}' (x) \gamma_4^{x_4} \gamma_3^{x_3} \gamma_2^{x_2} \gamma_1^{x_1}.
\end{equation}
By repeatedly using the gamma matrix property $\{ \gamma_\mu, \gamma_\nu \} = 2 \delta_{\mu\nu} \mathbb{I}$, one can demonstrate that
\begin{equation}
\overline{\psi}(x) \gamma_\mu \psi(x\pm\mu) = \eta_\mu(x) \overline{\psi}'(x) \psi'(x\pm \mu), \quad \eta_\mu(x) := (-1)^{\sum_{\nu<\mu} x_\nu}.
\end{equation}
This implies that the staggered transformation decouples the 4 Dirac components in the naive fermion action, leaving an action for 4 copies of the same kind of (1-component) fermion. One then defines the staggered action by retaining only one of the copies. Introducing the gauge field coupling to fermions in the standard way, one has
\begin{equation} \label{staggered_action}
S_\mathrm{st} = a^4 \sum_x \overline\chi(x)\Bigg(\frac{1}{2a} \sum_\mu \eta_\mu(x) \big[ U_\mu(x)\chi(x+\mu) - U_\mu^\dag(x-\mu)\chi(x-\mu) \big] +m_0 \chi(x) \Bigg).
\end{equation}
Because only one of the Dirac components was kept, one expects intuitively that this action reduces the 16-fold degeneracy of the naive action to 4. To check this intuition one must perform a more detailed analysis \cite{Gattringer:2010zz}. What is found is that the staggered action describes 4 species, called ``tastes,'' of Dirac fermions, in terms of which the action resembles the Wilson fermion action, which has no doublers in the continuum limit. Furthermore, from eq. (\ref{staggered_action}) we see that the action possesses a \textit{remnant} chiral symmetry given by invariance under $\chi = \mathrm{e}^{i\eta_5(x)\theta} \chi', \; \overline \chi = \mathrm{e}^{i\eta_5(x)\theta} \overline \chi'$ when $m_0 = 0$, where $\eta_5(x) := (-1)^{x_1+x_2+x_3+x_4}$.
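The gamma-matrix identity underlying the staggered transformation can be verified directly. The following self-contained Python check (an illustration, using the euclidean chiral representation) confirms that $\gamma_4^{x_4}\gamma_3^{x_3}\gamma_2^{x_2}\gamma_1^{x_1}\,\gamma_\mu\,\Gamma(x+\hat\mu) = \eta_\mu(x)\,\mathbb{I}$ at randomly chosen sites:

```python
import numpy as np

# Numerical verification of the staggered phase identity (self-contained
# check, not production code): with Gamma(x) = g1^x1 g2^x2 g3^x3 g4^x4,
#   Gammabar(x) gamma_mu Gamma(x + mu_hat) = eta_mu(x) * identity,
# eta_mu(x) = (-1)^(x_1 + ... + x_{mu-1}), using euclidean gamma matrices
# satisfying {g_mu, g_nu} = 2 delta_{mu nu}.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

g = [blk(Z2, -1j * s, 1j * s, Z2) for s in (s1, s2, s3)]  # g1, g2, g3
g.append(blk(Z2, I2, I2, Z2))                             # g4
I4 = np.eye(4)

# Sanity check of the euclidean Clifford algebra.
for m in range(4):
    for n in range(4):
        assert np.allclose(g[m] @ g[n] + g[n] @ g[m], 2 * (m == n) * I4)

def Gamma(x):
    out = I4
    for mu in range(4):                     # g1^x1 g2^x2 g3^x3 g4^x4
        out = out @ np.linalg.matrix_power(g[mu], int(x[mu]) % 2)
    return out

def Gamma_bar(x):
    out = I4
    for mu in reversed(range(4)):           # g4^x4 g3^x3 g2^x2 g1^x1
        out = out @ np.linalg.matrix_power(g[mu], int(x[mu]) % 2)
    return out

rng = np.random.default_rng(0)
for _ in range(50):
    x = rng.integers(0, 8, size=4)
    for mu in range(4):
        xp = x.copy(); xp[mu] += 1
        eta = (-1) ** int(np.sum(x[:mu]))
        assert np.allclose(Gamma_bar(x) @ g[mu] @ Gamma(xp), eta * I4)
```

The exponents enter only mod 2 since each $\gamma_\mu$ squares to the identity.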
The staggered transformation only reduces the doublers to 4, whereas some simulations require as few as 2 fermion flavors (for up and down quarks), or 3 (to include the strange). To accommodate this situation, a practical but controversial procedure is adopted: take the square root of the staggered determinant, and for the strange quark, the fourth root. The validity of this ``rooting'' procedure has been debated in the literature; see \cite{Gattringer:2010zz} and references therein.
\newpage
\chapter{Gradient flow and RG}
In this chapter we introduce the notion of \textit{gradient flow renormalization group} (GFRG) by comparing a type of diffusion known as \textit{gradient flow} with the block-spin transformations we saw in chapter 1. The analogy leads naturally to correlator scaling laws involving gradient-flowed observables that can be measured on the lattice, and it suggests a method for extracting scaling dimensions of operators from lattice simulations in a manner distinct from that of MCRG. In section 1, we describe gradient flow in the case of Yang-Mills theory and its primary application in lattice theory, namely, scale-setting. In section 2 we apply the GFRG method to scalar field theories in 2 and 3 dimensions, and in section 3 we apply it to a 4-dimensional gauge-fermion theory.
\section{Gradient flow}
The Yang-Mills gradient flow equation, in the context of lattice theory, first appeared in an exploration of the large-$N$ behavior of smeared Wilson loops in a paper by Narayanan and Neuberger in 2006 \cite{Narayanan:2006rf}.\footnote{It appears that the authors were inspired to choose this form by an analogy to the Langevin equation which generates quantum Yang-Mills theory under stochastic quantization \cite{Damgaard:1987rr}, a development of the 1980s. It was not at that point thought of as a smoothing transformation, although the concept of stochastic regularization was a clue. I also remark that the Yang-Mills flow equation appeared (perhaps for the first time) in the work of Atiyah and Bott in 1983 \cite{10.2307/37156}. It has since been used in the study of \textit{Ricci flow}, having appeared, for example, in \cite{Young:2008xk, STREETS2010454}, where it is one of two equations defining so-called Ricci-Yang-Mills flow. This flow refers to a smoothing evolution of the metric and connection on a principal bundle over a Riemannian manifold. \textit{Pure} Ricci flow was proposed in 1982 in the works of R. Hamilton \cite{hamilton1982}. Interestingly, however, the Ricci flow equations arose even earlier in the study of generalized nonlinear sigma models by D. Friedan in 1980 \cite{Friedan:1980jf}, where it was demonstrated that the RG flow of the model is a Ricci flow on the target space of the field theory, to lowest order in perturbation theory.} The lattice version of the equation was proposed independently by L\"uscher in 2009 \cite{Luscher:2009eq} in the context of so-called ``trivializing maps'' on field space.
The idea behind these trivializing maps was to perform a transformation of the field variables $U_\mu(x)$ on the lattice in such a way that the jacobian exactly cancels the gauge field action, effectively mapping the theory to its strong-coupling (or high-temperature) limit; the hope was to improve the efficiency of the Hybrid Monte Carlo algorithm, which diminishes in the continuum limit $g_0 \to 0$ of lattice QCD. For our purposes, however, we will focus on the smoothing property of gradient flow, which will be demonstrated to provide an essential ingredient of a continuous RG transformation on the lattice. It should be noted that L\"uscher did speculate on the possibility of using trivializing maps in the context of RG \cite{Luscher:2009eq}, and this suggestion was followed up analytically in the works of Yamamura and others \cite{Kagimura:2015via,Yamamura:2015kva,Makino:2018rys}.
The continuum formulation of gradient flow for gauge theories runs as follows. Beginning with the initial gauge fields $A_\mu(x)$, one defines their \textit{flow} $B_\mu(x,t)$ to be the solution of the diffusion-type equation
\begin{equation}
\partial_t B_\mu(x,t) = - \frac{\delta \hat S(A)}{\delta A_\mu(x)}\Big|_{A_\mu = B_\mu}, \quad B_\mu(x,0) = A_\mu(x),
\end{equation}
where $\hat S$ can be called the \textit{flow action}. The parameter $t$ is called the \textit{flow time}, with dimensions of distance-squared. Typically, $\hat S$ is chosen to be the Yang-Mills action $S_\mathrm{YM}$, in which case one obtains the Yang-Mills gradient flow,
\begin{equation}
\partial_t B_\mu = D_\nu F_{\nu\mu},
\end{equation}
where $F_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu + [B_\mu, B_\nu]$, and the covariant derivative in the adjoint representation is, for $X \in \mathfrak{g}$,
\begin{equation}
D_\mu X = \partial_\mu X + [B_\mu, X].
\end{equation}
L\"uscher and Weisz \cite{Luscher:2011bx} demonstrated perturbatively that the expectation values of observables at finite flow time required no \textit{further} renormalization above that of pure Yang-Mills theory, suggesting that such quantities will have well-defined continuum limits on the lattice.
A quantity of particular popularity is the Yang-Mills energy density at finite flow time,
\begin{equation}
E(t) := \smallfrac{1}{4} \langle \mathrm{tr} \; F_{\mu\nu}^2(x,t) \rangle.
\end{equation}
In \cite{Luscher:2010iy} it was demonstrated that $E(t)$ is finite if one computes it in bare perturbation theory and replaces the bare coupling $g_0$ by the $\overline{\mrm{MS}}$ coupling at scale $\mu^2 = 1/8t$, as expected from the general renormalizability of flowed observables mentioned above. It was then demonstrated that the lattice implementation of $E(t)$ can be of quite practical use in setting the scale. The ``theory scale'' $t_0$ defined through
\begin{equation}
t_0^2 E(t_0) = 0.3,
\end{equation}
was demonstrated to scale to the continuum in roughly the same way as the Sommer parameter $r_0$. That is, by computing $t_0$ at several bare couplings for which $a$ was known already from Sommer parameter scale-setting (with $r_0=0.5$ fm), it was demonstrated empirically that $t_0 / r_0^2$ is constant as $a \to 0$ under a simple (slightly-improved) discretization of $F_{\mu\nu}$, the so-called ``clover operator.'' In practical terms, one can approach the continuum by following the behavior of observables computed at flow time $t_0$ for each bare coupling, provided the physical box size $aN$ is held constant. See \cite{Sommer:2014mea} for further discussion.
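In practice $t_0$ is obtained by interpolating measured values of $t^2 E(t)$ in flow time. A minimal Python sketch follows; the curve below is fabricated (and exactly linear, which real data is not), purely to show the interpolate-and-solve step:

```python
import numpy as np

# Sketch of gradient-flow scale setting with made-up numbers: given
# t^2 E(t) at a discrete set of flow times, interpolate and solve
# t0^2 E(t0) = 0.3 by bisection.
t = np.linspace(0.5, 8.0, 16)
t2E = 0.1 + 0.05 * t        # fabricated stand-in for measured t^2 <E(t)>

def find_t0(t, t2E, target=0.3):
    """Bisection on the interpolated curve t -> t^2 E(t)."""
    f = lambda x: np.interp(x, t, t2E) - target
    lo, hi = t[0], t[-1]
    assert f(lo) < 0 < f(hi), "target not bracketed by the data"
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t0 = find_t0(t, t2E)
print(round(t0, 6))   # 4.0 for this fabricated curve
```

With real data one would use a smoother interpolation (e.g. a spline) and propagate the statistical errors of $E(t)$ into $t_0$.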
Perturbatively, the $\overline{\mrm{MS}}$-renormalized $t^2 E(t)$ is proportional to the renormalized coupling at tree level, suggesting that one can define an alternative renormalized coupling by
\begin{equation}
g^2_\mathrm{GF}(t) := t^2 E(t).
\end{equation}
Since the jacobian relating the two couplings is nonsingular to known orders in perturbation theory, this scheme change is expected to be valid. The computation of this quantity in finite volume has even led to a natural definition of a renormalized coupling that runs with the lattice size \cite{Fodor:2012td}, and which may be used in step-scaling analyses of the discrete beta function \cite{Fodor:2012td,Cheng:2014jba}. We note that this approach has been fruitful for several theories in the family of SU($N$) gauge theories with $N_f$ fermions. Recently, arguments based on Wilsonian RG have been offered in \cite{Hasenfratz:2019hpg} which suggest that the time derivative of $g^2_\mathrm{GF}(t)$ can be used to estimate the renormalized beta function $\beta(g^2_\mathrm{GF})$ in many gauge theories.
\section{Block-spin analogy}
In this section we will focus on scalar theories, so first we give a brief review of GF for scalar fields \cite{Monahan:2015lha,Monahan:2015fjf,Capponi:2016yjz,Aoki:2016ohw,Aoki:2016env,Aoki:2017bru,Fujikawa:2016qis}. The flowed fields will be denoted by $\phi_t(x)$. The general gradient flow equation for a one-component scalar field is
\begin{equation}
\partial_t \phi_t(x) = - \frac{\delta \hat S(\phi)}{\delta \phi(x)} \Big|_{\phi = \phi_t}, \quad \phi_0(x) = \varphi(x).
\end{equation}
If we choose a standard quartic action, for example,
\begin{equation}
\hat S(\phi) = \int \mathrm{d}^d x \Big[ \frac{1}{2} \big(\partial \phi(x)\big)^2 + \frac{m^2}{2} \phi^2(x) + \frac{\lambda}{4!} \phi^4(x) \Big],
\end{equation}
then the flow equation reads
\begin{equation} \label{int_flow}
\partial_t \phi_t(x) = \partial^2 \phi_t(x) - m^2 \phi_t(x) - \frac{\lambda}{3!} \phi_t^3(x).
\end{equation}
The utility of interacting flow for scalars was called into question by Suzuki and Fujikawa \cite{Fujikawa:2016qis}, who determined that finite-flow-time observables are not entirely renormalized by a renormalization of the parameters in the bare action. This is intuitively clear from the presence of the cubic product $\phi^3(x)$ on the r.h.s. above, together with the lack of gauge symmetry, which was crucial for the renormalizability proof of GF in Yang-Mills theory \cite{Luscher:2011bx}. The perturbative solution to the scalar GF equation involves local products of fields, which, when self-contracted, lead to divergent tadpoles in flowed observables that are not eliminated by the standard renormalization procedure of $\phi^4$ theory. Fujikawa demonstrated, however, that suitably modified definitions of the interacting flow, involving derivative interactions in place of a point vertex, can lead to a finite theory. We will revisit the notion of interacting flow in chapter 4 in the discussion of nonlinear RG's. In the remainder of this section, however, we will stick to noninteracting flows.
\subsection{GFRG transformation} If we specialize to the simplest kind of gradient flow, namely, massless free flow ($m^2=0,\lambda = 0$ in eq. (\ref{int_flow})),
\begin{equation}
\partial_t \phi_t(x) = \partial_x^2 \phi_t(x),
\end{equation}
then we have a simple heat equation. The solution is given by the action of the heat kernel on the initial field,
\begin{equation}
\phi_t(x) = (K_t \varphi)(x) = \int \mathrm{d}^d y K_t(x,y) \varphi(y) = \int \mathrm{d}^d z K_t(z) \varphi(x+z).
\end{equation}
In the last equality we have brought the solution to a suggestive form, using $K_t(x,y) = K_t(x-y)$. The (infinite-volume) heat kernel in $d$ dimensions is
\begin{equation}
K_t(z) = \int_p \mathrm{e}^{ipz - p^2 t} = \frac{\mathrm{e}^{-z^2/4t}}{(4 \pi t)^{d/2}},
\end{equation}
which rapidly decays when $z \gg \sqrt{4t}$. The solution is therefore reminiscent of the blocking transformation
\begin{equation}
\varphi_b(x_b) = (B_b \varphi)(x_b) = \frac{b^{\Delta_\phi}}{b^d} \sum_{\varepsilon} \varphi(x + \varepsilon),
\end{equation}
when we identify $b \propto \sqrt{t}$, except that the averaging by the heat kernel depends continuously on its ``blocking parameter'' $t$. Importantly, we also do not have an analog of the rescaling factor $b^{\Delta_\phi}$. In this sense, free GF cannot of itself constitute an RG transformation.
We wish to define a smooth RG transformation based on the resemblance just noted. Since the field rescaling was an essential ingredient in block-spin RG which allowed the transformation to exhibit a fixed point, we propose that the analog of the blocked field $\varphi_b$ should be defined by
\begin{equation} \label{GFRG}
\mn\Phi_t(x_t) := b_t^{\Delta_\phi} \phi_t(x), \quad \mathrm{with} \quad x_t = x / b_t,
\end{equation}
where the exact form of $b_t$ is not yet determined, except that it must approach 1 as $t\to 0$ and it must be proportional to $\sqrt{t}$ for large enough $t$. This is because the mean-squared radius of the heat kernel is determined by
\begin{equation}
\langle z^2 \rangle = \int \mathrm{d}^d z \; z^2 K_t(z) = 2dt,
\end{equation}
which should correspond to the block-spin radius-squared (times $d$). In chapter 4 we will determine that, under Schwinger regularization (see eq. (\ref{effective_cutoff})) in the continuum, the rescaling factor is exactly
\begin{equation} \label{bt_form}
b_t = \sqrt{1 + 2 t \mn\Lambda_0^2},
\end{equation}
where $\mn\Lambda_0$ is the bare cutoff. The function $b_t$ is expected to be regularization-dependent, however.
In the numerical implementation of GF, one must use the lattice heat equation. The continuum laplacian is replaced by its discretization, which we saw in chapter 1, so the GF equation is given by
\begin{equation}
\partial_t \phi_t(x) = \sum_{\mu} \hat \partial^*_\mu \hat \partial_\mu \phi_t(x),
\end{equation}
where $\hat \partial, \; \hat \partial^*$ are the forward and backward difference operators, respectively. In this case, the solution is
\begin{equation}
\phi_t(x) = \frac{1}{V} \sum_p \mathrm{e}^{ i p x - \hat p^2 t} \varphi(p),
\end{equation}
where $\hat p_\mu = 2 \sin(p_\mu a /2) / a$. The lattice momenta are restricted to $p_\mu \in (-\pi/a, \pi/a]$, over which $\hat p_\mu^2$ increases monotonically with $|p_\mu|$, so that high-momentum modes are suppressed in a qualitatively similar way to the continuum gradient flow solutions. Thus we expect the lattice free flow equation to be equally capable of defining a continuous blocking transformation, which approaches the continuum formulation as $a \to 0$.
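The lattice free flow is simple enough to implement directly. The following Python sketch (for $d=1$, illustration only) compares the exact spectral solution against small-step Euler integration of the lattice heat equation:

```python
import numpy as np

# Minimal sketch of lattice free gradient flow in d = 1: the spectral
# solution phi_t(p) = exp(-p_hat^2 t) phi(p), p_hat = 2 sin(p a / 2) / a,
# is checked against Euler integration of the discrete heat equation with
# the nearest-neighbor laplacian (= backward diff of forward diff).
L, a, t = 32, 1.0, 2.0
rng = np.random.default_rng(3)
phi0 = rng.normal(size=L)

# Exact spectral solution via FFT.
p = 2 * np.pi * np.fft.fftfreq(L, d=a)
p_hat2 = (2 * np.sin(p * a / 2) / a) ** 2
phi_exact = np.fft.ifft(np.exp(-p_hat2 * t) * np.fft.fft(phi0)).real

# Euler integration of d/dt phi = laplacian(phi).
def lap(f):
    return (np.roll(f, -1) + np.roll(f, 1) - 2.0 * f) / a**2

phi, steps = phi0.copy(), 20000
for _ in range(steps):
    phi += (t / steps) * lap(phi)

assert np.allclose(phi, phi_exact, atol=1e-3)
assert np.std(phi_exact) < np.std(phi0)   # high modes are damped
```

The agreement reflects the fact that the nearest-neighbor laplacian has eigenvalues $-\hat p^2$ on plane waves, so the spectral solution is exact on the lattice.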
We also see in eq. (\ref{GFRG}) the introduction of a rescaled position $x_t = x / b_t$: the blocked field is defined on a rescaled space. In MCRG this leads to the necessity of considering lattices of different sizes when applying the method. In our case, the rescaled field must be said to live on a fictitious blocked lattice with non-integer spacing. We will avoid this subtlety in our analysis by always relating blocked observables to the bare observables and performing computations in the bare theory, as described below.
\subsection{Correlator ratios} The GFRG transformation proposed above leads to scaling relations among correlators which may be implemented in lattice simulations. Recall the correlator scaling formula for block-spin RG which relates the blocked and bare quantities,
\begin{equation} \label{corr_scaling2}
\langle \mathcal{R}_a(\varphi_b; \hat z_b) \mathcal{R}_a(\varphi_b; 0) \rangle_{S_b} \sim b^{2\Delta_a} \langle \mathcal{R}_a(\varphi; \hat z) \mathcal{R}_a(\varphi; 0) \rangle_{S_0},
\end{equation}
for $\hat z \gg b$. The scaling operators $\mathcal{R}_a$ are generally polynomial in the field $\varphi$. Assuming that GFRG defines a valid RG transformation, the corresponding scaling formula reads
\begin{equation} \label{bare_ratio}
\langle \mathcal{R}_a(\mn\Phi_{t}; \hat z_{t}) \mathcal{R}_a(\mn\Phi_{t}; 0) \rangle_{S_{t}} \sim b_{t}^{2\Delta_a} \langle \mathcal{R}_a(\varphi; \hat z) \mathcal{R}_a(\varphi; 0) \rangle_{S_0};
\end{equation}
$S_t$ is the effective action generated by the GFRG transformation. The proper definition of this action will be described in chapter 4, but here we avoid it by use of the MCRG principle, which as described in chapter 1, allows one to compute observables in the blocked theory by computing blocked observables in the bare theory.
Now we consider eq. (\ref{bare_ratio}) at two times $t', \; t$ with $t' > t$, and take their ratio:
\begin{equation}
\frac{\langle \mathcal{R}_a(\mn\Phi_{t'}; \hat z_{t'}) \mathcal{R}_a(\mn\Phi_{t'}; 0) \rangle_{S_{t'}}}{\langle \mathcal{R}_a(\mn\Phi_{t}; \hat z_{t}) \mathcal{R}_a(\mn\Phi_{t}; 0) \rangle_{S_{t}}} \sim \Big(\frac{b_{t'}}{b_t}\Big)^{2\Delta_a}.
\end{equation}
The quantities on the l.h.s. are defined on the lattice with points $\hat z_t = \hat z / b_t$. Using MCRG to write the expectations in the bare theory then yields a \textit{ratio formula},
\begin{equation} \label{ratioR}
\frac{\langle \mathcal{R}_a(b_{t'}^{\Delta_\phi} \phi_{t'}; \hat z) \mathcal{R}_a(b_{t'}^{\Delta_\phi} \phi_{t'}; 0) \rangle_{S_0}}{\langle \mathcal{R}_a(b_t^{\Delta_\phi} \phi_t; \hat z) \mathcal{R}_a(b_t^{\Delta_\phi} \phi_t; 0) \rangle_{S_0}} \sim \Big(\frac{b_{t'}}{b_t}\Big)^{2\Delta_a},
\end{equation}
where now the position arguments refer to sites on the original lattice. Now, close to a fixed point, the correlator of any two operators $\mathcal{O}_h, \; \mathcal{O}_k$ may be expanded in correlators of scaling operators \cite{Amit:1984ms, Cardy:1996xt},
\begin{equation} \label{corr_expansion}
\langle \mathcal{O}_h \mathcal{O}_k \rangle = \sum_a c_{ha} c_{ka} \langle \mathcal{R}_a \mathcal{R}_a \rangle.
\end{equation}
If it happens that one of the scaling operators, say $\mathcal{R}_a$, dominates the sum, then one might expect that the ratio of correlators of $\mathcal{O}_h, \; \mathcal{O}_k$ can be used to measure $\Delta_a$. (At large enough distances, the leading operator always dominates the sum.) Letting $\mathcal{O}_h$ be of order $n_h$ in $\varphi$ and $\ell_h$ in derivatives, we can factor out the rescalings $b_t^{\Delta_\phi}$ from each operator to obtain
\begin{equation} \label{ratioO}
\frac{\langle \mathcal{O}_h( \phi_{t'}; \hat z) \mathcal{O}_k(\phi_{t'}; 0) \rangle_{S_0}}{\langle \mathcal{O}_h(\phi_t; \hat z) \mathcal{O}_k(\phi_t; 0) \rangle_{S_0}} \sim \Big(\frac{b_{t'}}{b_t}\Big)^{2\Delta_a - n_{hk} \Delta_\phi - \ell_{hk}},
\end{equation}
where $n_{hk} = n_h + n_k, \; \ell_{hk} = \ell_h + \ell_k$. The factors of $b_t^\ell$ arise because derivatives in the rescaled theory are related to those in the bare theory via $\hat \partial_{\hat z_t} = b_t \hat \partial_{\hat z}$. But when do we expect these ratio formulas to be valid? First, we need $\hat z \gg b_t$, so that the smeared operators do not overlap. Second, we need that the $\mathcal{O}_h\mathcal{O}_k$ correlator really \textit{is} dominated by $\mathcal{R}_a$. Third, the scaling operators are only defined with respect to a fixed point, so we expect the formula above to be valid only in the vicinity of a fixed point, which means the RG transformation must be repeated enough times that proximity has been achieved; we interpret this as meaning that the flow time $t$ is large enough that the effective action $S_t$ is near the fixed point. Lastly, we remark that eq. (\ref{ratioR}) will be deduced without recourse to a block-spin analogy in chapter 4 in the framework of stochastic RG.
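To make the use of eq. (\ref{ratioO}) concrete, the following sketch applies the ratio formula to a synthetic correlator that obeys the asymptotic scaling form exactly; all numbers (scale factors, amplitude, decay power) are hypothetical, chosen only to show how the exponent is recovered from a log-ratio of flowed correlators:

```python
import math

# Synthetic illustration of the ratio formula: if flowed correlators scale as
# C_t(z) ~ b_t**delta * C(z) at large z, the exponent is recovered from the
# log-ratio at two flow times. Example exponent: delta_22 = 2*Delta_2 - 4*Delta_phi
# using the 3d values Delta_2 = 1.41169, Delta_phi = 0.51790.
delta = 2 * 1.41169 - 4 * 0.51790   # ~0.7518

def flowed_corr(z, b_t, delta, amp=1.0):
    """Model correlator obeying the asymptotic ratio formula (power-law decay)."""
    return amp * b_t ** delta / z ** 2.82

b_t, b_tp = 1.10, 1.15   # scale factors at flow times t < t'
z = 20.0
R = flowed_corr(z, b_tp, delta) / flowed_corr(z, b_t, delta)
delta_est = math.log(R) / math.log(b_tp / b_t)
print(delta_est)   # recovers delta, since amplitude and z-dependence cancel
```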
Notice that using $\mathcal{O}_h = \mathcal{O}_k = \phi$ in eq. (\ref{ratioO}) gives no information about $\Delta_\phi = d_\phi + \gamma_\phi$, since $\phi$ is the leading operator in the $\mathbb{Z}_2$-odd subspace. Generally, the ratios of the fundamental field cannot be used to extract an estimate for $\Delta_\phi$, and other methods are needed; there are at least two options one may take.
\begin{itemize}
\item Option 1: If there exists an operator $\mathcal{A}$ which is known \textit{a priori} to have zero anomalous dimension, with scaling dimension $\Delta_\mathcal{A} = d_\mathcal{A}$, then its ratio formula implies
\begin{equation} \label{Option1}
\frac{\langle \mathcal{A}( \phi_{t'}; \hat z) \mathcal{A}(\phi_{t'}; 0) \rangle_{S_0}}{\langle \mathcal{A}(\phi_t; \hat z) \mathcal{A}(\phi_t; 0) \rangle_{S_0}} \sim \Big(\frac{b_{t'}}{b_t}\Big)^{2(d_\mathcal{A} - n_\mathcal{A} \Delta_\phi - \ell_\mathcal{A})}.
\end{equation}
An example is the stress-energy tensor of a theory, or a conserved current such as the vector or remnant axial vector currents in gauge-fermion theories.
\item Option 2: In any theory, the operators fall into symmetry subspaces, e.g. $\mathbb{Z}_2$ in $\phi^4$ theory. In the domain of applicability of eq. (\ref{ratioO}), then, the mixed correlation functions $\langle \mathcal{O}_i \mathcal{O}_j \rangle$ in that subspace of operators all have a leading scaling behavior of $b_t^{2\Delta_a}$, and one can measure a family of exponents
\begin{equation} \label{Option2}
\delta_{ij} = 2\Delta_a - (n_i+n_j) \Delta_\phi - (\ell_i + \ell_j),
\end{equation}
where $n_i + n_j$ is the total number of factors of $\phi$ on the l.h.s. and $\ell_i+\ell_j$ the total number of derivatives. From any pair of distinct exponents $\delta_{ij} \neq \delta_{hk}$ one may then solve for estimates of $\Delta_a,\; \Delta_\phi$, so long as neither correlator is $\langle \phi \phi \rangle$ itself, of course. One must have empirical or theoretical evidence that one operator \textit{does} dominate to make use of this method.
\end{itemize}
\subsection{Diagonalization method} In general, there will not be a dominant operator, and asymptotically large distances may not be accessible. And even if there is, one might instead want to extract the dimension of a subleading operator. One must then use eq. (\ref{ratioR}) directly, which requires a more involved approach. Expanding each $\mathcal{R}_a$ in a basis of $\mathcal{O}_k$'s, we obtain for the correlator of $\mathcal{R}_a$'s
\begin{equation}
\langle \mathcal{R}_a(b_{t}^{\Delta_\phi} \phi_{t}; \hat z) \mathcal{R}_a(b_{t}^{\Delta_\phi} \phi_{t}; 0) \rangle_{S_0} = \sum_{j,k} c_{aj} c_{ak} b_t^{(n_j+n_k) \Delta_\phi + \ell_j + \ell_k} \langle \mathcal{O}_j(\phi_t; \hat z) \mathcal{O}_k(\phi_t; 0) \rangle_{S_0}.
\end{equation}
We could therefore use eq. (\ref{ratioR}) if we knew the coefficients $c_{aj}$, by forming the appropriate linear combinations of the correlators on the r.h.s., which are directly measured in lattice simulations. The $c_{aj}$ may be estimated numerically by recalling the consequence of conformal invariance for mixed scaling operator correlations \cite{Amit:1984ms, Cardy:1996xt},
\begin{equation}
\langle \mathcal{R}_a(\hat z) \mathcal{R}_b(0) \rangle = \delta_{ab} \frac{A_b}{\hat z^{2\Delta_b}},
\end{equation}
valid exactly at the fixed point. This suggests choosing a basis of operators $\{\mathcal{O}_k\}$, computing the mixed correlations of the $\mathcal{O}_k$, and then forming the quantities
\begin{equation}
b_t^{(n_j+n_k) \Delta_\phi} \langle \mathcal{O}_j(\phi_t; \hat z) \mathcal{O}_k(\phi_t; 0) \rangle_{S_0}
\end{equation}
from an ansatz for $b_t$. As the full scaling dimension $\Delta_\phi$ is required for the GFRG transformation, one must here input a value for $\Delta_\phi$, taken from some prior determination. One then numerically diagonalizes the matrix of correlators at every distance $\hat z$ and time $t$ to obtain estimates for
\begin{equation}
\langle \mathcal{R}_a(b_{t}^{\Delta_\phi} \phi_{t}; \hat z) \mathcal{R}_a(b_{t}^{\Delta_\phi} \phi_{t}; 0) \rangle_{S_0}.
\end{equation}
Finally, one applies the ratio formula eq. (\ref{ratioR}) and measures $2 \Delta_a$ directly. The estimate for $\Delta_\phi$ obtained from this approach is merely a consistency check, while all other dimensions constitute genuine predictions.
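A minimal numerical sketch of the diagonalization step, using a hypothetical $2\times 2$ correlator matrix in the even subspace $\{\phi^2, \phi^4\}$ (the rescaling and symmetric-eigenvalue call mirror the procedure described above; derivative factors $b_t^{\ell_j + \ell_k}$ are omitted since this basis contains no derivatives):

```python
import numpy as np

def scaling_correlators(corr, n, b_t, delta_phi):
    """Given corr[j,k] = <O_j(z) O_k(0)> at one (z, t), rescale each entry by
    b_t^{(n_j + n_k) * Delta_phi} and diagonalize; the eigenvalues estimate the
    diagonal scaling-operator correlators <R_a R_a>."""
    n = np.asarray(n, dtype=float)
    rescale = b_t ** (delta_phi * (n[:, None] + n[None, :]))
    M = rescale * np.asarray(corr, dtype=float)
    evals, evecs = np.linalg.eigh(M)        # M is symmetric by construction
    return evals[::-1], evecs[:, ::-1]       # sort descending

# Hypothetical even-subspace example with basis {phi^2, phi^4}:
corr = [[2.0, 0.8], [0.8, 0.5]]
evals, _ = scaling_correlators(corr, n=[2, 4], b_t=1.2, delta_phi=0.5179)
print(evals)   # leading eigenvalue estimates <R_2 R_2> in this toy example
```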
\section{Scalar field theory}
We have applied the ratio formulas numerically in $\phi^4_d$ theory for $d=2, \; 3$ using the simulation action eq. (\ref{phi4_lat_action}), which reads
\begin{equation}
S(\varphi) = \sum_{x} \Bigg( - \beta \sum_{\mu} \varphi(x) \varphi(x+\mu) + \varphi^2(x) + \lambda ( \varphi^2(x) - 1)^2 \Bigg),
\end{equation}
where we drop hats for the lattice field out of convenience. We take the volume of the lattice $V = L^d$ to be cubic. To minimize the distance of the RG flow from the IRFP, the bare action must be tuned sufficiently well, meaning that given a value for $\lambda$, the neighbor coupling $\beta$ must be set as close to the critical value $\beta_c$ as possible, where $(\beta_c, \lambda)$ is a point on the system's critical ($a=0$) surface. A popular method for determining $\beta_c$ in spin systems is via the Binder cumulant.
\subsection{Tuning to the critical surface} The order parameter of $\mathbb{Z}_2$ symmetry breaking in any spin model is the magnetization $m$, which is defined on a particular configuration $\varphi$ by
\begin{equation}
m = \frac{1}{V} \sum_x \varphi(x),
\end{equation}
and is equal to the zero mode of the field. In the ordered phase where spins are aligned, $\langle m \rangle \neq0$, while in the disordered phase one has $\langle m \rangle =0$. One can study the probability distribution of the magnetization through an analysis of the finite volume zero mode effective action obtained by integrating out all modes in the box with $p \neq 0$ \cite{ZinnJustin:2002ru}. The moments of the magnetization distribution exhibit universal properties in the infinite volume limit. One such observable of particular practicality is the \textit{Binder cumulant}, which is defined by
\begin{equation}
U_L(\beta) = \frac{3}{2} - \frac{1}{2} \frac{\langle m^4 \rangle}{\langle m^2 \rangle^2}.
\end{equation}
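A short sketch of this estimator applied to synthetic magnetization samples (not simulation data) exhibits the two limiting behaviors: Gaussian $m$ gives $U_L \to 0$, while a distribution concentrated at $\pm m_0$ gives $U_L \to 1$:

```python
import random

def binder_cumulant(m_samples):
    """U_L = 3/2 - <m^4> / (2 <m^2>^2) from magnetization measurements."""
    n = len(m_samples)
    m2 = sum(m * m for m in m_samples) / n
    m4 = sum(m ** 4 for m in m_samples) / n
    return 1.5 - 0.5 * m4 / (m2 * m2)

random.seed(1)
# Disordered phase: m approximately Gaussian, <m^4> = 3 <m^2>^2, so U -> 0.
disordered = [random.gauss(0.0, 0.1) for _ in range(200_000)]
# Ordered phase: m concentrated at +/- m0, <m^4> = <m^2>^2, so U -> 1.
ordered = [random.choice((-1.0, 1.0)) for _ in range(200_000)]
print(binder_cumulant(disordered))   # close to 0
print(binder_cumulant(ordered))      # exactly 1 for a delta-peaked |m|
```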
In the low temperature limit, $U_L \to 1$, while in the high temperature limit, $U_L \to 0$. It was argued by Binder long ago \cite{Binder:1981sa} that, at the critical value $\beta_c$, the cumulant has a universal value $U_* \neq 0$ as $L \to \infty$, universal in the sense that it is independent of $\lambda$ and shared by all systems in the Ising universality class. The approach of $U_L(\beta_c)$ to $U_*$ is determined by \textit{corrections to scaling} of the form \cite{Amit:1984ms, Cardy:1996xt, ZinnJustin:2002ru}
\begin{equation} \label{corrections_to_scaling}
U_L(\beta_c) = U_* + c_1(\lambda) L^{-\omega} + O( L^{-2\omega}, L^{-\omega'}),
\end{equation}
where $\omega = -y_4 > 0$ is the exponent corresponding to the leading irrelevant RG eigenvalue of the system, and $\omega'$ stands for the next-to-leading irrelevant exponent. The universal values $U_*$ depend only on the dimension of the system for spin systems with only 1 internal degree of freedom. They are known to very high precision \cite{Salas:1999qh, Kaupuzs:2016hpl, Hasenbusch:1999mw}: in 2d, $U_* = 0.9160386(24)$, while in 3d, $U_* = 0.69832(13)$.
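Such an extrapolation can be sketched on synthetic, noiseless data (generated from 3d-like parameter values purely for illustration). Rather than assume a particular fitting library, the sketch scans $\omega$ on a grid and solves the remaining least-squares problem, which is linear in $(U_*, c_1)$:

```python
import numpy as np

def fit_correction_to_scaling(L, U, omegas):
    """Fit U_L = U* + c1 * L^-omega by scanning omega candidates and solving
    the linear least-squares problem in (U*, c1) at each one."""
    best = None
    for om in omegas:
        A = np.column_stack([np.ones_like(L), L ** (-om)])
        coef, _, _, _ = np.linalg.lstsq(A, U, rcond=None)
        sse = float(np.sum((A @ coef - U) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], om)
    return best[1:]   # (U_star, c1, omega)

# Synthetic noiseless data from 3d-like values; illustrates the method only.
L = np.array([24, 36, 48, 56, 64, 72], dtype=float)
U = 0.69832 + 0.036 * L ** (-0.845)
U_star, c1, omega = fit_correction_to_scaling(L, U, np.linspace(0.5, 1.5, 201))
print(U_star, omega)   # recovers U* ~ 0.69832, omega ~ 0.845
```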
In figure \ref{fig:binderplots} we plot the behavior of the cumulant $U_L$ in 2d as a function of $\beta$ on several lattices. We see that there exists a region where the different volumes nearly intersect each other. The cumulant is analytic in $\beta - \beta_c$, and therefore this behavior is expected to occur in the vicinity of $\beta_c$ according to eq. (\ref{corrections_to_scaling}), up to $O(L^{-\omega})$ deviations. Furthermore, exactly at the critical point, an infinite volume extrapolation of the cumulant should yield the universal value $U_*$. Very precise estimates exist in the literature for the critical couplings $\beta_c$ at various interaction parameter values $\lambda$. Using these values, we have checked that our system is well-tuned by performing infinite volume extrapolations as suggested by eq. (\ref{corrections_to_scaling}) at leading order. This is depicted for our simulation in $d=2$ in figure \ref{fig:binderplots}. In both dimensions we obtain good fits consistent with the universal value at $L=\infty$ within $1\sigma$; the fit results are exhibited in table \ref{binder_fitresults}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c c c c c c c||}
\hline
$d$ & $\lambda$ & $\beta_c$ & $U_\infty$ & $\omega$ & $a$ & $\chi^2/\mathrm{dof}$\\
\hline\hline
2 & 1.00 & 0.6806048 & 0.91615(89) & 0.989(38) & -0.890(86) & 0.75 \\
\hline
3 & 1.100 & 0.3750966 & 0.6971(20) & 0.845(10) & 0.036(44) & 0.43 \\
\hline
\end{tabular}
\caption{Estimates of the universal Binder cumulant and related fit parameters from our simulation in dimensions $2$ and $3$, performed at the critical $\beta$ values from refs. \cite{Kaupuzs:2016hpl}, \cite{Hasenbusch:1999mw}, respectively.}
\label{binder_fitresults}
\end{center}
\end{table}
\begin{figure}[h!]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.05\textwidth]{phi42_binder_wide.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi42_binder_extrap1.png}
\end{minipage}
\caption{\small{(Left) Binder cumulant on multiple volumes over a wide range of $\beta$ in 2d. The curves for different $L$ nearly intersect close to the critical value $\beta_c$. (Right) Extrapolation of the Binder cumulant computed at $\beta_c$ to infinite volume in 2d according to eq. (\ref{corrections_to_scaling}).\label{fig:binderplots}}}
\end{figure}
\subsection{Simulation details} We simulated $\phi^4_d$ theory using Markov Chain Monte Carlo (MCMC) methods. In what follows we report the details for 3 dimensions, for simplicity. The MC chain of field configurations was generated using a mixed update algorithm consisting of Metropolis updates for the size of $\phi$ and Wolff cluster updates for the sign of $\phi$ \cite{PhysRevLett.62.361}. One Metropolis update involved picking $V = L^d$ random sites in sequence and for each pick updating the spin length according to
\begin{equation}
\phi' = \phi + r (u - 0.5),
\end{equation}
where $u$ is a random number uniformly distributed in the interval $[0,1]$; the proposal is accepted with probability $\min(1, \mathrm{e}^{-\Delta S})$, where $\Delta S = S' - S$ is the change in the action due to the proposed spin update. The number $r$ is the maximum radial update length, which was chosen to be $r = 2.00$ in both dimensions. One cluster update consisted of the attempted construction of a cluster, which picks a site at random and adds aligned neighboring spins to the cluster with probability $1 - \mathrm{e}^{-2\beta \phi_i \phi_j}$, the alignment being determined by the signs $\sigma_i = \mathrm{sgn}(\phi_i)$. Once built, the signs of all spins in the cluster are flipped. The Mersenne Twister PRNG was used to generate all random variables \cite{10.1145/272991.272995}.
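The radial update just described can be sketched as follows, written for a periodic 1d chain to keep the code short (our simulations use $d = 2, 3$); the local change $\Delta S$ follows from the simulation action given above:

```python
import math
import random

# Minimal sketch of one radial Metropolis update for the action
# S = sum_x ( -beta sum_mu phi(x) phi(x+mu) + phi(x)^2 + lambda (phi(x)^2 - 1)^2 ),
# restricted to a periodic 1d chain for brevity.
def metropolis_site(phi, x, beta, lam, r=2.0):
    """Propose phi' = phi + r (u - 1/2) at site x; accept with min(1, e^{-dS})."""
    L = len(phi)
    old = phi[x]
    new = old + r * (random.random() - 0.5)
    nbr = phi[(x - 1) % L] + phi[(x + 1) % L]          # both neighbors of x
    dS = (-beta * (new - old) * nbr
          + (new ** 2 - old ** 2)
          + lam * ((new ** 2 - 1) ** 2 - (old ** 2 - 1) ** 2))
    if dS <= 0 or random.random() < math.exp(-dS):
        phi[x] = new
        return True
    return False

random.seed(2)
phi = [random.uniform(-1.0, 1.0) for _ in range(16)]
n_acc = sum(metropolis_site(phi, random.randrange(16), beta=0.68, lam=1.0)
            for _ in range(5000))
acc = n_acc / 5000
print(acc)   # acceptance rate, well between 0 and 1 for these parameters
```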
The ratio of radial updates to cluster updates was chosen to be that which led to the best extrapolation of the Binder cumulant to infinite volume for given sample size at criticality. In 3d, we chose 10 cluster updates per radial update, yielding $\tau_\mathrm{int} \approx 10$, with variations of order 1 between different observables. Measurements in the full simulation were then carried out every 5 MC sweeps, where one sweep was defined to be 50 cluster updates and 5 radial updates.
The autocorrelation was estimated in two ways: (1) errors $\epsilon_b$ were computed on binned data for various bin sizes, and the integrated time is estimated as $\tau_\mathrm{int} \approx \epsilon_{*}^2 / 2 \epsilon_0^2$, where $\epsilon_*$ was the error on the binning plateau \cite{Amit:1984ms}, and (2) a direct computation of the integrated autocorrelation time on sequential subsets of the MC chain, repeated for every subset and averaged together. We checked that $\tau_\mathrm{int}$ for $\phi^2$ and $\phi(0) \phi(z)$ at maximum distance $L/2$ were comparable in both cases.
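Method (1) can be sketched on a synthetic AR(1) chain with known autocorrelation time (here $\tau_\mathrm{int} = 1/2 + \rho/(1-\rho) = 4.5$ for $\rho = 0.8$); this is illustrative code, not our analysis scripts:

```python
import math
import random

def binned_error(data, b):
    """Naive standard error of the mean after blocking data into bins of size b."""
    nb = len(data) // b
    bins = [sum(data[i * b:(i + 1) * b]) / b for i in range(nb)]
    mean = sum(bins) / nb
    var = sum((x - mean) ** 2 for x in bins) / (nb - 1)
    return math.sqrt(var / nb)

# Synthetic autocorrelated chain: AR(1) with rho = 0.8 has tau_int = 4.5.
random.seed(3)
rho, x, data = 0.8, 0.0, []
for _ in range(400_000):
    x = rho * x + random.gauss(0.0, 1.0)
    data.append(x)

eps0 = binned_error(data, 1)           # unbinned (naive) error
eps_star = binned_error(data, 200)     # bin size on the plateau, b >> tau_int
tau_int = eps_star ** 2 / (2 * eps0 ** 2)
print(tau_int)   # roughly 4.5
```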
To increase the statistics, the MC chain was split at 10k sweeps (the thermalization cut) into 10 branches. After a few sweeps, the data from separate chains was checked to be essentially uncorrelated. On each branch, almost 1M sweeps were carried out (for every volume except the two largest, 64 and 72, which had 150k sweeps per chain), yielding a total of $\approx$ 10M MC sweeps, and thus 2M measurements. To saturate the errors, the data was then binned with bin size $b=10$, yielding about 20k independent statistical samples per branch. We simulated on volumes $L=24,36,48,56,64,72$.
Lastly, the numerical integration of the gradient flow was performed using a fourth order Runge-Kutta integration scheme.
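For the free flow, the RK4 integration can be checked against the exact momentum-space solution $\mathrm{e}^{-t \hat p^2}$; the following sketch does so for a single Fourier mode on a periodic 1d lattice (illustration only):

```python
import numpy as np

def laplacian(phi):
    """Nearest-neighbor lattice Laplacian on a periodic 1d chain (a = 1)."""
    return np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi

def rk4_flow(phi, t, n_steps):
    """Integrate the free flow d(phi)/dt = Laplacian(phi) with classical RK4."""
    h = t / n_steps
    for _ in range(n_steps):
        k1 = laplacian(phi)
        k2 = laplacian(phi + 0.5 * h * k1)
        k3 = laplacian(phi + 0.5 * h * k2)
        k4 = laplacian(phi + h * k3)
        phi = phi + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return phi

# Check against the exact momentum-space solution exp(-t * phat^2),
# with phat = 2 sin(p/2) in lattice units, for a single cosine mode.
L, t = 32, 1.5
p = 2 * np.pi * 3 / L
phi0 = np.cos(p * np.arange(L))
exact = np.exp(-t * (2 * np.sin(p / 2)) ** 2) * phi0
flowed = rk4_flow(phi0, t, n_steps=100)
print(np.max(np.abs(flowed - exact)))   # small RK4 truncation error
```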
\subsection{Ratios and exponents}
The ratio formulas proposed above have been tested by measuring the mixed correlation functions in the odd-operator subspace with basis $\{\phi, \phi^3\}$ and even subspace with basis $\{\phi^2, \phi^4\}$, each one containing the leading two operators according to canonical dimension. Corresponding to each of these operators is a scaling dimension $\Delta_i, \; i = 1,2,3,4$; strictly speaking, these are the dimensions of the \textit{scaling} operators whose expansions are dominated by the corresponding monomial operators $\phi^i$. The most precise estimates we know of from lattice simulations, except $\Delta_3$, are given in table \ref{preciseDeltas} \cite{Hasenbusch:1999mw}. The $\Delta_3$ dimension is predicted to be $\Delta_3 = 2 + \Delta_1$ \cite{Rychkov:2015naa}. We are unaware of any direct numerical determinations of $\Delta_3$ apart from the conformal bootstrap \cite{Poland:2018epd}, so we use the prediction just mentioned. Preliminary results of this section were reported in \cite{Carosso:2018rep,Carosso:2019tan}. In what follows, we describe the analysis in the context of $\phi^4_3$. At the end, we briefly report preliminary results for $\phi^4_2$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c c c||}
\hline
$\Delta$ & $d=2$ & $d=3$ \\
\hline\hline
$\Delta_1$ & 0.125 & 0.51790(20) \\
\hline
$\Delta_2$ & 1 & 1.41169(76) \\
\hline
$\Delta_3$ & 2.125 & $\sim$ 2.5 \\
\hline
$\Delta_4$ & 2 & 3.845(11) \\
\hline
\end{tabular}
\caption{Exact scaling dimensions in 2d, known from the solution of the 2d Ising model. In 3d, we report the most precise estimates of scaling dimensions from the lattice \cite{Hasenbusch:1999mw}.}
\label{preciseDeltas}
\end{center}
\end{table}
At criticality, the point-point correlation functions are expected to exhibit power law decay of the form $C(z)=A/z^{2\Delta}$ in infinite volume. Since $\Delta_1, \; \Delta_2$ are the leading dimensions in their respective subspaces, they are expected to govern the leading power law behaviors. In figure \ref{fig:powerlawfits} we plot the $\phi \phi^3$ and $\phi^4 \phi^4$ correlators together with their fits to a periodic power-law $C(z) + C(L-z)$ with $C(z) = A/z^{2\Delta}$.
We observed power laws that clearly indicate the dominance of the leading operators in each subspace, with exponents (reported in the plots) close to the expected $2\Delta_1 \approx 1.0358, \; 2\Delta_2 \approx 2.82$. For $\phi\phi^3$, the subleading power law has exponent $2\Delta' \approx 5$; thus, for both correlators we observe a clear dominance by the leading operator.
\begin{figure}[h!]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=0.98\textwidth]{phi43_corr13_powerlaw_L48.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_corr44_powerlaw_L48.png}
\end{minipage}
\caption{\small{Power-law fits of the $\phi\phi^3$ and $\phi^4\phi^4$ correlators, exhibiting the dominance of the leading operators $\phi$ and $\phi^2$. \label{fig:powerlawfits}}}
\end{figure}
\begin{figure}[h!]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{ratio11_L48.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{ratio22_L48.png}
\end{minipage}
\caption{\small{Ratios of flowed correlators versus distance. Plateaus form at large distances where scaling sets in and the ratio formulas become valid.\label{fig:ratios}}}
\end{figure}
In figure \ref{fig:ratios}, we plot the ratios of correlators at several flow times as functions of distance on the lattice. We observe the short-distance region where the smeared spins overlap as dips in the ratios. At larger distances, plateaus form where the ratios approach their asymptotic forms, although there appears to be slight residual $z$-dependence. For the $\langle\phi\phi\rangle$ correlator, the plateau moves extremely little with flow time, as predicted by eq. (\ref{ratioO}). For all other correlators there is notable movement. The residual $z$-dependence could come from a number of sources. First, we expect that even with a dominant operator, there will always be subleading corrections due to the leading irrelevant operators. If we keep the first subleading term in eq. (\ref{corr_expansion}) and compute the ratio in eq. (\ref{ratioO}), assuming power law correlations, we can derive the expected form of these corrections (in infinite volume). Denoting ratios by $R(z,t)$, we find
\begin{equation}
R(z; t) \sim b_t^{2\Delta_a} \Big( 1 + (b_t^{2(\Delta_b - \Delta_a)} - 1) \frac{c_1}{z^{2(\Delta_b - \Delta_a)}} + b_t^{2(\Delta_b - \Delta_a)} \frac{c_1^2}{z^{4(\Delta_b - \Delta_a)}} \Big).
\end{equation}
In 3 dimensions, $\Delta_b - \Delta_a \gtrsim 2$ in both subspaces, so we expect these corrections to be small at large distances. We were unable to extract estimates for these subleading terms from fits. A second source of $z$-dependence is the fact that the finite volume heat kernel has nontrivial behavior in $z$. However, we expect such corrections to be multiplied by factors of $O(\mathrm{e}^{-n^2 L^2 / 4 t}) \; \forall n>0$. See appendix A for details about the finite volume heat kernel.
We therefore have attempted to extract $\Delta_1$ from applying eqs. (\ref{ratioR}), (\ref{Option2}) in the odd subspace, and separately $\Delta_1, \; \Delta_2$ from the even subspace, according to Option 2 outlined in section 2. In applying the ratio formula, we compare flow times separated by $\epsilon = 0.05$ and fit using the form (inspired by eq. (\ref{bt_form}))
\begin{equation}
R_{\phi^i \phi^j}(z;t) \sim b_\epsilon(t)^{\delta_{ij}} = \Big(1 + \frac{\epsilon}{c + t}\Big)^{\delta_{ij}/2},
\end{equation}
where $\delta_{ij} = 2\Delta_a - (n_i + n_j) \Delta_1$ is expected for the operators we use, $a$ being the index of the dominant operator in the correlator $\langle \mathcal{O}_i \mathcal{O}_j \rangle$. We remark that this form allows one to attempt fitting at arbitrarily small $t$ values, but that the scaling form is expected only for larger times. The correlators at nearby flow times are statistically highly correlated due to the smoothing effect of GF, making the estimation of errors by classical means a risky task,
much like the high correlation of correlator values at successive distances in spectrum measurements. We thus adopt a jackknifing procedure whereby a number of sub-ensembles are generated from the whole ensemble by removing chunks of $J$ samples in sequence, then the correlator ratios are computed and the fits performed, and the best fit parameters are collected together into an ensemble \cite{DeGrand:2006zz}. To account for the noise observed at large distances we have included in our jackknife ensemble fits from every $z$ value from regions where a stable plateau is identifiable in the ratio plots. On each volume we chose a ratio $N/J \approx 40$, where $N$ is the total number of samples. The final estimates on a given volume are then obtained by computing the means and covariance of the ensemble of best fit parameters.
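The delete-$J$-block jackknife can be sketched as follows; in the toy example the estimator is just a mean, whereas in our analysis it would be the entire ratio fit, so that correlations propagate into the quoted error automatically:

```python
import math
import random

def jackknife_blocks(data, J, estimator):
    """Delete-J-block jackknife: remove consecutive chunks of size J, apply the
    estimator to each sub-ensemble, and return (mean, error) over the ensemble."""
    n_blocks = len(data) // J
    thetas = []
    for i in range(n_blocks):
        sub = data[:i * J] + data[(i + 1) * J:]
        thetas.append(estimator(sub))
    mean = sum(thetas) / n_blocks
    var = (n_blocks - 1) / n_blocks * sum((t - mean) ** 2 for t in thetas)
    return mean, math.sqrt(var)

# Toy check: for independent samples, the jackknife error of the sample mean
# reproduces the naive standard error sigma / sqrt(N).
random.seed(4)
data = [random.gauss(0.0, 1.0) for _ in range(4000)]
mean, err = jackknife_blocks(data, J=100, estimator=lambda d: sum(d) / len(d))
print(mean, err)   # err close to 1/sqrt(4000) ~ 0.0158
```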
\begin{figure}
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_delta13_zs11.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_delta22_zs11.png}
\end{minipage}
\caption{\small{Infinite volume extrapolation of the $\delta_{13}$ and $\delta_{22}$ exponents according to eq. (\ref{inf_vol_ansatz}).}}
\label{fig:delta22_extrap}
\end{figure}
The results show a notable dependence on volume. We therefore extrapolate to infinite volume using a leading correction-to-scaling ansatz,
\begin{equation} \label{inf_vol_ansatz}
\delta_{ij}(L; a, \bar \omega) = \delta_{ij} + a L^{-\bar \omega}.
\end{equation}
In \fig{fig:delta22_extrap}, the examples of $\delta_{13}$ and $\delta_{22}$ are displayed. In the odd subspace, one can extract $\Delta_1$ from a single $\delta_{ij}$. In the even subspace, one can extract $\Delta_1, \; \Delta_2$ from any \textit{pair} of $\delta_{ij}$'s. Best fit parameters of the infinite volume extrapolations are reported in table \ref{delta_fits}, and the corresponding scaling dimensions are reported in table \ref{MyDeltas}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c c | c c c c||}
\hline
$(i,j)$ & $\delta_{ij}\text{\small{(table \ref{preciseDeltas})}}$ & $\delta_{ij}$ & $\bar \omega$ & $a$ & $\chi^2/\mathrm{dof}$ \\
\hline\hline
$(1,3)$ & -1.03580(40) & -1.0283(65) & 1.667(80) & 59(14) & 0.16 \\
\hline
$(3,3)$ & -2.07160(80) & -2.055(18) & 1.73(11) & 154(51) & 0.30 \\
\hline
$(2,2)$ & 0.7518(17) & 0.743(17) & 1.17(18) & 7.2(3.5) & 0.17 \\
\hline
$(2,4)$ & -0.2840(19) & -0.308(38) & 0.96(20) & 5.8(2.9) & 0.11 \\
\hline
$(4,4)$ & -1.3198(22) & -1.310(29) & 1.56(16) & 84(38) & 0.20 \\
\hline
\end{tabular}
\caption{\small{Fit results of the infinite volume extrapolations of $\delta_{ij}$ from mixed correlators $\langle \phi^i \phi^j \rangle$ using eqs. (\ref{ratioO}), (\ref{Option2}) and fitting to eq. (\ref{inf_vol_ansatz}). The expected values from table \ref{preciseDeltas} are tabulated for comparison.}}
\label{delta_fits}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c c c||}
\hline
$(i,j), (h,k)$ & $\Delta_1$ & $\Delta_2$ \\
\hline\hline
$(1,3)$ & 0.5141(32) & --- \\
\hline
$(3,3)$ & 0.5138(45) & --- \\
\hline
$(2,2),(2,4)$ & 0.525(21) & 1.422(46) \\
\hline
$(2,2),(4,4)$ & 0.5132(84) & 1.398(22) \\
\hline
$(2,4),(4,4)$ & 0.501(24) & 1.349(88) \\
\hline\hline
table \ref{preciseDeltas} & 0.51790(20) & 1.41169(76) \\
\hline
\end{tabular}
\caption{\small{Estimates of leading scaling dimensions in 3d from mixed correlators $\langle \phi^i \phi^j \rangle$ (or pairs thereof) using eqs. (\ref{ratioO}), (\ref{Option2}). Values are obtained from the results reported in table \ref{delta_fits}.}}
\label{MyDeltas}
\end{center}
\end{table}
Next, we report the results of the diagonalization procedure based on eq. (\ref{ratioR}). The leading diagonalized correlators with dimensions $\Delta_1$ and $\Delta_2$ had a clean signal at all distances past $z = 10$ on every lattice, in the sense that their plateaus exhibited no notable noise. The subleading correlators, however, tended to exhibit wild fluctuations past certain distances ($z \gtrsim 15$), where the signal from the subleading operators becomes small. At shorter distances ($z \lesssim 10$), there was a systematic tendency for exponents to be underestimated.
Repeating the same data analysis used to extract the $\delta_{ij}$ and their infinite volume extrapolations, but now extracting directly estimates of $2\Delta_a$, and using the first value of $\Delta_1$ from table \ref{preciseDeltas} as the necessary input dimension in eq. (\ref{ratioR}), we obtained estimates for $\Delta_1$ and $\Delta_2$. The value for $\Delta_1$ is slightly displaced from the input value, but the value for $\Delta_2$ is consistent with those extracted in table \ref{MyDeltas}.
For the $\mathcal R_3$ scaling operator, distances beyond $z \approx 15$ were left out of our analysis because of a poor signal. In figure \ref{fig:diag_D1D3}, we plot the extrapolations for $\Delta_1$ and $\Delta_3$ using the same limited $z$-range $z \in [10,14]$. The value $\Delta_3 = 2.506(33)$ is consistent with the prediction of $2 + \Delta_1 \approx 2.518$ from \cite{Rychkov:2015naa, Hasenbusch:1999mw}. The signal for $\Delta_4$ was generally quite poor. The data was not clean enough to perform an infinite volume extrapolation. The most reasonable estimates on each volume were obtained from distances $z < 14$. A crude estimate obtained from averaging results from every volume in the range $z \in [10, 13]$, for example, yields $2\Delta_4 = 7.98(65)$, while adding one more distance, so $z \in [10,14]$, yields $2\Delta_4 = 8.31(99)$. The expected value is $2\Delta_4 \approx 7.69$. We take this as a good sign, but without the infinite volume extrapolation the result is not as precise as the lower dimensions $\Delta_1, \; \Delta_2, \; \Delta_3$.
\begin{figure}[h!]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_2Delta1_zs11.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_2Delta2_zs11.png}
\end{minipage}
\caption{\small{Infinite volume extrapolations of $2\Delta_1$ and $2\Delta_2$ from the diagonalization method using correlator ratios at all distances past $z=10$. \label{fig:diag_D1D2}}}
\end{figure}
\begin{figure}[h!]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_2Delta1_zs10_d3zes.png}
\end{minipage}\hfill
\begin{minipage}{.48\textwidth}
\includegraphics[width=1.0\textwidth]{phi43_2Delta3_zs10.png}
\end{minipage}
\caption{\small{Infinite volume extrapolations of $2\Delta_1$ and $2\Delta_3$ from the diagonalization method using correlator ratios from a reduced set of distances, $z \in [10,14]$. \label{fig:diag_D1D3}}}
\end{figure}
\newpage
In 2 dimensions, we have carried out a preliminary analysis to obtain estimates of $\delta_{ij}$ in a manner identical to that above, although so far with about a fifth of the statistics. The results are reported in table \ref{delta2d_fits}. In 2d, the canonical dimension of $\phi$ is zero, and therefore all scaling dimensions are purely anomalous. Ratios for $\phi\phi$ again exhibited minimal variation with time, while higher operator ratios exhibited significant movement. The plateaus were generally less flat than they were in 3d, possibly due to larger contributions from subleading operators. Nonetheless, suggestive estimates were obtained for the $\delta_{ij}$. In the odd subspace, the results deviate from the exact values by many standard deviations, indicating perhaps the stronger presence of the subleading operator and the necessity of a diagonalization analysis. This has not yet been carried out.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c c | c c c c||}
\hline
$(i,j)$ & $\delta_{ij}\text{\small{(table \ref{preciseDeltas})}}$ & $\delta_{ij}$ & $\bar \omega$ & $a$ & $\chi^2/\mathrm{dof}$ \\
\hline\hline
$(1,3)$ & -0.25 & -0.2616(14) & 2.48(21) & 127(83) & 0.21 \\
\hline
$(3,3)$ & -0.50 & -0.5279(28) & 2.35(18) & 161(91) & 0.55 \\
\hline
$(2,2)$ & 1.50 & 1.538(20) & 1.92(35) & 90(93) & 0.35 \\
\hline
$(2,4)$ & 1.25 & 1.299(31) & 1.79(31) & 103(93) & 0.30 \\
\hline
$(4,4)$ & 1.00 & 1.061(60) & 1.62(25) & 129(93) & 0.39 \\
\hline
\end{tabular}
\caption{\small{(Preliminary) Fit results of the infinite volume extrapolations of the $\delta_{ij}$ in 2 dimensions. The expected values from table \ref{preciseDeltas} are tabulated for comparison.}}
\label{delta2d_fits}
\end{center}
\end{table}
Lastly, we note that the exponents we have measured above, of course, are not as precisely determined as they are from finite-size scaling (FSS) analyses, and neither method is nearly as precise as the conformal bootstrap predictions \cite{Poland:2018epd} for scalar field theories. The FSS results from \cite{Hasenbusch:1999mw}, however, had roughly 10--100 times the statistics we have. An advantage of GFRG methods is that they are expected to be applicable in a much broader class of lattice field theories, including gauge-fermion systems in 4 dimensions. In such systems, the nonperturbative determination of anomalous dimensions is a lively and ongoing research program, and the conformal bootstrap has only recently made progress in these systems \cite{Poland:2018epd,Li:2020bnb}. In the next section, we describe an application of GFRG to one such system.
\section{12-flavor SU(3) gauge theory}
A model of central interest in the beyond Standard Model lattice community is the nearly-conformal $N_f$-flavor SU(3) gauge theory and its generalizations. It is a candidate for explaining the electroweak symmetry breaking which produces the Higgs boson, arising from new strong interactions at a higher energy scale \cite{Appelquist:2016viq}. Its motivation lies in the fact that for a range of $N_f$ values, called the ``conformal window,'' the theory may contain a light scalar identified with the Higgs. The perturbative beta function for SU($N$) gauge-fermion systems is given to 2-loop order by \cite{Montvay:1994cy}
\begin{equation}
\beta(g_0) = - \beta_0 g_0^3 - \beta_1 g_0^5 + O(g_0^7),
\end{equation}
where
\begin{equation}
\beta_0 = \frac{1}{16 \pi^2}\Big( \frac{11 N}{3} - \frac{2 N_f}{3} \Big), \quad \beta_1 = \frac{1}{(16 \pi^2)^2} \Big( \frac{34 N^2}{3} - \frac{10 N N_f}{3} - \frac{(N^2-1)N_f}{N} \Big).
\end{equation}
As $N_f$ is increased from 0 the theory eventually develops an interacting infrared fixed point (IRFP), whose coupling strength decreases with $N_f$: the Caswell-Banks-Zaks fixed point \cite{Caswell:1974gg,Banks:1981nn}. Above $N_f = 16.5$, the IRFP merges with the gaussian fixed point and the theory loses asymptotic freedom. In the conformal regime, the IRFP is characterized by a set of scaling dimensions of local operators. The values of these scaling dimensions are highly nontrivial, especially as $N_f$ is lowered from $16.5$ and one begins to leave the class of weakly-coupled IRFPs. Much work has been done to determine the anomalous dimensions, both analytically \cite{Gracey:2018oym, DiPietro:2020jne} and on the lattice \cite{Cheng:2013bca, Giedt:2015alr}, and lattice simulations in particular have focused on the determination of the fermion mass anomalous dimension for reasons of phenomenology as well as practicality.
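As a numerical aside (a sketch of our own, not part of the original analysis), the coefficients above are simple to evaluate: for SU(3), $\beta_0$ vanishes at $N_f = 33/2$, while for $N_f = 12$ one finds $\beta_0 > 0$ but $\beta_1 < 0$, giving a two-loop zero of the beta function at $g_*^2 = -\beta_0/\beta_1$.

```python
import numpy as np

def beta_coeffs(N, Nf):
    """Two-loop beta function coefficients for SU(N) with Nf fundamental flavors."""
    b0 = (11*N/3 - 2*Nf/3) / (16*np.pi**2)
    b1 = (34*N**2/3 - 10*N*Nf/3 - (N**2 - 1)*Nf/N) / (16*np.pi**2)**2
    return b0, b1

# asymptotic freedom is lost when beta_0 changes sign: Nf = 11 N / 2 = 16.5 for N = 3
print(beta_coeffs(3, 16.5)[0])         # 0.0

# for Nf = 12 the two-loop beta function has a Caswell-Banks-Zaks zero
b0, b1 = beta_coeffs(3, 12)
print(-b0 / b1)                        # g*^2 = 48 pi^2 / 50, about 9.47
```

The location of the two-loop zero is only indicative: at such couplings higher orders and nonperturbative effects matter, which is precisely why lattice determinations of the fixed-point properties are of interest.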
We expect that the ratio formula eq. (\ref{ratioO}) is applicable in generic field theories, since blocking may be defined in any theory. In fact, the \textit{first} application of the correlator ratio method outlined in this chapter was to an $N_f = 12$ SU(3) lattice gauge theory using staggered fermions \cite{Carosso:2018bmz}. We computed ratios of the pion, axial vector, vector, and baryon correlators, and extracted estimates of the fermion mass anomalous dimension and that of the leading baryon operator. Because staggered fermions exhibit a conserved current from a remnant chiral symmetry, we used its correlator ratio to estimate the fermion field anomalous dimension, and to eliminate its contribution to other ratios. Before reporting results, we will describe our prescription for the flow and the method of measurement of flowed hadronic observables.
\subsection{Flow definitions} The first question to address is what type of gradient flow should be used in a gauge theory. If we wish to keep the effective action in the same universality class as the bare theory, the flow should preserve the symmetries of the theory. In this case, the flow equation must maintain gauge invariance. The simplest gauge-invariant action is the Yang-Mills action, so we expect YM gradient flow to be sufficient in the continuum. It is clear from a perturbative analysis of the equations that the flowed fields have the desired damping of high-momentum modes of the gauge field $A_\mu(x)$ \cite{Luscher:2010iy}. We also note that the requirement of gauge invariance necessitates a nonlinear gradient flow, a feature we did not see in the scalar case.
When translating to the lattice, one must decide on the discretization of the gauge action. From a Wilsonian RG perspective, it is expected that any discretization is fine, in principle, so long as it reduces to the YM flow in the continuum limit. The simplest lattice discretization of the Yang-Mills gradient flow is called \textit{Wilson flow}. Beginning with bare links $U_\mu(x)$, their flow $V_\mu(x,t)$ is determined by
\begin{equation} \label{wilsonflow}
\partial_t V_\mu(x,t) = - g_0^2 \partial_{x,\mu} S_{W}(V_t) \; V_\mu(x,t),
\end{equation}
where $S_{W}(U)$ is the Wilson action, eq. (\ref{Waction}), and $\partial_{x,\mu} = T^a \partial_{x,\mu}^a$ is a Lie algebra-valued derivative. It is defined on functions $f(U)$ on the group $G$, where $U = \{U_\nu(y)\}$ is the set of gauge links on the lattice, by
\begin{equation}
\partial_{x,\mu}^a f(U) := \frac{\mathrm{d}}{\mathrm{d} s} f \big( \mathrm{e}^{sX} U \big) \Big|_{s = 0}, \quad \mathrm{where} \quad X_\nu(y) := \delta_{\mu\nu} \delta(x,y) T^a.
\end{equation}
In words, then, the derivative operation first replaces $U_\mu(x)$ in $f(U)$ by $\mathrm{e}^{s T^a} U_\mu(x)$, differentiates with respect to $s$, and sets $s = 0$. It is then clear that $\partial_{x,\mu} f(U) \in \mathfrak{g}$; this is nothing but differentiation along tangent vectors to the gauge group $G$ at the element $U$. As the simplest discretization, the Wilson flow has significant lattice artifacts, and therefore alternative flow definitions have been given \cite{Ramos:2015baa}, but we will not discuss these.
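As a sanity check of this definition, one can compare the group derivative against a finite difference numerically. The sketch below is our own illustration, using SU(2) rather than SU(3) for brevity, a single link $U$, and a hypothetical function $f(U) = \mathrm{Re}\,\mathrm{tr}(UM)$ with a fixed arbitrary matrix $M$ standing in for the rest of the action.

```python
import numpy as np

# Pauli matrices; anti-Hermitian su(2) generators are T^a = i sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [1j * s / 2 for s in sigma]

def exp_su2(a, s):
    """exp(s T^a) in closed form for su(2)."""
    return np.cos(s/2) * np.eye(2) + 1j * np.sin(s/2) * sigma[a]

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # arbitrary fixed matrix
U = exp_su2(0, 0.7) @ exp_su2(1, -0.3)                       # some SU(2) "link"

def f(V):
    return np.trace(V @ M).real

# d/ds f(e^{s T^a} U) |_{s=0} = Re tr(T^a U M), checked by central differences
eps = 1e-6
for a in range(3):
    numeric = (f(exp_su2(a, eps) @ U) - f(exp_su2(a, -eps) @ U)) / (2 * eps)
    analytic = np.trace(T[a] @ U @ M).real
    print(a, numeric, analytic)       # the two columns agree
```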
The flow of the fermions $\psi(x)$ can be defined with a simple gauge-covariant diffusion, i.e., a gauged heat equation \cite{Luscher:2013cpa}. The covariant derivative in the fundamental representation is $D_\mu(A) = \partial_\mu + A_\mu$. We denote flowed fermion fields by $\chi_t(x)$ and the flowed gauge field by $B_\mu(x,t)$. The diffusion equation in the continuum is then
\begin{equation} \label{fermionflowcontinuum}
\partial_t \chi_t(x) = D^2(B) \chi_t(x).
\end{equation}
The simplest discretization of this flow would be
\begin{equation} \label{fermionflow}
\partial_t \chi_t(x) = \Delta(V_t) \chi_t(x),
\end{equation}
where $\Delta(U) = \sum_\mu \nabla^*_\mu \nabla_\mu$, and the covariant difference operators are
\begin{equation}
\nabla_\mu \chi(x) = U_\mu(x) \chi(x+\mu) - \chi(x), \quad \nabla^*_\mu \chi(x) = \chi(x) - U_\mu^\dag(x-\mu) \chi(x-\mu).
\end{equation}
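To see the damping of high-momentum modes explicitly, consider the free case $U_\mu(x) = 1$ in one dimension, where the flow is solved exactly mode by mode: the lattice Laplacian has eigenvalue $-\hat p^2 = -4\sin^2(p/2)$ on a plane wave, so each mode is damped by $\mathrm{e}^{-t \hat p^2}$. The following sketch (our own illustration) integrates the flow with simple Euler steps and compares against the exact momentum-space solution.

```python
import numpy as np

L, t = 32, 2.0
chi0 = np.zeros(L); chi0[0] = 1.0      # point source as initial condition

def laplacian(chi):                    # free (U = 1) version of Delta in d = 1
    return np.roll(chi, -1) - 2*chi + np.roll(chi, 1)

# Euler integration of d chi / dt = Delta chi
nsteps = 20000
dt = t / nsteps
chi = chi0.copy()
for _ in range(nsteps):
    chi = chi + dt * laplacian(chi)

# exact solution: each momentum mode damped by exp(-t * 4 sin^2(p/2))
p = 2*np.pi*np.fft.fftfreq(L)
chi_exact = np.fft.ifft(np.fft.fft(chi0) * np.exp(-t * 4*np.sin(p/2)**2)).real

print(np.max(np.abs(chi - chi_exact)))   # small Euler discretization error
print(chi.sum())                         # the zero mode is preserved
```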
But the choice of flow is to some extent arbitrary, so long as it serves to damp high modes while preserving the symmetries of the field. Thus, one could alternatively define the flow by the square of the Dirac operator $\slashed{D}^2(U)$,
\begin{equation}
\partial_t \chi_t(x) = \slashed{D}^2(V_t) \chi_t(x).
\end{equation}
In fact, the two kinds of second derivative are related as
\begin{equation}
\slashed{D}^2 = D^2 + \sigma_{\mu\nu} F_{\mu\nu}, \quad \sigma_{\mu\nu} = \smallfrac{1}{2} [\gamma_\mu,\gamma_\nu].
\end{equation}
We can think of the difference between these flows as follows. The flow generates an effective action which typically contains all possible terms consistent with the symmetries of the theory, so terms like $c(t) \bar \chi \sigma_{\mu\nu} F_{\mu\nu} \chi$ would be present for both choices of flow. The two choices would differ only in the precise form of the coefficient $c(t)$ and its dependence on $t$. But since the dynamics of the theory is controlled by its IRFP, differences in the exact details of $c(t)$ will become less important as $t$ increases and the effective action approaches the fixed point action.
\subsection{Flowed observables}
For gauge fields and scalar fields, the way to compute observables at finite flow time is straightforward. One simply evaluates the operators within expectation values on the flowed fields. For fermions, however, the problem is more subtle, because fermion fields are Grassmann-valued and therefore cannot be directly manipulated and measured in lattice simulations. One therefore must do some work for any given observable to understand how it should be measured.
The simplest fermionic observables are the mesonic operators. A flowed meson operator has the general bilinear form
\begin{equation}
P_t(x) = \overline \chi_t(x) \Gamma \chi_t(x)
\end{equation}
for some gamma matrix $\Gamma$ (or staggered equivalent thereof), as these are the operators which can have the same quantum numbers as the mesons observed in nature. Their correlators are then defined by
\begin{equation} \label{fullcorr}
C_t(x,y) = \langle P_t(x) P_t(y) \rangle.
\end{equation}
It is also convenient to define \textit{partially-flowed} correlators by
\begin{equation} \label{partialcorr}
\tilde C_t(x,y) = \langle P_t(x) P_0(y) \rangle,
\end{equation}
as these are simpler to measure and differ from the fully-flowed correlators by terms of $O(\sqrt{t} / |x-y|)$, as we argue in the next section. The flowed baryon operators are similarly defined. For the simplest staggered baryon, the flowed operator is given by
\begin{equation}
B_t(x) = \epsilon_{abc} \chi_t^a(x) \chi_t^b(x) \chi_t^c(x),
\end{equation}
where $a,b,c = 1,2,3$ are the color indices. Their correlators are defined just as in eqs. (\ref{fullcorr}, \ref{partialcorr}).
To understand how such correlators are measured, first we compute their contractions. Letting $S(x,y) = \slashed{D}^{-1}(x,y)$ be the inverse Dirac operator, one finds \cite{Luscher:2013cpa}
\begin{align}
\contraction{}{\psi(x)}{}{ \bar{\psi}(y)} \psi(x)\bar{\psi}(y) &= S(x,y), \nonumber \\
\contraction{}{\chi_t(x)}{}{ \bar{\psi}(y)} \chi_t(x) \bar{\psi}(y) &= \sum_v \! K(t,x;0,v) S(v,y), \nonumber \\
\contraction{}{\psi(x) }{}{\bar{\chi}_t(y)}\psi(x) \bar{\chi}_t(y) &= \sum_v \! S(x,v) K(t,y;0,v)^\dag, \nonumber \\
\contraction{}{\chi_t(x)}{}{\bar{\chi}_t(y)} \chi_t(x)\bar{\chi}_t(y) &= \sum_{vw} \! K(t,x;0,v) S(v,w) K(t,y;0,w)^\dag,
\end{align}
where $K(t,x;s,y)$ is the gauge covariant Green function solution of the fermion flow equation, and therefore depends on the gauge field in a nontrivial way. One formally writes the solution as
\begin{equation}
K(t,s) = \exp \int_s^t \mathrm{d} \sigma \Delta(V_\sigma),
\end{equation}
where $V_\sigma$ solves the flow equation (\ref{wilsonflow}). Using the contractions above, one integrates over the fermions in the flowed expectation values to obtain expressions in terms of the gauged heat kernel and the gauge fields. For example, integrating over the fermions in eq. (\ref{partialcorr}) gives
\begin{align}
- \Gamma_{\alpha\beta}\Gamma_{\gamma\delta} & \sum_{v}\! K(t,y;0,v) S(v,x)_{\delta\alpha} \cdot \sum_{u}\! S(x,u)_{\beta\gamma} K(t,y;0,u)^\dag \nonumber \\
& = - \sum_{vu} \! \mathrm{tr}\big[ K(t,y;0,v) S(v,x) \Gamma \; S(x,u) K(t,y;0,u)^\dag \Gamma \big].
\end{align}
In the case of pions (for Wilson fermions, say), the gamma matrix is $\Gamma = \gamma_5$, and from $\gamma_5$-hermiticity, $\gamma_5 S(x,y) \gamma_5 = S^\dag(y,x)$, we have
\begin{equation}
\langle P(x) P_t(y) \rangle = - \sum_{vu} \! \mathrm{tr}\big[ K(t,y;0,v) S(v,x) \; \big( K(t,y;0,u) S(u,x) \big)^\dag \big].
\end{equation}
Now, on the lattice, one computes $S(x,y)$ by placing a point source $\eta_y$ at site $y$ defined by $\eta_y(x) = \delta(x,y)$, and numerically inverts the Dirac operator on the source. For point-point correlators, then, the quantity
\begin{equation}
(K_t S \eta_y)(x) = \sum_{v} \! K(t,x;0,v) S(v,y),
\end{equation}
is simply the solution of eq. (\ref{fermionflow}) with $\chi_t \to S$ and initial condition $S(v,y)$, with $y$ held fixed. Denoting the inversion of $\slashed{D}$ on the point source by $(S \eta_y)(x)$, the pion correlator on a single gauge configuration takes the form
\begin{equation}
- \sum_y | (K_t S\eta_x)(y) |^2,
\end{equation}
which is numerically implementable. Thus, to measure the partially-flowed pion correlator, one inverts the Dirac operator on a point source and integrates the gauged heat equation with initial condition being the vector field $S\eta_y$.
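The procedure can be made concrete with a deliberately minimal free toy model in one dimension, with trivial gauge field and a symmetric difference plus mass term standing in for the Dirac operator (all parameters here are illustrative, not those of the simulation): invert on a point source, flow the resulting vector with the heat kernel, and sum the squared magnitudes.

```python
import numpy as np

L, m, t = 32, 0.5, 1.0
idx = np.arange(L)

# free 1d stand-in for the Dirac operator: D = m + symmetric difference
fwd = np.zeros((L, L)); fwd[idx, (idx + 1) % L] = 1.0   # (fwd @ chi)(x) = chi(x+1)
D = m*np.eye(L) + 0.5*(fwd - fwd.T)

# step 1: invert D on a point source eta at the origin
eta = np.zeros(L); eta[0] = 1.0
S_eta = np.linalg.solve(D, eta)                         # the column S(., 0)

# step 2: integrate the (free) heat equation with S_eta as initial condition
p = 2*np.pi*np.fft.fftfreq(L)
K_S_eta = np.fft.ifft(np.fft.fft(S_eta) * np.exp(-t * 4*np.sin(p/2)**2))

# partially flowed "pion" correlator on this trivial configuration
corr = -np.sum(np.abs(K_S_eta)**2)
print(corr)
```

Since the heat kernel only damps modes, the flowed norm is bounded by the unflowed one; in the interacting theory both steps are carried out on each gauge configuration, with the gauge-covariant Laplacian $\Delta(V_t)$ in place of the free one.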
\begin{comment}
The story for the baryon is similar; one finds the expression \ac{CHECK}
\begin{equation}
\langle B_t(x) B_0(0) \rangle_F = \det (K_t S \eta_x),
\end{equation}
where the determinant is in color space. So, one can measure it after the meson correlator at the cost of computing an extra determinant.
\end{comment}
Observables that are fully-flowed, such as eq. (\ref{fullcorr}), are much harder to measure, because one must integrate instead the \textit{adjoint} heat equation \cite{Luscher:2013cpa}, and the computational cost increases drastically. Even some local observables, like the chiral condensate $\bar \chi_t (x) \chi_t(x)$, require adjoint flow. None of the observables used below required the computation of adjoint flow, however.
\subsection{Super ratios}
As we saw above, it is numerically advantageous to compute expectation values with only a single flowed operator in the correlator. But the original ratio formula eq. (\ref{ratioO}) requires both operators to be flowed. If such partially-flowed correlators are approximately equal to the full correlators at large distances, then we expect a modified ratio formula (for $i = j$)
\begin{equation}
\frac{ \langle \op(0) \op_t(z) \rangle}{ \langle \op(0) \op(z) \rangle} \propto t^{\ell_\op / 2 + \gamma_\op/2 - n_\op \eta/4} + O(\sqrt{t}/z) \label{eq:partialratio},
\end{equation}
where the dependence on $t$ is the square root of the dependence in eq. (\ref{ratioO}), at large times. Intuitively, this should hold for distances much larger than the smearing radius, $z \gg \sqrt{t}$. Notice also that we switch to an emphasis on anomalous dimensions rather than full scaling dimensions in this section, as is customary in four dimensions.
To motivate this form of the ratio formula, let us consider the case of $\langle \phi_t(z) \phi_t(0) \rangle$. From $\phi_t(y) = (K_t \varphi)(y)$, where $K_t$ is the heat kernel, we obtain
\begin{equation}
\langle \phi_t(z) \phi_t(0) \rangle = \int_y K_t(y) \langle \phi_t(z) \varphi(y) \rangle \equiv \int_y K_t(y) G_t(z - y).
\end{equation}
Now we expand $G_t$ about $z$ using
\begin{equation}
|z-y| = |z| \Big( 1 + \frac{y^2}{z^2} - 2 \frac{y \cdot z}{z^2} \Big)^{1/2} = |z|\Big(1 + \frac{y^2}{2 z^2} - \frac{y \cdot z}{z^2} - \frac{(y \cdot z)^2}{2(z^2)^2} + O(y^3) \Big),
\end{equation}
where $x^2 = |x|^2$. The expansion of the correlator above is then
\begin{equation}
\int_y K_t(y) G_t(z - y) = \int_y K_t(y) \Big[ G_t(z) + \Big(\frac{y^2}{2 z^2} - \frac{y \cdot z}{z^2} - \frac{(y \cdot z)^2}{2(z^2)^2} + O(y^3)\Big) z \cdot \nabla_z G_t(z) + O(\nabla_z^2) \Big].
\end{equation}
Now, from the moments of the heat kernel $\langle 1 \rangle = 1$, $\langle y \rangle = 0$, and $\langle y_i y_j \rangle = 2t \delta_{ij}$, we obtain
\begin{equation}
\int_y K_t(y) G_t(z - y) = G_t(z) + (d-1) \frac{t}{z^2} z \cdot \nabla_z G_t(z) + O(t^2 / z^4, \nabla^2_z).
\end{equation}
At large distances $z \gg \sqrt{t}$, the partially flowed correlators are then approximately equal to the fully-flowed correlators, and we expect eq. (\ref{eq:partialratio}) to be valid.
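The heat kernel moments used above are easily verified numerically; since $K_t(y) = (4\pi t)^{-d/2}\,\mathrm{e}^{-y^2/4t}$ factorizes over components, a one-dimensional quadrature suffices. A quick self-contained check (our own):

```python
import numpy as np

t = 0.37
y = np.linspace(-40.0, 40.0, 400001)
dy = y[1] - y[0]
K = np.exp(-y**2 / (4*t)) / np.sqrt(4*np.pi*t)   # 1d heat kernel at flow time t

print(np.sum(K) * dy)          # 1.0       : <1> = 1
print(np.sum(K * y) * dy)      # 0.0       : <y> = 0
print(np.sum(K * y**2) * dy)   # 2t = 0.74 : <y_i y_j> = 2t delta_ij
```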
We can use eq. (\ref{eq:partialratio}) to determine the field anomalous exponent $\eta$ along the lines of Option 1 outlined above at eq. (\ref{Option1}). Once $\eta$ is determined, any other anomalous dimension can be predicted. Alternatively, we may construct a \textit{super ratio} of the form
\begin{eqnarray}
\Rop_{\op}(t,x_0) &=& \frac{ \langle \op(0) \op_t(x_0) \rangle}{ \langle \op(0) \op(x_0) \rangle} \label{eq:ratio_full}
\Bigg( \frac{ \langle \symop(0) \symop(x_0) \rangle}{ \langle \symop(0) \symop_t(x_0) \rangle} \Bigg)^{n_\op/n_\symop} \\
&=& b^{\Delta_\op - (n_\op / n_\symop) d_\symop} \nonumber \\
&\propto& t^{\gamma_\op/2 + \delta/2}, \quad x_0 \gg a\sqrt{t}, \nonumber
\end{eqnarray}
which cancels the anomalous dimension $\eta$ directly, leaving only the desired anomalous dimension $\gamma_\op$ and some possible residual dependence on the canonical dimensions of $\op$ and $\symop$ through $\delta \equiv d_\op - (n_\op / n_\symop) d_\symop$. If the operators contain no derivatives then $\delta = 0$; this will be the case for all operators we consider in our numerical study.
Eq.~(\ref{eq:ratio_full}) is valid only on the critical $m=0$ surface and at flow times large enough that the linear basin of attraction of the IR-stable fixed point has been reached. Otherwise, we expect the $\gamma_\op$ predicted from eq.~(\ref{eq:ratio_full}) to show additional dependence on $t$ coming from irrelevant operators. In practice, the flow time $t$ that can be reached is limited by the finite lattice volume.
\subsection{Finite volume corrections} To correct for finite volume, a different approach was used in this system than was later used in the scalar case.
The correlator scaling formula under a blocking transformation in a finite volume reads
\begin{equation}
C(z/b;g',L/b) = b^{\Delta} C(z;g,L),
\end{equation}
where $g'$ is the coupling of the blocked theory after blocking by $b$. Now consider the same formula on a larger volume $sL$ at distance $sz$, under a rescaling by $sb$:
\begin{equation}
C(sz/sb;\tilde g',sL/sb) = (sb)^{\Delta} C(sz;\tilde g,sL),
\end{equation}
where a possibly different coupling $\tilde g$ is used on the larger lattice. The two scaling formulas above imply the two ratios
\begin{equation}
R_b(g;L) = b^\Delta, \quad R_{sb}(\tilde g; sL) = (sb)^\Delta,
\end{equation}
from which it follows that
\begin{equation}
R_{sb}(\tilde g; sL) = s^\Delta R_b(g;L).
\end{equation}
Repeating the argument above on a volume $L'$ leads to
\begin{equation}
R_{sb}(\tilde{\bar g}; sL') = s^\Delta R_b(\bar g; L').
\end{equation}
Letting $L' = sL$ and taking the difference of the previous two equations implies
\begin{equation}
R_{sb}(\tilde{\bar{g}}; s^2 L) - R_{sb}(\tilde g; sL) = s^\Delta \big[ R_b(\bar g;sL) - R_b(g;L) \big].
\end{equation}
If the blocking steps above are performed sufficiently close to the IRFP, the effective couplings $g, \bar g$ are close to their fixed point values. Expanding each ratio about $g_*$, we find
\begin{equation}\label{eq:finite_vol_corr}
R_{sb}(g_*; s^2 L) - R_{sb}(g_*; sL) = s^\Delta \big[ R_b(g_*; sL) - R_b(g_*; L) \big] + O(g_i - g_*).
\end{equation}
Eq. (\ref{eq:finite_vol_corr}) predicts the ratio $\Rop(g)$ on volume $s^2 L$ in terms of ratios on smaller volumes, plus a correction term $O(g - g_*)$. We will absorb the latter term as a $g$-dependent correction and assume that the ratio on $s^2 L$ volumes approximates infinite volume. Assuming that conformal symmetry is broken only by the finite spatial lattice extent $L$, we expect finite volume corrections to depend only on the dimensionless ratio $b/L$, and thus on the flow time as $\sqrt{t}/L$.
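The cancellation behind eq. (\ref{eq:finite_vol_corr}) can be checked in a toy model: if the ratio at the fixed point takes the form $R_b(g_*;L) = b^\Delta f(b/L)$ for some smooth correction function $f$, the relation holds exactly, for any $f$. A sketch (our own illustration, with arbitrary parameter values):

```python
import numpy as np

Delta, b, s, L = 1.8, 2.0, 32/24, 24.0

def f(x):
    """Hypothetical finite-volume correction function, f -> 1 as b/L -> 0."""
    return 1 + 0.3*x + 0.1*x**2

def R(b_, L_):                     # model ratio at the fixed point
    return b_**Delta * f(b_ / L_)

lhs = R(s*b, s**2 * L) - R(s*b, s*L)
rhs = s**Delta * (R(b, s*L) - R(b, L))
print(lhs, rhs)                    # the two sides agree
```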
\subsection{Simulation details}
We carried out a pilot study of SU(3) gauge theory with $N_f = 12$ degenerate fermions in the fundamental representation. We used a set of gauge configurations that were originally generated for a finite-size scaling study of this system~\cite{Cheng:2013xha} using a plaquette gauge action and nHYP-smeared staggered fermions \cite{Hasenfratz:2001hp,Hasenfratz:2007rf}. Further details on the lattice action can be found in Refs.~\cite{Cheng:2011ic,Cheng:2013eu,Cheng:2013bca,Cheng:2013xha}. We considered five values of the bare gauge coupling, $\beta = 4.0, 5.0, 5.5, 5.75$, and $6.0$, analyzing 46 and 31 configurations on lattice volumes of $24^3\times 48$ and $32^3\times 64$, respectively. The fermion mass was set to $m = 0.0025$, small enough that we expect the breaking of scale invariance to be dominated by the finite spatial extent $L$.
We considered only fermionic operators, and used the axial charge $A^4$ for our conserved operator $\mathcal{A}$. Since staggered fermions have a remnant U(1) symmetry, it is straightforward to construct a conserved axial charge operator with $Z_A=1$ \cite{Aoki:1999av}. We used on-site staggered operators for the pseudoscalar, vector, and nucleon, and a 1-link operator for the axial charge states. Our individual correlators were consistent with simple exponential decay, although we cannot rule out a functional dependence that includes a Yukawa-like power law correction \cite{Ishikawa:2013tua}.
We considered 10 flow time values in the range $1.0 \le t/a^2 \le 7.0$ (note that the flow range is $\sqrt{8t}$ in four dimensions). The strong correlations in GF lead to very small statistical errors in the flow-time dependence.
\subsection{Analysis}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{corr-flow.png}
\caption{\small{Dependence of the correlator ratio $R_P$ on source-sink separation $x_0$ and flow scale $\sqrt{8t}$. For each value of $\sqrt{8t}$, a stable plateau in $R_P$ is seen for $x_0 \gtrsim 2\sqrt{8t}$. The results shown here are on $32^3\times 64$ volumes at $\beta=5.75$. \label{fig:corr-flow}}}
\end{figure}
In the following, we work in lattice units. The ratio given in eq. (\ref{eq:ratio_full}) should be independent of $x_0$ at large $x_0$, as long as the operator $\op$ has well defined quantum numbers. At distances comparable to the flow range, $x_0 \lesssim \sqrt{8t}$, the flowed operators overlap and the ratios could have non-trivial and non-universal structure. Because staggered fermion actions involve oscillating phase factors, we observed significant oscillation in the small-$x_0$ region, as shown in \fig{fig:corr-flow}, even for the $\gamma_5$ pseudoscalar operator, which has no oscillating partner in its channel. The width of the oscillation is about $2\sqrt{8 t}$, after which a stable plateau develops. The decrease in the value of the plateau as the flow time increases predicts the anomalous dimension of the pseudoscalar operator.
We worked directly with the ratio $\Rop(t)$ of eq. (\ref{eq:ratio_full}), and did not attempt to extrapolate the fermion field exponent $\eta$ (obtained from using $\symop$ in eq. (\ref{Option1})) to the infrared limit, as it showed much stronger finite-volume and bare coupling dependence than the full operator ratios. At fixed $t$ and $\beta$ we typically found $\eta \lesssim 0.1$.
As a consistency check we considered the vector operator, but found large systematic effects due to oscillation; although we cannot quote a precise extrapolated value, we generally found the associated anomalous dimension consistent with zero as expected.
We predicted the anomalous dimension as a function of $t$ by comparing the ratios at consecutive $(t_1,t_2)$ flow time values
\begin{equation}
\gamma_\op (\beta, \bar{t}, L) =
\frac{ \log(\Rop_\op(t_1,\beta,L) / \Rop_\op(t_2,\beta,L)) } { \log(\sqrt{t_1}/\sqrt{t_2}) },
\end{equation}
where $\bar{t}=(t_1+t_2)/2$. The mass anomalous dimension is predicted by considering the pseudoscalar operator, recalling that $\gamma_m = -\gamma_{S} = - \gamma_{PS}$. We applied the finite volume corrections of eq. (\ref{eq:finite_vol_corr}), determining $\gamma_m$ iteratively. We had numerical data on $24^3\times 48$ and $32^3\times 64$ volumes, so $s=32/24$, and eq. (\ref{eq:finite_vol_corr}) increased the effective spatial extent to $s^2 L \approx 42.7$.
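For a pure power law $\Rop_\op(t) \propto t^{\gamma_\op/2}$ (the $\delta = 0$ case), this finite-difference estimator is exact, as a short sketch confirms (the numbers here are synthetic, not measured values):

```python
import numpy as np

gamma = 0.30                                   # synthetic input exponent
R = lambda t: 1.7 * t**(gamma / 2)             # pure power-law ratio
t1, t2 = 3.0, 4.0

gamma_est = np.log(R(t1) / R(t2)) / np.log(np.sqrt(t1) / np.sqrt(t2))
print(gamma_est)                               # recovers the input exponent 0.30
```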
In \fig{fig:gamma-M} we show the infinite volume estimated $\gamma_m$ as a function of $\mu \equiv 1/\sqrt{ 8 \bar{t}}$.
There is significant dependence on the bare gauge coupling $\beta$ and also on the flow time $t$, as expected in a slowly running system. We extrapolated to the $t \to \infty$ limit as
\begin{equation}
\gamma_m(\beta,t) = \gamma_0 + c_\beta t^{\alpha_1} + d_\beta t^{\alpha_2}
\label{eq:extrapolation}
\end{equation}
motivated by the expectation that the correction terms should be due to the slowly evolving irrelevant couplings, associated with higher-dimensional operators that can mix with the operator of interest. Based on Refs.~\cite{Cheng:2013xha,Cheng:2013eu,Cheng:2013bca} we expect the FP to be closest to the $\beta=5.5-6.0$ range, so that the dependence on $\beta$ should be weakest in this range.
We performed a combined fit versus $\beta$ and $t$ using common $\gamma_0$, $\alpha_1$ and $\alpha_2$, but allowing $\beta$ dependent coefficients $c_\beta$ and $d_\beta$. The central fit, as shown in \fig{fig:gamma-M}, omits $\beta=4.0$ and discards the smallest and two largest $t$ values, predicting $\gamma_m=0.23$. The other exponents obtained were $\alpha_1 = -0.25(14)$ and $\alpha_2 = -2.37(29)$; these likely include some remaining finite-volume effects and thus should not correspond directly to irrelevant operator dimensions.
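For fixed exponents $\alpha_1, \alpha_2$, eq. (\ref{eq:extrapolation}) is linear in $\gamma_0$ and in the coefficients $c_\beta, d_\beta$, so the combined fit reduces to a least-squares problem with a shared constant column. The sketch below is our own illustration on noiseless synthetic data (in the actual analysis the exponents were fit as well, making the problem nonlinear):

```python
import numpy as np

gamma0, a1, a2 = 0.25, -0.25, -2.4            # synthetic inputs, not fit results
t = np.linspace(1.5, 6.0, 12)
coeffs = [(0.08, -0.30), (0.15, -0.50)]       # (c_beta, d_beta) for two beta values

# stack data for all beta; unknowns are (gamma0, c_1, d_1, c_2, d_2)
ys, blocks = [], []
for i, (c, d) in enumerate(coeffs):
    ys.append(gamma0 + c*t**a1 + d*t**a2)
    block = np.zeros((t.size, 1 + 2*len(coeffs)))
    block[:, 0] = 1.0                          # shared gamma_0 column
    block[:, 1 + 2*i] = t**a1                  # c_beta column for this beta
    block[:, 2 + 2*i] = t**a2                  # d_beta column for this beta
    blocks.append(block)

fit, *_ = np.linalg.lstsq(np.vstack(blocks), np.concatenate(ys), rcond=None)
print(fit[0])                                  # recovers gamma_0 = 0.25
```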
\begin{figure}
\centering
\begin{minipage}{.45\textwidth}
\includegraphics[width=1.1\textwidth]{gamma-extrap-M.png}
\caption{\small{Extrapolation of the mass anomalous dimension $\gamma_m$ to the infrared limit, as described in the text. \label{fig:gamma-M}}}
\end{minipage}\hfill
\begin{minipage}{.45\textwidth}
\includegraphics[width=1.1\textwidth]{gamma-extrap-N.png}
\caption{\small{Extrapolation of the nucleon anomalous dimension $\gamma_N$ to the infrared limit, as described in the text. \label{fig:gamma-N}}}
\end{minipage}
\end{figure}
We varied the analysis by dropping small/large $t$ values, and also including or discarding $\beta=4.0$ and $\beta=6.0$ from the fit; from these variations we estimated a systematic error of $0.04$ on $\gamma_m$. As an additional cross-check on our finite volume correction procedure, we performed an alternative analysis in which a global fit to $\Rop_\op(t)$ was carried out assuming power-law dependence on the dimensionless ratio $\sqrt{8t}/L$. This gave a central value of 0.27. We conservatively took the difference in central values as an estimate of our finite-volume extrapolation systematic, giving the final prediction
\begin{equation}
\gamma_m = 0.23(6)
\end{equation}
combining the systematic errors in quadrature.
A significant advantage of this technique is that more complicated composite operators can be dealt with in a straightforward way. To demonstrate this, we considered the nucleon operator with our method. The nucleon showed more significant oscillations in the ratio $\Rop_N$, continuing into the plateau region; we accounted for the oscillations by averaging over adjacent pairs of $x_0$ values to obtain $\Rop_N$. The oscillations at large $x_0$ may be due to the coupling of the staggered nucleon operator to other wrong-parity states; numerically, the coupling is small in the ratio. We defined the nucleon anomalous dimension with an additional negative sign, $\gamma_N \equiv \Delta_N - d_N$, to match the convention of refs. \cite{Pica:2016rmv,Gracey:2018oym}. Repeating the full analysis as described yields \fig{fig:gamma-N} and predicts
\begin{equation}
\gamma_N = 0.05(5)
\end{equation}
where the finite-volume systematic error is estimated to be 0.03 and the remaining combined systematic and statistical error is 0.04.
\section{Afterword}
We have demonstrated that gradient flow (GF) can be used to extract estimates of scaling dimensions by testing the correlator ratio method in both relatively well-understood theories, $\phi^4_3, \; \phi^4_2$, and a relatively complicated theory, the 12-flavor SU(3) gauge-fermion system. The method is entirely nonperturbative. Furthermore, our method avoids the costly procedure of ensemble matching that is required in most MCRG studies \cite{Hasenfratz:2009kz}. Now, in the scalar theories we worked with a well-tuned system, and in the gauge-fermion case, we worked effectively at zero fermion mass. An important avenue for future work will be to consider the effects of deviations from criticality in the scalar system. Another question to address is whether the method may be extended to systems without IRFPs, such as QCD. We also expect the method to be fully applicable to conformal theories other than those we have considered, such as $\mathcal{N}=4$ super-Yang-Mills \cite{Schaich:2015daa}, and it has already been applied by Bergner et al.\ to adjoint QCD with 1, 3/2, and 2 flavors \cite{Bergner:2019kub, Bergner:2019dim}. Lastly, we plan to apply the method to compute anomalous dimensions of electron bound states in 3-dimensional noncompact QED with $N_f$ flavors, a system with interesting and controversial infrared properties.
\newpage
\chapter{Functional RG}
In this chapter we will introduce the \textit{functional} or \textit{exact renormalization group} program (FRG), the goal of which is to systematically define and solve \textit{functional} PDEs which describe the evolution of field-theoretic quantities of interest under continuous RG transformations. Examples of such quantities are the flowing effective action or the flowing 1PI generator. The RG equations we encountered in chapter 1 were differential equations in the couplings for the observables of a theory, be they renormalized or bare ones. In contrast, the functional RG equations are PDEs in the field variables, which track the evolution of the flowing action \textit{as a whole}. By expanding these functionals in powers of the field, one typically obtains an infinite hierarchy of coupled (non-functional) PDEs for the coefficient functions multiplying the fields. An important difference between FRG and the perturbative RG methods we encountered in chapter 1 is that FRG allows for nonperturbative approaches to the study of RG, which do not rely on Callan-Symanzik-type equations or perturbative renormalizability. But of course, in practice, the method must implement its own approximation strategies in order to solve the functional PDEs, which are often highly nonlinear. We will see that FRG can be thought of as a continuous, or smoothed-out, implementation of block-spin RG.
To get started, we introduce a functional tensor notation which proves convenient when working with functional PDEs. Then, as a warm-up to the general program of FRG, we will describe in detail a version of \textit{smooth} high-mode elimination RG, to be compared with the typical textbook example of \textit{sharp} high-mode elimination, in the framework of perturbation theory, and we will derive the perturbative Wilson-Fisher fixed point (for the second time in this thesis). Along the way, we will introduce some new generating functionals, and we will develop an understanding of what is meant by \textit{effective field theory}. This presentation will also mirror the one given later in the context of stochastic RG. We will then describe the general derivation of FRG equations and the phenomenon of RG fixed points in this formalism. We will compute the gaussian fixed point action following the analysis of \cite{Wilson:1973jj}. To close the chapter, we will briefly survey various applications of FRG which have emerged over the years.
\section{Notation}
In this section a functional (index-free) tensor notation is introduced, to be used extensively in this chapter and the next. It is based on that of \cite{Kopietz:2010zz} and certain conventions from differential geometry. The notation often renders quite simple the expression of otherwise cumbersome functional equations by avoiding explicit position and momentum integrations and arguments, when it is appropriate to do so. We will develop the notation by recasting the generating functionals we defined in chapter 1 in a new form.
The first bit of notation was introduced back in chapter 1:
\begin{equation}
J \circ \varphi := \int \mathrm{d}^d x J(x) \varphi(x).
\end{equation}
Sometimes this is written alternatively as $(J,\varphi)$. We can think of this notation as expressing the contraction of two vectors $J$ and $\varphi$ whose components are $J(x)$ and $\varphi(x)$, since they have one ``index.'' On a lattice, the integral would be replaced by a summation. Now, with the notion of a functional vector comes the notion of functional tensor products, and thus functional tensors. For example, $J \otimes \phi$ is a rank-2 tensor, and the ``dot product'' above may be written in yet another way:
\begin{equation}
J \circ \varphi = \mathrm{tr}[ J \otimes \varphi ].
\end{equation}
$n$-point functions may be written in terms of functional tensor products. For example, the connected functions may be written as\footnote{For fermions one will need to be careful about ordering in this notation. See \cite{Kopietz:2010zz} for one approach.}
\begin{equation}
\langle \varphi \otimes \cdots \otimes \varphi \rangle^\mrm{c} = \frac{\delta}{\delta J} \otimes \cdots \otimes \frac{\delta}{\delta J} \; W(J) \Big|_{J=0} = W^{(n)},
\end{equation}
and one can say $W^{(n)}$ is rank-$n$. We can further introduce a multilinear notation for contracting tensors against vectors, e.g.\footnote{This notation is commonly used in differential geometry as it allows for the expression of tensorial quantities in a coordinate-free manner. See \cite{Nakahara:2003nw} for examples.}
\begin{equation}
W^{(n)} (J, \dots, J) := \prod_{i=1}^n \int \mathrm{d}^d x_i J(x_i) \cdot W^{(n)}(x_1, \dots, x_n),
\end{equation}
so that the expansion of the generator in $J$ is simply written as
\begin{equation}
W(J) = \sum_{n=0}^\infty \frac{1}{n!} W^{(n)} (J, \dots, J).
\end{equation}
Whether the arguments of $W^{(n)}$ refer to position, momentum, or functional vectors should be clear from context.
Another instance of multilinear notation is in the relationship between the 1PI vertices $\Gamma^{(n)}$ and the $W^{(n)}$. We noted in chapter 1 that $\Gamma^{(4)} = - W^{(4)} / W^{(2)} \cdots W^{(2)}$, that is, the 4-point vertex function is a full-propagator-amputated connected 4-point function (for a $\mathbb{Z}_2$-symmetric theory). Such a relation can be expressed nicely in multilinear notation as
\begin{equation}
W^{(4)}(\chi, \dots, \chi) = - \Gamma^{(4)}(W^{(2)} \chi, \dots, W^{(2)} \chi),
\end{equation}
for an arbitrary functional vector $\chi$. We leave the determination of the corresponding relation for $W^{(6)}$ as an exercise for the curious reader. Lastly, we remark that rank-2 tensors are functional matrices, and we may speak of their inverses as usual. For example, the inverse propagator and the 2-point vertex are related by $[W^{(2)}]^{-1} \Gamma^{(2)} = \mathbb{I}$, where the functional identity has the Dirac delta as its components. With all this new notation, we make our lives easier in many computations that come up in functional RG, as we will observe below.
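On a lattice, where functional vectors become ordinary finite-dimensional arrays, the contractions above can be sketched numerically. The following is an illustrative discretization (the arrays and sizes are arbitrary, not from the text), with a symmetric positive-definite stand-in for $W^{(2)}$:

```python
import numpy as np

# Illustrative lattice discretization: a functional vector J(x) becomes an
# array of N components, and a rank-2 functional tensor an N x N matrix.
rng = np.random.default_rng(0)
N = 8
J = rng.normal(size=N)
phi = rng.normal(size=N)

# (J, phi) = tr[J (x) phi]: contraction of two functional vectors
dot = J @ phi
assert np.isclose(dot, np.trace(np.outer(J, phi)))

# Symmetric positive-definite stand-in for the functional matrix W^(2)
X = rng.normal(size=(N, N))
W2 = X @ X.T + N * np.eye(N)

# Multilinear contraction W^(2)(J, J) = sum_{x,y} J(x) W2(x,y) J(y)
W2_JJ = np.einsum('i,ij,j->', J, W2, J)
assert np.isclose(W2_JJ, J @ W2 @ J)

# Inverse-propagator relation [W^(2)]^{-1} Gamma^(2) = identity,
# the lattice version of the functional matrix inverse
Gamma2 = np.linalg.inv(W2)
assert np.allclose(W2 @ Gamma2, np.eye(N))
```

On a lattice the functional identity matrix is literally `np.eye(N)`, the discrete counterpart of the Dirac delta.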
\section{High-mode elimination}
\textit{High-mode elimination} refers to the quintessential form of RG, namely, the systematic elimination of high-momentum (or short-distance) modes $\varphi(p)$ from a field theory, while preserving its low-momentum (or long-distance) structure. In a lattice system, this can be achieved nonperturbatively with block-spin RG or the GFRG of chapter 2. The former was a discrete transformation, while the latter was a continuous transformation. If one wants to analyze the structure of the effective action obtained by these transformations, it is usually simpler to work in the continuum and to use a continuous RG transformation. The framework of high-mode elimination enables one to study such effective actions, and it is perhaps most straightforward to use perturbation theory, although the ultimate goal of functional RG is to apply nonperturbative methods to study the same problem. Now, there are many ways to carry out this program; we will adopt a method which combines elements of the analyses of Zinn-Justin \cite{ZinnJustin:2002ru}, Peskin and Schroeder \cite{Peskin:1995ev}, Igarashi et al. \cite{Igarashi:2009tj}, and Kopietz et al. \cite{Kopietz:2010zz}, and which utilizes perturbation theory.
\subsection{The bare theory} The initial motivation of high-mode elimination RG is to mimic the discrete blocking transformations of real space RG in the context of continuum field theory. One begins with a bare theory at cutoff $\mn\Lambda_0$ and with partition function and action, respectively,
\begin{equation}
Z = \int \mathscr{D} \varphi \; \mathrm{e}^{-S_0(\varphi)}, \quad S_0(\varphi) = \frac{1}{2} ( \varphi, M_0 \varphi) + V_0(\varphi).
\end{equation}
If one regularizes with a \textit{sharp} cutoff on the momentum integrals in $S_0$, then one is working with a pseudo-lattice model of infinite spatial volume and with non-lattice kinetic terms (e.g. $p^2$ vs $\sin^2 p a$). Furthermore, the sharpness of the cutoff will lead to nonanalyticity in position space, in general, and is therefore unfavorable for some purposes. There are other means of implementing a cutoff in the continuum, however. For a scalar field theory, a common choice is to define a \textit{smooth cutoff function} $K_0(p)$ such that $K_0(0) = 1$ and $K_0(p) \ll 1$ for $p \gg \mn\Lambda_0$, and implement it by a modified free propagator,\footnote{The quantization of field theories with smooth cutoffs is described in \cite{Namsrai:1986md} under the name of ``nonlocal QFT,'' and has been achieved with reasonable rigor. Here, we work in euclidean spacetime and concern ourselves with statistical field theories. It would be interesting, however, to attempt to apply the procedures below in quantum field theory, proper. It seems possible, for example, to obtain effective nonrelativistic field theories in such a manner.}
\begin{equation}
M_0 = \mn\Delta_0^{-1} := K_0^{-1} \mn\Delta^{-1}, \quad \mathrm{where} \quad \mn\Delta^{-1}(p) = p^2 + m_0^2,
\end{equation}
for example. By implementing a cutoff this way, the theory may be regularized in perturbation theory, since every internal line will correspond to a factor of
\begin{equation}
\mn\Delta_0 (p) = \frac{K_0(p)}{p^2 + m_0^2}.
\end{equation}
A convenient choice we shall adopt for the rest of this work is Schwinger regularization,
\begin{equation}
K_0(p) = \mathrm{e}^{-p^2 / \mn\Lambda_0^2} = \mathrm{e}^{-p^2 a_0^2},
\end{equation}
where the inverse cutoff $a_0 = \mn\Lambda_0^{-1}$ has been defined. Notice that such a $K_0$ is nothing but a momentum space heat kernel $f_t(p)$ at ``time'' $t = a_0^2$.
In general, the regulator should be chosen to preserve the symmetry of the fields appearing in the theory. In this case, Schwinger regularization suffices, but one must take greater care when dealing with gauge theories, or theories with constraints like spin models.
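As a quick numerical illustration of the Schwinger-regulated propagator (with arbitrarily chosen illustrative scales), one can check that $\mn\Delta_0(p)$ agrees with the free propagator at low momenta and is exponentially suppressed above $\mn\Lambda_0$:

```python
import numpy as np

Lambda0, m0sq = 10.0, 1.0   # illustrative cutoff and bare mass squared

def K0(p):
    # Schwinger cutoff function: K0(0) = 1, K0(p) << 1 for p >> Lambda0
    return np.exp(-p**2 / Lambda0**2)

def Delta0(p):
    # regulated free propagator K0(p) / (p^2 + m0^2)
    return K0(p) / (p**2 + m0sq)

assert K0(0.0) == 1.0
# low momenta: regulated and unregulated propagators agree up to O(p^2/Lambda0^2)
p = 0.1
assert abs(Delta0(p) - 1.0 / (p**2 + m0sq)) < 1e-4
# high momenta: exponential suppression, far below the bare 1/p^2 falloff
p = 5 * Lambda0
assert Delta0(p) < 1e-6 / p**2
```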
\subsection{The low-mode action} We want to define a low-mode, or \textit{effective} action, corresponding to the bare theory in such a way that the cutoff of the effective theory is lower, $\mn\Lambda < \mn\Lambda_0$, and the low-momentum (or long-distance) observables $\langle \mathcal{O}(\varphi) \rangle_{S_0}$ are unaffected. We begin by deriving a peculiar functional identity:
\begin{align}
\mathcal{N} \int \mathscr{D} \varphi & \exp\Big[-\frac{1}{2} ( \varphi, [A + B]^{-1} \varphi) - V(\varphi) \Big] \nonumber \\
& = \int \mathscr{D} \phi_1 \mathscr{D} \phi_2 \exp\Big[- \frac{1}{2} ( \phi_1, A^{-1} \phi_1) - \frac{1}{2} ( \phi_2, B^{-1} \phi_2) - V(\phi_1 + \phi_2) \Big],
\end{align}
where $A$ and $B$ are invertible matrices, and $\mathcal{N}$ is a constant. To prove it, begin with the r.h.s., and redefine the $\phi_2$ field via $\varphi = \phi_1 + \phi_2$. The quadratic part of the action becomes
\begin{equation}
\frac{1}{2} ( \phi_1, [A^{-1} + B^{-1}] \phi_1) - ( \phi_1, B^{-1} \varphi) + \frac{1}{2} ( \varphi, B^{-1} \varphi).
\end{equation}
The integral over $\phi_1$ is then gaussian, and completing the square evaluates it to
\begin{equation}
\det\Big[ \frac{2 \pi}{A^{-1}+B^{-1}} \Big]^{1/2} \exp \Big[ \frac{1}{2} \big( B^{-1} \varphi, [A^{-1}+B^{-1}]^{-1} B^{-1} \varphi \big) \Big],
\end{equation}
which identifies the constant $\mathcal{N}$. The matrix in the quadratic part of the $\varphi$ action is therefore
\begin{equation}
B^{-1} - (B^{-1})^\top [A^{-1}+B^{-1}]^{-1} B^{-1} = [A + B]^{-1}
\end{equation}
(notice that the matrices need not commute; we only need $B^{-1}$ to be symmetric).
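The matrix identity above is easy to check in finite dimensions; the following sketch verifies it for random symmetric positive-definite stand-ins for the functional matrices $A$ and $B$:

```python
import numpy as np

def spd(rng, n):
    # random symmetric positive-definite matrix, a finite-dimensional
    # stand-in for the functional matrices A, B of the identity
    X = rng.normal(size=(n, n))
    return X @ X.T + n * np.eye(n)

rng = np.random.default_rng(1)
n = 6
A, B = spd(rng, n), spd(rng, n)

# B^{-1} - B^{-1} [A^{-1} + B^{-1}]^{-1} B^{-1} = [A + B]^{-1}
Binv = np.linalg.inv(B)
lhs = Binv - Binv @ np.linalg.inv(np.linalg.inv(A) + Binv) @ Binv
rhs = np.linalg.inv(A + B)
assert np.allclose(lhs, rhs)
```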
Next, we write the quadratic term $M_0$ in the bare action $S_0$ as
\begin{equation}
K^{-1}_0 \mn\Delta^{-1} = [K_0 - K_{\mn\Lambda} + K_{\mn\Lambda}]^{-1}\mn\Delta^{-1} = [\mn\Delta(K_0 - K_{\mn\Lambda}) + \mn\Delta K_{\mn\Lambda}]^{-1},
\end{equation}
thereby identifying $A$ and $B$. We use the identity above and afterward relabel $\phi_1 \to \varphi$, $\phi_2 \to \phi$. The integral over $\varphi$ we denote by
\begin{equation} \label{lowmode_ampconn}
\mathrm{e}^{-A_\mn\Lambda(\phi)} = \int \mathscr{D} \varphi \exp\Big[-\frac{1}{2} ( \varphi, \mn\Delta_{\mn\Lambda\mLam_0}^{-1} \varphi) - V(\varphi + \phi) \Big],
\end{equation}
where $\mn\Delta_{\mn\Lambda\mLam_0} = (K_0 - K_{\mn\Lambda})\mn\Delta$ is the so-called \textit{high-mode propagator}, because $\mn\Delta_{\mn\Lambda\mLam_0}(p) \to 0$ rapidly for $p \ll \mn\Lambda$, while $\mn\Delta_{\mn\Lambda\mLam_0}(p) \approx K_0(p) \mn\Delta(p)$ for $\mn\Lambda \ll p$. Complementarily, we may regard $\mn\Lambda$ as a sliding infrared cutoff for the bare theory. We have therefore avoided the nonanalytic division of sharp high-mode elimination techniques $\phi = \phi_< + \phi_>$. The $\phi$ field in the argument of $V$ is sometimes called a \textit{background field} with respect to the $\varphi$ action. For our purposes, however, we will show that the functional $A_\mn\Lambda(\phi)$ is in fact the generating functional of free-propagator-amputated connected $n$-point functions.
To understand this, we derive another functional identity \cite{Kopietz:2010zz}. For arbitrary action $S(\chi) = \frac{1}{2} ( \chi, M \chi) + V(\chi)$, define
\begin{equation}
\mathrm{e}^{-A(\eta)} := \frac{1}{Z_0} \int \mathscr{D} \chi \exp \Big[-\frac{1}{2} ( \chi, M \chi) - V(\chi + \eta) \Big].
\end{equation}
Let $\chi = \chi' - \eta$. By expanding the quadratic term (for symmetric $M$),
\begin{equation}
\frac{1}{2} ( \chi, M \chi) = \frac{1}{2} ( \chi', M \chi') - ( \chi', M \eta) + \frac{1}{2} ( \eta, M \eta),
\end{equation}
we observe that the $\chi'$-integral is just the generating functional of disconnected functions $Z(J)|_{J = M\eta}$, and is therefore the exponential of the connected generator $W(J)|_{J=M\eta}$. Hence
\begin{equation} \label{MAW}
A(\eta) = \frac{1}{2} ( \eta, M \eta) - W(M\eta).
\end{equation}
Letting $M = \mn\Delta^{-1}$ and taking two functional derivatives, we find
\begin{equation}
A^{(2)} = \mn\Delta^{-1} - \mn\Delta^{-1} W^{(2)} \mn\Delta^{-1}.
\end{equation}
The second term is the connected 2-point function with external free propagators divided out, or ``amputated.'' Differentiating $n>2$ times, one finds a relation best expressed in tensor notation,
\begin{equation}
A^{(n)}(\eta, \dots , \eta) = - W^{(n)}(\mn\Delta^{-1}\eta, \dots , \mn\Delta^{-1}\eta).
\end{equation}
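The identity of eq. (\ref{MAW}) can be verified explicitly in a zero-dimensional toy model, where the functional integral collapses to an ordinary integral. The following sketch uses an arbitrary quartic potential and illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional toy model: S(chi) = (1/2) M chi^2 + V(chi), so all
# "functional" integrals become ordinary 1d integrals (M, lam illustrative).
M, lam = 1.0, 0.5
V = lambda c: lam * c**4 / 24.0

Z0 = quad(lambda c: np.exp(-0.5 * M * c**2), -np.inf, np.inf)[0]

def A_direct(eta):
    # A(eta) from its defining integral over the fluctuation field
    I = quad(lambda c: np.exp(-0.5 * M * c**2 - V(c + eta)),
             -np.inf, np.inf)[0]
    return -np.log(I / Z0)

def W(J):
    # connected generator W(J) = log[Z(J)/Z0] in the same normalization
    I = quad(lambda c: np.exp(-0.5 * M * c**2 - V(c) + J * c),
             -np.inf, np.inf)[0]
    return np.log(I / Z0)

# the identity A(eta) = (1/2) M eta^2 - W(M eta)
for eta in (0.0, 0.3, -0.7):
    assert abs(A_direct(eta) - (0.5 * M * eta**2 - W(M * eta))) < 1e-6
```

The same shift of integration variable used in the derivation is what makes the two sides agree here, integral by integral.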
We collect together all the information we have just uncovered, starting with the bare theory on the l.h.s., in the form
\begin{equation}
\int \mathscr{D} \varphi \; \mathrm{e}^{-S_0(\varphi)} = Z = \int \mathscr{D} \phi \; \mathrm{e}^{- S_\mn\Lambda(\phi) },
\end{equation}
where the \textit{low-mode effective action} $S_\mn\Lambda$ has been defined:
\begin{equation} \label{lowmode_action}
S_\mn\Lambda(\phi) := \frac{1}{2} ( \phi, \mn\Delta_{\mn\Lambda}^{-1} \phi) + A_\mn\Lambda(\phi).
\end{equation}
Thus, we have an exact functional expression for the low-mode effective action. Because it is given in terms of the generator $A_\mn\Lambda$, we also know how to systematically compute it in perturbation theory: compute the amputated-connected $n$-point functions determined by a theory with (high-mode) action
\begin{equation}
S_{\mn\Lambda \mn\Lambda_0} (\varphi) = \frac{1}{2} ( \varphi, \mn\Delta_{\mn\Lambda\mLam_0}^{-1} \varphi) + V_0(\varphi).
\end{equation}
Furthermore, every transformation we performed was \textit{passive}, and therefore the observables $\langle \mathcal{O}(\varphi) \rangle_{S_0}$ are unchanged, except that $\mathcal{O}(\varphi) \to \mathcal{O}'(\phi)$ are not generally of the same functional form. If $\mathcal{O}(\varphi)$ is some polynomial, then $\mathcal{O}'(\phi)$ will generally be a different polynomial. This is an instance of what is referred to as \textit{operator mixing}, a phenomenon we have seen already in the context of scaling operators. Lastly, notice that for any nontrivial theory (i.e. interacting, $V_0 \neq 0$), $A_\mn\Lambda(\phi)$ contains nonvanishing ``coefficients'' $A^{(n)} \neq 0 \; \forall n$, each of which has the same symmetries as the bare action. It follows that the effective action contains every possible term consistent with the symmetry of the bare theory, indeed an infinite number of terms. This generic feature is an integral aspect of the phenomenological approach to effective field theory, which we will discuss below.
\subsection{Effective couplings}
We now consider as an example the case of $\phi^4_d$ theory with bare action in momentum space given by
\begin{equation}
S_0(\varphi) = \frac{1}{2} \int_p K_0^{-1}(p)[p^2 + m_0^2] \varphi(p) \varphi(-p) + \frac{\lambda_0}{4!} \int_{\boldsymbol p} \tilde \delta(p_\mathrm{tot}) \varphi(p_1) \varphi(p_2) \varphi(p_3) \varphi(p_4).
\end{equation}
Here we use for convenience the notations
\begin{equation}
\int_p = \int_{\mathbb{R}^d} \frac{\mathrm{d}^d p}{(2\pi)^d}, \quad \boldsymbol p = p_1 \oplus \cdots \oplus p_n, \quad p_\mathrm{tot} = \sum_{i=1}^n p_i, \quad \mathrm{and} \quad \tilde \delta(p) = (2\pi)^d \delta(p).
\end{equation}
The value of $n$ should be clear from context; in the quartic term above it is 4, for example. We write the functional expansion of $S_\mn\Lambda(\phi)$ as usual,
\begin{equation}
S_\mn\Lambda(\phi) = \sum_{n=0}^\infty \frac{1}{n!} S^{(n)}_\mn\Lambda(\phi, \dots , \phi).
\end{equation}
The effective couplings at scale $\mn\Lambda$ are determined by expanding the functions $S^{(n)}_\mn\Lambda(\boldsymbol p)$ about $\boldsymbol p = \boldsymbol 0$, and writing $P_i$, $i=1,\dots,dn$ for the $i^\mathrm{th}$ component of the direct sum $\boldsymbol p = p_1 \oplus \cdots \oplus p_n$:
\begin{equation}
S^{(n)}_\mn\Lambda(\boldsymbol p) = \sum_{m=0}^\infty \frac{1}{m!} g_{\mn\Lambda, i_1\cdots i_m}^{(n,m)} P_{i_1} \cdots P_{i_m}, \quad g_{\mn\Lambda, i_1\cdots i_m}^{(n,m)} = \frac{\partial^m S^{(n)}_\mn\Lambda(\boldsymbol p)}{\partial P_{i_1} \cdots \partial P_{i_m}}\Big|_{\boldsymbol p = 0}.
\end{equation}
Since the bare theory is rotationally invariant and $\mathbb{Z}_2$-symmetric, the only non-vanishing couplings have $n, \;m$ even. For simplicity, we will focus on only a few of the most important couplings.
The quadratic part of the low-mode action is given by the sum
\begin{equation}
S^{(2)}_\mn\Lambda = \mn\Delta^{-1}_\mn\Lambda + A^{(2)}_\mn\Lambda.
\end{equation}
Now, the high-mode action has a free propagator $\mn\Delta_{\mn\Lambda_0\mn\Lambda}$. In perturbation theory in the bare coupling $\lambda_0$, the amputated 2-point function is then, to first order,
\begin{equation}
A^{(2)}_\mn\Lambda(p) = \mn\Delta^{-1}_{\mn\Lambda_0\mn\Lambda}(p) - \mn\Delta^{-1}_{\mn\Lambda_0\mn\Lambda}(p) W^{(2)}_{\mn\Lambda_0\mn\Lambda}(p) \mn\Delta^{-1}_{\mn\Lambda_0\mn\Lambda}(p) = \frac{\lambda_0}{2} I^d_{\mn\Lambda_0\mn\Lambda}(m_0^2) + O(\lambda_0^2),
\end{equation}
where the snail loop is
\begin{equation}
I^d_{\mn\Lambda_0\mn\Lambda}(m_0^2) = \int_\ell \mn\Delta_{\mn\Lambda_0\mn\Lambda}(\ell) = \int_\ell \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell)}{\ell^2 + m_0^2}, \quad \delta K_{\mn\Lambda_0 \mn\Lambda} (\ell) = K_0(\ell) - K_\mn\Lambda(\ell).
\end{equation}
The leading behavior of $K_\mn\Lambda (p) S^{(2)}_\mn\Lambda(p)$ in the momenta is just $p^2 + O(\lambda_0^2)$, and therefore the kinetic term coefficient has no 1-loop contribution. Denoting that coefficient by $c_\mn\Lambda$, we have $c_\mn\Lambda = 1 + O(\lambda_0^2)$. The momentum-independent part of the function $S^{(2)}_\mn\Lambda(p)$ defines the effective mass term in $S_\mn\Lambda$:
\begin{equation} \label{eff_mass}
m_\mn\Lambda^2 = g_\mn\Lambda^{(2,0)} = m_0^2 + \frac{\lambda_0}{2} I^d_{\mn\Lambda_0\mn\Lambda}(m_0^2) + O(\lambda_0^2).
\end{equation}
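With Schwinger regulators, the snail loop reduces to a one-dimensional radial integral and can be evaluated numerically. The sketch below (with illustrative values of the scales and couplings) checks that $\delta K_{\mn\Lambda_0\mn\Lambda} \geq 0$ makes the one-loop mass shift positive for $\lambda_0 > 0$:

```python
import numpy as np
from scipy.integrate import quad

Lambda0, Lam, m0sq, lam0 = 10.0, 2.0, 0.5, 1.0   # illustrative values
Omega3 = 1.0 / (2 * np.pi**2)                    # area(S^2) / (2 pi)^3

def dK(l):
    # delta K = K_0 - K_Lambda >= 0: selects the band Lambda < l < Lambda0
    return np.exp(-l**2 / Lambda0**2) - np.exp(-l**2 / Lam**2)

def I3(msq):
    # snail loop I^3_{Lambda0 Lambda}(m0^2) as a radial integral in d = 3
    return Omega3 * quad(lambda l: l**2 * dK(l) / (l**2 + msq),
                         0, np.inf)[0]

snail = I3(m0sq)
assert snail > 0.0
m_eff_sq = m0sq + 0.5 * lam0 * snail    # effective mass to O(lambda_0)
assert m_eff_sq > m0sq
```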
The next-most important coupling is the quartic coupling, which comes from the amputated 4-point function at zero external momenta. A standard perturbative calculation gives
\begin{equation} \label{eff_coupling}
\lambda_\mn\Lambda = g_\mn\Lambda^{(4,0)} = \lambda_0 - \frac{\lambda_0^2}{2} \sum_{i = 1}^3 C^d_{\mn\Lambda_0 \mn\Lambda}( p_{\sigma_i}, m_0)|_{\boldsymbol p = 0} + O(\lambda_0^3),
\end{equation}
where
\begin{equation}
C^d_{\mn\Lambda_0 \mn\Lambda}(p, m_0) = \int_\ell \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell)}{\ell^2 + m_0^2} \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell + p)}{(\ell + p)^2 + m_0^2},
\end{equation}
and where the $p_{\sigma_i}$ are the sums $p_k + p_j$ for each distinct pairing $(k,j)$ of external momenta, without overcounting by momentum conservation $p_1 + p_2 + p_3 + p_4 = 0$. Other couplings, such as the sextic coupling $g_6$, may also be computed; if there is no such coupling in the bare action, then the low-mode couplings will be functions only of the bare mass and quartic couplings.
Momentum-dependent vertices may also be computed simply by expanding the $A_\mn\Lambda^{(n)}(\boldsymbol p)$ in $\boldsymbol p$ near zero, which may be done for nonzero bare mass. To get a feel for what these ``higher'' effective couplings look like, we consider the example of $g_6$. One finds
\begin{equation}
g^{(6,0)}_\mn\Lambda = A^{(6)}_\mn\Lambda(\boldsymbol p)|_{\boldsymbol p = 0} = \lambda_0^2 \sum_\sigma \mn\Delta_{\mn\Lambda\mLam_0}(p_\sigma) - \frac{1}{2} \lambda_0^3 \sum_{\sigma'}D^d_{\mn\Lambda\mLam_0}(p_{\sigma'}, k_{\sigma'}, m_0) \Big|_{\boldsymbol p=0},
\end{equation}
where the 1-loop integral is
\begin{equation}
D^d_{\mn\Lambda\mLam_0}(p, k, m_0) = \int_\ell \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell)}{\ell^2 + m_0^2} \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell + p)}{(\ell + p)^2 + m_0^2} \frac{\delta K_{\mn\Lambda_0 \mn\Lambda} (\ell + p + k)}{(\ell + p + k)^2 + m_0^2}.
\end{equation}
Since the high-mode propagator vanishes as $p\to 0$, the leading behavior of the effective 6-point coupling is determined by the 1-loop term, and power-counting implies it is of order $O(\lambda_0^3 / \mn\Lambda^3)$.
We remark that in the low-momentum sector $p \ll \mn\Lambda$ of the action, diagrams that are not 1PI are suppressed by terms of order $p^2/\mn\Lambda^2$, which explains why they do not contribute to the effective (zero-momentum) couplings above. This is because every non-1PI diagram contains internal propagators with no loop momenta, namely, the ones which connect 1PI pieces. Such propagators are proportional to $\delta K_{\mn\Lambda_0\mn\Lambda}(p) \approx p^2 / \mn\Lambda^2$, which means that they are highly suppressed. In sum,
\begin{equation}
A^{(n)}_{\mn\Lambda}(\boldsymbol p) \xrightarrow[\boldsymbol p \to 0]{} \Gamma^{(n)}_\mn\Lambda (\boldsymbol p) \Big|_{\boldsymbol p = 0}.
\end{equation}
We conclude that the low-momentum sector of the effective action coincides with that of the quantum effective action, which in a sense gives further justification to the name.
\subsection{RG} A renormalization group analysis concerns itself with the scale dependence of the theory and, if any exist, the fixed points in the space of possible actions. Now, we like to compare actions by comparing their coefficients. But the presence of a cutoff function in the action makes the comparison of coefficients at different scales ambiguous.
This is most clearly understood by reverting back to the sharp cutoff approach: comparing two actions $S_\mn\Lambda, \; S_{\mn\Lambda'}$ would involve comparing integrals with different limits, and worse, as $\mn\Lambda \to 0$, it would seem that all momentum integrals vanish. But, for any $S_\mn\Lambda$, by rescaling $p = \bar p \mn\Lambda$, we can normalize the integration limits to $\bar p \in [0,1]$. We can then compare two actions at different scales (almost) unambiguously. In our smooth cutoff approach, $p = \bar p \mn\Lambda$ corresponds to conventionally using $K(\bar p) = \mathrm{e}^{-\bar p^2}$ as the cutoff function.
The second ambiguity relates to the field normalization. The kinetic term in the effective action has coefficient $c_\mn\Lambda$, which is not equal to the bare coefficient $c_0$. For any field theory, the normalization of the fields is to some extent arbitrary, however. This freedom is reflected in the ability to always normalize one coupling in the action to 1. The convention in field theory is to normalize the kinetic term. Now, after performing the momentum redefinition $p = \bar p \mn\Lambda$ described above, the kinetic and mass terms in $S_\mn\Lambda$ have the form
\begin{equation}
\frac{1}{2} \mn\Lambda^d \int_{\bar p} K(\bar p)^{-1}[c_\mn\Lambda \mn\Lambda^2 \bar p^2 + m_\mn\Lambda^2] \phi(\bar p \mn\Lambda) \phi(-\bar p \mn\Lambda).
\end{equation}
One then defines the dimensionless \textit{rescaled} effective field $\mn\Phi$ by
\begin{equation}
\mn\Phi(\bar p) := c_\mn\Lambda^{1/2}\mn\Lambda^{-d_\phi} \phi(\bar p \mn\Lambda),
\end{equation}
after which the quadratic terms take the form
\begin{equation}
\frac{1}{2} \int_{\bar p} K(\bar p)^{-1}[\bar p^2 + \mn\Lambda^{-2} m_\mn\Lambda^2 / c_\mn\Lambda] \mn\Phi(\bar p) \mn\Phi(-\bar p).
\end{equation}
This suggests that the natural effective mass coupling to consider when performing an RG transformation is the dimensionless rescaled parameter
\begin{equation}
r_\mn\Lambda := m_\mn\Lambda^2 \mn\Lambda^{-2} / c_\mn\Lambda = b^2_\mn\Lambda \hat m^2_\mn\Lambda / c_\mn\Lambda,
\end{equation}
where the scale change parameter $b_\mn\Lambda := \mn\Lambda_0 / \mn\Lambda$ has been introduced, and hats denote removal of scale with $\mn\Lambda_0$, as we did in lattice theory. Similarly, the momentum and field redefinition lead to a natural redefinition of the effective quartic coupling,
\begin{equation}
u_\mn\Lambda := \mn\Lambda^{d-4} \lambda_\mn\Lambda c_\mn\Lambda^{-2}= b_\mn\Lambda^{4-d} \hat \lambda_\mn\Lambda c_\mn\Lambda^{-2}.
\end{equation}
The resulting terms in the effective action are then
\begin{equation}
S_\mn\Lambda(\phi) \supset \frac{1}{2} \int_{\bar p} K(\bar p)^{-1}[\bar p^2 + r_\mn\Lambda] \mn\Phi(\bar p) \mn\Phi(-\bar p) + \frac{u_\mn\Lambda}{4!} \int_{\bar{\boldsymbol p}} \tilde \delta(\bar{p}_\mathrm{tot}) \mn\Phi(\bar p_1) \mn\Phi(\bar p_2) \mn\Phi(\bar p_3) \mn\Phi(\bar p_4).
\end{equation}
It is then unambiguous to compare this effective action at different scales $\mn\Lambda$.
To study the possible fixed point behavior of this RG transformation, we derive ODE's for the flowing couplings $r_\mn\Lambda, u_\mn\Lambda$ perturbatively. One differentiates the effective couplings, eqs. (\ref{eff_mass}, \ref{eff_coupling}), with respect to $\mn\Lambda$, replaces bare couplings by effective couplings using their perturbative relationships, and looks for stationary points of the resulting system of ODE's.
To begin, note that
\begin{align}
\mn\Lambda \frac{\mathrm{d} m^2_ \mn\Lambda}{\mathrm{d} \mn\Lambda} & = \frac{\lambda_0}{2} \mn\Lambda \frac{\mathrm{d}}{\mathrm{d} \mn\Lambda} I^d_{\mn\Lambda_0\mn\Lambda}(m_0^2) + O(\lambda_0^2), \nonumber \\
\mn\Lambda \frac{\mathrm{d} \lambda_ \mn\Lambda}{\mathrm{d} \mn\Lambda} & = -\frac{3\lambda_0^2}{2} \mn\Lambda \frac{\mathrm{d}}{\mathrm{d} \mn\Lambda} C^d_{\mn\Lambda_0\mn\Lambda}(0, m_0) + O(\lambda_0^3).
\end{align}
Closed-form expressions for these integrals exist, but they are algebraically cumbersome. It is simplest to first compute the derivatives and then perform asymptotic expansions for $\mn\Lambda_0$ large and $\hat m_0^2$ small.\footnote{Closed-form expressions for both integrals exist in fact for any $ 2 < d \leq 4$. See \cite{Igarashi:2009tj} for examples in 4 dimensions. We also note that integrals that come up in smooth high-mode elimination allow one to understand the relationship between dimensional regularization and cutoff field theory. See \cite{Kleppe:1991ru} for some discussion of this.} One finds, keeping a few subleading terms,
\begin{align}
\mn\Lambda \frac{\mathrm{d}}{\mathrm{d} \mn\Lambda} I^3_{\mn\Lambda_0\mn\Lambda}(m_0^2) & = \Omega_3 \Big[ - \frac{\sqrt{\pi}}{2} \mn\Lambda + \sqrt{\pi} m_0^2 / \mn\Lambda + O(m_0^3 / \mn\Lambda^2) \Big] \nonumber \\
& = \Omega_3 \mn\Lambda_0 \Big[ - \frac{\sqrt{\pi}}{2} b_\mn\Lambda^{-1} + \sqrt{\pi} \hat m_0^2 b_\mn\Lambda + O(\hat m_0^3 b_\mn\Lambda^2) \Big],
\end{align}
and
\begin{align}
\mn\Lambda \frac{\mathrm{d}}{\mathrm{d} \mn\Lambda} C^3_{\mn\Lambda_0\mn\Lambda}(0, m_0) & = \Omega_3 \Big[ (\sqrt{2} -2) \sqrt{\pi} \Big( \frac{1}{\mn\Lambda} + \frac{8 m_0^2}{\mn\Lambda^3} \Big) + \frac{\sqrt{\pi} \mn\Lambda}{\mn\Lambda_0^2 } + O(m_0^3/ \mn\Lambda^4, m_0^2 / \mn\Lambda \mn\Lambda_0^2) \Big] \nonumber \\
& = \Omega_3 \mn\Lambda_0^{-1} \Big[ (\sqrt{2} -2) \sqrt{\pi} \Big( b_\mn\Lambda + 8 \hat m_0^2 b_\mn\Lambda^3 \Big) + \sqrt{\pi}b_\mn\Lambda^{-1} + O(\hat m_0^3b_\mn\Lambda^4, \hat m_0^2 b_\mn\Lambda) \Big].
\end{align}
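Both asymptotic expansions can be checked numerically by differentiating under the integral sign, since $\mn\Lambda$ enters only through $K_\mn\Lambda$. The sketch below (illustrative scales, $\mn\Lambda_0 \gg \mn\Lambda \gg m_0$) compares the exact derivatives with the leading terms above:

```python
import numpy as np
from scipy.integrate import quad

Omega3 = 1.0 / (2 * np.pi**2)
Lambda0, Lam, msq = 100.0, 5.0, 0.05    # Lambda0 >> Lambda >> m0

K = lambda l, L: np.exp(-l**2 / L**2)
dK = lambda l: K(l, Lambda0) - K(l, Lam)
# Lambda d/dLambda acts only on K_Lambda: it gives -(2 l^2 / Lam^2) K_Lam
LdL_dK = lambda l: -(2 * l**2 / Lam**2) * K(l, Lam)

# Lambda d/dLambda of I^3, differentiating under the integral sign
dI = Omega3 * quad(lambda l: l**2 * LdL_dK(l) / (l**2 + msq), 0, np.inf)[0]
dI_asym = Omega3 * np.sqrt(np.pi) * (-Lam / 2 + msq / Lam)
assert abs(dI - dI_asym) / abs(dI_asym) < 1e-3

# Lambda d/dLambda of C^3 at zero external momentum: leading 1/Lambda term
dC = Omega3 * quad(lambda l: l**2 * 2 * dK(l) * LdL_dK(l)
                   / (l**2 + msq)**2, 0, np.inf)[0]
dC_lead = Omega3 * (np.sqrt(2) - 2) * np.sqrt(np.pi) / Lam
assert abs(dC - dC_lead) / abs(dC_lead) < 0.03
```

The looser tolerance on the second check reflects the subleading $m_0^2$ and $\mn\Lambda/\mn\Lambda_0^2$ corrections kept in the expansion above.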
The next step is to replace the bare couplings by their perturbation series in $\lambda_\mn\Lambda, \; m_\mn\Lambda^2$, by inverting eqs. (\ref{eff_mass}, \ref{eff_coupling}), and then write $\lambda_\mn\Lambda, \; m_\mn\Lambda^2$ in terms of $u_\mn\Lambda, \; r_\mn\Lambda$. Noting that $\mn\Lambda \partial_\mn\Lambda = - b \partial_b$, we then compute the flow of the rescaled couplings,
\begin{align}
b \frac{\mathrm{d} r_\mn\Lambda}{\mathrm{d} b} & = 2 r_\mn\Lambda - b^2 \mn\Lambda \frac{\mathrm{d} \hat m^2_ \mn\Lambda}{\mathrm{d} \mn\Lambda}, \nonumber \\
b \frac{\mathrm{d} u_\mn\Lambda}{\mathrm{d} b} & = u_\mn\Lambda - b \mn\Lambda \frac{\mathrm{d} \hat \lambda_ \mn\Lambda}{\mathrm{d} \mn\Lambda}.
\end{align}
As $b \to \infty$, these equations asymptotically approach the system of ODE's
\begin{align}
b \frac{\mathrm{d} r}{\mathrm{d} b} & = 2 r + \alpha_1 u + O(r u, u^2), \nonumber \\
b \frac{\mathrm{d} u}{\mathrm{d} b} & = u - \alpha_2 u^2 + O(r u^2, u^3),
\end{align}
where the coefficients $\alpha_i > 0$ are
\begin{equation}
\alpha_1 = \frac{1}{8 \pi^{3/2}}, \quad \alpha_2 = \frac{3(2-\sqrt{2})}{4 \pi^{3/2}}.
\end{equation}
We immediately observe the existence of two fixed point solutions at this order in perturbation theory. The first is the gaussian fixed point $r_* = u_* = 0$, while the second is the famous \textit{Wilson-Fisher fixed point} (WFFP)\footnote{The WFFP is usually found within the epsilon expansion, but here we have worked explicitly in $d=3$.}
\begin{equation}
u_* \approx 1/ \alpha_2, \quad r_* \approx - \alpha_1 u_* / 2.
\end{equation}
The sign of the beta function $\beta(u)$ is opposite that of the discussion in the lattice theory chapter because increasing $b$ corresponds to decreasing $\mu$.
It is worthwhile to pause for a moment and summarize what has happened. We began with the bare theory $S_0$ involving field modes up to $\mn\Lambda_0$. We then performed a particular change of variables which yielded an effective theory $S_\mn\Lambda$ involving field modes up to $\mn\Lambda < \mn\Lambda_0$. A passive change of momenta and field variables, determined by removing canonical mass dimensions with the scale $\mn\Lambda$, together with a further rescaling for the field in order to normalize the kinetic term, led to a dimensionless, rescaled effective theory. The flow of this theory as $\mn\Lambda$ decreased was analyzed by studying the leading effective couplings, and it was found that as $\mn\Lambda \to 0$, the system of ODE's for these couplings had a fixed point. This is just the kind of IRFP discussed in chapter 1 in the context of block-spin RG. We note that, because the transformations involved were passive, the rescaled and unrescaled effective actions are numerically equal, so a fixed point of one is a fixed point of the other. The correlations of the rescaled theory, for example $\langle \mn\Phi \cdots \mn\Phi \rangle_\mn\Lambda$, approach the correlations of the fixed point theory as $\mn\Lambda \to 0$, whereas the correlations of the unrescaled theory $\langle \phi \cdots \phi \rangle_\mn\Lambda$ asymptotically approach a scaling determined by $c_\mn\Lambda^{1/2} \mn\Lambda^{d_\phi}$, the wave function renormalization. We emphasize the distinction between rescaled and unrescaled variables because it is quite important in lattice simulations, as we will see in chapter 4 (and as we already saw in chapter 2 with GFRG).
\subsection{Effective field theory} The fundamental assumption of effective field theory (EFT) is that \textit{all theories we currently work with and will continue to work with, up to the possible exception of a quantum theory of gravity, have a limited range of applicability, in terms of distance or energy scales}. This is certainly true of every real-world theory that has been tested to date. This means that every theory we formulate should contain within it a parameter which functions as a (possibly unknown) \textit{cutoff} above which the theory becomes invalid. The bare theory we considered above was not intended to describe any physics above $\mn\Lambda_0$, for example. Now, for condensed matter systems such as ferromagnets, the cutoff is not merely a formal parameter, but has a literal manifestation: the atomic spacing. In quantum field theory, on the other hand, all or nearly all theories under consideration take place in a continuum. But these theories nevertheless possess a cutoff, which must therefore be of some smooth kind, qualitatively similar to what was used above. In practice, examples of such cutoffs are the masses of particularly heavy particles, or symmetry-breaking scales like that of chiral perturbation theory.\footnote{Another interesting example of how a natural smooth cutoff can arise is in the interpretation of nonlocal quantization given in \cite{Namsrai:1986md}, where the cutoff function $K_0$ arises from an underlying stochastic spacetime.} The general question of how to formulate such theories based on the data we have at low energies therefore becomes of central interest in particle physics.
We noted above that the effective action $S_\mn\Lambda(\phi)$ contains all terms consistent with the symmetries of the bare theory. The action therefore contained all possible ``nonrenormalizable'' (NR) interactions once the scale was lowered even slightly from $\mn\Lambda_0$. The action $S_\mn\Lambda$ at scale $\mn\Lambda$ is an effective theory; it describes the same physics as $S_{0}$, but with lowered cutoff $\mn\Lambda$. Now, in the real world, we might not know what $S_0$ is, according to the fundamental assumption stated above. The best we can do at first is to write down some effective action with cutoff $\mn\Lambda$ as its regularization, and go perform scattering experiments, say, at energy $p$; some of these measurements are used to set the renormalized couplings. But we know that this action typically contains all sorts of interactions, including the NR ones, so we have to also set those by experiment too. The reason our effective theory remains predictive is the fact that, to any given order in $p^2/\mn\Lambda^2$, only a \textit{finite} number of NR interactions must be set (see \cite{Lepage:1989hf} for a detailed explanation). And once set, we can produce predictions for all \textit{other} processes to that order. As we approach energies closer to $\mn\Lambda$, the number of NR interactions we must set will proliferate, and the theory will break down. In many cases, therefore, we can \textit{estimate} the breakdown scale by measuring the strength of the NR interactions.
In some cases we know the bare theory, in others we do not. We know that QCD is the high-energy theory (or ``UV completion'') whose low-energy interactions involve mesons and nucleons. We have to use the methods of EFT to describe the low-energy processes of QCD, however, because perturbation theory breaks down at low energies as the gauge coupling becomes strong.\footnote{This ``low-energy'' scale corresponds to $\mn\Lambda$ in the discussion above; it arises from the dynamics of QCD. In this context, $\mn\Lambda_0$ refers instead to whatever the cutoff of QCD might be.} This EFT is called \textit{chiral perturbation theory}. It is widely expected that the Standard Model itself is an effective theory, and experimental measurement of the NR-interactions allows for predictions of its breakdown scale. Lastly, we remark that even quantum gravity can be treated in an effective manner, because whatever its correct description might be at very high energies (the \textit{Planck scale} $M_\mathrm{pl}$), it is still sensible to use the (nonrenormalizable) theory obtained by direct quantization of General Relativity, expanded about a flat spacetime metric, for processes at scales far, far below $M_\mathrm{pl}$. For an introduction to EFT in QED and the Standard Model, we refer the reader to \cite{Lepage:1989hf}, to \cite{Petrov:2016azi} for a more systematic account (including gravity, chiral perturbation theory, and non-relativistic EFT's), and for a rigorous exposition of the existence of effective scalar field theory in 4 dimensions, to \cite{Ball:1993zy}.
\section{Exact RG equations}
We now turn to the derivation of the so-called \textit{exact} RG equations which are studied in the enterprise of functional RG (FRG). We will discuss the nature of fixed points of such transformations, finding a close parallel with the block-spin analysis of chapter 1, and a few examples will be given along the way. Before plunging into these derivations, we first describe some of the early history of FRG.
On 2 June 1971, Wilson's paper \cite{Wilson:1971dh} was received, in which his approximate RG recursion formula for blocking transformations was introduced. In October, his paper with Fisher \cite{Wilson:1971dc} on the epsilon expansion was submitted. In this paper they described the recursion formula in $4-\epsilon$ dimensions.
By 27 October 1972, Wegner and Houghton \cite{Wegner:1972ih} had derived a differential equation for the ``blocked'' Hamiltonian using a sharp cutoff, which implied continuous versions of the recursion formulas of Wilson and Fisher. The paper was published in July 1973, the same month that Wilson and Kogut's review \cite{Wilson:1973jj} of the epsilon-expansion was received. Deep in their grand review, on the $74^\mathrm{th}$ page, a differential equation for the blocked Hamiltonian was presented which, like Wegner and Houghton's, involved a sharp cutoff for the bare theory, but which utilized a \textit{smooth suppression} of high modes, rather than a sharp elimination, in a manner similar to what we saw under smooth high-mode elimination above. They distinguished low modes from high modes by referring to the former as ``not terribly integrated,'' and the latter as ``almost completely integrated''; see figure \ref{fig:WK_plot}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{not_terribly_integrated.png}
\caption{\small{Wilson and Kogut's qualitative depiction of the difference between sharp and smooth high-mode elimination under exact RG transformations. Adapted from \cite{Wilson:1973jj}. \label{fig:WK_plot}}}
\end{figure}
In the review, Wilson notes that he presented the exact RG equations at a conference at Irvine in 1970 \cite{Wilson:1973jj}. It is possible to imagine Wilson having put them away while pursuing the more tractable approach provided by the epsilon-expansion, as he goes on to state that ``these equations are very complicated so they will not be discussed in great detail.'' Nevertheless, the works of Wilson, Kogut, Wegner, and Houghton constituted the first instances of functional RG equations, which differ in kind from the recursion formulas and epsilon-expansion by tracking the evolution of the effective action, \textit{as a whole}, rather than a small number of couplings. Thus we see that FRG was developed essentially simultaneously with modern RG theory. In the ensuing decade, the subject was advanced somewhat slowly, as the Callan-Symanzik approach combined with epsilon expansion (and/or dimensional regularization) proved its worth, having been used to demonstrate the asymptotic freedom of Yang-Mills theories and QCD \cite{Gross:1973id, Politzer:1973fx,tHooft:1998qmr}.
Although a small number of researchers continued to work on FRG during this time, it is fair to say that FRG remained somewhat stagnant. Its revival did not come until the late 80's and early 90's after the works of Polchinski \cite{Polchinski:1983gv}, Hasenfratz and Hasenfratz \cite{Hasenfratz:1985dm}, and Wetterich \cite{Ringwald:1989dz,Wetterich:1989xg} recalled the work of the early days and proposed new methods and applications of the FRG.
\subsection{Wegner's approach} The following derivation is based on that of Wegner \cite{Wegner:1974} and that of Rosten \cite{Rosten:2010vm}. We have seen that, because the RG transformations are passive, the partition functions ``at different scales'' are equal. Let us imagine we have transformed down to scale $\mn\Lambda$, and then perform a small
transformation such that the new scale is $\mn\Lambda - \delta \mn\Lambda$. Labeling the partition functions with respect to the effective cutoffs in their actions, we can characterize the invariance condition as
\begin{equation}
\frac{\mathrm{d} Z_\mn\Lambda}{\mathrm{d} \mn\Lambda} = \lim_{\delta \mn\Lambda \to 0} \frac{1}{\delta \mn\Lambda} \Big( Z_\mn\Lambda - Z_{\mn\Lambda - \delta \mn\Lambda} \Big) = 0.
\end{equation}
The second term in the limit definition is
\begin{equation}
Z_{\mn\Lambda - \delta \mn\Lambda} = \int \mathscr{D} \phi \; \mathrm{e}^{-S_{\mn\Lambda - \delta \mn\Lambda}(\phi)}.
\end{equation}
We imagine that the effective action at $\mn\Lambda-\delta\mn\Lambda$ was obtained by a transformation of field variables $\phi'$, a continuous analog of a blocking transformation,
\begin{equation} \label{cov}
\phi = \phi' - \delta \tau \Psi_\mn\Lambda(\phi') + O(\delta \tau^2),
\end{equation}
where $\phi'$ was the field at scale $\mn\Lambda$, and $\delta\tau = - \delta \mn\Lambda / \mn\Lambda = \delta \ln \mn\Lambda^{-1}$ (obtained from $\tau = \ln \mn\Lambda_0 / \mn\Lambda$), so that decreasing $\mn\Lambda$ corresponds to increasing $\tau$. For example, the new fields might have been obtained from the old ones by a local smoothing transformation which damps high modes,\footnote{We will see that this transformation is in fact not sufficient as an RG transformation. The form of the transformation corresponding to the high-mode elimination RG we considered earlier is given in eq. (\ref{highmode_psi}).}
\begin{equation}
\phi(p) = \phi'(p) - \delta \tau p^2 \phi'(p) + O(\delta \tau^2).
\end{equation}
The functional vector $\Psi_\mn\Lambda$ is sometimes called the ``flow-vector.'' Introducing a blackboard bold gradient symbol for the functional derivative, the notation\footnote{The \LaTeX \; command for this symbol is available upon request.}
\begin{equation}
\Bnab_\phi = \frac{\delta}{\delta \phi}
\end{equation}
will be used in what follows, often omitting the $\phi$ subscript when it is not too confusing. The expansion of the action in $\delta \tau$ is then
\begin{equation}
S_{\mn\Lambda - \delta \mn\Lambda}(\phi' - \delta \tau \Psi_\mn\Lambda(\phi')) = S_\mn\Lambda(\phi') + \delta \tau \mn\Lambda \partial_\mn\Lambda S_\mn\Lambda (\phi') - \delta \tau \Psi_\mn\Lambda(\phi') \circ \Bnab S_\mn\Lambda (\phi') +O(\delta \tau^2 ).
\end{equation}
In general, the measure will change as well:
\begin{equation}
\mathscr{D} \phi = \mathscr{D} \phi' \det\Big[\mathbb{I} - \delta \tau \Bnab \otimes \Psi_\mn\Lambda(\phi') \Big] = \mathscr{D} \phi' \Big[ 1 - \delta \tau \; \Bnab \circ \Psi_\mn\Lambda(\phi') + O(\delta \tau^2) \Big].
\end{equation}
From the invariance of the partition function above, and because the Boltzmann factor is positive, it suffices to equate the integrand itself to zero. This yields the Wegner flow equation, dropping primes,
\begin{equation}
- \mn\Lambda \partial_\mn\Lambda S_\mn\Lambda (\phi) = - \Psi_\mn\Lambda(\phi) \circ \Bnab S_\mn\Lambda (\phi) + \Bnab \circ \Psi_\mn\Lambda(\phi).
\end{equation}
Wegner suggested that the flow equation \textit{must} be nonlinear in $S_\mn\Lambda$ in order to constitute an RG transformation, meaning that $\Psi_\mn\Lambda$ must depend on $S_\mn\Lambda$. We will attempt to give an explanation for \textit{why} in what follows. We then choose the form (which can be called ``diffusive'' for reasons discussed later on)
\begin{equation} \label{diffusive_ERGE}
\Psi_\mn\Lambda (\phi) = \frac{1}{2} C_\mn\Lambda \Bnab S_\mn\Lambda(\phi) - B_\mn\Lambda(\phi),
\end{equation}
where $C_\mn\Lambda$ is some appropriately chosen (positive) cutoff function, which is a functional matrix, and $B_\mn\Lambda$ is a functional vector. In Rosten's analysis, $B_\mn\Lambda = C_\mn\Lambda \Bnab \hat S_\mn\Lambda$ where $\hat S_\mn\Lambda$ is the ``seed action.'' The resulting flow equation is (abusing matrix notation slightly by factoring out $C_\mn\Lambda$)
\begin{equation} \label{ERGES}
- \mn\Lambda \partial_\mn\Lambda S_\mn\Lambda = \frac{1}{2} C_\mn\Lambda \Big[ - \Bnab S_\mn\Lambda \circ \Bnab S_\mn\Lambda + \Bnab^2 S_\mn\Lambda \Big] + B_\mn\Lambda \circ \Bnab S_\mn\Lambda - \Bnab \circ B_\mn\Lambda.
\end{equation}
This equation will generally be called the ``exact RG equation'' (ERGE). Most of the ERGE's for effective actions considered in the literature so far have been of this form. The most common choices for scalar theories are $B_\mn\Lambda = 0$ or $B_\mn\Lambda \propto \phi$. When $B_\mn\Lambda$ is a polynomial of higher order in $\phi$, the RG transformation is called \textit{nonlinear}. Wilson and Bell considered an example of such a flow in 1974, but very few others have been studied analytically (spin models with the constraint $|\phi| = 1$, however, induce a nonlinearity, which is implemented numerically by projections). We shall look at nonlinear RG's in chapter 4.
We can understand intuitively what the ERGE is doing by considering the Euler approximation of the PDE,
\begin{equation}
S_{\mn\Lambda - \delta \mn\Lambda} = S_\mn\Lambda + \frac{\delta \mn\Lambda}{\mn\Lambda} \Bigg( \frac{1}{2} C_\mn\Lambda \Big[ - \Bnab S_\mn\Lambda \circ \Bnab S_\mn\Lambda + \Bnab^2 S_\mn\Lambda \Big] + B_\mn\Lambda \circ \Bnab S_\mn\Lambda - \Bnab \circ B_\mn\Lambda \Bigg).
\end{equation}
If the action $S_\mn\Lambda$ is more than quadratic in $\phi$, the $(\Bnab S_\mn\Lambda)^2$ term generates higher-polynomial terms, the $\Bnab^2 S_\mn\Lambda$ term generates lower-polynomial terms, and depending on the choice of $B_\mn\Lambda$, the last two terms can generate new terms and/or modify existing ones. For example, with a $\phi^2 + \phi^4$ type action, the effective action after one step would have modified quadratic and quartic couplings and a newly generated sextic term. This kind of generation and mixing of terms in the action is what one expects of generic RG transformations. If $\Psi_\mn\Lambda$ had been chosen to be independent of $S_\mn\Lambda$, then this behavior would not have occurred (at least for linear $B_\mn\Lambda$); the effective action might have terms generated by $B_\mn\Lambda$, but there would be no feedback of $S_\mn\Lambda$ on itself. We take this as justifying Wegner's instinct.
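To see this term generation concretely, the following zero-dimensional toy (a sketch, not the functional equation itself: the ``action'' is an ordinary polynomial in a single variable $\phi$, with $B_\mn\Lambda = 0$ and a constant, hypothetical $C$) iterates one Euler step and exhibits the sextic term produced by $(\Bnab S_\mn\Lambda)^2$:

```python
# Zero-dimensional toy of one Euler step of the ERGE: the "action" is a
# polynomial in a single variable phi, and we apply
#   S -> S + eps * ( (1/2)*C*( -(S')^2 + S'' ) )
# with B = 0 and constant C.  Polynomials are stored as {power: coefficient}.

def deriv(poly):
    """Derivative of a polynomial given as a {power: coeff} dict."""
    return {n - 1: n * c for n, c in poly.items() if n > 0}

def mul(p, q):
    """Product of two polynomials."""
    out = {}
    for n, a in p.items():
        for m, b in q.items():
            out[n + m] = out.get(n + m, 0.0) + a * b
    return out

def euler_step(S, eps, C=1.0):
    Sp, Spp = deriv(S), deriv(deriv(S))
    rhs = {n: 0.5 * C * c for n, c in Spp.items()}        # +(1/2) C S''
    for n, c in mul(Sp, Sp).items():
        rhs[n] = rhs.get(n, 0.0) - 0.5 * C * c            # -(1/2) C (S')^2
    return {n: S.get(n, 0.0) + eps * rhs.get(n, 0.0)
            for n in set(S) | set(rhs)}

S0 = {2: 0.5, 4: 0.1}            # g2 phi^2 + g4 phi^4
S1 = euler_step(S0, eps=0.01)
# The quadratic and quartic couplings shift, and (S')^2 generates a sextic
# term (plus a field-independent vacuum-energy constant):
print({n: round(c, 6) for n, c in sorted(S1.items())})
```

Iterating `euler_step` generates ever-higher polynomial terms, the zero-dimensional shadow of the mixing described above.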
In terms of the Boltzmann weight $\rho_\mn\Lambda = \mathrm{e}^{-S_\mn\Lambda}/Z_\mn\Lambda$, the flow equation can be written as
\begin{equation}
\mn\Lambda \partial_\mn\Lambda \rho_\mn\Lambda = \Bnab \circ ( \Psi_\mn\Lambda \rho_\mn\Lambda ).
\end{equation}
The flow vector $\Psi_\mn\Lambda$ typically depends on $\rho_\mn\Lambda$, so this form is not very useful. For the choice of diffusive $\Psi_\mn\Lambda$ above, it becomes
\begin{equation}
- \mn\Lambda \partial_\mn\Lambda \rho_\mn\Lambda = \frac{1}{2} C_\mn\Lambda \Bnab^2 \rho_\mn\Lambda + \Bnab \circ ( B_\mn\Lambda \rho_\mn\Lambda).
\end{equation}
This equation is of the form of a Fokker-Planck equation with diffusion matrix $C_\mn\Lambda$ and drift $- B_\mn\Lambda$, an observation which forms the basis of stochastic renormalization group, which we explore in the last chapter. We also see the reason for calling the choice eq. (\ref{diffusive_ERGE}) ``diffusive'': the Fokker-Planck equation describes the evolution of a diffusion process.
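As a minimal single-variable illustration of this correspondence (all parameter values hypothetical), one can simulate the Langevin dynamics $\mathrm{d}\phi = -B\phi\,\mathrm{d}\tau + \sqrt{C}\,\mathrm{d}W$ by Euler--Maruyama and check that the stationary variance matches that of the gaussian stationary solution, $C/(2B)$, of the corresponding Fokker-Planck equation:

```python
import math, random

random.seed(3)

# Euler-Maruyama simulation of the one-variable Langevin equation
#   d phi = -B * phi * d tau + sqrt(C) * dW,
# whose Fokker-Planck equation has diffusion C and drift -B*phi --- the
# single-mode analog of the flow equation above.  Its stationary density
# is gaussian with variance C / (2B).
B, C = 1.5, 2.0
dt, nsteps, nwalkers = 2e-3, 1500, 4000

phis = [0.0] * nwalkers
for _ in range(nsteps):
    phis = [p - B * p * dt + math.sqrt(C * dt) * random.gauss(0.0, 1.0)
            for p in phis]

var = sum(p * p for p in phis) / nwalkers
print(var, C / (2 * B))   # the two numbers should agree to a few percent
```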
In the section on high-mode elimination, we observed the importance of considering rescaled effective degrees of freedom when searching for RG fixed points. To implement the effect of rescaling on the level of the flow equations, we can change variables in the ERGE above. From the definitions $p = \bar p \mn\Lambda$ and $\mn\Phi(\bar p) = \mn\Lambda^{-d_\phi} \phi(\bar p \mn\Lambda)$, we can replace functional derivatives using
\begin{equation}
\frac{\delta}{\delta \phi(p)} = \int_{\bar k} \frac{\delta \mn\Phi(\bar k)}{\delta \phi(p)} \frac{\delta}{\delta \mn\Phi(\bar k)} = \mn\Lambda^{-d - d_\phi} \int_{k} \frac{\delta \phi(k)}{\delta \phi(p)} \frac{\delta}{\delta \mn\Phi(\bar k)} = \mn\Lambda^{-d - d_\phi} \frac{\delta}{\delta \mn\Phi(\bar p)}.
\end{equation}
The second-derivative term then becomes
\begin{equation}
\frac{1}{2} \Bnab_\mn\Phi \circ \overline C_\mn\Lambda \Bnab_\mn\Phi \rho_\mn\Lambda,
\end{equation}
where $\overline C_\mn\Lambda(\bar p) = \mn\Lambda^2 C_\mn\Lambda(\bar p \mn\Lambda)$. The seed term similarly becomes
\begin{equation}
\Bnab_\mn\Phi \circ ( \overline B_\mn\Lambda \rho_\mn\Lambda),
\end{equation}
with $\overline B_\mn\Lambda(\mn\Phi; \bar p) = \mn\Lambda^{-d_\phi} B_\mn\Lambda(\mn\Lambda^{d_\phi} \mn\Phi(\bar p); \bar p \mn\Lambda)$. The distribution $\bar \rho_\mn\Lambda(\mn\Phi)$ of the rescaled fields is equal to $\rho_\mn\Lambda(\phi)$ up to an overall normalization, but because they have different dependence on $\mn\Lambda$ at fixed field argument, the $\mn\Lambda$-derivative on the l.h.s. of the ERGE changes. Write the distribution as $\rho_\mn\Lambda(\phi) = \rho(\mn\Lambda, \phi)$. Then the relation of the two distributions is
\begin{equation}
\rho(\mn\Lambda, \phi) = \bar \rho(\mn\Lambda, \mn\Phi) \Big|_{\mn\Phi(\bar p) = \mn\Lambda^{-d_\phi} \phi(\bar p \mn\Lambda)},
\end{equation}
and the derivative becomes
\begin{align}
\frac{\partial}{\partial \mn\Lambda} \rho (\mn\Lambda, \phi) = & \frac{\partial}{\partial \mn\Lambda} \bar \rho (\mn\Lambda,\mn\Phi) + \int_{\bar p} \frac{\delta \bar \rho (\mn\Lambda, \mn\Phi)}{\delta \mn\Phi(\bar p)} \frac{\partial}{\partial \mn\Lambda} \big[ \mn\Lambda^{-d_\phi} \phi(\bar p \mn\Lambda) \big] \nonumber \\
= & \frac{\partial}{\partial \mn\Lambda} \bar \rho (\mn\Lambda,\mn\Phi) - \mn\Lambda^{-1} \int_{\bar p} \big(d_\phi - \bar p \cdot \nabla_{\bar p}\big) \mn\Phi(\bar p) \frac{\delta \bar \rho (\mn\Lambda,\mn\Phi)}{\delta \mn\Phi(\bar p)}.
\end{align}
The operator
\begin{equation}
(D \mn\Phi) \circ \Bnab = \int_{\bar p} \big(d_\phi - \bar p \cdot \nabla_{\bar p}\big) \mn\Phi(\bar p) \frac{\delta}{\delta \mn\Phi(\bar p)}
\end{equation}
is a representation of the dilatation generator on functionals. If the field rescaling involves the full wave function renormalization, $\zeta_\mn\Lambda = \mn\Lambda^{d_\phi} c_\mn\Lambda^{1/2}$, then
\begin{equation}
d_\phi \longrightarrow \Delta_\phi(\mn\Lambda) = \mn\Lambda \frac{\mathrm{d} \ln \zeta_\mn\Lambda}{ \mathrm{d} \mn\Lambda},
\end{equation}
in $D$, while the cutoff function and drift terms become
\begin{equation}
\overline C_\mn\Lambda(\bar p) = \mn\Lambda^{-d} \zeta_\mn\Lambda^{-2} C_\mn\Lambda(\bar p \mn\Lambda), \quad \overline B_\mn\Lambda(\mn\Phi;\bar p) = \zeta_\mn\Lambda^{-1} B_\mn\Lambda(\zeta_\mn\Lambda \mn\Phi; \bar p \mn\Lambda).
\end{equation}
The rescaled flow equation for $\bar \rho_\mn\Lambda$ is then
\begin{equation}
- \mn\Lambda \partial_\mn\Lambda \bar \rho_\mn\Lambda = \frac{1}{2} \overline C_\mn\Lambda \Bnab^2 \bar \rho_\mn\Lambda + \Bnab \circ ( \overline B_\mn\Lambda \bar \rho_\mn\Lambda ) - D\mn\Phi \circ \Bnab \bar \rho_\mn\Lambda.
\end{equation}
The dilatation generator can be thought of as a redefinition of $B_\mn\Lambda$, but we will keep them separate in our presentation. It has been suggested that $C_\mn\Lambda$ and $B_\mn\Lambda$ should be chosen such that $\overline C_\mn\Lambda$ and $\overline B_\mn\Lambda$ are independent of $\mn\Lambda$. I do not regard this as an essential ingredient of RG transformations, but only one of practical benefit. Solving this equation directly is generally difficult, but can be done exactly in a few simple cases. However, it is interesting that the equation obtained by setting $\partial_\mn\Lambda \bar \rho_\mn\Lambda = 0$, in principle, determines the RG fixed point action. Apart from the approximations made in FRG (described at the end of this chapter), one can attempt to apply a \textit{functional} method of characteristics \cite{Dahmen:1972jx}, an approach which reproduces the solution of many linear RG's.
By direct differentiation of the formula for the effective low-mode action, eq. (\ref{lowmode_ampconn}), we can obtain the ERGE for high-mode elimination as performed in section 2. One finds that it has the form of eq. (\ref{ERGES}) with diffusion and drift
\begin{equation}
C_\mn\Lambda(p) = \frac{4p^2 K_\mn\Lambda(p)}{p^2 + m_0^2}, \quad B_\mn\Lambda(\phi; p) = 2 p^2 \phi(p),
\end{equation}
and therefore the flow vector is
\begin{equation} \label{highmode_psi}
\Psi_\mn\Lambda(p) = \frac{2p^2 K_\mn\Lambda(p)}{p^2 + m_0^2} \frac{\delta S_\mn\Lambda(\phi)}{\delta \phi(p)} - 2 p^2 \phi(p),
\end{equation}
which determines the appropriate type of change of variables eq. (\ref{cov}) in this case, although we did not need to perform it in this way to study the effective action.
\subsection{Constraint functionals} Wilson and Kogut (WK) did not follow the approach above to arrive at their functional RG equation \cite{Wilson:1973jj}. They began, rather, with an analogy to the Green function solution of partial differential equations, generalizing the notion to functional equations. They noted that the functional\footnote{WK use an unconventional field $\phi$ with mass dimension $d/2$, i.e., their kinetic term coefficient is dimensionful. But they work in units such that $\mn\Lambda_0 = 1$ and integrals have a sharp cutoff.}
\begin{equation}
G_\tau(\phi,\varphi) = \mathcal{N}_\tau \exp \Big[ - \frac{1}{2} \int_p \mn\Lambda_0^2 \frac{\big(\phi(p) - \mathrm{e}^{-\alpha_\tau(p)} \varphi(p) \big)\big(\phi(-p) - \mathrm{e}^{-\alpha_\tau(-p)} \varphi(-p)\big)}{1 - \mathrm{e}^{-2 \alpha_\tau(p)}} \Big],
\end{equation}
where $\alpha_\tau(p) = p^2 (\mathrm{e}^{2\tau} - 1) + \beta(\tau)$ for some as-yet undetermined $\beta(\tau)$, is the Green functional of the PDE
\begin{equation}
\fdelAB{\rho_\tau (\phi)}{\alpha_\tau(p)} = \frac{\delta}{\delta \phi(p)} \Big[ \frac{\delta \rho_\tau(\phi)}{\delta \phi(-p)} + \phi(p) \rho_\tau(\phi) \Big]
\end{equation}
subject to the initial condition $\rho_0(\phi) = \delta(\phi - \varphi)$, when the normalization $\mathcal{N}_\tau$ is chosen appropriately. For arbitrary initial condition $\rho_0(\varphi)$, the solution would then be
\begin{equation}
\rho_\tau(\phi) = \int \mathscr{D} \varphi \; G_\tau(\phi,\varphi) \; \rho_0(\varphi).
\end{equation}
In words, the distribution of fields $\phi$ is a gaussian smearing of the initial distribution $\rho_0$, such that the mean value of the field $\phi(p)$ is $\mathrm{e}^{-\alpha_\tau(p)} \varphi(p)$, within a variance determined by the denominator in the exponent. Thus, the Green functional can be thought of as imposing a statistical constraint which suppresses the high modes of $\varphi$ in a smooth fashion.\footnote{RG transformations which employ a delta function rather than $G_\tau$ are sometimes used. Traditional spin-blocking RG is an example. But one must be careful if using delta functions in the continuum, as we discuss in chapter 4.} If the Green functional is such that
\begin{equation}
\int\mathscr{D} \phi \; G_\tau(\phi,\varphi) = \text{independent of $\varphi$},
\end{equation}
then one can insert the l.h.s.\ (a harmless constant) into the partition function, interchange the orders of integration, integrate over $\varphi$, and obtain $\rho_\tau(\phi)$ as the new Boltzmann weight. Such a procedure seems different from that of Wegner, on the face of it, but we will see below that WK's route is a special case of Wegner's. From the relation
\begin{equation}
\frac{\partial \rho_\tau(\phi)}{\partial \tau} = \int_p \frac{\mathrm{d} \alpha_\tau(p)}{\mathrm{d} \tau} \fdelAB{\rho_\tau (\phi)}{\alpha_\tau(p)},
\end{equation}
one obtains the ERGE
\begin{equation}
\frac{\partial \rho_\tau(\phi)}{\partial \tau} = \int_p \dot \alpha_\tau(p) \frac{\delta}{\delta \phi(p)} \Big[ \frac{\delta \rho_\tau(\phi)}{\delta \phi(-p)} + \phi(p) \rho_\tau(\phi) \Big].
\end{equation}
By comparison with Wegner's formalism in the previous section, we see that WK's exact RG is a special case of diffusive FRG with\footnote{The ``cutoff function'' $C_\tau$ is not truly a cutoff function for Wilson and Kogut. They use a sharp cutoff on all momentum integrals, $|p| \in [0,\mn\Lambda_0]$.}
\begin{equation}
B_\tau(\phi;p) = \dot \alpha_\tau(p) \phi(p), \quad C_\tau(p) = 2 \dot \alpha_\tau(p).
\end{equation}
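For a single mode, WK's Green functional reduces to the transition density of an Ornstein--Uhlenbeck process: a gaussian in $\phi$ with mean $\mathrm{e}^{-\alpha}\varphi$ and variance $1-\mathrm{e}^{-2\alpha}$. The following sketch checks by finite differences that this density solves the one-variable version of the PDE above; the evaluation point $(x,y,\alpha)$ is arbitrary:

```python
import math

def G(x, y, alpha):
    """Single-mode analog of the WK Green functional: a normalized gaussian
    with mean exp(-alpha)*y and variance 1 - exp(-2*alpha)."""
    v = 1.0 - math.exp(-2.0 * alpha)
    return math.exp(-(x - math.exp(-alpha) * y) ** 2 / (2.0 * v)) \
        / math.sqrt(2.0 * math.pi * v)

# Finite-difference check of  dG/dalpha = d/dx [ dG/dx + x*G ]:
x, y, alpha, h = 0.3, 0.7, 0.5, 1e-4

lhs = (G(x, y, alpha + h) - G(x, y, alpha - h)) / (2 * h)

def flux(x1):                     # dG/dx + x*G, evaluated at x1
    return (G(x1 + h, y, alpha) - G(x1 - h, y, alpha)) / (2 * h) \
        + x1 * G(x1, y, alpha)

rhs = (flux(x + h) - flux(x - h)) / (2 * h)
print(lhs, rhs)                   # agree to finite-difference accuracy
```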
Inspection of $\alpha_\tau(p)$ suggests that the effective scale is $\mn\Lambda = \mn\Lambda_0 \mathrm{e}^{-\tau}$, which implies the rescaled variables
\begin{equation}
p = \mn\Lambda_0 \mathrm{e}^{-\tau} \bar p, \quad \mn\Phi(\bar p) = \mn\Lambda_0^{\frac{d}{2}} \mathrm{e}^{\frac{d}{2} \tau} \phi(p).
\end{equation}
WK insist that the rescaling factor $\zeta_\tau = \mathrm{e}^{\frac{d}{2} \tau}$ should be determined by demanding that the ERGE is independent of $\tau$ in rescaled variables. Since $\dot \alpha_\tau(p) = 2 \bar p^2 + \dot \beta(\tau)$, the rescaled ERGE is
\begin{equation}
\frac{\partial \rho_\tau(\mn\Phi)}{\partial \tau} = \int_{\bar p} \big(\smallfrac{d}{2} + \bar p \cdot \nabla_{\bar p}\big) \mn\Phi(\bar p) \frac{\delta \rho_\tau(\mn\Phi)}{\delta \mn\Phi(\bar p)} + \int_{\bar p} \big(2 \bar p^2 + \dot \beta(\tau)\big) \frac{\delta}{\delta \mn\Phi(\bar p)} \Big[ \frac{\delta \rho_\tau(\mn\Phi)}{\delta \mn\Phi(-\bar p)} + \mn\Phi(\bar p) \rho_\tau(\mn\Phi) \Big].
\end{equation}
The function $\beta(\tau)$ is determined by choosing a normalization condition for the kinetic term for all $\tau$. In the gaussian model, it will turn out that $\beta(\tau) = \tau$ is appropriate. We will describe the gaussian fixed point of WK's ERGE in the next section.
\subsection{Fixed points}
In terms of $\tau = \ln \mn\Lambda_0 / \mn\Lambda$, a fixed point solution is an action $S_*$ for which $\partial_\tau S_* = 0$, which typically must occur in the limit $\tau \to \infty$. Dropping bars on dimensionless quantities, the ERGE eq. (\ref{ERGES}) implies that a fixed point action satisfies
\begin{equation}
0 = - \frac{1}{2} C_* \Big[ \Bnab S_* \circ \Bnab S_* - \Bnab^2 S_* \Big] + B_* \circ \Bnab S_* - \Bnab \circ B_* - D \mn\Phi \circ \Bnab S_*,
\end{equation}
assuming there is some limit of $C_\tau$ and $B_\tau$ as $\tau \to \infty$.
We are often interested in the behavior of actions that are slightly deformed from the fixed point, in order to study the various asymptotic behaviors of these deformations. We may perturb about the fixed point by letting
\begin{equation}
S_\tau = S_* + \mathcal{E}_\tau, \quad B_\tau = B_* + \mathcal{F}_\tau, \quad C_\tau = C_* + \mathcal{G}_\tau,
\end{equation}
with $\mathcal{E}_\tau, \; \mathcal{F}_\tau, \; \mathcal{G}_\tau$ small for large $\tau$, and linearizing the flow equation:
\begin{align}
\partial_\tau \mathcal{E}_\tau + D \mn\Phi \circ \Bnab \mathcal{E}_\tau = - \frac{1}{2} C_* \Big[ & 2 \Bnab S_* \circ \Bnab \mathcal{E}_\tau - \Bnab^2 \mathcal{E}_\tau \Big] - \frac{1}{2} \mathcal{G}_\tau \Big[ \Bnab S_* \circ \Bnab S_* - \Bnab^2 S_* \Big] \\
& - \Bnab \circ \mathcal{F}_\tau + B_* \circ \Bnab \mathcal{E}_\tau + \mathcal{F}_\tau \circ \Bnab S_*.
\end{align}
Assuming $\mathcal{G}_\tau$ decays with time, we drop the $\mathcal{G}_\tau$ term in what follows. Furthermore, we see that assuming $B_\tau$ is independent of $\tau$ further simplifies the equation by dropping $\mathcal{F}_\tau$. It is then clear how WK's demand of a $\tau$-independent ERGE can simplify analyses. In this simplified (but typical) case, then, the linearized flow equation becomes
\begin{equation}
\partial_\tau \mathcal{E}_\tau + D \mn\Phi \circ \Bnab \mathcal{E}_\tau = - \frac{1}{2} C_* \Big[2 \Bnab S_* \circ \Bnab - \Bnab^2 \Big] \mathcal{E}_\tau + B_* \circ \Bnab \mathcal{E}_\tau.
\end{equation}
Variables separate, $\mathcal{E}_\tau = f(\tau) \mathcal R(\phi)$, and if we let $f(\tau) = f(0) \mathrm{e}^{y \tau}$, then we have an eigenvalue equation for $\mathcal{R}$,
\begin{equation}
y \mathcal R = - \frac{1}{2} C_* \Big[ 2 \Bnab S_* \circ \Bnab - \Bnab^2 \Big] \mathcal R + B_* \circ \Bnab \mathcal{R} - D \mn\Phi \circ \Bnab \mathcal{R},
\end{equation}
which generally will have a spectrum $\{ y_a \}$ (which will be discrete under certain assumptions \cite{Rosten:2010vm}). Therefore, the perturbed action near a fixed point can be written as
\begin{equation}
S_\tau(\phi) = S_*(\phi) + \sum_{a} \alpha_a \; \mathrm{e}^{y_a \tau} \mathcal R_a(\phi).
\end{equation}
The $\mathcal{R}_a$ are called \textit{scaling operators}, and the $y_a$ are their RG eigenvalues. We observe three distinct types of behavior for such perturbations. For the operator $\mathcal{R}_a$, we have
\begin{itemize}
\item $y_a < 0$: the perturbation decays with time exponentially, and is called \textit{irrelevant},
\item $y_a = 0$: the perturbation is independent of time, and is called \textit{exactly marginal},
\item $y_a > 0$: the perturbation increases exponentially, and is called \textit{relevant}.
\end{itemize}
Thus we recover the same kind of behavior for the perturbations about a fixed point that were observed using the discrete block-spin theory of chapter 1, except that the RG flow is parameterized by $\tau$ rather than $b$. It will be useful in what follows to use the continuous analog of the scale factor, $b_\tau = \mathrm{e}^\tau = \mn\Lambda_0 / \mn\Lambda$.
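As a toy numerical illustration of the three behaviors (the eigenvalues and initial couplings below are hypothetical, not derived from any particular ERGE), consider a diagonal linearized flow $\mathrm{d}g_a/\mathrm{d}\tau = y_a g_a$ with one relevant, one marginal, and one irrelevant direction, integrated by forward Euler:

```python
import math

# Toy linearized RG flow dg_a/dtau = y_a * g_a near a fixed point, with a
# relevant (y = +1), a marginal (y = 0), and an irrelevant (y = -2)
# direction; the couplings g_a play the role of alpha_a * e^{y_a tau}.
ys = [1.0, 0.0, -2.0]

def flow(g, tau, steps=20000):
    dt = tau / steps
    for _ in range(steps):
        g = [gi + dt * y * gi for y, gi in zip(ys, g)]
    return g

g = flow([1e-3, 1e-3, 1e-3], tau=5.0)
exact = [1e-3 * math.exp(y * 5.0) for y in ys]
print(g)       # relevant grows, marginal frozen, irrelevant decays
print(exact)   # compare with alpha_a * e^{y_a tau}
```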
The expectation values of scaling operators behave in a simple way under RG transformations in the vicinity of the fixed point. Suppose $S_\tau \to S_{\tau+\epsilon}$ close to $S_*$ under the RG flow.\footnote{The following derivation is adapted to FRG from the approach described in \cite{ZinnJustin:2007zz}.} Then deform the initial action via the scaling operator $\mathcal{R}_a$,
\begin{equation}
S_\tau(\theta) := S_\tau + \theta \mathcal{R}_a,
\end{equation}
where $\theta$ is a smooth parameter. The flowing action $S_\tau$ may be written as a linear combination of the scaling operators. Thus $\theta$ can be viewed as a deformation of the associated coupling. This means that, under an RG transformation $\tau \to \tau + \epsilon$, the coupling changes simply: $\theta \to b_\epsilon^{y_a} \theta$, where $b_\epsilon = b_{\tau + \epsilon} / b_\tau$. Now, from the general relation
\begin{equation}
\langle \mathcal{R}_a \rangle_{S_\tau} = - \frac{\mathrm{d}}{\mathrm{d} \theta} \Big( \ln \int_\mn\Phi \mathrm{e}^{-S_\tau(\theta)} \Big) \Big|_{\theta = 0},
\end{equation}
we may derive $\langle \mathcal{R}_a \rangle_{S_{\tau+\epsilon}} = b_\epsilon^{-y_a} \langle \mathcal{R}_a \rangle_{S_\tau}$. The scaling operators above are volume integrals of local scaling operators $\mathcal{R}_a(\bar x)$, where $\bar x = \hat x b_\tau^{-1}$ and $\hat x$ is a dimensionless distance at the bare scale ($x = a_0 \hat x$). Hence
\begin{equation} \label{scalingops}
\langle \mathcal{R}_a(\bar x) \rangle_{S_{\tau+\epsilon}} = b_\epsilon^{\Delta_a} \langle \mathcal{R}_a (b_\epsilon \bar x) \rangle_{S_\tau}, \quad \text{with} \quad \Delta_a := d - y_a
\end{equation}
being the \textit{scaling dimension} of the operator. By letting $S_\tau \to S_\tau(\theta)$ in this scaling formula, we can derive scaling laws for higher $n$-point functions of $\mathcal{R}_a$ by further differentiation. Thus, when we say that a scaling operator changes as $\mathcal{R}_a \to b_\epsilon^{\Delta_a} \mathcal{R}_a$, it is true either as a term in the effective action (at constant coupling), or within expectation values at different RG scales. In particular, it implies the correlator scaling laws for scaling operators that we described in block-spin RG and which formed the basis of the GFRG method of chapter 2.
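In detail, the first of these relations can be spelled out as follows: since the flow maps the $\theta$-deformed action at $\tau$ to the $b_\epsilon^{y_a}\theta$-deformed action at $\tau+\epsilon$, the invariance of the partition function reads $Z_{\tau+\epsilon}(b_\epsilon^{y_a}\theta) = Z_\tau(\theta)$, so that
\begin{equation}
\langle \mathcal{R}_a \rangle_{S_{\tau+\epsilon}} = - \frac{\mathrm{d}}{\mathrm{d} \theta} \ln Z_{\tau+\epsilon}(\theta) \Big|_{\theta = 0} = - \frac{\mathrm{d}}{\mathrm{d} \theta} \ln Z_{\tau}\big(b_\epsilon^{-y_a} \theta\big) \Big|_{\theta = 0} = b_\epsilon^{-y_a} \langle \mathcal{R}_a \rangle_{S_\tau}.
\end{equation}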
The example of the gaussian fixed point (GFP) of Wilson and Kogut's ERGE will now be discussed. In terms of the action $S_\tau$, their ERGE becomes
\begin{equation}
\partial_\tau S_\tau = - D \mn\Phi \circ \Bnab S_\tau - \dot \alpha \Big[ \Bnab S_\tau \circ \Bnab S_\tau - \Bnab^2 S_\tau - \mn\Phi \circ \Bnab S_\tau \Big].
\end{equation}
If the bare action is gaussian, the effective action will remain gaussian. Writing
\begin{equation}
S_\tau(\mn\Phi) = \frac{1}{2} u_\tau \mn\Phi \circ \mn\Phi,
\end{equation}
where $u_\tau = u(\tau,\bar p)$, and noting that integration by parts (discarding boundaries) implies
\begin{equation}
\int_{\bar p} \bar p \cdot \nabla_{\bar p} \mn\Phi( \bar p) \; u_\tau(\bar p) \mn\Phi(-\bar p) = - \frac{1}{2} \int_{\bar p} \big[ d + \bar p \cdot \nabla_{\bar p} u_\tau(\bar p) \big] \mn\Phi(\bar p) \mn\Phi(-\bar p),
\end{equation}
then determines a PDE for the 2-point term $u_\tau( \bar p)$:
\begin{equation}
\partial_\tau u_\tau = - \bar p \cdot \nabla_{\bar p} u_\tau + 2 \dot \alpha \big[1 - u_\tau \big] u_\tau.
\end{equation}
If $\beta(\tau) = a \tau + b$, then $\dot \alpha = 2 \bar p^2 + a$. We solve the equation by the method of characteristics. First we find the integral curves of the vector field on $\mathbb{R}^{1+d}$,
\begin{equation}
X = \partial_\tau + \bar p \cdot \nabla_{\bar p},
\end{equation}
namely, the curves $(\tau_\lambda, \bar p_\lambda)$ parameterized by $\lambda$ and determined by the ODE's
\begin{equation}
\frac{\mathrm{d} \tau}{\mathrm{d} \lambda} = 1, \quad \frac{\mathrm{d} \bar p}{\mathrm{d} \lambda} = \bar p \quad \Rightarrow \quad \tau_\lambda = \lambda, \quad \bar p_\lambda = \bar p \mathrm{e}^{\lambda},
\end{equation}
with initial condition $\tau_{0} = 0$ --- hence we can just use $\tau$ as the parameter along every curve. The first order PDE above then transports $u(\tau,\bar p)$ along the integral curves of $X$ via
\begin{equation}
\frac{\mathrm{d} u}{\mathrm{d} \lambda} = 2 (2\bar p^2_\lambda + a) u(\lambda) \big( 1 - u(\lambda) \big),
\end{equation}
whose solution $u(\lambda) = u(\tau_\lambda, \bar p_\lambda)$ is the value of $u$ at a point along the curve determined by $\lambda$ and the initial conditions for $\tau_\lambda, \; \bar p_\lambda$. The solution is then
\begin{equation}
u(\lambda) = \frac{u(0)}{u(0) + (1 - u(0)) \exp B(\lambda)},
\end{equation}
where $u(0) = u(\tau_0, \bar p_0) = u(0,\bar p)$, and
\begin{equation}
B(\lambda) = - 2 \int_0^\lambda \mathrm{d} \lambda' \big[ 2 \bar p_{\lambda'}^2 + a \big] = -2 \bar p^2 ( \mathrm{e}^{2\lambda} - 1) - 2 a \lambda.
\end{equation}
The initial condition for $u$ is the quadratic part $S^{(2)}_0(\bar p) = \omega(\bar p)$ of the bare action. Geometrically, it is the value of $u$ along the $\tau=0$ axis. If we want the solution at a point $(\tau,\bar k)$, we use the fact that $\bar k = \bar p_\tau = \bar p \mathrm{e}^\tau$ can be taken as the value of the momentum at parameter value $\tau$ along a curve starting from the $\tau=0$ surface, where the momentum is $\bar p_0 = \bar p$. Writing $\tau=\lambda$, we have
\begin{align}
u(\tau,\bar k) & = \frac{\omega(\bar p)}{\omega(\bar p) + (1 - \omega(\bar p)) \exp [ -2 \bar p^2 ( \mathrm{e}^{2\tau} - 1) - 2 a \tau]} \nonumber \\
& = \frac{\omega(\mathrm{e}^{-\tau} \bar k)}{\omega(\mathrm{e}^{-\tau} \bar k) + (1 - \omega(\mathrm{e}^{-\tau} \bar k)) \exp [ -2 \bar k^2 (1 - \mathrm{e}^{-2\tau}) - 2 a \tau]}.
\end{align}
If the initial condition is the standard kinetic term, $\omega(\bar p) = z \bar p^2$, we find
\begin{equation}
u(\tau,\bar k) = \frac{z \bar k^2}{z \bar k^2 + (1 - z \bar k^2 \mathrm{e}^{-2\tau}) \exp [ -2 \bar k^2 (1 - \mathrm{e}^{-2\tau}) - 2 (a-1) \tau]}.
\end{equation}
To obtain a nonzero and nonuniform Boltzmann distribution as $\tau \to \infty$, we see that we must choose $a=1$ in $\beta(\tau)$. The limit is then
\begin{equation}
u_*(\bar p) = \frac{z \bar p^2}{z \bar p^2 + \mathrm{e}^{ - 2 \bar p^2}}.
\end{equation}
Thus, the ERGE of Wilson and Kogut indeed has a gaussian fixed point, and in fact it possesses a \textit{line} of fixed points parameterized by $z$ such that the Boltzmann factor is bounded above. One may alternatively solve the fixed point equation directly, in which case $z$ arises as an integration constant.
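As a numerical cross-check (a sketch; the evaluation point and the values of $z$ and $a$ are arbitrary), one can verify by finite differences that the closed-form solution indeed satisfies the characteristic relation $\mathrm{d}u/\mathrm{d}\lambda = 2(2\bar p_\lambda^2 + a)\,u(1-u)$, written at a point as $(\partial_\tau + \bar k \cdot \nabla_{\bar k})u = 2\dot\alpha\, u(1-u)$, and that it approaches $u_*$ at large $\tau$:

```python
import math

z, a = 1.3, 1.0       # kinetic normalization and beta(tau) = a*tau (with b = 0)

def u(tau, k):
    """Closed-form u(tau, kbar) for the bare kinetic term omega(pbar) = z*pbar^2."""
    w = z * (math.exp(-tau) * k) ** 2
    return w / (w + (1 - w) * math.exp(-2 * k * k * (1 - math.exp(-2 * tau))
                                       - 2 * a * tau))

# Finite-difference check of the characteristic form
#   (d/dtau + k d/dk) u = 2 (2 k^2 + a) u (1 - u):
tau, k, h = 0.4, 0.9, 1e-5
lhs = (u(tau + h, k) - u(tau - h, k)) / (2 * h) \
    + k * (u(tau, k + h) - u(tau, k - h)) / (2 * h)
rhs = 2 * (2 * k * k + a) * u(tau, k) * (1 - u(tau, k))
print(lhs, rhs)       # agree to finite-difference accuracy

# Large-tau limit reproduces the fixed point u_* = z k^2 / (z k^2 + e^{-2 k^2}):
ustar = z * k * k / (z * k * k + math.exp(-2 * k * k))
print(u(20.0, k), ustar)
```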
\section{Various implementations}
I close this chapter with a brief summary of various methods and applications of the formalism of functional RG that have arisen after its initial formulation in the 1970's. We will not give in-depth accounts of them and refer instead to other sources and reviews for the interested reader. Also note that the items below are, of course, not necessarily mutually exclusive.
\begin{itemize}
\item \textit{Polchinski equation.} In 1983 Polchinski wrote down an RG equation inspired by that of Wilson and Kogut \cite{Polchinski:1983gv}. Rather than using his ERGE to study fixed points and RG flows in the abstract, he used it to prove perturbative renormalizability in a novel and simpler way than usual. It also laid the groundwork for precise formulations of effective field theory. His proof has since been made quite rigorous \cite{Keller:1990ej,Ball:1993zy}, and versions of it have been carried out in other systems, like QED$_4$ \cite{Bonini:1993kt}.
\item \textit{Derivative expansion.} The simplest truncation strategy in FRG is the \textit{local potential approximation} (LPA) \cite{Hasenfratz:1985dm,Golner:1985fg,Bagnuls:2000ae}. It proceeds by fixing the kinetic term as $(\partial \phi)^2$ and ignoring all other momentum dependence in the effective action, yielding an ERGE for the potential $V(\phi)$.
The LPA is a first-order approximation to the more general \textit{derivative expansion}, in which the effective action is expanded in powers of momenta. A drawback of the LPA is that the exponent $\eta$ vanishes, because there is no need to correct for would-be changes of the kinetic term; it is therefore expected to be accurate only in systems like $\phi^4_3$ where $\eta$ is small. This defect lessens at higher orders of the expansion \cite{Bagnuls:2000ae}.
\item \textit{Average effective action.} The concept of a constraint functional was revived in the works of Wetterich in the early 1990's. He introduced a quantity called the \textit{average effective action} $\Gamma_k$, which corresponds to the high-mode action discussed above \cite{Wetterich:1989xg}. This action satisfies an ERGE that is often simpler in form than that of the flowing effective action $S_\mn\Lambda$, as it deals directly with 1PI functions \cite{Wetterich:1992yh}, whereas in general the contributions to $S_\mn\Lambda$ are merely connected. Wetterich's original application of this formalism was to the evolution of the effective potential in the broken-symmetry phase of scalar field theories, but the formalism has since found numerous applications.
\item \textit{Condensed matter.} Functional RG has been adapted to fermionic models in condensed matter theory since the early 2000's in order to study long-distance properties of such systems. By ``long distance,'' one here means \textit{close to the Fermi surface}. An important difference with respect to RG as presented in this thesis is that the rescaling step in the RG transformation should rescale momenta relative to the Fermi surface. See \cite{Kopietz:2010zz} for an exposition.
\item \textit{Asymptotic safety.} In the realm of quantum gravity it was proposed by Weinberg long ago \cite{Weinberg:1980gg} that a possible solution to the puzzle of the high-energy limit of quantized General Relativity would be the existence of a UVFP for the gravitational interaction. Perturbative methods are typically assumed to be untrustworthy at high energies in this theory, so it is natural to attempt to apply the nonperturbative methods of FRG to quantum gravity in search of fixed points. See \cite{Eichhorn:2020mte,Reichert:2020mja} for reviews.
\end{itemize}
\newpage
\chapter{Stochastic RG}
In this chapter, we will demonstrate the equivalence of certain kinds of FRG transformations with a class of stochastic (Markov) processes on field space.\footnote{See \cite{Pavliotis:2014} for a mathematician's introduction to stochastic processes, \cite{ZinnJustin:2002ru} for a physicist's introduction, and \cite{Damgaard:1987rr, ZinnJustin:2002ru} for an introduction to their field-theoretical generalization in the context of stochastic quantization. The essential aspects are reviewed in Appendix B.} It has been noted before \cite{Gaite:2000jv, Pawlowski:2017rhn, ZinnJustin:2007zz} that the functional RG equations for effective actions, when written in terms of effective Boltzmann weights, are of the form of a Fokker-Planck (FP) equation, whose solution is therefore a probability distribution over effective fields. Taking this observation seriously, and recalling that Fokker-Planck distributions can be thought of as being generated by a Langevin equation on the degrees of freedom appearing in the FP distribution, one may ask what kinds of Langevin equation generate the FRG effective actions. In what follows, we will define an RG transformation by a particularly simple (linear) choice of Langevin equation, and show by direct calculation that the transition functions resemble the constraint functionals found in the literature of FRG. The effective action for the specific case of $\phi^4$ theory in three dimensions will then be discussed, and the existence of a nontrivial IR fixed point will be checked to 1-loop order in perturbation theory. 
It will therefore become apparent that although the stationary distribution of the FP equation would be expected to be gaussian, a simple rescaling of variables allows for an interacting fixed point solution.\footnote{This is not surprising from the FRG perspective, of course, but it may be unexpected from the standpoint of stochastic processes, where the stationary distributions of the Fokker-Planck equation are expected to involve the potential whose gradient appears as the drift term in the Langevin equation \cite{Pavliotis:2014}.}
In chapter 2 we described the relationship of gradient flow (GF) with RG, and at various points mentioned that an alternative approach to the theory, not based on a block-spin analogy, was possible. Before describing the approach, we mention that other analytic work has been done \cite{Kagimura:2015via,Yamamura:2015kva,Makino:2018rys, Abe:2018zdc, Sonoda:2019ibh} connecting GF to the framework of functional RG. In particular, it was noted by Abe and Fukuma that certain definitions of a GF effective action lead to a kind of Langevin equation \cite{Abe:2018zdc} (though different from what we propose here), and later by Sonoda, that the connected $n$-point functions of a particular FRG effective theory are equal to the GF observables up to proportionality \cite{Sonoda:2019ibh}. The relationship between stochastic processes with ``colored noise'' and RG has also been explored recently in \cite{Ziegler:2019jgf}.
The equivalence we discuss here is a formulation of the Monte Carlo Renormalization Group (MCRG) principle for FRG. Recall that the kind of MCRG discussed by Swendsen \cite{Swendsen:1979gn} in the 1980's provided a prescription for computing observables in an effective theory by computing \textit{blocked} observables in a bare theory, that is, without having to know the effective action. A similar property will be found for the stochastic RG transformation, namely, that effective observables may be computed from the stochastic observables generated by the Langevin equation, whose initial condition is the bare field. The MCRG property will be valid for both lattice and continuum theories alike, thereby suggesting the possibility of computing general observables in an effective theory on the lattice by integrating a Langevin equation on top of the ensemble generated in the MCMC simulation of the corresponding bare theory.
The relationship to gradient flow will then follow from an observation made by Wilson and Kogut \cite{Wilson:1973jj}, and recently connected to gradient flow by Sonoda and Suzuki \cite{Sonoda:2019ibh}. In the context of the stochastic RG transformation, it follows from the MCRG equivalence that the connected expectation values of an FRG effective theory are equal to gradient-flowed expectations up to additive corrections that depend on the choice of Langevin equation, and which decay exponentially at large distances. This relationship implies that the measurement of gradient-flowed quantities is sufficient for the determination of long-distance critical properties of the theory, in much the same way as spin-blocked observables at large distances. This avoids the necessity of performing a full Langevin equation simulation if one only cares about long-distance properties.
A virtue of the characterization of FRG in terms of stochastic processes is that the observables of the effective theory satisfy differential equations involving the generator of the Markov process, allowing one to study the flow of the observables directly, without knowledge of the effective action. An analysis of these equations for discrete, small time steps leads to the stochastic RG instantiation of usual RG scaling laws of correlations of the fundamental field, as well as of composite operators built from it. In particular, by virtue of the stochastic MCRG equivalence, one is led to correlator ratio formulas of the sort described in chapter 2, implying a method for measuring scaling dimensions of operators close to a critical fixed point. Thus, the results of chapter 2 may be regarded as a consequence of the stochastic RG idea.
What follows is an exposition of stochastic RG based on the contents of \cite{Carosso:2019qpb}, but expanded upon in various places.
\section{Stochastic processes and FRG}
Here we discuss the general framework for stochastic RG. The RG transformation will be defined by a Langevin equation on the degrees of freedom of a field theory. The simplicity of the equation will allow for an explicit calculation of the probability distribution which it generates, and the functional form of the distribution will entail an equivalence to conventional FRG transformations. A brief consideration of the observables generated by the stochastic process will lead to the MCRG equivalence between the effective theory and the stochastic observables. Lastly, we will comment on the pitfalls of a seemingly simpler definition of the effective theory.
\subsection{The Langevin equation}
We will define an RG transformation by a stochastic process $\phi_t$ on field space over $\mathbb{R}^d$, determined by a Langevin equation (LE) of the form
\begin{equation} \label{LE_momspace}
\partial_t \phi_t(p) = -\omega(p) \phi_t(p) + \eta_t(p), \quad \phi_0(p) = \varphi(p),
\end{equation}
where $\omega(p)$ is positive for $\|p\|>0$ and $\omega(0) \geq 0$, e.g. $\omega(p) = p^2$, where $p^2 := \| p \|^2$.\footnote{Of course, the realm of stochastic quantization \cite{Damgaard:1987rr} deals with writing field theory expectation values as equilibrium limits of a stochastic process on field space. Here, however, the bare theory is kept as a traditional field theory, and the stochasticity applies to the RG transformation only. See the end of Appendix B for the main ideas of SQ.} The ``time'' $t$ in this equation does not denote physical time, but rather an ``RG time'' which we will call \textit{flow time}, or simply \textit{time}. The noise $\eta_t(p)$ is chosen to be gaussian-distributed according to the measure
\begin{equation} \label{noise_distribution}
\mathrm{d} \mu_{0} (\eta) := \mathrm{c}(\mn\Lambda_0, \Omega) \exp\Big[ - \frac{1}{2\Omega} \int_I \mathrm{d} t \; (\eta_t, K_{0}^{-1}\eta_t)\Big] \mathscr{D} \eta,
\end{equation}
where the notation $(\phi, M \chi)$ denotes a quadratic form, written variously as\footnote{We abbreviate $\int_x = \int_{\mathbb{R}^d} \mathrm{d}^d x$ and $\int_p = \int_{\mathbb{R}^d} \mathrm{d}^d p / (2\pi)^d$ when no confusion arises.}
\begin{equation}
(\phi, M \chi) = \int_{xy} \phi (x) M(x,y) \chi(y) = \int_{pk} \phi(p) M(p,k) \chi(k).
\end{equation}
The cutoff function $K_{0}(p)$ suppresses noise momentum modes greater than $\mn\Lambda_0$, e.g. $K_{0}(p) = \mathrm{e}^{-p^2/\mn\Lambda_0^2}$ under Schwinger regularization.\footnote{The LE and measure $\mathrm{d} \mu_0$ can easily be written for a lattice theory, in which case the cutoff function $K_0$ is not necessary, as the lattice naturally regulates the noise at the bare scale.} Expectation values with respect to the noise distribution of functions $O(\eta)$ are defined by
\begin{equation}
\mathbb{E}_{\mu_0}[O(\eta)] := \int \! O(\eta) \mathrm{d} \mu_0 (\eta).
\end{equation}
The first two moments of $\mu_{0}$ are then
\begin{equation}
\mathbb{E}_{\mu_0}[ \eta_t(p) ] = 0, \quad \mathbb{E}_{\mu_0}[ \eta_t(p) \eta_s(k) ] = \Omega \; \delta(t-s) \; (2\pi)^d \delta(p+k) K_{0}(k).
\end{equation}
Later we will take the initial condition $\phi_0 = \varphi$ to be distributed according to a measure $\mathrm{d} \rho_0 (\varphi)$ corresponding to the bare theory of interest, the cutoff of which is chosen to be $\mn\Lambda_0$. Hence, the cutoff for the noise is chosen to match the cutoff of the bare theory.
Turning back to eq. (\ref{noise_distribution}), the constant $\mathrm{c}(\mn\Lambda_0, \Omega)$ is chosen to normalize $\mathrm{d} \mu_{0}$ to unity, $\Omega$ is the (dimensionless) variance of the noise, and $I \subset \mathbb{R}$ is an arbitrary time interval large enough to include all desired times. In position space, the Langevin equation takes the form of a stochastic heat equation
\begin{equation}
\partial_t \phi_t(x) = - (\omega \phi_t)(x) + \eta_t(x), \quad \phi_0(x) = \varphi(x).
\end{equation}
For the case $\omega(p) = p^2$, one has $(\omega \phi)(x) = -\Delta \phi (x) = -\partial_\mu \partial_\mu \phi(x)$. In position space, therefore, we see that the equation becomes a stochastic partial differential equation.
The form of the momentum space equation above is a simple field-theoretic generalization of the well-known Ornstein-Uhlenbeck (OU) process (i.e. damped Brownian motion) $q_t$ with Langevin equation and solution \cite{ZinnJustin:2002ru}, respectively,
\begin{equation}
\dot q_t = - \omega q_t + \eta_t, \quad q_t = \mathrm{e}^{-\omega t} q_0 + \int_0^t \mathrm{d} s \; \mathrm{e}^{-\omega(t-s)} \eta_s,
\end{equation}
where $\eta_t$ is gaussian white noise. Since the momentum-space equation (\ref{LE_momspace}) is diagonal, it is just as simple to solve: one treats the noise term as the inhomogeneous part of a linear equation, finding
\begin{equation}
\phi_t(p) = f_t(p) \varphi(p) + \int_0^t \mathrm{d} s \; f_{t-s}(p) \eta_s(p),
\end{equation}
where $f_t(p)$ is a generalized momentum space heat kernel of the form
\begin{equation}
f_t(p) = \mathrm{e}^{-\omega(p)t}, \quad f_t(z) = \int_p \mathrm{e}^{i p \cdot z} f_t(p).
\end{equation}
In position space, one finds
\begin{equation}
\phi_t(x) = (f_t \varphi)(x) + \int_0^t \mathrm{d} s \; (f_{t-s}\eta_s)(x).
\end{equation}
We will sometimes denote the solution's dependence on initial condition and noise by $\phi_t[\varphi; \eta]$. The first term on the r.h.s. implies that the mean of $\phi_t(x)$ satisfies the free gradient flow equation, i.e. ``heat'' equation, corresponding to the differential operator $\omega$.
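To make these dynamics concrete, one can integrate a single momentum mode of eq. (\ref{LE_momspace}) with an Euler--Maruyama scheme and compare the sample variance at time $t$ with the exact Ornstein--Uhlenbeck value $\Omega (1 - \mathrm{e}^{-2\omega t})/2\omega$. The following is a sketch only; the parameter values ($\omega = 1$, $\Omega = 1$, $K_0 = 1$, $\varphi = 0$) are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-mode sketch of the Langevin RG equation:
#   d phi = -omega * phi dt + eta,   <eta_t eta_s> = Omega * delta(t - s).
# Illustrative values (not from the text): omega = 1, Omega = 1, phi_0 = 0.
omega, Omega, t_final, dt = 1.0, 1.0, 1.0, 0.01
n_steps = int(t_final / dt)
n_samples = 100_000

phi = np.zeros(n_samples)
for _ in range(n_steps):
    noise = rng.standard_normal(n_samples) * np.sqrt(Omega * dt)
    phi += -omega * phi * dt + noise   # Euler-Maruyama step

# Exact OU variance at finite t (the per-mode A_t with K0 = 1).
var_exact = Omega * (1 - np.exp(-2 * omega * t_final)) / (2 * omega)
assert abs(phi.var() - var_exact) / var_exact < 0.05
```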
\subsection{The Fokker-Planck distribution}
With the explicit solution in hand, one can compute the probability distribution of fields $\phi$ at time $t$ given $\varphi$ at $t=0$. We say that the Langevin equation \textit{generates} a Fokker-Planck (FP) distribution $P(\phi,t;\varphi,0)$ defined by
\begin{equation} \label{Pdef}
P(\phi,t;\varphi,0) := \mathbb{E}_{\mu_0} \big[ \delta(\phi - \phi_t[\varphi;\eta])\big] = \int \mathscr{D} \lambda \; \mathbb{E}_{\mu_0} \big[ \mathrm{e}^{i(\lambda,\phi-\phi_t [\varphi; \eta])}\big].
\end{equation}
From the definition of noise expectations, we then find
\begin{equation}
P(\phi, t; \varphi, 0) = \mathrm{c}(\mn\Lambda_0, \Omega) \int \! \mathscr{D} \lambda \int \! \mathscr{D} \eta \; \exp\Big[i(\lambda, \phi - \phi_t[\varphi; \eta]) -\frac{1}{2\Omega} \int_I \mathrm{d} s (\eta_s, K^{-1}_{0} \eta_s)\Big].
\end{equation}
Substituting in the explicit solution for $\phi_t$, the integrand becomes
\begin{align}
\exp&\Big[i(\lambda, \phi - f_t \varphi) - i \int_0^t \mathrm{d} s\; (\lambda, f_{t-s} \eta_s) - \frac{1}{2\Omega} \int_I \mathrm{d} s \; (\eta_s, K_{0}^{-1} \eta_s)\Big] \nonumber \\
& = C \exp\Big[i(\lambda, \phi - f_t \varphi) - \int_0^t \mathrm{d} s\; \Big( i (\lambda, f_{t-s} \eta_s) + \frac{1}{2\Omega} (\eta_s, K_{0}^{-1} \eta_s) \Big) \Big],
\end{align}
the constant $C$ involving only times $s > t$, which divides out of any noise average and will now be dropped. The noise integral over the relevant $\eta_s$'s is a standard gaussian integral, which yields
\begin{equation}
P(\phi, t; \varphi, 0) = \int \! \mathscr{D}\lambda \; \exp\Big[i(\lambda, \phi - f_t \varphi) - \frac{\Omega}{2} \int_0^t \mathrm{d} s \; (f_{t-s}^\top \lambda, K_{0} f_{t-s}^\top
\lambda)\Big].
\end{equation}
Next, note that the $s$-integral (which acts only on the kernels $f_{t-s}$, since $\lambda$ and $K_{0}$ are $s$-independent) produces a kernel
\begin{equation} \label{Adef}
A_t := \Omega \int_0^t \mathrm{d} s \; f_{t-s} K_{0} f_{t-s}^\top,
\end{equation}
which in momentum space is given by a diagonal matrix,
\begin{equation} \label{A_momspace}
A_t(p,k) = \Omega (2\pi)^d \delta(p+k) K_{0}(p) \; \frac{1-\mathrm{e}^{-2 \omega(p) t}}{2\omega(p)}.
\end{equation}
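The $s$-integral leading to this kernel is elementary; as a minimal sketch (with illustrative values $\omega(p) = p^2$, $\Omega = 1$, $K_0 = 1$), the closed form can be compared against the defining integral by direct quadrature:

```python
import numpy as np

# Verify A_t(p) = Omega * K0 * (1 - e^{-2 omega t}) / (2 omega) against the
# defining integral A_t = Omega * int_0^t ds f_{t-s} K0 f_{t-s}, where
# f_t(p) = e^{-omega(p) t}. Illustrative: omega(p) = p^2, Omega = K0 = 1.
t = 1.3

def A_closed(w):
    return (1 - np.exp(-2 * w * t)) / (2 * w)

def A_quad(w, n=100_000):
    ds = t / n
    s = (np.arange(n) + 0.5) * ds          # midpoint rule on [0, t]
    return np.sum(np.exp(-2 * w * (t - s))) * ds

for p in (0.5, 1.0, 2.0):
    assert abs(A_quad(p**2) - A_closed(p**2)) < 1e-6
```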
We will sometimes denote $A^{-1}_t$ by $B_t$; the inverse exists by virtue of the restrictions on $\omega(p)$. The remaining $\lambda$-integral is also gaussian, and evaluates to
\begin{equation} \label{FP_dist}
P(\phi, t; \varphi, 0) = \big[\det 2 \pi B_t \big]^{\frac{1}{2}} \exp\Big[- \frac{1}{2} \big(\phi - f_t\varphi, B_t (\phi - f_t \varphi)\big)\Big].
\end{equation}
Here we recognize the similarity of this functional to the constraint functional of Wilson and Kogut \cite{Wilson:1973jj} that we worked with in chapter 3, as well as those found in \cite{Wetterich:1989xg, Igarashi:2009tj}. We will call such a functional a \textit{gaussian constraint functional} or a \textit{transition function} (when emphasizing its probabilistic interpretation). In momentum space, the exponent is explicitly
\begin{equation} \label{constraint_functional}
- \frac{1}{2} \int_p \; \frac{2\omega(p) K_{0}^{-1}(p)}{1-\mathrm{e}^{-2\omega(p)t}} \big(\phi(p) - \mathrm{e}^{-\omega(p)t} \varphi(p)\big)\big(\phi(-p) - \mathrm{e}^{-\omega(p)t} \varphi(-p)\big).
\end{equation}
One observes that the mean of the field $\phi$ is set to the flowed field $f_t(p) \varphi(p)$ within a functional variance determined by $A_t(p)$. Thus the effect of the stochastic RG transformation is to produce a low-mode fluctuating field, in the sense that the mean values of modes of $\phi$ with $\omega(p) \gg 1/t$ are exponentially suppressed. For $\omega(p) = p^2$, this suggests that the effective cutoff of the resulting theory is roughly $\mn\Lambda_t \sim 1/\sqrt{t}$; a more precise identification will be made later.
For reasons explained in the next subsection, we may write the transition function as $P_t(\phi,\varphi)$ rather than $P(\phi,t;\varphi,0)$, and we will sometimes suppress the initial condition by writing $P_t(\phi)$. The transition function is a Green function for the Fokker-Planck equation
\begin{align} \label{FP_transfn}
\frac{\partial P_t(\phi)}{\partial t} & = \Bnab \circ \Big( \frac{1}{2} \mn\Sigma(\phi,t) \Bnab P_t(\phi) + \mathscr{B}(\phi,t) P_t(\phi) \Big), \nonumber \\
& \lim_{t \to 0} P(\phi,t; \varphi,0) = \delta(\phi - \varphi),
\end{align}
where the drift vector $\mathscr{B}$ and diffusion matrix $\mn\Sigma$ are defined by \cite{Pavliotis:2014}
\begin{align}
\mathscr{B}(\phi,t) & = - \lim_{t' \to t} \frac{1}{t'-t} \int \mathscr{D} \phi' (\phi' - \phi) P(\phi',t';\phi,t), \\
\mn\Sigma(\phi,t) &= \lim_{t' \to t} \frac{1}{t'-t} \int \mathscr{D} \phi' (\phi' - \phi) \otimes (\phi' - \phi) P(\phi',t';\phi,t).
\end{align}
A derivation of the FP equation above is provided in Appendix B, where it is demonstrated that such an equation follows from the LE
\begin{equation}
\partial_t \phi_t = - \mathscr{B}(\phi_t,t) + \eta_t.
\end{equation}
With the explicit solution eq. (\ref{FP_dist}), we compute
\begin{align}
\mathscr{B}(\phi,t) &= \omega \phi, \\
\mn\Sigma(\phi,t) &= \Omega K_{0},
\end{align}
as expected. If the initial condition $\varphi$ is distributed according to a measure $\mathrm{d} \rho_0(\varphi) = \mathrm{e}^{-S_0(\varphi)} \mathscr{D} \varphi$ corresponding to a bare theory, then the effective distribution
\begin{equation}
\rho_t(\phi) := \int \! P(\phi,t;\varphi,0) \; \mathrm{d} \rho_0(\varphi)
\end{equation}
also satisfies the FP equation, with initial condition $\rho_0(\varphi)$. For the specific choice eq. (\ref{LE_momspace}) of LE above, we find
\begin{equation} \label{FP_EFT}
\partial_t \rho_t(\phi) = \frac{1}{2} K_0 \; \Bnab^2 \rho_t(\phi) + \Bnab \circ \big( \omega \phi \; \rho_t(\phi) \big),
\end{equation}
where we have set $\Omega = 1$.
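The per-mode content of eq. (\ref{FP_EFT}) can also be checked numerically: for a single mode with $\omega = K_0 = 1$, evolving the density by finite differences should reproduce the variance $\mathrm{e}^{-2t}\,\mathrm{var}(0) + (1-\mathrm{e}^{-2t})/2$ of the exact gaussian solution. The sketch below uses a conservative central-difference scheme; the grid sizes and the initial variance are arbitrary choices:

```python
import numpy as np

# Per-mode Fokker-Planck equation (Omega = K0 = 1, omega = 1):
#   d rho/dt = (1/2) d^2 rho/dphi^2 + d/dphi (phi * rho).
# Starting from a gaussian density, the variance must follow
#   var(t) = e^{-2t} var(0) + (1 - e^{-2t}) / 2.
# All numbers below are illustrative choices, not from the text.
n, L = 401, 5.0
phi = np.linspace(-L, L, n)
dphi = phi[1] - phi[0]
rho = np.exp(-phi**2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)  # var(0) = 0.25

dt, t_final = 2e-4, 0.5
for _ in range(int(t_final / dt)):
    # conservative flux F = (1/2) d rho/dphi + phi * rho at cell interfaces
    diff = 0.5 * (rho[1:] - rho[:-1]) / dphi
    drift = 0.5 * (phi[1:] + phi[:-1]) * 0.5 * (rho[1:] + rho[:-1])
    F = diff + drift
    rho[1:-1] += dt * (F[1:] - F[:-1]) / dphi

var_fd = np.sum(phi**2 * rho) / np.sum(rho)
var_exact = np.exp(-2 * t_final) * 0.25 + (1 - np.exp(-2 * t_final)) / 2
assert abs(var_fd - var_exact) / var_exact < 0.02
```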
The drift term $(\omega \phi_t)(x)$ may be regarded as the functional derivative of what we might call a ``flow action''
\begin{equation}
\hat S(\phi) = \frac{1}{2} (\phi, \omega \phi) \quad \Rightarrow \quad \partial_t \phi_t = - \Bnab\hat S(\phi_t) + \eta_t,
\end{equation}
in which case one would have $\mathscr{B} = \Bnab \hat S$.
For arbitrary choices of $\hat S$, the Langevin equation may become nonlinear and the (still linear) FP equation generalizes to
\begin{equation} \label{FP_seed}
\partial_t \rho_t(\phi)= \frac{1}{2} K_{0} \Bnab^2 \rho_t(\phi) + \Bnab \circ \big( \Bnab \hat S(\phi) \rho_t(\phi) \big).
\end{equation}
Thus we observe that the stochastic process generates ERGE's of the form described in chapter 3. Of course, by writing $\rho_t = \mathrm{e}^{-S_t}$ and letting $\mathrm{d} \rho_0(\varphi) = \mathrm{e}^{-S_0(\varphi)} \mathscr{D}\varphi$, one recovers functional PDE's for the effective action $S_t(\phi)$ given some bare action $S_0(\varphi)$, similar to the Polchinski equation.
There are many possibilities for how to generalize the scheme presented above. First, one could choose a different distribution for the noise, perhaps even a non-gaussian one. Second, one could generalize the flow action to be arbitrarily complicated in $\phi$, thereby making the Langevin equation non-linear, but these will generate FP distributions which are more difficult to calculate; we will discuss nonlinear RG's at the end of this chapter. For theories whose field variables are in compact spaces, or theories with local symmetries, however, one \textit{must} use non-linear LEs to ensure that the flow preserves the symmetry; such equations will likely resemble those found in the context of stochastic quantization \cite{Damgaard:1987rr, Batrouni:1985jn}.
I remark that the stochastic characterization of RG is a natural one to take. In ordinary stochastic processes, such as Einstein's theory of Brownian motion in 1905, the random noise represents the influence of small-scale degrees of freedom on the large-scale ones: the molecular bath in which the dust particle is submerged imparts random kicks to the particle. In the case of field theory, we see that the noise plays the role of short-distance degrees of freedom randomly kicking the momentum modes of the field. The drift term enforces the overall damping of high modes, while the noise guarantees that the high modes are made to interact (indirectly) with the low modes, as is apparent from the form of the gaussian constraint functional and the influence of high-mode loops in the effective action that we will describe below.
\subsection{MCRG} Although the transition functional above has the same form as the constraint functionals found in the FRG literature, a notable difference here is that the kernel $B_t = A_t^{-1}$ is \textit{determined} by the associated Langevin equation, having a fixed relation to $\omega$, the choice of drift. Thus, if one wants to change the details of the constraint functional, one must find the appropriate LE.
The initial condition for the transition function, $P_0(\phi,\varphi) = \delta(\phi - \varphi)$, is guaranteed by the fact that it is generated by a LE with initial condition $\varphi$. As a distribution, it is furthermore normalized such that for all $t \geq 0$,
\begin{equation}
\int \! \mathscr{D} \phi \; P_t(\phi,\varphi) = 1,
\end{equation}
and in particular, the integral is independent of the field $\varphi$. These conditions allow one to define the effective theory in a more conventional way by inserting unity into the partition function $Z$ of the bare theory as
\begin{equation}
Z = \int \! \mathrm{d} \rho_0(\varphi) = \int \! \mathscr{D} \varphi \! \int \! \mathscr{D} \phi \; P_t(\phi,\varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
thereby defining a Boltzmann weight of effective (low-mode) fields
\begin{equation} \label{eff_action}
\mathrm{d} \rho_t(\phi) = \frac{1}{Z} \; \mathrm{e}^{-S_t(\phi)} \mathscr{D} \phi , \qquad \mathrm{e}^{-S_t(\phi)} := \int \! \mathscr{D} \varphi \; P_t(\phi,\varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
and the partition function remains invariant.
The stochastic process generated by a Langevin equation is a Markov process: future states depend only on the present state, so long as the noise at different times is uncorrelated. This feature was desirable at least in Wilson's philosophy of RG, where any particular blocking step could be carried out knowing only the previous step. In terms of the abstract distribution $P$ this implies, $\forall \; t > s \geq 0$,
\begin{equation}
P(t,0) = P(t, s)P(s, 0), \quad \mathrm{or} \quad P(\phi, t; \varphi, 0) = \int \! \mathscr{D} \chi \; P(\phi, t; \chi, s) P(\chi, s; \varphi, 0).
\end{equation}
By considering time-homogeneous Langevin equations (i.e. no explicit $t$-dependence in the LE or the noise variance), the transition function depends only on the difference $t-s$, and we can write $P(\phi, t; \chi, s) = P_{t-s}(\phi, \chi)$.\footnote{The noise variance can be chosen to depend on time, but this spoils the convenience of time-homogeneity.} This property may also be computed directly from the definition, eq. (\ref{Pdef}), suitably modified to have the initial condition $\phi_{t\to s} = \chi$. It follows that the set $\{P_t : t\geq 0\}$ forms an abelian semigroup of operators and may be written in terms of a generator $\mathcal{L}$ as $P_t = \mathrm{e}^{t\mathcal{L}}$ \cite{Pavliotis:2014}. We will discuss $\mathcal{L}$ in the last section. For now we simply note that $\mathcal{L}$ is the adjoint of the functional differential operator appearing in the FP equation.
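For the gaussian transition functions derived above, the semigroup property reduces, mode by mode, to an identity for the variances: composing $P_{t-s}$ with $P_s$ must reproduce $A_t$, i.e. $A_t = f_{t-s}^2 A_s + A_{t-s}$ per mode. A minimal numerical sketch (illustrative values $\omega = p^2$ with $p = 0.8$, $\Omega = K_0 = 1$):

```python
import math

# Per-mode variance of the transition function P_t (Omega = K0 = 1):
#   A(t) = (1 - exp(-2 w t)) / (2 w),  heat kernel f(t) = exp(-w t).
# Chapman-Kolmogorov for gaussian kernels requires
#   A(t) = f(t - s)^2 * A(s) + A(t - s)   for all 0 <= s <= t.
w = 0.8**2  # illustrative mode, omega(p) = p^2

def A(t):
    return (1 - math.exp(-2 * w * t)) / (2 * w)

def f(t):
    return math.exp(-w * t)

t = 2.0
for s in (0.1, 0.5, 1.0, 1.9):
    assert abs(A(t) - (f(t - s)**2 * A(s) + A(t - s))) < 1e-12
```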
Next, consider the usual definition of the expectation value of an operator $\mathcal{O}$ in the effective theory,
\begin{equation}
\langle \mathcal{O}(\phi) \rangle_{S_t} := \frac{1}{Z} \int \! \mathscr{D} \phi \; \mathcal{O}(\phi) \; \mathrm{e}^{- S_t(\phi)}.
\end{equation}
By inserting the definition eq. (\ref{eff_action}), and noting that
\begin{equation}
\int \! \mathscr{D} \phi \; \mathcal{O}(\phi) P(\phi, t; \varphi, 0) = \mathbb{E}_{\mu_0} \big[ \mathcal{O}(\phi_t[\varphi;\eta]) \big],
\end{equation}
where $\phi_t[\varphi; \eta]$ denotes the solution of the LE, one readily obtains the equality
\begin{equation} \label{Equivalence}
\big\langle \mathcal{O}(\phi) \big\rangle_{S_t} = \big\langle \mathbb{E}_{\mu_0} \big[\mathcal{O}(\phi_t[\varphi; \eta]) \big] \big\rangle_{S_0}
\end{equation}
This formula states the equivalence of a low-mode FRG effective theory and a double expectation value over the bare fields and the random noise. Since the right-hand side may be calculated without knowledge of the effective action, it further constitutes a generalization of MCRG to FRG for all observables. Notice that there are just as many degrees of freedom $\phi$ as there are $\varphi$ (this is especially clear on the lattice). A possible application of this formula to Swendsen-style MCRG will be discussed at the end of the chapter.
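As an illustration of eq. (\ref{Equivalence}), consider a single momentum mode of a free bare theory, $\varphi \sim \mathcal{N}(0, \mn\Delta_0)$. Since the LE is linear, one may sample its exact solution on top of bare samples; the double average then reproduces the tree-level effective 2-point function $A_t + f_t^2 \mn\Delta_0$ derived in the next section. A sketch with arbitrary illustrative numbers ($\mn\Delta_0 = 1/(p^2+m^2)$, $p = 0.5$, $m = 0.3$, $\Omega = K_0 = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)

# MCRG equivalence, single free mode: <phi^2>_{S_t} should equal the double
# average of phi_t^2 over bare samples and noise, = A_t + f_t^2 * Delta_0.
# Illustrative values (not from the text): p = 0.5, m = 0.3, omega = p^2.
p, m, t = 0.5, 0.3, 1.5
w = p**2
Delta0 = 1.0 / (p**2 + m**2)                  # bare propagator of the mode
f_t = np.exp(-w * t)                          # heat kernel
A_t = (1 - np.exp(-2 * w * t)) / (2 * w)      # noise-induced variance

n = 500_000
varphi = rng.normal(0.0, np.sqrt(Delta0), n)               # bare ensemble
phi_t = f_t * varphi + rng.normal(0.0, np.sqrt(A_t), n)    # exact LE solution

lhs = np.mean(phi_t**2)            # double average over bare field and noise
rhs = A_t + f_t**2 * Delta0        # tree-level effective 2-point function
assert abs(lhs - rhs) / rhs < 0.02
```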
In the next section, we will explore various properties of the effective action $S_t(\phi)$ defined above. First, however, one might wonder why the noise average is necessary in eq. (\ref{Equivalence}), when compared with the corresponding statement for a spin-blocked theory \cite{Swendsen:1979gn},
\begin{equation}
\big\langle \mathcal{O}(\phi) \big\rangle_{S_b} = \langle \mathcal{O}(B_b\varphi) \rangle_{S_0},
\end{equation}
where $B_b$ denotes the blocking operator. This is perhaps explained by the fact that when spin-blocking, there are fewer blocked spins than bare spins, so the blocked expectation values really involve an integration over ``extra'' degrees of freedom from the perspective of the effective theory; here that role is played by the noise. If one were to choose a blocked lattice of the same size as the original, so that the bare Boltzmann factor were integrated against a delta functional over the whole lattice, the resulting blocked action would be trivial, namely, $S_0(B^{-1}_b \phi)$. Likewise in the continuum, it has long been assumed \cite{Wetterich:1989xg} that a pure $\delta$-function constraint functional is not sufficient to define a non-trivial effective action for continuum FRG. Let us elaborate on this. One might have wanted to define the effective action through
\begin{equation} \label{delta_constraint}
\mathrm{e}^{-S_t(\phi)} = \int \! \mathscr{D} \varphi \; \delta(\phi - f_t \varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
where $f_t\varphi$ is the solution of a gradient flow equation such as
\begin{equation}
\partial_t \phi_t(x) = \Delta \phi_t(x),
\end{equation}
or some generalization thereof. The problem with this definition is that it generates a trivial effective action, in the sense to be described. In momentum space, the solution is simply $(f_t \varphi)(p) = \mathrm{e}^{-p^2 t} \varphi(p)$, so one can do a linear change of variables in eq. (\ref{delta_constraint}) and compute
\begin{equation}
S_t(\phi) = -\mathrm{tr} \ln f_t + S_0(f_t^{-1} \phi).
\end{equation}
Hence the couplings of the new action are exactly computable: their $t$-dependence is determined trivially by how many powers of $\phi$ and $p^2$ appear in each term, without any loop corrections, so the resulting ``effective action'' is not acceptable.
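To make the triviality concrete, consider as a sketch a quartic bare action (with $\omega(p) = p^2$); substituting $\varphi = f_t^{-1}\phi$ in eq. (\ref{delta_constraint}) gives
\begin{equation}
S_t(\phi) = -\mathrm{tr} \ln f_t + \frac{1}{2} \int_p \mn\Delta_0^{-1}(p) \, \mathrm{e}^{2 p^2 t} \, \phi(p) \phi(-p) + \frac{\lambda_0}{4!} \int_{p_1 \cdots p_4} (2\pi)^d \delta\Big(\sum_i p_i\Big) \, \mathrm{e}^{(p_1^2 + \cdots + p_4^2) t} \, \phi(p_1) \cdots \phi(p_4),
\end{equation}
so each bare coupling is merely multiplied by exponentials of the external momenta: no new interactions are generated, and no loop integrals over high modes appear.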
We remark that the inadequacy of eq. (\ref{delta_constraint}) to define an effective action \textit{does not} mean that the observables computed from gradient-flowed fields are not useful for studying certain RG properties of the system. At the end of the next section, in particular, we will describe how gradient-flowed observables are sufficient for studying the \textit{long-distance} properties of an effective theory that \textit{does} have a well-defined effective action.
\section{The effective theory and fixed points} In what follows, the effective action determined by the stochastic RG transformation will be discussed for the example cases of the gaussian model and $\phi^4_3$ theory. We will show that by a rescaling of variables, the existence of an IRFP of the transformation becomes possible. From the point of view of stochastic processes, the result implies that, for $\phi^4_3$ theory, the stationary solutions of the Fokker-Planck equation may be non-gaussian even though the Langevin equation is linear. Lastly, the correlation functions of the effective theory will be related to gradient-flowed correlations.
\subsection{The effective action} That the EFT defined by a gaussian constraint functional for $\Omega > 0$ is nontrivial can be understood as follows. One may insert the expression eq. (\ref{FP_dist}) for the transition function into the definition of the effective action eq. (\ref{eff_action}), and then expand out the exponent of $P_t(\phi,\varphi)$; the part proportional to $\varphi^2$ modifies the bare theory propagator, and the part linear in $\varphi$ acts as a source term with $J = f_t B_t \phi$. The remaining $\phi^2$ term contributes to the $\phi$ propagator. The result is a relation between effective and bare actions:\footnote{In a sense, this constitutes an exact solution to the FP equation, giving the finite-$t$ distribution $\rho_t$ in terms of the cumulants of $\rho_0$. It is the stochastic RG analog of eq. (\ref{lowmode_action}).}
\begin{equation}
S_t(\phi) = F_t + \frac{1}{2} (\phi, B_t \phi) - W_{0}^{(t)}(B_t f_t \phi),
\end{equation}
where $F_t$ is due to the normalization of $P_t(\phi,\varphi)$, and $W^{(t)}_0(J) = \ln \langle \mathrm{e}^{(J,\phi)} \rangle_{S_0^{(t)}}$ is the generator of connected Green functions for the bare theory $S_0$ with a modified $t$-dependent inverse propagator
\begin{equation}
[\mn\Delta^{(t)}_{0}]^{-1} := \mn\Delta_{0}^{-1} + h_t, \quad h_t := f_t B_t f^\top_t.
\end{equation}
Expanding the generator term in $\phi$ yields a formula which allows for the systematic computation of effective vertices,
\begin{equation}
W_0^{(t)}(B_t f_t \phi) = \sum_{n=0}^\infty \frac{1}{n!} [W_0^{(t)}]^{(n)} (B_t f_t \phi, \cdots , B_t f_t \phi).
\end{equation}
It is then apparent that the effective action for any finite $t$ is indeed non-trivial, since the vertices of $S_t$ contain the dynamics of the bare theory via the $[W^{(t)}_0]^{(n)}$.
The scale $\mn\Lambda_t$ of the effective theory may be determined by looking at the effective 2-point function at tree level, after isolating the quadratic part of $S_t(\phi)$:
\begin{equation}
\langle \phi(p) \phi(-p) \rangle_{S_t}^\mathrm{tree} = A_t(p) + f_t^2(p) \mn\Delta_0(p) = A_t(p) + \frac{\mathrm{e}^{- p^2(a_0^2 + 2 t)}}{p^2 + m_0^2},
\end{equation}
where the inverse cutoff $a_0 = \mn\Lambda_0^{-1}$ has been used, and we recall that $A_t$ is given by eq. (\ref{A_momspace}). In position space, the first term decays rapidly at large distances relative to the second. The second term is a Schwinger-regularized propagator; we therefore observe that the effective cutoff induced by the stochastic RG transformation is
\begin{equation} \label{effective_cutoff}
\mn\Lambda_t^{-2} = \mn\Lambda_0^{-2} + 2t, \quad \mathrm{or} \quad \mn\Lambda_t = \frac{\mn\Lambda_0}{\sqrt{1+ 2\hat t}},
\end{equation}
where the dimensionless flow time $\hat t = \mn\Lambda_0^2 t$ has been introduced. The continuous scale factor is therefore $b_t = \sqrt{1 + 2 \hat t}$. We will take another look at the effective correlation functions and the function $A_t$ in the next section.
We can make sense of the odd-looking factors of $f_t$ and $B_t$ that appear in the effective action as follows. First, the additive $h_t$ in the propagator $\mn\Delta_0^{(t)}$ acts as a sliding IR cutoff for the \textit{bare} theory, since
\begin{equation}
\lim_{p\to 0} h_t(p) = \frac{1}{t},
\end{equation}
which means that as $t$ increases, more of the bare field modes get integrated out. For example, in the case of $\phi^4_d$ theory (discussed in more detail in a later subsection), the momentum-independent part of the 1-loop contribution to the amputated effective 4-point vertex in $W^{(t)}_0$ is proportional to\footnote{We choose to consider the mass term in $S_0$ as part of the interaction $V_0(\phi)$ from now on.}
\begin{equation}
\int_{\mathbb{R}^d} \frac{\mathrm{d}^d k}{(2\pi)^d} [\mn\Delta^{(t)}_{0}(k)]^2 = \int_{\mathbb{R}^d} \frac{\mathrm{d}^d k}{(2\pi)^d} \; \frac{\mathrm{e}^{-2 k^2 a_0^2}}{\big(k^2 + h_t(k) \big)^2} \; .
\end{equation}
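Numerically, this integral receives nearly all of its support from momenta between the effective and bare cutoffs. A minimal scipy sketch in $d=3$ (illustrative values $a_0 = 0.1$, $t = 50$ are our choice; the $\tanh$ form of the radial integrand follows from $k^2 + h_t(k) = k^2 \coth(k^2 t)$, dropping the $K_0$ factor inside $h_t$ as in the integrand above):

```python
import numpy as np
from scipy.integrate import quad

a0, t = 0.1, 50.0                      # illustrative values (our choice)
Lam0 = 1.0 / a0                        # bare cutoff
Lamt = 1.0 / np.sqrt(a0**2 + 2*t)      # effective cutoff

# radial integrand in d = 3: k^2 * [Delta_0^(t)(k)]^2,
# using k^2 + h_t(k) = k^2 coth(k^2 t)
integrand = lambda k: np.exp(-2*(k*a0)**2) * np.tanh(k**2 * t)**2 / k**2

below, _  = quad(integrand, 0.0, Lamt)
inside, _ = quad(integrand, Lamt, Lam0)
above, _  = quad(integrand, Lam0, np.inf)
frac = inside / (below + inside + above)
print(f"fraction from [Lambda_t, Lambda_0]: {frac:.2f}")
```

With these values the window $[\mn\Lambda_t, \mn\Lambda_0]$ carries roughly $90\%$ of the integral; modes below $\mn\Lambda_t$ are damped by $\tanh^2(k^2 t)$ and modes above $\mn\Lambda_0$ by the bare cutoff function.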
We observe that the presence of $h_t$ in the denominator, combined with the multiplicative bare cutoff function, effectively restricts the domain of integration to $\| p \| \in [\mn\Lambda_t, \mn\Lambda_0]$, similarly to what one would have found in a standard (sharp) high-mode elimination RG step $\mn\Lambda_0 \to \mn\Lambda$, where the domain of the integral would be $\| p \| \in [\mn\Lambda, \mn\Lambda_0]$ (see \cite{Kopietz:2010zz} for details). Next, note that the argument $B_t f_t \phi$ of $W^{(t)}_0$ in $S_t$ implies that the $[W^{(t)}_0]^{(n)}$ vertices are multiplied by a factor of
\begin{equation}
B_t(p) f_t(p) = K_{0}^{-1}(p) \frac{2 \omega(p) f_t(p)}{1-f_t^2(p)}
\end{equation}
for each factor of $\phi(p)$. Since the vertices $[W^{(t)}_0]^{(n)}$ are connected $n$-point functions, which have $n$ factors of external propagators $\mn\Delta_0^{(t)}(p_i) \propto K_0(p_i)$ attached to them, we see that the effective vertices decay like $f_t(p_i) = \mathrm{e}^{-p^2_i t}$ and therefore strongly suppress the $\| p_i \| \gg \mn\Lambda_t$ contribution of the $n$-point functions. Moreover, the leading momentum behavior of the products of $B_t f_t$ with $\mn\Delta^{(t)}_0$ demonstrates that they are, in a sense, \textit{amputated},
\begin{equation}
B_t(p) f_t(p) \mn\Delta^{(t)}_0(p) = 1 - \frac{1}{2} (p^2 t)^2 + O(p^8 t^4),
\end{equation}
in a manner similar to what was found under smooth high-mode elimination. Thus, in sum, the effective vertices are amputated connected $n$-point functions to leading order in external momenta, which are heavily damped in the UV $(\| p \| \gg \mn\Lambda_t)$, and whose loop corrections effectively involve domains of integration $\| p \| \in [\mn\Lambda_t, \mn\Lambda_0]$. It is also noteworthy that the external momentum dependence implied by the amputation formula above goes like powers of $p^2 / \mn\Lambda_t^2$, for $\mn\Lambda_t^{-2} \gg \mn\Lambda_0^{-2}$, as one expects from the general philosophy of effective field theory.
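In the massless case the amputation statement is in fact exact and compact: combining the expressions for $B_t$, $f_t$, and $\mn\Delta^{(t)}_0$ above gives $B_t f_t \mn\Delta^{(t)}_0 = \mathrm{sech}(p^2 t)$, whose expansion reproduces the series quoted above. A sympy check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)     # x = p^2 t

f = sp.exp(-x)                         # heat kernel factor f_t at x = p^2 t
prod = 2*f / (1 + f**2)                # B_t f_t Delta_0^(t), massless case

# identity: 2 f/(1+f^2) = sech(x)
assert sp.simplify(prod - sp.sech(x).rewrite(sp.exp)) == 0
print(sp.series(prod, x, 0, 5))        # 1 - x**2/2 + 5*x**4/24 + ...
```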
\subsection{Gaussian fixed point} We begin the discussion of possible fixed points of the stochastic RG transformation with the gaussian model, which is explicitly solvable in a manner similar to the Wilson-Kogut ERGE.
Here we consider the existence of a gaussian fixed point in the case where
\begin{equation}
S_{\mn\Lambda_0}(\varphi) = \frac{1}{2} ( \varphi, M_{\mn\Lambda_0} \varphi), \quad M_{\mn\Lambda_0}(p) = c p^2 K_0^{-1}(p),
\end{equation}
with drift $\omega(p) = p^2$ in the LE. A straightforward gaussian integration yields the exact effective action at time $t$:
\begin{equation}
S_t(\phi) = \frac{1}{2} \int_p \frac{2}{\Omega} \frac{p^2 K_0^{-1}(p)}{1 + (2/\Omega c - 1) \mathrm{e}^{-2p^2t}} \phi(p) \phi(-p).
\end{equation}
This action has the expected OU process stationary limit as $t\to\infty$,
\begin{equation}
S_\infty(\phi) = \frac{1}{\Omega} \int_p p^2 K_0^{-1}(p) \phi(p) \phi(-p),
\end{equation}
which is therefore independent of the choice of $c$ in the bare action. The rescaled effective action, however, has a different limit. Let $p = \mn\Lambda_0 \bar p / b_t$, yielding
\begin{equation}
S_t(\phi) = \frac{1}{2} \mn\Lambda_0^{d+2} b_t^{-d-2} \int_{\bar p} \frac{2}{\Omega} \frac{\bar p^2 K_0^{-1}(\mn\Lambda_0 \bar p / b_t)}{1 + (2/\Omega c - 1) \mathrm{e}^{-2\bar p^2 \mn\Lambda_0^2 t / b_t^2}} \phi(p) \phi(-p).
\end{equation}
Note that
\begin{equation}
K_0^{-1}(\mn\Lambda_0 \bar p / b_t) = \mathrm{e}^{\bar p^2 / b_t^2}, \quad \mathrm{e}^{-2\bar p^2 \mn\Lambda_0^2 t / b_t^2} = \mathrm{e}^{-\bar p^2 (1 - b^{-2}_t)}.
\end{equation}
The asymptotic limit $b_t \to \infty$ then has leading behavior
\begin{equation}
S_t(\phi) = \frac{1}{2} \mn\Lambda_0^{d+2} b_t^{-d-2} \int_{\bar p} \frac{2}{\Omega} \frac{\bar p^2}{1 + (2/\Omega c - 1) \mathrm{e}^{-\bar p^2}} \phi(p) \phi(-p).
\end{equation}
We see that in order to get a non-uniform distribution ($\rho_t \neq 1$) in the limit, we must look at rescaled fields $\phi(p) = \mn\Lambda_0 ^{d_\phi} b_t^{-d_\phi} \mn\Phi(\bar p)$, leading to a stationary action
\begin{equation}\label{SRG_gfp}
S_*(\mn\Phi) = \lim_{t\to\infty} S_t(\phi)\Big|_{\phi(p) = \mn\Lambda_0 ^{d_\phi} b_t^{-d_\phi} \mn\Phi(\bar p)} = \frac{1}{\Omega} \int_{\bar p} \frac{\bar p^2}{1 + (2/\Omega c - 1) \mathrm{e}^{-\bar p^2}} \mn\Phi(\bar p) \mn\Phi(-\bar p).
\end{equation}
Observe that a fixed point exists for every choice of $c$: the GFP exists as a \textit{line} of fixed points, parameterized by the bare coupling. The canonical choice $c=1$ has fixed point (assuming $\Omega=1$)
\begin{equation} \label{standard_gfp}
S_*(\mn\Phi)|_{c=1} = \int_{\bar p} \frac{\bar p^2}{1 + \mathrm{e}^{-\bar p^2}} \mn\Phi(\bar p) \mn\Phi(-\bar p).
\end{equation}
Notice that the rescaled effective action has a regularization-independent (indeed, unregularized) fixed point, since the $K_0$ factor disappears in the limit. However, the $\mathrm{e}^{-\bar p^2}$ term, which came from the heat kernel $f_t(p)$, exhibits the scheme-dependence (i.e. choice of flow) of the fixed points thereby obtained.
For the sake of general applicability, we now discuss the parallel derivation in the lattice gaussian model. On the lattice, there is a sharp cutoff $\mn\Lambda_0 = \pi / a_0$ and the drift is $\omega(p) = \hat p^2$, with $\hat p_\mu = (2/a_0) \sin (p_\mu a_0 / 2)$. The bare action is taken to be
\begin{equation}
S_{a_0}(\varphi) = \frac{1}{2} (\varphi, M_{a_0} \varphi), \quad M_{a_0}(p) = c \hat p^2.
\end{equation}
Taking $\Omega = c = 1$ for simplicity, the effective action is then
\begin{equation}
S_t(\phi) = \frac{1}{2} \int_p^{\pi/a_0} \frac{2 \hat p^2}{1 + \mathrm{e}^{-2\hat p^2t}} \phi(p) \phi(-p).
\end{equation}
Again, we see that the infinite time limit of the unrescaled effective theory is a simple gaussian model, the lattice OU stationary process.
We expect the effective spacing $a_t$ to behave qualitatively like $a_t^2 = a_0^2 + \mathrm{c}_0 t$, for some constant $\mathrm{c}_0$, since by inspection of the effective action, we see that the effective theory propagator, although it still has sharp cutoff $a_0$, \textit{further} suppresses the high modes according to $\mathrm{e}^{-2 t \hat p^2}$. Now define the rescaled momenta by $p_\mu = \bar p_\mu/a_t = \bar p_\mu / a_0 b_t$ and $b_t = a_t / a_0$. One obtains
\begin{equation}
S_t(\phi) = a_0^{-d} b_t^{- d} \int_{\bar p}^{\pi b_t} \frac{(4/a_0^2) \sum_\mu \sin^2 \bar p_\mu / 2b_t }{1 + \exp\big[- 8 t/a_0^2 \sum_\mu \sin^2 \bar p_\mu / 2b_t\big]} \phi(p) \phi(-p).
\end{equation}
Next, define the rescaled fields $\mn\Phi(\bar p) = b_t^{d_\phi} \phi(p)$ as usual and note that $t \propto b_t^2(1-b_t^{-2})/\mathrm{c}_0$. Expanding the lattice momenta $\hat p$ in $\bar p / a_0 b_t$, we see that the $b_t \to \infty$ limit picks out only the leading, continuum-like term $\bar p^2$, and the rest are suppressed by powers of $b_t^{-2}$. No other rescaling will lead to a propagating fixed point theory. It follows that the fixed point action is described by the same action we found in the direct continuum approach, eq. (\ref{standard_gfp}), which is an expression of universality.
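The rate at which the subleading lattice terms vanish is easy to see numerically; a small sketch (values illustrative) checking that the error of the continuum approximation to a single component of $(a_0 b_t)^2 \hat p^2$ falls off quadratically in $1/b_t$:

```python
import math

def phat2_rescaled(pbar: float, b: float) -> float:
    """(a_0 b_t)^2 hat-p^2 for one component: (2 b sin(pbar/2b))^2."""
    return (2*b*math.sin(pbar/(2*b)))**2

pbar = 1.0
errs = [pbar**2 - phat2_rescaled(pbar, b) for b in (10.0, 20.0, 40.0)]
# leading error is pbar^4/(12 b^2): it quarters each time b doubles
print(errs[0]/errs[1], errs[1]/errs[2])
```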
We can compute the scaling operators at the GFP as follows. Let $\tau = \ln (\mn\Lambda_0 / \mn\Lambda_t)$, and perturb the fixed point as $S_\tau = S_* + \mathcal E_\tau$, keeping only terms of first order in $\mathcal E_\tau$. One finds
\begin{equation}
\partial_\tau \mathcal E_\tau = - \frac{1}{2} K_* \Big[ 2 \Bnab S_* \circ \Bnab \mathcal E_\tau - \Bnab^2 \mathcal E_\tau \Big] - (D-\omega) \mn\Phi \circ \Bnab \mathcal E_\tau,
\end{equation}
which is separable for $\mathcal E_\tau = T(\tau) \mathcal R(\mn\Phi)$, leading to $T(\tau) = \mathrm{e}^{\lambda \tau}$ and the eigenvalue equation
\begin{equation}
\lambda \mathcal R = -\frac{1}{2} K_* \Big[ 2 \Bnab S_* \circ \Bnab \mathcal R - \Bnab^2 \mathcal R \Big] - (D-\omega) \mn\Phi \circ \Bnab \mathcal R.
\end{equation}
We give as an example the solution for quadratic scaling operators. Letting $\mathcal{R}(\mn\Phi) = \mn\Phi \circ g \mn\Phi$ and $S_*(\mn\Phi) = \frac{1}{2} \mn\Phi \circ f \mn\Phi$, for $g(p)$ to be determined, leads to (recall $d_\phi + d/2 = -1$)
\begin{equation}
- \lambda g = 4 f g - 2g + p \cdot \nabla_p g - 2 p^2 g,
\end{equation}
or in spherical coordinates, and using $f$ from eq. (\ref{standard_gfp}),
\begin{equation}
p \frac{\mathrm{d} g}{\mathrm{d} p} = \Big( 2 - \lambda +2 p^2 - \frac{4 p^2}{1 + \alpha \mathrm{e}^{-p^2}} \Big) g.
\end{equation}
The solution is
\begin{equation}
g(p) = C_0 \; \frac{p^{2-\lambda} \, \mathrm{e}^{-p^2}}{(1 + \alpha \mathrm{e}^{-p^2})^2},
\end{equation}
but how do we determine the permissible values of $\lambda$? By demanding analyticity as $p \to 0$. In \cite{Wilson:1973jj}, Wilson and Kogut argue that non-analyticity would lead to unacceptable nonlocality in the perturbed action. Hence, $2 - \lambda = 2m$ for $m \in \mathbb{Z}_{\geq 0}$. The eigenperturbations are then
\begin{equation}
g_m(p) = \frac{(p^2)^m \, \mathrm{e}^{-p^2}}{(1+\alpha \mathrm{e}^{-p^2})^2}.
\end{equation}
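One can verify by direct substitution that this family integrates the separable ODE above; a short sympy check (symbol names are ours):

```python
import sympy as sp

p, lam, alpha = sp.symbols('p lambda alpha', positive=True)

# candidate solution of the separable eigenperturbation ODE
g = p**(2 - lam) * sp.exp(-p**2) / (1 + alpha*sp.exp(-p**2))**2

# p g'(p) = (2 - lambda + 2 p^2 - 4 p^2/(1 + alpha e^{-p^2})) g(p)
rhs = (2 - lam + 2*p**2 - 4*p**2/(1 + alpha*sp.exp(-p**2))) * g
residual = sp.simplify(p*sp.diff(g, p) - rhs)
print(residual)   # 0
```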
The general solution to the full perturbation from $S_*$ is then
\begin{equation}
\mathcal E_\tau(\mn\Phi) = \sum_{m = 0}^\infty \varepsilon_m \mathrm{e}^{\lambda_m \tau} \mn\Phi \circ g_m \mn\Phi, \quad \lambda_m = 2 - 2m.
\end{equation}
Hence $m=0$ gives the relevant mass deformation, $m=1$ gives the marginal (redundantly so) kinetic term deformation, and $m \geq 2$ gives irrelevant operators.
The analysis above can be extended to higher operators as well. Note, however, that the $\Bnab^2$ term implies that only polynomials in $\mn\Phi$ yield nonzero solutions. For the quartic scaling operators, one tries
\begin{equation}
\mathcal R(\mn\Phi) = \frac{1}{2} g_2(\mn\Phi, \mn\Phi) + \frac{1}{4!} g_4(\mn\Phi, \dots , \mn\Phi),
\end{equation}
which leads to a coupled system of PDE's for $g_2$ and $g_4$. We leave its solution as an exercise for the reader.
\subsection{Fixed point in $\phi^4_3$} For the case of interacting $\phi^4_3$ theory, we cannot solve the problem exactly, so we resort to perturbation theory for the sake of comparison to the high-mode elimination RG in chapter 3, and we will find that the two approaches are quite similar.
One might initially think that the effective action, written as an integration against the bare density, eq. (\ref{eff_action}), has a gaussian infinite flow time limit, as
\begin{equation}\label{trivlim}
\lim_{t \to \infty} S_t(\phi) = \frac{1}{2} (\phi, B_\infty \phi),
\end{equation}
where $B_\infty(p) = 2 K_{0}^{-1}(p) \omega(p)$, due to the exponential decay of $f_t$. Indeed, it is well-known that the Ornstein-Uhlenbeck process has a gaussian stationary distribution. As we saw in the gaussian model, however, rescaling can make a big difference. In this case we should expect that the fixed point theory is generally interacting.
To understand qualitatively why the rescaled theory escapes this gaussian limit, note that the properties of the drift $\omega$ imply that the zero mode of the bare field is not suppressed (see eq. (\ref{constraint_functional})); only its variance changes. Since the zero-mode theory is not gaussian, in general, the flowed distribution will also have a non-gaussian zero mode effective action, implying that the long-distance physics is still non-trivial. This would suggest, however, that the infinite-time degrees of freedom do not propagate. To further clarify the situation, we will look at the flow of the most relevant effective couplings as the RG time $t$ increases, and then we will address the role of rescaling of degrees of freedom, finding that the limit of the \textit{rescaled} effective action differs from the gaussian limit, eq. (\ref{trivlim}), obtained above.
We will treat the mass term also as a perturbation. Denoting the coefficient of $p^2$ in the quadratic part of $S_t(\phi)$ by $c_t$, and the momentum-independent parts of the quadratic and quartic terms, respectively, by $m^2_t, \; \lambda_t$, we find
\begin{align}
c_t &= 1 + O(\lambda_0^2), \\
m^2_t & = m^2_0 + \frac{\lambda_0}{2} I^d_0(t) + O(\lambda_0^2, \lambda_0 m^2_0), \\
\lambda_t & = \lambda_0 - \frac{3\lambda_0^2}{2} C^d_0(t) - 2 \lambda_0^2 t I^d_0(t) + O(\lambda_0^3, \lambda_0 m^2_0),
\end{align}
at 1-loop order, where the loop integrals are given by
\begin{align}
I^d_{0}(t) & = \int_{\mathbb{R}^d} \! \frac{\mathrm{d}^d p}{(2 \pi)^d} \frac{\mathrm{e}^{-p^2 a_0^2}}{p^2 + h_t(p)} = \Omega_d \int_{\mathbb{R}_+} \! \mathrm{d} p \; p^{d-3} \mathrm{e}^{-p^2 a_0^2} \tanh p^2 t, \\
C^d_{0}(t) & = \int_{\mathbb{R}^d} \! \frac{\mathrm{d}^d p}{(2\pi)^d} \; \frac{\mathrm{e}^{-2 p^2 a_0^2}}{\big(p^2 + h_t(p) \big)^2} = \Omega_d \int_{\mathbb{R}_+} \! \mathrm{d} p \; p^{d-5} \mathrm{e}^{-2 p^2 a_0^2} \tanh^2 p^2 t,
\end{align}
and $\Omega_d = S_{d-1} / (2\pi)^d$. The first integral is superficially divergent, but for $a_0 > 0$, it has a finite $t\to\infty$ limit, and one may compute
\begin{equation}
t \frac{\mathrm{d}}{\mathrm{d} t} I^d_0(t) = \Omega_d \alpha_1 \; t^{1-d/2} + O(t^{-d/2} a_0^2),
\end{equation}
where $\alpha_1 \approx 0.379064$ for $d=3$. The second integral $C^d_0(t)$ exists even for $a_0=0$, and its time derivative is
\begin{equation}
t \frac{\mathrm{d}}{\mathrm{d} t} C^d_0(t) = \Omega_d \alpha_2 \; t^{2 - d/2} + O(t^{2 - d/2 - \delta} a_0^{2\delta}),
\end{equation}
where $\delta > 0$ and $\alpha_2 \approx 0.594978$ for $d = 3$.\footnote{Recall that $\phi^4_3$ theory is superrenormalizable, having only two superficially divergent diagrams: the snail and the sunset diagrams.} Hence, to 1-loop order, we find for the derivatives of effective couplings
\begin{align}
t \frac{\mathrm{d}}{\mathrm{d} t} m^2_t & = \frac{\lambda_0}{2} \Omega_d \alpha_1 \; t^{1-d/2} + O(t^{-d/2} a_0^2), \\
t \frac{\mathrm{d}}{\mathrm{d} t} \lambda_t & = - \lambda_0^2 \Omega_d \big( \smallfrac{3}{2} \alpha_2 + 2 \alpha_1 \big) \; t^{2-d/2} + O(t^{2 - d/2 - \delta} a_0^{2\delta}).
\end{align}
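The constants $\alpha_1$ and $\alpha_2$ follow from the scaling substitution $q = p\sqrt t$ in the time derivatives of the $a_0 \to 0$ integrals above; a quick check by numerical quadrature with scipy:

```python
import numpy as np
from scipy.integrate import quad

# reduced radial integrals after the substitution q = p sqrt(t), at a_0 = 0, d = 3
alpha1, _ = quad(lambda q: q**2 / np.cosh(q**2)**2, 0, np.inf)
alpha2, _ = quad(lambda q: 2.0*np.tanh(q**2) / np.cosh(q**2)**2, 0, np.inf)
print(alpha1, alpha2)   # ~ 0.379064, 0.594978
```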
These expressions do not clearly indicate any nontrivial fixed-point behavior at this order in perturbation theory. To proceed further, one must cast the flow equations in terms of rescaled dimensionless quantities, as one usually does to study RG flows. We will find below that such quantities naturally arise after a passive momentum and field redefinition.
Now we introduce rescaled variables, rendered dimensionless by the effective scale $\mn\Lambda_t$ \cite{Morris:1993qb}. Dimensionless momenta $\bar p$ are defined as in chapter 3, by setting
\begin{equation}
\bar p = p / \mn\Lambda_t.
\end{equation}
The kinetic term in the effective action therefore becomes
\begin{equation}
\frac{1}{2} \int_{\bar p} \mn\Lambda_t^{d + 2} \bar p^2 \phi(\mn\Lambda_t \bar p) \phi(-\mn\Lambda_t \bar p).
\end{equation}
This motivates a change of field variables $\phi \to \mn\Phi$, where $\mn\Phi$ is dimensionless:
\begin{equation} \label{field_cov}
\phi(\bar p \mn\Lambda_t) =: \mn\Lambda_t^{d_\phi} \mn\Phi(\bar p),
\end{equation}
with $d_\phi = -d/2-1$ being the canonical mass dimension of $\phi$ in momentum space. After doing so, the kinetic term is of the canonical form
\begin{equation}
\frac{1}{2} \int_{\bar p} \bar p^2 \mn\Phi(\bar p) \mn\Phi(- \bar p)
\end{equation}
at 1-loop order, while the mass and quartic terms pick up factors of $\mn\Lambda_t$ which define dimensionless couplings $r_t, \; u_t$ by
\begin{equation}
r_t := \mn\Lambda_t^{-2} m^2_t, \qquad u_t := \mn\Lambda_t^{d-4} \lambda_t.
\end{equation}
We note that these rescalings are all quite familiar when written in terms of the scale factor
\begin{equation}
b_t := \frac{\mn\Lambda_0}{\mn\Lambda_t} \quad \Rightarrow \quad r_t = b_t^2 \hat m_t^2, \quad u_t = b_t^{4-d} \hat \lambda_t,
\end{equation}
reflecting that the mass and the 4-point coupling are relevant at the gaussian fixed point (hats denote quantities rendered dimensionless with $\mn\Lambda_0$).
Next, we compute the RG flow equations which describe how the dimensionless variables change with the flow time $t$. In the expression for the derivatives above, one replaces $m^2_0$ and $\lambda_0$ by $m^2_t$ and $\lambda_t$, valid at this order in perturbation theory. The derivatives of the dimensionless couplings with respect to $b$ (dropping $t$-subscripts) are then
\begin{align}
b \frac{\mathrm{d} r}{\mathrm{d} b} &= 2 r + \beta_1 u, \\
b \frac{\mathrm{d} u}{\mathrm{d} b} &= (4 - d) u - \beta_2 u^2,
\end{align}
up to terms of order $b^{-2}$, since $t = \frac{1}{2} \mn\Lambda_t^{-2}(1-b^{-2})$, and where $\beta_1 = 2^{\frac{1}{2}} \Omega_3 \alpha_1, \; \beta_2 = 2^{\frac{1}{2}} \Omega_3 (\frac{3}{2} \alpha_2 + 2 \alpha_1)$ in $d=3$. As $b \to \infty$, the second equation has a nontrivial stationary solution $u_*$, and implies a corresponding critical value $r_*$, which for $d = 3$ are given, at 1-loop order, by $u_* \approx 8.46, \; r_* \approx -0.12$. Linearizing about the fixed point and computing the left-eigenvalues $y_a$ of the stability matrix, one finds that $y_2 = 2, \; y_4 = - 1$, which are crude approximations to the precisely-known values $y_2 = 1.58831(76), \; y_4 = -0.845(10)$ at the Wilson-Fisher fixed point \cite{Hasenbusch:1999mw}. This is our third and final derivation of the WFFP.
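These 1-loop fixed-point values are straightforward to reproduce; a minimal numerical sketch (variable names ours, with $\alpha_1, \alpha_2$ as quoted above):

```python
import numpy as np

Omega3 = 4*np.pi / (2*np.pi)**3        # Omega_3 = S_2/(2 pi)^3
alpha1, alpha2 = 0.379064, 0.594978    # values quoted in the text
beta1 = np.sqrt(2) * Omega3 * alpha1
beta2 = np.sqrt(2) * Omega3 * (1.5*alpha2 + 2*alpha1)

u_star = 1.0 / beta2                   # from  0 = u - beta2 u^2
r_star = -beta1 * u_star / 2           # from  0 = 2 r + beta1 u
print(u_star, r_star)                  # u* ~ 8.46

# linearized flow (stability) matrix and its eigenvalues
M = np.array([[2.0, beta1],
              [0.0, 1.0 - 2.0*beta2*u_star]])
print(sorted(np.linalg.eigvals(M)))    # ~ [-1, 2]
```

Since the matrix is triangular, the exponents $y_2 = 2$, $y_4 = -1$ can be read off its diagonal.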
The values at 1-loop order from sharp high-mode elimination combined with epsilon expansion in \cite{Kopietz:2010zz} are $y_2 = 1.67, \; y_4 = -1$; that treatment, however, handles the mass non-perturbatively. As a step in that direction, we can extend the analysis above to include terms of order $ru$. This brings in several more non-1PI diagrams, leading to a system of ODE's given by
\begin{align}
b \frac{\mathrm{d} r}{\mathrm{d} b} & = 2 r + \mathcal{K}_{01} u + \mathcal{K}_{20} r^2 + \mathcal{K}_{11} r u, \nonumber \\
b \frac{\mathrm{d} u}{\mathrm{d} b} & = u + \mathcal{L}_{20} u^2 + \mathcal{L}_{11} ur,
\end{align}
where the coefficients in 3d are
\begin{equation}
\mathcal{K}_{01} = 2^{\frac{1}{2}} \Omega_3 \alpha_1, \quad \mathcal{K}_{20} = -1, \quad \mathcal{K}_{11} = 2^{\frac{1}{2}} \Omega_3 (\alpha_1 - \smallfrac{1}{2} \alpha_2),
\end{equation}
\begin{equation}
\mathcal{L}_{20} = - 2^{\frac{1}{2}} \Omega_3 (\smallfrac{3}{2} \alpha_2 + 2 \alpha_1), \quad \mathcal{L}_{11} = -4.
\end{equation}
Setting the $b$-derivatives to zero yields, of course, two fixed points. One is gaussian, and the other is the WFFP, whose couplings are determined from
\begin{equation}
u_* = - \frac{1+\mathcal{L}_{11} r_*}{\mathcal{L}_{20}}, \quad 0 = -\frac{\mathcal{K}_{01}}{\mathcal{L}_{20}} + \Big[2 - \frac{\mathcal{K}_{01} \mathcal{L}_{11}}{\mathcal{L}_{20}} - \frac{\mathcal{K}_{11}}{\mathcal{L}_{20}} \Big] r_* + \Big[ \mathcal{K}_{20} - \frac{\mathcal{K}_{11} \mathcal{L}_{11}}{\mathcal{L}_{20}} \Big] r_*^2.
\end{equation}
Expanding the couplings near the WFFP as
\begin{equation}
r = r_* + \delta r, \quad u = u_* + \delta u,
\end{equation}
one can linearize the flow equations about the WFFP, finding
\begin{equation}
b \frac{\mathrm{d}}{\mathrm{d} b}
\begin{bmatrix}
\delta r \\
\delta u
\end{bmatrix}
=
\begin{bmatrix}
2 + 2 \mathcal{K}_{20} r_* + \mathcal{K}_{11} u_* & \mathcal{K}_{01} + \mathcal{K}_{11} r_* \\
\mathcal{L}_{11} u_* & 1 + 2\mathcal{L}_{20} u_* + \mathcal{L}_{11} r_*
\end{bmatrix}
\begin{bmatrix}
\delta r \\
\delta u
\end{bmatrix}
.
\end{equation}
By computing the left-eigenvalues, one finds modified exponents $y_2 \approx 1.63, \; y_4 \approx -1.33$; we stress that our formalism is not expected to do any better than the epsilon expansion.
Thus we observe the existence of an IR fixed point in perturbation theory, as we expect in $\phi^4_3$ theory. If we worked to $O(\lambda_0^2)$, we would find, as usual, the necessity of including a wave function renormalization factor $\zeta_t = b_t^{d_\phi} c_t^{1/2}$ to normalize the kinetic term coefficient, so that eq. (\ref{field_cov}) is replaced by
\begin{equation}\label{finalrescale}
\phi(\bar p \mn\Lambda_t) = \mn\Lambda_t^{d_\phi} c_t^{-1/2} \mn\Phi(\bar p) = \mn\Lambda_0^{d_\phi} \zeta_t^{-1} \mn\Phi(\bar p),
\end{equation}
which modifies the scaling dimension $\Delta_\phi$ of $\phi$ to include an \textit{anomalous} dimension $\gamma_\phi = O(u_t^2)$, which has a non-zero $t\to\infty$ limit.
The existence of an IR fixed point for the dimensionless, rescaled effective action implies that the expectation values of rescaled effective observables
\begin{equation}
\langle \mn\Phi(\bar p_1) \cdots \mn\Phi(\bar p_n) \rangle_{S_t} = b_t^{n \Delta_\phi} \mn\Lambda_0^{-nd_\phi} \langle \phi(p_1) \cdots \phi(p_n) \rangle_{S_t}
\end{equation}
can have nontrivial infinite flow time limits. In terms of the stochastic RG transformation of section 2, this is written as
\begin{equation}
\langle \mn\Phi(\bar p_1) \cdots \mn\Phi(\bar p_n) \rangle_{S_t} = b_t^{n \Delta_\phi} \mn\Lambda_0^{-nd_\phi} \big\langle \mathbb{E}_{\mu_0} \big[\phi_t(p_1) \cdots \phi_t(p_n) \big] \big\rangle_{S_0}.
\end{equation}
Since the stochastic RG transformation was generated by a linear Langevin equation, it may be surprising to find that by simply rescaling the correlation functions, one can arrive at a non-gaussian stationary distribution of the Fokker-Planck equation. We also note that the quantities $\mn\Lambda_0^{-d_\phi} \phi_t$ correspond directly to the dimensionless field variables one would obtain by numerical integration of the LE on the lattice.
Lastly, the Fokker-Planck equation for the stochastic RG transformation may be written in dimensionless form following the procedure outlined in chapter 3. Using $\partial_t = b^{-1}\partial_b$, one finds the rescaled equation\footnote{We saw in chapter 3 that including the full wave function renormalization $\zeta_t$ modifies $D$ and the diffusion and drift terms. In this case, $\zeta_t$ cancels in the drift term, but survives in the diffusion as $\zeta_t^2$. In conventional FRG, this is typically accounted for by a redefinition $C_t = \zeta_t^{-2} C'_t$. We see two options for accounting for it in SRG, if we wish to have a simple rescaled FP equation. First, we can let $K_0 \to \zeta_t^{-2} K_0$, rendering the noise variance time-dependent, which must be input by hand in a simulation. A bolder solution may be to consider the field-dependent diffusion matrix $\mn\Sigma(\phi) = K_0 \phi \otimes K_0 \phi$, which then implies a total cancellation of $\zeta_t$ factors upon rescaling. Amusingly, such a stochastic process is the field-theoretical generalization of \textit{geometric Brownian motion} \cite{Pavliotis:2014}, which is used in stock market modeling under the name of \textit{Black-Scholes equation}.}
\begin{equation}
\partial_\tau \rho + D\mn\Phi \circ \Bnab \rho = \frac{1}{2} K_b \Bnab^2 \rho + \bar \omega \mn\Phi \circ \Bnab \rho,
\end{equation}
where $\tau = \ln b$ has been defined, and $K_b(\bar p) = \mathrm{e}^{-\bar p^2 / b^2}$. In terms of the flowing action,
\begin{equation}
\partial_\tau S_\tau = -\frac{1}{2} K_b \Big[ \Bnab S_\tau \circ \Bnab S_\tau - \Bnab^2 S_\tau \Big] - (D-\omega) \mn\Phi \circ \Bnab S_\tau.
\end{equation}
It can be checked that the GFP eq. (\ref{SRG_gfp}) is a solution to this equation.\footnote{By choosing $\omega$ to be a higher polynomial in $p^2$, we expect that the exotic fixed points discussed in \cite{Wilson:1974mb,Kuti:1994ii} may become accessible.} We see that an explicit time-dependence enters via $K_b(\bar p)$, but in the limit $b \to \infty$, $K_b \to 1$.
\subsection{Correlation functions}
Wilson and Kogut demonstrated a relation between effective $n$-point functions and the bare $n$-point functions in their FRG scheme \cite{Wilson:1973jj}. Recently, the authors of \cite{Sonoda:2019ibh} have noted that this relation is an equivalence between effective correlations and gradient-flowed correlations. In the context of the stochastic approach here, the corresponding relation is given in terms of generators $W(J)$ of connected Green functions by
\begin{equation}
W_t(J) = \frac{1}{2}(J,A_t J) + W_0(f_t J),
\end{equation}
where $A_t$ is given by eq. (\ref{Adef}). This relation is simply derived by shifting $\phi' = \phi - f_t \varphi$ in eq. (\ref{eff_action}) and using the definition of the generator,
\begin{equation}
\mathrm{e}^{W_t(J)} := \frac{1}{Z_0} \int \! \mathscr{D} \phi \; \mathrm{e}^{-S_t(\phi) + (J,\phi)},
\end{equation}
with $Z_0$ being the free theory partition function \cite{Kopietz:2010zz, ZinnJustin:2002ru}. It follows that the 2-point functions of $S_t$ and $S_0$ are related by
\begin{equation}
W^{(2)}_t = A_t + f_t W^{(2)}_0 f_t,
\end{equation}
and higher $n$-points are related by
\begin{equation}
W^{(n)}_t(\chi, \dots, \chi) = W^{(n)}_0(f_t \chi, \dots, f_t \chi)
\end{equation}
in multilinear notation. The function $A_t(x,y)$ is determined by the choice of Langevin equation. In the case $\omega(p) = p^2$, for example, one finds an expression in terms of upper incomplete gamma functions
\begin{equation}
A_t(z,0) = \frac{1}{8 \pi^{d/2} z^{d-2}} \Big[\Gamma\Big(\frac{d}{2}-1, \frac{z^2}{4 a_t^2}\Big) - \Gamma\Big(\frac{d}{2}-1, \frac{z^2}{4 a_0^2}\Big) \Big],
\end{equation}
where the inverse effective cutoff $a_t = \mn\Lambda_t^{-1}$ was used. For large separations $\| z \| \gg a_t$, this quantity decays as a gaussian. The effective propagator is therefore equal to the gradient-flowed propagator asymptotically in $x-y$ (so long as the correlation length $\xi \gg a_t$):
\begin{equation}
\langle \phi(x) \phi(y) \rangle_{S_t} \longrightarrow \langle (f_t\varphi)(x)(f_t\varphi)(y) \rangle_{S_0}.
\end{equation}
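In $d = 3$ the incomplete-gamma expression reduces to error functions via $\Gamma(1/2, x) = \sqrt{\pi} \, \mathrm{erfc}(\sqrt{x})$, which makes the gaussian falloff explicit; a scipy check (the sample values of $z, a_0, a_t$ are arbitrary):

```python
import numpy as np
from scipy.special import gamma, gammaincc, erf

def A_t(z, a0, at, d=3):
    """Incomplete-gamma form of A_t(z,0); scipy's gammaincc is regularized."""
    s = d/2 - 1
    G = lambda x: gamma(s) * gammaincc(s, x)     # Gamma(s, x)
    return (G(z**2/(4*at**2)) - G(z**2/(4*a0**2))) / (8*np.pi**(d/2)*z**(d-2))

def A_t_erf(z, a0, at):
    # d = 3: Gamma(1/2, x) = sqrt(pi) erfc(sqrt(x))
    return (erf(z/(2*a0)) - erf(z/(2*at))) / (8*np.pi*z)

z, a0, at = 2.0, 0.1, 1.0
print(np.isclose(A_t(z, a0, at), A_t_erf(z, a0, at)))   # True
```

Note that $a_t > a_0$ guarantees $A_t > 0$, as befits a variance kernel.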
Note also that if no cutoff function were imposed on the gaussian noise $\eta_t$, there would be a short-distance singularity in $A_t(z,0)$, regardless of whether the bare theory was regulated.
The connected correlators of composite operators are also simply related to their gradient flow counterparts, except that we must be careful to define the generators of their $m$-point functions properly. For example, in the case of $\mathcal{O} = \phi^2$, the generator of correlators $W_t^{(0,m)}$ is defined by \cite{ZinnJustin:2002ru, Amit:1984ms}
\begin{equation}
\mathrm{e}^{W_t(L)} := \frac{1}{Z_0} \int \! \mathscr{D} \phi \; \mathrm{e}^{-S_t(\phi) + \frac{1}{2}(L,\phi^2)}.
\end{equation}
By inserting the definition of $S_t(\phi)$, one may compute the relation between effective and bare generators exactly, as the integrals involved are gaussian. We note, however, that given the simplicity of the Langevin equation, we can just as easily use the explicit solution $\phi_t[\varphi; \eta]$ to compute expectations. For example, the 2-point correlator of the $\phi^2$ composite operator is
\begin{equation}
\langle \phi^2(x) \phi^2(y) \rangle_{S_t}^\mrm{c} = \langle (f_t\varphi)^2(x)(f_t\varphi)^2(y) \rangle_{S_0}^\mrm{c} + 4 A_t(x-y)\langle (f_t\varphi)(x)(f_t\varphi)(y) \rangle_{S_0}^\mrm{c} + 2 A_t(x-y)^2,
\end{equation}
where the connected part of a correlator of local operators $A, \; B$ is defined by
\begin{equation}
\langle A(x) B(y) \rangle^\mrm{c} := \langle A(x) B(y) \rangle - \langle A(x) \rangle \langle B(y) \rangle,
\end{equation}
which again shows the asymptotic equivalence of effective and gradient-flowed quantities.
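As a sanity check on the coefficients in such decompositions, one may specialize to a gaussian bare field, where Wick's theorem fixes the left-hand side to $2(c + A_t)^2$ in terms of the total effective 2-point function $c + A_t$; a sympy sketch (symbols ours):

```python
import sympy as sp

c, A = sp.symbols('c A')   # c = flowed 2-point function, A = A_t(x-y)

lhs = 2*(c + A)**2                 # Wick: <phi^2 phi^2>^c with <phi phi> = c + A
rhs = 2*c**2 + 4*A*c + 2*A**2      # gaussian case of the decomposition above
print(sp.expand(lhs - rhs))        # 0
```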
In sum, what we have found is that the correlation functions of composite operators in the effective theory are equal to the gradient-flowed correlators, up to terms proportional to powers of $A_t(x-y)$, which itself is determined by the drift term $\omega$. Thus, so long as the drift is chosen to imply an exponentially decaying $A_t$, the flowed observables are sufficient to determine the long-distance properties of the effective theory.
\section{Ratio formulas}
The fact that the transition functional $P_t$ satisfies the Fokker-Planck equation implies that observables at finite $t$ satisfy
\begin{equation}
\frac{\partial}{\partial t} \langle \mathcal{O}(\phi) \rangle_{S_t} = \langle \mathcal{L} \mathcal{O}(\phi) \rangle_{S_t},
\end{equation}
where the \textit{generator} $\mathcal{L}$ of the Markov process is a linear differential operator given by
\begin{equation}
\mathcal{L} = \frac{1}{2} \mn\Sigma(\phi,t) \Bnab \circ \Bnab - \mathscr{B}(\phi,t) \circ \Bnab.
\end{equation}
For the flow we have been considering, the generator takes the form
\begin{equation}
\mathcal{L} = \frac{1}{2} K_0 \Bnab \circ \Bnab - \omega \phi \circ \Bnab,
\end{equation}
where $\omega$ is (minus) the laplacian operator. We remark that corresponding equations for the rescaled observables may easily be written.
After a small timestep $\epsilon$, then, successive observables are related by
\begin{equation}
\langle \mathcal{O}(\phi) \rangle_{S_{t+\epsilon}} = \langle \mathcal{O}(\phi) \rangle_{S_{t}} + \epsilon \langle \mathcal{L} \mathcal{O}(\phi) \rangle_{S_t} + O(\epsilon^2).
\end{equation}
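As a single-mode illustration (one OU degree of freedom with drift $\omega$ and noise variance $K$; notation ours), the generator gives $\mathcal{L}\phi^2 = K - 2\omega\phi^2$, and the exact OU variance indeed satisfies the resulting flow equation; a sympy check:

```python
import sympy as sp

t, w, K, v0 = sp.symbols('t omega K v_0', positive=True)

# exact OU variance: v(t) = e^{-2 w t} v0 + K (1 - e^{-2 w t})/(2 w)
v = sp.exp(-2*w*t)*v0 + K*(1 - sp.exp(-2*w*t))/(2*w)

# generator: L phi^2 = K - 2 w phi^2, hence d<phi^2>/dt = K - 2 w <phi^2>
print(sp.simplify(sp.diff(v, t) - (K - 2*w*v)))   # 0
```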
Applied to $n$-point functions, the formula reads\footnote{This formula corresponds to the spin-blocking equation
\begin{equation}
\langle B_b \varphi (m_1) \cdots B_b \varphi (m_n) \rangle_{S} = \langle \varphi (m_1) \cdots \varphi(m_n) \rangle_{S} + O(\varepsilon / \Delta m),
\end{equation}
where $B_b \varphi(m) = b^{-d}\sum_\varepsilon \varphi(m+\varepsilon)$ is the blocking operator, $\varepsilon \leq b$, and $\Delta m$ stands for the differences $|m_i - m_j| \gg b \; \forall i \neq j$. This follows from the usual correlator scaling relations of \textit{rescaled} spins $\varphi_b(n/b) := b^{\Delta_\phi} (B_b \varphi)(n)$,
\begin{equation}
\langle \varphi_b (m_1/b) \cdots \varphi_b (m_n / b) \rangle_{S_b} = b^{n \Delta_\phi} \langle \varphi (m_1) \cdots \varphi(m_n) \rangle_{S} + O(\varepsilon / \Delta m),
\end{equation}
that one finds in textbooks, e.g. \cite{Cardy:1996xt, Amit:1984ms}. See chapter 1 for more details.
}
\begin{equation} \label{npoint_step}
\langle \phi(x_1) \cdots \phi(x_n) \rangle_{S_{t+\epsilon}} = \langle \phi(x_1) \cdots \phi(x_n) \rangle_{S_{t}} + O(\epsilon).
\end{equation}
Writing both sides in terms of the rescaled theory variables, $\phi(x) \propto \mn\Lambda_t^{\Delta_\phi} \mn\Phi(\bar x)$, where the dimensionless position $\bar x$ is defined by $x = \mn\Lambda_t^{-1} \bar x$, one finds
\begin{equation}
\mn\Lambda_{t+\epsilon}^{n\Delta_\phi} \langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} = \mn\Lambda_t^{n\Delta_\phi} \big[ \langle \mn\Phi( \bar y_1) \cdots \mn\Phi(\bar y_n) \rangle_{S_{t}} + O(\epsilon)\big].
\end{equation}
Motivated by the definition of scale changes $b_t = \mn\Lambda_0 / \mn\Lambda_t$ with respect to the bare scale, we introduce the \textit{relative} scale change $b_\epsilon(t) := b_{t+\epsilon} / b_t = \mn\Lambda_t / \mn\Lambda_{t+\epsilon}$. Since the rescaled positions at different scales, $\bar x$ and $\bar y$, refer to the \textit{same} dimensionful position $x$ defined at the bare scale (i.e. in units of $a_0 = \mn\Lambda_0^{-1}$), it follows that $\bar y = b_\epsilon \bar x$, and we may write the previous formula as
\begin{equation}
\langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} = b_\epsilon(t)^{n\Delta_\phi} \big[\langle \mn\Phi(b_\epsilon \bar x_1) \cdots \mn\Phi(b_\epsilon \bar x_n) \rangle_{S_{t}} + O(\epsilon)\big].
\end{equation}
To the extent that we may neglect the $O(\epsilon)$ terms (which we justify in appendix C), we therefore find a familiar RG scaling relation,
\begin{equation} \label{phi_ratios}
\langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} \approx b_\epsilon(t)^{n\Delta_\phi} \langle \mn\Phi(b_\epsilon \bar x_1) \cdots \mn\Phi(b_\epsilon \bar x_n) \rangle_{S_{t}}.
\end{equation}
This formula is the stochastic RG analogue of a spin-blocked correlator scaling relation.
The RG scaling property of correlations of scaling operators $\mathcal{R}_a$ now follows from an argument identical to that of chapter 3, but now we may relate it to gradient flow. Writing the rescaled variables in eq. (\ref{scalingops}) in terms of $\phi$, and by using the MCRG equivalence between expectations of $\phi$ and $f_t \varphi$, one finds that the factors of $b_\epsilon^{n\Delta_\phi}$ cancel, and the remaining gradient-flowed quantities satisfy a ratio formula:
\begin{equation}
\frac{\langle \mathcal{R}_a[b_{t+\epsilon}^{\Delta_\phi} f_{t+\epsilon} \varphi(x)] f_{t+\epsilon} \varphi(x_1) \cdots f_{t+\epsilon} \varphi(x_n) \rangle_{S_0}}{\langle \mathcal{R}_a[b_t^{\Delta_\phi} f_{t} \varphi(x)] f_{t} \varphi(x_1) \cdots f_{t} \varphi(x_n) \rangle_{S_0}} \approx b_\epsilon(t)^{\Delta_a}.
\end{equation}
To reiterate, the position arguments in the numerator and denominator are the \textit{same} physical positions in units of $a_0$. We have therefore produced an alternative derivation of correlator ratio formulas of the sort used in chapter 2 in GFRG, making no use of spin-blocking analogies.
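The ratio formula translates directly into a numerical recipe for extracting $\Delta_a$: measure the correlator ratio at two nearby flow times and take a logarithm. A minimal Python sketch, with purely illustrative numbers standing in for ratios that would in practice be measured on a lattice ensemble:

```python
import numpy as np

def scaling_dimension(ratio, b_eps):
    """Extract Delta_a from a measured correlator ratio ~ b_eps**Delta_a."""
    return np.log(ratio) / np.log(b_eps)

# Illustrative numbers only: a synthetic "measurement" with Delta_a = 1.587
b_eps = 1.05                 # relative scale change between flow times t and t + eps
ratio = b_eps ** 1.587       # stand-in for the measured correlator ratio
print(scaling_dimension(ratio, b_eps))   # recovers Delta_a = 1.587 up to rounding
```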
\section{Concluding remark}
In Wilson and Kogut's 1973 review, they express the hope that ``a longer range possibility is that one will be able to develop approximate forms of the transformation which can be integrated numerically; if so, then one may be able to solve problems which cannot be solved in any other way.'' One may consider the discrete spin-blocking MCRG of Swendsen and the numerical integration of truncated ERGE's as an actualization of their wish. I believe that the framework of stochastic RG presented in this chapter may provide another actualization, perhaps one even closer to their wish, as it constitutes a direct discretization of the ``blocking'' that leads to their constraint functional.
\section{Future directions}
To wrap up our discussion of stochastic RG, we will now speculate on a few applications which will be pursued in future work.
\subsection{Nonlinear RG's} In 1974, Wilson and Bell (WB) defined and studied the difference between linear and nonlinear RG transformations \cite{Bell:1974vv}. A \textit{linear} RG transformation is one that relates blocked and bare spins linearly, an example of which is the usual transformation
\begin{equation}
\varphi_b(x_b) = \frac{b^\Delta}{b^d} \sum_{\varepsilon} \varphi(x + \varepsilon),
\end{equation}
as well as the SRG transformation with drift $\omega(p)$ presented in this chapter. By contrast, a \textit{nonlinear} RG transformation relates blocked and bare spins in a nonlinear way. An example is the majority-rule transformation in the Ising model, whereby one sets the block spins to be $\pm 1$ determined by whichever type is the majority of the block. Another example would be any transformation that involves a projection of the transformed variables back to the original target space of the theory, such as the blocking transformations defined on link variables in gauge theories. An equivalent characterization is that the $n$-point functions of the blocked and bare theory are not linearly related.
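To make the distinction concrete, here is a toy Python sketch of both transformation types on a 1d spin chain (so $d = 1$ in the normalization $b^\Delta/b^d$); the configurations are illustrative only:

```python
import numpy as np

def block_spin_linear(phi, b, delta):
    """Linear transformation: phi_b = (b**delta / b**d) * block sum, here d = 1."""
    blocks = phi.reshape(phi.size // b, b)
    return b**delta / b * blocks.sum(axis=1)

def block_spin_majority(sigma, b):
    """Nonlinear (Ising majority rule): block spin = sign of the block sum."""
    blocks = sigma.reshape(sigma.size // b, b)
    return np.sign(blocks.sum(axis=1))

phi = np.array([0.2, -0.5, 1.0, 0.3])            # toy continuous spins
print(block_spin_linear(phi, b=2, delta=0.5))
sigma = np.array([1, 1, -1, 1, -1, -1])          # toy Ising spins, odd b avoids ties
print(block_spin_majority(sigma, b=3))
```

Note the linear version carries the explicit rescaling factor $b^\Delta$ that must be tuned, while the majority rule has no such parameter.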
The motivation for systematically analyzing these two types of transformation is the following. Linear RG's require the fine-tuning of some parameter, $b^\Delta$ in the block-spin case, in order for the transformation to have a fixed point, whereas the Ising majority-rule and gauge theory blockings seem to automatically produce fixed points without any such tuning. A blocking step $\sigma \to \sigma'$ of the nonlinear transformation WB chose to analyze was implemented via constraint functional:
\begin{equation} \label{WB_NL}
\mathrm{e}^{-S'(\sigma')} = \int \mathscr{D} \sigma \; \exp \Big( - \frac{a}{2} \| \sigma' - b \sigma - c \sigma^3 \|^2 - S_0(\sigma) \Big),
\end{equation}
with sharp cutoffs on the momentum integrals and using the notation $\| \psi \|^2 = (\psi, \psi)$. As opposed to the linear constraint functional, we see that the mean of $\sigma'$ is set to $b\sigma + c \sigma^3$ rather than just $b\sigma$. WB carried out their analysis in perturbation theory, demonstrating that the nonlinear transformation had a fixed point at first order in $c$, which was slightly displaced from the gaussian fixed point with $c=0$. They also reproduced the behavior of the majority-rule transformation: for a certain region of $(b,c)$ parameter space, the transformation had a fixed point with no requirement of tuning $b$. By computing the stability of the new fixed point to perturbations, they determined that the (``nonphysical'') RG eigenvalue associated with field rescalings became negative to first order in $c$, whereas for the linear transformation it was exactly marginal, at the gaussian fixed point, thus explaining the non-necessity of tuning $b$.
The framework of stochastic RG seems particularly well-suited to studying nonlinear RG's both analytically and numerically. This is because, for a discrete time-step $\epsilon$, one can show that the transition functional produced by a LE
\begin{equation}
\partial_t \phi_t = - \mathscr{B}(\phi_t) + \eta_t
\end{equation}
is given by \cite{ZinnJustin:2002ru}
\begin{equation}
P(\phi,t+\epsilon;\varphi,t) = C \exp\Big( - \frac{1}{2\epsilon \Omega} \| \phi - \varphi + \epsilon \mathscr{B}(\varphi) \|_{K_0}^2 \Big),
\end{equation}
where we write $\| \psi \|^2_K = (\psi, K \psi)$. (This result also allows one to derive a path integral representation of the stochastic process, a feature used extensively in stochastic quantization \cite{Damgaard:1987rr}.) The choice $\mathscr{B}(\phi) = -\Delta \phi + c \phi^3$ reproduces an SRG analogue of WB's transformation. Moreover, we see that such a transformation would be the stochastic generalization of the nonlinear gradient flow equations that were studied by Fujikawa and Suzuki \cite{Fujikawa:2016qis}. Now, we remarked in chapter 2 that such flows were typically not renormalized by a renormalization of the bare parameters of the theory. It is not clear how to interpret their result in the context of FRG; nonrenormalizability has been argued to not be a problem in Wilsonian RG. What \textit{is} clear is that the flow must be regularized in any case. The noise is regularized by $K_0$, but the product $\phi^3(x)$ can lead to $\delta(0)$ singularities. A natural choice which guarantees a Markov property is to replace $\phi^3 \to (K_0 \phi)^3$, so that the interacting $\phi^4$ flow equation is
\begin{equation}
\partial_t \phi_t(x) = \Delta \phi_t(x) - c (K_0 \phi_t)^3(x).
\end{equation}
This choice would replace
\begin{equation}
\delta(x-y) \longrightarrow K_0(x-y) = \frac{\mathrm{e}^{-(x-y)^2/4 a_0^2}}{(4\pi a_0^2)^{d/2}}
\end{equation}
upon functional differentiations ($\Bnab \circ \mathscr{B}$) which arise in the Fokker-Planck equation.
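A discretized version of this regulated flow is straightforward to simulate. The sketch below implements one Euler-Maruyama step of the nonlinear Langevin equation $\partial_t \phi = \Delta\phi - c(K_0\phi)^3 + \eta$ on a periodic 1d lattice, with the smearing $(K_0\phi)$ and a $K_0$-regulated noise both applied in Fourier space; the lattice size, couplings, and the $K_0^{1/2}$ noise construction are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def gaussian_kernel_1d(L, a0):
    """Periodic 1d version of K_0(x) = exp(-x^2 / 4 a0^2) / (4 pi a0^2)^(1/2)."""
    x = np.arange(L, dtype=float)
    x = np.minimum(x, L - x)                      # periodic distance
    return np.exp(-x**2 / (4 * a0**2)) / np.sqrt(4 * np.pi * a0**2)

def langevin_step(phi, eps, c, K, Omega, rng):
    """One Euler-Maruyama step of d(phi) = [lap(phi) - c (K*phi)^3] dt + noise."""
    Khat = np.fft.rfft(K)
    Kphi = np.fft.irfft(Khat * np.fft.rfft(phi), n=phi.size)   # (K_0 phi)(x)
    lap = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi       # lattice Laplacian
    drift = lap - c * Kphi**3
    # regulated noise: smear white noise by K_0^(1/2) in Fourier space
    eta = rng.standard_normal(phi.size)
    eta_reg = np.fft.irfft(np.sqrt(np.abs(Khat)) * np.fft.rfft(eta), n=phi.size)
    return phi + eps * drift + np.sqrt(2.0 * Omega * eps) * eta_reg

rng = np.random.default_rng(0)
phi = rng.standard_normal(64)
phi = langevin_step(phi, eps=0.01, c=0.1,
                    K=gaussian_kernel_1d(64, a0=1.0), Omega=1.0, rng=rng)
```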
The existence of new fixed points can be analyzed by expanding the flowing action to first order in $c$ in the ERGE for a nonlinear SRG, which reads:
\begin{equation}
\partial_\tau S_\tau = -\frac{1}{2} K_b \Big[ \Bnab S_\tau \circ \Bnab S_\tau - \Bnab^2 S_\tau \Big] + \mathscr{B}_b \circ \Bnab S_\tau - \Bnab \circ \mathscr{B}_b - D \mn\Phi \circ \Bnab S_\tau.
\end{equation}
After solving for the new fixed point, one can compute its stability as in the analysis of scaling operator perturbations to the GFP under SRG. A problem which arises for $\phi^4_d$ interacting flow is that, after rescaling, the drift takes the form
\begin{equation}
\mathscr{B}_b(\mn\Phi) = -\bar \partial^2 \mn\Phi(p) + c b^{4-d} (K_b \mn\Phi)^3.
\end{equation}
For $d=3$, we see that the presence of $b$ prevents writing down an equation for the fixed point action. Two options suggest themselves as possible solutions: (1) Consider only canonically marginal flow actions $\hat S$ (where $\mathscr{B}_b = \Bnab \hat S$), e.g., $\phi^6$ rather than $\phi^4$ in 3d, and (2) Give time-dependence to $c$ by redefining $c \to c b^{d-4}$, but this would spoil the Markov property. However, it is not obvious that the Markov property is a necessity in RG transformations, rather than merely a convenience. The extension to an interacting bare $\phi^4_3$ theory presents a further challenge to the analysis, but the possibility of numerical implementation presents itself as a nonperturbative means of performing the study. And of course, one could apply the derivative expansion of FRG to study the problem analytically, as well. The viability of these options is being pursued by the author.
\subsection{Stochastic MCRG} The continuity of SRG naturally suggests a method for implementing a smooth counterpart to the Swendsen equations described in chapter 1. One begins with the observation that, for any observable $\mathcal{O}$, the path integral representation of its expectation value implies
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} t} \langle \mathcal{O}(\phi) \rangle_{S_t} = - \langle \mathcal{O}(\phi) \dot S_t(\phi) \rangle_{S_t}^\mrm{c},
\end{equation}
where $S_t$ is the effective action. Writing it as a linear combination of (volume-averaged) operators, and assuming all time-dependence is confined to the couplings, we have
\begin{equation}
S_t(\phi) = \sum_i g_i(t) S_i(\phi).
\end{equation}
This leads to
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} t} \langle \mathcal{O}(\phi) \rangle_{S_t} = - \sum_j \dot g_j(t) \langle \mathcal{O}(\phi) S_j(\phi) \rangle_{S_t}^\mrm{c}.
\end{equation}
Letting $\mathcal{O} = S_i$, we find a \textit{continuous} cousin of the Swendsen equations, eq. (\ref{Swendsen_Eqns}),\footnote{Actually, the derivation leading to the Swendsen equation can be carried over line by line, by differentiating with respect to couplings rather than $t$. Both approaches would be interesting to pursue.}
\begin{equation}\label{cont_swendsen}
\frac{\mathrm{d}}{\mathrm{d} t} \langle S_i(\phi) \rangle_{S_t} = - \sum_j \dot g_j(t) \langle S_i(\phi) S_j(\phi) \rangle_{S_t}^\mrm{c}.
\end{equation}
The expectation values on either side can be measured in a lattice simulation using the stochastic MCRG equivalence,
\begin{equation}
\langle \mathcal{O}(\phi) \rangle_{S_t} = \langle \mathbb{E}_\mu \big[ \mathcal{O}(\phi_t[\varphi;\eta]) \big] \rangle_{S_0}.
\end{equation}
The derivative $\partial_t \langle \mathcal{O} \rangle$ can be measured either by discretization of the $t$-derivative, or by using the Markov property and computing $\langle \mathcal{L} \mathcal{O} \rangle$. Thus, eq. (\ref{cont_swendsen}) enables one to measure the beta functions $\dot g_i$ of the couplings in the action by inverting the matrix of $\langle S_i S_j \rangle^\mrm{c}$ correlations against the vector of derivatives $\partial_t \langle S_i \rangle$. SRG would offer a serious advantage over conventional MCRG, since the latter is necessarily confined to a few blocking steps on any given lattice size, whereas only the integrator step size $\epsilon$ limits the continuity of SRG.
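The linear-algebra content of this inversion is elementary. A minimal Python sketch with synthetic numbers; in practice the correlations and derivatives would be measured on the lattice:

```python
import numpy as np

def beta_functions(dS_dt, S_corr):
    """Invert  d<S_i>/dt = - sum_j gdot_j <S_i S_j>^c  for the gdot_j."""
    return -np.linalg.solve(S_corr, dS_dt)

# Toy 2-operator example (illustrative numbers only)
S_corr = np.array([[2.0, 0.3],
                   [0.3, 1.0]])          # stand-in for measured <S_i S_j>^c
gdot_true = np.array([0.1, -0.2])
dS_dt = -S_corr @ gdot_true              # synthetic "measured" derivatives
print(beta_functions(dS_dt, S_corr))     # recovers [0.1, -0.2] up to rounding
```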
One is generally more interested in the flow of the scaling variables $u_a$ corresponding to scaling operators $\mathcal{R}_a$, however, since one expects that
\begin{equation} \label{scaling_deriv}
b_t \frac{\mathrm{d} u_a(t)}{\mathrm{d} b_t} = y_a u_a(t).
\end{equation}
The derivatives of the $u_a$ may be constructed from knowledge of $\dot g_i(t)$ and $g_i(t)$ as follows. The scaling variables are linear combinations of couplings $g_i$:
\begin{equation} \label{scaling_LC}
u_a = \sum_i c_{ai} g_i.
\end{equation}
The matrix $\boldsymbol C = [c_{ai}]$ can be computed numerically by diagonalization of the mixed action operator correlations, since
\begin{equation}
\langle S_i S_j \rangle^\mrm{c} = \sum_{ab} c_{ai} \langle \mathcal R_a \mathcal R_b \rangle^\mrm{c} c_{bj},
\end{equation}
and $\langle \mathcal{R}_a \mathcal{R}_b \rangle^\mrm{c}$ is diagonal near the IRFP. A caveat is that these expectations are true only of the rescaled effective theory, so that $S_i(\mn\Phi) = b_t^{m_i\Delta_\phi + \ell_i - d} S_i(\phi_t)$, where $m_i$ is the number of factors of $\phi$ in $S_i$, and $\ell_i$ the number of derivatives. Thus, one must input a value of $\Delta_\phi$, as was necessary in the diagonalization method in chapter 2. To eliminate such a systematic, it is therefore desirable to have determined if the nonlinear SRG can be achieved without needing to tune the rescaling factor, as suggested by Wilson and Bell. Alternatively, one could attempt to measure $\Delta_\phi$ using the correlator method of chapter 2.
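A minimal numerical sketch of this diagonalization step, with a toy symmetric correlation matrix standing in for measured data; since $\langle \mathcal{R}_a \mathcal{R}_b \rangle^\mrm{c}$ is diagonal near the IRFP, the rows of the eigenvector matrix play the role of the mixing coefficients $c_{ai}$:

```python
import numpy as np

# Toy matrix of connected action-operator correlations (illustrative only)
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])

evals, evecs = np.linalg.eigh(G)   # G = V diag(evals) V^T
C = evecs.T                        # rows give c_{a i}:  u_a = sum_i C[a, i] g_i
D = C @ G @ C.T                    # plays the role of diag(<R_a R_b>^c)
print(np.round(D, 12))             # off-diagonal entries vanish up to rounding
```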
One still needs values of the flowing couplings $g_i(t)$ in order to make use of eqs. (\ref{scaling_deriv},\; \ref{scaling_LC}). This can be done by numerically integrating the coupling beta functions obtained from eq. (\ref{cont_swendsen}) up to the desired flow time:
\begin{equation}
g_i(t) = g_i(0) + \int_0^t \mathrm{d} s \; \dot g_i(s),
\end{equation}
knowing that the $t=0$ couplings are the bare couplings one simulates with. The implementation of this procedure is therefore somewhat involved. One needs a fine discretization of the flow time to reduce numerical integration errors in the $g_i(t)$, one needs to know the exact functional form of $b_t$ on the lattice, and one needs to input a value for $\Delta_\phi$ (at least in the linear RG case). Further, a truncation error is introduced by necessarily considering only a finite number of action operators $S_i$, although this would be systematically improvable. Last, and certainly not least, one must perform a double-ensemble average to compute the stochastic expectation values, coming from the integration of a Langevin equation and the bare ensemble average, thereby producing two separate sources of statistical error. On the bright side, the elements of the ensemble of Langevin integrations would be automatically uncorrelated, making \textit{those} errors simple to calculate.
\chapter{Summary}
In this work we have developed new, continuous RG transformations, in the continuum and on the lattice, based on gradient flow and the Langevin equation, and explained their relationship with Wilsonian RG in the form of block-spin RG and functional RG. In fact, RG methods based on GF or LE's essentially constitute the implementation of functional RG on the lattice.
In chapter 2, we saw how to define the GFRG transformation, and described how to measure scaling dimensions of local operators in lattice simulations by virtue of the correlator scaling laws associated with the RG transformation. We applied the method in two scalar field theories, $\phi^4_3$ and $\phi^4_2$, and in a 12-flavor SU(3) gauge theory in four dimensions, thereby displaying the general viability of GFRG not only across various physical systems with different field content, but also across various spacetime dimensions. In the 3d scalar model, we produced a numerical determination of the leading four scaling dimensions of the theory, including the $\phi^3$ scaling dimension $\Delta_3$. In the gauge theory, we produced an estimate of the fermion mass anomalous dimension, as well as a first lattice determination of the baryon anomalous dimension. We noted that the method will be applied to 3-dimensional noncompact QED in future work, and that it is already being applied in some interesting 4d systems by other groups, as well.
In chapter 4, we demonstrated an equivalence between functional RG transformations and stochastic processes, based on the observation that functional RG equations for effective actions have the same form as Fokker-Planck equations which govern stochastic processes. The viability of the stochastic RG (SRG) transformation was checked in $\phi^4_3$ theory, where the Wilson-Fisher fixed point was observed in perturbation theory. This result furthermore implied that, from the perspective of stochastic processes, the stationary distribution of the field theoretical Ornstein-Uhlenbeck process (with $\phi^4$ theory initial condition) is non-gaussian, up to a rescaling. An equivalence of long-distance correlation functions of the SRG effective theory and gradient-flowed correlators was found, which permitted a reinterpretation of the GFRG transformation from chapter 2 as an implication of the SRG transformation.
Lastly, we speculated on a few possibilities for future work. SRG seems to provide a natural framework in which to study nonlinear RG transformations, which may be practically more useful than the linear RG's we simulated in chapter 2 if it indeed eliminates the requirement of a finely-tuned rescaling factor $b_t^\Delta$ for the field variables being transformed. The continuity of SRG furthermore suggests that a continuous version of Swendsen's MCRG can be carried out, which would allow for the measurement of the RG eigenvalues $y_a$ in a manner distinct from the correlator ratio method of chapter 2.
\bibliographystyle{JHEP}
\section{Introduction}
The explosive growth of wireless data traffic increases the computation burden on core networks and local devices, as well as the pressure on wireless communication systems. These new challenges motivate new techniques, e.g., edge computing \cite{9606720}, which offloads the computation burden from the core network and local devices to the edge servers (ESs), and semantic communications \cite{qin2022semantic}, which transmit only the essential task-related information to reduce the wireless data traffic.
The limited resource in wireless communications has become the bottleneck for data transmission between local devices and ESs, which significantly affects the performance of edge computing. Some works have investigated resource allocation techniques for data transmission by optimizing the transmit power~\cite{9210812}, user scheduling~\cite{8851249}, and bandwidth allocation~\cite{9194337} for edge intelligence, aiming to reduce the communication latency and improve the computational capability. Additionally, Zhou \emph{et al.}~\cite{9556549} consider the communication and computation resources of the local devices and ESs, and jointly optimize the computational capability and transmit power of local users to realize energy-efficient resource allocation. However, the above works are based on stable channel state information (CSI), which is impractical for 5G-and-beyond communication systems. Fast-varying channels and limited bandwidth place higher demands on communication techniques that are robust to channel variations and reduce the data traffic of the offloading tasks.
Recently, semantic communications have shown the potential to tackle these challenges. Yang \emph{et al.}~\cite{yang2022semantic} have investigated how edge intelligence can be enhanced with the semantic extraction procedure. Compared with traditional communications, semantic communication is more robust to channel variations because it transmits and receives the meaning of the raw data instead of converting the data to bits at the transmitter, which would require a high signal-to-noise ratio (SNR) to recover the data accurately. Some works have made progress on the applications of deep learning based semantic communications to text, image, audio, video, and multi-modal data~\cite{qin2022semantic}. Particularly, Xie \emph{et al.}~\cite{9830752} have verified that some specific tasks can be performed according to the retrieved semantic information, which inspires us to compress the local tasks into semantic information and transmit the compressed semantic information to the ESs, thereby enhancing the robustness of the communication system and significantly reducing the data traffic.
In this paper, a semantic-aware task offloading system is proposed. Tasks with a high local computation load can be converted into semantic information with a low communication load and offloaded to the ESs at low computation cost. Furthermore, energy-efficient resource management is adopted to jointly manage the communication cost and the computation cost, thus reducing the latency and energy consumption of performing local tasks, which are key requirements for realizing edge intelligence in 6G communications. To demonstrate the effectiveness of the proposed resource management for the semantic-aware task offloading system, the machine translation task is adopted as an example. However, to realize the aforementioned techniques, there are some challenges that we need to tackle.
\begin{itemize}
\item {How to measure the computational cost for a specific semantic task? }
\item {Why do we need to investigate the semantic-aware task offloading system? Is it necessary to offload the machine translation tasks to the edge server?}
\item {How to jointly optimize the computational cost and communication cost to improve the performance of executing the machine translation tasks?}
\end{itemize}
The contributions of this paper address the above challenges and are concluded as follows.
\begin{enumerate}
\item {In this paper, a semantic-aware task offloading system is considered. Instead of offloading the tasks to the ESs directly, we extract the semantics of tasks and transmit the low-dimensional information to the ESs based on the deep learning enabled semantic communication (DeepSC) systems. }
\item {We use the machine translation tasks as an example to implement the semantic-aware task offloading system. The machine translation task is performed by the bidirectional encoder representations from transformers (BERT) model~\cite{vaswani2017attention}, and we quantify the computational energy consumption of the BERT model.\footnote{For the BERT decoder, the number of attention layers is twice as many as in the encoder, which means the computational cost of the decoder is higher than that of the encoder, thereby verifying the necessity of offloading the tasks to the ESs.}}
\item {The joint management of computation and communication resources is performed distributedly using the proximal policy optimization based multi-agent reinforcement learning (MAPPO) algorithm at the local user equipments (UEs). The local UEs can make decisions based on local information, i.e., GPS location, battery life, and task queue, thereby reducing the computational pressure on the BS and the data traffic. To the best of our knowledge, this is the first paper that performs the joint optimization of the computation resource and communication resource for semantic-aware task offloading systems.}
\end{enumerate}
\section{System Model}
\subsection{Network Model}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Semantic_RA.eps}
\caption{System model for semantic task offloading process.}
\label{system_model}
\end{figure}
To reduce the transmission cost of the offloading tasks from local users to the edge server, we adopt the DeepSC model proposed by Xie~\emph{et al.}~\cite{9252948}. The semantic communications are implemented based on the pre-trained semantic encoder and decoder. We define the set of UEs as ${\cal I} = \{UE_i|i=1,\dots,I\}$. The $i$-th UE can associate with the edge server (ES) to offload the machine translation tasks, i.e., the association coefficient $\rho_i = 1$, or perform the task locally, in which case $\rho_i = 0$.
Generally, the text-based semantic task offloading process includes several steps. Firstly, $UE_i$ uploads the semantic task $T_i = \{d_i, l_i, \tau_\text{max}\}$ to the ES, where $d_i$ denotes the task queue length, i.e., the number of sentences in the proposed system model, $l_i$ is the average hardware requirement per sentence\footnote{Note that we consider the floating-point operations per second (FLOPS) computing capability for BERT models, which are usually executed on the graphics processing unit (GPU).}, and $\tau_\text{max}$ is the maximum latency constraint for a task. Then the computation of the semantic tasks is performed by the ESs. In the third phase, the results of the semantic tasks are downloaded from the ESs to the UEs. The process is illustrated in Fig.~\ref{system_model}.
\subsection{Transmission Model}
Consider an orthogonal frequency-division multiple access (OFDMA) based semantic-aware network with the ES. The ES serves the user group ${\cal I}$ with the bandwidth $BW$. The overall bandwidth is equally divided into $I$ subbands with bandwidth $W_i = BW/I, \forall i \in \{1,\dots, I\}$, and allocated to each UE for the possible transmission of semantic tasks. Suppose that all channels consist of large-scale fading and small-scale Rayleigh fading, and the signal-to-noise ratio (SNR) of $UE_i$ can be denoted as
\begin{equation}
\gamma_{i} = \frac{\rho_{i}{p_i}{g_i}|{h_i}|^2}{W_i \sigma^2},
\end{equation}
where $p_i$ is the transmit power, $g_i$ represents the large-scale channel gain, $h_i \thicksim \mathcal{CN}(0,1)$ is the Rayleigh fading coefficient for the sub-channel assigned to $UE_i$, and $\sigma^2$ is the noise power spectral density.
Unlike the bit-stream data rate for the traditional Shannon paradigm, the semantic rate is based on the semantic unit and the amount of semantic information. For the pre-trained DeepSC model, the semantic similarity can be defined as $\epsilon_i = \epsilon(k_i, \gamma_i)$, where $k_i$ denotes the average number of semantic symbols used for each word. We define the average semantic information per sentence as $A^s_i$ and the average word length per sentence as $A^w_i$ according to~\cite{9763856}, and the semantic rate can be expressed by
\begin{equation}
\Gamma_i = \frac {W_i A^s_i \epsilon_i}{A^w_ik_i}.
\end{equation}
Note that $A^s_i$, $A^w_i$ and $k_i$ are the parameters that depend on the pre-trained DeepSC model and the type of source, and can be considered as constant values during the training process.
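For concreteness, the SNR and semantic-rate expressions can be evaluated directly; the parameter values below are illustrative only and not taken from the paper:

```python
def snr(rho, p, g, h_sq, W, sigma2):
    """gamma_i = rho * p_i * g_i * |h_i|^2 / (W_i * sigma^2)."""
    return rho * p * g * h_sq / (W * sigma2)

def semantic_rate(W, A_s, A_w, k, eps):
    """Gamma_i = W_i * A^s_i * eps_i / (A^w_i * k_i)."""
    return W * A_s * eps / (A_w * k)

# Illustrative numbers only: BW = 1 MHz shared equally by I = 4 UEs
W = 1e6 / 4                      # subband bandwidth W_i = BW / I
rate = semantic_rate(W, A_s=20.0, A_w=15.0, k=4.0, eps=0.9)
print(rate)                      # 75000.0 semantic units per second
```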
\subsection{Computational Model}
For the tasks performed locally, i.e., $\rho_i = 0$, the computational latency can be expressed by
\begin{equation}
t^{lc}_i = (1-\rho_i)\frac{d_i l_i}{c_i},
\end{equation}
where $d_i$ and $l_i$ are the task-related parameters defined above, $c_i = n_if_i$ is the theoretical GPU computation capability, $f_i$ is the GPU clock frequency, and $n_i$ is the number of floating-point operations that the local GPU of $UE_i$ can execute per cycle. Hence, the energy consumption for performing semantic tasks locally can be denoted as $E^{lc}_i = \alpha t^{lc}_i (f_i)^3$, where $\alpha$ is the local energy coefficient related to the structure of the computation chips.
For the tasks offloaded to ESs, i.e., $\rho_i = 1$, $UE_i$ is associated with the ES, and the machine translation tasks are offloaded to the ES for remote computation. The transmit latency is expressed by
\begin{equation}
t^{ut}_i={\rho_id_iA^s_i}/{\Gamma_i}={\rho_id_iA^w_ik_i}/{W_i\epsilon_i},
\end{equation}
while the energy consumption of communication can be expressed by $E^{ut}_i=p_it^{ut}_i$. For the machine translation tasks performed remotely, the time latency can be further expressed by
\begin{equation}
t^{rc}_i = \frac{d_i l_i}{c^{rc}/\sum \nolimits_{{i \in I}} \rho_i},
\end{equation}
where $c^{rc} = n^{rc}f^{rc}$ is the GPU computation capability of the ES, $n^{rc}$ is the number of floating-point operations that the remote GPU can execute per cycle, and $f^{rc}$ is the clock frequency of the remote GPU. Thereby, the energy consumption of the ES can be expressed by
$E^{rc}_i = \beta t^{rc}_i ({f^{rc}}/{\sum \nolimits_{{i \in I}}\rho_i})^3$, where $\beta$ is the remote energy related coefficient.
The overall task execution latency for $UE_i$ can therefore be expressed by
\begin{equation}
t_i = \rho_i (t^{ut}_i+t^{rc}_i+t^{dl}_i)+(1-\rho_i)t^{lc}_i,
\end{equation}
where $t^{dl}_i$ denotes the result downloading latency. Note that the results of the semantic tasks are relatively small and the downstream bandwidth is higher, thus the downloading latency can be treated as a small constant. Considering that there is a sufficient power supply at the ES side, our objective is to prolong the battery lifetime of the local users, which can be converted to minimizing the energy consumption at the UE side
\begin{equation}
E(\boldsymbol \rho, \boldsymbol p, \boldsymbol f) = \sum \limits_{i\in I}E_i=\sum \limits_{i\in I}(E^{lc}_i+E^{ut}_i),
\end{equation}
where $\{\boldsymbol \rho, \boldsymbol p, \boldsymbol f\} = \{\rho_i, p_i,f_i\}, \forall i \in \{1,\dots, I\}$. The energy minimization problem is formulated as
\begin{mini!}|l|
{\{\boldsymbol \rho, \boldsymbol p, \boldsymbol f\}}{E(\boldsymbol \rho, \boldsymbol p, \boldsymbol f)}
{\label{eq20}}{(\textbf{P0})}
\addConstraint{\epsilon_i \geq \epsilon_{min} \label{objective:c1} }
\addConstraint{\rho_i = \{0,1\} \label{objective:c2} }
\addConstraint{p_i \leq p_\text{max} \label{objective:c3}
}
\addConstraint{f_i \leq f_\text{max} \label{objective:c4}
}
\addConstraint{t_i \leq \tau_\text{max} \label{objective:c5}
}
\addConstraint{E_i \leq E^\text{max}_i \label{objective:c6}
}
\addConstraint{i = 1,2,\dots, I, \label{objective:c7}
}
\end{mini!}
where $\epsilon_{min}$ represents the minimum semantic accuracy requirement, $p_\text{max}$ is the maximum transmit power constraint, $f_\text{max}$ is the maximum boost GPU clock frequency, and $E^\text{max}_i$ is the maximum battery capacity of $UE_i$, which guarantees the battery life during the task processing time. If all constraints are satisfied, the task can be executed successfully, and the task queue is updated as $d_i \rightarrow d_i -1$.
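The latency and energy model above can be summarized in a short Python sketch; all parameter values are hypothetical, chosen only to illustrate the local-versus-offload trade-off, and the download latency is treated as the small constant discussed earlier:

```python
def task_cost(rho, p, f, d, l, n_i, A_w, k, eps_sem, W, c_rc, n_off, t_dl, alpha):
    """Latency t_i and UE-side energy E_i for one task, following the model above.

    rho: offloading decision; n_off: number of UEs currently offloading."""
    if rho == 0:                              # local execution
        t = d * l / (n_i * f)                 # t^lc, with c_i = n_i * f_i
        E = alpha * t * f**3                  # local computation energy E^lc
    else:                                     # offload to the ES
        t_ut = d * A_w * k / (W * eps_sem)    # semantic upload latency
        t_rc = d * l / (c_rc / n_off)         # remote computation latency
        t = t_ut + t_rc + t_dl
        E = p * t_ut                          # the UE pays only the transmit energy
    return t, E

# Illustrative comparison (all numbers hypothetical)
t_loc, E_loc = task_cost(0, 0.0, 1.0e9, 10, 1.0e9, 2, 15, 4, 0.9, 2.5e5,
                         1.0e13, 1, 0.01, 1e-27)
t_off, E_off = task_cost(1, 0.1, 0.0, 10, 1.0e9, 2, 15, 4, 0.9, 2.5e5,
                         1.0e13, 1, 0.01, 1e-27)
print(t_loc, E_loc, t_off, E_off)
```

With these toy numbers, offloading reduces both latency and UE energy, consistent with the motivation for semantic task offloading.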
Traditionally, the computation offloading strategy can be determined by Lyapunov optimization, which decouples the joint optimization problem into sequential per-stage deterministic subproblems~\cite{8638800}. However, these algorithms are usually executed at the central BS or the ESs, which increases the communication cost and the computation pressure of the center. Additionally, it is unreasonable for the center to control the transmit power and GPU clock frequency of massive UEs. To solve the online optimization problem in a distributed manner and reduce the complexity of the joint optimization algorithm, we propose the MAPPO algorithm, which is introduced in Section III.
\section{Multi-agent Proximal Policy Optimization for Resource Allocation}
In this section, we first design the state, the observation state, the action space, and the reward for the algorithm. Then an advanced network structure and the training process for the MAPPO are proposed. Each UE is considered as an RL agent, which can make decisions individually according to the local RL model.
\subsection{MAPPO Components Definition}
To make the task offloading decision while achieving the objective and satisfying the constraints, the UE needs to consider its battery life, the channel state information, and the computation requirement of the tasks. However, as a local agent, the UE cannot observe the global state. Mathematically, the local observation space for $UE_i$ can be expressed as $o_i = \{\boldsymbol{s}_i, E^\text{max}_i, T_i\}$, where $\boldsymbol{s}_i$ is the location of $UE_i$. According to the local observation, each UE needs to determine whether the task should be offloaded.
For the tasks offloaded to edge servers, the UEs need to determine the transmit power $p_i$, while the frequency of the local GPU $f_i$ is set to a negligible value. Similarly, for the tasks executed locally, the transmit power is set to a negligible value. The action space includes three components, i.e., the offloading policy, the offloading transmit power, and the local computation frequency, and can therefore be expressed as $a_i = \{\rho_i, p_i, f_i\}$. The actions of all UEs form a joint action, which can be mathematically denoted by ${\cal A} = \{a_i|i=1,\dots,I\}$.
At time step $t$, each UE is given a global reward $r_t$ based on the joint action ${\cal A}$. The reward is designed based on the objective function. As we aim to minimize the energy cost of the UEs, the value of the reward should be negatively correlated with the energy consumption, which can be denoted by
\begin{equation}
r_t= \xi_t - E(\boldsymbol \rho, \boldsymbol p, \boldsymbol f; t),
\label{eq22}
\end{equation}
where $\xi_t$ is a positive constant value. When the tasks are completed or the maximum time step is reached, an extra reward is given to the UEs to evaluate the execution performance of the whole task set. For the tasks finished before the maximum time step, a positive reward is given for completing the tasks in advance; otherwise, a punishment is given. Mathematically, the extra reward can be expressed by
\begin{equation}
r_{T}=
\begin{cases}
\xi_T I(T-t_0),&t_0\leq T;\\
- \xi_T \sum \nolimits_{i\in I} d_i(T),&\text{otherwise},
\end{cases}
\label{eq23}
\end{equation}
where $t_0$ is the task completion time step and $d_i(T)$ represents the length of the task queue $d_i$ at the maximum time step $T$. Hence, the total reward for a whole training episode can be expressed as $R(\tau) = \sum_{t=0}^{T} \gamma^t r_t + r_{T}$, where $\gamma \in (0,1)$ is the discount factor, representing how much future rewards affect the current state, and $\tau$ is a sequence of states and actions denoting the trajectory in the environment.
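As a minimal numerical sketch of the reward design above (the constants and inputs below are illustrative placeholders, not values from the paper), the per-step reward, the terminal reward, and the discounted episode return can be written as:

```python
def step_reward(xi_t, energy_t):
    """Per-step reward r_t = xi_t - E(rho, p, f; t)."""
    return xi_t - energy_t

def extra_reward(xi_T, num_ues, T, t0=None, queue_lengths=None):
    """Terminal reward r_T: bonus for early completion, penalty otherwise."""
    if t0 is not None and t0 <= T:
        return xi_T * num_ues * (T - t0)
    # Punish by the total remaining queue length at the maximum time step.
    return -xi_T * sum(queue_lengths)

def episode_return(step_energies, xi_t, r_T, gamma=0.95):
    """R(tau) = sum_t gamma^t * r_t + r_T."""
    return sum(gamma**t * step_reward(xi_t, e)
               for t, e in enumerate(step_energies)) + r_T
```

This mirrors the piecewise definition of $r_T$: a positive bonus proportional to the time saved when $t_0 \leq T$, and a queue-length penalty otherwise.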
\subsection{Training Process for the Local Model}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{network_structure.eps}
\centering
\caption{The proposed MAPPO network structure.}
\label{network_structure}
\end{figure}
Generally, all of the deep reinforcement learning (RL) algorithms with continuous action spaces can be applied to (\textbf{P0}), e.g., deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), asynchronous advantage actor-critic (A3C), PPO, etc. In this paper, we apply PPO because of its robustness and good performance~\cite{PPO_2017}. The proposed local PPO network structure is illustrated in Fig. \ref{network_structure}.
The local models are trained using the data in the local data set, which is called the replay memory in RL. At each training step $t$, the experience $e_{t, i} = (o_{t, i}, a_{t, i}, r_t, o_{t+1, i})$ acquired by $UE_i$ is stored in the $i$-th local replay memory ${\cal B}_i$. The objective of RL is to maximize the expected reward over trajectories, which can be expressed by
\begin{equation}
J\left (\pi_\theta\right) =
\mathbb{E}_{\tau\sim \pi_\theta(\tau)}\left [R(\tau)\right]= \int_{\tau} P(\tau|\pi_\theta) R(\tau),
\label{objective_function}
\end{equation}
where $\pi_\theta$ is the parameterized policy, $P(\tau|\pi_\theta) =P (o_0) \prod_{t=0}^{T-1} P(o_{t+1, i} | o_{t,i}, a_{t,i}) \pi_\theta(a_{t,i} | o_{t,i})$ represents the probability of the trajectory $\tau$, $P(o_{t+1, i} | o_{t,i}, a_{t,i})$ is the state transition probability, $\pi_\theta(a_{t,i} | o_{t,i})$ is the action choice probability, and $P (o_0)$ is the probability of the initial state $o_0$. To optimize the policy, the policy gradient needs to be calculated, i.e., $\theta_{j+1} = \theta_j + \alpha \left. \nabla_{\theta} J(\pi_{\theta}) \right|_{\theta_j}$, where $\alpha$ is the learning rate (step size). The gradient of the policy can be expressed as
\begin{equation}
\nabla_{\theta}J(\pi_{\theta})\!=\!\mathbb{E}_{\tau \sim \pi_{\theta}(\tau)}\left[{\sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_{t,i} |o_{t,i}) A^{\pi_\theta}(o_{t,i},a_{t,i})}\right],
\label{policy_gradient}
\end{equation}
where $A^{\pi_\theta}(o_{t,i},a_{t,i})$ is the advantage function, which we explain in detail later.
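The sample-based estimate of the policy gradient in Eq. (\ref{policy_gradient}) and one gradient ascent step can be sketched as follows (the array shapes are illustrative assumptions; in practice the gradients come from automatic differentiation):

```python
import numpy as np

def policy_gradient_estimate(grad_logps, advantages):
    """Monte-Carlo estimate of grad J: sum_t grad log pi(a_t|o_t) * A_t.

    grad_logps: (T, d) array of per-step score functions;
    advantages: (T,) array of advantage estimates.
    """
    return (grad_logps * advantages[:, None]).sum(axis=0)

def ascent_step(theta, grad, alpha):
    """theta_{j+1} = theta_j + alpha * grad J(theta_j)."""
    return theta + alpha * grad
```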
To evaluate whether an action $a$ is good, we use the action-value function $Q^{\pi_\theta}(o,a) = \mathbb{E}_{\tau\sim \pi_\theta(\tau)}\left[R(\tau)| o_0 = o, a_0 = a\right]$, which is the expected reward for taking action $a$ in state $o$. However, this function also depends on the state: the action-value of an optimal policy in a bad state may be lower than that of an arbitrary action in a better state, so the state-value function $V^{\pi_\theta}(o) = \mathbb{E}_{\tau\sim \pi_\theta(\tau)}\left[R(\tau)| o_0 = o\right]$ needs to be taken into consideration. Instead of comparing action-value functions directly, the advantage of an action $a$ over the other actions in state $o$ can be expressed by
\begin{equation}
A^{\pi_\theta}(o_{t,i},a_{t,i}) = Q^{\pi_\theta}(o_{t,i},a_{t,i}) - V^{\pi_\theta}(o_{t,i}).
\label{action_value_function}
\end{equation}
It is noted that the action-value function can be expressed in the temporal-difference form $Q^{\pi_\theta}(o_{t,i},a_{t,i}) = r_t + \gamma V^{\pi_\theta}(o_{t+1, i})$. However, the action-value and state-value functions cannot be acquired directly from the experience $e_{t,i}$; in deep RL approaches, an NN is used to estimate the state-value function. In this way, the estimated advantage function is $\hat{A}^{\pi_\theta}(o_{t,i},a_{t,i}) = \delta^V_{t,i} = r_t + \gamma \hat{V}^{\pi_\theta}(o_{t+1,i}) -\hat{V}^{\pi_\theta}(o_{t,i})$. However, the bias of this one-step estimate is high, which restricts the training and convergence performance. To overcome this issue, generalized advantage estimation (GAE)~\cite{GAE_paper} can be applied to estimate the advantage over multiple steps and strike a tradeoff between bias and variance. The GAE advantage function is denoted by
\begin{equation}
A^{\text{GAE}}(o_{t,i},a_{t,i}) = \sum \limits _{l=0}^{T-t}(\lambda\gamma)^l\delta^V_{t+l,i},
\label{GAE_function}
\end{equation}
where $\lambda \in (0,1]$ is the discount factor for reducing the variance of the future advantage estimation.
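Eq. (\ref{GAE_function}) can be computed efficiently with a backward recursion over the one-step TD errors $\delta^V_{t,i}$. A minimal sketch (assuming the value estimates include a bootstrap entry for the final state):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.95, lam=0.9):
    """Compute A_t = sum_l (lambda*gamma)^l * delta_{t+l} by backward recursion.

    rewards: length-T array of r_t; values: length-(T+1) array of V-hat,
    including the bootstrap value V-hat(o_T) at the end.
    """
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    deltas = rewards + gamma * values[1:] - values[:-1]  # delta^V_t
    adv = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv
```

Setting $\lambda = 0$ recovers the one-step estimate $\delta^V_{t,i}$, while $\lambda = 1$ gives the full Monte-Carlo-style sum, which illustrates the bias-variance tradeoff mentioned above.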
The actor network is optimized by maximizing $L_{AC} = \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)}\left[ratio_{t,i} \times A^{\text{GAE}}(o_{t,i},a_{t,i})\right]$, where $ratio_{t,i} =\frac{\pi_{\theta}(a_{t,i} |o_{t,i})}{\pi_{\theta_{\text{old}}}(a_{t,i} |o_{t,i})}$ is the probability ratio between the new and old policies. However, a too large ratio could lead to an excessively large policy update; hence this ratio is clipped to restrict the update. The clipped actor objective function is expressed by
\begin{equation}
L^{\text{Clip}}_t\!=\!\min\left(
ratio_{t,i}\!\times\!A^{\text{GAE}}(o_{t,i},a_{t,i}),\!
g(\epsilon,\!A^{\text{GAE}}(o_{t,i},a_{t,i}))\right),
\label{clipped_loss}
\end{equation}
where
\begin{equation}
g(\epsilon, A) =
\begin{cases}
(1 + \epsilon) A, &A \geq 0;\\
(1 - \epsilon) A, &A < 0,
\end{cases}
\label{clip}
\end{equation}
in which $\epsilon$ is a constant that controls the policy update step. The clip operation has been shown by OpenAI to improve the robustness of the model~\cite{PPO_2017}.
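Eqs. (\ref{clipped_loss}) and (\ref{clip}) correspond to the following per-sample computation (a scalar sketch; in practice this is applied elementwise to batches):

```python
def g(eps, adv):
    """Piecewise bound from Eq. (clip): (1+eps)*A if A >= 0 else (1-eps)*A."""
    return (1 + eps) * adv if adv >= 0 else (1 - eps) * adv

def clipped_objective(ratio, adv, eps=0.2):
    """L_Clip = min(ratio * A, g(eps, A)): caps the gain from large ratios."""
    return min(ratio * adv, g(eps, adv))
```

For a positive advantage, the objective cannot exceed $(1+\epsilon)A$ no matter how large the ratio grows; for a negative advantage, it cannot fall below the pessimistic bound $(1-\epsilon)A$, which is exactly the update restriction described above.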
The critic network is trained by minimizing the gap between the estimated state-value function and the discounted sum of rewards; its loss $L_{\text{CR}}$ can be expressed by
\begin{equation}
L^{\text{CR}}_t =\left\Vert \hat{V}^{\pi_\theta}(o_{t,i}) - R_t \right\Vert^ 2,
\label{TD}
\end{equation}
where $R_t = \sum^T_{l=t} \gamma^{l-t} r_l + r_{T}$ represents the discounted future reward from time step $t$, which can be estimated by $\hat{R}_t = r_t + \gamma\hat{V}^{\pi_\theta}(o_{t+1,i})$.
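The critic target and loss in Eq. (\ref{TD}) can be sketched for a single transition as follows (a scalar illustration of the bootstrapped target):

```python
def critic_loss(v_pred, v_next, reward, gamma=0.95):
    """Squared error between V-hat(o_t) and the bootstrapped return estimate
    R-hat_t = r_t + gamma * V-hat(o_{t+1})."""
    target = reward + gamma * v_next
    return (v_pred - target) ** 2
```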
Combining the objectives of the actor network and the critic network, we can express the overall objective, which is maximized with respect to $\theta$, as
\begin{equation}
L(\theta)=\mathbb{E}_{t}\left[L^{\text{Clip}}_t-c_1L^{\text{CR}}_t+c_2E\right],
\label{overall_loss}
\end{equation}
where $E$ represents an entropy bonus to ensure sufficient exploration, $c_1$ and $c_2$ are weight parameters for the estimation of value function and entropy, respectively.
\subsection{Federated Learning based MAPPO}
Generally, a multi-agent RL process can be considered a Markov game, in which the global next state depends on the joint action. However, the local UEs are not able to acquire the joint action and the global state. From the view of each UE, even when taking the same action, the reward and the next state may differ, which forms a nonstationary environment and makes multi-agent RL hard to converge. To overcome this nonstationary nature, we apply the federated reinforcement learning framework~\cite{ji2022federated} and compare the proposed algorithm with other benchmarks.
\begin{table}[t]
\begin{center}
\caption{Simulation Parameters}
\begin{tabular}{|c|c|}
\hline
\textbf{Parameter}&
\textbf{Value} \\
\hline
Number of UEs $I$&
4\\
\hline
Carrier frequency&
6 GHz\\
\hline
Bandwidth of each sub-band&
100 kHz\\
\hline
Transmit power range&
$(15,24)$ dBm\\
\hline
Number of processors of the local GPU&
1024\\
\hline
Local GPU frequency range&
$(0.96, 1.72)$ GHz\\
\hline
Number of processors of the remote GPU&
8192\\
\hline
Remote GPU frequency&
$0.96$ GHz\\
\hline
Output dimension of DeepSC $k$&
15\\
\hline
Batch size of the MAPPO&
256\\
\hline
Learning rate&
$5\times 10^{-7}$\\
\hline
Discount factor $\gamma$&
0.95\\
\hline
\hline
\end{tabular}
\end{center}
\label{tab3}
\end{table}
\section{Numerical Results}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{training_reward.eps}
\caption{Training convergence performance of the proposed MAPPO algorithm. Local models are averaged every 100 episodes.}
\label{Training_reward}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{testing_reward.eps}
\caption{Energy consumption for 100 random environment settings.}
\label{Testing_performance}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{energy_over_k.eps}
\caption{Energy consumption over the output dimension of the DeepSC encoder. Testing results are averaged by 100 random settings.}
\label{Output_dimension}
\end{figure}
In this section, the performance of the proposed MAPPO algorithm for joint resource management is demonstrated. We implement the semantic communication based on the trained DeepSC model, a well-known deep learning based semantic communication model. The DeepSC model is trained on the proceedings of the European Parliament~\cite{koehn-2005-europarl}, and the accuracy mappings, i.e., $\epsilon_i = \epsilon(k_i, \gamma_i)$, are obtained by running the DeepSC models under different SNRs. The pathloss model is set according to~\cite{3gpp_38.901}. Moreover, the simulation settings for the wireless channels, the specifications of the GPUs for the ES and local UEs, and the parameters for training the MAPPO models are listed in TABLE \uppercase\expandafter{\romannumeral1}.
The convergence of the proposed MAPPO is demonstrated in Fig. \ref{Training_reward}. The proposed algorithm converges quickly, within about 300 episodes, and remains near-optimal over the rest of the training stage. We also use the policy entropy, an indicator in policy-gradient-based RL algorithms that measures the randomness of a policy, to assess the stability and convergence of the learned policy. Its decreasing trend verifies the convergence of the proposed algorithm.
Each testing task includes 10 sentences, and the maximum latency requirement is set to 0.05 seconds per sentence. One testing snapshot is displayed in Fig. \ref{Testing_performance}. To verify the performance of the proposed MAPPO algorithm, we compare it with the following benchmarks.
\begin{itemize}
\item {Deep Q-network (DQN): This algorithm is based on the traditional DQN and multi-agent RL algorithm. The transmit power and computation frequency are converted to discrete levels, as the DQN can only deal with discrete action space.}
\item {Exhaustive search: The upper bound of the proposed problem (P0) is hard to acquire. To evaluate the near-optimal results, we apply the exhaustive search of the discrete power and frequency levels to find the optimal discrete solutions.}
\item {Local: All tasks are performed at local UEs.}
\item {Remote: All tasks are offloaded to the ES and performed by the ES.}
\item {Random: The offloading policy, the transmit power, and computation frequency are randomly chosen by each UE.}
\end{itemize}
It is clear that task offloading reduces energy consumption, as the benchmark that performs all tasks locally consumes much more UE power than the other benchmarks. Meanwhile, the proposed MAPPO algorithm achieves lower energy consumption than the traditional DQN-based algorithm, since MAPPO can handle the continuous power and frequency variables. Unlike the DQN with its discrete action space, the proposed MAPPO can find a better solution and is more robust for UEs with different transmit power and operating frequency ranges.
We investigate the energy consumption of models with different output dimensions in Fig. \ref{Output_dimension}. The proposed MAPPO algorithm outperforms the traditional DQN algorithm, as well as the benchmarks that perform tasks in a single mode. It is also noted that the optimal energy consumption drops as the output dimension increases, because DeepSC models with a larger output dimension achieve higher accuracy, which speeds up semantic task transmission and avoids failed task offloading.
\section{Conclusion}
In this paper, a semantic-aware task offloading system has been investigated, and the energy consumption of user equipments has been minimized by the proposed proximal policy optimization based multi-agent reinforcement learning algorithm. The semantic information of sentences in machine translation tasks is extracted by the semantic encoder and can be offloaded to the edge servers or decoded and translated locally. Numerical analysis verifies the necessity of task offloading, and the simulation results demonstrate that the proposed algorithm outperforms the benchmarks.
\section{Introduction}\label{intro}
Traditionally, machine learning methods can learn a model's parameters automatically from the training samples and thus can provide models with good performance that satisfy the special requirements of various applications. Machine learning has achieved great success in tackling many real-world artificial intelligence and data mining problems \cite{128}, such as object detection \cite{54, 55}, natural image processing \cite{32}, autonomous car driving \cite{130}, urban scene understanding \cite{urban}, machine translation \cite{103}, web search/information retrieval \cite{155}, and others. A successful machine learning system often requires plentiful training data that provides enough information to train the model, a good model learning process that can model the data well, and accurate inference to discriminate different objects.
However, in real-world applications, only a limited number of labelled training samples is available, while there are large numbers of parameters in the machine learning model. This can cause the ``over-fitting'' phenomenon in the learning process. Therefore, obtaining accurate inference from the machine learning model tends to be difficult.
Many factors can help to improve the performance of the machine learning process, among which the diversity in machine learning plays an important role.
Diversity takes on different meanings depending on the context and application \cite{34}.
Generally, a diversified system contains more information and can better fit various environments. Diversity has already become an important property in many fields, such as biological systems, culture, and products. In particular, the diversity property also has significant effects on the learning process of a machine learning system.
Therefore, we wrote this survey mainly for two reasons. First, while the topic of diversity in machine learning has received attention for many years, there is no framework of diversity technology for general machine learning models. Kulesza et al. \cite{34} discussed determinantal point processes (DPPs) in machine learning, but DPPs are only one of the measurements for diversity.
\cite{add_40} mainly summarized diversity-promoting methods for obtaining multiple diversified search results in the inference phase.
Besides, \cite{add_3,add_14,30} analyzed several methods for classifier ensembles, which represent only a specific form of ensemble learning. None of these works provides a full survey of the topic or focuses on machine learning in its general forms. Our main aim is to provide such a survey, hoping to induce diversity in the general machine learning process. As a second motivation, this survey is also useful to researchers working on designing effective learning processes.
Here, diversity in machine learning works mainly on decreasing the redundancy in the data or the model and providing informative data or representative models in the machine learning process.
This work discusses the diversity property from different components of the machine learning process: the training data, the learned model, and the inference.
Diversity in machine learning tries to decrease the redundancy in the training data, the learned model, and the inference, and to provide more information for the machine learning process. It can improve the performance of the model and has played an important role in machine learning. In this work, we summarize the diversification of machine learning into three categories: the diversity of the training data (data diversification), the diversity of the model/models (model diversification), and the diversity of the inference (inference diversification).
\textit{Data diversification} provides samples with enough information to train the machine learning model. Diversity in the training data aims to maximize the information contained in the data. Therefore, the model can learn more information from the data via the learning process, and the learned model can fit the data better. Many prior works have imposed diversity on the construction of each training batch to train the model more effectively \cite{36}. In addition, diversity in active learning can make the labelled training data contain the most information \cite{1,3}, so the learned model can achieve good performance with limited training samples. Moreover, in the special unsupervised learning method of \cite{gong_ijcnn}, diversity among the pseudo classes encourages the classes to repulse each other, so the learned model can extract more discriminative features from the objects.
\textit{Model diversification} is inspired by the diversity in the human visual system. \cite{66, 67, 68} have shown that the human visual system exhibits decorrelation and sparseness, namely diversity. This makes different neurons respond to different stimuli and generates little redundancy in human learning, which ensures its high effectiveness. However, general machine learning methods usually exhibit redundancy in the learned model, where different factors model similar features \cite{44}. Therefore, {\textit{diversity between the parameters of the model (D-model)}} can significantly improve the performance of machine learning systems.
The D-model encourages the different parameters in each model to be diversified so that each parameter models unique information \cite{14, 26}. As a result, the performance of each model can be significantly improved \cite{13}.
However, a general machine learning model usually provides a locally optimal representation of the data when trained with limited data. Therefore, ensemble learning, which learns multiple models simultaneously, has become another popular machine learning approach that provides multiple choices, and it has been widely applied in many real-world applications, such as speech recognition \cite{131, 132} and image segmentation \cite{99}. However, general ensemble learning usually makes the learned base models converge to the same or similar local optima. Thus,
\textit{diversity among multiple base models in ensemble learning (D-models)}, which repulses the base models from each other and encourages each base model to provide a choice reflecting a multi-modal belief \cite{22,99,114}, can provide multiple diversified choices and significantly improve the performance.
Instead of learning multiple models with D-models, one can also obtain multiple choices in the inference phase, which is generally called multiple choice learning (MCL). However, the choices obtained from usual machine learning systems are similar to each other; for instance, the next choice may be a one-pixel-shifted version of another \cite{23}. To overcome this problem, a diversity-promoting prior can be imposed over the multiple choices obtained from the inference. Under such
\textit{inference diversification}, the model can provide choices/representations with more complementary information \cite{5,9,12,20}. This can further improve the performance of the machine learning process and provide multiple discriminative choices for the objects.
This work systematically covers the literature on diversity-promoting methods for data diversification, model diversification, and inference diversification in machine learning tasks. In particular, three main questions arise from the analysis of diversity technology in machine learning.
\begin{itemize}
\item How can the diversity of the training data, the learned model/models, and the inference be measured and enhanced in a machine learning system, respectively?
How do these methods work on the diversification of the machine learning system?
\item Is there any difference between the diversification of a single model and of multiple models? Furthermore, is there any similarity among the diversity in the training data, the learned model/models, and the inference?
\item Which real-world applications can diversity be applied to in order to improve the performance of machine learning models? How do the diversification methods work in these applications?
\end{itemize}
Although all three problems are important, none has been thoroughly answered. Diversity in machine learning can balance the training data, encourage the learned parameters to be diversified, and diversify the multiple choices from the inference. By enforcing diversity in the machine learning system, the model can achieve better performance. Following this framework, the three questions above are answered with both theoretical analysis and real-world applications.
The remainder of this paper is organized as shown in Fig. \ref{fig:article_structure}. Section \ref{sec:general} discusses the general forms of supervised learning and active learning, as well as a special form of unsupervised learning. Besides,
as Fig. \ref{fig:article_structure} shows, Sections \ref{sec:data}, \ref{sec:model} and \ref{sec:inference} introduce the diversity methods in machine learning models.
Section \ref{sec:data} outlines prior works on diversification of the training data. Section \ref{sec:model} reviews strategies for model diversification, including the D-model and the D-models. Prior works on inference diversification are summarized in Section \ref{sec:inference}. Finally, Section \ref{sec:application} introduces applications of the diversity-promoting methods in prior works; we then provide discussions, conclude the paper, and point out some future directions.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{article_structure.pdf}
\caption{{The basic framework of this paper. The main body of this paper consists of three parts: General Machine Learning Models in Section II, Diversity in Machine Learning in Section III-V, and Extensive Applications in Section VI.}}
\label{fig:article_structure}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{machine_learning_2.pdf}
\caption{Flowchart of the training process of general machine learning (including active learning, supervised learning, and unsupervised learning). When the training data is labelled, the training process is supervised; otherwise, it is unsupervised. Besides, it should be noted that when both labelled and unlabelled data are used for training, the training process is semi-supervised.}
\label{fig:01}
\end{figure*}
\section{General Machine Learning Models}\label{sec:general}
Traditionally, machine learning consists of supervised learning, active learning, unsupervised learning, and reinforcement learning. In reinforcement learning, the training data is given only as feedback to the program's actions in a dynamic environment; it does not require accurate input/output pairs, and sub-optimal actions need not be explicitly corrected. However, the diversity technologies mainly work on the model itself to improve its performance. Therefore, this work sets aside reinforcement learning and mainly discusses the machine learning models shown in Fig. \ref{fig:01}. In the following, we introduce the general form of supervised learning, a representative form of active learning, and a special form of unsupervised learning.
\subsection{Supervised Learning}\label{subsec:machine}
We consider the task of general supervised machine learning, which is commonly used in real-world machine learning tasks. Fig. \ref{fig:01} shows the flowchart of the general machine learning methods in this work. As Fig. \ref{fig:01} shows, a supervised machine learning model consists of data pre-processing, training (modeling), and inference. All of these steps can affect the performance of the machine learning process.
Let $X=\{{\bf x}_1,{\bf x}_2, \cdots ,{\bf x}_{N_1}\}$ denote the set of training samples and $y_i$ the corresponding label of ${\bf x}_i$, where $y_i \in \Omega=\{cl_1, cl_2, \cdots, cl_n\}$ ($\Omega$ is the set of class labels, $n$ is the number of classes, and $N_1$ is the number of labelled training samples).
Traditionally, the machine learning task can be formulated as the following optimization problem \cite{157,158}:
\begin{equation}\label{eq:01}
\begin{aligned}
&\max_W L(W|X)\\
&s.t.\ g(W)\geq 0
\end{aligned}
\end{equation}
where $L(W|X)$ represents the objective function and $W$ denotes the parameters of the machine learning model. Besides, $g(W)\geq 0$ is the constraint on the parameters of the model.
Then, the Lagrangian of the optimization problem can be formulated as follows.
\begin{equation}\label{eq:02}
L_0=L(W|X)+\eta g(W)
\end{equation}
where $\eta$ is a positive value. Therefore, the machine learning problem can be seen as the maximization of $L_0$.
Figs. \ref{fig:02} and \ref{fig:03} show the flowcharts of two special forms of supervised learning models, which are generally used in real-world applications. Fig. \ref{fig:02} shows the flowchart of a special form of supervised machine learning with a single model. Generally, in the data pre-processing stage, the more diversified and balanced each training batch is, the more effective the training process is. In addition, the factors in the same layer of the model can be diversified to improve the representational ability of the model (called the D-model in this paper). Moreover, when we obtain multiple choices from the model in the inference, the obtained choices are desired to provide more complementary information. Therefore, some works focus on the diversification of multiple choices (which we call inference diversification). Fig. \ref{fig:03} shows the flowchart of supervised machine learning with multiple parallel base models. A good strategy to diversify the training sets for different base models can improve the performance of the whole ensemble (called D-models). Furthermore, we can diversify these base models directly to encourage each base model to provide more complementary information for further analysis.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{model_2.pdf}
\caption{Flowchart of a special form of supervised machine learning with a single model. Since diversity mainly occurs in the training batch during data pre-processing, this work mainly discusses the diversity of samples in the training batch for data diversification. Generally, the more diversified and balanced each training batch is, the more effective the training process is. In addition, the factors in the same layer of the model can be diversified to improve the representational ability of the model (called the D-model in this paper). Moreover, when we obtain multiple choices from the model, the obtained choices are desired to provide more complementary information. Therefore, some works focus on the diversification of multiple choices (which we call inference diversification).}
\label{fig:02}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{model_1.pdf}
\caption{Flowchart of supervised machine learning with multiple parallel models. A good strategy to diversify the training sets for different models can improve the performance of the ensemble. Furthermore, we can diversify the models directly to encourage each model to provide more complementary information for further analysis.}
\label{fig:03}
\end{figure*}
\subsection{Active Learning}\label{sec:active}
Since labelling is always costly and time-consuming, enough labelled samples usually cannot be provided for training in real-world applications. Therefore, active learning, which can reduce the labelling cost and keep the training set at a moderate size, plays an important role in machine learning \cite{2}. It makes use of the most informative samples and provides higher performance with fewer labelled training samples.
Through active learning, we choose the most informative samples for labelling to train the model. This paper takes Convex Transductive Experimental Design (CTED) as a representative of active learning methods \cite{69, 70}.
Denote {$U=\{{\bf u}_i\}_{i=1}^{N_2}$} as the candidate unlabelled samples for active learning, {where $N_2$ represents the number of the candidate unlabelled samples.}
Then, the active learning problem can be formulated as the following optimization problem \cite{70}:
\begin{equation}\label{03}
\begin{aligned}
& A^*, {\bf b}^* = \arg\min\limits_{A,{\bf b}} \|U-UA\|_F^2+\sum_{i=1}^{N_2} \frac{\sum_{j=1}^{N_2} a_{ij}^2}{b_i}+\alpha \|{\bf b}\|_1 \\
&s.t.\ b_i \geq 0, i=1,2,\cdots, N_2
\end{aligned}
\end{equation}
where $a_{ij}$ is the $(i, j)$-th entry of $A$, and $\alpha$ is a positive tradeoff parameter. $\|\cdot\|_F$ represents the Frobenius norm (F-norm), i.e., the square root of the sum of the squared entries of a matrix. As is shown, CTED utilizes a data reconstruction framework to select the most informative samples for labelling. The matrix $A$ contains the reconstruction coefficients, and ${\bf b}$ is the sample selection vector. The $L_1$-norm encourages the learned ${\bf b}$ to be sparse.
Then, the obtained ${\bf b}^*$ is used to select samples for labelling, and finally the training set is constructed from the selected samples.
However, the samples selected by CTED are usually similar to each other, which leads to redundancy among the training samples. Therefore, the diversity property is also required in the active learning process.
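The CTED objective above can be evaluated numerically as follows (a sketch assuming the candidate samples are stored as columns of $U$; the matrices below are illustrative inputs, not a solver for the optimization):

```python
import numpy as np

def cted_objective(U, A, b, alpha):
    """||U - U A||_F^2 + sum_i (sum_j a_ij^2)/b_i + alpha * ||b||_1, b_i > 0."""
    # Reconstruction error of the candidate pool from itself.
    reconstruction = np.linalg.norm(U - U @ A, 'fro') ** 2
    # Row-wise coefficient penalty, weighted by the selection scores b_i:
    # a sample with small b_i is heavily penalized for reconstructing others.
    row_penalty = ((A ** 2).sum(axis=1) / b).sum()
    # L1 sparsity term that drives most selection scores to zero.
    sparsity = alpha * np.abs(b).sum()
    return reconstruction + row_penalty + sparsity
```

The $1/b_i$ weighting is what couples selection to reconstruction: only samples with large $b_i$ can cheaply carry large reconstruction coefficients, so the sparse minimizer concentrates the selection on the most representative candidates.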
\subsection{Unsupervised Learning}\label{subsec:unsupervised}
As discussed in the former subsection, a limited number of training samples limits the performance of the machine learning process. Instead of active learning, unsupervised learning methods provide another way to train the machine learning model without labelled training samples. This work mainly discusses a special unsupervised learning process developed by \cite{gong_ijcnn}, which is an end-to-end self-supervised method.
Denote ${\bf c}_i(i=1,2,\cdots, \Lambda)$ as the center points used to formulate the pseudo classes in the training process, where $\Lambda$ represents the number of pseudo classes. As in subsection \ref{sec:active}, $U=\{{\bf u}_1,{\bf u}_2, \cdots ,{\bf u}_{N_2}\}$ represents the unlabelled training samples and $N_2$ denotes their number. Besides, denote $\varphi({\bf u}_i)$ as the features of ${\bf u}_i$ extracted by the machine learning model. Then, the pseudo label $z_i$ of the sample ${\bf u}_i$ can be defined as
\begin{equation}\label{eq:un_1}
z_i=\arg\min\limits_{k\in \{1,2,\cdots,\Lambda\}} \|{\bf c}_k-\varphi({{\bf u}_i})\|.
\end{equation}
Then, the problem can be transformed to a supervised one with the pseudo classes.
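The pseudo-label assignment above is simply a nearest-center rule. The following minimal NumPy sketch (our own illustration, not code from \cite{gong_ijcnn}; the feature map $\varphi$ is taken as the identity and the data are hypothetical) makes this concrete:

```python
import numpy as np

def assign_pseudo_labels(features, centers):
    """Assign each sample the index of its nearest center point.

    features: (N, d) array of phi(u_i); centers: (Lambda, d) array of c_k.
    Returns an (N,) array of pseudo labels in {0, ..., Lambda-1}.
    """
    # Pairwise Euclidean distances between every feature and every center.
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# Toy example: two well-separated clusters and two centers.
feats = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
cents = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = assign_pseudo_labels(feats, cents)
```

Each sample receives the index of its closest center, which then serves as its pseudo class in the supervised-style update.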
As shown in subsection \ref{subsec:machine}, the machine learning task can be formulated as the following optimization \cite{gong_ijcnn}
\begin{equation}\label{eq:un_2}
\max\limits_{W,{\bf c}_i} L(W|U,z_i) + \eta g(W) - \sum_{k=1}^{N_2}\|{\bf c}_{z_k}-\varphi({\bf u}_k)\|
\end{equation}
where $L(W|U,z_i (i=1,2,\cdots,N_2))$ denotes the optimization term and $\sum_{k=1}^{N_2}\|{\bf c}_{z_k}-\varphi({\bf u}_k)\|$ is used to minimize the intra-class variance of the constructed pseudo classes. $g(W)$ represents the constraints in Eq. \ref{eq:02}.
By iteratively optimizing Eq. \ref{eq:un_1} and Eq. \ref{eq:un_2}, the machine learning model can be trained in an unsupervised manner.
Since the center points play an important role in the construction of the pseudo classes, diversifying these center points and repulsing them from each other can better discriminate the pseudo classes. This has positive effects on the effectiveness of the unsupervised learning process.
\subsection{Analysis}\label{sec:analysis}
As the former subsections show, diversity can improve the performance of the machine learning process. In the following, this work summarizes diversification in machine learning from three aspects: data diversification, model diversification, and inference diversification.
Diversification can be applied in supervised learning, active learning, and unsupervised learning to improve the model's performance. According to the models in subsections \ref{subsec:machine} and \ref{sec:active}, the diversification technology in machine learning has been divided into three parts: data diversification (Section \ref{sec:data}), model diversification (Section \ref{sec:model}), and inference diversification (Section \ref{sec:inference}). Since the diversification of the training batch (Fig. \ref{fig:02}) and the diversification in active learning and unsupervised learning mainly concern the training data, we summarize the prior works on these topics as data diversification in Section \ref{sec:data}. Besides, the diversification of the model in Fig. \ref{fig:02} and of the multiple base models in Fig. \ref{fig:03} mainly focuses on the machine learning model directly, and thus we summarize these works as model diversification in Section \ref{sec:model}. Finally, inference diversification (Fig. \ref{fig:02}) is summarized in Section \ref{sec:inference}. In the following section, we first introduce data diversification in machine learning models.
\section{Data Diversification}\label{sec:data}
Obviously, the training data plays an important role in the training process of machine learning models. For supervised learning (subsection \ref{subsec:machine}), diversified training data provides more plentiful information for learning the parameters. For active learning (subsection \ref{sec:active}), the learning process selects the most informative and least redundant samples for labelling to obtain better performance. Besides, for unsupervised learning (subsection \ref{subsec:unsupervised}), the pseudo classes can be encouraged to repulse each other so that the model provides more discriminative features without supervision. The following introduces these data diversification methods in detail.
\subsection{Diversification in Supervised Learning} \label{subsec:dpp}
A general supervised learning model is usually trained with mini-batches to estimate the model accurately. Most former works generate the mini-batches randomly. However, due to the imbalance of the training samples under random selection, redundancy may occur in the generated mini-batches, which has negative effects on the effectiveness of the machine learning process. Different from the classical stochastic gradient descent (SGD) method, which relies on uniformly sampled data points to form a mini-batch, \cite{36, add_12} propose a non-uniform sampling scheme based on the determinantal point process (DPP) measurement.
A DPP is a distribution over subsets of a fixed ground set which prefers a diverse set of data over a redundant one \cite{34}.
Let $\Theta$ denote a continuous space and let the data ${\bf x}_i\in \Theta (i=1,2,\cdots,N_1)$. Then, the DPP is defined through a positive semi-definite kernel function on $\Theta$,
\begin{equation}
\begin{aligned}
&\phi: \Theta \times \Theta \rightarrow R \\
&P(X\subseteq \Theta)=\frac{\det(\phi(X))}{\det(\phi+I)}\\
\end{aligned}
\end{equation}
where $\phi(X)$ denotes the kernel matrix on the subset $X$, whose $(i,j)$-th entry $\phi({\bf x}_i,{\bf x}_j)$ is the pairwise correlation between the data ${\bf x}_i$ and ${\bf x}_j$. $\det(\cdot)$ denotes the determinant of a matrix and $I$ is an identity matrix. Since the space $\Theta$ is fixed, $\det(\phi+I)$ is a constant value. Therefore, the corresponding diversity prior modeled by the DPP can be formulated as
\begin{equation}
P(X)\propto \det(\phi(X))
\end{equation}
In general, the kernel can be divided into the correlation and the prior part. Therefore, the kernel can be reformulated as
\begin{equation}
\phi({\bf x}_i,{\bf x}_j)=R({\bf x}_i,{\bf x}_j)\sqrt{\pi({\bf x}_i)\pi({\bf x}_j)}
\end{equation}
where $\pi({\bf x}_i)$ is the prior for the data ${\bf x}_i$ and $R({\bf x}_i,{\bf x}_j)$ denotes the correlation between the data. Such kernels induce repulsion between different points, and thus a diverse set of points tends to have higher probability. Generally, the data points are supposed to be uniformly distributed. Therefore, the prior $\pi({\bf x}_i)$ is a constant value, and the kernel becomes
\begin{equation}
\phi({\bf x}_i,{\bf x}_j)=R({\bf x}_i,{\bf x}_j).
\end{equation}
The DPP provides a probability measure over every subset of the data points. Based on a similarity matrix over the data and a determinant operator, the DPP assigns higher probabilities to subsets with dissimilar items. Therefore, it gives lower probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data \cite{36}. This simultaneously balances the data and generates stochastic gradients with lower variance. Moreover, \cite{add_12} further regularizes the DPP (R-DPP) with an arbitrary fixed positive semi-definite matrix inside the determinant to accelerate the training process.
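To illustrate how a DPP scores mini-batches, the following sketch (a toy illustration, not code from the cited works; the RBF kernel and the data points are assumptions) computes the unnormalized probability $\det(\phi(X))$ for a redundant and a diverse pair of points:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Kernel matrix phi whose (i, j)-th entry measures pairwise similarity."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-sq / (2.0 * sigma ** 2))

def dpp_score(X, subset, sigma=1.0):
    """Unnormalized DPP probability det(phi(X_S)) of the indexed subset."""
    K = rbf_kernel(X, sigma)
    return float(np.linalg.det(K[np.ix_(subset, subset)]))

# Four 1-D points: the first two are nearly identical.
X = np.array([[0.0], [0.1], [3.0], [6.0]])
redundant = dpp_score(X, [0, 1])  # nearly identical points -> det near 0
diverse = dpp_score(X, [0, 2])    # well-separated points -> det near 1
```

The determinant shrinks toward zero as the selected rows of the kernel matrix become linearly dependent, which is exactly how the DPP penalizes redundant mini-batches.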
Besides, \cite{add_10} generalizes diversified mini-batch sampling to arbitrary repulsive point processes, such as Stationary Poisson Disk Sampling (PDS). The PDS is one type of repulsive point process which provides point arrangements similar to the DPP but with much higher efficiency. The PDS requires that the smallest distance between each pair of sampled points be at least $r$ with respect to some distance measurement $D({\bf x}_i,{\bf x}_j)$ \cite{add_10}, such as the Euclidean distance or the heat kernel. The measurements can be formulated as
\noindent{{Euclidean distance:}}
\begin{equation}
D({\bf x}_i,{\bf x}_j)=\|{\bf x}_i-{\bf x}_j\|^2
\end{equation}
\noindent{{Heat kernel:}}
\begin{equation}
D({\bf x}_i,{\bf x}_j)=e^{\frac{\|{\bf x}_i-{\bf x}_j\|^2}{\sigma}}
\end{equation}
{where $\sigma$ is a positive value.}
To construct a new mini-batch $B$, the PDS algorithm works as follows in each iteration.
\begin{itemize}
\item {Randomly select a data point ${\bf x}_{new}$.}
\item {If $D({\bf x}_{new},{\bf x}_i) \leq r$ for any ${\bf x}_i \in B$, throw out the point; otherwise add ${\bf x}_{new}$ to batch $B$.}
\end{itemize}
{The computational complexity of PDS is much lower than that of the DPP.}
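The rejection procedure above can be sketched as follows (a minimal illustration under the Euclidean distance; the batch size, the radius $r$, and the grid data are hypothetical choices for the example):

```python
import numpy as np

def pds_minibatch(data, batch_size, r, seed=None):
    """Build a mini-batch by Poisson Disk Sampling rejection.

    A randomly drawn candidate is kept only if its Euclidean distance
    to every point already in the batch exceeds r.
    """
    rng = np.random.default_rng(seed)
    selected = []
    for idx in rng.permutation(len(data)):
        x_new = data[idx]
        # Reject candidates within distance r of any selected point.
        if all(np.linalg.norm(x_new - data[j]) > r for j in selected):
            selected.append(idx)
        if len(selected) == batch_size:
            break
    return [data[j] for j in selected]

# 5x5 grid of points spaced 1 apart; with r = 1.5 any two kept points
# must differ by at least 2 in some coordinate.
grid = np.array([[float(i), float(j)] for i in range(5) for j in range(5)])
batch = pds_minibatch(grid, batch_size=4, r=1.5, seed=0)
```

Each candidate is checked only against the points already accepted, which is why the cost stays far below the determinant computations required by a DPP.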
Under these diversification priors, such as the DPP and the PDS, each mini-batch consists of training samples with more diversity and information, which trains the model more effectively; thus the learned model can extract more discriminative features from the objects.
\subsection{Diversification in Active Learning}
As section \ref{sec:active} shows, active learning can obtain good performance with fewer labelled training samples. However, some samples selected with CTED are similar to each other and contain overlapping and redundant information. These highly similar samples introduce redundancy into the training set, which decreases the training efficiency and requires more training samples for a comparable performance.
To select more informative and complementary samples with the active learning method, some prior works introduce diversity into the samples selected by CTED (Eq. \ref{03}) \cite{3,1}.
To promote diversity between the selected samples, \cite{3} enhances CTED with a diversity regularizer
\begin{equation}
\begin{aligned}
&\min\limits_{A,{\bf b}} \|U-UA\|_F^2+\sum_{i=1}^{N_2} \frac{\sum_{j=1}^{N_2} a_{ij}^2}{b_i}+\alpha \|{\bf b}\|_1 + \gamma{\bf b}^T S {\bf b}\\
&s.t.\ b_i \geq 0, i=1,2,\cdots, {N_2}
\end{aligned}
\end{equation}
where $A=[{\bf a}^1, \cdots,{\bf a}^{N_2}]$, $\|\cdot\|_F$ represents the F-norm, and the similarity matrix $S\in R^{{N_2}\times {N_2}}$ models the pairwise similarities among all the samples, such that a larger value of $s_{ij}$ indicates a higher similarity between the $i$-th sample and the $j$-th one.
Particularly, \cite{3} chooses the cosine similarity measurement to formulate the diversity term, which can be formulated as
\begin{equation}\label{eq:87}
s_{ij}=\frac{{\bf a}^i({\bf a}^j)^T}{\|{\bf a}^i\|\|{\bf a}^j\|}.
\end{equation}
As \cite{13} introduces, $s_{ij}$ tends to zero when ${\bf a}^i$ and ${\bf a}^j$ tend to be uncorrelated.
Similarly, \cite{1} formulates the diversity term in active learning with the angle of the cosine similarity to obtain a diverse set of training samples.
The diversity term can be formulated as
\begin{equation}\label{eq:88}
s_{ij}=\frac{\pi}{2}-\arccos(\frac{{\bf a}^i({\bf a}^j)^T}{\|{\bf a}^i\|\|{\bf a}^j\|}).
\end{equation}
Intuitively, when the two vectors become perpendicular, they tend to be uncorrelated. Therefore, under this diversification, the selected samples would be more informative.
Besides, \cite{add_20} takes advantage of the well-known RBF kernel to measure the diversity of the selected samples, in which case the diversity term can be calculated by
\begin{equation}\label{eq:86}
s_{ij}=e^{-\frac{\|{\bf a}^i-{\bf a}^j\|^2}{\sigma^2}}
\end{equation}
where $\sigma$ is a positive value. Different from Eqs. \ref{eq:87} and \ref{eq:88}, which measure the diversity from the angular view, Eq. \ref{eq:86} measures it from the distance view. Generally, if two samples are similar to each other, the term takes a large value.
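For concreteness, the three similarity measures of Eqs. \ref{eq:87}, \ref{eq:88}, and \ref{eq:86} can be sketched as follows (a toy illustration with hypothetical coefficient vectors; the RBF term is written in its standard exponential form, so that similar vectors score high):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two reconstruction-coefficient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def angle_sim(a, b):
    """pi/2 minus the angle between the vectors: zero when orthogonal."""
    return float(np.pi / 2 - np.arccos(np.clip(cosine_sim(a, b), -1.0, 1.0)))

def rbf_sim(a, b, sigma=1.0):
    """RBF similarity: close to 1 for similar vectors, decays with distance."""
    return float(np.exp(-np.sum((a - b) ** 2) / sigma ** 2))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])  # orthogonal to a
c = np.array([1.0, 0.1])  # nearly parallel to a
```

All three measures agree qualitatively: orthogonal (uncorrelated) pairs score near zero, while nearly parallel pairs score high and are therefore penalized by the diversity regularizer.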
By adding diversity regularization over the samples selected by active learning, samples with more information and less redundancy are chosen for labelling and then used for training. Therefore, the machine learning process can obtain performance with limited training samples comparable to, or even better than, that with plentiful training samples.
\subsection{Diversification in Unsupervised Learning}
As subsection \ref{subsec:unsupervised} shows, the unsupervised learning in \cite{gong_ijcnn} is based on the construction of pseudo classes from the center points. By repulsing the center points from each other, the pseudo classes are further enforced to be away from one another. If we encourage the center points to be diversified and to repulse each other, the learned features from different classes can be more discriminative. Generally, the Euclidean distance can be used to calculate the diversification of the center points. The pseudo label of ${\bf u}_i$ is still calculated by Eq. \ref{eq:un_1}. Then, the unsupervised learning method with the diversity-promoting prior can be formulated as
\begin{equation}\label{eq:un_diversity}
\max\limits_{W,{\bf c}_i} L(W|U,z_i) + \eta g(W) - \sum_{k=1}^{N_2}\|{\bf c}_{z_k}-\varphi({\bf u}_k)\| + \gamma \sum_{j\neq k}\|{\bf c}_{j}-{\bf c}_k\|
\end{equation}
where $\gamma$ is a positive value which controls the tradeoff between the optimization term and the diversity term. Under the diversification term, the center points are encouraged to repulse each other during training. This makes the unsupervised learning process more effective at obtaining discriminative features from samples in different classes.
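The diversity term $\sum_{j\neq k}\|{\bf c}_j-{\bf c}_k\|$ is simply the sum of pairwise distances between center points; a minimal sketch (our own toy example with hypothetical centers):

```python
import numpy as np

def center_diversity(centers):
    """Sum of pairwise Euclidean distances between pseudo-class centers.

    Larger values mean the centers repulse each other more strongly,
    i.e., the pseudo classes are better separated.
    """
    total = 0.0
    n = len(centers)
    for j in range(n):
        for k in range(n):
            if j != k:
                total += float(np.linalg.norm(centers[j] - centers[k]))
    return total

tight = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])   # crowded centers
spread = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # repulsed centers
```

Maximizing this term pushes the configuration from the crowded arrangement toward the spread one, which is exactly what the regularizer rewards.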
\section{Model Diversification}\label{sec:model}
In addition to data diversification, which improves the performance with more informative and less redundant samples, we can also diversify the model to improve its representational ability directly. As the introduction shows, machine learning methods aim to learn parameters from the training samples by the machine itself. However, due to limited and imbalanced training samples, highly similar parameters are often learned by the general machine learning process. This leads to redundancy in the learned model and negatively affects the model's representational ability.
Therefore, in addition to data diversification, one can also diversify the learned parameters in the training process and further improve the representational ability of the model (D-model). Under the diversification prior, each parameter factor models unique information and the whole set of factors models a larger proportion of information \cite{13}. Another method is to obtain diversified multiple models (D-models) through machine learning. Traditionally, if we train multiple models separately, the obtained representations from different models would be similar, which leads to redundancy between the representations. By regularizing the multiple base models with the diversification prior, different models are enforced to repulse each other and each base model can provide choices reflecting multi-modal belief \cite{99}. In the following subsections, we introduce the diversity methods for the D-model and D-models in detail.
\subsection{D-Model}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{D_model.pdf}
\caption{Effects of the D-model on improving the performance of the machine learning model. Under the model diversification, each parameter factor of the machine learning model tends to model unique information and the whole machine learning model can model more useful information from the objects. Thus, the representational ability can be improved. The figure shows the results of the image segmentation task in \cite{9}. As shown in the figure, the extracted features from the model can better discriminate different objects.}
\label{fig:d_model}
\end{figure}
The first method tries to diversify the parameters of the model in the training process to directly improve the representational ability of the model.
Fig. \ref{fig:d_model} shows the effects of the D-model on improving the performance of the machine learning model. As Fig. \ref{fig:d_model} shows, under the D-model, each factor models unique information and the whole set of factors models a larger proportion of information, so the representational ability of the model is further improved.
Traditionally, Bayesian method and posterior regularization method can be used to impose diversity over the parameters of the model. Different diversity-promoting priors have been developed in prior works to measure the diversity between the learned parameter factors according to the special requirements of different tasks. This subsection will mainly introduce the methods which can enforce the diversity of the model and summarize these methods occurred in prior works.
\subsubsection{Bayesian Method}
Traditionally, diversity-promoting priors can be used to measure the diversification of the model. The parameters of the model can be calculated by the Bayesian method as
\begin{equation}
P(W|X)\propto P(X|W)\times P(W)
\end{equation}
where $W=[w_1,w_2,\cdots,w_K]$ {denotes} the parameters in the machine learning model, {$K$ is the number of the parameters,} $P(X|W)$ {represents} the likelihood of the training set on the constructed model and $P(W)$ stands for the prior knowledge of the learned model. For the machine learning task at hand, $P(W)$ {describes} the diversity-promoting prior. Then, the machine learning task can be written as
\begin{equation}
W^*=\arg \max\limits_W P(W|X)=\arg\max\limits_W P(X|W)\times P(W)
\end{equation}
The log-likelihood of the optimization can be formulated as
\begin{equation}\label{eq:45}
W^*=\arg\max\limits_W (\log P(X|W)+\log P(W))
\end{equation}
Then, Eq. \ref{eq:45} can be written as the following optimization
\begin{equation}\label{eq:46}
\max\limits_W \log P(X|W)+\log P(W)
\end{equation}
where $\log P(X|W)$ represents the optimization objective of the model, which can be formulated as $L_0$ in subsection \ref{subsec:machine}, and the diversity-promoting prior $\log P(W)$ aims to encourage the learned factors to be diversified. With Eq. \ref{eq:46}, the diversity prior can be imposed over the parameters of the learned model.
\subsubsection{Posterior Regularization Method}
In addition to the former Bayesian method, posterior regularization methods can be also used to impose the diversity property over the learned model \cite{53}.
Generally, the regularization method can add side information into the parameter estimation and thus it can encourage the learned factors to possess {a} specific property. We can also use the posterior regularization to enforce the learned model to be diversified.
The diversity regularized optimization problem can be formulated as
\begin{equation} \label{eq:47}
\max\limits_W L_0 + \gamma f(W)
\end{equation}
where $f(W)$ stands for the diversity regularization, which measures the diversity of the factors in the learned model, $L_0$ represents the optimization term of the model (see subsection \ref{subsec:machine}), and $\gamma$ controls the tradeoff between the optimization and the diversification terms.
From Eqs. \ref{eq:46} and \ref{eq:47}, we can find that the posterior regularization has a similar form to the Bayesian method. In general, the optimization (\ref{eq:46}) can be transformed into the form of (\ref{eq:47}). Many methods can be applied to measure the diversity of the learned parameters. In the following, we introduce different diversity priors to realize the D-model in detail.
\begin{table}
\centering
\caption{Overview of the most frequently used diversification methods in the D-model and the papers in which example measurements can be found.}
\label{table:01}
\begin{tabular}{p{0.22\textwidth}p{0.2\textwidth}}
\hline\noalign{\smallskip}
Measurements & Papers \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Cosine Similarity & \cite{13,14,27,44,46,47,48,49,51,52,59, add_6} \\
Determinantal Point Process & \cite{10,25,28,29,33,34,35, 104,105,106,107,108,109,111,112,113, add_1} \\
Submodular Spectral Diversity & \cite{39} \\
Inner Product & \cite{46,50} \\
Euclidean Distance & \cite{58,60,61} \\
Heat Kernel & \cite{54,55,72} \\
Divergence & \cite{58} \\
Uncorrelation and Evenness & \cite{24} \\
$L_{2,1}$ & \cite{31,40,41,42,43,62,63,64,65} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsubsection{Diversity Regularization} \label{subsec:diversity_regularization}
As Fig. \ref{fig:d_model} shows, the diversity regularization encourages the factors to repulse each other or to be uncorrelated. The key problem in diversity regularization is how to measure the diversity of the factors in the model.
Prior works mainly impose the diversity property on the machine learning process from six aspects, namely the distance, the angle, the eigenvalues, the divergence, the $L_{2,1}$ norm, and the DPP. The following introduces these measurements and further discusses their advantages and disadvantages.
{\bf Distance-based measurements.} The simplest way to formulate the diversity between different factors is the Euclidean distance. Generally, enlarging the distances between different factors decreases their similarity. Therefore, the redundancy between the factors is decreased and the factors are diversified. \cite{58, 60, 61} have applied the Euclidean distance as the measurement to encourage the latent factors in machine learning to be diversified.
In general, the larger the Euclidean distance between two vectors, the more different the vectors are. Therefore, we can diversify different vectors by enlarging the pairwise Euclidean distances between them.
Then, the diversity regularization by Euclidean distance from Eq. \ref{eq:47} can be formulated as
\begin{equation}\label{eq:85}
f(W)= \sum_{i\neq j}^{K} \|w_i-w_j\|^2
\end{equation}
where $K$ is the number of the factors which we intend to diversify in the machine learning model.
Since the Euclidean distance uses the distance between different factors to measure their similarity, the regularizer in Eq. \ref{eq:85} is generally sensitive to scale. This may decrease the effectiveness of the diversity measurement and cannot fit some special models with a large scale range.
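The scale sensitivity noted above is easy to verify numerically: scaling every factor by a constant $c$ scales the regularizer in Eq. \ref{eq:85} by $c^2$. A minimal sketch (the factor matrix is a hypothetical example):

```python
import numpy as np

def euclidean_diversity(W):
    """f(W): sum of squared pairwise distances between the rows of W."""
    K = W.shape[0]
    return float(sum(np.sum((W[i] - W[j]) ** 2)
                     for i in range(K) for j in range(K) if i != j))

W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
base = euclidean_diversity(W)
# Scaling all factors by c multiplies the regularizer by c^2,
# so the measurement is not scale-invariant.
scaled = euclidean_diversity(2.0 * W)
```

This is why simply rescaling the whole parameter matrix can inflate the "diversity" without making the factors any less correlated.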
Another commonly used distance-based method to encourage diversity in the machine learning is the heat kernel \cite{54,55,72}.
The correlation between different factors is formulated through the Gaussian function and can be calculated as
\begin{equation}
f(w_i,w_j)=-e^{-\frac{\|w_i-w_j\|^2}{\sigma}}
\end{equation}
where $\sigma$ is a positive value. The term measures the correlation between different factors: when $w_i$ and $w_j$ are dissimilar, $f(w_i,w_j)$ tends to zero. Then, the diversity-promoting prior by the heat kernel from Eq. \ref{eq:46} can be formulated as
\begin{equation}
P(W)=e^{-\gamma \sum_{i\neq j}^{K}e^{-\frac{\|w_i-w_j\|^2}{\sigma}}}
\end{equation}
The {corresponding diversity regularization form} can be formulated as
\begin{equation}
f(W)=-\sum_{i\neq j}^{K}e^{-\frac{\|w_i-w_j\|^2}{\sigma}}
\end{equation}
where $\sigma$ is a positive value.
The heat kernel takes advantage of the distance between the factors to encourage the diversity of the model.
It can be noted that the heat kernel has the form of a Gaussian function, so the weight of the diversity penalization depends on the distance between the factors. Thus, the heat kernel adapts the penalization to the configuration of the factors and shows better performance than the plain Euclidean distance.
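A minimal sketch of the heat-kernel penalty (hypothetical factors; the quantity computed is $\sum_{i\neq j}e^{-\|w_i-w_j\|^2/\sigma}$, i.e., $-f(W)$, which is bounded and decays quickly once the factors are far apart, unlike the unbounded squared distance):

```python
import numpy as np

def heat_kernel_penalty(W, sigma=1.0):
    """-f(W): sum of pairwise Gaussian similarities exp(-||wi - wj||^2 / sigma).

    The penalty is large for nearby (redundant) factors and decays
    toward zero once the factors are well separated.
    """
    K = W.shape[0]
    return float(sum(np.exp(-np.sum((W[i] - W[j]) ** 2) / sigma)
                     for i in range(K) for j in range(K) if i != j))

close = np.array([[0.0, 0.0], [0.1, 0.0]])  # nearly identical factors
far = np.array([[0.0, 0.0], [3.0, 0.0]])    # well-separated factors
```

Nearly identical factors are penalized heavily, while factors that are already far apart contribute almost nothing, concentrating the regularization pressure where redundancy actually occurs.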
All the former distance-based methods encourage the diversity of the model by enforcing the factors away from each other, so that the factors show more difference. However, it should be noted that distance-based measurements are significantly affected by scaling, which can limit the performance of the diversity prior on machine learning.
{\bf Angular-based measurements.} To make the diversity measurement invariant to scale, some works take advantage of the angle between factors to encourage the diversity of the model. Among these works, the cosine similarity measurement is the most commonly used \cite{13, 14}. The cosine similarity measures the similarity between different vectors; in machine learning tasks, it can be used to measure the redundancy between different latent parameter factors \cite{13, 14, 27, 44, 46}. The aim of the cosine similarity prior is to encourage different latent factors to be uncorrelated, such that each factor in the learned model can model unique features of the samples.
{The cosine similarity between different factors $w_i$ and $w_j$ can be calculated as \cite{156, add_6}}
\begin{equation}\label{eq:04}
c_{ij}=\frac{<w_i,w_j>}{\|w_i\|\|w_j\|},i\neq j,1\leq i,j\leq K
\end{equation}
{Then,} the diversity-promoting prior of generalized cosine similarity measurement from Eq. \ref{eq:46} can be written as
\begin{equation}\label{eq:12}
P(W)\propto e^{-\gamma(\sum_{i\neq j}c_{ij}^p)^{\frac{1}{p}}}
\end{equation}
It should be noted that when $p$ is set to 1, the diversity-promoting prior over different vectors $w_i(i=1,2,\cdots,K)$ by cosine similarity from Eq. \ref{eq:46} can be formulated as
\begin{equation}\label{eq:05}
P(W)\propto e^{-\gamma \sum_{i\neq j}c_{ij}}
\end{equation}
where $\gamma$ is a positive value.
{It can be noted that under the diversity-promoting prior in Eq. \ref{eq:05}, the
$c_{ij}$ is encouraged to be 0.} Then, $w_i$ and $w_j$ tend to be orthogonal and different factors are encouraged to be uncorrelated and diversified.
Besides, the diversity regularization form {by} the cosine similarity measurement from Eq. \ref{eq:47} can be formulated as
\begin{equation}
f(W)=-\sum_{i\neq j}^{K}\frac{<w_i,w_j>}{\|w_i\|\|w_j\|}
\end{equation}
However, the former measurement has a defect: it is variant to orientation.
To overcome this problem, many works use the angle of the cosine similarity to measure the diversity between different factors \cite{26, 44, 47}.
Since the angle between different factors is invariant to translation, rotation, orientation, and scale, \cite{26, 44, 47} develop angle-based diversifying methods for the Restricted Boltzmann Machine (RBM).
These works use the mean value and variance of the angles between different factors to formulate the diversity of the model and thereby overcome the problem of the cosine similarity. The angle between different factors can be formulated as
\begin{equation}
\Gamma_{ij}=\arccos \frac{|<w_i,w_j>|}{\|w_i\|\|w_j\|}
\end{equation}
Since we do not care about the orientation of the vectors, as in \cite{26}, we prefer the angle to be acute or right. From the mathematical view, two factors tend to be uncorrelated as the angle between them enlarges.
Then, the diversity function can be defined as {\cite{add_3, 26, add_7, add_4}}
\begin{equation} \label{eq:84}
f(W)=\Psi(W)-\Pi(W)
\end{equation}
where
\begin{equation*}
\Psi(W)=\frac{1}{K^2}\sum_{i\neq j}\Gamma_{ij},
\end{equation*}
\begin{equation*}
\Pi(W)=\frac{1}{K^2}\sum_{i \neq j}(\Gamma_{ij}-\Psi(W))^2.
\end{equation*}
In other words, $\Psi(W)$ denotes the mean of the angles between different factors and $\Pi(W)$ represents the variance of the angles. Generally, a larger $f(W)$ indicates that the weight vectors in $W$ are more diverse. Then, the diversity-promoting prior by the angle of the cosine similarity measurement can be formulated as
\begin{equation}\label{eq:10}
P(W)\propto e^{\gamma f(W)}
\end{equation}
The prior in Eq. \ref{eq:10} encourages the angles between different factors to approach $\displaystyle{\frac{\pi}{2}}$, and thus these factors are enforced to be diversified under the diversification prior. Moreover, the measurement is invariant to scale, translation, rotation, and orientation.
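The mean-minus-variance criterion of Eq. \ref{eq:84} can be sketched as follows (our own illustration with hypothetical factors; the absolute value of the cosine is used since orientation is ignored, and both sums are normalized by $K^2$ as in the formulas above):

```python
import numpy as np

def angle_diversity(W):
    """f(W) = Psi(W) - Pi(W): mean minus variance of pairwise angles.

    Orientation is ignored by taking the absolute cosine, so every
    angle lies in [0, pi/2]; larger f(W) indicates more diverse factors.
    """
    K = W.shape[0]
    angles = []
    for i in range(K):
        for j in range(K):
            if i != j:
                cos = abs(W[i] @ W[j]) / (np.linalg.norm(W[i]) * np.linalg.norm(W[j]))
                angles.append(np.arccos(np.clip(cos, 0.0, 1.0)))
    angles = np.array(angles)
    psi = angles.sum() / K ** 2                   # mean term Psi(W)
    pi_w = np.sum((angles - psi) ** 2) / K ** 2   # variance term Pi(W)
    return float(psi - pi_w)

orthogonal = np.eye(3)                    # mutually orthogonal factors
redundant = np.array([[1.0, 0.0, 0.0],
                      [0.99, 0.1, 0.0],
                      [1.0, 0.01, 0.0]])  # nearly parallel factors
```

Orthogonal factors attain a large mean angle with zero variance, whereas nearly parallel factors score close to zero, so maximizing $f(W)$ pushes the factors toward mutual orthogonality.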
Another form of the angle-based measurements calculates the diversity with the inner product \cite{46,50}.
Different vectors present more diversity when they tend to be more orthogonal. The inner product measures the orthogonality between different vectors, and therefore it can be applied in machine learning models for more diversity.
The general form of diversity-promoting prior by inner product measurement can be written as \cite{46,50}
\begin{equation}
P(W)=e^{-\gamma\sum_{i\neq j}^{K}<w_i, w_j>}.
\end{equation}
Besides, \cite{82} uses the special form of the inner product measurement, which is called exclusivity. The exclusivity between two vectors $w_i$ and $w_j$ is defined as
\begin{equation}
\chi(w_i,w_j)=\|w_i\odot w_j\|_0=\sum_{k=1}^{m} {\bf 1}(w_i(k)\cdot w_j(k)\neq 0)
\end{equation}
where $\odot$ denotes the Hadamard (elementwise) product and $\|\cdot\|_0$ denotes the $L_0$ norm, i.e., the number of nonzero entries. Therefore, the diversity-promoting prior can be written as
\begin{equation}
P(W)=e^{-\gamma\sum_{i\neq j}^{K}\|w_i\odot w_j\|_0}
\end{equation}
Due to the non-convexity and discontinuity of $L_0$ norm, the relaxed exclusivity is calculated as \cite{82}
\begin{equation}
\chi_r(w_i,w_j)=\|w_i\odot w_j\|_1=\sum_{k=1}^{m}|w_i(k)|\cdot |w_j(k)|
\end{equation}
where $\|\cdot\|_1$ denotes the $L_1$ norm. Then, the diversity-promoting prior based on relaxed exclusivity can be calculated as
\begin{equation}\label{eq:89}
P(W)=e^{-\gamma\sum_{i\neq j}^{K}\|w_i\odot w_j\|_1}
\end{equation}
The inner product measurement takes advantage of the relationship among the vectors and tries to encourage different factors to be orthogonal, so as to enforce the learned factors to be diversified. It should be noted that this measurement can be seen as a special form of the cosine similarity measurement. Even though the inner product measurement is variant to scale and orientation, in many real-world applications it is usually considered first to diversify the model, since it is easier to implement than other measurements.
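The exclusivity and its $L_1$ relaxation can be computed directly; a minimal sketch with hypothetical factors:

```python
import numpy as np

def exclusivity(wi, wj):
    """chi(wi, wj) = ||wi * wj||_0: count of positions where both are nonzero."""
    return int(np.count_nonzero(wi * wj))

def relaxed_exclusivity(wi, wj):
    """chi_r(wi, wj) = ||wi * wj||_1: the continuous relaxation."""
    return float(np.sum(np.abs(wi) * np.abs(wj)))

wi = np.array([1.0, 0.0, 2.0, 0.0])
wj = np.array([0.0, 3.0, 1.0, 0.0])  # supports overlap only at index 2
```

Minimizing either quantity drives the supports of the two factors apart, so that each factor is responsible for a disjoint set of dimensions.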
Beyond the distance-based and angle-based measurements, the eigenvalues of the kernel matrix can also be used to encourage different factors to be orthogonal and diversified.
Recall that for a matrix $W$ with orthonormal rows, all the eigenvalues of the kernel matrix $\kappa(W)=WW^T$ are equal to 1. Therefore, when we constrain the eigenvalues to 1, the obtained vectors tend to be orthogonal \cite{add_2, add_9}. Three ways are generally used to encourage the eigenvalues to approach the constant 1: the submodular spectral diversity (SSD) measurement, the uncorrelation and evenness measurement, and the log-determinant divergence (LDD). In the following, these eigenvalue-based measurements are introduced in detail.
{\bf Eigenvalue-based measurements.} As denoted above, $\kappa(W)=WW^T$ stands for the kernel matrix of the latent factors. The first method is the submodular spectral diversity (SSD), which is based on the eigenvalues of the kernel matrix. \cite{39} introduces the SSD measurement in the process of feature selection, which aims to select a diverse set of features. Feature selection is a key component in many machine learning settings: it involves choosing a small subset of features in order to build a model that approximates the target concept well.
The {SSD measurement} uses the square distance to encourage the eigenvalues to approach 1 directly. Define $(\lambda_1,\lambda_2,\cdots,\lambda_K)$ as the eigenvalues of the kernel matrix. Then,
the diversity-promoting prior by SSD from Eq. \ref{eq:46} can be formulated as \cite{39}
\begin{equation}
P(W)=e^{-\gamma\sum_{i=1}^{K}(\lambda_i(\kappa(W))-1)^2}
\end{equation}
where $\gamma$ is also a positive value.
From Eq. \ref{eq:47}, {the diversity regularization $f(W)$ can be formulated as}
\begin{equation}
f(W)=-\sum_{i=1}^{K}(\lambda_i(\kappa(W))-1)^2
\end{equation}
This measurement regularizes the variance of the eigenvalues of the matrix. Since all the eigenvalues are enforced to approach 1, the obtained factors tend to be more orthogonal and thus the model can present more diversity.
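The SSD penalty is a function of the spectrum of $\kappa(W)=WW^T$: for factors with orthonormal rows it vanishes, while duplicated factors inflate it. A minimal sketch (hypothetical factors):

```python
import numpy as np

def ssd_penalty(W):
    """Sum of squared deviations from 1 of the eigenvalues of kappa(W) = W W^T."""
    eigvals = np.linalg.eigvalsh(W @ W.T)
    return float(np.sum((eigvals - 1.0) ** 2))

orthonormal = np.eye(3)                  # kappa(W) = I: all eigenvalues are 1
redundant = np.array([[1.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],   # duplicated factor
                      [0.0, 1.0, 0.0]])
```

The duplicated factor produces eigenvalues $\{2, 1, 0\}$ instead of $\{1, 1, 1\}$, and the penalty grows accordingly, which is what the regularizer minimizes.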
Another diversity measurement based on the kernel matrix is the uncorrelation and evenness \cite{24}. This measurement encourages
the learned factors to be uncorrelated and to play equally important roles in modeling data. Formally, this amounts to encouraging the kernel matrix of the vectors to have more uniform eigenvalues.
The basic idea is to normalize the eigenvalues into a probability simplex and encourage the discrete distribution parameterized by the normalized eigenvalues to have small Kullback-Leibler (KL) divergence with the uniform distribution \cite{24}. Then, the diversity-promoting prior by uniform eigenvalues from Eq. \ref{eq:46} is formulated as
\begin{equation}
P(W)=e^{-\gamma(\frac{tr((\frac{1}{d}\kappa(W))\log(\frac{1}{d}\kappa(W)))}{tr(\frac{1}{d}\kappa(W))}-\log tr(\frac{1}{d}\kappa(W)))}
\end{equation}
subject to $\kappa(W)\succ 0$ (i.e., $\kappa(W)$ is a positive definite matrix) and $W{\bf 1}=0$, where $\kappa(W)$ is the kernel matrix.
Besides, the diversity-promoting uniform eigenvalue regularizer (UER) from Eq. \ref{eq:47} is formulated as
\begin{equation}
f(W)=-[\frac{tr((\frac{1}{d}\kappa(W))\log(\frac{1}{d}\kappa(W)))}{tr(\frac{1}{d}\kappa(W))}-\log tr(\frac{1}{d}\kappa(W))]
\end{equation}
where $d$ is the dimension of each factor.
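Since $\mathrm{tr}(A\log A)=\sum_i \mu_i\log\mu_i$ for a symmetric positive definite $A$ with eigenvalues $\mu_i$, the UER can be computed from the spectrum of $\frac{1}{d}\kappa(W)$. Below is a minimal numpy sketch under that identity (function name ours; the constraint $W{\bf 1}=0$ is handled separately in \cite{24} and is not enforced here):

```python
import numpy as np

def uer_regularizer(W):
    """Uniform eigenvalue regularizer: the negated KL-style bracket above."""
    d = W.shape[1]
    A = (W @ W.T) / d                     # (1/d) * kappa(W)
    mu = np.linalg.eigvalsh(A)            # eigenvalues of (1/d) kappa(W)
    mu = np.clip(mu, 1e-12, None)         # guard: kappa(W) assumed positive definite
    bracket = np.sum(mu * np.log(mu)) / np.sum(mu) - np.log(np.sum(mu))
    return -bracket
```

For $K$ orthonormal factors the normalized eigenvalues are exactly uniform, so $f(W)$ reaches its maximum value $\log K$ (the entropy of the uniform distribution over $K$ outcomes).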
{Besides, \cite{add_9} takes advantage of the log-determinant divergence (LDD) to measure the similarity between different factors. The diversity-promoting prior in \cite{add_9} combines the orthogonality-promoting LDD regularizer with the sparsity-promoting $L_1$ regularizer. Then, the diversity-promoting prior from Eq. \ref{eq:46} can be formulated as}
\begin{equation}
P(W)=e^{-\gamma(tr(\kappa(W))-\log\det(\kappa(W))+\tau |W|_1)}
\end{equation}
{where $tr(\cdot)$ denotes the matrix trace. Then, the corresponding regularizer from Eq. \ref{eq:47} is formulated as}
\begin{equation}
f(W)=-(tr(\kappa(W))-\log\det(\kappa(W))+\tau |W|_1).
\end{equation}
{The LDD-based regularizer can effectively promote nonoverlap \cite{add_9}. Under this regularizer, the factors are encouraged to be sparse and orthogonal simultaneously. }
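The LDD-based regularizer is straightforward to transcribe: the trace term and the log-determinant act on $\kappa(W)$, while the $L_1$ term acts on $W$ entrywise. A minimal numpy sketch (function name and the choice of $\tau$ are ours):

```python
import numpy as np

def ldd_regularizer(W, tau=0.1):
    """f(W) = -(tr(K) - logdet(K) + tau * ||W||_1), with K = W W^T."""
    K = W @ W.T
    sign, logdet = np.linalg.slogdet(K)   # stable log-determinant; K assumed PD
    return -(np.trace(K) - logdet + tau * np.abs(W).sum())
```

Note that $\mathrm{tr}(K)-\log\det(K)$ is minimized (value $K$, the number of factors) exactly when all eigenvalues of $K$ equal 1, i.e. when the factors are orthonormal, which is why this term promotes orthogonality.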
These eigenvalue-based measurements calculate the diversity of the factors from the kernel matrix view. They not only consider the pairwise correlation between the factors, but also take the multiple correlation into consideration. Therefore, they generally present better performance than the distance-based and angular-based methods, which only consider the pairwise correlation. However, the eigenvalue-based measurements incur a higher computational cost in the implementation. Moreover, the gradient of the diversity term, which is used for back propagation, is complex to compute and usually requires special processing methods, such as the projected gradient descent algorithm \cite{24} for the uncorrelation and evenness.
{\bf DPP measurement.}
Besides the eigenvalue-based measurements, another measurement which takes the multiple correlation into consideration is the determinantal point process (DPP) measurement.
{As subsection \ref{subsec:dpp} shows, the DPP on the parameter factors $W$ has the form}
\begin{equation}
P(W)\propto \det(\phi(W)).
\end{equation}
{Generally, it encourages the learned factors to repulse each other.} Therefore, the DPP-based diversifying prior can yield machine learning models with a diverse set of learned factors rather than a redundant one. Some works have shown that the DPP prior is usually not strong enough in some special cases when applied to machine learning models \cite{37}. To make the DPP prior strong enough for all the training data, it is augmented with an additional positive parameter $\gamma$. Therefore, just as in subsection \ref{subsec:dpp}, the DPP prior can be reformulated as
\begin{equation}
P(W)\propto \det(\phi(W))^\gamma
\end{equation}
where $\phi(W)$ denotes the kernel matrix and $\phi(w_i, w_j)$ denotes the pairwise correlation between $w_i$ and $w_j$.
The learned factors are usually normalized, and thus the optimization for machine learning can be written as
\begin{equation}
\max\limits_{W}\log P(X|W)+\gamma \log(\det(\phi(W)))
\end{equation}
where $f(W)=\log(\det(\phi(W)))$ represents the diversity term for machine learning. It should be noted that different kernels can be selected according to the special requirements of different machine learning tasks \cite{38, gong_cnn}. For example, in \cite{gong_cnn}, the similarity kernel is adopted for the DPP prior which can be formulated as
\begin{equation}
\phi(w_i, w_j)=\frac{<w_i, w_j>}{\|w_i\|\|w_j\|}.
\end{equation}
{When we set the cosine similarity as the correlation kernel $\phi$, from geometric interpretation, the DPP prior $P(W)\propto \det(\phi(W))$ can be seen as the volume of the parallelepiped spanned by the columns of $W$ \cite{34}. Therefore, diverse sets are more probable because their feature vectors are more orthogonal, and hence span larger volumes.}
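With the cosine similarity kernel, the DPP diversity term $\log\det(\phi(W))$ can be computed in a few lines. The sketch below (function name ours) normalizes each factor, forms the Gram matrix of cosine similarities, and returns its log-determinant; the determinant of this Gram matrix equals the squared volume of the parallelepiped spanned by the normalized factors, so orthogonal factors score highest:

```python
import numpy as np

def dpp_log_det(W):
    """log det of the cosine-similarity kernel phi(W) over rows of W."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-normalize factors
    phi = Wn @ Wn.T                                    # phi_ij = cos(w_i, w_j)
    sign, logdet = np.linalg.slogdet(phi)
    return logdet
```

For mutually orthogonal factors $\phi(W)=I$ and the term attains its maximum, 0; as factors become parallel, $\det(\phi(W))\to 0$ and the term tends to $-\infty$, which is the repulsion effect described above.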
{It should be noted that most of the diversity} measurements consider the pairwise correlation between the factors and ignore the multiple correlation between three or more factors. {The DPP measurement, in contrast,} takes advantage of the merits of the DPP to exploit the multiple correlation by calculating the similarity between multiple factors.
{\bf $L_{2,1}$ measurement.} While all the former measurements promote the diversity of the model from the pairwise or multiple correlation view, many prior works prefer to use the $L_{2,1}$ for diversity since $L_{2,1}$ can take advantage of the group-wise correlation and obtain a group-wise sparse representation of the latent factors $W$ {\cite{31, 40, 41, add_5}}.
It is well known that the $L_{2,1}$-norm leads to a group-wise sparse representation of $W$. $L_{2,1}$ can also be used to measure the correlation between different parameter factors and diversify the learned factors to improve the representational ability of the model. Then, the $L_{2,1}$ prior from Eq. \ref{eq:46} can be calculated as
\begin{equation}
P(W)=e^{-\gamma\sum_{i}^{K}(\sum_{j}^{n}|w_i(j)|)^2}
\end{equation}
where $w_i(j)$ denotes the $j$-th entry of $w_i$.
The internal $L_1$ norm encourages each factor to be sparse, while the external $L_2$ norm is used to control the complexity of the entire model.
Besides, the diversity term based on $f(W)$ from Eq. \ref{eq:47} can be formulated as
\begin{equation}
f(W)=-\sum_{i}^{K}(\sum_{j}^{n}|w_{i}{(j)}|)^2
\end{equation}
where $n$ is the dimension of each factor $w_i$.
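The $L_{2,1}$-style diversity term above is simply the negated sum of squared $L_1$ norms of the factors. A one-line numpy sketch (function name ours, factors as rows of $W$):

```python
import numpy as np

def l21_regularizer(W):
    """f(W) = -sum_i (sum_j |w_i(j)|)^2: negated squared L1 norm per factor."""
    return -np.sum(np.abs(W).sum(axis=1) ** 2)

# Example: rows [1, -1, 0] and [0, 0, 2] both have L1 norm 2,
# so f(W) = -(2^2 + 2^2) = -8.
W = np.array([[1.0, -1.0, 0.0], [0.0, 0.0, 2.0]])
```

Maximizing $f(W)$ shrinks the per-factor $L_1$ norms, which drives entries toward zero and yields the group-wise sparse structure described above.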
In most machine learning models, the parameters can be viewed as vectors, and the diversity of these factors can be calculated from {the mathematical view, just as in the former measurements.} When the norm of each vector is constrained to the constant 1, we can also treat these factors as probability distributions. Then, the diversity between the factors can also be measured from the Bayesian view.
{\bf Divergence measurement.} Traditionally, divergence, a Bayesian tool generally used to measure the difference between distributions, can be used to promote diversity of the learned model \cite{58}.
Each factor is first treated as a probability distribution. Then, the divergence between factors $w_i$ and $w_j$ can be calculated as
\begin{equation}
D(w_i \| w_j)=\sum_{k=1}^n(w_{i}{(k)}\log\frac{w_{i}{(k)}}{w_{j}{(k)}}-w_{i}{(k)}+w_{j}{(k)})
\end{equation}
subject to $\|w_i\|=1$.
The divergence can measure the dissimilarity between the learned factors, such that the diversity-promoting regularization by divergence from Eq. \ref{eq:47} can be formulated as \cite{58}
\begin{equation}
\begin{aligned}
f(W)=&\sum_{i\neq j}^{K}D(w_i \| w_j) \\
=&\sum_{i\neq j}^{K}\sum_{k=1}^{n}(w_{i}{(k)}\log \frac{w_{i}{(k)}}{w_{j}{(k)}}-w_{i}{(k)}+w_{j}{(k)})
\end{aligned}
\end{equation}
This measurement takes advantage of the characteristics of the divergence to measure the dissimilarity between different distributions. However, the norm of the learned factors needs to satisfy $\|w_i\|=1$, which limits the application field of this diversity measurement.
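The pairwise term above is the generalized KL divergence (it reduces to the ordinary KL divergence when both factors sum to 1). A minimal numpy sketch of the resulting regularizer (function names and the small clipping constant are ours, added to keep the logarithm finite):

```python
import numpy as np

def generalized_kl(wi, wj, eps=1e-12):
    """D(w_i || w_j) = sum_k (w_i(k) log(w_i(k)/w_j(k)) - w_i(k) + w_j(k))."""
    wi = np.clip(wi, eps, None)   # factors assumed nonnegative
    wj = np.clip(wj, eps, None)
    return np.sum(wi * np.log(wi / wj) - wi + wj)

def divergence_regularizer(W):
    """f(W) = sum over ordered pairs i != j of D(w_i || w_j)."""
    K = W.shape[0]
    return sum(generalized_kl(W[i], W[j])
               for i in range(K) for j in range(K) if i != j)
```

Identical factors give $f(W)=0$; the more the factors differ as distributions, the larger the regularizer, so maximizing it pushes the factors apart.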
In conclusion, there are numerous approaches to diversify the learned factors {in machine learning models.} A summary of the most frequently encountered diversity methods is shown in Table \ref{table:01}. Although most papers use slightly different specifications for the diversification of the learned model, the fundamental representation of the diversification is similar. It should also be noted that a common theme among the studied diversity methods is that diversity enforced in a pairwise form between members strikes a good balance between complexity and effectiveness \cite{82}. In addition, different applications should choose the proper diversity measurements according to the specific requirements of different machine learning tasks.
\subsubsection{Analysis}\label{subsec:analysis}
These diversity measurements calculate the similarity between different vectors and thus encourage the diversity of the machine learning model. However, there exist differences between these measurements, detailed in Table \ref{table:comparison}. It can be noted from the table that all these methods take advantage of the pairwise correlation except $L_{2,1}$, which uses the group-wise correlation between different factors. Moreover, the determinantal point process, submodular spectral diversity, and uncorrelation and evenness can also take advantage of the correlation among three or more factors.
Another property of these diversity measurements is scale invariance. Scale invariance makes the diversity of the model invariant w.r.t. the norm of the factors. The cosine similarity measurement calculates the diversity via the angle between different vectors. As a special case for DPP, the cosine similarity can be used as the correlation term $R(w_i,w_j)$ in DPP, and thus the DPP measurement is scale invariant. Besides, for the divergence measurement, since the factors are constrained with $\|w_i\|=1$, the measurement is scale invariant.
\begin{table*}
\centering
\caption{Comparisons of Different Measurements. $\bigcirc$ represents that the measurement possesses the property while $\times$ means the measurement does not possess the property.}
\label{table:comparison}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{p{0.2\textwidth}p{0.14\textwidth}p{0.14\textwidth}p{0.16\textwidth}p{0.14\textwidth}p{0.0\textwidth}}
\hline\noalign{\smallskip}
Measurements & \centering{Pairwise Correlation} & \centering{Multiple Correlation} & \centering{Group-wise Correlation} & \centering{{Scale Invariant}} & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Cosine Similarity & \centering{$\bigcirc$} & \centering{$\times$} & \centering{$\times$} & \centering{{$\bigcirc$}} & \\
Determinantal Point Process & \centering{$\bigcirc$} & \centering{$\bigcirc$} &\centering{$\times$} & \centering{$\bigcirc$} & \\
Submodular Spectral Diversity &\centering{$\bigcirc$} &\centering{$\bigcirc$} & \centering{$\times$}& \centering{$\bigcirc$} & \\
Euclidean Distance &\centering{$\bigcirc$} & \centering{$\times$}&\centering{$\times$} & \centering{$\times$} & \\
Heat Kernel & \centering{$\bigcirc$}&\centering{$\times$} & \centering{$\times$}&\centering{$\times$} & \\
Divergence &\centering{$\bigcirc$} & \centering{$\times$}&\centering{$\times$} &\centering{$\bigcirc$} & \\
Uncorrelation and Evenness &\centering{$\bigcirc$} & \centering{$\bigcirc$}& \centering{$\times$}& \centering{$\bigcirc$}& \\
$L_{2,1}$ &\centering{$\times$} & \centering{$\times$}& \centering{$\bigcirc$}&\centering{$\times$} & \\
Inner Product &\centering{$\bigcirc$} &\centering{$\times$} & \centering{$\times$}& \centering{$\times$}& \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
These measurements can encourage diversity within different vectors. Generally, a machine learning model can be viewed as a set of latent parameter factors, which can be represented as vectors. These factors can be learned and used to represent the objects. In the following, we will mainly summarize the methods to diversify ensemble learning (D-models) for better performance of machine learning tasks.
\subsection{D-Models}\label{subsec:d_models}
The former subsection introduces ways to diversify the parameters in a single model and improve the representational ability of the model directly. Much effort has been devoted to obtaining the highest probability configuration of machine learning models in prior works. However, even when the training samples are sufficient, the maximum \textit{a posteriori} (MAP) solution can still be sub-optimal.
In many situations, one could benefit from additional representations with multiple models. As Fig. \ref{fig:03} shows, ensemble learning (training multiple models) has already appeared in many prior works. However, traditional ensemble learning methods may produce representations that tend to be similar, while the representations obtained from different models are desired to provide complementary information.
Recently, many diversifying methods have been proposed to overcome this problem. As Fig. \ref{fig:d_models} shows, under model diversification, each base model of the ensemble can produce different outputs reflecting multi-modal belief. Therefore, the overall performance of the machine learning model can be improved. In particular, D-models play an important role in structured prediction problems with multiple reasonable interpretations, of which only one is the ground truth \cite{99}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{D_models.pdf}
\caption{Effects of D-models for improving the performance of the machine learning model. The figure shows the image segmentation task from the prior work \cite{99}. A single model often produces solutions with low expected loss and falls into sub-optimal results. Besides, general ensemble learning usually provides multiple choices with great similarity. Therefore, this work summarizes the methods which can diversify the ensemble learning (D-models). As the figure shows, under model diversification, each model of the ensemble can produce different outputs reflecting multi-modal belief \cite{99}.}
\label{fig:d_models}
\end{figure}
{Denote $W_i (i=1,2,\cdots, s)$ and $P(W_i)$ as the parameters and the inference from the $i$th model where $s$ is the number of the parallel base models.}
Then, the optimization of the machine learning to obtain multiple models can be written as
\begin{equation}
\max\limits_{W_1, W_2, \cdots, W_s} \sum_{i=1}^s L(W_i|X_i)
\end{equation}
where $L(W_i|X_i)$ represents the optimization term of the $i$th model and $X_i$ denotes the training samples of the $i$th model. Traditionally, the training samples are randomly divided into multiple subsets and each subset trains a corresponding model. However, selecting subsets randomly may lead to redundancy between different representations. Therefore, the first way to obtain multiple diversified models is to diversify the training samples over different base models; we call these sample-based methods.
Another way to encourage the diversification between different models is to measure the similarity between different base models with a special similarity measurement and encourage different base models to be diversified in the training process, which is summarized as the optimization-based methods. The optimization of these methods can be written as
\begin{equation}\label{eq:58}
\max\limits_{W_1, W_2, \cdots, W_s} \sum_{i=1}^s L(W_i|X)+\gamma\Gamma(W_1, W_2, \cdots, W_s)
\end{equation}
where $\Gamma(W_1, W_2, \cdots, W_s)$ measures the diversification between different base models. These methods are similar to the measurements for the D-model in the former subsection.
Finally, some other methods try to obtain a large number of models and select the top-$L$ models as the final ensemble; we call these ranking-based methods. In the following, we will summarize the methods for diversifying multiple models from these three aspects in detail.
\begin{table}
\centering
\caption{{Overview of most frequently used diversification method in D-models and the papers in which example measurements can be found.}}
\label{table:02}
\begin{tabular}{l | p{0.12\textwidth}p{0.15\textwidth}}
\hline\noalign{\smallskip}
Methods & Measurements & Papers \\
\noalign{\smallskip}\hline
\multirow{4}*{{Optimization-based}} & Divergence & \cite{118,114} \\
\cline{2-3}
&Renyi-entropy & \cite{77} \\
\cline{2-3}
& Cross Entropy & \cite{74,80} \\
\cline{2-3}
& Cosine Similarity & \cite{117, 82} \\
\cline{2-3}
& $L_{2,1}$ & \cite{41} \\
\cline{2-3}
& NCL & \cite{115,116, 92, 129} \\
\cline{2-3}
& Others & \cite{90, 118, 119, 30, 117, 120, 121} \\
\hline
{Sample-based} & \centering{-} & \cite{15,75,88,99,104} \\
\hline
{Ranking-based} & \centering{-} & \cite{85, 22, 105} \\
\hline
\end{tabular}
\end{table}
\subsubsection{Optimization-Based Methods}\label{subsubsec:optimization}
Optimization-based methods are among the most commonly used methods to diversify multiple models. These methods try to obtain multiple diversified models by optimizing a given objective function, as Eq. \ref{eq:58} shows, which includes a diversity measurement. Just as for the diversity of the D-model in the prior subsection, the main problem of these methods is to define diversity measurements which can calculate the difference between different models.
{Many prior works \cite{30, 90, 117, 84, add_14, add_3} have summarized some pairwise diversity measurements, such as the Q-statistics measure \cite{118, add_3}, the correlation coefficient measure \cite{118, add_3}, the disagreement measure \cite{119, 90, 113}, the double-fault measure \cite{120, 90, 113}, the $k$ statistic measure \cite{121}, the Kohavi-Wolpert variance \cite{30, 113}, the inter-rater agreement \cite{30, 113}, the generalized diversity \cite{30} and the measure of ``difficulty'' \cite{30, 113}.} Recently, more measurements have been developed, including not only pairwise diversity measurements \cite{114, 118, 117} but also measurements which calculate the multiple correlation and others \cite{41,78,92, 116}. This subsection summarizes these methods systematically.
{\bf Bayesian-based measurements.} Similar to the D-model, Bayesian methods can also be applied in D-models. Among these Bayesian methods, divergence is the most commonly used one. As the former subsection shows, the divergence can measure the difference between different distributions. The diversity-promoting term by the divergence method over the ensemble is formulated by calculating the divergence between the distributions of the inferences of the different models \cite{114,118}. The diversity-promoting term by divergence from Eq. \ref{eq:58} can be formulated as
\begin{equation}
\begin{aligned}
\Gamma(W_1, W_2, \cdots, & W_s)= \\
\sum_{i,j}^{s}\sum_{k=1}^{n}(P(W_{i}&{(k)})\log\frac{P(W_{i}{(k)})}{P(W_{j}{(k)})}-P(W_{i}{(k)})+P(W_{j}{(k)}))
\end{aligned}
\end{equation}
where $W_i{(k)}$ represents the $k$-th entry in $W_i$ and $P(W_i)$ denotes the distribution of the inference from the $i$-th model. This diversity term increases the difference between the inferences obtained from different models and encourages the learned multiple models to be diversified.
In addition to the divergence measurements, the Renyi entropy, which measures the kernelized distances between the images of samples and the center of the ensemble in the high-dimensional feature space, can also be used to encourage the diversity of the learned multiple models \cite{77}.
The Renyi-entropy is calculated based on the Gaussian kernel function and the diversity-promoting term from Eq. \ref{eq:58} can be formulated as
\begin{equation}
\begin{aligned}
\Gamma(W_1, W_2, \cdots, W_s)&= \\
-\log[\frac{1}{s^2}\sum_{i=1}^{s}\sum_{j=1}^{s}&G(P(W_i)-P(W_j),2\sigma^2)]
\end{aligned}
\end{equation}
where $\sigma$ is a positive value and $G(\cdot)$ represents the Gaussian kernel function, which can be calculated as
\begin{equation}
\begin{aligned}
G(P(W_i)-P(W_j),&2\sigma^2)= \\
\frac{1}{(2\pi)^{\frac{d}{2}}\sigma^d}&\exp\{-\frac{(P(W_i)-P(W_j))^T(P(W_i)-P(W_j))}{2\sigma^2}\}
\end{aligned}
\end{equation}
where $d$ denotes the dimension of $P(W_i)$. Compared with the divergence measurement, the Renyi-entropy measurement may be better suited to {the machine learning model}, since the difference can be adapted to different models through the value of $\sigma$. However, the Renyi entropy incurs a higher computational cost and makes the update of the ensemble more complex.
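The Renyi-entropy term is just the negative logarithm of the mean pairwise Gaussian kernel value over the model outputs. A minimal numpy sketch (function name ours; $P$ stacks the $s$ model outputs as rows, an assumption about the data layout):

```python
import numpy as np

def renyi_diversity(P, sigma=1.0):
    """-log of the mean pairwise Gaussian kernel over model outputs P (s x d)."""
    s, d = P.shape
    norm = 1.0 / ((2 * np.pi) ** (d / 2) * sigma ** d)  # Gaussian normalization
    total = 0.0
    for i in range(s):
        for j in range(s):
            diff = P[i] - P[j]
            total += norm * np.exp(-(diff @ diff) / (2 * sigma ** 2))
    return -np.log(total / s ** 2)
```

When all model outputs coincide, every kernel value is maximal and the term is smallest; spreading the outputs apart shrinks the kernel values and increases the term, which is the desired repulsion.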
Another measurement based on the Bayesian method is the cross entropy measurement \cite{74,80,93}. The cross entropy measurement uses the cross entropy between pairwise distributions to encourage two distributions to be dissimilar, so that different base models can provide more complementary information.
Therefore, the cross-entropy between different base models can be calculated as
\begin{equation}
\begin{aligned}
\Gamma(w_i,w_j)=&\frac{1}{n}\sum_{k=1}^{n}(P_k(w_i)\log P_k(w_j) \\
&+(1-P_k(w_i))\log (1-P_k(w_j)))
\end{aligned}
\end{equation}
where $P(w_i)$ is the inference of the $i$-th model and $P_k(w_i)$ is the probability of the sample belonging to the $k$th class.
According to the characteristics of the cross entropy and the requirement of the diversity regularization, the diversity-promoting regularization of the cross entropy from Eq. \ref{eq:58} can be formulated as
\begin{equation}
\begin{aligned}
\Gamma(w_1,w_2,\cdots,w_s)=&\frac{1}{n}\sum_{i,j}^{s}\sum_{k=1}^{n}(P_k(w_i)\log P_k(w_j) \\
&+(1-P_k(w_i))\log(1-P_k(w_j)))
\end{aligned}
\end{equation}
It is well known that the larger the cross entropy is, the more different the distributions are. Therefore, under the cross entropy measurement, different models can be diversified and provide more complementary information.
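The pairwise term can be transcribed directly. Below is a minimal numpy sketch (function name ours; we sum over ordered pairs $i\neq j$ and clip probabilities away from 0 and 1 to keep the logarithms finite, both of which are our assumptions about details left implicit above):

```python
import numpy as np

def cross_entropy_term(P, eps=1e-12):
    """P: (s, n) array, P[i, k] = probability of class k under model i."""
    P = np.clip(P, eps, 1 - eps)   # avoid log(0)
    s, n = P.shape
    total = 0.0
    for i in range(s):
        for j in range(s):
            if i != j:
                total += np.sum(P[i] * np.log(P[j])
                                + (1 - P[i]) * np.log(1 - P[j]))
    return total / n
```

The sign convention in which this quantity enters the objective follows the cited works; the sketch only evaluates the pairwise cross-entropy sum itself.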
Most of the former Bayesian methods promote diversity in the learned {multiple base models} by calculating the pairwise difference between these base models. However, they ignore the correlation among three or more base models.
To overcome this problem, \cite{78} proposes a hierarchical pair competition-based parallel genetic algorithm (HFC-PGA) to increase the diversity among the component neural networks. The HFC-PGA takes advantage of the average of all the distributions from the ensemble to calculate the difference of each base model. The diversity term by HFC-PGA from Eq. \ref{eq:58} can be formulated as
\begin{equation}
\Gamma(W_1,W_2,\cdots,W_s)=\sum_{j=1}^{s}(\frac{1}{s}\sum_{i=1}^{s}P(W_i)-P(W_j))^2
\end{equation}
It should be noted that the HFC-PGA takes advantage of the multiple correlation between the models. However, the HFC-PGA method uses fixed weights to calculate the mean of the distributions and further calculate the covariance of the multiple models, which usually cannot adapt to different tasks. This limits the performance of the diversity-promoting prior.
To deal with the {shortcomings} of the HFC-PGA, negative correlation learning (NCL) tries to reduce the covariance among all the models without increasing the variance and bias terms \cite{115,116, 92, 129}. NCL trains the base models simultaneously in a cooperative manner that decorrelates individual errors. The penalty term can be designed in different ways depending on whether the models are trained sequentially or in parallel.
\cite{115} uses the penalty to decorrelate the current learning model with all previously learned models
\begin{equation}
\Gamma(W_1,W_2,\cdots,W_s)=\sum_{k=1}^{s}(P(W_k)-l)\sum_{j=1}^{k-1}(P(W_j)-l)
\end{equation}
where $l$ represents the target function which is a desired output scalar vector.
{Besides, define $\bar{P}=\sum_{i=1}^{s}\alpha_i P(W_i)$ where $\sum_{i=1}^{s}\alpha_i=1$.} Then, the penalty term can also be defined to reduce the correlation mutually among all the learned models by using the actual distribution $\bar{P}$ obtained from each model instead of the target function $l$ {\cite{116,92,add_14}}.
\begin{equation}
\Gamma(W_1,W_2,\cdots,W_s)=\sum_{k=1}^{s}(P(W_k)-\overline{P})\sum_{j=1}^{k-1}(P(W_j)-\overline{P})
\end{equation}
This measurement uses the covariance of the inference results obtained from the multiple models to reduce the correlation mutually among the learned models. Therefore, the learned multiple models can be diversified.
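The mutual NCL penalty can be sketched as follows (function name ours; we treat each model output $P(W_i)$ as a vector, use a dot product for the product of deviations, and default to uniform combination weights $\alpha_i=1/s$, all of which are assumptions about details left implicit above):

```python
import numpy as np

def ncl_penalty(P, alpha=None):
    """P: (s, d) array of model outputs; mutual NCL penalty with ensemble mean."""
    s = P.shape[0]
    if alpha is None:
        alpha = np.full(s, 1.0 / s)        # uniform weights, sum alpha_i = 1
    P_bar = alpha @ P                      # ensemble output \bar{P}
    dev = P - P_bar                        # deviations P(W_k) - \bar{P}
    total = 0.0
    for k in range(1, s):
        # (P(W_k) - Pbar) . sum_{j<k} (P(W_j) - Pbar)
        total += dev[k] @ dev[:k].sum(axis=0)
    return total
```

With uniform weights the deviations sum to zero, so the penalty equals $-\frac{1}{2}\sum_k\|P(W_k)-\bar P\|^2$: it becomes more negative as the models disagree more, which is exactly the negative correlation being encouraged.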
{In addition, \cite{76} further combines NCL with sparsity.} There, the sparsity is purely pursued by the $L_1$ norm regularization without considering the complementary characteristics of the available base models.
Most of the Bayesian methods promote diversity in ensemble learning mainly by increasing the difference between the probability distributions of the inferences of different base models. There exist other methods which promote diversity over the parameters of each base model directly.
{\bf Cosine similarity measurement.} Different from the Bayesian methods, which promote diversity from the distribution view,
\cite{117} introduces the cosine similarity measurement to calculate the difference between different models from a geometric view. Generally, the diversity-promoting term from Eq. \ref{eq:58} can be written as
\begin{equation}
\Gamma(W_1, W_2, \cdots, W_s)=-\sum_{i\neq j}^{s}\frac{<W_i,W_j>}{\|W_i\|\|W_j\|}.
\end{equation}
In addition, a special form of the inner product measurement, termed exclusivity, has been proposed by \cite{82} to obtain diversified models. It can jointly suppress the training error of the ensemble and enhance the diversity between bases. The diversity-promoting term by exclusivity (see Eq. \ref{eq:89} for details) from Eq. \ref{eq:58} can be written as
\begin{equation}
\Gamma(W_1, W_2, \cdots, W_s)=-\sum_{i\neq j}^{s}\|W_i\odot W_j\|_1
\end{equation}
These measurements try to encourage the pairwise models to be uncorrelated so that each base model can provide more complementary information.
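The exclusivity term is a direct elementwise computation: the $L_1$ norm of the Hadamard product of each pair of parameter vectors. A minimal numpy sketch (function name ours, models flattened to rows of $W$):

```python
import numpy as np

def exclusivity_term(W):
    """-sum over ordered pairs i != j of ||W_i (Hadamard) W_j||_1."""
    s = W.shape[0]
    total = 0.0
    for i in range(s):
        for j in range(s):
            if i != j:
                total += np.abs(W[i] * W[j]).sum()  # Hadamard product, L1 norm
    return -total
```

The term is 0 (its maximum) exactly when the models have disjoint supports, so maximizing it pushes different models to rely on different parameter entries.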
{\bf $L_{2,1}$ measurement.} Just as in the former subsection, the $L_{2,1}$ norm can also be used for the diversification of multiple models \cite{41}. The diversity-promoting regularization by $L_{2,1}$ from Eq. \ref{eq:58} can be formulated as
\begin{equation}
\Gamma(W_1, W_2, \cdots, W_s)=-\sum_{i}^{s}(\sum_{j}^{K}|W_i(j)|)^2
\end{equation}
The $L_{2,1}$ measurement uses the group-wise correlation between different base models and favors selecting diverse models residing in more groups.
Some other diversity measurements have been proposed for deep ensembles.
\cite{91} reveals that it may be better to ensemble many instead of all of the neural networks at hand. The paper develops an approach named Genetic Algorithm based Selective Ensemble (GASEN) to obtain different weights for each neural network. Then, based on the obtained weights, the deep ensemble can be formulated. Moreover, \cite{79} also encourages the diversity of the deep ensemble by defining a pair-wise similarity between different terms.
{These optimization-based methods utilize the correlation between different models and try to repulse these models from one another. The aim is to enforce the representations obtained from different models to be diversified, so that the base models can provide outputs reflecting multi-modal belief.}
\subsubsection{Sample-Based Methods}\label{subsubsec:sample}
In addition to diversifying the ensemble learning from the optimization view, we can also diversify the models from the sample view. {Generally, the training set is randomly divided into multiple subsets, where each base model is trained on a specific subset.} However, there may exist overlap between the representations of different base models. This can cause redundancy and even decrease the performance of the ensemble due to the reduction of the training samples for each model caused by dividing the whole training set.
To overcome this problem and provide more complementary information from different models, \cite{99} develops a novel method that divides the training samples into multiple subsets by assigning each training sample to the subset whose corresponding learned model shows the lowest prediction error. Therefore, each base model focuses on modeling the features of specific classes. Besides, clustering is another popular method to divide the training samples for different models \cite{104}. Although diversifying the obtained subsets can make the multiple models provide more complementary information, the reduction of training samples caused by dividing the whole training set has negative effects on performance.
To overcome this problem, another way to enforce different models to be diversified is to assign each sample a specific weight \cite{15}. By training different base models with different sample weights, each base model can focus on complementary information from the samples. The detailed steps in \cite{15} are as follows:
\begin{itemize}
\item {Define the weights over each training sample randomly, and train the model with the given weights;}
\item {Revise the weights over each training sample based on the final loss from the obtained model, and train the second model with the updated weights;}
\item {Train $M$ models with the aforementioned strategies.}
\end{itemize}
The former methods take advantage of the labelled training samples to enforce the diversity of multiple models. There exists another method, namely Unlabeled Data to Enhance Ensemble (UDEED) \cite{75}, which {focuses on} the unlabelled samples to promote diversity of the model. Unlike existing semi-supervised ensemble methods, where error-prone pseudo-labels are estimated for unlabelled data to enlarge the labelled set and improve accuracy, UDEED works by maximizing the accuracy of the base models on labelled data while maximizing the diversity among them on unlabelled data.
Besides, \cite{88} combines the different initializations, different training sets and different feature subsets to encourage the diversity of the multiple models.
The methods in this subsection operate on the training sets to diversify different models. By training different models with different training samples, or with samples weighted differently, the models provide different information and thus the whole ensemble covers a larger proportion of the information.
\subsubsection{Ranking-Based Methods}\label{subsubsec:ranking}
Another kind of method to promote diversity in the obtained multiple models is the ranking-based methods.
All the models are first ranked according to some criterion, and then the top-$L$ are selected to form the final ensemble. Here, \cite{85} focuses on pruning techniques based on forward/backward selection, since they allow a direct comparison with the simple estimation of accuracy from different models.
Clustering can also be used as a ranking-based method to enforce the diversity of multiple models \cite{105}. In \cite{105}, the models are first clustered based on the similarity of their predictions, then each cluster is pruned to remove redundant models, and finally the remaining models in each cluster are combined as the base models.
In addition to the aforementioned methods, \cite{22} provides multiple diversified models by selecting different sets of features. Through multi-scale or other tricks, each sample provides a large number of features, from which the top-$L$ feature sets are chosen as the base features (see \cite{22} for details). Then, each base feature set is used to train a specific model, and the final inference is obtained through the combination of these models.
In summary, this paper summarizes the diversification methods for D-models from three aspects: optimization-based methods, sample-based methods, and ranking-based methods. The details of the most frequently encountered diversity methods are shown in Table \ref{table:02}. Optimization-based methods encourage the multiple models to be diversified by imposing diversity regularization between different base models while optimizing these models. In contrast, sample-based methods mainly obtain diversified models by training different models with specific training sets. Most of the prior works focus on diversifying the ensemble learning from these two aspects. The ranking-based methods, in turn, try to obtain multiple diversified models by choosing the top-$L$ models.
{Researchers can choose the specific method for D-models based on the special requirements of their machine learning tasks.}
\section{Inference Diversification}\label{sec:inference}
The former sections summarized methods to diversify the parameters within a model and across multiple base models. The D-model focuses on diversifying the parameters within a single model to improve its representational ability, while D-models aims to obtain multiple diversified base models, each of which focuses on modeling different features of the samples. These works improve the performance of the machine learning process {in the modeling stage (see Fig. \ref{fig:01} for details).}
In addition to these methods, there also exist works that focus on obtaining multiple choices during the inference of the machine learning model.
{This section summarizes these diversification methods in the inference stage.} To introduce the inference diversification methods in detail, we adopt the graphical model as the representation of the machine learning model.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{inference_diversification.pdf}
\caption{Effects of inference diversification on the performance of the machine learning model. The results come from prior work \cite{9}. Through inference diversification, multiple diversified choices can be obtained. Then, with the help of other methods, such as the re-ranking in \cite{9}, the final solution can be obtained.}
\label{fig:inference_diversification}
\end{figure}
We consider a set of discrete random variables $X=\{x_i|i\in\{1,2,\cdots,N\}\}$, each taking a value $y_i$ in a finite label set $L_v$. Let $G=(V,E)$ ($V=\{1,2,\cdots,N\}$, $E\subseteq\binom{V}{2}$) be a graph defined over these variables. The set $L_\chi=\prod_{v\in \chi}L_v$ denotes the Cartesian product of the label sets corresponding to a subset $\chi\subseteq V$ of the variables. Furthermore, let $\theta_A:L_A\rightarrow R$ ($\forall A\in V\cup E$) denote the functions that define the energy at each node and edge for the labelling of the variables in their scope. The goal of MAP inference is to find the labelling ${\bf y}=\{y_1,y_2,\cdots,y_N\}$ of the variables that minimizes this real-valued energy function:
\begin{equation}
{\bf y}^*=\arg\min\limits_{\bf y}E({\bf y})=\arg \min\limits_{{\bf y}}\sum_{A\in V\cup E} \theta_A({\bf y})
\end{equation}
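To make the objective concrete, the following sketch performs exhaustive MAP inference over a tiny graph; the function name, data layout, and toy energies are illustrative assumptions made here, not taken from any cited work.

```python
import itertools

def map_inference(unary, pairwise, edges, n_labels):
    """Exhaustive MAP inference: minimize the summed node and edge
    energies over all joint labellings (tractable only for tiny graphs)."""
    n_nodes = len(unary)

    def total_energy(y):
        # sum of unary energies theta_v(y_v) ...
        e = sum(unary[v][y[v]] for v in range(n_nodes))
        # ... plus pairwise energies theta_uv(y_u, y_v)
        e += sum(pairwise[(u, v)][y[u]][y[v]] for (u, v) in edges)
        return e

    return min(itertools.product(range(n_labels), repeat=n_nodes),
               key=total_energy)
```

For a two-node graph whose unary costs favour different labels and whose pairwise cost penalizes disagreement, the minimizer balances both terms.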
However, ${\bf y}^*$ is usually sub-optimal due to the limited representational ability of the model and the limited training samples. Therefore, multiple choices, which can provide complementary information, are desired from the model.
Traditional methods obtain multiple choices ${\bf y}^1, {\bf y}^2,\cdots,{\bf y}^M$ by solving the following optimization problem:
\begin{equation} \label{eq:81}
\begin{aligned}
&{\bf y}^m=\arg\min\limits_{\bf y}E({\bf y})=\arg\min\limits_{\bf y}\sum_{A\in V\bigcup E}\theta_A({\bf y})\\
&s.t. \ {\bf y}^m\neq {\bf y}^i, i=1,2,\cdots,m-1
\end{aligned}
\end{equation}
However, the obtained second-best choice will typically be a one-pixel-shifted version of the best one \cite{23}. In other words, the next-best choices will almost certainly be located on the upper slope of the peak corresponding to the most confident detection, while other peaks may be ignored entirely.
To overcome this problem, many methods, such as diversity-promoting multiple choice learning (D-MCL), submodular diversification, M-modes, and M-NMS, have been developed for inference diversification in prior works. These methods try to diversify the obtained choices (so that they do not overlap under a user-defined criterion) while still obtaining a high score on the optimization term.
Fig. \ref{fig:inference_diversification} shows some image segmentation results from \cite{9}. With inference diversification, we can obtain multiple diversified choices, which represent different optima of the data. {Many prior works focus on providing multiple diversified choices in the inference phase.} In this work, we refer to the diversification in these works as inference diversification. The following subsections introduce these works in detail.
\begin{table}
\centering
\caption{Overview of most frequently used inference diversification methods and the papers in which example measurements can be found.}
\label{table:03}
\begin{tabular}{p{0.2\textwidth}p{0.22\textwidth}}
\hline\noalign{\smallskip}
Measurements & Papers \\
\noalign{\smallskip}\hline\noalign{\smallskip}
D-MCL & {\cite{4,5,7,9,11,16, 87}} \\
Submodular for Diversification & {\cite{6,8,94, add_34,add_35, add_44,add_45}} \\
M-modes & \cite{12} \\
M-NMS & {\cite{19,20,21,89, 96, 97}} \\
DPP & {\cite{32, 152, add_8}} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Diversity-Promoting Multiple Choice Learning (D-MCL)}\label{subsec:mcl}
D-MCL tries to find a diverse set of highly probable solutions under a discrete probabilistic model. Given a dissimilarity function measuring the difference between pairs of choices, the formulation maximizes a linear combination of the probability and the dissimilarity to the previous choices. Even if the MAP solution alone is of poor quality, a diverse set of highly probable hypotheses may still enable accurate predictions. The goal of D-MCL is thus to produce a diverse set of low-energy solutions.
The first method approaches the problem with a greedy algorithm, where the next choice is defined as the lowest-energy state with at least some minimum dissimilarity to the previously chosen choices. To this end, a dissimilarity function $\Delta({\bf y},{\bf y}^i)$ is defined first. To find $M$ diverse, low-energy labellings ${\bf y}^1, {\bf y}^2, \cdots, {\bf y}^M$, the method solves a sequence of problems of the form \cite{5, 9, 11, 16, 87}
\begin{equation} \label{eq:65}
{\bf y}^m=\arg\min\limits_{\bf y}(E({\bf y})-\gamma\sum_{i=1}^{m-1}\Delta({\bf y},{\bf y}^i))
\end{equation}
for $m=1,2,\cdots,M$, where $\gamma>0$ determines the trade-off between diversity and energy, ${\bf y}^1$ is the MAP solution, and the function $\Delta:L_V\times L_V\rightarrow R$ defines the diversity of two labellings. In other words, $\Delta({\bf y},{\bf y}^i)$ takes a large value if ${\bf y}$ and ${\bf y}^i$ are diverse, and a small value otherwise. As a special case, M-best MAP is recovered when $\Delta$ is a 0-1 dissimilarity (i.e., $\Delta({\bf y}, {\bf y}^i)=I({\bf y}\neq {\bf y}^i)$).
This method considers the pairwise dissimilarity between the obtained choices. {More importantly, it is easy to understand and implement.} However, under the greedy strategy, each new labelling is obtained based only on the previously found solutions and ignores the upcoming labellings \cite{7}.
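The greedy scheme of Eq. (\ref{eq:65}) can be sketched as follows. This is a brute-force toy with unary-only energies and the Hamming dissimilarity, assumptions made here for illustration rather than the implementation of the cited works.

```python
import itertools

def energy(y, theta):
    """Toy unary-only energy: sum of per-node label costs theta[v][y_v]."""
    return sum(theta[v][lab] for v, lab in enumerate(y))

def hamming(y, y_prev):
    """Dissimilarity Delta(y, y'): number of nodes with different labels."""
    return sum(a != b for a, b in zip(y, y_prev))

def greedy_dmcl(theta, n_labels, M, gamma):
    """Greedy D-MCL: the m-th labelling minimizes E(y) minus gamma times
    its summed dissimilarity to the previously chosen labellings.
    Brute force over the label space, so feasible only for tiny models."""
    space = list(itertools.product(range(n_labels), repeat=len(theta)))
    chosen = []
    for _ in range(M):
        best = min(space,
                   key=lambda y: energy(y, theta)
                   - gamma * sum(hamming(y, p) for p in chosen))
        chosen.append(best)
    return chosen
```

The first returned labelling is the MAP solution; a larger $\gamma$ pushes the later labellings further away from it.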
In contrast to the former form, the second method formulates the $M$-best diverse problem as a single energy minimization problem \cite{7}. Instead of the greedy sequential procedure in (\ref{eq:65}), this method infers all $M$ labellings jointly by minimizing
\begin{equation} \label{eq:82}
E^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=\sum_{i=1}^{M}E({\bf y}^i)-\gamma \Delta^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)
\end{equation}
where $\Delta^M$ defines the total diversity of any $M$ labellings. To achieve this, $M$ copies of the initial model are first created, and three specific diversity measures are introduced. The split-diversity measure is written as a sum of pairwise diversities, i.e., terms penalizing pairs of labellings \cite{7}
\begin{equation}
\Delta^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=\sum_{i=2}^{M}\sum_{j=1}^{i-1}\Delta({\bf y}^i,{\bf y}^j)
\end{equation}
The node-diversity measure is defined as \cite{7}
\begin{equation}
\Delta^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=\sum_{v\in V}\Delta_v(y_v^1,y_v^2,\cdots,y_v^M)
\end{equation}
Finally, the node-split-diversity measure is a special case of both the split-diversity and node-diversity measures \cite{7}
\begin{equation}
\Delta^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=\sum_{v\in V}\sum_{i=2}^{M}\sum_{j=1}^{i-1}\Delta_v(y_v^i,y_v^j)
\end{equation}
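As a quick illustration of these measures (a hedged sketch with hypothetical helper names): for the Hamming dissimilarity, the split-diversity and the node-split-diversity coincide, since the Hamming distance itself decomposes over nodes.

```python
def split_diversity(labellings, delta):
    """Split-diversity: sum of pairwise diversities Delta(y^i, y^j)
    over all pairs i > j."""
    M = len(labellings)
    return sum(delta(labellings[i], labellings[j])
               for i in range(1, M) for j in range(i))

def node_split_diversity(labellings, delta_v):
    """Node-split-diversity: node-wise diversity accumulated over
    all pairs of labellings."""
    M, n_nodes = len(labellings), len(labellings[0])
    return sum(delta_v(labellings[i][v], labellings[j][v])
               for v in range(n_nodes)
               for i in range(1, M) for j in range(i))
```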
The D-MCL methods find multiple choices using a dissimilarity function, which helps the machine learning model provide more distinct and diverse choices. However, the obtained choices may not be local extrema, and {there may exist other choices} that represent the objects better than the obtained ones.
\subsection{Submodular for Diversification }\label{subsec:submodular_mcl}
The problem of searching for a diverse but high-quality subset of items in a ground set $V$ of $N$ items has been studied in {information retrieval \cite{add_35}, web search \cite{add_34}, social networks \cite{add_47}, sensor placement \cite{add_48}, the observation selection problem \cite{add_46}, the set cover problem \cite{add_49}, document summarization \cite{add_44,add_45}, and others}. In many of these works, an effective, theoretically grounded, and practical tool for measuring the diversity of a set $S\subseteq V$ is the submodular set function. Submodularity is a property of diminishing marginal gains. A set function $F:2^V\rightarrow R$ is submodular when its marginal gains $F(a|S)\equiv F(S\cup a)-F(S)$ are decreasing: $F(a|S)\geq F(a|T)$ for all $S\subseteq T$ and $a\notin T$. In addition, if $F$ is monotone, i.e., $F(S)\leq F(T)$ whenever $S\subseteq T$, then a simple greedy algorithm that iteratively adds the element with the largest marginal gain $F(a|S)$ to the current set $S$ achieves the best possible approximation bound of $\displaystyle{(1-\frac{1}{e})}$ \cite{94}. This result has had significant practical impact. Unfortunately, if the number $N=|V|$ of items is exponentially large, then even a single linear scan for greedy augmentation is infeasible.
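The diminishing-returns property and the greedy rule can be illustrated with a small coverage function, a standard monotone submodular example; the names and data here are illustrative.

```python
def coverage(S, groups):
    """Set-cover style objective F(S): the number of distinct elements
    covered by the chosen groups; monotone and submodular."""
    covered = set()
    for i in S:
        covered |= groups[i]
    return len(covered)

def marginal_gain(F, a, S, groups):
    """Marginal gain F(a|S) = F(S + {a}) - F(S)."""
    return F(S | {a}, groups) - F(S, groups)

def greedy_max(F, ground, k, groups):
    """Greedy maximization: iteratively add the item with the largest
    marginal gain; for monotone submodular F this is (1 - 1/e)-optimal."""
    S = set()
    for _ in range(k):
        S.add(max(ground - S, key=lambda a: marginal_gain(F, a, S, groups)))
    return S
```

On a toy instance, the gain of any fixed item never increases as the chosen set grows, which is exactly the submodularity condition.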
Denote $S$ as the set of choices. The diversity is measured by a monotone, nondecreasing, and normalized submodular function $D: 2^V\rightarrow R^+$. Then, the problem is transformed into finding {configurations maximizing the combined score \cite{6, add_34, add_35, add_44, add_45}}
\begin{equation}
F(S)=E(S)+\gamma D(S)
\end{equation}
The optimization can be solved by a greedy algorithm that starts with $S^0=\emptyset$ and {iteratively adds the best term:}
\begin{equation}
\begin{aligned}
{\bf y}^m=&\arg\max\limits_{\bf y}F({\bf y}|S^{m-1}) \\
=&\arg\max\limits_{\bf y}\{E({\bf y})+\gamma D({\bf y}|S^{m-1})\}
\end{aligned}
\end{equation}
where $S^m={\bf y}^m\cup S^{m-1}$. The selected set $S^m$ is within a factor of $\displaystyle{1-\frac{1}{e}}$ of the optimal solution $S^*$:
\begin{equation}
F(S^m)\geq (1-\frac{1}{e})F(S^*).
\end{equation}
Submodular methods take advantage of marginal-gain maximization to find multiple choices that provide the maximum amount of complementary information.
\subsection{M-NMS}\label{subsec:nms}
Another way to obtain multiple diversified choices is non-maximum suppression (M-NMS) \cite{150, 97}. M-NMS is typically defined algorithmically: starting from the MAP prediction, one goes through all labellings in increasing order of energy. A labelling becomes part of the predicted set if and only if it is more than $\rho$ away from the ones chosen before, where $\rho$ is a user-defined threshold that judges whether two labellings are similar. M-NMS guarantees that the choices are far apart from each other and is typically implemented with a greedy algorithm \cite{20, 89, 96, 97}.
A simple greedy algorithm is used to instantiate multiple choices: search the exponentially large space of choices for the maximally scoring one, instantiate it, {remove all overlapping choices, and repeat}. The process continues until the score of the next-best choice falls below a threshold or $M$ choices have been instantiated. However, {a general implementation} of such an algorithm would take exponential time.
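In object detection, where the choices are scored bounding boxes and the overlap criterion is intersection-over-union, the greedy M-NMS procedure can be sketched as follows; this is an illustrative sketch, not the implementation of the cited works.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def m_nms(boxes, scores, rho, M):
    """Greedy M-NMS: walk the candidates in decreasing score order and keep
    a candidate only if its overlap with every kept one is below rho."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if len(kept) == M:
            break
        if all(iou(boxes[i], boxes[j]) < rho for j in kept):
            kept.append(i)
    return kept
```

A near-duplicate of the top-scoring box is suppressed, while a distant box survives regardless of its lower score.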
The M-NMS method finds the $M$-best choices by discarding similar choices from the candidate set. In conclusion, D-MCL, submodular methods, and M-NMS share a similar idea: all of them try to find the $M$-best choices under a dissimilarity function, or the choices that provide the most complementary information.
\subsection{M-modes}\label{subsec:modes}
Even though the former three methods guarantee that the obtained multiple choices are far apart from each other, the choices are typically not local extrema of the probability distribution. {To guarantee both local optimality and diversification of the obtained multiple choices simultaneously, the problem can be transformed into the M-modes problem.} The M-modes have many possible applications, because they are intrinsically diverse.
For a non-negative integer $\delta$, define the $\delta$-neighborhood of a labelling ${\bf y}$ as $N_\delta({\bf y})=\{{\bf y}'|d({\bf y},{\bf y}')\leq \delta\}$, the set of labellings whose distance from ${\bf y}$ is no more than $\delta$, where $d(\cdot,\cdot)$ measures the distance between two labellings, e.g., the Hamming distance.
Then, a labelling ${\bf y}$ is a local minimum (mode) of {the energy function} $E(\cdot)$ iff $E({\bf y})\leq E({\bf y}')$, $\forall {\bf y}'\in N_\delta({\bf y})$.
Given $\delta$, the set of modes is denoted by $M^\delta$, formally, \cite{12}
\begin{equation}
M^\delta=\{{\bf y}| E({\bf y}')\geq E({\bf y}), \forall {\bf y}'\in N_\delta({\bf y})\}
\end{equation}
As $\delta$ increases from zero to infinity, the $\delta$-neighborhood of ${\bf y}$ monotonically grows and the set of modes $M^\delta$ monotonically shrinks. Therefore, the sets $M^\delta$ form a nested sequence \cite{12}
\begin{equation}
M^0\supseteq M^1\supseteq \cdots \supseteq M^\infty=\{{\text{MAP}}\}
\end{equation}
{The M-modes problem} is then defined as computing the $M$ labellings with minimal energies in $M^\delta$.
Moreover, \cite{12} has shown that a labelling is a mode if and only if it behaves like a ``local mode'' everywhere; thus a new chain can be constructed, and the M-modes problem is reduced to the $M$-best problem on the new chain.
Furthermore, \cite{12} also validates a one-to-one, cost-preserving correspondence between consistent configurations $\alpha$ and the set of modes $M^\delta$. Therefore, the problem of computing the $M$ best modes is transformed into the problem of computing the $M$ best configurations in the new chain.
Different from the former three methods, M-modes obtains $M$ choices that are local extrema of the objective, and these choices provide the most complementary information.
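A brute-force check of the definition (feasible only for tiny label spaces) illustrates the nesting $M^0\supseteq M^1\supseteq\cdots$. With unary-only toy energies, as assumed here for illustration, any single-node flip towards the per-node minimum lowers the energy, so $M^1$ already collapses to the MAP labelling.

```python
import itertools

def delta_modes(theta, n_labels, delta):
    """Brute-force M-modes: a labelling is in M^delta iff no labelling
    within Hamming distance delta has strictly lower energy."""
    def energy(y):
        return sum(theta[v][lab] for v, lab in enumerate(y))

    def hamming(a, b):
        return sum(x != z for x, z in zip(a, b))

    space = list(itertools.product(range(n_labels), repeat=len(theta)))
    return [y for y in space
            if all(energy(y) <= energy(z)
                   for z in space if hamming(y, z) <= delta)]
```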
\subsection{DPP}\label{subsec:dpp_mcl}
General M-NMS and M-modes select the choices with the highest scores. Since these methods give priority to the scores of the choices, they may end up selecting overlapping choices and miss the best possible set of non-overlapping ones with acceptable scores {\cite{32, 152, add_8}}. To address this problem, {\cite{32, 152, add_8}} use the DPP to select a set of diverse and informative choices with enriched representations.
The definition of the DPP has been introduced in detail in subsection \ref{subsec:dpp}.
Note that the DPP is a distribution over subsets of a fixed ground set that prefers diverse sets of points.
The subset of items selected by a DPP is representative and covers a significant amount of information from the whole set. Besides, the selection is diverse and non-repetitive {\cite{32, add_8}}.
For inference diversification, \cite{32} makes the probability of inclusion of each choice depend on the determinant of a kernel matrix. The kernel matrix is defined such that it captures all spatial and contextual information between choices at once. To apply the DPP, quality and diversity terms need to be defined. The quality term (unary score) corresponds to the optimization term, such as $E({\bf y})$ in Eq. \ref{eq:81}. The diversity term defines the pairwise correlation of the obtained choices.
Similar to Eq. \ref{eq:82}, the model is first transformed into the $M$-best diverse problem as a single energy minimization problem. The optimization problem based on the DPP can be formulated as
\begin{equation} \label{eq:83}
E^M({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=\sum_{i=1}^{M}E({\bf y}^i)-\gamma \det(L({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M))
\end{equation}
The kernel matrix in DPP is defined as
\begin{equation}
L({\bf y}^1,{\bf y}^2,\cdots,{\bf y}^M)=[L_{ij}]_{i,j=1,2,\cdots,M}
\end{equation}
where $L_{ij}=E({\bf y}^i)E({\bf y}^j)S_{ij}$, and $S_{ij}$ is the similarity term between different choices.
A DPP tends to pick uncorrelated choices in these cases. On the other hand, higher-quality items increase the determinant, and thus a DPP tends to pick a high-quality and diverse set of choices.
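A minimal sketch of the quality-times-similarity kernel (with hypothetical quality scores standing in for the energies of the cited formulation) shows why near-duplicate choices are suppressed: the determinant of the principal minor shrinks as two rows become correlated.

```python
import numpy as np

def dpp_kernel(quality, similarity):
    """DPP kernel with entries L_ij = q_i * q_j * S_ij."""
    q = np.asarray(quality, dtype=float)
    return np.outer(q, q) * np.asarray(similarity, dtype=float)

def subset_score(L, subset):
    """Unnormalized DPP probability of a subset: the determinant of the
    corresponding principal minor of L."""
    idx = np.array(sorted(subset))
    return np.linalg.det(L[np.ix_(idx, idx)])
```

With equal qualities, a pair of highly similar items scores far lower than a pair of dissimilar ones, so the DPP prefers the diverse pair.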
\subsection{Analysis}
Even though all the methods in the former subsections can be used for inference diversification, there exist some differences between them. The methods in prior works are summarized in Table \ref{table:03}. It can be noted from the former subsections that D-MCL is {the easiest one to implement}: one only needs to compute the MAP choice and obtain the other choices by constraining the optimization with a dissimilarity function. In contrast, M-NMS discards the choices lying in the neighborhood of the former choices and obtains further choices from the remainder. D-MCL and M-NMS obtain choices by solving the optimization with a user-defined similarity, while the submodular method obtains the choices that provide the maximal marginal and complementary information. The former three methods may provide choices that are not local optima, although locally optimal choices usually contain more information than others. Therefore, different from the former three methods, M-modes obtains multiple diversified choices that are also local optima.
All the methods above can be used in traditional machine learning models. The DPP method for inference diversification developed by \cite{32, 152} is mainly applied to object detection tasks. The methods in \cite{32, 152} take advantage of the merits of the DPP to obtain high-quality choices with less overlap, which can yield better performance than M-NMS and M-modes. From the introduction of data diversification, D-model, D-models, and inference diversification, one can choose the proper diversification method for machine learning in various computer vision tasks. In the following, we introduce some applications of diversity technology in machine learning models.
\section{Applications}\label{sec:application}
Diversity technology in machine learning can significantly improve the representational ability of the model in many computer vision tasks, including {remote sensing imaging tasks \cite{13, 14, 74, 123}, camera relocalization \cite{15, 104}, natural image segmentation \cite{9, 5, 11}, object detection \cite{19, 20}, machine translation \cite{16, 124}, information retrieval \cite{33, add_41, add_42, add_43, add_35}, social network analysis \cite{add_36, add_35, add_39}, document summarization \cite{add_44, add_45, add_51}, web search \cite{add_34,add_40, add_38, add_53}, and others}. Diversity priors, which decrease the redundancy in the learned model or diversify the obtained multiple choices, can provide more informative features and show powerful ability in real-world applications, especially for {machine learning tasks} with limited training samples and complex structures within the samples. In the following, we introduce some applications of diversity technology in machine learning.
\subsection{Remote Sensing Imaging Tasks}
Remote sensing images, such as hyperspectral and multi-spectral images, have played an increasingly important role over the past two decades \cite{101}. However, there exist some typical difficulties in remote sensing imaging tasks. First, the limited number of training samples usually makes it difficult to {describe the images}. Since labelling is usually time-consuming and costly, enough training samples to train the model often cannot be provided. Besides, remote sensing images usually have large intra-class variance and low inter-class variance, which makes it difficult to extract discriminative features from the images.
Therefore, proper feature extraction models are required for the representation of remote sensing images. Recently, deep models have demonstrated impressive performance in extracting features from remote sensing images \cite{153}. However, deep models usually consist of large numbers of parameters, and the limited training samples can make the learned deep model sub-optimal. This limits the performance of deep models for the representation of remote sensing images.
To overcome these problems, some works have applied diversity-promoting priors to diversify the model for better performance \cite{13, 14, gong_cnn, 154}. \cite{13} and \cite{14} attempt to diversify the learned model with the independence prior, which is based on the cosine similarity introduced in the former section. \cite{13} develops a diversity-promoting deep structural metric learning method for scene classification in remote sensing, while \cite{14} imposes the independence prior on a deep belief network (DBN) for hyperspectral image classification. If we denote $W=[{\bf w}_1, {\bf w}_2, \cdots, {\bf w}_K]$ as the metric parameter factors in \cite{13} and the latent factors of the RBM in \cite{14}, then the diversity term induced by the independence prior in the two papers can be formulated as
\begin{equation}
f(W)=\sum^{K}_{i\neq j}\frac{\langle{\bf w}_i,{\bf w}_j\rangle}{\|{\bf w}_i\|\|{\bf w}_j\|}
\end{equation}
{where $K$ is the number of the factors.}
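A direct transcription of this penalty (a sketch; the column layout of $W$ is an assumption made here) sums the cosine similarities over all ordered pairs $i\neq j$:

```python
import numpy as np

def cosine_diversity_penalty(W):
    """Independence-prior diversity term f(W): sum over ordered pairs
    i != j of the cosine similarity between factor columns w_i, w_j.
    Smaller values indicate more diverse (closer-to-orthogonal) factors."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns
    G = Wn.T @ Wn                                      # cosine similarities
    return G.sum() - np.trace(G)                       # drop the i == j terms
```

Orthogonal factors give a penalty of zero, while parallel factors are maximally penalized.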
The diversity term $f(W)$ encourages the factors to be diversified, so as to improve the representational ability of the model for the images. As introduced in subsection \ref{subsec:diversity_regularization}, to make use of multiple kinds of information, \cite{gong_cnn} applied the DPP prior in the learning process of a deep model for hyperspectral image classification. The DPP prior can be formulated as
\begin{equation}
P(W)=(\det(\psi(W)))^\gamma
\end{equation}
The diversity regularization $f(W)$ in Eq. \ref{eq:47} can be written as
\begin{equation}
f(W)=-\log[(\det(\psi(W)))]
\end{equation}
{Besides, \cite{gong_cnn} also derives the gradient of the diversity regularization for back-propagation,}
\begin{equation}
\frac{\partial f(W)}{\partial W}= - \frac{\partial \log \det(\psi(W))}{\partial W}=-2 W(W^TW)^{-1}.
\end{equation}
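The closed form can be checked numerically. The sketch below assumes $\psi(W)=W^{T}W$ (the Gram matrix of the factor columns), the reading under which $-2W(W^{T}W)^{-1}$ is dimensionally consistent; this is an assumption made here for illustration, see \cite{gong_cnn} for the exact definition.

```python
import numpy as np

def f(W):
    """Diversity regularization f(W) = -log det(W^T W), assuming
    psi(W) = W^T W (an illustrative reading of the cited formulation)."""
    return -np.log(np.linalg.det(W.T @ W))

def grad_f(W):
    """Closed-form gradient: -2 W (W^T W)^{-1}."""
    return -2.0 * W @ np.linalg.inv(W.T @ W)

# central finite-difference check of the closed form
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))
eps = 1e-5
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        E = np.zeros_like(W)
        E[i, j] = eps
        num[i, j] = (f(W + E) - f(W - E)) / (2 * eps)
```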
With the developed diversified model for hyperspectral image representation, the classification performance can be significantly improved \cite{gong_cnn}.
These prior works mainly improve the representational ability of the model for better performance from the D-model view.
Besides, \cite{154, gong_ijcnn} try to improve the representational ability through data diversification by creating pseudo classes from center points. In \cite{154}, the pseudo classes are used to decrease the intra-class variance. Furthermore, a diversity-promoting prior, constructed from the Euclidean distance to repulse different pseudo classes from each other, is used to improve the effectiveness of the developed training process for remote sensing scenes.
In \cite{gong_ijcnn}, the pseudo classes are used for unsupervised learning of remote sensing scene representations. The pseudo classes are used to allocate pseudo labels and to train the model in a supervised way. Similar to \cite{154}, the diversity-promoting prior is also used to repulse different pseudo classes from each other, improving the effectiveness of the unsupervised learning process.
Furthermore, some other works focus on the diversification of multiple models for remote sensing images \cite{74, 123}. \cite{74} applies a cross-entropy measurement to diversify the obtained multiple models, so that they can provide more complementary information (see subsection \ref{subsubsec:optimization} for details). Different from \cite{74}, \cite{123} divides the training samples into several subsets, one for each model. Each model then focuses on the representation of different classes, and the overall representation of these models is improved (see subsection \ref{subsubsec:sample} for details).
\begin{table*}
\centering
\caption{{Some comparison results between the general model and the diversified model for remote sensing imaging tasks.}}
\label{table:comparison_result}
\begin{tabular}{c | c | c c | c c}
\hline\noalign{\smallskip}
Dataset & Reference & Methods & Accuracy (\%) & Diversified Method & Accuracy (\%) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{2}*{{Ucmerced Land Use dataset}} & \cite{13} & DSML & $95.95 \pm 0.24$ & D-DSML & $96.76 \pm 0.36$ \\
& \cite{74} & CaffeNet & $95.48$ & Diversified MCL & $97.05 \pm 0.55$ \\
\hline
\multirow{3}*{{Pavia University}} & \cite{14} & DBN & $91.18 \pm 0.08$ & D-DBN-PF & $93.11 \pm 0.06$ \\
& \cite{gong_cnn} & DML-MS-CNN & $99.03 \pm 0.25$ & DPP-DML-MS-CNN & $99.46 \pm 0.03$ \\
& \cite{123} & DBN & $90.61 \pm 1.15$ & M-DBN & $92.55 \pm 0.74$ \\
\hline
\multirow{2}*{{Indian Pines}} & \cite{14} & DBN & $88.25 \pm 0.17$ & D-DBN & $91.03 \pm 0.12$ \\
& \cite{gong_cnn} & DML-MS-CNN & $98.87 \pm 0.21$ & DPP-DML-MS-CNN & $99.08 \pm 0.23$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
{To further demonstrate the effectiveness of the diversity-promoting methods in machine learning, we list some comparison results between the general models and the diversified models over different datasets in Table \ref{table:comparison_result}. The results come from prior works. We choose the Ucmerced Land Use dataset, the Pavia University dataset, and the Indian Pines dataset as representatives.}
{The Ucmerced Land Use dataset \cite{dataset_1} was manually extracted from orthoimagery. It consists of multi-class land-use scenes in the visible spectrum, containing 2100 aerial scene images divided into 21 challenging scene categories: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium density residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court. Each scene has $256 \times 256$ pixels with a resolution of one foot per pixel. For the experiments in Table \ref{table:comparison_result}, 80\% of the scenes of each class are used for training and the remainder for testing.}
{The Pavia University dataset \cite{dataset_2} was gathered by the reflective optics system imaging spectrometer (ROSIS-3) over the city of Pavia, Italy. The image consists of $610 \times 340$ pixels with 115 spectral bands. The image is divided into 9 classes with a total of 42,776 labelled samples, namely asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows. For the experiments, 200 samples of each class are used for training and the remainder for testing.}
{The Indian Pines dataset \cite{dataset_3} was taken by the AVIRIS sensor in northwestern Indiana. The image has $145 \times 145$ pixels with 224 spectral channels, of which 24 channels are removed due to noise. The image is divided into 8 classes with a total of 8,598 labelled samples, namely Corn no\_till, Corn min\_till, Grass pasture, Hay windrowed, Soybeans no\_till, Soybeans min\_till, Soybeans clean, and Woods. For the experiments, 200 samples of each class are used for training and the remainder for testing.}
{The comparisons in Table \ref{table:comparison_result} show that diversity technology can improve the representational ability of the machine learning model and thus significantly improve its classification performance.}
\subsection{Image Segmentation}
In computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels). The goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyze. More precisely, image segmentation assigns a label to each pixel of an image such that pixels with the same label share certain characteristics. Since a semantic segmentation algorithm deals with a tremendous amount of uncertainty from inter- and intra-object occlusion and varying appearance, lighting, and pose, obtaining the multiple best choices from all possible segmentations is one possible way to tackle the problem. Therefore, the image segmentation problem can be transformed into an M-best problem. However, as is typical of M-best problems, the obtained multiple choices are usually similar, and the information provided to the user tends to be redundant.
As section \ref{sec:inference} shows, the obtained multiple choices will usually be only one-pixel-shifted versions of each other.
The way to solve this problem is to introduce diversity to encourage the multiple choices to differ. {With inference diversification,} the model is expected to provide a diverse set of low-energy solutions that represent different locally optimal results from the data. Fig. \ref{fig:inference_diversification} shows examples of inference diversification on the task in prior work \cite{9}.
Many works \cite{9, 5, 11, 4, 7, 6, 125, 126, 127} have introduced inference diversity into image segmentation tasks in different ways. \cite{5} first introduced the D-MCL of subsection \ref{subsec:mcl} for image segmentation, developing the diversification method shown in Eq. \ref{eq:65}. To further improve on \cite{5},
prior works \cite{9} and \cite{4} combine D-MCL with re-ranking, which provides a way to obtain multiple diversified choices and select the proper one among them.
As discussed in subsection \ref{subsec:mcl}, the greedy nature of the original D-MCL makes each obtained labelling depend only on the previously found labellings while ignoring the upcoming ones. To overcome this problem, \cite{7} develops a novel D-MCL of the form of Eq. \ref{eq:82}.
Besides, \cite{6, 127} use submodular functions to measure the diversification between multiple choices (see details in subsection \ref{subsec:submodular_mcl}).
\cite{126} combines the NMS (see details in subsection \ref{subsec:nms}) and the sliding window to obtain multiple choices.
Instead of inference diversification, prior work \cite{99} obtains multiple diversified models for the image segmentation task.
The method proposed in \cite{99} divides the training samples into several subsets, and each base model is trained on a specific one. By allocating each training sample to the model with the lowest prediction error, each model tends to cover different classes from the others.
With inference diversification and D-models for image segmentation tasks, the obtained multiple choices are diversified, as Fig. \ref{fig:inference_diversification} shows, and the performance of the model is also significantly improved.
\begin{table*}
\centering
\caption{{Some comparison results between the general model and the diversified model for image segmentation.}}
\label{table:comparison_result_segmentation}
\begin{tabular}{c | c | c c | c c}
\hline\noalign{\smallskip}
Dataset & Reference & Methods & Accuracy (\%) & Diversified Method & Accuracy (\%) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
{{PASCAL VOC 2010 dataset}} & \cite{5} & MAP & $91.54$ & DivMBEST & $95.16$ \\
{{PASCAL VOC 2011 dataset}} & \cite{99} & MCL & about 66 & sMCL & about 71 \\
{{PASCAL VOC 2012 dataset}} & \cite{9} & Second Order Pooling ($O_2P$)-MAP & $46.5$ & DivMBEST+Ranking & $48.1$ \\
{{PASCAL VOC 2012 dataset}} & \cite{6} & MAP & $43.43$ & submodular-MCL & $55.32$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
{As with the remote sensing imaging tasks, we list some comparison results from prior works to show the effectiveness of diversity technology in machine learning. The comparison results are listed in Table \ref{table:comparison_result_segmentation}. Generally, experimental results on the PASCAL Visual Object Classes Challenge (PASCAL VOC) datasets are chosen to demonstrate the effectiveness of diversity in machine learning for image segmentation.}
{From Table \ref{table:comparison_result_segmentation}, we can also see that inference diversification significantly improves segmentation performance.}
\subsection{Camera Relocalization}
Camera relocalization is the task of estimating the pose of a camera relative to a known 3D scene from a single RGB-D frame \cite{104}. It can be formulated as the inversion of the generative rendering procedure: finding the camera pose whose rendering of the 3D scene model is most similar to the observed input. Since this is a non-convex optimization problem with many local optima, one approach is to find a set of M predictors that generate M camera pose hypotheses and then infer the best pose from these hypotheses. As in traditional M-best problems, the obtained M predictors are usually similar to one another.
To overcome this problem and obtain hypotheses that differ from each other, \cite{15} learns 'marginally relevant' predictors, which make complementary predictions, and compares their performance under different selection procedures. In \cite{15}, a greedy algorithm is used to obtain multiple diversified models: a weight is defined on each training sample and updated with the training loss of the previously learned model. Finally, multiple diversified models are obtained for camera relocalization.
\subsection{Object Detection}
Object detection is the computer vision task of detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. As with image segmentation, object detection algorithms involve great uncertainty. Therefore, obtaining multiple diversified choices is also an important way to address the problem.
Some prior works \cite{19, 20} have made great efforts to obtain multiple diversified choices by M-NMS (see \ref{subsec:nms} for details). \cite{20} uses a greedy procedure for eliminating repeated detections via NMS. Besides, \cite{19} demonstrates that the energies resulting from M-NMS lead to the maximization of a submodular function; then, through a branch-and-bound strategy \cite{100}, the whole image can be explored and diversified multiple detections can be obtained.
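The greedy NMS procedure mentioned above can be sketched in a few lines; the interval-based `iou` in the test is a 1-D simplification of the usual box overlap, used only for illustration.

```python
def nms(boxes, scores, iou, threshold=0.5):
    # Greedy non-maximum suppression: repeatedly keep the highest-scoring
    # detection and drop every remaining one that overlaps it by more than
    # `threshold` under the supplied `iou` function.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= threshold]
    return keep
```

The returned indices are the surviving, mutually non-redundant detections, which is precisely the diversified set that M-NMS builds on.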
\subsection{{Machine Translation}}
Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another. Recently, machine translation systems have been developed and widely used in real-world applications. Commercial machine translation services, such as Google Translate, Microsoft Translator, and Baidu Translate, have achieved great success. From the perspective of user interaction, the ideal machine translator is an agent that reads documents in one language and provides accurate, high-quality translations in another. This ideal has been implicit in MT research since the field's inception. Unfortunately, when a real, imperfect MT system makes an error, the user is left trying to guess what the original sentence means. Therefore, providing the M-best translations instead of a single best one is necessary \cite{102}.
However, many translations on M-best lists are extremely similar, often differing only by a single punctuation mark or a minor morphological variation. The goal of diversification here is to better explore the output space by introducing diversity into the returned set.
Some prior works have introduced diversity into the obtained multiple choices and achieved better performance \cite{124}. \cite{16} develops the diversification method introduced in subsection \ref{subsec:mcl}: a novel dissimilarity function is defined on pairs of translations to increase the diversity between the obtained translations. It can be formulated as \cite{16}
\begin{equation}
\Delta_n(y,y')=-\sum_{i=1}^{|y|-q}\sum_{j=1}^{|y'|-q}[[y_{i:i+q}=y'_{j:j+q}]]
\end{equation}
where $[[\cdot]]$ is the Iverson bracket (1 if the input condition is true, 0 otherwise) and $y_{i:j}$ is the subsequence of $y$ from word $i$ to word $j$ (inclusive). The advantage of this dissimilarity function is its simplicity. Moreover, this diversity-promoting term ensures that the system returns multiple diversified translations to the user.
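As an illustration, the dissimilarity above can be implemented directly: the snippet counts matching length-$(q+1)$ subsequences between two token lists and negates the count, exactly as the Iverson-bracket double sum prescribes.

```python
def ngram_dissimilarity(y, y2, q):
    # Delta_n(y, y') from the text: negative count of matching
    # length-(q+1) subsequences over all position pairs (i, j).
    total = 0
    for i in range(len(y) - q):
        for j in range(len(y2) - q):
            total += y[i:i + q + 1] == y2[j:j + q + 1]
    return -total
```

More shared n-grams make the value more negative, so maximizing this term pushes candidate translations apart.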
\subsection{{Information Retrieval}}
\subsubsection{{Natural Language Processing}}
In machine learning and natural language processing, a topic model is a statistical model for discovering the abstract "topics" that occur in a collection of documents. Probabilistic topic models such as Latent Dirichlet Allocation (LDA) and the Restricted Boltzmann Machine (RBM) provide a useful and elegant tool for discovering hidden structure within large sets of discrete data, such as corpora of text. However, LDA implicitly discovers topics along only a single dimension, while an RBM tends to learn multiple redundant hidden units that best represent dominant topics and ignore those in the long-tail region \cite{26}. To overcome this problem, diversification of the learned model (D-model) can be applied during the learning process.
Recent research on multi-dimensional topic modeling aims to devise techniques that can discover multiple groups of topics, where each group models some different dimension or aspect of the data.
To this end, prior work \cite{33} presents a new multi-dimensional topic model that uses a determinantal point process (DPP) prior (see details in subsection \ref{subsec:mcl}) to encourage different groups of topics to model different dimensions of the data. Determinantal point processes are probabilistic models of repulsive phenomena that originated in statistical physics but have recently attracted interest from the machine learning community.
Besides, \cite{26} introduces the RBM for topic modeling, utilizing hidden units to discover the latent topics and learn compact semantic representations of documents. Furthermore, to reduce the redundancy of the learned RBM, \cite{26} applies the angle-based diversification method of Eq. \ref{eq:84} to diversify the learned hidden units. With this diversification, the RBM learns much more powerful latent document representations that greatly boost the performance of topic modeling \cite{26}.
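To give intuition for why a DPP prior promotes diversity, the following sketch scores a subset by the unnormalized L-ensemble probability $\det(L_S)$ with $L = FF^{\top}$; the feature vectors are illustrative, and the point is that near-duplicate items make $L_S$ nearly singular, driving the determinant toward zero.

```python
def det(m):
    # Laplace expansion along the first row; fine for the tiny matrices here.
    if len(m) == 1:
        return m[0][0]
    return sum(
        (-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
        for j in range(len(m))
    )

def dpp_score(features, subset):
    # Unnormalized DPP probability det(L_S) of choosing `subset`, with the
    # L-ensemble kernel L = F F^T built from illustrative feature vectors.
    L = [[sum(a * b for a, b in zip(features[i], features[j])) for j in subset]
         for i in subset]
    return det(L)
```

A diverse pair of near-orthogonal items scores close to the product of their norms, while two nearly identical items score close to zero, which is exactly the repulsive behaviour the DPP prior exploits.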
\subsubsection{{Web Search}}
{The problem of result diversification has been studied in various tasks, but the most robust literature on result diversification exists in web search \cite{add_53}. Web search has become the predominant method for people to fulfill their information needs. In web search, a query commonly admits several different interpretations, each calling for different results. To satisfy multiple distinct user types, the web search system should provide a diverse set of results.
The increasing demand for easily accessible information via web-based services has attracted much attention to the study of obtaining diverse search results \cite{add_34,add_40, add_38, add_53}. The objective is to achieve large coverage of a few features and very small coverage of the remaining ones, a property that satisfies submodularity. Therefore, \cite{add_34,add_53} take advantage of submodularity for result diversification (see Subsection \ref{subsec:submodular_mcl} for details). Besides, \cite{add_38} uses a distance-based measurement to formulate the dissimilarity function of Subsection \ref{subsec:mcl}, and \cite{add_40} further summarizes diversification methods for result diversification in the D-MCL form (see Subsection \ref{subsec:mcl} for details). These search diversification methods provide the user with multiple diversified choices that satisfy specific information needs.}
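A minimal sketch of the submodular view of result diversification: greedily add the result covering the most not-yet-covered query interpretations, which enjoys the classical $(1-1/e)$ approximation guarantee for monotone submodular maximization. The candidate-to-feature mapping below is illustrative, not any cited system's data model.

```python
def greedy_coverage(covers, k):
    # covers: dict mapping each candidate result to the set of features
    # (e.g. query interpretations) it covers. Greedy maximization of the
    # monotone submodular coverage function.
    covered, chosen = set(), []
    remaining = dict(covers)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda x: len(remaining[x] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen
```

Note how the second pick skips a high-coverage but redundant candidate in favour of one covering a new interpretation; that diminishing-returns behaviour is the submodularity the text refers to.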
\subsection{{Social Network Analysis}}
{Social network analysis is a process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes and the ties, edges, or links (relationships or interactions) that connect them. Ranking nodes on graphs is a fundamental task in social network analysis, and it can be applied to measure centrality in social networks \cite{add_35}. However, many nodes in the top-K ranking list obtained by general methods are quite similar, since such methods only take the relevance of the nodes into consideration \cite{add_36}.
To improve the effectiveness of the ranking process, many prior works have incorporated diversity into the top-K ranking results \cite{add_36, add_35, add_39}. To enforce diversity in the top-K ranking results, the way similarity is measured becomes the key problem. \cite{add_35} formulates the diversified ranking problem as submodular set function maximization and processes it as Subsection \ref{subsec:submodular_mcl} shows. Besides, \cite{add_36} takes advantage of the heat kernel to formulate the weights on the social graph (see Subsection \ref{subsec:diversity_regularization} for details), assigning larger weights to points that are closer. Then, through random walks in an absorbing Markov chain, a diversified ranking can be obtained \cite{add_36}. Furthermore, exploiting the special characteristics of the problem, \cite{add_39} develops a novel goodness measure that balances relevance and diversity. For simplicity, the goodness measure is not reproduced in this work; more details can be found in \cite{add_39} for interested readers. It should be noted that the optimization problem in \cite{add_39} is equivalent to the D-MCL of subsection \ref{subsec:mcl}.}
\subsection{{Document Summarization}}
{Multi-document summarization is an automatic procedure aimed at extracting information from multiple texts written about the same topic. Generally, a good summary should cover elements from distinct parts of the data, so as to be representative of the corpus, and should not contain elements that are too similar to each other. Therefore, coverage and diversity are usually essential for multi-document summarization \cite{add_50}. }
{Since coverage and diversity can sometimes be conflicting requirements \cite{add_50}, some prior works seek a tradeoff between them. As Subsection \ref{subsec:submodular_mcl} shows, efficient algorithms (with near-optimal solutions) exist for a diverse set of constraints when the submodular function is monotone, so submodular optimization methods have been applied to diversify the summary obtained from multiple documents \cite{add_44, add_45, add_50, add_51, add_52}. \cite{add_44} defines a class of submodular functions intended for document summarization, and \cite{add_45} treats the document summarization problem as maximizing a submodular function under a budget constraint. \cite{add_50} further investigates personalized data summarization with submodular functions subject to multiple constraints. With this submodular diversification, the selected elements are diversified and the multiple documents are better summarized.}
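A minimal sketch of budgeted greedy summarization in this spirit: repeatedly add the sentence with the best marginal-gain-to-cost ratio that still fits the budget. The `gain` and `cost` hooks are illustrative stand-ins for a monotone submodular objective and sentence length, and the ratio rule is a simplification of the algorithms in the cited works.

```python
def budgeted_greedy(sentences, gain, cost, budget):
    # Budgeted greedy for submodular summarization: gain(summary, s) is the
    # marginal value of adding s to the current summary; only sentences that
    # still fit within the budget are considered at each step.
    summary, spent = [], 0.0
    pool = list(sentences)
    while pool:
        affordable = [s for s in pool if spent + cost(s) <= budget]
        if not affordable:
            break
        best = max(affordable, key=lambda s: gain(summary, s) / cost(s))
        pool.remove(best)
        summary.append(best)
        spent += cost(best)
    return summary
```

Because the gain is marginal with respect to the summary built so far, a sentence that repeats already-covered content contributes little and is passed over, which realizes the coverage-versus-diversity tradeoff discussed above.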
\section{Discussions}
This article surveyed the available works on diversity technology in general machine learning models, systematically categorizing diversity in training samples, the D-model, D-models, and inference diversity. We now summarize the main results and identify the challenges encountered throughout the article.
Recently, owing to their excellent performance in feature extraction, machine learning methods have been widely applied in real-world applications. However, the limited number and the imbalance of training samples in real-world applications usually make the learned models sub-optimal and can even lead to over-fitting during training. This limits the performance of machine learning models. Therefore, this work summarizes the diversity technology in prior works that can act on the machine learning model as one of the methods to improve its representational ability. We want to emphasize that diversity technology is not decisive on its own; it should be considered an additional technique for improving the performance of the machine learning process.
{Through this introduction to diversity in machine learning, the three questions posed in the introduction can be readily answered. Detailed descriptions of data diversification, model diversification, and inference diversification are given in sections \ref{sec:data}, \ref{sec:model}, and \ref{sec:inference}. With these methods, the machine learning model can be diversified and its performance improved. Besides, diversification of the model (D-model) tries to improve the representational ability of the machine learning model directly (see Fig. \ref{fig:d_model} for details), while diversification of the models (D-models) aims to obtain multiple diversified choices through the diversification of ensemble learning (see Fig. \ref{fig:d_models} for details). It should also be noted that the diversification measurements for data diversification, model diversification, and inference diversification show some similarity. As introduced, diversity aims to decrease the redundancy between the data or the factors.
The key problem for diversification is how to measure the similarity between the data or the factors.
In the machine learning process, however, both the data and the factors are processed as vectors. Therefore, there are overlaps between data diversification, model diversification, and inference diversification, such as the DPP measurement. More importantly, diversification in different steps of the machine learning pipeline presents its own special characteristics. Details of the applications of diversity methods are given in section \ref{sec:application}. This work lists only the most common real-world applications; readers should consider whether diversity methods are necessary according to the specific task they face.}
{\bf Advice for implementation.} We expect this article to be useful to researchers who want to improve the representational ability of machine learning models for computer vision tasks. For a given computer vision task, the proper machine learning model should be chosen first. Then, we advise considering diversity-promoting priors to improve the performance of the model, and deciding what type of diversity measurement is desired. When one wishes to obtain multiple models or multiple choices, one can consider diversifying the multiple models or the obtained multiple choices, for which sections \ref{subsec:d_models} and \ref{sec:inference} are relevant and helpful. We advise the reader to first consider whether multiple models or multiple choices can actually help performance.
\section{Conclusions}
The training of machine learning models requires large amounts of labelled samples. However, limited training samples constrain the performance of machine learning models. Therefore, effective diversity technology, which encourages the model to be diversified and improves its representational ability, is expected to remain an active area of research in machine learning. This paper summarizes the diversity technology for machine learning in previous works, introducing diversity technology in data pre-processing, model training, and inference, respectively.
Other researchers can judge whether diversity technology is needed and choose the proper diversity method for their specific requirements according to the preceding sections.
\section{Introduction}
Dirac semimetals (DSMs)~\cite{Young2012,Wang2012a,Wang2013,Weng2016,Armitage2018} are the 3D analogues of graphene~\cite{Novoselov2004}, with and only with Dirac nodes at the Fermi level. These Dirac nodes are formed by band crossing, and the low-energy excitations around them lead to quasiparticles described by the Dirac equation as emergent massless Dirac fermions.~\cite{Liu2014b, Armitage2018,Bradlyn2016,Bernevig2018,Orlita2014,Yang2014a} Up to now, three classes of DSM have been proposed. The first consists of Dirac nodes with four-fold essential degeneracy, enforced by nonsymmorphic symmetry at the high-symmetry momenta on the boundary of the Brillouin zone.~\cite{Young2012} The second consists of accidentally degenerate Dirac nodes, which appear at the critical point of the topological phase transition between different topological insulating states~\cite{Murakami2007}. The third is also an accidental DSM, but the band crossing points are caused by band inversion and protected by proper crystal symmetry.~\cite{Wang2012a,Yang2014a} DSMs serve as a singular point of various topological states, such as topological insulators, Weyl semimetals, nodal-line semimetals and triple-point semimetals~\cite{Weng2017}. DSMs exhibit many novel properties, such as high carrier mobility~\cite{Zdanowicz1975}, unique surface states with Fermi arcs~\cite{Wang2012a,Wan2011} and negative longitudinal magnetoresistivity due to the chiral anomaly.~\cite{Xiong2015,Gorbar2014}
The breakthrough in the search for stable DSMs~\cite{Yang2014a} is achieved in the series of studies on Na$_3$Bi~\cite{Wang2012a,Liu2014b} and Cd$_3$As$_2$~\cite{Wang2013,Liu2014c,Borisenko2014,Jeon2014,Neupane2014}, both of which were first proposed through first-principles calculations. They present good examples of the realization of the DSM in the above third class. The Dirac nodes are induced by band inversion and protected by proper axial rotational symmetry.~\cite{Wang2012a, Yang2014a} Such protection makes the Dirac nodes quite robust within a finite range of Hamiltonian parameters, which is exactly the reason why this class of DSM is experimentally available while the other two remain to be found.
Despite the success in identifying Na$_3$Bi and Cd$_3$As$_2$ and the intensive studies on them, to identify more DSMs remains a big challenge. How to locate a specific material among thousands of known compounds is not clear. Here, we demonstrate a chemically intuitive approach for searching new DSMs to show the underlying physics and ideas. We choose the first DSM Na$_3$Bi as a model system for tuning the chemical degree of freedom. Three sodium ternary compounds, Na$_2$MgSn, Na$_2$MgPb, and Na$_2$CdSn, are naturally selected. Further theoretical calculations reveal that the chemical trend in the elements of the same column in periodic table plays an important role in band inversion. The proposed general design principle can be used for finding new DSMs, as well as other topological materials.
\section{Results and Discussions}
\subsection{Material design}
The crystal structure of Na$_3$Bi~\cite{Wang2012a,Liu2014b} can be viewed as the AB stacking of honeycomb layers along the $c$-axis, as shown in Fig.~\ref{lattice}(a). In each honeycomb layer, one Na(1) atom and one Bi atom occupy the A and B sub-lattice sites, respectively. Two additional Na(2) atoms sit above and below the Na(1)-Bi honeycomb layer, connecting the Bi atoms in neighboring layers. As a well-understood DSM, its low-energy electronic band structure is mostly determined by the Na(1) and Bi atoms in the honeycomb layer. The two crossing bands along the $\Gamma$-A direction that form the Dirac nodes are dominated by Na(1) $s$ orbitals and Bi 6$p_{x,y}$ orbitals.~\cite{Wang2012a} At the $\Gamma$ point the Na(1) $s$ bands lie below those of Bi 6$p_{x,y}$ mainly for two reasons. One is that heavy Bi has a relatively high on-site energy for its 6$p$ orbitals. The other is that the interlayer coupling leads to splittings between the bonding and anti-bonding states for both the $s$ and $p$ bands along $\Gamma$-A. These two crossing bands with different orbital characters have different irreducible representations along the $\Gamma$-A direction, so the Dirac nodes are protected.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{Na-Figure1.pdf}
\caption{(a) Crystal structure of Na$_3$Bi with Na(1), Na(2) and Bi sites indicated. (b) Top view of the Na$_2$MgSn unit-cell with Mg and Sn replacing Na(1) and Bi atoms in (a), respectively. (c) The bulk Brillouin zone and the projected surface Brillouin zone for (100), (010) and (001) surfaces.}
\label{lattice}
\end{figure}
Inspired by the above understanding, we notice that Na$_3$Bi can be regarded as Na$_2$Na$_1$Bi. The first two Na atoms occupy the Na(2) site, supporting the 3D lattice structure and supplying two electrons to the Na(1)-Bi honeycomb layer. If the crystal structure and the electronic structure could be kept similar to those of Na$_3$Bi, one would obtain a new DSM material. This leads to the idea of finding other potential DSMs by simply changing the atoms in the Na(1)-Bi layer. To induce band inversion, Bi should be substituted with other similarly heavy metal atoms such as Pb and Sn. Since Pb and Sn have one fewer valence electron than Bi, to maintain the same band-filling, Na(1) should be substituted with atoms having two valence electrons, such as alkaline-earth metal and II-B elements like Mg, Ca, Sr, Zn, Cd and Hg. Thus, three experimentally reported sodium-containing ternary compounds, namely Na$_2$MgSn, Na$_2$MgPb, and Na$_2$CdSn, are naturally and immediately located. Na$_2$MgSn and Na$_2$MgPb have been successfully synthesized recently~\cite{Yamada2012,Yamada2014}, while Na$_2$CdSn was synthesized and investigated in 1980.~\cite{Matthes2014}
\begin{table}[h]
\centering
\caption{Optimized lattice constants, and lengths of the two shortest bonds (in-plane Mg/Cd-Sn/Pb bonds and vertical Na-Sn/Pb bonds) for Na$_2$MgSn, Na$_2$MgPb, and Na$_2$CdSn. The experimental data are presented in parentheses for comparison.}
\begin{tabular}{ccccccccc}
\hline
& $a$ (\AA) & $c$ (\AA) & $d_{\rm{II-IV}}$ (\AA) & $d_{\rm{Na-IV}}$ (\AA) \\
\hline
Na$_2$MgSn & 5.078 (5.049 \cite{Yamada2012}) & 10.112 (10.095 \cite{Yamada2012}) & 2.932 (2.915 \cite{Yamada2012}) & 3.336 (3.328 \cite{Yamada2012}) \\
Na$_2$MgPb & 5.157 (5.110 \cite{Yamada2014}) & 10.240 (10.171 \cite{Yamada2014}) & 2.977 (2.950 \cite{Yamada2014}) & 3.375 (3.377 \cite{Yamada2014}) \\
Na$_2$CdSn & 5.068 (4.990 \cite{Matthes2014}) & 10.152 (10.111 \cite{Matthes2014}) & 2.926 & 3.366 \\
\hline
\end{tabular}
\label{t1}
\end{table}
Similar to Na$_3$Bi, all these compounds crystallize in a hexagonal lattice with the space group $P6_3/mmc$ (\#194, $D^4_{6h}$). We take Na$_2$MgSn as an example, as shown in Fig.~\ref{lattice}(b). There are four Na atoms, two Mg atoms and two Sn atoms in each unit cell. The shortest bonds are those in the Mg-Sn layer. Na and Sn atoms align along the $c$-axis, connected by the second shortest bonds. The optimized lattice constants and bond lengths are listed in Table~\ref{t1}, in good agreement with previous experimental results. \cite{Yamada2012,Yamada2014,Matthes2014}
For future experimental exploration, the stability of these three structures is an important aspect.~\cite{Zhang2012,Zhou2014a,Peng2017a} A material is dynamically stable when no imaginary phonon frequency exists in its phonon spectrum. As shown in Fig.~\ref{phonon}, no imaginary phonon frequency is found in any of the three materials, indicating their dynamical stability at 0 K. This is consistent with their experimentally reported existence. As DSM candidates, one main advantage of these sodium ternary compounds over Na$_3$Bi is structural dynamical stability. For Na$_3$Bi, the $P6_3/mmc$ phase has been found dynamically unstable in the ground state due to large imaginary phonon frequencies.~\cite{Cheng2014} In fact, even now the ground state of Na$_3$Bi is still under debate.~\cite{Cheng2015,Shao2017}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Na-Figure2.pdf}
\caption{Phonon dispersion for (a) Na$_2$MgSn, (b) Na$_2$MgPb, and (c) Na$_2$CdSn.}
\label{phonon}
\end{figure}
\subsection{Electronic structures}
The electronic structures of all three materials calculated with the Perdew-Burke-Ernzerhof (PBE) functional and the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional are shown in the top and middle panels of Fig.~\ref{band}, respectively. The fat bands with the weights of the projected atomic orbitals are also shown in the middle panel for each material. We focus on the band structures along $\Gamma$-A, where the band inversion and Dirac nodes occur in Na$_3$Bi.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Na-Figure3.pdf}
\caption{Calculated electronic structures for (a) Na$_2$MgSn, (b) Na$_2$MgPb, and (c) Na$_2$CdSn using the PBE functional without spin-orbit coupling (top panel), and the hybrid functional without (middle panel) and with (bottom panel) spin-orbit coupling. The fat bands with the weight of atomic orbital projection near the Fermi level are presented in the middle panel. The two arrows point out the two Dirac cones formed by crossings of the $s$-band with the bonding and anti-bonding $p_{x,y}$-bands along $\Gamma$-A.}
\label{band}
\end{figure}
In general, the strength of the band inversion between the bands composed of $s$ orbitals (of Mg or Cd on the Na(1) site) and $p$ orbitals (of Sn or Pb on the Bi site) follows the order of the total atomic number (mass) of the atoms in the unit cell in both the PBE and HSE calculations. The overestimation of the band inversion in PBE is corrected by the HSE calculation. The lightest compound, Na$_2$MgSn, has no band inversion and is a normal semiconductor in the HSE case. Na$_2$MgPb has the same total mass as Na$_3$Bi and is slightly lighter than the heaviest compound, Na$_2$CdSn, but all of them show a similar band inversion along $\Gamma$-A.
When spin-orbit coupling (SOC) is further included, the band structures shown in the bottom panel of Fig.~\ref{band} are obtained. Both Na$_2$MgPb and Na$_2$CdSn are DSMs with Dirac nodes on the path $\Gamma$-A, while Na$_2$MgSn is a normal semiconductor with an indirect band gap of 0.13 eV. For Na$_2$MgPb and Na$_2$CdSn, one notable difference from Na$_3$Bi is that there are two pairs of Dirac nodes, since the single $s$-orbital band inverts with both the bonding and anti-bonding $p_{x, y}$-orbital bands. The $s$-band belongs to the $\Gamma_7$ representation while the two $p_{x,y}$ bands belong to the $\Gamma_9$ representation. The splitting between the bonding and anti-bonding $p_{x, y}$ (in-plane orbital) bands along $\Gamma$-A (the $z$-direction) is quite small, indicating weak interlayer coupling among these in-plane orbitals along the stacking direction.
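For orientation, the protected crossings described here belong to the same symmetry class as in Na$_3$Bi, whose standard four-band $k\cdot p$ model~\cite{Wang2012a} we sketch below; the parameters are not fitted for Na$_2$MgPb or Na$_2$CdSn in this work and would have to be determined from the calculated bands.

```latex
% Na3Bi-type low-energy Hamiltonian near Gamma in the basis
% {|S,1/2>, |P,3/2>, |S,-1/2>, |P,-3/2>}; all parameters are to be fitted.
\begin{equation*}
H(\mathbf{k}) = \epsilon_0(\mathbf{k}) +
\begin{pmatrix}
M(\mathbf{k}) & A k_+ & 0 & 0 \\
A k_- & -M(\mathbf{k}) & 0 & 0 \\
0 & 0 & M(\mathbf{k}) & -A k_- \\
0 & 0 & -A k_+ & -M(\mathbf{k})
\end{pmatrix},
\qquad k_\pm = k_x \pm i k_y,
\end{equation*}
with $\epsilon_0(\mathbf{k}) = C_0 + C_1 k_z^2 + C_2 (k_x^2 + k_y^2)$ and
$M(\mathbf{k}) = M_0 - M_1 k_z^2 - M_2 (k_x^2 + k_y^2)$. In the band-inverted
regime ($M_0 M_1 > 0$), $M(\mathbf{k}) = 0$ yields a pair of Dirac nodes at
$k_z = \pm\sqrt{M_0/M_1}$ on the $\Gamma$--A axis.
```

The off-diagonal coupling $A k_\pm$ vanishes along $\Gamma$-A, so the crossings between the blocks survive there, consistent with the distinct $\Gamma_7$ and $\Gamma_9$ representations noted above.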
\subsection{Surface states}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Na-Figure4.jpg}
\caption{Surface band structure for (a) (100), (b) (010), and (c) (001) surfaces of Na$_2$MgPb. The arrow points out the bulk Dirac cone, and the circle labels the topological surface states due to Z$_2$=1 in $k_z$=0 plane. The corresponding Fermi surface with Fermi level at bulk Dirac point (61 meV) is shown in (d)-(f).}
\label{density}
\end{figure}
Similar to Na$_3$Bi, the DSMs Na$_2$MgPb and Na$_2$CdSn host surface states. To simulate the surface states to be observed by angle-resolved photoemission spectroscopy (ARPES), we use an iterative surface Green's function method~\cite{Zhang2010a,Wu2018}, where the HSE+SOC band structures are used in generating the maximally localized Wannier functions. The bulk Brillouin zone and the projected surface Brillouin zones of the (100), (010), and (001) planes are exactly the same as those of Na$_3$Bi,~\cite{Wang2012a} WC-type ZrTe,~\cite{Weng2016a} and KHgAs.~\cite{Wang2016k} The projected surface density of states for the (100), (010), and (001) surfaces of Na$_2$MgPb is shown in Fig.~\ref{density}(a)-(c). On both the (100) and (010) side surfaces, the projection of the bulk Dirac cone (pointed out by the arrow) is well separated from the topological surface Dirac cone (labelled by the circle). The branches of the surface Dirac cone merge into the bulk states at the projection of the 3D Dirac point, which leads to an arc-like Fermi surface when the Fermi level is set at the bulk Dirac nodal point. Two Fermi arcs touch each other at the surface projection of the bulk Dirac point at 61 meV, as shown in Fig.~\ref{density}(d) and (e). For the (001) surface, the projection of the bulk Dirac nodes overlaps with the surface Dirac cone, as shown in Fig.~\ref{density}(c), similar to the case of Na$_3$Bi.~\cite{Wang2012a,Weng2016}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Na-Figure5.jpg}
\caption{Surface band structure for (a) (100), (b) (010), and (c) (001) surfaces of Na$_2$CdSn. The arrow points out the bulk Dirac cone, and the circle labels the topological surface states. The corresponding Fermi surface with Fermi level at bulk Dirac point (40 meV) is shown in (d)-(f).}
\label{Na2CdSn}
\end{figure}
The projected surface density of states for the (100), (010), and (001) surfaces of Na$_2$CdSn is shown in Fig.~\ref{Na2CdSn}. For both the (100) and (010) surfaces, the bulk Dirac cone is closer to the $\Gamma$ point. Due to the smaller band splitting between the bonding and anti-bonding $p_{x, y}$-orbital bands, the nontrivial surface states of Na$_2$CdSn are not as clear as those of Na$_2$MgPb. For the (100) surface, the Fermi arcs are hidden within the surface projection of the bulk states. They are well revealed in the Fermi surface plot on the (010) surface with the Fermi level at the bulk Dirac point of 40 meV, as shown in Fig.~\ref{Na2CdSn}(e). For the (001) surface, the surface projection of the bulk states is superposed on the nontrivial surface states, similar to the case of Na$_2$MgPb.
In this paper, we demonstrate an approach for searching for new DSM materials by tuning the chemical degrees of freedom, based on material design starting from the well-known DSM Na$_3$Bi. By keeping both the crystal and electronic structures essentially identical to those of Na$_3$Bi, three compounds Na$_2$MgSn, Na$_2$MgPb, and Na$_2$CdSn are naturally located, and two of them are identified as DSM candidates by our theoretical calculations. The phonon calculations confirm that these compounds are more stable than Na$_3$Bi, paving the way for experimental verification. The hybrid-functional calculations with spin-orbit coupling show that Na$_2$MgSn is a normal semiconductor with an indirect band gap. By substituting Sn with the heavier Pb, band inversion occurs, and the Dirac nodes due to band crossing are protected by crystal symmetry in Na$_2$MgPb. For Na$_2$CdSn, the band inversion is induced by replacing Mg with the heavier Cd in Na$_2$MgSn. Moreover, the coexistence of a bulk 3D Dirac cone and topological surface states can be observed in the projected surface density of states for the side surfaces (100) and (010), which can serve as a reference for further experimental validation by ARPES or scanning tunneling microscopy measurements. We hope the idea in this example will lead to more material design efforts based on known topological materials, enabling more successful and efficient predictions.
\section{Computational methods}
First-principles calculations are performed using the Vienna \textit{ab-initio} simulation package (VASP)~\cite{Kresse1996} based on density functional theory (DFT). The generalized gradient approximation (GGA) in the PBE parameterization for the exchange-correlation functional is used for structural relaxation. A plane-wave basis set is employed with a kinetic energy cutoff of 500 eV. We use the projector-augmented-wave method and the related pseudo-potential for each element. An 11$\times$11$\times$5 \textbf{k}-mesh is used during structural relaxation for the unit cell until the energy difference is converged within 10$^{-6}$ eV, with a Hellman-Feynman force convergence threshold of 10$^{-4}$ eV/\AA. To remedy the underestimation of the band gap in the PBE functional, a hybrid-functional approach based on the HSE method is adopted.~\cite{HSE1,HSE2,HSE3} The harmonic interatomic force constants (IFCs) are obtained by density functional perturbation theory using a 3$\times$3$\times$2 supercell with a 3$\times$3$\times$3 \textbf{q}-mesh. The phonon dispersion is calculated from the harmonic IFCs using the PHONOPY code.~\cite{Togo2008,Togo2015} The Wannier functions~\cite{Mostofi2014} for the Cd/Mg $s$-orbital and the Sn/Pb $s$- and $p$-orbitals are generated and used in the surface state calculations.
During the preparation of this manuscript, Ref.~\onlinecite{wan2018} proposed that Na$_2$CdSn is a topological crystalline insulator (TCI) candidate, which is consistent with our PBE+SOC calculation. From Fig.~\ref{band}(c), it is seen that both the bonding and anti-bonding $s$ bands lie below the $p_{x,y}$ bands along the whole path $\Gamma$-A. We have confirmed that in this case it is a TCI with $Z_{12}$=8~\cite{Song2018} and mirror Chern number 2 in the $m_{001}$ plane.
\subsection*{Data Availability}
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
\section*{Acknowledgements}
The authors thank X. Wan, C. Fang, Z. Song and T. Zhang for valuable discussions. This work is supported by the National Key Research and Development Program of China (No. 2016YFA0300600 and 2018YFA0305700), the National Natural Science Foundation of China (Grant Nos. 11374063 and 11674369) and the ``Strategic Priority Research Program (B)'' of the Chinese Academy of Sciences (Grant No. XDB07020100).
\subsection*{Competing Interests statement}
The authors declare no competing interests.
\subsection*{Author Contributions}
H.M.W. and H.Z. designed the research. B.P., C.M.Y and H.Z. performed the calculations. B.P., H.M.W., Z.F. and H.Z. analyzed and discussed the results. B.P. and H.M.W. wrote the text of the manuscript. B.P. and C.M.Y. contributed equally to this work. All authors commented on the manuscript.
\section{Introduction}
Quantum clock models have recently attracted a strong interest. They display a discrete internal symmetry $\mathbb{Z}_q$ that can be spontaneously broken, in analogy with the $\mathbb{Z}_2$ symmetry of spin chains~\cite{Fradkin1980,Ostlund81}. So far, the majority of the works focused on one-dimensional short-range models~\cite{ortiz2012,zhuang2015, Sachdev_num, Sachdev_QFT, sun2019},
which are particularly interesting because of their relation with parafermionic chains: spontaneous symmetry breaking in the clock model results in non-trivial topological phases of the corresponding parafermionic model, in analogy with the relation between the Ising chain and Kitaev superconducting wire~\cite{fendley2012}. Parafermionic edge modes
are in fact the analogs of the Majorana modes of the $\mathbb{Z}_2$ symmetric models, and can be relevant for quantum computation due to their potential applicability for universal quantum computing hardware~\cite{Nayak2008}.
Their implementation, however, is extremely challenging and,
also at the theoretical level, studying parafermionic chains has proven a much more intricate problem than studying their fermionic counterparts. The first fundamental issue is that parafermionic models are intrinsically interacting, since free parafermions cannot exist in Hermitian Hamiltonians~\cite{Fendley2014}. On the other hand, their complexity offers interesting properties: Parafermionic chains can simultaneously host symmetry breaking and non-trivial topology~\cite{Bondesan2013,Alexandradinata2016}; moreover, parafermionic zero-energy edge modes can be of different nature~\cite{Jermyn2014} (``strong'' or ``weak'' depending on whether they extend to the full spectrum or to the low-energy manifold only).
In parallel with this plethora of parafermionic phases, quantum clock models can host a wider variety of phases compared to the Ising model.
Already the simplest case with $q=3$ shows, in addition to the trivial and the symmetry-breaking phase, also a gapless incommensurate phase~\cite{zhuang2015}.
In the incommensurate phase correlations decay algebraically and are characterized by a wavelength that is incommensurate with the lattice spacing.
This rich phase diagram depends on an additional parameter, the chirality, i.e. the explicit breaking of charge conjugation symmetry~\cite{Ostlund81,Huse81,howes1983,huse1983}, which is not present in the $\mathbb Z_2$ case. Very little is known about the phase diagrams and the phase transitions of clock models with $q>3$:
for example, for $q\geq 5$ the self-dual clock models
exhibit phase transitions
of the Kosterlitz-Thouless universality class~\cite{Matsuo2006,ortiz2012,sun2019}. In general, characterizing the phase transitions of clock models has required a considerable theoretical effort and the application of advanced numerical techniques~\cite{zhuang2015, Sachdev_num, Sachdev_QFT, ChepigaMila, Giudici}.
Quantum clock models are interesting also from the point of view of experiments and applications. In a recent experiment with Rydberg atom chains~\cite{bernien2017} it has been observed that Rydberg excitations on the chains can arrange in $\mathbb{Z}_q$ ordered states, with phase transitions belonging to the same universality class as $\mathbb{Z}_q$ clocks.
Furthermore, clock models could be used for realizing exotic phases of matter, such as many-body localized phases and Floquet time crystals with arbitrary period $n$-tupling~\cite{federica,markus}: time-translation symmetry breaking can occur in disordered one-dimensional short-range clock models, but also in models with infinite-range interactions.
In this paper we investigate in more depth and generality the nature of phase transitions in clock models with infinite-range interactions. We use a mean-field analysis which in this context is exact in the thermodynamic limit and allows us to directly study the properties of the order parameter, while numerical works in one dimension focused on other probes for the transition, like the entanglement entropy~\cite{zhuang2015}, the ground-state degeneracy or the fidelity susceptibility~\cite{sun2019}.
The model we study is a generalization of the $p$-spin model~\cite{Bapst_2012,mathfound,J_rg_2010} (with $p=1$ corresponding to the case of two-body interactions). Besides the steepest-descent construction of the mean-field free energy employed here, it must be mentioned that there exists a general rigorous solution of non-polynomial mean-field models~\cite{BRANKOV197782,Brankov1979}.
Instead of the approximating-Hamiltonian method of Refs.~\cite{BRANKOV197782,Brankov1979}, we provide
in~\ref{sec:trotter} a calculation of the pseudo-free energy suited to our
purposes.
We allow for an explicit breaking of charge conjugation symmetry, parameterized by the phase $\varphi$. We find that the phase structure is simpler than the one of the one-dimensional short-range model: there are a disordered phase and a broken-symmetry phase, and the transition between the two phases is either first or second order depending on $q$ and $p$. We reconstruct the phase diagram in all the cases, by finding the transition point as a function of the chirality $\varphi$.
The paper is organized as follows. In Section~\ref{model:sec} we introduce the Hamiltonian and discuss its symmetries. In Section~\ref{free:sec} we derive the free energy density by mean-field treatment and we discuss the possible phase transitions in the light of the symmetries.
In Section~\ref{sec:perth} we compare the numerical results concerning the continuous phase transition with the analytical results obtained via perturbation theory. We are able to derive the analytical expression of the phase-boundary line (see Fig.~\ref{phase_diagram:fig}) for $q\ge 4$. In Section~\ref{sec:piq} we discuss the fully chiral case $\varphi=\pi/q$ and we interpret the corresponding absence of the trivial phase as an exception to the analytic expansion of the free energy density introduced in Section~\ref{model:sec}. In Section~\ref{larq:sec} we consider the limit of large $q$ and study its thermodynamic properties using a harmonic approximation. We conclude and present the perspectives of future work in Section~\ref{conca:sec}. In all the paper we will assume
the Planck constant $\hbar = 1$ and the Boltzmann constant $k_B = 1$.
\section{The model} \label{model:sec}
In this section we introduce the $\mathbb{Z}_q$-invariant fully connected model (Sec.~\ref{hilbert:sec}).
We summarize the phase structure of our model in Sec.~\ref{summary:sec}.
\subsection{Hilbert space and Hamiltonian}
\label{hilbert:sec}
Clock models generalize the Ising $\mathbb{Z}_2$ symmetry to a symmetry $\mathbb{Z}_q$ with an integer $q\ge 2$~\cite{fendley2012}. We consider a system of $N$ clock variables: each variable has $q$ possible states, that can be pictorially represented as $q$ points on a unit circle (see Fig.~\ref{clock:fig}).
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=2.6cm]{horologium.pdf}
\end{tabular}
\end{center}
\caption{Pictorial representation of a clock variable with $q=6$. It belongs to a
$q$-dimensional Hilbert space, and the blue points on the circle
indicate the possible states of the clock in the basis where $\sigma$ is diagonal. The red arrows represent
the action of the $\tau$ operators in this basis.} \label{clock:fig}
\end{figure}
We label each state with the corresponding complex number, which can assume the values $1, \omega, \omega^2,\dots, \omega^{q-1}$, where $\omega=e^{2\pi i/q}$. On the $q$-dimensional Hilbert space of a quantum clock variable we define the two operators $\hat \sigma$ and $\hat \tau$ that generalize the Pauli matrices $\hat \sigma^z$, $\hat \sigma^x$. They satisfy
\begin{eqnarray}
\label{zqspin1}
\hat \sigma^q = \hat \tau^q =1\ , \qquad \hat \sigma^\dagger &=& \hat \sigma^{q-1}\ , \qquad \hat \tau^\dagger = \hat \tau^{q-1}\ ,\\
\hat \sigma \hat\tau = \omega\, \hat \tau \hat \sigma\ . & &
\label{zqspin2}
\end{eqnarray}
A convenient representation for the operators is the following
\begin{equation} \hat \sigma =
\begin{pmatrix}
1&0&0&\ \dots\ & 0\\
0&\omega&0&\ \dots\ & 0\\
0&0&\omega^2&\ & 0\\
\vdots&\vdots&\vdots&&\vdots \\
0&0&0&\ \dots\ & \omega^{q-1}
\end{pmatrix},\quad\quad
\hat \tau =
\begin{pmatrix}
0&0&0& \dots\ &0& 1\\
1&0&0& \dots\ &0 & 0\\
0&1&0& \dots & 0 &0\\
\vdots&\vdots&\vdots&&\vdots&\vdots \\
0&0&0& \dots\ & 1& 0
\end{pmatrix}
\label{explicitrep}
\end{equation}
In this representation, $\hat \sigma$ measures the position on the unit circle, and $\hat\tau$ shifts the state by one position counter-clockwise along the circle. In the case $q=2$, the matrices in Eq.~(\ref{explicitrep}) coincide with the canonical Pauli matrices $\hat \sigma^z$, $\hat \sigma^x$.
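These relations are straightforward to verify numerically. A minimal sketch, assuming NumPy (the helper name \texttt{clock\_ops} is our own choice):

```python
import numpy as np

def clock_ops(q):
    """Return the q x q clock matrices: sigma (diagonal) and tau (cyclic shift)."""
    omega = np.exp(2j * np.pi / q)
    sigma = np.diag(omega ** np.arange(q))
    tau = np.roll(np.eye(q), 1, axis=0)  # tau|j> = |j+1 mod q>
    return sigma, tau

q = 6
omega = np.exp(2j * np.pi / q)
sigma, tau = clock_ops(q)
I = np.eye(q)

# sigma^q = tau^q = 1
assert np.allclose(np.linalg.matrix_power(sigma, q), I)
assert np.allclose(np.linalg.matrix_power(tau, q), I)
# sigma^dagger = sigma^{q-1}, tau^dagger = tau^{q-1}
assert np.allclose(sigma.conj().T, np.linalg.matrix_power(sigma, q - 1))
assert np.allclose(tau.conj().T, np.linalg.matrix_power(tau, q - 1))
# sigma tau = omega tau sigma
assert np.allclose(sigma @ tau, omega * tau @ sigma)
# for q = 2 the construction reduces to the Pauli matrices sigma^z, sigma^x
sz, sx = clock_ops(2)
assert np.allclose(sz.real, [[1, 0], [0, -1]])
assert np.allclose(sx.real, [[0, 1], [1, 0]])
```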
We define a Hamiltonian for $N$ sites, in terms of $\hat \sigma_j$ and $\hat \tau_j$ acting on site $j$. On the same site the operators satisfy the relations in Eqs.~(\ref{zqspin1},\ref{zqspin2}), and on different sites they commute.
We define the two operators
\begin{equation}
\hat{m}_{\sigma} =\frac{1}{N} \sum_{j=1}^N\hat\sigma_{j}\qquad
\hat{m}_{\tau} = \frac{1}{N} \sum_{j=1}^N\hat\tau_{j}
\end{equation}
which represent the total ``magnetizations'' along $\hat \sigma$ and $\hat \tau$. The Hamiltonian of our fully connected model is then defined as
\begin{equation}\label{fcham}
\hat H = -N\left(\hat{m}_{\sigma}\hat{m}_{\sigma}^{\dagger} \right)^{p} - h q^2 N\left(\hat{m}_{\tau}e^{i\varphi} + \hat{m}_{\tau}^{\dagger}e^{-i\varphi}\right),
\end{equation}
where $p \ge 1$, $h\ge 0$ is the transverse field, $\varphi$ is real and the factors of $N$ guarantee extensivity.\newline The case $q=2$ and $\varphi = 0$ corresponds to the fully connected $p$-spin ferromagnet~\cite{Bapst_2012}. Before proceeding, we briefly outline the main features of this case. In the limit of large $h$, one observes a paramagnetic $\mathbb{Z}_2$-invariant state. For $h$ below a critical value, by contrast, the system chooses between two broken-symmetry states which physically correspond to a ferromagnet with all spins pointing either up or down in the $z$ direction. The transition separating these two phases is second order for $p = 1$ and first order for $p > 1$. The case $p \rightarrow \infty$ is connected to Grover's search algorithm~\cite{grover1997quantum}. \newline Qualitatively one would expect a similar behaviour in the case of $q > 2$ and $\varphi = 0$ (i.e. $q$ broken-symmetry states for $h \rightarrow 0$, and a $\mathbb{Z}_q$-invariant paramagnetic phase for $h \rightarrow \infty$). The crux of this paper is the elucidation of the nature of the phase transitions for $q \ge 2$ and the interesting behaviour that arises with the introduction of chirality ($\varphi \ne 0$). To this end, it is important to discuss the symmetries of the model.
The Hamiltonian (\ref{fcham}) has a global $\mathbb{Z}_q$ symmetry generated by the unitary operator
\begin{equation}\label{symm}
\hat G=\prod_{j=1}^N \hat\tau_j.
\end{equation}
We can also notice that the Hamiltonian is invariant under time reversal, which is defined as the antiunitary transformation
\begin{equation}
\hat T \hat \sigma_j \hat T = \hat \sigma_j^\dagger,\qquad \hat T \hat \tau_j \hat T =\hat \tau_j,\qquad \hat T^2=\mathbb{I}.
\end{equation}
We introduce the charge conjugation unitary operator
\begin{equation}
\hat C \hat \sigma_j \hat C=\hat \sigma_j,\qquad \hat C \hat \tau_j\hat C=\hat \tau_j^\dagger, \qquad \hat C^2=\mathbb{I}.
\end{equation}
This transformation is a symmetry only for $\varphi=0$. We refer to this special case as the non-chiral clock model, while the parameter $\varphi$ is called chirality. In general, charge conjugation transforms the Hamiltonian by changing sign to the chirality: $\hat C \hat H(\varphi) \hat C =\hat H(-\varphi)$.
The global operator
\begin{equation}
\hat K=\prod_{j=1}^N \hat\sigma_j^\dagger.
\end{equation}
transforms the Hamiltonian as $\hat K^{-1} \hat H(\varphi) \hat K = \hat H(\varphi+2\pi/q)$. Therefore, using the combined action of $\hat C$ and $\hat K$, we can restrict without loss of generality to the case $0\le \varphi\le \pi/q$.
\subsection{Summary of the results}
\label{summary:sec}
We find that the phase diagram of the model in Eq.~(\ref{fcham}) contains a trivial phase and a symmetry-breaking phase.
For $q=3$, $p=1$ the transition between the trivial phase and the symmetry-breaking phase is first order (we show the phase diagram in Fig.~\ref{phase_diagram:fig}).
The most peculiar point is at chirality $\varphi=\pi/3$: The value of the field at the transition goes to infinity as we approach $\varphi=\pi/3$, and the system is always in a broken symmetry phase for that value of $\varphi$.
For any $q>3$, $p=1$ in the infinite-range model there is a second-order transition from symmetry-breaking to trivial phase. We show the phase diagram for $q=5$ in Fig.~\ref{phase_diagram:fig}.
The phase-boundary curve is given by the analytical formula in Eq.~\eqref{eq:hc}.
For $p>1$, on the other hand, the transition is of first order for any value of $q$. In all these cases, when $\varphi=\pi/q$ (fully chiral case) and the temperature is below a threshold ($T\le 1/2$) we still see that only the symmetry-breaking phase exists in our model.
Remarkably, this result shows that the chirality and the explicit breaking of the charge conjugation symmetry have a deep influence on the thermodynamic properties also in this infinite-range interacting context.
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{phasediag.pdf}
\end{center}
\caption{(Left panel) Phase diagram for the model with $q=3$ and $p=1$. Notice the first-order transition between symmetry-breaking and trivial phase and the phase-boundary tending to $h\to\infty$ for $\varphi\to\pi/3$. (Right panel) Phase diagram for $q=5$ and $p=1$. Now the transition is second order (a fact true for all $q>3$) and at $\varphi=\pi/5$ the transition moves to infinity. For generic $q$ this fact occurs at the fully chiral point $\varphi=\pi/q$.} \label{phase_diagram:fig}
\end{figure}
\begin{center}
\begin{tabular}{ | m{5cm} | m{3cm}| m{3cm} | }
\hline
$(q,\varphi)$ & $p = 1$ & $p > 1$ \\
\hline
$q = 3,\varphi \neq \frac{\pi}{q} $ &1st order & 1st order \\
\hline
$q = 3,\varphi = \frac{\pi}{q} $ & No transition & No transition \\
\hline
$q > 3,\varphi \neq \frac{\pi}{q}$ & 2nd order & 1st order \\
\hline
$q > 3,\varphi = \frac{\pi}{q}$ & No transition & No transition \\
\hline
\end{tabular}
\end{center}
\section{Free energy}
\label{free:sec}
In this section, we study the free-energy density $f(\beta,h)$ of the model at inverse temperature $\beta$ in the thermodynamic limit, for a generic $q\ge 2$.
Thanks to the full connectivity of the interactions, a mean-field analysis provides a good description of the statistical-mechanical properties of the system. The canonical prescription for the mean-field approach on a quantum model involves first transforming the quantum partition function $Z$ into a classical one by means of a Suzuki-Trotter decomposition~\cite{Suzuki1976}. We introduce the order parameter, defined as $m =|m|e^{i\theta}=\braket{\hat m_\sigma}$, for the mean-field analysis and we apply the static approximation in order to remove the time dependence of the order parameter (see~\ref{sec:trotter} for details). The free energy density of our model calculated by this procedure is given by
\begin{equation}
\label{eq:f}
f=(2p-1)|m|^{2p}+f_{s},
\end{equation}
with
\begin{eqnarray}
f_s & = &-\frac{1}{\beta}\log \text{Tr } e^{-\beta \hat H_{s}} \\
\hat H_{s} & = & -(\lambda^*\hat\sigma +\lambda\hat\sigma^\dagger)-hq^2(\hat\tau e^{i\varphi}+\hat\tau^\dagger e^{-i\varphi}), \label{eq:mf}
\end{eqnarray}
where $\hat H_{s}$ corresponds to a single-site Hamiltonian, and
the complex number $\lambda= p m|m|^{2p-2}$ is an effective longitudinal field that depends on the average magnetization $m=\braket{\hat m_\sigma}$.
Computing the function $f_{s}$ requires the diagonalization of a $q\times q$ Hermitian matrix. However, building on the Landau theory of phase transitions, general considerations can be formulated based on the symmetries of the model. As will become clear, we further need the assumption that $f_{s}$ is an analytic function of $\lambda$ and $\lambda^*$ close to the point $\lambda=\lambda^*=0$. In the following subsections we qualitatively discuss the expansion of the free energy density $f_{s}$ as a power series in $\lambda$ and $\lambda^*$, and we examine the case where the assumption of analyticity is not valid. In both cases, these arguments are sufficient to determine if a phase transition occurs and whether it can be continuous. Quantitative results concerning the expansion of the free energy density will be obtained using perturbation theory and are discussed in Section \ref{sec:perth}.
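Before turning to the expansion, we note that the stationarity condition $m=\braket{\hat\sigma}$ behind Eqs.~\eqref{eq:f}--\eqref{eq:mf} can also be solved directly by fixed-point iteration on the single-site Hamiltonian. A sketch (function names ours, assuming NumPy), for $q=4$, $p=1$, $\varphi=0$, where Eq.~\eqref{eq:hc} below places the zero-temperature critical field at $h_c=1/16$:

```python
import numpy as np

def clock_ops(q):
    """q x q clock matrices: sigma diagonal, tau the cyclic shift."""
    omega = np.exp(2j * np.pi / q)
    return np.diag(omega ** np.arange(q)), np.roll(np.eye(q), 1, axis=0)

def m_self_consistent(q, h, phi=0.0, p=1, iters=400):
    """Zero-temperature fixed-point iteration m -> <sigma> in the ground state of H_s."""
    sigma, tau = clock_ops(q)
    m = 1.0
    for _ in range(iters):
        lam = p * m * abs(m) ** (2 * p - 2)        # effective field lambda = p m |m|^{2p-2}
        H = -(np.conj(lam) * sigma + lam * sigma.conj().T) \
            - h * q**2 * (np.exp(1j * phi) * tau + np.exp(-1j * phi) * tau.T)
        gs = np.linalg.eigh(H)[1][:, 0]            # ground state (eigh sorts ascending)
        m = (gs.conj() @ sigma @ gs).real          # real for phi = 0 by time reversal
    return m

# q = 4, p = 1, phi = 0: ordered below h_c = 1/16, trivial above it
assert m_self_consistent(4, 0.03) > 0.1    # h < h_c: broken symmetry, m > 0
assert m_self_consistent(4, 0.10) < 1e-6   # h > h_c: trivial phase, m -> 0
```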
\subsection{Series expansion}\label{sec:exp}
The single-site Hamiltonian Eq.~\eqref{eq:mf} transforms under the unitary operator $\hat \tau$ and under time reversal $\hat T$ as
\begin{equation}
\hat \tau \hat H_{s}(\lambda,\lambda^*) \hat \tau^\dagger=\hat H_{s}(\omega \lambda,\omega^*\lambda^*),\qquad \hat T \hat H_{s}(\lambda,\lambda^*) \hat T=\hat H_{s}(\lambda^*,\lambda).
\end{equation}
Since these transformations leave the trace of $\exp(-\beta \hat H_{s})$ invariant, the free energy density $f_{s}$ has to satisfy the following properties
\begin{equation}
f_{s}(\lambda,\lambda^*)=f_{s}(\omega \lambda,\omega^*\lambda^*),\qquad f_{s}(\lambda,\lambda^*)=f_{s}(\lambda^*,\lambda).
\end{equation}
As a consequence, the only non-zero terms that can appear in the power series are of the form $[\lambda^q+(\lambda^*)^q]^j(\lambda\lambda^*)^k$, for generic integers $j,k$. To lowest power in $|m|$ the free energy density $f_{s}$ reads
\begin{equation}
f_{s}\simeq a_0+a_2\lambda\lambda^*=a_0+ a_2p^2|m|^{4p-2}
\end{equation}
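Both invariances of $f_s$ can be confirmed by exact diagonalization of the single-site Hamiltonian at finite temperature; a short sketch (helper names ours, assuming NumPy), here for $q=5$, $h=0.2$, $\varphi=0.3$, $\beta=2$:

```python
import numpy as np

def clock_ops(q):
    omega = np.exp(2j * np.pi / q)
    return np.diag(omega ** np.arange(q)), np.roll(np.eye(q), 1, axis=0)

def f_single(lam, q=5, h=0.2, phi=0.3, beta=2.0):
    """f_s = -(1/beta) log Tr exp(-beta H_s) by exact diagonalization."""
    sigma, tau = clock_ops(q)
    H = -(np.conj(lam) * sigma + lam * sigma.conj().T) \
        - h * q**2 * (np.exp(1j * phi) * tau + np.exp(-1j * phi) * tau.T)
    w = np.linalg.eigvalsh(H)
    return -np.log(np.sum(np.exp(-beta * w))) / beta

lam = 0.17 + 0.05j
omega = np.exp(2j * np.pi / 5)
assert abs(f_single(lam) - f_single(omega * lam)) < 1e-10   # Z_q rotation of lambda
assert abs(f_single(lam) - f_single(np.conj(lam))) < 1e-10  # lambda <-> lambda^*
```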
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{F_q4_p2.pdf}
\includegraphics[width=0.32\textwidth]{F_q4_p1.pdf}
\includegraphics[width=0.32\textwidth]{F_q3_p1.pdf}
\caption{ Free energy density at zero temperature as a function of $m$ for $\varphi=0$. (a) For $p>1$ the transition from an ordered phase to a disordered one is of first order. (b) For $p=1$, $q\ge 4$ the transition is of second order. (c) For $p=1$, $q=3$ the transition is of first order. The value of the field where the concavity in $m=0$ changes sign is called $h_c$.}\label{fig:f}
\end{figure}
Using this relation in Eq.~\eqref{eq:f}, we see that:
\begin{itemize}
\item For $p>1$, the most relevant term in the limit $|m|\rightarrow 0$ is $(2p-1)|m|^{2p}$, which is always positive. This means that $m=0$ is a local minimum, and the phase transition to an ordered phase with $|m|\neq 0$ can only be of first order (see Fig~\ref{fig:f}-a).
\item For $p=1$, on the other hand, the dominant term is $(1+ a_2)|m|^2$, so a continuous phase transition is in principle possible when $a_2=-1$ (see Fig.~\ref{fig:f}-b). Another possibility is to have, as we vary $h$, a regime where $a_2>-1$ (so $m=0$ is locally a minimum) but a lower global minimum appears for $m\neq 0$ (see Fig.~\ref{fig:f}-c).
We postpone to Section \ref{sec:perth} a more detailed discussion about the order of the phase transition in this case.
\end{itemize}
\subsection{Non-analytic behaviour}\label{sec:nonan}
The results of the previous section crucially depend on the assumption that $f_{s}$ is an analytic function of $\lambda$ and $\lambda^*$ close to the point $|\lambda|=0$. We now show a case where this assumption is not valid (due to the chirality $\varphi$) and discuss the consequences on the properties of the phase transition.
Let us consider the case of $q>2$ and zero temperature, for which $f_{s}$ is equal to the ground state energy of $\hat H_{s}$, and examine how this energy depends on the small fields $\lambda, \lambda^*$. For $|\lambda|=0$, $H_0\equiv\hat H_{s}(\lambda=0, \lambda^*=0)$ is diagonal in the $\tau$ basis and has eigenvalues $-2hq^2\cos(2\pi j/q+\varphi)$ for $j=0,1,\dots, q-1$. If the ground state of $H_0$ is unique (i.e. for $\varphi\neq \pi/q$), the first perturbative correction to the ground state energy is of second order (proportional to $\lambda\lambda^*$). On the other hand, if $\varphi= \pi/q$, the ground state of $H_0$ has double degeneracy. The Hamiltonian $H_{s}$ restricted to the ground state manifold has the form
\begin{equation}
H_{s}|_{GS}=\begin{pmatrix}
\epsilon_0 & -\lambda^*\\
-\lambda & \epsilon_0
\end{pmatrix}
\end{equation}
with $\epsilon_0=-2hq^2\cos(\pi /q)$. We obtain that, to lowest order in $|\lambda|$, the ground state energy is $f_{s}\simeq \epsilon_0 -|\lambda|$, which is not an analytic function of $\lambda$ and $\lambda^*$.
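The linear (rather than quadratic) shift of the ground state energy at $\varphi=\pi/q$ can be checked by diagonalizing $\hat H_s$ at small $|\lambda|$; a sketch (helper names ours, assuming NumPy):

```python
import numpy as np

def clock_ops(q):
    omega = np.exp(2j * np.pi / q)
    return np.diag(omega ** np.arange(q)), np.roll(np.eye(q), 1, axis=0)

q, h = 5, 0.3
phi = np.pi / q
sigma, tau = clock_ops(q)
e0 = -2 * h * q**2 * np.cos(np.pi / q)   # doubly degenerate level of H_0

for lam in (1e-3, 1e-4):
    H = -lam * (sigma + sigma.conj().T) \
        - h * q**2 * (np.exp(1j * phi) * tau + np.exp(-1j * phi) * tau.T)
    gs = np.linalg.eigvalsh(H)[0]
    # ground state energy ~ e0 - |lambda|, up to O(lambda^2) corrections
    assert abs(gs - (e0 - lam)) < 100 * lam**2
```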
We deduce that, while the discussion of the previous section applies almost everywhere, a different scenario appears at zero temperature for $\varphi=\pi/q$. In this case, the free energy density in Eq.~\ref{eq:f} reads
\begin{equation}
f=(2p-1)|m|^{2p}-2hq^2\cos(\pi /q)-p|m|^{2p-1}+O(|m|^{4p-2}/2hq^2).
\end{equation}
Note that, in the limit of large $h$, we can neglect higher order terms, and the minimum is found for $|m|=1/2$. Remarkably, the model does not have a transition to a paramagnet, and the magnetization remains finite for arbitrarily large field $h$. This peculiar behaviour is further discussed in Section~\ref{sec:piq}.
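The location of this minimum is easy to confirm for several values of $p$; a short numerical sketch (assuming NumPy), dropping the constant $-2hq^2\cos(\pi/q)$ and the higher-order correction:

```python
import numpy as np

# minimize f(m) = (2p-1) m^{2p} - p m^{2p-1} on a fine grid; minimum sits at m = 1/2
for p in (1, 2, 3):
    m = np.linspace(0.0, 1.2, 120001)
    f = (2 * p - 1) * m**(2 * p) - p * m**(2 * p - 1)
    assert abs(m[np.argmin(f)] - 0.5) < 1e-4
```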
\section{Continuous phase transition}\label{sec:perth}
Since a continuous phase transition has already been ruled out for $p>1$, from now on we will focus on the case $p=1$. As explained in the previous section, if a continuous phase transition occurs, we can obtain its exact location in the phase diagram from the condition $a_2=-1$. The coefficient $a_2$ can be computed exactly using perturbation theory, for arbitrary field $h$ and inverse temperature $\beta$ (explicit calculations are reported in~\ref{app:pertth}). In particular, as we show in Fig.~\ref{fig:mp1}-a and \ref{fig:mp1}-b, the zero temperature transition is located at
\begin{equation}\label{eq:hc}
h_c=\frac{1}{2q^2}\left[\left(\cos(\varphi)-\cos\left(\varphi+\frac{2\pi}{q}\right)\right)^{-1}+\left(\cos(\varphi)-\cos\left(\varphi-\frac{2\pi}{q}\right)\right)^{-1}\right],
\end{equation}
while the transition at zero field occurs at $\beta_c=1$.
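Equation~\eqref{eq:hc} is simple to evaluate; a sketch of a direct numerical check (function name ours, assuming NumPy), including the divergence as $\varphi\to\pi/q$ discussed in Section~\ref{sec:piq}:

```python
import numpy as np

def h_c(q, phi):
    """Zero-temperature critical field of Eq. (hc)."""
    d_plus = np.cos(phi) - np.cos(phi + 2 * np.pi / q)
    d_minus = np.cos(phi) - np.cos(phi - 2 * np.pi / q)
    return (1 / d_plus + 1 / d_minus) / (2 * q**2)

# non-chiral q = 4: both denominators equal 1, so h_c = 2 / (2 * 16) = 1/16
assert abs(h_c(4, 0.0) - 1 / 16) < 1e-12
# h_c grows without bound as phi -> pi/q (here q = 5)
vals = [h_c(5, np.pi / 5 - x) for x in (1e-1, 1e-2, 1e-3)]
assert vals[0] < vals[1] < vals[2]
```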
\begin{figure}
\centering
\includegraphics[width=0.30\textwidth]{m_p1_phi0.pdf}
\includegraphics[width=0.30\textwidth]{m_p1_phi_non0.pdf}
\includegraphics[width=0.38\textwidth]{m_p1_q3.pdf}
\caption{Magnetization $m$ as a function of $h/h_c$ (from Eq.~\ref{eq:hc}). (a),(b) A second order phase transition occurs at $T=0$, $h=h_c$ for different values of $q>3$ and $\varphi<\pi/q$. (c) A first order phase transition takes place at $T=0$, $h=h_*>h_c$ for $q=3$ and different values of $\varphi<\pi/q$.}\label{fig:mp1}
\end{figure}
There is, however, another possibility: the transition may be a discontinuous first-order one and may occur at a value of the field $h_*>h_c$ (or $\beta_*>\beta_c$). We argue that this is indeed the case for $q=3$. In this case, the free energy density has a third order term $\propto 2|\lambda|^3\cos(3\theta)$ which is negative for some values of $\theta=\arg(\lambda)$, and a fourth order term, which is always positive.
Given these signs of the coefficients, it can be proven
that for $h\rightarrow h_c^+$ the difference of the free energy densities $f_{MF}(m)-f_{MF}(0)$ becomes negative for certain values of $m$ (see \ref{app:firstorder}). Therefore, $m=0$ is not the global minimum: a first order phase transition occurs for a value $h_*>h_c$ (which we obtain numerically) at $T=0$, as shown in Fig.~\ref{fig:f}-c and Fig.~\ref{fig:mp1}-c. For any other value of $q$, the third order coefficient is zero, and we expect the transition to be continuous (Fig.~\ref{fig:mp1}-a,b).
\section{Case $\varphi=\pi/q$}\label{sec:piq}
From Eq.~\eqref{eq:hc} we see that the zero-temperature critical field diverges when $\varphi\rightarrow \pi/q$. We have further proved in section \ref{sec:nonan} that no transition occurs for $\varphi=\pi/q$, in which case the magnetization tends to $m\rightarrow 1/2$ for $h \rightarrow \infty$. We illustrate this non-analytic behaviour in Fig.~\ref{fig:pi3}-a: both for a discontinuous and for a continuous transition, as we approach the value $\varphi=\pi/q$, the fields at the transition ($h_*$ and $h_c$ respectively) diverge. Moreover, for the discontinuous case, the jump of the magnetization at the transition ($m_*$) tends to zero. The asymptotic behaviours at $T=0$ for $x=\pi/q-\varphi\ll 1$ read
\begin{equation}
h_c \simeq \frac{1}{4q^2\sin(\pi/q)x}
\end{equation}
and for $q=3$
\begin{equation}
h_* \simeq h_c\left(1+\frac{4}{3}x^2\right)
\hspace{2cm}
m_*=36h_* x^2.
\end{equation}
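The asymptotic form of $h_c$ can be checked against the exact expression of Eq.~\eqref{eq:hc}; a sketch (function name ours, assuming NumPy):

```python
import numpy as np

def h_c(q, phi):
    # exact zero-temperature critical field, Eq. (hc)
    d_plus = np.cos(phi) - np.cos(phi + 2 * np.pi / q)
    d_minus = np.cos(phi) - np.cos(phi - 2 * np.pi / q)
    return (1 / d_plus + 1 / d_minus) / (2 * q**2)

q = 5
for x in (1e-3, 1e-4):
    asym = 1 / (4 * q**2 * np.sin(np.pi / q) * x)   # h_c ~ 1/(4 q^2 sin(pi/q) x)
    assert abs(h_c(q, np.pi / q - x) / asym - 1) < 5e-3
```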
As can be seen in Fig.~\ref{fig:pi3}-a, for $\varphi=\pi/q$, the magnetization is always larger than $1/2$.
We now consider the case of $T\neq 0$ but small compared to $h$, such that $h\cdot x \ll \beta^{-1} \ll h$. In this case, the perturbative expansion can be used and
\begin{equation}
a_2\simeq -\frac{\tanh{\left(2\beta h q^2 \sin(\pi/q)x\right)}}{4hq^2 \sin(\pi/q)x} \simeq -\beta/2
\end{equation}
so for $p=1$ and $\beta \ge 2$ the system is ferromagnetic in this regime. The phase transition can only occur out of this range, i.e. at a value of $h$ diverging at least as fast as $1/x$. Since the transition point moves to $h\rightarrow \infty$ as $x\rightarrow 0$, we can argue that for $\beta\ge 2$, as already discussed in the zero-temperature case, when $\varphi=\pi/q$ ($x=0$) the system is always ferromagnetic. This is in fact shown in Fig.~\ref{fig:pi3}-b for $q=3$, where a qualitative difference can be observed between $\beta\ge 2$ and $\beta < 2$: for $\beta \ge 2$ the transition point moves to diverging values of the field $h_*$ when $x\rightarrow 0$, but it tends to a finite value when $\beta < 2$.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{m_q4_p1.pdf}\hspace{0.5cm}
\includegraphics[width=0.4\textwidth]{mh_T_p1_q3.pdf}
\caption{Magnetization $m$ as a function of the field $h$ for different values of $\varphi$ in the cases (a) $q=3$, $p=1$ (first order transition) and (b) $q=4$, $p=1$ (second order transition). Field $h_*$ (c) and magnetization $m_*$ (d) at the discontinuous transition for $q=3$ as a function of the chirality $\varphi$ for different values of $\beta$.}\label{fig:pi3}
\end{figure}
\section{Large $q$ limit}\label{larq:sec}
In this section, we derive an analytic expression for the free energy density at finite and zero temperature for large $q$. In this limit, the $\mathbb{Z}_q$ symmetry of the model becomes a continuous $U(1)$ symmetry. The free energy density and the properties of the phase transition can be obtained from the spectrum of the single-site Hamiltonian $\hat H_s$, which now describes the dynamics of a continuous rotor. In order to take the continuum limit of the clock variable we replace
\begin{equation}
\sigma\rightarrow e^{i\alpha} \hspace{2cm} \tau \rightarrow e^{-\frac{2\pi}{q}\partial_\alpha}
\end{equation}
such that $\tau$ acts on $\alpha$ as a translation of $2\pi/q$. With this substitution and by expanding $\tau$ to second order in the small parameter $2\pi/q$, we get
\begin{equation}
H_{s}= 4\pi^2h\left(i\frac{\partial}{\partial \alpha}+\chi\right)^2-2|\lambda| \cos (\alpha +\theta)-2hq^2
\end{equation}
where $\chi=q \varphi/(2\pi)$.
Let us distinguish the cases $p=1$ and $p>1$. In the case $p=1$, we have shown that, for any $q>3$ the model has a second order phase transition at the critical value $h_c$ in Eq.~\ref{eq:hc}. Taking the limit $q\rightarrow \infty$ of this expression we find
\begin{equation}
h_c = \frac{1}{2\pi^2}\frac{1}{1-4\chi^2}.
\end{equation}
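This large-$q$ limit can be verified by evaluating Eq.~\eqref{eq:hc} at $\varphi=2\pi\chi/q$ for increasing $q$; a sketch (function name ours, assuming NumPy):

```python
import numpy as np

def h_c(q, phi):
    # exact zero-temperature critical field, Eq. (hc)
    d_plus = np.cos(phi) - np.cos(phi + 2 * np.pi / q)
    d_minus = np.cos(phi) - np.cos(phi - 2 * np.pi / q)
    return (1 / d_plus + 1 / d_minus) / (2 * q**2)

chi = 0.3
limit = 1 / (2 * np.pi**2 * (1 - 4 * chi**2))
for q in (100, 1000, 10000):
    # convergence to the continuum value as q grows (corrections are O(1/q^2))
    assert abs(h_c(q, 2 * np.pi * chi / q) / limit - 1) < 1 / q
```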
On the other hand, for $p>1$ the transition is first order and we use approximate methods for locating the transition point: we approximate the potential $-2|\lambda| \cos (\alpha +\theta)$ with a harmonic potential around $\alpha=-\theta$ and find the spectrum to be
\begin{equation}
\label{eq:harmonic}
E_n=-2|\lambda| -2hq^2+(4n + 2)\pi\sqrt{h|\lambda|}.
\end{equation}
Using Eq.~\ref{eq:f}, we arrive at the following expression for the free energy density
\begin{equation}
f = (2p-1)\left(\frac{|\lambda| }{p}\right)^{\frac{2p}{2p-1}} - 2h q^2 - 2|\lambda| +2\pi q\sqrt{|\lambda| h} - \frac{1}{\beta}\ln\left(\frac{\sinh(2\pi\beta\sqrt{|\lambda| h} q)}{\sinh(2\pi \beta\sqrt{|\lambda| h})}\right)
\end{equation}
In the zero-temperature limit ($\beta\to\infty$), only the ground state contributes and the free energy density reduces to $ f=(2p-1)|m|^{2p}+E_0 $ for $\chi\neq 1/2$. In order to compute the magnetization ($m_*$) and field ($h_*$) at the transition,
we need to solve two equations simultaneously. The first one is obtained by requiring the free energy of the paramagnet to be equal to that of the ferromagnet at the transition point, {\it i.e.} $f(\lambda_*, \beta=\infty, h_*) = f(0, \beta=\infty, h_*)$. The second one is obtained by minimizing the free energy with respect to $\lambda$. The result is
\begin{equation}\label{eq:largeq_critical}
h_*= \frac{2}{\pi^2}\frac{(2p)^{2p}}{(2p+1)^{2p+1}}
\hspace{2cm}
m_*=\frac{2p}{2p+1}
\end{equation}
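As a consistency check, Eq.~\eqref{eq:largeq_critical} can be reproduced numerically. The sketch below assumes the zero-temperature free energy $f(\lambda,h)=(2p-1)(\lambda/p)^{2p/(2p-1)}-2hq^2-2\lambda+2\pi\sqrt{\lambda h}$ obtained from the $\beta\to\infty$ limit of the expression above (for $\chi=0$), together with the mean-field relation $\lambda = p\,m^{2p-1}$; both are our reading of the text, not formulas stated explicitly here:

```python
import math

def critical_point(p):
    """Locate the first-order transition by solving f(lam*, h*) = f(0, h*)
    and df/dlam = 0 for the assumed beta -> infinity free energy
    f = (2p-1)(lam/p)^{2p/(2p-1)} - 2hq^2 - 2*lam + 2*pi*sqrt(lam*h),
    with lam = p m^{2p-1} (chi = 0).  Stationarity gives
    pi*sqrt(h/lam) = 2(1-m), which eliminates h and leaves a 1D root."""
    def delta(m):                      # f(lam, h(lam)) - f(0, h(lam)); -2hq^2 cancels
        lam = p * m**(2 * p - 1)
        hh = 4 * lam * (1 - m)**2 / math.pi**2   # from df/dlam = 0
        return (2 * p - 1) * m**(2 * p) - 2 * lam + 2 * math.pi * math.sqrt(lam * hh)
    a, b = 1e-9, 1 - 1e-9              # bisection on m in (0, 1)
    for _ in range(200):
        c = 0.5 * (a + b)
        if delta(a) * delta(c) <= 0:
            b = c
        else:
            a = c
    m = 0.5 * (a + b)
    lam = p * m**(2 * p - 1)
    return 4 * lam * (1 - m)**2 / math.pi**2, m   # (h*, m*)

for p in (2, 3, 5):
    h_num, m_num = critical_point(p)
    h_pred = 2 / math.pi**2 * (2 * p)**(2 * p) / (2 * p + 1)**(2 * p + 1)
    m_pred = 2 * p / (2 * p + 1)
    print(p, h_num, h_pred, m_num, m_pred)
```

The numerical roots coincide with the closed-form $h_*$ and $m_*$ above to machine precision.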
By requiring that the transition point belongs to the regime where the harmonic approximation holds (i.e. $\sqrt{h|\lambda|} \ll |\lambda|$), we see that this result is valid when
$p\gg 1$. In Fig.~\ref{fig:cp} we plot the numerical results for finite $q$ and we see that, as expected, when we increase $q$ they better approximate the analytical results in Eq.~\ref{eq:largeq_critical}.
Similarly, the spinodal field $ h_s $ and magnetization $ m_s $ are computed by solving two equations: the first is obtained by requiring that $\frac{\partial^2 f}{\partial \lambda^2} = 0$, and the second by minimizing the free energy density with respect to $ \lambda $. We find that
\begin{equation}\label{eq:largeq_spinodal}
h_{s} = \frac{32 p^2}{\pi^2 } \frac{(2p -1)^{2p - 1}}{(6p-1)^{2 p +1}}
\hspace{2cm}
m_{s} = \frac{2p -1 }{6p-1}.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{criticalpoints.pdf}
\caption{Values of the field $h_*$ (panel (a)) and of the magnetization $m_*$ (panel (b)) at the discontinuous phase transition for $p\ge 2$, and different values of $q$. The continuous line is the analytic result obtained in the large $q$ limit (Eq.~\ref{eq:largeq_critical}).}\label{fig:cp}
\end{figure}
\section{Conclusions and perspectives}\label{conca:sec}
We have examined the quantum and thermal properties of $\mathbb{Z}_q$-symmetric fully connected clock models,
and classified the order of their phase transitions.
We showed that the model can have first or second order phase transitions, depending on
(i) the chirality $\varphi$ of the model, (ii) the order $p$ of the interactions, and (iii) the dimensionality $q$
of the clock variables.
The full connectivity of the interactions has allowed
us to solve the problem analytically by combining a mean-field approach
with perturbation theory up to fourth order.
In our analysis, we first derived the free energy of the system at the mean-field level,
which becomes exact in the thermodynamic limit for fully connected models.
In this way we have provided general considerations regarding the possible phase transitions that can occur in the model.
We have applied a Landau-theory argument in the following way: we expanded the free energy density in terms of
the effective longitudinal field $\lambda$ and examined it in the light of the symmetries of the model. In this way we determined the possible phase transitions the model can have and their respective orders.
The argument relies intimately on the condition that the free energy is an analytic function
for small effective longitudinal fields, $\lambda \sim 0$.
We have found that for $p>1$ the possible phase transitions
can only be of first order, while continuous transitions could in principle occur for $p=1$.
The analyticity condition is satisfied almost always; when the free energy is non-analytic around $\lambda \sim 0$, the previous arguments do not apply.
This is the case of the model at zero temperature with the specific chirality $\varphi = \pi/q$ and $q>2$.
In this case a different scenario appears and the model has no phase transition to a paramagnetic phase,
independently of the value of $p$. Remarkably the magnetization remains finite for arbitrarily large fields $h$.
Using perturbation theory up to fourth order, we determined the coefficients
of the free energy series expansion. This allowed us to perform a quantitative study of the phase transitions and
delineate the phase diagram of the model for its different parameters $p$, $q$, $\varphi$ and $\beta$.
In the limit $ q \rightarrow \infty$ the $\mathbb{Z}_q$ symmetry of the model becomes a continuous $U(1)$ symmetry.
In this case we were able to go beyond the perturbative results, and obtained analytically the free
energy density of the model together with the critical field $h_*$ and magnetization $m_*$ at the transition (Eq.~\eqref{eq:largeq_critical}), as well
as the spinodal field $h_s$ and magnetization $m_s$
(Eq.~\eqref{eq:largeq_spinodal}).
It is worth mentioning that our results are in agreement with previous works~\cite{federica}, where the case $p=1$, $q=3,4$, $\varphi=0$ was studied numerically.
We remark that the phase structure in the case of infinite-range interactions is much simpler than that of the one-dimensional short-range model, and has no incommensurate gapless phases.
While in the short-range case for $q=3$, $p=1$ the transition between the trivial phase and the symmetry-breaking phase is second order, here the transition is first order. Also at $\varphi=\pi/3$ our findings differ markedly from the short-range case, which features a transition from a symmetry-breaking to an incommensurate phase~\cite{zhuang2015}.
The difference is evident also for larger $q$. For any $q>3$, $p=1$, the infinite-range model has a transition from the symmetry-breaking to the trivial phase, but it is second order, in contrast with the already mentioned Kosterlitz-Thouless transition of the one-dimensional self-dual short-range case with $q>4$.
As a future perspective of this work, it would be interesting to extend the investigation
to the case of long-range interactions in $d$ dimensions, where it is possible to have
some spatial dependence of correlations. This case interpolates between the one-dimensional short-range and infinite-range cases; studying it would help clarify how one moves between two very different phase diagrams. In particular, this step would be important in
order to understand which ingredients allow for the presence of an incommensurate phase,
like the one that arises in short-range interacting clock models in $d = 1$.
\ack{We acknowledge fruitful discussions with A.~Angelone and M.~Dalmonte. F.I. acknowledges the financial support of the Brazilian funding agencies CNPQ (308205/2019-7) and FAPERJ. This work is partly supported by the ERC under
grant number 758329 (AGEnTh).}
\newpage
\section*{Bibliography}
\hspace{0.5cm}
\bibliographystyle{unsrt}
|
1708.04547
|
\section{\bf Introduction}
\vskip 0.4 true cm
In 1948, L.V. Kantorovich, the Soviet mathematician and economist, introduced the well-known Kantorovich inequality \cite{kantorovich}. The operator version of the Kantorovich inequality was first established by A.W. Marshall and I. Olkin, who obtained:
\vskip 0.4 true cm
\noindent{\bf Theorem A.}
{\it
{\upshape(\cite{marshal})} Let $A$ be a positive operator satisfying $0<m{{\mathbf{1}}_{\mathcal{H}}}\le A\le M{{\mathbf{1}}_{\mathcal{H}}}$ for some scalars $m,M$ with $m<M$ and $\Phi $ be a normalized positive linear map. Then
\begin{equation}\label{5}
\Phi \left( {{A}^{-1}} \right)\le \frac{{{\left( M+m \right)}^{2}}}{4Mm}\Phi {{\left( A \right)}^{-1}}.
\end{equation}
}
This note aims to present an improvement of inequality \eqref{5}. The main result of this note is the following:
\begin{theorem}\label{th1}
Let all the assumptions of Theorem {\upshape A} hold. Then
\begin{equation}\label{10}
\Phi \left( {{A}^{-1}} \right)\le \Phi \left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)\le \frac{{{\left( M+m \right)}^{2}}}{4Mm}\Phi {{\left( A \right)}^{-1}}.
\end{equation}
\end{theorem}
This is proven at the end of Section \ref{s2}. We start off by fixing some notation: Let $\mathbb{B}\left( \mathcal{H} \right)$ denote the set of all bounded linear operators on a complex Hilbert space $\mathcal{H}$ with the identity ${{\mathbf{1}}_{\mathcal{H}}}$. We extensively use the continuous functional calculus for self-adjoint operators, e.g., see \cite[p. 3]{book}. An operator $A$ on $\mathcal{H}$ is said to be {\it positive} (in symbol $0\le A$) if $0\le \left\langle Ax,x \right\rangle $ for all $x\in \mathcal{H}$. We write $0<A$ if $A$ is positive and invertible. For self-adjoint operators $A,B\in \mathbb{B}\left( \mathcal{H} \right)$, we say $A\le B$ if $0\le B-A$. A linear map $\Phi :\mathbb{B}\left( \mathcal{H} \right)\to \mathbb{B}\left( \mathcal{K} \right)$, where $\mathcal{H}$ and $\mathcal{K}$ are complex Hilbert spaces, is called {\it positive} if $\Phi \left( A \right)\ge 0$ whenever $A\ge 0$ and is said to be {\it normalized} if $\Phi \left( {{\mathbf{1}}_{\mathcal{H}}} \right)={{\mathbf{1}}_{\mathcal{K}}}$.
A positive function $f$ defined on an interval $I$ (or, more generally, on a convex subset of some vector space) is called {\it $\log$-convex} if $\log f\left( x \right)$ is a convex function of $x$. We observe that such functions satisfy the elementary inequality
\begin{equation*}
f\left( \left( 1-v \right)a+vb \right)\le {{\left[ f\left( a \right) \right]}^{1-v}}{{\left[ f\left( b \right) \right]}^{v}},\qquad \text{ }0\le v\le 1
\end{equation*}
for any $a,b\in I$. Because of the weighted arithmetic-geometric mean inequality, we also have
\begin{equation}\label{b}
f\left( \left( 1-v \right)a+vb \right)\le {{\left[ f\left( a \right) \right]}^{1-v}}{{\left[ f\left( b \right) \right]}^{v}}\le \left( 1-v \right)f\left( a \right)+vf\left( b \right),
\end{equation}
which says that any log-convex function is a convex function.
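For instance, with the hypothetical choice $f(t)=t^{-3/2}$, which is log-convex on $(0,\infty)$ since $\log f(t)=-\tfrac{3}{2}\log t$ is convex, inequality \eqref{b} can be spot-checked numerically:

```python
# Check f((1-v)a + v b) <= f(a)^(1-v) f(b)^v <= (1-v) f(a) + v f(b)
# for the log-convex function f(t) = t^(-3/2) on (0, inf).
f = lambda t: t**-1.5

for a, b in [(0.3, 2.0), (1.0, 5.0)]:
    for i in range(11):
        v = i / 10
        lhs = f((1 - v) * a + v * b)          # value at the convex combination
        mid = f(a)**(1 - v) * f(b)**v         # geometric-mean bound (log-convexity)
        rhs = (1 - v) * f(a) + v * f(b)       # arithmetic-mean bound (convexity)
        assert lhs <= mid + 1e-12 <= rhs + 2e-12
print("inequality (b) verified on the sampled grid")
```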
\medskip
The following inequality is well known in the literature as the Choi-Davis-Jensen inequality:
\vskip 0.4 true cm
\noindent{\bf Theorem B.}
(\cite{choi, davis}) {\it Let $A\in \mathbb{B}\left( \mathcal{H} \right)$ be a self-adjoint operator with spectrum $Sp\left( A \right)\subseteq I$ and let $\Phi $ be a normalized positive linear map from $\mathbb{B}\left( \mathcal{H} \right)$ to $\mathbb{B}\left( \mathcal{K} \right)$. If $f$ is an operator convex function on $I$, then
\begin{equation}\label{cdj}
f\left( \Phi \left( A \right) \right)\le \Phi \left( f\left( A \right) \right).
\end{equation}
}
\indent Though inequality \eqref{cdj} does not hold for general convex functions, we have the following estimate:
\vskip 0.4 true cm
\noindent{\bf Theorem C.}
{\it {\upshape(\cite[Remark 4.14]{micic})} Let $A\in \mathbb{B}\left( \mathcal{H} \right)$ be a self-adjoint operator with $Sp\left( A \right)\subseteq \left[ m,M \right]$ for some scalars $m,M$ with $m<M$ and let $\Phi $ be a normalized positive linear map from $\mathbb{B}\left( \mathcal{H} \right)$ to $\mathbb{B}\left( \mathcal{K} \right)$. If $f$ is a non-negative convex function, then
\begin{equation*}
\frac{1}{\mu \left( m,M,f \right)}\Phi \left( f\left( A \right) \right)\le f\left( \Phi \left( A \right) \right)\le \mu \left( m,M,f \right)\Phi \left( f\left( A \right) \right),
\end{equation*}
where $\mu\left( m,M,f \right)$ is defined by
\[\mu\left( m,M,f \right)\equiv \max \left\{ \frac{1}{f\left( t \right)}\left( \frac{M-t}{M-m}f\left( m \right)+\frac{t-m}{M-m}f\left( M \right) \right):\text{ }m\le t\le M \right\}.\]
}
In Section \ref{s2} we prove an analogue of Theorem C for log-convex functions. The proof of Theorem \ref{th1} follows quickly from this inequality. In Section \ref{s3}, inspired by the work of Lin \cite{lin}, we square the second inequality in \eqref{10}.
\section{\bf A refinement of the operator Kantorovich inequality}\label{s2}
\vskip 0.4 true cm
An important role in our analysis is played by the following result, which is of independent interest.
\begin{proposition}\label{thb}
Let all the assumptions of Theorem {\upshape C} hold, except that the convexity condition is replaced by log-convexity. Then
\begin{equation}\label{1}
\Phi \left( f\left( A \right) \right)\le \Phi \left( {{\left[ f\left( m \right) \right]}^{\frac{M{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}}{{\left[ f\left( M \right) \right]}^{\frac{A-m{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}} \right)\le \mu \left( m,M,f \right)f\left( \Phi \left( A \right) \right).
\end{equation}
\end{proposition}
\begin{proof}
It can be verified that if $m\le t\le M$, then $0\le \frac{M-t}{M-m},\frac{t-m}{M-m}\le 1$ and $\frac{M-t}{M-m}+\frac{t-m}{M-m}=1$. Thanks to \eqref{b}, we have
\begin{equation}\label{c}
f\left( t \right)=f\left( \frac{M-t}{M-m}m+\frac{t-m}{M-m}M \right)\le {{\left[ f\left( m \right) \right]}^{\frac{M-t}{M-m}}}{{\left[ f\left( M \right) \right]}^{\frac{t-m}{M-m}}}\le L\left( t \right),
\end{equation}
where
\[L\left( t \right)=\frac{M-t}{M-m}f\left( m \right)+\frac{t-m}{M-m}f\left( M \right).\]
Applying the functional calculus to the operator $A$, we infer that
\[f\left( A \right)\le {{\left[ f\left( m \right) \right]}^{\frac{M{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}}{{\left[ f\left( M \right) \right]}^{\frac{A-m{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}\le L\left( A \right).\]
Using the hypotheses made about $\Phi $,
\begin{equation}\label{3}
\Phi \left( f\left( A \right) \right)\le \Phi \left( {{\left[ f\left( m \right) \right]}^{\frac{M{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}}{{\left[ f\left( M \right) \right]}^{\frac{A-m{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}} \right)\le \Phi \left( L\left( A \right) \right).
\end{equation}
On account of \cite[Corollary 4.12]{micic} (the functions $f$ and $g$ there are now $L$ and $f$, respectively), we get
\[\Phi \left( f\left( A \right) \right)\le \Phi \left( {{\left[ f\left( m \right) \right]}^{\frac{M{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}}{{\left[ f\left( M \right) \right]}^{\frac{A-m{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}} \right)\le \mu \left( m,M,f \right)f\left( \Phi \left( A \right) \right).\]
Notice that, although \cite[Corollary 4.12]{micic} is for matrices, it is also true for operators.
Hence \eqref{1} follows.
\end{proof}
\medskip
The following corollary is an immediate consequence of Proposition \ref{thb}. Recall that $f\left( t \right)={{t}^{p}}$ $\left( p<0 \right)$ is a log-convex function.
\begin{corollary}\label{3.1}
Under the hypotheses of Proposition \ref{thb}, let $p\in \left( -\infty ,0 \right)$ and $0<m<M$. Then
\begin{equation}\label{18}
\Phi \left( {{A}^{p}} \right)\le \Phi \left( {{m}^{p\left( \frac{M{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m} \right)}}{{M}^{p\left( \frac{A-m{{\mathbf{1}}_{\mathcal{H}}}}{M-m} \right)}} \right)\le K\left( m,M,p \right)\Phi {{\left( A \right)}^{p}},
\end{equation}
where $K\left( m,M,p \right)$ is the generalized Kantorovich constant defined by
\[K\left( m,M,p \right)\equiv \frac{m{{M}^{p}}-M{{m}^{p}}}{\left( p-1 \right)\left( M-m \right)}{{\left( \frac{p-1}{p}\frac{{{M}^{p}}-{{m}^{p}}}{m{{M}^{p}}-M{{m}^{p}}} \right)}^{p}}.\]
\end{corollary}
\begin{remark}
We would like to mention that \eqref{1} can be regarded as an improvement of \cite[Theorem 1.5]{f} (see also \cite[Lemma 2]{micic1}).
\end{remark}
\medskip
After the previous technical intermission, we return to the main subject of this section,
the proof of the inequality \eqref{10}.
\medskip
\noindent {\it Proof of Theorem \ref{th1}.} This follows from Corollary \ref{3.1} by putting $p=-1$. We should point out that $K\left( m,M,-1 \right)=\frac{{{\left( M+m \right)}^{2}}}{4Mm}$. $\hfill\square$
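Theorem \ref{th1} can also be illustrated numerically. In the sketch below (our own illustration, not part of the proof), $\Phi$ is taken to be a compression onto a subspace, which is a normalized positive linear map, and matrix functions are evaluated via the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
m, M = 0.5, 2.0

# Random positive A with spectrum in [m, M] (endpoints included)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
a = rng.uniform(m, M, size=n)
a[0], a[-1] = m, M
A = Q @ np.diag(a) @ Q.T

def apply_fn(A, f):
    # continuous functional calculus for a symmetric matrix
    w, U = np.linalg.eigh(A)
    return U @ np.diag(f(w)) @ U.T

# Phi = compression onto a k-dimensional subspace: a normalized positive map
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
Phi = lambda X: V.T @ X @ V

G = apply_fn(A, lambda t: m**((t - M) / (M - m)) * M**((m - t) / (M - m)))
lhs = Phi(apply_fn(A, lambda t: 1.0 / t))            # Phi(A^{-1})
mid = Phi(G)                                          # middle term of (10)
rhs = (M + m)**2 / (4 * M * m) * np.linalg.inv(Phi(A))
rhs = 0.5 * (rhs + rhs.T)                             # symmetrize rounding noise

# X <= Y  iff  Y - X is positive semidefinite
assert np.linalg.eigvalsh(mid - lhs).min() > -1e-10
assert np.linalg.eigvalsh(rhs - mid).min() > -1e-10
print("both inequalities in (10) hold for this instance")
```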
\vskip 0.5 true cm
Can the second inequality in \eqref{10} be squared? Responding to this question is the main motivation of the next section.
\section{\bf Squaring refinement of the operator Kantorovich inequality}\label{s3}
\vskip 0.4 true cm
We will need the following lemmas.
\begin{lemma}\label{11}
\hfill
\begin{itemize}
\item[(i)] \cite[Theorem 1]{bhatia} Let $A,B>0$. Then the following norm inequality holds:
\[\left\| AB \right\|\le \frac{1}{4}{{\left\| A+B \right\|}^{2}}.\]
\item[(ii)] \cite[Theorem 3]{ando} Let $A,B\ge 0$ and $1\le r\le \infty $. Then
\[\left\| {{A}^{r}}+{{B}^{r}} \right\|\le \left\| {{\left( A+B \right)}^{r}} \right\|.\]
\end{itemize}
\end{lemma}
\begin{lemma}\label{9}
For each $m\le t\le M$, we have
\[t+mM{{m}^{\frac{t-M}{M-m}}}{{M}^{\frac{m-t}{M-m}}}\le M+m.\]
\end{lemma}
\begin{proof}
Because of the weighted arithmetic-geometric mean inequality
\[\begin{aligned}
t+mM{{m}^{\frac{t-M}{M-m}}}{{M}^{\frac{m-t}{M-m}}}&=t+{{m}^{\frac{t-m}{M-m}}}{{M}^{\frac{M-t}{M-m}}} \\
& \le t+\frac{t-m}{M-m}m+\frac{M-t}{M-m}M \\
& =M+m,
\end{aligned}\]
which finishes the proof.
\end{proof}
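Lemma \ref{9} is a scalar inequality, so it can be spot-checked directly on a grid (the endpoints $m$, $M$ below are arbitrary choices of ours); equality is attained at $t=m$ and $t=M$:

```python
# Spot-check Lemma 9:  t + mM * m^{(t-M)/(M-m)} * M^{(m-t)/(M-m)} <= M + m
m, M = 0.5, 3.0
vals = []
for i in range(1001):
    t = m + (M - m) * i / 1000
    vals.append(t + m * M * m**((t - M) / (M - m)) * M**((m - t) / (M - m)))

assert max(vals) <= M + m + 1e-12
# equality at the endpoints t = m and t = M
assert abs(vals[0] - (M + m)) < 1e-12 and abs(vals[-1] - (M + m)) < 1e-12
print("Lemma 9 verified on the sampled grid")
```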
\medskip
Now we are in a position to state our main result.
\begin{theorem}\label{th3}
Let all the assumptions of Theorem {\upshape A} hold. Then
\begin{equation}\label{12}
\Phi {{\left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)}^{p}}\le {{\left( \frac{{{\left( M+m \right)}^{2}}}{{{4}^{\frac{2}{p}}}Mm} \right)}^{p}}\Phi {{\left( A \right)}^{-p}}\quad\text{ for }2\le p<\infty .
\end{equation}
In particular
\[\Phi {{\left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)}^{2}}\le {{\left( \frac{{{\left( M+m \right)}^{2}}}{4Mm} \right)}^{2}}\Phi {{\left( A \right)}^{-2}}.\]
\end{theorem}
\begin{proof}
The idea of the proof is similar to that of \cite[Theorem 3]{fu}. It is easy to see that if $A,B>0$ and $\alpha >0$, then
\[A\le \alpha B\quad\text{ }\Leftrightarrow \quad\text{ }\left\| {{A}^{\frac{1}{2}}}{{B}^{-\frac{1}{2}}} \right\|\le {{\alpha }^{\frac{1}{2}}}.\]
So we are done if we can show
\[\left\| \Phi {{\left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)}^{\frac{p}{2}}}\Phi {{\left( A \right)}^{\frac{p}{2}}} \right\|\le \frac{{{\left( M+m \right)}^{p}}}{4{{M}^{\frac{p}{2}}}{{m}^{\frac{p}{2}}}}.\]
On account of Lemma \ref{9}, it follows that
\begin{equation}\label{6}
\Phi \left( A \right)+mM\Phi \left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)\le \left( M+m \right){{\mathbf{1}}_{\mathcal{H}}}.
\end{equation}
By direct calculation,
\[\begin{aligned}
& \left\| {{m}^{\frac{p}{2}}}{{M}^{\frac{p}{2}}}\Phi {{\left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)}^{\frac{p}{2}}}\Phi {{\left( A \right)}^{\frac{p}{2}}} \right\| \\
&\quad \le \frac{1}{4}{{\left\| {{m}^{\frac{p}{2}}}{{M}^{\frac{p}{2}}}\Phi {{\left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)}^{\frac{p}{2}}}+\Phi {{\left( A \right)}^{\frac{p}{2}}} \right\|}^{2}} \quad \text{(by Lemma \ref{11} (i))}\\
&\quad \le \frac{1}{4}{{\left\| {{\left( mM\Phi \left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)+\Phi \left( A \right) \right)}^{\frac{p}{2}}} \right\|}^{2}} \quad \text{(by Lemma \ref{11} (ii))}\\
&\quad =\frac{1}{4}{{\left\| mM\Phi \left( {{m}^{\frac{A-M{{\mathbf{1}}_{\mathcal{H}}}}{M-m}}}{{M}^{\frac{m{{\mathbf{1}}_{\mathcal{H}}}-A}{M-m}}} \right)+\Phi \left( A \right) \right\|}^{p}} \\
&\quad \le \frac{{{\left( M+m \right)}^{p}}}{4} \quad \text{(by \eqref{6})}. \\
\end{aligned}\]
This proves \eqref{12}.
\end{proof}
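The special case $p=2$ of Theorem \ref{th3} can be spot-checked numerically as well (our own illustration, again with a compression map in the role of $\Phi$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
m, M = 0.5, 2.0

# Random positive A with spectrum in [m, M]
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
a = rng.uniform(m, M, size=n)
a[0], a[-1] = m, M
A = Q @ np.diag(a) @ Q.T

w, U = np.linalg.eigh(A)
G = U @ np.diag(m**((w - M) / (M - m)) * M**((m - w) / (M - m))) @ U.T

V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # compression map Phi
PhiG, PhiA = V.T @ G @ V, V.T @ A @ V

K = (M + m)**2 / (4 * M * m)
B = np.linalg.inv(PhiA)
diff = K**2 * B @ B - PhiG @ PhiG                  # squared inequality, p = 2
diff = 0.5 * (diff + diff.T)                       # symmetrize rounding noise
assert np.linalg.eigvalsh(diff).min() > -1e-10
print("Phi(G)^2 <= K^2 Phi(A)^{-2} holds for this instance")
```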
\medskip
{\bf Acknowledgements.} We would like to thank the referee(s) for carefully reading our manuscript and for giving such constructive comments which substantially helped to improve the quality of the paper.
\bibliographystyle{alpha}
|
2204.09404
|
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{U}{nderstanding} human visual attention
has been an active research area
for decades. A plethora of works have been devoted to analyzing human attention when viewing content in different
disciplines, including computer vision, graphics, neuroscience, and psychology.
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/teaser.jpg}
\caption{Overview of this work. We present a convolutional recurrent approach to scanpath prediction. Our model relies on Bayesian deep learning, and a novel spatialized representation of scanpaths. We propose a novel spatio-temporal loss function based on a combination of the Kullback-Leibler divergence and dynamic time warping, and predict time-evolving scanpath probabilistic maps (tSPM), well suited to the stochastic nature of human scanpaths. We evaluate our model and show how our generated scanpaths maintain the spatial and temporal characteristics of human scanpaths, while outperforming previous state-of-the-art approaches. An overview of the model itself is shown in Figure~\ref{fig:OurModel}.}
\label{fig:teaser}
\end{figure*}
However, and regardless of the medium, gathering sufficiently large amounts of data to perform behavioral studies is a cumbersome and time-consuming task. Being able to generate virtual observers that mimic this attention process would greatly facilitate such studies, thus helping achieve more significant advances in the field.
Many works have focused on, given an image, predicting where the attention of the observer is going to be directed to.
Traditionally, the problem has been tackled through spatial, bottom-up analyses of the image, leading to the determination of salient areas represented as \emph{saliency maps}: topographical representations that encode, as a scalar quantity, the conspicuity (i.e., saliency) at every location in the visual field~\cite{itti1998model, itti2001computational}.
Although this may suffice for certain applications, saliency maps fail to capture the temporal dimension of gaze.
This temporal information is relevant in a variety of scenarios: How long does it take for an observer to find a specific object in an image? How should one design the layout of a 360º environment or scene? Will some distractor drive attention away from the main focal point, and if so, how and to what extent? Current application areas where predicting this temporal dimension is relevant
range from marketing and product placement, webpage design, or scene design, to the analysis of visual pathologies and realistic eye motion simulation (e.g., for avatar animation).
To take into account this temporal information, a number of works have tackled the problem of \emph{scanpath} prediction~\cite{kummerer2021state}. A scanpath can be defined as a sequence of consecutive eye movements (i.e., fixations and saccades) through time and space~\cite{goldberg2010visual}. %
Gaze behavior is a complex phenomenon which involves spatio-temporal dependencies~\cite{kapoula2021influence, martin2021scangan360},
as well as a large inter- and intra-observer variability~\cite{le2016introducing, judd2009learning}. %
When attempting to model the temporal dimension of gaze, the problem is often posed as follows: given an input image $I$, and a sequence of gaze points $\{s_0,...,s_{t-1}\}$, the goal is to predict the next gaze point in the scanpath, $s_t$. Previous works either resort to heuristics and hand-crafted features~\cite{lemeur2015saccadic, tatler2009prominence}, or to data-driven methods~\cite{sun2019visual, bao2020human} to do this. %
However, most of the existing methods are designed to optimize the prediction of a \emph{single} fixation point, given the previous points; thus the scanpath is progressively built by concatenating successive single-point solutions. While this strategy is useful for several applications such as foveated rendering~\cite{arabadzhiyska2017saccade, nguyen2018your}, it may lead to increasing deviations from actual human viewing behavior and scanpath plausibility~\cite{fahimi2020metrics}.
In this paper we present a method to predict \textit{full, plausible} scanpaths
given an input image $I$ (see Figure~\ref{fig:teaser}).
We leverage the fact that, despite the inter- and intra-observer variability, common patterns and behaviors do emerge when humans observe certain content~\cite{ellis1985patterns}. This allows us to obtain not a single scanpath, but a \textit{distribution} of scanpaths within this common behavioral space. This distribution can then be sampled to generate individual scanpaths.
We rely on convolutional long-short term memory networks (ConvLSTM)~\cite{xingjian2015convolutional}, since their recurrent architecture is well suited to capture the temporal dependency of each predicted point in a scanpath, while their convolutional nature has proven to be successful handling problems with both spatial and temporal dependencies.
To obtain the distribution of plausible scanpaths we explicitly incorporate the inherent uncertainty of the problem into our model: Our ConvLSTM module is, for the first time, based on Bayesian deep learning, so that its weights are not deterministic, but sampled from a learned distribution instead. In addition, our network is trained using a novel spatio-temporal loss function that combines the benefits of the Kullback-Leibler divergence and dynamic time warping (DTW) for joint spatio-temporal optimization.
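As a reference for the temporal term of this loss, the classic (non-differentiable) dynamic-time-warping recursion between two scanpaths can be sketched as follows; this is only an illustration of the underlying distance, not the training loss itself, which combines a DTW term with the Kullback-Leibler divergence as described above:

```python
import numpy as np

def dtw(P, Q):
    """Plain dynamic time warping between two point sequences
    (e.g. scanpaths), with Euclidean ground cost."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(P[i - 1] - Q[j - 1])
            # best of match / insertion / deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Repeated fixations are absorbed by the warping path:
print(dtw([(0, 0), (1, 0)], [(0, 0), (0, 0), (1, 0)]))  # 0.0
```

DTW is invariant to local time shifts, which is why it is a natural fit for comparing scanpaths of different observers.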
Our resulting trained model is able to generate a distribution of plausible scanpaths for a given input image, where each scanpath mimics the visual behavior of a human observer and takes less than one second to generate. We have validated our model both qualitatively and quantitatively, including an exhaustive set of existing metrics accounting for different scanpath characteristics~\cite{fahimi2020metrics}. Our model outperforms the state of the art, being almost on par with the human baseline. %
We will make our code and model publicly available to encourage future research.
\input{sections/RelatedWork.tex}
\input{sections/Model.tex}
\input{sections/Evaluation.tex}
\input{sections/Conclusions.tex}
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (project CHAMELEON, Grant No 682080). This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956585. This project was also supported by a 2020 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation (the BBVA Foundation accepts no responsibility for the opinions, statements and contents included in the project and/or the results thereof, which are entirely the responsibility of the authors). This work has also received funding from Spain's Agencia Estatal de Investigación (project PID2019-105004GB-I00). Additionally, Daniel Martin was supported by a Gobierno de Aragon (2020-2024) predoctoral grant.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Conclusion}
\label{sec:Conclusion}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/failureCase.jpg}
\caption{When the visual features of an image are complex or too abstract, our model's performance diminishes: It sometimes fails to localize the areas of interest, or remains within the same areas for several fixations. Feeding our model with larger datasets with more varied images would probably make it more robust to these cases.}
\label{fig:Failure}
\end{figure}
We have presented a novel method for scanpath prediction in 2D images. We introduce a spatial scanpath representation that facilitates learning the spatial features of images, together with a loss function tailored to the spatio-temporal particularities of scanpaths, based on a combination of dynamic time warping and the Kullback-Leibler divergence. This allows our model to predict scanpaths that mimic human viewing patterns. We have evaluated our model and compared it to state-of-the-art methods on a large set of metrics that analyze different aspects of scanpaths. Our model outperforms previous approaches, while generating a scanpath in less than a second.
\bigbreak
\emph{Limitations and future work.} Our work is not free from limitations, and it opens interesting avenues for future research. When the visual features of the image are too abstract or complex, the performance of our model decreases (see Figure~\ref{fig:Failure}). We computed the same set of metrics as in Section~\ref{sec:evaluation} and found that, for some particularly complex cases, our metrics are closer to the random baseline than to the human baseline.
We hypothesize that using a larger dataset and more ground-truth data would ameliorate this. In addition, adding more priors may be helpful, although finding out what priors would apply to the most complex cases is still an open problem.
Additionally, exploring the impact of the \textit{duration} of the fixations could further enhance our model's performance. Last, our model assumes no prior knowledge or task-oriented scenarios when viewing the images; it would be interesting to devise variations of our model for such particular cases.
\section{Evaluation}
\label{sec:evaluation}
We validate the quality of our scanpaths against measured, ground-truth scanpaths, as well as against other existing scanpath prediction methods.
Similar to recent work on scanpath generation~\cite{martin2021scangan360}, we rely on the comprehensive set of metrics proposed by Fahimi and Bruce~\cite{fahimi2020metrics}, which include string alignment, curve similarity, time-series analysis, and recurrence analysis. We refer the reader to the original publication for further details on the metrics. In addition, we also analyze the performance of our method against ground-truth data for spatial convergence and saliency, inter-observer variability, and fixation prediction.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/Saliency.jpg}
\caption{We evaluate the spatial convergence of our predicted scanpaths: Our model generates scanpaths that focus on salient regions, and whose aggregation closely resembles ground-truth saliency maps.}%
\label{fig:ConvSaliency}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/ThEffect.jpg}
\caption{We evaluate the ability of our model to generate scanpaths with higher variability. Our model has a stochastic nature, and generates scanpaths by probabilistically sampling the predicted tSPM (see Section~\ref{subsec:Model-OurModel}). Any point with a probability below a specified threshold $th$ will be discarded before sampling. By lowering this value, points with lower probability can be sampled, leading to scanpaths that can focus on regions further from the main regions of interest, yielding a more exploratory behavior. Nevertheless, even when reducing this threshold, our scanpaths remain stable in quantitative evaluations (see the last two rows of Table~\ref{tab:ComparisonMetrics}).}
\label{fig:DiffThresholds}
\end{figure*}
\subsection{Comparison to Other Approaches}
\label{subsec:EvalComparison}
Following previous literature in scanpath prediction~\cite{sun2019visual}, we generate ten scanpaths per image for our test set with each of the methods we are comparing against: Itti et al.'s~\cite{itti1998model}, LeMeur et al.'s~\cite{lemeur2015saccadic}, and IOR-ROI~\cite{sun2019visual}. Scanpath length is determined by the mean length of ground-truth data~\cite{sun2019visual}. An illustrative qualitative comparison can be seen in Figure~\ref{fig:GridComparison}. Since some of the models~\cite{itti1998model, sun2019visual} are based on biological mechanisms such as inhibition of return, fixations do not remain in the same region, leading to unnatural scanpaths. %
Our model and LeMeur et al.'s~\cite{lemeur2015saccadic} produce scanpaths that more closely resemble the ground truth. However, our work does not depend on saliency as a proxy, and does not require a module devoted to its prediction. This makes it more general and suitable for data for which ground-truth saliency is not available.
Table~\ref{tab:ComparisonMetrics} shows the comparisons with quantitative metrics~\cite{fahimi2020metrics}. For reference, we also include a human baseline (Human BL)~\cite{xia2019predicting}, computed by evaluating the same metrics for all the ground-truth scanpaths, plus a random baseline (Random BL) generated from random scanpaths. Our model yields the best results in eight of the ten cases, and comes second in the remaining two.
Additional qualitative results can be found in the supplementary material.
\subsection{Spatial convergence and saliency}
\label{subsec:EvalSaliency}
To evaluate the spatial convergence of our predicted scanpaths, we compare saliency maps. We compute such maps by aggregating multiple scanpaths into a heatmap; we then compare them against the ground-truth saliency maps computed from real observers' data. As can be seen in Figure~\ref{fig:ConvSaliency}, our generated scanpaths lead to predicted saliency maps that closely resemble the ground truth.
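A minimal sketch of this aggregation step is shown below; the function name, kernel width, and normalization are our own choices (the paper does not specify them), and a Gaussian blur of the fixation histogram is a common way to build such heatmaps:

```python
import numpy as np

def saliency_from_scanpaths(scanpaths, H, W, sigma=15):
    """Aggregate fixations from several scanpaths into a heatmap,
    then blur with a Gaussian kernel (hypothetical sketch)."""
    heat = np.zeros((H, W))
    for sp in scanpaths:
        for x, y in sp:                       # fixations as (x, y) pixel coords
            heat[int(y), int(x)] += 1.0
    # Separable Gaussian blur, truncated at 3 sigma
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1)**2 / (2 * sigma**2))
    k /= k.sum()
    heat = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 0, heat)
    heat = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 1, heat)
    return heat / heat.max() if heat.max() > 0 else heat
```

The resulting map can then be compared against ground-truth saliency with the usual saliency metrics.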
\subsection{Scanpath variability}
\label{subsec:EvalScanpathVar}
We generate our scanpaths by sampling our %
generated tSPM (Section~\ref{sec:model}). The inherent variability between different observers is modeled by the parameter $th$: higher values lead to more concentrated scanpaths, while lower ones allow our scanpaths to simulate more exploratory visual behaviors. Figure~\ref{fig:DiffThresholds} illustrates this. In addition, we have conducted a quantitative analysis (see the last two rows of Table~\ref{tab:ComparisonMetrics}) showing that, even when eliciting a more exploratory behavior by decreasing $th$, our scanpaths still outperform previous approaches and remain close to the human baseline.
\subsection{Step-wise fixation prediction}
\label{subsec:EvalStepWise}
As mentioned in Section~\ref{sec:related}, most existing works take an incomplete scanpath as input, and predict the \textit{next} fixation point. They thus build each scanpath progressively, usually by
optimizing only the prediction of that last point (e.g., by means of MAE~\cite{hu2020dgaze} or MSE~\cite{nguyen2018your} losses).
Although this approach neglects the plausibility of the full scanpath as a whole, it may be useful in some cases. Our proposed spatio-temporal loss and probabilistic framework also offer a precise alternative in these situations.
Table~\ref{tab:pointwise} shows quantitative results for paths of varying lengths: $\ocircle$ represents points from the ground-truth scanpath fed to our network, while $\times$ represents points predicted with our model.
Our method produces plausible results from a single ground-truth point, and very quickly approximates the human baseline with only four.
\begin{table}
\centering
\caption{We have evaluated the ability of our model to complete scanpaths given ground-truth prefixes of different lengths: $\ocircle$ represents points from the ground-truth scanpath, and $\times$ points predicted with our model. Each row is computed by completing every scanpath in our test set ten times (see Section~\ref{subsec:EvalStepWise}). The first and last rows show a random baseline and the human baseline, respectively.}
\label{tab:pointwise}
\arrayrulecolor{black}
\begin{tabular}{c|cccc}
Scanpath & SCAM$\uparrow$ & HAU$\downarrow$ & fDTW$\downarrow$ & REC$\uparrow$ \\
\hline
Random BL & 0.21 & 192.96 & 703.19 & 0.72 \\
\arrayrulecolor[rgb]{0.753,0.753,0.753}\hline
$\ocircle \times \times \times \times$ & 0.42 & 131.02 & 421.72 & 5.16 \\
$\ocircle \ocircle \times \times \times$ & 0.44 & 129.52 & 407.57 & 6.32 \\
$\ocircle \ocircle \ocircle \times \times$ & 0.46 & 128.81 & 393.83 & 6.99 \\
$\ocircle \ocircle \ocircle \ocircle \times$ & 0.47 & 129.03 & 381.93 & 7.24 \\
\hline
Human BL & 0.49 & 126.73 & 387.75 & 7.17
\end{tabular}
\arrayrulecolor{black}
\end{table}
\section{Our Model}
\label{sec:model}
Our model performs probabilistic scanpath prediction given a single 2D image as input. The model, based on recurrent neural networks, is described in detail in this section:
we introduce the representation we employ for the scanpaths (Section~\ref{subsec:Model-ScanpathParameterization}), a novel loss function that is able to optimize our scanpaths in a joint spatio-temporal fashion (Section~\ref{subsec:Model-LossFunction}), our model architecture in depth (Section~\ref{subsec:Model-OurModel}), and additional details on our training data and procedure (Section~\ref{subsec:Model-TrainingDetails}).
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/SampleGaussian.jpg}
\caption{An example of our spatialized scanpath representation. The top left image shows an RGB image with a sample ground-truth scanpath overlaid.
For that sample ground-truth scanpath, we transform each fixation point into a Gaussian map centered at that particular point (see Section~\ref{subsec:Model-ScanpathParameterization}).}
\label{fig:ExampleGaussianRepresentation}
\end{figure}
\begin{figure*}[th!]
\centering
\includegraphics[width=\linewidth]{figures/OurModel.jpg}
\caption{Overview of our model. We represent our scanpaths as Gaussian maps to enhance spatial learning (see Section~\ref{subsec:Model-ScanpathParameterization}). We leverage a pretrained VGG19 and a pretrained semantic segmentation network based on ResNet50 to extract meaningful visual features from the image. We then combine those features and the spatialized scanpath with a CoordConv layer to facilitate the learning of spatial features. This input is fed to a multi-layer convolutional LSTM module that iterates over the scanpath and predicts the next fixation,
until a scanpath of the target length is generated. We compute a loss based on dynamic time warping and Kullback-Leibler divergence (see Section~\ref{subsec:Model-LossFunction}) to optimize our model in a joint spatio-temporal fashion. The outcome of our model is a tSPM sequence (see also Figure~\ref{fig:TemporalEvolution}).}
\label{fig:OurModel}
\end{figure*}
\subsection{Scanpath Representation}
\label{subsec:Model-ScanpathParameterization}
Scanpaths are commonly defined as a sequence $s = \{s_0, s_1, \ldots, s_{N-1}\}$ of gaze points\footnote{In our case, and following common practice~\cite{bao2020human, sun2019visual, kummerer2016deepgaze}, the points in a scanpath correspond to fixation points (i.e., we do not attempt to predict saccades and other ocular movements).}, where $s_i = (x_i,y_i)$, and $(x_i,y_i)$ are the image coordinates of that particular gaze point.
While this representation may suffice in some cases~\cite{fan2017fixation, zemblys2019gazenet, assens2018pathgan, martin2021scangan360}, it usually falls short for problems where it is necessary to establish a relationship between those coordinates and the position of features within an image. Indeed, convolutional networks are trained to be shift-invariant~\cite{liu2018coordconv}, and forcing them to explicitly learn the relation between gaze point coordinates and the actual positions of image features is challenging and hinders the training process.
Additionally, scanpaths for a given image exhibit both inter- and intra-observer variability. Not all observers explore the image in exactly the same way, resulting in inter-observer variability. Moreover, an observer viewing the same image twice may follow slightly different scanpaths, and, even if asked to follow a certain path, there is a ballistic or noisy component in ocular movements (e.g., saccades or post-saccadic oscillations~\cite{larsson2013detection}) that results in different gaze points. As a result, scanpaths are non-deterministic. However, they do exhibit clear patterns across and within observers, as multiple works have shown~\cite{lemeur2015saccadic}.
Given this variability, and in order to facilitate the spatial learning of the network, %
instead of representing each gaze point with its coordinates, $s_i = (x_i,y_i)$,
a more adequate representation for $s_i$ is a Gaussian distribution $g^s_i$ centered at $(x_i,y_i)$ and defined over the whole image. Each distribution $g^s_i$ thus contains a value $g^s_i(x,y)$ per pixel $(x,y)$, which represents the probability of a gaze point falling at pixel $(x,y)$ at time step $i$. A scanpath $s$ is therefore represented as a sequence $g^s = \{g^s_0, g^s_1, \ldots, g^s_{N-1}\}$ of Gaussian maps $g^s_i$ (see Figure~\ref{fig:ExampleGaussianRepresentation}); we term a scanpath represented in this way a \emph{spatialized} scanpath. This representation facilitates spatial learning by providing a direct correlation between a scanpath and its corresponding image.
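A minimal sketch of this spatialized representation follows; the Gaussian width `sigma` is an illustrative choice (in practice it would be tuned, e.g., to the angular precision of the eye tracker):

```python
import math

def gaussian_map(x0, y0, width, height, sigma=1.5):
    """Spatialized representation of one fixation: an isotropic Gaussian
    centered at (x0, y0), normalized to sum to 1 over the image.
    sigma is an illustrative choice."""
    g = [[math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
          for x in range(width)]
         for y in range(height)]
    total = sum(sum(row) for row in g)
    return [[v / total for v in row] for row in g]
```

A scanpath is then simply the list of such maps, one per fixation.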
\subsection{Overview of the Model}
\label{subsec:Model-Overview}
Our model, illustrated in Figure~\ref{fig:OurModel}, is based on the recently presented ConvLSTM~\cite{xingjian2015convolutional}, a type of recurrent neural network. %
ConvLSTMs maintain the recurrent nature of traditional LSTMs, processing data in a sequential manner, thus being able to learn the temporal features of the data. Additionally, ConvLSTMs are provided with convolutional operators that handle visual information and facilitate learning spatial features in the input sequence.
Further, we resort to a Bayesian approach when modeling the ConvLSTM module, %
in order to better incorporate the uncertainty driven by inter- and intra-observer variability: The output of the ConvLSTM module is not a point, but rather a probability map (see Figure~\ref{fig:OurModel}). Our whole model therefore predicts, given an input image, a sequence of time-evolving scanpath probabilistic maps (tSPM) (see Figure~\ref{fig:TemporalEvolution}). Each tSPM represents the probabilities of the next gaze fixation point falling on each pixel of the image at a certain time instant.
We additionally leverage neural networks pretrained on image classification tasks to facilitate feature extraction, and CoordConv layers to improve the learning of spatial features. The details of our model architecture are described in Section~\ref{subsec:Model-OurModel}.
\subsection{Loss Function}
\label{subsec:Model-LossFunction}
Our spatialized scanpath representation facilitates working over the spatial component of the scanpaths, as explained in Section~\ref{subsec:Model-ScanpathParameterization}. Both the spatial and temporal domains are critical when predicting gaze points. Recurrent neural networks (RNNs) have proven to be powerful tools for handling temporal dependencies in data, extracting, maintaining, and even inferring patterns through time, and have thus been successfully used in several approaches for gaze prediction (see Section~\ref{sec:related}). However, all those approaches design their RNN-based models to optimize the prediction at each time step, with element-wise loss functions, such as mean squared error (MSE) or binary cross-entropy (BCE), that penalize the prediction for \emph{each point} in isolation. %
In contrast, we propose a novel loss function based on the Kullback-Leibler divergence and dynamic time warping, computed over the whole scanpath. The former allows our model to account for the spatial relations between gaze points, while the latter ensures a realistic and plausible temporal behavior of the predicted scanpaths.
\bigbreak
\emph{Kullback-Leibler Divergence (KL-Div)} KL-Div is a measure of how different a probability distribution is from another one, and is one of the most commonly used metrics and losses in saliency prediction problems~\cite{zhang2020spatial, qiao2020viewport, palazzi2018predicting, wu2020salsac}. The Kullback-Leibler divergence ($D_{KL}$) is defined as:
\begin{equation}
D_{KL} (P||Q) = \sum_{j} P(j) \ln\frac{P(j)}{Q(j)} ,
\label{eq:KL}
\end{equation}
\noindent where $P$ and $Q$ are the probability distributions to be compared, and $j$ refers to each point
of the distribution. In our particular case, each gaze point is represented in a spatialized manner; hence, KL-Div provides a quantitative measure of how different two gaze points are, based on their probability distributions $P$ and $Q$.
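For discrete probability maps, Eq.~\ref{eq:KL} can be sketched as follows; the small epsilon added to both distributions to avoid taking the logarithm of zero is a common numerical safeguard, not part of the formal definition:

```python
import math

def kl_div(p, q, eps=1e-8):
    """D_KL(P||Q) between two discrete probability maps of equal shape
    (lists of rows). eps avoids log(0) for empty bins."""
    return sum(pv * math.log((pv + eps) / (qv + eps))
               for prow, qrow in zip(p, q)
               for pv, qv in zip(prow, qrow))
```

As expected, the divergence is zero when both maps coincide and strictly positive otherwise.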
\bigbreak
\emph{Dynamic Time Warping (DTW)} DTW is a measure of similarity between two time series that may differ in length or speed~\cite{muller2007dynamic}. The DTW algorithm attempts to find the optimal match between the points of two temporal sequences, $r$ and $s$, by matching each point in one of them with at least one point in the other, without forcing a one-to-one correspondence between both sequences. The optimal match is found by minimizing a cost function: a distance matrix $\Delta$ stores the cost (Euclidean distance) for each possible pair of points, and the optimization searches for the matching (alignment) between $r$ and $s$ such that the total cost is minimized. This can be written as:
\begin{equation}
\label{eq:dtw_loss_1}
DTW(r, s) = \underset{A}{\min}
\langle A, \Delta(r, s) \rangle,
\end{equation}
\noindent where $A$ is a binary alignment matrix between two time series $r$ and $s$, $\Delta(r,s) = [\delta(r_i,s_j)]_{i,j}$ is a matrix containing the distances $\delta(\cdot , \cdot )$ between each pair of points in $r$ and $s$, and $\langle \cdot , \cdot \rangle$ denotes the inner product between both matrices.
Since the minimum function is not differentiable, a soft version has been proposed~\cite{cuturi2017soft}:
\begin{equation}
\label{eq:dtw_loss}
DTW^{\gamma}(r, s) = \underset{A}{\min}^{\gamma}
\langle A, \Delta(r, s) \rangle, \quad \gamma > 0
\end{equation}
\noindent The soft-min function $\min^{\gamma}$ is defined as:
\begin{equation}
\label{eq:dtw_loss_min}
{\min}^{\gamma}(a_1,\ldots,a_N) = -\gamma \log\sum_{i=1}^N \exp\left(-\frac{a_i}{\gamma}\right) ,
\end{equation}
\noindent with the $\gamma$ parameter adjusting the similarity between the soft version and the original DTW algorithm; the soft version recovers the original DTW in the limit $\gamma \to 0$.
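The behavior of the soft-min in Eq.~\ref{eq:dtw_loss_min} can be checked numerically; as the sketch below illustrates, it lower-bounds the hard minimum and converges to it as $\gamma$ shrinks:

```python
import math

def soft_min(values, gamma):
    """Soft-min from the soft-DTW formulation; differentiable, and
    approaches min(values) as gamma approaches 0."""
    return -gamma * math.log(sum(math.exp(-a / gamma) for a in values))
```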
Eq.~\ref{eq:dtw_loss} has been used successfully as a loss term in related contexts, such as scanpath generation for virtual reality~\cite{martin2021scangan360} or weakly supervised action alignment and segmentation in videos~\cite{Chang_2019_CVPR}.
\bigbreak
\emph{Our Joint KL-DTW Loss} %
While KL-Div accounts for the spatial similarity of two distributions, and DTW focuses on the temporal dimension,
neither of them suffices on its own in our particular case. We therefore propose a novel loss function combining KL-Div and DTW, defined as follows:
\begin{equation}
\mathcal{L}_{KL-DTW}(r') = \frac{1}{|S|} \sum_{s \in S} DTW^{\gamma}(r',s) ,
\end{equation}
\noindent where $r'$ is a predicted sequence of tSPM (see Section~\ref{subsec:Model-Overview}), and $s$ is a ground-truth scanpath from the set $S$ of ground-truth scanpaths for a given image $I$. $DTW^{\gamma}$ is computed as given by Eq.~\ref{eq:dtw_loss}. However, we modify the computation of the distance matrix $\Delta(r',s) = [\delta(r'_i,s_j)]_{i,j}$ such that, instead of $\delta$ being a Euclidean distance, we have:
\begin{equation}
\delta(r'_i,s_j) = D_{KL} (r'_i||g^s_j),
\label{eq:delta_maps}
\end{equation}
\noindent where $r'_i$ is the $i^{th}$ predicted tSPM, $g^s_j$ is the spatialized representation of point $s_j$ as described in Section~\ref{subsec:Model-ScanpathParameterization}, and $D_{KL}$ is the Kullback-Leibler divergence (Eq.~\ref{eq:KL}).
This formulation allows our model to be optimized to find an alignment that minimizes both the \emph{spatial} and the \emph{temporal} differences between each predicted scanpath and the ground-truth ones, therefore predicting scanpaths that follow a similar distribution as the ground truth. To our knowledge, we are the first to propose such a combination of metrics.
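Putting both pieces together, the following sketch computes the KL-DTW loss on flattened probability maps: pairwise costs are KL divergences, accumulated with the standard soft-DTW recursion and averaged over the ground-truth set. This is a didactic pure-Python version; a practical implementation would use batched, differentiable tensor operations.

```python
import math

def kl_div(p, q, eps=1e-8):
    # D_KL(P||Q) over flattened probability maps.
    return sum(pv * math.log((pv + eps) / (qv + eps)) for pv, qv in zip(p, q))

def soft_min(values, gamma):
    # Differentiable soft minimum; exp(-inf) evaluates to 0.0 in Python.
    return -gamma * math.log(sum(math.exp(-a / gamma) for a in values))

def soft_dtw_kl(pred, gt, gamma=1.0):
    """Soft-DTW over a cost matrix whose entries are KL divergences
    between predicted maps and spatialized ground-truth fixations.
    pred and gt are lists of flattened probability maps."""
    n, m = len(pred), len(gt)
    delta = [[kl_div(pred[i], gt[j]) for j in range(m)] for i in range(n)]
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = soft_min([dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1]],
                            gamma)
            dp[i][j] = delta[i - 1][j - 1] + best
    return dp[n][m]

def kl_dtw_loss(pred, gt_set, gamma=1.0):
    # Average soft-DTW cost over all ground-truth scanpaths for the image.
    return sum(soft_dtw_kl(pred, s, gamma) for s in gt_set) / len(gt_set)
```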
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/EvolutionConvLSTM_short.jpg}
\caption{We show, for two test images, the evolution of our predicted time-evolving scanpath probabilistic maps (tSPM, see Section~\ref{subsec:Model-Overview}) over time. Our Bayesian approach enables a probabilistic selection of the next fixation and makes our model stochastic, while the ConvLSTM module allows taking previous fixations into account (see Section~\ref{subsec:Model-OurModel}).}
\label{fig:TemporalEvolution}
\end{figure}
\bigbreak
\emph{Bias Regularization Term} Human gaze data in 2D images is known to be strongly biased towards the center of the images~\cite{lemeur2015saccadic}. Although inherent to human nature, such bias hinders the learning process of the network, which can easily overfit to that behavior. Based on this, we include a regularization loss term that penalizes scanpaths whose points tend to stay in the center of the image for a long time, hence eliciting a more exploratory behavior that better reflects ground truth data. Our regularization term is included in the pairwise cost computations $\delta(r'_i,s_j)$ for the distance matrix $\Delta$, modifying Eq.~\ref{eq:delta_maps} as follows: %
\begin{equation}
\delta(r'_i,s_j) = D_{KL} (r'_i||g^s_j) + \lambda_{CB} \, \mathcal{L}_{Reg}(r'_i)
\label{eq:delta_wReg}
\end{equation}
\begin{equation}
\mathcal{L}_{Reg}(r'_i) = \frac{1}{D_{KL}(r'_i||g_c)} ,
\label{eq:Reg}
\end{equation}
where
$g_c$ is a Gaussian map representing the aforementioned center bias, computed following the representation introduced in Section~\ref{subsec:Model-ScanpathParameterization} for a point $c$ in the center of the image.
In order to set the relative weight of the regularization term $\lambda_{CB}$, we analyzed the datasets used (see Section~\ref{subsec:Model-TrainingDetails}), and found that this center bias behavior diminishes over time, with fixations being more widely spread over the image in later time instants. We measured the standard deviation of fixation positions in the ground-truth data, and found them to increase logarithmically over time ($R^2 = 0.855$); we increase $\lambda_{CB}$ in the same way (see Figure~\ref{fig:Bias}).
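A sketch of this regularized pairwise cost, with a logarithmically increasing weight, follows; the `base` and `scale` constants are hypothetical, chosen only to illustrate the shape of the schedule, not our actual fitted values:

```python
import math

def lambda_cb(i, base=1.0, scale=1.0):
    """Illustrative center-bias weight: grows logarithmically with the
    fixation index i, mirroring the logarithmic growth of fixation
    spread found in the data. base and scale are hypothetical."""
    return base + scale * math.log(1.0 + i)

def regularized_cost(kl_pred_gt, kl_pred_center, i):
    """Pairwise cost plus center-bias penalty: the penalty is large when
    the prediction is close to the central Gaussian g_c (i.e., when
    kl_pred_center is small)."""
    return kl_pred_gt + lambda_cb(i) / max(kl_pred_center, 1e-8)
```

Predictions hugging the image center thus become increasingly expensive at later time steps, fostering exploration.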
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/bias_reg.jpg}
\caption{When observing images, there is a strong bias towards the center of the image~\cite{lemeur2015saccadic}. This bias can hinder the learning process of deep networks, with a significant risk of overfitting to that behavior. We have analyzed that bias in the datasets used in this work (Section~\ref{subsec:Model-TrainingDetails}), and found that this behavior diminishes over time. (a) Distribution of scanpath fixations in two different time instants for the whole training dataset. (b) We have computed the standard deviations of fixations (y-axis) over time (x-axis) for all the ground-truth scanpaths (in orange), and found they increase in a logarithmic fashion (the fitted curve is shown in purple). We introduce a regularization term in our loss function to foster a similar behavior in our predicted scanpaths (see Section~\ref{subsec:Model-LossFunction}).}
\label{fig:Bias}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/comparisons.jpg}
\caption{Comparison to other approaches for scanpath prediction. Each row shows representative scanpaths for a given image: ground truth, our method, the IOR-ROI model from Sun et al.~\cite{sun2019visual}, Itti et al.~\cite{itti1998model}, and LeMeur et al.~\cite{lemeur2015saccadic}. Both Itti et al.'s and Sun et al.'s methods sometimes fail to fixate on salient regions, leading to unnatural scanpaths. Unlike LeMeur et al.'s method, our model is able to focus on the regions of interest while maintaining a plausible exploration trajectory. %
Note that this figure only contains one sample scanpath per method for illustrative purposes. A more thorough analysis with quantitative metrics can be found in Table~\ref{tab:ComparisonMetrics}, and additional qualitative results can be found in the supplementary material.}
\label{fig:GridComparison}
\end{figure*}
\subsection{Model Architecture}
\label{subsec:Model-OurModel}
Our model features a recurrent neural network (RNN), which is able to extract and maintain temporal latent information from the scanpaths it is trained with. Specifically, we choose a convolutional long short-term memory (ConvLSTM) network~\cite{xingjian2015convolutional}, an adaptation of classic LSTMs to 2D data such as images. This type of network has proven effective in many different problems, such as weather forecasting~\cite{xingjian2015convolutional}, video saliency detection~\cite{song2018pyramid}, and medical image segmentation~\cite{azad2019bi}.
ConvLSTMs behave similarly to traditional LSTMs, operating over four different gates; however, since they handle spatial data, they perform convolutional operations rather than linear ones. The ConvLSTM used in this work\footnote{https://github.com/ndrplz/ConvLSTM\_pytorch} is defined as follows:
\begin{gather}
i_t = \sigma(Conv(x_t;w_{xi}) + Conv(h_{t-1};w_{hi}) + b_i)\nonumber\\
f_t = \sigma(Conv(x_t;w_{xf}) + Conv(h_{t-1};w_{hf}) + b_f)\nonumber\\
o_t = \sigma(Conv(x_t;w_{xo}) + Conv(h_{t-1};w_{ho}) + b_o)\nonumber\\
g_t = Tanh(Conv(x_t;w_{xg}) + Conv(h_{t-1};w_{hg}) + b_g)\nonumber\\
c_t = f_t \odot c_{t-1} + i_t \odot g_t\nonumber\\
h_t = o_t \odot Tanh(c_t)
\label{eq:ConvLSTM}
\end{gather}
\noindent where $x_t$ is the input at a time step $t$, and $h_{t}$ is the hidden state of the network up to the current time step, which also serves as the output of the network. We refer the reader to the work of Xingjian et al.~\cite{xingjian2015convolutional} for an in-depth explanation of the ConvLSTM architecture.
In our particular case of scanpath prediction, there is a degree of stochasticity driven by the inter- and intra-observer variability. As a result, given a particular trajectory (sequence of gaze fixation points), instead of predicting the next point in a deterministic manner, we predict a probability distribution (i.e., the previously introduced tSPM). %
For this reason, we combine the aforementioned ConvLSTM with recently introduced Bayesian deep learning techniques~\cite{wang2016towards, kendall2017uncertainties}. Unlike traditional deep learning (DL), where the weights of the network are deterministic, in Bayesian DL the weights of a particular layer are sampled from a probability distribution, and thus the network itself can account for the inherent uncertainty of the data~\cite{kendall2017uncertainties}. During training, those weight distributions are also optimized. Thus, in our case, we substitute each convolutional operation in Eq.~\ref{eq:ConvLSTM} with a Bayesian 2D convolution. This way, our ConvLSTM is no longer deterministic, and is able to account for the stochastic component of scanpaths.
\begin{table*}[t!]
\centering
\caption{Results of our quantitative comparisons. We first include upper (human baseline, \emph{Human BL}) and lower (random scanpaths, \emph{Random BL}) baselines for reference. Then, we compare to Sun et al.'s IOR-ROI~\cite{sun2019visual}, Itti et al.~\cite{itti1998model}, and LeMeur et al.~\cite{le2016introducing}. We also compute our set of metrics varying our probabilistic threshold $th$ (see Section~\ref{subsec:Model-OurModel}). Arrows indicate whether higher or lower is better; boldface highlights the best result for each metric. Overall, our model ($th=0.7$) yields the best performance across metrics, closest to the human baseline. Moreover, as the last two rows show, decreasing the value of our parameter $th$ (thus leading to more variability in our scanpaths) still leads to good results (see Section~\ref{subsec:EvalScanpathVar} for details).}
\label{tab:ComparisonMetrics}
\arrayrulecolor{black}
\resizebox{\linewidth}{!}{%
\begin{tabular}{c!{\color{black}\vrule}cc|cc|cc|cccc}
\multicolumn{1}{l!{\color{black}\vrule}}{} & \multicolumn{2}{c|}{String alignment} & \multicolumn{2}{c|}{Curve similarities} & \multicolumn{2}{c|}{Time-series analysis} & \multicolumn{4}{c}{Recurrence analysis} \\
Model & LEV$\downarrow$ & SCAM$\uparrow$ & HAU$\downarrow$ & FRE$\downarrow$ & fDTW$\downarrow$ & TDE$\downarrow$ & REC$\uparrow$ & DET$\uparrow$ & LAM$\uparrow$ & CORM$\uparrow$ \\
\arrayrulecolor{black}\hline
Human BL & 10.77 (1.61) & 0.38 (0.06) & 95.97 (18.40) & 140.02 (26.16) & 550.84 (133.71) & 42.40 (8.45) & 6.69 (3.74) & 1.72 (1.51) & 6.09 (6.01) & 22.11 (7.41) \\
Random BL & 12.31 (0.88) & 0.20 (0.02) & 148.01 (13.76) & 199.30 (13.63) & 877.15 (71.66) & 69.87 (4.34) & 0.73 (0.34) & 0.02 (0.09) & 0.19 (0.25) & 3.79 (1.74) \\
\arrayrulecolor[rgb]{0.753,0.753,0.753}\hline
Ours & \textbf{11.47 (1.13)} & 0.34 (0.06) & \textbf{103.44 (27.13)} & \textbf{144.77 (32.77)} & \textbf{610.02 (155.96)} & 43.74 (10.25) & \textbf{3.52 (2.86)} & \textbf{0.64 (0.84)} & \textbf{5.05 (4.96)} & \textbf{13.95 (7.92)} \\
IOR-ROI & 13.26 (0.71) & 0.30 (0.05) & 115.50 (20.22) & 166.07 (21.69) & 777.75 (119.46) & 46.98 (7.18) & 1.80 (0.98) & 0.18 (0.31) & 0.81 (1.35) & 10.28 (4.43) \\
Itti et al. & 14.04 (0.80) & 0.23 (0.05) & 160.09 (29.31) & 207.97 (27.21) & 1041.16 (153.97) & 63.88 (9.54) & 1.02 (1.98) & 0.04 (0.22) & 0.62 (2.03) & 5.84 (6.00) \\
LeMeur et al. & 12.58 (0.78) & \textbf{0.35 (0.04)} & 104.84 (12.79) & 163.59 (20.52) & 669.67 (108.49) & 39.75 (6.53) & 2.39 (1.18) & 0.40 (0.48) & 2.09 (2.26) & 12.54 (4.45)\\
\arrayrulecolor[rgb]{0.753,0.753,0.753}\hline
Ours (th = 0.5) & 11.60 (0.98) & 0.33 (0.06) & 103.97 (23.23) & 149.37 (30.09) & 636.08 (146.44) & 45.46 (10.30) & 3.01 (2.28) & 0.50 (0.58) & 3.14 (2.85) & 12.96 (6.73) \\
Ours (th = 0.35) & 13.26 (0.71) & 0.30 (0.05) & 102.17 (19.77) & 149.99 (28.42) & 639.07 (138.27) & 45.77 (9.63) & 2.82 (2.09) & 0.44 (0.54) & 2.43 (2.59) & 12.88 (6.21) \\
\end{tabular}%
}
\arrayrulecolor{black}
\end{table*}
Instead of feeding our Bayesian ConvLSTM with the raw images, we preprocess them to facilitate the learning process and enhance our model's performance. For this, we first extract the main image features with a pretrained VGG-19~\cite{russakovsky2015imagenet, simonyan2014very}, and a semantic segmentation mask with a pretrained ResNet50~\cite{he2016deep}. We then convolve both of them together to obtain a final, comprehensive, single-channel image feature representation. At each time step, this feature representation is fed to the ConvLSTM along with (i) the corresponding Gaussian map representing a fixation, and (ii) a CoordConv layer~\cite{liu2018coordconv}. CoordConv layers have proven to ease spatial learning and facilitate network convergence. Different from previous approaches~\cite{itti1998model, le2016introducing, sun2019visual}, we do not resort to saliency for scanpath prediction, since saliency is an aggregated spatial notion that has lost the temporal information, and is not usually available as ground truth.
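Conceptually, a CoordConv layer appends two coordinate channels to its input, making pixel positions explicit to the convolutions; a sketch of their construction (normalizing coordinates to $[-1, 1]$, following the convention of the original CoordConv work) follows:

```python
def coord_channels(width, height):
    """Two extra input channels holding normalized x and y coordinates
    in [-1, 1], as appended by a CoordConv layer. Assumes width and
    height are both greater than 1."""
    xs = [[2.0 * x / (width - 1) - 1.0 for x in range(width)]
          for _ in range(height)]
    ys = [[2.0 * y / (height - 1) - 1.0 for _ in range(width)]
          for y in range(height)]
    return xs, ys
```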
With this input, our network is able to predict a tSPM (see Section~\ref{subsec:Model-Overview}). Then, to choose the actual pixel where the gaze point will fall, we follow a probabilistic weighted sampling strategy that again accounts for the stochastic nature of human visual exploration. We first discard all pixels with a probability lower than a threshold $th = 0.7$ (see Section~\ref{subsec:EvalScanpathVar} for additional evaluation on this), and then sample the next point based on the predicted map's probabilities. Once a point $s_i = (x_i,y_i)$ has been sampled, a Gaussian map centered at $(x_i,y_i)$ is again computed (see Section~\ref{subsec:Model-ScanpathParameterization}) and fed to the network for its subsequent predictions, until the whole scanpath is predicted.
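This sampling step can be sketched as follows; note that applying the threshold relative to the map's maximum is an assumption on our part (an absolute threshold on a normalized map would discard almost every pixel), made only for illustration:

```python
import random

def sample_fixation(prob_map, th=0.7, rng=random):
    """Sample the next fixation from a predicted probability map:
    discard pixels below th (relative to the map's peak -- an
    assumption), then draw one pixel with probability proportional
    to the remaining mass."""
    flat = [(x, y, v) for y, row in enumerate(prob_map)
            for x, v in enumerate(row)]
    peak = max(v for _, _, v in flat)
    kept = [(x, y, v) for x, y, v in flat if v >= th * peak]
    weights = [v for _, _, v in kept]
    x, y, _ = rng.choices(kept, weights=weights, k=1)[0]
    return x, y
```

Repeating this step, re-spatializing each sampled point, and feeding it back to the network yields the full predicted scanpath.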
Once the whole scanpath has been predicted, our novel KL-DTW loss (see Section~\ref{subsec:Model-LossFunction}) is computed between the ground truth and the sequence of tSPM, and the network is then optimized. A complete overview of our model can be seen in Figure~\ref{fig:OurModel}.
\subsection{Datasets and Training Details}
\label{subsec:Model-TrainingDetails}
Following previous work~\cite{sun2019visual}, we train our model on the OSIE dataset~\cite{xu2014predicting}, which contains 700 different images with the corresponding gaze information of fifteen observers, yielding a total of approximately 10,500 scanpaths. We again follow previous work~\cite{sun2019visual} and discard all scanpaths with $N < 4$, %
and generate scanpaths of length $N = 8$, which is the mean length of our ground-truth data.
To train our model, we preprocess each remaining scanpath to follow the representation introduced in Section~\ref{subsec:Model-ScanpathParameterization}. To validate our model, and again inspired by previous approaches~\cite{sun2019visual}, we use the MIT low resolution dataset~\cite{judd2011fixations}. Please refer to Section~\ref{sec:evaluation} for further details.
We trained our model using the Hydra~\cite{Yadan2019Hydra} and PyTorch Lightning~\cite{PytorchLightning} frameworks for PyTorch, logging and checkpointing all the necessary parameters to restore the training process at any point. We use the Adam optimizer~\cite{Adam} with a learning rate of $10^{-4}$ and a batch size of 1. We trained our model on an Nvidia RTX 2080 Ti with 11GB of VRAM until convergence, for a total of 22 hours.
\section{Related Work}
\label{sec:related}
\subsection{Saliency prediction} The first approaches to modeling human attention were based on saliency, a measure of how much each part of a scene attracts human attention. The seminal work by Itti et al.~\cite{itti1998model} established the basis of visual attention prediction in images by extracting hand-crafted features to generate a saliency map. This work inspired many subsequent approaches (e.g.,~\cite{saliencytoolbox, zhao2011saliency}), also based on the computation of \textit{conspicuity maps} for different visual features (such as color, intensity, edge orientation, or faces), which were then combined into a final saliency map. Other approaches included multiple semantic segmentation and surroundness analysis~\cite{lu2012cvpr}, or known human priors such as center bias or horizon line detectors~\cite{borji2012cvpr}, to improve saliency prediction.
With the proliferation of deep learning techniques and the appearance of public datasets~\cite{judd2009learning, mit-saliency-benchmark, yang2013saliency}, data-driven methods emerged, yielding impressive results. These methods were mostly based on convolutional neural networks (CNN) that extract latent features from which to infer saliency~\cite{Vig_2014_CVPR, kummerer2016deepgaze, Pan_2016_CVPR, martin20saliency}. Other approaches also leveraged the advances of generative networks~\cite{Pan_2017_SalGAN, xia2019predicting} and recurrent neural networks~\cite{cornia2018predicting, Wang_2018_CVPR}.
None of these works, however, takes into account the dynamic nature of gaze behavior, and thus they are unable to model the temporal dimension of human attention.
\subsection{Scanpath prediction}
Scanpath models usually aim to progressively build a scanpath by concatenating single-point predictions, which may be partially based on the previous points of the path. Ellis and Smith~\cite{ellis1985patterns} presented a framework based on Markov stochastic processes. Later, other works proposed approaches that incorporated known human biases (such as the center bias or human oculomotor constraints)~\cite{lemeur2015saccadic, tatler2009prominence, liu2013semantically, tavakoli2013stochastic}. Data-driven methods provide faster and more precise approaches, for instance using existing saliency prediction methods as a proxy for scanpath prediction by means of winner-takes-all and inhibition-of-return strategies, by sampling heuristics~\cite{assens2017saltinet}, or simply by leveraging deep features from neural networks~\cite{kummerer2016deepgaze}.
Scanpath prediction methods can be roughly categorized into (i) biologically inspired, (ii) statistically inspired, (iii) cognitively inspired, and (iv) engineered models~\cite{kummerer2021state}. Biologically inspired models take into account the importance of low-level features~\cite{itti1998model, tatler2017latest, zanca2019gravitational}, visual working memory~\cite{wang2011simulating}, attention and inhibition-of-return~\cite{engbert2015spatial}, or neuropsychology~\cite{adeli2017model}. Statistically inspired models try to mimic certain statistical properties of human scanpaths~\cite{boccignone2004modelling, sun2014toward, le2016introducing, clarke2017saccadic, xia2019predicting}. Cognitively inspired models assume that other cognitive processes besides low-level features can drive observers' attention, and therefore implement different human mechanisms such as low-level saliency, semantic and spatial effects~\cite{liu2013semantically} or region-of-interest and inhibition-of-return~\cite{sun2019visual}. Finally, engineered models just exploit the ability of data-driven techniques to fit to given data~\cite{chen2021predicting, assens2017saltinet, assens2018pathgan, bao2020human, hu2020dgaze}.
With this surge of data-driven approaches, and motivated by the temporal dependencies that human viewing behavior presents, some works have resorted to recurrent neural networks (RNN), which are capable of encoding previous information, and leveraging it to formulate a prediction~\cite{nguyen2018your}. However, scanpath prediction requires handling temporal and spatial information. To account for both, some recent approaches have built their models following ConvLSTM strategies~\cite{qiao2020viewport, li2019very, sun2019visual, xu2021spherical}, where convolutional operators handle spatial features while LSTM architectures enable learning temporal information.
However, all the aforementioned works are trained to optimize \textit{single-point} predictions, by means of direct losses such as MSE~\cite{assens2018pathgan} or BCE~\cite{sun2019visual}, and thus do not concern themselves with the plausibility of the scanpath as a whole. Recently, the work of Martin et al.~\cite{martin2021scangan360} presented a scanpath generation method for 360{$^\circ$} content, where the model was optimized by means of a dynamic time warping loss function on the whole distribution of ground-truth scanpaths, rather than on a single-point solution, and was hence able to learn and mimic latent behaviors in its predictions.
In this work, endorsed by previous literature, we resort to convolutional recurrent networks, but overcome the limitations of single-point prediction approaches by combining a probabilistic formulation with a novel loss function based on dynamic time warping and the Kullback-Leibler divergence. The loss function focuses on both the temporal and spatial aspects of whole scanpaths and optimizes our model over the full distribution of real scanpaths, while the probabilistic approach accounts for the inherent human variability.
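For intuition, a minimal sketch of the (hard) dynamic time warping distance between two scanpaths follows. This is an illustration only: a training loss would use a differentiable soft variant, and our actual loss also includes a Kullback-Leibler term not shown here.

```python
import math

def dtw(path_a, path_b):
    """Classical dynamic time warping distance between two scanpaths
    (sequences of (x, y) gaze points). The DP table D[i][j] holds the
    cost of the best alignment of the first i points of path_a with the
    first j points of path_b."""
    n, m = len(path_a), len(path_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(path_a[i - 1], path_b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW aligns points elastically in time, a scanpath that visits the same locations at a slightly different pace is not penalized as harshly as under a pointwise loss such as MSE.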
\paragraph{Gaze and viewport prediction in 360$^\circ$ content} With the proliferation of virtual reality and head-mounted displays (HMD), there has also been a body of literature on predicting users' viewing behavior, either in terms of generating whole trajectories~\cite{martin2021scangan360} or in terms of viewport prediction based on users' previous history.
Hu et al.~\cite{hu2020dgaze} proposed a model based on CNNs to encode the user's previous gaze information and predict the coordinates of their next gaze point.
Nguyen et al.~\cite{nguyen2018your} proposed a CNN saliency predictor for 360$^\circ$ content that they later used as a proxy, combined with LSTM networks, to predict the user's next head orientation.
Later on, many works have leveraged ConvLSTM strategies to tackle this problem~\cite{qiao2020viewport, li2019very, xu2021spherical}. Although our work focuses on 2D images, a vast literature endorses recurrent convolutional approaches, suggesting they are the best suited for this problem. Nonetheless, these works were also designed to optimize each point prediction rather than the whole generated scanpath, which again neglects the scanpath as a whole.
\section{Introduction}
\vspace{-2pt}
It is a great pleasure for me to dedicate this lecture to Art Cox whose
contributions to nonlinear pulsations go back to the very pioneering years of
numerical hydrodynamics some 35 years ago. It is fair to say that there
isn't a type of star that Art hasn't attempted to model.
It is of course impossible to cover in this short review all the topics of
interest to nonlinear stellar pulsations and I will have to make a selection
that necessarily reflects my biases. There has been a huge amount of work done
on stellar pulsations and I also apologize upfront for many important
omissions. Extensive references are provided in the excellent reviews of
Gautschy and Saio (1995, hereafter GS).
There are basically two approaches to nonlinear stellar pulsations that are
complementary in some ways. The first, and oldest, is numerical hydrodynamics.
While this is a 'brute force' approach, it has the advantage that with
state-of-the-art physical input and numerical methods it can yield accurate
information about the nonlinear pulsations of individual stellar models. The
second approach is the amplitude equation formalism, and it is perhaps of a
more fundamental nature (e.g. Buchler 1988). It gives a broad overview of the
possible behavior, such as modal selection (bifurcations in modern language)
and the effects of resonances (Buchler 1993). We note that currently this is
the {\sl only} tool with which we can understand nonlinear {\sl nonradial}
pulsations.
About 25 years ago John Cox (1975) felt that "overall, pulsation theory and
its applications are in a fairly satisfactory state, except for a few
disturbing problems". In the intervening time, of course, a good deal of
progress has been made. However, there remain some "disturbing problems", and
in fact some new ones have appeared recently.
Perhaps the most important progress came from outside pulsation theory, namely
from a revision of the stellar opacities (Iglesias \& Rogers 1992, Seaton,
Kwan, Mihalas \& Pradhan 1994), and it essentially solved two longstanding
Cepheid problems. First, the so-called Cepheid bump mass problem (e.g. Art Cox
1980) essentially disappeared and the agreement with observation appears now to
be quite satisfactory for the Galactic Cepheids. (Moskalik, Buchler \& Marom
1992, Kanbur \& Simon 1993). Second, the beta Cephei models finally became
linearly unstable (cf. GS).
The very first numerical Lagrangean hydrodynamical computations of
\hyphenation{Ceph-eid} Cepheid variables were already apologetic about the poor
resolution of the partial hydrogen ionization front during the pulsation. In
the late 70s Castor, Davis \& Davison (1977) developed the first code that
could track the moving sharp features and Aikawa \& Simon (1983) started to use
it systematically. More recently, taking advantage of new developments in
computational physics and of faster computers, several groups have developed
more flexible adaptive codes (Gehmeyr \& Winkler 1992, Dorfi \& Feuchtinger
1991, Buchler, Kollath \& Marom 1997). With these codes it is now possible to
obtain a good spatial resolution that satisfactorily resolves all shocks and
ionization fronts and achieves a much enhanced numerical accuracy of the
pulsation. The most striking improvement is in the smoothness of the
lightcurves and radial velocity curves. Fortunately, though, we do not have to
discard the older Lagrangean results, since quantities such as the Fourier
decomposition parameters are not substantially different from those obtained
with Lagrangean codes.
However, instead of congratulating ourselves on these and other successes,
it is perhaps more useful to dwell on the "disturbing problems".
\vspace{-8pt}
\section{'Disturbing Problems'}
\vspace{-5pt}
\subsection{Low metallicity Cepheids:}
\vspace{-2pt}
The microlensing projects have provided us with a large treasure trove of data
on variable stars in the Magellanic Clouds. Since these galaxies have been
found to be metal deficient compared to the Galaxy, the new observations have
considerably enlarged our data base, especially for the Cepheids, which should
be strongly affected by metallicity content.
The Fourier decomposition parameters of the fundamental Cepheid
\hyphenation{light-curves} lightcurves in the SMC (Beaulieu \& Sasselov 1997)
and the LMC (Beaulieu et al. 1995, Welch et al. 1995) indicate that the
$\phi_{21}$ phase progression is very similar to that of the Galaxy, although
it may be shifted by $\pm$ 1 or perhaps 2 days in period. However, the size of
the excursion in $\phi_{21}$ in the resonance region is essentially the same as
in the Galaxy.
A comparison of the {\sl linear} bump Cepheid models with mass--luminosity
relations derived from evolutionary computations reveals an irreconcilable
difference with the observations (Buchler, Koll\'ath, Beaulieu \& Goupil 1996).
Furthermore, {\sl nonlinear} calculations of low-Z Cepheid model pulsations
give $\phi_{21}$ values in which the size of the excursion in the 10 day
resonance region almost vanishes as one goes to Z values of 0.005 (Fig.~1)
(see also the poster by Goupil).
As far as beat Cepheids are concerned, the hope had been that the observed
period ratios could give a powerful constraint on the stellar model parameters.
While globally, it might appear that the observed period ratios of the
Galactic, LMC and SMC beat Cepheids are in agreement with the linear models
obtained with the new opacities, when looked at in detail, i.e. by considering
individual stars, the agreement is no longer as good. More seriously,
(unknown) nonlinear period shift corrections of as little as 0.1\% can give
substantially different or uncertain mass assignments (Buchler et al. 1996).
\begin{figure}
\centerline{\psfig{figure=phi21Z.ps,width=10cm}}
\caption{\baselineskip 0.1cm Behavior of the Fourier phase $\phi_{21}$
as a function of pulsation period for various values of metallicity.
}
\label{fig-1}
\end{figure}
\vspace{-2pt}
\subsection{Overtone Cepheids \thinspace (s Cepheids)}
\vspace{-2pt}
The Fourier decomposition parameter $\phi_{21}$ of the first overtone Cepheids
displays a "Z" shape around a period of 3 days for the Galaxy (e.g. Antonello et
al. 1990) and around $\approx$ 3.5 and 4.0 days for the LMC (Beaulieu et
al. 1995, Welch et al. 1997) and SMC (Beaulieu \& Sasselov 1997), respectively.
Hydrodynamical models of overtone Cepheids do not agree with the observations
(Antonello \& Aikawa 1993). Similar results were obtained by Schaller \&
Buchler in a fairly extensive survey of s Cepheids (unpublished preprint,
1994).
\vspace{-2pt}
\subsection{RR Lyrae:}
\vspace{-2pt}
Overall, the modelling of RR Lyrae pulsation gives decent agreement (except for
'double--mode' RRd pulsations) (cf GS for references), but when a more detailed
comparison with observation is carried out in a systematic fashion, serious
discrepancies pop up, as Kov\'acs \& Kanbur (1997) show.
\vspace{-2pt}
\subsection{Population II Cepheids}
\vspace{-2pt}
\noindent {\sl BL Her stars}:
The modelling of these low period stars (Hodson, Cox \& King 1982, Buchler \&
Buchler 1994; cf. also GS) shows overall agreement for the lightcurve data, but
the $\phi_{21}$ are considerably smaller than the observational data indicate.
\noindent {\sl W Vir and RV Tau stars:}
It is now well known that the W Vir and RV Tau stars belong to the same group
(Wallerstein \& Cox 1984) and that the properties vary gradually from the low
period, low luminosity W Vir stars to the long period, high luminosity RV Tau
stars. Although the observations are not very extensive they indicate that the
W Vir stars are periodic up to $\approx$ 15 days from whence they start showing
alternations in the cycles, alternations that become increasingly irregular
with 'period'. The mechanism for this irregular behavior remained a mystery
until relatively recently.
Numerical hydrodynamical modelling of sequences of W Vir models (Buchler \&
Kov\'acs 1987) uncovered very characteristic nonlinear behavior that goes under
the name of low dimensional chaos. Since the concept of low dimensional chaos
is still very new in Astronomy we stress that this behavior is very different
from a static multi-periodic, and also different from an evolving multiperiodic
system. (For the reader familiar with chaos we mention that the presence of
period doubling along sequences of models quite clearly shows the presence of a
horseshoe dynamics with almost one-dimensional return maps. The chaos in
these models is of the stretch--and--fold type, very similar to the one that
occurs in the R\"ossler system of 3 ODEs, e.g. Thompson \& Stewart (1986).
There is, however, a discordance with observations: the onset of period
doubling and chaos occurs already at 7--10 days, rather than at the $\approx$
15 days indicated by the observations.
\vspace{2pt}
The numerical modelling of RV Tau behavior is much harder because the ratio of
growth-rate/pulsation frequency is much greater. The pulsations are thus much
more violent and result in sudden loss of the whole envelope in our
calculations (see however Fadeyev \& Fokin 1985, Takeuti \& Tanaka 1995). We
think that a physical dissipation mechanism is missing from our modelling, even
though we solve the radiation hydrodynamics equations. Most likely turbulent
dissipation plays a role in taming the pulsations of these stellar models.
\vspace{2pt}
The occurrence of chaos in hydrodynamical models is quite robust, as we have
already indicated, but could it be an artifact of the theoretical modelling,
even though it was confirmed with a totally different code (Aikawa 1990)?
Clearly it needed to be challenged by observation. A recent nonlinear analysis
that goes under the name of 'global flow reconstruction' rather conclusively
shows that the irregular lightcurve of R~Sct, a star of the RV~Tau type, is the
result of a low-dimensional chaotic dynamics (for details we refer the reader
to Buchler et al. 1995, or to the didactic review of Buchler 1997). More
specifically, the analysis establishes that the dimension is as low as 4. In
other words, the lightcurve is generated by 4 coupled ordinary differential
equations. Put differently, if $s(t)$ denotes the magnitude of the star, then
at any time $t_n$ the lightcurve is a function of {\sl only four preceding
times},
$$s(t_n) = F[s(t_{n-1}),s(t_{n-2}),s(t_{n-3}),s(t_{n-4})].$$
This result is quite remarkable since the pulsations of this star are quite
violent (factors of 40 changes in luminosity!) with shocks and ionization
fronts running about. As physicists, though, we are not satisfied merely with
this result, but would like to know what more physics we can learn about this
star. A four dimensional dynamics indicates that probably two vibrational
modes are involved in the dynamics. This is strongly corroborated by a
linearization of the dynamics about the equilibrium that tells us that two
spiral stability roots are involved, one unstable with frequency $f_0 = 0.0068$
d$^{-1}$, the other stable with frequency $f_1 = 0.014$ d$^{-1} \gtrsim
2 f_0$. The physical picture that emerges is then that the irregular
lightcurve of R~Sct is the result of the nonlinear interaction between an
unstable, lower frequency mode and a linearly stable overtone with
approximately twice the frequency.
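The functional relation above can be made concrete with a small sketch of the delay-coordinate embedding that underlies global flow reconstruction. This is an illustration on an arbitrary series, not the actual R~Sct analysis; in practice one would go on to fit a smooth map $F$ to these pairs.

```python
def delay_embed(series, dim=4):
    """Build (input, target) pairs for a delay-coordinate map: each
    target s(t_n) is paired with the vector of its `dim` preceding
    samples, realizing the form
    s(t_n) = F[s(t_{n-1}), ..., s(t_{n-dim})]."""
    pairs = []
    for n in range(dim, len(series)):
        history = [series[n - k] for k in range(1, dim + 1)]  # most recent first
        pairs.append((history, series[n]))
    return pairs
```

A dimension of 4 means four lags suffice to predict the next magnitude, which is what singles out two interacting modes rather than, say, stochastic multi-periodicity.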
A recent nonlinear analysis of the AAVSO lightcurve of AC~Her similarly shows
low dimensional chaos (Koll\'ath et al. 1997).
To summarize, the predictions of the nonlinear hydrodynamics, viz. that the
irregular behavior of these stars is due to low dimensional chaos, are thus
confirmed, but better numerical modelling is necessary to achieve closer
agreement with observations.
\begin{figure}
\centerline{\psfig{figure=plot_lc_adtc.ps,width=10.5cm}}
\vskip -5pt
\caption{\baselineskip 0.1cm Behavior of the lightcurve as a function of zone
number with the additional zones between 11,000 and 65,000\thinspace K; top: radiative
adaptive code; bottom: turbulent diffusion code. }
\end{figure}
Could there be a common cause for most of the "disturbing problems"? It is
very unlikely that the opacities or the equation of state are still at fault.
A better treatment of radiative transfer (e.g. poster by C. G. Davis) is also
not likely to fix most of the discrepancies. However, as we have already
pointed out, we seem to be missing a physical dissipation mechanism in our
radiative codes. In Lagrangean codes it is necessary to include
pseudo-viscosity (\`a la von Neumann-Richtmyer) in order to handle shocks.
While this viscosity is adequate for many explosive shock problems, such as
supernova explosions, it unfortunately also provides artificial and unphysical
dissipation elsewhere. Kov\'acs (1990) found that when he reduced the
artificial dissipation to a minimum the nonlinear pulsation amplitudes kept
increasing to unrealistically large values. Similarly, when we increase the
spatial resolution of the models the pulsation amplitudes also increase
(Fig.~2). In fact it turns out that no combination of linear and quadratic
viscosity parameters gives satisfactory fundamental and first overtone models.
(We hasten to add though that the Fourier decomposition parameters and the
Floquet stability coefficients of the limit cycles, fortunately, are reasonably
independent of these changes, so that we do not have to throw away everything
we have done so far!)
\vspace{-8pt}
\section{Turbulent diffusion and convection}
\vspace{-5pt}
I had always hoped that, at least near the blue edge of the instability strip,
major effect of convection was {\sl static} and would thus merely cause a small
systematic change in the structure of the models, so that we could get away
with purely radiative hydro models. (Of course it is the important {\sl
dynamic} effect of convection which gives rise to the red edge.) The problems
and tests that we have described above however indicate that we have to include
turbulent convection in the hydrocodes {\sl in order to provide a missing
powerful dissipation mechanism.}
Turbulent convection is of course a 3D phenomenon and at present, and for some
time to come, it is not possible to run realistic 3D pulsation models.
Progress has been made with relatively idealized 3D convection modelling, but
it is slow and these calculations do not yet allow us to extract the 1D
recipes that we need for our radial pulsation codes. In the meantime we have
to rely on ad hoc 1D recipes with ad hoc parameters.
The earliest models for convection were local both in time and in space and
were found to be inadequate for stellar pulsations. Today we have a family of
time-dependent turbulent diffusion models that go back to Spiegel (1963), Unno
(1967) and Castor (1968). A simplified version was implemented in a
hydrodynamics code by Stellingwerf (1982). Recent applications have been made
by several groups, viz. Bono et al. (1997), Gehmeyr (1992), Feuchtinger \&
Dorfi (1996) and by Yecko, Koll\'ath \& Buchler (1997). The strategy has been
and remains to compare the predictions of the models with observations and
thence to calibrate the unknown parameters.
Our own numerical testing shows that with a turbulent diffusion model the
saturation amplitude of the pulsations becomes largely independent of the
zoning (Fig.~2, kindly prepared by Phil Yecko). Another positive point is that
with the reduced pulsation amplitudes the shocks are absent or much weaker.
This in turn allows one to reduce the artificial dissipation to a very small
value.
As a word of caution, we note though that these models may still be too local
in space (they only have a diffusion operator for the turbulent energy) and the
existence of plumes may have to be taken into account as suggested by Rieutord
\& Zahn (1995).
\vspace{-8pt}
\section{Amplitude Equations}
\vspace{-5pt}
We have already pointed out that the amplitude equation formalism offers an
alternative to 'brute force' numerical modelling. We would like to stress here
that contrary to the claim of GS this formalism is not an Ansatz, but is a
mathematically rigorous approach, namely normal form theory. Essentially, the
only restriction is that the formalism applies to weakly nonadiabatic
pulsators. Many of the interesting stars, viz. the classical Cepheids, the
RR~Lyrae, the delta Scuti and the white dwarfs definitely fall into that
category. For details of the formalism as applied to stellar pulsations we
refer the reader to reviews (Buchler 1988, 1993). We note also that the
formalism has recently been extended to nonradial pulsations, in an Eulerian
formulation by Goupil \& Buchler (1994) and in a Lagrangean one by van Hoolst
(1994).
In a nutshell, the formalism reduces the PDEs of hydrodynamics and radiation
transfer to a small set of ODEs for the amplitudes of the excited modes. The
structure of the equations is uniquely determined by the types of resonances
that occur among the linear modes of oscillation; the remaining physics is all
contained in the values of the nonlinear coefficients. The amplitude equations
are generic and capture the essence of the behavior of the system; it is thus
not astonishing that they pop up in many different areas of physics,
chemistry, biology, etc.
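As an illustration of this structure, in the simplest case of two nonresonantly interacting modes the amplitude equations take the standard cubic normal form (the notation here is schematic; the actual growth rates $\kappa_i$ and coupling coefficients $q_{ij}$ must be computed from the stellar models, cf. Buchler 1988):
$${dA_0\over dt}=\kappa_0 A_0+\left(q_{00}|A_0|^2+q_{01}|A_1|^2\right)A_0,
\qquad
{dA_1\over dt}=\kappa_1 A_1+\left(q_{10}|A_0|^2+q_{11}|A_1|^2\right)A_1.$$
A fixed point with both $A_0\neq 0$ and $A_1\neq 0$, when stable, corresponds to a steady double-mode pulsation.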
The solutions of the amplitude equations (usually the fixed points) tell us
then about the possible types of behavior for a sequence or array of stellar
models. They also explain the effect of resonances on the morphology of the
lightcurves and radial velocity curves. Perhaps the most useful and best known
application of the formalism has been to describe the behavior of the Fourier
decomposition parameters through the Hertzsprung progression of the bump
Cepheids (e.g. Buchler 1993).
\vspace{-0pt}
\section{Potpourri}
\vspace{-5pt}
\subsection{'Double-mode' behavior}
\vspace{-2pt}
As an application of the formalism to beat (double-mode) behavior we have
plotted in Fig.~3 the predictions of the amplitude equation formalism in which
it is assumed that two nonresonant modes (e.g. the fundamental and first
overtone) interact and can give rise to a double-mode pulsation along a single
parameter sequence of models (scenario 'AB' in Fig.~1 of Buchler \& Kov\'acs
1986). Here we denote by $A_0$ both the Fourier amplitude of the fundamental
mode for the fundamental pulsators, and also the amplitude of the fundamental
component in the case of double mode pulsators. The right figure shows the
corresponding first overtone amplitudes. {\sl Note that the transition from
single mode to double-mode occurs smoothly} for both amplitudes. In a
realistic sample of models one would of course get a dispersion both vertically
and horizontally, but the conclusion remains unaffected.
\begin{figure}
\centerline{\psfig{figure=schem_double.ps,width=12cm}}
\vskip -1pt
\caption{\baselineskip 0.1cm Schematic behavior of the fundamental and first
overtone amplitudes for nonresonant scenario; left: fundamental amplitude;
right: first overtone amplitude.
}
\end{figure}
\begin{figure}
\centerline{\psfig{figure=smc_four.ps,width=13.5cm}}
\vskip -15pt
\caption{\baselineskip 0.1cm Behavior of the fundamental and first overtone
amplitudes of the SMC
Cepheids; left: fundamental amplitude; right: first overtone amplitude
(courtesy of Beaulieu).
}
\end{figure}
Let us compare this now to the observations, first to the RR~Lyrae in M15 of
Sandage et al. as reproduced in Buchler \& Kov\'acs 1986. In their fig.~8 the
first overtone amplitudes are displayed on the left as dots for the RRc and as
crosses for the RRd, whereas the fundamental amplitudes are the dots on the
right for RRab and open circles for the RRd. The $A_1$ amplitudes of the RRc
and RRd stars indeed vary continuously, but the $A_0$ amplitudes of the RRd are
considerably smaller than those of the RRab. {\sl We are forced to conclude
that a nonresonant scenario is not in agreement with the observations. To
explain the jump in the fundamental amplitudes from the RRab to the RRd it is
necessary to invoke the presence of a resonance}. (A jump in the amplitudes
might also be brought about by higher order, viz. quintic nonlinearities, but
in no studies so far have such nonlinearities ever been found to play a role,
and furthermore the coefficient of the cubic nonlinearity would have to have a
sign opposite to its usual one.) However, no low order resonances are present
in the stellar parameter range of the RR~Lyrae, and it therefore has to be a
higher order resonance that is involved. We note in passing that therefore
these stars were better called {\sl beat RR~Lyrae} because more than 2 modes
are involved.
Let us now turn to the Cepheids. J.-P. Beaulieu has kindly provided me with his
SMC Fourier decomposition data that are displayed in Fig.~4. The first
overtone amplitudes of the beat Cepheids fall right into the range of the
s~Cepheids, but the fundamental amplitudes again are much smaller for the beats
than for the fundamental Cepheids. We are forced to interpret this to mean
that {\sl a resonance must also be involved in the beat Cepheids}.
\vspace{-2pt}
\subsection{The Blazhko effect}
\vspace{-2pt}
Several mechanisms for the Blazhko effect have been proposed (cf. GS), but a
fully satisfactory understanding has so far eluded us. In the following we
want to present an observational constraint that to our knowledge has not yet
been discussed. Let us define the Fourier decomposition as
$$m(t) = m_0 + a\thinspace \cos(\omega t+\phi_1)+b\thinspace \cos(2\omega t +\phi_2) + \ldots$$
\noindent and as usual $\phi_{21}=\phi_2-2\phi_1$.
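As a concrete illustration, the decomposition parameters can be estimated from a uniformly sampled lightcurve by direct projection. This is a minimal sketch under the assumption of uniform sampling over an integer number of periods; real applications fit unevenly sampled data by least squares.

```python
import math

def fourier_params(mags, period, dt):
    """Estimate the low-order Fourier decomposition
    m(t) = m0 + a*cos(w t + phi1) + b*cos(2 w t + phi2)
    for a lightcurve `mags` sampled every `dt` over an integer number of
    periods. Returns (a, b, phi21) with phi21 = phi2 - 2*phi1."""
    w = 2.0 * math.pi / period
    n = len(mags)
    amps, phases = [], []
    for k in (1, 2):
        # project onto cos(k w t) and sin(k w t)
        C = 2.0 / n * sum(m * math.cos(k * w * i * dt) for i, m in enumerate(mags))
        S = 2.0 / n * sum(m * math.sin(k * w * i * dt) for i, m in enumerate(mags))
        amps.append(math.hypot(C, S))
        phases.append(math.atan2(-S, C))  # m ~ a cos(k w t + phi)
    phi21 = (phases[1] - 2.0 * phases[0]) % (2.0 * math.pi)
    return amps[0], amps[1], phi21
```

Tracking $(a, b, \phi_{21})$ in sliding windows over a long lightcurve yields exactly the kind of Blazhko-cycle variation plotted in Fig.~5.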
\begin{figure}
\centerline{\psfig{figure=blazhko_2.ps,width=10.2cm}}
\vskip -2pt
\caption{\baselineskip 0.1cm Variation of the Fourier decomposition parameters
over the Blazhko cycle for RR~Lyrae and AR~Herculis.}
\end{figure}
The lightcurves over a whole Blazhko cycle have been published by Walraven
(1949) for RR~Lyr and by Bal\'azs \& Detre (1939) for AR~Her. In Fig.~5 we
show the variation of the Fourier parameters $a$, $b$, and $\phi_{21}$.
All quantities are seen to oscillate about some center, which is of particular
interest for the phase $\phi_{21}$. The fact that the phase does not run
through 2$\pi$ over a cycle imposes a severe constraint that can be used to
eliminate some models.
\vspace{-2pt}
\subsection{The dip in the Galactic Cepheid period histogram}
\vspace{-2pt}
In 1977 Becker, Iben \& Tuggle published Cepheid period histograms for several
galaxies. The histogram for the Galaxy and for M31 showed a pronounced dip in
the 8--10 day period range, whereas the corresponding histograms for the LMC
and SMC were devoid of such a deficiency. Trying to explain the dip on the
basis of their stellar evolution calculations Becker et al. had to invoke an ad
hoc double-humped birthrate function. Nonlinear calculations show that this is
no longer necessary (Buchler, Goupil \& Piciullo 1997). Indeed, a perhaps
unexpected side effect of the new opacities is that the fundamental limit cycle
of the Cepheid variables can be unstable. This instability is found to occur
in the 8--10 day period range for metallicity parameters 0.013 $< Z <$ 0.035.
Note that this is consistent with a dip in the Galaxy and M31 and with the
absence of a dip in LMC and SMC.
\vspace{-2pt}
\subsection{Strange Cepheids and RR~Lyrae}
\vspace{-2pt}
It has recently been found that {\sl strange modes} can occur even in weakly
nonadiabatic stars such as Cepheids and RR~Lyrae. A thorough study of the
phenomenon has shown that these modes are surface modes that can be
self-excited to the hot side of the blue edge of the normal Cepheid instability
strip (Buchler, Yecko \& Koll\'ath 1997). The strange modes are recurrent at
higher order wave-vectors, but the lowest ones have typical periods 1/4 to 1/5
that of the fundamental pulsational mode, i.e. they have periods ranging from
$\approx$0.2 days to $\approx$10 days, depending on their luminosity. Their
locations in schematic HR and PL diagrams are shown in Fig.~6.
\begin{figure}
\centerline{\psfig{figure=schem_strange.ps,width=13.5cm}}
\vskip -14pt
\caption{\baselineskip 0.1cm Schematically, location of strange Cepheids in HR
(left) and P--L (right) diagrams.
}
\vspace{-5pt}
\end{figure}
What does one expect the pulsations to look like? We have computed some
nonlinear (radiative) models and find limit cycles with amplitudes in the
millimag and 10--100 m/s ranges for light and velocity, respectively. It might
be feared that, because the strange modes are surface modes, their driving
could be destroyed by
convection. Preliminary computations with the turbulent diffusion hydro code
however indicate otherwise.
Finally we note that the same trapping and driving mechanisms also work in
RR~Lyrae models and that therefore strange RR~Lyrae should also exist on the
hot side of the RR Lyrae instability range.
\vspace {-8pt}
\section{Conclusions}
\vspace{-5pt}
The theoretical study of stellar pulsations is still faced with many
challenges. We have seen that radiative hydrocodes, while giving decent
agreement with many observations, are not fully satisfactory. We hope that a
proper inclusion of the important dissipative effects of turbulent convection
will help resolve many of the extant difficulties and discrepancies.
\acknowledgments
{\small I wish to congratulate Joyce Guzik and Paul Bradley for the smooth
organization of this excellent meeting, and I also would like to thank them for
their kind financial support.
This research has been supported by NSF (AST95--18068, INT94--15868) and an
RCI account at the NER Data Center at UF.}
\vspace{-10pt}
\section{INTRODUCTION}
This series of papers describes a set of very deep mid-infrared observations,
obtained using the ISOCAM (Cesarsky et al. 1996)
instrument on the Infrared Space Observatory (ISO, Kessler et al. 1996) and
centred on the Hubble Deep Field (HDF, Williams et al. 1996) region. In
Paper I (Serjeant et al. 1997) we discussed the reduction of the ISOCAM data
and presented the resultant maps of the HDF region at 6.7$\mu$m and
15$\mu$m. Paper II (Goldschmidt et al. 1997)
described the methods we used for detecting sources in these maps,
while Paper III (Oliver et al. 1997) compared source counts derived from
these detections with model predictions.
The two principal goals of this paper are to confirm the ISO-HDF
sources, through associating them with objects in existing HDF
galaxy catalogues, and to study the properties of those associated
galaxies, contrasting them with those of bright HDF galaxies
not detected by ISO.
The spectral energy distributions resulting from the association
procedure will be discussed in Paper V (Rowan-Robinson et al. 1997),
together with their implications for the star formation history of the
Universe.
The plan of this paper is as follows. In Section 2, we briefly review
the basic problems we face in associating the ISO-HDF sources with
galaxies in existing HDF catalogues. The
likelihood ratio method for source
identification is described in Section 3, where we present the results
of applying it to our ISO-HDF sources and discuss the reliability of
the associations we made using it. In Section 4 we discuss the properties
of the galaxies associated with our ISO sources, as well as those
prominent optical HDF galaxies we did not detect, and present
the conclusions that we draw from the work described in this
paper.
\section{ISO SOURCE DETECTIONS IN THE HUBBLE DEEP FIELD}
A total of 27 sources (7 in the complete sample, and 20 in the
supplementary sample) were detected in the 6.7 $\mu$m map together with 22
sources (19 complete, 3 supplementary) at 15 $\mu$m: the positions
and fluxes of these objects are tabulated in Paper II.
There are several basic problems which complicate
the association of ISO-HDF sources with HDF galaxies in existing
optical and near-infrared catalogues. The most obvious of these is the
poor match between
the resolution of ISOCAM and the high source density of
galaxies in the HDF: the radius of the Airy disk is 2.8 arcsec at
6.7$\mu$m and 6.0 arcsec at 15$\mu$m, while there are several hundred
galaxies per square arcminute detected
to $I_{814}\sim 29$, so we expect several galaxies per
randomly placed Airy disk at both 6.7 $\mu$m and 15 $\mu$m.
({\em N.B.\/} Unless otherwise stated, the magnitudes used in this
paper are total magnitudes in the AB system, as measured by Williams
et al. 1996, and we follow them in denoting their four optical bands
as $U_{300}$, $B_{450}$, $V_{606}$ and $I_{814}$ to avoid confusion
with the standard Johnson photometric bands.)
Thus, not only is there a high likelihood of chance
associations with optical galaxies, but any given ISOCAM beam may be
integrating over more than one source, and significant
flux may be contributed by more than one galaxy: this latter is
exacerbated both because many luminous mid-infrared sources are
likely to be interacting or merging galaxy systems (e.g. Lawrence
et al. 1989), and by the fact that our ISO maps appear to be at, or close
to, the confusion limit in both bands (Paper I).
An additional positional uncertainty results from the possibility
of field distortions in the original ISO data (Paper I). We
make some allowance for this in our source association procedure (see
Section 3), and feel that it is unlikely to affect our results
significantly.
Another issue is the band in which to make the associations, since,
clearly, the wide range of colours exhibited by HDF galaxies could
mean that different galaxies would be associated with a particular
ISO source in different bands, and an unfortunate choice of band
could bias the associations made: for example, we might expect the true
counterparts of our ISO galaxies to be dusty, so using too blue a
band might lead us to make the wrong associations. With that in mind we have performed
the likelihood ratio procedure of Section 3 on the $I_{814}$ images
of Williams et al. (1996), which is both the reddest and deepest band,
as well as the only one
available in the Hubble Flanking Fields (HFF).
Finally, we have relatively few sources (only 19 of
the 6.7$\mu$m sources and 5 of those at 15$\mu$m have Airy disks which
fall within
the HDF), restricting our ability to use statistical techniques which rely on
the determination of properties of the source population from the data
themselves.
\begin{figure}
\epsfig{file=figure1.ps,angle=0,width=8cm}
\caption{$I_{814}$ band magnitude histograms for galaxies near ISO-HDF
6.7$\mu$m source positions, and for the whole Williams et al. (1996)
catalogue. The solid lines show the magnitude distribution of galaxies
lying within twice the Airy disk radius of the 19 6.7$\mu$m sources
whose Airy disks fall within
the HDF, while the dashed line traces the magnitude distribution for the full HDF
catalogue, normalised to the total area enclosed by the 19 Airy
disks. There is a clear excess of bright galaxies surrounding the
ISO-HDF sources.}
\end{figure}
\begin{figure}
\epsfig{file=figure2.ps,angle=0,width=8cm}
\caption{The cumulative probability distribution for the likelihood
ratio of the likeliest optical counterpart to a fictitious 6.7$\mu$m
source placed at random in the HDF.}
\end{figure}
As a first step towards our goals, we compare the $I_{814}$
band magnitude
distribution of galaxies near our ISO-HDF source positions with that
of the complete
optical HDF galaxy catalogue of Williams et al. (1996). In Fig. 1
we plot the $I_{814}$ band magnitude distribution of those HDF
galaxies within
twice the Airy disk radius of the 19 6.7$\mu$m ISO-HDF source
positions lying
within the HDF, together with that for the full Williams et al. (1996) optical
galaxy catalogue. The histogram for the ISO-HDF neighbours is noisy,
both because of the small number of sources, and because a number of
them lie close to the edge of the HDF, and the correction made
for the fraction of the search region outside the HDF can give large
weights to those HDF galaxies inside it. It is clear
from Fig. 1 that there is an excess of bright ($I_{814} < 23$)
galaxies surrounding our ISO-HDF source positions: a two-sided
Kolmogorov-Smirnov test yields a probability $P=1.6 \times 10^{-3}$
that the two magnitude distributions are drawn from the same
population (falling to $P=5.4 \times 10^{-4}$ when only the
six sources from the complete 6.7$\mu$m sample are considered), and
the five 15$\mu$m sources in the HDF yield similar results. This
strongly suggests that the sources in our ISO-HDF samples are
associated with bright galaxies in the HDF.
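The two-sided Kolmogorov--Smirnov comparison described above can be sketched with {\tt scipy}; the magnitude samples below are synthetic stand-ins for illustration only, not the actual HDF data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical magnitude samples (illustrative, not the real catalogues):
# galaxies near the ISO source positions skew bright, the full catalogue
# skews faint, mimicking the excess seen in Fig. 1.
near_iso = rng.normal(loc=21.5, scale=1.5, size=60)
full_cat = rng.normal(loc=25.0, scale=2.0, size=2000)

# Two-sided K-S test: a small p-value means the two magnitude
# distributions are unlikely to be drawn from the same population.
stat, p_value = ks_2samp(near_iso, full_cat)
print(f"D = {stat:.3f}, P = {p_value:.2e}")
```

With samples this different the test rejects the common-population hypothesis decisively, as it does for the real data.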
\begin{figure*}
\epsfig{file=table1.ps,angle=180,width=12.0cm}
\end{figure*}
\begin{figure*}
\vspace{5cm}
\caption{Positions of ISO-HDF sources and their associations, from
an $I_{814}$ band mosaic of the HDF. Dashed circles show the Airy disks of
the ISO-HDF sources, squares mark the positions of the associated
optical galaxies: a second dashed circle indicates a second source
from the same sample nearby. Where the ISO source position falls
within the HDF, the plots are centred on those positions: in three
cases the source falls just outside the HDF and the Airy disk is
displaced from the centre of the plot. The title to each plot gives the name
of the ISO source and the Williams et al. (1996) name for the optical
galaxy associated with it.}
\end{figure*}
\section{ASSOCIATING ISO-HDF SOURCES USING THE LIKELIHOOD RATIO METHOD}
\subsection{The likelihood ratio method}
The likelihood ratio method is one of the most commonly used
techniques for associating {\em sources\/} in one catalogue with
{\em objects\/} in another: it is described in detail by
Sutherland \& Saunders (1992), so we present only a brief review here.
The likelihood ratio, $LR$, is defined to be the ratio of the probability,
$p_{\rm true}$,
of finding the true counterpart to the source at the position of the
object and with its flux, to the probability, $p_{\rm chance}$, of
finding a chance
object at that position and with that flux, given the errors in the
source and object positions.
Consider an object with positional offsets
$(x,y)$ from the estimated source position and with flux $f$. The
probability that the true counterpart lies in an infinitesimal region
of area ${\rm d}x{\rm d}y$ about that position and has a flux in an
infinitesimal interval of size ${\rm d}f$ about that flux, is given by
\begin{equation}
p_{\rm true} \; = \; q(f) \: e(x,y) \: {\rm d}f \: {\rm d}x \: {\rm d}y,
\end{equation}
where $e(x,y)$ is the joint probability distribution function for $x$
and $y$, normalised so that \mbox{$\int e(x,y) {\rm d}x {\rm
d}y=1$} and $q(f)$ is the probability distribution function for an
ensemble of sources, measured in the passband in which the object
catalogue is defined. If $n(f)$ is the local surface density of
objects per unit flux, then \mbox{$p_{\rm chance}=n(f)\,{\rm d}f \,{\rm d}x
\,{\rm d}y$} and $LR$ is given by
\begin{equation}
LR(f,x,y) \; = \; \frac {q(f) \: e(x,y)}{n(f)}.
\label{LR_final}
\end{equation}
To implement this method to associate ISO-HDF sources with objects in
the STScI HDF optical catalogue (Williams et al., 1996), we make the following
assumptions concerning the quantities present in equation
(\ref{LR_final}). We neglect the uncertainties in the optical
positions, setting $e(x,y)$ equal to
a Gaussian distribution, with a $\sigma$ value equal to the quadrature
sum of the radius of the Airy disk for the ISO sources (2.8 arcsec at
6.7$\mu$m and 6.0 arcsec at 15$\mu$m) and an estimated positional
error of equal size: this is a crude estimate of the true
positional uncertainties, designed, in part, to take account of the
possibility of ISOCAM field distortions, but the
associations made are
insensitive to variation of this figure within reasonable bounds. The
form of $n(f)$, which is the magnitude distribution of the galaxies in
the $I_{814}$ band, is
readily computed, but the choice of $q(f)$ is more problematic. As
discussed by Sutherland \& Saunders (1992), the two conventional
approaches would involve assuming some model for the
magnitude distribution of the true optical counterparts of our ISO-HDF
sources, or estimating $q(f)$ from the data, which, essentially, means
taking the difference between the pair of histograms shown in Fig. 1. The
latter method would, clearly, yield an unsatisfactorily noisy $q(f)$
(particularly for the 15$\mu$m sources), while sufficiently little is known
about the mid-infrared properties of the galaxies we are likely to
detect with ISO in the HDF that adopting a model based, say, on IRAS data,
could seriously bias our results. We choose, instead, to take $q(f)$ equal
to a constant, independent of magnitude: uncertainty as to the exact
reliability of our ISO samples means that the value this constant should take remains
unclear, so that the likelihood ratios we compute are left
unnormalised, proportional to those defined by equation (\ref{LR_final}).
This uniform $q(f)$ is, no doubt, incorrect, since, on the evidence of
Fig. 1 (as well as {\em a priori\/} prejudice) the galaxies we detect are
likely to be amongst the brightest in the HDF. Its effect, given that point,
is to make us more likely to identify our ISO sources with
fainter galaxies than we should be. In
fact, as we shall see, our associations are with bright galaxies,
which suggests that our taking a uniform $q(f)$ has not biased
our results.
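Under the assumptions just described (Gaussian $e(x,y)$ with $\sigma$ the quadrature sum of the Airy-disk radius and an equal positional error, uniform $q(f)$, and $n(f)$ estimated from the catalogue magnitude histogram), the unnormalised likelihood ratio of equation (2) might be sketched as follows; the function name, binning, and density floor are our own illustrative choices:

```python
import numpy as np

def likelihood_ratio(dx, dy, mag, airy_radius, mags_all, area):
    """Unnormalised LR of equation (2) under the Section 3.1 assumptions.

    dx, dy      -- offsets (arcsec) of the candidate from the ISO position
    mag         -- candidate I_814 magnitude
    airy_radius -- 2.8 arcsec at 6.7 um, 6.0 arcsec at 15 um
    mags_all    -- catalogue magnitudes, used to estimate n(f)
    area        -- catalogue area (sq. arcsec), for the surface density
    """
    # sigma: quadrature sum of the Airy radius and an equal positional error
    sigma = np.hypot(airy_radius, airy_radius)
    e_xy = np.exp(-(dx**2 + dy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    # n(f): local surface density of objects per unit magnitude,
    # from a histogram of the full catalogue
    counts, edges = np.histogram(mags_all, bins=30)
    idx = np.clip(np.searchsorted(edges, mag, side="right") - 1,
                  0, len(counts) - 1)
    n_f = max(counts[idx] / ((edges[1] - edges[0]) * area), 1e-12)
    # q(f) is taken constant and dropped, so the LR is unnormalised
    return e_xy / n_f
```

A bright candidate close to the source position then scores far higher than the same candidate several Airy radii away, as the method intends.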
\subsection{Results}
Deciding what level of likelihood ratio we consider to correspond to
a reliable association is a somewhat subjective matter. Independent of the
application of the likelihood ratio technique, two of us (RGM and
MRR) made independent associations by eye, using $I_{814}$ images of
the HDF fields, with the Airy disks of the ISO-HDF sources superimposed
upon them. In making these associations we are likely to have been
making a subconscious balance between the brightness of a given galaxy and
its distance from the source position, qualitatively similar
to the likelihood ratio method, where $LR \propto
e(x,y)/n(f)$. Nevertheless, the level of agreement obtained, both
between the two observers and with the results of the likelihood ratio
analysis, is surprising: for all but one source the two observers
agreed on the most likely association, and the sources which they felt
confident in having associated reliably were in a one-to-one
correspondence with the sources yielding the highest likelihood
ratios.
We can estimate the level of reliability of these associations by
considering the likelihood ratios for association with random points
in the HDF. In Fig. 2 we show the cumulative probability
distribution for the likelihood ratio of the most likely HDF galaxy to be
the optical counterpart of a fictitious 6.7$\mu$m source at a random position
in the HDF, computed using the same forms for $q(f)$ and $n(f)$ as
for the ISO-HDF sources and 10000 random positions in the HDF.
From Fig. 2 we can compute, as a function of $LR$, the quantity
$P_{\rm ran}(I_{814})$, which is the probability that a
fictitious source, placed at random in the HDF, would have a likeliest
association in the $I_{814}$ catalogue of Williams et al. (1996) with
a likelihood ratio at least as high as a given value
of $LR$. The $P_{\rm ran}(I_{814})$ value for the best optical candidates for
the 6.7$\mu$m sources that marks the boundary below which the observers
conservatively considered their associations to be reliable was about
$P_{\rm ran}=0.35$. We chose this conservative level to mark the
break between those associations we consider to be reliable and those
we do not.
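The construction of $P_{\rm ran}$ can be sketched with a minimal Monte Carlo, assuming a toy uniform catalogue in place of the Williams et al. (1996) positions and keeping only the positional term of the likelihood ratio (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy catalogue standing in for the real object positions
# (hypothetical values, for illustration only).
n_gal = 3000
gal_x = rng.uniform(0.0, 160.0, n_gal)   # arcsec
gal_y = rng.uniform(0.0, 160.0, n_gal)
sigma = 3.96                             # arcsec, as in Section 3.1 at 6.7 um

def max_lr(sx, sy):
    """Highest (unnormalised, positional-only) likelihood ratio among
    catalogue objects for a fictitious source at (sx, sy)."""
    r2 = (gal_x - sx)**2 + (gal_y - sy)**2
    return np.exp(-r2.min() / (2 * sigma**2))

# Monte Carlo estimate of P_ran(LR): the chance that a randomly placed
# fictitious source has a likeliest association at least this strong.
random_lrs = np.array([max_lr(*rng.uniform(0.0, 160.0, 2))
                       for _ in range(2000)])

def p_ran(lr):
    return np.mean(random_lrs >= lr)
```

Reading this curve at the likelihood ratio of a real association gives its chance-association probability, which is how the $P_{\rm ran}=0.35$ threshold above is applied.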
In Table 1 we list the properties of the HDF galaxies which are our
preferred associations for the 15 ISO-HDF sources we take as having
reliable associations in the HDF region: these data
are used to compute the spectral energy distributions of the galaxies
in Paper V.
The same procedure was then applied to an $I_{814}$ band
HFF catalogue, resulting in the association of a further 11 ISO-HDF
sources with HFF objects (10 galaxies and 1 star): the construction
of this catalogue is described in the Appendix, where the HFF
associations are tabulated. These results, taken together, provide
lower limits to the reliability of the four ISO-HDF samples; lower
limits because we have been conservative in accepting associations
as reliable, and a substantial number of further associations fall
just below our threshold, and might rise above it, for example, once
more is known of the field distortions in ISOCAM data. The 6.7$\mu$m
and 15$\mu$m complete samples are at least 71 per cent and 68 per cent
reliable, respectively, while the reliabilities of the supplementary
6.7$\mu$m and 15$\mu$m samples are no worse than 35 per cent and
67 per cent respectively.
\subsection{Notes on individual associations}
In this subsection we discuss in detail the associations presented
in Table 1:
these comments should be borne in mind when using the results of
Table 1. In what follows, galaxies in the $I_{814}$ HDF
image are denoted by the names assigned
by Williams et al. (1996), prefixed by ``ST'', while those from the $H+K$ image of Cowie
et al. (1997) are denoted by their number in the catalogue of Songaila
(1997), with the prefix ``IfA'', and photometric redshifts have been
computed using the method of Mobasher et al. (1996), without using the
ISO data themselves.
\begin{enumerate}
\item {\bf ISOHDF3 J123641.1+621129}: The position of this 15$\mu$m
source falls just inside the Hubble Flanking Fields, but it is
included here because its Airy disk encloses the very bright
galaxy ST4-976.0, with which we have associated it: its
proximity to the edge of the HDF means that ST4-976.0 has no
counterpart in the Songaila (1997) catalogue, and we have
estimated a photometric redshift of $z=0.047$ for it.
\item {\bf ISOHDF2 J123641.6+621142}: This is associated with
the brighter member (ST4-948.0) of a merging pair of galaxies,
which is too near the edge of the HDF to be included in the
near-infrared catalogue of Songaila (1997), despite appearing
bright in both the $J$ and $H+K$ images of Cowie et
al. (1997). Phillips et al. (1997) report a spectroscopic
redshift of $z=0.585$ for ST4-948.0.
\item {\bf ISOHDF2 J123642.6+621210}: This source falls midway
between two spiral galaxies: ST4-656.0/IfA11 (\mbox{$z=0.454$},
Songaila 1997) and ST4-795.0/IfA14 (\mbox{$z=0.432$}, Songaila 1997).
It is associated with the former, which is brighter and yields
a higher likelihood ratio, but note that
this is one of the least reliable of our accepted
associations.
\item {\bf ISOHDF2 J123642.9+621309}: This interacting pair
(ST1-57.0 is the brighter member) falls in the Planetary
Camera HDF field, and so is not included in the Songaila
(1997) catalogue. We estimate a photometric redshift of
$z=0.737$ for ST1-57.0.
\item {\bf ISOHDF2 J123643.0+621152}: This source may have
flux contributed by ST4-727.0/IfA59, as well as
ST4-775.0/IfA20, as listed in Table 1. ST4-727.0/IfA59 is
half a magnitude fainter in $I_{814}$ than ST4-775.0/IfA20,
but is closer to the ISO position, and so yields a lower
value of $P_{\rm ran}(I_{814})$: $P_{\rm ran}(I_{814})=0.269$
as against $P_{\rm ran}(I_{814})=0.407$ for ST4-775.0/IfA20.
Despite that we conservatively take ST4-775.0/IfA20 as being the
association, on the basis of its lower photometric redshift:
$z=0.820$, versus $z=1.63$ for ST4-727.0/IfA59. Having two
galaxies with such low $P_{\rm ran}(I_{814})$ values clearly
breaks the assumption, implicit in the likelihood ratio
method, that there is not more than one true optical counterpart to
each ISO source, which gives some justification for
over-riding our reliability criterion that $P_{\rm
ran}(I_{814})<0.35$ in this one case.
\item {\bf ISOHDF3 J123643.7+621255}: This is associated with
the brighter member (ST4-402.0/IfA32) of an interacting pair
of galaxies on the basis of extremely reliable $I_{814}$
data: $P_{\rm ran}(I_{814})=0.025$.
Cohen et al. (1996) give a spectroscopic
redshift of $z=0.558$ for this galaxy.
\item {\bf ISOHDF2 J123643.9+621130}: ST4-752.0 has an acceptable
$P_{\rm ran}(I_{814})$ value. It is too close to the edge of the HDF to have been
included in the Songaila (1997) catalogue, although it looks
bright in both the $J$ and $H+K$ images of Cowie et
al. (1997). A spectroscopic redshift of $z=1.013$ is given for
ST4-752.0 by Cohen et al. (1996).
\item {\bf ISOHDF2 J123646.4+621406}: This source is
confidently associated with ST2-251.0/IfA9: there are no
plausible alternative associations. Cohen et al. (1996)
quote a spectroscopic redshift of $z=0.960$ for this galaxy,
and the broad emission lines in the spectrum shown by
Songaila (1997) indicate that this galaxy hosts an AGN.
\item {\bf ISOHDF2 J123647.1+621426}: This is identified as a
stellar object (ST2-381.0). The $V_{606}$, $I_{814}$, $J$ and
$H+K$ magnitudes can be fit very well with a $T=3450$ K
blackbody, so this appears to be an M0 star. The corresponding
predicted flux at 6.7$\mu$m would be 15$\mu$Jy, a factor of
two lower than we observe, so we must presume that the star
has a circumstellar dust shell, perhaps analogous to U Aur
(Rowan-Robinson \& Harris 1982).
\item {\bf ISOHDF3 J123648.1+621432}: This source is just
outside the HDF, but is included here because we have confidently
associated it with a bright HDF galaxy (ST2-537.0): this
galaxy is not included in the Songaila (1997) catalogue,
because it is at the edge of the HDF. We estimate a
photometric redshift of $z=0.023$ for this galaxy, using the
methods of Mobasher et al. (1996).
\item {\bf ISOHDF2 J123648.2+621427}: ST2-537.0 is clipped off
the edge of the Cowie et al. (1997) $H+K$ image: the
(photometric) redshift for this galaxy is $z=0.023$, as given
above.
\item {\bf ISOHDF2 J123648.4+621215}: The flux from this
source may be a combination of that from ST4-186.0/IfA44, an
elliptical galaxy, as well as our preferred choice of
ST4-260.111/IfA45, which
is a spiral galaxy with a bright giant H{\sc ii} region. We favour
the latter on the conservative basis of its having a lower
photometric redshift ($z=0.778$ versus $z=1.512$), but note
that both galaxies may contribute flux to this source.
\item {\bf ISOHDF2 J123649.7+621315}: This source falls within
a small group
of bright galaxies, which are probably not physically
associated. ST2-264.1/IfA17 is the preferred association for
ISOHDF2 J123649.7+621315, although it is
possible that this source also includes flux from
ST2-256.0/IfA43 and ST2-239.0/IfA35. ST2-264.1/IfA17 has
a spectroscopic redshift of $z=0.475$ (Cohen et al. 1996).
\item{\bf ISOHDF3 J123649.8+621319}: This source lies in the
same group of galaxies as ISOHDF2 J123649.7+621315
and, as with that source, we favour ST2-264.1/IfA17
as the most likely association, but note that
it is likely that this 15$\mu$m source includes flux from
ST2-256.0/IfA43 and ST2-239.0/IfA35, as well as, perhaps,
ST2-404.0/IfA6: as noted above, Cohen et al. (1996) quote
a spectroscopic redshift of $z=0.475$ for ST2-264.1/IfA17.
\item {\bf ISOHDF3 J123651.5+621357}: This is associated with
ST2-652.0/IfA10: Cohen et al. (1996) give the
spectroscopic redshift of ST2-652.0/IfA10 as $z=0.557$.
\item {\bf ISOHDF2 J123658.9+621248}: This source is
associated with ST3-534.0/IfA27: there are no plausible
alternative associations, and Cohen et al. (1996) have
determined its redshift spectroscopically to be $z=0.320$.
\end{enumerate}
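The blackbody extrapolation used for the stellar object ISOHDF2 J123647.1+621426 above can be sketched as follows; the reference near-infrared flux is an assumed illustrative value, not the measured one:

```python
import numpy as np

# Physical constants (cgs)
h, c, k = 6.626e-27, 2.998e10, 1.381e-16

def planck_nu(wavelength_um, T):
    """Planck function B_nu at the given wavelength (relative units)."""
    nu = c / (wavelength_um * 1e-4)
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

# Scale a (hypothetical) near-infrared flux to 6.7 um assuming a
# T = 3450 K blackbody photosphere, as fitted for the M0 star.
T = 3450.0
s_ref_uJy = 200.0   # assumed H+K-band flux density, illustrative only
predicted = s_ref_uJy * planck_nu(6.7, T) / planck_nu(2.2, T)
print(f"predicted 6.7 um flux: {predicted:.1f} uJy")
```

Because both wavelengths lie on the Rayleigh--Jeans side of a 3450 K photosphere, the predicted 6.7$\mu$m flux is well below the reference flux; an observed excess over such a prediction is what motivates the circumstellar dust shell interpretation.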
In Fig.3 we show the immediate surroundings (in an $I_{814}$ band mosaic) of
the 15 ISO-HDF sources reliably associated with HDF galaxies. The dashed circle
in the centre of each plot marks the Airy disk of the ISO source,
and the square is centred on the position in the optical catalogue
of Williams et al. (1996) of the galaxy with which we have
associated it: for those plots where the edge of the HDF is within the
frame, the ISO-HDF source position is no longer placed at the
centre of the plot, while the presence of a second circle
in the same field indicates the
Airy disk of another ISO-HDF source from the same sample. Three-colour
($B_{450},V_{606},I_{814}$) versions of these plots can be viewed at
{\tt http://artemis.ph.ic.ac.uk/hdf/catalogue.html}.
\subsection{Sources not reliably associated}
Twenty-two ISO-HDF sources have not been reliably
associated with stars or galaxies in the optical HDF catalogue
of Williams et al. (1996) or in our own HFF catalogue: in (a) the
complete 6.7$\mu$m sample:
123655.1+621423 and 123658.8+621313; (b) the supplementary 6.7$\mu$m
sample: 123641.5+621309, 123642.5+621256, 123643.1+621203,
123646.4+621440, 123648.6+621123, 123650.2+621139, 123655.2+621413,
123655.7+621427, 123656.1+621303, 123656.6+621307, 123657.6+621205,
123658.6+621309 and 123701.2+621307; (c) the complete 15$\mu$m sample:
123634.3+621238, 123637.5+621109, 123646.9+621045, 123653.6+621140,
123659.4+621337 and 123702.5+621406; and (d) the supplementary
15$\mu$m sample: 123658.1+621458.
A number of these sources have likeliest associations
that lie on the sharply falling portion of the curve of
$P_{\rm ran}(I_{814})$ against $LR$, and may possibly rise above
our reliability threshold once a more accurate model for $e(x,y)$
in equations (1) and (2) can be computed, properly taking into
account the as yet uncertain field distortion in ISOCAM data and
improving the astrometric accuracy of the ISO-HDF maps.
\section{DISCUSSION AND CONCLUSIONS}
We have conservatively associated fifteen ISO-HDF sources detected
at 6.7$\mu$m or 15$\mu$m with optical galaxies in the HDF
catalogue of Williams et al. (1996), eight of which are also in the
near-infrared catalogue of Songaila (1997): a further association is
made with a star. This was done using two
independent procedures, namely the likelihood ratio method
(Sutherland \& Saunders 1992) and visual inspection. These gave
consistent results, whose reliability we tested by computing
the likelihood ratios for galaxies to be associated with
fictitious sources placed at random in the Hubble Deep Field. A
similar procedure yielded a further eleven associations with
objects (ten galaxies and one star) in the Hubble Flanking
Fields: more details of this are given in the Appendix.
We detect 10 of the 44 brightest $I_{814}$ band objects in the Williams et
al. (1996) catalogue (i.e. those with $I_{814}<22.04$): 8 of these 44
objects are stars, which we discuss no further. Of the 36 galaxies,
we detect 13 per cent (2 out of 15) of the ellipticals, 33 per cent (6/18) of
the spirals and 67 per cent (2/3) of the irregulars/mergers.
We divide these 36 galaxies into three bins of 12 galaxies each for
redshift and the three optical colours $V_{606}-I_{814}$,
$B_{450}-V_{606}$, and $U_{300}-B_{450}$. There
are (3,4,3) of our galaxies in bins of increasing redshift, so the
galaxies associated with the ISO-HDF sources have a similar redshift
distribution to bright HDF galaxies in general: 5
out of 10 have redshifts greater than 0.5.
We find (4,4,2) of our objects in the bins of increasing
$V_{606}-I_{814}$ and of increasing $U_{300}-B_{450}$, and (4,4,3) in
bins of increasing $B_{450}-V_{606}$.
A detailed study of the properties of the galaxies associated
with ISO-HDF sources, contrasting them with those of the HDF
galaxy population as a whole, will be the topic of a later
paper in this series. It is already clear, though, that, amongst bright HDF
galaxies, ISO has a tendency to detect luminous, star-forming galaxies
at fairly high redshift and with disturbed morphologies, in preference
to nearby ellipticals: the
implications of this result are discussed in Paper V.
Further information on the ISO-HDF project can be found on the ISO-HDF
WWW pages: see {\tt http://artemis.ph.ic.ac.uk/hdf/}.
\section*{ACKNOWLEDGMENTS}
This paper is based on observations with ISO, an ESA project, with
instruments funded by ESA Member States (especially the PI countries:
France, Germany, the Netherlands and the United Kingdom) and with
participation of ISAS and NASA.
This work was supported by PPARC grant GR/K98728 and by the EC TMR
Network FMRX-CT96-0068. We thank the referee, Harry Ferguson, for
many helpful comments.
\section*{REFERENCES}
\myref{Cesarsky C., et al., 1996, A\&A, 315, 32}
\myref{Cohen J.G., et al., 1996, ApJ, 471, L5}
\myref{Cowie L. L., Clowe D., Fulton E., Cohen J.G., Hu E.M., Songaila A., Hogg D.W., Hodapp K.W.,
1997, in preparation}
\myref{Draper P.W., Eaton N., 1996, {\sc pisa}, Starlink User Note
109, {\tt http://star-www.rl.ac.uk/star/docs/sun109.htx
/sun109.html}}
\myref{Gallego J., Guzman R., 1997, {\tt
http://www.ucolick.org/}}$\;\tilde{}${\tt deep/hdf/hdf.html}
\myref{Goldschmidt P., et al., 1997, MNRAS, in press (Paper II)}
\myref{Fomalont E.B., Kellermann K.I., Richards E., Windhorst R.A.,
Partridge R.B., 1997, ApJ, 475, 5}
\myref{Kessler M., et al., 1996, A\&A, 315, 27}
\myref{Lawrence A., Rowan-Robinson M., Leech K.J., Jones D.H.P, Wall
J.V., 1989, MNRAS, 240, 329}
\myref{Lowenthal J.D. et al., 1997, in preparation}
\myref{Mobasher B., Rowan-Robinson M., Georgakakis A., Eaton N., 1996,
MNRAS, 282, L7}
\myref{Moustakas L., Zepf S., Davis M., 1997,
{\tt http://astro.berkeley.edu/davisgrp/HDF/observations.html}}
\myref{Oliver S.J., et al., 1997, MNRAS, in press (Paper III)}
\myref{Phillips A.C., Guzman R., Gallego J., Koo D.C., Lowenthal J.D.,
Vogt N.P., Faber S.M., Illingworth G.D., 1997, ApJ, submitted}
\myref{Rowan-Robinson M., Harris S., 1982, MNRAS, 200, 197}
\myref{Rowan-Robinson M., et al., 1997, MNRAS, in press (Paper V)}
\myref{Serjeant, S., et al., 1997, MNRAS, in press (Paper I)}
\myref{Songaila, A., 1997, `The Hawaii Active Catalog of the Hubble
Deep Field',
{\tt http://www.ifa.hawaii.edu/}$\;\tilde{}${\tt cowie/tts/tts.html (15
February 1997)}}
\myref{Sutherland, W., Saunders W., 1992, MNRAS, 259, 413}
\myref{Williams, R.E., et al., 1996, AJ, 112, 1335}
\section{Introduction}
The SNRs consisting of two partially overlapping nonthermal shells
represent a significant fraction of the known shell-like SNRs.
Several explanations were put forward to explain the unusual
morphology of these objects: {\it i}) superposition of two separate
SNRs; {\it ii}) collision of two separate SNRs (e.g. Uyaniker et al.
2002); {\it iii}) breakout phenomenon in a region with a density
discontinuity (e.g. Vel\'{a}zquez et al. 2001). We propose an
alternative explanation based on the idea that the two-shell SNRs
can be products of an off-centered supernova (SN) explosion in a
preexisting bubble created by the wind of a moving massive star.
\section{Precursors of two-shell SNRs}
Massive stars (the progenitors of most of SNe) are the sources of
strong stellar wind, which creates extended (tens of parsecs)
bubbles in the interstellar medium. In the absence of stellar proper
motion, the wind bubbles blown-up in the homogeneous medium are
spherically-symmetric with the star at the center of symmetry.
However, after correction for galactic rotation, all stars have a
proper motion. Most of the massive stars have a peculiar velocity of
a few km/s; some of them have much larger velocities. The stellar
proper motion could result in a considerable offset of the SN
explosion site from the center of the wind-driven bubble, while the
initially spherical shape of the bubble could be significantly
distorted if the star reaches the edge of the bubble and the stellar
wind starts to interact directly with the ambient interstellar
medium (Weaver et al. 1977; see Fig.~1 for schematic illustration of
this effect).
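For rough orientation, the bubble radius can be estimated from the Weaver et al. (1977) similarity solution and the stellar offset from a straight-line drift; the coefficient is approximate and the numerical inputs below are illustrative assumptions:

```python
# Weaver et al. (1977) similarity solution for a wind-blown bubble,
# R ~ 27 pc (L36/n0)^(1/5) t6^(3/5) (coefficient approximate),
# compared with the displacement of a moving star.

KM_S_TO_PC_MYR = 1.0227          # 1 km/s expressed in pc per Myr

def bubble_radius_pc(L36, n0, t_myr):
    """Bubble radius for wind luminosity L36 (in 10^36 erg/s),
    ambient density n0 (cm^-3) and age t_myr (Myr)."""
    return 27.0 * (L36 / n0) ** 0.2 * t_myr ** 0.6

def offset_pc(v_kms, t_myr):
    """Displacement of the star from the bubble centre for a
    constant peculiar velocity v_kms over t_myr."""
    return v_kms * KM_S_TO_PC_MYR * t_myr

# Illustrative case: L_w = 10^36 erg/s, n0 = 1 cm^-3, 5 Myr, 5 km/s.
R = bubble_radius_pc(1.0, 1.0, 5.0)
d = offset_pc(5.0, 5.0)
print(f"bubble radius R = {R:.0f} pc, stellar offset = {d:.0f} pc")
```

Even a modest peculiar velocity of a few km/s thus displaces the star by a sizeable fraction of the bubble radius over the main-sequence lifetime, which is the geometric basis of the off-centered explosion scenario.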
\begin{figure}[h]
\centering
\caption{Schematic illustration of the effect of stellar motion on
the structure of a wind bubble. At early times (A) the small circle
corresponds to a shock separating freely expanding stellar wind from
the region of shocked wind. The large circle corresponds to the
shock separating the unperturbed interstellar gas from the swept-up
(shocked) interstellar gas. (The contact discontinuity separating
the swept-up gas from the shocked wind is omitted for the sake of
simplicity.) At advanced times (B) the stellar wind interacts
directly with the ambient interstellar medium.} \label{fig1}
\end{figure}
It is likely that just this situation takes place in the case of the
progenitor star of the SN~1987A. Wang et al. (1993) suggested that
the large-scale structure to the southeast of SN~1987A (the dark bay
in their Fig.~2) is the wind-driven bubble created by the moving SN
progenitor star during the main-sequence (MS) stage, while the
Napoleon's Hat nebula originates due to the interaction of the
post-MS winds with the ambient interstellar medium (see, however,
Sugerman et al. 2005).
\begin{figure}[h]
\centering
\caption{The continuum-subtracted ${\rm H}_{\alpha}$ mosaic of
RCW~86 (Smith 1997).} \label{fig3}
\end{figure}
After the massive star explodes as a SN, the SN blast wave takes on
the shape of the preexisting cavity (e.g. R\'{o}\.{z}yczka et
al. 1993; Brighenti \& D'Ercole 1994; Gvaramadze \& Vikhlinin 2003)
and the resulting SNR becomes considerably
non-spherically-symmetric. Fig.~2 shows the ${\rm H}_{\alpha}$ image
(Smith 1997) of RCW~86 -- a shell-like SNR with a peculiar
protrusion to the southwest. We believe (Gvaramadze \& Vikhlinin
2003) that RCW~86 is an older version of the SN~1987A and that the
southwest protrusion is the remainder of a bow shock-like structure
created in the interstellar medium by the post-MS winds. Given the
youth of the SNR, we expect that the stellar remnant should still be
within the protrusion. Motivated by these arguments we searched for
a stellar remnant using the {\it Chandra} archival data and
discovered a neutron star candidate just in the ``proper place''
(Gvaramadze \& Vikhlinin 2003).
\section{Two-shell SNRs}
We propose that a two-shell SNR is a natural consequence of an
off-centered cavity SN explosion of a moving massive star, which
ended its evolution near the edge of the MS wind-driven bubble. This
proposal implies that one of the shells is the former MS bubble
reenergized by the SN blast wave. The second shell, however, could
originate in two somewhat different ways, depending on the initial
mass of the SN progenitor star. It could be a shell swept-up by the
SN blast wave expanding through the unperturbed ambient interstellar
medium if the massive star ends its evolution as a red supergiant
(RSG), i.e. if the star evolves through the sequence: MS-RSG-SN. Or
it could be the remainder of a preexisting shell (adjacent to the MS
bubble) swept-up by the fast progenitor's wind during the late
evolutionary phases if after the RSG phase the star evolves through
the Wolf-Rayet (WR) phase (i.e. MS-RSG-WR-SN). In both cases the
resulting (two-shell) SNR should be associated only with one (young)
NS (cf. Gvaramadze 2006). We note several distinctions
characterizing the second case: (a) the birth-place of the stellar
remnant could be significantly offset from the center of the
preexisting WR shell due to the proper motion of the SN progenitor
star, (b) the SN explosion site could be marked by a nebula of
thermal X-ray emission (see Sect.\,4.2), and (c) the preexisting WR
shell causes the rapid evolution of the SN blast wave from the
adiabatic phase to the radiative one.
\section{Two examples}
\subsection{Cygnus Loop (MS-RSG-SN)}
\begin{figure}[h]
\centering
\caption{Two shells of the Cygnus Loop: {\it ROSAT} image at 0.25
keV with overlaid polarization intensity contours (Uyaniker et al.
2002). A neutron star candidate (Miyata et al. 2001) is indicated by
a cross.} \label{fig5}
\end{figure}
The polarized intensity image of the Cygnus Loop by Uyaniker et al.
(2002) shows a prominent shell-like structure encompassing the
``break-out'' region in the south of this SNR (see Fig.~3). The
geometric center of the shell nearly coincides with a neutron star
candidate discovered in X-rays by Miyata et al. (2001; indicated in
Fig.~3 by a cross). Uyaniker et al. (2002) believe that the Cygnus
Loop is actually two individual SNRs interacting with each other. An
alternative possibility is that the Cygnus Loop is the result of SN
explosion near the south edge of a cavity blown up by the SN
progenitor during the MS stage and that the SN precursor was a RSG
star (Gvaramadze 2003). This implies that the north (well-known)
shell of the Cygnus Loop corresponds to the former MS bubble
reenergized by the SN blast wave, while the newly-discovered (south)
shell is created by the interaction of the SN blast wave with the
unperturbed interstellar medium. Accordingly, we expect that only
one stellar remnant should be associated with both shells.
\begin{figure}[h]
\centering
\caption{{\it ASCA} image of 3C~400.2 (Yoshita et al. 2001).
Overlaid contours are the VLA image at 1.4 GHz (Dubner et al.
1994).} \label{fig6}
\end{figure}
\subsection{3C~400.2 (MS-RSG-WR-SN)}
The SNR 3C~400.2 consists of two circular radio shells with the
centrally-filled thermal X-ray emission peaked on the region where
the radio shells overlap each other (Fig.~4; Dubner et al. 1994;
Yoshita et al. 2001) and therefore belongs to the class of
mixed-morphology SNRs (i.e. shell-like in the radio and
centrally-filled in the X-ray; Rho \& Petre 1998). The origin of
mixed-morphology SNRs is usually treated in the framework of White
\& Long's (1991) model of evaporation of embedded interstellar
cloudlets. An alternative explanation of the origin of the
centrally-filled thermal X-ray emission is that it is due to the
evaporation of dense circumstellar clumps (produced by the
interaction of the fast WR wind with the preceding slow RSG one;
Gvaramadze 2001, 2002). We therefore suggest that the SN precursor
was a WR star and that the northwest shell of 3C~400.2 is the former
WR shell. This suggestion can be supported by the fact that the
northwest shell has a bilateral appearance with the bilateral axis
parallel to the Galactic plane (see Gvaramadze 2004). If our
suggestion is correct, then one can expect that the stellar remnant
associated with 3C~400.2 should be within the northwest shell.
|
2001.00577
|
\section{Introduction}
\label{sec:intro}
Recently, end-to-end (E2E) neural network architectures based on sequence-to-sequence (seq2seq) learning for automatic speech recognition (ASR) have been gaining considerable attention~\cite{Battenberg2017ExploringNT,Prabhavalkar2017}, mainly because they learn the acoustic information, the linguistic information, and the alignments between them simultaneously, unlike conventional ASR systems based on hybrid hidden Markov model (HMM) and deep neural network (DNN) models.
Moreover, the E2E models are more amenable to compression, since they do not need separate phonetic dictionaries and language models, which makes them among the best candidates for on-device ASR systems.
Among the various E2E ASR model architectures such as attention-based encoder-decoder models~\cite{Chan2015ListenAA} and recurrent neural network transducer (RNN-T) based models~\cite{Graves2012SequenceTW,Rao2017ExploringAD}, we chose to use the attention based method since the accuracy of this method has surpassed that of the conventional HMM-DNN based state-of-the-art ASR systems~\cite{Chiu2018StateoftheArtSR}.
Despite their high accuracy, attention models that require the full alignment between the input and the output sequences are not capable of providing streaming ASR services.
Several studies have addressed this lack of streaming capability of the attention models~\cite{45549,Raffel2017OnlineAL,Chiu2018MonotonicCA}.
In \cite{45549}, an online neural transducer was proposed, which applies the full attention method on chunks of input and is trained with an additional end-of-chunk symbol.
In \cite{Raffel2017OnlineAL}, a hard monotonic attention based model was proposed for streaming decoding with acceptable accuracy degradation.
Furthermore, in~\cite{Chiu2018MonotonicCA}, a monotonic chunk-wise attention (MoChA) method was proposed, which achieved promising accuracy by relaxing the hard monotonic alignment constraint and applying soft attention over a few speech chunks.
In this paper, we explain how we improved our MoChA-based ASR system into a commercialization-ready on-device solution.
First, we trained the MoChA models by using connectionist temporal classification (CTC) and cross-entropy (CE) losses jointly to learn alignment information precisely.
A minimum word error rate (MWER) method, which is a type of sequence-discriminative training, was adopted to optimize the models~\cite{Prabhavalkar2018MinimumWE}.
Also, for better stability and convergence of model training, we applied a layer-wise pre-training mechanism~\cite{zeyer2018:attanalysis}.
Furthermore, in order to compress the models, we present a hyper low-rank matrix approximation (hyper-LRA) method employing DeepTwist~\cite{Lee2015Deeptwist}, with minimal accuracy degradation.
Another important requirement for commercializing ASR solutions is to boost the recognition accuracy for user context-specific keywords.
In order to bias the ASR system during inference time, we fused the MoChA models with statistical n-gram based personalized language models (LMs).
The main contribution of this paper is the successful construction of, to the best of our knowledge, the first attention-based streaming ASR system capable of running on devices.
We succeeded not only in training MoChA models on large corpora for Korean and English, but also in satisfying the needs of commercial on-device applications.
The rest of this paper is organized as follows: Section 2 explains the attention-based speech recognition models. Section 3 describes how the optimization methods improved recognition accuracy, and Section 4 presents the compression algorithm for MoChA models. Section 5 describes the n-gram LM fusion for on-demand adaptation, and Sections 6 and 7 discuss the methods and the related experimental results.
\section{MODEL ARCHITECTURE}
\label{sec:model}
Attention-based encoder-decoder models are composed of an encoder, a decoder, and an attention block between the two~\cite{Bahdanau2015Att}.
The encoder converts an input sequence into a sequence of hidden vector representations referred to as encoder embeddings.
The decoder is a generative model which predicts a sequence of the target labels.
The attention is used to learn the alignment between the two sequences of the encoder embeddings and the target labels.
\subsection{Attention-based speech recognition}
\label{sec:bfa}
The attention-based models can be applied to ASR systems~\cite{Chorowski2015Att, returnn_asr} using the following equations (1)-(4).
\begin{equation}
h_t = Encoder(h_{t-1}, x_t)
\end{equation}
where $\textbf{x}=\{x_1,x_2,...,x_T\}$ is the speech feature vector sequence, and $\textbf{h}=\{h_1,h_2,...,h_T\}$ is the sequence of encoder embeddings.
The $Encoder$ can be constructed of bi- or uni-directional long short-term memory (LSTM) layers~\cite{Hochreiter:1997:LSM:1246443.1246450}.
Due to the difference in length between the input and the output sequences, the model often has difficulty converging.
In order to compensate for this, pooling along the time-axis on the output of intermediate layers of the encoder is used, effectively reducing the length of \textbf{h} to $T' < T$.
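As a concrete illustration, the time-axis pooling that shortens the encoder output from $T$ to $T'$ frames can be sketched as follows (an illustrative NumPy sketch; the function name and shapes are our own assumptions, not the authors' implementation):

```python
import numpy as np

def time_pool(h, factor):
    """Max-pool a (T, d) sequence of encoder embeddings along the time
    axis, shortening it from T frames to ceil(T / factor) frames."""
    T, d = h.shape
    pad = (-T) % factor                            # pad so factor divides T
    h = np.concatenate([h, np.full((pad, d), -np.inf)], axis=0)
    return h.reshape(-1, factor, d).max(axis=1)

h = np.random.randn(100, 8)                        # T = 100 input frames
print(time_pool(h, 8).shape)                       # (13, 8), i.e. T' < T
```

In practice such pooling is interleaved between recurrent layers, so higher layers operate on progressively shorter sequences.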
\begin{equation}
\begin{aligned}
\label{eq:attention}
e_{t,l} &= Energy(h_t, s_l)\\
&= v^T\tanh(W_hh_t + W_ss_{l} + b)\\
a_{t,l} &= Softmax(e_{t,l})
\end{aligned}
\end{equation}
An attention weight $a_{t,l}$, often referred to as the alignment, represents how the encoder embedding of each frame $h_t$ and the decoder state $s_l$ are correlated~\cite{Bahdanau2015Att}.
We employed an additive attention method to compute the correlations. A softmax function converts the attention energies into the probability distribution that is used as the attention weights.
A weighted sum of the encoder embeddings is computed using the attention weights as,
\begin{equation}
\begin{aligned}
c_l &= \sum_{t}^{T'}a_{t,l}h_{t}
\label{eq:context}
\end{aligned}
\end{equation}
where $c_l$ denotes the context vector; since the encoder embeddings of the entire input frames are used to compute the context, we refer to this attention method as full attention.
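The energy, attention-weight and context computations of equations (2) and (3) can be sketched in NumPy (an illustrative sketch with randomly initialized weights; the function names are our own, not the authors' code):

```python
import numpy as np

def softmax(e):
    p = np.exp(e - e.max())
    return p / p.sum()

def full_attention(h, s, W_h, W_s, b, v):
    """Additive attention: energies over all T' encoder embeddings h,
    softmax attention weights, and the context as their weighted sum."""
    e = np.tanh(h @ W_h.T + s @ W_s.T + b) @ v   # e_{t,l}, shape (T',)
    a = softmax(e)                               # a_{t,l}, sums to 1
    c = a @ h                                    # context vector c_l
    return a, c

rng = np.random.default_rng(0)
T, d = 5, 4
h = rng.standard_normal((T, d))                  # encoder embeddings
s = rng.standard_normal(d)                       # decoder state s_l
params = [rng.standard_normal(shape) for shape in
          [(d, d), (d, d), d, d]]                # W_h, W_s, b, v
a, c = full_attention(h, s, *params)
print(round(a.sum(), 6))                         # 1.0
```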
The $Decoder$, which consists of uni-directional LSTM layers, computes the current decoder state $s_l$ from the previous predicted label $y_{l-1}$ and the previous context $c_{l-1}$.
The output label $y_l$ is then calculated by a $Prediction$ block using the current decoder state, the context and the previous output label.
\begin{equation}
\begin{aligned}
s_l &= Decoder(s_{l-1}, y_{l-1}, c_{l-1})\\
y_l &= Prediction(s_l, y_{l-1}, c_l)
\end{aligned}
\end{equation}
Typically, the prediction block consists of one or two fully connected layers and a softmax layer to generate a probability score for the target labels. We applied a max-pooling layer between the two fully connected layers.
The probability of predicted output sequence $\textbf{y}=\{y_1,y_2,...,y_L\}$ for given $\textbf{x}$ is calculated as in equation (5).
\begin{equation}
\begin{aligned}
P(\textbf{y}|\textbf{x}) = \prod_l^L{P(y_l|\textbf{x}, y_{1:l-1})}
\end{aligned}
\end{equation}
where $P(y_l|\textbf{x}, y_{1:l-1})$ is the probability of each output label.
Even though the attention-based models have shown state-of-the-art performance, they are not a suitable choice for streaming ASR, particularly because they must calculate the alignment between the current decoder state and the encoder embeddings of the entire input frames.
\subsection{Monotonic Chunk-wise Attention}
\label{sec:mocha}
A monotonic chunk-wise attention (MoChA) model was introduced to resolve the streaming incapability of the attention-based models, under the assumption that the alignment between the input speech and the output text sequence is monotonic~\cite{Raffel2017OnlineAL, Chiu2018MonotonicCA}.
The MoChA model computes the context by using two kinds of attention: a hard monotonic attention and a soft chunkwise attention. The hard monotonic attention is computed as,
\begin{equation}
\label{eq:MonoAttention}
\begin{aligned}
e_{t,l}^{monotonic} &= MonotonicEnergy(h_t, s_{l})\\
a_{t,l}^{monotonic} &= \sigma(e_{t,l}^{monotonic})\\
z_{t,l} &\sim Bernoulli(a_{t,l}^{monotonic})\\
&=
\begin{cases}
1, & \text{if } a_{t,l}^{monotonic} \geq 0.5\\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\end{equation}
where $z_{t,l}$ is the hard monotonic attention used to determine whether to attend to the encoder embedding $h_t$.
The $Decoder$ attends to the $u^{th}$ encoder embedding to predict the next label only if $z_{u,l}=1$.
Equation (6) is evaluated for $t \geq u_{l-1}$, where $u_{l-1}$ denotes the index of the encoder embedding attended for the previous output label prediction.
The soft chunkwise attention is computed as
\begin{equation}
\label{eq:ChunkAttention}
\begin{aligned}
e_{t,l}^{chunk} &= ChunkEnergy(h_t, s_{l})\\
a_{t,l}^{chunk} &= Softmax(e_{t,l}^{chunk})\\
c_{l} &= \sum_{t=u-w+1}^{u}a_{t,l}^{chunk}h_{t}
\end{aligned}
\end{equation}
where $u$ is the attending point chosen from the monotonic attention, $w$ is the pre-determined chunk size, $a_{t,l}^{chunk}$ is the chunkwise soft attention weight, and $c_{l}$ is the chunkwise context used to predict the output label.
We used the modified additive attention for computing the attention energy in order to ensure model stability~\cite{Chiu2018MonotonicCA}.
\begin{equation}
Energy'(h_t, s_{l}) =
g\frac{v}{||v||}\tanh(W_hh_t + W_ss_{l-1} + b) + r\\
\end{equation}
where $g$ and $r$ are additional trainable scalars, and the other variables are the same as in $Energy()$ of equation (\ref{eq:attention}). $MonotonicEnergy()$ and $ChunkEnergy()$ are computed using equation (8) with their own trainable variables.
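At inference time, the interplay of the hard monotonic selection in equation (6) and the chunkwise attention in equation (7) can be sketched as follows (an illustrative sketch that takes precomputed energy vectors as input; the function and variable names are our own assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mocha_step(h, mono_energy, chunk_energy, u_prev, w=2):
    """One MoChA decoding step: scan forward from the previously attended
    frame u_prev, stop at the first frame whose monotonic selection
    probability is >= 0.5 (z = 1), then softmax-attend over the last w
    frames ending there to form the chunkwise context."""
    for u in range(u_prev, h.shape[0]):
        if sigmoid(mono_energy[u]) >= 0.5:
            lo = max(0, u - w + 1)
            e = chunk_energy[lo:u + 1]
            a = np.exp(e - e.max())
            a /= a.sum()                           # chunkwise softmax
            return a @ h[lo:u + 1], u              # context c_l, new index u
    return None, h.shape[0]                        # no frame was selected

h = np.arange(8, dtype=float).reshape(4, 2)        # 4 encoder embeddings
c, u = mocha_step(h, np.array([-5., -5., 3., 0.]), np.zeros(4), u_prev=0)
print(u, c)                                        # attends frame 2; c averages h[1], h[2]
```

Because the scan never moves backwards, each encoder frame is visited at most once across the whole utterance, which is what makes the method streamable.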
\begin{comment}
A MoChA model is a suitable architecture for streaming speech recognition systems in that the input speech frame is encoded in the incoming order in the encoders, and the monotonic attentions are calculated only with the encoder output sequences and the decoder state sequences.
According to the monotonic attention probability, the specific input frame is selected as an attending point.
The chunk attention and the context are computed with the chunk which consist of the encoded embedding from only the previous and the current input frame not the futures unlike the attention model explained in the previous section.
\end{comment}
\begin{figure}[h]
\centering
\includegraphics[trim={0 0cm 0 0cm}, clip, width=.8\linewidth]{model_figure.pdf}
\caption{Model architecture}
\end{figure}
\section{Training and Optimization}
\label{sec:optimization}
The main objective of the attention-based encoder-decoder model is to find the parameters that minimize the cross entropy (CE) between the predicted sequences and the ground truth sequences.
\begin{equation}
\label{eq:ce}
L_{CE} = -\sum_{l}\log(P(y_l^*|\textbf{x},y_{1:l-1}^*))
\end{equation}
where $y^*$ is the ground truth label sequence.
We trained the MoChA models with the CTC loss and the CE loss jointly to learn the alignment better, and MWER-based sequence-discriminative training was employed to further improve the accuracy.
Moreover, to ensure stability in training the MoChA models, we adopted a pre-training scheme.
\subsection{Joint CTC-CE training}
\label{ssec:ctc-ce}
In spite of the difference in length between the speech feature sequences and the corresponding text sequences, the CTC loss trains the model to maximize the total probability of all possible alignments between the input and the output sequences~\cite{Graves2006ConnectionistTC}.
The CTC loss is defined as follows,
\begin{equation}
\label{eq:CTC}
L_{CTC} = -\sum_{\pi\subset\Pi(y^*)}\sum_{t}^{T}\log(P(y_t^\pi|x_t))
\end{equation}
where $\Pi(y^*)$ is the set of all possible alignments generated by inserting the \{Blank\} symbol and repeating output units so that each alignment has the same length as the input speech frames, and $\pi$ is one of them. $P(y_t^\pi|x_t)$ is the probability that the $t$th predicted label is the $t$th label of alignment $\pi$.
The CTC loss is readily applicable for training the MoChA model, especially the encoder, because it also constrains the alignment between the input and the output to be monotonic. Moreover, the CTC loss has the advantage of learning alignments in noisy environments and can help the attention-based model learn the alignment quickly through joint training~\cite{Kim2017JointCB}.
The joint training loss is defined as follows,
\begin{equation}
\label{eq:jointCTCLoss}
L_{Total} = \lambda L_{CE} + (1-\lambda) L_{CTC} \quad\;\lambda \in [0,1]
\end{equation}
where $L_{Total}$ is the joint loss combining the two losses.
\subsection{MWER training}
\label{ssec:mwer}
In this paper, byte-pair encoding (BPE) based sub-word units were used as the output units of the decoder~\cite{DBLP:conf/acl/SennrichHB16a}.
Thus the model is optimized to generate individual BPE output sequences well.
However, the eventual goal of speech recognition is to reduce the word error rate (WER).
Also, since the decoder is used along with a beam search during inference, it is effective to improve the accuracy by directly defining a loss that lowers the expected WER of the candidate beam-search results.
The loss of MWER is represented as follows,
\begin{equation}
\label{eq:MWERLoss}
L_{MWER} = \sum_{b \subset B}P(\textbf{y}^b|\textbf{x})(\mathcal{W}^b-\mathcal{\Bar{W}})
\end{equation}
where $B$ is the set of beam-search candidates, and $\mathcal{W}^b$ indicates the number of word errors of each beam result sequence $y^b$. Subtracting the average word error over all the beams, $\mathcal{\Bar{W}}$, helps the model converge by reducing the variance.
\begin{equation}
\label{eq:jointMWERLoss}
L_{Total} = \lambda L_{CE} + (1-\lambda) L_{MWER} \quad\;\lambda \in [0,1]
\end{equation}
The MWER loss, $L_{MWER}$, can also be easily integrated with the CE loss by linear interpolation, as shown in Equation~\ref{eq:jointMWERLoss}.
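A toy computation of the MWER loss in equation (12) can be sketched as follows (an illustrative sketch; we assume here the common variant that renormalizes the hypothesis probabilities over the N-best list):

```python
import numpy as np

def mwer_loss(log_probs, word_errors):
    """Expected word-error loss over an N-best list: renormalize the
    hypothesis probabilities over the beam, subtract the average word
    error as a variance-reducing baseline, and take the weighted sum."""
    p = np.exp(log_probs - np.max(log_probs))
    p /= p.sum()                                   # P(y^b | x) over the beam
    w = np.asarray(word_errors, dtype=float)
    return float(p @ (w - w.mean()))               # sum_b P * (W^b - W_bar)

# The most probable hypothesis has the fewest word errors, so the loss is
# negative (better than the beam average) and training pushes it further down.
print(mwer_loss(np.log([0.7, 0.2, 0.1]), [1, 3, 5]))
```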
\subsection{Layer-wise pre-training}
\label{ssec:pretrain}
A layer-wise pre-training of the encoder was proposed in \cite{zeyer2018:attanalysis} to ensure that the model converges well and has better performance.
The initial encoder consists of 2 LSTM layers with a max-pool layer with a pooling factor of 32 in between. After a few sub-epochs of training, a new LSTM layer and a new max-pool layer are added.
The total pooling factor of 32 is divided into 2 for the lower layer and 16 for the newly added higher layer.
This strategy is repeated until the entire network is completely built, with 6 encoder LSTM layers and 5 max-pool layers with a pooling factor of 2 each.
Finally, the total pooling factor is changed to 8, with only the lower 3 max-pool layers keeping a pooling factor of 2.
During pre-training of our MoChA models, when a new LSTM and a max-pool layer were piled up at each stage, the training and validation errors shot up.
In order to address this, we employed a learning rate warm-up for every new pre-training stage.
\subsection{Spec augmentation}
\label{ssec:augmentation}
Because an end-to-end ASR model learns from a transcribed corpus, a large dataset is one of the most important factors in achieving better accuracy.
Data augmentation methods have been introduced to generate additional data from the original data; recently, spec augmentation showed state-of-the-art results on a public dataset~\cite{spec2019}.
Spec augmentation masks parts of the spectrogram along the time and frequency axes, so that the model learns to recognize masked speech with incomplete information.
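The masking operation can be sketched as follows (an illustrative sketch of spec-augmentation-style masking with one frequency mask and one time mask; the default width limits match the values we use later in the experiments, and the function name is our own):

```python
import numpy as np

def spec_augment(spec, max_f=13, max_t=50, rng=None):
    """Zero out one random frequency band and one random time span of a
    (frames, mel_bins) feature matrix, so the model must infer the
    content from the unmasked remainder."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    f = int(rng.integers(0, max_f + 1))                     # frequency-mask width
    f0 = int(rng.integers(0, out.shape[1] - f + 1))
    out[:, f0:f0 + f] = 0.0
    t = int(rng.integers(0, min(max_t, out.shape[0]) + 1))  # time-mask width
    t0 = int(rng.integers(0, out.shape[0] - t + 1))
    out[t0:t0 + t, :] = 0.0
    return out

masked = spec_augment(np.ones((100, 40)), rng=np.random.default_rng(0))
print(masked.shape)                                         # (100, 40): shape unchanged
```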
\section{Low-rank matrix approximation}
\label{sec:LRA}
We adopted a low-rank matrix approximation (LRA) algorithm based on singular value decomposition (SVD) to compress our MoChA model~\cite{Xue2014SVD}.
Given a weight matrix $W \in \mathbb{R}^{m \times n}$, its SVD is $W = U\Sigma V^T$, where $\Sigma\in \mathbb{R}^{m \times n}$ is a diagonal matrix with singular values, and $U\in \mathbb{R}^{m\times m}$ and $V\in \mathbb{R}^{n \times n}$ are unitary matrices.
If we specify the rank $r< \frac{mn}{m+n}$, the truncated SVD is given by $\tilde{W}=\tilde{U}\tilde{\Sigma}\tilde{V}^T \in \mathbb{R}^{m\times n}$, where $\tilde{U}\in \mathbb{R}^{m\times r}, \tilde{\Sigma}\in \mathbb{R}^{r\times r}$, and $\tilde{V}\in \mathbb{R}^{n\times r}$ are the top-left submatrices of $U,\Sigma$, and $V$, respectively.
For an LRA, $W$ is replaced by the product of $\tilde{U}'=\tilde{U}\tilde{\Sigma}$ and $\tilde{V}^T$, so the number of weight parameters is reduced from $mn$ to $r(m+n)$.
Hence we obtain the compression ratio $\rho=\frac{mn}{r(m+n)}$
in memory footprint and in the computational complexity of the matrix-vector multiplication.
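The truncated SVD and the resulting compression ratio can be sketched in NumPy (an illustrative sketch on a random matrix, not the production code):

```python
import numpy as np

def low_rank_approx(W, r):
    """Rank-r truncated SVD: returns U' = U_r diag(s_r) and V_r^T, whose
    product approximates W with r(m+n) instead of m*n parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
m, n, r = 64, 32, 8
W = rng.standard_normal((m, n))
Up, Vt = low_rank_approx(W, r)
rho = (m * n) / (r * (m + n))              # compression ratio rho
print(Up.shape, Vt.shape, round(rho, 2))   # (64, 8) (8, 32) 2.67
```

Truncating at the largest $r$ singular values gives the best rank-$r$ approximation in the Frobenius norm, so the distortion grows gracefully as $r$ shrinks.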
From LRA, we have an LRA distortion as
\begin{align}
\label{eq:lra_dist}
\Delta W = \tilde{U}' \tilde{V}^T - W.
\end{align}
For each layer, given an input $x \in \mathbb{R}^m$, the output error is given by
\begin{equation}
\label{eq:lra}
e = \sigma((W+\Delta W) x +b)-\sigma(W x +b),
\end{equation}
where $b\in \mathbb{R}^n$ and $\sigma(\cdot)$ represent a bias vector and a non-linear activation function, respectively.
Then it propagates through the layers and increases the training loss.
In conventional LRA, $\tilde{U}'$ and $\tilde{V}^T$ are updated in the backward pass, constraining the weights to an $r(m+n)$-dimensional space.
However, for large $\rho$, it is difficult to find the optimal weights due to the reduced dimension of the weight space.
Instead, we find an optimal LRA by employing the DeepTwist method~\cite{Lee2015Deeptwist}, which we call hyper-LRA.
\begin{algorithm}[htb]
\SetAlgoLined
\SetKwFunction{PTrain}{TrainingModelWeights}
\SetKwFunction{PInfer}{InferenceModelWeights}
\SetKwProg{Fn}{Procedure}{}{}
\Fn{\PTrain{}}{
\KwResult{Weight matrices $\{ W_i \}$ }
\For{each iteration $N$}{
\For{each layer $i$}{
$x_{i+1} \gets \sigma(W_i x_i +b_i)$
\tcp*{$x_i$:\footnotesize\texttt{input}}
\If{$N \bmod D = 0 $
}{
$x_{i+1} \gets x_{i+1} + e_{i}$
\tcp*{$e_{i}$ \footnotesize\texttt{in (\ref{eq:lra})}}
$W_i \gets W_i + \Delta W_i$
\tcp*{$\Delta W_{i}$ \footnotesize\texttt{in (\ref{eq:lra_dist})}}
}{
}
}
compute the loss $L$\;
\For{each layer $i$}{
$W_i \gets W_i - \eta \frac{\partial L}{\partial W_i}$
\tcp*{$\eta$:\footnotesize\texttt{learning rate}}
}
}
\For{each layer $i$}{
$W_i \gets W_i + \Delta W_i$\;
}
}
\Fn{\PInfer{}}{
\KwResult{Weight matrices $\{ \tilde{U_i}' \}, \{ \tilde{V_i}^T\}$}
\For{each layer $i$}{
$\tilde{U_i}',\tilde{V_i}^T \gets truncatedSVD(W_i) $
}
}
\caption{The hyper-LRA algorithm}
\end{algorithm}
The hyper-LRA algorithm modifies retraining process by adding the LRA distortion to weights and the corresponding errors to the outputs of layers every $D$ iterations, where $D$ is a distortion period. After retraining, instead of $W$, the multiplication of $\tilde{U}'$ and $\tilde{V}^T$ is used for the inference model.
\begin{comment}
For the successful retraining by the hyper-LRA, the sufficient conditions are
\begin{itemize}
\item The loss degradation caused by an LRA distortion should be gradually
decreased to sufficiently small number.
\item The model weights should be converged by which the accuracy degradation is sufficiently small.
\end{itemize}
\end{comment}
Note that the hyper-LRA algorithm optimizes $W$ rather than $\tilde{U}'$ and $\tilde{V}^T$.
In other words, the hyper-LRA method performs weight optimization in a hyperspace of the truncated weight space, which has the same dimension as the original weight space.
Therefore the hyper-LRA approach can provide a much higher compression ratio than the conventional LRA, at the cost of additional computation in the retraining process.
\section{On-Demand Adaptation}
\label{sec:adaptation}
The on-demand adaptation is an important requirement not only for personal devices such as mobiles but also for home appliances such as televisions.
We adopted a shallow fusion~\cite{kannan_wu_nguyen_sainath_chen_prabhavalkar_2018} method incorporating n-gram LMs at inference time.
By interpolating a general LM and domain-specific LMs, we were able to boost the accuracy on the target domains while minimizing the degradation on the general domain.
The log-probabilities computed from the LMs and the E2E model are combined at each beam step as follows,
\begin{equation}
\begin{aligned}
\label{eq:sf}
\log P'(y_l|\textbf{x},y_{1:l-1}) &= \log P(y_l|\textbf{x},y_{1:l-1}) \\ &+ \sum_{n=1}^N \alpha_n\log P_{LM_n}(y_l|y_{1:l-1})
\end{aligned}
\end{equation}
where $N$ is the number of n-gram LMs, and $P_{LM_n}$ is the posterior distribution computed by the $n$th n-gram LM. The LM distribution was calculated by looking up a probability for each BPE unit given the context.
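A single beam step of this fusion can be sketched as follows (an illustrative sketch operating on log-probability vectors over the BPE vocabulary; the names and the toy distributions are our own assumptions):

```python
import numpy as np

def shallow_fusion(e2e_logp, lm_logps, weights):
    """Add the weighted n-gram LM log-probabilities to the E2E model's
    log-probabilities for one beam step, then pick the best BPE label."""
    score = e2e_logp.copy()
    for alpha, lm_logp in zip(weights, lm_logps):
        score += alpha * lm_logp
    return score, int(np.argmax(score))

e2e = np.log([0.5, 0.3, 0.2])          # E2E posterior over 3 BPE labels
lm = np.log([0.1, 0.8, 0.1])           # a domain LM that prefers label 1
score, best = shallow_fusion(e2e, [lm], weights=[0.4])
print(best)                            # 1: the LM flips the decision
```

In a full decoder the fused scores would rank the beam hypotheses rather than pick a single label, but the per-step arithmetic is the same.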
\section{Experiment}
\label{sec:evlaution}
\subsection{Experimental setup}
\label{ssec:setup}
We first evaluated on the Librispeech corpus, which consists of 960 hours of data, and then on internal usage data as well.
The usage corpus consists of around 10K hours of transcribed speech for each of Korean and English, recorded on mobile phones and televisions.
We used one randomly sampled hour of usage data as the validation set for each language.
We doubled the speech corpus by adding random noise, both for training and for validation.
The decoding speed was evaluated on a Samsung Galaxy S10+ equipped with a Samsung Exynos 9820 chipset, a Mali-G76 MP12 GPU and 12GB of DRAM.
The speech data were sampled at 16 kHz with 16 bits per sample.
The speech data were encoded as 40-dimensional power mel-frequency filterbank features, computed by raising the mel-spectrogram to the power of $\frac{1}{15}$~\cite{vtlp2019}. The frames were computed every 10ms and were windowed by a 25ms Hanning window.
We split words into 10K word pieces through the byte-pair encoding (BPE) method for both the Korean and the English normalized text corpora. For Korean in particular, we reduced the total number of units for Korean characters from 11,172 to 68 by decomposing each Korean character into two or three graphemes, depending on the presence of a final consonant.
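This grapheme decomposition can be sketched with the standard Unicode Hangul formula, in which each precomposed syllable encodes a lead consonant, a vowel and an optional final consonant (an illustrative sketch of the unit reduction, not necessarily our exact production mapping):

```python
# Standard Unicode decomposition: syllable = 0xAC00 + (lead*21 + vowel)*28 + tail.
LEADS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
TAILS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(ch):
    """Split one precomposed Hangul syllable into 2-3 graphemes."""
    code = ord(ch) - 0xAC00
    lead, rest = divmod(code, 21 * 28)
    vowel, tail = divmod(rest, 28)
    return [g for g in (LEADS[lead], VOWELS[vowel], TAILS[tail]) if g]

print(decompose("한"))                  # ['ㅎ', 'ㅏ', 'ㄴ']: 3 graphemes
print(decompose("가"))                  # ['ㄱ', 'ㅏ']: no final consonant
```

The 68 output units then consist of these grapheme symbols plus the non-Hangul characters in the normalized corpus, instead of 11,172 distinct syllables.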
We constructed our ASR system based on ReturNN~\cite{returnn}.
In order to speed up the training, we used multi-GPU training based on the Horovod~\cite{sergeev2018horovod,Infra2019} all-reduce method.
For better convergence of the model, a ramp-up strategy for both the learning rate and the number of workers was used~\cite{Chiu2018StateoftheArtSR}.
We used a uniform label smoothing method on the output label distribution of the decoder for regularization, and scheduled sampling was applied at a later stage of training to reduce the mismatch between training and decoding. The initial total pooling factor in the encoder is 32 for Librispeech, but 16 is used for the internal usage data due to training sensitivity, and both are reduced to 8 after the pre-training stage.
The n-gram LMs were stored in a const arpa structure~\cite{Povey_ASRU2011}.
\subsection{Performance}
\label{ssec:performance}
\begin{table*}[th]
\caption{The performance of the attention-based models depending on the directionality and the cell size of the LSTM layers in the encoder. Joint CTC and label smoothing are applied for all the results, and data augmentation is used only on the usage data. The beam size of the beam-search decoding is 12.}
\label{tab:mocha_vs_bfa}
\centering
\begin{tabular}{cccccccccc}
\hline
\multirow{2}{*}{Encoder} & \multirow{2}{*}{Attention} & \multirow{2}{*}{Cell size}
& \multicolumn{2}{c}{Librispeech}
& \multicolumn{1}{c}{Usage KOR}
& \multicolumn{1}{c}{Usage ENG} \\
& & & WER (Test-clean) & WER (Test-other) & WER & WER \\ \hline \hline
Bi-LSTM & Full & 1024 & 4.38\% & 14.34\% & 8.58\% & 8.25\% \\ \hline
\multirow { 3}{*}{Uni-LSTM} & Full & 1536 & 6.27\% & 18.42\% & - & - \\
& \multirow {2}{*}{MoChA} & 1024 & 6.88\% & 19.11\% & 11.34\% & 10.77\% \\
& & 1536 & 6.30\% & 18.41\% & 9.33\% & 8.82\% \\ \hline
\end{tabular}
\end{table*}
\begin{table}[th]
\vspace{-0.2cm}
\caption{Accuracy improvement from the optimizations}
\label{tab:trainig_opt}
\centering
\begin{tabular}{ccc}
\hline
& \multicolumn{2}{c}{Librispeech}\\
& Test-clean & Test-other \\ \hline \hline
MoChA (baseline) & 6.70\% & 18.86\% \\ \hline
+ Joint CTC & \multirow{2}{*}{6.30\%} & \multirow{2}{*}{18.41\%} \\
\& Label smoothing & & \\
+ Spec augmentation & 5.93\% & 15.98\% \\
+ Joint MWER & 5.60\% & 15.52\% \\ \hline
\end{tabular}
\vspace{-0.2cm}
\end{table}
We performed several experiments to build the baseline model on each dataset; the evaluated accuracies are shown in Table~\ref{tab:mocha_vs_bfa}. In the table, $Bi-$ and $Uni-$ mean bi-directional and uni-directional respectively, and the cell size denotes the size of the encoder LSTM cells. The attention dimension is the same as the encoder cell size, and 1000 was used as the cell size of the decoder.
The chunk size of MoChA is 2 for all the experiments, since we could not see any significant improvement in accuracy by extending the size beyond two.
\begin{figure}[h]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[trim={0 4.2cm 0 4.2cm}, clip, height=40pt, width=0.8\linewidth]{alignment_bfa.pdf}
\caption{Bi-LSTM Full Attention}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[trim={0 4.2cm 0 4.2cm}, clip, height=40pt, width=.8\linewidth]{alignment_ufa.pdf}
\caption{Uni-LSTM Full Attention}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[trim={0 4.2cm 0 4.2cm}, clip, height=40pt, width=.8\linewidth]{alignment_mocha.pdf}
\caption{Uni-LSTM MoChA}
\end{subfigure}
\caption{Comparison of alignment by each attention method}
\end{figure}
As shown in Fig. 2, compared with the bi-directional LSTM case, the uni-directional model's alignment shows some time delay because a uni-directional LSTM cannot use backward information from the input sequence~\cite{Sak2015FastAA}.
Alignment calculation with soft full attention may have the advantage of seeing more information and utilizing a better context. But for speech recognition, since the alignment of speech utterances is monotonic, this advantage may be limited.
The accuracy of each model trained with the various optimization methods is shown in Table~\ref{tab:trainig_opt}.
The joint weight $\lambda$ for joint CTC training was 0.8, and it was gradually increased during the training.
We used 13 and 50 as the maximum sizes of the frequency-axis mask and the time(frame)-axis mask, respectively, and one mask was applied for each axis. For joint MWER training, we used 0.6 as $\lambda$ and a beam size of 4.
Spec augmentation brought a large improvement, especially on test-other, and after joint MWER training the WERs on test-clean and test-other were finally improved by a relative 16.41\% and 17.71\%, respectively, compared to the baseline.
\subsection{Compression}
\label{ssec:com_result}
We applied hyper-LRA to the weight matrices of each layer,
choosing the rank $r$ empirically.
For the encoder LSTM layers, the ranks of the first and the last layers were set larger than those of the internal layers due to the severity of the accuracy degradation.
The distortion period $D$ was set to the total number of iterations in one sub-epoch divided by 16.
The compressed model was retrained with the whole training data.
In addition, we adopted 8-bit quantization both to compress the model and to speed it up, using TensorFlow Lite~\cite{tensorflow2015-whitepaper, tflite}.
As shown in Table~\ref{tab:compress}, the sizes of the models were reduced by at least a factor of 3.4 by applying hyper-LRA, and in total by more than a factor of 13.68 after 8-bit quantization, with minimal performance degradation.
Furthermore, we were able to compensate for the performance loss by using joint MWER training.
At the same time, the decoding speeds of the Korean and English models became 13.97 and 9.81 times faster than those of the baseline models, respectively. The average latencies of the final models were 140ms and 156ms, and the memory usage during decoding (CODE + DATA) was 230MB and 235MB for Korean and English, respectively.
\begin{table}[th]
\vspace{-0.2cm}
\caption{Performance for hyper-LRA.
The size of models were evaluated in megabytes (MB), and the beam size was 4. xRT denotes real-time factor for decoding speed. }
\label{tab:compress}
\centering
\renewcommand{\tabcolsep}{1.4mm}
\begin{tabular}{cc|ccc|ccc}
\hline
\multirow{2}{*}{Bits} & Hyper &
\multicolumn{3}{c|}{Korean} & \multicolumn{3}{c}{English} \\
& LRA & WER & xRT & Size & WER & xRT & Size \\ \hline \hline
32 & no & 9.37 & 4.89 & 530.56 & 9.03 & 4.32 & 530.50 \\
32 & yes & 9.85 & 0.99 & 140.18 & 8.91 & 1.15 & 153.98 \\
32 & +MWER & 9.60 & 1.26 & 140.18 & 8.64 & 1.48 & 153.98 \\
8 & no & 9.64 & 1.18 & 132.88 & 9.07 & 0.94 & 132.87 \\
8 & yes & 10.21 & 0.33 & 35.34 & 9.24 & 0.38 & 38.77 \\
8 & +MWER & 9.80 & 0.35 & 35.34 & 8.88 & 0.44 & 38.77 \\ \hline
\end{tabular}
\vspace{-0.2cm}
\end{table}
\subsection{Personalization}
\label{ssec:pdss}
We evaluated our on-demand adaptation method for the three domains in Korean.
Names of contacts, IoT devices and applications were used to synthesize utterances from pattern sentences in which the names were replaced with a class name such as ``call @name''.
Individual n-gram LMs were built for each domain using the synthesized corpus.
The LMs for the specific domains were each built within 5 seconds, as shown in Table~\ref{tab:lmbuild}.
\begin{table}[th]
\vspace{-0.2cm}
\caption{Building times for n-gram LMs (in seconds).}
\label{tab:lmbuild}
\centering
\begin{tabular}{ccccc}
\hline
Domain & entities & patterns & utterances & time\\ \hline \hline
Contact & 2307 & 23 & 53061 & 4.37 \\
App & 699 & 25 & 17475 & 1.78 \\
IoT & 441 & 4 & 1764 & 0.74 \\ \hline
\end{tabular}
\vspace{-0.2cm}
\end{table}
As shown in Table~\ref{tab:pdss}, the WER for the App domain dropped dramatically from 12.76\% to 6.78\% without any accuracy degradation in the general domain.
The additional xRT for the LM fusion was less than 0.15 on average, even though the number of LM look-ups reached millions. The LM sizes for the general domain and for all three specific domains were around 43MB and 2MB, respectively.
All test sets were recorded on mobile phones.
\begin{table}[th]
\vspace{-0.2cm}
\caption{Performance improvement of on-demand adaptation. *xRTs were evaluated on-devices, but WERs were evaluated on servers with the uncompressed MoChA model in Table~\ref{tab:mocha_vs_bfa}.}
\label{tab:pdss}
\centering
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{Domain} &Length & \multicolumn{2}{c}{MoChA} & \multicolumn{2}{c}{Adapted} \\
&(in hours)& WER & xRT & WER & xRT \\ \hline \hline
General &1.0& 9.33 & 0.35 & 9.30 & 0.61 \\
Contact &3.1& 15.59 & 0.34 & 11.08 & 0.42 \\
App &1.2& 12.76 & 0.34 & 6.78 & 0.48 \\
IoT &1.5& 38.83 & 0.43 & 21.92 & 0.52 \\ \hline
\end{tabular}
\vspace{-0.2cm}
\end{table}
\section{Discussion}
\label{sec:foot}
We constructed the first on-device streaming ASR system based on MoChA models trained with a large corpus.
To overcome the difficulties in training the MoChA models, we adopted various training strategies such as joint training with CTC and MWER losses, layer-wise pre-training, and data augmentation.
Moreover, by introducing hyper-LRA, we reduced the size of our MoChA models to fit on devices without sacrificing recognition accuracy.
For personalization, we used a shallow fusion method with n-gram LMs, which improved results on target domains without sacrificing accuracy in the general domain.
\bibliographystyle{IEEEbib}
% ===== arXiv:2103.09301 =====
\newcommand{\putsec}[2]{\section{#1}\label{sec:#2}}
\newcommand{\putsubsec}[2]{\subsection{#1}\label{sec:#2}}
\newcommand{\putsubsubsec}[2]{\subsubsection{#1}\label{sec:#2}}
\newcommand{\secref}[1]{Section~\ref{#1}}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\newcommand{\tableref}[1]{Table~\ref{#1}}
\newcommand{\appref}[1]{Appendix~\ref{#1}}
\let\sub\textsubscript
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\makeatletter
\def\blfootnote{\gdef\@thefnmark{}\@footnotetext}
\makeatother
\newcommand*\circled[1]{\tikz[baseline=(char.base)]{
\node[white,shape=circle,draw,inner sep=2pt,fill=black] (char) {#1};}}
\newcommand{\softermax}{\texttt{Softermax}}
\section{Ease of Use}
\subsection{Maintaining the Integrity of the Specifications}
The IEEEtran class file is used to format your paper and style the text. All margins,
column widths, line spaces, and text fonts are prescribed; please do not
alter them. You may note peculiarities. For example, the head margin
measures proportionately more than is customary. This measurement
and others are deliberate, using specifications that anticipate your paper
as one part of the entire proceedings, and not as an independent document.
Please do not revise any of the current designations.
\section{Prepare Your Paper Before Styling}
Before you begin to format your paper, first write and save the content as a
separate text file. Complete all content and organizational editing before
formatting. Please note sections \ref{AA}--\ref{SCM} below for more information on
proofreading, spelling and grammar.
Keep your text and graphic files separate until after the text has been
formatted and styled. Do not number text heads---{\LaTeX} will do that
for you.
\subsection{Abbreviations and Acronyms}\label{AA}
Define abbreviations and acronyms the first time they are used in the text,
even after they have been defined in the abstract. Abbreviations such as
IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use
abbreviations in the title or heads unless they are unavoidable.
\subsection{Units}
\begin{itemize}
\item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''.
\item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity that you use in an equation.
\item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''.
\item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''.
\end{itemize}
\subsection{Equations}
Number equations consecutively. To make your
equations more compact, you may use the solidus (~/~), the exp function, or
appropriate exponents. Italicize Roman symbols for quantities and variables,
but not Greek symbols. Use a long dash rather than a hyphen for a minus
sign. Punctuate equations with commas or periods when they are part of a
sentence, as in:
\begin{equation}
a+b=\gamma\label{eq}
\end{equation}
Be sure that the
symbols in your equation have been defined before or immediately following
the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at
the beginning of a sentence: ``Equation \eqref{eq} is . . .''
\subsection{\LaTeX-Specific Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead
of ``hard'' references (e.g., \verb|(1)|). That will make it possible
to combine sections, add equations, or change the order of figures or
citations without having to go through the file line by line.
Please don't use the \verb|{eqnarray}| equation environment. Use
\verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}|
environment leaves unsightly spaces around relation symbols.
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
Do not use \verb|\nonumber| inside the \verb|{array}| environment. It
will not stop equation numbers inside \verb|{array}| (there won't be
any anyway) and it might stop a wanted equation number in the
surrounding equation.
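The advice above can be condensed into a short example; the label name is arbitrary:

```latex
% Preferred: {align} with a soft cross reference, instead of {eqnarray}.
\begin{align}
a + b &= \gamma, \label{eq:sum}\\
a - b &= \mu. \nonumber % suppresses this line's number only
\end{align}
% In text: refer to \eqref{eq:sum}, never to a hard-coded ``(1)''.
```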
\subsection{Some Common Mistakes}\label{SCM}
\begin{itemize}
\item The word ``data'' is plural, not singular.
\item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''.
\item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.)
\item A graph within a graph is an ``inset'', not an ``insert''. The word alternatively is preferred to the word ``alternately'' (unless you really mean something that alternates).
\item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''.
\item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep ``using'' lower-cased.
\item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''.
\item Do not confuse ``imply'' and ``infer''.
\item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen.
\item There is no period after the ``et'' in the Latin abbreviation ``et al.''.
\item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''.
\end{itemize}
An excellent style manual for science writers is \cite{b7}.
\subsection{Authors and Affiliations}
\textbf{The class file is designed for, but not limited to, six authors.} A
minimum of one author is required for all conference articles. Author names
should be listed starting from left to right and then moving down to the
next line. This is the author sequence that will be used in future citations
and by indexing services. Names should not be listed in columns nor grouped by
affiliation. Please keep your affiliations as succinct as possible (for
example, do not differentiate among departments of the same organization).
\subsection{Identify the Headings}
Headings, or heads, are organizational devices that guide the reader through
your paper. There are two types: component heads and text heads.
Component heads identify the different components of your paper and are not
topically subordinate to each other. Examples include Acknowledgments and
References and, for these, the correct style to use is ``Heading 5''. Use
``figure caption'' for your Figure captions, and ``table head'' for your
table title. Run-in heads, such as ``Abstract'', will require you to apply a
style (in this case, italic) in addition to the style provided by the drop
down menu to differentiate the head from the text.
Text heads organize the topics on a relational, hierarchical basis. For
example, the paper title is the primary text head because all subsequent
material relates and elaborates on this one topic. If there are two or more
sub-topics, the next level head (uppercase Roman numerals) should be used
and, conversely, if there are not at least two sub-topics, then no subheads
should be introduced.
\subsection{Figures and Tables}
\paragraph{Positioning Figures and Tables} Place figures and tables at the top and
bottom of columns. Avoid placing them in the middle of columns. Large
figures and tables may span across both columns. Figure captions should be
below the figures; table heads should appear above the tables. Insert
figures and tables after they are cited in the text. Use the abbreviation
``Fig.~\ref{fig}'', even at the beginning of a sentence.
\begin{table}[htbp]
\caption{Table Type Styles}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\
\cline{2-4}
\textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\
\hline
copy& More table copy$^{\mathrm{a}}$& & \\
\hline
\multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.}
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\begin{figure}[htbp]
\centerline{\includegraphics{fig1.png}}
\caption{Example of a figure caption.}
\label{fig}
\end{figure}
Figure Labels: Use 8 point Times New Roman for Figure labels. Use words
rather than symbols or abbreviations when writing Figure axis labels to
avoid confusing the reader. As an example, write the quantity
``Magnetization'', or ``Magnetization, M'', not just ``M''. If including
units in the label, present them within parentheses. Do not label axes only
with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization
\{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of
quantities and units. For example, write ``Temperature (K)'', not
``Temperature/K''.
% ===== arXiv:2102.06149 =====
\section{Introduction}
\label{sec:intro}
Up-to-date observational data suggest that our universe is mainly driven by a pressure-less (cold) dark matter (CDM) fluid and a dark energy (DE) fluid, where around 96 per cent ($\sim 28$ per cent DM $+$ $\sim 68$ per cent DE) of the total energy budget of the universe is occupied by this joint dark fluid \citep{Aghanim:2018eyx}. The fundamental nature of these fluids, such as their origin and dynamics, is yet to be known even after a series of astronomical missions. Therefore, understanding the dark picture of the universe has remained one of the greatest challenges in cosmology.
In order to reveal the physics of the dark sectors, various cosmological models have been proposed and investigated over the last several years \citep{Copeland:2006wr,Sotiriou:2008rp,Cai:2009zp,DeFelice:2010aj,Capozziello:2011et,Clifton:2011jh,Bamba:2012cp,Cai:2015emx,Nojiri:2017ncd,Bahamonde:2021gfp}.
The standard cosmological model $\Lambda$CDM is one of the simplest cosmological models which fits excellently to most of the observational probes. However, the physics of the dark fluids is not clear in this model as well $-$ the cosmological constant problem, for instance, is a serious issue \citep{Weinberg:1988cp}. Additionally, in this canonical picture of the universe,
several anomalies and tensions between different cosmological probes
may indicate a revision of the $\Lambda$CDM cosmology, see refs. \cite{ 2021arXiv210505208P,Schoneberg:2021qvd}.
In the $\Lambda$CDM model, we assume the simplest possibility for its ingredients $-$ the independent evolution of DM and DE. As the physics of the dark sector is not yet clear, there should not be any reason to exclude the possibility of an interaction between these components. By allowing the interaction or the energy exchange between DM and DE, one naturally generalizes the non-interacting scenarios.
The theory of the dark sector interaction did not appear suddenly in the literature. The limitations or issues of the standard cosmological model at the fundamental level motivated relaxing the assumption of independent evolution of DM and DE.
For instance, an interaction in the dark sector can provide a possible/promising explanation/solution to the cosmic coincidence problem (\cite{delCampo:2008jx,Velten}), $H_0$ tension \citep{DiValentino:2017iww,Kumar:2017dnp, Yang:2018uae,Pan:2019jqh,DiValentino:2019ffd,Lucca:2020zjb,2021PDU....3300862K,2021CQGra..38o3001D,2021JHEAp..32...28A,2021arXiv211205701R,2021PhRvD.104l3512A,2021Univ....7..300T,DiValentino:2021pow} that arises between the CMB measurements by Planck satellite within the $\Lambda$CDM cosmology \citep{Aghanim:2018eyx} and SH0ES \citep{Riess:2019cxk,2021arXiv211204510R}, and $S_8$ tension \citep{Pourtsidou:2016ico,An:2017crg,Kumar:2019wfs,2021PhRvD.104j4057D,2021PDU....3400899L,2022MNRAS.509.2994A} that arises between the Planck and weak lensing measurements \citep{S8_tension}. Additionally, an interaction in the dark sector could explain the phantom phase of the DE without any need of a scalar field with negative correction \citep{Wang:2005jx,Sadjadi:2006qb,Pan:2014afa,Bonilla_1,Bonilla_2,2021JCAP...10..008Y}. See \cite{Bolotin:2013jpa,Wang:2016lxa} for a comprehensive reading on interacting dark energy models. Therefore, based on such appealing outcomes, it is indeed desirable to consider a wider picture of our universe by including the interaction between DM and DE, and allow the observational data to favor or reject this possibility.
In a standard approach, the interaction between DM and DE is investigated through the inclusion of some phenomenological coupling function to describe the DM and DE dynamics intuitively. However, let us recall that some action formalisms, i.e., construction of the DE-DM interaction models from first principles, including the Noether symmetry approach, have also been developed in the literature, see e.g. \cite{2020PDU....2700444P,Gleyzes:2015pma,Boehmer:2015kta,Amico:2016qft,Kase:2019hor,2018arXiv180900556V,Pan:2020zza}. On the other hand, given the great interest of the community in this theoretical framework, accomplishing a model-independent analysis becomes a necessary task. In principle, one may do it using the cosmographic approach, wherein a series expansion is performed around $z = 0$ for a cosmological observable, and then the data are used to constrain the kinematic parameters. This procedure works fine for lower values of $z$, but may not be good enough for larger values of $z$, see \cite{2021PhRvD.104l3518L}. An interesting and robust alternative could be to consider a Gaussian process (GP) to reconstruct the cosmological parameters in a model-independent way \citep{GP_01,GP_02,GP_03,GP_04,GP_05,GP_06,GP_07,GP_08,Jesus2020,GP_09,GP_10,2021arXiv211014950M,2021ApJ...915..123S,Bernardo:2021cxi,Dialektopoulos:2021wde,Bengaly:2021wgc,Avila:2022xad}
or to fix a class of cosmological models \citep{2020CQGra..38e5007B,2021JCAP...09..014B,2021JCAP...07..048R,2021arXiv210401077E,2021PDU....3200812R}. The GP and other alternative approaches have been applied to reconstruct an interaction between DM and DE in a minimally model-dependent way in various works with different data sets and approximations \citep{Yang2015, Wang2015, GP_IDE_01, GP_IDE_02, GP_IDE_03, GP_IDE_04, GP_IDE_05, GP_IDE_06, GP_IDE_07}.
In this work, we employ the GP to carry out a joint analysis by using some geometrical cosmological probes, viz., Cosmic chronometers (CC), Supernova Type Ia (SN), Baryon Acoustic Oscillations (BAO), and the H0LiCOW lenses sample to constrain/reconstruct the interaction in the dark sector of the universe in two different frameworks, namely, one where the EoS of DE mimics the vacuum energy (known as an interacting vacuum energy scenario) and, secondly, a general coupling scenario where DE is allowed to assume a dynamical character via its equation of state (EoS). This latter possibility has not been studied much in the literature, viz., most of the works are carried out with only the constant or linear approximation of the EoS parameter of DE. Moreover, to the best of our knowledge, the reconstruction of the interaction in the dark sector has not been performed using a joint analysis. In addition, we simulate a catalogue of 1000 standard siren events from binary neutron star mergers, within the sensitivity predicted for the third-generation ground-based GW detector, the Einstein Telescope (ET), and we use these mock data to improve the reconstruction of the coupling function from the SN, BAO, CC and H0LiCOW data. A model-independent joint analysis from the above-mentioned data sets, including a forecast analysis with the simulated data for optimizing the covariance function (or kernel in GP language), as we present here, is to our knowledge new and not previously investigated in the literature. Indeed, a joint analysis with several observational probes is helpful to obtain tight constraints on the cosmological parameters. In this work, we develop this methodology to obtain an accurate and robust reconstruction of a possible interaction between DM and DE.
The paper is structured as follows. In Section \ref{sec-method-data-theory}, we describe the GP, the observational data sets and the theoretical framework used in this work for model-independent inference of the dark sector coupling. In Section \ref{sec-results}, we present and discuss our results on the reconstruction of the coupling function between DM and DE following the model-independent approach, wherein the subsections \ref{sec-ivs} and \ref{sec-ide} describe two different reconstructed scenarios. Further, in Section \ref{sec-gw}, we use the mock gravitational waves data in order to get a deeper understanding of the evolution of the coupling function. Finally, in Section \ref{sec-conclu}, we conclude our work with a brief summary of the entire study.
\section{Methodology, data sets and the theoretical background}
\label{sec-method-data-theory}
This section is divided into the following three parts: the Gaussian process, the observational data, and a basic framework of the theory, which we are going to test in this article using the observational data following the model-independent Gaussian approach.
\subsection{Gaussian process}
\label{sec-gaussian}
In a nutshell, the GP in cosmology allows us, given an observational data set $f(z)\pm \sigma_f$, to reconstruct the function $f(z)$ without assuming a parametrization or a physical model for the dark components of the universe. The GP method describes the observed data through a distribution over functions. The reconstructed function $f(z)$ (and its derivatives $f'(z)$, $f''(z)$, etc.) has a Gaussian distribution with a mean and a Gaussian error at each point $z$. The values of the function at different points $z$ and $z'$ are related by a covariance function $k(z,z')$, which depends only on a set of hyperparameters $l$ and $\sigma_f$, describing the extent and strength of the correlations among the reconstructed data points, respectively. Thus, $l$ gives a measure of the coherence length of the correlation along the $z$-direction, and $\sigma_f$ denotes the overall amplitude of the correlation. The hyperparameters are optimized with respect to the observed data; their values indicate a good fit of the function itself rather than of a model that mimics this behavior. Hence, the GP method is independent of any physical model, although it does assume a particular statistical kernel that determines the correlation between the reconstructed data points. The entire methodology used in this work is described in detail in section II of Ref.~\cite{Bonilla:2020wbn}.
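As a rough illustration of the reconstruction step (not the actual pipeline, which uses the GAPP code with optimized hyperparameters and a Matérn kernel), the following self-contained sketch implements GP regression with a squared-exponential kernel, fixed illustrative hyperparameters, and mock noise-free $E(z)$ data:

```python
import numpy as np

# Minimal GP regression sketch: squared-exponential kernel
# k(z, z') = sigma_f^2 * exp(-(z - z')^2 / (2 l^2)).
# Hyperparameters are fixed here purely for illustration.
def kernel(za, zb, sigma_f=1.0, l=0.5):
    return sigma_f**2 * np.exp(-(za[:, None] - zb[None, :])**2 / (2 * l**2))

def gp_posterior(z_obs, y_obs, y_err, z_star, sigma_f=1.0, l=0.5):
    """Posterior mean and 1-sigma error of the reconstructed function."""
    K = kernel(z_obs, z_obs, sigma_f, l) + np.diag(y_err**2)
    Ks = kernel(z_star, z_obs, sigma_f, l)
    Kss = kernel(z_star, z_star, sigma_f, l)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Mock data: the normalized Hubble rate E(z) of a flat LCDM background
# with assumed 5 per cent error bars (illustrative, not real CC data).
z = np.linspace(0.1, 2.0, 15)
E = np.sqrt(0.3 * (1.0 + z)**3 + 0.7)
E_err = 0.05 * np.ones_like(z)
mean, err = gp_posterior(z, E, E_err, z)
```

At the observed points the reconstruction should track the input closely; in the actual analysis the same machinery also returns the derivatives $D'$, $D''$, $D'''$ entering eq.~(\ref{eqn:WqD}).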
\subsection{Observational data sets}
\label{sec-data}
In this section we shall describe the geometrical probes in detail that we have used to trace the interaction in the dark sector.
\begin{itemize}
\item Cosmic Chronometers (CC): The CC approach is a very powerful way to trace the expansion history of the universe through measurements of the Hubble parameter. Here we take into consideration 30 measurements of the Hubble parameter distributed over the redshift interval $0 < z < 2$, as in Ref. \cite{Moresco16}. \\
\item Supernovae Type Ia (SNs): SNs provided the first astronomical data probing the accelerating expansion of our universe. Certainly, SNs are very important astronomical probes for analysing the properties of DE and the expansion history of the universe. The latest compilation of SN data (the Pantheon sample) used in this work consists of 1048 SN data points in the redshift range $0.01 < z < 2.3$ \citep{Scolnic18}. In the context of a universe with zero curvature, the entire Pantheon sample can be summarized in terms of six model-independent $E(z)^{-1}$ data points \citep{Riess18}. Here we use the six data points reported in ref. \cite{Haridasu18} in the form of $E(z)$, taking into account the theoretical and statistical considerations for their implementation. \\
\item Baryon Acoustic Oscillations (BAO): Another important cosmological probe is the BAO data. With the use of BAO, the expanding spherical wave produced by baryonic perturbations of acoustic oscillations in the recombination epoch can be traced through the correlation function of the large-scale structure, displaying a peak around 150$h^{-1} {\rm Mpc}$. Here we have used BAO measurements from various astronomical surveys: (i) measurements from the Sloan Digital Sky Survey (SDSS) III DR-12 which report three effective binned redshifts $z = 0.38, 0.51$ and $0.61$ \citep{Alam17}, (ii) measurements from the clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample reporting four effective binned redshifts $z = 0.98, 1.23, 1.52$ and $1.94$, as in \cite{Zhao19}, (iii) measurements from the high-redshift Lyman-$\alpha$ survey reporting two effective binned redshifts at $z = 2.33$ \cite{du_Mas20} and $z = 2.4$ \cite{du_Mas17}. All the measurements are presented in terms of $H(z) \times (r_d/r_{d,fid})$ km s$^{-1}$Mpc$^{-1}$, where $r_d$ denotes the co-moving sound horizon and $r_{d,fid}$ is the fiducial input value provided in the above surveys. \\
\item H0LiCOW sample: Finally, we use the sample from the $H_0$ Lenses in COSMOGRAIL's Wellspring program \footnote{\url{www.h0licow.org}}, another geometrical probe in this list which measures the Hubble constant in a direct way (without assuming any model in the background). The H0LiCOW collaboration has measured six lens systems, determined by measurements of time-delay distances, $D_{\Delta t}$, between multiple images of strong gravitational lens systems due to elliptical galaxies \citep{H0LiCOW}. The entire information is encapsulated in the time-delay distance $D_{\Delta t}$. Along with these six systems of strongly lensed quasars, the angular diameter distance to the lens $D_l$ also offers some additional information in terms of four more data points. Therefore, in total, one can employ 10 data points and we have used them in this work (we refer to \cite{Birrer2019,Pandey2020} for more details in this context).
\end{itemize}
\subsection{Theoretical framework}
\label{sec-theory}
For a model-independent theoretical description of the dark sectors' interaction, in this work we follow a methodology similar to that of ref. \cite{Yang2015}. In the context of a Friedmann$-$Lema\^{i}tre$-$Robertson$-$Walker universe, we assume that the total energy density of the universe comprises DM and DE only, where both are coupled through a non-gravitational interaction. Thus, the conservation equations for DM and DE are modified as
\begin{eqnarray}
&&\dot{\rho}_{\rm DM} +3 H \rho_{\rm DM} = -Q (t)~,\label{cont1}\\
&&\dot{\rho}_{\rm DE} + 3 H \rho_{\rm DE} (1+w)= Q (t)~,\label{cont2}
\end{eqnarray}
where $w = p_{\rm DE}/\rho_{\rm DE}$ is the equation of state of DE ($p_{\rm DE}$ denotes the pressure of the DE fluid), $H=\dot{a}/a$ is the expansion rate of the universe and it is related to the total energy density of the universe as $3H^2 = \rho_{\rm DM} + \rho_{\rm DE}$ (in the units where $8 \pi G = 1$). The function $Q (t)$ describes the interaction between DM and DE, and usually it is taken to be a function of the energy densities of DM and DE. For $Q (t) = 0$ with $w =-1$, the standard $\Lambda$CDM cosmology is recovered. Now, combining the conservation equations (\ref{cont1}) and (\ref{cont2}) with the expansion rate of the universe $H(z)$, we obtain \citep{Yang2015}:
\begin{eqnarray}
\label{eqn:WqE}
-wq &=& 2 \Big(E E'^2 + E^2 E'' - \frac{w'}{w} E^2 E' \Big) (1+z)^2\nonumber \\
&&- \Big[ 2(5 + 3 w)E^2 E' - 3 \frac{w'}{w} E^3\Big](1+z)\nonumber \\
&&+ 9(1 + w)E^3,
\end{eqnarray}
where for convenience, we have used a dimensionless variable $q = Q (t)/H^3_0$ to characterize the interaction, $E(z)=H(z)/H_0$ is the normalized Hubble rate and the prime denotes differentiation with respect to the redshift $z$. Let us note that the symbol $q$ is usually used to represent the deceleration parameter in the literature, but here this symbol has a different meaning as defined above. The detailed derivation of equation (\ref{eqn:WqE}) is given in Appendix \ref{sec-appendix}. Now, using the normalized co-moving distance,
\begin{eqnarray}
\label{eqn:D}
D = \frac{H_0}{c} \left(\frac{1}{1+z} \right) d_L(z),
\end{eqnarray}
where $d_L(z)$ represents the luminosity distance at redshift $z$, eq.(\ref{eqn:WqE}) can be expressed alternatively as
\begin{eqnarray}
\label{eqn:WqD}
-wq &=& 2 \Big(\frac{3 D''^2}{D'^5} - \frac{D'''}{D'^4} + \frac{w' D''}{w D'^4} \Big) (1+z)^2\nonumber \\
&& + \Big[2(5 + 3w)\frac{D''}{D'^4} + \frac{3 w'}{w D'^3}\Big](1+z)\nonumber\\
&& + \frac{9(1 + w)}{D'^3}.
\end{eqnarray}
The above methodology represents a general framework to reconstruct the coupling function with minimal assumptions. In fact, the only assumptions are the validity of the cosmological principle and, as a theoretical prior, a possible coupling between DM and DE; this second assumption must be tested against the observational data.
In what follows, we will test this theoretical framework.
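As a quick numerical sanity check of this framework (our illustration, not part of the original derivation), eq.~(\ref{eqn:WqE}) must give $q = 0$ for a non-interacting flat $\Lambda$CDM background, where $E^2(z) = \Omega_m (1+z)^3 + 1 - \Omega_m$, $w = -1$ and $w' = 0$:

```python
# Check that eq. (3) vanishes identically for non-interacting LambdaCDM:
# E^2 = Om (1+z)^3 + 1 - Om, w = -1, w' = 0  =>  q = 0.
def q_lcdm(z, Om=0.3):
    x = 1.0 + z
    E = (Om * x**3 + 1.0 - Om) ** 0.5
    Ep = 3.0 * Om * x**2 / (2.0 * E)                            # dE/dz
    Epp = 3.0 * Om * x / E - 9.0 * Om**2 * x**4 / (4.0 * E**3)  # d2E/dz2
    # Eq. (3) with w = -1 and w' = 0 (so -w q = q):
    return 2.0 * (E * Ep**2 + E**2 * Epp) * x**2 \
        - 2.0 * (5.0 + 3.0 * (-1.0)) * E**2 * Ep * x \
        + 9.0 * (1.0 + (-1.0)) * E**3

assert all(abs(q_lcdm(z)) < 1e-10 for z in (0.0, 0.5, 1.0, 2.0))
```

The analytic derivatives follow directly from $E^2 = \Omega_m(1+z)^3 + 1 - \Omega_m$; the two surviving terms cancel exactly, as expected for a vanishing coupling.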
\begin{figure*}
\begin{center}
\includegraphics[width=3.1in]{q_1.pdf} \,\,\,\,
\includegraphics[width=3.1in]{q_2.pdf}
\caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the interacting vacuum energy scenario from CC+SN+BAO (Orange) and CC+SN+BAO+H0LiCOW (Blue) data. Right-hand panel: The same as in left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves represent the GP mean. }
\label{IVCDM_results01}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=3.1in]{w_q_1.pdf} \,\,\,\,
\includegraphics[width=3.1in]{W_q_2.pdf}
\caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the general interacting scenario of the dark sectors' from CC+SN+BAO (Green) and CC+SN+BAO+H0LiCOW (Yellow) data. Right-hand panel: The same as in left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to canonical $\Lambda$CDM prediction and the solid curves stand for the GP mean. }
\label{Geral_results}
\end{center}
\end{figure*}
\section{Results and Discussions}
\label{sec-results}
In this section, we present and discuss the results of our analyses considering two separate cases. First, we fix the EoS of DE to $w=-1$ and reconstruct the interaction function $q = Q (t)/H_0^3$. This possibility characterizes a very well-known sub-class of interaction scenarios in the dark sector, known as interacting vacuum energy. Secondly, we consider a very general possibility, assuming $w$ to be a free and dynamical function, and similarly reconstruct the interaction function $q$. Thus, having both possibilities, we explore a very general description of the dark coupling in a model-independent approach. Before we enter into the main results, we rescale the function $q$ (see eq.(\ref{eqn:WqD})) to $\delta (z) = q(1+z)^{-6}$. Such a pre-factor is just a scale transformation with respect to $z$, introduced to better expose the results in the graphical description. Thus, from here onwards, the function $\delta (z)$ will characterize the coupling function.
We proceed further considering the above two scenarios of coupling in the dark sector. To reconstruct $\delta (z)$, we use the $M_{9/2}$ kernel in all the analyses performed in this work. In the case where we assume $w(z)$ to be a free function, we follow the same methodology as presented in \cite{Bonilla:2020wbn}. For this purpose, we have used modified versions of some numerical routines available in the public GAPP (Gaussian Processes in Python) code \cite{GP_01}. In all of our analyses, we employ the GP to perform a joint analysis using the minimal data set combination CC+SN+BAO, which to our knowledge has not been investigated previously in the literature. We now present and discuss our main results.
\subsection{Interacting Vacuum Energy}
\label{sec-ivs}
In Fig. \ref{IVCDM_results01}, we show the reconstruction of $\delta(z)$ using the data combinations CC+SN+BAO and CC+SN+BAO+H0LiCOW. In both analyses, we note that for $z > 0.5$ the dynamical coupling function $\delta (z)$ between the dark components is statistically well compatible with $\delta (z) =0$. It is interesting to note that the GP mean predicts a possible oscillation in $\delta (z)$, with the mean crossing between positive and negative values over the analysed range of $z$. This result strengthens some earlier interaction models having a sign-changeable property, see for instance \cite{Pan:2019jqh,Pan:2020bur,2021PhRvD.103h3520Y}.
For the present scenario, at late cosmic time, i.e. for $z < 0.5$ ($z < 0.25$), we find a trend towards $\delta < 0$ for CC+SN+BAO+H0LiCOW (CC+SN+BAO) data. When evaluated at the present moment, we find $\delta(z=0) = -0.37 \pm 0.24$ ($-0.76 \pm 0.12$) at $1\sigma$ CL from CC+SN+BAO (CC+SN+BAO+H0LiCOW) data. This suggests an interaction in the dark sector at more than $3\sigma$ CL from the CC+SN+BAO+H0LiCOW joint analysis. It is important to emphasize that these constraints on $\delta(z)$ are subject to the condition $w=-1$. Also, we notice that the combined analysis with several data sets offers a more stringent bound on the interaction function compared to \cite{Yang2015}, where only the SN Ia Union 2.1 data set~\cite{Suzuki:2011hu} was employed.
\subsection{General interaction scenario in the dark sector}
\label{sec-ide}
In the previous subsection, we analysed a particular interaction case, namely the interacting vacuum energy ($w=-1$), to obtain the constraints on $\delta(z)$. As a second round of analysis, we relax this condition by assuming $w(z)$ to be a free function. This possibility allows us to reconstruct the coupling in the dark sector in a general way, since in this case no physical assumption is imposed on the EoS of DE.
In Fig. \ref{Geral_results}, we show the reconstruction of $\delta(z)$ from the CC+SN+BAO and CC+SN+BAO+H0LiCOW data combinations. Since in this scenario we have an additional free parameter $w$ through which errors propagate, compared to the previous case with $w =-1$, larger error bars on the reconstructed $\delta (z)$ are expected.
As a general feature of the GP mean, we can note a flux of energy from DM to DE at high $z$; as cosmic time evolves, the coupling function $\delta (z)$ reverses its sign at approximately $z < 0.5$, i.e. at low $z$ (late times). This again supports some phenomenological models of the interaction \citep{Pan:2019jqh,Pan:2020bur}. In this general framework, we find $\delta(z=0) = -0.31 \pm 0.77$ at $1\sigma$ CL from the CC+SN+BAO data and $\delta(z=0) = -0.64 \pm 0.43$ at $1\sigma$ CL from the CC+SN+BAO+H0LiCOW data. These predictions are compatible with the $\Lambda$CDM cosmology, i.e., $\delta =0$.
\begin{figure*}
\begin{center}
\includegraphics[width=3.1in]{q_3.pdf} \,\,\,\,
\includegraphics[width=3.1in]{q_4.pdf}
\caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL from CC+SN+BAO+GW (Blue) and CC+SN+BAO+H0LiCOW+GW (Red) data, in the interacting vacuum energy scenario. Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves are for the GP mean.}
\label{results_mock_data}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=3.1in]{w_q_3.pdf} \,\,\,\,
\includegraphics[width=3.1in]{W_q_4.pdf}
\caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the general interacting scenario of the dark sector from CC+SN+BAO+GW (Black) and CC+SN+BAO+H0LiCOW+GW (Purple) data. Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves stand for the GP mean.}
\label{results_mock_data_w}
\end{center}
\end{figure*}
\section{Forecast from gravitational wave standard sirens}
\label{sec-gw}
To impose more robust and accurate constraints on the $\delta (z)$ function, we optimize the covariance function using mock gravitational-wave (GW) data generated by assuming the $\Lambda$CDM model as the fiducial one. As argued in \cite{Seikel2013}, for a non-parametric regression method such as GP, we aim to generate confidence limits that trap the true function appropriately. This can be problematic when evaluating functions such as $w(z)$ and $\delta (z)$, because these quantities depend on the second- and third-order derivatives of the cosmological observables. We can avoid this problem by identifying an appropriate covariance function that reproduces the expected models accurately, which we achieve by adding simulated data. To accomplish this objective, we create a mock catalogue of standard sirens, the gravitational-wave analogue of astronomical standard candles, which can provide powerful information about the dynamics of the universe. For a given GW strain signal, $h(t) = A(t) \cos [\Phi(t)]$, the stationary-phase approximation can be applied to the orbital phase of the inspiraling binary system to obtain its Fourier transform $\tilde{h}(f)$. For a coalescing binary system of masses $m_1$ and $m_2$,
\begin{equation}
\label{waveform}
\tilde{h}(f) = Q \mathcal{A} f^{-7/6} e^{i\Phi(f)}.
\end{equation}
Here $\mathcal{A} \propto 1/d_L$, with $d_L$ the luminosity distance to the merger's redshift, and $\Phi(f)$ is the binary system's inspiral phase. For more details on the post-Newtonian coefficients and waveforms, one may refer to \cite{Agostino_Nunes2019} and Appendix A therein. After defining the GW signal, for a high enough signal-to-noise ratio (SNR), one may obtain upper bounds on the free parameters of the GW signal $\tilde{h}(f)$ by using a Fisher information analysis. Estimating $d_L(z)$ from GW standard siren mock data is a well-established approach; see \cite{Agostino_Nunes2019} and references therein. In what follows, we briefly describe the methodology used to generate the standard siren mock catalogue.
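For orientation, the scaling $\mathcal{A} \propto 1/d_L$ can be made concrete by evaluating $d_L(z)$ numerically in a fiducial flat $\Lambda$CDM background. The sketch below is illustrative; the function name and the trapezoidal integration are our own choices, and the parameter values match those quoted later for the mock catalogue.

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def d_lum(z, H0=67.4, Om=0.31, n=2000):
    # luminosity distance [Mpc] in flat LCDM: d_L = (1+z) * (c/H0) * int_0^z dz'/E(z')
    if z == 0.0:
        return 0.0
    E = lambda zz: math.sqrt(Om*(1 + zz)**3 + (1 - Om))
    h = z / n
    s = 0.5*(1.0/E(0.0) + 1.0/E(z)) + sum(1.0/E(i*h) for i in range(1, n))
    return (1 + z) * (C_KMS/H0) * h * s

# the GW amplitude falls off as 1/d_L: a source at z = 1 is fainter than one at z = 0.3
dimming = d_lum(0.3) / d_lum(1.0)
```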
In order to generate the mock standard siren catalogue, we consider the ET power spectral density noises. The ET is a third-generation ground-based detector covering frequencies in the range $1-10^4$ Hz, with an amplitude sensitivity expected to be ten times better than that of the current advanced ground-based detectors. The ET conceptual design study predicts BNS detections of the order of $10^3-10^7$ per year. However, only a small fraction ($\sim 10^{-3}$) of them is expected to be accompanied by a short $\gamma$-ray burst observation. Assuming a detection rate of $\mathcal{O}(10^5)$, the number of events with short $\gamma$-ray bursts will be $\mathcal{O}(10^2)$ per year.
In our simulations, 1000 BNS mock GW standard siren merger events up to $z = 2$ are considered. In the mock catalogue, we have used the input values $H_0 = 67.4$ $km$ $s^{-1}$ $Mpc^{-1}$ and $\Omega_{m0} = 0.31$ for the Hubble constant and matter density parameter, respectively, in agreement with the most recent Planck CMB data (within the $\Lambda$CDM paradigm) \cite{Aghanim:2018eyx}. We have estimated the measurement error on the luminosity distance for each event using a Fisher matrix analysis on the waveforms (see ref.~\cite{Agostino_Nunes2019} for details). We have calculated the SNR of each event and counted it as a GW detection provided SNR $> 8$. In what follows, we describe the evolution of the interaction function with the inclusion of the mock GW standard sirens alongside the standard cosmological probes.
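A schematic version of the catalogue construction is sketched below. The uniform redshift draw and the 10\% fractional distance error are simplifying placeholders of our own: the actual catalogue draws redshifts from a merger-rate distribution and assigns each event a Fisher-matrix error estimate.

```python
import math
import random

C_KMS = 299792.458  # speed of light [km/s]

def d_lum(z, H0=67.4, Om=0.31, n=400):
    # luminosity distance [Mpc] in the fiducial flat LCDM model
    E = lambda zz: math.sqrt(Om*(1 + zz)**3 + (1 - Om))
    h = z / n
    s = 0.5*(1.0/E(0.0) + 1.0/E(z)) + sum(1.0/E(i*h) for i in range(1, n))
    return (1 + z) * (C_KMS/H0) * h * s

random.seed(2022)
catalogue = []
for _ in range(1000):                     # 1000 BNS events up to z = 2
    z = random.uniform(0.02, 2.0)         # placeholder redshift distribution
    dl_true = d_lum(z)
    err = 0.10 * dl_true                  # placeholder for the Fisher-matrix error
    catalogue.append((z, random.gauss(dl_true, err), err))
```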
In Fig.~\ref{results_mock_data}, we show the reconstructed interaction function $\delta(z)$ from the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data combinations for the simple interaction scenario with $w = -1$. When evaluated at the present moment, we find $\delta(z=0) = -0.70 \pm 0.14$ at $1\sigma$ CL for CC+SN+BAO+GW, and $\delta(z=0) = -0.833 \pm 0.016$ at $1\sigma$ CL for CC+SN+BAO+H0LiCOW+GW under the interacting vacuum-energy assumption. Analysing the behavior of the $\delta$ function in the range $z \in [0, 2.5]$, we find evidence for a sign transition in $\delta$, which quantifies the interaction between the dark components. We clearly notice a preference for $\delta < 0$ at late times. In Fig.~\ref{results_mock_data_w}, we show the reconstructed interaction function $\delta(z)$ from the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data combinations under the general assumption that $w(z)$ is a free function of $z$. In this case, we find $\delta(z=0) = -0.49 \pm 0.69$ at $1\sigma$ CL from CC+SN+BAO+GW and $\delta(z=0) = -0.705 \pm 0.066$ at $1\sigma$ CL from CC+SN+BAO+H0LiCOW+GW. Even after including the GW mock data in CC+SN+BAO, we note that $\delta$ is compatible with $\Lambda$CDM. On the other hand, from the CC+SN+BAO+H0LiCOW+GW data, we notice a prediction of $\delta < 0$ at late times.
It is important to note that the GW mock catalogue (generated assuming a fiducial $\Lambda$CDM model) is used for the purpose of optimizing the covariance function, as argued previously. The results summarized in Fig.~\ref{results_mock_data_w} are the most realistic ones, since they correspond to the most general case.
\section{Conclusions}
\label{sec-conclu}
In this work, we have presented some generalized aspects of the dark sector interaction that may be of interest to the community: (i) We have investigated the case where DE can assume a dynamical character through its equation of state, along with the simplest vacuum-energy case $w =-1$. (ii) We have performed joint analyses of the dark sector interaction with several geometrical probes following the minimally model-dependent approach of GP. (iii) We have optimized the covariance function using mock GW standard sirens to better reconstruct the function $\delta(z)$. In short, all these pieces of investigation have led to more general and robust results, which could be helpful for a deeper understanding of the physics of the dark sector. Our observations are as follows. For both the interacting vacuum and the general scenario, we find that $\delta (z)$ exhibits a transient nature according to the analyses of the CC+SN+BAO and CC+SN+BAO+H0LiCOW data, and at very late times $\delta (z)$ enters the negative region (see Figs. \ref{IVCDM_results01} and \ref{Geral_results}). This conclusion remains unaltered when we add the 1000 mock GW standard sirens to the above two combined data sets (see Figs. \ref{results_mock_data} and \ref{results_mock_data_w}). However, the indication of a late-time interaction is most strongly pronounced in the interacting vacuum scenario, where we find $\delta (z = 0) \neq 0$ at more than $1\sigma$ CL for the CC+SN+BAO data, and at more than $3\sigma$ CL for the CC+SN+BAO+H0LiCOW data. This is an interesting result, because the transfer of energy among the dark sector components, which has been observed in some phenomenological models, is not ruled out in light of the model-independent analysis, nor in the combined analyses that we have performed during the reconstruction.
However, concerning the general interacting picture, we see that $\delta (z =0) =0$ is compatible within $1\sigma$ for both CC+SN+BAO and CC+SN+BAO+H0LiCOW data.
When the GW standard sirens enter the analysis, we find for the interacting vacuum scenario that, again, $\delta (z =0) \neq 0$ at several standard deviations for both the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data. For the general scenario, $\delta (z= 0)=0$ remains compatible within $1\sigma$ for the CC+SN+BAO+GW data, but for the CC+SN+BAO+H0LiCOW+GW data we find a strong preference for an interaction at several standard deviations. Summarizing the results, we find that the model-independent analyses indicate a possible interaction in the dark sector, which is most strongly preferred in the scenario with $w =-1$. Based on the findings of this study, we believe it will be worthwhile to investigate, in future communications, various statistical techniques for reconstructing the function $\delta (z)$, such as neural networks, principal component analysis and others, which may provide statistical improvements over the standard GP method that we use in the study of cosmological parameters.
\section*{Acknowledgements}
\noindent
The authors thank the referee for some useful comments that improved the manuscript.
SK gratefully acknowledges the support from Science and Engineering Research Board (SERB), Govt. of India (File No. CRG/2021/004658). RCN would like to thank the agency FAPESP for financial support under the project No. 2018/18036-5. SP acknowledges the Mathematical Research Impact-Centric Support Scheme (File No. MTR/2018/000940) of SERB, Govt. of India.
\section*{Data Availability}
The observational data used in this article will be shared on reasonable request to the corresponding author.
\section{Introduction}
Wavelet-based methods are widely applied in a range of fields, such as mathematics, signal and image processing, geophysics, and many others. In statistics, applications of wavelets arise mainly in the areas of non-parametric regression, density estimation, functional data analysis and stochastic processes. These methods essentially exploit the possibility of representing functions that belong to certain functional spaces as expansions in a wavelet basis, similar to other expansions such as splines or Fourier series. However, wavelet expansions have characteristics that make them particularly useful for function representation: they are localized in both time and scale in an adaptive way, their coefficients are typically sparse, they can be obtained by fast computational algorithms, and the magnitudes of the coefficients are linked to the smoothness properties of the functions they represent. These properties enable time/frequency data analysis, bring computational advantages, and allow for statistical data modeling at different resolution scales.
Wavelet shrinkage methods estimate the coefficients associated with the representation of the function in the wavelet domain by reducing the magnitude of the observed (empirical) coefficients obtained by applying the wavelet transform to the original data. There are in fact several shrinkage techniques available in the literature. The main works in this area are those of Donoho (1993a, 1993b) and Donoho and Johnstone (1994a, 1994b, 1995), but Donoho et al. (1995, 1996), Johnstone and Silverman (1997), Vidakovic (1998, 1999b) and Antoniadis et al. (2002) can also be cited. For more details on shrinkage methods, see Vidakovic (1999a) and Jansen (2001). The standard statistical models in which shrinkage techniques are applied assume Gaussian additive errors. These models are important not only for their applicability to a range of different problems, but also from the mathematical point of view, since Gaussian additive errors remain both Gaussian and additive after the wavelet transform.
Bayesian shrinkage methods have also been extensively studied, mainly for the possibility of adding, by means of prior probability distributions, information about the regression coefficients and other parameters to be estimated. Specifically in the case of wavelets, information about the degree of sparsity of the coefficient vector, the support of the coefficients (if they are bounded), and other features can be incorporated into the statistical model by means of Bayesian procedures. In this sense, the choice of the prior distribution of the wavelet coefficients is extremely important for achieving meaningful results.
Several Bayesian shrinkage procedures have been studied and proposed in recent years in many statistical fields. Some of them are found in Lian (2011), Berger et al. (2012), Karagiannis et al. (2015), Griffin and Brown (2017) and Torkamani and Sadeghzadeh (2017). Prior models in the wavelet domain have been proposed since the 1990s; see Chipman et al. (1997), Abramovich et al. (1998), Abramovich and Benjamini (1996), Vidakovic (1998), Vidakovic and Ruggeri (2001), Angelini and Vidakovic (2004), Johnstone and Silverman (2005), Rem\'enyi and Vidakovic (2015), and Bhattacharya et al. (2015), among others.
Bayesian models in the wavelet domain have been shown to be capable of incorporating prior information about the unknown regression function, such as smoothness, periodicity, sparseness, self-similarity and, for some particular bases (Haar), monotonicity. However, little attention has been given to bounded priors, which can be important for modelling the denoising of bounded-energy signals; the exceptions are the uniform and Bickel distributions proposed by Angelini and Vidakovic (2004), even though bounded-energy signals occur in practice.
In this paper, we propose and explore a beta distribution symmetric around zero as a prior distribution for the location parameter in a Gaussian model on wavelet coefficients. As traditionally done in this kind of analysis, the prior is in fact a distribution contaminated at 0: a point mass at zero is added to a spread part of the prior, which facilitates thresholding. The flexibility of the beta distribution, as the spread part of the prior, is readily controlled by a convenient choice of its parameters. Moreover, we show that there is an interesting relationship between the (hyper)parameters and the degree of wavelet shrinkage, which is useful in data denoising problems. We would like to incorporate a prior belief on the boundedness of the energy of the signal (the $L_2$-norm of the regression function). Prior information on the energy bound often exists in real-life problems, and it can be modelled by the assumption that the location parameter space is bounded. Estimation of a bounded normal mean has been considered in Miyasawa (1953), Bickel (1981), Casella and Strawderman (1981), and Vidakovic and DasGupta (1996). In our context, even if the structure of the prior can be supported by the analysis of the empirical distribution of the wavelet coefficients, the precise elicitation of the prior distribution cannot be done without some kind of approximation. Of course, when prior knowledge on the signal-to-noise ratio (SNR) is available, any symmetric and unimodal distribution supported on a bounded set, say $[-m, m]$, is a possible candidate for the prior. If the problem is rescaled so that the size of the noise (its variance) is 1, then $m$ can be taken as the SNR. In this context, the beta distribution fits very well due to its boundedness and great flexibility, which is a major advantage over the previously proposed uniform and Bickel priors. Confirming this, the performance of the shrinkage rules under the beta prior was better than that of some of the most commonly used shrinkage/thresholding methods in most of the scenarios considered.
This paper is organized as follows: Section 2 defines the considered model in the time and wavelet domains and the proposed prior distribution; Section 3 presents the shrinkage rule and its statistical properties, such as variance, bias and risk. As an extension of the beta prior, we consider the shrinkage rule under a triangular prior in Section 4. Section 5 is dedicated to prior elicitation. To verify the strength of the proposed approach, simulation studies are performed in Section 6, and the shrinkage rule is applied to a spike sorting real data set in Section 7. Section 8 provides conclusions.
\section{The model}
\subsection{The symmetric around zero beta distribution}
In statistics, the beta distribution is extensively used to model variables on the $[0,1]$ domain. The distribution is extremely flexible in shape, controlled by convenient choices of its parameters. In our framework, it is convenient to use its version shifted and rescaled to the interval $[-m,m]$, with parameters chosen to keep it symmetric about 0. Therefore, we propose the use of the beta distribution with support symmetric around zero as a prior distribution for the wavelet coefficients. Its density function is
\begin{equation}\label{eq:beta}
g(x;a,m) = \frac{(m^2 - x^2)^{(a-1)}}{(2m)^{(2a-1)}B(a,a)}\mathbb{I}_{[-m,m]}(x),
\end{equation}
\noindent where $B(\cdot , \cdot)$ is the standard beta function, $a>0$ and $m>0$ are the parameters of the distribution, and $\mathbb{I}_{[-m,m]}(\cdot)$ is the indicator function, equal to 1 when its argument lies in the interval $[-m,m]$ and 0 otherwise.
For $a>1$, the density function (\ref{eq:beta}) is unimodal around zero and, as $a$ increases, the density becomes more concentrated around zero. This is an important feature for wavelet shrinkage methods, since high values of $a$ imply higher levels of shrinkage of the empirical coefficients, which results in sparse estimated coefficients. Density \eqref{eq:beta} reduces to the uniform for $a=1$, which was already considered by Angelini and Vidakovic (2004) as a prior for the wavelet coefficients. For this reason, we consider in this work beta densities with $a \geq 1$. Figure \ref{fig:beta} shows the beta density function for some selected values of $a$ in the interval $[1,10]$ and $m=3$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.50]{betadist.png}
\caption{Beta densities for some values of $a \in [1,10]$ and $m=3$.}\label{fig:beta}
\end{figure}
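Density \eqref{eq:beta} is straightforward to check numerically. The sketch below (function names are ours) confirms that it integrates to one over $[-m,m]$ and concentrates around zero as $a$ grows:

```python
import math

def beta_fn(a, b):
    # standard beta function B(a, b) via the gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def g(x, a, m):
    # symmetric-around-zero beta density on [-m, m], Eq. (eq:beta)
    if abs(x) >= m:
        return 0.0
    return (m*m - x*x)**(a - 1) / ((2*m)**(2*a - 1) * beta_fn(a, a))

def total_mass(a, m, n=4000):
    # midpoint-rule integral of g over [-m, m]; should be ~1 for any valid (a, m)
    h = 2*m / n
    return h * sum(g(-m + (i + 0.5)*h, a, m) for i in range(n))
```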
\subsection{Beta distribution as prior to the wavelet coefficients}
We start with the nonparametric regression problem of the form
\begin{equation} \label{eq:modeltime}
y_i = f(x_i) + e_i , \qquad i=1,...,n=2^J, J \in \mathbb{N},
\end{equation}
\noindent where $f \in \mathbb{L}_2(\mathbb{R})= \{f:\int f^2 < \infty\}$ and $e_i$, $i=1,...,n$, are zero mean independent normal random variables with unknown variance $\sigma^2$. In vector notation, we have
\begin{equation}\label{eq:modeltimevec}
\boldsymbol{y} = \boldsymbol{f} + \boldsymbol{e},
\end{equation}
\noindent where $\boldsymbol{y} = (y_1,...,y_n)'$, $\boldsymbol{f} = (f(x_1),...,f(x_n))'$ and $\boldsymbol{e} = (e_1,...,e_n)'$. The goal is to estimate the unknown function $f$. After applying a discrete wavelet transform (DWT), given by an orthogonal matrix $D$, to \eqref{eq:modeltimevec}, we obtain the following model in the wavelet domain,
\begin{equation} \label{eq:modelvec}
\boldsymbol{d} = \boldsymbol{\theta} + \boldsymbol{\epsilon},
\end{equation}
where
$\boldsymbol{d} = D\boldsymbol{y}$, $\boldsymbol{\theta} = D \boldsymbol{f}$ and $\boldsymbol{\epsilon} = D \boldsymbol{e}$.
Due to the independence of the random errors and the orthogonality of the transform $D$, the model in the wavelet domain remains additive and the errors are i.i.d. normal. Because of the strong decorrelating property of wavelets, we can study one coefficient at a time. For the $i$th component of the vector $\boldsymbol{d}$, we have the simple model
\begin{equation}\label{eq:model}
d_i = \theta_i + \epsilon_i,
\end{equation}
\noindent where $d_i$ is the empirical wavelet coefficient, $\theta_i \in [-m,m]$ is the coefficient to be estimated and $\epsilon_i \sim N(0,\sigma^2)$ is the normal random error with unknown variance $\sigma^2$. For simplicity of notation, we suppress the subscripts of $d$, $\theta$ and $\epsilon$. Note that, according to model \eqref{eq:model}, $d|\theta \sim N(\theta,\sigma^2)$, and thus the problem of estimating the function $f$ becomes a normal mean estimation problem in the wavelet domain for each coefficient. Once this bounded mean estimation problem is solved, the vector $\boldsymbol{f}$ can be estimated by applying the inverse discrete wavelet transform (IDWT) to $\boldsymbol{\hat{\theta}}$.
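The argument above rests on $D$ being orthogonal: since $DD' = I$, i.i.d. Gaussian noise stays i.i.d. Gaussian in the wavelet domain. A small sketch with the Haar DWT matrix makes this concrete (illustrative only; any orthonormal wavelet basis behaves the same way):

```python
def haar_matrix(n):
    # orthonormal Haar DWT matrix for n a power of 2, built recursively
    if n == 1:
        return [[1.0]]
    h = haar_matrix(n // 2)
    s = 2 ** -0.5
    smooth = [[x for v in row for x in (s*v, s*v)] for row in h]  # pairwise averages
    detail = [[0.0] * n for _ in range(n // 2)]                   # pairwise differences
    for i in range(n // 2):
        detail[i][2*i], detail[i][2*i + 1] = s, -s
    return smooth + detail

# orthogonality check: D D' should equal the identity
D = haar_matrix(8)
DDt = [[sum(a*b for a, b in zip(r1, r2)) for r2 in D] for r1 in D]
```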
To complete the Bayesian model, we propose the following prior distribution for $\theta$,
\begin{equation} \label{eq:prior}
\pi(\theta;\alpha,a,m) = \alpha \delta_{0}(\theta) + (1-\alpha)g(\theta;a,m),
\end{equation}
where $\alpha \in (0,1)$, $\delta_{0}(\theta)$ is the point mass function at zero and $g(\theta;a,m)$ is the beta distribution \eqref{eq:beta} on $[-m,m]$. The proposed prior distribution has $\alpha \in (0,1)$, $a>0$ and $m>0$ as hyperparameters, whose choices are directly related to the degree of shrinkage of the empirical coefficients. It will be shown that as $a$ or $\alpha$ (or both) increase, the degree of shrinkage increases as well.
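Sampling from the prior \eqref{eq:prior} makes the roles of the hyperparameters tangible: $\alpha$ controls how often $\theta$ is exactly zero, while $a$ and $m$ shape the spread part. A sketch (the rescaling uses the fact that if $B \sim \mathrm{Beta}(a,a)$ on $[0,1]$, then $m(2B-1)$ has density \eqref{eq:beta}):

```python
import random

def sample_theta(alpha=0.9, a=2.0, m=3.0):
    # draw from pi(theta) = alpha * delta_0 + (1 - alpha) * symmetric beta on [-m, m]
    if random.random() < alpha:
        return 0.0
    return m * (2.0 * random.betavariate(a, a) - 1.0)

random.seed(1)
draws = [sample_theta() for _ in range(5000)]
```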
\section{The shrinkage rule and statistical properties}
The shrinkage rule $\delta(\cdot)$ for Bayesian estimation of the wavelet coefficient $\theta$ in model \eqref{eq:model} depends on the choice of the posterior location measure (mean, mode, or median) and the loss function. Under the squared error loss function $L(\delta,\theta) = (\delta - \theta)^2$, it is well known that the Bayes rule, which minimizes the Bayes risk, is the posterior expected value of $\theta$, i.e., $\delta(d) = E_{\pi}(\theta \mid d)$. Proposition \ref{prop1} gives an expression for the shrinkage rule under a mixture prior consisting of a point mass at zero and a density function with support in $[-m,m]$.
\begin{prop} \label{prop1}
If the prior distribution of $\theta$ is of the form $\pi(\theta;\alpha,m) = \alpha \delta_{0}(\theta) + (1-\alpha)g(\theta)$, where $g$ is a density function with support in $[-m,m]$, then the shrinkage rule under the quadratic loss function is given by
\begin{equation}
\delta(d) = \frac{(1-\alpha)\int_{\frac{-m-d}{\sigma}}^{\frac{m-d}{\sigma}}(\sigma u + d)g(\sigma u + d)\phi(u)du}{\alpha \frac{1}{\sigma}\phi(\frac{d}{\sigma})+(1-\alpha)\int_{\frac{-m-d}{\sigma}}^{\frac{m-d}{\sigma}}g(\sigma u + d)\phi(u)du}
\end{equation}
where $\phi(\cdot)$ is the standard normal density function.
\end{prop}
\begin{proof}
If $\mathcal{L}(\cdot \mid \theta)$ is the likelihood function, we have that
\begin{align*}
\delta(d) &= E_{\pi}(\theta \mid d) \\
&=\frac{\int_{\Theta}\theta[\alpha\delta_{0}(\theta)+(1-\alpha)g(\theta)]\mathcal{L}(d \mid \theta)d\theta}{\int_{\Theta}[\alpha\delta_{0}(\theta)+(1-\alpha)g(\theta)]\mathcal{L}(d \mid \theta)d\theta} \\
&= \frac{(1-\alpha)\int_{-m}^{m}\theta g(\theta)\frac{1}{\sqrt{2\pi}}\exp\{-\frac{1}{2}(\frac{d-\theta}{\sigma})^2\}\frac{d\theta}{\sigma}}{\alpha \frac{1}{\sigma\sqrt{2\pi}}\exp\{-\frac{1}{2}(\frac{d}{\sigma})^2\}+(1-\alpha)\int_{-m}^{m}g(\theta)\frac{1}{\sqrt{2\pi}}\exp\{-\frac{1}{2}(\frac{d-\theta}{\sigma})^2\}\frac{d\theta}{\sigma}}\\
&= \frac{(1-\alpha)\int_{\frac{-m-d}{\sigma}}^{\frac{m-d}{\sigma}}(\sigma u + d)g(\sigma u + d)\phi(u)du}{\alpha \frac{1}{\sigma}\phi(\frac{d}{\sigma})+(1-\alpha)\int_{\frac{-m-d}{\sigma}}^{\frac{m-d}{\sigma}}g(\sigma u + d)\phi(u)du}.\\
\end{align*}
\end{proof}
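For the beta spread part, the integrals in Proposition \ref{prop1} have no closed form, but the rule is a one-dimensional quadrature away. A sketch (trapezoid rule, our own function names, with $\sigma = 1$ and hyperparameters as in the figures):

```python
import math

def phi(u):
    # standard normal density
    return math.exp(-u*u/2) / math.sqrt(2*math.pi)

def g_beta(x, a, m):
    # symmetric beta density on [-m, m]; B(a, a) via the gamma function
    if abs(x) >= m:
        return 0.0
    B = math.gamma(a)**2 / math.gamma(2*a)
    return (m*m - x*x)**(a - 1) / ((2*m)**(2*a - 1) * B)

def shrink(d, alpha=0.9, a=2.0, m=3.0, sigma=1.0, n=4000):
    # posterior mean E(theta | d) under the point-mass + beta mixture prior
    h = 2*m / n
    num = den = 0.0
    for i in range(n + 1):
        th = -m + i*h
        w = h * (0.5 if i in (0, n) else 1.0)
        lik = phi((d - th)/sigma) / sigma
        num += w * th * g_beta(th, a, m) * lik
        den += w * g_beta(th, a, m) * lik
    return (1 - alpha)*num / (alpha*phi(d/sigma)/sigma + (1 - alpha)*den)
```

The rule is antisymmetric, vanishes at $d=0$, and shrinks every input towards zero, as in Figure \ref{fig:shrink}.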
Figure \ref{fig:shrink} presents shrinkage rules for $g$ being the beta distribution \eqref{eq:beta} with hyperparameters $m = 3$, $\alpha = 0.9$ and some values of $a \in [1,10]$, as well as their variances. It can be seen that the length of the interval in which the rule shrinks towards zero increases as the hyperparameter $a$ increases, since high values of $a$ result in beta distributions more concentrated around zero. A typical feature of these rules is that as $d$ increases, $\delta(d)$ approaches $m$, and as $d$ decreases, $\delta(d)$ approaches $-m$. These asymptotic characteristics are reasonable, since the coefficients to be estimated are assumed to lie in the range $[-m, m]$, so empirical coefficients outside this range occur due to the presence of noise.
\begin{figure}[H]
\centering
\subfigure[Shrinkage rules\label{lognormal}]{
\includegraphics[scale=0.45]{rules.png}}
\subfigure[Variances\label{blocls}]{
\includegraphics[scale=0.45]{var.png}}
\caption{Shrinkage rules and their variances under beta prior distribution with hyperparameters $m=3$, $\alpha = 0.9$ and values of $a \in [1,10]$.} \label{fig:shrink}
\end{figure}
Figure \ref{fig:bias}(a) and (b) shows the squared bias and the classical risk (denoted by $R(\theta)$), respectively, for the same shrinkage rules considered above. Observe that, as expected, the rules have smaller variances and biases for values of $\theta$ near zero, reaching minimum values in both panels when the wavelet coefficient is zero. It is also noted that as the hyperparameter $a$ increases, the bias of the estimator increases and the variance decreases. The classical risk decreases as $\theta$ tends to zero, and for large values of $\theta$ the risk is larger for rules with large values of $a$. These features are justified by the fact that the degree of shrinkage increases with $a$, so when the wavelet coefficient is far from zero, rules with larger values of $a$ tend to underestimate $\theta$ more than rules with small values of $a$.
\begin{figure}[H]
\centering
\subfigure[Squared bias\label{lognormal}]{
\includegraphics[scale=0.4]{bias.png}}
\subfigure[Classical Risks\label{blocls}]{
\includegraphics[scale=0.4]{clasrisks.png}}
\caption{Squared bias and classical risks of the shrinkage rules under beta prior distribution with hyperparameters $m=3$, $\alpha = 0.9$ and values of $a \in [1,10]$.} \label{fig:bias}
\end{figure}
Finally, Tables \ref{tab:brisk} and \ref{tab:brisk2} show the Bayes risks (denoted by $r_{\delta}$) of the shrinkage rules considered, as functions of the hyperparameters $a$ and $\alpha$, respectively. As expected, the Bayes risk decreases as $a$ or $\alpha$ increases, since these hyperparameter behaviors agree with the prior knowledge of sparsity of the wavelet coefficient vector.
\begin{table}[!htb]
\centering
\begin{tabular}{ccccccccc}
\hline
$a$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 10 \\ \hline
$r_{\delta}$ & 0.189 & 0.137 & 0.101 & 0.088 & 0.074 & 0.063 & 0.056 & 0.041 \\ \hline
\end{tabular}
\caption{Bayes risks of the shrinkage rules under beta prior distribution with hyperparameters $m=3$ and $\alpha = 0.9$.}\label{tab:brisk}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{ccccccc}
\hline
$\alpha$ & 0.6 & 0.7 & 0.8 & 0.9 & 0.99 \\ \hline
$r_{\delta}$ & 0.399 & 0.326 & 0.241 & 0.137 & 0.017 \\ \hline
\end{tabular}
\caption{Bayes risks of the shrinkage rules under beta prior distribution with hyperparameters $m=3$ and $a=2$.}\label{tab:brisk2}
\end{table}
\section{An extension: the triangular prior}
We briefly present the triangular prior distribution for the wavelet coefficients as an extension of the beta distribution, since its associated shrinkage rule has an explicit formula in terms of the standard normal density and cumulative distribution functions. In fact, the triangular distribution on $[-m, m]$ is the convolution of two uniform distributions on $[-m/2, m/2]$, and its density is given by
\begin{equation} \label{eq:triang}
g_{T}(x;m)=\left\{\begin{array}{rc}
\frac{x+m}{m^2},&\mbox{if}\quad -m \leq x < 0,\\
\frac{m-x}{m^2}, &\mbox{if}\quad 0 \leq x \leq m, \\
0, &\mbox{otherwise.}
\end{array}\right.
\end{equation}
The following proposition provides an explicit formula for the shrinkage rule under triangular prior.
\begin{prop}\label{prop4}
The shrinkage rule under prior distribution of the form $\pi(\theta;\alpha,m) = \alpha \delta_{0}(\theta) + (1-\alpha)g_{T}(\theta;m)$, where $g_{T}(\cdot;m)$ is the triangular distribution over $[-m,m]$, is
\begin{equation}
\delta_T(d) = \frac{(1-\alpha)S_1(d)}{\frac{\alpha m^2}{\sigma}\phi(\frac{d}{\sigma}) + (1-\alpha)S_2(d)},
\end{equation}
where
\noindent $S_1(d) = d\sigma[\phi(\frac{m+d}{\sigma}) + \phi(\frac{m-d}{\sigma}) -2\phi(\frac{d}{\sigma})] + (d^2+\sigma^2+dm)\Phi(\frac{m+d}{\sigma}) +(d^2+\sigma^2- dm)\Phi(\frac{d-m}{\sigma})-2(d^2+\sigma^2)\Phi(\frac{d}{\sigma})$,

\noindent $S_2(d) = \sigma[\phi(\frac{m+d}{\sigma}) + \phi(\frac{m-d}{\sigma}) -2\phi(\frac{d}{\sigma})]+(d+m)\Phi(\frac{m+d}{\sigma})+(d-m)\Phi(\frac{d-m}{\sigma})-2d\Phi(\frac{d}{\sigma})$,
\noindent for $\phi(\cdot)$ and $\Phi(\cdot)$ the standard normal density and cumulative distribution respectively.
\end{prop}
\begin{proof}
The result follows by applying Proposition \ref{prop1} to the density \eqref{eq:triang} and solving the integrals directly.
\end{proof}
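The closed form can be coded directly and cross-checked against brute-force quadrature of Proposition \ref{prop1}. In the sketch below (with $\sigma = 1$), the second $\Phi$ terms carry the argument $(d-m)/\sigma$, which follows from the direct derivation and guarantees $\delta_T(0)=0$, as the symmetry of the prior requires:

```python
import math

def phi(u):
    return math.exp(-u*u/2) / math.sqrt(2*math.pi)

def Phi(u):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def delta_T(d, alpha=0.9, m=3.0, s=1.0):
    # closed-form shrinkage rule under the point-mass + triangular mixture prior
    v = s*s
    bracket = phi((m + d)/s) + phi((m - d)/s) - 2*phi(d/s)
    S1 = (d*s*bracket + (d*d + v + d*m)*Phi((m + d)/s)
          + (d*d + v - d*m)*Phi((d - m)/s) - 2*(d*d + v)*Phi(d/s))
    S2 = (s*bracket + (d + m)*Phi((m + d)/s)
          + (d - m)*Phi((d - m)/s) - 2*d*Phi(d/s))
    return (1 - alpha)*S1 / (alpha*m*m*phi(d/s)/s + (1 - alpha)*S2)

def delta_quad(d, alpha=0.9, m=3.0, s=1.0, n=4000):
    # brute-force posterior mean (trapezoid rule) for the same prior
    g = lambda t: (m - abs(t))/(m*m) if abs(t) <= m else 0.0
    h = 2*m / n
    num = den = 0.0
    for i in range(n + 1):
        t = -m + i*h
        w = h * (0.5 if i in (0, n) else 1.0)
        lik = phi((d - t)/s) / s
        num += w * t * g(t) * lik
        den += w * g(t) * lik
    return (1 - alpha)*num / (alpha*phi(d/s)/s + (1 - alpha)*den)
```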
Shrinkage rules under the triangular prior with $m=3$ and $\alpha \in \{0.6, 0.7, 0.8, 0.9, 0.99\}$, and their statistical properties, are shown in Figure \ref{fig:triang} and Table \ref{tab:trisk}. The behaviors of the rules and their properties are analogous to those of the rules under the beta prior. The performance of the shrinkage rule under the triangular prior in our simulation studies was very good, as we will see later, and its explicit formula can bring advantages in computational implementation.
\begin{figure}[H]
\centering
\includegraphics[scale=0.70]{triangular.png}
\caption{Shrinkage rules (top left), squared bias (top right), variances (bottom left) and classical risks (bottom right) for triangular prior with $m=3$ and $\alpha \in \{0.6, 0.7, 0.8, 0.9, 0.99\}$.}\label{fig:triang}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{ccccccc}
\hline
$\alpha$ & 0.6 & 0.7 & 0.8 & 0.9 & 0.99 \\ \hline
$r_{\delta}$ & 0.357 & 0.289 & 0.212 & 0.119 & 0.014 \\ \hline
\end{tabular}
\caption{Bayes risks of the shrinkage rules under triangular prior distribution with hyperparameters $m=3$ and $\alpha \in \{0.6, 0.7, 0.8, 0.9, 0.99\}$.}\label{tab:trisk}
\end{table}
\section{Default prior hyperparameters}
Methods and criteria for determining the parameters and hyperparameters involved in estimating the coefficients are important in Bayesian procedures. In the framework of Bayesian shrinkage with a beta prior, one must choose the parameter $\sigma$ of the random error distribution and the hyperparameters $\alpha$, $m$ and $a$ of the beta prior distribution of the wavelet coefficients. We present the methods and criteria already available in the literature for such choices, which are used in the simulation and application studies.
Based on the fact that most of the noise information in the data is concentrated at the finest resolution scale, Donoho and Johnstone (1994a) suggest the robust estimator of $\sigma$
\begin{equation}\label{eq:sigma}
\hat{\sigma} = \frac{\mbox{median}\{|d_{J-1,k}|:k=0,...,2^{J-1}\}}{0.6745}.
\end{equation}
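The estimator \eqref{eq:sigma} can be sketched in a few lines. This is a minimal illustration (not the authors' code), assuming the finest-level detail coefficients are supplied as a numeric array:

```python
import numpy as np

def sigma_mad(finest_detail):
    """Robust noise-level estimate of Donoho & Johnstone (1994a):
    median absolute value of the finest-scale wavelet coefficients
    divided by 0.6745, the normal-consistency constant."""
    d = np.asarray(finest_detail, dtype=float)
    return np.median(np.abs(d)) / 0.6745
```

For pure Gaussian noise of standard deviation $\sigma$, the estimate concentrates around $\sigma$, since the median of $|N(0,\sigma^2)|$ is $0.6745\,\sigma$.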
Angelini and Vidakovic (2004) suggest the hyperparameters $\alpha$ and $m$ be dependent on the level of resolution $j$ according to the expressions
\begin{equation}\label{eq:alpha}
\alpha = \alpha(j) = 1 - \frac{1}{(j-J_{0}+1)^\gamma}
\end{equation}
and
\begin{equation}\label{eq:m}
m = m(j) = \max_{k}\{|d_{jk}|\},
\end{equation}
where $J_0 \leq j \leq J-1$, $J_0$ is the primary resolution level and $\gamma > 0$. They also suggest that, in the absence of additional information, $\gamma = 2$ can be adopted.
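The level-dependent choices \eqref{eq:alpha} and \eqref{eq:m} can be sketched as follows (function names are ours; `detail_j` is assumed to hold the empirical coefficients $d_{jk}$ of level $j$):

```python
import numpy as np

def alpha_j(j, j0, gamma=2.0):
    """Angelini & Vidakovic (2004): alpha(j) = 1 - 1/(j - J0 + 1)^gamma,
    for j0 <= j <= J-1 and gamma > 0 (gamma = 2 as default)."""
    return 1.0 - 1.0 / (j - j0 + 1) ** gamma

def m_j(detail_j):
    """m(j) = max_k |d_{jk}|, the largest empirical coefficient at level j."""
    return np.max(np.abs(np.asarray(detail_j, dtype=float)))
```

Note that $\alpha(J_0)=0$, so no extra mass is placed at zero on the primary level, and $\alpha(j)$ grows toward $1$ at finer levels, where coefficients are more likely pure noise.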
There are some methods available in the literature for choosing the hyperparameters of the beta distribution as a prior. Chaloner et al. (1983) proposed a method for choosing the hyperparameters of the beta distribution on $[0,1]$ as a prior for the probability of success in each Bernoulli trial. Duran and Booker (1988) proposed the percentile method: for fixed $k \in [-m, m]$ and $p \in (0,1)$, $a$ is chosen so that
$P(\theta \leq k)=p$, i.e.,
\begin{equation}\label{eq:a}
\int_{-m}^{k}\frac{(m^2 - \theta^2)^{(a-1)}}{(2m)^{(2a-1)}B(a,a)}d\theta = p.
\end{equation}
Thus, the choice of $a$ is made by setting the probability of occurrence of a particular event $\{\theta \leq k \}$. This procedure is appealing because of the relative ease of subjectively determining a probability $p$: it is cognitively simpler to assign a probability to a certain event than to directly assign a value to the parameter of a probability distribution. In this work, however, we choose $a$ according to the desired shrinkage level to be applied to the empirical coefficients. As explained throughout the paper, this shrinkage level increases with $a$, since the beta distribution becomes more concentrated around zero.
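The percentile method can be sketched numerically: the CDF of the symmetric beta prior on $[-m,m]$ is integrated by the trapezoid rule and $a$ is found by bisection. This is an illustration under our own assumptions ($a \geq 1$ so the density is bounded, and $0 < k < m$ so the CDF at $k$ increases with $a$), not the exact procedure of Duran and Booker:

```python
import math
import numpy as np

def cdf_theta(k, m, a, n=20001):
    """P(theta <= k) under the symmetric beta prior on [-m, m] with
    density (m^2 - t^2)^(a-1) / ((2m)^(2a-1) B(a, a)); assumes a >= 1."""
    t = np.linspace(-m, k, n)
    beta_aa = math.gamma(a) ** 2 / math.gamma(2 * a)   # B(a, a)
    dens = (m * m - t * t) ** (a - 1) / ((2 * m) ** (2 * a - 1) * beta_aa)
    dt = (k + m) / (n - 1)
    return float(np.sum(dens[1:] + dens[:-1]) * 0.5 * dt)  # trapezoid rule

def solve_a(k, m, p, lo=1.0, hi=50.0, iters=60):
    """Bisection for a such that P(theta <= k) = p; assumes 0 < k < m
    and that a solution lies in [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if cdf_theta(k, m, mid) < p:
            lo = mid   # prior not concentrated enough around zero: increase a
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $a=1$ the prior is uniform on $[-m,m]$, so the CDF is linear in $k$, which gives a simple sanity check on the quadrature.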
Another possibility is to make the hyperparameter $a$ level dependent, i.e., $a = a(j)$. However, since we are interested in studying the impact of a fixed choice of $a$ on denoising, we consider only fixed values of $a$ in the simulations and applications.
\section{Simulation studies}
Simulation studies were carried out to evaluate the performance of the shrinkage rules under the beta prior distribution for the particular cases in which the hyperparameter $a$ assumes the fixed values $a=1, 2, 5, 10$, and under the triangular distribution (Triang), and to compare them with some of the most used shrinkage/thresholding methods in the literature, namely, universal thresholding (Univ) proposed by Donoho and Johnstone (1994), false discovery rate (FDR) proposed by Abramovich and Benjamini (1996), cross validation (CV) of Nason (1996), the Stein unbiased risk estimator (SURE) of Donoho and Johnstone (1995), the Bayesian adaptive multiresolution shrinker (BAMS) of Vidakovic and Ruggeri (2001) and the large posterior mode (LPM) method of Cutillo et al. (2008). We also considered the shrinkage rule under the Bickel prior, suggested by Angelini and Vidakovic (2004), who proved that the shrinkage rule under this prior is approximately $\Gamma$-minimax for the class of all symmetric unimodal priors bounded on $[-m, m]$, $\Gamma_{SU[-m,m]}$.
Bickel (1981) proved that, as $m$ increases, the weak limit of the least favorable prior in $\Gamma_{SU[-m,m]}$ is
$g_m(\theta)=\frac{1}{m}\cos^2\left(\frac{\pi \theta}{2m}\right)
\mathbb{I}_{[-m,m]}(\theta)$. Applying this result in our context, the Bickel shrinkage rule is induced by the prior
\begin{equation*} \label{mlarge} \pi(\theta)=\alpha
\delta_0+(1-\alpha)\frac{1}{m}\cos^2\left(\frac{\pi \theta}{2m}\right)
\mathbb{I}_{[-m,m]}(\theta).
\end{equation*}
The corresponding Bayes rule does not have a simple analytical form and needs to be numerically computed.
The hyperparameters $m$ and $\alpha$ were selected according to the proposals described in Section 5.
To perform the simulation, the rules were applied in the Donoho-Johnstone (DJ) test functions. These functions, shown in Figure \ref{fig:dj}, are widely used in the literature for comparison of wavelet-based methods. These are four functions, called Bumps, Blocks, Doppler and Heavisine, which represent some characteristics of curves in real problems.
\begin{figure}[H]
\centering
\includegraphics[scale=0.50]{functions.png}
\caption{Donoho-Johnstone (DJ) test functions.}\label{fig:dj}
\end{figure}
For each test function, three sample sizes were considered, $n = 512, 1024$ and $2048$, and at each point a normal error with zero mean and variance $\sigma^2$ was added, where $\sigma^2$ was chosen according to three signal-to-noise ratios (SNR), 3, 5 and 7. We then applied the DWT using a Daubechies basis with ten vanishing moments (Daub10). After the shrinkage/thresholding procedure, the IDWT is applied to estimate the function values.
We used the mean squared error (MSE), $MSE = \frac{1}{n} \sum_{i=1}^{n}[{\hat f(x_i)} - f(x_i)]^2$ as performance measure of the shrinkage rules. For each function, the process was repeated $M = 200$ times and a comparison measure, the average of the obtained MSEs, $AMSE = \frac{1}{M} \sum_{j=1}^{M}MSE_j$, was calculated for each rule as shown in Tables \ref{tab:sim1} and \ref{tab:sim2}. Figure \ref{fig:fits} presents the curve estimates produced by the shrinkage rule under beta prior with $a=2$ and $n=2048$.
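The pipeline above (noisy test function $\rightarrow$ DWT $\rightarrow$ shrinkage $\rightarrow$ IDWT $\rightarrow$ MSE) can be sketched end to end. As a simplification we use a hand-coded Haar transform instead of Daub10 and the universal hard threshold instead of the Bayesian rule, so the numbers are only illustrative of the mechanics:

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition (len(x) must be a power of two); a
    stand-in for the Daubechies-10 transform used in the paper."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(d)
        a = s
    return a, coeffs[::-1]        # coarse approximation, details coarse -> fine

def haar_idwt(a, details):
    for d in details:             # inverse transform, coarse -> fine
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def mse(fhat, f):
    return float(np.mean((fhat - f) ** 2))

rng = np.random.default_rng(1)
n = 1024
x = np.arange(1, n + 1) / n
doppler = np.sqrt(x * (1 - x)) * np.sin(2.1 * np.pi / (x + 0.05))
sigma = doppler.std() / 5.0                    # noise level for SNR = 5
y = doppler + rng.normal(0.0, sigma, n)

approx, det = haar_dwt(y)
sig_hat = np.median(np.abs(det[-1])) / 0.6745  # robust sigma estimate
lam = sig_hat * np.sqrt(2.0 * np.log(n))       # universal threshold (Univ)
det_thr = [np.where(np.abs(d) > lam, d, 0.0) for d in det]
fhat = haar_idwt(approx, det_thr)
```

In a Monte Carlo study, this block would be repeated $M$ times with fresh noise and the resulting MSEs averaged into the AMSE.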
In general, the shrinkage rules under the beta and triangular priors performed well in the simulations. They were better than Univ, FDR, CV, SURE, BAMS, LPM and the Bickel prior shrinkage in practically all scenarios. Although the performances were better for higher SNR values, as expected, we highlight the good results of the beta prior rules at low SNR compared with the other methods, one of the main motivations for applying them to real data denoising problems. In Tables \ref{tab:sim1} and \ref{tab:sim2}, the best method is shown in bold.
Regarding the hyperparameter $a$, the AMSE decreased up to $a=5$ and then increased for $a=10$ in most scenarios. Since the shrinkage level increases with $a$, an excessive value of $a$ shrinks the empirical coefficients too strongly, which results in poor estimation of important features of the curve, such as spikes, cusps or discontinuities, and degrades the AMSE.
\begin{table}[H]
\scalefont{0.5}
\centering
\begin{tabular}{|c|c|c|c|c|c|||c|c|c|c|c|c|}
\hline
Signal & n & Method & SNR=3 & SNR=5 & SNR=7&Signal&n&Method&SNR=3&SNR=5&SNR=7 \\ \hline \hline
Bumps&512 & Univ & 11.080 & 5.170 & 3.026 & Blocks &512 & Univ & 6.928 & 3.660 & 2.254 \\
& & FDR & 9.291 & 4.373 & 2.630 & & & FDR & 5.896 & 2.903 & 1.746 \\
& & CV & 11.389 & 9.406 & 6.292 & & & CV & 2.559 & 1.250 & 0.841 \\
& & SURE & 3.609 & 1.556 & 0.882 & & & SURE & 2.766 & 1.216 & 0.679 \\
& & BAMS & 2.867 & 1.528 & 1.265 & & & BAMS & 2.465 & 1.297 & 1.091 \\
& & LPM & 4.892 & 1.960 & 1.000 & & & LPM & 4.892 & 1.960 & 1.000 \\
& & Bickel & 2.814 & 1.156 &\textbf{0.654} & & & Bickel & 2.748 & 1.191 & 1.590 \\
& & $a=1$ & 2.995 & 1.238 & 0.696 & & & $a=1$ & 2.915 & 1.315 & 0.684 \\
& & $a=2$ & 2.874 & 1.182 & 0.674 & & & $a=2$ & 2.799 & 1.311 & 0.799 \\
& & $a=5$ & \textbf{2.812} &\textbf{1.144} & 0.661 & & & $a=5$ &\textbf{2.687} & \textbf{1.181} &\textbf{0.617} \\
& & $a=10$ & 2.936 & 1.186 & 0.720 & & & $a=10$ & 2.972 & 1.445 & 0.878 \\
& & Triang & 2.828 & 1.157 & 0.656 & & & Triang & 2.727 & 1.249 & 0.786 \\ \hline
&1024 & Univ & 7.547 & 3.570 & 2.128 & &1024 & Univ & 4.848 & 2.479 & 1.542 \\
& & FDR & 5.556 & 2.524 & 1.473 & & & FDR & 3.896 & 1.874 & 1.125 \\
& & CV & 2.924 & 1.925 & 1.719 & & & CV & 1.789 & 0.838 & 0.533 \\
& & SURE & 2.467 & 1.057 & 0.590 & & & SURE & 1.888 & 0.837 &\textbf{0.474} \\
& & BAMS & 2.155 & 1.046 & 0.860 & & & BAMS & 1.856 & 0.842 & 0.686 \\
& & LPM & 4.966 & 1.957 & 0.998 & & & LPM & 4.966 & 1.957 & 0.998 \\
& & Bickel & 1.972 & 0.978 &\textbf{0.466} & & & Bickel & 1.718 & 1.788 & 0.523 \\
& & $a=1$ & 2.099 & 0.958 & 0.506 & & & $a=1$ & 1.852 & 0.821 & 0.562 \\
& & $a=2$ & 2.014 & 1.016 & 0.486 & & & $a=2$ & 1.770 & 0.933 & 0.536 \\
& & $a=5$ &\textbf{1.949} &\textbf{0.877} & 0.546 & & & $a=5$ &\textbf{1.713} &\textbf{0.761} & 0.514 \\
& & $a=10$ & 1.949 & 1.059 & 0.821 & & & $a=10$ & 1.900 & 0.786 & 0.496 \\
& & Triang & 1.963 & 0.902 & 0.476 & & & Triang & 1.728 & 0.907 & 0.508 \\ \hline
&2048 & Univ & 5.042 & 2.343 & 1.389 & &2048 & Univ & 3.417 & 1.772 & 1.101 \\
& & FDR & 3.567 & 1.581 & 0.915 & & & FDR & 2.676 & 1.288 & 0.764 \\
& & CV & 1.602 & 0.734 & 0.477 & & & CV & 1.301 & 0.588 & 0.353 \\
& & SURE & 1.647 & 0.696 & 0.389 & & & SURE & 1.356 & 0.596 &\textbf{0.341} \\
& & BAMS & 1.635 & 0.667 & 0.527 & & & BAMS & 1.502 & 0.585 &0.459 \\
& & LPM & 4.955 & 1.957 & 0.998 & & & LPM & 4.955 & 1.957 & 0.998 \\
& & Bickel & 1.208 &\textbf{0.510} &\textbf{0.320} & & & Bickel & 1.566 & 0.602 & 1.393 \\
& & $a=1$ & 1.267 & 0.571 & 0.333 & & & $a=1$ & 1.252 & 0.651 & 1.857 \\
& & $a=2$ & 1.227 & 0.530 & 0.337 & & & $a=2$ & 1.430 & 0.608 & 1.592 \\
& & $a=5$ & 1.209 & 0.742 & 1.113 & & & $a=5$ &\textbf{1.163} &\textbf{0.573} & 1.306 \\
& & $a=10$ & 1.311 & 0.580 & 0.339 & & & $a=10$ & 1.480 & 0.723 & 1.366 \\
& & Triang &\textbf{1.205} & 0.518 & 0.354 & & & Triang & 1.381 & 0.582 & 1.457 \\ \hline
\end{tabular}
\caption{AMSE of the shrinkage/thresholding rules in the simulation study for Bumps and Blocks DJ test functions.}\label{tab:sim1}
\end{table}
\begin{table}[H]
\scalefont{0.5}
\centering
\begin{tabular}{|c|c|c|c|c|c|||c|c|c|c|c|c|}
\hline
Signal & n & Method & SNR=3 & SNR=5 & SNR=7&Signal&n&Method&SNR=3&SNR=5&SNR=7 \\ \hline \hline
Doppler&512 & Univ & 2.680 & 1.413 & 0.892 &Heavisine &512 & Univ & 0.567 & 0.404 & 0.304 \\
& & FDR & 2.565 & 1.259 & 0.767 & & & FDR & 0.595 & 0.436 & 0.312 \\
& & CV & 1.293 & 0.647 & 0.451 & & & CV &\textbf{0.505} &\textbf{0.279} &\textbf{0.178} \\
& & SURE & 1.329 & 0.596 & 0.337 & & & SURE & 0.571 & 0.414 & 0.317 \\
& & BAMS & 1.551 & 0.628 & 0.503 & & & BAMS & 1.153 & 0.327 & 0.233 \\
& & LPM & 4.892 & 1.960 & 1.000 & & & LPM & 4.892 & 1.960 & 1.000 \\
& & Bickel & 1.112 &\textbf{0.520} &\textbf{0.297} & & & Bickel & 0.896 & 0.631 & 0.464 \\
& & $a=1$ & 1.138 & 0.567 & 0.305 & & & $a=1$ & 0.788 & 0.548 & 0.470 \\
& & $a=2$ & 1.117 & 0.542 & 0.303 & & & $a=2$ & 0.832 & 0.578 & 0.447 \\
& & $a=5$ & 1.130 & 0.522 & 0.298 & & & $a=5$ & 0.976 & 0.648 & 0.721 \\
& & $a=10$ & 1.274 & 1.551 & 2.113 & & & $a=10$ & 1.230 & 0.718 & 0.495 \\
& & Triang &\textbf{1.104} & 0.525 & 0.311 & & & Triang & 0.837 & 0.561 & 0.444 \\ \hline
&1024 & Univ & 1.612 & 0.846 & 0.534 & &1024 & Univ & 0.460 & 0.314 & 0.231 \\
& & FDR & 1.508 & 0.747 & 0.455 & & & FDR & 0.506 & 0.326 & 0.225 \\
& & CV & 0.803 & 0.367 & 0.218 & & & CV &\textbf{0.369} &\textbf{0.202} &\textbf{0.127} \\
& & SURE & 0.836 & 0.383 & 0.225 & & & SURE & 0.463 & 0.321 & 0.238 \\
& & BAMS & 1.254 & 0.409 & 0.308 & & & BAMS & 1.055 & 0.261 & 0.177 \\
& & LPM & 4.966 & 1.957 & 0.998 & & & LPM & 4.966 & 1.957 & 0.998 \\
& & Bickel & 0.689 &\textbf{0.290} & 0.686 & & & Bickel & 0.657 & 0.454 & 0.518 \\
& & $a=1$ & 0.707 & 0.306 & 0.192 & & & $a=1$ & 0.593 & 0.429 & 0.361 \\
& & $a=2$ & 0.680 & 0.322 & 0.255 & & & $a=2$ & 0.617 & 0.449 & 0.409 \\
& & $a=5$ & 0.684 & 0.301 &\textbf{0.183} & & & $a=5$ & 0.733 & 0.585 & 0.629 \\
& & $a=10$ & 1.348 & 1.297 & 0.744 & & & $a=10$ & 0.851 & 0.487 & 0.398 \\
& & Triang &\textbf{0.677} & 0.340 & 0.253 & & & Triang & 0.624 & 0.425 & 0.425 \\ \hline
&2048 & Univ & 1.146 & 0.578 & 0.364 & &2048 & Univ & 0.363 & 0.233 & 0.165 \\
& & FDR & 1.038 & 0.487 & 0.295 & & & FDR & 0.391 & 0.232 & 0.155 \\
& & CV & 0.551 & 0.252 & 0.146 & & & CV &\textbf{0.265} &\textbf{0.141} &\textbf{0.088} \\
& & SURE & 0.568 & 0.258 & 0.148 & & & SURE & 0.365 & 0.236 & 0.168 \\
& & BAMS & 1.085 & 0.275 & 0.184 & & & BAMS & 0.982 & 0.200 & 0.120 \\
& & LPM & 4.955 & 1.957 & 0.998 & & & LPM & 4.955 & 1.957 & 0.998 \\
& & Bickel & 0.430 & 0.713 & 0.168 & & & Bickel & 0.501 & 0.549 & 1.323 \\
& & $a=1$ & 0.413 & 0.201 & 0.154 & & & $a=1$ & 0.466 & 0.346 & 0.306 \\
& & $a=2$ & \textbf{0.405} & 0.254 & 0.148 & & & $a=2$ & 0.475 & 0.426 & 0.325 \\
& & $a=5$ & 0.423 & \textbf{0.192} & 0.146 & & & $a=5$ & 0.606 & 0.412 & 0.320 \\
& & $a=10$ & 1.681 & 2.523 & 2.509 & & & $a=10$ & 0.613 & 0.385 & 0.315 \\
& & Triang & 0.408 & 0.275 &\textbf{0.145} & & & Triang & 0.480 & 0.405 & 0.329 \\ \hline
\end{tabular}
\caption{AMSE of the shrinkage/thresholding rules in the simulation study for Doppler and Heavisine DJ test functions.}\label{tab:sim2}
\end{table}
\begin{figure}[H]
\subfigure[SNR=3.]{
\includegraphics[scale=0.5]{1024s3.png}}
\subfigure[SNR=5.]{
\includegraphics[scale=0.5]{1024s5.png}}
\subfigure[SNR=7.]{
\includegraphics[scale=0.5]{1024s7.png}}
\caption{Donoho-Johnstone curves estimates by wavelet shrinkage under beta prior with $a=2$ and $n=2048$.}\label{fig:fits}
\end{figure}
\section{Application in spike sorting data set}
Spike sorting is a classification procedure for action potentials (spikes) emitted by neurons, according to their different shapes and amplitudes. Typically, action potential data for spike sorting are collected extracellularly by means of electrodes placed at certain locations in the animal's head. It is a highly relevant method in Neuroscience, since it enables studies of which neurons are present in certain regions of the brain and how they interact.
Once the raw action potential data are collected, the first step of the spike sorting procedure is to reduce noise by filtering, in order to facilitate the visualization of spikes and to avoid misclassifying noise as spikes. Among the several methods used for noise reduction in spike sorting data, wavelet-based methods are among the most popular. For more details on spike sorting and the statistical methods involved in the analysis of such data, see Pouzat et al. (2002), Lewicki (1998), Shoham et al. (2003) and Einevoll et al. (2012), among others. Applications of wavelets to spike sorting appear in the works of Letelier and Webber (2000), Quiroga et al. (2004) and Shalchyan et al. (2012). Our purpose here is to apply the DWT to the data and use the beta and triangular shrinkage rules for noise reduction.
The original data set, presented in Figure \ref{fig:pot}, has 20000 neuronal action potentials (spikes) observed over time. For the application of the DWT, $n=2^{14}=16384$ observations were considered. The data set is a courtesy of Kenneth Harris, of the \textit{Institute of Neurology, Faculty of Brain Sciences, University College London}, and it is available at https://ifcs.boku.ac.at/repository/data/spike-sorting/index.html.
\begin{figure}[H]
\centering
\includegraphics[scale=0.60]{spikes2.png}
\caption{Neural action potentials (\textit{spikes}).}\label{fig:pot}
\end{figure}
The shrinkage rules under beta and triangular priors were applied to the empirical coefficients obtained. The hyperparameters $\sigma$, $m$ and $\alpha$ were chosen according to Section 5, with $\hat{\sigma} = 19913$ and $a=2$ for the beta distribution. Figure \ref{fig:estpot} presents the estimated functions and Figure \ref{fig:potcoef} shows the empirical wavelet coefficients after application of the DWT using Daubechies wavelets with ten vanishing moments ($N = 10$), together with their estimates under the beta prior rule.
\begin{figure}[H]
\centering
\subfigure[Estimated action potentials - beta prior shrinkage rule with $a=2$.]{
\includegraphics[scale=0.4]{spikes2i.png}}
\subfigure[Estimated action potentials - triangular prior shrinkage rule.]{
\includegraphics[scale=0.4]{spikesti.png}}
\caption{Estimated action potentials - beta prior shrinkage rule with $a=2$ (a) and triangular prior shrinkage rule (b).}\label{fig:estpot}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Empirical coefficients.]{
\includegraphics[scale=0.4]{spikecoef.png}}
\subfigure[Estimated coefficients - beta prior shrinkage rule with $a=2$.]{
\includegraphics[scale=0.4]{spikecoef2.png}}
\caption{Empirical coefficients (a) and estimated coefficients - beta prior shrinkage rule with $a=2$ (b) of the Spike Sorting data set.}\label{fig:potcoef}
\end{figure}
\section{Conclusion}
This paper proposes the beta distribution, as well as the triangular distribution, as a prior for the wavelet coefficients and, in fact, the results indicate that the associated shrinkage rules perform better than most of the considered shrinkage/thresholding techniques already used in practice, for most cases and test functions. This performance makes the beta and triangular distributions candidate priors for bounded wavelet coefficients for practitioners.
Further extensions, generalizations and new results are planned. The performance of the shrinkage rules in statistical models whose random errors follow other distributions with positive support, or generalizations to positive-support distributions, could be considered. The impact of different wavelet bases on these rules may also be of great interest and was not considered here. To improve and consolidate the proposed technique, the use of other performance measures in simulation studies and comparisons against state-of-the-art techniques, especially at low SNR, will be of interest.
\section{\label{sec:intro}Introduction}
Magnetic Weyl semimetals (WSM) are predicted to host the quantum anomalous Hall effect at higher temperatures than magnetically doped topological insulators \cite{Muechler2017}, and are therefore of substantial interest for spintronics technologies. One such material is Co$_3$Sn$_2$S$_2$, which has been the subject of intense research interest because the interplay of topology and magnetic order leads to a giant anomalous Hall effect (AHE) in the presence of a weak ordered moment \cite{Liu_NatPhys_2018}. Co$_3$Sn$_2$S$_2$ is a shandite material, in which the Co atoms form 2D kagome-lattice layers. S and Sn atoms are interleaved in these layers, with another Sn species between the Co-S layers. Co$_3$Sn$_2$S$_2$\, is half-metallic, with only one spin component contributing to the conductivity in the ferromagnetic state below $T_c = 175$~K \cite{Schnelle2013}. More recent calculations and measurements, including ARPES and STM, show that Co$_3$Sn$_2$S$_2$\, is a WSM, with Weyl points located $\sim60$~meV above the Fermi energy \cite{Morali2019, Yin_NatPhys_2019}. However, the magnetic order is not straightforward, and several studies have recently suggested that this material hosts a more complex magnetic texture.
Understanding how this magnetic complexity affects the topological properties of Co$_3$Sn$_2$S$_2$, and specifically the AHE, is the focus of this study.
Exchange bias (EB) is a property of great importance in magnetic memory technologies, ensuring stability and protecting against volatility. Typically it results from the exchange interaction at the interface between a ferromagnet (FM) and another magnetic phase, usually an anti-ferromagnet (AFM) \cite{Meiklejohn1957}. As a result of this interaction, the EB effect manifests as a shift of the magnetic hysteresis loop. This shift is related to the strength of the pinning exchange interaction between the FM and the AFM \cite{Nayak_nmat_2015}. Defining $H_{C\pm}$ as the fields at which the magnetization changes sign along an $M(H)$ curve, the exchange bias can be parametrized as $H_{EB} = -(H_{C-} + H_{C+})/2$. Here we show that the coexistence of two magnetic phases in Co$_3$Sn$_2$S$_2$\,leads to an exchange bias that strongly influences the AHE. In addition to being of fundamental interest, this opens the possibility of applying this mechanism as a basis for topological spin valve technologies.
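The parametrization above can be encoded directly (a trivial helper of our own, with both coercive fields supplied in the same field units):

```python
def exchange_bias(h_c_minus, h_c_plus):
    """Exchange-bias field H_EB = -(H_{C-} + H_{C+})/2 and the loop
    half-width H_C = (H_{C+} - H_{C-})/2, in the units of the inputs."""
    h_eb = -(h_c_minus + h_c_plus) / 2.0
    h_c = (h_c_plus - h_c_minus) / 2.0
    return h_eb, h_c
```

For a loop centered at zero, $H_{C-} = -H_{C+}$ and the helper returns $H_{EB}=0$; for example, with $\mu_0 H_{C+} = 0.115$~T and $\mu_0 H_{C-} = -0.215$~T it gives $\mu_0 H_{EB} = 0.05$~T.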
\section{\label{sec:eb} Experiment and Results}
Single crystals of Co$_3$Sn$_2$S$_2$ were grown using the flux method (see methods for details) and were either used pristine or silver epoxy contacts were attached for transport measurements. A plot of resistance as a function of temperature presented in Fig.~\ref{fig:cooldownAndEB}(a) indeed shows a ferromagnetic transition at a temperature of $175$~K.
\subsection{Low temperature anomalous Hall effect and Exchange bias}
The magnetic easy axis in Co$_3$Sn$_2$S$_2$\, is perpendicular to the Co Kagome planes. The system exhibits a magnetic hysteresis loop when sweeping a perpendicular magnetic field at temperatures below the magnetic transition.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{Fig1_28Jun2019.png}
\caption{Magnetic transition and low temperature behaviour of Co$_3$Sn$_2$S$_2$\, single crystal sample. (a) Longitudinal ($R_{xx}$) resistance as a function of temperature shows a magnetic phase transition at $175$~K as evident by the change in slope (marked with black arrow).
(b) Exchange biased AHE. Hall resistance as a function of magnetic field at a temperature of $2$~K. The field is swept between $+1$~T and $-1$~T and the sweep is repeated 5 times, resulting in a different $H_{C-}$ ($H_{C+}$) for cooling down in positive (negative) magnetic fields. The coercive fields on the opposite side are the same and therefore fall on top of each other in the plot.}
\label{fig:cooldownAndEB}
\end{figure}
At a temperature of $2$~K, the magnetic transition appears as a sharp step in $R_{xy}$ at $\mu_0 H_c = 115 \pm 10$~mT (Fig.~\ref{fig:cooldownAndEB}(b)).
When accounting for mixing of the longitudinal resistance (see methods), the resulting hysteresis loop is characteristic of the AHE, with a crucial difference: the loop is not centered around zero applied field, and the offset depends on the thermal and magnetic history. The transition at positive applied fields is at $\mu_0 H_c = 115$~mT, but on the negative side it lies between $\mu_0 H = -200$~mT and $\mu_0 H= -230$~mT and changes between sequential repeated field sweeps.
When cooling the sample in the presence of magnetic fields, a negative field shifts the positive coercive field as far as $550$~mT initially, and to $200$~mT after relaxation with repeated field sweeps. Positive fields shift the negative coercive field to similar fields with opposite sign. This asymmetry of the hysteresis loop is the signature of EB \cite{Meiklejohn1957,RLStamps2000}, and serves as evidence that the interpretation of the magnetic phase of Co$_3$Sn$_2$S$_2$\, as a simple FM phase in which the $Co$ spins point out of plane is lacking.
Remarkably, even when cooling the sample with no magnetic field applied (zero-field cooling, ZFC), a small EB appears (see Fig.~\ref{fig:seb}; it is also visible in Fig.~\ref{fig:cooldownAndEB}(b)). Though first attributed to a possible remnant field in the superconducting magnet, further investigation shows this EB to be spontaneous \cite{Wang_PRL_2011}. The spontaneous EB (SEB) can be isothermally induced by the initial field sweep direction at low temperatures, which demonstrates the importance of the specifics of the field sweep protocol. Figure~\ref{fig:seb} shows magnetic field sweeps at $2$~K with two different protocols: a ``positive'' one, where the field is swept $0 \rightarrow +1~\mathrm{T} \rightarrow -1~\mathrm{T} \rightarrow +1~\mathrm{T}$, and a ``negative'' one, where the field is swept $0 \rightarrow -1~\mathrm{T} \rightarrow +1~\mathrm{T} \rightarrow -1~\mathrm{T}$. These two sweeps were each performed after zero-field cooling and after in-field cooling. Let us first analyze the zero-field-cooled sweeps. The positive sweep protocol (PSP) results in $H_{EB} = 31.5$~mT, and the negative sweep protocol (NSP) yields $H_{EB} = -40$~mT. Both protocols produced the same saturation value of $\Delta R_{xy} = 1~\mathrm{m}\Omega$ and show a similar magnitude of $H_{EB}$. In addition, the initial value of $R_{xy}$ for both field sweeps is close to zero, indicating that the initial state of the material is unmagnetized or only weakly magnetized. This rules out the naive explanation of a remnant field in our magnet.
When the sample is cooled in a small field, the SEB effect and the normal EB effect combine to form the total EB. This is evident, for example, in the $5$~mT cooling PSP, in which $|H_{C-}|$ is larger than that of the NSP. The same applies for the $-5$~mT cooling, but now it is the NSP in which $H_{C+}$ is larger than that of the PSP.
The presence of EB and SEB at low temperatures suggests that, in addition to the FM, a phase or interaction of AFM or spin glass (SG) nature is also present in Co$_3$Sn$_2$S$_2$ \cite{Meiklejohn1957, Ali_NatMat_2007}.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.48\textwidth]{Fig2_01Jul2019.png}
\caption{Spontaneous exchange bias. $R_{xy}$ as a function of applied magnetic field for different low values of cooling fields at $2$~K. For each field, both a positive sweep protocol (PSP) and a negative sweep protocol (NSP) were performed, where the sample is warmed up to $250$~K -- well above $T_c = 175$~K -- between the two to negate the SEB magnetization effect. The initial field sweeps for the zero-field-cooled curve are marked with arrows, and lie clearly inside the hysteresis loop area for all fields. These indicate that the initial magnetic state is unsaturated, and that the SEB is indeed induced at low temperature by the specifics of the field sweep protocol.}
\label{fig:seb}
\end{figure}
\subsection{\label{sec:125K} Anomalous Hall effect and magnetic properties at intermediate temperatures}
In order to look for clues as to the nature of the EB, magnetization measurements as a function of magnetic field were performed at different temperatures for a sample cooled in a field $\mu_0 H_{cool} = 0.5$~T (Fig.~\ref{fig:biasAtdiffFieldsTemps}(a)).
These hysteresis loops reveal a qualitative difference between low temperatures and high temperatures still below the Curie temperature of $175$~K. A clear change in the shape of the hysteresis loop appears at $125$~K, and the coercive field (where $m$ crosses zero) is reduced significantly above this temperature. The unconventional shape of the hysteresis loops above $125$~K suggests an unusual energy landscape for the phase underlying the EB, and some interplay between it and the ferromagnetism.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.5\textwidth]{Fig3_01Jul2019_colorNEW-01.png}
\caption{Magnetic hysteresis at intermediate temperatures with field parallel to the c-axis.
(a) Magnetization as a function of applied magnetic field for a sample cooled in an applied field of $0.5$~T showing the magnetic hysteresis loops for temperatures near $125$~K. The square shape of the loop at $100$~K gradually changes to a bipartite transition at $125$~K.
(b) Schematic definition of the anisotropy field ($H_A$), the coercive field ($H_C$) and the exchange bias field ($H_{EB}$) on the three types of loop shapes. For the low temperature square loops, $H_A$ and $H_C$ coincide.
}
\label{fig:biasAtdiffFieldsTemps}
\end{figure}
When looking at the sample's resistance as a function of temperature in Fig.~\ref{fig:cooldownAndEB}(a), there is no clear signature of an additional magnetic phase transition that may explain the appearance of EB. However, the existence of a transition at $125$~K becomes readily apparent when studying the effect of an applied field on $R_{xy}$, as shown in Fig.~\ref{fig:warmups}. A field was applied while cooling (where FC data is collected), swept to zero at $2$~K and then the sample was warmed up to $250$~K (where the ZFW data is collected). For cooling fields $\vert \mu_0H_{cool} \vert > 0.05$~T, a clear transition can be observed at $125$~K, marked by a sudden change in $R_{xy}$ in the ZFW curves. We denote this temperature as $T_G$. This demonstrates there is an interaction between the FM and another phase, which pins the zero-field $R_{xy}$ below $T_G = 125$~K.
FC and ZFW curves merge at the FM transition at $175$K. This suggests that the phase below $T_G$ retains the memory of the cooling conditions, and forces a change in the FM ordering at zero field.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.7\textwidth]{Fig4_11Jul2019.png}
\caption{A new phase transition at $125$~K.
(a) $R_{xy}$ as function of temperature for zero field warming up (ZFW) after in-field cooling (FC) of the sample. For each FC (dotted lines) the ZFW (solid) is of the same color. On the ZFW curves, the $125$~K transition is evident as a sudden change in $R_{xy}$ to a different value. For cooling in positive (negative) field, the change in $R_{xy}$ at $125$~K is to a lower (higher) value.
}
\label{fig:warmups}
\end{figure}
\subsection{Magnetic and thermodynamic properties of the $125$~K phase}
A recent work using muon spin rotation \cite{Guguchia_arXiv_2019} concluded that the magnetism in Co$_3$Sn$_2$S$_2$\, originates solely from the Co atoms, and that there is a phase transition from a higher-temperature in-plane AFM to a low-temperature out-of-plane FM at $\sim90$~K. This is difficult to reconcile with our observations, since a simple FM alone does not exhibit EB. To resolve this, we measured the magnetic moment of a Co$_3$Sn$_2$S$_2$ crystal as a function of temperature in both out-of-plane (Fig.~\ref{fig:magnetization}(a)) and in-plane (Fig.~\ref{fig:magnetization}(b)) directions. In the out-of-plane direction, a ZFC measurement while warming up in a $10$~mT field shows the expected FM transition at $175$~K, with an additional peak at $125$~K. The same measurement protocol applied in the in-plane direction reveals an AFM-like cusp at $175$~K, followed by a rise in magnetization at $125$~K. The moment amplitude in the ab plane is two orders of magnitude lower than the out-of-plane moment, in agreement with previous magnetic measurements \cite{Kassem_PRB_2017}. However, the appearance of an in-plane moment, together with the $125$~K feature in both orientations, points to a previously undiscovered phase transition in Co$_3$Sn$_2$S$_2$. The work by Kassem \textit{et al.} \cite{Kassem_PRB_2017} includes measurements of magnetization and ac susceptibility, which were interpreted as indicating an anomalous magnetic phase preceding the FM phase upon cooling. Here, EB clearly shows that the two types of magnetism coexist below $125$~K.
Heat capacity measurements were also performed in order to characterize the $125$~K phase transition. A distinct phase transition at that temperature would be the hallmark of an ordered FM/AFM phase, whereas the lack of such a transition would allude to a glassy one. The results of these measurements are presented in Fig.~\ref{fig:magnetization}(c). A clear transition appears at $175$~K, but no feature is visible at $125$~K, even when following the FC-ZFW protocol mentioned above that shows a transition in transport (as seen in Fig.~\ref{fig:warmups}). This is evidence that the $125$~K transition is not of long range order, but of the freezing of a spin glass.
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.95\textwidth]{Fig5_11Jul2019.png}
\caption{(a+b) In-plane and out-of-plane magnetization measurements on a single crystal.
(a) Out-of-plane (H $||$ c) magnetization as a function of temperature while warming up the sample after ZFC (dashed) and field cooled (solid) and measuring under a field of $10$~mT. The curves are characteristic of a FM with a Curie temperature of $175$~K, with a small feature at $125$~K (marked with a small arrow on the ZFC curve).
(b) In-plane (H $||$ ab) magnetization as a function of temperature while warming up the sample after ZFC under a field of $10$~mT. The curve is characteristic of an AFM with a N\'eel temperature of $175$~K, with a feature at $125$~K.
(c) Single crystal heat capacity as a function of temperature for $100$--$200$~K. These measurements reveal no feature at $125$~K, despite repeating the ZFW-after-FC protocol as in Fig.~\ref{fig:warmups}.}
\label{fig:magnetization}
\end{figure*}
\section{Discussion}
The classical method for creating EB in materials is to combine the FM material with an AFM one. The AFM is used as a ``pinning'' layer, as the exchange interaction at the FM-AFM interface pins the FM layer's magnetization, resulting in a higher coercive field needed to flip its magnetic orientation.
It was later discovered that EB can also be induced by a combination of an FM phase and phases other than AFM \cite{Meiklejohn1957}, such as a ferrimagnet \cite{Cain1990} or a spin glass (SG) \cite{Ali_NatMat_2007}. The presence of exchange bias below $T_G = 125$~K, evidenced in Fig.~\ref{fig:cooldownAndEB} and Fig.~\ref{fig:seb}, immediately suggests the coexistence of ferromagnetism with another phase. As we discuss below, we suggest this phase is a dense spin glass arising from strong magnetic frustration in the Kagome lattice.
The first clue as to the nature of the coexisting phase is the unusual bow-tie-like structure of the $M(H)$ sweeps in the range $T_G<T<T_c$, shown in Fig.~\ref{fig:biasAtdiffFieldsTemps}. On sweeping the field up, the system begins with a linear response, saturating at a magnetization $M_0$, indicating the polarization of the FM domains. In an ordinary FM, these domains would remain polarized until a sufficiently negative field could flip the spins. In the present case, the domains depolarize at positive fields and restore their paramagnetic response. As a minimal Landau model, this can be understood as the interplay of two terms in the free energy, one favoring a FM structure $M=\pm M_0$ and another with a minimum favoring $M=0$. This could be the coexistence of an AFM, as claimed recently \cite{Guguchia_arXiv_2019}, or a correlated paramagnet (PM) with AFM interactions.
The difficulty with reconciling an AFM coexistence at $T_G<T<T_c$ is that EB is not observed in this range, but rather at $T<T_G$. This means that the coexisting phase must become stiffer below $T_G$, not weaker as previously suggested \cite{Guguchia_arXiv_2019}. However, the absence of a heat capacity anomaly at $T_G$ suggests that this transition does not freeze a significant fraction of the degrees of freedom, and in any case muon measurements have confirmed the absence of a competing long-range order at low temperatures.
We suggest that a frustration-driven spin glass transition at $T_G$ can explain all of the present observations. Such a transition would lead to exchange bias without a heat capacity anomaly \cite{Ali_NatMat_2007}. It also explains the temperature dependence of the $M(H)$ curves. In the range $T_G<T<T_c$, the system is a FM coexisting with an (antiferromagnetically) correlated PM, leading to a linear susceptibility at low fields, but polarizing at sufficiently high fields. Below $T_G$, the correlated PM freezes into a spin glass, pinning the FM domains into one of two states at $\pm M_0$ and causing the hysteresis loops to open up and take the familiar form of a more conventional FM system. This pinning explains the sudden jump in the AHE at $125$~K (Fig.~\ref{fig:warmups}) and the opening of the hysteresis loops (Fig.~\ref{fig:magnetization}). However, a SG would be expected to show time-dependent effects such as magnetic relaxation or magnetic memory, which we have not observed. This may be because the dynamics of the spin glass are too fast (as expected in a dense spin glass) and/or because the magnetization of the FM signal is already so large that the small magnetic contribution of a SG is difficult to resolve.
If the coexisting phase at low temperature is indeed a spin glass, it is unlikely to be disorder-driven: all studies of Co$_3$Sn$_2$S$_2$\, have observed the transition to occur at the same temperature, including in ours and others' samples that are clean enough to show quantum oscillations \cite{Liu_NatPhys_2018, Supplemental}. The spin glass is therefore most likely driven by frustrated interactions on the Kagome lattice. The observation of AFM correlations by muon spectroscopy in Ref.~\cite{Guguchia_arXiv_2019} may be consistent with this interpretation: the FM (with the moment predominantly aligned along c) onsets at $T_C$, but frustrated in-plane AFM correlations cause a weak in-plane canting, which subsequently freezes at $T_G$ into a dense SG. The coexistence of the spin glass and the FM phase then leads to the exchange-biased AHE observed here.
\section{\label{sec:sum} Summary}
Co$_3$Sn$_2$S$_2$\, is a magnetic Weyl semimetal displaying exchange bias intrinsically, without the need for doping or layering. The exchange bias in this material can also be induced spontaneously at low temperatures and by low magnetic fields.
We suggest that the origin of the exchange bias is the frustration in the Kagome magnet, which leads to a ferromagnetic state that below $125$~K is simultaneously glassy. The combination of these behaviors leads to exchange bias and spontaneous exchange bias that are strongly evident in the anomalous Hall effect.
We emphasize that magnetism plays an important role in the robustness of the QAHE in magnetically doped topological insulators \cite{Lachman2017}, and may therefore play a crucial role in unlocking the possibility of a QAHE in low-dimensional structures of Co$_3$Sn$_2$S$_2$. The interplay of magnetic frustration and topology in Co$_3$Sn$_2$S$_2$\, provides an example of the potential utility of these materials for future spintronics technologies.
\section{Methods}
\subsection{Single crystal growth}
Single crystals were grown from a stoichiometric ratio of the elements using the self-flux method (Sn flux). The elements were placed in an $\mathrm{AlO}_x$ crucible and sealed in an evacuated quartz tube.
\subsection{Transport and magnetization measurements}
Transport measurements were performed in a Quantum Design PPMS, with current flowing in the ab plane.
Mixing was accounted for by removing a constant ratio of $R_{xx}$ from the $R_{xy}$ data at all temperatures and fields. This ratio is a purely geometric factor, as it is independent of the measurement conditions and is constant for all the transport measurements performed on a specific sample. It was determined so as to remove ``symmetric'' contributions to $R_{xy}$ by ``leveling'' the high-field values. This protocol was chosen because, due to the non-symmetric nature of exchange bias, antisymmetrizing the data was not possible.\\
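As an illustration, the leveling step can be sketched as follows. This is a hypothetical implementation, not the code used for the analysis: the tail fraction and the specific leveling criterion (requiring the corrected high-field tails of $R_{xy}$ to average to opposite values, as expected for a pure Hall signal) are our assumptions.

```python
import numpy as np

def remove_mixing(field, r_xy, r_xx, tail_frac=0.1):
    """Subtract a constant ratio of R_xx from R_xy.

    The single ratio (a geometric factor per sample) is fixed by
    'leveling' the high-field values: after the correction, the mean of
    R_xy over the positive high-field tail equals minus its mean over
    the negative tail.
    """
    field, r_xy, r_xx = map(np.asarray, (field, r_xy, r_xx))
    cut = (1.0 - tail_frac) * np.abs(field).max()
    pos, neg = field >= cut, field <= -cut
    ratio = (r_xy[pos].mean() + r_xy[neg].mean()) / (r_xx[pos].mean() + r_xx[neg].mean())
    return r_xy - ratio * r_xx, ratio

# synthetic check: a saturating Hall signal contaminated by 5% of a symmetric R_xx
H = np.linspace(-9.0, 9.0, 361)
hall = 1.5 * np.tanh(2.0 * H)          # true antisymmetric Hall signal
rxx = 2.0 + 0.1 * H**2                 # symmetric longitudinal signal
corrected, ratio = remove_mixing(H, hall + 0.05 * rxx, rxx)
```

On this synthetic data the recovered ratio matches the injected contamination and the corrected curve reproduces the true Hall signal.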
Magnetization measurements were performed in a Quantum Design MPMS3 on a quartz rod.
\section{Acknowledgements}
\noindent This work was supported by the National Science Foundation under Grant No. 1607753. E.L. is an awardee of the Weizmann Institute of Science -- National Postdoctoral Award Program for Advancing Women in Science.
R.K. is supported by the National Science Foundation (NSF) Graduate Research Fellowship under Grant No. DGE-1106400.
E.L., N.M. and S.H. acknowledge support from the Gordon and Betty Moore foundation’s EPiQS Initiative through Grant GBMF4374.
High field measurements were performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490 and the State of Florida.
\section{Author contribution}
E.L. performed the crystal growth and the transport and magnetization measurements, as well as the data analysis for these measurements.
N.M., R.K. and S.H. performed the high magnetic field measurements.
E.L., R.M. and J.A. devised the experiments and interpreted the results.
E.L. and J.A. wrote the manuscript with contributions from the other authors.
\section{Competing interests}
The authors declare no competing interests.
\section{Data availability}
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
\section{Introduction}\label{sec:intro}
Consider an experimentalist observing a physical system modeled by a discrete time \emph{dynamical system} $(X,T)$, where $T\colon X \to X$ is the evolution rule and the \emph{phase space} $X$ is a subset of the Euclidean space $\mathbb{R}^N$. It often happens that, for a given point $x \in X$, instead of an actual sequence of $k$ states $x, Tx, \ldots, T^{k-1} x$, the observer's access is limited to the values of $k$ \emph{measurements} $h(x), h(Tx), \ldots, h(T^{k-1} x)$, for a real-valued \emph{observable} $h \colon X \to \mathbb{R}$. Therefore, it is natural to ask, to what extent the original system can be reconstructed from such sequences of measurements and what is the minimal number $k$, referred to as the number of \emph{delay-coordinates}, required for a reliable reconstruction. These questions have emerged in the physical literature (see e.g.~\cite{PCFS80}) and inspired a number of mathematical results, known as \emph{Takens-type delay embedding theorems}, stating that the reconstruction of $(X,T)$ is possible for certain observables $h$, as long as the measurements $h(x), h(Tx), \ldots, h(T^{k-1} x)$ are known for \emph{all} $x \in X$ and large enough $k$.
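The sequence of measurements described above assembles into a delay vector; a minimal sketch (the logistic map and the identity observable are illustrative choices, not part of the setup above):

```python
import numpy as np

def delay_vector(h, T, x, k):
    """Return (h(x), h(Tx), ..., h(T^{k-1} x)) for a point x."""
    values = []
    for _ in range(k):
        values.append(h(x))
        x = T(x)
    return np.array(values)

# illustrative system: the logistic map on [0, 1], observed directly
T = lambda x: 4.0 * x * (1.0 - x)
v = delay_vector(lambda x: x, T, 0.2, k=3)
```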
The possibility of performing measurements at every point of the phase space is clearly unrealistic. However, such an assumption enables one to obtain theoretical results which justify the validity of actual procedures used by experimentalists (see e.g.~\cite{hgls05distinguishing, KY90,sgm90distinguishing,sm90nonlinear}). Note that one cannot expect a reliable reconstruction of the system based on the measurements of a given observable $h$, as it may fail to distinguish the states of the system (e.g. if $h$ is a constant function). It is therefore necessary (and rather realistic) to assume that the experimentalists are able to \emph{perturb} the given observable. The first result obtained in this area is the celebrated Takens delay embedding theorem for smooth systems on manifolds \cite[Theorem 1]{T81}. Due to its strong connections with actual reconstruction procedures used in the natural sciences, Takens theorem has been met with great interest among mathematical physicists (see e.g. \cite{HBS15, SYC91, Voss03}). Let us recall its extension due to Sauer, Yorke and Casdagli \cite{SYC91}. In this setting, the number $k$ of delay-coordinates should be larger than twice the upper box-counting dimension of the phase space $X$
(denoted by $\overline{\dim}_B\, X$; see Section~\ref{sec:prem} for the definition), and the perturbation is a polynomial of degree $2k$. The formulation of the result given here follows \cite{Rob11}.
\begin{thm}[{\cite[Theorem 14.5]{Rob11}}]\label{thm:standard_takens}
Let $X \subset \mathbb{R}^N$ be a compact set and let $T\colon X \to X$ be Lipschitz and injective. Let $k \in \mathbb{N}$ be such that $k > 2\overline{\dim}_B\, X$ and assume $2\overline{\dim}_B\,(\{ x \in X : T^p x = x \}) < p$ for $p=1, \ldots, k-1$. Let $h \colon \mathbb{R}^N \to \mathbb{R}$ be a Lipschitz function and $h_1, \ldots, h_m\colon \mathbb{R}^N \to \mathbb{R}$ a basis of the space of polynomials of degree at most $2k$. For $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$ denote by $h_\alpha \colon \mathbb{R}^N \to \mathbb{R}$ the map
\[
h_\alpha (x) = h(x) + \sum \limits_{j=1}^m \alpha_j h_j (x).
\]
Then for Lebesgue almost every $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$, the transformation
\[
\phi_\alpha^T\colon X \to \mathbb{R}^k, \qquad \phi_\alpha^T(x) = (h_\alpha(x), h_\alpha(Tx), \ldots, h_\alpha(T^{k-1} x))
\]
is injective on $X$.
\end{thm}
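As a toy numerical illustration of the statement (our choice of system, observable and perturbation; not part of the proof), even a degenerate observable $h \equiv 0$ typically becomes injective on a finite sample after a random polynomial perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(h_alpha, T, x, k):
    """Delay-coordinate map phi(x) = (h_alpha(x), ..., h_alpha(T^{k-1} x))."""
    out = []
    for _ in range(k):
        out.append(h_alpha(x))
        x = T(x)
    return tuple(out)

T = lambda x: 4.0 * x * (1.0 - x)          # logistic map on [0, 1]

# h = 0 perturbed by a random polynomial (here degree 4 <= 2k, with k = 3)
alpha = rng.uniform(-1.0, 1.0, size=4)
h_alpha = lambda x: sum(a * x**j for j, a in enumerate(alpha, start=1))

# the perturbed delay map separates a random sample of points
pts = rng.uniform(0.0, 1.0, size=200)
images = {phi(h_alpha, T, x, k=3) for x in pts}
```

With probability one (over the sample and the coefficients), all 200 image tuples are distinct, consistent with the almost-sure injectivity asserted by the theorem.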
The map $\phi_\alpha^T$ is called the \emph{delay-coordinate map}. Note that Theorem~\ref{thm:standard_takens} applies to any compact set $X \subset \mathbb{R}^N$, not necessarily a manifold. This is a useful feature, as it allows one to consider sets with a complicated geometrical structure, such as fractal sets arising as attractors in chaotic dynamical systems, see e.g.~\cite{ER85}. Moreover, the upper box-counting dimension of $X$ can be smaller than the dimension of any smooth manifold containing $X$, so Theorem~\ref{thm:standard_takens} may require fewer delay-coordinates than its smooth counterpart in \cite{T81}.
As noted above, usually an experimentalist may perform only a finite number of observations $h(x_j), \ldots, h(T^{k-1}x_{j})$ for some points $x_j \in X$, $j = 1, \ldots, l$. We believe it is realistic to assume that there is an (explicit or implicit) \emph{random} process determining which initial states $x_{j}$ are accessible to the experimentalist. In this paper we are interested in the question of reconstruction of the system in the presence of such a process. Mathematically speaking, this corresponds to fixing a probability measure $\mu$ on $X$ and asking whether the delay-coordinate map $\phi_\alpha^T$ is injective \emph{almost surely} with respect to $\mu$. Since in this setting we are allowed to neglect sets of probability zero, it is reasonable to ask whether the minimal number of delay-coordinates sufficient for the reconstruction of the system can be smaller than $2\dim X$. Our main result states that this is indeed the case, and the number of delay-coordinates can be reduced by half for \emph{any} (Borel) probability measure.
The problem of determining the minimal number of delay-coordinates required for reconstruction has already been considered in the physical literature. In \cite{PCFS80}, the authors analyzed an algorithm which may be interpreted as an attempt to determine this number in a probabilistic setting. Our work provides rigorous results in this direction. The following theorem is a simplified version of our result.
\begin{thm}[{\bf Probabilistic Takens delay embedding theorem}{}]\label{thm:takens_simple}
Let $X \subset \mathbb{R}^N$ be a Borel set, $\mu$ a Borel probability measure on $X$ and $T\colon X \to X$ an injective, locally Lipschitz map. Take $k \in \mathbb{N}$ such that $k > \dim X$ and assume that for $p=1, \ldots, k-1$ we have $\dim(\{ x \in X : T^p x = x \}) < p$ or $\mu(\{ x \in X : T^p x = x \}) = 0$. Let $h \colon X \to \mathbb{R}$ be a locally Lipschitz function and $h_1, \ldots, h_m\colon \mathbb{R}^N \to \mathbb{R}$ a basis of the space of real polynomials of $N$ variables of degree at most $2k-1$. For $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$ denote by $h_\alpha \colon \mathbb{R}^N \to \mathbb{R}$ the map
\[ h_\alpha (x) = h(x) + \sum \limits_{j=1}^m \alpha_j h_j (x). \]
Then for Lebesgue almost every $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$, there exists a Borel set $X_\alpha \subset X$ of full $\mu$-measure, such that the delay-coordinate map
\[ \phi_\alpha^T\colon X \to \mathbb{R}^k, \qquad \phi_\alpha^T(x) = (h_\alpha(x), h_\alpha(Tx), \ldots, h_\alpha(T^{k-1} x)) \]
is injective on $X_\alpha$.
\end{thm}
In the above theorem, the dimension $\dim$ can be chosen to be any of $\dim_H, \underline{\dim}_B\,, \overline{\dim}_B\,$ (Hausdorff, lower and upper box-counting dimension; for the definitions see Section~\ref{sec:prem}). Recall that for any Borel set $X$ one has
\begin{equation}\label{eq:hdim_bdim}
\dim_H X \leq \underline{\dim}_B\, X \leq \overline{\dim}_B\, X
\end{equation}
(see e.g.~\cite[Proposition~3.4]{falconer2014fractal}).
Since the inequalities in \eqref{eq:hdim_bdim} may be strict, using the Hausdorff dimension instead of the box-counting one(s) may reduce the required number of delay-coordinates. In particular, there are compact sets $X \subset \mathbb{R}^N$ with $\dim_H X=0$ and $\overline{\dim}_B\, X = N$, hence Theorem \ref{thm:takens_simple} can significantly reduce the number of required delay-coordinates compared to Theorem \ref{thm:standard_takens} (in a probabilistic setting).
Notice that in Theorem~\ref{thm:takens_simple} we do not assume that the measure $\mu$ is $T$-invariant. However, the invariance of $\mu$ provides some additional benefits, as shown in the following remark.
\begin{rem}[{\bf Invariant measure case}{}]\label{rem:invariant_simple} Suppose that the measure $\mu$ in Theorem~\ref{thm:takens_simple} is additionally $T$-invariant, i.e.~$\mu(Y) = \mu(T^{-1}(Y))$ for every Borel set $Y \subset X$. Then
the set $X_{\alpha}$ can be chosen to satisfy $T(X_{\alpha}) = X_{\alpha}$. Moreover, if $\mu$ is $T$-invariant and ergodic (i.e.~$T^{-1}(Y) = Y$ can occur only for sets $Y$ of $0$ or full $\mu$-measure), then the assumption on the periodic points of $T$ in Theorem~\ref{thm:takens_simple} can be omitted.
\end{rem}
Note that in the case when the measure $\mu$ is $T$-invariant, Theorem~\ref{thm:takens_simple} and Remark \ref{rem:invariant_simple} show that for a suitable choice of $X_\alpha$, the map $\phi_{\alpha}^T$ is injective on the invariant set $X_\alpha$, which implies that the dynamical system $(\hat X, \hat T)$ for $\hat X = \phi_{\alpha}^T(X_{\alpha})$, $\hat T = \phi_{\alpha}^T \circ T \circ (\phi_{\alpha}^T)^{-1}$, is a model of the system $(X,T)$ embedded in $\mathbb{R}^k$.
Extended versions of Theorem~\ref{thm:takens_simple} and Remark~\ref{rem:invariant_simple} are presented and proved in Section~\ref{sec:takens} as Theorem~\ref{thm:takens} and Remark~\ref{rem:invariant}, respectively. Theorem~\ref{thm:takens} shows that the assumption $k > \dim X$ can be slightly weakened, and that in addition to locally Lipschitz functions $h$, one can consider locally $\beta$-H\"older functions for suitable $\beta \in (0,1]$. Moreover, one can replace the probability measure $\mu$ by any Borel $\sigma$-finite measure on $X$. For details, see Section~\ref{sec:takens}.
Notice that to eliminate the assumption on the periodic points of $T$ in Theorem~\ref{thm:takens_simple}, one can also consider systems with `few' or no periodic points. For instance, as proved in \cite{y69}, a flow on a subset of Euclidean space given by an autonomous differential equation $\dot{x}=F(x)$, where $F$ is Lipschitz with a constant $L$, has no periodic orbits of period smaller than $\frac{2\pi}{L}$. It follows that if $T$ is a $t$-time map for such a flow with $t < \frac{2\pi}{L \dim X}$, then it has no periodic points of periods smaller than $\dim X$ and therefore the assumption on periodic points in Theorem~\ref{thm:takens_simple} can be omitted (compare also \cite[Remark~1.2]{Gut16}). The same holds if the number of periodic points of a given period is finite, which by the Kupka--Smale theorem is a generic condition in the space of $C^r$-diffeomorphisms ($r\geq 1$) of a compact manifold equipped with the uniform $C^r$-topology\footnote{According to the Kupka--Smale theorem (\cite[Chapter 3, Theorem 3.6]{palis1982geometric}) it is generic that the periodic points are hyperbolic and thus periodic points of a given period are isolated by the Hartman--Grobman theorem (\cite[Chapter 2, Theorem 4.1]{palis1982geometric}).}.
As has been mentioned already, Takens theorems are used in order to justify actual (approximate) delay map procedures based on real experimental data (see e.g. \cite{SugiharaFishes,hgls05distinguishing,sgm90distinguishing,sm90nonlinear}). Note, however, that in the cited papers the dimension of the phase space $X$ is deduced \emph{a posteriori} from the properties of the \emph{time series} (orbits of the delay coordinate map for a given observable). It would be very interesting to know whether in the literature it has been observed for some experimental data originating from a space $X$ with known dimension that it is sufficient to have $k \approx \dim X$ (instead of $k \approx 2 \dim X$) delay-coordinates (in other words, time series of length $k$) in the framework of such procedures.
In this paper we focus our attention on the case where the space $X$ is a subset of a finite-dimensional Euclidean space. Takens-type delay embedding theorems have also been extended to finite-dimensional subsets of Banach spaces (see e.g.~\cite{Rob05}). It is a natural question whether our probabilistic embedding theorems can also be transferred to the infinite-dimensional setup. This problem will be considered in a subsequent work.
Takens-type delay embedding theorems can be seen as dynamical versions of \emph{embedding theorems} which specify when a finite-dimensional set can be embedded into a Euclidean space. Indeed, under the assumptions of Theorem~\ref{thm:standard_takens}, the delay-coordinate map $\phi_\alpha^T$ is an embedding of $X$ into $\mathbb{R}^k$ for typical $\alpha$. Embedding theorems in various categories have been extensively studied in a number of papers (see Section~\ref{sec:embedding} for a more detailed discussion). Recently, Alberti, B\"{o}lcskei, De Lellis, Koliander and Riegler \cite{Riegler18} proved a probabilistic embedding theorem involving the modified lower box-counting dimension of the set (see Theorem~\ref{thm:riegler}). We are able to improve this result by considering the Hausdorff dimension. Below we present a simplified version of our theorem, which can be seen as a non-dynamical counterpart of Theorem~\ref{thm:takens_simple}.
\begin{thm}[{\bf Probabilistic embedding theorem}{}]\label{thm:embed_simple}
Let $X \subset \mathbb{R}^N$ be a Borel set and let $\mu$ be a Borel probability measure on $X$. Take $k \in \mathbb{N}$ such that the $k$-th Hausdorff measure of $X$ is zero $($it suffices to take $k > \dim_H X)$ and let $\phi \colon X \to \mathbb{R}^k$ be a locally Lipschitz map. Then for Lebesgue almost every linear transformation $L \colon \mathbb{R}^N \to \mathbb{R}^k$ there exists a Borel set $X_L \subset X$ of full $\mu$-measure, such that $\phi_L = \phi + L$ is injective on $X_L$.
\end{thm}
The extended version of the theorem is formulated and proved in Section~\ref{sec:embedding} as Theorem~\ref{thm:embed}. In particular, we obtain the following geometric corollary (see Section~\ref{sec:embedding} for details).
\begin{cor}[{\bf Probabilistic injective projection theorem}{}]\label{cor:embed-proj_simple}
Let $X \subset \mathbb{R}^N$ be a Borel set and let $\mu$ be a Borel probability measure on $X$. Then for every $k > \dim_H X$ and almost every $k$-dimensional linear subspace $S \subset \mathbb{R}^N$, the orthogonal projection of $X$ into $S$ is injective on a full $\mu$-measure subset of $X$.
\end{cor}
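A quick numerical illustration of the corollary (the curve and the sample size are our choices): a curve in $\mathbb{R}^3$ has Hausdorff dimension $1$, so for $k = 2$ a typical orthogonal projection should be injective off a null set, and on a finite sample all projected points are distinct:

```python
import numpy as np

rng = np.random.default_rng(1)

# sample points on a curve in R^3 (Hausdorff dimension 1)
t = rng.uniform(0.0, 2.0 * np.pi, size=300)
X = np.stack([np.cos(t), np.sin(t), np.sin(2.0 * t)], axis=1)

# a random 2-dimensional subspace: orthonormal columns via QR
Q, _ = np.linalg.qr(rng.standard_normal((3, 2)))
P = X @ Q  # coordinates of the orthogonal projection in that plane

# injectivity on the sample: positive minimal pairwise distance
diffs = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
min_dist = diffs[np.triu_indices(len(P), k=1)].min()
```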
Notice that by the Marstrand--Mattila projection theorem (see \cite{Marstrand,Mattila-proj}), if $X \subset \mathbb{R}^N$ is Borel and $k \geq \dim_H X$, then for almost all $k$-dimensional linear subspaces $S\subset \mathbb{R}^N$, the image of $X$ under the orthogonal projection into $S$ has Hausdorff dimension equal to $\dim_H X$.
Note also that Sauer and Yorke proved in \cite{SauerYorke97} that the dimension\footnote{Any one of the dimensions mentioned above and denoted by $\dim$.} of a bounded Borel subset $X$ of $\mathbb{R}^N$ is preserved under typical smooth maps and typical delay-coordinate maps into $\mathbb{R}^k$ as long as $k \ge \dim X$.
In this paper we also provide several examples. Example \ref{ex:circle} shows that in general the condition $k > \dim_H X$ in Theorem \ref{thm:embed_simple} cannot be replaced by $k \geq \dim_H X$. Example \ref{ex:no_linear_takens} shows that linear perturbations of the observable are not sufficient for Takens theorem. Section~\ref{sec:examples} contains a pair of examples. The first one is based on Kan's example from the Appendix to \cite{SYC91}, showing that the condition $k > 2\dim_H X$ is not sufficient for the existence of a linear transformation into $\mathbb{R}^k$ which is injective on $X$. As in the probabilistic setting one can work with the Hausdorff dimension, we consider a set $X \subset \mathbb{R}^2$ similar to the one provided by Kan, which cannot be embedded linearly into $\mathbb{R}$, but for which, when endowed with a natural probability measure, almost every linear transformation $L\colon \mathbb{R}^2 \to \mathbb{R}$ is injective on a set of full measure. The second example provides a probability measure with $\dim_H \mu < \underline{\dim}_{\,\it MB\,} \mu$, showing that Theorem \ref{thm:embed_simple} strengthens a previous result from \cite{Riegler18}.
\subsection*{Organization of the paper} The paper is organized as follows. In Section~\ref{sec:prem} we introduce notation, definitions and preliminary results. Section~\ref{sec:embedding} contains the formulation and proof of the extended version of the probabilistic embedding theorem (Theorem~\ref{thm:embed}), while Section~\ref{sec:takens} is devoted to the proof of the extended version of the probabilistic Takens delay embedding theorem (Theorem~\ref{thm:takens}). In Section~\ref{sec:examples} we present examples showing how the use of the Hausdorff dimension improves the previously obtained results.
\subsection*{Acknowledgements} We are grateful to Erwin Riegler for helpful discussions and to the anonymous referees for helpful comments. Y.~G. and A.~\'S. were partially supported by the National Science Centre (Poland) grant 2016/22/E/ST1/00448.
\section{Preliminaries}\label{sec:prem}
Consider the Euclidean space $\mathbb{R}^N$ for $N \in \mathbb{N}$, with the standard inner product $\langle \cdot, \cdot\rangle$ and the norm $\| \cdot \|$. The open $\delta$-ball around a point $x \in \mathbb{R}^N$ is denoted by $B_N(x, \delta)$. By $|X|$ we denote the diameter of a set $X \subset \mathbb{R}^N$. We say that a function $\phi\colon X \to \mathbb{R}^k$, $X \subset \mathbb{R}^N$, is \emph{locally $\beta$-H\"{o}lder} for $\beta > 0$ if for every $x \in X$ there exists an open set $U \subset \mathbb{R}^N$ containing $x$ such that $\phi$ is $\beta$-H\"{o}lder on $U \cap X$, i.e.~there exists $C>0$ such that
\[
\| \phi(x) - \phi(y) \| \leq C\|x - y\|^{\beta}
\]
for every $x,y \in U \cap X$. We say that $\phi$ is \emph{locally Lipschitz} if it is locally $1$-H\"{o}lder.
For $k \le N$ we write $\Gr(k, N)$ for the $(k, N)$-\emph{Grassmannian}, i.e.~the space of all $k$-dimensional linear subspaces of $\mathbb{R}^N$, equipped with the standard rotation-invariant (Haar) measure, see \cite[Section 3.9]{mattila} (and \cite{FR02} for another construction of a rotation-invariant measure on the Grassmannian). By $\eta_N$ we denote the normalized Lebesgue measure on the unit ball $B_N(0,1)$, i.e.
\[
\eta_N = \frac{1}{\Leb (B_N(0,1))} \Leb|_{B_N(0,1)},
\]
where $\Leb$ is the Lebesgue measure on $\mathbb{R}^N$.
For $s>0$, the \emph{$s$-dimensional $($outer$)$ Hausdorff measure} of a set $X \subset \mathbb{R}^N$ is defined as
\[ \mathcal{H}^s(X) = \lim \limits_{\delta \to 0}\ \inf \Big\{ \sum \limits_{i = 1}^{\infty} |U_i|^s : X \subset \bigcup \limits_{i=1}^{\infty} U_i,\ |U_i| \leq \delta \Big\}.\]
The \emph{Hausdorff dimension} of $X$ is given as
\[ \dim_H X = \inf \{ s > 0 : \mathcal{H}^s(X) = 0 \} = \sup \{ s > 0 : \mathcal{H}^s(X) = \infty \}. \]
For a bounded set $X \subset \mathbb{R}^N$ and $\delta>0$, let $N(X, \delta)$ denote the minimal number of balls of diameter at most $\delta$ required to cover $X$. The \emph{lower} and \emph{upper box-counting $($Minkowski$)$ dimensions} of $X$ are defined as
\[ \underline{\dim}_B\, X = \liminf \limits_{\delta \to 0} \frac{\log N(X,\delta)}{-\log \delta}\quad \text{ and }\quad \overline{\dim}_B\, X = \limsup \limits_{\delta \to 0} \frac{\log N(X,\delta)}{-\log \delta}. \]
The lower (resp.~upper) box-counting dimension of an unbounded set is defined as the supremum of the lower (resp.~upper) box-counting dimensions of its bounded subsets.
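The definition above yields a standard numerical estimator of the box-counting dimension (the grid covering and the range of scales below are our choices):

```python
import numpy as np

def box_counting_dim(points, deltas):
    """Estimate the box-counting dimension of a finite sample:
    count occupied grid boxes of side delta, then regress
    log N(delta) against -log delta."""
    counts = [len({tuple(b) for b in np.floor(points / d).astype(int)})
              for d in deltas]
    slope, _ = np.polyfit(-np.log(deltas), np.log(counts), 1)
    return slope

# a line segment in R^2 should have dimension close to 1
t = np.linspace(0.0, 1.0, 20000)
segment = np.stack([t, 0.5 * t], axis=1)
dim = box_counting_dim(segment, deltas=np.array([0.1, 0.05, 0.02, 0.01]))
```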
The \emph{lower} and \emph{upper modified box-counting dimensions} of $X \subset \mathbb{R}^N$ are defined as
\begin{align*}
\underline{\dim}_{\text{\it MB}}\, X &= \inf \Big\{ \sup \limits_{i \in \mathbb{N}} \underline{\dim}_B\, K_i : X \subset \bigcup \limits_{i=1}^{\infty} K_i,\ K_i \text{ compact} \Big\},\\
\overline{\dim}_{\text{\it MB}}\, X &= \inf \Big\{ \sup \limits_{i \in \mathbb{N}} \overline{\dim}_B\, K_i : X \subset \bigcup \limits_{i=1}^{\infty} K_i,\ K_i \text{ compact} \Big\}.
\end{align*}
With this notation, the following inequalities hold:
\begin{equation}\label{eq:hdim_mbdim_bdim}
\begin{aligned}
&\dim_H X \leq \underline{\dim}_{\text{\it MB}}\, X \leq \overline{\dim}_{\text{\it MB}}\, X \leq \overline{\dim}_B\, X,\\
&\dim_H X \leq \underline{\dim}_{\text{\it MB}}\, X \leq \underline{\dim}_B\, X \leq \overline{\dim}_B\, X.
\end{aligned}
\end{equation}
We define dimension of a finite Borel measure $\mu$ in $\mathbb{R}^N$ as
\[
\dim \mu = \inf\{ \dim X: X \subset \mathbb{R}^N \text{ is a Borel set of full $\mu$-measure} \}.
\]
Here $\dim$ may denote any one of the dimensions defined above. Recall that for a measure $\mu$ on a set $X$ and a measurable $Y \subset X$ we say that $Y$ is of \emph{full $\mu$-measure}, if $\mu(X\setminus Y) = 0$.
For more information on dimension theory in Euclidean space see \cite{falconer2014fractal, mattila, Rob11}.
For $N,k \in \mathbb{N}$ let $\Lin(\mathbb R^N; \mathbb R^k)$ be the space of all linear transformations $L \colon \mathbb{R}^N \to \mathbb{R}^k$. Such transformations are given by
\begin{equation}\label{eq:L}
L x = \big(\langle l_1, x \rangle , \ldots, \langle l_k, x \rangle\big),
\end{equation}
where $l_1, \ldots, l_k \in \mathbb{R}^N$. Thus, the space $\Lin(\mathbb{R}^N; \mathbb{R}^k)$ can be identified with $(\mathbb{R}^N)^k$, and the Lebesgue measure on $\Lin(\mathbb{R}^N; \mathbb{R}^k)$ is understood as $\bigotimes \limits_{j=1}^k \Leb$, where $\Leb$ is the Lebesgue measure in $\mathbb{R}^N$. Within the space $\Lin(\mathbb{R}^N; \mathbb{R}^k)$ we consider the space $E^N_k$ consisting of all linear transformations $L \colon \mathbb{R}^N \to \mathbb{R}^k$ of the form \eqref{eq:L}, for which $l_1, \ldots, l_k \in B_N(0, 1)$. Note that by the Cauchy--Schwarz inequality,
\begin{equation}\label{eq:E}
\|Lx\| \leq \sqrt{N} \,\|x\|
\end{equation}
for every $L \in E^N_k$ and $x \in \mathbb{R}^N$.
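The bound \eqref{eq:E} is easy to verify numerically (a sketch; the rejection sampler and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 5, 3

def unit_ball_point(n, rng):
    """Rejection sampling of a uniform point in the unit ball of R^n."""
    while True:
        v = rng.uniform(-1.0, 1.0, size=n)
        if v @ v <= 1.0:
            return v

# L in E^N_k: rows l_1, ..., l_k drawn from the unit ball B_N(0, 1)
L = np.stack([unit_ball_point(N, rng) for _ in range(k)])

# check ||Lx|| <= sqrt(N) ||x|| on random nonzero vectors
xs = rng.standard_normal((1000, N))
ratios = np.linalg.norm(xs @ L.T, axis=1) / np.linalg.norm(xs, axis=1)
max_ratio = ratios.max()
```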
By $\eta_{N,k}$ we denote the normalized Lebesgue measure on $E^N_k$, i.e.~the probability measure on $E^N_k$ given by
\[
\eta_{N,k} = \bigotimes \limits_{j=1}^k \frac{1}{\Leb(B_N(0, 1))} \Leb|_{B_N(0, 1)}.
\]
The following geometrical inequality, used in \cite{HK99} (see also \cite[Lemma 4.1]{Rob11}) is the key ingredient of the proof of Theorem~\ref{thm:embed}.
\begin{lem}\label{lem: key_ineq_linear}
Let $L \colon \mathbb{R}^N \to \mathbb{R}^k$ be a linear transformation. Then for every $x \in \mathbb{R}^N \setminus \{ 0 \}$, $z \in \mathbb{R}^k$ and $\varepsilon>0$,
\[
\eta_{N,k} (\{ L \in E^N_k : \|Lx + z \| \leq \varepsilon \}) \leq CN^{k/2}\frac{\varepsilon^k}{\|x\|^k}, \]
where $C > 0$ is an absolute constant.
\end{lem}
For $L \in \Lin(\mathbb R^m; \mathbb R^k)$, where $m,k \in \mathbb{N}$, denote by $\sigma_p(L)$, $p \in \{1, \ldots, k\}$, the $p$-th largest \emph{singular value} of the matrix $L$, i.e.~the $p$-th largest square root of an eigenvalue of the matrix $L^*L$. In the proof of Theorem~\ref{thm:takens}, instead of Lemma~\ref{lem: key_ineq_linear} we will use the following lemma, proved as \cite[Lemma~4.2]{SYC91} (see also \cite[Lemma 14.3]{Rob11}).
\begin{lem}\label{lem: key_ineq_inter}
Let $L \colon \mathbb{R}^m \to \mathbb{R}^k$ be a linear transformation. Assume that $\sigma_p(L) > 0$ for some $p \in \{1, \ldots, k\}$. Then for every $z \in \mathbb{R}^k$ and $\rho, \varepsilon > 0$,
\[ \frac{\Leb(\{ \alpha \in B_m(0, \rho) : \|L \alpha + z \| \leq \varepsilon \})}{\Leb (B_m(0, \rho))} \leq C_{m,k} \Big(\frac{\varepsilon}{\sigma_p(L) \, \rho}\Big)^p, \]
where $C_{m,k} > 0$ is a constant depending only on $m,k$ and $\Leb$ is the Lebesgue measure on $\mathbb{R}^m$.
\end{lem}
To verify the measurability of the sets occurring in subsequent proofs, we will use the following two elementary lemmas. A measure $\mu$ on a set $X$ is called \emph{$\sigma$-finite} if there exists a countable collection of measurable sets $A_n$, $n \in \mathbb{N}$, such that $\mu(A_n)<\infty$ for each $n \in \mathbb{N}$ and $\bigcup \limits_{n=1}^{\infty}A_n = X$. Recall that a \emph{$\sigma$-compact} set is a countable union of compact sets.
\begin{lem}\label{lem:dimh_fsigma}
Let $X\subset \mathbb{R}^N$ be a Borel set and let $\mu$ be a Borel $\sigma$-finite measure on $X$. Then there exists a $\sigma$-compact set $K \subset X$ of full $\mu$-measure.
\end{lem}
\begin{proof} Follows directly from the fact that a $\sigma$-finite Borel measure in a Euclidean space is regular (see e.g.~\cite[Theorem 1.1]{Billingsley99}).
\end{proof}
\begin{lem}\label{lem:measurability}
Let $\mathcal{X}, \mathcal{Z}$ be metric spaces. Then the following hold.
\begin{itemize}
\item[(a)] If $K \subset \mathcal{X} \times \mathcal{Z}$ is $\sigma$-compact, then so is $\pi_\mathcal{X}(K)$, where $\pi_\mathcal{X}\colon \mathcal{X} \times \mathcal{Z} \to \mathcal{X}$ is the projection given by $\pi_\mathcal{X}(x,z)=x$. In particular, $\pi_\mathcal{X}(K)$ is Borel.
\item[(b)] If $\mathcal{X}$ is $\sigma$-compact, $F\colon \mathcal{X} \to \mathcal{Z}$ is continuous and $K \subset \mathcal{Z}$ is $\sigma$-compact, then $F^{-1}(K)$ is $\sigma$-compact, hence Borel.
\item[(c)]
If $\mathcal{X},\mathcal{Z}$ are $\sigma$-compact, $F\colon \mathcal{X} \times \mathcal{Z} \to \mathbb{R}^k$, $k \in \mathbb{N}$, is continuous and $K \subset \mathcal{X}$ is $\sigma$-compact, then the set
\[ \{ (x, z) \in \mathcal{X} \times \mathcal{Z} : F(x,z) = F(y,z) \text{ for some } y \in K \setminus\{ x \} \} \]
is $\sigma$-compact and hence Borel.
\end{itemize}
\end{lem}
\begin{proof}
The statement (a) follows from the fact that $\pi_\mathcal{X}$ is continuous, and a continuous image of a compact set is also compact. To show (b),
it is enough to notice that $F^{-1}(K)$ is a countable union of closed subsets of a $\sigma$-compact space. To check (c), let $\pi_{\mathcal{X} \times \mathcal{Z}}\colon \mathcal{X} \times K \times \mathcal{Z} \to \mathcal{X} \times \mathcal{Z}$ be the projection $\pi_{\mathcal{X} \times \mathcal{Z}}(x,y,z) = (x,z)$. Then
\begin{align*}
&\{ (x, z) \in \mathcal{X} \times \mathcal{Z} : F(x,z) = F(y,z) \text{ for some } y \in K \setminus\{ x \} \} \\
&= \pi_{\mathcal{X} \times \mathcal{Z}}\big(\{ (x, y, z) \in \mathcal{X} \times K \times \mathcal{Z} : F(x,z) = F(y,z), \: d(x,y) \neq 0 \}\big) \\
& = \bigcup \limits_{n = 1}^{\infty} \pi_{\mathcal{X} \times \mathcal{Z}}\big(\{ (x, y, z) \in \mathcal{X} \times K \times \mathcal{Z} : F(x,z) = F(y,z), \: d(x,y) \geq \frac{1}{n} \}\big),
\end{align*}
where $d$ is the metric in $\mathcal{X}$. Since $d$ is continuous, we can use (a) and (b) to end the proof.
\end{proof}
\section{Probabilistic embedding theorem}\label{sec:embedding}
In this section we prove an extended version of the probabilistic embedding theorem, formulated below. Obviously, Theorem~\ref{thm:embed_simple} follows from Theorem~\ref{thm:embed}.
\begin{thm}[{\bf Probabilistic embedding theorem -- extended version}{}] \label{thm:embed}
Let $X \subset \mathbb{R}^N$ be a Borel set and $\mu$ be a Borel $\sigma$-finite measure on $X$. Take $k \in \mathbb{N}$ and $\beta \in (0,1]$ such that $\mathcal{H}^{\beta k}(X) = 0$ and let $\phi \colon X \to \mathbb{R}^k$ be a locally $\beta$-H\"older map. Then for Lebesgue almost every linear transformation $L\colon \mathbb{R}^N \to \mathbb{R}^k$ there exists a Borel set $X_L \subset X$ of full $\mu$-measure, such that the map $\phi_L = \phi + L$ is injective on $X_L$.
\end{thm}
\begin{rem} It is straightforward to notice that if $\dim_H X = 0$, then $\phi$ can be taken to be an arbitrary H\"older map.
\end{rem}
\begin{proof}[Proof of Theorem~\rm\ref{thm:embed}]
Note first that it is sufficient to prove that the set $X_L$ exists for $\eta_{N,k}$-almost every $L \in E^N_k$. Indeed, if this is shown, then for a given locally $\beta$-H\"older map $\phi \colon X \to \mathbb{R}^k$ we can take sets $\mathcal L_j \subset E^N_k$, $j \in \mathbb{N}$, such that $\eta_{N,k}(\mathcal L_j) = 1$ and for every $\tilde L \in \mathcal L_j$ the map $(\phi/j)_{\tilde L} = \phi/j + \tilde L$ is injective on a Borel set $X_{\tilde L}^{(j)} \subset X$ of full $\mu$-measure. Then the set $\mathcal L = \bigcup_{j\in \mathbb{N}} \{j\tilde L: \tilde L \in \mathcal L_j\} \subset \Lin(\mathbb R^N; \mathbb R^k)$ has full Lebesgue measure and for every $L \in \mathcal L$ there exists $j$ such that $L/j \in \mathcal L_j$, so $(\phi/j)_{L/j} = (\phi + L)/j$ (and hence $\phi_L$) is injective on $X_L = \bigcap_{j\in\mathbb{N}} X_{L/j}^{(j)}$, which has full $\mu$-measure.
By Lemma~\ref{lem:dimh_fsigma}, we can assume that $X$ is $\sigma$-compact. Take $k \in \mathbb{N}$, $\beta \in (0,1]$ with $\mathcal{H}^{\beta k}(X) = 0$ and a locally $\beta$-H\"older map $\phi \colon X \to \mathbb{R}^k$. Set
\[ A = \{ (x, L) \in X \times E^N_k : \phi_L (x) = \phi_L (y) \text{ for some } y \in X \setminus \{ x \}\}. \]
By Lemma~\ref{lem:measurability}, $A$ is Borel.
For $x \in X$ and $L \in E^N_k$, denote by $A_x$ and $A^L$, respectively, the sections
\[ A_x = \{ L \in E^N_k : (x, L) \in A \},\quad A^L = \{ x \in X : (x, L) \in A \}. \]
The sets $A_x$ and $A^L$ are Borel as sections of a Borel set. Observe first that in order to prove the theorem it is enough to show $\eta_{N,k}(A_x) = 0$ for every $x \in X$, since then by Fubini's theorem (\cite[Thm. 8.8]{R87}), $(\eta_{N,k} \otimes \mu) (A) = 0$ and, consequently, $\mu(A^L) = 0$ for $\eta_{N,k}$-almost every $L \in E_k^N$. Since $\phi_L$ is injective on $X \setminus A^L$, the assertion of the theorem is true.
Take a point $x \in X$. Since $\phi$ is locally $\beta$-H\"older and $X$ is separable, there exists a countable covering of $X$ by open sets $U_j \subset \mathbb{R}^N$, $j \in \mathbb{N}$, such that
\begin{equation}\label{eq:holder}
\|\phi(y) - \phi(y')\| \le C_j \|y - y'\|^\beta \qquad \text{for every} \quad y,y' \in U_j \cap X
\end{equation}
for some $C_j > 0$. Let
\[
K_{n} = \Big\{ y \in X : \frac{1}{n} \leq \|x-y\| \Big\}.
\]
To show $\eta_{N,k}(A_x) = 0$, it suffices to prove $\eta_{N,k}(A_{x, j, n}) = 0$ for every $j, n \in \mathbb{N}$, where
\[
A_{x, j, n} = \{ L \in E^N_k : \phi_L (x) = \phi_L (y) \text{ for some } y \in U_j \cap K_{n}\}.
\]
Note that by Lemma~\ref{lem:measurability}, the set $A_{x, j, n}$ is Borel.
Take $j, n \in \mathbb{N}$ and fix a small $\varepsilon > 0$. Since $ \mathcal{H}^{\beta k}(U_j \cap K_{n}) \leq \mathcal{H}^{\beta k} (X) = 0$, there exists a collection of balls $B_N(y_i, \varepsilon_i)$, $i \in \mathbb{N}$, for some $y_i \in U_j \cap K_{n}$ and $\varepsilon_i > 0$, such that
\begin{equation}\label{e:cover} U_j \cap K_{n} \subset \bigcup \limits_{i \in \mathbb{N}} B_N(y_i, \varepsilon_i)\quad \text{and} \quad \sum \limits_{i = 1}^\infty \varepsilon_i^{\beta k} \leq \varepsilon.
\end{equation}
Take $L \in A_{x, j, n}$ and $y \in U_j \cap K_{n}$ such that $\phi_L (x) = \phi_L (y)$. Then $y \in B_N (y_i, \varepsilon_i)$ for some $i \in \mathbb{N}$ and
\begin{align*}
\| L(y_i - x) + \phi(y_i) - \phi(x) \| &= \| \phi_L(y_i) - \phi_L(x)\|\\
&= \| \phi_L(y_i) - \phi_L(y)\|\\
& \leq \|\phi(y_i) - \phi(y)\| + \| L(y_i-y)\|\\
&\leq C_j \|y_i-y\|^\beta + \sqrt{N} \|y_i - y\|\\
&\leq M_j \varepsilon_i^\beta
\end{align*}
for some $M_j >0$, by \eqref{eq:E} and \eqref{eq:holder}. This shows that
\[ A_{x, j, n} \subset \bigcup \limits_{i \in \mathbb{N}} \{ L \in E^N_k : \| L(y_i - x) + \phi(y_i) - \phi(x) \| \leq M_j\varepsilon_i^\beta \}. \]
By Lemma~\ref{lem: key_ineq_linear}, (\ref{e:cover}) and the fact $y_i \in K_n$, we have
\begin{align*}
\eta_{N,k}(A_{x, j, n}) &\leq \sum \limits_{i = 1}^\infty \eta_{N,k}(\{ L \in E^N_k : \| L(y_i - x) + \phi(y_i) - \phi(x) \| \leq M_j\varepsilon_i^\beta \})\\
&\leq \frac{CN^{k/2}M_j^k}{(1/n)^k}\sum \limits_{i = 1}^\infty \varepsilon_i^{\beta k} \leq CN^{k/2}M_j^k n^k\varepsilon.
\end{align*}
Since $\varepsilon > 0$ was arbitrary, we obtain $\eta_{N,k}(A_{x, j, n}) = 0$, which ends the proof.
\end{proof}
\begin{rem}
Note that the assumption $\mathcal{H}^{\beta k}(X) = 0$ is fulfilled if $\dim_H X < \beta k$, so Theorem~\ref{thm:embed} is indeed a Hausdorff dimension embedding theorem. Moreover, it may happen that $\mathcal{H}^{\beta k}(X) = 0$ and $\dim_H X = \beta k$.
\end{rem}
As a simple consequence of Theorem~\ref{thm:embed}, we obtain the following corollary, formulated in a slightly simplified version in Section~\ref{sec:intro} as Corollary~\ref{cor:embed-proj_simple}.
\begin{cor}[{\bf Probabilistic injective projection theorem -- extended version}{}]\label{cor:embed-proj}
Let $X \subset \mathbb{R}^N$ be a Borel set and let $\mu$ be a Borel $\sigma$-finite measure on $X$. Then for every $k \in \mathbb{N},\ k \leq N$ such that $\mathcal{H}^k(X) =0$ and almost every $k$-dimensional linear subspace $S \subset \mathbb{R}^N$ $($with respect to the standard measure on the Grassmannian $\Gr(k, N))$, the orthogonal projection of $X$ into $S$ is injective on a full $\mu$-measure subset of $X$ $($depending on $S)$.
\end{cor}
\begin{proof}[Proof of Corollary~\rm\ref{cor:embed-proj}]
Apply Theorem~\ref{thm:embed} to the map $\phi \equiv 0$. Then we know that a linear map $L \in \Lin(\mathbb{R}^N; \mathbb{R}^k)$ of the form \eqref{eq:L} is injective on a set $X_L \subset X$ of full $\mu$-measure for Lebesgue almost every $(l_1, \ldots, l_k) \in (\mathbb{R}^N)^k$. We can assume that $l_1, \ldots, l_k$ are linearly independent for all such $L$, which also implies that the same holds for $Ll_1, \ldots, Ll_k$. Setting
\[
S_L = \Span (l_1, \ldots, l_k)
\]
and taking $V_L \in \Lin(\mathbb{R}^k; \mathbb{R}^N)$ defined by $V_L(Ll_j) = l_j$ for $j = 1, \ldots, k$, we have
\[
V_L \circ L = \Pi_{S_L},
\]
where $\Pi_{S_L}$ is the orthogonal projection from $\mathbb{R}^N$ onto $S_L$ and $V_L$ is injective. It follows that $\Pi_{S_L}$ is injective on $X_L$ for almost every $(l_1, \ldots, l_k)$, so $\Pi_{S}$ is injective on a full $\mu$-measure subset of $X$ for almost every $k$-dimensional linear subspace $S \subset \mathbb{R}^N$.
\end{proof}
Let us note that in general, the requirement $\mathcal{H}^{\beta k}(X) = 0$ in Theorem \ref{thm:embed} cannot be replaced by the weaker condition $\dim_H(X) \leq \beta k$.
\begin{examplex}\label{ex:circle}
Let $k=\beta=1$, $X = \mathbb{S}^1 \subset \mathbb{R}^2$ be the unit circle and let $\mu$ be the normalized Lebesgue measure on $\mathbb{S}^1$. We shall prove that there is no Lipschitz transformation $\phi \colon \mathbb{S}^1 \to \mathbb{R}$ which is injective on a set of full $\mu$-measure. Let $\phi$ be such a transformation. Then $\phi(\mathbb{S}^1)=[a, b]$ for some compact interval $[a, b]$. As $\phi$ is injective on a set of full measure, the interval $[a,b]$ is non-degenerate, i.e.~$a < b$. Fix points $x, y \in \mathbb{S}^1$ with $\phi(x) = a$, $\phi(y) = b$. As $x \neq y$, there are exactly two open arcs $I, J \subset \mathbb{S}^1$ of positive measure joining $x$ and $y$ such that $\overline{I} \cap \overline{J} = \{ x,y\}$ and $\overline{I} \cup \overline{J} = \mathbb{S}^1$. Clearly $\phi(\overline{I}) = \phi(\overline{J}) = [a,b]$. Let $A \subset \mathbb{S}^1$ be a Borel set such that $\phi$ is injective on $A$ and $\mu(A)=1$. As Lipschitz maps transform sets of zero Lebesgue measure to sets of zero Lebesgue measure, we conclude that $\phi(I \cap A)$ and $\phi(J \cap A)$ are disjoint Lebesgue measurable subsets of $[a,b]$, each of Lebesgue measure $b-a$. This contradiction shows that no Lipschitz transformation $\phi \colon \mathbb{S}^1 \to \mathbb{R}$ is injective on a set of full measure.
\end{examplex}
Theorem~\ref{thm:embed} strengthens the following embedding theorem, proved recently by Alberti, B\"{o}lcskei, De Lellis, Koliander and Riegler in \cite{Riegler18}.
\begin{thm}[{\cite[Theorem II.1]{Riegler18}}]\label{thm:riegler}
Let $\mu$ be a Borel probability measure in $\mathbb{R}^N$ and let $k \in \mathbb{N}$ be such that $k> \underline{\dim}_{\text{\it MB}}\, \mu$. Then for Lebesgue almost every linear transformation $L \colon \mathbb{R}^N \to \mathbb{R}^k$ there exists a Borel set $X_L \subset \mathbb{R}^N$ such that $\mu(X_L)=1$ and $L$ is injective on $X_L$.
\end{thm}
In fact, in \cite{Riegler18} the authors introduced the notion of $\underline{\dim}_{\text{\it MB}}\, \mu$, denoting it by $K(\mu)$ and calling it the \emph{description complexity} of the measure. In particular, Theorem~\ref{thm:riegler} holds for measures $\mu$ supported on a Borel set $X\subset \mathbb{R}^N$ with $\underline{\dim}_B\, X <k$. By \eqref{eq:hdim_mbdim_bdim}, we have $\dim_H \mu \leq \underline{\dim}_{\text{\it MB}}\, \mu $, and in Section~\ref{sec:examples} we present an example (Theorem~\ref{thm:hdim<lmodbdim}) showing that the inequality may be strict. Therefore, Theorem~\ref{thm:embed} actually strengthens Theorem~\ref{thm:riegler}.
Non-probabilistic embedding theorems were first obtained in topological and smooth categories. The well-known Menger--N\"{o}beling embedding theorem (see e.g.~\cite[Theorem~V.2]{HW41}) states that for a compact metric space $X$ with Lebesgue covering dimension at most $k$, a generic continuous transformation $\phi \colon X \to \mathbb{R}^{2k+1}$ is injective (and hence defines a homeomorphism between $X$ and $\phi(X)$). Genericity means here that the set of injective transformations $\phi \colon X \to \mathbb{R}^{2k+1}$ is a dense $G_{\delta}$ subset of $C(X ; \mathbb{R}^{2k+1})$ endowed with the supremum metric. The dimension $2k+1$ is known to be optimal.
The corresponding result in the category of smooth manifolds is the Whitney embedding theorem (see \cite{Whitney36}). It states that for a given $k$-dimensional $C^r$-manifold $M$, a generic $C^r$-transformation from $M$ to $\mathbb{R}^{2k+1}$ is a $C^r$-embedding (i.e.~an injective immersion of class $C^r$).
Let us now compare Theorem~\ref{thm:embed} to non-probabilistic embedding theorems involving the box-counting dimension. One of the first results in this area was a theorem by Ma\~{n}\'{e} \cite[Lemma 1.1]{Mane81}. We present its formulation following \cite[Theorem 4.6]{SYC91} and \cite[Theorem 6.2]{Rob11} (originally, Ma\~{n}\'{e} proved that a topologically generic linear transformation is injective on $X$).
\begin{thm}\label{thm:standard_embed}
Let $X \subset \mathbb{R}^N$ be a compact set. Let $k \in \mathbb{N}$ be such that $k>2\overline{\dim}_B\,{X}$ $($it suffices to take $k > \dim_H(X-X))$. Then Lebesgue almost every linear transformation $L\colon \mathbb{R}^N \to \mathbb{R}^k$ is injective on $X$.
\end{thm}
\begin{rem}\label{rem:hdim_does_not_work}
As noticed by Ma\~{n}\'{e} and communicated in \cite[p.~627]{ER85}, his original statement in \cite{Mane81} is incorrect. Namely, he assumed $k>2\dim_H X+1$ instead of $k > \dim_H(X-X)$. However, this is known to be insufficient for the existence of a linear embedding of $X$ into $\mathbb{R}^k$. In fact, in \cite[Appendix A]{SYC91}, Kan presented an example of a set $X \subset \mathbb{R}^m$ with $\dim_H X =0$, such that any linear transformation $L \colon \mathbb{R}^m \to \mathbb{R}^{m-1}$ fails to be injective on $X$. It turns out that the assumption $k > 2\dim_H X$ is insufficient, while $k > 2\overline{\dim}_B\, X$ is sufficient. This stems from the fact that the proof of Theorem~\ref{thm:standard_embed} actually requires the property $k > \dim_H(X-X)$, and the upper box-counting dimension satisfies
\begin{equation}\label{e:udim_product}
\overline{\dim}_B\,(A\times B) \leq \overline{\dim}_B\,(A) + \overline{\dim}_B\,(B),
\end{equation}
for $A, B \subset \mathbb{R}^N$, hence
\[\dim_H(X - X) \leq \dim_H(X \times X) \leq \overline{\dim}_B\,(X \times X) \leq 2\overline{\dim}_B\, X\]
(note that this calculation shows that $k>2\overline{\dim}_B\,{X}$ is a stronger assumption than $k > \dim_H(X-X)$). On the other hand, (\ref{e:udim_product}) does not hold for the Hausdorff dimension (nor for the lower box-counting dimension), and $\dim_H X$ does not control $\dim_H(X-X)$. The fact that in Theorem~\ref{thm:embed} we can work with the Hausdorff dimension comes from the application of Fubini's theorem, which enables us to consider covers of the set $X$ itself, instead of $X-X$. In Section \ref{sec:examples} we analyze Kan's example from the point of view of Theorem~\ref{thm:embed}.
\end{rem}
Theorem~\ref{thm:standard_embed} is also true for subsets of an arbitrary Banach space $\mathfrak{B}$ for a prevalent set of linear transformations $L\colon \mathfrak{B} \to \mathbb{R}^k$ (see \cite[Chapter 6]{Rob11} for details).
Note that the linear embedding from Theorem~\ref{thm:embed} need not preserve the dimension of $X$. Indeed, the Hausdorff and box-counting dimensions are invariant under bi-Lipschitz transformations, yet the inverse of a linear map restricted to a compact set need not be Lipschitz. Therefore, we only know that $\dim \phi_L(X) \leq \dim X$ (see \cite[Proposition 2.8.iv and Lemma 3.3.iv]{Rob11}) and the inequality can be strict. For example, let $\phi \equiv 0$ and let $X = \{ (x, f(x)) : x \in [0,1] \}$ be the graph of a (H\"{o}lder continuous) function $f\colon [0,1]\to \mathbb{R}$ with $\dim_H X >1$, e.g.~the Weierstrass non-differentiable function. Then the linear projection $L \colon \mathbb{R}^2 \to \mathbb{R}$ given by $L(x,y)=x$ satisfies $1= \dim L(X) < \dim_H X$. The following theorem shows that in the non-probabilistic setting, one can obtain $\beta$-H\"{o}lder continuity of the inverse map for small enough $\beta \in (0,1)$ (see \cite{BAEFN93, EFNT94, HK99} and \cite[Chapter 4]{Rob11}).
\begin{thm}
Let $X \subset \mathbb{R}^N$ be a compact set. Let $k \in \mathbb{N}$ be such that $k>2\overline{\dim}_B\,{X}$ and let $\beta$ be such that $0 < \beta < 1 - 2\overline{\dim}_B\, X/k$. Then Lebesgue almost every linear transformation $L \colon \mathbb{R}^N \to \mathbb{R}^k$ is injective on $X$ with $\beta$-H\"{o}lder continuous inverse.
\end{thm}
However, this is not true in the case of Theorem~\ref{thm:embed}.
\begin{rem}\label{rem:no_holder_inverse}
In general, we cannot claim that the injective map $\phi_L|_{X_L}$ from Theorem~\ref{thm:embed} has a H\"{o}lder continuous inverse. Indeed, it is well-known that for $n \in \mathbb{N}$ there are examples of compact sets $X \subset \mathbb{R}^N$ of Hausdorff and topological dimension equal to $n$, which do not embed topologically into $\mathbb{R}^k$ for $k \leq 2n$ (showing the optimality of the bounds in the Menger--N\"{o}beling embedding theorem, see \cite[Example V.3]{HW41}). Consider a probability measure $\mu$ on $X$ with $\supp \mu = X$, where $\supp$ denotes the topological support of the measure (the intersection of all closed sets of full measure). It is known that such a measure exists for any compact set. If the map $\phi_L|_{X_L}$ from Theorem~\ref{thm:embed} for $k = n+1$ had a H\"{o}lder continuous inverse $f = \phi_L^{-1}$, then we could extend $f$ from $\phi_L(X_L)$ to $\mathbb{R}^{n+1}$ preserving the H\"{o}lder continuity (\cite[Theorem IV.7.5]{Banach51}, see also \cite{minty1970extension}). Then $Y = \{ x \in X : f \circ \phi_L(x) = x \}$ would be a closed subset of $X$ with $\mu(Y)=1$, hence $Y=X$, so $\phi_L$ would be a homeomorphism between $X$ and $\phi_L(X) \subset \mathbb{R}^{n+1}$, which would give a contradiction.
\end{rem}
\section{Probabilistic Takens delay embedding theorem}\label{sec:takens}
In this section we present the proof of the extended probabilistic Takens delay embedding theorem. It turns out that linear perturbations are insufficient for Takens-type theorems, see Example~\ref{ex:no_linear_takens}. As observed in \cite{SYC91}, it is enough to take perturbations over the space of polynomials of degree $2k$. This can be easily extended to more general families of functions.
\begin{defn}
Let $X$ be a subset of $\mathbb{R}^N$. A family of transformations $h_1, \ldots, h_m \colon X \to \mathbb{R}$ is called a \emph{$k$-interpolating family} on the set $X$ if for every collection of distinct points $x_1, \ldots, x_k \in X$ and every $\xi = (\xi_1, \ldots, \xi_k) \in \mathbb{R}^k$ there exists $(\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$ such that
$\alpha_1 h_1(x_i) + \cdots + \alpha_m h_m(x_i) = \xi_i$
for each $i=1, \ldots, k$. In other words, the matrix
\[ \begin{bmatrix} h_1(x_1) & \ldots & h_m(x_1) \\
\vdots &\ddots & \vdots \\
h_1(x_k) & \ldots & h_m(x_k)
\end{bmatrix} \]
has full row rank as a transformation from $\mathbb{R}^m$ to $\mathbb{R}^k$. Note that the same is true for any collection of $l$ distinct points with $l \leq k$.
\end{defn}
\begin{rem}\label{rem:poly_interp} It is known that any linear basis $h_1, \ldots, h_m$ of the space of real polynomials of $N$ variables of degree at most $k-1$ is a $k$-interpolating family (see e.g.~\cite[Section~1.2, eq.~(1.9)]{poly-interpolation}).
\end{rem}
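In the single-variable case, Remark~\ref{rem:poly_interp} amounts to the invertibility of a Vandermonde matrix at distinct nodes, which can be checked directly. A small numerical illustration with hypothetical nodes and target values:

```python
import numpy as np

k = 4
nodes = np.array([0.3, -1.2, 2.0, 0.7])        # k distinct points in R
V = np.vander(nodes, k, increasing=True)       # V[i, j] = nodes[i]**j

# full rank: the family 1, t, ..., t^{k-1} is k-interpolating at these nodes
assert np.linalg.matrix_rank(V) == k

# any target vector xi is attained by some coefficient vector alpha
xi = np.array([1.0, 0.0, -2.0, 5.0])
alpha = np.linalg.solve(V, xi)
assert np.allclose(V @ alpha, xi)
```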
For a transformation $T \colon X \to X$ and $p \in \mathbb{N}$ denote by $\Per_p(T)$ the set of periodic points of minimal period $p$, i.e.
\[ \Per_p(T) = \{ x \in X : T^p x = x \text{ and } T^j x\neq x \text{ for } j = 1, \ldots, p-1\}.\]
Let $\mu$ and $\nu$ be measures on a measurable space $(\mathcal{X}, \mathcal{F})$. The measure $\mu$ is called \emph{singular} with respect to $\nu$, if there exists a measurable set $Y \subset \mathcal{X}$ such that $\mu(\mathcal{X} \setminus Y) = \nu(Y) = 0$. In this case we write $\mu \perp \nu$. By $\mu|_A$ we denote the restriction of $\mu$ to a set $A \in \mathcal{F}$.
\begin{thm}[{\bf Probabilistic Takens delay embedding theorem -- extended version}{}]\label{thm:takens}
Let $X \subset \mathbb{R}^N$ be a Borel set, $\mu$ be a Borel $\sigma$-finite measure on $X$ and $T\colon X \to X$ an injective, locally Lipschitz map. Take $k \in \mathbb{N}$ and $\beta \in (0,1]$ such that $\mathcal{H}^{\beta k}(X)= 0$ and assume $\mu|_{\Per_p(T)} \perp \mathcal{H}^{\beta p}$ for every $p= 1, \ldots, k-1$. Let $h \colon X \to \mathbb{R}$ be a locally $\beta$-H\"older function and $h_1, \ldots, h_m\colon X \to \mathbb{R}$ a $2k$-interpolating family on $X$ consisting of locally $\beta$-H\"{o}lder functions. For $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$ denote by $h_\alpha \colon X \to \mathbb{R}$ the transformation
\[ h_\alpha (x) = h(x) + \sum \limits_{j=1}^m \alpha_j h_j (x). \]
Then for Lebesgue almost every $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^m$, there exists a Borel set $X_\alpha \subset X$ of full $\mu$-measure, such that the delay-coordinate map
\[
\phi_\alpha^T\colon X \to \mathbb{R}^k, \qquad \phi_\alpha^T(x) = (h_\alpha(x), h_\alpha(Tx), \ldots, h_\alpha(T^{k-1} x))
\]
is injective on $X_\alpha$.
\end{thm}
Notice that Theorem~\ref{thm:takens_simple} follows from Theorem~\ref{thm:takens} by Remark~\ref{rem:poly_interp}.
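For illustration only, the delay-coordinate map of Theorem~\ref{thm:takens} can be assembled explicitly for a toy system. The sketch below takes the logistic map as $T$, the identity as $h$, and the monomials $1, x, \ldots, x^{2k-1}$ as the $2k$-interpolating family (all hypothetical choices, not prescribed by the theorem), and evaluates $\phi_\alpha^T$ at sample points:

```python
import numpy as np

rng = np.random.default_rng(2)

k = 3                                   # number of delay coordinates
m = 2 * k                               # size of a 2k-interpolating family
T = lambda x: 4.0 * x * (1.0 - x)       # dynamics: the logistic map on [0, 1]
h = lambda x: x                         # base observable
alpha = 1e-3 * rng.standard_normal(m)   # a small random perturbation vector

def h_alpha(x):
    """Perturbed observable h + sum_j alpha_j h_j, with h_j(x) = x**j."""
    return h(x) + sum(a * x**j for j, a in enumerate(alpha))

def phi(x):
    """Delay map x -> (h_alpha(x), h_alpha(Tx), ..., h_alpha(T^{k-1}x))."""
    coords = []
    for _ in range(k):
        coords.append(h_alpha(x))
        x = T(x)
    return np.array(coords)

xs = rng.random(200)                    # sample points of the state space
images = np.array([phi(x) for x in xs])
# for a generic alpha, distinct sample points get distinct delay vectors
dists = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
assert dists.min() > 0.0
```

On a random sample the resulting delay vectors are pairwise distinct, in line with the almost-sure injectivity statement (though, of course, a finite sample proves nothing).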
\begin{rem}[{\bf Invariant measure case -- extended version}{}]\label{rem:invariant} Under the assumptions of Theorem~\ref{thm:takens}, the following hold.
\begin{itemize}
\item[(a)] If the measure $\mu$ is $T$-invariant, then the set $X_{\alpha}$ can be chosen to satisfy $T(X_{\alpha}) \subset X_{\alpha}$.
\item[(b)] If the measure $\mu$ is finite and $T$-invariant, then the set $X_{\alpha}$ can be chosen to satisfy $T(X_{\alpha}) = X_{\alpha}$.
\item[(c)] If the measure $\mu$ is $T$-invariant and ergodic, then the assumption on the periodic points of $T$ in Theorem~\ref{thm:takens} can be omitted.
\end{itemize}
\end{rem}
Under the notation of Theorem~\ref{thm:takens}, we first show a preliminary lemma. For $x \in X$ define its \emph{full orbit} $\Orb(x)$ as
\[\Orb(x) = \{ T^n x : n \ge 0\} \cup \{y \in X : T^n y = x \text{ for some } n \in \mathbb{N}\}.\]
Note that since $T$ is injective, all full orbits are at most countable, and any two full orbits $\Orb(x)$ and $\Orb(y)$ are either equal or disjoint. For $x,y \in X$ let $D_{x,y}$ be the $k\times m$ matrix defined by
\[ D_{x,y} = \begin{bmatrix} h_1(x) - h_1(y) & \ldots & h_m(x) - h_m(y) \\
h_1(Tx) - h_1(Ty) & \ldots & h_m(Tx) - h_m(Ty) \\
\vdots & \ddots & \vdots \\
h_1(T^{k-1}x) - h_1(T^{k-1}y) & \ldots & h_m(T^{k-1}x) - h_m(T^{k-1}y) \\
\end{bmatrix}. \]
\begin{lem}\label{lem:claim}
For $x, y \in X$, the following statements hold.
\begin{enumerate}[\rm (i)]
\item If $y \neq x$, then $\rank D_{x,y} \geq 1$.
\item If $y \notin \Orb(x)$ and $y \in \Per_p(T)$ for some $p \in \{1, \ldots, k-1\}$, then $\rank D_{x,y} \geq p$.
\item If $y \notin \Orb(x)$ and $y \notin \bigcup \limits_{p=1}^{k-1} \Per_p(T)$, then $\rank D_{x,y} = k$.
\end{enumerate}
\end{lem}
\begin{proof}
For (i), it suffices to observe that the first row of $D_{x,y}$ is non-zero as long as $x\neq y$, and therefore $\rank D_{x,y} \geq 1$. Indeed, otherwise we would have $h_j(x) = h_j(y)$ for $j=1, \ldots, m$, which contradicts the fact that $h_1, \ldots, h_m$ is an interpolating family.
Assume now $y \notin \Orb(x)$, which implies $\Orb(y) \cap \Orb(x) = \emptyset$. Let $q$ (resp.~$r$) be the maximal number in $\{1, \ldots, k\}$ such that the points $x, Tx, \ldots, T^{q-1} x$ (resp.~$y, Ty, \ldots, T^{r-1} y$) are distinct. Notice that if $y \in \Per_p(T)$ for some $p \in \{1, \ldots, k-1\}$, then $r = p$, and if $y \notin \bigcup \limits_{p=1}^{k-1} \Per_p(T)$, then $r = k$. Thus, the assertions (ii)--(iii) of the lemma can be written simply as one condition
\begin{equation}\label{eq:r}
\rank D_{x,y} \geq r.
\end{equation}
To show that \eqref{eq:r} holds, denote the points $x, Tx, \ldots, T^{q-1} x, y, Ty, \ldots, T^{r-1}y$, preserving the order, by $z_1, \ldots, z_l$, where $l = q + r$. By the definition of $q, r$, we have $2 \leq l \le 2k$ and the points $z_1, \ldots, z_l$ are distinct. Thus, the matrix $D_{x,y}$ can be written as the product
\[ D_{x,y} = J_{x,y} V_{x,y}, \]
where
\[ V_{x,y}
= \begin{bmatrix} h_1(z_1) & \ldots & h_m(z_1) \\
\vdots & \ddots & \vdots \\
h_1(z_l) & \ldots & h_m(z_l)
\end{bmatrix} \]
and $J_{x,y}$ is a $k \times l$ matrix with entries in $\{-1, 0, 1\}$ and block structure of the form
\[
J_{x,y} = \left[ \begin{array}{c|c} * & -\Id_{r \times r} \\ \hline
* & * \end{array} \right],
\]
where $\Id_{r \times r}$ is the $r \times r$ identity matrix. It follows that $\rank J_{x,y} \geq r$. Moreover, since $z_1, \ldots, z_l$ are distinct and $h_1, \ldots, h_m$ is a $2k$-interpolating family, the matrix $V_{x,y}$ is of full rank, hence $\rank D_{x,y} = \rank J_{x,y}\geq r$, which ends the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\rm\ref{thm:takens}]
We proceed similarly as in the proof of Theorem~\ref{thm:embed}, using Lemma~\ref{lem: key_ineq_inter} instead of Lemma~\ref{lem: key_ineq_linear}, together with the suitable rank estimates coming from Lemma~\ref{lem:claim}.
In the same way as in the proof of Theorem~\ref{thm:embed}, we show that it is enough to check that a suitable set $X_\alpha$ exists for $\eta_m$-almost every $\alpha \in B_m(0,1)$.
Applying Lemma~\ref{lem:dimh_fsigma} to the sets $\Per_p(T)$, $p = 1, \ldots, k-1$ and (possibly zero) measures $\mu|_{\Per_p(T)}$, we find (possibly empty) disjoint $\sigma$-compact sets $X_1, \ldots, X_{k-1} \subset X$ such that
\[X_p \subset \Per_p(T),\quad \mu(\Per_p(T) \setminus X_p) = 0, \quad \mathcal{H}^{\beta p}(X_p)=0 \quad \text{ for } p = 1, \ldots, k-1.\] Similarly, there exists a $\sigma$-compact set $X_k \subset X \setminus \bigcup \limits_{p=1}^{k-1}\Per_p(T)$ such that \[\mu\Big( \Big(X \setminus \bigcup \limits_{p=1}^{k-1}\Per_p(T)\Big) \setminus X_k \Big) = 0 \quad \text{ and } \quad \mathcal{H}^{\beta k}(X_k)=0.\]
Note that $X_k$ may contain both aperiodic points and periodic points (with period at least $k$). Let
\[
\tilde X = \bigcup \limits_{p=1}^k X_p.
\]
Then $\tilde X\subset X$ is a $\sigma$-compact set of full $\mu$-measure. Define
\[ A = \{ (x, \alpha) \in \tilde X \times B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \text{ for some } y \in \tilde X \setminus \{ x \}\}. \]
The set $A$ is Borel by Lemma~\ref{lem:measurability}. For $x \in \tilde X$ and $\alpha \in B_m(0,1)$, denote, respectively, by $A_x$ and $A^\alpha$, the Borel sections
\[ A_x = \{ \alpha \in B_m(0,1) : (x, \alpha) \in A\},\quad A^\alpha = \{ x \in \tilde X : (x, \alpha) \in A \}. \]
Observe that to show the injectivity of $\phi_\alpha^T$ on a set of full $\mu$-measure, it is enough to prove $\eta_m(A_x) = 0$ for every $x \in \tilde X$, since then by Fubini's theorem (\cite[Thm. 8.8]{R87}), $(\eta_m \otimes \mu) (A) = 0$ and, consequently, $\mu(A^\alpha) = 0$ for $\eta_m$-almost every $\alpha \in B_m(0,1)$. As $\phi_\alpha^T$ is injective on $\tilde X \setminus A^\alpha$ and $\tilde X$ has full $\mu$-measure, the proof of the claim is finished.
Fix $x \in \tilde X$. To show $\eta_m(A_x) = 0$, note that for $y \in \tilde X$,
\begin{equation}\label{e:matrix_form} \phi_\alpha^T(x) - \phi_\alpha^T(y) = D_{x,y}\alpha + w_{x,y}
\end{equation}
for
\[
w_{x,y} = \begin{bmatrix} h(x) - h(y) \\
h(Tx) - h(Ty)\\
\vdots \\
h(T^{k-1} x) - h(T^{k-1}y) \end{bmatrix}.
\]
Write $A_x$ as
\[ A_x = A_x^{\mathrm{orb}} \cup \bigcup \limits_{p=1}^k A_x^p,\]
where
\begin{align*}
A_x^{\mathrm{orb}} &= \{ \alpha \in B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \text{ for some } y \in \tilde X \cap \Orb(x) \setminus \{ x \}\},\\
A_x^p &= \{ \alpha \in B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \text{ for some } y \in X_p \setminus \{x\} \}, \quad p = 1, \ldots, k.
\end{align*}
The set $A_x^{\mathrm{orb}}$ is Borel as a countable union of closed sets of the form
\begin{equation}\label{e:orb_sum}
\{ \alpha \in B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \}, \quad y \in \tilde X \cap \Orb(x) \setminus \{ x \},
\end{equation}
while each set $A_x^p$ is Borel as a section of the set
\[
\{ (x, \alpha) \in \tilde X \times B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \text{ for some } y \in X_p \setminus \{x\} \},
\]
which is Borel by Lemma~\ref{lem:measurability}.
To end the proof, it is enough to show that the sets $A_x^{\mathrm{orb}}$ and $A_x^p$, $p = 1, \ldots, k$, have $\eta_m$ measure zero.
To prove $\eta_m(A_x^{\mathrm{orb}}) = 0$ it suffices to check that the sets of the form \eqref{e:orb_sum} have $\eta_m$ measure zero. By \eqref{e:matrix_form}, we have
\[ \{ \alpha \in B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \} = \{ \alpha \in B_m(0,1) : D_{x,y} \alpha = -w_{x,y} \} \]
and Lemma~\ref{lem:claim} gives $\rank D_{x,y}\geq 1$ whenever $y \neq x$, so each set of the form \eqref{e:orb_sum} is contained in an affine subspace of $\mathbb{R}^m$ of codimension at least $1$. Consequently, it has $\eta_m$ measure zero.
Since $T$ is locally Lipschitz, $h, h_1, \ldots, h_m$ are locally $\beta$-H\"older and $X$ is separable, there exists a countable covering $\mathcal V$ of $X$ by open sets in $\mathbb{R}^N$, such that for every $V \in \mathcal V$, the map $T$ is Lipschitz on $V$ and $h, h_1, \ldots, h_m$ are $\beta$-H\"older on $V$. Let $\mathcal U$ be the collection of all sets of the form $U = V_0 \cap T^{-1}(V_1) \cap \ldots \cap T^{-(k-1)}(V_{k-1})$, where $V_0, \ldots, V_{k-1} \in \mathcal V$. Then $\mathcal U$ is a countable covering of $X$ by open sets, and we can write $\mathcal U = \{U_j\}_{j\in \mathbb{N}}$. By definition, for every $j \in \mathbb{N}$ there exists $C_j > 0$ such that
\begin{align*}
\|T^{s+1}(y) - T^{s+1}(y')\| &\le C_j \|T^s(y)-T^s(y')\|,\\
\|h(T^s(y)) - h(T^s(y'))\| &\le C_j \|T^s(y)-T^s(y')\|^\beta, \\
\|h_r(T^s(y)) - h_r(T^s(y'))\| &\le C_j \|T^s(y)-T^s(y')\|^\beta
\end{align*}
for every $y, y' \in U_j \cap X$, $s \in \{0, \ldots, k-1\}$, $r \in \{1, \ldots, m\}$. By induction, it follows that
\begin{equation}\label{eq:lipschitz-holder}
\begin{aligned}
\|T^s(y) - T^s(y')\| &\le C_j^s \|y-y'\|,\\
\|h(T^s(y)) - h(T^s(y'))\| &\le C_j^{\beta s + 1}\|y-y'\|^\beta, \\
\|h_r(T^s(y)) - h_r(T^s(y'))\| &\le C_j^{\beta s + 1} \|y-y'\|^\beta
\end{aligned}
\end{equation}
for $y, y' \in U_j \cap X$, $s \in \{0, \ldots, k-1\}$, $r \in \{1, \ldots, m\}$.
To prove $\eta_m(A_x^p) = 0$ for $p = 1, \ldots, k$, we follow the strategy used in \cite{SYC91} (see also \cite{Rob11}).
Fix $n \in \mathbb{N}$ and for $j \in \mathbb{N}$ define
\begin{align*}
X_x^{p, n} &= \Big\{ y \in X_p : \sigma_p(D_{x,y}) \geq \frac{1}{n}\Big\},\\
A_x^{p, j, n} &= \{ \alpha \in B_m(0,1) : \phi_\alpha^T (x) = \phi_\alpha^T (y) \text{ for some } y \in U_j \cap X_x^{p, n} \setminus \{x\}\},
\end{align*}
where $\sigma_p(D_{x,y})$ is the $p$-th largest singular value. Note that singular values of a given order depend continuously on the entries of the matrix, see e.g.~\cite[Corollary 8.6.2]{MatrixComputations}. Hence, the set $X_x^{p, n}$ is $\sigma$-compact as a closed subset of $X_p$ and, by Lemma~\ref{lem:measurability}, the set $A_x^{p, j, n}$ is Borel.
By Lemma~\ref{lem:claim}, for every $y \in X_p \setminus \Orb(x)$ we have $\rank D_{x,y} \geq p$. This implies $\sigma_p(D_{x,y})>0$ (see e.g. \cite[Lemma 14.2]{Rob11}). Hence,
\[ A_x^{p} \setminus A_x^{\mathrm{orb}} = \bigcup \limits_{j=1}^{\infty }\bigcup \limits_{n=1}^{\infty } A_x^{p, j, n} \setminus A_x^{\mathrm{orb}}. \]
Consequently, it is enough to prove $\eta_m(A_x^{p, j, n} \setminus A_x^\mathrm{orb}) = 0$ for every $j, n \in \mathbb{N}$.
Fix $\varepsilon>0$. Since $\mathcal{H}^{\beta p}(U_j \cap X_x^{p,n} \setminus \Orb(x)) \leq \mathcal{H}^{\beta p}(X_p) = 0$, there exists a collection of balls $B_N(y_i, \varepsilon_i)$, for $y_i \in U_j \cap X_x^{p,n}\setminus \Orb(x)$ and $0 < \varepsilon_i < \varepsilon$, $i \in \mathbb{N}$, such that
\begin{equation}\label{e:cover_takens} U_j \cap X_x^{p,n} \setminus \Orb(x) \subset \bigcup \limits_{i \in \mathbb{N}} B_N(y_i, \varepsilon_i) \quad \text{ and } \quad \sum \limits_{i=1}^\infty \varepsilon_i^{\beta p} \leq \varepsilon.
\end{equation}
Take $\alpha \in A_x^{p, j, n} \setminus A_x^{\mathrm{orb}}$ and let $y \in U_j \cap X_x^{p, n} \setminus \Orb(x) $ be such that $\phi_\alpha^T(x) = \phi_\alpha^T(y)$. Then for $y_i$ with $y \in B(y_i, \varepsilon_i)$ we have
\begin{equation}\label{e:D_xyj}
\begin{aligned}
\|D_{x, y_i}\alpha + w_{x, y_i} \| &= \|\phi_\alpha^T (x) - \phi_{\alpha}^T (y_i) \|
= \|\phi_\alpha^T (y) - \phi_{\alpha}^T (y_i) \|\\
&\leq \sqrt{\sum_{s=0}^{k-1} \Big(\|h(T^sy) - h(T^sy_i)\| + \sum_{r=1}^m |\alpha_r| \|h_r(T^sy) - h_r(T^sy_i)\|\Big)^2}\\
&\leq M_j \|y-y_i\|^\beta \le M_j \varepsilon_i^\beta
\end{aligned}
\end{equation}
for
\[
M_j = (1 + \sqrt{m})\;\sqrt{\sum_{s=0}^{k-1} C_j^{2(\beta s + 1)}},
\]
by \eqref{eq:lipschitz-holder} and the fact $\alpha \in B_m(0,1)$. By \eqref{e:D_xyj},
\[ A_x^{p, j, n} \setminus A_x^{\mathrm{orb}} \subset \bigcup \limits_{i \in \mathbb{N}} \{ \alpha \in B_m(0,1) : \|D_{x, y_i}\alpha + w_{x, y_i} \| \leq M_j\varepsilon_i^\beta \}. \]
Since for every $i \in \mathbb{N}$ we have $\sigma_p(D_{x, y_i}) \geq 1/n$, we can apply Lemma~\ref{lem: key_ineq_inter} and \eqref{e:cover_takens} to obtain
\[ \eta_m(A_x^{p, j, n} \setminus A_x^{\mathrm{orb}}) \leq \sum \limits_{i =1 }^\infty C_{m,k} \frac{M_j^p\varepsilon_i^{\beta p}}{1 / n^p} \leq C_{m,k}M_j^pn^p\varepsilon. \]
Since $\varepsilon>0$ was arbitrary, we conclude that $\eta_m(A_x^{p, j, n} \setminus A_x^{\mathrm{orb}}) = 0$, so in fact $\eta_m(A_x^{p, j, n}) = 0$. This ends the proof of Theorem~\ref{thm:takens}.
\end{proof}
\begin{proof}[Proof of Remark~\rm\ref{rem:invariant}]
Suppose that the measure $\mu$ is $T$-invariant. Then it is easy to check that the set
\[
\tilde X_\alpha = \bigcap \limits_{n = 0}^{\infty} T^{-n} (X_\alpha)
\]
is a Borel subset of $X_\alpha$ of full $\mu$-measure satisfying $T(\tilde X_\alpha) \subset \tilde X_\alpha$. Hence, to show (a), it suffices to replace the set $X_\alpha$ by $\tilde X_\alpha$.
In the case when $\mu$ is additionally finite, we first remark that the measure $\mu$ is also forward invariant, i.e.~$\mu(T(Y)) = \mu(Y)$ for Borel sets $Y \subset X$. Note that if $Y$ is Borel, then so is $T(Y)$ as the image of a Borel set under a continuous and injective mapping (see e.g. \cite[Theorem~15.1]{K95}). Using this together with the invariance of $\mu$ and the injectivity of $T$, we check that
\[
\tilde X_\alpha = \bigcap \limits_{n \in \mathbb{Z}} T^{-n} (X_\alpha)
\]
is a Borel subset of $X_\alpha$ of full $\mu$-measure satisfying $T(\tilde X_\alpha) = \tilde X_\alpha$. This gives (b). Notice that the finiteness of $\mu$ is indeed necessary, as for $X = \mathbb{N},$ $T(x) = x+1$ and $\mu$ the counting measure, there does not exist a set $Y \subset X$ of full $\mu$-measure satisfying $T(Y)=Y$.
To show (c), suppose that $\mu$ is $T$-invariant and ergodic.
Obviously, we can assume that the $\mu$-measure of the set of all periodic points of $T$ is positive (including $+\infty$). Then there exists $p \in \mathbb{N}$ such that the measure of the set $P$ of all $p$-periodic points of $T$ is positive (including $+\infty$).
Suppose first that $\mu$ restricted to $P$ is non-atomic. Then there exists a Borel set $Y \subset P$ with $0 < \mu(Y) < \mu(X)/p$. Let $Z = Y \cup T^{-1}(Y) \cup \ldots \cup T^{-(p-1)}(Y)$. Then $0 < \mu(Z) < \mu(X)$ and, by the injectivity of $T$, we have $T^{-1}(Z) = Z$, which contradicts the ergodicity of $\mu$.
Suppose now that $\mu$ has an atom in $P$. Since $\mu$ is a Borel $\sigma$-finite measure in a Euclidean space, this is equivalent to the fact that $\mu(\{x\}) > 0$ for some $x \in P$. Let $\mathcal O$ be the periodic orbit of $x$. Again by the injectivity of $T$, we have $T^{-1}(\mathcal O) = \mathcal O$, so by the ergodicity of $\mu$, the set $\mathcal O$ has full $\mu$-measure. This means that $\mu$ is supported on a set of Hausdorff dimension $0$, which obviously gives (c).
\end{proof}
The original Takens delay embedding theorem states that for a given finite-dimensional $C^2$ manifold $M$ and a generic pair consisting of a $C^2$-diffeomorphism $T\colon M \to M$ and a $C^2$-function $h\colon M \to \mathbb{R}$, the corresponding delay-coordinate map $\phi\colon M \to \mathbb{R}^k$, $\phi(x) = (h(x), h(Tx), \ldots, h(T^{k-1}x))$, is a $C^2$-embedding (an injective immersion) as long as $k > 2\dim M$. It was followed by the box-counting dimension version of Sauer, Yorke and Casdagli (Theorem~\ref{thm:standard_takens}) and subsequently by the infinite-dimensional result of \cite{Rob05} (see also \cite[Section 14.3]{Rob11}). Refer to \cite{NV18} for a version of Takens' theorem with a fixed observable and the perturbation performed on the dynamics. Takens' theorem involving the Lebesgue covering dimension on compact metric spaces and a continuous observable was proved in \cite{Gut16} (see \cite{GQS18} for a detailed proof). See also \cite{1999delay, CaballeroEmbed} for Takens' theorem for deterministically driven smooth systems and \cite{StarkEmbedSurvey, StarkStochEmbed} for stochastically driven smooth systems.
\begin{examplex}\label{ex:no_linear_takens} It turns out that linear perturbations are not sufficient for Theorems~\ref{thm:standard_takens} and~\ref{thm:takens}, i.e.~it may happen that $\phi_L(x) = (\phi(x) + Lx, \ldots, \phi(T^{k-1}x) + LT^{k-1}x)$ is not (almost surely) injective for a generic linear map $L \colon \mathbb{R}^N \to \mathbb{R}$. As an example, let $X = B_2(0,1)$, fix $a \in (0,1)$ and define $T\colon X \to X$ as
\[ T(x) = ax. \]
Then $T$ is a Lipschitz injective transformation on the unit disc $X \subset \mathbb{R}^2$ with zero as its unique periodic point. Fix $\phi \equiv 0$. We claim that there is no linear observable $L \colon \mathbb{R}^2 \to \mathbb{R}$ which makes the delay map injective, i.e.~for every $k \in \mathbb{N}$ and every $v \in \mathbb{R}^2$ the transformation $x \mapsto \phi_v^T(x)= (\langle x, v \rangle, \langle Tx, v \rangle, \ldots, \langle T^{k-1} x, v \rangle) \in \mathbb{R}^k$ is not injective on $X$. This follows from the fact that for each $1$-dimensional linear subspace $W \subset \mathbb{R}^2$ the set $W \cap X$ is $T$-invariant, hence $\phi_v^T = 0$ on the infinite set $\Ker(\langle \cdot, v \rangle) \cap X$. We have seen that $\phi_v^T$ is not injective for any $v \in \mathbb{R}^2$. Now we will see that it is also not almost surely injective for $\mu$ being the Lebesgue measure on $X$. Note that for $v \in \mathbb{R}^2$ and $c \in \mathbb{R}$, the segment $W_c = \{ z \in X : \langle z, v \rangle = c \}$ satisfies $T(W_c) \subset W_{ac}$, hence all points on $W_c$ have the same observation vector $(\langle x, v \rangle, \langle Tx, v \rangle, \ldots, \langle T^{k-1} x, v \rangle) = (c, ac, a^2c, \ldots, a^{k-1}c)$. Therefore, a set $X_v \subset X$ on which $\phi_v^T$ is injective can contain only one point from each of the parallel segments $W_c$ contained in $X$. However, such a set $X_v$ cannot be of full Lebesgue measure. Note that the above example can be easily modified to make $T$ a homeomorphism.
\end{examplex}
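The collapsing mechanism in this example is easy to check numerically; the sketch below (with the illustrative values $a = 1/2$, $v = (1,2)$, $k = 6$) verifies that two distinct points on a segment $W_c$ produce identical delay vectors.

```python
# Numerical check (sketch) of the example: for T(x) = a*x on the unit disc and a
# linear observable x -> <x, v>, all points of W_c = {z : <z, v> = c} share the
# delay vector (c, a*c, ..., a^{k-1}*c), so the delay map cannot be injective.

def delay_vector(x, v, a, k):
    """Delay vector (<x,v>, <Tx,v>, ..., <T^{k-1}x, v>) for T(x) = a*x."""
    vec = []
    for s in range(k):
        xs = (a**s * x[0], a**s * x[1])          # T^s(x) = a^s * x
        vec.append(xs[0] * v[0] + xs[1] * v[1])  # <T^s x, v>
    return vec

a, k = 0.5, 6
v = (1.0, 2.0)
# two distinct points with the same inner product <x, v> = 0.3
x1 = (0.3, 0.0)
x2 = (0.1, 0.1)
d1, d2 = delay_vector(x1, v, a, k), delay_vector(x2, v, a, k)
assert all(abs(p - q) < 1e-12 for p, q in zip(d1, d2))   # identical delay vectors
```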
\section{Examples} \label{sec:examples}
In this section we present two examples which illustrate the usage of Theorem~\ref{thm:embed}. Let us begin with fixing some notation. For $x \in [0,2)$ we will write
\[
x = x_0.x_1 x_2\ldots,
\]
where $x_0.x_1 x_2 \ldots$ is the \emph{binary expansion} of $x$, i.e.
\[
x = \sum \limits_{j=0}^{\infty} \frac{x_j}{2^j}, \quad x_0, x_1, x_2,\ldots \in \{0,1\}.
\]
For a dyadic rational we agree to choose its eventually terminating expansion, i.e.~the one with $x_j = 0$ for $j$ large enough. Let $\pi \colon \{0,1\}^\mathbb{N} \to [0,1]$ be the coding map
\[ \pi(x_1, x_2, \ldots) = \sum \limits_{j=1}^{\infty} \frac{x_j}{2^j}. \]
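These conventions are concrete enough to compute with; below is a minimal Python sketch of the digit extraction and the coding map $\pi$, restricted to finite sequences (exact arithmetic via fractions).

```python
from fractions import Fraction

# Sketch of the coding conventions: binary digits of x in [0,1) (choosing the
# terminating expansion for dyadic rationals, as in the text) and the coding
# map pi(x_1, x_2, ...) = sum_j x_j / 2^j, here on finite sequences only.

def digits(x, n):
    """First n binary digits x_1, ..., x_n of x in [0, 1)."""
    x = Fraction(x)
    out = []
    for _ in range(n):
        x *= 2
        d = int(x)       # next digit, 0 or 1
        out.append(d)
        x -= d
    return out

def pi(seq):
    """Coding map: finite 0/1 sequence -> dyadic rational in [0, 1]."""
    return sum(Fraction(d, 2**(j + 1)) for j, d in enumerate(seq))

seq = [1, 0, 1, 1, 0, 0, 1]
x = pi(seq)                               # binary 0.1011001 = 89/128
assert digits(x, len(seq)) == seq         # round trip recovers the digits
assert digits(x, 10) == seq + [0, 0, 0]   # terminating expansion: trailing zeros
```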
\subsection{A modified Kan example}
In the Appendix to \cite{SYC91}, Kan presented an example of a compact set $K \subset \mathbb{R}^N$ with $\dim_H K =0$ and such that every linear transformation $L \colon \mathbb{R}^N \to \mathbb{R}^{N-1}$ fails to be injective on $K$ (see also Remark~\ref{rem:hdim_does_not_work}). It follows from Theorem~\ref{thm:embed} that whenever we are given a Borel $\sigma$-finite measure $\mu$ on such a set, almost every linear transformation $L \colon \mathbb{R}^N \to \mathbb{R}$ is injective on a set of full $\mu$-measure. To illustrate this, we construct a $\sigma$-compact set $X \subset \mathbb{R}^2$ with $\dim_H X =0$, which is a slight modification of Kan's example, equipped with a natural Borel $\sigma$-finite measure $\mu$, such that no linear transformation $L \colon \mathbb{R}^2 \to \mathbb{R}$ is injective on $X$, while for almost every $L$ we explicitly exhibit a set $X_L \subset X$ of full $\mu$-measure on which $L$ is injective.
Following \cite[Appendix]{SYC91}, we begin by constructing compact sets $A, B \subset [0,1]$ such that
\begin{equation}\label{eq:AB_dim1}
\dim_H A = \underline{\dim}_B\, A = \dim_H B =\underline{\dim}_B\, B =0\quad (\text{hence } \dim_H(A \cup B) = 0),
\end{equation} and
\begin{equation}\label{eq:AB_dim2}
\overline{\dim}_B\, A = \overline{\dim}_B\, B = 1, \quad \underline{\dim}_B\,(A \cup B) = \overline{\dim}_B\,(A \cup B)= 1.
\end{equation}
To this aim, let $M_k$, $k \ge 0$, be an increasing sequence of positive integers such that $M_0 = 1$ and $M_k \nearrow \infty$ with $\lim \limits_{k \to \infty} \frac{M_{k+1}}{M_k} = \infty$. Define
\begin{alignat*}{2}
\widetilde A &= \big\{ (x_1,x_2,\ldots) \in \{0,1\}^\mathbb{N} : &\text{ for every even } k, \; &x_j = 0 \text{ for all } j \in [M_{k}, M_{k+1})\\
& & &\text{or } x_j = 1 \text{ for all } j \in [M_{k}, M_{k+1}) \big\},\\
\widetilde B &= \big\{ (x_1,x_2,\ldots) \in \{0,1\}^\mathbb{N} : &\text{ for every odd } k, \; &x_j = 0 \text{ for all } j \in [M_{k}, M_{k+1})\\
& & &\text{or } x_j = 1 \text{ for all } j \in [M_{k}, M_{k+1}) \big\},
\end{alignat*}
and set
\[ A = \pi(\widetilde A), \quad B = \pi(\widetilde B).\]
It is a straightforward calculation to check that $A$ and $B$ satisfy \eqref{eq:AB_dim1} and \eqref{eq:AB_dim2} (see \cite[Appendix]{SYC91}, \cite[Example 7.8]{falconer2014fractal} or \cite[Section 6.1]{Rob11}). Define $X \subset \mathbb{R}^2$ as
\[ X = \Big(\{0\} \times \bigcup \limits_{n \in \mathbb{Z}}(A+n)\Big) \cup \Big(\{1\} \times \bigcup \limits_{n \in \mathbb{Z}}(B+n)\Big). \]
By \eqref{eq:AB_dim1}, we have $\dim_H X = 0$. The following two propositions describe the embedding properties of the set $X$.
\begin{prop}\label{prop:X_no_injection}
No linear transformation $L\colon \mathbb{R}^2 \to \mathbb{R}$ is injective on $X$.
\end{prop}
\begin{proof} The map $L$ has the form $L(x,y) = \alpha x + \beta y$ for $\alpha, \beta \in \mathbb{R}$. Obviously, we can assume $\beta \neq 0$. Note that the points
\[
u = (0,a+n),\quad v = (1, b+m), \qquad \text{for }a\in A,\ b \in B,\ n,m \in \mathbb{Z}
\]
are in $X$ and
\begin{equation}\label{eq:L_kernel} L(u) = L(v) \quad\text{ if and only if }\quad b-a = z,\end{equation}
where
\[
z = -\frac{\alpha}{\beta} + n - m.
\]
For given $\alpha$ and $\beta$, choose $n,m \in \mathbb{Z}$ such that $z \in [0,1)$.
Consider the binary expansion $z = 0.z_1z_2\ldots$ and define
\[
a=0.a_1a_2\ldots \in A,\quad b = 0.b_1b_2\ldots \in B
\]
setting
\begin{equation}\label{eq:AB_decomposition}
\begin{aligned}
&a_j = 0, \quad &&b_j = z_j \quad &&\text{for } j \in [M_{k}, M_{k+1}), &&\text{if } k \text{ is even}, \\
&a_j = 1 - z_j, \quad &&b_j = 1 \quad &&\text{for } j \in [M_{k}, M_{k+1}), &&\text{if }k \text{ is odd}
\end{aligned}
\end{equation}
(if all $b_j$ are equal to $1$, we set $b=1$). Then $z = b-a$ and (\ref{eq:L_kernel}) implies that $L$ is not injective on $X$.
\end{proof}
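The block decomposition \eqref{eq:AB_decomposition} can be verified in exact arithmetic; the sketch below uses the illustrative block sequence $M_k = 2^k$ (the growth condition on $M_k$ plays no role in this finite check) and confirms $b - a = z$ digit by digit, with no borrows since $b_j \geq a_j$ on every block.

```python
from fractions import Fraction

# Exact-arithmetic sketch of the decomposition: given finitely many binary digits
# of z in [0,1), build truncations of a in A and b in B block by block so that
# b - a = z.  M_k = 2^k is an illustrative choice of block sequence.

M = [2**k for k in range(6)]              # M_0, ..., M_5
n_digits = M[-1] - 1                      # digits indexed j = 1, ..., 31

def block_of(j):
    """Index k with M_k <= j < M_{k+1}."""
    for k in range(len(M) - 1):
        if M[k] <= j < M[k + 1]:
            return k
    raise ValueError(j)

z_digits = [j % 2 for j in range(1, n_digits + 1)]   # arbitrary 0/1 digits of z
a_digits, b_digits = [], []
for j in range(1, n_digits + 1):
    zj = z_digits[j - 1]
    if block_of(j) % 2 == 0:              # even block: a_j = 0, b_j = z_j
        a_digits.append(0); b_digits.append(zj)
    else:                                 # odd block:  a_j = 1 - z_j, b_j = 1
        a_digits.append(1 - zj); b_digits.append(1)

val = lambda ds: sum(Fraction(d, 2**(j + 1)) for j, d in enumerate(ds))
assert val(b_digits) - val(a_digits) == val(z_digits)   # b - a = z, exactly
```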
Let us now define a natural Borel $\sigma$-finite measure $\mu$ on $X$, starting from a pair of probability measures $\nu_1, \nu_2$ on $\widetilde A$ and $\widetilde B$, respectively. Let
\[ \nu_1 = \bigotimes \limits_{k=0}^{\infty} \textbf{p}_k, \quad \nu_2 = \bigotimes \limits_{k=0}^{\infty} \textbf{q}_k, \]
where $\textbf{p}_k$ and $\textbf{q}_k$ are probability measures on $\{0,1\}^{M_{k+1}-M_k}$ given as
\[ \textbf{p}_k = \begin{cases} \frac{1}{2} \delta_{(0, \ldots, 0)} + \frac{1}{2} \delta_{(1, \ldots, 1)} &\text{if } k \text{ is even}\\
\big(\frac{1}{2} \delta_{0} + \frac{1}{2} \delta_{1}\big)^{\otimes(M_{k+1}-M_k)} &\text{if } k \text{ is odd}
\end{cases}, \quad \textbf{q}_k = \begin{cases} \big(\frac{1}{2} \delta_{0} + \frac{1}{2} \delta_{1}\big)^{\otimes(M_{k+1}-M_k)} &\text{if } k \text{ is even} \\
\frac{1}{2} \delta_{(0, \ldots, 0)} + \frac{1}{2} \delta_{(1, \ldots, 1)} &\text{if } k \text{ is odd}\\
\end{cases} \]
and the symbol $\delta_a$ denotes the Dirac measure at $a$.
Then $\supp \nu_1 = \widetilde A,\ \supp \nu_2 = \widetilde B$, hence defining
\[ \mu_1 = \pi_* (\nu_1), \quad \mu_2 = \pi_* (\nu_2), \]
we obtain probability measures on $A, B$, respectively, with $\supp \mu_1 = A$, $\supp \mu_2 = B$. Finally, let
\[ \mu = \sum \limits_{n \in \mathbb{Z}} \delta_{0} \otimes (\tau_n)_* \mu_1 + \sum \limits_{n \in \mathbb{Z}} \delta_{1} \otimes (\tau_n)_* \mu_2, \]
where $\tau_n \colon \mathbb{R} \to \mathbb{R},\ \tau_n(x) = x +n,\ n \in \mathbb{Z}$. Clearly, $\mu$ is a Borel $\sigma$-finite measure with $\supp \mu=X$.
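The product measure $\nu_1$ (and hence $\mu_1$) can be sampled directly: on even blocks one fair bit is drawn and repeated, on odd blocks the bits are i.i.d. fair coins. A minimal sketch, again with the illustrative block sequence $M_k = 2^k$:

```python
import random
from fractions import Fraction

# Sketch of sampling from mu_1 = pi_*(nu_1): each even block [M_k, M_{k+1}) is
# drawn from (1/2)delta_{(0,...,0)} + (1/2)delta_{(1,...,1)} (a constant block),
# each odd block from the uniform product measure.  The output is a truncation
# of the digit sequence of a point of A.

random.seed(0)
M = [2**k for k in range(6)]

def sample_A_digits():
    digs = []
    for k in range(len(M) - 1):
        length = M[k + 1] - M[k]
        if k % 2 == 0:
            bit = random.randint(0, 1)
            digs += [bit] * length                            # constant block
        else:
            digs += [random.randint(0, 1) for _ in range(length)]
    return digs

digs = sample_A_digits()
# membership check: digits are constant on every even block
for k in range(0, len(M) - 1, 2):
    block = digs[M[k] - 1 : M[k + 1] - 1]   # digits x_{M_k}, ..., x_{M_{k+1}-1}
    assert len(set(block)) == 1
x = sum(Fraction(d, 2**(j + 1)) for j, d in enumerate(digs))
assert 0 <= x <= 1
```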
For $a \in A,\ b \in B$ let
\begin{alignat*}{3}
A_a &= \big\{ x \in A \setminus \{ 1 \} : \; &&x + a = z_0.z_1z_2\ldots \text{ such that the sequence } (z_0, z_1, \ldots)\\
&&&\text{is constant on } [M_{k}, M_{k+1}-1) \cap \mathbb{N} \text{ for every odd } k \big\},\\
B_b &= \big\{ x \in B \setminus \{1\} : \; &&x + b = z_0.z_1z_2\ldots \text{ such that the sequence } (z_0, z_1, \ldots)\\
&&&\text{is constant on } [M_{k}, M_{k+1}-1) \cap \mathbb{N} \text{ for every even } k \big\}.
\end{alignat*}
\begin{lem}\label{lem:binary_nonconstant}
For every $a \in A,\ b \in B$, we have $\mu_1(A_a) = \mu_2(B_b) = 0$.
\end{lem}
\begin{proof}
Fix $b=b_0.b_1b_2\ldots \in B$. We will show $\mu_2(B_b) = 0$ (the fact $\mu_1(A_a)=0$ can be proved analogously). The proof proceeds by showing that for each even $k$, the vector $(x_{M_{k}}, \ldots, x_{M_{k+1}-2})$, where $x = x_0.x_1x_2\ldots \in B_b$, can assume at most four values. This will imply $\mu_2(B_b) \leq 8 \cdot 2^{-(M_{k+1} - M_{k})}$ for each even $k$ and, consequently, $\mu_2(B_b) = 0$. To show the assertion, fix an even $k$ and let
\[\xi = \sum \limits_{j=M_{k+1}-1}^\infty \frac{x_j+b_j}{2^j}.\]
Note that $\xi < 2^{-(M_{k+1}-3)}$ (as $\xi<2$ and we exclude expansions with digits eventually equal to $1$). Hence, $\xi = \xi_0.\xi_1 \xi_2\ldots$ with $\xi_j = 0 $ for $j \leq M_{k+1}-3$. Note that, since $b$ is fixed, the values of $\xi_{M_{k+1}-2} \in \{0,1\}$ and $(x_{M_{k}}+b_{M_{k}}, \ldots, x_{M_{k+1}-2}+b_{M_{k+1}-2}) \in \{ (0,\ldots,0),(1,\ldots,1) \}$ determine uniquely the value of $(x_{M_{k}}, \ldots, x_{M_{k+1}-2})$. Therefore, $(x_{M_{k}}, \ldots, x_{M_{k+1}-2})$ can assume at most four values.
\end{proof}
Now for Lebesgue almost every linear transformation $L\colon \mathbb{R}^2 \to \mathbb{R}$ we will construct a set $X_L \subset X$ of full $\mu$-measure, such that $L$ is injective on $X_L$. As previously, write $L(x,y) = \alpha x + \beta y$ for $\alpha,\beta \in \mathbb{R}$. Neglecting a set of zero Lebesgue measure, we can assume $\beta \neq 0$. Let $l \in \mathbb{Z}$ be such that
\begin{equation}\label{eq:z}
z = -\frac{\alpha}{\beta} + l \text{ belongs to } [0,1).
\end{equation}
Similarly as in \eqref{eq:AB_decomposition}, we can write
\begin{equation}\label{eq:A'B'_decomposition}
z = a' - b', \quad z - 1 = a'' - b'' \quad\text{for some } a',a'' \in A, \; b',b'' \in B.
\end{equation}
Let
\[ X_L = \Big( \{0\} \times \bigcup \limits_{n \in \mathbb{Z}}(A+n)\Big) \cup \Big(\{1\} \times \bigcup \limits_{n \in \mathbb{Z}}\big((B \setminus (B_{b'} \cup B_{b''} \cup \{ 1 \}))+n\big)\Big). \]
Then $X_L \subset X$ and Lemma~\ref{lem:binary_nonconstant} implies that $X_L$ has full $\mu$-measure.
\begin{prop}\label{prop:X_almost_sure_injection} For every $\alpha \in \mathbb{R},\ \beta \in \mathbb{R} \setminus \{0\}$, the linear transformation $L\colon \mathbb{R}^2 \to \mathbb{R}$, $L(x,y) = \alpha x + \beta y$, is injective on $X_L$.
\end{prop}
For the proof of the proposition we will need the following simple lemma, whose proof is left to the reader.
\begin{lem}\label{lem:binary_piecewise_constant}
Let $x=x_0.x_1x_2\ldots\in [0,1]$, $y = y_0.y_1y_2\ldots \in [0,1]$, $M, N \in \mathbb{N}$, $M<N-1$, be such that $x+y < 2$ and the sequences $(x_M, \ldots, x_N)$ and $(y_M, \ldots, y_N)$ are constant. Then $x + y = z_0.z_1z_2\ldots$, where the sequence $(z_M, \ldots, z_{N-1})$ is constant.
\end{lem}
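Lemma~\ref{lem:binary_piecewise_constant} is also easy to check numerically with exact dyadic arithmetic; a sketch (random finite-digit instances, digits handled as Python fractions):

```python
import random
from fractions import Fraction

# Numerical sanity check (not a proof) of the lemma: force the digit runs
# (x_M,...,x_N) and (y_M,...,y_N) to be constant and verify that the digits
# (z_M,...,z_{N-1}) of z = x + y are constant.

def digits(x, n):
    """First n binary digits of the fractional part of x (for x in [0, 2))."""
    x = Fraction(x) - int(x)
    out = []
    for _ in range(n):
        x *= 2
        d = int(x)
        out.append(d)
        x -= d
    return out

random.seed(1)
L, M, N = 20, 5, 12
for _ in range(200):
    xd = [random.randint(0, 1) for _ in range(L)]
    yd = [random.randint(0, 1) for _ in range(L)]
    xd[M - 1:N] = [xd[M - 1]] * (N - M + 1)   # constant run x_M, ..., x_N
    yd[M - 1:N] = [yd[M - 1]] * (N - M + 1)   # constant run y_M, ..., y_N
    x = sum(Fraction(d, 2**(j + 1)) for j, d in enumerate(xd))
    y = sum(Fraction(d, 2**(j + 1)) for j, d in enumerate(yd))
    zd = digits(x + y, N)
    assert len(set(zd[M - 1:N - 1])) == 1     # (z_M, ..., z_{N-1}) is constant
```

Note that the digit $z_N$ itself may differ from the run (a carry from positions beyond $N$ can flip it), which is exactly why the lemma stops at $z_{N-1}$.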
\begin{proof}[Proof of Proposition~\rm\ref{prop:X_almost_sure_injection}] Assume, on the contrary, that there exist points $u, v \in X_L$ such that
$L(u) = L(v)$. As $\beta \neq 0$, we cannot have $u, v \in \{0\} \times \mathbb{R}$ or $u, v \in \{1\} \times \mathbb{R}$. Hence, we can assume $u \in \{0\} \times \mathbb{R}$, $v \in \{1\} \times \mathbb{R}$. Then, following the previous notation, we have $u = (0,a+n)$, $v = (1, b+m)$ for $a\in A$, $b \in B \setminus (B_{b'} \cup B_{b''} \cup \{1\})$, $n,m \in \mathbb{Z}$. Note that $b-a \in [-1,1)$, so by \eqref{eq:L_kernel}, we have
\[ b - a = z \quad \text{ or } \quad b - a = z - 1, \]
for $z$ from \eqref{eq:z}, and \eqref{eq:A'B'_decomposition} implies
\[ b - a = a' - b' \quad \text{ or } \quad b - a = a'' - b''. \]
Hence,
\[ a + a' = b + b' \quad \text{ or }\quad a + a'' = b + b''. \]
This is a contradiction, as Lemma~\ref{lem:binary_piecewise_constant} implies that the binary expansion sequences of $a + a'$ and $a + a''$ are constant on $[M_{k}, M_{k+1}-1) \cap \mathbb{N}$ for every even $k$, while by the condition $b \in B \setminus (B_{b'} \cup B_{b''} \cup \{1\})$, the binary expansion sequences of $b + b'$ and $b + b''$ are not constant on $[M_{k}, M_{k+1}-1) \cap \mathbb{N}$ for some even $k$.
\end{proof}
\subsection{\boldmath Measure with $\dim_H \mu < \underline{\dim}_{\,\it MB\,} \mu$}
To show that Theorem~\ref{thm:embed} is an actual strengthening of Theorem~\ref{thm:riegler}, we present an example of a measure $\mu$, for which
$\dim_H \mu < \underline{\dim}_{\text{\it MB}}\, \mu$. More precisely, we show the following.
\begin{thm}\label{thm:hdim<lmodbdim}
There exists a Borel probability measure $\mu$ on $[0,1]^2$, such that $\dim_H \mu = 1$ and $\underline{\dim}_{\text{\it MB}}\, \mu=2$.
\end{thm}
To begin the construction of $\mu$, fix an increasing sequence of positive integers $N_k$, $k \in \mathbb{N}$, such that $N_k \nearrow \infty$ with $\frac{S_k}{S_{k+1}} \leq \frac{1}{k+1}$, where $S_k = \sum \limits_{j=1}^k N_j$. Consider the probability distributions $\textbf{p}_0, \textbf{p}_1$ on $\{0,1\}$ given by
\[
\textbf{p}_0 (\{0\}) = 0,\ \textbf{p}_0 (\{1\}) = 1, \qquad \textbf{p}_1 (\{0\}) = \textbf{p}_1 (\{1\}) = \frac{1}{2}.
\]
For $y = 0.y_1 y_2 \ldots \in[0,1]$ (in this subsection we assume that the binary expansion of $1$ is $0.111\ldots$), define the probability measure $\nu_y$ on $\{0,1\}^\mathbb{N}$ as the infinite product
\[ \nu_y = \bigotimes \limits_{j=1}^ \infty \bigotimes \limits_{i=1}^{N_j}\textbf{p}_{y_j}.\]
Further, let $\mu_y$ be the Borel probability measure on $[0,1]$ given by
\[ \mu_y = \pi_* \nu_y.\]
Finally, let $\mu$ be the Borel probability measure on $[0,1]^2$ defined as
\[ \mu = \int \limits_{[0,1]} \mu_y d\Leb(y), \quad\text{ i.e. } \mu(A) = \int \limits_{[0,1]} \mu_y(A^y) d\Leb(y) \quad\text{for a Borel set } A \subset [0,1]^2, \]
where $A^y = \{ x \in [0,1] : (x,y) \in A \}$. It is easy to see that $\mu$ is well-defined, as the function $y \mapsto \mu_y(A^y)$ is measurable for every Borel set $A \subset [0,1]^2$.
The proof of Theorem \ref{thm:hdim<lmodbdim} is based on the analysis of the local dimension of $\mu$, defined in terms of dyadic squares (rather than balls). For $n \in \mathbb{N}$ and $x_1, \ldots, x_n \in \{ 0,1 \}$ let $[x_1, \ldots, x_n]$ denote the dyadic interval corresponding to the sequence $(x_1, \ldots ,x_n)$, i.e.
\[[x_1, \ldots, x_n] =
\begin{cases}
\Big[\sum \limits_{j=1}^n \frac{x_j}{2^j}, \sum \limits_{j=1}^n \frac{x_j}{2^j} + \frac{1}{2^n}\Big) &\text{if } \sum \limits_{j=1}^n \frac{x_j}{2^j} + \frac{1}{2^n} < 1\\
\big[1 - \frac{1}{2^n}, 1\big] &\text{otherwise.}
\end{cases}
\]
Under this notation, for $n \in \mathbb{N}$ and $(x,y) \in [0,1]^2$ let $D_n(x,y)$ be the dyadic square of sidelength $2^{-n}$ containing $(x,y)$, i.e.
\[ D_n(x,y) = [x_1, \ldots, x_n] \times [y_1, \ldots, y_n], \quad\text{where } x = 0.x_1 x_2 \ldots \text{ and }y=0.y_1 y_2 \ldots . \]
Recall that the box-dimensions can be defined equivalently in terms of dyadic squares. Precisely, let
$N'(X, 2^{-n})$ be the number of dyadic squares $D_n(x,y)$ of sidelength $2^{-n}$ intersecting $X$. Then (see e.g.~\cite[Section 2.1]{falconer2014fractal})
\begin{equation}\label{eq:bdim_dyadic} \underline{\dim}_B\,(X) = \liminf \limits_{n \to \infty} \frac{\log N'(X, 2^{-n})}{n \log 2} \text{ and } \overline{\dim}_B\,(X) = \limsup \limits_{n \to \infty} \frac{\log N'(X, 2^{-n})}{n \log 2}.\end{equation}
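The dyadic box count $N'(\cdot, 2^{-n})$ is easy to experiment with numerically. The sketch below checks it on a simple set, the diagonal of the unit square, which has box dimension $1$; under the half-open square convention above the count is exactly $2^n$.

```python
import math

# Sketch of the dyadic box count N'(X, 2^{-n}) on the diagonal
# X = {(t, t) : t in [0, 1]}: it meets exactly 2^n dyadic squares of
# sidelength 2^{-n}, consistent with box dimension 1.

def diagonal_box_count(n):
    """Number of dyadic squares of sidelength 2^{-n} met by the diagonal."""
    squares = set()
    steps = 4 * 2**n                      # sample the diagonal finely enough
    for i in range(steps + 1):
        t = min(i / steps, 1 - 1e-9)      # keep t < 1: the point 1 lies in the last square
        squares.add((int(t * 2**n), int(t * 2**n)))
    return len(squares)

n = 8
count = diagonal_box_count(n)
assert count == 2**n
# log N'(X, 2^{-n}) / (n log 2) -> 1, matching the dyadic box-dimension formula
assert abs(math.log(count) / (n * math.log(2)) - 1.0) < 1e-9
```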
For a Borel finite measure $\mu$ on $[0,1]^2$ and $(x,y) \in [0,1]^2$ define the \emph{lower} and \emph{upper local dimension} of $\mu$ at $(x,y)$ as
\[ \underline{d}(\mu,(x,y)) = \liminf \limits_{n \to \infty} \frac{- \log \mu(D_n(x,y))}{n \log 2}, \quad \overline{d}(\mu,(x,y)) = \limsup \limits_{n \to \infty} \frac{- \log \mu(D_n(x,y))}{n \log 2}. \]
It is well-known (see e.g.~\cite[Propositions 3.10 and 3.20]{HochmanNotes}) that
\begin{equation}\label{eq:hdim_local}
\dim_H \mu = \underset{(x,y) \sim \mu}{\mathrm{ess\ sup}}\ \underline{d}(\mu, (x,y)).
\end{equation}
The following lemma gives estimates on the measure of dyadic squares at suitable scales.
\begin{lem}\label{ref:lem_square_prob_ineqalities}
Let $x=0.x_1x_2\ldots,\in [0,1]$, $y=0.y_1y_2\ldots \in [0,1]$, $n \in \mathbb{N}$ and $D = D_n(x,y) = [x_1, \ldots, x_n] \times [y_1, \ldots, y_n]$. Let $k \in \mathbb{N}$ be such that $S_k < n \leq S_{k+1}$. Then the following hold.
\begin{enumerate}[\rm (a)]
\item If $y_{k} = y_{k+1}=1$, then $\mu(D) \leq 2^{-(2 - \frac{1}{k})n}$.
\item If $n = S_{k+1}$ and $y_{k+1} = 0$, then either $\mu(D) = 0$ or $\mu(D) \geq 2^{-(1 + \frac{1}{k+1})n}$.
\end{enumerate}
\end{lem}
\begin{proof}
Note that for $y' = 0.y'_1 y'_2\ldots \in [0,1]$ such that $(y'_1, \ldots, y'_n) = (y_1, \ldots, y_n)$ we have
\begin{equation}\label{eq:square_prob}
\begin{aligned}
\mu_{y'}(D^{y'}) = \mu_{y'}([x_1, \ldots, x_n]) = \ &\textbf{p}_{y'_1}(\{x_1\}) \cdots \textbf{p}_{y'_1}(\{x_{S_1}\}) \textbf{p}_{y'_2}(\{x_{S_1 + 1}\}) \cdots \textbf{p}_{y'_2}(\{x_{S_2}\})\\
& \cdots \textbf{p}_{y'_{k+1}}(\{x_{S_k + 1}\}) \cdots \textbf{p}_{y'_{k+1}}(\{x_{n}\}).
\end{aligned}
\end{equation}
Moreover, as $k < n$, the value of $\mu_{y'}(D^{y'})$ depends only on $(y_1, \ldots, y_n)$ and $(x_1, \ldots, x_n)$. Using (\ref{eq:square_prob}), we can prove both assertions of the lemma, as follows.
\paragraph*{Ad (a).}
If $y_{k} = y_{k+1}=1$, then for $j \in \{S_{k-1} + 1, \ldots, n\}$ we have $\textbf{p}_{y_l}(\{x_j\}) = \frac{1}{2}$, where $l \in \{ k, k+1\}$ is such that $S_{l-1}<j \leq S_{l}$. Therefore, in the product \eqref{eq:square_prob} there are at least $n - S_{k-1}$ terms equal to $\frac{1}{2}$. Consequently,
\[ \mu_{y'}(D^{y'}) \leq 2^{-(n - S_{k-1})} = 2^{-(1 - \frac{S_{k-1}}{n})n} \leq 2^{-(1 - \frac{S_{k-1}}{S_k})n} \leq 2^{-(1 - \frac{1}{k})n},\]
hence
\[ \mu(D) = \int \limits_{[y_1, \ldots, y_n]} \mu_{y'}(D^{y'})d\Leb(y') \leq \Leb([y_1, \ldots, y_n]) 2^{-n(1 - \frac{1}{k})} = 2^{-n(2 - \frac{1}{k})}. \]
\paragraph*{Ad (b).}
Assume that $\mu(D) \neq 0$. Then all the terms in (\ref{eq:square_prob}) have to be non-zero, so every term is equal to either $\frac{1}{2}$ or $1$. Moreover, as $y_{k+1} = 0$ and $n = S_{k+1}$, we have
\[ \textbf{p}_{y_{k+1}}(\{x_{S_k + 1}\}) \cdots \textbf{p}_{y_{k+1}}(\{x_{n}\}) = 1\]
and, consequently,
\begin{align*}
\begin{split}
\mu(D) = \ &2^{-n} \textbf{p}_{y_1}(\{x_1\}) \cdots \textbf{p}_{y_1}(\{x_{S_1}\}) \textbf{p}_{y_2}(\{x_{S_1 + 1}\}) \cdots \textbf{p}_{y_2}(\{x_{S_2}\})\\
& \cdots \textbf{p}_{y_{k}}(\{x_{S_{k-1} + 1}\}) \cdots \textbf{p}_{y_{k}}(\{x_{S_k}\}) \geq 2^{-n-S_k} = 2^{-(1 + \frac{S_k}{S_{k+1}})n} \geq 2^{-(1 + \frac{1}{k+1})n}.
\end{split}
\end{align*}
\end{proof}
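The product formula \eqref{eq:square_prob} and assertion (a) of the lemma can be sanity-checked numerically; below is a sketch with an arbitrary illustrative sequence $N_1, \ldots, N_4$ (exact arithmetic via fractions).

```python
from fractions import Fraction

# Sketch of the dyadic-square computation: by the product formula,
# mu(D) = 2^{-n} * prod_{j<=n} p_{y_{k(j)}}({x_j}), where k(j) is the block index
# with S_{k(j)-1} < j <= S_{k(j)}.  N_1, ..., N_4 below are illustrative.

N_seq = [2, 4, 8, 16]
S = [sum(N_seq[:i]) for i in range(len(N_seq) + 1)]   # S_0 = 0, S_1, ..., S_4

def p(y_bit, x_bit):
    """p_0 = delta_1 and p_1 = (delta_0 + delta_1)/2, evaluated on {x_bit}."""
    if y_bit == 0:
        return Fraction(1) if x_bit == 1 else Fraction(0)
    return Fraction(1, 2)

def mu_D(x_digits, y_digits):
    """mu of the dyadic square with digit data (x_1..x_n, y_1..y_n), n <= S_4."""
    n = len(x_digits)
    prod = Fraction(1)
    for j in range(1, n + 1):
        k = next(i for i in range(1, len(S)) if j <= S[i])   # block containing j
        prod *= p(y_digits[k - 1], x_digits[j - 1])
    return Fraction(1, 2**n) * prod

# case (a) of the lemma with k = 2, n = S_3 = 14 and y_2 = y_3 = 1:
k, n = 2, S[3]
m = mu_D([1] * n, [0, 1, 1])
assert m == Fraction(1, 2**26)
assert m <= Fraction(1, 2)**(2 * n - n // k)   # mu(D) <= 2^{-(2 - 1/k)n}; n divisible by k here
```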
Now we are ready to give the proof of Theorem~\ref{thm:hdim<lmodbdim}.
\begin{proof}[Proof of Theorem~\rm\ref{thm:hdim<lmodbdim}] We begin by proving $\dim_H \mu =1$. Note that $\dim_H \mu \geq 1$, as $\mu$ projects under $[0,1]^2 \ni (x,y) \mapsto y \in [0,1]$ to the Lebesgue measure, so it is sufficient to show $\dim_H \mu \leq 1$. By (\ref{eq:hdim_local}), it is enough to prove that $\underline{d}(\mu, (x,y)) \leq 1$ for $\mu$-almost every $(x,y) \in [0,1]^2$. Note that for Lebesgue almost every $y = 0.y_1 y_2\ldots\in [0,1]$, the sequence $(y_1, y_2, \ldots)$ contains infinitely many zeros. Hence, it is sufficient to show $\underline{d}(\mu, (x,y)) \leq 1$ for $\mu_y$-almost every $x\in [0,1]$, assuming that $y\in [0,1]$ has this property. Moreover, for $\mu_y$-almost every $x \in [0,1]$, we have $\mu(D_n(x,y)) > 0$ for all $n \in \mathbb{N}$ (see \eqref{eq:square_prob}). For such $x$, letting $(n_k)$ be an increasing sequence of indices with $y_{n_k} = 0$, Lemma~\ref{ref:lem_square_prob_ineqalities}(b) gives
\[ \underline{d}(\mu,(x,y)) \leq \liminf \limits_{k \to \infty} \frac{-\log \mu(D_{S_{n_k}}(x,y))}{S_{n_k} \log 2} \leq \lim \limits_{k \to \infty}\frac{(1 + \frac{1}{n_k})S_{n_k}}{S_{n_k}} = 1. \]
Therefore, $\dim_H \mu \leq 1$, so in fact $\dim_H \mu = 1$.
Let us prove now $\underline{\dim}_{\text{\it MB}}\, \mu = 2$. Since $\mu$ is supported on $[0,1]^2$, it suffices to show $\underline{\dim}_{\text{\it MB}}\, \mu \geq 2$. Let $A \subset [0,1]^2$ be a Borel set with $\mu(A)>0$. We show $\underline{\dim}_B\, A \geq 2$. Note that there exists $c > 0$ such that the set
\begin{equation}\label{eq:B_measure}
B = \{ y \in [0,1] : \mu_y(A^y) \geq c \}
\end{equation}
satisfies $\Leb(B) > 0.$ Fix $\varepsilon \in (0, \frac{1}{4})$. By the Lebesgue density theorem (see e.g.~\cite[Corollary 3.16]{HochmanNotes}), there exists a dyadic interval $I \subset [0,1]$ such that
\begin{equation}\label{eq:I_density}
\frac{\Leb(B \cap I)}{|I|} \geq 1 - \varepsilon,
\end{equation}
where $|I| = 2^{-N}$ is the length of $I$. Fix $k \geq N+2$ and $n \in \{S_k + 1, \ldots, S_{k+1} \}$. Consider the collection $\mathcal{C}_n$ of dyadic intervals of length $2^{-n}$ defined as
\[ \mathcal{C}_n = \{ [y_1, \ldots, y_n] : y_k = y_{k+1} = 1 \text{ and } [y_1, \ldots, y_n] \cap B \cap I \neq \emptyset \}.\]
By \eqref{eq:I_density}, we have
\begin{equation}\label{eq:BC_n_measure} \Leb\Big(B \cap \bigcup \mathcal{C}_n\Big) \geq \Big(\frac{1}{4} - \varepsilon\Big)2^{-N}.
\end{equation}
Let
\[
A_n = A \cap \Big([0,1] \times \Big(B \cap \bigcup \mathcal{C}_n\Big)\Big).
\]
Then $A_n \subset A$ and (\ref{eq:B_measure}) together with (\ref{eq:BC_n_measure}) imply
\begin{equation}\label{eq:A0_measure}
\mu(A_n) = \int \limits_{B \cap \bigcup \mathcal{C}_n} \mu_y(A^y) d\Leb(y) \geq c \Big(\frac{1}{4} - \varepsilon\Big)2^{-N}.
\end{equation}
Note that the above lower bound does not depend on $k$ and $n$.
Let $N'(A_n, 2^{-n})$ be the number of dyadic squares of sidelength $2^{-n}$ intersecting $A_n$. If $D = I_1 \times I_2$ is a dyadic square of sidelength $2^{-n}$ intersecting $A_n$, then $I_2 \in\mathcal{C}_n$, hence by Lemma~\ref{ref:lem_square_prob_ineqalities}(a) we have
\[ \mu(D) \leq 2^{-(2 - \frac{1}{k})n}. \]
As any two dyadic squares of the same sidelength are either equal or disjoint, \eqref{eq:A0_measure} gives
\[ N'(A, 2^{-n}) \geq N'(A_n, 2^{-n}) \geq c \Big(\frac{1}{4} - \varepsilon\Big)2^{-N + (2 - \frac{1}{k})n}. \]
Since $k$ and $n$ can be taken arbitrarily large, invoking (\ref{eq:bdim_dyadic}) gives $\underline{\dim}_B\, A \geq 2$. Hence, $\underline{\dim}_{\text{\it MB}}\, \mu \geq 2$, so in fact $\underline{\dim}_{\text{\it MB}}\, \mu = 2$.
\end{proof}
\begin{rem}
Note that as
\[ \underset{z \sim \mu}{\mathrm{ess\ sup}}\ \underline{d}(\mu, z) = \dim_H \mu \leq \underline{\dim}_{\text{\it MB}}\, \mu \leq \overline{\dim}_{\text{\it MB}}\, \mu = \dim_P \mu = \underset{z \sim \mu}{\mathrm{ess\ sup}}\ \overline{d}(\mu, z) \]
($\dim_P$ denotes the packing dimension, see e.g.~\cite[Proposition 3.9]{falconer2014fractal} and \cite[Proposition 10.3]{FalconerTechniques}), the equality $\dim_H \mu = \underline{\dim}_{\text{\it MB}}\, \mu$ holds for all \emph{exact dimensional} measures $\mu$, i.e.~the measures $\mu$ with $\underline{d}(\mu, z) = \overline{d}(\mu, z) = \mathrm{const}$ for $\mu$-almost every $z$.
\end{rem}
\bibliographystyle{alpha}
\section{Introduction}\label{s1}
Let $L_1,\ldots,L_n$ be holomorphic line bundles on a complex manifold $X$
of dimension $n$
and let $V_i\subset\Gamma(X,L_i)$ be a finite dimensional subspace of holomorphic sections,
such that
\begin{equation}\label{1}
\forall i\leq n,\:x \in X \ \exists s\in V_i: s(x) \ne 0.
\end{equation}
Assume that each $V_i$ has a fixed
Hermitian inner product.
Let $\P_i=\P(V_i)$ be the projectivization of $V_i$, equipped
with the corresponding Fubini-Study metric $\Phi_i$,
normalized so that $\vol(\P_i)=1$.
Let $U$ be a relatively compact domain in $X$.
For $(s_1, \ldots, s_n) \in \P_1 \times \ldots \times \P_n$ we denote by
$N_U(s_1, \ldots, s_n)$ the number of isolated points of the intersection
of the hypersurfaces $s_i=0$ in $U$.
We call
$$
{\mathfrak M}_U(V_1, \ldots, V_n) = \int _{\P_1\times \ldots \times \P_n}N_U(s_1,\ldots,s_n)\,ds_1\cdot\ldots\cdot ds_n
$$
the \emph{average number of common zeros in $U$} of $n$ sections $f_i\in V_i$.
Let $A_i(x)=\{f\in V_i\colon f(x)=0\}$.
From (\ref{1}) it follows that
$\forall x\in X, i\leq n\colon$ $\codim\: A_i(x)=1$.
So we get the mapping $\theta_i\colon X \to\P_i^*$
assigning to $x\in X$ the subspace $A_i(x)$.
Consider the pull-back $g_i=\theta^*_i(\Phi^*_i)$ of the Fubini-Study metric $\Phi^*_i$ on $\P^*_i$ dual to $\Phi_i$.
For any tuple $h_1,\ldots,h_n$ of non-negative Hermitian forms on $X$
we define the mixed Hermitian volume $\vol^H_{h_1,\ldots,h_n}(U)$;
see Section \ref{s2}.
\begin{theorem}\label{thm1}
$
{\mathfrak M}_U(V_1, \ldots, V_n) =n!\:\vol^H_{g_1,\ldots,g_n}(U).
$
\end{theorem}
\noindent
In fact, the theorem coincides with the Crofton formula
for the product of projective spaces \cite{Ka1,Ka2}; see Section \ref{s2}.
The sources of the theorem are the well-known BKK formula \cite{BKK}
and a recent result on the average number of roots
of systems of real equations \cite{we}.
\section{Crofton formula and Hermitian mixed volume}\label{s2}
\noindent
Let $\theta_i\colon X \to\P_i^*$ be a mapping
defined in Section \ref{s1}.
Denote by $\omega_i$ the pull-back
of the Fubini-Study form on $\P^*_i=\P(V^*_i)$
under the mapping $\theta_i$.
Then the Crofton formula for a product of projective spaces
states that
$$
{\mathfrak M}_U(V_1,\ldots,V_n)=\int_U \omega_1\wedge\ldots\wedge\omega_n.
$$
Below, any non-negative Hermitian quadratic form $h$ on $X$ is called a Hermitian metric.
Recall that $h$ is called non-negative if its eigenvalues are non-negative.
Let $W_+$ be the cone of Hermitian metrics on $X$.
Define a function $\vol_U\colon W_+\to\R$
as $\vol_U(h)=\vol_h(U)$.
Recall that $f\colon W_+\to\R$ is called a homogeneous polynomial of degree $k$
if for any $g_1,g_2\in W_+$ the function $f(\lambda_1 g_1+\lambda_2 g_2)$
is a homogeneous polynomial of degree $k$ in the positive variables $\lambda_1,\lambda_2$.
\begin{lemma}\label{lPol}
$\vol_U$ is a homogeneous polynomial of degree $n$
on $W_+$.
\end{lemma}
\begin{proof}
By Wirtinger's theorem,
$\vol_U(h)=\frac{1}{n!}\int_U\omega^n$,
where $\omega={\rm Im}(h)$.
Let $\omega_i={\rm Im}(h_i)$ for $h_1,\ldots,h_n\in W_+$.
Hence the function $\vol_U$ extends to the multilinear symmetric $n$-form
$$
\vol^H_{h_1,\ldots,h_n}(U)=\frac{1}{n!}\int_U\omega_1\wedge\ldots\wedge\omega_n
$$
in the variables $h_1,\ldots,h_n$,
such that $\vol^H_{h,\ldots,h}(U)=\vol_U(h)$.
In particular, $\vol_U$ is a homogeneous polynomial of degree $n$.
\end{proof}
\noindent
We call $\vol^H_{h_1,\ldots,h_n}(U)$ the
\emph{Hermitian mixed volume} of $U$ for the ``mixing tuple'' $h_1,\ldots,h_n$.
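By polarization, the Hermitian mixed volume can be written directly in terms of ordinary Hermitian volumes. The following identity is the standard polarization formula for a symmetric multilinear form, restated here for the reader's convenience (it is not taken from the text above):
$$
\vol^H_{h_1,\ldots,h_n}(U)=\frac{1}{n!}\sum_{\emptyset\neq S\subseteq\{1,\ldots,n\}}(-1)^{n-\#S}\:\vol_U\Big(\sum_{i\in S}h_i\Big).
$$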
Now Theorem \ref{thm1} follows from the Crofton formula.
\section{Example: exponential sums}\label{s3}
\noindent
An \emph{exponential sum} (\ES) is a function
on $\C^n$ of the form
$$f(z)=\sum_{\lambda\in\Lambda}c_\lambda \E^{\langle z,\lambda\rangle},$$
where $\Lambda$ is a finite subset of the dual space $\C^{n*}$ and $c_\lambda\in\C$.
The set $\Lambda$ is called the \emph{support} of \ES.
The \emph{Newton polytope} of the support
is the convex hull $\conv(\Lambda)$ of $\Lambda$.
Let $X=\C^n$.
We consider the trivial line bundles $L_1,\ldots,L_n$,
and the spaces $V_i\subset\Gamma(\C^n,L_i)$ of \ES s with supports $\Lambda_i$.
Choose the Hermitian inner product in $V_i$ as
$$
\langle\sum_{\lambda\in\Lambda_i}a_\lambda\E^{\langle z,\lambda\rangle},\sum_{\lambda\in\Lambda_i}b_\lambda\E^{\langle z,\lambda\rangle}\rangle=
\sum_{\lambda\in\Lambda_i}a_\lambda\bar b_\lambda.
$$
Then, by Theorem \ref{thm1},
$$
{\mathfrak M}_U(V_1, \ldots, V_n) =\frac{n!}{(2\pi)^n}\int_U
dd^c\log\sum_{\lambda\in\Lambda_1}\E^{2\re\langle z,\lambda\rangle}\wedge\ldots\wedge
dd^c\log\sum_{\lambda\in\Lambda_n}\E^{2\re\langle z,\lambda\rangle}.
$$
\begin{lemma}\label{lMA1}
As $t\to+\infty$, the function
$\frac{1}{2t}\log\sum_{\lambda\in\Lambda}\E^{2\re\langle tz,\lambda\rangle}$
converges uniformly to the support function $h(z)$ of the Newton polytope of $\Lambda$.
\end{lemma}
\noindent
\emph{Proof.}
Choose $\mu(z)\in\Lambda$
such that $\forall \lambda\in\Lambda\colon\:\re\langle z,\mu(z)\rangle\geq\re\langle z,\lambda\rangle$;
then $h(z)=\re\langle z,\mu(z)\rangle$, and
$$
\frac{1}{2t}\log\sum_{\lambda\in\Lambda}\E^{2\re\langle tz,\lambda\rangle} =
h(z)+\frac{1}{2t}\log\sum_{\lambda\in\Lambda}\E^{2\re\langle tz,\lambda-\mu(z)\rangle}=
h(z)+ o(1),
$$
since the last sum lies between $1$ and $\#\Lambda$, uniformly in $z$.
\noindent
\begin{proposition}[see \cite{Ka3}]
As $t\to+\infty$, the differential form
$$
\frac{1}{t^n}\frac{1}{(2\pi)^n}dd^c\log\sum_{\lambda\in\Lambda_1}\E^{2\re\langle tz,\lambda\rangle}\wedge\ldots\wedge
dd^c\log\sum_{\lambda\in\Lambda_n}\E^{2\re\langle tz,\lambda\rangle}
$$
converges to the well-defined positive current
$$\frac{1}{\pi^n} dd^ch_1\wedge\ldots\wedge dd^ch_n,$$
where $h_i$ is the support function of the Newton polytope of $\Lambda_i$.
\end{proposition}
Let $K_1,\ldots,K_n\subset\C^n$ be convex bodies with support
functions $h_1,\ldots,h_n$, and let $B\subset\C^n$ be the unit ball centered at $0$.
We call
$$V(K_1,\ldots,K_n)=\int_Bdd^ch_1\wedge\ldots\wedge dd^ch_n$$
the \emph{mixed pseudo-volume} of the
convex bodies $K_1,\ldots,K_n\subset\C^n$.
For the correctness of this definition and for some geometric properties
of the mixed pseudo-volume see \cite{Alesk,Ka1,Ka2,Ka3}.
\begin{corollary}
$$\lim_{t\to+\infty} \frac{\mathfrak M_{tB}(V_1, \ldots, V_n)}{t^n}= \frac{n!}{\pi^n}V(\gamma_1,\ldots,\gamma_n),$$
where $\gamma_i$ is the Newton polytope of $\Lambda_i$.
\end{corollary}
\begin{proposition}[see \cite{Alesk,Ka1,Ka2}]
If $K_1,\ldots,K_n\subset\re\C^n$,
then
$$V(K_1,\ldots,K_n)=\vol(K_1,\ldots,K_n),$$
where $\vol(K_1,\ldots,K_n)$ is the mixed volume of the convex bodies $K_i$.
\end{proposition}
\end{proposition}
For lattice polytopes $K_1,\ldots,K_n\subset\re\C^n$, i.e.,
polytopes with vertices in $\Z^n$, the BKK theorem \cite{BKK} follows.
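As a concrete illustration of the last statement (our own sketch, not from the text): in dimension $n=2$ the BKK root count $2!\,V(K_1,K_2)$ can be computed from areas via polarization, $V(K_1,K_2)=\tfrac{1}{2}(\mathrm{area}(K_1+K_2)-\mathrm{area}(K_1)-\mathrm{area}(K_2))$. The polygons and helper names below are illustrative choices.

```python
# Sketch of the BKK count for n = 2: the number of roots in the torus of a
# generic system equals 2! * V(K1, K2), with the mixed area computed by
# polarization from ordinary areas. Polytopes here are illustrative choices.

def hull(pts):
    # Andrew's monotone chain convex hull (counterclockwise)
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and \
                  (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) - \
                  (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, hi = half(pts), half(reversed(pts))
    return lo[:-1] + hi[:-1]

def area(pts):
    # shoelace formula applied to the convex hull of pts
    h = hull(pts)
    s = sum(h[i][0] * h[(i + 1) % len(h)][1] - h[(i + 1) % len(h)][0] * h[i][1]
            for i in range(len(h)))
    return abs(s) / 2

def mixed_area(K1, K2):
    # V(K1,K2) = (area(K1 + K2) - area(K1) - area(K2)) / 2
    msum = [(a[0] + b[0], a[1] + b[1]) for a in K1 for b in K2]
    return (area(msum) - area(K1) - area(K2)) / 2

square = [(0, 0), (1, 0), (0, 1), (1, 1)]  # Newton polygon of a + bx + cy + dxy
tri    = [(0, 0), (1, 0), (0, 1)]          # Newton polygon of a + bx + cy

# BKK: a generic bilinear pair has 2! * 1 = 2 roots in the torus,
# a generic pair of affine-linear equations has 2! * (1/2) = 1 root.
assert 2 * mixed_area(square, square) == 2
assert 2 * mixed_area(tri, tri) == 1
```

For a generic bilinear equation paired with a generic affine-linear one, the same computation gives $2\cdot V(\text{square},\text{triangle})=2$ roots, again matching the BKK count.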
\section{Introduction}
\label{sec:intro}
The solutions of the Euler equation for fluid dynamics are not unique.
An additional law of physics, in the form of an entropy principle, is
needed to ensure a physically meaningful solution. Wild and manifestly
nonphysical solutions have been studied extensively \cite{DelSze09,DelSze10} and
offer counterexamples to studies of the Euler equation as a model for
fully developed turbulence. This paper is concerned with the nonuniqueness
for Euler equation solutions that are the limit of Navier-Stokes solutions as
the viscosity tends to zero. We address common practice in the
construction of numerical solutions for turbulent flows.
Applications to type Ia supernovae are discussed.
Nonuniqueness (both mathematical and numerical)
of solutions to the Euler equation is well known in the study of shock waves
and its resolution is also well known: a maximum rate of
entropy production is imposed
as a selection criterion to yield a unique and physically
relevant solution.
But nonuniqueness persists
in solutions of the incompressible Euler equation, where shock waves do not
occur. Again, a physical principle must be added to select the physically
meaningful solution.
This paper poses a challenge to existing standards of verification and
validation (V\&V).
We propose that if turbulence is present in the
problem solved, standards of V\&V
should ensure the physical relevance of the solutions.
As with the shock wave example, inadmissible numerical solutions of
turbulent phenomena are also possible. We identify three broad classes
of numerical solutions to the problem of Rayleigh-Taylor (RT) turbulent mixing
and compare them to experimental data \cite{SmeYou87}.
One of these agrees with the data, while two do not.
The second main result of this
paper is to identify the other two classes, namely solutions that include no subgrid
terms (often reported with DNS status) and those for which the subgrid terms
are limited, i.e., the
Implicit Large Eddy Simulation (ILES), as physically inadmissible
solutions for the turbulent RT mixing data \cite{SmeYou87}.
These two classes of solutions do not agree with each other,
further indicating nonuniqueness issues.
To account for observed discrepancies between ILES predictions and
experimental data, it is common to add ``noise'' to the physics model.
As noise increases the entropy, some discrepancies between simulation
and measured data are removed.
The solution with noise is, however, not predictive. Not only
can the noise be missing in the required amounts, but it is only a qualitative cure:
neither the noise level nor the noise frequency spectrum is specified.
The maximum entropy rate is a clearly defined physics principle. We
propose it as a solution to the Euler equation nonuniqueness problem.
Reynolds averaged Navier Stokes (RANS) simulations resolve all length
scales needed to specify the problem geometry.
Large eddy simulations (LES) not only
resolve these scales, but in addition they resolve some, but not
all, of the generic turbulent flow. The mesh scale, i.e., the finest of the
resolved scales,
occurs within the turbulent flow. As this is a strongly coupled flow
regime, problems occur at the mesh cutoff. Resolution of all relevant
length scales, known as Direct Numerical Simulation (DNS) is
computationally infeasible for many problems of scientific and
technological interest. As a consequence, an understanding of the
problems and opportunities of LES is an important issue.
The subgrid scale
(SGS) flow exerts an influence on the flow at the resolved level.
Because this SGS effect
is not part of the Navier-Stokes equations,
additional modeling terms are needed in the equations. These
SGS terms added to the right hand side (RHS) of the
momentum and species concentration equations
generally have the form
\begin{equation}
\label{eq:sgs}
\nabla\nu_t \nabla \quad {\mathrm{and}} \quad \nabla D_t \nabla \ .
\end{equation}
The coefficients $\nu_t$ and $D_t$ are called eddy viscosity and eddy
diffusivity.
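A minimal sketch of how a term of the form (\ref{eq:sgs}) is discretized in flux form (a toy 1-D example of our own; the velocity profile and coefficient values are arbitrary):

```python
# Toy 1-D flux-form discretization of an SGS term div(nu_t grad v); the
# velocity profile and eddy viscosity values are arbitrary illustrative choices.
N = 64
dx = 1.0 / N
v = [(i * dx) * (1 - i * dx) for i in range(N + 1)]        # toy profile x(1-x)
nu_t = [0.01 * (1 + 0.5 * (i % 2)) for i in range(N + 1)]  # varying eddy viscosity

def sgs_term(v, nu_t, dx):
    # d/dx(nu_t dv/dx) with nu_t averaged to cell faces
    out = [0.0] * len(v)
    for i in range(1, len(v) - 1):
        flux_r = 0.5 * (nu_t[i] + nu_t[i + 1]) * (v[i + 1] - v[i]) / dx
        flux_l = 0.5 * (nu_t[i - 1] + nu_t[i]) * (v[i] - v[i - 1]) / dx
        out[i] = (flux_r - flux_l) / dx
    return out

d = sgs_term(v, nu_t, dx)
# for the concave profile x(1-x) the term is dissipative (negative) throughout
assert all(val < 0 for val in d[1:-1])
```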
According to ideas of Kolmogorov \cite{Kol41}, the energy in a turbulent
flow is conserved and passed in a cascade from larger vortices to smaller ones.
This idea leads to the scaling law \cite{Kol41}
\begin{equation}
\label{eq:K41}
\langle |v(k)|^2 \rangle = C_K \epsilon^{2/3} |k|^{-5/3}
\end{equation}
for the Fourier coefficient $v(k)$ of the velocity $v$. Here
$C_K$ is a numerical coefficient and
$\epsilon$, the energy dissipation rate, denotes the rate at which the energy
is transferred within the cascade.
It is a measure of the intensity of the turbulence.
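The scaling law (\ref{eq:K41}) predicts a straight line of slope $-5/3$ in log-log variables. The slope-fitting step used on spectral data can be sketched on synthetic data (the constants here are arbitrary, not fitted to any experiment):

```python
import math

# Sketch: recovering the spectral exponent of the Kolmogorov law by least
# squares in log-log variables, on synthetic data; C_K and eps are arbitrary.
C_K, eps = 1.5, 0.1
ks = [2.0 ** j for j in range(1, 11)]
E = [C_K * eps ** (2 / 3) * k ** (-5 / 3) for k in ks]

# linear regression of log E on log k
xs = [math.log(k) for k in ks]
ys = [math.log(e) for e in E]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)

# the fitted slope recovers the -5/3 scaling exponent
assert abs(slope + 5 / 3) < 1e-9
```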
At the grid level, the numerically modeled
cascade is broken. The role of the
SGS terms is to dissipate this excess grid level energy so that the
resolved scales see a diminished effect from the grid cutoff.
This analysis motivates the SGS coefficient $\nu_t$, while a conservation
law for species concentration
similarly motivates the coefficient $D_t$.
Some higher order compact schemes omit any subgrid model
in their study of RT mixing. As an example,
\cite{CabCoo06} present a nominally DNS solution,
which, however, is not validated by comparison to experiment.
Moreover, the DNS characterization
of the simulation is not documented, with $D$ and $\nu$ not specified.
It appears from the text that DNS refers to globally defined solution
parameters such as the globally defined Kolmogorov scale $\eta$ in relation
to the mesh spacing, with $\nu$ and $D$ defined on this basis.
Such resolution misses local fluctuations in
the turbulent intensity, which require dynamically defined SGS terms
added to the equation. As \cite{CabCoo06} is focused on applications to
supernova Ia, additional comments are placed in our SN Ia discussion.
ILES is the computational model
in which the minimum value of $\nu_t$ is chosen so that a minimum of grid level
excess energy is removed to retain the $|k|^{-5/3}$ scaling law, while
the prefactor $C_K\epsilon^{2/3}$ is not guaranteed. It thus depends on
limited and not full use of the subgrid terms that correspond to the
local values of the energy dissipation cascade.
An ILES version of Miranda, a modern higher order compact scheme, is given in
\cite{MorOlsWhi17},
which details the construction of the ILES version of this code and analyzes
a number of scaling related properties of the RT solutions the
algorithm generates. The subgrid terms are chosen
not proportional to the Laplacian as in (\ref{eq:sgs}),
but as higher power dissipation rates,
so that large wave numbers are more strongly suppressed.
The SGS modeling coefficients $\nu_t$ and $D_t$ are chosen as global
constants. The basis for the choice is to regard the accumulation of
energy at the grid level as a Gibbs phenomenon to be minimized
\cite{MorOlsWhi17}. Miranda
achieves the ILES goal of an exact $-5/3$ spectral decay,
see Fig. 3 right frame in Ref. \cite{MorOlsWhi17}.
FronTier uses dynamic
SGS models \cite{GerPioMoi91,MoiSquCab91},
and additionally uses a sharp interface model to reduce numerical
diffusion. In this method,
SGS coefficients $\nu_t$ and $D_t$ are defined in terms
based on local flow conditions,
using turbulent scaling laws, extrapolated from an analysis
of the flow at one scale coarser, where the subgrid flow is known.
The philosophy and choices of the SGS terms are completely different among
the compact schemes, ILES and
FronTier, a fact which leads to differences in the obtained
solutions. Solution differences between FronTier and ILES were reviewed in
\cite{ZhaKamShe18}, with FronTier but not ILES showing agreement with the
data \cite{SmeYou87}. The schemes totally lacking SGS terms are even
further from the experimental data \cite{SmeYou87}.
As shown in \cite{ZhaKamShe18},
long wave length noise in the initial conditions was eliminated as
a possible explanation of the discrepancies between ILES simulations and
experimental data for the RT instability growth
rate constant $\alpha_b$.
We also note that the mixedness parameter measured in \cite{MueSch1_09} is furthest
from experiment in \cite{CabCoo06}, is improved in the Miranda simulations of
\cite{MueSch1_09}, which lack subgrid terms but have improved modeling of
experimental parameters, and is further improved by the FronTier simulation
\cite{GliPloLim15}.
\section{Scaling laws compared}
\label{sec:scaling}
Here we focus on differences in the spectral scaling exponents. As
\cite{CabCoo06,MorOlsWhi17} employ a thinly diffused initial layer
separating two fluids of distinct densities,
the immiscible experiments of \cite{SmeYou87} are the most appropriate for
comparison. \cite{CabCoo06} does not report velocity spectral
scaling properties,
but this reference does report the very large growth of the interfacial mixing
area (\cite{CabCoo06}, Fig. 6),
a phenomenon which we have also observed \cite{LeeJinYu07,LimYuJin07}.
The scaling rate we observe, Fig.~\ref{fig:spectral}
from the late time
FronTier simulations reported in \cite{ZhaKamShe18}, shows
a strong decay rate in the velocity spectrum,
resulting from a combination of the turbulent fractal decay
and a separate cascading process we refer to as stirring.
Stirring is the mixing of distinct regions in a two phase flow. It occurs in the
concentration equation and is driven by velocity fluctuations. For stirring,
the concentration equation describes the
(tracked) front between the phases. Stirring fractal behavior is
less well studied than turbulent velocity. It
accounts for the very steep velocity spectral decay seen in
Fig.~\ref{fig:spectral}. In contrast, ILES \cite{MorOlsWhi17} captures
neither the expected turbulent intermittency correction to the decay rate nor
any stirring correction beyond this.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{fft105.jpg}
\end{center}
\caption{
\label{fig:spectral}
Plot of the spectral decay rate, in log log variables, from the
two point function in log variables (as studied in \cite{Mah17}).
Numerical data from the final time step
RT simulations of experiment 105 reported in
\cite{ZhaKamShe18}.
The immiscible decay rate -3.17 reflects a combination of
turbulent intermittency and the effects of a stirring cascade.
}
\end{figure}
We summarize in Table~\ref{table:compare}
the major code comparisons of this paper,
based on the RT instability growth rate $\alpha_b$.
A compact, higher order scheme \cite{CabCoo06} has the
smallest value $\alpha_b$. ILES is larger, and the FronTier
scheme using dynamic SGS is the largest of the three,
and in agreement with experiment.
\begin{table}
\caption{
\label{table:compare}
Three types of RT simulation algorithms according to their treatment of
SGS terms and their value for $\alpha_b$, compared
to the data of \cite{SmeYou87}.
}
\begin{centering}
\begin{tabular}{|l|l|l|c|}
\hline
Code & SGS terms & solution & evaluation \\
& & properties & relative to \cite{SmeYou87} \\
\hline
\hline
compact high & No SGS & $\alpha_b \sim 0.02$ & Inconsistent\\
order \cite{CabCoo06} & & & \\
\hline
Miranda & Limited SGS & $\alpha_b \sim 0.03$ & Inconsistent\\
ILES \cite{MorOlsWhi17} & & & \\
\hline
FronTier & Dynamic & $\alpha_b \sim 0.06$ & Consistent \\
\cite{ZhaKamShe18} & SGS & & \\
\hline
\end{tabular}
\end{centering}
\end{table}
\section{Maximum entropy production rate}
\label{sec:max-entropy}
Our first main result is to establish a plausible argument
for the validity of the maximum entropy
production rate for Euler equation turbulence.
The admissibility condition is an extension of the second law of
thermodynamics, in the sense that under this extension,
the physically admissible dynamic processes are constrained
more tightly than those allowed by the second law itself.
It has
been applied successfully to many natural processes
\cite{MarSel06,MihFarPai17}
including problems in climate science (terrestrial and other planets) \cite{OzaOhmLor03},
in astrophysics, and in the clustering of galaxies. As
noted in \cite{KleDyk10}, it does not have the status of an
accepted law of physics.
A fundamental obstacle to validation of this
principle can be seen in the lack of a variational principle which combines
conservative and dissipative processes.
We avoid this
fundamental question, and more narrowly outline a possible validation of the
maximum entropy principle
in the context of Euler equation turbulence.
The variational principle we find, in this context,
specifies an extreme value for the entropy production.
As this is applied at each infinitesimal increment
of time, the maximum entropy production
principle actually guarantees a maximal rate of
entropy production. For thermal processes, such a law is well validated,
and leads to the phenomenological Fourier law for thermal conductivity.
According to multifractal theories of turbulence \cite{Fri95}, turbulence
is intermittent, with intense regions of turbulence occurring in
clusters. There is a further clustering of clusters, a process which
continues to all orders. These higher order clusters are defined
in terms of structure functions, to be introduced in Sec.~\ref{sec:vel-ent}.
Before getting into technical details, we emphasize the
central modeling assumptions that make the maximum entropy principle
valid. For each order $p$ of clustering, a fractal set is
defined. Given a length scale $l$, the fractal set at this length scale has
a measure which is exponentially small in $l$. The central physics modeling
assumption for fractal turbulence is
\begin{itemize}
\item
(Fractal)
All the energy for the $p$ level of clustering
is contained in a small fractal set $x_p$,
realized at the length scale $l$ as the set $x_{p,l}$.
The energy on this set, $E_{p,l}$ (defined at the scale $l$), is constant.
\end{itemize}
This modeling assumption is used in the analysis of power laws
and Poisson processes describing the beta model of
Euler equation turbulence \cite{Fri95}.
It follows that the steady state energy dissipation and entropy production
of the order $p$ clustering
at length scale $l$ are given by
\begin{equation}
\label{eq:entropy-def}
E_{p,l} \int_{x_{p,l}} x dx \quad {\mathrm{and}} \quad
E_{p,l} \int_{x_{p,l}} x \ln x \, dx \ .
\end{equation}
We observe that the energy occurs
outside of the integrals,
and that the term $(1-x) \ln (1-x)$ is missing from the entropy.
To model a time
dependent state which has not yet achieved equilibrium, and is still
evolving in time, the only change to
(\ref{eq:entropy-def}) is that the equations are
multiplied by the fractional equilibrium part of the state.
The log Poisson model \cite{SheLev94} selects a fractal set
to describe each order $p$ of clustering. The choice, conditional on
prior choices for smaller $p$ values, is not defined by an exponential, i.e., a
pure fractal, but by a mixture of exponentials in the energy dissipation rates.
As the mixture is not
narrowly concentrated about its peak value, the applicability of
hypothesis
(Fractal) cannot be assumed. In the limit of large $p$, however,
the mixture of exponentials is narrow, so that (Fractal) is justified
for physically realizable solutions of Euler equation turbulence.
The peak values for finite $p$ are not identified in the log Poisson
analysis, which finds the mean of the mixture of exponentials on the
basis of a universality hypothesis. This multifractal model,
evaluated for large $p$, is
applied uniformly to all $p$. From the excellent agreement of these predictions
with multiple experiments and simulations (1\% accuracy) \cite{SheLev94},
the log Poisson model is validated.
A plausible principle to select the physically relevant solutions from among
the multiple nonunique solutions of the Euler equation,
suggested by this analysis, is the principle of
maximal rate of energy dissipation. The analysis of \cite{SheLev94} maximizes
the mean value of competing exponentials rather than their peak value.
The mean and peak coincide in the limit of large $p$, but the distinction
between them for finite $p$ is a gap remaining in any validation argument.
The maximum energy dissipation rate is a viable candidate for the
required selection principle among nonunique solutions of the Euler equation.
Accepting this, our analysis will be complete once solutions lacking
subgrid terms and ILES are seen to be physically invalid.
To the extent that some maximal entropy likelihood reasoning is applicable,
for example such as (Fractal), a maximum entropy production rate principle
for the selection of physically relevant solutions of the Euler equation
for fully developed turbulence would follow.
We refer to the highly
developed extensions \cite{StJ05,DubGra96,DubGra96a,SheWay95}
of \cite{SheLev94}. The references \cite{DubGra96,DubGra96a,SheWay95}
extend the log Poisson model to continuous $p > 0$. These references
do not resolve the issue
of either a maximum entropy production rate or a maximum energy dissipation
rate for fully developed turbulence, but they appear to offer a plausible
route for possible validation of either of these.
The dynamic equations are of Fokker-Planck type.
The dissipation operator is a sum of a conventional Laplacian, for the
thermal diffusion, and an integral over $p > 0$ of the order $p$ clustering
contribution, which is a fractal, or power law dissipation, expressed in
powers of the length scale $l$.
\section{The second main result}
\label{sec:results}
In view of these observations, we note three
independent reasons for concluding that the absence of subgrid
terms or their limited presence in ILES is problematic
on physical grounds.
\begin{enumerate}
\item The two limited subgrid schemes do not satisfy
the maximum entropy production rate principle.
\item The two limited subgrid schemes are
in violation of incontrovertible experimental and
simulation evidence that the true total spectral
decay rate is more negative than $-5/3$ \cite{Fri95}.
\item These two schemes understate the dissipated energy
and are thus unphysical.
\end{enumerate}
These are logically independent statements. The order is decreasing in the
fundamental nature of the statement and increasing in the simplicity of the
assumptions. Any one of these points is sufficient to invalidate
schemes lacking in subgrid terms or ILES, with its limited subgrid terms.
Point 1 is the most fundamental in nature, and it is the
subject of the remainder of this paper. Point 2
rests on established laws of physics and assumes the relevance of
Kolmogorov scaling laws with their intermittency corrections to RT mixing.
Point 3 assumes nothing. Simulations, even ILES simulations, show a transfer
of energy from large to small scales. Point 3 accepts this as a physical
fact. The energy transfer
is not a numerical feature to be minimized, but a property of the
solutions to be modeled correctly. The grid level cutoff terminates this
transfer, and point 3 notes that the transfer, from the grid level to the
subgrid level is incorrectly modeled in both types of limited dissipation
schemes.
For the reader satisfied with
either point 2 or point 3, the discussion is complete,
and the remainder of the paper can be ignored.
The problems with current computational paradigms are well summarized by
Zhou \cite{Zho17a}, Sec. 6
regarding evaluation of the RT
instability growth rate
$\alpha_b$, ``agreement between simulations and experiment are worse today
than it was several decades ago because of the availability of more
powerful computers.''
As our computational method depends on front tracking in addition to
dynamic subgrid models (which address items 1-3 above), we additionally
quote from Zhou \cite{Zho17a}, Sec. 5.2, in discussing \cite{GeoGliLi05}:
``it was clear that accurate numerical tracking to control numerical mass
diffusion and accurate modeling of physical scale-breaking phenomena
surface tension were the critical steps for the simulations to agree with
the experiments of Read and Smeeton and Youngs''.
We raise the possibility of ILES related errors in an
analysis of the deflagration to detonation transition in type
Ia supernovae. In that these simulations depend on ILES,
their predictive value may be questioned.
We propose a simple simulation search method for
rare events, in which a physics simulation code drives the turbulence
modeling. In agreement with \cite{CabCoo06}, we recommend a new
class of turbulent combustion subgrid models. See Sec.~\ref{sec:Ia}.
\subsection{Rayleigh-Taylor turbulent mixing}
\label{sec:RT}
We assess ILES in terms of the RT instability of
acceleration driven instabilities, and the prediction of the growth
rate $\alpha_b$ of this instability.
We identify situations in which ILES is in near agreement with
experimental measurement of the growth rate
in this measure \cite{Mue08,MueSch1_09}, and ones
\cite{DimYou04} where its predictions differ
by a factor of about 2 from experiments \cite{SmeYou87}.
The first case is characterized by
\begin{itemize}
\item{(a)} low levels of turbulence,
\item{(b)} high levels of long wave length perturbations
(``noise'') in the initial
conditions, and
\item{(c)} diffusive parameters in the physics model.
\end{itemize}
Regarding item (c), we observe that the successful ILES simulations
referenced above concerned hot-cold water, with a moderate Schmidt
number of 7, whereas no results are reported for the very low diffusive
fresh-salt water channel with a Schmidt number of 600.
\subsection{Noise as an adjustable parameter}
The postulate \cite{You03} of noise in the initial data
\cite{SmeYou87}
was shown to lead to agreement of the ILES predictions with experiment.
In previous studies \cite{GliShaKam11,ZhaKamShe18}, we have shown that this
postulate is not valid.
The long wave length noise is present,
but with a sufficiently small amplitude that its influence on the
instability growth rate is about $5\%$. Thus long wave length
initial ``noise'' in the initial conditions for \cite{SmeYou87}
is not sufficient to account
for the factor of 2 discrepancy between ILES and this data.
We regard ``noise'' as a palliative, and not a fundamental principle.
The noise level is not specified, nor is its frequency spectrum,
so that standards of predictive
science are not met. As noted, ``noise'', of the required intensity,
is missing in some instances. We propose the maximum entropy production
rate as a more satisfactory solution to the problem of Euler equation
turbulence nonuniqueness.
ILES simulations have been used in the study of incompressible turbulence,
a problem with ample experimental data reviewed in \cite{SheLev94}.
In such simulations,
``noise'' is added to the initial conditions. In this case the high
frequency component of the noise is important. Agreement with experiment
is obtained. Pure ILES, with no added noise, would not meet this test.
\subsection{Outline of derivation}
\label{sec:outline}
Our reasoning is based on three fundamental laws of physics:
\begin{itemize}
\item Conservation of energy, the first law of thermodynamics
\item Maximum entropy production rate, an extension of the second law of
thermodynamics
\item Universality in the clustering and compound clustering of intermittency
in fully developed turbulence.
\end{itemize}
The third item is formulated in \cite{SheLev94}.
In the multifractal description of turbulence, universality
states that the compound clusters, that is the multiple fractals
in the description of turbulent intermittency, must all obey a common
law. There can be no new physical law or parameter in passing
from one level of clustering to the next. The law is evaluated
in closed form \cite{SheLev94} in the limit that the order of the clustering
becomes infinite.
It is a power scaling law. By universality, this
law is then applied to clustering at all orders.
The universality theories are developed for single constant density
incompressible turbulence. Our use in a variable
density context is an extrapolation of these theories
beyond their domain of strict validated applicability. Scaling
laws are similarly extrapolated. Such extrapolations are widely
used (and verified) in simulation studies. For convenience,
the Reynolds stress analysis uses this approximation.
In shock wave modeling, the Euler equation shock
wave introduces a Gibbs phenomenon of overshoot. The resulting instability
is removed by dissipation (artificial viscosity, and its modern
variants) of the minimum amount to just prevent the overshoot.
The turbulent cascade of energy is not a Gibbs phenomenon. It is
an observable fact and not a numerical artifact.
Minimizing its magnitude is an error, as opposed to
an accurate model of the mesh dissipated error in the dynamic SGS models.
We proceed in the following steps. Using the Reynolds stress, we
express the SGS terms to be modeled as a truncated two point function.
In this formulation, we identify the minimum (ILES) and maximum
(dynamic SGS) alternatives.
We then proceed
from velocity fluctuations
to the energy dissipation rate $\epsilon$ and from the latter to
the entropy production rate. At each step we are looking at truncated
two point functions. At the end, we are looking at the
entropy production rate and must choose the
solution with maximum entropy production rate.
Each step is monotone and preserves the minimum-maximum choice.
Reasoning backwards, we see that the maximum
choice is needed at the outset, and so ILES is inadmissible.
The transition, from velocities to energy truncated two point functions,
has two components. The first is a scaling analysis to show equivalence,
but in the process the order of clustering changes. The second component
in this transition
is to apply universality: all orders of clustering must obey a common
minimum-maximum choice.
\subsection{From velocities to entropy}
\label{sec:vel-ent}
\subsubsection{Reynolds stress}
The Reynolds stress results from regarding the mesh values as
cell averaged quantities. This creates an obvious problem for nonlinear
terms of the Euler equation. From the momentum equation, the quadratic
nonlinearity is replaced by the product of the cell mean values. The
resulting error, transferred to the RHS of the momentum equation is the
negative of the gradient of the Reynolds stress, defined as
\begin{equation}
\label{eq:rey}
R = \overline{ v^2} - \overline{v} ~ \overline{v}
\end{equation}
in the case of constant density, with a more complex expression involving
density weighted (Favre) averages in the variable density case.
The added force term $-\nabla R$ on the right hand side (RHS)
of the momentum equation is modeled
as $\nu_t \Delta v$. Thus we see that the minimum and maximum values for
the energy dissipation rate $\nu_t$ correspond to
minimum and maximum values for models of $-\nabla R$. $R$, as a truncated
two point function, vanishes as its argument becomes infinite and is peaked
at the origin. Thus minimum and maximum values for $-\nabla R$
correspond to minimum and maximum values for $R$ itself.
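A toy illustration of (\ref{eq:rey}) (synthetic data and helper names of our own): cell averaging a velocity field with resolved and subgrid-scale components yields a non-negative Reynolds stress in every cell, which is precisely the quantity the SGS terms must model.

```python
import math

# Toy 1-D illustration of the Reynolds stress R = bar(v^2) - bar(v) bar(v);
# the velocity field and grid sizes are synthetic illustrative choices.
N, cells = 1024, 16
w = N // cells
# resolved large-scale motion plus a subgrid-scale fluctuation
v = [math.sin(2 * math.pi * i / N) + 0.3 * math.sin(32 * math.pi * i / N)
     for i in range(N)]

R = []
for c in range(cells):
    block = v[c * w:(c + 1) * w]
    vbar = sum(block) / w                   # cell average of v
    v2bar = sum(x * x for x in block) / w   # cell average of v^2
    R.append(v2bar - vbar * vbar)           # Reynolds stress per cell

# R is a per-cell variance, hence non-negative in every cell
assert len(R) == cells
assert all(r >= 0 for r in R)
```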
\subsubsection{Velocities to energy}
As technical preparation for the analysis of this section, we define the
structure functions. They make precise the intuitive picture
of multiple orders of clustering for intermittency.
There are two families of structure functions, one for
velocity fluctuations and the other for the energy
dissipation rate $\epsilon$. The structure functions are the expectation
value of the $p^{th}$ power of the variable. For each value of $p$,
they define a fractal and satisfy a power law in their decay in a scaling
variable $l$. The structure functions and the associated scaling exponents
$\zeta_p$ and $\tau_p$ are defined as
\begin{equation}
\label{eq:zeta-tau}
\langle \delta v_l^p \rangle \sim l^{\zeta_p}
\quad {\mathrm{and}} \quad
\langle \epsilon_l^p \rangle \sim l^{\tau_p}
\end{equation}
where $\delta v_l$ and $\epsilon_l$ are respectively
the averages of velocity differences and
of $\epsilon$ over a ball of size $l$.
The two families of exponents are related by a simple scaling law
\begin{equation}
\label{eq:zeta-tau2}
\zeta_p = p/3 + \tau_{p/3}
\end{equation}
derived on the basis of scaling laws and dimensional analysis \cite{Kol62}.
This would seem to accomplish the velocity fluctuation
to energy dissipation rate
step, preserving the minimum vs. maximum choice,
but it does not, because
the value of $p$ to which it applies has changed.
To fill this gap, we turn to the assumption of universality formulated
in terms of the $\tau_p$ \cite{SheLev94}, as explained, with mathematical
formalisms replacing some of the theoretical modeling arguments, in
\cite{SheWay95,DubGra96,DubGra96a,SheZha09,LiuShe03}.
As a function of $p$,
$\tau_p$ is a fractional order cubic, defined in terms of a fractional
order dissipative operator with a fractional order exponent $\beta$.
This relation is derived exactly in the limit as $p \rightarrow \infty$,
and in the name of universality, then applied to all values of $p$.
Since $\tau_p$ is a monotone fractional order cubic, the minimum-maximum choice
for any one $p$ is reflected in the same choice for all $p$. We have thereby
completed the velocity to energy dissipation rate step, and
preserved the minimum vs. maximum choice.
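For illustration, in the She-Leveque model \cite{SheLev94} one has $\tau_p = -2p/3 + 2(1-(2/3)^p)$, so that \eref{eq:zeta-tau2} gives $\zeta_p = p/9 + 2(1-(2/3)^{p/3})$ (standard published values, quoted here only as an example). A quick numerical check of the scaling relation, of the exact result $\zeta_3 = 1$, and of the monotonicity underlying the argument above:

```python
def tau(p):
    # She-Leveque exponents for the energy dissipation rate (log-Poisson model)
    return -2.0 * p / 3.0 + 2.0 * (1.0 - (2.0 / 3.0) ** p)

def zeta(p):
    # velocity structure-function exponents via zeta_p = p/3 + tau_{p/3}
    return p / 3.0 + tau(p / 3.0)

# zeta_3 = 1 is exact, and zeta_p is monotone increasing in p, so a
# minimum-maximum choice at one order p is reflected at every order
checks = [zeta(p) for p in range(1, 11)]
```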
From the modeling principle (Fractal), the
energy dissipation rate is maximized exactly when the
entropy production rate is maximized.
The maximum choice for the entropy production rate is required
and the minimum choice is inadmissible. Reasoning backwards to the
original energy dissipation choices, the minimum rate of energy
dissipation (ILES) is inadmissible.
\section{Significance: an example}
\label{sec:Ia}
For simulation modeling of turbulent flow nonlinearly coupled to other
physics (combustion and reactive flows, particles embedded in turbulent flow,
radiation), the method of dynamic SGS turbulent flow models, which only
deals with average subgrid effects, may be insufficient. In such cases,
the turbulent fluctuations or the full two point correlation function
is a helpful component of SGS modeling. Such a goal is only partially
realized in the simplest of cases, single density incompressible
turbulence. For highly complex physical processes, the knowledge of the
domain scientist must still be retained, and it appears to be
more feasible to bring
multifractal modeling ideas into the domain science communities.
In this spirit, we propose here a simple method
for the identification of (turbulence related) extreme events
through a modification of adaptive mesh refinement (AMR), which we
call Fractal Mesh Refinement (FMR). We propose FMR to seek a
deflagration to detonation transition (DDT)
in type Ia supernova.
FMR allows high levels of strongly focused resolution.
The method is proposed to assess the extreme events generated
by multifractal turbulent nuclear deflagration. Such events, in a white
dwarf type Ia supernova progenitor, are assumed to lead to DDT,
which produces the observed type Ia supernova.
See \cite{ZinAlmBar17,CalKruJac12} and references cited there.
FMR refines the mesh not adaptively wherever resolution is needed, but
only in the most highly critical regions, and
thereby may detect
DDT trigger events within large volumes at a feasible computational cost.
The detailed mechanism for DDT is presumed to be diffused
radiative energy
arising from some local combustion event of extreme
intensity, in the form of a convoluted flame front, embedded in
a nearby volume of unburnt stellar material close to
ignition.
Consistent with the Zeldovich theory \cite{Lee08}, a widespread
ignition and explosion may result.
FMR refinement criteria will search for such
events. In this plan, the FMR search should avoid ILES.
See \cite{Gli18}.
There is a minimum length scale for wrinkling of a turbulent
combustion front, called the Gibson scale. Mixing can proceed in the
absence of turbulence, via stirring. Thus the Gibson scale is not the
correct limiting scale for a DDT event. Stirring, for a flame front,
terminates at a smaller scale, the width of the flame itself. The analysis of
length scales must also include correctly modeled transport for charged
ions \cite{MelLimRan14},
which can be orders of magnitude larger than those inferred
from hydro considerations. The
microstructure of mixing for a flame front could be thin flame regions
surrounded by larger regions of burned and unburned stellar material
(as with a foam of soap bubbles, with a soap film between the bubbles).
Here again multifractal and entropy issues appear to be relevant,
although not subject to theoretical analysis comparable to that
of multifractal for turbulent flow.
A multifractal clustering of smaller bubbles separated by flame fronts
can be anticipated, and
where a sufficient fraction of these bubbles are unburnt stellar material,
a trigger for DDT could occur.
FMR, with its narrow focus on extreme events, will come closer to discovering
such DDT triggers than will an AMR algorithm design.
For this purpose, the astrophysics code should be based on dynamic subgrid
SGS, not on ILES.
We return to the discussion of \cite{CabCoo06}. Our FronTier computations
of a 2D interface surface length are in qualitative agreement with those of
\cite{CabCoo06} for the surface area.
Such models of interface area should be the basis for subgrid scale modeling
of the turbulent flame intensity. Work is currently in progress to
construct an experimentally validated subgrid scale microstructure to
complement models of turbulent flame surface area. These subgrid
models may play a role in reaching beyond length scales reachable by FMR.
\section{Conclusions }
\label{sec:conc}
We have shown that the ILES algorithm for the simulation of Euler equation
turbulence is physically inadmissible. It is in violation of the
physical principle of maximum rate of entropy production.
We have explained observations of experimental flows for which this
error in ILES has only a minor effect. They are associated with high levels
of noise in the initial conditions, low levels of turbulent intensity
and diffusive flow parameters.
Prior work, e.g., \cite{GliShaKam11,GeoGliLi05,ZhaKamShe18},
pertains to simulation validation studies of RT instability
experiments with a stronger intensity of turbulence
and for which such significant long wavelength perturbations to the
initial data are missing. In these experiments, the present analysis
provides a partial explanation for the factor of about 2 discrepancy between
observed and ILES predicted instability growth rates.
We have noted the potential for ILES related errors to influence
ongoing scientific investigations, including the search for
DDT in type Ia supernova.
We believe V\&V standards should include an analysis of the physical
relevance of proposed solutions to flow problems, specifically turbulent
and stirring problems.
The ILES simulations of the experiments of \cite{SmeYou87} fail this
test by a factor of 2 in the RT growth rate $\alpha_b$, and
on this basis we judge them to be physically inadmissible.
We recognize that the conclusions of this paper will be controversial within
the ILES and high order compact turbulent simulation communities.
A deeper consideration of the
issues raised here is a possible outcome.
The issues to be analyzed are clear:
\begin{itemize}
\item
Is the transport of energy and concentration, blocked at the
grid level, to be ignored entirely \cite{CabCoo06}?
\item
Is it to be regarded as a Gibbs phenomenon \cite{MorOlsWhi17},
and thus to be minimized?
\item
Is it a physical phenomenon,
to be modeled accurately \cite{GerPioMoi91,MoiSquCab91}?
\end{itemize}
If the response to this paper is an appeal to
consensus (everyone else is doing it),
the argument fails. Consensus is of course a weak argument, and
one that flies in the face of standards of V\&V. More significantly,
there is a far larger engineering community
using dynamic SGS models in the design of engineering structures
tested in actual practice.
This choice is backed by nearly three decades of
extensive experimental validation. It is further used
to extend the calibration range of RANS simulations beyond available
experimental data. The resulting RANS, calibrated to dynamic SGS LES data,
are widely used in the design and optimization
of engineering structures; these are also tested in real applications.
Consensus in this larger community overwhelms the ILES consensus
by its sheer magnitude, and ILES loses the consensus argument.
\section{Acknowledgements}
\label{ack}
Use of computational support by the Swiss National Supercomputing Centre is gratefully acknowledged.
Los Alamos National Laboratory Preprint LA-UR-18-30837.
\section{Introduction}
Let $X=\{X_n:\, n\in\mathbb{N}\}$ be a linear process defined by
\begin{equation}\label{def-Xn}
X_n=\sum_{i=0}^{\infty}a_{i}\varepsilon_{n-i},
\end{equation}
where the innovations $\varepsilon_i$ are i.i.d. real-valued random variables belonging to the domain of attraction of an $\alpha$-stable law ($0<\alpha<2$), the coefficients satisfy $a_0=1$ and $a_i\sim c_0i^{-\beta}$, $i=1,2,\ldots$, where $c_0$ is a positive constant. Here $a_i\sim a'_i$ means that $a_i/a'_i\to1$ as $i\to\infty$. By the Kolmogorov three-series theorem, the linear process $X$ in \eref{def-Xn} converges almost surely if $\alpha\beta>1$. Assume that the linear process $X$ has a bounded probability density function $f(x)$. Then the study of the quadratic functional $\int_{{\mathbb R}} f^2(x)\,dx$ will help us to get more information on entropies related to the linear process $X$, such as the quadratic R\'{e}nyi entropy $R(f)=-\ln (\int_{{\mathbb R}} f^2(x)\,dx)$ and the Shannon entropy $S(f)=-\int_{{\mathbb R}} f(x)\ln f(x)\,dx$.
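As an illustration of this setting (a sketch only; the results below do not depend on it), symmetric $\alpha$-stable innovations, the canonical members of the domain of attraction, can be simulated with the Chambers-Mallows-Stuck representation, and a truncated version of \eref{def-Xn} built from them (here with the hypothetical choice $c_0=1$):

```python
import numpy as np

def sas_sample(alpha, size, rng):
    """Standard symmetric alpha-stable variates (alpha != 1) via the
    Chambers-Mallows-Stuck representation."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def linear_process(alpha, beta, n, trunc, rng):
    """Truncated linear process X_k = sum_{i < trunc} a_i eps_{k-i},
    with a_0 = 1 and a_i = i^{-beta} (the c_0 = 1 case)."""
    a = np.r_[1.0, np.arange(1, trunc, dtype=float) ** (-beta)]
    eps = sas_sample(alpha, n + trunc - 1, rng)
    return np.convolve(eps, a, mode="valid")  # length n

rng = np.random.default_rng(0)
X = linear_process(alpha=1.5, beta=1.2, n=1000, trunc=2000, rng=rng)
```

The truncation level `trunc` is an assumption of the sketch; the infinite series converges almost surely since $\alpha\beta>1$ here.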
Entropy is widely applied in information theory, statistical classification, pattern recognition and related fields, since it is a measure of uncertainty in a probability distribution. In the literature, estimators for the quadratic functional and the associated entropies have been well studied for independent data. However, there are very few works on estimation of the quadratic functional and the corresponding entropies in the dependent case. In \cite{KLS}, K\"{a}llberg, Leonenko and Seleznjev extended the $U$-statistics method to $m$-dependent sequences. They showed the rate optimality and asymptotic normality of the $U$-statistics estimator for multivariate sequences. In \cite{A}, Ahmad obtained the strong consistency of the quadratic functional estimator by the orthogonal series method for stationary time series under a strong mixing condition. In \cite{Sang-Xu}, kernel entropy estimation for the quadratic functional and related entropies of regular time series data was studied under certain mild conditions. Although the linear processes in \cite{Sang-Xu} can have infinite variance, that work only covers the short memory case; see Definition 2.1 and Example 3.2 in \cite{Sang-Xu} for more details. To the best of our knowledge, general results for quadratic functional estimation and related entropies of long memory linear processes with infinite variance are still unknown.
In this paper, for the linear process $X=\{X_n:\, n\in\mathbb{N}\}$ defined in (\ref{def-Xn}), we only focus on the case $1<\alpha \beta<2$. According to Definition 2.1 in \cite{Sang-Xu}, this corresponds to the long memory case. When innovations are symmetric $\alpha$-stable random variables, one can also refer to \cite{Hsing-1999} for the definition of such long memory linear processes. To estimate the quadratic functional $\int_{{\mathbb R}} f^2(x)\, dx$ of the linear process $X=\{X_n:\, n\in\mathbb{N}\}$ defined in $\eref{def-Xn}$, we shall apply the kernel method
\begin{align} \label{def-Tn}
T_n(h_n)=\frac{2}{n(n-1)h_n} \sum_{1\le j< i\le n}K\left(\frac{X_i-X_j}{h_n}\right),
\end{align}
where the kernel $K$ is a symmetric and bounded function with $\int_{{\mathbb R}} K(u)\, du = 1$ and $\int_{{\mathbb R}} u^2|K(u)|\, du<\infty$. The bandwidth sequence $h_n$ satisfies $0<h_n\to 0$ as $n\to\infty$.
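A direct implementation of \eref{def-Tn} is straightforward. As a sanity check we apply it, purely for illustration, to i.i.d. standard normal data (an assumption outside the stable-innovation setting of this paper), for which $\int_{{\mathbb R}} f^2(x)\, dx = 1/(2\sqrt{\pi}) \approx 0.2821$:

```python
import numpy as np

def T_n(x, h):
    """T_n(h) = 2/(n(n-1)h) * sum_{j<i} K((x_i - x_j)/h),
    with K the standard normal density."""
    n = x.size
    d = (x[:, None] - x[None, :]) / h                  # pairwise scaled differences
    k = np.exp(-d ** 2 / 2.0) / np.sqrt(2.0 * np.pi)   # kernel values
    off_diag = k.sum() - np.trace(k)                   # i != j counts each pair twice
    return off_diag / (n * (n - 1) * h)

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
est = T_n(x, h=x.size ** (-1.0 / 5.0))   # close to 1/(2*sqrt(pi))
```

The factor $2$ in \eref{def-Tn} is absorbed by summing over all ordered pairs $i \neq j$, which counts each unordered pair twice.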
Throughout this paper, if not mentioned otherwise, the letter $c$ with or without a subscript, denotes a generic positive finite constant whose exact value is independent of $n$ and may change from line to line. We use $\iota$ to denote the imaginary unit $\sqrt{-1}$. For a complex number $z$, we use $\overline{z}$ and $|z|$ to denote its conjugate and modulus, respectively. For any integrable function $g(x)$, its Fourier transform is defined as~$\widehat{g}(u)=\int_{\mathbb{R}}e^{\iota x u}g(x)\, dx$. Moreover, we let $\phi(\lambda)$ be the characteristic function of linear process $X=\{X_n:\, n\in\mathbb{N}\}$, and $\phi_{\varepsilon}(\lambda)$ the characteristic function of innovations. That is, $\phi(\lambda)=\mathbb{E}[e^{\iota \lambda X_n}]$ and $\phi_{\varepsilon}(\lambda)=\mathbb{E}[e^{\iota \lambda \varepsilon_{1}}]$. For simplicity of notation, we always assume that the coefficients $a_i$ in the definition of the linear process $X$ are nonzero.
The paper is organized as follows. The main results are given in Section 2. A simulation study is presented in Section 3. Section 4 is devoted to the proofs of Theorems \ref{thm1} and \ref{thm2}, based on the Fourier transform and the projection method.
\bigskip
\section{Main Results}
It is well known that the characteristic function of an $\alpha$-stable law $S_{\alpha}(\sigma,\eta,\mu)$ has the form
\[
e^{\iota\lambda \mu -\sigma^{\alpha}|\lambda|^{\alpha}(1-\iota \eta \operatorname{sign}(\lambda) \omega(\lambda,\alpha))},
\]
where $0<\alpha \leq 2,\sigma>0,-1 \leqslant \eta \leqslant 1,\mu \in \mathbb{R}$, and
\begin{equation*}\label{def-omega}
\omega(\lambda,\alpha)= \begin{cases} \tan(\frac{\pi \alpha}{2})
& \text { for } \alpha \neq 1 \\
2 \pi^{-1} \log|\lambda|
& \text { for } \alpha=1.\end{cases}
\end{equation*}
It is called symmetric (or $S\alpha S$) if $\eta=\mu=0$, and standard if $\sigma=1$. For more details on stable laws, we refer to \cite{Samorodnitsky and Taqqu-1994}. Let $\xrightarrow{\mathcal{L}}$ denote the convergence in distribution. A random variable $Y$ is said to be in the domain of attraction of an $\alpha$-stable law if there exist i.i.d. random variables $Y_n$ with the same distribution as $Y$, real numbers $A_n$, and strictly positive numbers $B_n$ such that
\[
\frac{\sum\limits^n_{i=1}Y_i-A_n}{B_n}\xrightarrow{\mathcal{L}} S_{\alpha}(\sigma,\eta,\mu)
\]
as $n\to \infty$, see, for example, \cite{Ibragimov and Linnik-1971}. Since the innovations $\varepsilon_i$ belong to the domain of attraction of an $\alpha$-stable law, by Theorem 2.6.5 in \cite{Ibragimov and Linnik-1971}, the characteristic function $\phi_{\varepsilon}$ of $\varepsilon_1$ satisfies
\[
|\phi_{\varepsilon}(\lambda)|=e^{-c_{\alpha} |\lambda|^{\alpha} L(\lambda)(1+o(1))}
\]
as $\lambda\to 0$, where $c_{\alpha}$ is a positive constant depending on $\alpha$ and $L(\lambda)$ is a slowly varying function as $\lambda\to 0$. Since the coefficients satisfy $a_0=1$, $a_i \sim c_0i^{-\beta}$, it is easy to see that
\[
\sum\limits_{i=0}^{\infty}\sqrt{{\mathop {{\rm Var\, }}}(e^{\iota \lambda a_i\varepsilon_1})}=\infty\quad \text{but}\quad \sum\limits_{i=0}^{\infty}{\mathop {{\rm Var\, }}}(e^{\iota \lambda a_i\varepsilon_1})<\infty
\]
for $1<\alpha\beta<2$. So, by Definition 2.1 in \cite{Sang-Xu}, the linear process $X=\{X_n: n\in\mathbb{N}\}$ defined in \eref{def-Xn} has long memory. This is consistent with the definition of long memory linear processes when innovations are symmetric $\alpha$-stable random variables, see \cite{Hsing-1999}.
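For the benchmark case of symmetric $\alpha$-stable innovations, where $\phi_{\varepsilon}(\lambda)=e^{-|\lambda|^{\alpha}}$, this dichotomy can be verified directly (a sketch; the general domain-of-attraction case differs only by a slowly varying factor):
\[
{\mathop {{\rm Var\, }}}(e^{\iota \lambda a_i\varepsilon_1})
= 1-|\phi_{\varepsilon}(\lambda a_i)|^2
= 1-e^{-2|\lambda a_i|^{\alpha}}
\sim 2c_0^{\alpha}|\lambda|^{\alpha}\, i^{-\alpha\beta}
\quad \text{as } i\to\infty,
\]
so that $\sum_i {\mathop {{\rm Var\, }}}(e^{\iota \lambda a_i\varepsilon_1})<\infty$ precisely when $\alpha\beta>1$, while $\sum_i \sqrt{{\mathop {{\rm Var\, }}}(e^{\iota \lambda a_i\varepsilon_1})} \sim c\sum_i i^{-\alpha\beta/2}$ diverges precisely when $\alpha\beta\leq 2$.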
Let $G(x)$ be the distribution function of $\varepsilon_1$. We introduce the following assumptions on the innovation $\varepsilon_1$.
\begin{enumerate}
\item[({\bf A1})] There exist strictly positive constants $c_1$ and $\delta$ such that $|\phi_{\varepsilon}(\lambda)|\leq \frac{c_1}{1+|\lambda |^{\delta}}$ for all $\lambda\in\mathbb{R}$;
\item[({\bf A2})] There exist non-negative constants $c_{-}$ and $c_+$ with $c_{-}+c_+>0$ such that
\begin{equation*}
\lim_{x\to-\infty}|x|^{\alpha}G(x)=c_{-} \quad\text{and}\quad \lim_{x\to+\infty}x^{\alpha}(1-G(x))=c_+;
\end{equation*}
\item[({\bf A3})] $G(x)$ is twice differentiable with derivatives $G^{(j)}(x)$ $(j=1,2)$ satisfying the following inequalities: for any $x, y \in \mathbb{R}$ with $|x-y| \leq 1$ and $j=1,2$,
\[
|G^{(j)}(x)| \leq C(1+|x|)^{-\alpha}
\]
and
\[
|G^{(j)}(x)-G^{(j)}(y)| \leq C|x-y|(1+|x|)^{-\alpha} .
\]
\end{enumerate}
The assumption ({\bf A1}) and $a_i\sim c_0 i^{-\beta}$ imply that (i) the linear process $X$ defined in \eref{def-Xn} has a bounded probability density function $f(x)$; (ii) the characteristic function $\phi(\lambda)$ of $X_n$ decays at any polynomial rate to $0$ as $|\lambda|\to+\infty$. Moreover, ({\bf A1}) implies that there exists $m\in\mathbb{N}$ such that $|\phi_{\varepsilon}(\lambda)|^m\leq \frac{c_2}{1+|\lambda |^4}$. Following the proof of Lemma 1 in \cite{Giraitis et al-1996}, we can show that $f(x)$ is twice continuously differentiable and all its derivatives up to the second order are uniformly bounded; see also property {\bf P1} in \cite{Honda-2009b}.
Assumptions ({\bf A2}) and ({\bf A3}) are only needed in Theorem \ref{thm2} to obtain the desired limit theorems. The assumption ({\bf A2}) says that $\varepsilon_1$ belongs to the domain of attraction of an $\alpha$-stable law. For more details on the domain of attraction of an $\alpha$-stable law, we refer to \cite{Ibragimov and Linnik-1971}. If $\varepsilon_1$ is a symmetric $\alpha$-stable random variable, then its distribution function $G$ satisfies all of the assumptions ({\bf A1}), ({\bf A2}) and ({\bf A3}).
The following are the main results of our paper.
\begin{theorem}\label{thm1}
Assume that ({\bf A1}) holds. Then, for any $\eta\in(0,\alpha-\frac{1}{\beta})$, there exist positive constants $c_1$ and $c_2$ depending on $\eta$ such that
\begin{align*}\label{case1-mean}
\Big|{{\mathbb E}\,} T_n(h_n)-\int_{\mathbb{R}}f^2(x)\, dx\Big|\leq c_1\left(n^{1-(\alpha-\eta)\beta}+h_n^2\right)
\end{align*}
and
\begin{align*}
{{\mathbb E}\,}\Big|T_n(h_n)-{{\mathbb E}\,} T_n(h_n)-\frac{1}{n}\sum^n_{i=1}Y_i\Big|\leq c_2\left(n^{1-(\alpha-\eta)\beta}+\frac{1}{\sqrt{n^3 h^2_n}}+\frac{1}{\sqrt{n^2 h_n}}+h^2_nn^{\frac{1-(\alpha-\eta)\beta}{2}} \right),
\end{align*}
where $Y_i=2\Big( f(X_i)-\int_{\mathbb{R}} f^2(x)\, dx \Big)$.
\end{theorem}
\begin{theorem}\label{thm2}
Under the assumptions of Theorem \ref{thm1} and $nh_n\to \infty$ as $n\to\infty$,
\begin{enumerate}
\item[(1)] if ({\bf A2}) and ({\bf A3}) hold with $c_-=c_+$, $1<\alpha<2$, $\dfrac{1}{\alpha}<\beta<1$ and $\lim\limits_{n\to\infty}n^{\frac{(\alpha\beta-1)(2-\alpha)}{4\alpha}+\frac{\eta\beta}{4}}h_n=0$ for $\eta\in(0,\frac{(\alpha\beta-1)(\alpha-1)}{\alpha\beta})$, then we have
\begin{equation}\label{asym1}
n^{\beta-\frac{1}{\alpha}}\left[T_n(h_n)-{{\mathbb E}\,} T_n(h_n)\right]\xrightarrow{\mathcal{L}} \tilde{c} Z,
\end{equation}
where $Z$ is a standard $S\alpha S$ random variable and
\[
\tilde{c}=2c_{0}\left(2 c_+ \frac{\Gamma(2-\alpha) \cos (\alpha \pi / 2)}{1-\alpha} \int_{-\infty}^{1} \int_{0}^{1}(t-s)_{+}^{-\beta} dt ds\right)^{1 / \alpha}\int_{\mathbb{R}}f(x)df(x) .
\]
\item[(2)] if ({\bf A2}) and ({\bf A3}) hold with $c_-=c_+$, $1<\alpha<2$, $1<\beta<\dfrac{2}{\alpha}$ and $\lim\limits_{n\to\infty}n^{\frac{(\alpha\beta-1)(2-\alpha\beta)}{4\alpha\beta}+\frac{\eta\beta}{4}}h_n=0$ for $\eta\in(0,\frac{(\alpha\beta-1)^2}{\alpha\beta^2})$, then we have
\begin{equation}\label{asym2}
n^{1-\frac{1}{\alpha\beta}}\left[T_n(h_n)-{{\mathbb E}\,} T_n(h_n)\right]\xrightarrow{\mathcal{L}} c_+^{1/\alpha\beta}c_{f}^{+}L^{+}+c_-^{1/\alpha\beta} c_{f}^{-}L^{-},
\end{equation}
where $L^{-}$and $L^{+}$ are i.i.d. random variables with stable law $S_{\alpha \beta}(1,1,0)$ and
\begin{align*}
c_{f}^{\pm}=2 \tilde{\sigma} \int_{0}^{\infty}\left(f_{\infty}(\pm u)-f_{\infty}(0)\right) u^{-(1+1 / \beta)} \mathrm{d}u
\end{align*}
with $f_{\infty}(x)={{\mathbb E}\,}[f\left(X_{1}+x\right)]$ and $\tilde{\sigma}=\left\{\frac{c_{0}^{\alpha}(\alpha \beta-1)}{\Gamma(2-\alpha \beta)|\cos (\pi \alpha \beta / 2)| \beta^{\alpha \beta}}\right\}^{1 /(\alpha \beta)}$.
\item[(3)] if ({\bf A2}) holds, $0<\alpha<1$, $1<\alpha\beta<2$ and $\lim\limits_{n\to\infty}n^{\frac{(\alpha\beta-1)(2-\alpha\beta)}{4\alpha\beta}+\frac{\eta\beta}{4}}h_n=0$ for $\eta\in(0,\frac{(\alpha\beta-1)^2}{\alpha\beta^2})$, then we have
\begin{equation}\label{asym3}
n^{1-\frac{1}{\alpha\beta}}\left[T_n(h_n)-{{\mathbb E}\,} T_n(h_n)\right]\xrightarrow{\mathcal{L}}c_+^{1/\alpha\beta} c_{f}^{+}L^{+}+c_-^{1/\alpha\beta}c_{f}^{-}L^{-},
\end{equation}
where $c_f^{\pm}$ and $L^{\pm}$ are defined in (2).
\end{enumerate}
\end{theorem}
\begin{remark}
The bandwidth $h_n$ in Theorem \ref{thm2} can be chosen independent of $\alpha$ and $\beta$. Note that for any $x\in(1,2)$, $g(x)=\frac{(x-1)(2-x)}{4x}\in (0, \frac{3-2\sqrt{2}}{4}]$ and $\frac{3-2\sqrt{2}}{4}<\frac{3-2.8}{4}=\frac{1}{20}$. Choose $\eta\in(0,\alpha-\frac{1}{\beta})$ small enough. We only need to require $n^{\frac{1}{20}}h_n\to 0$ and $nh_n\to\infty$ as $n\to\infty$. Moreover, ${{\mathbb E}\,} T_n(h_n)$ in $\eref{asym1}$, $\eref{asym2}$ and $\eref{asym3}$ can be replaced by $\int_{\mathbb{R}}f^2(x)\, dx$ if $h_n$ satisfies $\lim\limits_{n\to\infty}n^{\frac{\alpha\beta-1}{2\alpha}}h_n=0$ in $\eref{asym1}$ and $\lim\limits_{n\to\infty}n^{\frac{\alpha\beta-1}{2\alpha\beta}}h_n=0$ in $\eref{asym2}$ and $\eref{asym3}$. Clearly, if $h_n=O(n^{-\frac{1}{4}})$ and $nh_n\to \infty$ as $n\to\infty$, we can replace ${{\mathbb E}\,} T_n(h_n)$ in Theorem $\ref{thm2}$ by $\int_{\mathbb{R}}f^2(x)\, dx$.
\end{remark}
\bigskip
\section{Simulation}
We carry out a simulation study to examine properties of the kernel entropy estimator for the linear process $X=\{X_n:\, n\in\mathbb{N}\}$ defined in $\eref{def-Xn}$. Here we assume that the innovation $\varepsilon_1$ follows the standard symmetric $\alpha$-stable law $S_{\alpha}(1,0,0)$. Moreover, we take the usual normal kernel function $K(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$, the bandwidth $h_n=n^{-\frac{1}{5}}$ and coefficients
\[
a_i= \begin{cases} 1,
& \text { for } i=0\\
i^{-\beta},
& \text { for } i\geq 1\end{cases}.
\]
Since the innovations $\varepsilon_i$ are i.i.d. $S_{\alpha}(1,0,0)$ random variables with $0<\alpha<2$, we have $\phi_{\varepsilon}(u)=e^{-|u|^{\alpha}}$. Therefore, the characteristic function of the linear process $X=\{X_n:\, n\in\mathbb{N}\}$ can be written as
\[
\phi(u)={{\mathbb E}\,}[e^{\iota u X_n}]=\prod_{i=0}^{\infty}{{\mathbb E}\,}[e^{\iota u a_i \varepsilon_{n-i}}]=e^{-|u|^{\alpha}\sum\limits_{i=0}^{\infty}|a_i|^{\alpha}}.
\]
Now, by the Plancherel theorem, the quadratic functional of the linear process $X=\{X_n:\, n\in\mathbb{N}\}$ can be computed as
\[
\int_{\mathbb{R}} f^2(x)\, dx=\frac{1}{2\pi}\int_{\mathbb{R}} |\phi(\lambda)|^2\, d\lambda=\dfrac{1}{\pi \alpha \Big(2\sum\limits_{i=0}^{\infty}|a_i|^{\alpha}\Big)^{\frac{1}{\alpha}} }\Gamma(\frac{1}{\alpha}).
\]
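As an independent check of this closed form (an illustration, not part of the estimation procedure), the true value for $\alpha=0.5$, $\beta=2.5$ reported in Table 1 can be recovered by truncating the series $\sum_i |a_i|^{\alpha}$ and correcting with an integral tail estimate:

```python
import math
import numpy as np

alpha, beta = 0.5, 2.5
N = 10 ** 6
i = np.arange(1, N + 1, dtype=float)
# S = sum_{i>=0} |a_i|^alpha with a_0 = 1, a_i = i^{-beta}; the neglected
# tail sum_{i>N} i^{-alpha*beta} is approximated by its integral
S = (1.0 + (i ** (-alpha * beta)).sum()
     + N ** (1.0 - alpha * beta) / (alpha * beta - 1.0))
true_value = math.gamma(1.0 / alpha) / (math.pi * alpha * (2.0 * S) ** (1.0 / alpha))
# true_value agrees with the Table 1 entry 0.0051 to four decimal places
```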
We perform the simulation study by using the software MATLAB. Here, the sample sizes are $n=1000, 2000, 5000$ and we always simulate $N=1000$ times. In the following two tables, Mean, Var and Mse stand for the sample means, the sample variances and the sample mean squared errors, respectively.
The true values of $\int_{\mathbb{R}}f^2(x)\, dx$ and simulation results are summarized in the following two tables. Table 1 is for $\alpha=0.5$, while Table 2 is for $\alpha=1.5$. From these tables, we observe that
\begin{itemize}
\item[(i)] As $n$ increases, the estimated value of $\int_{\mathbb{R}} f^2(x)\, dx$ approaches the true value and the bias steadily decreases;
\item[(ii)] The estimator performs well when $\alpha\beta$ is close to $2$.
\end{itemize}
\begin{table}[htb] \label{table1}
\centering
\caption{$\alpha=0.5$} \medskip
\begin{tabular}{ccccccclccccc}
\hline
\multicolumn{3}{c}{\multirow{2}{*}{$\beta$}}&
\multicolumn{3}{c}{\multirow{2}{*}{True value}}& &
\multicolumn{1}{c}{$n$}& & 1000&2000&5000&\\
\multicolumn{3}{c}{}&\multicolumn{3}{c}{}& &\multicolumn{1}{c}{$h_n=n^{-1/5}$}& &0.2513 &0.2187&0.1821&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{2.5}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0051}}& &
Mean& & 0.0073& 0.0070& 0.0065&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.0099&0.0074&0.0046\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.0148&0.0111&0.0064&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{3.5}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0181}}& &
Mean& & 0.0184& 0.0185& 0.0182&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.0147&0.0080&0.0037&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.0148&0.0081&0.0037&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{3.9}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0219}}& &
Mean& & 0.0219& 0.0221& 0.0219&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.0146&0.0068&0.0028&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.0146&0.0068&0.0028&\\
\hline
\end{tabular}
\end{table}
\begin{table}[htb] \label{table2}
\centering
\caption{$\alpha=1.5$} \medskip
\begin{tabular}{ccccccclccccc}
\hline
\multicolumn{3}{c}{\multirow{2}{*}{$\beta$}}&
\multicolumn{3}{c}{\multirow{2}{*}{True value}}& &
\multicolumn{1}{c}{$n$}& & 1000&2000&5000&\\
\multicolumn{3}{c}{}&\multicolumn{3}{c}{}& &\multicolumn{1}{c}{$h_n=n^{-1/5}$}& &0.2513 &0.2187&0.1821&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{0.9}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0668}}& &
Mean& & 0.0738& 0.0728& 0.0705&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.1110&0.0713&0.0544\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.1601&0.1076&0.0679&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{1.1}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0840}}& &
Mean& & 0.0858& 0.0857& 0.0846&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.0777&0.0420&0.0247&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.0807&0.0447&0.0251&\\
\hline
\multicolumn{3}{c}{\multirow{3}{*}{1.3}}&\multicolumn{3}{c}{\multirow{3}{*}{0.0935}}& &
Mean& & 0.0939& 0.0940& 0.0935&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Var($\times 10^{-3}$)& & 0.0525&0.0268&0.0124&\\
\multicolumn{3}{c}{}& \multicolumn{3}{c}{}& &
Mse($\times 10^{-3}$)& & 0.0527&0.0270&0.0124&\\
\hline
\end{tabular}
\end{table}
\bigskip
\section{Proofs}
In this section, we will prove Theorems \ref{thm1} and \ref{thm2}. To begin with, we introduce the projection method and two lemmas. Lemma \ref{lem1} is on the characteristic function of innovations $\varepsilon_i$ and Lemma \ref{lem2} gives the desired estimation of the covariance
\[
{{\mathbb E}\,}\left[ (e^{\iota \lambda X_i}-\phi(\lambda))(e^{-\iota \lambda X_j}-\phi(-\lambda))\right].
\]
For each $i \in \mathbb{Z}$, let $\mathcal{F}_{i}$ be the $\sigma$-field generated by random variables $\left\{\varepsilon_{k}: k \leq i\right\}$. Given an integrable complex-valued random variable $Z$, we define the projection operator $\mathcal{P}_{i}$ as
\[
\mathcal{P}_{i} Z={{\mathbb E}\,}[Z|\mathcal{F}_{i}]-{{\mathbb E}\,}[Z|\mathcal{F}_{i-1}]
\]
for each $i \in \mathbb{Z}$. It is easy to see that ${{\mathbb E}\,}[\mathcal{P}_{i} Z\, \mathcal{P}_{j} W]=0$ whenever $i\neq j$, provided $\mathbb{E} |Z|^2<\infty$ and $\mathbb{E} |W|^2<\infty$.
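A toy example of this orthogonality (an illustration only, with hypothetical two-point innovations): take $\varepsilon_1,\varepsilon_2=\pm1$ with equal probability and $Z=W=\varepsilon_1\varepsilon_2+\varepsilon_1$; then $\mathcal{P}_1 Z=\varepsilon_1$, $\mathcal{P}_2 Z=\varepsilon_1\varepsilon_2$, and the cross moment vanishes by exact enumeration:

```python
from fractions import Fraction
from itertools import product

# eps_1, eps_2 = +-1 with probability 1/2 each; Z = eps_1*eps_2 + eps_1.
# P_1 Z = E[Z|eps_1] - E[Z] = eps_1, and P_2 Z = Z - E[Z|eps_1] = eps_1*eps_2.
cross_moment = Fraction(0)
for e1, e2 in product((-1, 1), repeat=2):
    p1z = e1            # projection onto the first innovation
    p2z = e1 * e2       # increment contributed by the second innovation
    cross_moment += Fraction(1, 4) * p1z * p2z
```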
\begin{lemma} \label{lem1}
If $\varepsilon$ is in the domain of attraction of an $\alpha$-stable law with $\alpha\in (0,2)$ and $\phi_{\varepsilon}(\lambda)$ is its characteristic function, then for any $\eta\in (0,\alpha)$, there exists a positive constant $c_{\alpha,\eta}$ such that
\[
{{\mathbb E}\,}\big|e^{ \iota \lambda\varepsilon}-\phi_{\varepsilon}(\lambda)\big|^2\leq c_{\alpha,\eta}\left(|\lambda|^{\alpha-\eta}\wedge 1\right).
\]
\end{lemma}
\noindent
{\it Proof}: By Theorem 2.6.5 in \cite{Ibragimov and Linnik-1971},
\[
|\phi_{\varepsilon}(\lambda)|^2
=e^{-c_1 |\lambda|^{\alpha}L(\lambda)(1+o(1))}
\]
as $\lambda\to 0$, where $L$ is a slowly varying function as $\lambda\to 0$. Therefore,
\[
{{\mathbb E}\,}|e^{ \iota \lambda \varepsilon_1}-\phi_{\varepsilon}(\lambda)|^2=1-\left|\phi_{\varepsilon}(\lambda)\right|^2\leq c_{\alpha,\eta}\left(|\lambda|^{\alpha-\eta}\wedge 1\right),
\]
where in the last inequality we used the fact that~$|1-e^{-x}|\leq x$~for~$x\geq 0$ and $\lim\limits_{\lambda\to 0}|\lambda|^{\eta}L(\lambda)=0$ for any $\eta>0$. {\hfill $\square$ \bigskip}
\begin{lemma} \label{lem2}
For any $1\leq i\neq j\leq n$ and $\eta\in(0,\alpha-\frac{1}{\beta})$, there exists a positive constant $c_{\eta}$ such that
\[
\bigg|{{\mathbb E}\,}\Big[(e^{\iota \lambda X_i}-\phi(\lambda))(e^{-\iota \lambda X_j}-\phi(-\lambda))\Big]\bigg|\leq \frac{c_{\eta}}{1+|\lambda|^4} |i-j|^{1-(\alpha-\eta)\beta}.
\]
\end{lemma}
\noindent
{\it Proof}: For any $i\geq 1$,
\begin{align}\label{decomposition}
e^{\iota \lambda X_i}-\phi(\lambda)
&=\sum^{i}_{k=-\infty}\mathcal{P}_{k}(e^{\iota \lambda X_i}-\phi(\lambda)) \nonumber \\
&=\sum^{i}_{k=-\infty}\left(\prod^{i-k-1}_{\ell=0}\phi_{\varepsilon}(\lambda a_{\ell})\right)\left(e^{\iota \lambda a_{i-k} \varepsilon_k}-\phi_{\varepsilon}(\lambda a_{i-k})\right) e^{\iota \lambda\sum\limits^{\infty}_{\ell=i-k+1}a_{\ell}\varepsilon_{i-\ell}}.
\end{align}
It suffices to consider the case $i>j$.
Using the decomposition \eref{decomposition}, we can obtain that
\begin{align*}
{{\mathbb E}\,}\left[ (e^{\iota \lambda X_i}-\phi(\lambda))(e^{-\iota \lambda X_j}-\phi(-\lambda))\right]=\sum^{j}_{k=-\infty} \text{I}_1\times \text{I}_2 \times \text{I}_3,
\end{align*}
where
\begin{align*}
\text{I}_1&=\prod^{i-k-1}\limits_{\ell=0}\phi_{\varepsilon}(\lambda a_{\ell})\prod\limits^{j-k-1}_{\ell=0}\phi_{\varepsilon}(-\lambda a_{\ell}),\\
\text{I}_2&=\mathbb{E}\big[(e^{\iota \lambda a_{i-k} \varepsilon_k}-\phi_{\varepsilon}(\lambda a_{i-k}))(e^{-\iota \lambda a_{j-k} \varepsilon_k}-\phi_{\varepsilon}(-\lambda a_{j-k}))\big],\\
\text{I}_3&=\prod^{\infty}_{\ell=1}\phi_{\varepsilon}(\lambda(a_{i-k+\ell}-a_{j-k+\ell}))
\end{align*}
with the convention $\prod\limits^{-1}_{\ell=0}\phi_{\varepsilon}(\lambda a_{\ell})=1$.
By Assumption ({\bf A1}), there exists $m_0\in{\mathbb N}$ such that $\prod\limits^{m_0}_{\ell=0}\left|\phi_{\varepsilon}(\lambda a_{\ell})\right|$ is less than a constant multiple of $\frac{1}{1+|\lambda |^6}$. Hence, for the case $i-j>m_0$ or $k<j-m_0$, $|\text{I}_1|$ is less than a constant multiple of $\frac{1}{1+|\lambda |^6}$. Moreover, for the case $1\leq i-j\leq m_0$ and $j-m_0\leq k\leq j$, $\lim\limits_{n\to\infty}n^{\beta}a_n=c_0$ implies that there exist infinitely many $\ell (\geq 1)$ such that
\[
a_{i-k+\ell}-a_{j-k+\ell}=a_{i-j+j-k+\ell}-a_{j-k+\ell}\neq 0.
\]
So, by Assumption ({\bf A1}), $|\text{I}_3|$ is less than a constant multiple of $\frac{1}{1+|\lambda |^6}$ in the case $1\leq i-j\leq m_0$ and $j-m_0\leq k\leq j$.
Therefore, using the Cauchy-Schwarz inequality and Lemma \ref{lem1} with $\eta\in(0, \alpha-\frac{1}{\beta})$,
\begin{align*}
&\Big|{{\mathbb E}\,}\left[ (e^{\iota \lambda X_i}-\phi(\lambda))(e^{-\iota \lambda X_j}-\phi(-\lambda))\right]\Big|\\
&\leq \frac{c_1}{1+|\lambda |^6}\sum^{i\wedge j}_{k=-\infty}\Big|\mathbb{E}\big[(e^{\iota \lambda a_{i-k} \varepsilon_k}-\phi_{\varepsilon}(\lambda a_{i-k}))(e^{-\iota \lambda a_{j-k} \varepsilon_k}-\phi_{\varepsilon}(-\lambda a_{j-k}))\big]\Big|\\
&\leq \frac{c_2}{1+|\lambda |^6}|\lambda|^{\alpha-\eta}\sum^{i\wedge j}_{k=-\infty}|a_{i-k}|^{\frac{\alpha-\eta}{2}}|a_{j-k}|^{\frac{\alpha-\eta}{2}}\\
&\leq \frac{c_3}{1+|\lambda |^4} \sum^{\infty}_{\ell=1}\big(|i-j|+\ell\big)^{-\frac{(\alpha-\eta)\beta}{2}}\ell^{-\frac{(\alpha-\eta)\beta}{2}} \\
&\leq \frac{c_4}{1+|\lambda |^4} \, |i-j|^{1-(\alpha-\eta)\beta}\int^{\infty}_0(1+x)^{-\frac{(\alpha-\eta)\beta}{2}}x^{-\frac{(\alpha-\eta)\beta}{2}}dx \\
&\leq \frac{c_5}{1+|\lambda |^4}\, |i-j|^{1-(\alpha-\eta)\beta}.
\end{align*}
This gives the desired estimate. {\hfill $\square$ \bigskip}
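As a numerical illustration of the last two inequalities (not part of the proof): since the summand $(|i-j|+\ell)^{-a}\ell^{-a}$, with $a=\frac{(\alpha-\eta)\beta}{2}$, is decreasing in $\ell$, the series is bounded by the corresponding integral, which scales as $|i-j|^{1-2a}$. The sketch below uses the arbitrary test value $a=0.8$:

```python
# Illustrative check (not part of the proof): for 1/2 < a < 1,
#   sum_{l>=1} (d + l)**(-a) * l**(-a)
# is bounded by d**(1-2a) * int_0^inf (1+x)**(-a) x**(-a) dx, because the
# summand is decreasing in l.  Here a stands in for (alpha-eta)*beta/2 and
# a = 0.8 is an arbitrary test value.

def tail_sum(d, a, terms=300_000):
    return sum((d + l) ** (-a) * l ** (-a) for l in range(1, terms + 1))

a = 0.8
ratios = [tail_sum(d, a) / d ** (1 - 2 * a) for d in (10, 100, 1000)]
print(ratios)
# The ratios stay below the limiting constant
#   Gamma(1-a)*Gamma(2a-1)/Gamma(a)  (about 5.87 for a = 0.8),
# confirming the d**(1-2a) decay of the sum.
```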
Now we give the proof of Theorem \ref{thm1}.
\noindent
{\it Proof of Theorem \ref{thm1}.} The proof proceeds in several steps.
\noindent
{\bf Step 1.} We estimate $\big|{{\mathbb E}\,} T_n(h_n)-\int_{\mathbb{R}}f^2(x)\, dx\big|$. Using the inverse Fourier transform, we obtain
\begin{align*}
&{{\mathbb E}\,} T_n(h_n)-\int_{{\mathbb R}} f^2(x)\, dx\notag\\
&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n}\int_{{\mathbb R}}\widehat{K}(\lambda h_n) \, {{\mathbb E}\,}\big[ (e^{\iota \lambda X_i}-\phi(\lambda))(e^{-\iota \lambda X_j}-\phi(-\lambda))\big]\, d\lambda\\
&\qquad\qquad\qquad+\frac{1}{2\pi}\int_{{\mathbb R}}\big(\widehat{K}(\lambda h_n)-\widehat{K}(0)\big)|\phi(\lambda)|^2\, d\lambda\\
&=: \text{II}_1+\text{II}_2.
\end{align*}
By Lemma \ref{lem2} and the boundedness of $\widehat{K}$, we can obtain that
\begin{align*}
|\text{II}_1|\leq \frac{c_1}{n^2} \sum_{1\leq j<i\leq n}|i-j|^{1-(\alpha-\eta)\beta}\leq c_2\, n^{1-(\alpha-\eta)\beta}.
\end{align*}
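As a numerical sanity check of this bound (illustrative only; $\gamma=1.5$ below is an arbitrary stand-in for $(\alpha-\eta)\beta\in(1,2)$):

```python
# Numerical sanity check of the bound above: with gamma in (1, 2),
#   (1/n**2) * sum_{1<=j<i<=n} (i-j)**(1-gamma)  <=  c * n**(1-gamma).
# Grouping the pairs by the gap d = i - j (there are n - d of them) turns
# the double sum into an O(n) computation.  gamma = 1.5 is a test value.
gamma = 1.5

def normalized_sum(n):
    s = sum((n - d) * d ** (1 - gamma) for d in range(1, n))
    return s / n ** 2

ratios = [normalized_sum(n) / n ** (1 - gamma) for n in (100, 1000, 4000)]
print(ratios)
# The ratios increase toward 4/3 (the limiting constant for gamma = 1.5)
# and stay bounded, confirming the n**(1-gamma) rate.
```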
Since $K$ is a symmetric and bounded function with $\int_{{\mathbb R}} K(u)\, du=1$ and $\int_{{\mathbb R}}u^2 |K(u)|\, du<\infty$,
\begin{align*}
|\widehat{K}(\lambda h_n)-\widehat{K}(0)|=\left|\int_{{\mathbb R}} (e^{\iota\lambda h_n u}-1)K(u) \, du\right|\leq |\lambda|^2 h^2_n \int_{{\mathbb R}} u^2 |K(u)|\, du.
\end{align*}
Then, by Assumption ({\bf A1}), $|\text{II}_2|$ is less than a constant multiple of $h^2_n$. Hence
\begin{align*}
\Big| {{\mathbb E}\,} T_n(h_n)-\int_{{\mathbb R}} f^2(x)\, dx \Big|\leq c_3\left(n^{1-(\alpha-\eta)\beta}+h^{2}_n\right).
\end{align*}
{\bf Step 2.} We give the decomposition for $T_n(h_n)-{{\mathbb E}\,} T_n(h_n)$. It is easy to see that
\begin{align} \label{decomp}
T_n(h_n)-{{\mathbb E}\,} T_n(h_n)=2N_n+A_{n}-{{\mathbb E}\,} A_{n}+B_{n}-{{\mathbb E}\,} B_{n},
\end{align}
where
\begin{align*}
N_n&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n,\, i-j>m}\int_{{\mathbb R}} \widehat{K}(\lambda h_n) \big(e^{\iota \lambda X_i}-\phi(\lambda) \big) \phi(-\lambda) \, d\lambda,
\end{align*}
\begin{align*}
A_{n}&=\frac{1}{n(n-1)h_n} \sum_{1\le j<i\le n,\, i-j\leq m}K\left(\frac{X_i-X_j}{h_n}\right),
\end{align*}
\begin{align*}
B_{n}&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n,\, i-j>m}\int_{{\mathbb R}} \widehat{K}(\lambda h_n)\big(e^{\iota \lambda X_i}-\phi(\lambda) \big)\big(e^{-\iota \lambda X_j}-\phi(-\lambda) \big) \, d\lambda
\end{align*}
and the proper choice of the natural number $m$ will be specified in {\bf Step 4.}
{\bf Step 3.} We estimate ${{\mathbb E}\,} |A_{n}-\mathbb{E} A_{n}|^2$. For $1\leq j<i\leq n$, we observe that
\[
X_i-X_j=\sum^{\infty}_{\ell=0} a^{i,j}_{\ell}\varepsilon_{i-\ell},
\]
where
\begin{equation} \label{aij}
a^{i,j}_{\ell}= \begin{cases} a_{\ell},
& \text { for } 0\leq\ell\leq i-j-1,\\
a_{\ell}-a_{\ell-(i-j)},
& \text { for } \ell\geq i-j.\end{cases}
\end{equation}
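The decomposition is a simple re-indexing of the two moving-average series; the following sketch checks it numerically with a truncated coefficient sequence and arbitrary test values (not the actual model):

```python
import random

# Numerical illustration of X_i - X_j = sum_l a^{i,j}_l eps_{i-l}.
# The coefficient sequence is truncated at L terms (a_l = 0 for l >= L) and
# a_l = (l+1)**(-beta) is an arbitrary test choice, so this only checks the
# re-indexing identity, not the actual model.
beta, L = 1.2, 60
a = [(l + 1) ** (-beta) for l in range(L)]

def coef(l):
    """a_l, with the truncation convention a_l = 0 outside [0, L)."""
    return a[l] if 0 <= l < L else 0.0

random.seed(0)
eps = {t: random.gauss(0.0, 1.0) for t in range(-2 * L, 10)}  # innovations

def X(t):
    return sum(a[l] * eps[t - l] for l in range(L))

i, j = 7, 3
d = i - j
lhs = X(i) - X(j)
# a^{i,j}_l = a_l for l < i-j, and a_l - a_{l-(i-j)} for l >= i-j
rhs = sum(coef(l) * eps[i - l] for l in range(d)) \
    + sum((coef(l) - coef(l - d)) * eps[i - l] for l in range(d, L + d))
print(abs(lhs - rhs))  # zero up to floating-point rounding
```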
For $1\leq j<i\leq n$ with $i-j\leq m$, it is easy to see that there exists $m_1\in\mathbb{N}$ such that $m_1\geq m$ and
\begin{align} \label{m1}
\prod\limits^{m_1}_{\ell=0}|\phi_{\varepsilon}(a^{i,j}_{\ell}\lambda)|\leq \frac{c_4}{1+|\lambda|^4}.
\end{align}
Let
\[
A^{i,j}_n=\frac{1}{n(n-1)h_n}\left(K\Big(\frac{X_{i}-X_{j}}{h_n}\Big)-\mathbb{E}K\Big(\frac{X_{i}-X_{j}}{h_n}\Big)\right).
\]
Then
\begin{align} \label{An}
&{{\mathbb E}\,} |A_{n}-\mathbb{E} A_{n}|^2 \nonumber\\
&=\sum_{\substack{ 1\leq j_1<i_1\leq n,\, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m, \, i_2-j_2\leq m,\, |i_2-i_1|\leq 2m_1}} \mathbb{E}[A^{i_1, j_1}_nA^{i_2, j_2}_n]+\sum_{\substack{ 1\leq j_1<i_1\leq n, \, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m, \, i_2-j_2\leq m, \, |i_2-i_1|>2m_1}} \mathbb{E}[A^{i_1, j_1}_nA^{i_2,j_2}_n].
\end{align}
Boundedness of the kernel function $K$ implies that
\begin{align}\label{An1}
\sum_{\substack{ 1\leq j_1<i_1\leq n, \, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m,\, i_2-j_2\leq m, \, |i_2-i_1|\leq 2m_1}} \big|\mathbb{E}[A^{i_1,j_1}_nA^{i_2, j_2}_n]\big|\leq \frac{c_5}{n^3h^2_n}.
\end{align}
To estimate
\[
\sum_{\substack{ 1\leq j_1<i_1\leq n, \, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m, \, i_2-j_2\leq m, \, |i_2-i_1|>2m_1}} \mathbb{E}[A^{i_1, j_1}_nA^{i_2, j_2}_n],
\]
it suffices to consider the case $i_2-i_1>2m_1$.
Applying the projection operator $\mathcal{P}_k$ to the terms $A^{i,j}_n$, we see that
\begin{align*}
\mathbb{E}[A^{i_1,j_1}_nA^{i_2,j_2}_n]
&=\sum^{i_1}_{k=-\infty}\mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n] \nonumber\\
&=\sum^{i_1}_{k=i_1-m_1}\mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n]+\sum^{i_1-m_1-1}_{k=-\infty}\mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n].
\end{align*}
By the inverse Fourier transform and the boundedness of $\widehat{K}$,
\begin{align} \label{|Aij|}
&|\mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n]| \nonumber\\
=&\Big|\frac{1}{4\pi^2n^2(n-1)^2}\int_{\mathbb{R}^2} \widehat{K}(\lambda_1h_n)\widehat{K}(\lambda_2h_n)\nonumber\\
&\quad\times \mathbb{E}\big[\mathcal{P}_k (e^{\iota\lambda_1(X_{i_1}-X_{j_1})}-\mathbb{E}e^{\iota\lambda_1(X_{i_1}-X_{j_1})})\mathcal{P}_k (e^{\iota\lambda_2(X_{i_2}-X_{j_2})}-\mathbb{E}e^{\iota\lambda_2(X_{i_2}-X_{j_2})}) \big]\, d\lambda_1 d\lambda_2\Big|\nonumber\\
\leq &\frac{c_6}{n^4}\int_{\mathbb{R}^2} \left|\mathbb{E}\big[\mathcal{P}_k (e^{\iota\lambda_1(X_{i_1}-X_{j_1})}-\mathbb{E}e^{\iota\lambda_1(X_{i_1}-X_{j_1})})\mathcal{P}_k (e^{\iota\lambda_2(X_{i_2}-X_{j_2})}-\mathbb{E}e^{\iota\lambda_2(X_{i_2}-X_{j_2})}) \big]\right|\, d\lambda_1 d\lambda_2.
\end{align}
In the case $i_1-m_1\leq k\leq i_1$, recalling the choice of $m_1$ in $\eref{m1}$ and using $i_2-i_1>2m_1$, we see that
\[
\left|\mathbb{E}\big[\mathcal{P}_k (e^{\iota\lambda_1(X_{i_1}-X_{j_1})}-\mathbb{E}e^{\iota\lambda_1(X_{i_1}-X_{j_1})})\mathcal{P}_k (e^{\iota\lambda_2(X_{i_2}-X_{j_2})}-\mathbb{E}e^{\iota\lambda_2(X_{i_2}-X_{j_2})}) \big]\right|
\]
in $\eref{|Aij|}$ is less than a constant multiple of
\[
\left|\mathbb{E}\big[(e^{\iota\lambda_1a^{i_1,j_1}_{i_1-k}\varepsilon_k}-\phi_{\varepsilon}(\lambda_1a^{i_1,j_1}_{i_1-k}))(e^{\iota\lambda_2a^{i_2,j_2}_{i_2-k}\varepsilon_k}-\phi_{\varepsilon}(\lambda_2a^{i_2,j_2}_{i_2-k}))\big] \right|\frac{1}{1+|\lambda_2|^4}\prod^{\infty}_{\ell=m_1+1}|\phi_{\varepsilon}(a^{i_1,j_1}_{\ell}\lambda_1+a^{i_2,j_2}_{\ell}\lambda_2)|.
\]
According to Assumption ({\bf A1}) and $1\leq i_{\theta}-j_{\theta}\leq m$ for $\theta=1,2$, it is easy to see that
\[
\int_{\mathbb{R}}\prod^{\infty}_{\ell=m_1+1}|\phi_{\varepsilon}(a^{i_1,j_1}_{\ell}\lambda_1+a^{i_2,j_2}_{\ell}\lambda_2)|\, d\lambda_1\leq M,
\]
where $M$ is a finite positive number independent of $\lambda_2, i_1, j_1, i_2, j_2$.
In the case $k\leq i_1-m_1-1$, recalling the choice of $m_1$ in $\eref{m1}$ and using $i_2-i_1>2m_1$, we see that
\[
\left|\mathbb{E}\big[\mathcal{P}_k (e^{\iota\lambda_1(X_{i_1}-X_{j_1})}-\mathbb{E}e^{\iota\lambda_1(X_{i_1}-X_{j_1})})\mathcal{P}_k (e^{\iota\lambda_2(X_{i_2}-X_{j_2})}-\mathbb{E}e^{\iota\lambda_2(X_{i_2}-X_{j_2})}) \big]\right|
\]
in $\eref{|Aij|}$ is less than a constant multiple of
\[
\left|\mathbb{E}\big[(e^{\iota\lambda_1a^{i_1,j_1}_{i_1-k}\varepsilon_k}-\phi_{\varepsilon}(\lambda_1a^{i_1,j_1}_{i_1-k}))(e^{\iota\lambda_2a^{i_2,j_2}_{i_2-k}\varepsilon_k}-\phi_{\varepsilon}(\lambda_2a^{i_2,j_2}_{i_2-k}))\big] \right|\frac{1}{1+|\lambda_1|^4}\frac{1}{1+|\lambda_2|^4}.
\]
Therefore, by using the Cauchy--Schwarz inequality, Lemma $\ref{lem1}$ and the definition of $a^{i,j}_{\ell}$ in $\eref{aij}$, we obtain
\begin{align*}
\sum\limits^{i_1}_{k=i_1-m_1}\left| \mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n]\right|
&\leq \frac{c_7}{n^4}\sum\limits^{i_1}_{k=i_1-m_1} \int_{\mathbb{R}^2}
|a^{i_2,j_2}_{i_2-k}\lambda_2|^{\frac{\alpha-\eta}{2}}\frac{1}{1+|\lambda_2|^4}\prod^{\infty}_{\ell=m_1+1}|\phi_{\varepsilon}(a^{i_1,j_1}_{\ell}\lambda_1+a^{i_2,j_2}_{\ell}\lambda_2)|\, d\lambda_1d\lambda_2\\
&\leq \frac{c_8}{n^4}\sum\limits^{i_1}_{k=i_1-m_1} \int_{\mathbb{R}}
|a^{i_2,j_2}_{i_2-k}\lambda_2|^{\frac{\alpha-\eta}{2}}\frac{1}{1+|\lambda_2|^4}\, d\lambda_2\\
&\leq \frac{c_9}{n^4} \sum\limits^{i_1}_{k=i_1-m_1} |a^{i_2,j_2}_{i_2-k}|^{\frac{\alpha-\eta}{2}}\\
&\leq \frac{c_{10}}{n^4} |i_2-i_1|^{-\frac{(\alpha-\eta)\beta}{2}}
\end{align*}
and
\begin{align*}
\sum^{i_1-m_1-1}_{k=-\infty}\left|\mathbb{E}[\mathcal{P}_kA^{i_1,j_1}_n\mathcal{P}_kA^{i_2,j_2}_n]\right|
&\leq \frac{c_{11}}{n^4}\sum^{i_1-m_1-1}_{k=-\infty}\int_{\mathbb{R}^2} |a^{i_1,j_1}_{i_1-k}\lambda_1|^{\frac{\alpha-\eta}{2}}|a^{i_2,j_2}_{i_2-k}\lambda_2|^{\frac{\alpha-\eta}{2}}\frac{1}{1+|\lambda_1|^4}\frac{1}{1+|\lambda_2|^4} \, d\lambda_1d\lambda_2\\
&\leq \frac{c_{12}}{n^4}\sum^{i_1-m_1-1}_{k=-\infty}|a^{i_1,j_1}_{i_1-k}|^{\frac{\alpha-\eta}{2}}|a^{i_2,j_2}_{i_2-k}|^{\frac{\alpha-\eta}{2}}\\
&\leq \frac{c_{13}}{n^4}\sum^{i_1-m_1-1}_{k=-\infty}|a_{i_1-k}|^{\frac{\alpha-\eta}{2}}|a_{i_2-k}|^{\frac{\alpha-\eta}{2}}\\
&\leq \frac{c_{14}}{n^4}|i_2-i_1|^{1-(\alpha-\eta)\beta}.
\end{align*}
Hence
\begin{align} \label{An2}
&\sum_{\substack{ 1\leq j_1<i_1\leq n, \, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m,\, i_2-j_2\leq m, \, |i_2-i_1|>2m_1}} \left|\mathbb{E}[A^{i_1,j_1}_nA^{i_2,j_2}_n]\right| \nonumber\\
&\leq \frac{c_{15}}{n^4}\sum_{\substack{ 1\leq j_1<i_1\leq n, \, 1\leq j_2<i_2\leq n;\\ i_1-j_1\leq m,\, i_2-j_2\leq m,\, |i_2-i_1|>2m_1}} (|i_2-i_1|^{-\frac{(\alpha-\eta)\beta}{2}}
+|i_2-i_1|^{1-(\alpha-\eta)\beta})\nonumber\\
&\leq \frac{c_{16}}{n^{1+(\alpha-\eta)\beta}}.
\end{align}
Combining $(\ref{An})$, $(\ref{An1})$ and $(\ref{An2})$ gives
\begin{align} \label{Ane}
{{\mathbb E}\,} |A_{n}-\mathbb{E} A_{n}|^2\leq c_{17}\left(\frac{1}{n^3h^2_n}+\frac{1}{n^{1+(\alpha-\eta)\beta}}\right).
\end{align}
{\bf Step 4.} We estimate ${{\mathbb E}\,}|B_{n}-\mathbb{E} B_{n}|$. For each $i\in\mathbb{N}$ and $\lambda\in\mathbb{R}$, define
\[
H(X_i)(\lambda)=e^{\iota \lambda X_{i}}-\phi(\lambda).
\]
Then
\begin{align} \label{B}
B_{n}=B_{n,1}+B_{n,2}+B_{n,3},
\end{align}
where
\begin{align*}
B_{n,1}
&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n,\, i-j>m}\int_{{\mathbb R}} \widehat{K}(\lambda h_n)\sum^{j}_{k=-\infty}\mathcal{P}_{k} H(X_i)(\lambda)\mathcal{P}_{k}H(X_j)(-\lambda) \, d\lambda,\\
B_{n,2}
&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n, \, i-j>m}\int_{{\mathbb R}} \widehat{K}(\lambda h_n)\sum^{}_{\substack{k\leq i,\, \ell\le j,\, k\neq \ell\\ i-k\leq m_0,\, j-\ell\leq m_0}}\mathcal{P}_{k}H(X_i)(\lambda)\mathcal{P}_{\ell}H(X_j)(-\lambda)\, d\lambda,\\
B_{n,3}
&=\frac{1}{\pi n(n-1)}\sum_{1\leq j<i\leq n, \, i-j>m}\int_{{\mathbb R}} \widehat{K}(\lambda h_n)\sum^{}_{\substack{ k\leq i,\, \ell\le j,\, k\neq \ell\\ i-k> m_0\, \text{or}\, j-\ell>m_0}}\mathcal{P}_{k}H(X_i)(\lambda)\mathcal{P}_{\ell}H(X_j)(-\lambda)\, d\lambda.
\end{align*}
Using arguments similar to those in {\bf Step 1}, we can show that
\begin{align} \label{Bn1e}
\mathbb{E}|B_{n,1}|\leq c_{18}\, n^{1-(\alpha-\eta)\beta}.
\end{align}
Note that
\begin{align} \label{Bn2}
&\mathbb{E}|B_{n,2}|^2 \nonumber\\
&\leq \frac{c_{19}}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n, \, i_{\theta}-j_{\theta}>m, \, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0, \, \theta=1,2}}\int_{{\mathbb R}^2} \big| \widehat{K}(\lambda_1 h_n)\big|\big|\widehat{K}(\lambda_2 h_n)\big| \nonumber\\
&\quad\times \left| \mathbb{E}\big[\mathcal{P}_{k_1}H(X_{i_1})(\lambda_1) \mathcal{P}_{\ell_1}H(X_{j_1})(-\lambda_1) \mathcal{P}_{k_2}H(X_{i_2})(\lambda_2) \mathcal{P}_{\ell_2}H(X_{j_2})(-\lambda_2) \big]\right| \, d\lambda_1d\lambda_2.
\end{align}
In the sequel, we will estimate the expectation
\begin{align} \label{expectation}
\mathbb{E}\big[\mathcal{P}_{k_1}H(X_{i_1})(\lambda_1) \mathcal{P}_{\ell_1}H(X_{j_1})(-\lambda_1) \mathcal{P}_{k_2}H(X_{i_2})(\lambda_2) \mathcal{P}_{\ell_2}H(X_{j_2})(-\lambda_2) \big]
\end{align}
and specify the choice of $m$. Assume that $m$ is larger than $4m_0$. Then there are four possibilities for the orderings of $i_1, j_1, i_2,j_2$:
\[
(1)\; i_1\geq i_2>j_1\geq j_2, \quad (2)\; i_1\geq i_2>j_2\geq j_1,\quad (3)\; i_2\geq i_1>j_1\geq j_2, \quad (4)\; i_2\geq i_1>j_2\geq j_1.
\]
By symmetry, it suffices to consider the first two cases. In the first case $i_1\geq i_2>j_1\geq j_2$, the expectation $\eref{expectation}$ is equal to zero if $k_1\neq k_2$. When $k_1=k_2$, $0\leq i_1-k_1\leq m_0$ and $0\leq i_2-k_2\leq m_0$ imply $0\leq i_1-i_2\leq m_0$. If $m>m_0+m_2$, then there is a factor
\[
\prod^{i_2-m_0-1}_{q=i_2-m_0-m_2}\phi_{\varepsilon}(a_{i_1-q}\lambda_1+a_{i_2-q}\lambda_2)
\]
in the expectation $\eref{expectation}$. By Assumption ({\bf A1}), we can choose $m_2\in\mathbb{N}$ independent of $i_2$ such that
\begin{align*}
\prod^{i_2-m_0-1}_{q=i_2-m_0-m_2}\left|\phi_{\varepsilon}(a_{i_1-q}\lambda_1+a_{i_2-q}\lambda_2)\right|
&\leq \prod^{m_0+m_2}_{p=m_0+1}\frac{c_{20}}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^{\delta}}\\
&\leq \sum^{m_0+m_2}_{p=m_0+1}\frac{c_{21}}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4},
\end{align*}
where in the last inequality we used $\prod\limits^{m_2}_{k=1} x_k\leq \sum\limits^{m_2}_{k=1} x^{m_2}_k$ for any $x_1,\dots, x_{m_2}\geq 0$.
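This elementary inequality is the AM--GM inequality applied to $x^{m_2}_1,\dots,x^{m_2}_{m_2}$; a randomized spot-check:

```python
import math
import random

# Randomized spot-check of prod_{k=1}^{m} x_k <= sum_{k=1}^{m} x_k**m for
# nonnegative x_k.  By AM-GM applied to x_1**m, ..., x_m**m,
#   prod x_k = (prod x_k**m)**(1/m) <= (1/m) * sum x_k**m <= sum x_k**m.
random.seed(1)
for _ in range(1000):
    m = random.randint(1, 8)
    xs = [random.uniform(0.0, 3.0) for _ in range(m)]
    assert math.prod(xs) <= sum(x ** m for x in xs) + 1e-9
print("product-vs-sum inequality holds on all random samples")
```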
Moreover, if $|\ell_1-\ell_2|>2m_0+m_3+m_4$, then there is another factor
\[
\prod^{j_1-m_0-m_3}_{q=j_1-m_0-m_3-m_4}\phi_{\varepsilon}(a_{i_1-q}\lambda_1+a_{i_2-q}\lambda_2-a_{j_1-q}\lambda_1)=\prod^{m_0+m_3+m_4}_{p=m_0+m_3}\phi_{\varepsilon}((a_{i_1-j_1+p}-a_p)\lambda_1+a_{i_2-j_1+p}\lambda_2)
\]
in the expectation $\eref{expectation}$.
By Assumption ({\bf A1}), $\lim\limits_{n\to\infty}n^{\beta}a_n=c_0$, $|\ell_1-\ell_2|>2m_0+m_3+m_4$ and $0\leq j_{\theta}-\ell_{\theta}\leq m_0$ for $\theta=1,2$, we can choose $m_3, m_4\in\mathbb{N}$ independent of $i_1,i_2,j_1$ such that
\[
\prod^{j_1-m_0-m_3}_{q=j_1-m_0-m_3-m_4}\left|\phi_{\varepsilon}(a_{i_1-q}\lambda_1+a_{i_2-q}\lambda_2-a_{j_1-q}\lambda_1)\right|\leq \sum^{m_0+m_3+m_4}_{q=m_0+m_3}\frac{c_{22}}{1+|(a_{i_1-i_2+i_2-j_1+q}-a_q)\lambda_1+a_{i_2-j_1+q}\lambda_2|^4}.
\]
Note that
\[
i_2-j_1= i_1-j_1-(i_1-i_2)>m-m_0.
\]
So we can choose $m\in\mathbb{N}$ large enough such that
\begin{align} \label{lambda1}
|a_{i_1-i_2+i_2-j_1+q}-a_q|\geq \frac{1}{2}|a_q|
\end{align}
and
\begin{align} \label{lambda2}
\left|\det\left(\left(\begin{array}{ll}
a_{i_1-i_2+p} & a_{p} \\
a_{i_1-i_2+i_2-j_1+q}-a_q & a_{i_2-j_1+q}
\end{array}\right)\right)\right|\geq \frac{1}{4}|a_pa_q|>0
\end{align}
for all $0\leq i_1-i_2\leq m_0$, $m_0+1\leq p\leq m_0+m_2$, $m_0+m_3\leq q\leq m_0+m_3+m_4$.
So for $m$ large enough, in the first case $i_1\geq i_2>j_1\geq j_2$, the right hand side of $\eref{Bn2}$ is less than a constant multiple of
\[
\text{III}_1+\text{III}_2,
\]
where
\begin{align*}
\text{III}_1&=\frac{1}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m,\, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}}\int_{{\mathbb R}^2} \big| \widehat{K}(\lambda_1 h_n)\big|\big|\widehat{K}(\lambda_2 h_n)\big|\\
&\qquad\qquad\times 1_{\{k_1=k_2, \, |\ell_1-\ell_2|\leq 2m_0+m_3+m_4\}}\sum^{m_0+m_2}_{p=m_0+1}\frac{1}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4}\, d\lambda_1d\lambda_2
\end{align*}
and
\begin{align*}
\text{III}_2&=\frac{1}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m, \, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}}\int_{{\mathbb R}^2}1_{\{k_1=k_2,\, |\ell_1-\ell_2|>2m_0+m_3+m_4\}}\sum^{m_0+m_2}_{p=m_0+1}\frac{1}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4}\\
&\qquad\qquad\times \sum^{m_0+m_3+m_4}_{q=m_0+m_3}\frac{1}{1+|(a_{i_1-i_2+i_2-j_1+q}-a_q)\lambda_1+a_{i_2-j_1+q}\lambda_2|^4}\\
&\qquad\qquad\qquad\times \bigg(1_{\{\ell_1<\ell_2\}} |\lambda_1a_{i_1-\ell_2}+\lambda_2a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-\ell_1}+\lambda_2a_{i_2-\ell_1}-\lambda_2a_{j_2-\ell_1}|^{\frac{\alpha-\eta}{2}}\\
&\qquad\qquad\qquad\qquad+1_{\{\ell_1>\ell_2\}} |\lambda_1a_{i_1-\ell_1}+\lambda_2a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-\ell_2}+\lambda_2a_{i_2-\ell_2}-\lambda_2a_{j_1-\ell_2}|^{\frac{\alpha-\eta}{2}}\bigg) d\lambda_1\, d\lambda_2.
\end{align*}
Clearly,
\begin{align*}
\text{III}_1
&\leq \frac{1}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m,\, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}}\int_{{\mathbb R}^2} \Big(\big| \widehat{K}(\lambda_1 h_n)\big|^2+\big|\widehat{K}(\lambda_2 h_n)\big|^2\Big)1_{\{k_1=k_2, \, |\ell_1-\ell_2|\leq 2m_0+m_3+m_4\}}\\
&\qquad\qquad\qquad\times \sum^{m_0+m_2}_{p=m_0+1}\frac{1}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4}\, d\lambda_1d\lambda_2\\
&\leq \frac{c_{23}}{n^4}\Big(\int_{{\mathbb R}} \big| \widehat{K}(\lambda_1 h_n)\big|^2\, d\lambda_1+\int_{{\mathbb R}} \big| \widehat{K}(\lambda_2 h_n)\big|^2\, d\lambda_2\Big)\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m,\, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}}1_{\{k_1=k_2, \, |\ell_1-\ell_2|\leq 2m_0+m_3+m_4\}}\\
&\leq \frac{c_{24}}{n^2h_n},
\end{align*}
where we used the Plancherel theorem for the kernel function $K$ in the last inequality.
Moreover,
\begin{align*}
\text{III}_2
&\leq \frac{2}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m, \, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}} 1_{\{k_1=k_2,\, |\ell_1-\ell_2|>2m_0+m_3+m_4\}}\int_{{\mathbb R}^2}(|\lambda_1|^{\alpha-\eta}+|\lambda_2|^{\alpha-\eta})\\
&\quad\times \sum^{m_0+m_2}_{p=m_0+1}\frac{1}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4}\sum^{m_0+m_3+m_4}_{q=m_0+m_3}\frac{1}{1+|(a_{i_1-i_2+i_2-j_1+q}-a_q)\lambda_1+a_{i_2-j_1+q}\lambda_2|^4}\\
&\qquad\times \bigg(1_{\{\ell_1<\ell_2\}} (|a_{i_1-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}})(|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{j_2-\ell_1}|^{\frac{\alpha-\eta}{2}})\\
&\qquad\quad+1_{\{\ell_1>\ell_2\}} (|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}})(|a_{i_1-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{j_1-\ell_2}|^{\frac{\alpha-\eta}{2}})\bigg) d\lambda_1d\lambda_2\\
&\leq \frac{c_{25}}{n^4}\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m, \, k_{\theta}\neq \ell_{\theta}\\ 0\leq i_{\theta}-k_{\theta}\leq m_0,\, 0\leq j_{\theta}-\ell_{\theta}\leq m_0,\, \theta=1,2}} 1_{\{k_1=k_2,\, |\ell_1-\ell_2|>2m_0+m_3+m_4\}}\\
&\qquad\times \bigg(1_{\{\ell_1<\ell_2\}} (|a_{i_1-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}})(|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{j_2-\ell_1}|^{\frac{\alpha-\eta}{2}})\\
&\qquad\quad+1_{\{\ell_1>\ell_2\}} (|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}})(|a_{i_1-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{j_1-\ell_2}|^{\frac{\alpha-\eta}{2}})\bigg)\\
&\leq \frac{c_{26}}{n^{1+(\alpha-\eta)\beta}},
\end{align*}
where in the second inequality we used $\eref{lambda1}$ and $\eref{lambda2}$ to make a proper change of variables and obtain the finiteness of the integral with respect to $\lambda_1$ and $\lambda_2$.
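The following sketch spells out the change of variables (the additional factors $|\lambda_\theta|^{\alpha-\eta}$ are handled the same way, since $\alpha-\eta<2$):
\begin{align*}
&\int_{\mathbb{R}^2}\frac{1}{1+|a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2|^4}\cdot \frac{1}{1+|(a_{i_1-i_2+i_2-j_1+q}-a_q)\lambda_1+a_{i_2-j_1+q}\lambda_2|^4}\, d\lambda_1 d\lambda_2\\
&=\frac{1}{|\det(\cdot)|}\int_{\mathbb{R}^2}\frac{1}{(1+|\mu_1|^4)(1+|\mu_2|^4)}\, d\mu_1 d\mu_2
\leq \frac{4}{|a_pa_q|}\left(\int_{\mathbb{R}}\frac{d\mu}{1+|\mu|^4}\right)^2<\infty,
\end{align*}
where $\mu_1=a_{i_1-i_2+p}\lambda_1+a_{p}\lambda_2$, $\mu_2=(a_{i_1-i_2+i_2-j_1+q}-a_q)\lambda_1+a_{i_2-j_1+q}\lambda_2$ and $\det(\cdot)$ is the determinant in $\eref{lambda2}$; the bound is uniform over the finitely many admissible $(i_1-i_2, p, q)$.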
Therefore, for $m$ large enough, in the first case $i_1\geq i_2>j_1\geq j_2$, the right hand side of $\eref{Bn2}$ is less than a constant multiple of $\frac{1}{n^2h_n}+\frac{1}{n^{1+(\alpha-\eta)\beta}}$. Similarly, for $m$ large enough, in the second case $i_1\geq i_2>j_2\geq j_1$, we can also show that the right hand side of $\eref{Bn2}$ is less than a constant multiple of $\frac{1}{n^2h_n}+\frac{1}{n^{1+(\alpha-\eta)\beta}}$. Hence,
\begin{align} \label{Bn2e}
\mathbb{E}|B_{n,2}|^2\leq c_{27} \left(\frac{1}{n^2h_n}+\frac{1}{n^{1+(\alpha-\eta)\beta}}\right).
\end{align}
Now we estimate $\mathbb{E}|B_{n,3}|^2$. Note that
\begin{align}\label{Bn3}
&\mathbb{E}|B_{n,3}|^2 \nonumber\\
&\leq \frac{c_{28}}{n^4}\int_{{\mathbb R}^2} \left(\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m,\, k_{\theta}>\ell_{\theta}\\ i_{\theta}-k_{\theta}>m_0\, \text{or}\, j_{\theta}-\ell_{\theta}>m_0,\, \theta=1,2}}+\sum_{\substack{1\leq j_{\theta}<i_{\theta}\leq n,\, i_{\theta}-j_{\theta}>m,\, k_{\theta}<\ell_{\theta}\\ i_{\theta}-k_{\theta}>m_0\, \text{or}\, j_{\theta}-\ell_{\theta}>m_0,\, \theta=1,2}}\right)\big| \widehat{K}(\lambda_1 h_n)\big|\big|\widehat{K}(\lambda_2 h_n)\big| \nonumber\\
&\qquad\times \Big| \mathbb{E}\big[\mathcal{P}_{k_1}H(X_{i_1})(\lambda_1) \mathcal{P}_{\ell_1}H(X_{j_1})(-\lambda_1) \mathcal{P}_{k_2}H(X_{i_2})(\lambda_2) \mathcal{P}_{\ell_2}H(X_{j_2})(-\lambda_2) \big]\Big| \, d\lambda_1d\lambda_2.
\end{align}
Recall the choice of $m_0$ in the proof of Lemma $\ref{lem2}$. Then, by using the Cauchy--Schwarz inequality and Lemma \ref{lem1}, we can show that the absolute value of the expectation in $\eref{Bn3}$ is less than a constant multiple of
\begin{align*}
&\frac{1_{\{k_1>\ell_1, k_2>\ell_2, k_1=k_2\}}}{(1+|\lambda_1|^6)(1+|\lambda_2|^6)}|\lambda_1a_{i_1-k_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{i_2-k_2}|^{\frac{\alpha-\eta}{2}}\Big(1_{\{\ell_1=\ell_2\}}|\lambda_1a_{j_1-\ell_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{j_2-\ell_2}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{\ell_1<\ell_2\}} |\lambda_2a_{j_2-\ell_2}|^{\frac{\alpha-\eta}{2}} |\lambda_1a_{i_1-\ell_2}+\lambda_2a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{j_1-\ell_1}|^{\frac{\alpha-\eta}{2}} |\lambda_1a_{i_1-\ell_1}+\lambda_2a_{i_2-\ell_1}-\lambda_2a_{j_2-\ell_1}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{\ell_1>\ell_2\}} |\lambda_1a_{j_1-\ell_1}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-\ell_1}+\lambda_2a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{j_2-\ell_2}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-\ell_2}+\lambda_2a_{i_2-\ell_2}-\lambda_2a_{j_1-\ell_2}|^{\frac{\alpha-\eta}{2}}\Big)\\
&\quad+\frac{1_{\{\ell_1>k_1, \ell_2>k_2, \ell_1=\ell_2\}}}{(1+|\lambda_1|^6)(1+|\lambda_2|^6)}|\lambda_1a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}\Big(1_{\{k_1=k_2\}}|\lambda_1a_{j_1-k_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{j_2-k_2}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{k_1<k_2\}} |\lambda_2a_{j_2-k_2}|^{\frac{\alpha-\eta}{2}} |\lambda_1a_{i_1-k_2}+\lambda_2a_{i_2-k_2}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{j_1-k_1}|^{\frac{\alpha-\eta}{2}} |\lambda_1a_{i_1-k_1}+\lambda_2a_{i_2-k_1}-\lambda_2a_{j_2-k_1}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{k_1>k_2\}} |\lambda_1a_{j_1-k_1}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-k_1}+\lambda_2a_{i_2-k_1}|^{\frac{\alpha-\eta}{2}}|\lambda_2a_{j_2-k_2}|^{\frac{\alpha-\eta}{2}}|\lambda_1a_{i_1-k_2}+\lambda_2a_{i_2-k_2}-\lambda_2a_{j_1-k_2}|^{\frac{\alpha-\eta}{2}}\Big)\\
&\leq \frac{1_{\{k_1>\ell_1, k_2>\ell_2, k_1=k_2\}}}{(1+|\lambda_1|^2)(1+|\lambda_2|^2)}|a_{i_1-k_1}|^{\frac{\alpha-\eta}{2}}|a_{i_2-k_2}|^{\frac{\alpha-\eta}{2}}\Big(1_{\{\ell_1=\ell_2\}}|a_{j_1-\ell_1}|^{\frac{\alpha-\eta}{2}}|a_{j_2-\ell_2}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{\ell_1\neq \ell_2\}} |a_{j_2-\ell_2}|^{\frac{\alpha-\eta}{2}} (|a_{i_1-\ell_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}})|a_{j_1-\ell_1}|^{\frac{\alpha-\eta}{2}} (|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-\ell_1}|^{\frac{\alpha-\eta}{2}}+|a_{j_2-\ell_1}|^{\frac{\alpha-\eta}{2}})\Big)\\
&\quad+\frac{1_{\{\ell_1>k_1, \ell_2>k_2, \ell_1=\ell_2\}}}{(1+|\lambda_1|^2)(1+|\lambda_2|^2)}|a_{i_1-\ell_1}|^{\frac{\alpha-\eta}{2}}|a_{i_2-\ell_2}|^{\frac{\alpha-\eta}{2}}\Big(1_{\{k_1=k_2\}}|a_{j_1-k_1}|^{\frac{\alpha-\eta}{2}}|a_{j_2-k_2}|^{\frac{\alpha-\eta}{2}}\\
&\qquad+1_{\{k_1\neq k_2\}} |a_{j_2-k_2}|^{\frac{\alpha-\eta}{2}} (|a_{i_1-k_2}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-k_2}|^{\frac{\alpha-\eta}{2}})|a_{j_1-k_1}|^{\frac{\alpha-\eta}{2}} (|a_{i_1-k_1}|^{\frac{\alpha-\eta}{2}}+|a_{i_2-k_1}|^{\frac{\alpha-\eta}{2}}+|a_{j_2-k_1}|^{\frac{\alpha-\eta}{2}})\Big).
\end{align*}
Therefore, after simple calculations, we have
\begin{align} \label{Bn3e}
\mathbb{E}|B_{n,3}|^2\leq c_{29} \, n^{2-2(\alpha-\eta)\beta}.
\end{align}
Combining $\eref{B}$, $\eref{Bn1e}$, $\eref{Bn2e}$ and $\eref{Bn3e}$ gives
\begin{align} \label{Bne}
\mathbb{E}|B_n-\mathbb{E} B_n|\leq c_{30} \left(n^{1-(\alpha-\eta)\beta}+\frac{1}{\sqrt{n^2h_n}}\right).
\end{align}
\noindent
{\bf Step 5.} We estimate ${{\mathbb E}\,}\big[|N_n-\overline{N}_n|^2\big]$ where
\[
\overline{N}_n=\frac{1}{2\pi}\int_{{\mathbb R}} \widehat{K}(0) \big(\phi_n(\lambda)-\phi(\lambda) \big) \phi(-\lambda) \, d\lambda.
\]
Let
\[
\widetilde{N}_{n}=\frac{1}{2\pi}\int_{{\mathbb R}} \widehat{K}(\lambda h_n) \big(\phi_n(\lambda)-\phi(\lambda) \big) \phi(-\lambda) \, d\lambda.
\]
Recalling the definition of $N_n$ in $\eref{decomp}$, we see that $|N_n-\widetilde{N}_{n}|$ is less than a constant multiple of $\frac{1}{n}$. Moreover, by the Cauchy--Schwarz inequality and Lemma \ref{lem2},
\begin{align*}
{{\mathbb E}\,}\big[|\widetilde{N}_n-\overline{N}_n|^2\big]
&\leq {{\mathbb E}\,}\left[ \Big|\frac{1}{2\pi}\int_{{\mathbb R}} \big(\widehat{K}(\lambda h_n)-\widehat{K}(0)\big)\big(\phi_n(\lambda)-\phi(\lambda) \big) \phi(-\lambda) \, d\lambda\Big|^2\right]\\
&\leq \left(\int_{{\mathbb R}} \big|\widehat{K}(\lambda h_n)-\widehat{K}(0)\big|^2|\phi(\lambda)|\, d\lambda \right) \left(\int_{\mathbb{R}} \mathbb{E} |\phi_n(\lambda)-\phi(\lambda)|^2|\phi(\lambda)| \, d\lambda \right)\\
&\leq c_{31} \, h^4_n\, n^{1-(\alpha-\eta)\beta}.
\end{align*}
Hence
\begin{align} \label{Nne}
{{\mathbb E}\,}\big[|N_n-\overline{N}_n|^2\big]\leq c_{32} \left(\frac{1}{n^2}+h^4_n\, n^{1-(\alpha-\eta)\beta}\right).
\end{align}
\noindent
{\bf Step 6.} It is easy to see that
\[
\overline{N}_n=\frac{1}{n} \sum\limits^n_{i=1}\left(f(X_i)-\int_{{\mathbb R}} f^2(x)\, dx\right).
\]
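Indeed, with $\widehat{K}(0)=\int_{{\mathbb R}}K(u)\,du=1$ and $\phi_n(\lambda)=\frac{1}{n}\sum^n_{i=1}e^{\iota\lambda X_i}$ the empirical characteristic function (as used in the preceding steps), this identity follows from Fourier inversion and the Plancherel theorem:
\begin{align*}
\overline{N}_n
&=\frac{1}{n}\sum^n_{i=1}\frac{1}{2\pi}\int_{{\mathbb R}}e^{\iota\lambda X_i}\phi(-\lambda)\,d\lambda-\frac{1}{2\pi}\int_{{\mathbb R}}|\phi(\lambda)|^2\,d\lambda
=\frac{1}{n}\sum^n_{i=1}f(X_i)-\int_{{\mathbb R}}f^2(x)\,dx,
\end{align*}
since $\frac{1}{2\pi}\int_{{\mathbb R}}e^{\iota\lambda x}\phi(-\lambda)\,d\lambda=f(x)$ and $\frac{1}{2\pi}\int_{{\mathbb R}}|\phi(\lambda)|^2\,d\lambda=\int_{{\mathbb R}}f^2(x)\,dx$.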
Finally, combining $\eref{decomp}$, $\eref{Ane}$, $\eref{Bne}$ and $\eref{Nne}$ gives
\[
{{\mathbb E}\,}\Big|T_n(h_n)-{{\mathbb E}\,} T_n(h_n)-\frac{1}{n}\sum^n_{i=1}Y_i\Big|\leq c_{33}\left(n^{1-(\alpha-\eta)\beta}+\frac{1}{\sqrt{n^3h^2_n}}+\frac{1}{\sqrt{n^2h_n}}+h^2_nn^{\frac{1-(\alpha-\eta)\beta}{2}} \right).
\]
This finishes the proof of Theorem \ref{thm1}.
{\hfill $\square$ \bigskip}
Finally, we give the proof of Theorem \ref{thm2}.
\noindent
{\it Proof of Theorem \ref{thm2}.} According to Theorem \ref{thm1}, we only need to consider the asymptotic behavior of $\overline{N}_n$. In the region $1<\alpha<2$, \eref{asym1} and \eref{asym2} follow from Corollary 2.3 in \cite{Koul and Surgailis-2001} and Theorem 2.2 in \cite{Surgailis-2002}, respectively. In the region $0<\alpha<1$, \eref{asym3} follows from Theorem 2.1 in \cite{Honda-2009b} and the paragraph after it.
{\hfill $\square$ \bigskip}
\bigskip
\section{Asymptotic Estimates}
\label{sec:asymptoticestimates}
Here we prove asymptotic estimates for the Fourier coefficients of the forms $\Phi_\nu$ arising in our modified Gross-Zagier calculations, for large $\nu$.
\subsection{$\epsilon(\nu) = 1$, $\nu$ large}
\label{sec:asymptotics}
Write
$$\Phi_\nu(z) = \sum_{m=1}^{\infty} 2^{2k-1} \pi^k \left| D \right| ^{-\frac{1}{2}} \nu^{-k+1} \frac{(k-1)!} {(2k-2)!} m^{k-1} a_{m,\nu} e^{2 \pi imz},$$
where
$$a_{m, \nu} = \sum_{n \leq \frac{m \left| D \right|}{\nu}} a_{m, n, \nu}.$$
Here $a_{m, n, \nu}$ is given by
$$a_{m, n, \nu} = -P_{k-1} \left(1 - \frac{2 n \nu}{m \left| D \right|} \right) \sigma_\nu'(n) r_A (-n \nu + m \left| D \right|)$$
for $n>0$ (and $n < \frac{m \left| D \right|}{\nu}$),
$$a_{m, -n, \nu} = 2Q_{k-1} \left(1 + \frac{2 n \nu}{m \left| D \right|} \right) \sigma_\nu'(n) r_A (n \nu + m \left| D \right|)$$
for $n>0$ (so that the index $-n$ is negative), and
$$a_{m, 0, \nu} = \frac{h}{u} r_A(m) \left( \log \frac{\nu \left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right).$$
For $\nu > m \left| D \right|$ there are no terms $a_{m, n, \nu}$ with $n>0$.
Now we claim that the sum of the negative-index terms, $\sum_{n=1}^{\infty} a_{m, -n, \nu}$, is $o(1)$ for large $\nu$. If we can show this, it will follow that the coefficient $a_{m, \nu}$ is dominated by the $\log \nu$ term coming from $a_{m, 0, \nu}$.
A straightforward calculation shows that
$$\left| a_{m, -n, \nu} \right| < 2^{8-2k}n^{-k+1/2}\nu^{-k+1/4}(m \left| D \right|)^k.$$
Summing over $n$, we see that the sum is less than
$$2^{10-2k}\nu^{-k+1/4}(m \left| D \right|)^k$$
in absolute value. In particular, this sum tends to $0$ as $\nu \to \infty$.
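The summed bound costs a factor $4=2^2$ relative to the termwise bound, since $\sum_{n\geq 1}n^{-k+1/2}\leq 4$ for $k\geq 2$; a quick numerical confirmation (illustrative only):

```python
# Check that sum_{n>=1} n**(-(k - 1/2)) <= 4 for the first few k >= 2,
# which is the factor-of-4 (= 2**2) gap between the termwise bound
# 2**(8-2k) and the summed bound 2**(10-2k).  The truncated sum is
# completed with an integral bound on the tail.
for k in (2, 3, 4):
    p = k - 0.5
    s = sum(n ** (-p) for n in range(1, 100_001))
    tail = 100_000 ** (1 - p) / (p - 1)  # sum_{n>N} n**(-p) <= N**(1-p)/(p-1)
    assert s + tail <= 4.0
print("sum over n >= 1 of n**(1/2 - k) is at most 4 for k = 2, 3, 4")
```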
Thus, we have
$$a_{m, \nu} = \frac{h}{u} r_A(m) \left(\log \frac{\nu \left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right) + o(1),$$
where the error term is bounded by
$$2^{10-2k}\nu^{-k+1/4}(m \left| D \right|)^k,$$
for $\nu > m \left| D \right|$.
\subsection{ $\epsilon(\nu) = -1$}
Again, write
\begin{eqnarray*}
\Phi_\nu(z) & = & \left( 2 \log 2 \pi - \log \frac {N^2} {\nu} \left| D \right| - 2 \frac {\Gamma'} {\Gamma} (k) \right) \\
& & \left[ \sum_{m=1}^{\infty} 2^{2k-1} \pi^k \left| D \right| ^{-\frac{1}{2}} \nu^{-k+1} \frac{(k-1)!}{(2k-2)!} m^{k-1} a_{m,\nu} e^{2 \pi imz} \right],
\end{eqnarray*}
where
$$a_{m, \nu} = \sum_{0 \leq n \leq \frac{m \left| D \right|}{\nu}} a_{m, n, \nu}.$$
Here $a_{m,n,\nu}$ is zero unless $0 \leq n \leq \frac{m \left| D \right|}{\nu}$.
Thus, for $\nu > m \left| D \right|$ we have exactly
$$a_{m, \nu} = \frac{h}{u} r_A(m).$$
\subsection{$\nu = 1$}
For $0 < n \leq m \left| D \right|$ we have
$$a_{m, n, 1} = -P_{k-1} \left(1 - \frac{2 n}{m \left| D \right|} \right) \sigma_1'(n) r_A (-n + m \left| D \right|).$$
Using elementary estimates for $\sigma_1'(n)$, $r_A(n)$ and the Legendre polynomials, we find that for $0 < n \leq m \left| D \right|$,
$$\left| a_{m, -n, 1} \right| \leq 81 \cdot 2^{-\frac{3}{4}} (m \left| D \right|) ^\frac{1}{2} (\log m \left| D \right| + 2).$$
If $n > m \left| D \right|$ then
$$\left| a_{m, -n, 1} \right| \leq \frac{9}{8} \cdot 2^{1-k} \cdot 81 \, n^\frac{1}{4} (n + m \left| D \right|) ^\frac{1}{4} \left(1 + 2 \frac{n} {m \left| D \right|} \right)^{-k}.$$
Summing over all positive $n$, we find after some simplification that
$$\left| \sum_{n > 0} a_{m, -n, 1} \right| \leq 2^6 ( m \left| D \right| ) ^\frac{3}{2} (\log m \left| D \right| + 3).$$
For $n=0$ we have the equality
$$a_{m, 0, 1} = \frac{h}{u} r_A(m) \left(\log \frac{\left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right).$$
Summing over all $n$, we have
$$\left| a_{m, 1} \right| \leq 192 ( m \left| D \right| ) ^\frac{3}{2} (\log m \left| D \right| + 1) + \frac{h}{u} r_A(m) \left| \log \frac{\left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right|.$$
\subsection{A Bound for the Dirichlet $L$-Function}
We will use the estimate
$$\left| \frac{L'}{L} (1, \epsilon) \right| \leq \log \left| D \right|.$$
\subsection{Final Asymptotics of Gross-Zagier}
For our purposes it will be convenient to have large-$N$ asymptotic estimates for the Fourier coefficients of $g$. From our asymptotic work we know that for $\nu > m \left| D \right|$, if $\epsilon (\nu) = 1$, then the $m$-th Fourier coefficient of $\Phi_\nu$ behaves as
\begin{eqnarray*}
& & 2^{2k-1} \pi^{k} \frac {(k-1)!} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \nu^{-k+1} m^{k-1} \\
& & \left[ \frac{h}{u} r_A(m) \left(\log \frac{\nu \left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right) + o(1) \right],
\end{eqnarray*}
where the error term is bounded by
$$ 2^{10-2k}\nu^{-k+1/4}(m \left| D \right|)^k.$$
If $\epsilon (\nu) = -1$ then the $m$-th Fourier coefficient of $\Phi_\nu$ equals exactly
$$\left( 2 \log 2 \pi - \log \frac {N^2} {\nu} \left| D \right| - 2 \frac {\Gamma'} {\Gamma} (k) \right) 2^{2k-1} \pi^k \left| D \right| ^{-\frac{1}{2}} \nu^{-k+1} \frac{(k-1)!}{(2k-2)!} m^{k-1} \frac{h}{u} r_A(m).$$
Now suppose $N = p^2$, so that from Equation (\ref{eqn:1}) we have
$$g = \frac{(4 \pi) ^k} {(k-1)!} \left( p^{2k-2} \Phi_{p^2} - \epsilon(p) p^{k-2} \Phi_{p} \right).$$
Using the estimates above we find
$$a_m(g) = \frac{2^{4k-1} \pi^{2k}}{(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \left(\frac{h}{u} r_A(m) \log {p^2} + O(1) \right),$$
regardless of the sign of $\epsilon(p)$.
More precisely, if $\epsilon(p) = 1$ then we have
\begin{eqnarray*}
a_m(g) & = & \frac{2^{4k-1} \pi^{2k}}{(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \Big[ \frac{h}{u} r_A(m) ( (2 - p^{-1}) \log p \\
& & + (1 - p^{-1}) (\log \frac{\left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) ) + o(1)) \Big].
\end{eqnarray*}
Here the error term $o(1)$ is bounded by
$$2^{10-2k}(m \left| D \right|)^k (p^{-2k+1/2} + p^{-k-3/4}).$$
If instead $\epsilon(p) = -1$, then
\begin{eqnarray*}
a_m(g) & = & \frac{2^{4k-1} \pi^{2k}}{(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \\
& & \Big[ \frac{h}{u} r_A(m) ( (2 - 3 p^{-1}) \log p + (1 - p^{-1}) \log \left| D \right| - \log m \\
& & - 2 (1 - p^{-1}) \log{2 \pi} + 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon)) + o(1) \Big],
\end{eqnarray*}
with the error term bounded by
$$ 2^{10-2k}p^{-2k+1/2}(m \left| D \right|)^k.$$
If $N=p$ is a prime then
$$g = \frac{(4 \pi) ^k} {(k-1)!} (p^{k-1} \Phi_p - \epsilon(p) p^{-1} \Phi_1),$$
so if $p$ splits in $K$, then
$$a_m(g) = \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} (\frac{h}{u} r_A(m) \log {p} + O(1)).$$
Specifically,
\begin{eqnarray*}
a_m(g) & = & \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \\ & & \Big[ \frac{h}{u} r_A(m) \big(\log p + (1 - p^{-1}) \log \frac{\left| D \right|}{m} - 2 (1 - p^{-1}) \log{2 \pi} \\
& & + 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma}(k) + 2 (1 - p^{-1}) \frac{L'}{L}(1, \epsilon) \big) + o(1) \Big],
\end{eqnarray*}
and the error term is bounded by
$$ 2^{10-2k}p^{-k+1/4}(m \left| D \right|)^k + 192 ( m \left| D \right| ) ^\frac{3}{2} (\log m \left| D \right| + 1) p^{-1}.$$
If instead $N=p$ is inert in $K$ then
\begin{eqnarray*}
a_m(g) & = & \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \\
& & \Big[ \frac {h} {u} r_A(m) \big(2 (1 - p^{-1}) \log 2 \pi - \log p - (1 - p^{-1}) \log \left| D \right| \\
& & - 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma} (k) + 2 p^{-1} \frac{L'}{L} (1, \epsilon) \big) + o(1) \Big],\\
\end{eqnarray*}
with error term bounded by
$$192 ( m \left| D \right| ) ^\frac{3}{2} (\log m \left| D \right| + 1) p^{-1}.$$
Finally, if $N=1$ then
$$g = \frac{(4 \pi) ^k} {(k-1)!} \Phi_1$$
so
\begin{eqnarray*}
a_m(g) & = & \frac{ 2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} m^{k-1} \frac{h}{u} r_A(m) \\
& & \left(\log \frac{\left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) + E \right),
\end{eqnarray*}
where the error term $E$ is bounded by
$$\left| E \right| \leq 192 ( m \left| D \right| ) ^\frac{3}{2} (\log m \left| D \right| + 1).$$
\section{Miscellaneous Calculations}
To apply the Michel-Ramakrishnan averaging formula we will need an orthogonal basis for the space of old forms, and estimates of their $L$-functions. Toward this end, we collect here some results on the $L$-functions and inner products of old forms.
\subsection{Computing $L$-functions}
\label{sec:Lfunc}
For old forms in level $p^2$, which come from levels $1$ and $p$, we need to express the $L$-function in terms of the $L$-functions in lower levels.
Suppose first that $g$ is a (new) Hecke eigenform in level $1$, which gives the three old forms
\begin{eqnarray*}
g_1(z) & = & g(z) \\
g_p(z) & = & g(pz) \\
g_{p^2}(z) & = & g(p^2z)
\end{eqnarray*}
in level $p^2$. (If $g$ is new in level $p$ then we define the forms $g_1$ and $g_p$ similarly.) We need to express the $L$-functions
$$L(s, g_i \times \Theta_A),$$
for $i = 1, p, p^2$, in terms of the $p$-invariant $L$-function
$$L(s, g \times \Theta_A).$$
For $i=1$, the functions $g_1$ and $g$ have the same Fourier coefficients, so their $L$-functions coincide. For $i=p$ and $i=p^2$, we use the method described in \cite{Shimura}. Here we provide only a sketch of the calculation due to limitations of space.
Each coefficient of the twisted $L$-function is the product of coefficients of a Hecke $L$-function and the $L$-function associated to a theta function; both of these series have Euler products. The method in \cite{Shimura} shows how to compute the Euler product of the twisted series given these two Euler products. It is easy to relate the Hecke $L$-series of $g_i$ to that of $g$. Using \cite{Shimura} we obtain the results below.
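As a quick toy illustration (with made-up placeholder coefficients, not an actual eigenform): since $g_p(z) = g(pz)$, we have $a_n(g_p) = a_{n/p}(g)$, so the Dirichlet series of $g_p$ is $p^{-s}$ times that of $g$. A numerical sketch:

```python
# Toy check with placeholder coefficients (not a real eigenform):
# if g_p(z) = g(pz), then a_n(g_p) = a_{n/p}(g) when p | n and 0 otherwise,
# so sum_n a_n(g_p) n^{-s} = p^{-s} * sum_m a_m(g) m^{-s} on matching ranges.
p = 5
s = 2.0
a = {m: (-1) ** m * m for m in range(1, 101)}  # placeholder coefficients of g
a_gp = {n: (a.get(n // p, 0) if n % p == 0 else 0) for n in range(1, 501)}

lhs = sum(c * n ** (-s) for n, c in a_gp.items())
rhs = p ** (-s) * sum(c * m ** (-s) for m, c in a.items())
assert abs(lhs - rhs) < 1e-9
```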
Hence, we have
$$L(s, g_p \times \Theta_\chi) = \frac{(\alpha_p + \beta_p) p^{-s} - a_p(g) p^{-2s}} {1 - p^{2k-1}p^{-2s}} L(s, g \times \Theta_\chi)$$
and
$$L(s, g_{p^2} \times \Theta_\chi) = \frac{ c_2 p^{-2s} - c_3 p^{-3s} + c_4 p^{-4s}}{1 - \alpha_p \beta_p \gamma_p \delta_p p^{-2s}} L(s, g \times \Theta_\chi),$$
where
\begin{eqnarray*}
c_2 & = & \alpha_p^2 + 1 + \beta_p^2, \\
c_3 & = & (\alpha_p + \beta_p) a_p(g), \\
c_4 & = & p^{2k-1}. \\
\end{eqnarray*}
Although the calculation is only valid for $s$ with sufficiently large real part, this identity extends to all $s$ by analytic continuation.
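The coefficient bookkeeping behind such Euler-factor identities can be sketched numerically. The following toy check (placeholder coefficients, not a real $L$-series) verifies that multiplying a Dirichlet series $\sum b_n n^{-s}$ by a factor $c_1 p^{-s} + c_2 p^{-2s}$ corresponds to the coefficient substitution $b_n \mapsto c_1 b_{n/p} + c_2 b_{n/p^2}$:

```python
# Sketch with made-up coefficients: multiplying a Dirichlet series sum b_n n^{-s}
# by the polynomial Euler factor c1*p^{-s} + c2*p^{-2s} replaces b_n by
# c1*b_{n/p} + c2*b_{n/p^2} (a term is dropped when p or p^2 does not divide n).
p, c1, c2 = 3, 2.0, -1.5
s = 2.5
N = 90
b = {n: 1.0 / (1 + n % 7) for n in range(1, N + 1)}  # placeholder coefficients

prod = {n: c1 * b.get(n // p, 0) * (n % p == 0)
           + c2 * b.get(n // p ** 2, 0) * (n % p ** 2 == 0)
        for n in range(1, p ** 2 * N + 1)}

lhs = sum(c * n ** (-s) for n, c in prod.items())
rhs = (c1 * p ** (-s) + c2 * p ** (-2 * s)) * sum(bn * n ** (-s) for n, bn in b.items())
assert abs(lhs - rhs) < 1e-9
```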
\subsubsection{L-functions from Level $p$}
Now suppose $g$ is a Hecke eigenform, new in level $p$, so $g_1(z)=g(z)$ and $g_p(z)=g(pz)$ are the two corresponding old forms in level $p^2$. Again, the $L$-functions for $g_1$ and $g$ coincide, but we still have to compute the $L$-function for $g_p$.
By the same method as in the previous section,
$$L(s, g_p \times \Theta_\chi) = ((\alpha_p + \beta_p) p^{-s} - a_p(g) p^{-2s}) L(s, g \times \Theta_\chi).$$
Again, by analytic continuation the result is true for all $s$, though the calculation is only valid for $s$ with large real part.
We are interested in the derivatives at the central point $s=k$. If the functional equation is odd, so that $L$ itself vanishes at $k$, then differentiation yields
$$L'(k, g_p \times \Theta_\chi) = ((\alpha_p + \beta_p) p^{-k} - a_p(g) p^{-2k}) L'(k, g \times \Theta_\chi).$$
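The differentiation step uses only the product rule: if $F(s) = P(s) L(s)$ and $L(k) = 0$, then $F'(k) = P(k) L'(k)$. A toy finite-difference check with stand-in functions (not actual $L$-functions):

```python
# Numerical check of the product-rule step with toy functions:
# if F(s) = P(s) * L(s) and L(k) = 0, then F'(k) = P(k) * L'(k).
k = 2.0
P = lambda s: 3.0 * 5 ** (-s) - 1.2 * 5 ** (-2 * s)  # stand-in for the Euler factor
L = lambda s: (s - k) * (1.0 + 0.3 * (s - k))        # toy L with a simple zero at k

h = 1e-6
F = lambda s: P(s) * L(s)
F_prime = (F(k + h) - F(k - h)) / (2 * h)            # central difference
L_prime = (L(k + h) - L(k - h)) / (2 * h)
assert abs(F_prime - P(k) * L_prime) < 1e-6
```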
If instead the functional equation is even, then we know from logarithmic differentiation of the functional equation that
$$\frac {L'}{L} (k, g \times \Theta_\chi) = 2 \log 2 \pi - \log p \left| D \right| - 2 \frac {\Gamma'} {\Gamma}(k).$$
Thus we find
\begin{eqnarray*}
L' (k, g_p \times \Theta_\chi) & = & L (k, g \times \Theta_\chi) \big[ -\log p ((\alpha_p + \beta_p) p^{-k} - 2 a_p(g) p^{-2k}) \\
& & + (2 \log 2 \pi - \log p \left| D \right| - 2 \frac {\Gamma'} {\Gamma}(k)) ((\alpha_p + \beta_p) p^{-k} - a_p(g) p^{-2k}) \big].
\end{eqnarray*}
\subsection{Inner Products}
We need to construct an orthogonal basis for the space of forms in level $p^2$. It is known that the Hecke eigenforms form an orthogonal basis for the new forms and that the old forms are orthogonal to the new forms. But it is still necessary to orthogonalize the standard basis of old forms.
\subsubsection{Inner Products of Old Forms from Level $1$}
To this end, suppose first that $g$ and $h$ are eigenforms in level $1$, so that $g_1$, $g_p$, $g_{p^2}$, $h_1$, $h_p$ and $h_{p^2}$ are the associated old forms in level $p^2$. We wish to calculate inner products of the form $(g_i,h_j)$ in terms of the inner product $(g,h)$ in level $1$. Again, we omit detailed calculations. The general plan of attack is as follows. The inner product can be expressed as an integral over a fundamental domain for $\Gamma_0(p^2)$. Applying the transformation law, the integral can be expressed as a sum of integrals over a fundamental domain for $SL_2(\mathbb{Z})$. The sum in turn corresponds to a Hecke trace operator, for which we know that $g$ and $h$ are eigenforms. Writing the traces as scalar multiples of $g$ and $h$, we end up with the inner product expressed in terms of the product $(g,h)$ in level $1$.
Assuming $g$ is normalized, so the eigenvalue of the trace operator $T_p$ is exactly $a_p(g)$, we have
\begin{eqnarray*}
(g_1, h_1) &=& p(p+1) (g,h)\\
(g_p,h_1) &=& p^{-2k+2} a_p(g) (g,h)\\
(g_p, h_p) &=& p(p+1) p^{-2k} (g,h)\\
(g_{p^2},h_1) &=& (p^{2-4k} a_p(g)^2 - p^{1-2k}) (g,h)\\
(g_{p^2},h_p) &=& p^{-4k+2} a_p(g) (g,h)\\
(g_{p^2}, h_{p^2}) &=& p^{1-4k} (p+1) (g,h).\\
\end{eqnarray*}
\subsubsection{Inner Products of Old Forms from Level $p$}
If $g$ and $h$ are new forms in level $p$ then the corresponding forms $g_1, g_p, h_1, h_p$ are old in level $p^2$. We need to compute the inner products $(g_1, h_1)$, $(g_p, h_1)$ and $(g_p, h_p)$.
Since $X(\Gamma_0(p^2))$ covers $X(\Gamma_0(p))$ exactly $p$ times, the usual transformation shows that
$$(g_1, h_1) = p (g,h)$$
and
$$(g_p, h_p) = p^{1-2k} (g,h).$$
The computation of $(g_p, h_1)$ is more involved; the methods in the previous section give
$$(g_p, h_1) = p^{1-2k} a_p(g) (g,h).$$
\section{Extending the Gross-Zagier Formulas}
To begin with, we generalize the calculation in Chapter 4 (starting from p. 267) of \cite{GZ} to give results valid for all forms, old and new.
In the discussion leading to Proposition 4.1.2 on page 271, Gross and Zagier show that for any form $f$ of level $N$ and any ideal class $A$,
$$(4 \pi) ^{-s-2k+1} \Gamma(s+2k-1) L_A (f, s+2k-1) = (f, Tr^{ND}_N \Theta_A E_s),$$
where $E_s(z)$ is given by
\begin{equation}
\label{eqn:1}
E_s(z) = \sum_{e|N} \frac{\mu(e)\epsilon(e)}{e^{2s+2k-1}} (N/e)^{-s} E_s^{(1)} \left( \frac{N}{e} z \right).
\end{equation}
Here $E_s^{(1)}$ is the Eisenstein series
$$\frac{1}{2} \sum_{\substack{c, d \in \mathbb{Z} \\ D \mid c}} \frac {\epsilon(d)} {(cz+d) ^{2k-1}}
\frac{y^s} {\left| cz+d \right| ^{2s}}.$$
In the computation of $Tr^{ND}_N \Theta_A E_s$ that follows, Gross and Zagier discard the terms $e<N$, because they contribute only an old form, which is orthogonal to any new form $f$. Since we need a result valid for all forms $f$, we repeat their calculation, keeping all terms of the sum for $E_s$.
Thus, we begin with a modified Proposition 4.1.2. If, for all $\nu|N$, we write
$$\tilde{\Phi}_{s,\nu}(z)= Tr^{ND}_N \Theta_A(z) E_s^{(1)} (\nu z)$$
and
\begin{equation}
\label{eqn:phi}
\tilde{\Phi}_{s,mod}(z) = \sum_{e|N} \frac {\mu(e)\epsilon(e)} {e^{2s+2k-1}} (N/e)^{-s} \tilde{\Phi}_{s,N/e}(z),
\end{equation}
then we have
\begin{equation}
\label{eqn:lofs}
(4 \pi) ^{-s-2k+1} \Gamma (s+2k-1) L_A(f,s+2k-1) = (f, \tilde{\Phi}_{s,mod}(z)).
\end{equation}
The correspondence between Gross-Zagier's notation and ours is: Gross-Zagier's $\tilde{\Phi}_s(z)$ is our $\tilde{\Phi}_{s,N}(z)$, up to a factor of $N^k$.
By a calculation analogous to that in \cite{GZ}, we can compute the Fourier coefficients of $\tilde{\Phi} _{s, \nu}$. Then we differentiate at $s = k$ and apply the holomorphic projection lemma from page 292. We omit the details of the calculation.
When $\epsilon(\nu) = 1$, we find that the holomorphic projection is
\begin{eqnarray*}
\Phi_\nu(z) & = & \sum_{m=1}^{\infty} 2^{2k-1} \pi^k \left| D \right| ^{-\frac{1}{2}} \nu^{-k+1} \frac{(k-1)!} {(2k-2)!} m^{k-1} \\
& & \Big[ -\sum_ {0 < n \leq \frac{m \left| D \right|} {\nu}} -P_{k-1} \left(1 - \frac{2 n \nu}{m \left| D \right|} \right) \sigma_\nu'(n) r_A (-n \nu + m \left| D \right|) \\
& & - \sum_{n=0}^{\infty} \left( -2Q_{k-1} \left(1 + \frac{2 n \nu}{m \left| D \right|} \right) \sigma_\nu'(n) r_A (n \nu + m \left| D \right|) \right) \\
& & + \frac{h}{u} r_A(m) \left(\log \frac{\nu \left| D \right|}{m} - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) \right) \Big] e^{2 \pi imz}.
\end{eqnarray*}
In case $\epsilon(\nu) = -1$, the $L$-function has even functional equation. In this case Gross and Zagier compute only its value, not its derivative, at the center of symmetry. But the even functional equation enables us to express the derivative in terms of the value of the function. Specifically, using functional equation $4.4.10$ of \cite{GZ}, we see that the function $e(n,y)$ satisfies
\begin{equation}
\label{eqn:diff}
\frac{e^\ast (n,y)} {e_{1-k}(n,y)} = \log \pi - \log \left| D \right| - \frac {\Gamma'} {\Gamma} (k).
\end{equation}
Using this result, the rest of the calculation proceeds as in \cite{GZ}.
After holomorphic projection, we have
$$\Phi_\nu (z) = \left( 2 \log 2 \pi - \log \frac{N^2}{\nu} \left| D \right| - 2 \frac {\Gamma'} {\Gamma} (k) \right) \Phi _{1-k, \nu} (z),$$
where
\begin{eqnarray*}
\Phi _{1-k, \nu} (z) & = & \sum_{m=1}^{\infty} 2^{2k-1} \pi^k \left| D \right| ^ {-\frac{1}{2}} \nu ^{-k+1} \frac{(k-1)!} {(2k-2)!} m^{k-1} \\
& & \left[ \frac{h}{u} r_A(m) + \sum _{0 < n \leq \frac{m \left| D \right|} {\nu}} P_{k-1} \left( 1 - \frac{2 n \nu} {m \left| D \right|} \right) \sigma _{\nu, A} (n) r_A (m \left| D \right| - n \nu) \right].
\end{eqnarray*}
Thus, for any holomorphic cusp form $f$ of weight $2k$ and level $N$, the derivative of the twisted $L$-function at $k$ is given by the Petersson inner product
$$L'_A(f,k) = (f, g),$$
where
$$g(z) = \frac{ (4 \pi) ^{k}} { (k-1)! } \sum _{e | N} \mu(e) \epsilon(e) N^{k-1} e^{-k} \Phi_{N/e}^\ast (z);$$
and the value is given by
$$L_A (f, k) = \frac{ (4 \pi) ^k} { (k-1)!} \sum _{e | N} \mu(e) \epsilon(e) N^{k-1} e^{-k} \left(f, \Phi_{1-k, N/e}^\ast (z) \right).$$
\section{Introduction}
Let $f$ be a cusp form of weight $2k$ and level $N$. Let $K$ be an imaginary quadratic field with odd discriminant $D$ and ring of integers $R$, and let $\chi$ be a character of the ideal class group of $R$. In \cite{GZ}, Gross and Zagier study the $L$-function of the twisted modular form $f \times \Theta_\chi$. In particular, they show that this $L$-function satisfies a functional equation with center at $s=k$, with sign determined by the splitting of the prime factors of $N$ in $D$. Furthermore, Gross and Zagier determine the leading behavior of the $L$-function at $k$ -- the value $L(k, f \times \Theta_\chi)$ if the equation is even and the derivative $L'(k, f \times \Theta_\chi)$ if it is odd -- as a Petersson inner product $(f, g)$, where $g$ is itself a cusp form of weight $2k$ and level $N$.
The aim of this paper is to prove a nonvanishing result for the derivative $L'(k, f \times \Theta_\chi)$. Specifically, suppose $N = p^2$ with $p$ prime. Then the functional equation satisfied by the twisted $L$-function is necessarily odd, so the value $L(k, f \times \Theta_\chi)$ is zero. We wish to show that, for sufficiently large $p$, there is a newform $f$ in level $p^2$ such that the $L$-function has only a single zero at $k$.
Interest in non-vanishing of derivatives of modular $L$-functions dates to work of Kolyvagin \cite{Koly}, proving finiteness of the Tate--Shafarevich group for modular elliptic curves, assuming some quadratic twist of the $L$-function has a simple zero at $s = 1$.
Kolyvagin's assumption was proven independently by Murty--Murty \cite{MM} and Bump--Friedberg--Hoffstein \cite{BFH};
Iwaniec \cite{Iwa} refined their nonvanishing result to a more precise quantitative estimate.
Our approach is based on the averaging method of Michel and Ramakrishnan \cite{MR}. This method has its origins in a calculation of Gross and Zagier \cite{GZ}, based on the observation that any functional on a space of modular forms -- say, the functional that assigns to $f$ a central $L$-value -- can be represented as a Petersson inner product of $f$ against a fixed kernel. Michel and Ramakrishnan observed that this method allows one to compute the average central value over cusp forms $f$; their method was extended by Goldfeld--Zhang \cite{GolZh}, Nelson \cite{Nel} and others.
First we will extend the calculation in \cite{GZ} to find a holomorphic cusp form $g$ of level $N$ and weight $2k$ such that, for all $f$ of level $p^2$ and weight $2k$, the central value of the derivative is given by the Petersson inner product
$$L'(k, f \times \Theta_\chi) = (f, g).$$
Given an orthogonal (but not necessarily orthonormal) basis of functions $f_i$, we can expand $g$ in terms of this basis as
$$\sum \frac{(f_i,g)}{(f_i,f_i)} f_i = g.$$
Writing the inner product $(f_i, g)$ as the derivative of an $L$-function, we get
$$\sum \frac {L'(k, f_i \times \Theta_\chi)} {(f_i,f_i)} f_i = g.$$
Finally, we can apply a linear functional to this equality of forms, say by taking the first Fourier coefficient of each side. We obtain
\begin{equation}
\label{eqn:michelram}
\sum \frac {L'(k, f_i \times \Theta_\chi)} {(f_i,f_i)} a_1(f_i) = a_1(g).
\end{equation}
This is the ``averaging technique'' of Michel and Ramakrishnan.
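In finite-dimensional terms, the averaging identity is nothing more than expansion in an orthogonal basis followed by evaluation of a linear functional. A minimal numerical sketch, with the dot product standing in for the Petersson inner product and the first coordinate standing in for $a_1$:

```python
# Finite-dimensional sketch of the averaging identity (toy vectors only).
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# an orthogonal (but not orthonormal) basis of R^3
basis = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 0.5)]
g = (3.0, -1.0, 4.0)

# expand g in the basis: g = sum (f_i, g)/(f_i, f_i) f_i
coeffs = [dot(f, g) / dot(f, f) for f in basis]
recon = tuple(sum(c * f[j] for c, f in zip(coeffs, basis)) for j in range(3))
assert all(abs(r - x) < 1e-12 for r, x in zip(recon, g))

# apply the linear functional "first coordinate" to both sides
lhs = sum(c * f[0] for c, f in zip(coeffs, basis))
assert abs(lhs - g[0]) < 1e-12
```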
In order to finish the problem, we will determine the asymptotic behavior of $a_1(g)$ for large $p$. We will show that for sufficiently large $p$, the contribution from old forms $f$ to the left-hand side of Equation (\ref{eqn:michelram}) is asymptotically smaller than $a_1(g)$. Thus, some newform $f$ must contribute a nonzero term.
Our methods will prove an unconditional nonvanishing result only for primes $p$ that remain inert in the quadratic field $K$. For split primes, we prove a nonvanishing result conditional on the nonnegativity of the central derivative $L' (k, f \times \Theta_\chi)$. Furthermore, in both cases, the effective bound we obtain for $p$ in the nonvanishing result will be conditional on this conjectured nonnegativity.
\section{Acknowledgements}
The author would like to thank Dinakar Ramakrishnan for helpful guidance and valuable support. This work was done at the California Institute of Technology, partially with the support of a Summer Undergraduate Research Fellowship in the summer of 2011, and partially as the author's undergraduate thesis.
\section{Old Form Contributions}
Now we return to the Michel-Ramakrishnan average formula. Fix a level $N$ and weight $2k$, a quadratic field $K$ of discriminant $D$, and an ideal class $A$. Let $\epsilon$ be the character corresponding to the discriminant $D$, so for $p$ a prime, $\epsilon(p)=1$ if and only if $p$ splits in $K$. Suppose now that $p$ is such a prime and $N = p^2$.
For any newform $f$ of level $1$, $p$ or $p^2$, the twisted $L$-function satisfies an odd functional equation. Thus, we have $L(k, f \times \Theta_\chi) = 0$ for all such newforms, and hence for all forms of level $p^2$. We are interested in the value of the first derivative $L'(k, f \times \Theta_\chi)$.
We have calculated a holomorphic modular form $g$ of level $N$ and weight $2k$ such that, for all $f$ of level $p^2$ and weight $2k$, the central value of the derivative is given by the Petersson inner product
$$L'(k, f \times \Theta_\chi) = (f, g).$$
Given an orthogonal (but not necessarily orthonormal) basis of functions $f_i$, we can expand $g$ in terms of this basis as
$$\sum \frac{(f_i,g)}{(f_i,f_i)} f_i = g.$$
Writing the inner product $(f_i, g)$ as the derivative of an $L$-function, we get
$$\sum \frac {L'(k, f_i \times \Theta_A)} {(f_i,f_i)} f_i = g.$$
Finally, we can apply a linear functional to this equality of forms, say by taking the first Fourier coefficient of each side. We obtain
$$\sum \frac {L'(k, f_i \times \Theta_A)} {(f_i,f_i)} a_1(f_i) = a_1(g).$$
This is the ``averaging technique'' of Michel and Ramakrishnan.
We wish to prove a nonvanishing result, of the type
$$L'(k, f_i \times \Theta_\chi) \neq 0,$$
for some new form $f_i$ of level $p^2$. We know from our asymptotic bounds that $a_1 (g)$ is nonzero. But old forms also contribute to the sum. Our aim is to show that the old form contribution is asymptotically smaller than $a_1 (g)$.
First choose an orthogonal basis as follows: let $f^{(p^2,i)}$, $f^{(p,i)}$ and $f^{(1,i)}$ denote bases of newforms in levels $p^2$, $p$ and $1$, respectively. For any form $f$ and integer $q$, let $f_q(z) = f(qz)$. Then the collection $f^{(p^2,i)}$, $f^{(p,i)}_1$, $f^{(p,i)}_p$, $f^{(1,i)}_1$, $f^{(1,i)}_p$, $f^{(1,i)}_{p^2}$ forms a basis for modular forms in level $p^2$. This is almost, but not quite, an orthogonal basis, in the following sense: two such forms $f^{({j_1},{i_1})}_{q_1}$ and $f^{({j_2},{i_2})}_{q_2}$ are orthogonal unless $j_1=j_2$ and $i_1 = i_2$. (This follows from our preliminary calculations.) Thus, in order to orthogonalize this basis, it suffices to apply Gram-Schmidt separately to every pair $f^{(p,i)}_1$, $f^{(p,i)}_p$, and every triple $f^{(1,i)}_1$, $f^{(1,i)}_p$, $f^{(1,i)}_{p^2}$.
Using the inner products computed above, we find that $f^{(p,i)}_1$ and
$$\tilde{f}^{(p,i)}_p = f^{(p,i)}_p - p^{-2k} a_p(f^{(p,i)}) f^{(p,i)}_1$$
are orthogonal, as are $f^{(1,i)}_1$,
$$\tilde{f}^{(1,i)}_p = f^{(1,i)}_p - p^{-2k} \frac{p}{p+1} a_p(f^{(1,i)}) f^{(1,i)}_1$$
and
$$\tilde{f}^{(1,i)}_{p^2} = f^{(1,i)}_{p^2} - C_p f^{(1,i)}_p - C_1 f^{(1,i)}_1.$$
Here
$$C_p = \left(1 - \frac{1} {(p+1)^2 - p^{-2k+2} a_p(f)^2} \right) p^{-2k} a_p(f^{(1,i)})$$
and
$$C_1 = \left( \frac{-1} {(p+1)^2 - p^{-2k+2} a_p(f)^2} \right) \frac{p^{-4k+1}} {p+1} a_p (f^{(1,i)})^2 - \frac{p^{-2k}} {p+1}.$$
(The coefficients $C_p$ and $C_1$ are dependent on $f$, but we suppress this dependence in our notation.)
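The orthogonalization above is ordinary Gram-Schmidt within each family. A generic numerical sketch (toy vectors; the actual computation uses the Petersson inner products listed earlier):

```python
# Gram-Schmidt sketch on toy vectors, mirroring the orthogonalization of
# f_1, f_p, f_{p^2}; the dot product stands in for the Petersson inner product.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(u, v) / dot(u, u)  # projection coefficient onto u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

f1, fp, fp2 = (1.0, 1.0, 0.0), (1.0, 2.0, 1.0), (0.0, 1.0, 3.0)
o1, o2, o3 = gram_schmidt([f1, fp, fp2])
assert abs(dot(o1, o2)) < 1e-12
assert abs(dot(o1, o3)) < 1e-12
assert abs(dot(o2, o3)) < 1e-12
```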
We want to estimate the contribution of each of these terms to the Michel-Ramakrishnan average.
We already have the asymptotic result
$$a_1(g) = 2 \pi \left| D \right| ^{-\frac{1}{2}} r_A(1) \left( \frac{h}{u} \log {p^2} + O(1) \right).$$
Supposing $A$ is the class of principal ideals, so $r_A(1) = 1$, we have
$$a_1(g) = 2 \pi \left| D \right| ^{-\frac{1}{2}} \left( \frac{h}{u} \log {p^2} + O(1) \right).$$
A word about notation: we use $f^{(j,i)}$ to represent a new eigenform in level $j$, and $f^{(j,i)}_q$ the corresponding old eigenforms in level $p^2$. Thus, when we write an inner product of the form $(f^{(j,i)},f^{(j,i)})$, we mean for the inner product to be computed in level $j$; but when we write it as $(f^{(j,i)}_q,f^{(j,i)}_q)$, we mean the inner product computed in level $p^2$.
\subsection{Old Forms from Level $1$}
We begin with the old forms from level $1$. Since these forms and their $L$-functions are independent of $p$, we expect that their contribution to the sum will be bounded in $p$. We now confirm this intuition; in fact, we will see that it decays as $O(p^{-2})$. (The additional factor of $p^2$ comes from the change of fundamental domain, which increases the $(f,f)$ denominator by a factor of $p(p+1)$.)
For each normalized eigenform $f$ of level $1$ (here we omit the superscript $(1,i)$), let $c_1 = (f,f)$ be the inner product computed in level $1$, and let
$$c_2 = \max_\chi \left| L' (k, f \times \Theta_\chi) \right|,$$
where the maximum is taken over all Dirichlet characters $\chi$ modulo $D$. We want to show that the three corresponding terms of the Michel-Ramakrishnan average are bounded in terms of $c_1$ and $c_2$.
For $f_1$, since $\Theta_A$ is an average over $\chi$ of the forms $\Theta_\chi$, we have
$$\left| \frac {L'(k, f_1 \times \Theta_A)} {(f_1,f_1)} a_1(f_1) \right| \leq \frac{c_2} {p(p+1) c_1}.$$
For $\tilde{f}_p$, we need to estimate
$$\frac {L' (k, \tilde{f}_p \times \Theta_A)} { (\tilde{f}_p , \tilde{f}_p)} a_1(\tilde{f}_p),$$
where
$$\tilde{f}_p = f_p - p^{-2k} \frac{p}{p+1} a_p(f) f_1.$$
We can bound the numerator by elementary means. To bound the denominator away from zero we use the Ramanujan bound for $a_p(f)$. Loosely speaking, this bound guarantees that $f_p$ is close to orthogonal to $f_1$, which guarantees that $\tilde{f}_p$ is not too close to zero. Using these bounds, we find that the contribution of $f_p$ to the Michel-Ramakrishnan average is bounded by
$$\left| \frac {L' (k, \tilde{f}_p \times \Theta_A)} { (\tilde{f}_p , \tilde{f}_p)} a_1(\tilde{f}_p) \right| \leq 72 \frac{c_2}{p(p+1)c_1}.$$
By similar means, using elementary bounds and the Ramanujan bound to estimate the $\tilde{f}_{p^2}$ contribution, we find that it is bounded as
$$\left| \frac {L' (k, \tilde{f}_{p^2} \times \Theta_A)} { (\tilde{f}_{p^2} , \tilde{f}_{p^2})} a_1(\tilde{f}_{p^2}) \right| \leq \frac{ 32 p^{-2k} c_2} {\frac{1}{10} p^{-4k+2} c_1} \left( 5 p^{-2k-1} \right) = 1600 p^{-3} \frac{c_2} {c_1}.$$
In short, this bound is $O(p^{-3})$.
Summing over all $f^{(1,i)}_q$, we find that the contribution to the sum from all forms of level $1$ is bounded as $O(p^{-2})$. In particular, since the dimension of the space of forms of level $1$ and weight $2k$ is at most $\frac{k} {12} + 1$, the total contribution is bounded by
$$\left( \frac{k} {12} + 1 \right) \frac{c_2}{c_1} ( 73 p^{-2} + 1600 p^{-3}).$$
If we assume further that the central derivatives $L'(k, f \times \Theta_\chi)$ are all nonnegative, then by the averaging formula in level $1$ we know that
$$\sum \frac{c_2}{c_1} \leq \frac{ 2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \frac{h}{u} (\log \left| D \right| - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) + E),$$
where the error term $E$ is bounded by
$$\left| E \right| \leq 192 \left| D \right| ^\frac{3}{2} (\log \left| D \right| + 1).$$
Thus, the contribution to the sum from old forms from level $1$ is bounded by
$$C_k C_p \frac{ 2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \frac{h}{u} \left( \log \left| D \right| - 2 \log{2 \pi} + 2 \frac {\Gamma'} {\Gamma}(k) + 2 \frac{L'}{L}(1, \epsilon) + E \right),$$
where
$$C_k = \frac{k} {12} + 1$$
and
$$C_p = 73 p^{-2} + 1600 p^{-3}.$$
\subsection{Old Forms from Level $p$, where $\epsilon(p) = 1$}
Now we consider the old forms from level $p$. Their contribution will be bounded by $O(p^{-1} \log p)$.
Let $f$ be a normalized eigenform of level $p$ (here we again suppress the superscript $(p,i)$), let $c_1 = (f,f)$ be the inner product computed in level $p$, and let
$$c_2 = \max_\chi \left| L' (k, f \times \Theta_\chi) \right|,$$
where the maximum is taken over all Dirichlet characters $\chi$ modulo $D$. We want to estimate the two corresponding terms of the Michel-Ramakrishnan average in terms of $c_1$ and $c_2$.
First consider the contribution from $f_1$, for each $f$. We have
$$\frac {L'(k, f_1 \times \Theta_A)} {(f_1,f_1)} a_1(f_1) = \frac{1}{p} \frac {L'(k, f \times \Theta_A)} {(f,f)} a_1(f).$$
By Gross-Zagier in level $p$, the sum of these terms behaves asymptotically as
$$O \left( \frac{\log p}{p} \right).$$
Specifically, the sum is bounded by
\begin{eqnarray*}
p^{-1} \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \\
\Big[ \frac{h}{u} \big(\log p + (1 - p^{-1}) \log \left| D \right| - 2 (1 - p^{-1}) \log{2 \pi} + 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma}(k) + 2 (1 - p^{-1}) \frac{L'}{L}(1, \epsilon) \big) + o(1) \Big],
\end{eqnarray*}
with error term bounded by
$$ 2^{10-2k}p^{-k+1/4}\left| D \right|^k + 192 \left| D \right| ^\frac{3}{2} (\log \left| D \right| + 1) p^{-1}.$$
Next consider the contribution from $\tilde{f}_p$. To bound this sum we need a positivity result, which is known unconditionally \cite{JN}: $L'(k, f \times \Theta_A) \geq 0$. Since $a_1(f) = 1$, we have
$$\frac {L'(k, f \times \Theta_A)} {(f,f)} a_1(f) \geq 0$$
for each $f$. Thus, the Michel-Ramakrishnan bound on the average of these terms translates to individual bounds on each term. Using this result, the usual methods give
$$\left| \frac {L'(k, f_p \times \Theta_A)} {(f_p,f_p)} a_1(f_p) \right| \leq 2 p^{-\frac{3}{2}} \frac{L'(k, f \times \Theta_A)} {(f,f)} a_1(f).$$
Summing over all new forms $f$ of level $p$, and using the asymptotic results to bound the Michel-Ramakrishnan average on the right-hand side, we see that the contribution of all the $f_p$ is at most
\begin{eqnarray*}
& & 2 p^{-\frac{3}{2}} \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \Big[ \frac{h}{u} \big(\log p + (1 - p^{-1}) \log \left| D \right| \\
& & - 2 (1 - p^{-1}) \log{2 \pi} + 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma}(k) + 2 (1 - p^{-1}) \frac{L'}{L}(1, \epsilon) \big) + o(1) \Big],
\end{eqnarray*}
where again the error term is bounded by
$$ 2^{10-2k}p^{-k+1/4}\left| D \right|^k + 192 \left| D \right|^\frac{3}{2} (\log \left| D \right| + 1) p^{-1}.$$
\subsection{Old Forms from Level $p$, where $\epsilon(p) = -1$}
The notation is as in the previous section. As in the case $\epsilon(p) = 1$, we find that
$$\frac {L'(k, f_1 \times \Theta_\chi)} {(f_1,f_1)} a_1(f_1) = \frac{1}{p} \frac {L'(k, f \times \Theta_\chi)} {(f,f)} a_1(f),$$
and this sum behaves asymptotically as
$$O \left( \frac{\log p}{p} \right).$$
Precisely, we can estimate the sum as
\begin{eqnarray*}
& & \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \Big[ \frac {h} {u} (2 (1 - p^{-1}) \log 2 \pi - \log p \\
& & - (1 - p^{-1})\log \left| D \right| - 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma} (k) + 2 p^{-1} \frac{L'}{L} (1, \epsilon)) + o(1) \Big],
\end{eqnarray*}
with error term bounded by
$$192 \left| D \right|^\frac{3}{2} (\log \left| D \right| + 1) p^{-1}.$$
By the same methods, we can bound the contribution of $\tilde{f}_p$ by
$$\left| \frac {L'(k, \tilde{f}_p \times \Theta_\chi)} { (\tilde{f}_p, \tilde{f}_p) } a_1 ( \tilde{f} _p) \right| \leq 40 p^{-\frac{3}{2}} (2 \log p + 2 \log 2 \pi k + \log \left| D \right|) \frac {L ( k, f \times \Theta_\chi)} {(f,f)}.$$
Again, the Gross-Zagier estimates show that this is bounded as $o(1)$. Specifically, the sum of these terms is
\begin{eqnarray*}
& & 40 p^{-\frac{3}{2}} (2 \log p + 2 \log 2 \pi k + \log \left| D \right|) \frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \\
& & \Big[ \frac {h} {u} \left(2 (1 - p^{-1}) \log 2 \pi - \log p - (1 - p^{-1})\log \left| D \right| - 2 (1 - p^{-1}) \frac {\Gamma'} {\Gamma} (k) + 2 p^{-1} \frac{L'}{L} (1, \epsilon) \right) + o(1) \Big],
\end{eqnarray*}
with error term bounded by
$$192 \left| D \right| ^\frac{3}{2} (\log \left| D \right| + 1) p^{-1}.$$
\section{The Main Result and Conditional Effective Bounds}
We have seen that old forms contribute a term of growth $o(1)$ to the Michel-Ramakrishnan average. This is true for all primes $p$ if the nonnegativity result holds for the derivative $L'$, and unconditionally true for $p$ with $\epsilon(p) = -1$. Since the average grows asymptotically as $O(\log p)$, we know that for sufficiently large $p$,
$$\sum_f \frac{L'(k, f \times \Theta_A)} {(f,f)} a_1(f) \neq 0,$$
where the sum is taken over newforms $f$ in level $p^2$ and weight $2k$. Therefore, there is at least one newform $f$ with
$$L'(k, f \times \Theta_A) \neq 0,$$
conditionally if $\epsilon(p) = 1$ and unconditionally if $\epsilon(p) = -1$.
Using the estimates of the old form contributions to the Gross-Zagier sum, we can prove the following effective bound, conditional on the nonnegativity of the central derivative $L'$: for all primes $p > 10^4 k \left| D \right|$, there is a newform in level $p^2$ such that the corresponding twisted $L$-function has only a single zero at the central point $k$ of the functional equation. (This bound is by no means the best possible.)
Using the bounds
$$\log x < 4 x ^ \frac{1}{4} \quad (x>1),$$
$$0 < \frac{\Gamma'}{\Gamma} (k) < \log k,$$
and
$$\left| \frac{L'}{L} (1, \epsilon) \right| < \log \left| D \right|,$$
one easily verifies that the contribution to the Gross-Zagier sum from all old forms combined is less than
$$\frac {2^{4k-1} \pi^{2k}} {(2k-2)!} \left| D \right| ^{-\frac{1}{2}} \frac{h}{u}.$$
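As a quick numerical spot-check of the first elementary bound (it holds for all $x > 1$, since $4 x^{1/4} - \log x$ is increasing on $x \geq 1$ and equals $4$ at $x = 1$):

```python
import math

# Spot-check the elementary bound log(x) < 4*x**(1/4) for x > 1 at sample points.
samples = [1.0001, 2, 10, 1e3, 1e6, 1e12, 1e30]
assert all(math.log(x) < 4 * x ** 0.25 for x in samples)
```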
But from the result at the end of Section \ref{sec:asymptoticestimates} it is easy to see that the entire Michel-Ramakrishnan sum is necessarily larger than this bound. Thus, there is a newform that contributes a nonzero value to the sum.
% arXiv:1909.07733
\section{Introduction}
Since Einstein proposed his general theory of relativity in 1915, much research has been devoted to unifying General Relativity (GR) and Electromagnetism as two fundamental interactions in nature. The earliest proposals date back to the 1920s, with Kaluza–Klein theory \cite{Kaluza:1921tu,Klein}, a classical unified field theory built in five-dimensional spacetime. Recently, motivated by string theory as a requirement for describing a consistent theory of quantum gravity, extra dimensions have been the subject of much attention. Besides string theory, there are some other theories proposing the necessity of extra dimensions:
\begin{itemize}
\item Large extra dimensions, motivated mostly by the ADD model of Arkani-Hamed, Dimopoulos, and Dvali, together with Antoniadis \cite{ArkaniHamed:1998rs,Antoniadis:1998ig}, proposed to solve the hierarchy problem, in which the difference between the Standard Model interactions and GR manifests itself notably in their dissimilar coupling strengths. While the electromagnetic, weak and strong forces differ by just six orders of magnitude, the gravitational interaction lags behind by a further thirty-three orders.
\item Warped extra dimensions, such as those proposed in the Randall–Sundrum (RS) model \cite{Randall:1999ee}, in which our observable universe is modeled as a four-dimensional hypersurface, known as the 3-brane, embedded in a five-dimensional space usually called the bulk. The novel idea of the brane-world picture is that all the gauge interactions described by the Standard Model are confined to the 3-brane, while the gravitational interaction can spread into the fifth dimension of the space.
\item Universal extra dimensions, proposed and first studied in Ref. \cite{Appelquist:2000nn}, assume, at variance with the ADD and RS approaches, that all fields propagate universally in the extra dimensions.
\end{itemize}
The size and the shape of the extra dimensions should be related to the fundamental energy scales of particle physics: the cosmological scale, the density of dark energy, the TeV electroweak scale, or the scale of ultimate unification. More likely, the extra dimensions are microscopic, in which case high-energy particle accelerators \cite{Giddings:2001bu,Dimopoulos:2001qe} and cosmic-ray experiments \cite{Nagano:2000ve,Emparan:2001kf} are the only means to detect their physical effects. The LHC experiments will have direct sensitivity to probe extra dimensions, through the production of new particles that move in the extra space. There is also a chance that, due to the existence of extra dimensions, microscopic black holes may be detected at the LHC \cite{Dimopoulos:2001hw,Cheung:2001ue} or in the highest energy cosmic rays \cite{Feng:2001ib,Anchordoqui:2001cg}.
On the other hand, Einstein derived gravitation from the underlying concept of spacetime itself; his theory was not provoked by observational facts but was motivated on a purely theoretical basis, and it fundamentally changed our understanding of spacetime, mass, energy, and gravity. GR had features and implications beyond Newton's theory of gravitation, namely, light bending, time dilation, and gravitational redshift \cite{Weinberg}. These effects have been verified experimentally and to this date are being tested to ever higher accuracy. The gravitational waves recently detected by the LIGO and Virgo collaborations \cite{Abbott:2016blz} are another profound implication of GR. The detected signals agree with predictions based on black holes in GR up to $5\sigma$ \cite{TheLIGOScientific:2016pea}.
Gravitational redshift is a very useful tool in astrophysics. The phenomenon was confirmed by the Pound–Rebka experiment in 1959 \cite{Pound:1960zz}. It helps us test our knowledge of the structure of stars whose internal structure differs from that of the Sun and other normal stars.
Gravitational lensing occurs when light rays pass close to a massive body; it was first confirmed by Eddington in 1919 \cite{Dyson:1920cwa}.
About one century after that first measurement, gravitational lensing is still one of the major tools of cosmology \cite{Peebles:2004qg,Song:2008xd}, astrophysics \cite{Fomalont:1976zz,Fomalont:2009zg}, and astronomy \cite{Bull:2015lja,Rusin:2002tq,Reyes:2010tr}.
Time dilation measures the difference in the time elapsed between two events as recorded by observers situated at different distances from a gravitating mass. The light travel time delay, sometimes called the fourth classical test of GR, was first introduced by Shapiro in 1964 \cite{Shapiro:1964uw}. A significant improvement was reported in \cite{Bertotti:2003rm} from Doppler tracking of the Cassini spacecraft on its way to Saturn. In addition to the above theoretical motivations, there have been advances in technologies for the high-precision measurement of time and frequency, namely optical lattice clocks \cite{Amenomori:2006bx} and attosecond laser technologies \cite{Takamoto:2005}.
Time delay corrections are also very important in the Global Positioning System (GPS) \cite{Ashby:2003vja}. The clocks on GPS satellites tick faster than clocks on the Earth's surface, so a correction must be applied to the satellite measurements.
In addition to the idea of extra dimensions, the other important implication motivated by string theory is the non-commutativity of space \cite{Seiberg:1999vs,Ardalan:1998ce,Kempf:1994su,AmelinoCamelia:2008qg}. It has drawn a lot of interest in a wide range of areas from condensed matter physics to cosmology, high energy physics, and astrophysics \cite{Connes:1997cr,Fatollahi:2006tp,Zhang:2011zzg}.
The simplest non-commutativity that one can postulate is the commutation relation $[x_{i},x_{j}]=i \theta_{ij}$, where $\theta_{ij}$ is an antisymmetric (constant) tensor of dimension $(\mathrm{length})^2$. The parameter $\theta$ measures the amount of coordinate non-commutativity in the coordinate coherent states (CCS) approach \cite{Snyder:1946qz,Doplicher:1994tu}, in which the concept of a point-like particle becomes physically meaningless and must be replaced with its best approximation, i.e., a minimal-width Gaussian distribution of mass.
In fact, the CCS approach to non-commutative effects can cure the singularity problems at the final stage of black hole evaporation. This effective approach may be considered an improvement to semi-classical gravity and a way to understand non-commutative effects. Motivated by this idea, the Schwarzschild black hole inspired by non-commutative geometry studied in \cite{Nicolini:2005vd} was extended to the Reissner–Nordström model in \cite{Ansoldi:2006vg,Alavi:2009tn}, generalized to higher dimensions in \cite{Rizzo:2006zb}, and to charged black holes in higher dimensions \cite{Nozari:2007ya,Spallucci:2009zz,Gingrich:2010ed}. Furthermore, in recent years this non-commutative approach has attracted significant interest in cosmology \cite{Calmet:2005mc,Alavi:2004aq}, holography \cite{Pramanik:2014mya,Zeng:2014xpa,Pramanik:2015eka}, and black hole physics \cite{Nicolini:2009gw,Liang:2012vx,Ghosh:2017odw,Wei:2015dua,Myung:2006mz,Nozari:2007rh,Banerjee:2008gc,Miao:2015npc}.
On the other hand, studying the effects predicted by GR is important in the vicinity of compact objects, such as neutron stars and black holes. The first light detected from regions close to black holes was discovered by the ASCA satellite \cite{Tanaka:1995en,Yaqoob:1998ef}. For astrophysical effects in the vicinity of higher dimensional black holes, see \cite{Connell:2008ek} and references therein. The effects of black hole parameters such as charge and rotation on gravitational lensing have been studied in \cite{Kraniotis:2005zm} for the Kerr metric and in \cite{Briet:2008mz} for charged solutions (from free charges, as in RN black holes, to geometrical charges, as in Kaluza-Klein black holes coming from the compactification of extra dimensions). It has also been investigated in \cite{BinNun:2010se} how the detection of lensed images of black holes can determine the form of the black hole metric, but the calculations in \cite{Kraniotis:2005zm,Briet:2008mz,BinNun:2010se} are in the strong-field regime of GR.
The authors in \cite{Pardo:2018ipy} have shown that the Virgo and LIGO results for the GW170817 data \cite{Abbott:2016blz} are most consistent with GR, but their results do not hold for extra-dimensional theories with compact extra dimensions in the strong-energy limit \cite{ArkaniHamed:1998rs,Antoniadis:1998ig,Randall:1999ee}, nor for theories with larger extra dimensions, typically of cosmological size \cite{Dvali:2000hr}, in the weak-field regime.
The purpose of the current work is to obtain explicit expressions for the three aforementioned GR effects in the gravitational field of a black hole in commutative and non-commutative spaces with extra dimensions. Inspired by this idea, we investigate deviations from GR predictions due to the gravitational leakage into the extra dimensions, and we are especially interested in possible observational or experimental consequences of extra dimensions in such gravitational systems. This issue deserves further research along the lines that we have already proposed in \cite{Karimabadi:2018sev}.
The structure of the paper is as follows. Section II introduces the Schwarzschild black hole in higher dimensions in some detail. Similarities and differences of GR measurements in four and more dimensions are illustrated in Section III; in particular, three figures make the comparison easier and clearer. As our results show, if spacetime is truly higher dimensional, then its implications should appear in gravitational measurements around black holes.
The goal of Section IV is to study the effects of the non-commutativity of space on higher dimensional GR measurements. We first present some preliminaries of black holes in non-commutative higher dimensional spaces. Then, we obtain a minimum mass to form a black hole in each extra dimension and investigate the gravitational measurements in non-commutative higher dimensional spaces. Although theories with extra dimensions have received much attention in recent years, the size of the extra dimensions has unfortunately not been investigated properly. To tackle this problem, we finally obtain a lower bound on the size of the extra dimensions using the bound obtained for the non-commutative length scale in our previous work \cite{Karimabadi:2018sev}.
\section{Schwarzschild black hole in higher dimensions}
Amongst the various types of black hole solutions of the Einstein field equations, a natural higher dimensional generalization of the Schwarzschild metric, also known as the Schwarzschild-Tangherlini metric \cite{Tangherlini:1963bw}, has been assumed to be stable, like its four-dimensional counterpart. The spacetime around such an uncharged, stationary, spherically symmetric black hole in $(d+1)$ dimensions is described by
\begin{equation}\label{Sch1} ds^2=B(r) dt^2-B^{-1}(r) dr^2-r^2 d\Omega^2_{d-1}\,,\end{equation}
where $d\Omega^2_{d-1}$ denotes the line element of the unit $(d-1)$-sphere with area $A_{d-1}=\frac{2 \pi^{d/2}}{\Gamma(d/2)}$ and $B(r)$ is given by
\begin{equation}\label{func1}B(r)=1-\frac{\mu_0}{r^{d-2}}\,.\end{equation}
The constant parameter $\mu_0$ is related to the mass of the black hole by \cite{Myers:1999psa}
\begin{equation}\label{sm} M=\frac{(d-1) A_{d-1}\mu_0}{16 \pi G_{d+1}} \,,\end{equation}
where $G_{d+1}=G_4 L^{d-3}$ is the $(d+1)$-dimensional gravitational constant and $L$ is the size of the extra dimensions, so
\begin{equation}\label{func2}B(r)=1-\frac{8 MG_4 L^{d-3}\, \Gamma[\frac{d}{2}]}{(d-1)\,\pi^{\frac{d}{2}-1} \, r^{d-2}}\,.\end{equation}
For later convenience, we use $G_4=1$ and define the dimensionless variables $x=\frac{r}{\ell_{p}}$, $\eta=\frac{M}{\ell_{p}}$, and $\alpha=\frac{L}{\ell_{p}}$, where $\ell_{p}$ is the Planck length.
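As a quick numerical sanity check of the metric function (\ref{func2}) in these dimensionless variables, the following Python sketch (the function names and the default values $\eta=2.5$, $\alpha=1.5$ are our choices, matching the figures below, not part of any library) evaluates $B$ and the horizon radius obtained from $B(x)=0$, which here has a closed form:

```python
import math

def B(x, d, eta=2.5, alpha=1.5):
    """Dimensionless metric function B from Eq. (func2):
    x = r/l_p, eta = M/l_p, alpha = L/l_p, with G_4 = 1."""
    return 1.0 - 8.0 * eta * alpha**(d - 3) * math.gamma(d / 2) / (
        (d - 1) * math.pi**(d / 2 - 1) * x**(d - 2))

def horizon(d, eta=2.5, alpha=1.5):
    """Solve B(x) = 0 exactly:
    x_h^{d-2} = 8 eta alpha^{d-3} Gamma(d/2) / ((d-1) pi^{d/2-1})."""
    return (8.0 * eta * alpha**(d - 3) * math.gamma(d / 2) /
            ((d - 1) * math.pi**(d / 2 - 1)))**(1.0 / (d - 2))

# d = 3 reproduces the familiar Schwarzschild radius x_h = 2*eta = 5,
# and the horizon shrinks as d grows, as discussed below.
print([round(horizon(d), 3) for d in range(3, 7)])
```

For $d=3$ this recovers $x_h=2\eta$, and the monotone decrease of $x_h$ with $d$ anticipates the behaviour seen in Fig.~(\ref{f1}).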
It is worth mentioning that if the gravitational radius of a black hole is much smaller than the characteristic length of the extra dimensions, then the black hole can be very well described by asymptotically flat solutions like those in \cite{Tangherlini:1963bw,Myers:1986un,Scardigli:2003kr,Nouicer:2007pu} for higher dimensional static and rotating black holes (a review of higher dimensional black holes can be found in \cite{Emparan:2008eg}, see also references therein). Thus, from this point of view, one might conclude that there should not be any practical difference between these kinds of black holes and the small black holes in the ADD and RS models discussed in the introduction.
We have plotted the $g_{00}$ component of the higher dimensional Schwarzschild metric (\ref{Sch1}) as a function of $x$ for different spatial dimensions in Fig.~(\ref{f1}). The location of the event horizon is determined by the equation $B(r)=0$; as seen in Fig.~(\ref{f1}), this occurs at smaller distances in higher dimensions, which indicates that gravity in four-dimensional spacetime is stronger than in higher dimensions. This can also be checked by noting that in higher dimensions the $g_{tt}$ curves tend more rapidly to the $g_{tt}$ of flat spacetime.
\begin{figure}[H]
\centering
\includegraphics[width=10cm,height=6.5cm]{SHED.pdf}
\caption{The $g_{tt}$ component for $\alpha\!=\!1.5$ and $\eta\!=\!2.5$. The number in parentheses is the spatial dimension $d$; in units where $c=G=\hbar=1$, these values of $\alpha$ and $\eta$ are satisfactory.}
\label{f1}
\end{figure}
\section{Gravitational effects in higher dimensions}
In this section, we obtain expressions for the three aforementioned GR effects in the case of an extra dimensional Schwarzschild black hole as the gravitational system. In order to compare the behaviour of extra dimensions with GR, we perform a numerical analysis by plotting the relevant quantities. A general remark is in order here: the details of the calculations in this section for four dimensions can be found in Ref.~\cite{Weinberg}, and the calculations for extra dimensions can be carried out along the same lines.
{\bf{\emph{Redshift:}}} When light climbs out of a gravitational field, it loses energy and its wavelength is shifted toward the red. The gravitational redshift is denoted by $z =\Delta \lambda/\lambda$, where $\Delta\lambda$ is the difference between the observed and emitted wavelengths and $\lambda$ is the wavelength at the source. For radiation emitted in a strong gravitational field, such as that coming from the surface of a neutron star or from close to the event horizon of a black hole, the gravitational redshift can be very large. Hence, there is a shift in the spectral lines of light around the Schwarzschild metric (\ref{Sch1}), whose maximum value is
\begin{equation} \label{reds1} z=\frac{\omega_1}{\omega_2}\Big|_{max}-1=\sqrt{\frac{B(r_2)}{B(r_1)}}-1\,,\end{equation}
where $\omega_2$ and $\omega_1$ are the frequencies received by the observer and emitted by the source, respectively (for more details see $\S\,14.3$ in Ref.~\cite{Weinberg}). When the light is emitted from radius $r_1$ and received at $r_2\rightarrow\infty$, the redshift measured by an asymptotic observer turns out to be
\begin{equation}\label{reds2} z=\left[1-\frac{8 M L^{d-3}\, \Gamma[\frac{d}{2}]}{(d-1)\,\pi^{\frac{d}{2}-1} \, r_1^{d-2}}\right]^{-1/2}-1.\end{equation}
We have plotted the redshift factor (\ref{reds2}) for different spatial dimensions in Fig.~(\ref{f2}). Comparing the graphs confirms that in higher dimensions the spacetime has lower curvature than in GR, i.e., gravity is diluted in the extra dimensions. In other words, it can be inferred from the figure that in higher dimensions the redshift grows appreciably only closer to the black hole event horizon.
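This behaviour can be checked directly from (\ref{reds2}). A minimal sketch in the same dimensionless variables (the defaults $\eta=2.5$, $\alpha=1.5$ are again our choices from the figures):

```python
import math

def redshift(x, d, eta=2.5, alpha=1.5):
    """Asymptotic gravitational redshift z of Eq. (reds2) for light
    emitted at x = r_1/l_p, in units G_4 = 1."""
    B = 1.0 - 8.0 * eta * alpha**(d - 3) * math.gamma(d / 2) / (
        (d - 1) * math.pi**(d / 2 - 1) * x**(d - 2))
    return B**-0.5 - 1.0

# For d = 3 this is the textbook result z = (1 - 2M/r)^{-1/2} - 1:
# at x = 10 with eta = 2.5, z = 1/sqrt(0.5) - 1 ~ 0.414.
print(redshift(10.0, 3))
# At the same x the redshift is smaller in higher dimensions,
# reflecting the dilution of gravity:
print([round(redshift(10.0, d), 4) for d in (3, 4, 5, 6)])
```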
\begin{figure}
\centering
\includegraphics[width=12cm,height=7cm]{rs.pdf}
\caption{Redshift for different values of $d$ in terms of $x$ for $\alpha=1.5$ and $\eta=2.5$. The vertical line shows the location of event horizon in each dimension and $x=r_1/\ell_p$.}
\label{f2}
\end{figure}
{\bf{\emph{Deflection of light:}}} When the light passes close to a massive object such as a supernova or a black hole, it is deflected from its straight path by the value
\begin{equation}\label{def} \Delta\phi=2 \int^{\infty}_{r_{\circ}} \frac{1}{r\sqrt{B(r)}} \,\left(\frac{r^2}{r_{\circ}^2}\frac{B(r_{\circ})}{B(r)}-1\right)^{-\frac12} dr-\pi\,,\end{equation}
where $r_{\circ}$ is the closest distance to the massive object depicted in Fig.~(\ref{f10}). The details of the calculation can be found in $\S\,8.5$ of Ref.~\cite{Weinberg}.
The bending and delay of photons by the curvature of spacetime produced by a mass are proportional to $\gamma+1$, where $\gamma$ is the parameterized post-Newtonian parameter; $\gamma$ equals one in GR but zero in Newtonian theory \cite{Weinberg}. Henceforth, we take $\gamma=1$ and also ignore the modification of the Heisenberg uncertainty principle to a Generalized Uncertainty Principle (GUP) given in \cite{Kempf:1994su,Scardigli:2014qka}.
\begin{figure}[H]
\centering
\includegraphics[width=10cm,height=4cm]{def.jpg}
\caption{Gravitational deflection of light around a massive object.}
\label{f10}
\end{figure}
The integration yields the following expression for the bending of light in the vicinity of the Schwarzschild metric (\ref{Sch1}):
\begin{equation}\label{Schdef} \Delta\phi=\frac{4 M L^{d-3} \, \Gamma[\frac{d-1}{2}]}{ \pi^{\frac{d-3}{2}}\,r_\circ^{d-2}}.\end{equation}
The behaviour of this quantity is plotted versus $r_\circ$ for different $d$ in Figs.~(\ref{f3}), using the dimensionless convention introduced in the previous section. The figures are plotted starting from the event horizon in each dimension. Comparing the figures, one observes that as the dimension of spacetime increases, appreciable deflection occurs only at distances closer to the event horizon, or equivalently, the deflection is suppressed except at short distances. Also, the maximum value of $\Delta \phi$, which occurs at the horizon, increases with $d$. However, the general behaviour of the plots is the same for all dimensions. Thus, the results show that the deflection of light in higher dimensions is weaker than in GR.
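A small numerical illustration of (\ref{Schdef}) in the dimensionless variables (the defaults $\eta=2.5$, $\alpha=1.5$ are our assumed values from the figures):

```python
import math

def deflection(x0, d, eta=2.5, alpha=1.5):
    """Weak-field bending angle of Eq. (Schdef) in dimensionless form:
    x0 = r_0/l_p is the closest approach, eta = M/l_p, alpha = L/l_p."""
    return (4.0 * eta * alpha**(d - 3) * math.gamma((d - 1) / 2) /
            (math.pi**((d - 3) / 2) * x0**(d - 2)))

# d = 3 recovers the classic Einstein angle 4M/r_0, i.e. 4*eta/x0:
print(deflection(100.0, 3))   # 0.1
# In higher d the deflection falls off faster with distance:
print([deflection(100.0, d) for d in (3, 4, 5)])
```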
\begin{figure}
\centering
{\includegraphics[width=.5\textwidth]{def30.pdf}}
{\includegraphics[width=.5\textwidth]{def40.pdf}}
{\includegraphics[width=.5\textwidth]{def50.pdf}}
{\includegraphics[width=.5\textwidth]{def60.pdf}}
\caption{\small{Deflection of light around a Schwarzschild black hole for $\alpha=1.5$ and $\eta=2.5$ in different spacetime dimensions.}}
\label{f3}
\end{figure}
\vspace{1cm}
{\bf{\emph{Time delay:}}} According to GR, massive objects curve the spacetime geometry, so the motion of particles, including photons, is affected by this curvature. The bending of spacetime causes the light path to become longer than the straight path, so the light takes more time to travel, consequently generating a time delay. The maximum round-trip excess time delay around the black hole described by (\ref{Sch1}) is given by \cite{Weinberg}
\begin{equation}\label{td1} (\Delta t)_{max}=2\left[t(r_0,r_1)+t(r_0,r_2)-\sqrt{r_1^2-r_0^2}-\sqrt{r_2^2-r_0^2}\,\right],\end{equation}
where the time required for light to go from $r_0$ to $r$ is
\begin{equation} \label{td10} t(r_0,r)=\int_{r_0}^{r}\frac{1}{B(r)} \left(1-\frac{B(r)}{B(r_0)}\,\frac{r_0^2}{r^2}\right)^{-\frac12}\,dr .\end{equation}
The reader can see the details in $\S\,8.7$ of Ref.~\cite{Weinberg}.
In order to calculate the excess (\ref{td1}), we first use the Robertson expansion for the integrand in (\ref{td10}). The leading terms in the expansion, i.e., $\sqrt{ r_1^2 - r_0^2}$ and $\sqrt{ r_2^2 - r_0^2}$, which are what we would expect if light traveled in straight lines at unit velocity, cancel, and only the dominant terms remain:
\begin{eqnarray}\label{tds} \Delta t &\!\!\!\!\simeq \!\!\!\!&\frac{4\eta}{\pi^{\frac{d}{2}-1} \alpha^{d-3}}\Bigg[ \frac{(d-2) \pi^{\frac32} i \csc [\frac{\pi d}{2}]}{x^{d-3} \Gamma \left(\frac{5-d}{2}\right)}-\frac{4}{(d-3)(d-1)}\nonumber\\
&&\left(\left(\frac{\delta^3}{\delta^d}+\frac{\sigma^3}{\sigma^d}\right)\Gamma \left(\frac{d}{2}\right)+\frac{6(d-3)-4(d-4)}{(d-1) x^{d-3}} \left(\frac{i}{2}\right)^{d-1} \Gamma \left(\frac{4-d}{2}\right) \Gamma \left(d\right) \right)\Bigg],
\end{eqnarray}
where we have used the previous dimensionless variables $\alpha,\,\eta$, and $x$.
In fact, these terms evidently produce a GR delay in the time it takes a radar signal to travel to a planet and back. This excess delay is a maximum when the planet is at superior conjunction and the radar signal just grazes the black hole; in this case $r_0$ is almost equal to the event horizon radius and is much smaller than the distances $r_1$ and $r_2$ of the black hole from the earth and the planet, respectively (see Fig.~\ref{f40}). For $d=3$, the excess time delay (\ref{tds}) reduces to the known result
\begin{equation}\label{tds1} (\Delta t)_{max}\simeq 4\eta \left\{ 1+\ln{\left[\frac{4\delta\sigma}{x^2}\right]}\right\}. \end{equation}
The dimensionless parameters $\delta=\frac{r_1}{\ell_p}$ and $\sigma=\frac{r_2}{\ell_p}$ are the orbital radii of the earth and of the reflecting planet around the center of the black hole, as illustrated in Fig.~\ref{f40}.
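As a check of (\ref{tds1}), the following sketch evaluates the maximum excess delay with the parameters used in the figures below ($\eta=2.5$, $\delta=\sigma=100$; the grazing radius $x=2\eta=5$ is the $d=3$ horizon):

```python
import math

def shapiro_delay_3d(eta, delta, sigma, x):
    """Maximum round-trip excess time delay of Eq. (tds1) for d = 3:
    (Delta t)_max ~ 4*eta*(1 + ln(4*delta*sigma/x^2)), with
    eta = M/l_p, delta = r_1/l_p, sigma = r_2/l_p, x = r_0/l_p."""
    return 4.0 * eta * (1.0 + math.log(4.0 * delta * sigma / x**2))

# Grazing incidence at the d = 3 horizon x = 2*eta = 5:
print(shapiro_delay_3d(2.5, 100.0, 100.0, 5.0))
# The delay shrinks as the closest approach x moves outward:
print(shapiro_delay_3d(2.5, 100.0, 100.0, 50.0))
```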
\begin{figure}[H]
\centering
\includegraphics[width=6cm,height=4cm]{td.jpg}
\caption{The actual path of the radar reflection of photons from the earth to a planet and back.}
\label{f40}
\end{figure}
In order to better understand the implication of this excess delay, we have plotted (\ref{tds}) as a function of the distance from the event horizon for $d=3,4,5$, and $6$ in Figs.~(\ref{f4}). It is observed from the figures that it takes a shorter time for a radar signal to travel to the planet and back in higher dimensional spaces compared to GR, which again confirms that gravity is weaker in higher dimensions than in GR. The diagrams are plotted starting from the event horizon in each dimension; their behaviour is the same, and the maximum of the time delay occurs when the signal passes close to the horizon.
\begin{figure}
\centering
{\includegraphics[width=.5\textwidth]{td30.pdf}}
{\includegraphics[width=.5\textwidth]{td40.pdf}}
{\includegraphics[width=.5\textwidth]{td50.pdf}}
{\includegraphics[width=.5\textwidth]{td60.pdf}}
\caption{\small{Excess time delay for a higher dimensional Schwarzschild black hole with $\alpha=1.5$, $\eta=2.5$, $\delta=100$ and $\sigma=100$. In the case of maximum excess time delay, the distances $\delta$ and $\sigma$ are much greater than the location of the horizon.}}
\label{f4}
\end{figure}
\section{Gravitational effects in higher dimensional non-commutative spaces}
In a non-commutative geometry, point-like objects cannot exist, because there is no physical distance smaller than a minimal position uncertainty of order $\sqrt{\theta}$. This effect is implemented in spacetime through the de-localization of matter, which results in a regular, curvature-singularity-free metric. This is exactly what is expected from the existence of a minimal length. The effect of this spreading over space is mathematically implemented by replacing the position Dirac delta function everywhere with a Gaussian distribution of minimal width $\sqrt{\theta}$ \cite{Rizzo:2006zb,Spallucci:2009zz}. Motivated by this result, we choose the mass density of a smeared, static, spherically symmetric source as
\begin{equation}\label{gd} \rho_{M}(r)=\frac{M}{(4\pi \theta)^{d/2}} \exp{\left(-\frac{r^2}{4\theta}\right)}\,,\end{equation}
i.e., the particle mass $M$ is diffused throughout a region of linear size $\sqrt{\theta}$. It is generally assumed that $\sqrt{\theta}$ is close to the Planck length. However, one can define the line element and Einstein's equation with de-localized matter sources, which give regular metrics \cite{Nicolini:2005vd,Rizzo:2006zb}.
The particle-like $(d+1)$-dimensional solution of Einstein's equation with this source is described by the metric (\ref{Sch1}) \cite{Rizzo:2006zb,Spallucci:2009zz} with
\begin{equation} \label{ncss} B^{NC}(r)=1-\frac{\mu(d) [\sqrt{\theta }]^{d-2}}{r^{d-2}} \,\gamma\left(\frac{d}{2},\frac{r^2}{4 \theta}\right),\end{equation}
where $NC$ refers to the non-commutative space and the dimensionless parameter $\mu(d)$ is defined as follows
\begin{equation} \label{mu} \mu(d)=\frac{8 M L^{d-3} }{(d-1) [\pi \theta]^{(d-2)/2}}.\end{equation}
The lower incomplete Euler gamma function $\gamma(a/b,z)$ is defined by
\begin{equation}
\gamma(a/b,x)\equiv \int^{x}_{0} \, e^{-t} \,t^{a/b}\,\frac{dt}{t},
\end{equation}
and the physical mass of the solution is given by integrating the minimal-width Gaussian profile (\ref{gd}),
\begin{equation} \label{mass} M_{\theta}=A_{d-1}\int r^{d-1} \rho_{M}(r)\, dr\,.\end{equation}
For an observer at large distances, $\frac{r}{\sqrt{\theta}}\rightarrow \infty$ or $\frac{\sqrt{\theta}}{r}\rightarrow 0$, this smeared density looks like a small sphere of matter with radius of about $\sqrt{\theta}$, which assures that the metric reduces to the Schwarzschild form.
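Since the lower incomplete gamma function drives all the non-commutative corrections, a small self-contained sketch is useful for evaluating (\ref{ncss}) and checking its commutative limit (pure Python; the series used is the standard one for $\gamma(s,x)$, and working in units $\sqrt{\theta}=1$ is our choice):

```python
import math

def lower_gamma(s, x, terms=300):
    """Lower incomplete gamma function gamma(s, x) via the standard
    series gamma(s, x) = x^s e^{-x} sum_n x^n / (s (s+1) ... (s+n))."""
    total, term = 0.0, 1.0 / s
    for n in range(terms):
        total += term
        term *= x / (s + n + 1)
    return x**s * math.exp(-x) * total

def B_nc(x, d, mu):
    """Non-commutative metric function B^{NC} above, in units
    sqrt(theta) = 1, with x = r/sqrt(theta) and mu = mu(d)."""
    return 1.0 - mu * lower_gamma(d / 2, x**2 / 4.0) / x**(d - 2)

# Sanity check: gamma(1, x) = 1 - e^{-x} exactly.
print(abs(lower_gamma(1.0, 2.0) - (1.0 - math.exp(-2.0))))
# Far from the source gamma(d/2, x^2/4) -> Gamma(d/2), so B_nc tends to
# the commutative Schwarzschild form 1 - mu*Gamma(d/2)/x^{d-2}:
x, d, mu = 20.0, 5, 36.813
print(B_nc(x, d, mu), 1.0 - mu * math.gamma(d / 2) / x**(d - 2))
```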
In contrast to the usual Schwarzschild black hole in GR, which has a single horizon, in (3+1)-dimensional non-commutative space there are several possibilities. An important and interesting question in a non-commutative background is: what is the condition for a black hole to have one (extremal) or two horizons? For a Schwarzschild metric in (3+1)-dimensional space, it is shown in \cite{Nicolini:2005vd} that:
\begin{itemize}
\item For $\eta=\frac{M}{\sqrt{\theta}}<1.9$ there is no horizon for (\ref{ncss}) with $d=3$, shown by the red curve in Fig.~(\ref{mss}).
\item For $\eta=\frac{M}{\sqrt{\theta}}=1.9$ there is a degenerate horizon (extremal black hole) at $x=\frac{r}{\sqrt{\theta}}=3$, shown by the blue curve in Fig.~(\ref{mss}). This mass is called the \emph{minimal mass}, $M=M_0=1.9\sqrt{\theta}$ \cite{Nicolini:2005vd}, and represents the final state of a black hole at the end of the Hawking evaporation process.
\item For $\eta=\frac{M}{\sqrt{\theta}}>1.9$ there are two distinct horizons, shown by the green curve in Fig.~(\ref{mss}). By increasing $M$, i.e., for $M \gg M_0$, the inner horizon shrinks to zero, while the outer horizon approaches the Schwarzschild value $r = 2M$.
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=12cm,height=7cm]{MSSM.pdf}
\caption{{$g_{tt}$ in terms of $x$ for various values of $\frac{M}{\sqrt{\theta}}$. Intercepts on the horizontal axis give radii of the event horizon(s). The dashed curves show the commutative case for different values of mass $M$.}}
\label{mss}
\end{figure}
In fact, the location of the event horizon for a Schwarzschild-like black hole is determined by the equation $B(r)\!=\!0$ in our convention, but this equation cannot be solved exactly for $r_h$ in non-commutative geometry. So we solve it numerically for different values of the mass parameter $M$, as illustrated in Fig.~(\ref{mss}) for $d=3$. The same holds for the parameter $\mu(d)$ in non-commutative extra dimensions, which plays the same role in the metric function (\ref{ncss}) as the black hole mass $M$ does for the Schwarzschild black hole. It is natural to ask, in the context of non-commutative extra dimensions, at what values of $\mu(d)$ a black hole can exist (having at least one degenerate horizon). We denote this value of $\mu(d)$ by $\mu_0 (d)$; the relevant values are calculated and summarized in Tab.~(\ref{tab1}).
\begin{table}[H]
\caption[]{Values of $\mu_0 (d)$ (the minimum of $\mu(d)$ needed to form a black hole) in different dimensions.}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
$d$&$3$&$4$&$5$&$6$&$7$&$8$&$9$ \\ \hline
$\mu_0 (d)$ & $4.29714$&$13.40368$&$36.813$&$94.11858$&$229.84576$&$543.7545$&$1256.8274$ \\ [0.2ex]\hline
\end{tabular}
\label{tab1}
\end{table}
To check the existence of horizons and their radii, we have plotted (\ref{ncss}) as a function of $\frac{r}{\sqrt{\theta}}$ using the values shown in Tab.~(\ref{tab1}); the result is depicted in Fig.~(\ref{newmet}). As expected, black holes do exist in all dimensions, i.e., there is one degenerate horizon in each dimension.
\begin{figure}[H]
\centering
\includegraphics[width=14cm,height=8cm]{newmet.pdf}
\caption{{Time component of the extremal Schwarzschild metric in different dimensions vs. $x=\frac{r}{\sqrt{\theta}}$.}}
\label{newmet}
\end{figure}
\subsection{Gravitational measurements}
Now, we are ready to study the redshift and deflection of light around a higher-dimensional non-commutative Schwarzschild-like geometry.
{\bf{\emph{Redshift:}}}
In the context of non-commutative geometry in the CCS approach, the redshift function is obtained by inserting (\ref{ncss}) in (\ref{reds1}) and doing the necessary calculations; the maximum redshift measured by an asymptotic observer, $r_2\rightarrow \infty$, is given by
\begin{equation}\label{ncsred} z^{NC}=\left[1-\frac{\mu(d) [\sqrt{\theta }]^{d-2}}{r_1^{d-2}} \,\gamma\left(\frac{d}{2},\frac{r_1^2}{4 \theta}\right)\right]^{-1/2} \!-\!1,\end{equation}
which, in the limit $\frac{\sqrt{\theta}}{r_1}\rightarrow 0$, reduces to (\ref{reds2}) for the higher dimensional commutative Schwarzschild solution.
Using (\ref{mu}) and Tab.~(\ref{tab1}), we have plotted the redshift function calculated by (\ref{ncsred}) for different spatial dimensions in Fig.~(\ref{srnred}) in terms of dimensionless radial coordinate $x=\frac{r_1}{\sqrt{\theta}}$. As expected, far away from the gravitational system, all the curves tend to zero and there is no shift in the light wavelength just as in the commutative spaces.
\begin{figure}[H]
\centering
\includegraphics[width=14cm,height=8cm]{rednced.pdf}
\caption{{Redshift of a higher dimensional NC Schwarzschild black hole.}}
\label{srnred}
\end{figure}
We can also see from Fig.~(\ref{srnred}) that, in contrast to the commutative space, there is a maximum of the redshift (a peak) in each dimension. The values of the peaks are the same for all dimensions, i.e., the peak height does not depend on the dimension, but with increasing spacetime dimension the peak occurs at smaller $x$. For instance, there is a regular peak at $x=3$ for the extremal limit $\eta=1.9$ in four dimensions \cite{Karimabadi:2018sev}. It has been shown in \cite{Nicolini:2005vd} that there exists a similar finite maximum temperature at $r_h=3\sqrt{\theta}$ that the black hole can reach before cooling down to absolute zero, which indicates that there is no curvature singularity at the origin and the geometry is regular there. It is also observed that in higher dimensions, as $x=\frac{r_1}{\sqrt{\theta}}$ increases, the redshift decreases more rapidly than in four-dimensional spacetime, which again shows that gravity becomes weaker in higher dimensions.
{\bf{\emph{Deflection of light:}}}
The amount of deflection of light when passing close to a higher dimensional Schwarzschild black hole in a non-commutative space is calculated by inserting the metric (\ref{ncss}) in the relation (\ref{def}), so we have
\begin{eqnarray}\label{defsh1} \Delta\phi&\!\!\!\!=\!\!\!\!&-\pi+2\int_{r_{\circ}}^{\infty} dr \,\Big[\frac{1}{r \sqrt{\frac{r^2}{r_{\circ}^2}-1}}+\frac{4Mr\, L^{d-3}}{(d-1)\, \pi^{\frac{d-2}{2}} r_{\circ}^{d}\,\left(\frac{r^2}{r_{\circ}^2}-1\right){}^{3/2}}\,\gamma\left(\frac{d}{2},\frac{r^2}{4 \theta}\right)\nonumber\\
&\!\!\!\!-\!\!\!\!&\frac{4M r^{3-d} L^{d-3}}{(d-1)\, \pi^{\frac{d-2}{2}} r_{\circ}^{2} \,\left(\frac{r^2}{r_{\circ}^2}-1\right){}^{3/2}}\,\gamma\left(\frac{d}{2},\frac{r^2}{4 \theta}\right)+\frac{4M r^{1-d} L^{d-3}}{(d-1)\, \pi^{\frac{d-2}{2}} \,\left(\frac{r^2}{r_{\circ}^2}-1\right){}^{1/2}}\,\gamma\left(\frac{d}{2},\frac{r^2}{4 \theta}\right)\Big] ,\end{eqnarray}
and after integration the result is as follows
\begin{equation}\label{defsh2} \Delta\phi^{NC}=\frac{8M L^{d-3}}{(d-1)\, \pi^{\frac{d-3}{2}} r_{\circ}^{d-2}} \,\gamma\left(\frac{d+1}{2},\frac{r_{\circ}^2}{4 \theta}\right)\,, \end{equation}
which, in the limit $\frac{\sqrt{\theta}}{r}\rightarrow 0$, reduces to the deflection (\ref{Schdef}). There are again three points that can be inferred from Fig.~(\ref{f8}): $i$) there is a regular peak in the deflection at the degenerate horizon in each spacetime dimension; $ii$) the maximum value decreases with decreasing spacetime dimension, and the peak of the deflection takes place at smaller $x$ as we increase the number of dimensions; $iii$) as we move away from the horizon, the deflection goes to zero faster as the spacetime dimension increases, because more channels are available for gravity to leak into.
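The commutative limit can be verified numerically: since $\gamma(\frac{d+1}{2}, r_\circ^2/4\theta)\rightarrow\Gamma(\frac{d+1}{2})$ far from the source and $8\,\Gamma(\frac{d+1}{2})/(d-1)=4\,\Gamma(\frac{d-1}{2})$, expression (\ref{defsh2}) approaches (\ref{Schdef}) at large $r_\circ$. A sketch (with $\sqrt{\theta}=1$ and the parameters $\eta=1.9$, $\alpha=3$ of Fig.~(\ref{f8}) assumed as defaults):

```python
import math

def lower_gamma(s, x, terms=300):
    """Series for the lower incomplete gamma function gamma(s, x)."""
    total, term = 0.0, 1.0 / s
    for n in range(terms):
        total += term
        term *= x / (s + n + 1)
    return x**s * math.exp(-x) * total

def deflection_nc(x0, d, eta=1.9, alpha=3.0):
    """Eq. (defsh2) with sqrt(theta) = 1: x0 = r_0/sqrt(theta)."""
    return (8.0 * eta * alpha**(d - 3) /
            ((d - 1) * math.pi**((d - 3) / 2) * x0**(d - 2)) *
            lower_gamma((d + 1) / 2, x0**2 / 4.0))

def deflection_c(x0, d, eta=1.9, alpha=3.0):
    """Commutative limit, Eq. (Schdef)."""
    return (4.0 * eta * alpha**(d - 3) * math.gamma((d - 1) / 2) /
            (math.pi**((d - 3) / 2) * x0**(d - 2)))

# Far from the source the two expressions coincide in each dimension:
for d in (3, 4, 5):
    print(d, deflection_nc(25.0, d), deflection_c(25.0, d))
```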
\begin{figure}[H]
\centering
\includegraphics[width=14cm,height=8cm]{defnc1.pdf}
\caption{{Gravitational deflection close to a non-commutative Schwarzschild black hole for $\eta=1.9$ and $\alpha=3$.}}
\label{f8}
\end{figure}
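As a quick numerical sanity check of (\ref{defsh2}), one can evaluate the lower incomplete gamma function by simple quadrature. This is a minimal sketch in geometrized units ($G=c=1$), under the assumption that for $d=3$ the commutative limit (\ref{Schdef}) reduces to the familiar $\Delta\phi=4M/r_\circ$; the function names and the quadrature scheme are our own illustration, not from the paper:

```python
import math

def lower_gamma(s, x, n=20000):
    """Lower incomplete gamma γ(s, x) = ∫_0^x t^{s-1} e^{-t} dt via the trapezoid rule (s ≥ 1)."""
    b = min(x, s + 50.0)                      # the integrand is negligible beyond t ≈ s + 50
    h = b / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * t ** (s - 1.0) * math.exp(-t)
    return total * h

def deflection_nc(M, r0, d, theta, L=1.0):
    """Eq. (defsh2): Δφ^NC = 8 M L^{d-3} γ((d+1)/2, r0²/(4θ)) / ((d-1) π^{(d-3)/2} r0^{d-2})."""
    s = (d + 1) / 2.0
    x = r0 ** 2 / (4.0 * theta)
    return (8.0 * M * L ** (d - 3) * lower_gamma(s, x)
            / ((d - 1) * math.pi ** ((d - 3) / 2.0) * r0 ** (d - 2)))

# Commutative limit: for d = 3 and θ → 0, γ(2, x) → Γ(2) = 1, so Δφ → 4M/r0 (the GR value).
print(deflection_nc(1e-3, 1.0, 3, 1e-4))     # ≈ 4e-3
```

Increasing $\theta$ at fixed $r_\circ$ shrinks the argument of the incomplete gamma function and hence the deflection, consistent with the smearing of the source in the non-commutative geometry.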
\subsection{Determination of lower bounds}
{\bf{\emph{Lower bound on the size of extra dimensions (L):}}}
One can impose the condition $L>r_{\circ}$ ($\frac{L}{\sqrt\theta}>\frac{r_{\circ}}{\sqrt{\theta}}$) to ensure that $L$ is larger than the event horizon radius and is therefore an observable quantity, see also \cite{Scardigli:2003kr}. The values of $\frac{r_{\circ}}{\sqrt{\theta}}$ can be extracted from Fig.~(\ref{newmet}). Using, in addition, the fact that the non-commutative length $\sqrt{\theta}$ is less than $7\times10^{-19}$~m \cite{Karimabadi:2018sev}, one obtains a lower bound on the length of the extra dimensions $L$, as provided in Tab.~(\ref{tab3}).
\begin{table}[H]
\caption[]{Lower bound on length of extra dimensions.}
\centering
\begin{tabular}{|c|c|c|}\hline
$d$ & ${r_{\circ}}/{\sqrt{\theta}} $ & $ L(m)$ \\ \hline
4 & 2.68&$1.87\times10^{-18}$ \\
5 & 2.51&$1.1\times10^{-18}$ \\
6 & 2.41&$9.38\times10^{-19}$ \\
7 & 2.34&$8.65\times10^{-19}$\\
8 & 2.29&$8.26\times10^{-19}$ \\
9 & 2.26&$8.01\times10^{-19}$ \\[0.5ex]\hline
\end{tabular}
\label{tab3}
\end{table}
{\bf{\emph{Lower bound on the mass of the black hole (M):}}}
As mentioned earlier, for the special values of $\mu(d)$ listed in Tab.~(\ref{tab1}), the metric (\ref{ncss}) corresponds to extremal black holes in different numbers of extra dimensions. So the condition
\begin{equation}} \def\ee{\end{equation}\label{A} \mu(d)\geq \mu_0 (d)\,, \ee
ensures that we have black holes with two distinct horizons (strict inequality) or with one degenerate horizon (equality). Using (\ref{A}) and the results of Tab.~(\ref{tab3}), one can derive a lower bound on the mass needed to form a black hole in higher dimensional spacetime, the results of which are summarized in Tab.~(\ref{tab4}). These values are close to the order of the mass of the primordial micro black holes created in the early Universe, which may have survived (i.e., not evaporated) until the current epoch \cite{Liddle:1998nt,Sato-Polito:2019hws}.
\begin{table}[H]
\caption[]{Lower bound on the mass of the black hole (M).}
\centering
\begin{tabular}{|c|c|}\hline
$d$ & $M(kg)$ \\ \hline
3 & $1.79\times10^{9}$ \\
4 & $5.54\times10^{9}$ \\
5 & $3.83\times10^{10} $\\
6 & $2.26\times10^{11} $\\
7 & $1.21\times10^{12} $\\
8 & $6.07\times10^{12} $\\
9 & $2.86\times10^{13} $\\[0.5ex]\hline
\end{tabular}
\label{tab4}
\end{table}
\section{Conclusions}
In this paper, we studied the well-known tests of GR for higher dimensional commutative and non-commutative Schwarzschild black holes. We obtained expressions for the gravitational redshift, deflection, and time delay around black holes. The results show that these quantities diminish for higher dimensional black holes. In this regard, as depicted in Figs.~(\ref{f2}), (\ref{f3}) and (\ref{f4}), by increasing the dimensions of spacetime in the commutative case, the effects of gravity become weaker than in GR, which is consistent with the fact that the gravitational effects propagate into the extra dimensions, or that gravity gets diluted in the large volume of the extra dimensions \cite{ArkaniHamed:1998rs}.
On the other hand, in a non-commutative geometry based on the CCS formalism, we observed that the existence of a Schwarzschild black hole with a degenerate horizon (extremal black hole) depends strongly on $M$, $L$ and $\sqrt{\theta}$, i.e. the mass of the black hole, the size of the extra dimensions and the non-commutative length scale, respectively. It has been shown in Fig.~(\ref{newmet}) and Tab.~(\ref{tab1}) that for a definite higher dimensional non-commutative geometry, i.e., for given values of $L$ and $\sqrt\theta$, more mass is needed to generate extremal Schwarzschild black holes as the number of dimensions increases. Unlike in GR, where the redshift factor does not have a finite value, in the non-commutative case there is a finite extremum value at which light may be shifted to redder wavelengths.
Of particular interest for higher dimensional research, we have also obtained a minimum mass needed to form a black hole in each dimension and a lower bound on the size of the extra dimensions. Our results also confirm previous studies holding that gravity becomes weaker in the presence of extra dimensions. Moreover, it would be of interest to investigate the effects of the charged solutions in each sector, that is, for both commutative (higher dimensional RN black holes) and non-commutative spaces \cite{Spallucci:2009zz}. In conclusion, if extra dimensions of space exist in nature, as seems to emerge from different theories and arguments, then the implications should appear in GR gravitational measurements such as those treated in this work.
\section*{Acknowledgment}
The authors would like to thank Asghar Moulavinafchi for his careful reading of the manuscript and for his valuable comments which helped us improve the English language and grammar in our paper.
\section{Introduction}
Over the last decade the acquisition of ever more complex data, structures and shapes has increased dramatically. Consequently, the need to develop meaningful methods for comparing general objects has become more and more apparent. In numerous applications, e.g. in molecular biology \citep{holm1993protein,kufareva2011method,brown2016fast}, computer vision \citep{lowe2001local,jain20003d} and electrical engineering \citep{papazov2012rigid,kuo20143d}, it is important to distinguish between different objects in a pose invariant manner: two instances of a given object in \emph{different} spatial orientations are deemed to be equal. Furthermore, comparisons of graphs, trees, ultrametric spaces and networks, where mainly the underlying connectivity structure matters, have grown in importance \citep{chen2011algebraic, dong2020copt}. One possibility to compare two general objects in a pose invariant manner is to model them as metric spaces $(X,\ensuremath{d_{X}})$ and $(Y,\ensuremath{d_{Y}})$ and regard them as elements of the collection of isometry classes of compact metric spaces denoted by $\mathcal{M}$ (i.e. two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$ are in the same class if and only if they are isometric to each other, which we denote by $X\cong Y$). It is possible to compare $(X,\ensuremath{d_{X}})$ and $(Y,\ensuremath{d_{Y}})$ via the \emph{Gromov-Hausdorff distance} \cite{edwards1975structure,gromov1981groups}, which is a metric on $\mathcal{M}$. It is defined as
\begin{equation}\label{eq:Gromov Hausdorff}
d_{\mathrm{GH}}(X,Y):=\inf_{Z,\phi,\psi}d^{(Z,d_Z)}_{\mathrm{H}}(\phi(X),\psi(Y)),\end{equation}
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings into a metric space $(Z,d_Z)$ and $d^{(Z,d_Z)}_\mathrm{H}$ denotes the \emph{Hausdorff distance in $Z$}. The Hausdorff distance is a metric on the collection of compact subsets of a metric space $(Z,d_Z)$, which is denoted by $\mathcal{S}(Z)$, and is defined for $A,B\in\mathcal{S}(Z)$ as follows
\begin{equation}
d_\mathrm{H}^{(Z,d_Z)}\left(A,B\right):=\max\left( {\sup\limits_{a\in A} \operatornamewithlimits{inf\vphantom{p}}\limits_{b\in B}}d_Z(a,b),~\sup\limits_{b\in B} \operatornamewithlimits{inf\vphantom{p}}\limits_{a\in A}d_Z(a,b)\right).
\end{equation}
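For finite subsets, the suprema and infima in this definition become maxima and minima, so the Hausdorff distance can be computed by a direct double loop. A minimal sketch (the function names and the brute-force $O(|A||B|)$ evaluation are our own illustration):

```python
def hausdorff(A, B, d):
    """Hausdorff distance between finite sets A, B in a metric space with metric d."""
    directed = lambda P, Q: max(min(d(p, q) for q in Q) for p in P)  # sup_p inf_q d(p, q)
    return max(directed(A, B), directed(B, A))

# On the real line: A = {0, 1}, B = {0, 2} gives d_H(A, B) = 1.
print(hausdorff([0.0, 1.0], [0.0, 2.0], lambda x, y: abs(x - y)))  # → 1.0
```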
While the Gromov-Hausdorff distance has been applied successfully for various shape and data analysis tasks (see e.g. \cite{memoli2004comparing,bronstein2006efficient,bronstein2006generalized,bronstein2009partial,bronstein2009topology,chazal2009gromov,bronstein2010gromov,carlsson2010characterization}), it turns out that it is generally convenient to equip the modelled objects with more structure and to model them as \emph{metric measure spaces} \citep{memoli2007use,memoli2011gromov}. A metric measure space $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\dX,\muX \right) }$ is a triple, where $(X,\ensuremath{d_{X}})$ denotes a metric space and $\ensuremath{\mu_{X}}$ stands for a Borel probability measure on $X$ with full support. This additional probability measure can be thought of as signalling the importance of different regions in the modelled object. Moreover, two metric measure spaces $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\dX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\dY,\muY \right) }$ are considered as isomorphic (denoted by $\ensuremath{\mathcal{X}}\cong_w\ensuremath{\mathcal{Y}}$) if and only if there exists an isometry $\varphi:(X,\ensuremath{d_{X}})\to (Y,\ensuremath{d_{Y}})$ such that $\varphi_\#\ensuremath{\mu_{X}}=\ensuremath{\mu_{Y}}$. Here, $\varphi_\#$ denotes the pushforward map induced by $\varphi$. From now on, $\mathcal{M}^w$ denotes the collection of all (isomorphism classes of) compact metric measure spaces.
The additional structure of the metric measure spaces makes it possible to regard the modelled objects as probability measures instead of compact sets. Hence, it is possible to substitute the Hausdorff component in \Cref{eq:Gromov Hausdorff} by a relaxed notion of proximity, namely the \emph{Wasserstein distance}. This distance is fundamental to a variety of mathematical developments
and is also known as Kantorovich distance \citep{kantorovith1942translocation}, Kantorovich-Rubinstein distance \citep{kantorowitsch1958space}, Mallows distance \citep{mallows1972note} or as the Earth Mover's distance \citep{rubner2000earth}. Given a compact metric space $(Z,d_Z)$, let $\mathcal{P}(Z)$ denote the space of probability measures on $Z$ and let $\alpha,\beta\in \mathcal{P}(Z)$. Then, the Wasserstein distance of order $p$, for $1\leq p< \infty$, between $\alpha$ and $\beta$ is defined as
\begin{equation}\label{eq:Wasserstein}
d^{(Z,d_Z)}_{\mathrm{W},p}(\alpha,\beta):=\left(\inf_{\mu\in\mathcal{C}(\alpha,\beta)}\int_{Z\times Z} d^p_Z(x,y)\,\mu(dx\times dy)\right)^\frac{1}{p},
\end{equation}
and for $p=\infty$ as
\begin{equation}\label{eq:def Wasserstein infinity}
d^{(Z,d_Z)}_{\mathrm{W},\infty}(\alpha,\beta)\coloneqq\inf_{\mu\in\mathcal{C}(\alpha,\beta)}\sup_{(x,y)\in\mathrm{supp}(\mu)}d_Z(x,y),
\end{equation}
where $\supp{\mu}$ stands for the support of $\mu$ and $\mathcal{C}(\alpha,\beta)$ denotes the set of all couplings of $\alpha$ and $\beta$, i.e., the set of all probability measures $\mu$ on the product space $Z\times Z$ such that
\[\mu(A\times Z)=\alpha(A)~\text{ and }~\mu(Z\times B)=\beta(B)\]
for all Borel measurable sets $A$ and $B$ of $Z$.
It is worth noting that the Wasserstein distance between probability measures on the real line admits a closed form solution (see \cite{villani2003topics} and Remark \ref{rem:closed-form}).
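The closed form on the real line amounts to matching quantile functions; for two empirical measures with the same number of atoms and uniform weights this reduces to sorting. A sketch under these simplifying assumptions (equal sample sizes are our own restriction for brevity):

```python
def wasserstein_1d(xs, ys, p=1):
    """W_p between the empirical measures (1/n)Σδ_{x_i} and (1/n)Σδ_{y_i} on the real line."""
    assert len(xs) == len(ys)
    n = len(xs)
    # Closed form: the optimal coupling matches the i-th smallest x with the i-th smallest y.
    return (sum(abs(a - b) ** p for a, b in zip(sorted(xs), sorted(ys))) / n) ** (1.0 / p)

print(wasserstein_1d([0.0, 1.0], [1.0, 2.0], p=1))  # → 1.0
```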
\citet{sturm2006geometry} has shown that replacing the Hausdorff distance in \Cref{eq:Gromov Hausdorff} with the Wasserstein distance indeed yields a meaningful metric on $\mathcal{M}^w$. Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\dX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\dY,\muY \right) }$ be two metric measure spaces. Then, \emph{Sturm's Gromov-Wasserstein distance} of order $p$, $1\leq p\leq \infty$, is defined as
\begin{equation}\label{eq:stdSturm}d_{\mathrm{GW},p}^{\mathrm{sturm}}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq \inf_{Z,\phi,\psi} d_{\mathrm{W},p}^{{(Z,d_Z)}}(\phi_\#\ensuremath{\mu_{X}},\psi_\#\ensuremath{\mu_{Y}}),\end{equation}
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings into the metric space $(Z,d_Z)$.
Based on similar ideas but starting from a different representation of the Gromov-Hausdorff distance, M\'emoli \cite{memoli2007use,memoli2011gromov} derived a computationally more tractable and topologically equivalent metric on $\mathcal{M}^w$, namely the \emph{Gromov-Wasserstein} distance: For $1\leq p<\infty$, the \emph{$p$-distortion} of a coupling $\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})$ is defined as
\begin{equation}\label{eq:distortion}
\mathrm{dis}_p(\mu)\coloneqq \left(\iint_{X\times Y \times X\times Y}\big|\ensuremath{d_{X}}(x,x')-\ensuremath{d_{Y}}(y,y')\big|^p\,\mu(dx\times dy)\,\mu(dx'\times dy')\right)^{1/p}\end{equation}
and for $p=\infty$ it is given as
\[\mathrm{dis}_\infty(\mu)\coloneqq\sup_{\substack{x,x'\in \ensuremath{\mathcal{X}},\,y,y'\in \ensuremath{\mathcal{Y}}\\ s.t.\,(x,y),(x',y')\in \supp{\mu}}}\big|\ensuremath{d_{X}}(x,x')-\ensuremath{d_{Y}}(y,y')\big|.\]
The \emph{Gromov-Wasserstein distance} of order $p$, $1\leq p\leq \infty$, is defined as
\begin{equation}\label{eq:Gromov Wasserstein}
d_{\mathrm{GW},p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\frac{1}{2}\inf_{\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})} \mathrm{dis}_p(\mu).
\end{equation}
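For finite spaces with uniform measures, every permutation of the points induces a coupling, so minimizing the $p$-distortion over permutations yields an upper bound on $d_{\mathrm{GW},p}$. The exhaustive search below is only feasible for tiny spaces and is our own illustration, not an algorithm from the literature:

```python
import itertools

def dis_p(DX, DY, perm, p=2):
    """p-distortion of the permutation coupling x_i ↦ y_{perm[i]} (uniform measures)."""
    n = len(DX)
    total = sum(abs(DX[i][k] - DY[perm[i]][perm[k]]) ** p
                for i in range(n) for k in range(n))
    return (total / n ** 2) ** (1.0 / p)

def gw_upper_bound(DX, DY, p=2):
    """Upper bound on d_GW,p: (1/2) · min over permutation couplings of the p-distortion."""
    n = len(DX)
    return 0.5 * min(dis_p(DX, DY, perm, p) for perm in itertools.permutations(range(n)))

# Two isometric three-point spaces (the second is a relabelling of the first):
DX = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
DY = [[0, 3, 2], [3, 0, 1], [2, 1, 0]]
print(gw_upper_bound(DX, DY))  # → 0.0
```

Since $d_{\mathrm{GW},p}$ infimizes over all couplings, restricting to permutation couplings can only overestimate the distance; for isometric spaces the bound already vanishes.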
It is known that in general $d_{\mathrm{GW},p}\leq \dsturm{p}$ and that the inequality can be strict \cite{memoli2011gromov}. Although both $\dsturm{p}$ and $d_{\mathrm{GW},p}$, $1\leq p\leq \infty$, are in general NP-hard to compute \citep{memoli2011gromov}, it is possible to efficiently approximate $d_{\mathrm{GW},p}$ via conditional gradient descent \citep{memoli2011gromov,peyre2016gromov}. This has led to numerous applications and extensions of this distance \citep{alvarez2018gromov,titouan2019optimal,bunne2019learning,chowdhury2020generalize,scetbon2021linear}.
In many cases, since the direct computation of either of these distances can be onerous, the degree of similarity between two datasets is determined by first computing \emph{invariant features} out of each dataset (e.g. global distance distributions \citep{osada2002shape}) and then suitably comparing these features. This point of view has motivated the exploration of inverse problems arising from the study of such features \citep{memoli2011gromov,sturm2012space,brinkman2012invariant,memoli2021distance}.
Clearly, $\mathcal{M}^w$ contains various, extremely general spaces. However, in many applications it is possible to have prior knowledge about the metric measure spaces under consideration, and it is often reasonable to restrict oneself to a specific sub-collection $\mathcal{O}^w\subseteq\mathcal{M}^w$. For instance, it could be known that the metrics of the spaces considered are induced by the shortest path metric on some underlying trees, and hence it is unnecessary to consider the calculation of $\dsturm{p}$ and $\dgw{p}$, $1\leq p\leq \infty$, on all of $\mathcal{M}^w$.
The potential advantages of focusing on a specific sub-collection $\mathcal{O}^w$ are twofold. On the one hand, it might be possible to use the features of $\mathcal{O}^w$ to gain computational benefits. On the other hand, it might be possible to refine the definitions of $\dsturm{p}$ and $\dgw{p}$, $1\leq p\leq \infty$, to obtain more informative comparisons on $\mathcal{O}^w$. Naturally, it is of interest to identify and study these subclasses and the corresponding refinements.
This approach has been pursued to study (variants of) the Gromov-Hausdorff distance on compact \emph{ultrametric spaces} by \citet{zarichnyi2005gromov} and \citet{qiu2009geometry}, and on compact \emph{p-metric spaces} by \citet{memoli2019gromov}. Here, the metric space $\left(X,d_X\right)$ is called a $p$-metric space $(1\leq p<\infty)$, if for all $x,x',x''\in X$ it holds
\[d_X(x,x'')\leq \left(d_X(x,x')^p+d_X(x',x'')^p\right)^{1/p}.\]
Further, the metric space $(X,u_X)$ is called an ultrametric space, if
$\ensuremath{u_{X}}$ fulfills for all $x,x',x''\in X$ that
\begin{equation}\label{eq:ultra triangle ineq}
\ensuremath{u_{X}}(x,x'')\leq \max(\ensuremath{u_{X}}(x,x'),\ensuremath{u_{X}}(x',x'')).
\end{equation}
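The strong triangle inequality is easy to test exhaustively on a finite distance matrix. A small sketch (function and variable names are our own choices):

```python
def is_ultrametric(D, tol=1e-12):
    """Check the strong triangle inequality u(x, x'') ≤ max(u(x, x'), u(x', x'')) on a finite matrix."""
    n = len(D)
    return all(D[i][k] <= max(D[i][j], D[j][k]) + tol
               for i in range(n) for j in range(n) for k in range(n))

print(is_ultrametric([[0, 2, 2], [2, 0, 1], [2, 1, 0]]))  # → True
print(is_ultrametric([[0, 1, 3], [1, 0, 2], [3, 2, 0]]))  # → False: 3 > max(1, 2)
```

Note that the second matrix is a perfectly valid metric (three points on the real line); it merely fails the stronger ultrametric inequality.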
Note that ultrametrics can be considered as the limiting case of $p$-metrics as $p\rightarrow\infty$. In particular, \citet{memoli2019gromov} derived a polynomial time algorithm for the calculation of the \emph{ultrametric Gromov-Hausdorff} distance $u_\mathrm{GH}$ between two compact ultrametric spaces $(X,\ensuremath{u_{X}})$ and $(Y,\ensuremath{u_{Y}})$ (see \Cref{sec:ultrametric Gromov-Hausdorff}), which is defined as \begin{equation}\label{eq:Gromov Hausdorff ultrametric}
u_{\mathrm{GH}}(X,Y):=\inf_{Z,\phi,\psi}d^{(Z,u_Z)}_{\mathrm{H}}(\phi(X),\psi(Y)),\end{equation}
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings into a common \emph{ultrametric} space $(Z,u_Z)$ and $d^{(Z,u_Z)}_\mathrm{H}$ denotes the Hausdorff distance on $Z$.
A further motivation to study (surrogates of) the distances $\dsturm{p}$ and $\dgw{p}$ restricted to a subset $\mathcal{O}^w$ comes from the idea of \emph{slicing}, which originated as a method to efficiently estimate the Wasserstein distance $d^{\mathbb{R}^d}_{\mathrm{W},p}(\alpha,\beta)$ between probability measures $\alpha$ and $\beta$ supported in a high dimensional Euclidean space $\mathbb{R}^d$ \cite{rubner2000earth}. The original idea is that given any line $\ell$ in $\mathbb{R}^d$ one first obtains $\alpha_\ell$ and $\beta_\ell$, the respective pushforwards of $\alpha$ and $\beta$ under the orthogonal projection map $\pi_\ell : \mathbb{R}^d \rightarrow \ell$, and then one invokes the explicit formula for the Wasserstein distance for probability measures on $\mathbb{R}$ (see \Cref{rem:closed-form}) to obtain a lower bound on $d^{\mathbb{R}^d}_{\mathrm{W},p}(\alpha,\beta)$ without incurring the possibly high computational cost associated with solving an optimal transportation problem. This lower bound is improved via repeated (often random) selections of the line $\ell$ \citep{rubner2000earth, bonneel2015sliced,kolouri2019generalized}.
Recently, \citet{le2019tree} pointed out that, thanks to the fact that the $1$-Wasserstein distance also admits an explicit formula when the underlying metric space is a tree \citep{do2011sublinear,evans2012phylogenetic,mcgregor2013sketching}, one can also devise \emph{tree slicing} estimates of the distance between two given probability measures by suitably projecting them onto tree-like structures. Most likely, the same strategy is successful for suitable projections onto random ultrametric spaces, as on these there is also an explicit formula for the Wasserstein distance \citep{kloeckner2015geometric}. The same line of work has also recently been explored in the Gromov-Wasserstein scenario \citep{vayer2019sliced,le2019fast} and could be extended based on efficiently computable restrictions (or surrogates) of $\dsturm{p}$ and $\dgw{p}$. Inspired by the results of \citet{memoli2019gromov} on the ultrametric Gromov-Hausdorff distance and the results of \citet{kloeckner2015geometric}, who derived an explicit representation of the Wasserstein distance on ultrametric spaces, we study the collection of compact \emph{ultrametric measure spaces} $\mathcal{U}^w\subseteq\mathcal{M}^w$, where $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }\in \mathcal{U}^w$ whenever the underlying metric space $(X,\ensuremath{u_{X}})$ is a compact ultrametric space.
In terms of applications, ultrametric spaces (and thus also ultrametric \emph{measure} spaces) arise naturally in statistics as metric encodings of dendrograms \citep{jardine1971mathematical,carlsson2010characterization}, which are graph theoretical representations of ultrametric spaces, in the context of phylogenetic trees \citep{semple2003phylogenetics}, in theoretical computer science in the probabilistic approximation of finite metric spaces \citep{bartal1996probabilistic,fakcharoenphol2004tight}, and in physics in the context of a mean-field theory of spin glasses
\citep{mezard1987spin,Rammal1986UltrametricityFP}.
Especially for phylogenetic trees (and dendrograms), where one tries to characterize the structure of an underlying evolutionary process or the difference between two such processes, it is important to have a meaningful method of comparison, i.e., to have a meaningful metric on $\mathcal{U}^w$. However, it is evident from the definition of $d_{\mathrm{GW},p}^{\mathrm{sturm}}$ and the relationship between $d_{\mathrm{GW},p}^{\mathrm{sturm}}$ and $d_{\mathrm{GW},p}$ (see \cite{memoli2011gromov}), that the ultrametric structure of $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ is not taken into account in the computation of either $d_{\mathrm{GW},p}^{\mathrm{sturm}}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ or $d_{\mathrm{GW},p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$, $1\leq p\leq \infty$. Hence, we suggest, just as for the ultrametric Gromov-Hausdorff distance, to adapt the definition of $d_{\mathrm{GW},p}^{\mathrm{sturm}}$ (see \Cref{eq:stdSturm}) as well as the one of $\dgw{p}$ (see \Cref{eq:Gromov Wasserstein}) and verify in the following that this makes the comparisons of ultrametric measure spaces more sensitive and leads for $p=\infty$ to a \emph{polynomial time} algorithm for the derivation of the proposed metrics.
\subsection{The proposed approach}
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ be ultrametric measure spaces. Reconsidering the definition of Sturm's Gromov-Wasserstein distance in \Cref{eq:stdSturm}, we propose to only infimize over ultrametric spaces $(Z,u_Z)$ in \Cref{eq:stdSturm}. Thus, we define for $p\in[1,\infty]$ \emph{Sturm's ultrametric Gromov-Wasserstein distance} of order $p$ as \begin{equation}\label{eq:ultra Sturm}\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{Z,\phi,\psi} d_{\mathrm{W},p}^{{(Z,u_Z)}}(\phi_\#\ensuremath{\mu_{X}},\psi_\#\ensuremath{\mu_{Y}}),\end{equation}
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings into an ultrametric space $(Z,u_Z)$.
In the subsequent sections of this paper, we will establish many theoretically appealing properties of $\usturm{p}$. Unfortunately, we will verify that, although an explicit formula for the Wasserstein distance of order $p$ on ultrametric spaces exists \citep{kloeckner2015geometric}, for $p\in [1,\infty)$ the calculation of $\usturm{p}$ yields a highly non-trivial combinatorial optimization problem (see \Cref{subsubsec:alt rep for Sturms GW dist}). Therefore, we demonstrate that an adaptation of the Gromov-Wasserstein distance defined in \Cref{eq:Gromov Wasserstein} yields a topologically equivalent and easily approximable distance on $\mathcal{U}^w$. In order to define this adaptation, we need to introduce some notation. For $a,b\geq 0$ and $1\leq q <\infty$ let \[\Lambda_q(a,b)\coloneqq |a^q-b^q|^{1/q}.\]
Further define $\Lambda_\infty(a,b)\coloneqq\max(a,b)$ whenever $a\neq b$ and $\Lambda_\infty(a,b)=0$ if $a=b$.
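A direct implementation of $\Lambda_q$ makes the definition concrete; note that $\Lambda_\infty$ itself satisfies the strong triangle inequality on $\R_{\geq 0}$. A sketch, with names of our own choosing:

```python
import math

def Lambda(a, b, q):
    """Λ_q(a, b) = |a^q − b^q|^{1/q} for 1 ≤ q < ∞; Λ_∞(a, b) = max(a, b) if a ≠ b, else 0."""
    if q == math.inf:
        return 0.0 if a == b else float(max(a, b))
    return abs(a ** q - b ** q) ** (1.0 / q)

print(Lambda(1.0, 2.0, 1))         # → 1.0
print(Lambda(1.0, 2.0, math.inf))  # → 2.0
# Strong triangle inequality for Λ_∞: Λ_∞(a, c) ≤ max(Λ_∞(a, b), Λ_∞(b, c))
print(Lambda(1.0, 3.0, math.inf) <= max(Lambda(1.0, 2.0, math.inf), Lambda(2.0, 3.0, math.inf)))  # → True
```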
Now, we can rewrite $d_{\mathrm{GW},p}$, $1\leq p\leq\infty,$ as follows
\begin{equation}\label{eq:p-dist delta1}
d_{\mathrm{GW},p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\frac{1}{2}\inf_{\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})}\left(\iint_{X\times Y \times X\times Y}\!\!\!\!\!\big(\Lambda_1(\ensuremath{d_{X}}(x,x'),\ensuremath{d_{Y}}(y,y'))\big)^p\,\mu(dx\times dy)\,\mu(dx'\times dy')\right)^{1/p}\!\!\!.
\end{equation}
Considering the derivation of $d_{\mathrm{GW},p}$ in \cite{memoli2011gromov} and the results on the closely related ultrametric Gromov-Hausdorff distance studied in \cite{memoli2019gromov}, this suggests replacing $\Lambda_1$ in \Cref{eq:p-dist delta1} with $\Lambda_\infty$ in order to incorporate the ultrametric structures of $\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\left(Y,\uY,\muY \right) }$ into the comparison. Hence, we define the \emph{$p$-ultra-distortion} of a coupling $\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})$ for $1\leq p<\infty$ as
\begin{equation}\label{eq:distortion ult}
\mathrm{dis}_p^\mathrm{ult}(\mu)\coloneqq \left(\iint_{X\times Y \times X\times Y}\big(\Lambda_\infty(\ensuremath{u_{X}}(x,x'),\ensuremath{u_{Y}}(y,y'))\big)^p\,\mu(dx\times dy)\,\mu(dx'\times dy')\right)^{1/p}
\end{equation}
and for $p=\infty$ as
\[\mathrm{dis}_\infty^\mathrm{ult}(\mu)\coloneqq\sup_{\substack{x,x'\in \ensuremath{\mathcal{X}},\,y,y'\in \ensuremath{\mathcal{Y}}\\ s.t.\,(x,y),(x',y')\in \supp{\mu}}}\Lambda_\infty(\ensuremath{u_{X}}(x,x'),\ensuremath{u_{Y}}(y,y')).\]
The \emph{ultrametric Gromov-Wasserstein distance} of order $p\in[1,\infty]$, is given as
\begin{equation}
\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})} \mathrm{dis}_p^\mathrm{ult}(\mu).\label{eq:def uGW}
\end{equation}
Due to the structural similarity between $\dgw{p}$ and $\ugw{p}$, we can expect (and later verify) that many properties of $\dgw{p}$ extend to $\ugw{p}$. In particular, we will establish that also $\ugw{p}$ can be approximated\footnote{Here ``approximation" is meant in the sense that one can write code which will locally minimize the functional. There are in general no theoretical guarantees that these algorithms will converge to a global minimum.} via conditional gradient descent and admits several polynomial time computable lower bounds which are useful in applications.
It is worth mentioning that \citet{sturm2012space} studied the family of so-called $L^{p,q}$-distortion distances similar to our construction of $\ugw{p}$. In our language, for any $p,q\in[1,\infty)$, the $L^{p,q}$-distortion distance is constructed by infimizing over the $(p,q)$-distortion defined by replacing $\Lambda_\infty$ with $(\Lambda_q)^q$ in \Cref{eq:distortion ult}. This distance shares many properties with $\dgw{p}$.
\subsection{Overview of our results}\phantom{a}\vspace{3mm}\\
We give a brief overview of our results.
\textbf{\Cref{sec:preliminaries}.} We generalize the results of \citet{carlsson2010characterization} on the relation between ultrametric spaces and dendrograms and establish a bijection between compact ultrametric spaces and \emph{proper dendrograms} (see \Cref{def:proper dendrogram}). After recalling some results on the ultrametric Gromov-Hausdorff distance (see \Cref{eq:Gromov Hausdorff ultrametric}), we use the connection between compact ultrametric spaces and dendrograms to reformulate the explicit formula for the $p$-Wasserstein distance ($1\leq p< \infty$) on ultrametric spaces derived by \citet{kloeckner2015geometric} in terms of proper dendrograms. This allows us to derive a formulation of the $\infty$-Wasserstein distance on ultrametric spaces and to study the Wasserstein distance on compact subspaces of the ultrametric space $(\R_{\geq 0},\Lambda_\infty)$, which will be relevant when studying lower bounds of $\ugw{p}$, $1\leq p\leq \infty$.
\textbf{\Cref{sec:ultrametric GW distance}.} We demonstrate that $\ugw{p}$ and $\usturm{p}$, $1\leq p\leq \infty$, are $p$-metrics on the collection of ultrametric measure spaces $\mathcal{U}^w$. We derive several alternative representations for $\usturm{p}$ and study the relation between the metrics $\usturm{p}$ and $\ugw{p}$. In particular, we show that, while for $1\leq p<\infty$ it holds in general that $\ugw{p}\leq2^\frac{1}{p}\,\usturm{p}$, both metrics coincide for $p=\infty$, i.e., $\ugw{\infty}=\usturm{\infty}$. Furthermore, we show how this equality in combination with an alternative representation of $\ugw{\infty}$ leads to a \emph{polynomial time algorithm} for the calculation of $\usturm{\infty}=\ugw{\infty}$. Moreover, we study the topological properties of $(\mathcal{U}^w,\usturm{p})$ and $(\mathcal{U}^w,\ugw{p})$, $1\leq p\leq \infty$. Most importantly, we show that $\usturm{p}$ and $\ugw{p}$ induce the same topology on $\mathcal{U}^w$, which is different from the one induced by $\dsturm{p}/\dgw{p}$, $1\leq p\leq \infty$. While we further prove that the metric spaces $(\mathcal{U}^w,\usturm{p})$ and $(\mathcal{U}^w,\ugw{p})$, $1\leq p<\infty$, are neither complete nor separable, we demonstrate that the ultrametric space $(\mathcal{U}^w,\usturm{\infty})$, which coincides with $(\mathcal{U}^w,\ugw{\infty})$, is complete. Finally, we establish that $(\mathcal{U}^w,\usturm{1})$ is a geodesic space.
\textbf{\Cref{sec:lower bounds}.} Unfortunately, it does not seem to be possible to derive a polynomial time algorithm for the calculation of $\usturm{p}$ and $\ugw{p}$, $1\leq p<\infty$. Consequently, based on easily computable invariant features, in \Cref{sec:lower bounds} we derive several polynomial time computable lower bounds for $\ugw{p}$, $1\leq p\leq \infty$. Due to the structural similarity between $\dgw{p}$ and $\ugw{p}$, these are in a certain sense analogue to those derived in \cite{memoli2007use,memoli2011gromov} for $\dgw{p}$. Among other things, we show that
\begin{equation}
\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq\uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{\gamma\in\mathcal{C}(\ensuremath{\mu_{X}}\otimes \ensuremath{\mu_{X}},\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})}\norm{\Lambda_\infty(\ensuremath{u_{X}},\ensuremath{u_{Y}})}_{L^p(\gamma)}.\end{equation}
We verify that the lower bound $\uSLB{p}$ can be reformulated in terms of the Wasserstein distance on the ultrametric space $(\R_{\geq 0},\Lambda_\infty)$ (we derive an explicit formula for $d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_\infty)}$ in \Cref{sec:explicit formulat}). This allows us to efficiently calculate $\uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ in $O((m\vee n)^2)$, where $m$ denotes the cardinality of $X$ and $n$ that of $Y$.
\textbf{\Cref{sec:ultra-dissimilarity spaces}.} As the ultrametric space assumption is somewhat restrictive (especially in the context of phylogenetic trees, see \cite{semple2003phylogenetics}), we prove in \Cref{sec:ultra-dissimilarity spaces} that the results on $\ugw{p}$ can be extended to the more general \emph{ultra-dissimilarity spaces} (see \Cref{def:ultra dissimilarity}). In particular, we prove that $\ugw{p}$, $1\leq p\leq \infty$, is a metric on the \emph{isomorphism classes} of ultra-dissimilarity spaces (see \Cref{def:isomorphism ultra-dissimilarity}).
\textbf{\Cref{sec:computational aspects}.} We illustrate the behaviour of, and the relation between, $\ugw{1}$ (which can be approximated via conditional gradient descent) and $\uSLB{1}$ in a set of illustrative examples. Additionally, we carefully illustrate the differences between $\ugw{1}$ and $\uSLB{1}$, and between $\dgw{1}$ and $\dSLB{1}$ (see \Cref{sec:lower bounds} for a definition), respectively.
\textbf{\Cref{sec:phylogenetic tree shapes}.} Finally, we apply our ideas to \emph{phylogenetic tree shape comparison}. To this end, we compare two sets of phylogenetic tree shapes based on the HA protein sequences from human influenza collected in different regions with the lower bound $\uSLB{1}$. In particular, we contrast our results in both settings to the ones obtained with the tree shape metric introduced in Equation (4) of \citet{colijn2018metric}.
\subsection{Related work}
In order to better contextualize our contribution, we now describe related work, both in applied and computational geometry, and in phylogenetics (where notions of distance between trees have arisen naturally).
\subsubsection*{Metrics between trees: the phylogenetics perspective}
In phylogenetics, where one chief objective is to infer the evolutionary relationship between species via methods that evaluate observable traits, such as DNA sequences,
the need to be able to measure dissimilarity between different trees arises from the fact that the process of reconstruction of a phylogenetic tree may depend on the set of genes being considered. At the same time, even for the same set of genes, different reconstruction methods could be applied which would result in different trees. As such, this has led to the development of many different metrics for measuring distance between phylogenetic trees. Examples include the
Robinson-Foulds metric \citep{robinson1981comparison}, the subtree-prune and regraft distance \citep{hein1990reconstructing}, and the nearest-neighbor interchange distance \citep{robinson1971comparison}.
As pointed out in \cite{owen2010fast}, many of these distances tend to quantify differences between tree topologies and often do not take into account edge lengths. A certain phylogenetic tree metric space which encodes edge lengths was proposed in \cite{billera2001geometry} and
studied algorithmically in \cite{owen2010fast}. This tree space assumes that all trees have the same set of taxa. An extension to the case of trees over different underlying sets is given in \cite{grindstaff2018geometric}. \citet{lafond2019complexity} considered one type of metric on possibly \emph{multilabeled} phylogenetic trees with a fixed number of leaves. As the authors pointed out, a multilabeled phylogenetic tree in which no leaves are repeated is just a standard phylogenetic tree, whereas a multilabeled phylogenetic tree in which all labels are equal defines a \emph{tree shape}. The authors then proceeded to study the computational complexity associated with generalizations of some of the usual metrics for phylogenetic trees (such as the Robinson-Foulds distance) to the multilabeled case. \citet{colijn2018metric} studied a metric between (binary) phylogenetic tree shapes based on a bottom-to-top enumeration of specific connectivity structures. The authors applied their metric to compare evolutionary trees based on the HA protein sequences from human influenza collected in different regions.
\subsubsection*{Metrics between trees: the applied geometry perspective}
From a different perspective, ideas from applied geometry and applied and computational topology have been used for the comparison of tree shapes in applications arising in probability, clustering, and computational topology.
Metric trees are also considered in probability theory in the study of models for random trees, where the need to quantify distances between them arises naturally; \citet{evans2007probability} described some variants of the Gromov-Hausdorff distance between metric trees. See also \cite{greven2009convergence} for the case of metric measure space representations of trees and a certain Gromov-Prokhorov type metric on the collection thereof.
Trees, in the form of dendrograms, are abundant in the realm of hierarchical clustering methods. In their study of the \emph{stability} of hierarchical clustering methods, \citet{carlsson2010characterization} utilized the Gromov-Hausdorff distance between the ultrametric representations of dendrograms. \citet{DBLP:journals/dcg/Schmiedl17} proved that computing the Gromov-Hausdorff distance between tree metric spaces is NP-hard. \citet{liebscher2018new} suggested some variants of the Gromov-Hausdorff distance which are applicable in the context of phylogenetic trees. As mentioned before, \citet{zarichnyi2005gromov} introduced the ultrametric Gromov-Hausdorff distance $u_\mathrm{GH}$ between compact ultrametric spaces (a special type of tree metric spaces). Certain theoretical properties of $u_\mathrm{GH}$, such as precompactness, have been studied in \cite{qiu2009geometry}. In contrast with the NP-hardness of computing $d_\mathrm{GH}$, \citet{memoli2019gromov} devised a polynomial time algorithm for computing $u_\mathrm{GH}$.
In computational topology, \emph{merge trees} arise through the study of the sublevel sets of a given function \citep{adelson1945level,reeb1946points} with the goal of shape simplification. \citet{morozov2013interleaving} developed the notion of the \emph{interleaving distance} between merge trees, which is related to the Gromov-Hausdorff distance between trees through bi-Lipschitz bounds. In \cite{agarwal2018computing}, exploiting the connection between the interleaving distance and the Gromov-Hausdorff distance between metric trees, the authors approached the computation of the Gromov-Hausdorff distance between general metric trees and provided certain approximation algorithms. \citet{touli2018fpt} devised fixed-parameter tractable (FPT) algorithms for computing the interleaving distance between metric trees. Their methods yield an FPT algorithm for computing a 2-approximation of the Gromov-Hausdorff distance between ultrametric spaces. \citet{memoli2019gromov} devised an FPT algorithm for computing the exact value of the Gromov-Hausdorff distance between ultrametric spaces.
\section{Preliminaries}\label{sec:preliminaries}
In this section we briefly summarize the basic notions and concepts required throughout the paper.
\subsection{Ultrametric spaces and dendrograms}
We begin by describing compact ultrametric spaces in terms of \emph{proper dendrograms}. To this end, we introduce some definitions and notation. Given a set $X$, a \emph{partition} of $X$ is a set $P_X=\{X_i\}_{i\in I}$, where $I$ is any index set, $\emptyset\neq X_i\subseteq X$, $X_i\cap X_j=\emptyset$ for all $i\neq j\in I$ and $\bigcup_{i\in I}X_i=X$. We call each element $X_i$ a \emph{block} of the given partition $P_X$ and denote by $\mathbf{Part}(X)$ the collection of all partitions of $X$. For two partitions $P_X$ and $P'_X$, we say that $P_X$ is \emph{finer} than $P'_X$ if for every block $X_i\in P_X$ there exists a block $X'_j\in P'_X$ such that $X_i\subseteq X'_j$.
\begin{definition}[Proper dendrogram]\label{def:proper dendrogram}
Given a set $X$ (not necessarily finite), a \emph{proper dendrogram} $\theta_X:[0,\infty)\rightarrow \mathbf{Part}(X)$ is a map satisfying the following conditions:
\begin{enumerate}
\item $\theta_X(s)$ is finer than $\theta_X(t)$ for any $0\leq s<t<\infty$;
\item $\theta_X(0)$ is the finest partition, consisting only of singleton sets;
\item There exists $T>0$ such that for any $t\geq T$, $\theta_X(t)=\{X\}$ is the trivial partition;
\item For each $t> 0$, there exists $\varepsilon>0$ such that $\theta_X(t)=\theta_X(t')$ for all $t'\in[t,t+\varepsilon]$.
\item For any distinct points $x,x'\in X$, there exists $T_{xx'}>0$ such that $x$ and $x'$ belong to different blocks in $\theta_X(T_{xx'})$.
\item For each $t>0$, $\theta_X(t)$ consists of only finitely many blocks.
\item Let $\{t_n\}_{n\in\mathbb N}$ be a decreasing sequence such that $\lim_{n\rightarrow\infty}t_n=0$ and let $X_n\in \theta_X(t_n)$. If for any $1\leq n<m$, $X_m\subseteq X_n$, then $\bigcap_{n\in\mathbb N}X_n\neq\emptyset$.
\end{enumerate}
\end{definition}
When $X$ is finite, a function $\theta_X:[0,\infty)\rightarrow \mathbf{Part}(X)$ satisfying conditions (1) to (4) automatically satisfies conditions (5), (6) and (7), and thus a proper dendrogram reduces to the usual dendrogram (see \cite[Sec. 3.1]{carlsson2010characterization} for a formal definition).
Let $\theta_X$ be a proper dendrogram over a set $X$. For any $x\in X$ and $t\geq 0$, we denote by $[x]_t^X$ the block in $\theta_X(t)$ that contains $x\in X$ and abbreviate $[x]_t^X$ to $[x]_t$ when the underlying set $X$ is clear from the context. Similarly to \cite{carlsson2010characterization}, who considered the relation between finite ultrametric spaces and {dendrograms}, we will prove that there is a bijection between compact ultrametric spaces and proper dendrograms. In particular, one can show that the subsequent theorem generalizes \cite[Theorem 9]{carlsson2010characterization}. Since its proof depends on several concepts not yet introduced, we postpone it to \Cref{proof:thm:compact ultra-dendro}.
\begin{theorem}\label{thm:compact ultra-dendro}
Given a set $X$, denote by $\mathcal{U}(X)$ the collection of all compact ultrametrics on $X$ and $\mathcal{D}(X)$ the collection of all proper dendrograms over $X$. For any $\theta\in\mathcal{D}(X)$, consider $u_\theta$ defined as follows:
\[\forall x,x'\in X,\,\,\, u_\theta(x,x')\coloneqq\inf\{t\geq 0\,|\,x,x' \text{ belong to the same block of } \theta(t)\}.\]
Then, $u_\theta\in\mathcal{U}(X)$ and the map $\Delta_X:\mathcal{D}(X)\rightarrow\mathcal{U}(X)$ sending $\theta$ to $u_\theta$ is a bijection.
\end{theorem}
\begin{remark}\label{rem:corresponding dendrogram}
From now on, we denote by $\theta_X$ the proper dendrogram corresponding to a given compact ultrametric $u_X$ on $X$ under the bijection given above. Note that a block $[x]_t$ in $\theta_X(t)$ is actually the closed ball $B_t(x)$ in $X$ centered at $x$ with radius $t$. So for each $t\geq 0$, $\theta_X(t)$ partitions $X$ into a union of several closed balls in $X$ with respect to $u_X$.
\end{remark}
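For finite spaces, the bijection of \Cref{thm:compact ultra-dendro} can be made concrete. The following Python sketch is our own illustration (all function names are hypothetical and not part of any implementation referenced in this paper): it computes the blocks of $\theta_X(t)$ for a finite ultrametric given as a symmetric distance table, and recovers $u_\theta$ from the resulting dendrogram.

```python
from itertools import combinations

def dist(u, x, y):
    # symmetric lookup in a dict keyed by unordered pairs
    return 0.0 if x == y else u[frozenset((x, y))]

def partition_at(points, u, t):
    # Blocks of theta_X(t): classes of the relation dist(x, x') <= t,
    # which is transitive because u is an ultrametric.  Each block is
    # the closed ball of radius t around any of its points.
    blocks = []
    for x in points:
        for b in blocks:
            if dist(u, x, b[0]) <= t:
                b.append(x)
                break
        else:
            blocks.append([x])
    return [frozenset(b) for b in blocks]

def u_theta(points, u, x, y):
    # u_theta(x, x') = inf { t >= 0 : x, x' lie in the same block of theta(t) }
    if x == y:
        return 0.0
    for t in sorted({dist(u, a, b) for a, b in combinations(points, 2)}):
        if any(x in B and y in B for B in partition_at(points, u, t)):
            return t

pts = ['a', 'b', 'c', 'd']
u = {frozenset(('a', 'b')): 1.0, frozenset(('a', 'c')): 2.0,
     frozenset(('b', 'c')): 2.0, frozenset(('a', 'd')): 3.0,
     frozenset(('b', 'd')): 3.0, frozenset(('c', 'd')): 3.0}
# theta_X(1) = {{a, b}, {c}, {d}}; u_theta recovers u exactly
```

Since for finite $X$ every block of $\theta_X(t)$ is the closed ball of radius $t$ around any of its points (\Cref{rem:corresponding dendrogram}), the recovered $u_\theta$ coincides with the input ultrametric.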
\subsection{The ultrametric Gromov-Hausdorff distance}\label{sec:ultrametric Gromov-Hausdorff}
Both $\dsturm{p}$ and $\dgw{p}$, $1\leq p\leq \infty$, are by construction closely related to the Gromov-Hausdorff distance. In a recent paper, \citet{memoli2019gromov} studied an ultrametric version of this distance, namely the \emph{ultrametric Gromov-Hausdorff distance} (denoted as $u_\mathrm{GH}$). Since we will demonstrate several connections between $\usturm{p}$, $\ugw{p}$, $1\leq p\leq \infty$, and this distance, we briefly summarize some of the results in \cite{memoli2019gromov}. We start by recalling the formal definition of $u_\mathrm{GH}$.
\begin{definition}
Let $(X,\ensuremath{u_{X}})$ and $(Y,\ensuremath{u_{Y}})$ be two compact ultrametric spaces. Then, the \emph{ultrametric Gromov-Hausdorff distance} between $X$ and $Y$ is defined as
\[u_\mathrm{GH}(X,Y)=\inf_{Z,\phi,\psi}d^Z_\mathrm{H}\left(\phi(X),\psi(Y)\right),\]
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings (distance preserving transformations) into the ultrametric space $(Z,u_Z)$.
\end{definition}
\citet{zarichnyi2005gromov} has shown that $u_\mathrm{GH}$ is an ultrametric on the collection of isometry classes of compact ultrametric spaces, which is denoted by $\mathcal{U}$, and \citet{memoli2019gromov} identified a structural theorem (cf. \Cref{thm:ultrametric GH-distance}) that gives rise to a polynomial time algorithm for the calculation of $u_\mathrm{GH}$. More precisely, it was proven in \cite{memoli2019gromov} that $u_\mathrm{GH}$ can be calculated via so-called \emph{quotient ultrametric spaces}, which we define next. Let $(X,\ensuremath{u_{X}})$ be an ultrametric space and let $t\geq 0$. We define an equivalence relation $\sim_t$ on $X$ as follows: $x\sim_t x'$ if and only if $\ensuremath{u_{X}}(x,x')\leq t$. We denote by $[x]^X_t$ (resp. $[x]_t$) the equivalence class of $x$ under $\sim_t$ and by $X_t$ the set of all such equivalence classes. In fact, $[x]_t^X=\{x'\in X\,|\,\ensuremath{u_{X}}(x,x')\leq t\}$ is exactly the closed ball centered at $x$ with radius $t$ and corresponds to a block in the corresponding proper dendrogram $\theta_X(t)$ (see \Cref{rem:corresponding dendrogram}).
{Thus, one can think of $X_t$ as a ``set representation'' of $\theta_X(t)$.}
We define an ultrametric $u_{X_t}$ on $X_t$ as follows:
\[u_{X_t}([x]_t,[x']_t)\coloneqq\begin{cases}
\ensuremath{u_{X}}(x,x'),& [x]_t\neq[x']_t\\
0,& [x]_t=[x']_t.
\end{cases} \]
Then, $(X_t,u_{X_t})$ is an ultrametric space and we call $(X_t, u_{X_t})$ the \emph{quotient} of $(X,u_X)$ at level $t$ (see \Cref{fig:accumulated spaces} for an illustration). It is straightforward to prove that the quotient of a compact ultrametric space at level $t>0$ is a finite ultrametric space (cf. \cite[Lemma 2.3]{wan2020novel}). Furthermore, the quotient spaces characterize $u_\mathrm{GH}$ as follows.
\begin{theorem}[Structural theorem for $u_\mathrm{GH}$, {\cite[Theorem 5.7]{memoli2019gromov}}]\label{thm:ultrametric GH-distance}
Let $(X,\ensuremath{u_{X}})$ and $(Y,\ensuremath{u_{Y}})$ be two compact ultrametric spaces. Then,
\[u_\mathrm{GH}(X,Y)=\inf\left\lbrace t\geq 0 \,|\,X_t \cong Y_t\right\rbrace.\]
\end{theorem}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/umspacequotient.eps}
\caption{\textbf{Metric quotient:} An ultrametric space (black) and its quotient at level $t$ (red).} \label{fig:accumulated spaces}
\end{figure}
\begin{remark}
Let $(X,\ensuremath{u_{X}})$ and $(Y,\ensuremath{u_{Y}})$ denote two finite ultrametric spaces and let $t\geq 0$. The quotient spaces $X_t$ and $Y_t$ can be considered as vertex weighted, rooted trees \citep{memoli2019gromov}. Hence, it is possible to check whether $X_t \cong Y_t$ in polynomial time \citep{aho1974design}. Consequently, \Cref{thm:ultrametric GH-distance} induces a simple, polynomial time algorithm to calculate $u_\mathrm{GH}$ between two finite ultrametric spaces.
\end{remark}
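For finite spaces, \Cref{thm:ultrametric GH-distance} suggests a direct algorithm. The sketch below is an illustrative reimplementation under our own conventions (hypothetical helper names; the rooted-tree isomorphism test of \citet{aho1974design} is replaced here by a recursive canonical signature, which serves the same purpose for finite ultrametric spaces): it forms the quotient $X_t$ and scans the finitely many candidate levels $t$.

```python
from itertools import combinations

def dist(u, x, y):
    return 0.0 if x == y else u[frozenset((x, y))]

def quotient(points, u, t):
    # X_t: merge x ~ x' iff dist <= t; distances between the class
    # representatives are inherited from u (well defined since u is
    # an ultrametric).
    reps = []
    for x in points:
        if all(dist(u, x, r) > t for r in reps):
            reps.append(x)
    qu = {frozenset((a, b)): dist(u, a, b) for a, b in combinations(reps, 2)}
    return reps, qu

def signature(points, u):
    # Canonical form of a finite ultrametric space: split recursively
    # at the diameter.  Two spaces are isometric iff signatures agree.
    if len(points) == 1:
        return (0.0, ())
    d = max(dist(u, a, b) for a, b in combinations(points, 2))
    blocks = []  # blocks of the relation dist < d (strictly below diam)
    for x in points:
        for b in blocks:
            if dist(u, x, b[0]) < d:
                b.append(x)
                break
        else:
            blocks.append([x])
    return (d, tuple(sorted(signature(b, u) for b in blocks)))

def u_gh(px, ux, py, uy):
    # Structural theorem: u_GH(X, Y) = inf { t >= 0 : X_t iso Y_t };
    # the quotients only change at distance values of either space.
    for t in sorted({0.0} | set(ux.values()) | set(uy.values())):
        if signature(*quotient(px, ux, t)) == signature(*quotient(py, uy, t)):
            return t
```

For instance, for two two-point spaces with interpoint distances $1$ and $2$, the quotients first become isometric (both a single point) at $t=2$, so the sketch returns $2$.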
\subsection{Wasserstein distance on ultrametric spaces}\label{sec:explicit formulat}
\begin{comment}
Using the synchronized rooted tree representation of ultrametric spaces, Kloeckner identified in \cite{kloeckner2015geometric} the formula computing the Wasserstein distance between two probability measures $\alpha,\beta$ on a finite ultrametric space $(X,\ensuremath{u_{X}})$:
\begin{equation}\label{eq:w-dist-ultra-tree}
\left(d_{\mathrm{W},p}^{X}\right)^p(\alpha,\beta)=2^{p-1}\sum_{v\in V\backslash\{o\}}\left(h^p(v*)-h^p(v))\right)\left|\mu(X_v)-\beta(X_v)\right|,
\end{equation}
where $X_v$ denotes the set of all points in $X$ (as leafs in $T_X$) having $v$ as an ancestor in $T_X$.
Then, we can reinterpret Equation \Cref{eq:w-dist-ultra-tree} as follows:
\end{comment}
\citet{kloeckner2015geometric} used the representation of ultrametric spaces as so-called \emph{synchronized rooted trees} to derive an explicit formula for the Wasserstein distance on ultrametric spaces. By the constructions of the dendrograms and of the synchronized rooted trees (see \Cref{sec:synchronized rooted tree}), it is immediately clear how to reformulate the results of \citet{kloeckner2015geometric} on compact ultrametric spaces in terms of proper dendrograms. To this end, we need to introduce some notation. For a compact ultrametric space $X$, let $\theta_X$ be the associated proper dendrogram and let $V(X)\coloneqq \bigcup_{t>0}\theta_X(t)=\{[x]_t\,|\,x\in X,t> 0\}$.
It can be shown that $V(X)$ is the collection of all closed balls in $X$ except for singletons $\{x\}$ such that $x$ is a cluster point\footnote{A cluster point $x$ of a topological space $X$ is a point such that every neighborhood of $x$ contains infinitely many points of $X$.} (see \Cref{lm:vx characterization}).
For $B\in V(X)$, we denote by $B^*$ the smallest (under inclusion) element in $V(X)$ such that $B\subsetneqq B^*$ (for the existence and uniqueness of $B^*$ see \Cref{lemma:existence of B^*}).
\begin{theorem}[The Wasserstein distance on ultrametric spaces, {\cite[Theorem 3.1]{kloeckner2015geometric}}]\label{lemma:Wasserstein on ultrametric spaces} Let $(X,\ensuremath{u_{X}})$ be a compact ultrametric space. For all $\alpha,\beta\in\mathcal{P}(X)$ and $1\leq p<\infty$, we have
\begin{equation}
\left(d_{\mathrm{W},p}^{X}\right)^p(\alpha,\beta)=2^{-1}\sum_{B\in V(X)\backslash\{X\}}\left(\diam{B^*}^p-\diam{B}^p\right)\left|\alpha(B)-\beta(B)\right|.
\end{equation}
\end{theorem}
While \Cref{lemma:Wasserstein on ultrametric spaces} is only valid for $p<\infty$, it can be extended to the case $p=\infty$.
\begin{lemma}\label{lm:winfty-finite}
Let $X$ be a compact ultrametric space. Then, for any $\alpha,\beta\in \mathcal{P}(X)$, we have
\begin{equation}\label{eq:ultra Wasserstein infinity}
d_{\mathrm{W},\infty}^X(\alpha,\beta)=\max_{B\in V(X)\backslash\{X\}\text{ and }\alpha(B)\neq\beta(B)}\diam{B^*}.
\end{equation}
\end{lemma}
The proof of \Cref{lm:winfty-finite} is technical and we postpone it to \Cref{sec:proof of lm winfty-finite}.
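For a finite ultrametric space, the ball family $V(X)$, the parents $B^*$, and the sums in \Cref{lemma:Wasserstein on ultrametric spaces} and \Cref{lm:winfty-finite} can be evaluated directly. The following Python sketch is our own illustrative code (all names are hypothetical, and no optimization is attempted):

```python
from itertools import combinations

def dist(u, x, y):
    return 0.0 if x == y else u[frozenset((x, y))]

def balls(points, u):
    # V(X) for finite X: every block of theta_X(t), t > 0,
    # i.e. all closed balls (singletons included)
    V = {frozenset([x]) for x in points}
    for t in sorted({dist(u, a, b) for a, b in combinations(points, 2)}):
        blocks = []
        for x in points:
            for b in blocks:
                if dist(u, x, b[0]) <= t:
                    b.append(x)
                    break
            else:
                blocks.append([x])
        V |= {frozenset(b) for b in blocks}
    return V

def diam(B, u):
    return max((dist(u, a, b) for a, b in combinations(B, 2)), default=0.0)

def wasserstein_ultra(points, u, alpha, beta, p):
    # Kloeckner's ball formula; alpha, beta map points to masses
    V = balls(points, u)
    X = frozenset(points)
    def mass(m, B):
        return sum(m.get(x, 0.0) for x in B)
    def Bstar(B):  # smallest ball strictly containing B
        return min((C for C in V if B < C), key=len)
    if p == float('inf'):
        ds = [diam(Bstar(B), u) for B in V - {X}
              if abs(mass(alpha, B) - mass(beta, B)) > 1e-12]
        return max(ds, default=0.0)
    s = sum((diam(Bstar(B), u) ** p - diam(B, u) ** p)
            * abs(mass(alpha, B) - mass(beta, B)) for B in V - {X})
    return (s / 2) ** (1 / p)
```

As a sanity check, on a two-point space with interpoint distance $3$ the formula returns $3$ for $\alpha=\delta_a$, $\beta=\delta_b$ and every $p$, as expected.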
\subsubsection{Wasserstein distance on \texorpdfstring{$(\R_{\geq 0},\Lambda_\infty)$}{the ultrametric real line}}
The non-negative real half-line $\R_{\geq 0}$ endowed with $\Lambda_\infty$ turns out to be an ultrametric space (cf. \cite[Remark 1.14]{memoli2019gromov}). Finite subspaces of $(\mathbb R_{\geq 0},\Lambda_\infty)$ are of particular interest in this paper. These spaces possess a particular structure (see Figure \ref{fig:Rinfty}) and the computation of the Wasserstein distance on them can be further simplified.
\begin{figure}
\centering
\includegraphics[scale = 0.2]{Figures/uR.eps}
\caption{\textbf{Illustration of $(\R_{\geq 0},\Lambda_\infty)$:} The dendrogram for a subspace of $(\R_{\geq 0},\Lambda_\infty)$ consisting of five arbitrary distinct points of $\R_{\geq 0}$. } \label{fig:Rinfty}
\end{figure}
\begin{theorem}[$d^{(\R_{\geq 0},\Lambda_\infty)}_{\mathrm{W},p}$ between finitely supported measures]\label{thm:closed-form-w-infty-real}
Suppose $\alpha,\beta$ are two probability measures supported on a finite subset $\{x_0,\dots,x_n\}$ of $(\mathbb{R}_{\geq 0},\Lambda_\infty)$ such that $0\leq x_0<x_1<\dots<x_n$. Denote $\alpha_i\coloneqq \alpha(\{x_i\})$ and $\beta_i\coloneqq \beta(\{x_i\})$. Then, we have for $p\in[1,\infty)$ that
\begin{equation}\label{eq:dp finite} d^{(\R_{\geq 0},\Lambda_\infty)}_{\mathrm{W},p}(\alpha,\beta)=2^{-\frac{1}{p}}\left(\sum_{i=0}^{n-1}\left|\sum_{j=0}^i(\alpha_j-\beta_j)\right|\cdot|x_{i+1}^p-x_i^p|+\sum_{i=0}^n|\alpha_i-\beta_i|\cdot x_i^p\right)^\frac{1}{p}.
\end{equation}
Let $F_\alpha$ and $F_\beta$ denote the cumulative distribution functions of $\alpha$ and $\beta$, respectively. Then, for the case $p=\infty$ we obtain
\[d_{\mathrm{W},\infty}^{(\R_{\geq 0},\Lambda_\infty)}(\alpha,\beta)=\max\left(\max_{0\leq i\leq n-1, F_\alpha(x_i)\neq F_\beta(x_i)}x_{i+1},\max_{0\leq i\leq n, \alpha_i\neq\beta_i}x_i\right).\]
\end{theorem}
\begin{proof}
Clearly, $V(X)=\{\{x_0,x_1,\ldots,x_i\}\,|\,i=0,1,\ldots,n\}\cup\{\{x_i\}\,|\,i=1,\ldots,n\}$ (recall that each set corresponds to a closed ball; in particular, $\{x_0\}$ appears as the set $\{x_0,\ldots,x_i\}$ with $i=0$). Thus, we conclude the proof by applying \Cref{lemma:Wasserstein on ultrametric spaces} and \Cref{lm:winfty-finite}.
\end{proof}
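\Cref{eq:dp finite} and the $p=\infty$ formula of \Cref{thm:closed-form-w-infty-real} are straightforward to evaluate numerically. The following is our own hedged sketch (hypothetical names, not a reference implementation):

```python
def w_p_lambda_inf(xs, a, b, p):
    # Eq. (dp finite): xs strictly increasing support, a, b mass lists.
    n = len(xs) - 1
    if p == float('inf'):
        Fa = Fb = 0.0
        best = 0.0
        for i in range(n):          # F_alpha(x_i) vs F_beta(x_i), i <= n-1
            Fa += a[i]
            Fb += b[i]
            if abs(Fa - Fb) > 1e-12:
                best = max(best, xs[i + 1])
        for i in range(n + 1):      # points where the masses differ
            if abs(a[i] - b[i]) > 1e-12:
                best = max(best, xs[i])
        return best
    s = 0.0
    for i in range(n):              # cumulative-difference term
        cum = sum(a[j] - b[j] for j in range(i + 1))
        s += abs(cum) * abs(xs[i + 1] ** p - xs[i] ** p)
    s += sum(abs(a[i] - b[i]) * xs[i] ** p for i in range(n + 1))
    return (s / 2) ** (1 / p)
```

For $\alpha=\delta_1$ and $\beta=\delta_2$ the sketch returns $2$ for every $p$, consistent with $\Lambda_\infty(1,2)=\max(1,2)=2$.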
\begin{remark}[The case $p=1$]\label{rmk:int-w-inf-real}
Note that when $p=1$, for any finitely supported probability measures $\alpha,\beta\in\mathcal{P}(\mathbb R_{\geq 0})$,
\[d_{\mathrm{W},1}^{(\R_{\geq 0},\Lambda_\infty)}(\alpha,\beta)=\frac{1}{2}\left(d_{\mathrm{W},1}^{(\mathbb{R},\Lambda_1)}(\alpha,\beta)+\int_\mathbb{R}x\,|\alpha-\beta|(dx)\right). \]
The formula indicates that the $1$-Wasserstein distance on $(\R_{\geq 0},\Lambda_\infty)$ is the average of the usual $1$-Wasserstein distance on $(\R_{\geq 0},\Lambda_1)$ and a ``weighted total variation distance''. This weighted total variation term is sensitive to differences between the supports. For example, if $\alpha=\delta_{x_1}$ and $\beta=\delta_{x_2}$ with $x_1\neq x_2$, then $\int_\mathbb{R}x\,|\alpha-\beta|(dx)=x_1+x_2$.
\end{remark}
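The decomposition in the remark can be checked numerically. In the sketch below (our own illustrative code, with hypothetical names), the three quantities are computed separately; the left-hand side follows \Cref{eq:dp finite} with $p=1$:

```python
def w1_ultra(xs, a, b):
    # d_{W,1} on (R_{>=0}, Lambda_inf), via Eq. (dp finite) with p = 1
    n = len(xs) - 1
    s = sum(abs(sum(a[j] - b[j] for j in range(i + 1))) * (xs[i + 1] - xs[i])
            for i in range(n))
    s += sum(abs(a[i] - b[i]) * xs[i] for i in range(n + 1))
    return s / 2

def w1_euclid(xs, a, b):
    # classical d_{W,1} on (R, Lambda_1) = integral of |F_alpha - F_beta|
    n = len(xs) - 1
    return sum(abs(sum(a[j] - b[j] for j in range(i + 1))) * (xs[i + 1] - xs[i])
               for i in range(n))

def weighted_tv(xs, a, b):
    # integral of x d|alpha - beta|(x)
    return sum(x * abs(ai - bi) for x, ai, bi in zip(xs, a, b))
```

For $\alpha=\frac{1}{2}(\delta_1+\delta_2)$ and $\beta=\delta_5$ one obtains $d_{\mathrm{W},1}^{\Lambda_1}=3.5$, $\int x\,|\alpha-\beta|(dx)=6.5$, and their average $5$, in agreement with the decomposition.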
\begin{remark}[Extension to compactly supported measures]\label{rem:extension to compactly supported measures}
In fact, $X\subseteq(\R_{\geq 0},\Lambda_\infty)$ is compact if and only if it is either a finite set or countable with 0 being the unique cluster point (w.r.t. the usual Euclidean distance $\Lambda_1$) (see \Cref{lm:compact of R}). Hence, it is straightforward to extend \Cref{thm:closed-form-w-infty-real} to compactly supported measures and we refer to \Cref{sec:extension to compactly supported measures} for the missing details.
\end{remark}
\begin{remark}[Closed-form solution for $d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_q)}$]\label{rem:closed-form}
We know that there is a closed-form solution for the Wasserstein distance on $\mathbb{R}$ with the usual Euclidean distance $\Lambda_1$:
\[d_{\mathrm{W},p}^{(\mathbb{R},\Lambda_1)}(\alpha,\beta)=\left(\int_0^1|F_\alpha^{-1}(t)-F_\beta^{-1}(t)|^pdt\right)^\frac{1}{p},\]
where $F_\alpha$ and $F_\beta$ are the cumulative distribution functions of $\alpha$ and $\beta$, respectively. We have also obtained a closed-form solution for $d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_\infty)}$ in \Cref{thm:closed-form-w-infty-real}. We generalize these formulas to $d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_q)}$ for $q\in(1,\infty)$ and $q\leq p$ in \Cref{sec:closed form solution}.
\end{remark}
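For finitely supported measures, the quantile-function formula can be evaluated by splitting $(0,1)$ at the jump points of either inverse CDF. The following sketch is our own illustration (hypothetical names), not a reference implementation:

```python
def w_p_real(xs_a, a, xs_b, b, p):
    # d_{W,p} on (R, Lambda_1) for discrete measures: integrate
    # |F_a^{-1}(t) - F_b^{-1}(t)|^p over (0, 1), split at the jump
    # points of either quantile function (both are step functions).
    cuts = sorted({0.0, 1.0}
                  | {sum(a[:i + 1]) for i in range(len(a))}
                  | {sum(b[:i + 1]) for i in range(len(b))})

    def quantile(xs, m, t):
        c = 0.0
        for x, mass in zip(xs, m):
            c += mass
            if t < c + 1e-15:
                return x
        return xs[-1]

    total = 0.0
    for lo, hi in zip(cuts, cuts[1:]):
        mid = (lo + hi) / 2  # both quantiles are constant on (lo, hi)
        total += (hi - lo) * abs(quantile(xs_a, a, mid)
                                 - quantile(xs_b, b, mid)) ** p
    return total ** (1 / p)
```

For example, for $\alpha=\frac{1}{2}(\delta_0+\delta_1)$ and $\beta=\delta_0$ the sketch yields $d_{\mathrm{W},1}=\frac{1}{2}$, as moving mass $\frac{1}{2}$ across distance $1$ suggests.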
\section{Ultrametric Gromov-Wasserstein distances}\label{sec:ultrametric GW distance}
In this section we investigate the properties of $\ugw{p}^\mathrm{sturm}$ as well as $\ugw{p}$, $1\leq p\leq\infty$, and study the relation between them.
\subsection{Sturm's ultrametric Gromov-Wasserstein distance}\label{subsec:Sturms ultrametric GW distance}
We begin by establishing several basic properties of $\usturm{p}$, $1\leq p\leq \infty$, including a proof that $\usturm{p}$ is indeed a metric (more precisely, a $p$-metric) on the collection of compact ultrametric measure spaces $\mathcal{U}^w$.
The definition of $\usturm{p}$ given in \Cref{eq:ultra Sturm} is technical and in general not easy to work with. Hence, the first observation to make is that $\usturm{p}$, $1\leq p \leq\infty$, shares a further property with $\dsturm{p}$: it can be calculated by minimizing over pseudo-ultrametrics instead of isometric embeddings.
\begin{lemma}\label{lemma:pseudometric representation of usturm}
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ be two ultrametric measure spaces. Let $\mathcal{D}^\mathrm{ult}(\ensuremath{u_{X}},\ensuremath{u_{Y}})$ denote the collection of all pseudo-ultrametrics $u$ on the disjoint union $X\sqcup Y$ such that $u|_{X\times X} = \ensuremath{u_{X}}$ and $u|_{Y\times Y} = \ensuremath{u_{Y}}$. Let $p\in[1,\infty]$. Then, it holds that \begin{equation}\label{eq:pseudometric def of usturm}\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\inf_{u \in \mathcal{D}^\mathrm{ult}(\ensuremath{u_{X}},\ensuremath{u_{Y}})} \pseudoWasser{p}^{(X\sqcup Y,u)}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}}), \end{equation}
where $\pseudoWasser{p}^{(X\sqcup Y,u)}$ denotes the\emph{ Wasserstein pseudometric} of order $p$ defined in \Cref{eq:def Wasserstein pseudpmetric p} (resp. in \Cref{eq:def Wasserstein pseudpmetric infinity} for $p=\infty$) in \Cref{sec:Wasserstein pseudometric} of the supplement.
\end{lemma}
\begin{proof}
The above lemma follows by the same arguments as Lemma 3.3 $(iii)$ in \cite{sturm2006geometry}.
\end{proof}
\begin{remark}[Wasserstein pseudometric]
The \emph{Wasserstein pseudometric} is a natural extension of the Wasserstein distance to pseudometric spaces and has been studied, for example, by \citet{thorsley2008model}. In \Cref{sec:Wasserstein pseudometric} we carefully show that it is closely related to the Wasserstein distance on a canonically induced metric space. We further establish that the Wasserstein distance and the Wasserstein pseudometric share many relevant properties. Hence, we do not notationally distinguish between these two concepts. \end{remark}
The representation of $\usturm{p}$, $1\leq p\leq \infty$, given by the above lemma is much more accessible and we first use it to establish the subsequent basic properties of $\usturm p$ (see \Cref{sec:proof of prop usturm_basic} for a full proof).
\begin{proposition}\label{prop:usturm_basic}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Then, the following holds:
\begin{enumerate}
\item For any $p\in[1,\infty]$, we always have that $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq \dsturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$.
\item For any $1\leq p\leq q\leq \infty$, we have that $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq\usturm{q}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}) $.
\item It holds that $\lim_{p\rightarrow\infty}\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\usturm{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}). $
\end{enumerate}
\end{proposition}
Moreover, we use \Cref{lemma:pseudometric representation of usturm} to prove that $(\mathcal{U}^w,\usturm{p})$ is indeed a metric space.
\begin{theorem}\label{thm:sturms um}
$\ugw{p}^\mathrm{sturm}$ is a $p$-metric on the collection $\mathcal{U}^w$ of compact ultrametric measure spaces. In particular, when $p=\infty$, $\ugw{\infty}^\mathrm{sturm}$ is an ultrametric.
\end{theorem}
In order to increase the readability of this section we postpone the proof of \Cref{thm:sturms um} to \Cref{sec:proof of thm sturms um}. In the course of the proof, we will, among other things, verify the existence of optimal metrics and optimal couplings in \Cref{eq:pseudometric def of usturm} (see \Cref{prop:usturm_optimal}). Furthermore, it is important to note that the topology induced on $\mathcal{U}^w$ by $\usturm{p}$, $1\leq p\leq \infty$, is different from the one induced by $d_{\mathrm{GW},p}^{\mathrm{sturm}}$. This is well illustrated in the following example.
\begin{example}[$\usturm{p}$ and $d_{\mathrm{GW},p}^{\mathrm{sturm}}$ induce different topologies]\label{ex:notation two point space}
This example is adapted from \citet[Example 3.14]{memoli2019gromov}. For each $a>0$, denote by $\Delta_2(a)$ the two-point metric space with interpoint distance $a$. Endow $\Delta_2(a)$ with the uniform probability measure $\mu_a$ and denote the corresponding ultrametric measure space by $\hat{\Delta}_2(a)$. Now, let $\ensuremath{\mathcal{X}}\coloneqq \hat{\Delta}_2(1)$ and let $\ensuremath{\mathcal{X}}_n\coloneqq\hat{\Delta}_2\left(1+\frac{1}{n}\right)$ for $n\in\mathbb N$. It is easy to check that for any $1\leq p \leq \infty$, $\dsturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{X}}_n)=\frac{1}{2n}$ and $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{X}}_n)=2^{-\frac{1}{p}}\left(1+\frac{1}{n}\right)$, where we adopt the convention that $1/\infty=0$. Hence, as $n$ goes to infinity, $\ensuremath{\mathcal{X}}_n$ converges to $\ensuremath{\mathcal{X}}$ in the sense of $\dsturm{p}$, but not in the sense of $\usturm{p}$, for any $1\leq p\leq\infty$.
\end{example}
\subsubsection{Alternative representations of \texorpdfstring{$\usturm{p}$}{Sturm's Gromov-Wasserstein distance}}\label{subsubsec:alt rep for Sturms GW dist}
In this subsection, we derive an alternative representation for $\ugw{p}^\mathrm{sturm}$ defined in \Cref{eq:ultra Sturm}. We mainly focus on the case $p<\infty$, however it turns out that the results also hold for $p=\infty$ (see \Cref{subsec:relation between ugw and usturm}).
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ and recall the original definition of $\usturm{p}$, $p\in[1,\infty]$, given in \Cref{eq:ultra Sturm}, i.e.,
\[\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\inf_{Z,\phi,\psi} d_{\mathrm{W},p}^{(Z,u_Z)}(\phi_\#\ensuremath{\mu_{X}},\psi_\#\ensuremath{\mu_{Y}}),\]
where $\phi:X\to Z$ and $\psi:Y\to Z$ are isometric embeddings into an ultrametric space $(Z,u_Z)$. It turns out that we only need to consider relatively few possibilities for mapping two ultrametric spaces into a common ultrametric space. This is exemplified in \Cref{fig:common ultrametric space}, which shows two finite ultrametric spaces and two possibilities for a common ultrametric space $Z$.
\begin{figure}
\centering
\includegraphics[width =0.8\textwidth]{Figures/combofumspaces.eps}
\caption{\textbf{Common ultrametric spaces:} Representation of the two kinds of ultrametric spaces $Z$ (middle and right) into which we can isometrically embed the spaces $X$ and $Y$ (left).} \label{fig:common ultrametric space}
\end{figure}
Indeed, it is straightforward to write down all reasonable embeddings and target spaces. We define the set
\begin{equation}\label{eq:definition of A}
\mathcal{A}\coloneqq\{(A,\varphi)\,|\,\emptyset\neq A\subseteq X \text{ is closed and } \varphi:A\hookrightarrow Y \text{ is an isometric embedding } \}.
\end{equation}
Clearly, $\mathcal{A}\neq\emptyset$, as it holds for each $x\in X$ that $\{(\{x\},\varphi_y)\}_{y\in Y}\subseteq\mathcal{A}$, where $\varphi_y$ is the map sending $x$ to $y\in Y$. Another way of constructing elements of $\mathcal{A}$ is illustrated in the subsequent example.
\begin{example}\label{ex:u=0 A}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ be finite spaces and let $u\in\mathcal{D}^\mathrm{ult}(\ensuremath{u_{X}},\ensuremath{u_{Y}})$. If $u^{-1}(0)\neq \emptyset$, we define $A\coloneqq\pi_X(u^{-1}(0))\subseteq X$, where $\pi_X:X\times Y\rightarrow X$ is the canonical projection. Then, the map $\varphi:A\rightarrow Y$ defined by sending $x\in A$ to $y\in Y$ such that $u(x,y)=0$ is an isometric embedding and in particular, $(A,\varphi)\in\mathcal{A}$.
\end{example}
Now, fix two compact spaces $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Let $(A,\varphi)\in\mathcal{A}$ and let $Z_A=X\sqcup (Y\setminus \varphi(A))\subseteq X\sqcup Y$. Furthermore, define $u_{Z_A}:Z_A\times Z_A\rightarrow\R_{\geq 0}$ as follows:
\begin{enumerate}
\item $u_{Z_A}|_{X\times X}\coloneqq u_X$ and $u_{Z_A}|_{(Y\setminus \varphi(A))\times (Y\setminus \varphi(A))}\coloneqq u_Y|_{(Y\setminus \varphi(A))\times (Y\setminus \varphi(A))}$;
\item For any $x\in A$ and $y\in Y\setminus \varphi(A)$ define $u_{Z_A}(x,y)\coloneqq \ensuremath{u_{Y}}(y,\varphi(x)) $;
\item For $x\in X\setminus A$ and $y\in Y\setminus \varphi(A)$ let $u_{Z_A}(x,y)\coloneqq \inf\{\max(\ensuremath{u_{X}}(x,a),\ensuremath{u_{Y}}(\varphi(a),y))\,|\,a\in A\} $;
\item For any $x\in X$ and $y\in Y\setminus\varphi(A)$, $u_{Z_A}(y,x)\coloneqq u_{Z_A}(x,y). $
\end{enumerate}
Then, $(Z_A,u_{Z_A})$ is an ultrametric space such that $X$ and $Y$ can be mapped isometrically into $Z_A$ (see \cite[Lemma 1.1]{zarichnyi2005gromov}). Let $\phi^X_{(A,\varphi)}$ and $\psi^Y_{(A,\varphi)}$ denote the corresponding isometric embeddings of $X$ and $Y$, respectively.
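Rules (1)-(4) translate directly into code. The sketch below is our own illustration (hypothetical names; it assumes the point labels of $X$ and $Y$ are pairwise distinct, so that the disjoint union can be taken literally); it builds $u_{Z_A}$ and lets one verify the strong triangle inequality on small examples.

```python
def build_z_a(px, ux, py, uy, A, phi):
    # Glue X and Y \ phi(A) along the isometric embedding phi,
    # following rules (1)-(4).
    def dx(a, b): return 0.0 if a == b else ux[frozenset((a, b))]
    def dy(a, b): return 0.0 if a == b else uy[frozenset((a, b))]
    rest = [y for y in py if y not in phi.values()]
    pts = list(px) + rest

    def d(a, b):
        if a == b:
            return 0.0
        if a in px and b in px:
            return dx(a, b)                                  # rule (1)
        if a in rest and b in rest:
            return dy(a, b)                                  # rule (1)
        x, y = (a, b) if a in px else (b, a)                 # rule (4)
        if x in A:
            return dy(y, phi[x])                             # rule (2)
        return min(max(dx(x, c), dy(phi[c], y)) for c in A)  # rule (3)

    return pts, d
```

For instance, gluing a two-point space of diameter $1$ to a two-point space of diameter $1.5$ along a single point produces a three-point ultrametric space with distances $1$, $1.5$, $1.5$.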
This allows us to derive the following statement, whose proof is postponed to \Cref{sec:proof of compact usturm a phi}.
\begin{theorem}\label{thm:compact usturm A Phi representation}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Then, we have for each $p\in[1,\infty)$ that
\begin{equation}\label{eq:alternative representation ustum}
\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\inf_{(A,\varphi)\in\mathcal{A}}d_{\mathrm{W},p}^{Z_A}\left({\left(\phi^X_{(A,\varphi)}\right)}_\#\ensuremath{\mu_{X}},{\left(\psi^Y_{(A,\varphi)}\right)}_\#\ensuremath{\mu_{Y}}\right).\end{equation}
\end{theorem}
\begin{remark}\label{rem:computation of usturm}
Let $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ be two finite ultrametric measure spaces. The representation of $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$, $1\leq p\leq \infty$, given by \Cref{thm:compact usturm A Phi representation} is very explicit and recasts the computation of $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$, $1\leq p\leq \infty$, as a combinatorial problem. In fact, as $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are finite, the set $\mathcal{A}$ in \Cref{eq:alternative representation ustum} can be further reduced. More precisely, we demonstrate in \Cref{sec:proof of compact usturm a phi} (see \Cref{coro:usturm A Phi representation}) that it is sufficient to infimize over the set of all \emph{maximal pairs}, denoted by $\mathcal{A}^*$. Here, a pair $(A,\varphi_1)\in\mathcal{A}$ is called \emph{maximal} if for all pairs $(B,\varphi_2)\in \mathcal{A}$ with $A\subseteq B$ and $\varphi_2|_A=\varphi_1$ it holds that $A=B$.
Using the ultrametric Gromov-Hausdorff distance (see \Cref{eq:Gromov Hausdorff ultrametric}) it is possible to determine in polynomial time whether two ultrametric spaces are isometric \cite[Theorem 5.7]{memoli2019gromov}. However, this is clearly not sufficient to identify all $(A,\varphi)\in\mathcal{A}^*$ in polynomial time. In particular, for a given admissible $A\subseteq X$, there are usually multiple ways to define the corresponding map $\varphi$. Furthermore, for $1\leq p<\infty$ we have neither been able to further restrict the set $\mathcal{A}^*$ nor to identify the optimal pair $(A^*,\varphi^*)$. This leaves only a brute force approach, which is computationally infeasible. On the other hand, for $p=\infty$ we are able to explicitly construct the optimal pair $(A^*,\varphi^*)$ (see \Cref{thm:optimal A and varphi (usturm)}).
\end{remark}
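For very small finite spaces, the brute force approach over pairs $(A,\varphi)$ mentioned above can at least be carried out explicitly. The following sketch (our own illustrative code for $p<\infty$, with hypothetical names and distinct point labels assumed across $X$ and $Y$) enumerates all of $\mathcal{A}$ rather than only the maximal pairs, glues along each $\varphi$ by rules (1)-(4), and evaluates the Wasserstein distance on the glued space via Kloeckner's ball formula. It is exponential in $|X|$ and only meant to make the combinatorial structure tangible.

```python
from itertools import combinations, permutations

def usturm_bruteforce(px, ux, mx, py, uy, my, p=1):
    def dx(a, b): return 0.0 if a == b else ux[frozenset((a, b))]
    def dy(a, b): return 0.0 if a == b else uy[frozenset((a, b))]
    best = float('inf')
    for k in range(1, len(px) + 1):
        for A in combinations(px, k):
            for img in permutations(py, k):
                phi = dict(zip(A, img))
                if any(abs(dx(a, b) - dy(phi[a], phi[b])) > 1e-12
                       for a, b in combinations(A, 2)):
                    continue  # phi does not preserve distances
                rest = [y for y in py if y not in img]
                pts = list(px) + rest

                def d(a, b):  # glued ultrametric, rules (1)-(4)
                    if a == b: return 0.0
                    if a in px and b in px: return dx(a, b)
                    if a in rest and b in rest: return dy(a, b)
                    x, y = (a, b) if a in px else (b, a)
                    if x in phi: return dy(y, phi[x])
                    return min(max(dx(x, c), dy(phi[c], y)) for c in A)

                # mu_Y pushed into Z_A: mass on phi(A) is carried back to X
                inv = {v: q for q, v in phi.items()}
                alpha = dict(mx)
                beta = {inv.get(y, y): m for y, m in my.items()}
                # Kloeckner's ball formula on the glued space (pts, d)
                V = {frozenset([z]) for z in pts}
                for t in sorted({d(a, b) for a, b in combinations(pts, 2)}):
                    blocks = []
                    for z in pts:
                        for blk in blocks:
                            if d(z, blk[0]) <= t:
                                blk.append(z)
                                break
                        else:
                            blocks.append([z])
                    V |= {frozenset(blk) for blk in blocks}
                Z = frozenset(pts)
                diam = lambda B: max((d(a, b) for a, b in combinations(B, 2)),
                                     default=0.0)
                mass = lambda m, B: sum(m.get(z, 0.0) for z in B)
                cost = sum((diam(min((C for C in V if B < C), key=len)) ** p
                            - diam(B) ** p) * abs(mass(alpha, B) - mass(beta, B))
                           for B in V - {Z}) / 2
                best = min(best, cost ** (1 / p))
    return best
```

On $\ensuremath{\mathcal{X}}=\hat\Delta_2(1)$ and $\ensuremath{\mathcal{X}}_2=\hat\Delta_2\left(\frac{3}{2}\right)$ this returns $0.75=2^{-1}\left(1+\frac{1}{2}\right)$, matching \Cref{ex:notation two point space} for $p=1$ and $n=2$.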
\subsection{The ultrametric Gromov-Wasserstein distance}\label{subsec:the ultrametric GW distance}
In the following, we consider basic properties of $\ugw{p}$ and prove the analogue of \Cref{thm:sturms um}, i.e., we verify that also $\ugw{p}$ is a $p$-metric, $1\leq p\leq \infty$, on the collection of ultrametric measure spaces.
The subsequent proposition collects three basic properties of $\ugw{p}$ which are also shared by $\usturm{p}$ (cf. \Cref{prop:usturm_basic}). We refer to \Cref{sec:proof:prop:ugw-properties} for its proof.
\begin{proposition}\label{prop:ugw-properties}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Then, the following holds:
\begin{enumerate}
\item For any $p\in[1,\infty]$, we have that $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq \dgw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$;
\item For any $1\leq p\leq q\leq \infty$, it holds that $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq\ugw{q}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$;
\item We have that $\lim_{p\rightarrow\infty}\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}).$
\end{enumerate}
\end{proposition}
Next, we verify that $\ugw{p}$ is indeed a $p$-metric on the collection of ultrametric measure spaces.
\begin{theorem}\label{thm:ugw-p-metric}
The ultrametric Gromov-Wasserstein distance $\ugw{p}$ is a $p$-metric on the collection $\mathcal{U}^w$ of compact ultrametric measure spaces. In particular, when $p=\infty$, $\ugw{\infty}$ is an ultrametric.
\end{theorem}
The full proof of \Cref{thm:ugw-p-metric}, which is based on the existence of optimal couplings in \Cref{eq:def uGW} (see \Cref{prop:ugw-ext-opt}), is postponed to \Cref{sec:proof of thm ugw-p-metric}.
\begin{remark}[$\ugw{p}$ and $\dgw{p}$ induce different topologies]
Reconsidering \Cref{ex:notation two point space}, it is easy to verify that in this setting $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{X}}_n)=2^{-\frac{1}{p}}\left(1+\frac{1}{n}\right)$ while $\dgw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{X}}_n)=\frac{1}{2^{1/p}n}$, $1\leq p\leq \infty$. Hence, just like $\usturm{p}$ and $\dsturm{p}$, $\ugw{p}$ and $\dgw{p}$ do not induce the same topology on $\mathcal{U}^w$. This result can also be obtained from \Cref{sec:topology and geodesic properties} where we derive that $\ugw{p}$ and $\usturm{p}$ give rise to the same topology.
\end{remark}
\begin{remark}\label{rem:computational complexity ugw}
By the same arguments as for $\dgw{p}$, $1\leq p<\infty$, \citep[Sec. 7]{memoli2011gromov}, it follows that for two finite ultrametric
measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ the computation of $\ugw{p}(\ensuremath{\mathcal{X}}, \ensuremath{\mathcal{Y}})$, $1\leq p<\infty $, boils down to solving a (non-convex) quadratic program. This is in general NP-hard \citep{pardalos1991quadratic}. On the other hand, for $p=\infty$, we will derive a polynomial time algorithm to determine $\ugw{\infty}(\ensuremath{\mathcal{X}}, \ensuremath{\mathcal{Y}})$ (cf. \Cref{subsubsec:alternative repesentation of ugw infty}).
\end{remark}
\subsubsection{Alternative representations of \texorpdfstring{$\ugw\infty$}{the ultrametric Gromov-Wasserstein distance}}\label{subsubsec:alternative repesentation of ugw infty} In the following, we will derive an alternative representation of $\ugw\infty$ that resembles that of $u_\mathrm{GH}$ derived in \cite[Theorem 5.7]{memoli2019gromov}. It also leads to a polynomial-time algorithm for the computation of $\ugw{\infty}$. For this purpose, we define the \emph{weighted quotient} of an ultrametric measure space. Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }\in\mathcal{U}^w$ and let $t\geq 0$. Then, the \emph{weighted quotient} of $\ensuremath{\mathcal{X}}$ at level $t$ is given as $\ensuremath{\mathcal{X}}_t=(X_t,u_{X_t},\mu_{X_t})$, where $(X_t, u_{X_t})$ is the quotient of the ultrametric space $(X,u_X)$ at level $t$ (see \Cref{sec:ultrametric Gromov-Hausdorff}) and $\mu_{X_t}\in\mathcal{P}(X_t)$ is the pushforward of $\ensuremath{\mu_{X}}$ under the canonical quotient map $Q_t:(X,u_X)\rightarrow(X_t,u_{X_t})$ sending $x$ to $[x]_t$ for $x\in X$. \Cref{fig:accumulated measure spaces} illustrates the weighted quotient in a simple example.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/ummquotient.eps}
\caption{\textbf{Weighted Quotient:} An ultrametric measure space (black) and its weighted quotient at level $t$ (red).} \label{fig:accumulated measure spaces}
\end{figure}
Based on this definition, we show the following theorem, whose proof is postponed to \Cref{sec:proof of thm ugw-infty-eq}.
\begin{theorem}\label{thm:ugw-infty-eq}
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ be two compact ultrametric measure spaces.
Then, it holds that
\[u_{\mathrm{GW},\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\min\left\lbrace t\geq 0 \,|\,\ensuremath{\mathcal{X}}_t \cong_w \ensuremath{\mathcal{Y}}_t\right\rbrace.\]
\end{theorem}
\begin{remark} The weighted quotients $\ensuremath{\mathcal{X}}_t$ and $\ensuremath{\mathcal{Y}}_t$ can be regarded as vertex-weighted rooted trees and thus it is possible to verify whether $\ensuremath{\mathcal{X}}_t \cong_w \ensuremath{\mathcal{Y}}_t$ in polynomial time \citep{aho1974design}. Consequently, we obtain a polynomial-time algorithm for the calculation of $\ugw{\infty}$. See \Cref{subsubsec:p equals infty} for details.
\end{remark}
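For finite spaces, \Cref{thm:ugw-infty-eq} can be translated into code quite directly: the weighted quotients only change at the finitely many occurring distance values, so it suffices to scan those candidates for $t$. The following Python sketch is our own illustration of this recipe, not the algorithm of \Cref{subsubsec:p equals infty}; in particular, it replaces the polynomial-time tree-isomorphism test of \citet{aho1974design} by a brute-force search over permutations, so it is only practical for very small spaces, and all function names are ours.

```python
import itertools

def quotient(u, mu, t):
    """Weighted quotient at level t: since u is an ultrametric, u(x, x') <= t
    is an equivalence relation; merge its classes and push the measure forward."""
    n = len(mu)
    block, reps = [-1] * n, []
    for i in range(n):
        for b, r in enumerate(reps):
            if u[i][r] <= t:
                block[i] = b
                break
        else:
            block[i] = len(reps)
            reps.append(i)
    uq = [[u[a][b] for b in reps] for a in reps]
    muq = [0.0] * len(reps)
    for i in range(n):
        muq[block[i]] += mu[i]
    return uq, muq

def weighted_isometric(uX, muX, uY, muY, tol=1e-9):
    """Brute-force test for a measure-preserving isometry (tiny spaces only)."""
    if len(muX) != len(muY):
        return False
    n = len(muX)
    for p in itertools.permutations(range(n)):
        if all(abs(muX[i] - muY[p[i]]) <= tol for i in range(n)) and \
           all(abs(uX[i][j] - uY[p[i]][p[j]]) <= tol
               for i in range(n) for j in range(n)):
            return True
    return False

def ugw_inf(uX, muX, uY, muY):
    """min{t >= 0 : X_t iso Y_t}; the quotients only change at occurring
    distance values, so scanning those candidates suffices."""
    cands = sorted({0.0} | {d for row in uX for d in row}
                         | {d for row in uY for d in row})
    for t in cands:
        if weighted_isometric(*quotient(uX, muX, t), *quotient(uY, muY, t)):
            return t
```

For instance, for $\Delta_2(1)$ with the uniform measure against the one-point space, the scan returns $1$, the larger diameter, in accordance with \Cref{coro:uGW trivial case}.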
The representations of $u_\mathrm{GH}$ in \Cref{thm:ultrametric GH-distance} and $\ugw{\infty}$ in \Cref{thm:ugw-infty-eq} strongly resemble each other. As a direct consequence of both \Cref{thm:ultrametric GH-distance} and \Cref{thm:ugw-infty-eq}, we obtain the following comparison between the two metrics.
\begin{corollary}\label{coro:ugw>ugh}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Then, it holds that
\begin{equation}\label{eq:ugw geq ugh}
u_{\mathrm{GW},\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq u_\mathrm{GH}(X,Y). \end{equation}
\end{corollary}
The inequality in \Cref{eq:ugw geq ugh} is sharp, as we illustrate now. By \citet[Corollary 5.8]{memoli2019gromov} we know that if the considered ultrametric spaces $(X,u_X)$ and $(Y,u_Y)$ have different diameters (w.l.o.g. $\diam{X}<\diam{Y}$), then $u_\mathrm{GH}(X,Y)=\diam{Y}$. The same statement also holds for $\ugw{\infty}$:
\begin{corollary}\label{coro:uGW trivial case}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ be such that $\diam{X}<\diam{Y}$. Then,
\[\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\diam{Y}=u_\mathrm{GH}(X,Y).\]
\end{corollary}
\begin{proof}
The rightmost equality follows directly from Corollary 5.8 of \citet{memoli2019gromov}. As for the leftmost equality, let $t\coloneqq\diam Y$. Then it is obvious that $\ensuremath{\mathcal{X}}_t\cong_w *\cong_w \ensuremath{\mathcal{Y}}_t$, where $*$ denotes the one-point ultrametric measure space. For $s\in(\diam X,\diam Y)$, we have $\ensuremath{\mathcal{X}}_s\cong_w *$ whereas $\ensuremath{\mathcal{Y}}_s\not\cong_w*$, and for $s\leq \diam X$ it holds that $\diam{X_s}\leq\diam{X}<\diam{Y}=\diam{Y_s}$, so that $\ensuremath{\mathcal{X}}_s\not\cong_w\ensuremath{\mathcal{Y}}_s$ for any $s<\diam Y$. By \Cref{thm:ugw-infty-eq}, $\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=t=\diam{Y}$.
\end{proof}
\subsection{The relation between \texorpdfstring{$\ugw{p}$}{the ultrametric Gromov-Wasserstein distance} and \texorpdfstring{$\usturm{p}$}{Sturm's ultrametric Gromov-Wasserstein distance}}\label{subsec:relation between ugw and usturm}
In this section, we study the relation of $\usturm{p}$ and $\ugw{p}$, $1\leq p\leq \infty$, and establish the topological equivalence between the two metrics.
\subsubsection{Lipschitz relation} We first study the Lipschitz relation between $\usturm{p}$ and $\ugw{p}$. For this purpose, we have to distinguish the cases $p<\infty$ and $p =\infty$.
\emph{The case \texorpdfstring{$p<\infty$}{p<infty}.}
We start the consideration of this case by proving that it is essentially enough to consider the case $p=1$ (see \Cref{thm:snow-ugw}). To this end, we need to introduce some notation. For each $\alpha>0$, we define a function $S_\alpha:\R_{\geq 0}\rightarrow\R_{\geq 0}$ by $x\mapsto x^\alpha$. Given an ultrametric space $(X,\ensuremath{u_{X}})$ and $\alpha>0$, we abuse the notation and denote by $S_\alpha(X)$ the new space $(X,S_\alpha\circ\ensuremath{u_{X}})$. It is obvious that $S_\alpha(X)$ is still an ultrametric space. This transformation of metric spaces is also known as the \emph{snowflake transform} \citep{david1997fractured}.
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ denote two ultrametric measure spaces. Let $1\leq p<\infty$. We denote by $S_p(\ensuremath{\mathcal{X}})$ the ultrametric measure space $(X,S_p\circ\ensuremath{u_{X}},\ensuremath{\mu_{X}})$. The snowflake transform can be used to relate $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ as well as $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ with $\ugw{1}(S_p(\ensuremath{\mathcal{X}}),S_p(\ensuremath{\mathcal{Y}}))$ and $\usturm{1}(S_p(\ensuremath{\mathcal{X}}),S_p(\ensuremath{\mathcal{Y}}))$, respectively.
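The reason $S_\alpha(X)$ is again an ultrametric space is that $x\mapsto x^\alpha$ is increasing on $\R_{\geq 0}$, so $\max(a,b)^\alpha=\max(a^\alpha,b^\alpha)$ and the strong triangle inequality survives the transform. A minimal numerical sketch of this fact (ours; function names hypothetical):

```python
def snowflake(u, alpha):
    """Entrywise power of a distance matrix: the distances of S_alpha(X)."""
    return [[d ** alpha for d in row] for row in u]

def is_ultrametric(u, tol=1e-12):
    """Check vanishing diagonal, symmetry and the strong triangle inequality."""
    n = len(u)
    return (all(u[i][i] == 0 for i in range(n))
            and all(abs(u[i][j] - u[j][i]) <= tol
                    for i in range(n) for j in range(n))
            and all(u[i][j] <= max(u[i][k], u[k][j]) + tol
                    for i in range(n) for j in range(n) for k in range(n)))
```

Applying `snowflake` with any exponent $\alpha>0$ to an ultrametric distance matrix again passes `is_ultrametric`, in line with the observation above.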
\begin{theorem}\label{thm:snow-ugw}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ and let $p\in [1,\infty)$. Then, we obtain
\[\big(\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\big)^p=\ugw{1}(S_p(\ensuremath{\mathcal{X}}),S_p(\ensuremath{\mathcal{Y}}))~ \text{ and }~\big(\ugw{p}^\mathrm{sturm}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\big)^p=\ugw{1}^\mathrm{sturm}(S_p(\ensuremath{\mathcal{X}}),S_p(\ensuremath{\mathcal{Y}})). \] \end{theorem}
We give the full proof of \Cref{thm:snow-ugw} in \Cref{sec:proof of thm snow-ugw}. Based on this result, we can relate the metrics $\ugw{p}$ and $\usturm{p}$ by considering only the case $p=1$ and prove \Cref{thm:ugw<ugw-sturm} below (see \Cref{sec:proof of thm ugw<ugw-sturm} for a detailed proof).
\begin{theorem}\label{thm:ugw<ugw-sturm}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Then, we have for $p\in[1,\infty)$ that
\[\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq 2^\frac{1}{p}\,\ugw{p}^\mathrm{sturm}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}).\]
\end{theorem}
The subsequent example shows that the constant in \Cref{thm:ugw<ugw-sturm} is sharp.
\begin{example} For each $n\in\mathbb N$, let $\ensuremath{\mathcal{X}}_n$ be the three-point space $\Delta_3(1)$ (i.e., the three-point metric space on $\{x_1,x_2,x_3\}$ in which all interpoint distances equal $1$) with a probability measure $\ensuremath{\mu_{X}}^n$ such that $\ensuremath{\mu_{X}}^n(x_1)=\ensuremath{\mu_{X}}^n(x_2)=\frac{1}{2n}$ and $\ensuremath{\mu_{X}}^n(x_3)=1-\frac{1}{n}$. Let $Y=*$ and let $\ensuremath{\mu_{Y}}$ be the unique probability measure on $Y$. Then, it is routine (using \Cref{prop:ugw and one point space} from \Cref{sec:ugw and one point space}) to check that $\ugw{1}(\ensuremath{\mathcal{X}}_n,\ensuremath{\mathcal{Y}})=\frac{2}{n}\left(1-\frac{3}{4n}\right)$ and $\ugw{1}^\mathrm{sturm}(\ensuremath{\mathcal{X}}_n,\ensuremath{\mathcal{Y}})=\frac{1}{n}$. Therefore, we have
\[\lim_{n\rightarrow\infty}\frac{\ugw{1}(\ensuremath{\mathcal{X}}_n,\ensuremath{\mathcal{Y}})}{\ugw{1}^\mathrm{sturm}(\ensuremath{\mathcal{X}}_n,\ensuremath{\mathcal{Y}})}=2. \]
\end{example}
\begin{example}[$\usturm{p}$ and $\ugw{p}$ are not bi-Lipschitz equivalent]\label{ex:non bi lipschitz}
Following \cite[Remark 5.17]{memoli2011gromov}, we verify in \Cref{sec: ex non bi lipschitz} that for any positive integer $n$
\[\usturm{p}\left(\hat{\Delta}_n(1),\hat{\Delta}_{2n}(1)\right)\geq \frac{1}{4}\,\,\text{and}\,\,\ugw{p}\left(\hat{\Delta}_n(1),\hat{\Delta}_{2n}(1)\right)\leq \left(\frac{3}{2n}\right)^\frac{1}{p}.\]
Here, $\hat{\Delta}_n(1)$ denotes the $n$-point metric measure space with interpoint distance $1$ and the uniform probability measure.
Thus, there exists no constant $C>0$ such that $\usturm{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq C\cdot\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ holds for all input spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$. Hence, $\usturm{p}$ and $\ugw{p}$ are not bi-Lipschitz equivalent.
\end{example}
\emph{The case \texorpdfstring{$p=\infty$}{infinity}.}
Next, we consider the relation between $\usturm{\infty}$ and $\ugw{\infty}$. By taking the limit $p\rightarrow\infty$ in \Cref{thm:ugw<ugw-sturm}, one might expect that $\usturm{\infty}\geq \ugw{\infty}$. In fact, we prove that equality holds (for the full proof see \Cref{sec:proof of thm ugw infty and sturm ugw infty}).
\begin{theorem}\label{thm:ugw infty and sturm ugw infty}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$.
Then, it holds that \[\usturm{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}).\]
\end{theorem}
One application of \Cref{thm:ugw infty and sturm ugw infty} is to explicitly derive the minimizing pair $(A,\phi)\in \mathcal{A}^*$ in \Cref{eq:usturm_maxiam_pair} for $p=\infty$ (see \Cref{sec:proof of thm optimal A and varphi (usturm)} for an explicit construction):
\begin{theorem}\label{thm:optimal A and varphi (usturm)}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$. Let $s\coloneqq u^\mathrm{sturm}_{\mathrm{GW},\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ and assume that $s>0$. Then, there exists $(A,\phi)
\in\mathcal{A}$ defined in \Cref{eq:definition of A}
such that
\[\usturm\infty(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=d_{\mathrm{W},\infty}^{Z_A}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}}),\]
where $Z_A$ denotes the ultrametric space defined in \Cref{subsubsec:alt rep for Sturms GW dist}.
\end{theorem}
\subsubsection{Topological equivalence between \texorpdfstring{$\ugw p$}{the ultrametric Gromov-Wasserstein distance} and \texorpdfstring{$\usturm p$}{Sturm's ultrametric Gromov-Wasserstein distance}} \citet{memoli2011gromov} proved the topological equivalence between $\dgw p$ and $\dsturm p$. We establish an analogous result for $\ugw p$ and $\usturm p$. To this end, we recall the \emph{modulus of mass distribution}.
\begin{definition}[{\citet[Def. 2.9]{greven2009convergence}}]\label{def:modulus of mass distribution}
Given $\delta>0$ we define \emph{the modulus of mass distribution} of $\ensuremath{\mathcal{X}}\in\mathcal{U}^w$ as
\begin{equation}\label{eq:modulus}
v_\delta(\ensuremath{\mathcal{X}})\coloneqq\inf\left\{ \varepsilon>0|\,\ensuremath{\mu_{X}}\left(\left\{x:\,\ensuremath{\mu_{X}}\left(B_\varepsilon^\circ(x)\right)\leq\delta\right\}\right)\leq \varepsilon\right\},
\end{equation}
where $B_\varepsilon^\circ(x)$ denotes the \emph{open} ball centered at $x$ with radius $\varepsilon$.
\end{definition}
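For a finite ultrametric measure space, $v_\delta(\ensuremath{\mathcal{X}})$ can be computed exactly: if $d_0=0<d_1<\dots<d_K$ are the occurring distances, then the map $\varepsilon\mapsto \ensuremath{\mu_{X}}\left(\left\{x:\,\ensuremath{\mu_{X}}\left(B_\varepsilon^\circ(x)\right)\leq\delta\right\}\right)$ is constant on each interval $(d_k,d_{k+1}]$ (there, the open ball of radius $\varepsilon$ coincides with the closed ball of radius $d_k$), so the infimum in \Cref{eq:modulus} can be read off from finitely many candidates. The following Python sketch of this observation is ours, not from the text:

```python
def v_delta(u, mu, delta):
    """Modulus of mass distribution of a finite ultrametric measure space.

    On each interval (d_k, d_{k+1}] between consecutive occurring distances,
    the mass m of "delta-light" points is constant; the valid epsilons in
    that interval are those >= m, so max(d_k, m) is the candidate infimum."""
    n = len(mu)
    ds = sorted({u[i][j] for i in range(n) for j in range(n)})  # includes 0
    best = float('inf')
    for k, dk in enumerate(ds):
        # open ball of radius eps in (d_k, d_{k+1}] = closed ball of radius d_k
        m = sum(mu[i] for i in range(n)
                if sum(mu[j] for j in range(n) if u[i][j] <= dk) <= delta)
        cand = max(dk, m)
        nxt = ds[k + 1] if k + 1 < len(ds) else float('inf')
        if cand <= nxt:  # otherwise no valid epsilon in this interval
            best = min(best, cand)
    return best
```

For the two-point space $\Delta_2(1)$ with the uniform measure, this returns $0$ for $\delta<\tfrac12$ (no point is $\delta$-light) and $1$ for $\delta\in[\tfrac12,1)$.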
We note that $\delta\mapsto v_\delta(\ensuremath{\mathcal{X}})$ is non-decreasing, right-continuous and bounded above by $1$. Furthermore, it holds that $\lim_{\delta\searrow 0}v_\delta(\ensuremath{\mathcal{X}})=0$ \citep[Lemma 6.5]{greven2009convergence}. With \Cref{def:modulus of mass distribution} at hand, we derive the following theorem.
\begin{theorem}\label{thm:equivalence}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$, $p\in[1,\infty)$ and $\delta\in\left(0,\frac{1}{2}\right)$. Then, whenever $\ugw p(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})<\delta^5$ we have
\[\usturm p(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq \left(4\cdot\min(v_\delta(\ensuremath{\mathcal{X}}),v_\delta(\ensuremath{\mathcal{Y}}))+\delta\right)^\frac{1}{p}\cdot M, \]
where $M\coloneqq 2\cdot\max(\diam X,\diam Y)+54$.
\end{theorem}
\begin{remark}
Since it holds that $\lim_{\delta\searrow0}v_\delta(\ensuremath{\mathcal{X}})=0$ and that $2^{-{1}/{p}}\usturm{p}\geq\ugw{p}$ (see \Cref{thm:ugw<ugw-sturm}), the above theorem gives the topological equivalence between $\ugw{p}$ and $\usturm {p}$, $1\leq p<\infty$ (the topological equivalence between $\usturm{\infty}$ and $\ugw{\infty}$ holds trivially thanks to \Cref{thm:ugw infty and sturm ugw infty}).
\end{remark}
The proof of \Cref{thm:equivalence} follows the same strategy as that of Proposition 5.3 in \cite{memoli2011gromov} and we refer to \Cref{app:proof of equivalence} for the details.
\subsection{Topological and geodesic properties}\label{sec:topology and geodesic properties}
In this section, we consider the topology induced by $\ugw{p}$ and $\usturm{p}$ on $\mathcal{U}^w$ and discuss the geodesic properties of both $\ugw p$ and $\usturm p$ for $1\leq p\leq \infty$.
\subsubsection{Completeness and separability} We study completeness and separability of the two metrics $\ugw p$ and $\usturm p$, $1\leq p\leq \infty$, on $\mathcal{U}^w$. To this end, we derive the subsequent theorem whose proof is postponed to \Cref{sec:proof of thm complete and separable}.
\begin{theorem}\label{thm: complete and separable}
\begin{enumerate}
\item For $p\in[1,\infty)$, the metric space $(\mathcal{U}^w,\ugw p)$ is neither complete nor separable.
\item For $p\in[1,\infty)$, the metric space $\left(\mathcal{U}^w,\usturm p\right)$ is neither complete nor separable.
\item $(\mathcal{U}^w,\ugw \infty)=(\mathcal{U}^w,\usturm \infty)$ is complete but not separable.
\end{enumerate}
\end{theorem}
\subsubsection{Geodesic property}
A \emph{geodesic} in a metric space $(X,d_X)$ is a continuous function $\gamma:[0,1]\rightarrow X$ such that for each $s,t\in[0,1]$, $d_X(\gamma(s),\gamma(t))=|s-t|\cdot d_X(\gamma(0),\gamma(1))$. We say a metric space is geodesic if for any two distinct points $x,x'\in X$, there exists a geodesic $\gamma:[0,1]\rightarrow X$ such that $\gamma(0)=x$ and $\gamma(1)=x'$. For any $p\in[1,\infty)$, the notion of $p$-geodesic is introduced in \cite{memoli2019gromov}: A $p$-geodesic in a metric space $(X,d_X)$ is a continuous function $\gamma:[0,1]\rightarrow X$ such that for each $s,t\in[0,1]$, $d_X(\gamma(s),\gamma(t))=|s-t|^{1/p}\cdot d_X(\gamma(0),\gamma(1))$. Similarly, we say a metric space is $p$-geodesic if for any two distinct points $x,x'\in X$, there exists a $p$-geodesic $\gamma:[0,1]\rightarrow X$ such that $\gamma(0)=x$ and $\gamma(1)=x'$. Note that a $1$-geodesic is a usual geodesic and a $1$-geodesic space is a usual geodesic space. The subsequent theorem establishes ($p$-)geodesic properties of $\left(\mathcal{U}^w,\usturm p\right)$ for $p\in[1,\infty)$. A full proof is given in \Cref{sec:proof of prop usturm 1 geodesic}.
\begin{theorem}\label{prop:usturm 1 geodesic}
For any $p\in[1,\infty)$, the space $\left(\mathcal{U}^w,\usturm p\right)$ is $p$-geodesic.
\end{theorem}
\begin{remark}
Due to the fact that a $p$-geodesic space cannot be geodesic when $p>1$ (cf. \Cref{lm:p metric not geodesic}), $\left(\mathcal{U}^w,\usturm p\right)$ is not geodesic for any $p> 1$.
\end{remark}
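To make the notion of a $p$-geodesic space concrete, we record a simple example (ours, not taken from the text); by \Cref{lm:p metric not geodesic} this space is, for $p>1$, not geodesic.

```latex
\begin{example}
  Equip $[0,\infty)$ with $d(x,y)\coloneqq|x-y|^{1/p}$, the $1/p$-snowflake of
  the Euclidean half-line (again a metric, since $x\mapsto x^{1/p}$ is concave
  and vanishes at $0$). For $a,b\geq 0$, the curve $\gamma(t)\coloneqq(1-t)a+tb$
  satisfies
  \[d(\gamma(s),\gamma(t))=|s-t|^{1/p}\,|a-b|^{1/p}=|s-t|^{1/p}\cdot d(\gamma(0),\gamma(1)),\]
  so $\gamma$ is a $p$-geodesic from $a$ to $b$ and $([0,\infty),d)$ is a
  $p$-geodesic space.
\end{example}
```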
\begin{remark}
While the geodesic properties of $\left(\mathcal{U}^w,\usturm p\right)$, $1\leq p< \infty$, are now clear, the geodesic properties of $\left(\mathcal{U}^w,\ugw p\right)$, $1\leq p<\infty$, remain unknown to us.
\end{remark}
\begin{remark}[The case $p=\infty$]
Being an ultrametric space itself (cf. \Cref{thm:ugw-p-metric}), $\left(\mathcal{U}^w,\ugw \infty\right)$ ($=\left(\mathcal{U}^w,\usturm \infty\right)$) is \emph{totally disconnected}, i.e., any subspace with at least two elements is disconnected \cite{semmes2007introduction}. This in turn implies that each continuous curve in $\left(\mathcal{U}^w,\ugw \infty\right)$ is constant. Therefore, $\left(\mathcal{U}^w,\ugw \infty\right)$ is not a $p$-geodesic space for any $p\in[1,\infty)$.
\end{remark}
\section{Lower bounds for \texorpdfstring{$\ugw{p}$}{the ultrametric Gromov-Wasserstein distance}}\label{sec:lower bounds}
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ be two ultrametric measure spaces. The metrics $\usturm{p}$ and $\ugw{p}$ respect the ultrametric structure of the spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$. Thus, one would hope that comparing ultrametric measure spaces with $\usturm{p}$ or $\ugw{p}$ is more meaningful than comparing them with the usual Gromov-Wasserstein distance or Sturm's distance. Unfortunately, for $p<\infty$ the computation of both $\usturm{p}$ and $\ugw{p}$ is complicated, and for $p=\infty$ both metrics are extremely sensitive to differences in the diameters of the considered spaces (see \Cref{coro:uGW trivial case}). Thus, it is not feasible to use these metrics in many applications. However, we can derive meaningful lower bounds for $\ugw{p}$ (and hence also for $\usturm{p}$) that resemble those of the Gromov-Wasserstein distance. Naturally, the question arises whether these lower bounds are sharper in this setting than the ones of the usual Gromov-Wasserstein distance. This question is addressed throughout this section and will be readdressed in \Cref{sec:computational aspects} as well as \Cref{sec:phylogenetic tree shapes}.
In \cite{memoli2011gromov}, the author introduced three lower bounds for $\dgw{p}$ that are computationally less expensive than the calculation of $\dgw{p}$. We will briefly review these three lower bounds and then define candidates for the corresponding lower bounds for $\ugw{p}$. In the following, we always assume $p\in[1,\infty]$.
\paragraph{\textbf{{First lower bound}}} Let $s_{X,p}:X\rightarrow\R_{\geq 0}$, $x\mapsto\norm{u_X(x,\cdot)}_{L^p(\mu_X)}$. Then, the first lower bound $\dFLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ for $\dgw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ is defined as follows
\[\dFLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\frac{1}{2}\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Lambda_1(s_{X,p}(\cdot),s_{Y,p}(\cdot))}_{L^p(\mu)}.\]
Following our intuition of replacing $\Lambda_1$ with $\Lambda_\infty$, we define the ultrametric version of $\dFLB{}$ as
\[\uFLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Lambda_\infty(s_{X,p}(\cdot),s_{Y,p}(\cdot))}_{L^p(\mu)}.\]
\paragraph{\textbf{Second lower bound}}
The second lower bound $\dSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ for $\dgw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ is given as
\[\dSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\frac{1}{2}\inf_{\gamma\in\mathcal{C}(\ensuremath{\mu_{X}}\otimes \ensuremath{\mu_{X}},\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})}\norm{\Lambda_1(u_X,u_Y)}_{L^p(\gamma)}.\]
Thus, we define the ultrametric second lower bound between two ultrametric measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ as follows:
\[\uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{\gamma\in\mathcal{C}(\ensuremath{\mu_{X}}\otimes \ensuremath{\mu_{X}},\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})}\norm{\Lambda_\infty(\ensuremath{u_{X}},\ensuremath{u_{Y}})}_{L^p(\gamma)}.\]
\paragraph{\textbf{Third lower bound}}
Before we introduce the final lower bound, we have to define several functions. First, let $\Gamma^1_{X,Y}:X\times Y\times X\times Y\rightarrow\R_{\geq 0}$, $(x,y,x',y')\mapsto\Lambda_1(u_X(x,x'),u_Y(y,y'))$ and let $\Omega_p^1:X\times Y\rightarrow\R_{\geq 0}$, $p\in[1,\infty]$, be given by
\[\Omega_p^1(x,y)\coloneqq\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Gamma_{X,Y}^1(x,y,\cdot,\cdot)}_{L^p(\mu)}. \]
Then, the third lower bound $\dTLB{p}$ is given as
\[\dTLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\frac{1}{2}\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Omega_p^1(\cdot,\cdot)}_{L^p(\mu)}. \]
Analogously to the definition of previous ultrametric versions, we define $\Gamma^\infty_{X,Y}:X\times Y\times X\times Y\rightarrow\R_{\geq 0}$, $(x,y,x',y')\mapsto\Lambda_\infty(u_X(x,x'),u_Y(y,y'))$. Further, for $p\in[1,\infty]$, let $\Omega_p^\infty:X\times Y\rightarrow\R_{\geq 0}$ be given by
\[\Omega_p^\infty(x,y)\coloneqq\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Gamma_{X,Y}^\infty(x,y,\cdot,\cdot)}_{L^p(\mu)}. \]
Then, the ultrametric third lower bound between two ultrametric measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ is defined as
\[\uTLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\coloneqq\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}\norm{\Omega_p^\infty(\cdot,\cdot)}_{L^p(\mu)}. \]
\subsection{Properties and computation of the lower bounds}
Next, we examine the quantities $\uFLB{},\uSLB{}$ and $\uTLB{}$ more closely. Since $\Lambda_\infty(a,b)\geq\Lambda_1(a,b)=|a-b|$ for any $a,b\geq 0$, it is easy to conclude that $\uFLB{p}\geq \dFLB{p}$, $\uSLB{p}\geq \dSLB{p}$ and $\uTLB{p}\geq \dTLB{p}$. Moreover, the three ultrametric lower bounds satisfy the following theorem (for a complete proof see \Cref{sec:proof of thm comparison with original}).
\begin{theorem}\label{thm:comparison with original}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ and let $p\in[1,\infty]$.
\begin{enumerate}
\item $\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq \uFLB{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$.
\item $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq \uTLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\geq\uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$.
\end{enumerate}
\end{theorem}
\begin{remark}
Interestingly, it turns out that $\uFLB{p}$ is not a lower bound of $\ugw{p}$ in general when $p<\infty$. For example, let $X=\{x_1,x_2,\ldots,x_n\}$ and $Y=\{y_1,\ldots,y_n\}$ and define $\ensuremath{u_{X}}$ such that $\ensuremath{u_{X}}(x_1,x_2)=1$ and $\ensuremath{u_{X}}(x_i,x_j)=2\delta_{i\neq j}$ for $(i,j)\neq (1,2)$, $(i,j)\neq (2,1)$ and $i,j=1,\ldots,n$. Let $\ensuremath{u_{Y}}(y_i,y_j)=2\delta_{i\neq j}$, $i,j=1,\ldots,n$, and let $\ensuremath{\mu_{X}}$ and $\ensuremath{\mu_{Y}}$ be uniform measures on $X$ and $Y$, respectively. Then, $\ugw{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})\leq \frac{4}{n^2}$, whereas $\uFLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\frac{4n-4}{n^2}$, which is greater than $\ugw{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ as long as $n>2$. Moreover, we have in this case that $\uFLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=O\left(\frac{1}{n}\right)$ whereas $\ugw{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=O\left(\frac{1}{n^2}\right)$. Hence, there exists no constant $C>0$ such that $\uFLB{1}\leq C\cdot\ugw{1}$ in general.
\end{remark}
\begin{remark}
There exist ultrametric measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ such that $\uTLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=0$ whereas $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})>0$ (examples described in \cite[Figure 8]{memoli2011gromov} will serve the purpose). Furthermore, there are spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ such that $\uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=0$ whereas $\uTLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})>0$ (see \Cref{sec:uSLB zero uTLB greater zero}). The analogous statement holds true for $\dTLB{p}$ and $\dSLB{p}$, which are nevertheless useful in various applications (see e.g. \cite{gellert2019substrate}).
\end{remark}
From the structure of $\uSLB{p}$ and $\uTLB{p}$ it is obvious that their computation leads to different optimal transport problems (see e.g. \cite{villani2003topics}). However, in analogy to \citet[Theorem 3.1]{chowdhury2019gromov} we can rewrite $\uSLB{p}$ and $\uTLB{p}$ in order to further simplify their computation. The full proof of the subsequent proposition is given in \Cref{sec:proof of prop flb-slb-w-form}.
\begin{proposition}\label{prop:flb-slb-w-form}
Let $\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}\in\mathcal{U}^w$ and let $p\in[1,\infty]$. Then, we find that
\begin{enumerate}
\item $ \uSLB{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_\infty)}\left((\ensuremath{u_{X}})_\#(\ensuremath{\mu_{X}}\otimes\ensuremath{\mu_{X}}),(\ensuremath{u_{Y}})_\#(\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})\right);$
\item For each $x,y\in X\times Y$, $\Omega_p^\infty(x,y)= d_{\mathrm{W},p}^{(\R_{\geq 0},\Lambda_\infty)}\left(u_X(x,\cdot)_\#\ensuremath{\mu_{X}},u_Y(y,\cdot)_\#\ensuremath{\mu_{Y}}\right)$.
\end{enumerate}
\end{proposition}
\begin{remark}
Since we have by \Cref{thm:closed-form-w-infty-real} an explicit formula for the Wasserstein distance on $(\R_{\geq 0},\Lambda_\infty)$ between finitely supported probability measures, these alternative representations of the lower bound $\uSLB{p}$ and the cost functional $\Omega_p^\infty$ drastically reduce the computation time of $\uSLB{p}$ and $\uTLB{p}$, respectively. In particular, we note that this allows us to compute $\uSLB{p}$, $1\leq p\leq \infty$, between finite ultrametric measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ with $|X|=m$ and $|Y|=n$ in $O((m\vee n)^2)$ steps.
\end{remark}
\Cref{prop:flb-slb-w-form} allows us to directly compare the two lower bounds $\uSLB{1}$ and $\dSLB{1}$.
\begin{corollary}\label{coro:representation of SLB1}
For any finite ultrametric measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$, we have that
\begin{equation}\label{eq:slbu-slb}
\uSLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\mathbf{SLB}_1(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})+\frac{1}{2}\int_\mathbb{R}t\,\left|(\ensuremath{u_{X}})_\#(\ensuremath{\mu_{X}}\otimes\ensuremath{\mu_{X}})-(\ensuremath{u_{Y}})_\#(\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})\right|(dt).
\end{equation}
\end{corollary}
\begin{proof}
The claim follows directly from \Cref{prop:flb-slb-w-form} and \Cref{rmk:int-w-inf-real}.
\end{proof}
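For finite spaces, \Cref{eq:slbu-slb} yields a direct way to evaluate $\uSLB{1}$: both summands depend only on the one-dimensional distance distributions $(\ensuremath{u_{X}})_\#(\ensuremath{\mu_{X}}\otimes\ensuremath{\mu_{X}})$ and $(\ensuremath{u_{Y}})_\#(\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})$, and the $W_1$ term can be computed by integrating the absolute difference of the two cumulative distribution functions. The following Python sketch is our own illustration (function names hypothetical):

```python
from collections import defaultdict

def dist_distribution(u, mu):
    """Push-forward of mu (x) mu under u: dict mapping distance value -> mass."""
    d = defaultdict(float)
    n = len(mu)
    for i in range(n):
        for j in range(n):
            d[u[i][j]] += mu[i] * mu[j]
    return dict(d)

def w1_on_line(pa, pb):
    """W_1 between finitely supported measures on the real line,
    computed as the integral of |F_A - F_B| between the CDFs."""
    pts = sorted(set(pa) | set(pb))
    fa = fb = total = 0.0
    for t0, t1 in zip(pts, pts[1:]):
        fa += pa.get(t0, 0.0)
        fb += pb.get(t0, 0.0)
        total += abs(fa - fb) * (t1 - t0)
    return total

def uslb1(uX, muX, uY, muY):
    """uSLB_1 via eq:slbu-slb: SLB_1 plus the t-weighted total-variation term."""
    A, B = dist_distribution(uX, muX), dist_distribution(uY, muY)
    slb1 = 0.5 * w1_on_line(A, B)
    tv = 0.5 * sum(t * abs(A.get(t, 0.0) - B.get(t, 0.0))
                   for t in set(A) | set(B))
    return slb1 + tv
```

On $\Delta_2(d)$ versus $\Delta_2(d')$ with matching weights $\alpha_1,\alpha_2$ this reproduces the value $2\alpha_1\alpha_2\Lambda_\infty(d,d')$ computed in the subsequent example.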
This corollary implies that $\uSLB{1}$ is more rigid than $\dSLB{1}$, since the second summand on the right hand side of \Cref{eq:slbu-slb} is sensitive to distance perturbations. The subsequent example illustrates this point.
\begin{example}
Recall the notation from \Cref{ex:notation two point space}. For any $d,d'>0$, we let $X\coloneqq\Delta_2(d)$ and let $Y\coloneqq\Delta_2(d')$. Assume that $X$ and $Y$ have underlying sets $\{x_1,x_2\}$ and $\{y_1,y_2\}$, respectively. Define $\ensuremath{\mu_{X}}\in\mathcal{P}(X)$ and $\ensuremath{\mu_{Y}}\in\mathcal{P}(Y)$ as follows. Let $\alpha_1,\alpha_2\geq 0$ be such that $\alpha_1+\alpha_2=1$. Let $\ensuremath{\mu_{X}}(x_1)=\ensuremath{\mu_{Y}}(y_1)\coloneqq\alpha_1$ and let $\ensuremath{\mu_{X}}(x_2)=\ensuremath{\mu_{Y}}(y_2)\coloneqq\alpha_2$. Then, it is easy to verify that
\begin{enumerate}
\item $\ugw{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\uSLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=2\alpha_1\alpha_2\Lambda_\infty(d,d').$
\item $\dgw{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\dSLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\alpha_1\alpha_2\Lambda_1(d,d')=\alpha_1\alpha_2|d-d'|.$
\item $\frac{1}{2}\int_\mathbb{R}t\,\left|(\ensuremath{u_{X}})_\#(\ensuremath{\mu_{X}}\otimes\ensuremath{\mu_{X}})-(\ensuremath{u_{Y}})_\#(\ensuremath{\mu_{Y}}\otimes\ensuremath{\mu_{Y}})\right|(dt)=\alpha_1\alpha_2(d+d')\delta_{d\neq d'}$.
\end{enumerate}
From 1 and 2 we observe that both second lower bounds are tight.
Moreover, since we obviously have that $(d+d')\delta_{d\neq d'}+ |d-d'|=2\Lambda_\infty(d,d')$, we have also verified \Cref{eq:slbu-slb} through this example.
While $\dSLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ is proportional to $|d-d'|$, we have $\Lambda_\infty(d,d')=\max(d,d')$ whenever $d\neq d'$. Hence, even if $|d-d'|$ is small, $\uSLB{1}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ takes a large value when $d$ and $d'$ are large numbers. This example illustrates that $\uSLB{1}$ (and hence $\ugw{1}$) is rigid with respect to distance perturbations.
\end{example}
\section{\texorpdfstring{$\ugw p$}{The ultrametric Gromov-Wasserstein distance} on ultra-dissimilarity spaces}\label{sec:ultra-dissimilarity spaces} A natural generalization of ultrametric spaces is provided by \emph{ultra-dissimilarity spaces}. These spaces naturally occur when working with {symmetric ultranetworks} (see \cite{smith2016hierarchical}) or phylogenetic tree data (see \cite{semple2003phylogenetics}). In this section, we will introduce these spaces and briefly illustrate to what extent the results for $\ugw{p}$ can be adapted to ultra-dissimilarity measure spaces. We start by formally introducing \emph{ultra-dissimilarity spaces}.
\begin{definition}[Ultra-dissimilarity spaces] \label{def:ultra dissimilarity}
An \emph{ultra-dissimilarity} space is a pair $(X,\ensuremath{u_{X}})$ consisting of a set $X$ and a function $\ensuremath{u_{X}}:X\times X\rightarrow\R_{\geq 0}$ satisfying the following conditions for any $x,y,z\in X$:
\begin{enumerate}
\item $\ensuremath{u_{X}}(x,y)=\ensuremath{u_{X}}(y,x)$;
\item $\ensuremath{u_{X}}(x,y)\leq\max(\ensuremath{u_{X}}(x,z),\ensuremath{u_{X}}(z,y)); $
\item $\max(\ensuremath{u_{X}}(x,x),\ensuremath{u_{X}}(y,y))\leq \ensuremath{u_{X}}(x,y)$ and the equality holds if and only if $x=y$.
\end{enumerate}
\end{definition}
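For a finite space, the three conditions above can be checked mechanically. The following Python sketch (our own illustration, not part of the paper's implementation) decides whether a symmetric table of values defines an ultra-dissimilarity; note that, unlike for ultrametrics, positive self-dissimilarities $\ensuremath{u_{X}}(x,x)>0$ are allowed.

```python
def is_ultra_dissimilarity(pts, u, tol=1e-12):
    """Check conditions 1-3 of the definition on the finite set pts."""
    for x in pts:
        for y in pts:
            if abs(u(x, y) - u(y, x)) > tol:                 # 1: symmetry
                return False
            if max(u(x, x), u(y, y)) > u(x, y) + tol:        # 3: inequality
                return False
            if x != y and abs(u(x, y) - max(u(x, x), u(y, y))) <= tol:
                return False                                 # 3: strictness for x != y
            for z in pts:                                    # 2: strong triangle inequality
                if u(x, y) > max(u(x, z), u(z, y)) + tol:
                    return False
    return True

# A three-point ultra-dissimilarity; positive self-dissimilarities are allowed.
vals = {("a", "a"): 1, ("b", "b"): 2, ("c", "c"): 0,
        ("a", "b"): 4, ("a", "c"): 4, ("b", "c"): 3}
u = lambda x, y: vals.get((x, y), vals.get((y, x)))
print(is_ultra_dissimilarity(["a", "b", "c"], u))  # True
```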
\begin{remark}
Note that when $(X,u_X)$ is an ultrametric space the third condition is trivially satisfied.
\end{remark}
In the following, we restrict ourselves to \emph{finite} ultra-dissimilarity spaces to avoid technical issues in topology (see \cite{chowdhury2019metric,chowdhury2019gromov} for a more complete treatment of infinite spaces). One important aspect of ultra-dissimilarity spaces is the connection with the so-called \emph{treegrams} \citep{smith2016hierarchical,memoli2019gromov}, {which can be regarded as generalized} dendrograms. For a finite set $X$, let $\mathbf{SubPart}(X)$ denote the collection of all \emph{subpartitions} of $X$: Any partition $P'$ of a non-empty subset $X'\subseteq X$ is called a subpartition of $X$. Given two subpartitions $P_1,P_2$, we say $P_1$ is coarser than $P_2$ if each block in $P_2$ is contained in some block in $P_1$.
\begin{definition}[Treegrams]\label{def:treegram}
A \emph{treegram} $T_X:[0,\infty)\rightarrow \mathbf{SubPart}(X)$ is a map parametrizing a nested family of subpartitions over the same set $X$ and satisfying the following conditions:
\begin{enumerate}
\item For any $0\leq s<t<\infty$, $T_X(t)$ is coarser than $T_X(s)$;
\item There exists $t_X>0$ such that for any $t\geq t_X$, $T_X(t)=\{X\}$;
\item For each $t\geq 0$, there exists $\varepsilon>0$ such that $T_X(t)=T_X(t')$ for all $t'\in[t,t+\varepsilon]$;
\item For each $x\in X$, there exists $t_x\geq 0$ such that $\{x\}$ is a block in $T_X(t_x)$.
\end{enumerate}
\end{definition}
Similar to \Cref{thm:compact ultra-dendro}, which relates ultrametrics to dendrograms, there is a bijective correspondence between ultra-dissimilarity functions and treegrams on a finite set (see \Cref{fig:treegram} for an illustration).
\begin{proposition}[\citet{smith2016hierarchical}]\label{prop:ultradis-tree}
Given a finite set $X$, denote by $\mathcal{U}_\mathrm{dis}(X)$ the collection of all ultra-dissimilarity functions on $X$ and by $\mathcal{T}(X)$ the collection of all treegrams over $X$. Then, there exists a bijection $\Delta_X:\mathcal{T}(X)\rightarrow\mathcal{U}_\mathrm{dis}(X)$.
\end{proposition}
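One direction of this correspondence is easy to sketch: at level $t$, the subpartition $T_X(t)$ consists of the points that are already ``born'' (i.e., $\ensuremath{u_{X}}(x,x)\leq t$), grouped by the relation $\ensuremath{u_{X}}(x,y)\leq t$, which is an equivalence relation on the born points by the strong triangle inequality. The following minimal Python sketch (our own, with illustrative toy data) computes such a slice.

```python
def treegram_slice(pts, u, t):
    """Subpartition T_X(t): points with u(x, x) <= t, grouped by u(x, y) <= t."""
    alive = [x for x in pts if u(x, x) <= t]
    blocks = []
    for x in alive:
        for b in blocks:
            if u(x, b[0]) <= t:   # one representative suffices (transitivity)
                b.append(x)
                break
        else:
            blocks.append([x])
    return [sorted(b) for b in blocks]

# Toy ultra-dissimilarity: points born at times 1, 2, 0; merges at levels 3 and 4.
vals = {("a", "a"): 1, ("b", "b"): 2, ("c", "c"): 0,
        ("a", "b"): 4, ("a", "c"): 4, ("b", "c"): 3}
u = lambda x, y: vals.get((x, y), vals.get((y, x)))
for t in [0, 1, 3, 4]:
    print(t, treegram_slice(["a", "b", "c"], u, t))
```

As $t$ grows, points appear and blocks merge, matching conditions 1--4 of \Cref{def:treegram}.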
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Figures/treegram.eps}
\caption{\textbf{Treegrams:} Relation between ultra-dissimilarity functions and treegrams}
\label{fig:treegram}
\end{figure}
An \emph{ultra-dissimilarity measure space} is a triple $\mathcal{X}=(X,u_X,\mu_X)$ where $(X,u_X)$ is an ultra-dissimilarity space and $\mu_X$ is a probability measure fully supported on $X$.
Just as for metric spaces or metric measure spaces, it is important to have a notion of isomorphism between ultra-dissimilarity spaces.
\begin{definition}[Isomorphism]\label{def:isomorphism ultra-dissimilarity}
Given two ultra-dissimilarity measure spaces $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$, we say they are \emph{isomorphic}, denoted $\ensuremath{\mathcal{X}}\cong_w\ensuremath{\mathcal{Y}}$, if there is a bijective function $f:X\rightarrow Y$ such that $f_\#\mu_X=\mu_Y$ and for any $x,x'\in X$ it holds $u_Y(f(x),f(x'))=u_X(x,x')$. The collection of all isomorphism classes of ultra-dissimilarity spaces is denoted by $\ensuremath{\mathcal{U}_\mathrm{dis}^w}$.
\end{definition}
Given the previous results it is straightforward to show that $\ugw{p}$, $1\leq p\leq \infty$, is a metric on the isomorphism classes of $\ensuremath{\mathcal{U}_\mathrm{dis}^w}$. For the complete proof of the subsequent statement, we refer to \Cref{sec: proof of thm ugw-p-metric-dis}.
\begin{theorem}\label{thm:ugw-p-metric-dis}
The ultrametric Gromov-Wasserstein distance $\ugw{p}$ is a $p$-metric on $\ensuremath{\mathcal{U}_\mathrm{dis}^w}$.
\end{theorem}
\begin{remark}
Since $\ugw{p}$ translates to a metric on $\ensuremath{\mathcal{U}_\mathrm{dis}^w}$, it is clear that it admits the lower bounds introduced in \Cref{sec:lower bounds}.
\end{remark}
\section{Computational aspects}\label{sec:computational aspects}
In this section, we investigate algorithms for approximating/calculating $\ugw{p}$, $1\leq p\leq \infty$. Furthermore, we evaluate for $p<\infty$ the performance of the computationally efficient lower bound $\uSLB{}$ introduced in \Cref{sec:lower bounds} and compare our findings to the results of the classical Gromov-Wasserstein distance $d_{\mathrm{GW},p}$ (see \Cref{eq:Gromov Wasserstein}). Matlab implementations of the presented algorithms and comparisons are available at \url{https://github.com/ndag/uGW}.
\subsection{Algorithms}
Let $\ensuremath{\mathcal{X}}=\ensuremath{\left(X,\uX,\muX \right) }$ and $\ensuremath{\mathcal{Y}}=\ensuremath{\left(Y,\uY,\muY \right) }$ be two finite ultrametric measure spaces with cardinalities $m$ and $n$, respectively.
\subsubsection{The case \texorpdfstring{$p<\infty$}{p finite}}\label{subsubsec:p smaller infty} We have already noted in \Cref{rem:computational complexity ugw} that calculating $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ for $p<\infty$ amounts to solving a non-convex quadratic program (an NP-hard problem in general \citep{pardalos1991quadratic}). Solving this exactly is not feasible in practice. However, in many practical applications it is sufficient to work with good approximations. Therefore, we propose to approximate $\ugw{p}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ for $p<\infty$ via conditional gradient descent. To this end, we note that the gradient $G$ arising from \Cref{eq:distortion ult} can, in the present setting, be expressed via the following partial derivatives with respect to $\mu\in\mathcal{C}(\ensuremath{\mu_{X}},\ensuremath{\mu_{Y}})$:
\begin{equation}\label{eq:ugw gradient}
G_{i,j}=2\sum_{k=1}^m\sum_{l=1}^{n}(\Lambda_\infty(\ensuremath{u_{X}}(x_i,x_k),\ensuremath{u_{Y}}(y_j,y_l)))^p\mu_{kl}, \quad\forall 1\leq i\leq m,1\leq j\leq n.
\end{equation}
As we deal with a non-convex minimization problem, the performance of the gradient descent strongly depends on the starting coupling $\mu^{(0)}$. Therefore, we follow the suggestion of \citet{chowdhury2020generalize} and employ a Markov Chain Monte Carlo Hit-And-Run sampler to obtain multiple random start couplings. Running the gradient descent from each point in this ensemble greatly improves the approximation in many cases. For a precise description of the proposed procedure, we refer to \Cref{algo:gradient descent}.
\begin{algorithm}[htb]
\caption{$\ugw{p}(X,Y,p,N,L)$}\label{algo:gradient descent}
\begin{algorithmic}
\STATE{\emph{//Create a list of random couplings}}
\STATE{couplings =CreateRandomCouplings(N)}
\STATE{stat\_points = cell(N)}
\FOR{i=1:N}
\STATE{$\mu^{(0)}=$couplings\{$i$\}}
\FOR{j=1:L}
\STATE{$G=$ Gradient from \Cref{eq:ugw gradient} w.r.t. $\mu^{(j-1)}$}
\STATE{$\tilde{\mu}^{(j)}=$ Solve OT with ground loss $G$}
\STATE{$\gamma^{(j)}=\frac{2}{j+2}$}
\STATE{\emph{//Alt. find $\gamma\in[0,1]$ that minimizes $\mathrm{dis}_p^\mathrm{ult}\Big(\mu^{(j-1)}+\gamma\big( \tilde{\mu}^{(j)}-\mu^{(j-1)}\big) \Big)$} }
\STATE{$\mu^{(j)}=(1-\gamma^{(j)})\mu^{(j-1)}+\gamma^{(j)}\tilde{\mu}^{(j)}$}
\ENDFOR
\STATE{stat\_points\{$i$\}= $\mu^{(L)}$}
\ENDFOR
\STATE{Find $\mu^*$ in stat\_points that minimizes $\mathrm{dis}_p^\mathrm{ult}(\mu)$}
\STATE{result =$\mathrm{dis}_p^\mathrm{ult}(\mu^*)$}
\end{algorithmic}
\end{algorithm}
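As a rough, self-contained illustration of \Cref{algo:gradient descent} (not the paper's Matlab code), the following pure-Python sketch implements the conditional gradient loop for uniform measures and $m=n$, where the inner OT step reduces to an exhaustive search over permutation couplings (the vertices of the transport polytope). Our assumptions: $\Lambda_\infty(a,b)=\max(a,b)$ for $a\neq b$ and $0$ otherwise, and $\mathrm{dis}_p^\mathrm{ult}(\mu)=\big(\sum_{i,j,k,l}\Lambda_\infty(\ensuremath{u_{X}}(x_i,x_k),\ensuremath{u_{Y}}(y_j,y_l))^p\mu_{ij}\mu_{kl}\big)^{1/p}$, which is consistent with the gradient in \Cref{eq:ugw gradient}; the random restarts (the loop over $N$ couplings) are omitted for brevity, i.e., this corresponds to $N=1$ with the product coupling as starting point.

```python
import itertools

def Lam_inf(a, b):
    # Assumed convention: Lambda_infty(a, b) = max(a, b) if a != b, else 0.
    return 0.0 if a == b else max(a, b)

def gradient(uX, uY, mu, p=1):
    """Gradient from the displayed equation: G_ij = 2 * sum_kl Lam^p * mu_kl."""
    m, n = len(uX), len(uY)
    return [[2 * sum(Lam_inf(uX[i][k], uY[j][l]) ** p * mu[k][l]
                     for k in range(m) for l in range(n))
             for j in range(n)] for i in range(m)]

def distortion(uX, uY, mu, p=1):
    """dis_p^ult(mu), with the normalization assumed above."""
    m, n = len(uX), len(uY)
    s = sum(Lam_inf(uX[i][k], uY[j][l]) ** p * mu[i][j] * mu[k][l]
            for i in range(m) for j in range(n)
            for k in range(m) for l in range(n))
    return s ** (1.0 / p)

def ot_step(G):
    """Linear OT step for uniform marginals and m == n: the polytope's
    vertices are permutation matrices / n (exhaustive search, small n only)."""
    n = len(G)
    best = min(itertools.permutations(range(n)),
               key=lambda s: sum(G[i][s[i]] for i in range(n)))
    mu = [[0.0] * n for _ in range(n)]
    for i, j in enumerate(best):
        mu[i][j] = 1.0 / n
    return mu

def ugw_fw(uX, uY, p=1, L=50):
    """Conditional gradient (Frank-Wolfe) with step size 2/(j+2), single start."""
    n = len(uX)
    mu = [[1.0 / n ** 2] * n for _ in range(n)]  # product coupling as start
    for j in range(1, L + 1):
        tilde = ot_step(gradient(uX, uY, mu, p))
        g = 2.0 / (j + 2)
        mu = [[(1 - g) * mu[i][k] + g * tilde[i][k] for k in range(n)]
              for i in range(n)]
    return distortion(uX, uY, mu, p)

uA = [[0.0, 1.0], [1.0, 0.0]]   # Delta_2(1), uniform measure
uB = [[0.0, 2.0], [2.0, 0.0]]   # Delta_2(2), uniform measure
print(ugw_fw(uA, uB))           # approx 1.0 = 2*(1/2)*(1/2)*max(1, 2)
```

On this toy input the approximation agrees with the closed-form value $2\alpha_1\alpha_2\Lambda_\infty(d,d')$ from the two-point example discussed earlier.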
\subsubsection{The case \texorpdfstring{$p=\infty$}{p infinite}} \label{subsubsec:p equals infty}
For $p=\infty$, it follows by \Cref{thm:ugw-infty-eq} that
\begin{equation}\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})=\inf\left\lbrace t\geq 0 \,|\,\ensuremath{\mathcal{X}}_t \cong_w \ensuremath{\mathcal{Y}}_t\right\rbrace.\label{eq:p infty algo idea}\end{equation}
This identity allows us to construct a polynomial time algorithm for $\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$ based on the ideas of \citet[Sec. 8.2.2]{memoli2019gromov}. More precisely, let $\spec{X}\coloneqq\{\ensuremath{u_{X}}(x,x')|\,x,x'\in X\}$ denote the spectrum of $X$. Then, to find the infimum in \Cref{eq:p infty algo idea}, it suffices to check whether $\ensuremath{\mathcal{X}}_t \cong_w \ensuremath{\mathcal{Y}}_t$ for each $t\in\spec{X}\cup\spec{Y}$, proceeding from the largest value to the smallest; $\ugw{\infty}$ is then the smallest $t$ such that $\ensuremath{\mathcal{X}}_t\cong_w \ensuremath{\mathcal{Y}}_t$. Each check can be done in polynomial time by considering $\ensuremath{\mathcal{X}}_t$ and $\ensuremath{\mathcal{Y}}_t$ as labeled, weighted trees (e.g., by using a slight modification of the algorithm in Example 3.2 of \cite{aho1974design}). This gives rise to a simple algorithm (see \Cref{algo:ugw infty}) to calculate $\ugw{\infty}$.
\begin{algorithm}[H]
\caption{$\ugw{\infty}(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$}\label{algo:ugw infty}
\begin{algorithmic}
\STATE{spec = sort($\spec{X}\cup\spec{Y}$, 'descend')}
\FOR{$i=1:\mathrm{length(spec)}$}
\STATE{$t=\mathrm{spec}(i)$}
\IF{$\ensuremath{\mathcal{X}}_t\ncong_w \ensuremath{\mathcal{Y}}_t$}
\RETURN $\mathrm{spec}(i-1)$
\ENDIF
\ENDFOR
\RETURN $0$
\end{algorithmic}
\end{algorithm}
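For finite ultrametric measure spaces, the isomorphism checks in \Cref{algo:ugw infty} can be sketched by comparing canonical forms of the quotient dendrograms: $\ensuremath{\mathcal{X}}_t\cong_w\ensuremath{\mathcal{Y}}_t$ holds exactly when the mass-labeled dendrograms above level $t$ agree as rooted trees. The following Python sketch is our own simplified stand-in for the tree-isomorphism routine of \cite{aho1974design}; it encodes each quotient as a nested, sorted tuple (masses are rounded to guard against floating-point noise) and runs the loop of \Cref{algo:ugw infty}.

```python
def canon(idxs, u, mu, t):
    """Canonical form of the quotient space at level t, as a nested sorted tuple."""
    d = max(u[i][j] for i in idxs for j in idxs)
    if d <= t:  # the whole block collapses to a single point carrying its total mass
        return (0.0, round(sum(mu[i] for i in idxs), 9))
    blocks = []  # split by the equivalence relation u(x, y) < d (strong triangle)
    for i in idxs:
        for b in blocks:
            if u[i][b[0]] < d:
                b.append(i)
                break
        else:
            blocks.append([i])
    return (d, tuple(sorted(canon(b, u, mu, t) for b in blocks)))

def ugw_inf(uX, muX, uY, muY):
    """Scan the joint spectrum from largest to smallest, as in the algorithm.
    At the largest t both spaces collapse to one point of total mass 1, so the
    first check always passes and spec[i - 1] below is well defined."""
    iX, iY = range(len(uX)), range(len(uY))
    spec = sorted({uX[i][j] for i in iX for j in iX}
                  | {uY[i][j] for i in iY for j in iY}, reverse=True)
    for i, t in enumerate(spec):
        if canon(list(iX), uX, muX, t) != canon(list(iY), uY, muY, t):
            return spec[i - 1]
    return 0.0

uA = [[0.0, 1.0], [1.0, 0.0]]
uB = [[0.0, 2.0], [2.0, 0.0]]
half = [0.5, 0.5]
print(ugw_inf(uA, half, uB, half))  # 2.0: rigidity, since max(d, d') = 2
```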
\subsection{The relation between \texorpdfstring{$\ugw{1}$, $\ugw{\infty}$}{the ultrametric Gromov-Wasserstein distance} and \texorpdfstring{$\uSLB{1}$}{its second lower bound}}\label{sec:ugw toy examples}
In order to understand how $\ugw{p}$ (or at least its approximation), $\ugw{\infty}$ and $\uSLB{p}$ are influenced by small changes in the structure of the considered ultrametric measure spaces, we consider, as an example, the ultrametric measure spaces $ \ensuremath{\mathcal{X}}_i=(X_i,d_{X_i},\mu_{X_i})$, $1\leq i \leq 4$, displayed in \Cref{fig:umm spaces}. These ultrametric measure spaces differ only by one characteristic (e.g., one side length or the equipped measure). We calculate $\ugw{1}(\ensuremath{\mathcal{X}}_i,\ensuremath{\mathcal{X}}_j)$ (approximated with \Cref{algo:gradient descent}, where $L=5000$ and $N=40$), $\uSLB{1}(\ensuremath{\mathcal{X}}_i,\ensuremath{\mathcal{X}}_j)$ and $\ugw{\infty}(\ensuremath{\mathcal{X}}_i,\ensuremath{\mathcal{X}}_j)$ for $1\leq i,j\leq 4$. The results suggest that $\uSLB{1}$ and $\ugw{1}$ are influenced most by changes in the diameter of the spaces (see \Cref{tab:exemplary comparison} and \Cref{tab:exemplary comparison uslb} in \Cref{app:ugw toy examples} for the complete results). Changes in the metric influence $\uSLB{1}$ in a similar fashion as $\ugw{1}$, while changes in the measure have less impact on $\uSLB{1}$. Further, we observe that $\ugw{\infty}$ attains the maximal possible value for almost all comparisons. Only the comparison of $\ensuremath{\mathcal{X}}_1$ with $\ensuremath{\mathcal{X}}_3$, where only the small-scale structure of the space was changed, yields a value smaller than the maximum of the diameters of the considered spaces.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Figures/ummexample.eps}
\caption{\textbf{Ultrametric measure spaces:} Four non-isomorphic ultrametric measure spaces denoted (from left to right) as $\ensuremath{\mathcal{X}}_i=(X_i,d_{X_i},\mu_{X_i})$, $1\leq i \leq 4$.
}\label{fig:umm spaces}
\end{figure}
\subsection{Comparison of \texorpdfstring{$\ugw{1}$, $\uSLB{1}$, $\dgw{1}$ and $\dSLB{1}$}{the ultrametric and the usual Gromov-Wasserstein distance and Second Lower Bounds}}\label{subsec:relation to the GW-dist}
In the remainder of this section, we will demonstrate the differences between $\ugw{1}$, $\uSLB{1}$, $\dgw{1}$ and $\dSLB{1}$. To this end, we first compare the metric measure spaces in \Cref{fig:umm spaces} based on $\dgw{1}$ and $\dSLB{1}$. We observe that $\dgw{1}$ (approximated in the same manner as $\ugw{1}$) and $\dSLB{1}$ are hardly influenced by the differences between the ultrametric measure spaces $\ensuremath{\mathcal{X}}_i$, $1\leq i\leq 4$. In particular, it is remarkable that $\dgw{1}$ is affected the most by the changes made to the measure and not the metric structure (see \Cref{tab:exemplary comparison GW} in \Cref{sec:details from subsec relation to the GW-dist} for the complete results).
Next, we consider the differences between the aforementioned quantities more generally. For this purpose, we generate 4 ultrametric spaces $Z_{k}$, $1\leq k\leq 4$, with totally different dendrogram structures, whose diameters lie between 0.5 and 0.6 (for the precise construction of these spaces see \Cref{sec:details from subsec relation to the GW-dist}). For each $t=0,0.2,0.4,0.6$, we perturb each $Z_k$ independently to generate 15 ultrametric spaces $Z^{i}_{k,t}$, $1\leq i\leq 15$, such that $(Z^{i}_{k,t})_t\equiv (Z_{k})_t$ for all $i$. The spaces $Z^{i}_{k,t}$ are called \emph{perturbations of $Z_{k}$ at level $t$} (see \Cref{fig:randomly sampled ultrametric measure spaces} for an illustration and \Cref{sec:details from subsec relation to the GW-dist} for more details). The spaces $Z^{i}_{k,t}$ are endowed with the uniform probability measure, yielding a collection of ultrametric measure spaces $\ensuremath{\mathcal{Z}}^{i}_{k,t}$. Naturally, we refer to $k$ as the class of the ultrametric measure space $\ensuremath{\mathcal{Z}}^{i}_{k,t}$. For each $t$, we compute the quantities $\ugw{1}$, $\uSLB{1}$, $\dgw{1}$ and $\dSLB{1}$ among the resulting $60$ ultrametric measure spaces. The results, where the spaces have been ordered lexicographically by $(k,i)$, are visualized in \Cref{fig:noise ums}. As previously, we observe that $\ugw{1}$ and $\uSLB{1}$ as well as $\dgw{1}$ and $\dSLB{1}$ behave in a similar manner. More precisely, we see that both $\dgw{1}$ and $\dSLB{1}$ discriminate well between the different classes and that their behavior does not change much for an increasing level of perturbation. On the other hand, $\ugw{1}$ and $\uSLB{1}$ are very sensitive to the level of perturbation. For small $t$, they discriminate between the different classes better than $\dgw{1}$ and $\dSLB{1}$ and clearly pick up that the perturbed spaces differ. 
However, if the level of perturbation becomes too large both quantities start to discriminate between spaces from the same class (see \Cref{fig:noise ums}).
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{Figures/dendronew.pdf}
\caption{\textbf{Randomly sampled ultrametric measure spaces:} Illustration of $Z_k$ for $k=2,3,4,5$ (top row) and instances for perturbations of $Z_4$ with respect to perturbation level $t\in\{0,0.2,0.4,0.6\}$ (bottom row). } \label{fig:randomly sampled ultrametric measure spaces}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{Figures/fourperturbation.pdf}
\caption{\textbf{$\ugw{1}/\uSLB{1}$ and $\dgw{1}/\dSLB{1}$ among randomly generated ultrametric measure spaces:} Heatmap representations of $\uSLB{1}(\ensuremath{\mathcal{Z}}^i_{k,t},\ensuremath{\mathcal{Z}}^{i'}_{k',t})$ (top row), $\ugw{1}(\ensuremath{\mathcal{Z}}^i_{k,t},\ensuremath{\mathcal{Z}}^{i'}_{k',t})$ (second row), $\dSLB{1}(\ensuremath{\mathcal{Z}}^i_{k,t},\ensuremath{\mathcal{Z}}^{i'}_{k',t})$ (third row) and $\dgw{1}(\ensuremath{\mathcal{Z}}^i_{k,t},\ensuremath{\mathcal{Z}}^{i'}_{k',t})$ (bottom row), $k,k'\in\{2,\dots,5\}$ and $i,i'\in\{1,\ldots,15\}$.
}\label{fig:noise ums}
\end{figure}
In conclusion, $\ugw{1}$ and $\uSLB{1}$ are sensitive to differences in the large-scale structure of the considered ultrametric measure spaces. While this leads, for small $t$, to good discrimination in the above example, it also highlights that, unlike $\dgw{1}$ and $\dSLB{1}$, they are susceptible to large-scale noise.
\section{Phylogenetic tree shapes}\label{sec:phylogenetic tree shapes}
Rooted phylogenetic trees (for a formal definition see e.g., \cite{semple2003phylogenetics}) are a common tool to visualize and analyze the evolutionary relationships between different organisms. In combination with DNA sequencing, they are an important tool to study the rapid evolution of different pathogens. It is well known that the (unweighted) shape of a phylogenetic tree, i.e., the tree's connectivity structure without reference to its labels or the lengths of its branches, carries important information about macroevolutionary processes (see e.g., \cite{mooers1997inferring, blum2006random,dayarian2014infer, wu2016joint}). In order to study the evolution of and the relation between different pathogens, it is of great interest to compare the shapes of phylogenetic trees created on the basis of different data sets. Currently, the number of tools for performing phylogenetic tree shape comparison is quite limited and the development of new methods for this is an active field of research \citep{colijn2018metric,morozov2018extension,kim2019metric,liu2020polynomial}. It is well known that certain classes of phylogenetic trees (as well as their respective tree shapes) can be identified with ultrametric spaces \citep[Sec. 7]{semple2003phylogenetics}. On the other hand, general phylogenetic trees are closely related to treegrams (see \Cref{def:treegram}). In the following, we will use this connection and demonstrate by example that the computationally efficient lower bound $\uSLB{1}$ has some potential for comparing phylogenetic tree shapes. In particular, we contrast it with the metric defined for this application in Equation (4) of \citet{colijn2018metric}, denoted by $d_{\mathrm{CP},2}$ in the following, and study the behavior of $\dSLB{1}$ in this framework.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{Figures/phylotree.eps}
\caption{\textbf{Transforming a phylogenetic tree shape into an ultra-dissimilarity space:} In this figure, we illustrate the treegram corresponding to the ultra-dissimilarity space generated by \Cref{eq:tree shape distance} with respect to the phylogenetic tree shape on the left. Note that the treegram preserves the tree structure and the smallest birth time of points is exactly 0.}
\label{fig:phlogenetric tree to treegram}
\end{figure}
In this section, we reconsider phylogenetic tree shape comparisons from \citet{colijn2018metric} and thereby study HA protein sequences from human influenza A (H3N2) (data downloaded from NCBI on 22 January 2016). More precisely, we investigate the relation between two samples of size 200 of phylogenetic tree shapes with 500 tips. Phylogenetic trees from the first sample are based on a random subsample of size 500 of 2168 HA-sequences that were collected in the USA between March 2010 and September 2015, while trees from the second sample are based on a random subsample of size 500 of 1388 HA-sequences gathered in the tropics between January 2000 and October 2015 (for the exact construction of the trees see \cite{colijn2018metric}). Although both samples of phylogenetic trees are based on HA protein sequences from human influenza A, we expect them to be quite different. On the one hand, influenza A is highly seasonal outside the tropics (where this seasonal variation is absent), with the majority of cases occurring in the winter \citep{russell2008global}. On the other hand, it is well known that the ongoing evolution of the HA protein causes a `ladder-like' shape of long-term influenza phylogenetic trees \citep{koelle2010two,volz2013viral,westgeest2012genetic,luksza2014predictive} that is typically less developed in short-term data sets. Thus, the different collection periods of the two data sets will most likely also influence the respective phylogenetic tree shapes.
In order to compare the phylogenetic tree shapes of the resulting 400 trees, we have to transform the phylogenetic tree shapes into ultra-dissimilarity measure spaces $\mathcal{X}_i=(X_i,u_{X_i},\mu_{X_i})$, $1\leq i\leq 400$. To this end, we discard all the labels, denote by $X_i$ the tips of the $i$-th phylogenetic tree and refer to the corresponding tree shape as $\mathcal{T}_i$. Next, we define the ultra-dissimilarities $u_{X_i}$ on $X_i$, $1\leq i \leq 400$. For this purpose, we set all edge lengths in the considered phylogenetic trees to one and construct $u_{X_i}$ as follows: let $x^i_1,x^i_2\in X_i$ and let $a^i_{1,2}$ be the most recent common ancestor of $x^i_1$ and $x^i_2$. Let $d^i_{a_{1,2}}$ be the length of the shortest path from $a^i_{1,2}$ to the root, let $d^i_1$ be the length of the shortest path from $x^i_1$ to the root and let $d^i$ be the length of the longest shortest path from any tip to the root. Then, we define for any $x^i_1,x^i_2\in X_i$
\begin{equation}\label{eq:tree shape distance}
u_{X_i}(x^i_1,x^i_2)=\begin{cases}d^i- d^i_{a_{1,2}}& \text{if }x^i_1\neq x^i_2\\
d^i-d^i_1&\text{if }x^i_1= x^i_2, \end{cases}
\end{equation}
and weight all tips in $X_i$ equally (i.e., $\mu_{X_i}$ is the uniform measure on $X_i$). This naturally transforms the collection of phylogenetic tree shapes $\mathcal{T}_i$, $1\leq i\leq 400$, into a collection of ultra-dissimilarity spaces (see \Cref{fig:phlogenetric tree to treegram} for an illustration), which allows us to directly apply $\uSLB{1}$ to compare them (once again, we choose $p=1$ as an example).
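For a small tree, the construction in \Cref{eq:tree shape distance} can be sketched as follows (a toy Python illustration with unit edge lengths; the encoding of the tree via a \texttt{parent} map and all names are ours):

```python
def tree_shape_dissimilarity(parent, tips):
    """u_{X} from the tree-shape formula, unit edge lengths; root has parent None."""
    def ancestors(v):  # path from v up to the root, v included
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path
    depth = {v: len(ancestors(v)) - 1 for v in parent}
    dmax = max(depth[x] for x in tips)  # longest root-to-tip path length (d^i)
    def lca_depth(x, y):  # depth of the most recent common ancestor
        ax = set(ancestors(x))
        return max(depth[v] for v in ancestors(y) if v in ax)
    u = {}
    for x in tips:
        for y in tips:
            u[(x, y)] = (dmax - depth[x]) if x == y else (dmax - lca_depth(x, y))
    return u

# Toy tree: root r with tip child a and internal child v; v has tip children b, c.
parent = {"r": None, "a": "r", "v": "r", "b": "v", "c": "v"}
u = tree_shape_dissimilarity(parent, ["a", "b", "c"])
print(u[("a", "a")], u[("a", "b")], u[("b", "c")])  # 1 2 1
```

Note that the shallow tip $a$ receives the positive self-dissimilarity $u(a,a)=1$, i.e., the output is an ultra-dissimilarity rather than an ultrametric, matching the treegram picture in \Cref{fig:phlogenetric tree to treegram}.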
In \Cref{fig:uSLB and colijn phylo} we contrast our findings for the comparisons of the shapes $\mathcal{T}_i$, $1\leq i \leq 400$, with those obtained by computing the metric $d_{\mathrm{CP},2}$ described in \cite{colijn2018metric}. The top row of \Cref{fig:uSLB and colijn phylo} visualizes the dissimilarity matrix for the comparisons of all 400 phylogenetic tree shapes (the first 200 entries correspond to the tree shapes from the US influenza data and the second 200 to the ones from the tropical influenza data) obtained by applying $\uSLB{1}$ as a heat map (left) and as a multidimensional scaling plot (right). The heat map shows that the collection of US trees is divided into a large group $\mathcal{G}_1\coloneqq(\mathcal{T}_i)_{1\leq i\leq 161}$, which is well separated from the phylogenetic tree shapes based on tropical data $\mathcal{G}_3\coloneqq(\mathcal{T}_i)_{201\leq i\leq 400}$, and a smaller subgroup $\mathcal{G}_2\coloneqq(\mathcal{T}_i)_{162\leq i\leq 200}$, which seems to be more similar (in the sense of $\uSLB{1}$) to the tropical phylogenetic tree shapes. In the following, $\mathcal{G}_1$ and $\mathcal{G}_2$ are referred to as \emph{US main} and \emph{US secondary group}, respectively. This division is even more evident in the MDS-plot on the right (black points represent tree shapes from the US main group, blue points tree shapes from the US secondary group and red points tree shapes based on the tropical data).
\begin{framed}
We remark that, in order to highlight the subgroups, the US tree shapes have been reordered according to the output permutation of a single linkage dendrogram (w.r.t. $\uSLB{1}$) based on the US tree submatrix, computed with \citet{Matlabbasicversion}; the tropical tree shapes have been reordered analogously.
\end{framed}
The second row of \Cref{fig:uSLB and colijn phylo} displays the analogous plots for $d_{\mathrm{CP},2}$. It is noteworthy that the coloring in the MDS-plot is the same as before, i.e., tree shapes in $\mathcal{G}_1$ are represented by black points, those in $\mathcal{G}_2$ by blue ones and those in $\mathcal{G}_3$ by red ones. Interestingly, the analysis based on these plots differs from the previous one. Using $d_{\mathrm{CP},2}$ to compare the phylogenetic tree shapes at hand, we can split the data into two clusters, one corresponding to the US data and the other to the tropical data, with only a small overlap (see the MDS-plot in the second row of \Cref{fig:uSLB and colijn phylo} on the right). In particular, we notice that $d_{\mathrm{CP},2}$ does not clearly distinguish between the US groups $\mathcal{G}_1$ and $\mathcal{G}_2$.
\begin{figure}[htb]
\centering
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/uSLBphylo.pdf}
\end{subfigure}
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/uSLBphylomds.pdf}
\end{subfigure}
\vspace*{3mm}\\
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/colijn.pdf}
\end{subfigure}
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/colijnmds.pdf}
\end{subfigure}
\caption{\textbf{Phylogenetic tree shape comparison:} Visualization of the dissimilarity matrices for the comparison of the phylogenetic tree shapes $\mathcal{T}_i$, $1\leq i \leq 400$, based on $\uSLB{1}$ (top row) and $d_{\mathrm{CP},2}$ (bottom row) as heat maps (left) and MDS-plots (right).
}
\label{fig:uSLB and colijn phylo}
\end{figure}
In order to analyze the different findings of $\uSLB{1}$ and $d_{\mathrm{CP},2}$, we collect and compare different characteristics of the tree shapes in the groups $\mathcal{G}_i$, $1\leq i\leq 3$. More precisely, we concentrate on various ``metric'' properties of the considered ultra-dissimilarity spaces such as $\frac{1}{500^2|\mathcal{G}_i|}\sum_{\mathcal{T}_j\in \mathcal{G}_i}\sum_{x,x'\in X_j}u_{X_j}(x,x')$ (``mean average distance'') and $\frac{1}{|\mathcal{G}_i|}\sum_{\mathcal{T}_j\in \mathcal{G}_i}\max\{u_{X_j}(x,x')|x,x'\in X_j\}$ (``mean maximal distance''), $1\leq i\leq 3$ (these influence $\uSLB{1}$ strongly), as well as the mean numbers of certain connectivity structures, such as the 4- and 5-structures (these influence $d_{\mathrm{CP},2}$; for a formal definition see \cite{colijn2018metric}). These values (see \Cref{fig:phylo property illustration}) show that the mean average distance and the mean maximal distance differ drastically between the two groups of US tree shapes. The tree shapes in these two groups are completely different from a metric perspective, and the values for the US secondary group strongly resemble those of the tropical tree shapes. On the other hand, the connectivity characteristics do not change much between the US main and secondary group. Hence, the metric $d_{\mathrm{CP},2}$ does not clearly divide the US trees into two groups, although the differences are certainly present. When carefully checking the phylogenetic trees, the reasons for the differences between trees in the US main group and US secondary group are not immediately apparent. Nevertheless, it is remarkable that trees from the US secondary cluster generally contain more samples from California and Florida (on average 1.92 and 0.88 more) and fewer from Maryland, Kentucky and Washington (on average 0.73, 0.83 and 0.72 fewer).
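The two ``metric'' summary statistics used here can be sketched as follows (our own toy Python illustration; the paper's normalization $1/500^2$ corresponds to $1/|X_j|^2$ since all trees have 500 tips, and the sketch uses the general form):

```python
def group_metric_stats(group):
    """group: list of (u, pts) pairs, u a dict of pairwise dissimilarities.
    Returns (mean average distance, mean maximal distance) over the group."""
    avg = sum(sum(u[(x, y)] for x in pts for y in pts) / len(pts) ** 2
              for u, pts in group) / len(group)
    mx = sum(max(u[(x, y)] for x in pts for y in pts)
             for u, pts in group) / len(group)
    return avg, mx

# Toy group of two identical two-tip spaces with u(a, b) = 2.
u = {("a", "a"): 0, ("b", "b"): 0, ("a", "b"): 2, ("b", "a"): 2}
print(group_metric_stats([(u, ["a", "b"]), (u, ["a", "b"])]))  # (1.0, 2.0)
```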
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline \rule{0pt}{10pt} & USA (main group) &USA (secondary group)& Tropics \\\hline
Mean Avg. Dist. &36.16 &61.88 & 53.45 \\
Mean Max. Dist. & 56.12 & 86.13 &
94.26 \\
Mean Num. of 4-Struc. & 15.61& 14.08 &7.81 \\
Mean Num. of 5-Struc. & 28.04& 27.97 &35.82 \\\hline
\end{tabular}
\medskip
\caption{\textbf{Tree shape characteristics:} The means of several metric and connectivity characteristics of the ultra-dissimilarity spaces $\ensuremath{\mathcal{X}}_i$ and the corresponding phylogenetic tree shapes $\mathcal{T}_i$, $1\leq i \leq 400$, for the three groups $\mathcal{G}_i$, $1\leq i\leq 3$.}\label{fig:phylo property illustration}
\end{table}
To conclude this section, we remark that using $\dSLB{1}$ instead of $\uSLB{1}$ for comparing the ultra-dissimilarity spaces $\ensuremath{\mathcal{X}}_i$, $1\leq i\leq 400$, gives comparable results (cf. \Cref{fig:SLB phylo}, coloring and ordering as previously). Nevertheless, we observe (as we already have in \Cref{sec:computational aspects}) that $\uSLB{1}$ is more discriminating than $\dSLB{1}$.
Furthermore, we mention that so far we have only considered unweighted phylogenetic tree shapes. However, the branch lengths of the considered phylogenetic trees are relevant in many examples, because they can for instance reflect the (inferred) genetic distance between evolutionary events \citep{colijn2018metric}. While the branch lengths cannot easily be included in the metric $d_{\mathrm{CP},2}$,
the modeling of phylogenetic tree shapes as ultra-dissimilarity spaces is extremely flexible. It is straightforward to include branch lengths into the comparisons or to put emphasis on specific features (via weights on the corresponding tips). However, this is beyond the scope of this illustrative data analysis.
\begin{figure}[htb]
\centering
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/SLBphylo.pdf}
\end{subfigure}
\begin{subfigure}[c]{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/SLBphylomds.pdf}
\end{subfigure}
\caption{\textbf{Phylogenetic tree shape comparison based on $\dSLB{1}$:} Representation of the dissimilarity matrices for the comparisons of the ultra-dissimilarity spaces $\ensuremath{\mathcal{X}}_i$, $1\leq i \leq 400$, based on $\dSLB{1}$ as heat maps (left) and MDS-plots (right).} \label{fig:SLB phylo}
\end{figure}
\section{Concluding remarks}
Since we suspect that computing $\ugw{p}$ and $\usturm p$ for finite $p$ leads to NP-hard problems, it seems interesting to identify suitable collections of ultrametric measure spaces where these distances can be computed in polynomial time as done for the Gromov-Hausdorff distance in \cite{memoli2019gromov}.
\subsection*{Acknowledgements}
We are grateful to Prof. Colijn for sharing the data from \cite{colijn2018metric} with us. F.M. and Z.W. acknowledge funding from the NSF under grants NSF CCF 1740761, NSF DMS 1723003, and NSF RI 1901360. A.M. and C.W. gratefully acknowledge support by the DFG Research Training Group 2088 and Cluster of Excellence MBExC 2067. F.M. and A.M. thank the Mathematisches Forschungsinstitut Oberwolfach. Conversations which eventually led to this project were initiated during the 2019 workshop ``Statistical and Computational Aspects of Learning with Complex Structure".
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
With the development and integration of cyber physical and safety critical systems in various engineering disciplines, there is an increasing need for computational tools for verification and control of such systems according to rich and complex specifications. A prominent example is
autonomous driving, which has received a lot of attention during the last decade.
Besides common objectives in optimal control problems, such as minimizing the energy consumption and travel time, and constraints on control variables, such as maximum acceleration, autonomous vehicles (AVs) should follow complex and possibly conflicting traffic laws with different priorities. They should also meet cultural expectations of reasonable driving behavior \cite{nolte2017towards,shalev2017formal,parseh2019pre,ulbrich2013probabilistic,qian2014priority,iso2019pas,Collin2020}. For example, an AV has to avoid collisions with other road users (high priority), drive faster than the minimum speed limit (low priority), and maintain longitudinal clearance with the lead car (medium priority). We formulate these behavior specifications as a set of rules with a priority structure that captures their importance \cite{Censi2020}.
To accommodate the rules, we formulate the AV control problem as
an optimal control problem, in which the satisfaction of the rules and some vehicle limitations are enforced by Control Barrier Functions (CBF) \cite{Ames2017}, and convergence to desired states is achieved through Control Lyapunov Functions \cite{Freeman1996}.
To minimize violation of the set of rules, we formulate an iterative rule relaxation scheme based on the pre-order on the rules.
Control Lyapunov Functions (CLFs) \cite{Freeman1996,Artstein1983} have been used to stabilize systems to desired states. CBFs enforce set forward-invariance \cite{Tee2009,Wisniewski2013}, and have been adopted to guarantee the satisfaction of safety requirements \cite{Ames2017,wang2017safety,Lindemann2018}. In \cite{Ames2017,Glotfelter2017}, the constraints induced by
CBFs and CLFs were used to formulate quadratic programs (QPs) that could be solved in real time to stabilize affine control systems while optimizing quadratic costs and satisfying state and control constraints. The main limitation of this approach is that the resulting QPs can easily become infeasible, especially when bounds on control inputs are imposed in addition to the safety specifications and the state constraints, or for constraints with high relative degree \cite{Xiao2019}. Relaxations of the (hard) CLF \cite{Aaron2012,Ames2017}
and CBF \cite{Xiao2019} constraints
have been proposed to address this issue.
The approaches described above do not consider the (relative) importance of the safety constraints during their relaxations. With particular relevance to the application considered here, AVs often deal with situations where there are conflicts among some of the traffic laws or other requirements. For instance, consider a scenario where a pedestrian walks into the lane in which the AV is driving: it is impossible for the AV to simultaneously avoid a collision with the pedestrian or other vehicles, stay in its lane, and drive faster than the minimum speed limit.
Given the relative priorities of these specifications, a reasonable AV behavior would be to avoid a collision with the pedestrian or other vehicles (high priority), and instead violate low or medium priority rules, e.g., by reducing speed to a value lower than the minimum speed limit, and/or deviating from its lane. The maximum satisfaction and minimum violation of a set of rules expressed in temporal logic were studied in \cite{dimitrova2018maximum,tuumova2013minimum} and solved by assigning positive numerical weights to formulas based on their priorities \cite{tuumova2013minimum}. In \cite{Censi2020}, the authors proposed \emph{rulebooks}, a framework in which relative priorities were captured by a pre-order. In conjunction with rule violation scores, rulebooks were used to rank vehicle trajectories. These works do not consider the vehicle dynamics, or assume very simple forms, such as finite transition systems. The violation scores are example-specific, or are simply the quantitative semantics of the logic used to formulate the rules. In their current form, they capture worst-case scenarios and are non-differentiable, and
cannot be used for generating controllers for realistic vehicle dynamics.
In this paper, we draw inspiration from Signal Temporal Logic (STL) \cite{Maler2004} to formalize traffic laws and other driving rules and to quantify the degree of violation of the rules by AV trajectories. We build on the rulebooks from \cite{Censi2020} to construct a rule priority structure. The main contribution of this paper is an iterative procedure that uses the rule priority to determine a control strategy that minimizes rule violation globally. We show how this procedure can be adapted to develop transparent and reproducible rule-based pass/fail evaluation of AV trajectories in test scenarios.
Central to these approaches is an optimization problem based on \cite{Xiao2019}, which uses detailed vehicle dynamics, ensures the satisfaction of ``hard" vehicle limitations (e.g., acceleration constraints), and can accommodate rule constraints with high relative degree. Another key contribution of this work is the formal definition of a speed-dependent, optimal over-approximation of a vehicle footprint that ensures differentiability of clearance-type rules, which enables the use of powerful approaches based on CBFs and CLFs. Finally, the proposed architecture and algorithms were implemented in a user-friendly software tool and tested in various driving scenarios.
\section{Preliminaries}
\label{sec:pre}
\subsection{Vehicle Dynamics}\label{sec:vd}
Consider an affine control system given by:\vspace{-3pt}
\begin{equation}\label{eqn:affine}
\dot{\bm{x}}=f(\bm x)+g(\bm x)\bm u,
\vspace{-3pt}
\end{equation}
where $\bm x\in X\subset\mathbb{R}^{n}$ ($X$ is the state constraint set), $\dot{()}$ denotes differentiation with respect to time,
$f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$
and $g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times q}$ are globally
Lipschitz, and $\bm u\in U\subset\mathbb{R}^{q}$, where $U$ is the control constraint set
defined as:
\begin{equation}
U:=\{\bm u\in\mathbb{R}^{q}:\bm u_{min}\leq\bm u\leq\bm u_{max}\},
\label{eqn:control}%
\end{equation}
with $\bm u_{min},\bm u_{max}\in\mathbb{R}^{q}$, and the inequalities are
interpreted componentwise. We use $\bm{x}(t)$ to refer to a trajectory of (\ref{eqn:affine}) at a specific time $t$, and we use $\mathcal{X}$ to denote a whole trajectory starting at time 0 and ending at a final time specified by a scenario.
Note that most vehicle dynamics, such as ``traditional" dynamics defined with respect to an inertial frame \cite{Ames2017} and dynamics defined along a given reference trajectory \cite{Rucco2015} (see (\ref{eqn:vehicle})) are in the form (\ref{eqn:affine}). Throughout the paper, we will refer to the vehicle with dynamics given by (\ref{eqn:affine}) as {\em ego}.
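As a numerical illustration of rolling out (\ref{eqn:affine}) under the box control constraints (\ref{eqn:control}), the following sketch (assumptions: forward-Euler integration, a double integrator standing in for the vehicle dynamics, and a hypothetical \texttt{simulate} helper) clips the input componentwise to $U$ at every step:

```python
import numpy as np

def simulate(f, g, u_of_t, x0, dt=0.01, T=1.0, u_min=None, u_max=None):
    """Forward-Euler rollout of dx/dt = f(x) + g(x) u with box-clipped inputs."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(round(T / dt)):
        u = np.atleast_1d(u_of_t(k * dt, x))
        if u_min is not None or u_max is not None:
            u = np.clip(u, u_min, u_max)  # enforce U = {u : u_min <= u <= u_max}
        x = x + dt * (f(x) + g(x) @ u)   # one Euler step of the affine dynamics
        traj.append(x.copy())
    return np.array(traj)

# Double integrator stand-in: x = (position, speed), u = acceleration.
f = lambda x: np.array([x[1], 0.0])
g = lambda x: np.array([[0.0], [1.0]])
X = simulate(f, g, lambda t, x: [1.0], x0=[0.0, 0.0],
             u_min=[-2.0], u_max=[2.0])
```

Here the whole trajectory $\mathcal{X}$ corresponds to the array \texttt{X} of sampled states.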
\begin{definition}
\label{def:forwardinv}(\textit{Forward invariance} \cite{Nguyen2016}) A set $C\subset\mathbb{R}^{n}$ is forward invariant for
system (\ref{eqn:affine}) if $\bm x(0) \in C$ implies $\bm x(t)\in C,$ $\forall t\geq0$.
\end{definition}
\begin{definition}
\label{def:relative} (\textit{Relative degree} \cite{Nguyen2016}) The relative degree of a
(sufficiently many times) differentiable function $b:\mathbb{R}^{n}%
\rightarrow\mathbb{R}$ with respect to system (\ref{eqn:affine}) is the number
of times it needs to be differentiated along its dynamics (Lie derivatives) until the control
$\bm u$ explicitly shows in the corresponding derivative.
\end{definition}
In this paper, since function $b$ is used to define a constraint $b(\bm
x)\geq0$, we will also refer to the relative degree of $b$ as the relative
degree of the constraint.
\subsection{High Order Control Barrier Functions}
\begin{definition}
\label{def:classk} (\textit{Class $\mathcal{K}$ function} \cite{Khalil2002}) A
continuous function $\alpha:[0,a)\rightarrow[0,\infty), a > 0$ is said to
belong to class $\mathcal{K}$ if it is strictly increasing and $\alpha(0)=0$.
\end{definition}
Given $b:\mathbb{R}^{n}\rightarrow\mathbb{R}$ and a constraint $b(\bm x)\geq0$ with relative
degree $m$, we define $\psi_{0}(\bm
x):=b(\bm x)$ and a sequence of functions $\psi_{i}:\mathbb{R}%
^{n}\rightarrow\mathbb{R},i\in\{1,\dots,m\}$:
\vspace{-2pt}
\begin{equation}
\begin{aligned} \psi_i(\bm x) := \dot \psi_{i-1}(\bm x) + \alpha_i(\psi_{i-1}(\bm x)),i\in\{1,\dots,m\}, \end{aligned} \label{eqn:functions}%
\end{equation}
where $\alpha_{i}(\cdot),i\in\{1,\dots,m\}$ denotes an $(m-i)^{th}$ order
differentiable class $\mathcal{K}$ function. We further define a sequence of sets $C_{i}, i\in\{1,\dots,m\}$ associated
with (\ref{eqn:functions}) in the following form:
\begin{equation}
\label{eqn:sets}\begin{aligned} C_i := \{\bm x \in \mathbb{R}^n: \psi_{i-1}(\bm x) \geq 0\}, i\in\{1,\dots,m\}. \end{aligned}
\end{equation}
\begin{definition}
\label{def:hocbf} (\textit{High Order Control Barrier Function (HOCBF)}
\cite{Xiao2019}) Let $C_{1}, \dots, C_{m}$ be defined by (\ref{eqn:sets}%
) and $\psi_{1}(\bm x), \dots, \psi_{m}(\bm x)$ be defined by
(\ref{eqn:functions}). A function $b: \mathbb{R}^{n}\rightarrow\mathbb{R}$ is
a High Order Control Barrier Function (HOCBF) of relative degree $m$ for
system (\ref{eqn:affine}) if there exist $(m-i)^{th}$ order differentiable
class $\mathcal{K}$ functions $\alpha_{i},i\in\{1,\dots,m-1\}$ and a class
$\mathcal{K}$ function $\alpha_{m}$ such that
\begin{equation}
\label{eqn:constraint}\begin{aligned} \sup_{\bm u\in U}[L_f^{m}b(\bm x) + L_gL_f^{m-1}b(\bm x)\bm u + S(b(\bm x)) \\+ \alpha_m(\psi_{m-1}(\bm x))] \geq 0, \end{aligned}
\end{equation}
for all $\bm x\in C_{1}\cap\dots\cap C_{m}$.
$L_{f}^{m}$ ($L_{g}$) denotes Lie derivatives along
$f$ ($g$) $m$ (one) times, and $S(\cdot)$ denotes the remaining Lie derivatives
along $f$ with degree less than or equal to $m-1$ (see \cite{Xiao2019} for more details).
\end{definition}
The HOCBF is a general form of the relative degree $1$ CBF \cite{Ames2017},
\cite{Glotfelter2017}, \cite{Lindemann2018} (setting $m=1$ reduces the HOCBF to
the common CBF form in \cite{Ames2017}, \cite{Glotfelter2017}, \cite{Lindemann2018}), and is also a general form of the exponential CBF
\cite{Nguyen2016}.
\begin{theorem}
\label{thm:hocbf} (\cite{Xiao2019}) Given a HOCBF $b(\bm x)$ from Def.
\ref{def:hocbf} with the associated sets $C_{1}, \dots, C_{m}$ defined
by (\ref{eqn:sets}), if $\bm x(0) \in C_{1}\cap\dots\cap C_{m}$,
then any Lipschitz continuous controller $\bm u(t)$ that satisfies
(\ref{eqn:constraint}) $\forall t\geq0$ renders $C_{1}\cap\dots\cap C_{m}$ forward invariant for system (\ref{eqn:affine}).
\end{theorem}
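To make the construction (\ref{eqn:functions})--(\ref{eqn:constraint}) concrete, the following toy sketch (an illustrative assumption, not the paper's implementation: a double integrator $\dot p = v$, $\dot v = u$ with safety constraint $b(\bm x) = p_{lim} - p \geq 0$ of relative degree $2$, and linear class $\mathcal{K}$ functions $\alpha_i(s) = s$) writes out $\psi_0$, $\psi_1$ and the resulting HOCBF condition, which here reduces to $u \leq b(\bm x) - 2v$:

```python
# Double integrator x = (p, v): dp/dt = v, dv/dt = u. The safety constraint
# b(x) = P_LIM - p >= 0 has relative degree 2, so psi_0, psi_1 are built as
# in the psi-sequence above, with the simple linear choice alpha_i(s) = s.
P_LIM = 10.0

def psi0(x):                 # psi_0 = b(x) = P_LIM - p
    return P_LIM - x[0]

def psi1(x):                 # psi_1 = d(psi_0)/dt + alpha_1(psi_0) = -v + b(x)
    return -x[1] + psi0(x)

def hocbf_lhs(x, u):
    # Left-hand side of the HOCBF condition for this system:
    # L_f^2 b + L_g L_f b * u + S(b) + alpha_2(psi_1) = 0 - u - v + psi_1(x).
    return -u - x[1] + psi1(x)

def max_safe_input(x):
    # hocbf_lhs(x, u) >= 0 rearranges to u <= b(x) - 2 v.
    return psi0(x) - 2.0 * x[1]
```

Any controller kept at or below \texttt{max\_safe\_input} renders $C_1\cap C_2$ forward invariant by Theorem \ref{thm:hocbf}.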
\begin{definition}\label{def:clf} \textit{(Control Lyapunov Function (CLF) \cite{Aaron2012})}
A continuously differentiable function $V :\mathbb{R}^{n}\rightarrow\mathbb{R}_{\ge0}$ is an exponentially stabilizing control Lyapunov function (CLF) if there exist positive constants $c_1, c_2, c_3$ such that, $\forall
\bm x\in X$, $c_{1}||\bm x||^{2} \leq V(\bm x)
\leq c_{2} ||\bm x||^{2}$ and the following holds:
\begin{equation}\label{eqn:CLF}
\inf_{\bm u\in U} \lbrack L_{f}V(\bm x)+L_{g}V(\bm x)
\bm u + c_{3}V(\bm x)\rbrack\leq0.
\end{equation}
\end{definition}
\begin{theorem} [\cite{Aaron2012}] \label{thm:clf}
Given a CLF as in Def. \ref{def:clf}, any Lipschitz continuous controller $ \bm u(t),\forall t\geq 0$ that satisfies (\ref{eqn:CLF})
exponentially stabilizes system (\ref{eqn:affine}) to the origin.
\end{theorem}
Recent works \cite{Ames2017},\cite{Lindemann2018},\cite{Nguyen2016} combined
CBFs and CLFs with quadratic costs to formulate an optimization problem that stabilizes a system using CLFs subject to safety constraints given by CBFs. In this work, we follow a similar approach. Time is discretized, and the CBF and CLF constraints are considered at each discrete time step. Note that these constraints are linear in the control since the state value is fixed at the beginning of the discretization interval. Therefore, in every interval, the optimization problem is a QP. The optimal control obtained by solving each QP is applied at the current time step and held constant for the whole interval. The next state is found by integrating the dynamics (\ref{eqn:affine}). The usefulness of this approach is conditioned upon the feasibility of the QP at every time step. In the case of constraints with high relative degrees, which are common in autonomous driving, the CBFs can be replaced by HOCBFs.
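One such discrete-time step can be sketched as follows. This is a minimal illustration only, under stated assumptions: single-input speed dynamics $\dot v = u$, a hypothetical CLF $V = (v - v_{des})^2$ with relaxation $\delta$, a relative-degree-one CBF $b = v_{max} - v$ with $\alpha(b)=b$, and SciPy's SLSQP as a stand-in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

def clf_cbf_qp_step(v, v_des=8.0, v_max=10.0, eps=1.0, q=10.0,
                    u_min=-3.0, u_max=3.0):
    """One discrete-time step of a CLF-CBF program for speed dynamics v' = u.

    Decision variables z = (u, delta): control input and CLF relaxation.
    CLF row:  2 (v - v_des) u + eps * V <= delta,  with V = (v - v_des)^2.
    CBF row:  -u + (v_max - v) >= 0  (relative degree 1, alpha(b) = b).
    Cost: u^2 + q * delta^2. With v fixed, both rows are linear in z, so
    this is a QP; SLSQP merely keeps the sketch dependency-light.
    """
    V = (v - v_des) ** 2
    cons = [
        {"type": "ineq",
         "fun": lambda z: z[1] - 2.0 * (v - v_des) * z[0] - eps * V},
        {"type": "ineq", "fun": lambda z: (v_max - v) - z[0]},
    ]
    res = minimize(lambda z: z[0] ** 2 + q * z[1] ** 2, x0=np.zeros(2),
                   bounds=[(u_min, u_max), (None, None)], constraints=cons)
    return res.x[0]  # control applied (and held) over the current interval
```

At speeds below $v_{des}$ the program accelerates; at $v = v_{max}$ the CBF row forces $u \leq 0$ regardless of the CLF, illustrating how the safety constraint dominates the (relaxed) stabilization objective.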
\subsection{Rulebooks}
\label{sub-sec:rulebooks}
As defined in \cite{Censi2020},
a {\em rule} specifies a desired behavior for autonomous vehicles. Rules can be derived from traffic laws, local culture, or consumer expectation, e.g., ``stay in lane for all times", ``maintain clearance from pedestrians for all times", ``obey the maximum speed limit for all times", ``reach the goal". A \textit{rulebook} as introduced in \cite{Censi2020} defines a priority on rules by imposing a pre-order that can be used to rank AV trajectories:
\begin{definition} \label{def:rb}(Rulebook \cite{Censi2020})
A rulebook is a tuple $\langle R,\leq\rangle$, where $R$ is a finite set of rules and $\leq$ is a pre-order on $R$.
\end{definition}
A rulebook can be represented by a directed graph, where each node is a rule and an edge from one rule to another means that the second rule has a higher priority than the first. Formally, $r_1\rightarrow r_2$ in the graph means that $r_1\leq r_2$ ($r_2 \in R$ has a higher priority than $r_1\in R$). Note that, under a pre-order, two rules can be in one of three relations: comparable (one has a higher priority than the other), incomparable, or equivalent ($r_1\leq r_2$ and $r_2\leq r_1$).
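A possible encoding of such a graph, together with a classifier for the three pairwise relations, is sketched below (hypothetical code; the edge list mirrors the rulebook of Example \ref{exm:pre-order}, with an edge $(r_1, r_2)$ standing for $r_1 \leq r_2$):

```python
# Hypothetical encoding of a rulebook graph: an edge (r1, r2) means r1 <= r2,
# i.e. r2 has higher priority than r1. Mirrors Example 1: r1, r2 on top,
# r3 ~ r4 equivalent below them, r5 incomparable to r3/r4, r6 lowest.
edges = [("r3", "r1"), ("r3", "r2"), ("r4", "r1"), ("r4", "r2"),
         ("r3", "r4"), ("r4", "r3"), ("r6", "r3"), ("r6", "r5")]

def reachable(edges, src, dst):
    """True if dst is reachable from src along <=-edges (reflexively)."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def relation(edges, r1, r2):
    """Classify a pair of rules under the pre-order."""
    le, ge = reachable(edges, r1, r2), reachable(edges, r2, r1)
    if le and ge:
        return "equivalent"
    if le or ge:
        return "comparable"
    return "incomparable"
```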
\vspace{-4pt}
\begin{example}\label{exm:pre-order}
Consider the rulebook shown in Fig. \ref{fig:rb}, which consists of 6 rules. In this example, $r_1$ and $r_2$ are incomparable, and both have a higher priority than $r_3$ and $r_4$. Rules $r_3$ and $r_4$ are equivalent ($r_3\leq r_4$ and $r_4\leq r_3$), but are incomparable to $r_5$. Rule $r_6$ has the lowest priority among all rules.
\vspace{-4pt}
\begin{figure}[ptbh]
\centering
\includegraphics[scale=0.3]{rulebook0}
\vspace{-4pt} \caption{Graphical representation of a rulebook $\langle R,\leq\rangle$.
}%
\label{fig:rb}%
\end{figure}
\end{example}
\vspace{-4pt}
Rules are evaluated over vehicle trajectories (i.e., trajectories of system (\ref{eqn:affine})). A {\em violation metric} is a function specific to a rule that takes as input a trajectory and outputs a \emph{violation score} that captures the degree of violation of the rule by the trajectory \cite{Censi2020}. For example, if, along a trajectory, the AV crosses the lane divider and enters the left lane by a maximum distance of 1m, then the violation score of that trajectory against the ``stay in lane for all times" rule can be 1m.
\section{Problem Formulation}
\label{sec:prob}
For a vehicle with dynamics given by (\ref{eqn:affine}) and starting at a given state ${\bm x}(0)=\bm x_0$, consider an optimal control problem in the form:
\vspace{-3pt}
\begin{equation}\label{eqn:gcost}
\min_{\bm u(t)} \int_{0}^{T}J(||\bm u(t)||)dt,
\end{equation}
where $||\cdot||$ denotes the 2-norm of a vector, $T > 0$ denotes a bounded final time, and $J$ is a strictly increasing function of its argument (e.g., an energy consumption function $J(||\bm u(t)||) = ||\bm u(t)||^2$). We consider the following additional requirements:
\textbf{Trajectory tracking}: We require the vehicle to stay as close as possible to a desired {\em reference trajectory} $\mathcal{X}_r$ (e.g., middle of its current lane).
\textbf{State constraints}: We impose a set of constraints (componentwise) on the state of system (\ref{eqn:affine}) in the following form:
\begin{equation}\label{eqn:state}
\bm x_{min} \leq \bm x(t)\leq \bm x_{max}, \forall t\in[0,T],
\end{equation}
where $\bm x_{max}: = (x_{max,1},x_{max,2},\dots,x_{max,n})\in \mathbb{R}^n$ and $\bm x_{min}: = (x_{min,1},x_{min,2},\dots,x_{min,n})\in \mathbb{R}^n$ denote the maximum and minimum state vectors, respectively. Examples of such constraints for a vehicle include maximum acceleration, maximum braking, and maximum steering rate.
\textbf{Priority structure:} We require the system trajectory $\mathcal{X}$ of (\ref{eqn:affine}) starting at $\bm x(0)=\bm x_0$ to satisfy a priority structure $\langle R,\sim_p,\leq_p\rangle$, i.e.:
\begin{equation}\label{eqn:rulebook-sat}
\mathcal{X}\models \langle R,\sim_p,\leq_p\rangle,
\end{equation}
where $\sim_p$ is an equivalence relation over a finite set of rules $R$ and $\leq_p$ is a total order over the equivalence classes.
Our priority structure is related to the rulebook from Sec. \ref{sub-sec:rulebooks}, but it requires that any two rules from $R$ are either comparable or equivalent (see Sec. \ref{sec:prio-struc} for a formal definition). Informally, this means that $\mathcal{X}$ is the ``best" trajectory that (\ref{eqn:affine}) can produce, considering the violation metrics of the rules in $R$ and the priorities captured by $\sim_p$ and $\leq_p$.
A formal definition for a priority structure and its satisfaction will be given in Sec. \ref{sec:prio-struc}.
\textbf{Control bounds}: We impose control bounds as given in (\ref{eqn:control}). Examples include jerk and steering acceleration.
Formally, we can define the optimal control problem as follows:
\vspace{1mm}
\begin{problem}\label{prob:main}
Find a control policy for system (\ref{eqn:affine}) such that the objective function in (\ref{eqn:gcost}) is minimized, and the trajectory tracking, state constraints (\ref{eqn:state}), priority structure $\langle R,\sim_p,\leq_p\rangle$, and control bounds (\ref{eqn:control}) are satisfied by the generated trajectory given $\bm x(0)$.
\end{problem}
Our approach to Problem \ref{prob:main} can be summarized as follows: We use CLFs for tracking the reference trajectory $\mathcal{X}_r$ and HOCBFs to implement the state constraints (\ref{eqn:state}). For each rule in $R$, we define violation metrics. We show that satisfaction of the rules can be written as forward invariance for sets described by differentiable functions, and enforce them using HOCBFs. The control bounds (\ref{eqn:control}) are considered as constraints. We provide an iterative solution to Problem \ref{prob:main}, where each iteration involves solving a sequence of QPs. In the first iteration, all the rules from $R$ are considered. If the corresponding QPs are feasible, then an optimal control is found. Otherwise, we iteratively relax the satisfaction of rules from subsets of $R$ based on their priorities, and minimize the corresponding relaxations by including them in the cost function.
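The iterative part of this solution can be sketched as a skeleton loop. The fragment below is a deliberate simplification (it relaxes one equivalence class per iteration, lowest priority first, and uses a hypothetical \texttt{solve\_qp} callback in place of the actual CLF/HOCBF quadratic programs):

```python
def min_violation_control(classes_low_to_high, solve_qp):
    """Simplified sketch of the iterative rule relaxation.

    `classes_low_to_high` lists equivalence classes of rules from lowest
    to highest priority; `solve_qp` stands in for the CLF/HOCBF quadratic
    program: it receives the rules enforced as hard constraints and the
    rules whose relaxations enter the cost, and returns a control, or
    None if the program is infeasible.
    """
    hard = list(classes_low_to_high)
    soft = []
    while True:
        u = solve_qp(hard=hard, soft=soft)
        if u is not None:
            return u, soft
        if not hard:
            raise RuntimeError("infeasible even with every rule relaxed")
        soft.append(hard.pop(0))  # relax the lowest-priority class next
```

In the actual procedure the relaxations of the soft rules also enter the QP cost, so that their violation is minimized rather than merely permitted.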
\section{Rules and Priority Structures}
\label{sec:trb}
In this section, we extend the rulebooks from \cite{Censi2020} by formalizing the rules and defining violation metrics. We introduce a {\em priority structure}, in which all rules are comparable, which is particularly suited for the hierarchical control framework proposed in Sec. \ref{sec:optim-rule-approx}.
\subsection{Rules}
In the definition below, an {\em instance} $i\in S_p$ is a traffic participant or artifact that is involved in a rule, where $S_p$ is the set of all instances involved in the rule. For example, in a rule to maintain clearance from pedestrians, a pedestrian is an instance, and there can be many instances encountered by ego in a given scenario. Instances can also be traffic artifacts like the road boundary (of which there is only one), lane boundaries, or stop lines.
\begin{definition} (Rule)\label{def:rule}
A rule is composed of a statement and three violation metrics.
A statement is a formula that is required to be satisfied for all times. A formula is inductively defined as:
\begin{equation} \label{eqn:task}
\varphi := \mu\vert \neg \varphi \vert \varphi_1\wedge \varphi_2
\end{equation}
where $\varphi,\varphi_1,\varphi_2$ are formulas, $\mu :=(h(\bm x)\geq 0)$ is a predicate on the state vector $\bm x$ of system (\ref{eqn:affine}) with $h:\mathbb{R}^n\rightarrow \mathbb{R}$. $\wedge, \neg$ are Boolean operators for conjunction and negation, respectively. The three violation metrics for a rule $r$ are defined as:
\begin{enumerate}
\item instantaneous violation metric $\varrho_{r,i}(\bm x(t)) \in [0,1],$
\item instance violation metric $\rho_{r,i}(\mathcal{X})\in [0,1]$, and
\item total violation metric $P_{r}(\mathcal{X})\in [0,1]$,
\end{enumerate}
where $i$ is an instance, $\bm{x}(t)$ is a trajectory at time $t$ and $\mathcal{X}$ is a whole trajectory of ego.
The instantaneous violation metric $\varrho_{r,i}(\bm x(t))$
quantifies violation by a trajectory at a specific time $t$ with respect to a given instance $i$. The instance violation metric $\rho_{r,i}(\mathcal{X})$ captures violation with respect to a given instance $i$ over the whole time of a trajectory, and is obtained by aggregating $\varrho_{r,i}(\bm x(t))$ over the entire time of a trajectory $\mathcal{X}$.
The total violation metric $P_{r}$ is the aggregation of the instance violation metric $\rho_{r,i}(\mathcal{X})$ over all instances $i\in S_p$.
\end{definition}
The aggregations in the above definitions can be implemented through selection of a maximum or a minimum, integration over time, summation over instances, or by using general $L_p$ norms. A zero value for a violation score indicates satisfaction of the rule; a strictly positive value denotes violation: the larger the score, the more ego violates the rule.
Throughout the paper, for simplicity, we use $\varrho_{r}$ and $\rho_{r}$ instead of $\varrho_{r,i}$ and $\rho_{r,i}$ if there is only one instance. Examples of rules (statements and violations metrics and scores) are given in Sec. \ref{sec:case} and in the Appendix.
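As a toy instance of the three metrics in Def. \ref{def:rule}, consider a hypothetical pedestrian-clearance rule requiring $3$\,m of clearance from two pedestrians at time-varying distances $(t+1)$\,m and $(t+0.5)$\,m over $t \in [0,2]$\,s, with a normalized clearance shortfall as the instantaneous score, a time average as the instance aggregation, and a maximum over instances as the total aggregation (all of these choices are illustrative, not the ones used in the case study):

```python
import numpy as np

def instantaneous_violation(required, distance):
    """varrho_{r,i}(x(t)): normalized clearance shortfall, clipped to [0, 1]."""
    return float(np.clip((required - distance) / required, 0.0, 1.0))

def instance_violation(times, distances, required=3.0):
    """rho_{r,i}(X): time-average aggregation of the instantaneous scores."""
    scores = [instantaneous_violation(required, d) for d in distances]
    return float(np.mean(scores))  # uniform-grid average over the trajectory

def total_violation(per_instance_scores):
    """P_r(X): max aggregation over all instances i in S_p."""
    return max(per_instance_scores)

t = np.linspace(0.0, 2.0, 201)
rho_1 = instance_violation(t, t + 1.0)   # = 1/3
rho_2 = instance_violation(t, t + 0.5)   # = 1/2
P = total_violation([rho_1, rho_2])      # = 1/2
```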
We divide the set of rules into two categories: (1) {\em clearance rules}, safety-relevant rules
enforcing that ego maintains a minimal distance to other traffic participants and to the side of the road or lane; and (2) {\em non-clearance rules}, rules that are not contained in the first category, such as speed limit rules. In Sec. \ref{sec:rule-approx}, we provide a general methodology to express clearance rules as inequalities involving differentiable functions, which will allow us to enforce their satisfaction using HOCBFs.
\begin{remark}
The violation metrics from Def. \ref{def:rule} are inspired from Signal Temporal Logic (STL) robustness \cite{Maler2004,donze,mehdipour2019agm}, which quantifies how a signal (trajectory) satisfies a temporal logic formula. In this paper, we focus on rules that we aim to satisfy for all times. Therefore, the rules in (\ref{eqn:task}) can be seen as (particular) STL formulas, which all start with an ``always" temporal operator (omitted here).
\end{remark}
\begin{comment}
\begin{example}
Consider a clearance rule $r$ with the statement ``ego must maintain at least $3m$ clearance from all pedestrians for all times". Assume at time $t \in [0,2]s$, ego has time-varying distances $(vt + 1)\; m$ and $(vt + 0.5)\; m$ from two pedestrians where $v = 1m/s$. A simple (normalized) instantaneous violation score can be defined based on the difference of the required clearance and the distance between ego and the pedestrians at time $t$, given by $\varrho_{r,1}(\bm x(t)) = \frac{3-(t+1)}{3}$, $ \varrho_{r,2}(\bm x(t)) = \frac{3- (t + 0.5)}{3}, t\in [0,2]s$. The instance violation scores can be defined by an average aggregation over the trajectory as $\rho_{r,1}(\mathcal{X}) = \frac{1}{2-0}{\int_{0}^2 \frac{2-t}{3}\; dt} = \frac{1}{3}$, $\rho_{r,2}(\mathcal{X}) = \frac{1}{2-0}{\int_{0}^2 \frac{2.5-t}{3}\; dt} = \frac{1}{2}$. The total violation score $P_r(\mathcal{X})$ can be defined by a maximum aggregation as $P_r=\max{\big(\rho_{r,1}( \mathcal{X}), \rho_{r,2}(\mathcal{X})\big)}=\frac{1}{2}$. Later in the paper, we use a more advanced aggregation for defining the violation scores based on the available empirical studies and traffic data on clearance requirements.
\end{example}
\end{comment}
\subsection{Priority Structure}
\label{sec:prio-struc}
The pre-order rulebook in Def. \ref{def:rb} defines a ``base" pre-order that captures relative priorities of some (comparable) rules, which are often similar in different states and countries.
A pre-order rulebook can be made more precise for a specific legislation by adding rules and/or priority relations through priority refinement, rule aggregation, and augmentation \cite{Censi2020}. This can be done through empirical studies or by learning from local data to construct a total-order rulebook. To rank trajectories, the authors of \cite{Censi2020} enumerated all the total orders compatible with a given pre-order. In this paper, motivated by the hierarchical control framework
described in Sec. \ref{sec:optim-rule-approx}, we require that any two rules are in a relationship, in the sense that they are either equivalent or comparable with respect to their priorities.
\begin{comment}
The total order rulebook considered in \cite{Censi2020} does not allow equivalence relationships between rules. This is conservative as some rules may have the same importance, such as maintaining clearance with parked vehicles and maintaining clearance with moving vehicles.
In this paper, we allow for equivalence relationships between rules and introduce a priority structure that is also learned from data, and is defined as follows. \Noushin{I don't think we should call total order conservative. Also, we didnt talk about why the priority structure is needed, and how it improves the optimal control solution:\\
from the notes by Calin: 2 motivations for priority structure: natural way to represent the rulebook, imposes additional constraints rather than permutation of all the rules, and second it has the right structure to help the optimal control}
\end{comment}
\begin{definition} [Priority Structure]
A priority structure is a tuple $\langle R,\sim_p,\leq_p\rangle$, where $R$ is a finite set of rules, $\sim_p$ is an equivalence relation over $R$, and $\leq_p$ is a total order over the set of equivalence classes determined by $\sim_p$.
\end{definition}
Equivalent rules (i.e., rules in the same class) have the same priority. Given two equivalence classes $\mathcal{O}_1$ and $\mathcal{O}_2$ with $\mathcal{O}_1\leq_p \mathcal{O}_2$, every rule $r_1\in \mathcal{O}_1$ has lower priority than every rule $r_2\in \mathcal{O}_2$. Since $\leq_p$ is a total order, any two rules $r_1,r_2\in R$ are comparable, in the sense that exactly one of the following three statements is true: (1) $r_1$ and $r_2$ have the same priority, (2) $r_1$ has higher priority than $r_2$, and (3) $r_2$ has higher priority than $r_1$.
\begin{comment}
Fig. \ref{fig:rulebook1} is an example of a priority structure for a city ``A'' that is learned from a pre-order rulebook and the data from the city ``A''. As a priority structure imposes extra relationships (constraints) between rules based a pre-order rulebook, we can claim that the satisfaction of a priority structure implies the satisfaction of a rulebook. In the rest of the paper, we only consider a priority structure of a rulebook.
\end{comment}
Given a priority structure $\langle R,\sim_p,\leq_p\rangle$, we can assign numerical (integer) priorities to the rules. We assign priority 1 to the equivalence class with the lowest priority, priority 2 to the next one and so on.
The rules inside an equivalence class inherit the priority from their equivalence class. Given a priority structure $\langle R,\sim_p,\leq_p\rangle$ and violation scores for the rules in $R$, we can compare trajectories:
\begin{definition}[Trajectory Comparison] \label{def:traj_cmp}
A trajectory $\mathcal{X}_1$ is said to be {\bf better} (less violating) than another trajectory $\mathcal{X}_2$
if the highest priority rule(s) violated by $\mathcal{X}_1$ has a lower priority than the highest priority rule(s) violated by $\mathcal{X}_2$. If the highest priority rules violated by the two trajectories are equivalent, then the trajectory with the smaller (maximum) total violation score is better. In this case, if the trajectories have equal violation scores, then they are equivalent.
\end{definition}
\vspace{-2pt}
It is easy to see that, by following Def. \ref{def:traj_cmp}, given two trajectories, one can be better than the other, or they can be equivalent (i.e., two trajectories cannot be incomparable).
\begin{example}\label{ex:three-traj}
Consider the driving scenario from Fig. \ref{fig:autonomous1} and a priority structure $\langle R,\sim_p,\leq_p\rangle$ in Fig. \ref{fig:rulebook1}, where $R = \{r_1, r_2, r_3, r_4\}$, and $r_1$: ``No collision'', $r_2$: ``Lane keeping'', $r_3$: ``Speed limit'' and $r_4$: ``Comfort''. There are 3 equivalence classes given by $\mathcal{O}_1=\{r_4\}$, $\mathcal{O}_2=\{r_2,r_3\}$ and $\mathcal{O}_3=\{r_1\}$. Rule $r_4$ has priority 1, $r_2$ and $r_3$ have priority 2, and $r_1$ has priority 3. Assume the instance violation scores (equal to the total scores, since each rule has a single instance) of rules $r\in\{1,2,3,4\}$ for trajectories $a,b,c$ are given by $\rho_r=(\rho_r(a),\rho_r(b),\rho_r(c))$ as shown in Fig. \ref{fig:rulebook1}.
Based on Def. \ref{def:traj_cmp},
trajectory $c$ is better (less violating) than trajectory $a$ since the highest priority rule violated by $c$ ($r_2$) has a lower priority than the highest priority rule violated by $a$ ($r_1$). The same argument holds for trajectories $a$ and $b$, i.e., $b$ is better than $a$. The highest priority rules violated by trajectories $b$ and $c$ have the same priorities. Since the maximum violation score of the highest priority rules violated by $b$ is smaller than that for $c$, i.e., $\max(\rho_2(b),\rho_3(b))=0.35$, $\max(\rho_2(c),\rho_3(c))=0.4$, trajectory $b$ is better than $c$.
\end{example}
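The comparison of Def. \ref{def:traj_cmp} amounts to a lexicographic comparison on the pair (highest violated priority, worst score within that priority class). A sketch, with hypothetical per-rule scores chosen to be consistent with the example above ($a$ violates $r_1$; $b$ and $c$ violate only priority-2 rules, with worst scores $0.35$ and $0.4$):

```python
# Sketch of the trajectory comparison: `priority` maps rules to integer
# priorities (higher = more important); a score dict maps rules to total
# violation scores for one trajectory. All score values are hypothetical.
def violation_key(priority, scores):
    violated = [r for r, s in scores.items() if s > 0]
    if not violated:
        return (0, 0.0)                       # satisfies every rule
    top = max(priority[r] for r in violated)  # highest violated priority
    worst = max(scores[r] for r in violated if priority[r] == top)
    return (top, worst)

def better(priority, s1, s2):
    """True iff trajectory 1 is strictly less violating than trajectory 2."""
    return violation_key(priority, s1) < violation_key(priority, s2)

priority = {"r1": 3, "r2": 2, "r3": 2, "r4": 1}
a = {"r1": 0.2, "r2": 0.0, "r3": 0.0, "r4": 0.1}
b = {"r1": 0.0, "r2": 0.35, "r3": 0.1, "r4": 0.2}
c = {"r1": 0.0, "r2": 0.4, "r3": 0.2, "r4": 0.0}
```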
\begin{definition} (Priority structure satisfaction) \label{def:rb_satisfy}
A trajectory $\mathcal{X}$ of system (\ref{eqn:affine}) starting at $\bm x(0)$
satisfies a priority structure
$\langle R,\sim_p,\leq_p\rangle$ (i.e., $\mathcal{X}\models \langle R,\sim_p,\leq_p\rangle$), if there are no better trajectories of (\ref{eqn:affine}) starting at $\bm x(0)$.
\end{definition}
Def. \ref{def:rb_satisfy} is central to our solution to Problem \ref{prob:main} (see Sec. \ref{sec:optim-rule-approx}), which is based on an iterative relaxation of the rules according to their priorities in the priority structure.
\begin{figure}[htb]
\centering
\subfigure[Possible trajectories]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[scale=0.28]{drive_case}
\vspace{-1.8mm}
\label{fig:autonomous1}%
\end{minipage}%
}
\subfigure[Priority structure with instance violation scores (the colors of the scores correspond to the colors of the trajectories; the rectangles show the equivalence classes).]{
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[scale=0.23]{rulebook}
\label{fig:rulebook1}%
\end{minipage}%
}
\centering
\caption{An autonomous driving scenario with three possible trajectories, 4 rules, and 3 equivalence classes}
\end{figure}
\section{RULE-BASED OPTIMAL CONTROL}
\label{sec:oc}
In this section, we present our approach to solve Problem \ref{prob:main}.
\subsection{Trajectory Tracking}\label{sec:tracking}
As discussed in Sec. \ref{sec:vd}, Eqn. (\ref{eqn:affine}) can define ``traditional" vehicle dynamics with respect to an inertial reference frame
\cite{Ames2017}, or dynamics defined along a given reference trajectory \cite{Rucco2015} (see (\ref{eqn:vehicle})). The case study considered in this paper falls in the second category (the middle of ego's current lane is the default reference trajectory). We use the model from \cite{Rucco2015}, in which part of the state of (\ref{eqn:affine}) captures the tracking errors with respect to the reference trajectory.
The tracking problem then becomes stabilizing the error states to 0. Suppose the error state vector is $\bm y\in{R}^{n_0}, n_0 \leq n$ (the components in $\bm y$ are part of the components in $\bm x$). We define a CLF $V(\bm x) = ||\bm y||^2$ ($c_3 = \epsilon > 0$ in Def. \ref{def:clf}). Any control $\bm u$ that satisfies the relaxed CLF constraint \cite{Ames2017} given by:
\begin{equation} \label{eqn:clf1}
L_{f}V(\bm x)+L_{g}V(\bm x)
\bm u + \epsilon V(\bm x)\leq \delta_e,
\end{equation}
exponentially stabilizes the error states to 0 if $\delta_e(t) = 0, \forall t\in[0,T]$, where $\delta_e\geq 0$ is a relaxation variable that trades off stabilization against feasibility. Note that the CLF constraint (\ref{eqn:clf1}) only works for $V(\bm x)$ with relative degree one. If the relative degree is larger than $1$, we can use input-to-state linearization and state feedback control \cite{Khalil2002} to reduce the relative degree to one \cite{Xiao2020}.
\subsection{Clearance and Optimal Disk Coverage}
\label{sec:rule-approx}
Satisfaction of a priority structure can be enforced by formulating real-time constraints on ego state $\bm x(t)$ that appear in the violation metrics. Satisfaction of the non-clearance rules can be easily implemented using HOCBFs (See Sec. \ref{sec:optim-rule-approx}, Sec. \ref{sec:app-rule-def}). For clearance rules, we define a notion of clearance region around ego and around the traffic participants in $S_p$ that are involved in the rule (e.g., pedestrians and other vehicles).
The clearance region for ego is defined as a rectangle with tunable speed-dependent lengths (i.e., we may choose to have a larger clearance from pedestrians when ego is driving with higher speed
) and defined based on ego footprint and functions $h_f(\bm x), h_b(\bm x), h_l(\bm x), h_r(\bm x)$ that determine the front, back, left, and right clearances as illustrated in Fig. \ref{fig:approx}, where $h_f,h_b,h_l,h_r:\mathbb{R}^n\rightarrow \mathbb{R}_{\geq0}$. The clearance regions for participants (instances) are defined such that they comply with their geometry and cover their footprints, e.g., (fixed-length) rectangles for other vehicles and (fixed-radius) disks for pedestrians, as shown in Fig. \ref{fig:approx}.
To satisfy a clearance rule involving traffic participants, we need to avoid any overlaps between the clearance regions of ego and traffic participants.
We define a function $d_{min}(\bm x, \bm x_i): \mathbb{R}^{n+n_i} \rightarrow \mathbb{R}$ to determine the signed distance between the clearance regions of ego and participant $i\in S_p$ ($\bm x_i\in\mathbb{R}^{n_i}$ denotes the state of participant $i$), which is negative if the clearance regions overlap. Therefore, satisfaction of a clearance rule can be imposed by constraining $d_{min}(\bm x, \bm x_i)$ to be non-negative. For the clearance rules ``stay in lane" and ``stay in drivable area", we require that ego's clearance region be within the lane and the drivable area, respectively.
However, finding $d_{min}(\bm x, \bm x_i)$ can be computationally expensive. For example, the distance between two rectangles could be from corner to corner, corner to edge, or edge to edge. Since each rectangle has $4$ corners and $4$ edges, there are 64 possible cases. More importantly, this computation leads to a non-smooth $d_{min}(\bm x, \bm x_i)$ function, which cannot be used to enforce clearance using a CBF approach. To address these issues, we propose an optimal coverage of the rectangles with disks, which allows us to map the satisfaction of the clearance rules to a set of smooth HOCBF constraints (i.e., there will be one constraint for each pair of centers of disks pertaining to different traffic participants).
We use $l > 0$ and $w > 0$ to denote the length and width of ego's footprint, respectively. Assume we use $z \in\mathbb{N}$ disks with centers located on the center line of the clearance region to cover it (see Fig. \ref{fig:proof}). Since all the disks have the same radius, the minimum radius to fully cover ego's clearance region, denoted by $ \mathfrak{r}>0$, is given by:
\begin{equation}\label{eqn:radius}
\mathfrak{r} = \sqrt{\left(\frac{w + h_l(\bm x) + h_r(\bm x)}{2} \right)^2 + \left(\frac{l+h_f(\bm x) + h_b(\bm x)}{2z}\right)^2}.
\end{equation}
The minimum radius $\mathfrak{r}_i$ for the rectangular clearance region of a traffic participant $i \in S_p$ covered by $z_i$ disks is defined similarly, using the length and width of its footprint and setting $h_l,h_r,h_b,h_f=0$.
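As a numerical sanity check of the radius formula (\ref{eqn:radius}), the sketch below computes $\mathfrak{r}$ and verifies by sampling that $z$ equally spaced disks of that radius indeed cover the whole clearance rectangle; the vehicle dimensions are illustrative, not values from the paper.

```python
import math

def cover_radius(l, w, hf, hb, hl, hr, z):
    """Minimum common radius of z equally spaced disks (centers on the
    centerline) that fully cover ego's clearance rectangle, Eq. (radius):
    the rectangle of length l+hf+hb is split into z sub-rectangles, and
    each disk must reach the corners of its own sub-rectangle."""
    L = l + hf + hb
    W = w + hl + hr
    return math.sqrt((W / 2.0) ** 2 + (L / (2.0 * z)) ** 2)

def is_covered(l, w, hf, hb, hl, hr, z, samples=50):
    """Check by grid sampling that every point of the clearance rectangle
    lies within the computed radius of at least one disk center."""
    L, W = l + hf + hb, w + hl + hr
    r = cover_radius(l, w, hf, hb, hl, hr, z)
    # Disk centers evenly spaced on the centerline (y = 0).
    centers = [(-L / 2 + (2 * j + 1) * L / (2 * z), 0.0) for j in range(z)]
    for i in range(samples + 1):
        for k in range(samples + 1):
            x = -L / 2 + i * L / samples
            y = -W / 2 + k * W / samples
            if all(math.hypot(x - cx, y - cy) > r + 1e-9 for cx, cy in centers):
                return False
    return True

# Illustrative dimensions: 4.5 m x 1.8 m footprint, speed-dependent
# clearances frozen at sample values, z = 3 disks.
print(is_covered(l=4.5, w=1.8, hf=1.0, hb=0.5, hl=0.3, hr=0.3, z=3))  # True
```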
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.30]{approx}
\caption{The clearance regions and their coverage with disks: the clearance region and the disks are speed dependent for ego and fixed for the other vehicle and the pedestrian. We consider the distances between all the possible pairs of disks from ego and other traffic participants (vehicles, pedestrians, etc.).}
\label{fig:approx}%
\end{figure}
\begin{figure}[!bht]
\centering
\includegraphics[scale=0.30]{proof}
\vspace{-3pt}
\caption{The optimal disk coverage of a clearance region.}
\label{fig:proof}%
\end{figure}
Assume the center of the disk $j\in \{1,\dots,z\}$ for ego, and the center of the disk $k\in \{1,\dots,z_i\}$ for the instance $i \in S_p$ are given by $(x_{e,j}, y_{e,j}) \in \mathbb{R}^2$ and $(x_{i,k}, y_{i,k})\in \mathbb{R}^2$, respectively (See Appendix \ref{sec:app-coverage}). To avoid any overlap between the corresponding disks of ego and the instance $i\in S_p$, we impose the following constraints:
\begin{equation}\label{eqn:rule_cons}
\begin{aligned}
\sqrt{(x_{e,j} - x_{i,k})^2 + (y_{e,j} - y_{i,k})^2} \geq \mathfrak{r} + \mathfrak{r}_i ,\\ \forall j\in\{1,\dots,z\}, \forall k\in\{1,\dots,z_i\}.
\end{aligned}
\end{equation}
Since the disks fully cover the clearance regions, enforcing \eqref{eqn:rule_cons} also guarantees that $d_{min}(\bm x, \bm x_i)\ge0$. For the clearance rules ``stay in lane" and ``stay in drivable area", we can obtain constraints similar to (\ref{eqn:rule_cons}) that keep the disks covering ego's clearance region within these regions (e.g., we can set $h_l,h_r,h_b,h_f=0$ and formulate \eqref{eqn:rule_cons} such that the distance between ego's disk centers and the line in the middle of ego's current lane is at most $\frac{w}{2} - \mathfrak{r}$). Thus, we can formulate satisfaction of all the clearance rules as continuously differentiable constraints (\ref{eqn:rule_cons}), and implement them using HOCBFs.
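The pairwise constraints \eqref{eqn:rule_cons} amount to a simple all-pairs distance check over disk centers, sketched below with hypothetical center coordinates and radii (the disk layouts are illustrative, not taken from the case study).

```python
import math

def clearance_satisfied(ego_centers, r_e, other_centers, r_i):
    """Constraint (rule_cons): every pair of disks drawn from ego and from
    participant i must be at least r_e + r_i apart; if so, the underlying
    clearance rectangles cannot overlap."""
    return all(
        math.hypot(xe - xi, ye - yi) >= r_e + r_i
        for (xe, ye) in ego_centers
        for (xi, yi) in other_centers
    )

ego = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]   # z = 3 ego disks, r = 1.5
ped = [(4.0, 5.0)]                            # z_i = 1 pedestrian disk, r_i = 1.0
print(clearance_satisfied(ego, 1.5, ped, 1.0))            # 5.0 >= 2.5 -> True
print(clearance_satisfied(ego, 1.5, [(4.0, 2.0)], 1.0))   # 2.0 <  2.5 -> False
```

In the HOCBF formulation, each such pairwise distance minus $\mathfrak{r} + \mathfrak{r}_i$ becomes one smooth candidate function $b_j(\bm x)$.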
To efficiently formulate the proposed optimal disk coverage approach, we need to find the minimum number of disks that fully cover the clearance regions, as this number determines the number of constraints in \eqref{eqn:rule_cons}. Moreover, we need to minimize the lateral approximation error, since large errors imply overly conservative constraints (see Fig. \ref{fig:proof}). This can be formally defined as an optimization problem and solved offline to determine the numbers and radii of the disks in \eqref{eqn:rule_cons} (the details are provided in Appendix \ref{sec:app-coverage}).
\subsection{Optimal Control}
\label{sec:optim-rule-approx}
In this section, we present our complete framework to solve Problem \ref{prob:main}.
We propose a recursive algorithm to iteratively relax the satisfaction of the rules in the priority structure $\langle R,\sim_p,\leq_p\rangle$ (if needed) based on the total order over the equivalence classes.
Let $R_\mathcal{O}$ be the set of equivalence classes in $\langle R,\sim_p,\leq_p\rangle$, and $N_\mathcal{O}$ be the cardinality of $R_\mathcal{O}$.
We construct the power set of equivalence classes denoted by $S = 2^{R_\mathcal{O}}$, and incrementally (from low to high priority) sort the sets in $S$ based on the highest priority of the equivalence classes in each set according to the total order and denote the sorted set by $S_{sorted} = \{S_1, S_2, \dots, S_{2^{N_\mathcal{O}}}\}$, where $S_1 =\{ \emptyset\}$. We use this sorted set in our optimal control formulation to obtain satisfaction of the higher priority classes, even at the cost of relaxing satisfaction of the lower priority classes. Therefore, from Def. \ref{def:rb_satisfy}, the solution of the optimal control will satisfy the priority structure.
\begin{example}\label{exm:sorted}
Reconsider Exm. \ref{ex:three-traj}.
We define $R_\mathcal{O} = \{ \mathcal{O}_1,\mathcal{O}_2,\mathcal{O}_3\}$.
Based on the given total order $\mathcal{O}_1\leq_p \mathcal{O}_2 \leq_p \mathcal{O}_3$, we can write the sorted power set as $S_{sorted} = \{\{\emptyset\}, \{\mathcal{O}_1\},\{\mathcal{O}_2\},\{\mathcal{O}_1,\mathcal{O}_2\},\{\mathcal{O}_3\}, \{\mathcal{O}_1,\mathcal{O}_3\},\{\mathcal{O}_2,\mathcal{O}_3\}, \{\mathcal{O}_1,\mathcal{O}_2,\mathcal{O}_3\}\}$.
\end{example}
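The ordering in Exm. \ref{exm:sorted} can be generated programmatically. One tie-breaking rule consistent with the example (an assumption on our part, since the text only fixes the primary key) is to treat membership as a bitmask with the lowest priority class as the least significant bit, which makes the sort exactly binary counting:

```python
from itertools import chain, combinations

def sorted_power_set(classes):
    """Enumerate the power set of totally ordered equivalence classes,
    sorted so that sets whose highest-priority class is lower come first,
    with ties broken by the remaining classes. With `classes` listed from
    low to high priority, this is binary counting over membership, which
    reproduces the order in the example."""
    subsets = chain.from_iterable(
        combinations(classes, r) for r in range(len(classes) + 1))
    # Key: interpret membership as a bitmask, lowest-priority class = LSB.
    def key(s):
        return sum(1 << classes.index(c) for c in s)
    return sorted(subsets, key=key)

S = sorted_power_set(["O1", "O2", "O3"])
print(S[0], S[3], S[4])  # () ('O1', 'O2') ('O3',)
```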
In order to find a trajectory that satisfies a given priority structure, we first assume that all the rules are satisfied. Starting from $S_1=\{\emptyset\}$ in the sorted set $S_{sorted}$, we solve Problem \ref{prob:main} given that no rules are relaxed, i.e., all the rules must be satisfied. If the problem is infeasible, we move to the next set $S_2 \in S_{sorted}$, and relax all the rules of all the equivalence classes in $S_2$ while enforcing satisfaction of all the other rules in the equivalence class set denoted by $R_\mathcal{O} \setminus S_2$. This procedure is done recursively until we find a feasible solution of Problem \ref{prob:main}.
Formally, at $k = 1,2\dots, 2^{N_\mathcal{O}}$ for $S_k\in S_{sorted}$, we relax all the rules $i\in \mathcal{O}$ for all the equivalence classes $\mathcal{O} \in S_k$ and reformulate Problem \ref{prob:main} as the following optimal control problem:
\begin{equation}
\min_{\bm u,\delta_e, \delta_i, i\in S_k} \int_{0}^{T}\left[J(||\bm u||) + p_e\delta_e^2 +\sum_{i\in S_k}p_i \delta_i^2\right]dt \label{eqn:cost2}
\end{equation}
subject to:\\
\text{\qquad dynamics (\ref{eqn:affine}), control bounds (\ref{eqn:control}), CLF constraint (\ref{eqn:clf1}),}
\begin{align}
&\begin{aligned}L_{f}^{m_{j}}b_{j}(\bm x)+L_{g}L_{f}^{m_{j}-1}b_{j}(\bm x)\bm
u+S(b_{j}(\bm x))&\\+\alpha_{m_j}(\psi_{m_{j}-1}(\bm x))&\geq0, \forall j\in \mathcal{O}, \forall \mathcal{O}\in R_{\mathcal{O}}\setminus S_k,\end{aligned}\label{eqn:optim-relax-rules}
\\
&\begin{aligned}L_{f}^{m_{i}}b_{i}(\bm x)+L_{g}L_{f}^{m_{i}-1}b_{i}(\bm x)\bm
u+S(b_{i}(\bm x))&\\+\alpha_{m_i}(\psi_{m_{i}-1}(\bm x))&\geq \delta_i,\forall i\in \mathcal{O}, \forall \mathcal{O}\in S_k,\end{aligned}\label{eqn:optim-not-relax-rules}
\\
&\begin{aligned}
L_{f}^{m_{l}}b_{lim,l}(\bm x)+L_{g}L_{f}^{m_{l}-1}b_{lim,l}(\bm x)\bm
u&+S(b_{lim,l}(\bm x))\\+\alpha_{m_l}(\psi_{m_{l}-1}(\bm x))&\geq0,\forall l\in \{1,\dots, 2n\},
\end{aligned} \label{eqn:optim-state-cons}
\end{align}
where $p_e > 0$ and $p_i>0, i\in S_k$ assign the trade-off between the CLF relaxation $\delta_e$ (used for trajectory tracking) and the HOCBF relaxations $\delta_i$.
$m_i,m_j,m_l$ denote the relative degrees of $b_i(\bm x),b_j(\bm x),b_{lim,l}(\bm x)$, respectively. The functions $b_i(\bm x)$ and $b_j(\bm x)$ are HOCBFs for the rules in $\langle R,\sim_p,\leq_p\rangle$, and are implemented directly from the rule statement for non-clearance rules or by using the optimal disk coverage framework for clearance rules. At relaxation step $k$, the HOCBF constraints corresponding to the rules in $\mathcal{O}$, $\forall\mathcal{O}\in S_k$ are relaxed through the slack variables $\delta_i$ in \eqref{eqn:optim-not-relax-rules}, while the remaining rules in $R$ and the state constraints are enforced with regular (non-relaxed) HOCBF constraints \eqref{eqn:optim-relax-rules} and \eqref{eqn:optim-state-cons}, respectively. We assign the weights $p_i, i\in S_k$ according to the relative priorities of the rules, i.e., we choose a larger $p_i$ for a rule $i$ that belongs to a higher priority class.
The functions $b_{lim,l}(\bm x), l\in\{1,\dots,2n\}$ are HOCBFs for the state limitations (\ref{eqn:state}). The functions $\psi_{m_i}(\bm x), \psi_{m_j}(\bm x), \psi_{m_l}(\bm x)$ are defined as in (\ref{eqn:functions}). $\alpha_{m_i},\alpha_{m_j},\alpha_{m_l}$ can be penalized to improve the feasibility of the problem above \cite{Xiao2019,Xiao2020CDC}.
If the above optimization problem is feasible for all $t\in[0,T]$, we can specifically determine which rules (within an equivalence class) are relaxed based on the values of $\delta_i, i\in \mathcal{O}, \mathcal{O}\in S_k$ in the optimal solution (i.e., if $\delta_i(t) = 0, \forall t\in[0,T]$, then rule $i$ does not need to be relaxed). This procedure is summarized in Alg. \ref{alg:sort}.
\begin{remark}[Complexity]
The optimization problem (\ref{eqn:cost2}) is solved using the QPs introduced in Sec.~\ref{sec:pre}. The complexity of each QP is $O(y^3)$, where $y\in\mathbb{N}$ is the number of decision variables. It usually takes less than $0.01s$ to solve each QP in Matlab. The total time for each iteration $k\in\{1,\dots, 2^{N_{\mathcal{O}}}\}$ depends on the final time $T$ and the length of the reference trajectory $\mathcal{X}_r$. The computation time can be further improved by running the code in parallel over multiple processors.
\end{remark}
\subsection{Pass/Fail Evaluation}\label{sec:p/f}
As an extension to Problem \ref{prob:main}, we formulate and solve a pass / fail (P/F) procedure, in which we are given a vehicle trajectory, and the goal is to accept (pass, P) or reject (fail, F) it based on the satisfaction of the rules. Specifically, given a candidate trajectory $\mathcal{X}_c$ of system (\ref{eqn:affine}), and given a priority structure $\langle R,\sim_p,\leq_p\rangle$, we pass (P) $\mathcal{X}_c$ if we cannot find a better trajectory according to Def. \ref{def:traj_cmp}. Otherwise, we fail (F) $\mathcal{X}_c$.
We proceed as follows: We find the total violation scores of the rules in $\langle R,\sim_p,\leq_p\rangle$ for the candidate trajectory $\mathcal{X}_c$. If no rules in $R$ are violated, then we pass the candidate trajectory. Otherwise,
we investigate the existence of a better (less violating) trajectory. We take the middle of ego's current lane as the reference trajectory $\mathcal{X}_r$
and re-formulate the optimal control problem in (\ref{eqn:cost2}) to recursively relax rules such that if the optimization is feasible, the generated trajectory is better than the candidate trajectory $\mathcal{X}_c$. Specifically, assume that
the highest priority rule(s) that the candidate trajectory $\mathcal{X}_c$
violates belongs to $\mathcal{O}_H$, $H \in\mathbb{N}$. Let $R_H\subseteq R_{\mathcal{O}}$ denote the set of equivalence classes with priorities not larger than $H$, and $N_H \in\mathbb{N}$ denote the cardinality of $R_H$. We construct a power set $S_{H} = 2^{R_H}$, and then apply Alg. \ref{alg:sort}, in which we replace $R_{\mathcal{O}}$ by $R_H$. \vspace{-3pt}
\begin{remark}\label{remark:cond-pass}
The procedure described above would fail a candidate trajectory $\mathcal{X}_c$ even if only a slightly better alternate trajectory (i.e., violating rules of the same highest priority but with slightly smaller violation scores) can be found by solving the optimal control problem. In practice, this might lead to an undesirably high failure rate. One way to deal with this, which we will consider in future work (see Sec. \ref{sec:con}), is to allow for more classification categories, e.g., ``Provisional Pass" (PP), which can then trigger further investigation of $\mathcal{X}_c$.
\end{remark}
\begin{example}
Reconsider Exm. \ref{ex:three-traj} and assume trajectory $b$ is a candidate trajectory which violates rules $r_2, r_4$,
thus, the highest priority rule that is violated by trajectory $b$ belongs to $\mathcal{O}_2$.
We construct $R_H = \{ \mathcal{O}_1,\mathcal{O}_2\}$.
The power set $S_H=2^{R_H}$ is then defined as $S_H = \{ \{\emptyset\},\{\mathcal{O}_1\}, \{\mathcal{O}_2\},\{\mathcal{O}_1,\mathcal{O}_2\}\}$, and is sorted based on the total order as $S_{H_{sorted}} = \{\{\emptyset\}, \{\mathcal{O}_1\},\{\mathcal{O}_2\}, \{\mathcal{O}_1,\mathcal{O}_2\}\}$.
\end{example}
\begin{algorithm}
\caption{Recursive relaxation algorithm for finding optimal trajectory} \label{alg:sort}
\KwIn{System (\ref{eqn:affine}) with $\bm x(0)$, cost function (\ref{eqn:gcost}), control bound (\ref{eqn:control}), state constraint (\ref{eqn:state}), priority structure $\langle R,\sim_p,\leq_p\rangle$, reference trajectory $\mathcal{X}_r$}
\KwOut{Optimal ego trajectory and set of relaxed rules}
1. Construct the power set of equivalence classes $S = 2^{R_\mathcal{O}}$\;
2. Sort the sets in $S$ based on the highest priority of the equivalence classes in each set according to the total order and get $S_{sorted} = \{S_1, S_2, \dots, S_{2^{N_\mathcal{O}}}\}$\;
3. $k = 0$\;
\While{$k++ < 2^{N_\mathcal{O}}$
}{
Solve (\ref{eqn:cost2}) s.t. (\ref{eqn:affine}), (\ref{eqn:control}), (\ref{eqn:clf1}), (\ref{eqn:optim-relax-rules}), (\ref{eqn:optim-not-relax-rules}) and (\ref{eqn:optim-state-cons})\;
\If{the above problem is feasible for all $t\in[0,T]$}{
Generate the optimal trajectory $\mathcal{X}^*$ from \eqref{eqn:affine}\;
Construct relaxed set $R_{relax} = \{i: i\in \mathcal{O}, \mathcal{O}\in S_{k}\}$\;
\ForEach{$i \in R_{relax}$}{\If{$\delta_i(t) = 0, \forall t\in[0,T]$}{
Remove $i$ from $R_{relax}$\;}}
break\;
}
}
4. Return $\mathcal{X}^*$ and $R_{relax}$\;
\end{algorithm}
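The control flow of Alg. \ref{alg:sort} can be sketched as follows. The QP solve over the horizon is mocked by a hypothetical `solve` callback (the names and return shape are our illustrative assumptions, not the paper's implementation); in the mock, the problem becomes feasible only once the minimum speed rule $r_5$ may be relaxed, mirroring the case study.

```python
def recursive_relaxation(sorted_sets, solve):
    """Skeleton of the recursive relaxation in Alg. 1. `sorted_sets` is
    S_sorted: each entry is a list of equivalence classes (tuples of rule
    names) to relax. `solve(relaxed)` stands in for solving (cost2) over
    [0, T] with the rules in `relaxed` relaxed; it returns
    (feasible, trajectory, slacks), where `slacks` maps each relaxed rule
    to the maximum of |delta_i(t)| over the horizon."""
    for S_k in sorted_sets:
        relaxed = {rule for cls in S_k for rule in cls}
        feasible, traj, slacks = solve(relaxed)
        if feasible:
            # A rule whose slack stayed at 0 did not actually need relaxing.
            return traj, {r for r in relaxed if slacks.get(r, 0.0) > 0.0}
    return None, None

# Hypothetical mock solver: infeasible unless r5 is relaxed; r6's slack,
# when relaxed, turns out to be identically zero.
def mock_solver(relaxed):
    if "r5" not in relaxed:
        return False, None, {}
    return True, "trajectory", {r: (0.0 if r == "r6" else 0.4) for r in relaxed}

S_sorted = [[], [("r5",)], [("r3", "r6")], [("r3", "r6"), ("r5",)]]
traj, actually_relaxed = recursive_relaxation(S_sorted, mock_solver)
print(traj, actually_relaxed)  # trajectory {'r5'}
```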
\section{Case Study}
\label{sec:case}
In this section, we apply the methodology developed in this paper to specific vehicle dynamics and various driving scenarios. Ego dynamics \eqref{eqn:affine} are defined with respect to a reference trajectory \cite{Rucco2015}, which measures the along-trajectory distance $s\in\mathbb{R}$ and the lateral distance $d\in\mathbb{R}$ of the vehicle Center of Gravity (CoG) with respect to the closest point on the reference trajectory as follows:
\begin{equation} \label{eqn:vehicle}
\underbrace{\left[
\begin{array}[c]{c}
\dot s\\
\dot d\\
\dot \mu\\
\dot v\\
\dot a\\
\dot \delta\\
\dot \omega
\end{array}
\right]}_{\dot {\bm x}}
=
\underbrace{\left[
\begin{array}
[c]{c}%
\frac{v\cos(\mu + \beta)}{1 - d\kappa}\\
v\sin(\mu + \beta)\\
\frac{v}{l_r}\sin\beta - \kappa\frac{v\cos(\mu + \beta)}{1 - d\kappa}\\
a\\
0\\
\omega\\
0
\end{array}
\right]}_{f(\bm x)}
+
\underbrace{\left[
\begin{array}[c]{cc}%
0 & 0\\
0 & 0\\
0 & 0\\
0 & 0\\
1 & 0\\
0 & 0\\
0 & 1
\end{array}
\right]}_{g(\bm x)}
\underbrace{\left[
\begin{array}[c]{c}%
u_{jerk}\\
u_{steer}
\end{array}
\right]}_{\bm u},
\vspace{-2pt}
\end{equation}
where $\mu$ is the vehicle local heading error determined by the difference of the global vehicle heading $\theta\in\mathbb{R}$ in (\ref{eqn:center}) and the tangent angle $\phi\in\mathbb{R}$ of the closest point on the reference trajectory (i.e., $\theta = \phi + \mu$); $v$, $a$ denote the vehicle linear speed and acceleration; $\delta$, $\omega$ denote the steering angle and steering rate, respectively; $\kappa$ is the curvature of the reference trajectory at the closest point; $l_r$ is the length of the vehicle from the tail to the CoG; and $u_{jerk}$, $u_{steer}$ denote the two control inputs for jerk and steering acceleration as shown in Fig. \ref{fig:frame}. $\beta = \arctan\left(\frac{l_r}{l_r + l_f}\tan\delta\right)$ where $l_f$ is the length of the vehicle from the head to the CoG.\vspace{-2pt}
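The right-hand side of (\ref{eqn:vehicle}) translates directly into code; the sketch below uses illustrative values for $l_r$, $l_f$ (they are vehicle-specific parameters, not values given in the paper) and checks one easy special case.

```python
import math

def vehicle_dynamics(x, u, kappa=0.0, l_r=1.7, l_f=1.1):
    """Right-hand side of the error dynamics (vehicle): state
    x = (s, d, mu, v, a, delta, omega), input u = (u_jerk, u_steer).
    kappa is the reference-path curvature at the closest point; l_r and
    l_f are illustrative CoG-to-tail and CoG-to-head lengths."""
    s, d, mu, v, a, delta, omega = x
    u_jerk, u_steer = u
    beta = math.atan(l_r / (l_r + l_f) * math.tan(delta))
    s_dot = v * math.cos(mu + beta) / (1.0 - d * kappa)
    return [
        s_dot,                                     # progress along reference
        v * math.sin(mu + beta),                   # lateral error rate
        v / l_r * math.sin(beta) - kappa * s_dot,  # heading error rate
        a,                                         # speed
        u_jerk,                                    # acceleration (input 1)
        omega,                                     # steering angle
        u_steer,                                   # steering rate (input 2)
    ]

# On a straight reference (kappa = 0) with zero heading and steering
# error, the only motion is forward progress at speed v.
print(vehicle_dynamics([0, 0, 0, 10, 0, 0, 0], [0, 0]))
# [10.0, 0.0, 0.0, 0, 0, 0, 0]
```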
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.3]{frame}
\caption{Coordinates of ego w.r.t a reference trajectory.}
\label{fig:frame}
\end{figure}
We consider the cost function in \eqref{eqn:cost2} as:
\begin{equation}
\min_{u_{jerk}(t), u_{steer}(t)}\int_{0}^{T}\left[u_{jerk}^2(t) + u_{steer}^2(t)\right]dt.
\end{equation}
The reference trajectory $\mathcal{X}_r$ is the middle of ego's current lane, and is assumed to be given as an ordered sequence of points $\bm p_1$, $\bm p_2$, $\dots$, $\bm p_{N_r}$, where $\bm p_i \in \mathbb{R}^2, i=1,\dots,N_r$ ($N_r$ denotes the number of points). We can find the reference point $\bm p_{i(t)}$, $i:[0,T]\rightarrow \{1,\ldots,N_r\}$ at time $t$ as:
\vspace{-4pt}
\begin{equation}\label{eqn:tracking}
\begin{aligned}
i(t^+)= \begin{cases} i(t) + 1, & ||\bm p(t) - \bm p_{i(t)}||\leq \gamma,\\
\arg\min_{j\in\{1,\dots, N_r\}} ||\bm p(t) \!-\! \bm p_{j}||, & \text{otherwise},
\end{cases}
\end{aligned}
\end{equation}
where $\bm p(t)\in \mathbb{R}^2$ denotes ego's location, $\gamma > 0$ is a tunable threshold, and $i(0) = k$, where $k\in\{1,2,\dots, N_r\}$ is chosen such that $||\bm p(0) - \bm p_{j}||\geq ||\bm p(0) - \bm p_{k}||, \forall j\in\{1,2,\dots, N_r\}$. Once we have $\bm p_{i(t)}$, we can update the progress $s$, the error states $d,\mu$, and the curvature $\kappa$ in (\ref{eqn:vehicle}). Trajectory tracking in this case amounts to stabilizing the error states $d, \mu$ ($\bm y = (d,\mu)$ in (\ref{eqn:clf1})) to 0, as introduced in Sec. \ref{sec:tracking}. We also wish ego to achieve a desired speed $v_d > 0$ (otherwise, ego may stop in curved lanes). We achieve this by re-defining the CLF $V(\bm x)$ in (\ref{eqn:clf1}) as $V(\bm x) = ||\bm y||^2 + c_0(v-v_d)^2, c_0 > 0$. Since the relative degree of $V(\bm x)$ w.r.t. (\ref{eqn:vehicle}) is larger than one, as mentioned in Sec. \ref{sec:tracking}, we use input-to-state linearization and state feedback control \cite{Khalil2002} to reduce the relative degree to one \cite{Xiao2020}. For example, for the desired speed term in the CLF $V(\bm x)$ (the dynamics (\ref{eqn:vehicle}) are already linear from $v$ to $u_{jerk}$, so no linearization is needed), we can define a desired state feedback acceleration $\hat a = -k_1(v - v_d), k_1 > 0$. Then we can define a new CLF of the form $V(\bm x) = ||\bm y||^2 + c_0(a -\hat a)^2 = ||\bm y||^2 + c_0(a + k_1(v - v_d))^2$, whose relative degree is exactly one w.r.t. $u_{jerk}$ in (\ref{eqn:vehicle}). We proceed similarly for driving $d, \mu$ to 0 in the CLF $V(\bm x)$, as the relative degrees of $d, \mu$ are also larger than one.
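The reference-point update (\ref{eqn:tracking}) can be sketched as a one-step function: advance to the next waypoint once ego is within $\gamma$ of the current one; otherwise re-acquire the closest waypoint (the waypoints and the value of $\gamma$ below are illustrative).

```python
import math

def update_ref_index(i, p, ref_points, gamma=0.5):
    """One step of the reference-point update (tracking): advance to the
    next waypoint once ego is within gamma of the current one; otherwise
    re-acquire the closest waypoint (e.g., after a large deviation)."""
    if math.dist(p, ref_points[i]) <= gamma:
        return min(i + 1, len(ref_points) - 1)
    return min(range(len(ref_points)),
               key=lambda j: math.dist(p, ref_points[j]))

ref = [(float(k), 0.0) for k in range(5)]    # straight lane centerline
print(update_ref_index(1, (1.1, 0.2), ref))  # within gamma of p_1 -> advance to 2
print(update_ref_index(1, (3.9, 0.1), ref))  # far from p_1 -> closest point, 4
```

Once the index is updated, the progress $s$, the error states $d, \mu$, and the curvature $\kappa$ can be recomputed at the new reference point.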
The control bounds (\ref{eqn:control}) and state constraints (\ref{eqn:state}) are given by:
\begin{equation}\label{eqn:physical}
\begin{aligned}
&\text{speed constraint: } \qquad\quad\; v_{\min} \leq v(t)\leq v_{\max},\\
&\text{acceleration constraint: } \;\; a_{\min}\leq a(t)\leq a_{\max},\\
&\text{jerk control constraint: }\; u_{j,\min}\leq u_{jerk}(t)\leq u_{j,\max},\\
&\text{steering angle constraint: } \delta_{\min}\leq \delta(t)\leq \delta_{\max},\\
&\text{steering rate constraint: }\;\; \omega_{\min}\leq \omega(t)\leq \omega_{\max},\\
&\text{steering control constraint: } u_{s,\min}\leq u_{steer}(t)\leq u_{s,\max}.
\end{aligned}
\end{equation}
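As an example of how one of the state constraints in (\ref{eqn:physical}) is enforced with an HOCBF, consider the speed limit $v \leq v_{\max}$, which has relative degree two w.r.t. $u_{jerk}$ ($\dot v = a$, $\dot a = u_{jerk}$). With linear class-$\mathcal{K}$ functions $\alpha_1(x) = k_1 x$, $\alpha_2(x) = k_2 x$ (the gains below are illustrative), $b = v_{\max} - v$ and $\psi_1 = -a + k_1 b$ yield the control bound $u_{jerk} \leq -(k_1 + k_2)a + k_1 k_2 (v_{\max} - v)$:

```python
def speed_hocbf_bound(v, a, v_max, k1=1.0, k2=1.0):
    """Upper bound on u_jerk enforcing v <= v_max via an HOCBF of relative
    degree 2 (v_dot = a, a_dot = u_jerk) with linear class-K functions:
    b = v_max - v, psi1 = -a + k1*b, and psi1_dot + k2*psi1 >= 0 gives
    u_jerk <= -(k1 + k2)*a + k1*k2*(v_max - v)."""
    return -(k1 + k2) * a + k1 * k2 * (v_max - v)

# A greedy driver always requests maximum jerk; the HOCBF clips it so
# that the speed approaches but never exceeds v_max (forward invariance
# of {v <= v_max}, up to discretization error).
v, a, dt, v_max = 0.0, 0.0, 0.001, 15.0
for _ in range(20000):
    u = min(5.0, speed_hocbf_bound(v, a, v_max))  # desired jerk 5, clipped
    v, a = v + dt * a, a + dt * u
print(v < v_max + 1e-3)  # True
```

In the QP (\ref{eqn:cost2}), this bound appears as one of the linear-in-$\bm u$ constraints \eqref{eqn:optim-state-cons}.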
We consider the priority structure $\langle R,\sim_p,\leq_p\rangle$ from Fig. \ref{fig:case_rb}, with rules $R = \{r_1, r_2, r_3, r_4, r_5, r_6, r_7, r_8\}$, where $r_1$ is a pedestrian clearance rule; $r_2$ and $r_3$ are clearance rules for staying in the drivable area and lane, respectively; $r_4$ and $r_5$ are non-clearance rules specifying maximum and minimum speed limits, respectively; $r_6$ is a comfort non-clearance rule; and $r_7$ and $r_8$ are clearance rules for parked and moving vehicles, respectively. The formal rule definitions (statements, violation metrics) are given in Appendix \ref{sec:app-rule-def}. These metrics are used to compute the scores for all the trajectories in the three scenarios below.
The optimal disk coverage from Sec. \ref{sec:rule-approx} is used to compute the optimal controls for all the clearance rules, which are implemented using HOCBFs.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.25]{case_rb.png}
\vspace{-3mm}
\caption{Priority structure for case study.
}
\label{fig:case_rb}%
\vspace{-3mm}
\end{figure}
In the following, we consider three common driving scenarios in our tool (See Appendix \ref{sec:tool}). For each of them, we solve the optimal control Problem \ref{prob:main} and perform pass/fail evaluation. In all three scenarios, in the pass/fail evaluation, an initial candidate trajectory is drawn ``by hand" using the tool described in the Appendix. We use CLFs to generate a feasible trajectory $\mathcal{X}_c$ which tracks the candidate trajectory subject to the vehicle dynamics (\ref{eqn:affine}), control bounds (\ref{eqn:control}) and state constraints (\ref{eqn:state}).
\subsection{Scenario 1}
Assume there is an active vehicle, a parked (inactive) vehicle and a pedestrian, as shown in Fig. \ref{fig:case1}.
\textbf{Optimal control:}
We solve the optimal control problem (\ref{eqn:cost2}) by starting the rule relaxation from $S_1=\{\emptyset\}$ (i.e., without relaxing any rules). This problem is infeasible in the given scenario since ego cannot maintain the required distance from both the active and the parked vehicles, as the clearance rules are speed-dependent. Therefore, we relax the next lowest priority equivalence class set in $S_{sorted}$, i.e., the minimum speed limit rule in $S_2=\{\{r_{5}\}\}$, for which we are able to find a feasible trajectory, as illustrated in Fig. \ref{fig:case1}. By checking $\delta_i$ for $r_5$ from \eqref{eqn:cost2}, we find that it is positive in some time intervals in $[0,T]$, and thus, $r_5$ is indeed relaxed. The total violation score of rule $r_{5}$ from \eqref{eqn:r_5} for the generated trajectory is 0.539, and all other rules in $R$ are satisfied. Thus, by Def. \ref{def:rb_satisfy}, the generated trajectory satisfies $\langle R,\sim_p,\leq_p\rangle$ in Fig. \ref{fig:case_rb}.
\begin{figure}[thpb]
\centering
\vspace{-3mm}
\includegraphics[scale=0.5]{case1_oc}
\vspace{-3mm}
\caption{Optimal control for Scenario 1: the subset of ego trajectory violating $r_5$ is shown in blue.
}
\label{fig:case1}%
\vspace{-3mm}
\end{figure}
\textbf{Pass/Fail:} The candidate trajectory $\mathcal{X}_c$ is shown in Fig. \ref{fig:case1_pf}.
This candidate trajectory only violates rule $r_5$, with total violation score 0.682. Following Sec. \ref{sec:p/f}, we can either relax $r_5$ or relax no rules to find a possibly better trajectory. As shown in the optimal control problem above for this scenario, we cannot find a feasible solution if we do not relax rule $r_5$.
Since the violation score of the candidate trajectory is larger than that of the optimal one (0.682 vs. 0.539), we fail this candidate trajectory.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.5]{case1_oc_pass.png}
\vspace{-3mm}
\caption{Pass/Fail for Scenario 1: the subset of the candidate trajectory violating $r_5$ is shown in blue.}
\label{fig:case1_pf}
\vspace{-3mm}
\end{figure}
\subsection{Scenario 2}
Assume there is an active vehicle, two parked (inactive) vehicles and two pedestrians, as shown in Fig. \ref{fig:case2}.
\textbf{Optimal control:} Similar to Scenario 1, the optimal control problem (\ref{eqn:cost2}) starting from $S_1=\{\emptyset\}$ (without relaxing any rules in $R$) is infeasible. We relax the next lowest priority rule set in $S_{sorted}$, i.e., the minimum speed rule in $S_2=\{\{r_{5}\}\}$, for which we are able to find a feasible trajectory as illustrated in Fig. \ref{fig:case2}. Again, the $\delta_i$ for $r_5$ is positive in some time intervals in $[0,T]$, and thus, $r_5$ is indeed relaxed. The total violation score of the rule $r_{5}$ for the generated trajectory is 0.646, and all the other rules in $R$ are satisfied.
\begin{figure}[thpb]
\centering
\vspace{-1mm}
\includegraphics[scale=0.5]{case2_oc}
\vspace{-3mm}
\caption{Optimal control for Scenario 2: the subset of ego trajectory violating $r_5$ is shown in blue.}
\label{fig:case2}%
\vspace{-3mm}
\end{figure}
\textbf{Pass/Fail:} The candidate trajectory $\mathcal{X}_c$, shown as a red dashed line in Fig. \ref{fig:case2_pf}, violates rules $r_1, r_{3}$ and $r_{8}$ with total violation scores 0.01, 0.23, and 0.22, found from \eqref{eqn:r_1}, \eqref{eqn:r_3}, and \eqref{eqn:r_8}, respectively.
In this scenario, we know that ego can change lanes (note that the lane keeping rule $r_3$ is in a lower priority equivalence class than $r_1$) to obtain a reasonable trajectory. Thus, we consider relaxing the rules in the equivalence classes $\mathcal{O}_2=\{r_{3}, r_6\}$ and $\mathcal{O}_1=\{r_{5}\}$ to find a feasible trajectory that is better than the candidate one. The optimal control problem (\ref{eqn:cost2}) generates the trajectory shown as the red solid curve in Fig. \ref{fig:case2_pf}, and only the $\delta_i$ for $r_6$ is 0 for all $t\in[0,T]$; thus, $r_6$ does not need to be relaxed. The generated trajectory violates rules $r_{3}$ and $r_{5}$ with total violation scores 0.124 and 0.111, respectively, but satisfies all the other rules, including the highest priority rule $r_1$. According to Def. \ref{def:traj_cmp} for the given $\langle R,\sim_p,\leq_p\rangle$ in Fig. \ref{fig:case_rb}, the newly generated trajectory is better than the candidate one; thus, we fail the candidate trajectory. Note that although this trajectory violates the lane keeping rule, it has a smaller violation score for $r_{5}$ compared to the trajectory obtained from the optimal control in Fig. \ref{fig:case2} (0.111 vs. 0.646), i.e., the average speed of ego along the red solid trajectory in Fig. \ref{fig:case2_pf} is larger.
\begin{figure}[thpb]
\centering
\vspace{-3mm}
\includegraphics[scale=0.5]{case2_pf}
\vspace{-3mm}
\caption{Pass/Fail for Scenario 2: the subsets of ego trajectory violating $r_5, r_3$ are shown in yellow and magenta, respectively; the subsets of the candidate trajectory violating $r_8, r_3, r_1$ are shown in green, magenta and blue, respectively.}
\label{fig:case2_pf}
\vspace{-3mm}
\end{figure}
\subsection{Scenario 3}
Assume there is an active vehicle, a parked vehicle and two pedestrians (one of whom has just exited the parked vehicle), as shown in Fig. \ref{fig:case3}.
\textbf{Optimal control:} Similar to Scenario 1, the optimal control problem (\ref{eqn:cost2}) starting from $S_1=\{\emptyset\}$ (without relaxing any rules in $R$) is infeasible. We relax the lowest priority rule set in $S_{sorted}$, i.e., the minimum speed rule in $S_2=\{\{r_{5}\}\}$, and solve the optimal control problem. In the (feasible) generated trajectory, ego stops before the parked vehicle, which satisfies all the rules in $R$ except $r_{5}$. Thus, by Def. \ref{def:rb_satisfy}, the generated trajectory satisfies the priority structure $\langle R,\sim_p,\leq_p\rangle$. However, this might not be a desirable behavior; thus, we further relax the lane keeping $r_{3}$ and comfort $r_6$ rules and find the feasible trajectory shown in Fig. \ref{fig:case3}. The $\delta_i$ for $r_6$ is 0 for all $t\in[0,T]$, and, therefore, $r_6$ does not need to be relaxed. The total violation scores for the rules $r_{3}$ and $r_{5}$ are 0.058 and 0.359, respectively, and all other rules in $R$ are satisfied.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.5]{case3_oc}
\vspace{-3mm}
\caption{Optimal control for Scenario 3: the subsets of ego trajectory violating $r_5, r_3$ are shown in blue and green, respectively.}
\label{fig:case3}%
\vspace{-3mm}
\end{figure}
\textbf{Pass/Fail:} The candidate trajectory $\mathcal{X}_c$, shown as the red dashed curve in Fig. \ref{fig:case3_pf}, violates rules $r_{3}$ and $r_{8}$ with total violation scores 0.025 and 0.01, respectively. In this scenario, from the optimal control in Fig. \ref{fig:case3} we know that ego can change lanes (note that the lane keeping rule is in a lower priority equivalence class than $r_8$). We consider relaxing the rules in the equivalence classes $\mathcal{O}_2=\{r_{3}, r_6\}$ and $\mathcal{O}_1=\{r_{5}\}$ (all with lower priorities than $r_{8}$). The optimal control problem (\ref{eqn:cost2}) generates the red solid curve shown in Fig. \ref{fig:case3_pf}. By checking $\delta_i$ for $r_6$, we find that $r_6$ is indeed not relaxed. The generated trajectory violates rules $r_{3}$ and $r_{5}$ with total violation scores 0.028 and 0.742, respectively, but satisfies all other rules, including $r_{8}$. According to Def. \ref{def:traj_cmp} and Fig. \ref{fig:case_rb}, the newly generated trajectory is better than the candidate one (although it violates $r_{3}$ more than the candidate trajectory does, it does not violate $r_{8}$, which has a higher priority). Thus, we fail the candidate trajectory.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.5]{case3_pf}
\vspace{-3mm}
\caption{Pass/Fail for Scenario 3: the subsets of ego trajectory violating $r_8, r_5, r_3$ are shown in green, magenta and blue, respectively; the subsets of the candidate trajectory violating $r_5, r_3$ are shown in magenta and blue, respectively.
}
\label{fig:case3_pf}
\vspace{-3mm}
\end{figure}
\section{Conclusions and Future Work}
\label{sec:con}
We developed a framework to design optimal control strategies for autonomous vehicles that are required to satisfy a set of traffic rules with a given priority structure, while following a reference trajectory and satisfying control and state limitations. We showed that, for commonly used traffic rules, by using control barrier functions and control Lyapunov functions, the problem can be cast as an iteration of optimal control problems, where each iteration involves a sequence of quadratic programs. We also showed that the proposed algorithms can be used to pass / fail possible autonomous vehicle behaviors against prioritized traffic rules. We presented multiple case studies for an autonomous vehicle with realistic dynamics and conflicting rules. Future work will be focused on learning priority structures from data, improving the feasibility of the control problems, and refinement of the pass / fail procedure.
\bibliographystyle{ACM-Reference-Format}